preface

We all know the importance of the CMDB in operations, but we don’t want it to become a mere ornament, a “vase”. Making the CMDB actually drive Zabbix and JumpServer is therefore one of our current challenges. CMDB event push can solve this problem, which is why we introduce the event push gateway.

The event push gateway is a target system for Blue Whale CMDB event push. When configuration information in the CMDB changes, the gateway is notified in real time. The gateway in turn talks to each operations subsystem, such as JumpServer and Zabbix, to keep configuration information consistent and synchronized and to provide data support for upper-layer applications.

The premise of keeping every system consistent is that the CMDB is the single source of all configuration information: every asset change must go through the CMDB and be synchronized to the other systems by event push, never configured manually in each system (this applies only to basic configuration such as host groups). Otherwise the consistency will eventually break, and the CMDB will be on the road to becoming a “vase”.

The solution

Before officially using the event push gateway, we need to understand the differences between the HTTP requests made by the various CMDB event pushes, so that our gateway can respond with the corresponding operations.
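
As a reference, this is roughly the shape of the callback body the gateway receives. It is a sketch reconstructed from the fields our parsing code reads later (action, request_id, and cur_data with bk_host_innerip and associations); the values are illustrative only, not a verbatim capture:

# Illustrative sketch of a CMDB event push body, not a verbatim capture
example_event = {
    'action': 'update',          # one of delete / create / update
    'request_id': 'req-001',     # shared by all requests belonging to one change
    'data': [{
        'cur_data': {
            'bk_host_innerip': '10.164.193.138',
            'associations': {
                '1': {
                    'bk_set_name': 'Recycling Plan',   # cluster (set)
                    'bk_biz_name': 'Recycling Plan',   # business
                    'bk_module_name': 'Unclassified',  # module
                },
            },
        },
    }],
}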

1. Development framework

The event push gateway is developed with Python 3.9 + Django 3.2 to handle the CMDB event push callbacks.

# 1. Python environment
conda create -n gateway python=3.9
source activate gateway
pip install django redis 

# 2. Create the project
django-admin startproject gateway
cd gateway
python manage.py startapp gw_cmdb

# 3. Configuration Settings
vim gateway/settings.py
INSTALLED_APPS = [
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
    # add our app
    'gw_cmdb',
]

MIDDLEWARE = [
    'django.middleware.security.SecurityMiddleware',
    'django.contrib.sessions.middleware.SessionMiddleware',
    'django.middleware.common.CommonMiddleware',
    # Disable CSRF validation so the CMDB can POST callbacks directly
    #'django.middleware.csrf.CsrfViewMiddleware',
    'django.contrib.auth.middleware.AuthenticationMiddleware',
    'django.contrib.messages.middleware.MessageMiddleware',
    'django.middleware.clickjacking.XFrameOptionsMiddleware',
]

# 4. Log configuration (default: gateway/logs directory)
import os    # add these two imports at the top of settings.py
import time

cur_path = os.path.dirname(os.path.realpath(__file__))
# log_path is the directory where logs are stored
log_path = os.path.join(os.path.dirname(cur_path), 'logs')
if not os.path.exists(log_path):
    os.mkdir(log_path)  # create the logs folder if it does not exist

LOGGING = {
    'version': 1,
    'disable_existing_loggers': True,
    # Log formats
    'formatters': {
        'standard': {
            'format': '[%(asctime)s] [%(filename)s:%(lineno)d] [%(module)s:%(funcName)s] '
                      '[%(levelname)s]- %(message)s'},
        # Simple format
        'simple': {
            'format': '%(levelname)s %(message)s'},
    },
    # Filters
    'filters': {},
    # Define how logs are processed
    'handlers': {
        # All logs are written here by default
        'default': {
            'level': 'INFO',
            'class': 'logging.handlers.RotatingFileHandler',
            'filename': os.path.join(log_path, 'all-{}.log'.format(time.strftime('%Y-%m-%d'))),
            'maxBytes': 1024 * 1024 * 5,  # file size
            'backupCount': 5,             # number of backups
            'formatter': 'standard',      # output format
            'encoding': 'utf-8',          # set encoding, otherwise output is garbled
        },
        # Error log
        'error': {
            'level': 'ERROR',
            'class': 'logging.handlers.RotatingFileHandler',
            'filename': os.path.join(log_path, 'error-{}.log'.format(time.strftime('%Y-%m-%d'))),
            'maxBytes': 1024 * 1024 * 5,
            'backupCount': 5,
            'formatter': 'standard',
            'encoding': 'utf-8',
        },
        # Console output
        'console': {
            'level': 'DEBUG',
            'class': 'logging.StreamHandler',
            'formatter': 'standard',
        },
        # Info log
        'info': {
            'level': 'INFO',
            'class': 'logging.handlers.RotatingFileHandler',
            'filename': os.path.join(log_path, 'info-{}.log'.format(time.strftime('%Y-%m-%d'))),
            'maxBytes': 1024 * 1024 * 5,
            'backupCount': 5,
            'formatter': 'standard',
            'encoding': 'utf-8',
        },
    },
    # Configure which handlers handle which loggers
    'loggers': {
        # 'django' handles all framework logging and is used by default
        'django': {
            'handlers': ['default', 'console'],
            'level': 'INFO',
            'propagate': False,
        },
        # 'log' must be requested by name when used
        'log': {
            'handlers': ['error', 'info', 'console', 'default'],
            'level': 'INFO',
            'propagate': True,
        },
    },
}
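
With this LOGGING configuration in place, any module in the project obtains the custom logger by the name 'log', which is exactly what views.py does below. A minimal sketch:

import logging

logger = logging.getLogger('log')   # the 'log' logger configured above
logger.info('gateway is starting')  # written to info-*.log, all-*.log and the console
logger.error('sync failed')         # additionally written to error-*.log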

2. Directory structure

D:\work\blueking>tree /f gateway
gateway
│  manage.py
│
├─gateway
│      asgi.py
│      settings.py
│      urls.py
│      wsgi.py
│      __init__.py
│
└─gw_cmdb
    │  admin.py
    │  apps.py
    │  models.py
    │  urls.py
    │  views.py
    │  __init__.py
    │
    ├─common
    │      cmdb.py
    │
    ├─jumpserver   (not yet open)
    └─zabbix       (not yet open)

Among them:

  • gateway is the project directory;

  • gw_cmdb is the app directory;

  • gw_cmdb/views.py receives the HTTP requests pushed by CMDB events in a unified way and dispatches them to the modules of the other subsystems;

  • gw_cmdb/common/cmdb.py is the module that parses the HTTP requests received by views.py;

  • gw_cmdb/jumpserver is the directory of the modules that perform the JumpServer-related operations for the gateway;

  • gw_cmdb/zabbix is the directory of the modules that perform the Zabbix-related operations for the gateway.

3. Deployment

# 1. Project routing
vim gateway/urls.py
from django.contrib import admin
from django.urls import path
from django.conf.urls import include

urlpatterns = [
    path('admin/', admin.site.urls),
    path('cmdb/', include('gw_cmdb.urls')),
]

# 2. App routing
vim gw_cmdb/urls.py
from django.urls import path
from . import views

# Receive the HTTP requests pushed by CMDB events uniformly
urlpatterns = [
    path(r'', views.cmdb_request),
]

# 3. cmdb_request
# Receives the HTTP requests pushed by the CMDB to the event push gateway
vim gw_cmdb/views.py
from django.shortcuts import render
from django.http import HttpRequest, HttpResponse
from .common.cmdb import cmdb
from .zabbix.main import zabbix_main
import json
import logging

logger = logging.getLogger('log')

# Create your views here
def cmdb_request(request):
    if request.method == 'POST':
        data = json.loads(request.body)
        logger.info('CMDB sends message: {}'.format(data))
        ## Parse the request into our specified data format
        res = cmdb(data)
        ## Decide whether to forward the change to Zabbix and JumpServer
        if res['result'] == 1:
            # duplicate or irrelevant request: acknowledge and drop it
            return HttpResponse("ok")
        else:
            logger.info(res['data'])
            # zabbix: call the Zabbix module here, e.g. zabbix_main(res['data'])
            # jumpserver: call the JumpServer module here
            return HttpResponse("ok")
    else:
        logger.info('This interface only supports POST mode')
        return HttpResponse("This interface only supports POST mode")

        
# 4. Start
python manage.py runserver 0.0.0.0:8000
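
With the dev server running, a quick way to verify the route is to POST a hand-made body to it. A minimal sketch, assuming the requests library is installed; the payload here is a made-up placeholder, and the real callback bodies are analysed in the next section:

import requests

# Made-up minimal body: 'action', 'request_id' and 'data' are the fields the gateway reads
payload = {'action': 'update', 'request_id': 'smoke-test', 'data': []}
resp = requests.post('http://127.0.0.1:8000/cmdb/', json=payload)
print(resp.text)  # expect "ok"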

4. Event push parsing

After the event push gateway is started, every operation we perform in the Blue Whale CMDB triggers a callback to the event push gateway, and the gateway logs the received requests:

(1) Move the host (10.164.193.138) from “Idle” to “Recycling Plan” (cluster) → “Unclassified” (module)

View the log gateway/logs/info-2021-05-21.log:

As the log shows, the CMDB event push performs the following operations:

  • Delete action: delete the host from “Idle”;

  • Create action: create the host in the “Recycling Plan” cluster;

  • Update action: associate the related module “Unclassified” with the cluster, repeated twice.

At this point, all four requests share the same request_id, and the update action is repeated twice.

(2) Add a new module “nginx” (module) to the host (10.164.193.138)

As the log shows, the CMDB event push performs the following operations:

  • Delete action: delete the host from “Unclassified” (module);

  • Create action: create the host in “Unclassified” (module);

  • Create action: create the host in “nginx” (module);

  • Update action: associate the related modules “Unclassified” and “nginx” with the cluster, repeated three times.

At this point, all six requests share the same request_id, and the update action is repeated three times.

(3) Delete only the module “nginx” (module) from the host (10.164.193.138)

As the log shows, the CMDB event push performs the following operations:

  • Delete action: delete the host from “Unclassified” (module);

  • Delete action: delete the host from “nginx” (module);

  • Create action: create the host in “Unclassified” (module);

  • Update action: associate the related module “Unclassified” with the cluster, repeated three times.

At this point, all six requests share the same request_id, and the update action is repeated three times.

(4) Transfer host 10.164.193.138 to Idle

As the log shows, the CMDB event push performs the following operations:

  • Delete action: delete the host from “Unclassified” (module);

  • Create action: create the host in “Idle Machine” (cluster and module);

  • Update action: associate the related module “Idle Machine” with the cluster, repeated twice.

At this point, all four requests share the same request_id, and the update action is repeated twice.

Based on the above four cases, we can draw the following conclusions:

  1. Each configuration change follows the sequence delete → create → update;

  2. Number of updates = number of deletes + number of creates;

  3. The repeated update actions of one change all carry the final association data and share the same request_id, so we can deduplicate them with redis + request_id and keep only the first one.

Based on the results of the update action, we can filter out the parameters that we really need.

5. Parameter parsing

According to the logs, the update information is redundant, so we need to parse the results to obtain our specified data format.

vim gw_cmdb/common/cmdb.py
import redis
import json
import hashlib

r = redis.StrictRedis(host='127.0.0.1', port=6379, db=1)

def cmdb(data):
    ## Define the data format
    datajson = {'key': '', 'data': {'ip': '', 'group': []}}
    ## Obtain the CMDB action and judge whether it is an update operation
    if data['action'] == 'update':
        for i in data['data']:
            datajson['data']['ip'] = i['cur_data']['bk_host_innerip']
            grouplist = i['cur_data']['associations']
            for j in grouplist:
                groupname = grouplist[j]['bk_set_name'] + "_" + grouplist[j]['bk_biz_name'] + "_" + grouplist[j]['bk_module_name']
                datajson['data']['group'].append(groupname)
            datajson['key'] = hashlib.md5((data['request_id'] + i['cur_data']['bk_host_innerip']).encode('utf-8')).hexdigest()
        ## Deduplicate with redis + request_id: only the first update of a change passes
        rkey = r.hget('cmdb', datajson['key'])
        if rkey is None:
            r.hset('cmdb', datajson['key'], json.dumps(datajson['data']))
            result = {
                'result': 0,
                'data': datajson
            }
        else:
            result = {
                'result': 1,
                'data': datajson
            }
    else:
        result = {
            'result': 1,
            'data': datajson
        }
    return result
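
A quick illustration of the dedup behaviour, assuming a local redis on 127.0.0.1:6379 and reusing the illustrative event body sketched earlier: the first update of a change returns result 0 and is forwarded, and any repeat with the same request_id returns result 1 and is dropped.

# Illustrative event body, same shape as the sketch in "The solution"
event = {
    'action': 'update',
    'request_id': 'req-001',
    'data': [{'cur_data': {
        'bk_host_innerip': '10.164.193.138',
        'associations': {'1': {'bk_set_name': 'Recycling Plan',
                               'bk_biz_name': 'Recycling Plan',
                               'bk_module_name': 'Unclassified'}}}}],
}

print(cmdb(event)['result'])  # 0: first update of this change, pass it on
print(cmdb(event)['result'])  # 1: duplicate update (same request_id + ip), drop it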

(1) Move the host (10.164.193.138) from “Idle” to “Recycling Plan” (cluster) → “Unclassified” (module)

After filtering out the delete and create requests and the duplicate updates, we get the result in the data format we defined:

{'key': '3005575b6d1b681c236896a8d35d199e', 'data': {'ip': '10.164.193.138', 'group': ['Recycling Plan_Recycling Plan_Unclassified']}}

(2) Add a new module “nginx” (module) to the host (10.164.193.138)

After filtering out the delete and create requests and the duplicate updates, we get the result in the data format we defined:


{'key': '3005575b6d1b681c236896a8d35d199e', 'data': {'ip': '10.164.193.138', 'group': ['Recycling Plan_Recycling Plan_Unclassified', 'Recycling Plan_Recycling Plan_nginx']}}

(3) Transfer host 10.164.193.138 to Idle

After filtering out the delete and create requests and the duplicate updates, we get the result in the data format we defined:


{'key': '3005575b6d1b681c236896a8d35d199e', 'data': {'ip': '10.164.193.138', 'group': ['Idle Pool_Recycling Plan_Idle Machine']}}

Finally, the parameters in our specified data format are passed to each subsystem’s API to keep basic information such as host group configuration consistent and synchronized, which guarantees that the CMDB remains the single source of data.
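
As an illustration of what such a subsystem module could look like, here is a minimal sketch of a gw_cmdb/zabbix/main.py using the third-party pyzabbix library. This is not the author’s actual module (which is not yet open); the URL and credentials are placeholders.

# gw_cmdb/zabbix/main.py -- a minimal sketch, NOT the actual module.
# Assumes the pyzabbix library; the URL and credentials below are placeholders.
from pyzabbix import ZabbixAPI
import logging

logger = logging.getLogger('log')

def zabbix_main(data):
    """Sync one parsed CMDB result, e.g. {'key': ..., 'data': {'ip': ..., 'group': [...]}}."""
    zapi = ZabbixAPI('http://zabbix.example.com')
    zapi.login('Admin', 'zabbix')

    # Ensure every CMDB group exists as a Zabbix host group
    groupids = []
    for name in data['data']['group']:
        found = zapi.hostgroup.get(filter={'name': name})
        if found:
            groupids.append(found[0]['groupid'])
        else:
            created = zapi.hostgroup.create(name=name)
            groupids.append(created['groupids'][0])

    # Find the host by its interface IP and move it into exactly those groups
    interfaces = zapi.hostinterface.get(filter={'ip': data['data']['ip']})
    if interfaces:
        hostid = interfaces[0]['hostid']
        zapi.host.update(hostid=hostid, groups=[{'groupid': g} for g in groupids])
        logger.info('zabbix synced host {}'.format(data['data']['ip']))
    else:
        logger.info('zabbix host not found: {}'.format(data['data']['ip']))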

daemon

To ensure that our gateway starts automatically and stays up, we use supervisor to daemonize it. The configuration is as follows:

# 1. Install supervisor and enable it at boot
yum install supervisor
systemctl enable supervisord

# 2. Configuration file /etc/supervisord.d/gateway.ini
[program:gateway]
; CMDB event push gateway
user=root
; program start command
command=/usr/local/miniconda/envs/gateway/bin/python manage.py runserver 0.0.0.0:8000
; program working directory
directory=/app/python/gateway
; start automatically when supervisord starts
autostart=true
; restart policy: [unexpected, true, false], default unexpected
autorestart=true
; if the process has not exited abnormally after 10 seconds, the start is considered successful
startsecs=10
; number of retries after a failed start
startretries=3

# 3. Load the configuration and manage the service
supervisorctl update
supervisorctl status gateway
supervisorctl start gateway

conclusion

The idea of an event push gateway started with the repeated maintenance work on the CMDB, which took a lot of effort each time for a completely mismatched input-output ratio. Strictly speaking, we don’t have to use a CMDB at all, but given that it is an industry standard and a cornerstone of operations, we want to take it further than that.

Just think: maintain a single data source and save the basic-information maintenance time of n associated systems. Isn’t that sweet?