Neutron software architecture analysis and implementation

Neutron’s software architecture is not complicated. We can illustrate it from three perspectives.



The first: Neutron is a distributed application whose multiple service processes (e.g. neutron-server.service, l3_agent.service, etc.) communicate asynchronously. It is divided into the Neutron Server, which acts as the central controller (receiving northbound API requests, executing control logic, and issuing tasks), and the Agents, which act as local executors (performing tasks and reporting results). The two act as producer and consumer for each other and communicate through a message queue (MQ).
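The producer/consumer relationship above can be sketched with a toy in-process queue. This is only an illustration of the pattern: the real deployment uses oslo.messaging over an MQ such as RabbitMQ, and the task strings and agent function here are purely hypothetical.

```python
import queue
import threading

# Toy stand-in for the MQ between neutron-server (producer) and an
# agent (consumer). Not Neutron code: real Neutron uses oslo.messaging.
task_queue = queue.Queue()
results = []

def l3_agent():
    """Consume tasks and record results, like an agent worker would."""
    while True:
        task = task_queue.get()
        if task is None:  # shutdown sentinel
            break
        results.append("done: " + task)
        task_queue.task_done()

worker = threading.Thread(target=l3_agent)
worker.start()

# "neutron-server" side: publish tasks asynchronously and keep going
task_queue.put("create router r1")
task_queue.put("attach subnet s1")
task_queue.put(None)
worker.join()

print(results)  # → ['done: create router r1', 'done: attach subnet s1']
```

The point of the sketch is the decoupling: the producer never waits for the consumer, and results come back out of band, which is exactly why Neutron Server and the Agents can run on different hosts.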



The second: To connect to diverse underlying physical and virtual network elements (NEs), Neutron Server implements Plugins that drive these NEs in support of the upper-layer logic. Neutron Server can therefore be further divided into:

  • The API layer, which receives northbound RESTful API requests
  • The Plugin layer, which connects to NEs from different vendors





The third: To balance stability (a core resource model function subset supported by the default NEs) with scalability (an extended resource model function set supported by diverse NEs), Neutron Server further subdivides the API layer into the Core API and the Extension API, and the Plugin layer into Core Plugins and Service Plugins (also called extension plugins). Network providers can extend Neutron’s function set according to their own requirements, but even without any extension Neutron still provides a complete solution. This is the key reason Neutron introduced the Core & Plugin architecture concept.

In short, Neutron’s software architecture is not too unique. It adheres to the consistent design style of OpenStack projects and has the following characteristics:

  • Distributed – Multiple server processes
  • RESTful API – Unified northbound interface
  • Plugin – Underlying heterogeneity compatibility
  • Asynchronous message queue – MQ
  • Agents – Workers

Neutron Server Startup process

NOTE: The code below is from OpenStack Rocky.

Neutron-server — Accepts and routes API requests to the appropriate OpenStack Networking plug-in for action.

Neutron Server corresponds to the neutron-server.service service process. It includes the Web Server, Plugins (Core Plugins, Extension Plugins), RPC Client/Server, DB ORM, and other functional modules.

The neutron-server.service start command:

neutron-server --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/api-paste.ini

Let us start with the startup script of the neutron-server.service process.

# /opt/stack/neutron/setup.cfg

[entry_points]
...
console_scripts =
...
    neutron-server = neutron.cmd.eventlet.server:main

Find the program entry function:

# /opt/stack/neutron/neutron/cmd/eventlet/server/__init__.py
def main():
    server.boot_server(wsgi_eventlet.eventlet_wsgi_server)

# /opt/stack/neutron/neutron/server/wsgi_eventlet.py
def eventlet_wsgi_server():
    # Get the WSGI Application
    neutron_api = service.serve_wsgi(service.NeutronApiService)
    # Start the API and RPC services
    start_api_and_rpc_workers(neutron_api)

The startup process of neutron-server.service is very simple:

  1. Initial configuration (loading and parsing configuration files)
  2. Get WSGI Application
  3. Start API and RPC services

NOTE: The first step, initializing the configuration, uses the oslo.config library to load the neutron.conf file and parse its contents. oslo.config has already been covered in OpenStack Implementation Technology Decomposition (7): the oslo_config common library, and will not be described here. We focus on the second and third steps.
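As a rough stand-in for what this first step does, the sketch below loads a neutron.conf-style file and reads options back. It deliberately uses only the standard-library configparser, not oslo.config itself (which adds typed option schemas, defaults, and CLI overrides); the option values are the ones used later in this article.

```python
import configparser
import tempfile

# Hypothetical minimal neutron.conf content (values match the article's
# later configuration example); not a full Neutron configuration.
conf_text = """
[DEFAULT]
core_plugin = ml2
service_plugins = router
"""

with tempfile.NamedTemporaryFile("w", suffix=".conf", delete=False) as f:
    f.write(conf_text)
    path = f.name

# What oslo.config conceptually does: parse the file, expose options
conf = configparser.ConfigParser()
conf.read(path)

print(conf["DEFAULT"]["core_plugin"])      # → ml2
print(conf["DEFAULT"]["service_plugins"])  # → router
```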

Get WSGI Application

The WSGI Application is a functional module of the Neutron Web Server. Python web servers usually follow WSGI, which divides the web server into three parts: the WSGI Server, WSGI Middleware, and the WSGI Application.
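A minimal sketch of that three-way split (hypothetical names; Neutron’s real stack is assembled by Paste and Pecan): an application, a middleware that wraps it, and the call a WSGI server makes on each request.

```python
# WSGI Application: takes environ and start_response, returns the body
def application(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"hello from the app"]

# WSGI Middleware: wraps an app, adds behavior, delegates inward
def request_id_middleware(app):
    """Append an X-Request-Id header to every response (illustrative)."""
    def wrapped(environ, start_response):
        def sr(status, headers):
            start_response(status, headers + [("X-Request-Id", "req-1")])
        return app(environ, sr)
    return wrapped

wsgi_app = request_id_middleware(application)

# WSGI Server side: invoke the callable for one request
captured = {}
def start_response(status, headers):
    captured["status"] = status
    captured["headers"] = headers

body = wsgi_app({"REQUEST_METHOD": "GET", "PATH_INFO": "/"}, start_response)
print(captured["status"])  # → 200 OK
print(b"".join(body))      # → b'hello from the app'
```

This wrapping pattern is exactly what the paste pipeline shown below performs repeatedly, filter by filter.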

NOTE: The WSGI protocol is covered in the Python Web Development Specification – WSGI, and will not be covered here.

The key code to get the WSGI Application is as follows:

# /opt/stack/neutron/neutron/server/wsgi_eventlet.py
neutron_api = service.serve_wsgi(service.NeutronApiService)

# /opt/stack/neutron/neutron/common/config.py
def load_paste_app(app_name):
    """Builds and returns a WSGI app from a paste config file.

    :param app_name: Name of the application to load
    """
    loader = wsgi.Loader(cfg.CONF)
    # Log the values of registered opts
    if cfg.CONF.debug:
        cfg.CONF.log_opt_values(LOG, logging.DEBUG)
    # The argument app_name is 'neutron'
    app = loader.load_app(app_name)
    return app

Related logs:

DEBUG oslo.service.wsgi [-] Loading app neutron from /etc/neutron/api-paste.ini

api-paste.ini is the configuration file of the Paste library. Paste + PasteDeploy + Routes + WebOb has been described in OpenStack RESTful API Development Framework and will not be repeated here. The configuration is as follows:

# /etc/neutron/api-paste.ini

[composite:neutronapi_v2_0]
use = call:neutron.auth:pipeline_factory
noauth = cors http_proxy_to_wsgi request_id catch_errors osprofiler extensions neutronapiapp_v2_0
keystone = cors http_proxy_to_wsgi request_id catch_errors osprofiler authtoken keystonecontext extensions neutronapiapp_v2_0

[filter:extensions]
paste.filter_factory = neutron.api.extensions:plugin_aware_extension_middleware_factory

[app:neutronapiapp_v2_0]
paste.app_factory = neutron.api.v2.router:APIRouter.factory

After a series of Paste library processing steps, the program flow enters the pipeline_factory function.

# /opt/stack/neutron/neutron/auth.py
def pipeline_factory(loader, global_conf, **local_conf):
    """Create a paste pipeline based on the 'auth_strategy' config option."""
    # The auth_strategy config option selects the pipeline, i.e. whether
    # Keystone authentication is enabled
    pipeline = local_conf[cfg.CONF.auth_strategy]
    pipeline = pipeline.split()
    # Load all WSGI Middleware filters
    filters = [loader.get_filter(n) for n in pipeline[:-1]]
    # The argument passed is neutronapiapp_v2_0
    app = loader.get_app(pipeline[-1])
    filters.reverse()
    # Wrap the WSGI Application with all WSGI Middleware filters
    # in reverse order
    for filter in filters:
        app = filter(app)
    return app

# /opt/stack/neutron/neutron/api/v2/router.py
def _factory(global_config, **local_config):
    return pecan_app.v2_factory(global_config, **local_config)

# /opt/stack/neutron/neutron/pecan_wsgi/app.py
def v2_factory(global_config, **local_config):
    # Processing Order:
    #   As request enters lower priority called before higher.
    #   Response from controller is passed from higher priority to lower.
    app_hooks = [
        hooks.UserFilterHook(),  # priority 90
        hooks.ContextHook(),  # priority 95
        hooks.ExceptionTranslationHook(),  # priority 100
        hooks.BodyValidationHook(),  # priority 120
        hooks.OwnershipValidationHook(),  # priority 125
        hooks.QuotaEnforcementHook(),  # priority 130
        hooks.NotifierHook(),  # priority 135
        hooks.QueryParametersHook(),  # priority 139
        hooks.PolicyHook(),  # priority 140
    ]
    # The root controller is root.V2Controller
    app = pecan.make_app(root.V2Controller(),
                         debug=False,
                         force_canonical=False,
                         hooks=app_hooks,
                         guess_content_type_from_ext=True)
    # Initialize Neutron Server
    startup.initialize_all()
    return app

From this we obtain the final WSGI Application, rooted at the “/” path of API requests. As the code shows, the web framework Neutron currently uses is Pecan (“A WSGI object-dispatching web framework, designed to be lean and fast with few dependencies.”), not the older PPRW stack (Paste + PasteDeploy + Routes + WebOb). Pecan’s object-dispatch routing makes the route mapping and view-function implementation of a WSGI Application much simpler than PPRW, which requires a lot of code irrelevant to the actual business. Pecan is now the preferred web framework for most OpenStack projects.
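Pecan’s object-dispatch idea can be sketched in a few lines: URL segments are walked as attributes of a controller tree, so no explicit route table is needed. The controllers below are hypothetical, not Neutron’s, and real Pecan adds hooks, content negotiation, and HTTP method dispatch on top of this.

```python
# Toy object-dispatch router in the spirit of Pecan (illustrative only)
class NetworksController:
    def index(self):
        return ["net-a", "net-b"]

class V2Controller:
    networks = NetworksController()

class RootController:
    v2 = V2Controller()

def dispatch(root, path):
    """Walk '/v2/networks' as root.v2.networks and call its index()."""
    node = root
    for segment in path.strip("/").split("/"):
        node = getattr(node, segment)
    return node.index()

print(dispatch(RootController(), "/v2/networks"))  # → ['net-a', 'net-b']
```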

Core API & Extension API

Neutron’s root controller root.V2Controller provides interfaces that list all Core API and Extension API refs (resource model descriptions).

Get Core API refs:

[root@localhost ~]# curl -i "http://172.18.22.200:9696/v2.0/" \
> -X GET \
> -H 'Content-type: application/json' \
> -H 'Accept: application/json' \
> -H "X-Auth-Project-Id: admin" \
> -H 'X-Auth-Token:gAAAAABchg8IRf8aMdYbm-7-vPJsFCoSecCJz9GZcPgrS0UirgSpbxIaF1f5duFsrkwRePBP6duTmVhV3GSIrHLqZ3RT21GQ1oDipTwCe8RktCnkEg5kXrUuQfAXmvjltRm5_0w5XbltJahVY0c3QXlrpP9G-IBdBWI7mpvyoP6h0x94000Ux20'
HTTP/1.1 200 OK
Content-Length: 516
Content-Type: application/json
X-Openstack-Request-Id: req-7c8aa1e6-1a18-433e-8ff5-95e59028cce5
Date: Mon, 11 Mar 2019 07:36:17 GMT

{
    "resources": [{
        "links": [{
            "href": "http://172.18.22.200:9696/v2.0/subnets",
            "rel": "self"
        }],
        "name": "subnet",
        "collection": "subnets"
    }, {
        "links": [{
            "href": "http://172.18.22.200:9696/v2.0/subnetpools",
            "rel": "self"
        }],
        "name": "subnetpool",
        "collection": "subnetpools"
    }, {
        "links": [{
            "href": "http://172.18.22.200:9696/v2.0/networks",
            "rel": "self"
        }],
        "name": "network",
        "collection": "networks"
    }, {
        "links": [{
            "href": "http://172.18.22.200:9696/v2.0/ports",
            "rel": "self"
        }],
        "name": "port",
        "collection": "ports"
    }]
}

Get Extension API refs:

[root@localhost ~]# curl -i "http://172.18.22.200:9696/v2.0/extensions/" \
> -X GET \
> -H 'Content-type: application/json' \
> -H 'Accept: application/json' \
> -H "X-Auth-Project-Id: admin" \
> -H 'X-Auth-Token:gAAAAABchg8IRf8aMdYbm-7-vPJsFCoSecCJz9GZcPgrS0UirgSpbxIaF1f5duFsrkwRePBP6duTmVhV3GSIrHLqZ3RT21GQ1oDipTwCe8RktCnkEg5kXrUuQfAXmvjltRm5_0w5XbltJahVY0c3QXlrpP9G-IBdBWI7mpvyoP6h0x94000Ux20'
HTTP/1.1 200 OK
Content-Length: 9909
Content-Type: application/json
X-Openstack-Request-Id: req-4dad9963-57c2-4b3e-a4d5-bc6fea5e78e8
Date: Mon, 11 Mar 2019 07:37:25 GMT

{
    "extensions": [{
        "alias": "default-subnetpools",
        "updated": "2016-02-18T18:00:00-00:00",
        "name": "Default Subnetpools",
        "links": [],
        "description": "Provides ability to mark and use a subnetpool as the default."
    }, {
        "alias": "availability_zone",
        "updated": "2015-01-01T10:00:00-00:00",
        "name": "Availability Zone",
        "links": [],
        "description": "The availability zone extension."
    }, {
        "alias": "network_availability_zone",
        "updated": "2015-01-01T10:00:00-00:00",
        "name": "Network Availability Zone",
        "links": [],
        "description": "Availability zone support for network."
    }, {
        "alias": "auto-allocated-topology",
        "updated": "2016-01-01T00:00:00-00:00",
        "name": "Auto Allocated Topology Services",
        "links": [],
        "description": "Auto Allocated Topology Services."
    }, {
        "alias": "ext-gw-mode",
        "updated": "2013-03-28T10:00:00-00:00",
        "name": "Neutron L3 Configurable external gateway mode",
        "links": [],
        "description": "Extension of the router abstraction for specifying whether SNAT should occur on the external gateway"
    }, {
        "alias": "binding",
        "updated": "2014-02-03T10:00:00-00:00",
        "name": "Port Binding",
        "links": [],
        "description": "Expose port bindings of a virtual port to external application"
    }, {
        "alias": "agent",
        "updated": "2013-02-03T10:00:00-00:00",
        "name": "agent",
        "links": [],
        "description": "The agent management extension."
    }, {
        "alias": "subnet_allocation",
        "updated": "2015-03-30T10:00:00-00:00",
        "name": "Subnet Allocation",
        "links": [],
        "description": "Enables allocation of subnets from a subnet pool"
    }, {
        "alias": "dhcp_agent_scheduler",
        "updated": "2013-02-07T10:00:00-00:00",
        "name": "DHCP Agent Scheduler",
        "links": [],
        "description": "Schedule networks among dhcp agents"
    }, {
        "alias": "external-net",
        "updated": "2013-01-14T10:00:00-00:00",
        "name": "Neutron external network",
        "links": [],
        "description": "Adds external network attribute to network resource."
    }, {
        "alias": "standard-attr-tag",
        "updated": "2017-01-01T00:00:00-00:00",
        "name": "Tag support for resources with standard attribute: subnet, trunk, router, network, policy, subnetpool, port, security_group, floatingip",
        "links": [],
        "description": "Enables to set tag on resources with standard attribute."
    }, {
        "alias": "flavors",
        "updated": "2015-09-17T10:00:00-00:00",
        "name": "Neutron Service Flavors",
        "links": [],
        "description": "Flavor specification for Neutron advanced services."
    }, {
        "alias": "net-mtu",
        "updated": "2015-03-25T10:00:00-00:00",
        "name": "Network MTU",
        "links": [],
        "description": "Provides MTU attribute for a network resource."
    }, {
        "alias": "network-ip-availability",
        "updated": "2015-09-24T00:00:00-00:00",
        "name": "Network IP Availability",
        "links": [],
        "description": "Provides IP availability data for each network and subnet."
    }, {
        "alias": "quotas",
        "updated": "2012-07-29T10:00:00-00:00",
        "name": "Quota management support",
        "links": [],
        "description": "Expose functions for quotas management per tenant"
    }, {
        "alias": "revision-if-match",
        "updated": "2016-12-11T00:00:00-00:00",
        "name": "If-Match constraints based on revision_number",
        "links": [],
        "description": "Extension indicating that If-Match based on revision_number is supported."
    }, {
        "alias": "l3-port-ip-change-not-allowed",
        "updated": "2018-10-09T10:00:00-00:00",
        "name": "Prevent L3 router ports IP address change extension",
        "links": [],
        "description": "Prevent change of IP address for some L3 router ports"
    }, {
        "alias": "availability_zone_filter",
        "updated": "2018-06-22T10:00:00-00:00",
        "name": "Availability Zone Filter Extension",
        "links": [],
        "description": "Add filter parameters to AvailabilityZone resource"
    }, {
        "alias": "l3-ha",
        "updated": "2014-04-26T00:00:00-00:00",
        "name": "HA Router extension",
        "links": [],
        "description": "Adds HA capability to routers."
    }, {
        "alias": "filter-validation",
        "updated": "2018-03-21T10:00:00-00:00",
        "name": "Filter parameters validation",
        "links": [],
        "description": "Provides validation on filter parameters."
    }, {
        "alias": "multi-provider",
        "updated": "2013-06-27T10:00:00-00:00",
        "name": "Multi Provider Network",
        "links": [],
        "description": "Expose mapping of virtual networks to multiple physical networks"
    }, {
        "alias": "quota_details",
        "updated": "2017-02-10T10:00:00-00:00",
        "name": "Quota details management support",
        "links": [],
        "description": "Expose functions for quotas usage statistics per project"
    }, {
        "alias": "address-scope",
        "updated": "2015-07-26T10:00:00-00:00",
        "name": "Address scope",
        "links": [],
        "description": "Address scopes extension."
    }, {
        "alias": "extraroute",
        "updated": "2013-02-01T10:00:00-00:00",
        "name": "Neutron Extra Route",
        "links": [],
        "description": "Extra routes configuration for L3 router"
    }, {
        "alias": "net-mtu-writable",
        "updated": "2017-07-12T00:00:00-00:00",
        "name": "Network MTU (writable)",
        "links": [],
        "description": "Provides a writable MTU attribute for a network resource."
    }, {
        "alias": "empty-string-filtering",
        "updated": "2018-05-01T10:00:00-00:00",
        "name": "Empty String Filtering Extension",
        "links": [],
        "description": "Allow filtering by attributes with empty string value"
    }, {
        "alias": "subnet-service-types",
        "updated": "2016-03-15T18:00:00-00:00",
        "name": "Subnet service types",
        "links": [],
        "description": "Provides ability to set the subnet service_types field"
    }, {
        "alias": "floatingip-pools",
        "updated": "2018-03-21T10:00:00-00:00",
        "name": "Floating IP Pools Extension",
        "links": [],
        "description": "Provides a floating IP pools API."
    }, {
        "alias": "port-mac-address-regenerate",
        "updated": "2018-05-03T10:00:00-00:00",
        "name": "Neutron Port MAC address regenerate",
        "links": [],
        "description": "Network port MAC address regenerate"
    }, {
        "alias": "standard-attr-timestamp",
        "updated": "2016-09-12T10:00:00-00:00",
        "name": "Resource timestamps",
        "links": [],
        "description": "Adds created_at and updated_at fields to all Neutron resources that have Neutron standard attributes."
    }, {
        "alias": "provider",
        "updated": "2012-09-07T10:00:00-00:00",
        "name": "Provider Network",
        "links": [],
        "description": "Expose mapping of virtual networks to physical networks"
    }, {
        "alias": "service-type",
        "updated": "2013-01-20T00:00:00-00:00",
        "name": "Neutron Service Type Management",
        "links": [],
        "description": "API for retrieving service providers for Neutron advanced services"
    }, {
        "alias": "l3-flavors",
        "updated": "2016-05-17T00:00:00-00:00",
        "name": "Router Flavor Extension",
        "links": [],
        "description": "Flavor support for routers."
    }, {
        "alias": "port-security",
        "updated": "2012-07-23T10:00:00-00:00",
        "name": "Port Security",
        "links": [],
        "description": "Provides port security"
    }, {
        "alias": "extra_dhcp_opt",
        "updated": "2013-03-17T12:00:00-00:00",
        "name": "Neutron Extra DHCP options",
        "links": [],
        "description": "Extra options configuration for DHCP. For example PXE boot options to DHCP clients can be specified (e.g. tftp-server, server-ip-address, bootfile-name)"
    }, {
        "alias": "port-security-groups-filtering",
        "updated": "2018-01-09T09:00:00-00:00",
        "name": "Port filtering on security groups",
        "links": [],
        "description": "Provides security groups filtering when listing ports"
    }, {
        "alias": "standard-attr-revisions",
        "updated": "2016-04-11T10:00:00-00:00",
        "name": "Resource revision numbers",
        "links": [],
        "description": "This extension will display the revision number of neutron resources."
    }, {
        "alias": "pagination",
        "updated": "2016-06-12T00:00:00-00:00",
        "name": "Pagination support",
        "links": [],
        "description": "Extension that indicates that pagination is enabled."
    }, {
        "alias": "sorting",
        "updated": "2016-06-12T00:00:00-00:00",
        "name": "Sorting support",
        "links": [],
        "description": "Extension that indicates that sorting is enabled."
    }, {
        "alias": "security-group",
        "updated": "2012-10-05T10:00:00-00:00",
        "name": "security-group",
        "links": [],
        "description": "The security groups extension."
    }, {
        "alias": "l3_agent_scheduler",
        "updated": "2013-02-07T10:00:00-00:00",
        "name": "L3 Agent Scheduler",
        "links": [],
        "description": "Schedule routers among l3 agents"
    }, {
        "alias": "fip-port-details",
        "updated": "2018-04-09T10:00:00-00:00",
        "name": "Floating IP Port Details Extension",
        "links": [],
        "description": "Add port_details attribute to Floating IP resource"
    }, {
        "alias": "router_availability_zone",
        "updated": "2015-01-01T10:00:00-00:00",
        "name": "Router Availability Zone",
        "links": [],
        "description": "Availability zone support for router."
    }, {
        "alias": "rbac-policies",
        "updated": "2015-06-17T12:15:12-00:00",
        "name": "RBAC Policies",
        "links": [],
        "description": "Allows creation and modification of policies that control tenant access to resources."
    }, {
        "alias": "standard-attr-description",
        "updated": "2016-02-10T10:00:00-00:00",
        "name": "standard-attr-description",
        "links": [],
        "description": "Extension to add descriptions to standard attributes"
    }, {
        "alias": "ip-substring-filtering",
        "updated": "2017-11-28T09:00:00-00:00",
        "name": "IP address substring filtering",
        "links": [],
        "description": "Provides IP address substring filtering when listing ports"
    }, {
        "alias": "router",
        "updated": "2012-07-20T10:00:00-00:00",
        "name": "Neutron L3 Router",
        "links": [],
        "description": "Router abstraction for basic L3 forwarding between L2 Neutron networks and access to external networks via a NAT gateway."
    }, {
        "alias": "allowed-address-pairs",
        "updated": "2013-07-23T10:00:00-00:00",
        "name": "Allowed Address Pairs",
        "links": [],
        "description": "Provides allowed address pairs"
    }, {
        "alias": "binding-extended",
        "updated": "2017-07-17T10:00:00-00:00",
        "name": "Port Bindings Extended",
        "links": [],
        "description": "Expose port bindings of a virtual port to external application"
    }, {
        "alias": "project-id",
        "updated": "2016-09-09T09:09:09-09:09",
        "name": "project_id field enabled",
        "links": [],
        "description": "Extension that indicates that project_id field is enabled."
    }, {
        "alias": "dvr",
        "updated": "2014-06-1T10:00:00-00:00",
        "name": "Distributed Virtual Router",
        "links": [],
        "description": "Enables configuration of Distributed Virtual Routers."
    }]
}

As these two API calls show, Neutron’s Core API resources include only the layer-2-related networks, subnets, subnetpools, and ports; everything else is an Extension API resource. The Core API is the foundation of Neutron, its minimal and stable core function set. The Extension API, in contrast, is where Neutron’s value grows: this highly extensible API is what gives Neutron its healthy open source ecosystem.

If you look at the source implementation of the root controller V2Controller, you may wonder why V2Controller implements only the ExtensionsController child controller, and why the ExtensionsController implementation simply lists the Extension API resources. So where are the controllers for the many resource objects documented in the official Neutron documentation (Networking API v2.0) implemented? The answer is the startup.initialize_all() function. But before describing the controller implementation, we need to understand how Neutron plugins are loaded, because API resources, controllers, and plugins are all interrelated.

Core Plugins & Service Plugins

Plugins are classified into the Core Plugin and Service (Extension) Plugins. The concrete plugin implementations are selected through configuration options, e.g.:

[DEFAULT]
...
# The core plugin Neutron will use (string value)
core_plugin = ml2

# The service plugins Neutron will use (list value)
service_plugins = neutron.services.l3_router.l3_router_plugin.L3RouterPlugin

As the option types show (string value vs. list value), there can be only one Core Plugin (ml2 by default), whereas multiple Service Plugins can be specified at once, for example L3RouterPlugin, FWaaSPlugin, LBaaSPlugin, VPNaaSPlugin, and so on.
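The “one core plugin, many service plugins” rule can be sketched as follows. This is a hypothetical minimal registry mirroring the idea behind neutron-lib’s Plugins Directory and the duplicate-plugin check in _load_service_plugins, not their real API; the plugin names are stand-in strings.

```python
class PluginDirectory:
    """Toy plugins directory: one plugin per type, looked up by alias."""

    def __init__(self):
        self._plugins = {}

    def add_plugin(self, alias, plugin):
        # Mirrors the fatal error Neutron raises when two plugins are
        # configured for the same service type
        if alias in self._plugins:
            raise ValueError("Multiple plugins for service %s" % alias)
        self._plugins[alias] = plugin

    def get_plugin(self, alias="CORE"):
        return self._plugins.get(alias)

directory = PluginDirectory()
directory.add_plugin("CORE", "Ml2Plugin")            # the single core plugin
directory.add_plugin("L3_ROUTER_NAT", "L3RouterPlugin")  # a service plugin

print(directory.get_plugin())                  # → Ml2Plugin
print(directory.get_plugin("L3_ROUTER_NAT"))   # → L3RouterPlugin

try:
    directory.add_plugin("CORE", "AnotherCorePlugin")
except ValueError as exc:
    print(exc)  # → Multiple plugins for service CORE
```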

The code to load the Plugins is as follows:

# /opt/stack/neutron/neutron/manager.py
def init():
    """Load the plugins (core plugin + extension service plugins).

    The Plugins Directory (neutron_lib/plugins/directory.py) maintains the
    set of loaded plugins and provides management functions such as
    add_plugin, get_plugins, get_unique_plugins, and is_loaded.
    NeutronManager implements the singleton pattern (to maintain a single,
    uniform set of plugins).
    """
    NeutronManager.get_instance()


class NeutronManager(object):
    """Neutron's Manager class.

    Neutron's Manager class is responsible for parsing a config file and
    instantiating the correct plugin that concretely implements
    neutron_plugin_base class.
    """
    ...
    def __init__(self, options=None, config_file=None):
        ...
        # e.g. core_plugin = ml2
        plugin_provider = cfg.CONF.core_plugin
        LOG.info("Loading core plugin: %s", plugin_provider)
        # NOTE(armax): keep hold of the actual plugin object,
        # e.g. an instance of the Ml2Plugin class
        plugin = self._get_plugin_instance(CORE_PLUGINS_NAMESPACE,
                                           plugin_provider)
        # Register the core plugin in the Plugins Directory
        directory.add_plugin(lib_const.CORE, plugin)
        ...
        # Load the extension plugins supported by the core plugin by default
        self._load_services_from_core_plugin(plugin)
        # Load the configured extension (service) plugins
        self._load_service_plugins()
        ...

    def _load_services_from_core_plugin(self, plugin):
        """Puts core plugin in service_plugins for supported services."""
        LOG.debug("Loading services supported by the core plugin")
        # supported service types are derived from supported extensions
        # (e.g. lbaas, fwaas, vpnaas, router, qos)
        for ext_alias in getattr(plugin, "supported_extension_aliases", []):
            if ext_alias in constants.EXT_TO_SERVICE_MAPPING:
                service_type = constants.EXT_TO_SERVICE_MAPPING[ext_alias]
                # Register the extension plugin in the Plugins Directory
                directory.add_plugin(service_type, plugin)
                LOG.info("Service %s is supported by the core plugin",
                         service_type)

    def _load_service_plugins(self):
        """Loads service plugins.

        Starts from the core plugin and checks if it supports advanced
        services then loads classes provided in configuration.
        """
        plugin_providers = cfg.CONF.service_plugins
        # Append the default extension plugins required by the native
        # Neutron data store (e.g. tag, timestamp, flavors, revisions)
        plugin_providers.extend(self._get_default_service_plugins())
        LOG.debug("Loading service plugins: %s", plugin_providers)
        for provider in plugin_providers:
            if provider == '':
                continue
            LOG.info("Loading Plugin: %s", provider)
            plugin_inst = self._get_plugin_instance('neutron.service_plugins',
                                                    provider)

            # only one implementation of svc_type allowed
            # specifying more than one plugin
            # for the same type is a fatal exception
            # TODO(armax): simplify this by moving the conditional into the
            # directory itself.
            plugin_type = plugin_inst.get_plugin_type()
            if directory.get_plugin(plugin_type):
                raise ValueError(_("Multiple plugins for service "
                                   "%s were configured") % plugin_type)
            # Register the service plugin in the Plugins Directory
            directory.add_plugin(plugin_type, plugin_inst)

            # search for possible agent notifiers declared in service plugin
            # (needed by agent management extension)
            plugin = directory.get_plugin()
            if (hasattr(plugin, 'agent_notifiers') and
                    hasattr(plugin_inst, 'agent_notifiers')):
                # Merge the agent notifiers of the service plugin into the
                # agent notifiers dictionary of the core plugin
                plugin.agent_notifiers.update(plugin_inst.agent_notifiers)
            # disable incompatible extensions in core plugin if any
            utils.disable_extension_by_service_plugin(plugin, plugin_inst)

            LOG.debug("Successfully loaded %(type)s plugin. "
                      "Description: %(desc)s",
                      {"type": plugin_type,
                       "desc": plugin_inst.get_plugin_description()})

At this point, the Core and Extension Plugin classes currently supported by Neutron have been instantiated and registered in the Plugins Directory. The Plugins Directory is an important utility module used by any code logic that needs access to the loaded plugins.

After the plugins are registered, the worker processes or coroutines belonging to the plugins are started as part of the neutron-server.service startup.

# /opt/stack/neutron/neutron/server/wsgi_eventlet.py
# Executed when starting neutron-server.service
def start_api_and_rpc_workers(neutron_api):
    try:
        worker_launcher = service.start_all_workers()

        pool = eventlet.GreenPool()
        # Start the WSGI Application as a coroutine
        api_thread = pool.spawn(neutron_api.wait)
        # Start the RPC workers of the plugins as a coroutine
        plugin_workers_thread = pool.spawn(worker_launcher.wait)

        # api and other workers should die together. When one dies,
        # kill the other.
        api_thread.link(lambda gt: plugin_workers_thread.kill())
        plugin_workers_thread.link(lambda gt: api_thread.kill())

        pool.waitall()
    except NotImplementedError:
        LOG.info("RPC was already started in parent process by "
                 "plugin.")
        neutron_api.wait()

# /opt/stack/neutron/neutron/service.py
def _get_rpc_workers():
    # Get the core plugin from the Plugins Directory
    plugin = directory.get_plugin()
    # Get all loaded plugins (core + service)
    service_plugins = directory.get_plugins().values()
    ...
    # passing service plugins only, because core plugin is among them
    # Create RpcWorker instances for the plugins (core + service)
    rpc_workers = [RpcWorker(service_plugins,
                             worker_process_count=cfg.CONF.rpc_workers)]
    if (cfg.CONF.rpc_state_report_workers > 0 and
            plugin.rpc_state_report_workers_supported()):
        rpc_workers.append(
            RpcReportsWorker(
                [plugin],
                worker_process_count=cfg.CONF.rpc_state_report_workers
            )
        )
    return rpc_workers


class RpcWorker(neutron_worker.BaseWorker):
    """Wraps a worker to be handled by ProcessLauncher"""
    start_listeners_method = 'start_rpc_listeners'

    def __init__(self, plugins, worker_process_count=1):
        super(RpcWorker, self).__init__(
            worker_process_count=worker_process_count
        )
        self._plugins = plugins
        self._servers = []

    def start(self):
        super(RpcWorker, self).start()
        for plugin in self._plugins:
            if hasattr(plugin, self.start_listeners_method):
                try:
                    servers = getattr(plugin, self.start_listeners_method)()
                except NotImplementedError:
                    continue
                self._servers.extend(servers)
    ...


class RpcReportsWorker(RpcWorker):
    start_listeners_method = 'start_rpc_state_reports_listener'

The difference between RpcWorker and RpcReportsWorker is that the latter only reports RPC state, while the former is the RPC worker for the real Neutron business logic. Whether RPC state reporting is enabled is controlled by the rpc_state_report_workers configuration option.

How the RPC workers are started depends on the rpc_workers and rpc_state_report_workers configuration options. If the (integer) value is less than 1, the RPC workers are started as coroutines inside the neutron-server process; otherwise, new processes are forked and the workers run as coroutines within those processes.
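That decision can be summarized in a tiny sketch. This is a hypothetical helper following the rule described above, not Neutron code:

```python
def rpc_worker_mode(rpc_workers):
    """Illustrative: where RPC workers run for a given worker count."""
    if rpc_workers < 1:
        # Run as coroutines inside the neutron-server process itself
        return "in-process coroutine"
    # Fork worker processes; workers run as coroutines inside them
    return "%d forked worker process(es)" % rpc_workers

print(rpc_worker_mode(0))  # → in-process coroutine
print(rpc_worker_mode(2))  # → 2 forked worker process(es)
```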

Core Controller & Extension Controller

The Controller is a very important concept in Pecan: it encapsulates the URL routing, view functions, HTTP methods, and mapper of the WSGI Application, and is the core object of the web framework. More information about controllers can be found on Pecan’s official website. Here we focus on how Neutron implements its controllers.

Besides generating and returning the WSGI Application object, the neutronapiapp_v2_0 factory function also executes startup.initialize_all(), which prepares the prerequisites for neutron-server.service to start: loading the plugins, instantiating the API resource controllers, and establishing the mappings between API resources, controllers, and plugins.

# /opt/stack/neutron/neutron/pecan_wsgi/startup.py

# Core Resources listing
RESOURCES = {'network': 'networks',
             'subnet': 'subnets',
             'subnetpool': 'subnetpools',
             'port': 'ports'}


def initialize_all():
    # Load the Plugins (done by the manager.init() above).
    # PluginAwareExtensionManager loads extensions from the configured
    # extension paths and provides common management functions for
    # Extension Plugins, e.g. add_extension, extend_resources, get_resources
    ext_mgr = extensions.PluginAwareExtensionManager.get_instance()
    ext_mgr.extend_resources("2.0", attributes.RESOURCES)
    # At this stage we have a fully populated resource attribute map;
    # build Pecan controllers and routes for all core resources
    plugin = directory.get_plugin()
    # Core Resources
    for resource, collection in RESOURCES.items():
        # Keeps track of Neutron resources for which quota limits are enforced.
        resource_registry.register_resource_by_name(resource)
        # Encapsulate the Core Resource and the Core Plugin into a
        # new_controller instance object
        new_controller = res_ctrl.CollectionsController(collection, resource,
                                                        plugin=plugin)
        # Save new_controller as resource_name:new_controller in the
        # NeutronManager instance attributes
        manager.NeutronManager.set_controller_for_resource(
            collection, new_controller)
        # Save plugin as resource_name:plugin in the NeutronManager
        # instance attributes
        manager.NeutronManager.set_plugin_for_resource(collection, plugin)

    pecanized_resources = ext_mgr.get_pecan_resources()
    for pec_res in pecanized_resources:
        manager.NeutronManager.set_controller_for_resource(
            pec_res.collection, pec_res.controller)
        manager.NeutronManager.set_plugin_for_resource(
            pec_res.collection, pec_res.plugin)

    # Now build Pecan Controllers and routes for all extensions
    resources = ext_mgr.get_resources()
    # Extensions controller is already defined, we don't need it.
    resources.pop(0)
    for ext_res in resources:
        path_prefix = ext_res.path_prefix.strip('/')
        collection = ext_res.collection
        # Retrieving the parent resource. It is expected the format of
        # the parent resource to be:
        # {'collection_name': 'name-of-collection',
        #  'member_name': 'name-of-resource'}
        # collection_name does not appear to be used in the legacy code
        # inside the controller logic, so we can assume we do not need it.
        parent = ext_res.parent or {}
        parent_resource = parent.get('member_name')
        collection_key = collection
        if parent_resource:
            collection_key = '/'.join([parent_resource, collection])
        collection_actions = ext_res.collection_actions
        member_actions = ext_res.member_actions
        if manager.NeutronManager.get_controller_for_resource(collection_key):
            # This is a collection that already has a pecan controller, we
            # do not need to do anything else
            continue
        legacy_controller = getattr(ext_res.controller, 'controller',
                                    ext_res.controller)
        new_controller = None
        if isinstance(legacy_controller, base.Controller):
            resource = legacy_controller.resource
            plugin = legacy_controller.plugin
            attr_info = legacy_controller.attr_info
            member_actions = legacy_controller.member_actions
            pagination = legacy_controller.allow_pagination
            sorting = legacy_controller.allow_sorting
            # NOTE(blogan): legacy_controller and ext_res both can both have
            # member_actions. the member_actions for ext_res are strictly for
            # routing, while member_actions for legacy_controller are used for
            # handling the request once the routing has found the Controller.
            # They're always the same so we will just use the ext_res
            # member_actions.
            # Encapsulate the Plugin and the attributes of the original
            # Extension Controller into a new_controller instance object
            new_controller = res_ctrl.CollectionsController(
                collection, resource, resource_info=attr_info,
                parent_resource=parent_resource, member_actions=member_actions,
                plugin=plugin, allow_pagination=pagination,
                allow_sorting=sorting, collection_actions=collection_actions)
            # new_controller.collection has replaced hyphens with underscores
            manager.NeutronManager.set_plugin_for_resource(
                new_controller.collection, plugin)
            if path_prefix:
                manager.NeutronManager.add_resource_for_path_prefix(
                    collection, path_prefix)
        else:
            new_controller = utils.ShimCollectionsController(
                collection, None, legacy_controller,
                collection_actions=collection_actions,
                member_actions=member_actions,
                action_status=ext_res.controller.action_status,
                collection_methods=ext_res.collection_methods)
        # Save new_controller as resource_name:new_controller in the
        # NeutronManager instance attributes
        manager.NeutronManager.set_controller_for_resource(
            collection_key, new_controller)

    # Certain policy checks require that the extensions are loaded
    # and the RESOURCE_ATTRIBUTE_MAP populated before they can be
    # properly initialized. This can only be claimed with certainty
    # once this point in the code has been reached. In the event
    # that the policies have been initialized before this point,
    # calling reset will cause the next policy check to
    # re-initialize with all of the required data in place.
    policy.reset()

In short, all the initialize_all function does is load all the Plugins first, then package Core Plugins + Core Resources and Extension Plugins + Extension Resources into CollectionsController instance objects and register them in the NeutronManager instance attributes self.resource_plugin_mappings and self.resource_controller_mappings.
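The registration logic above boils down to two dictionaries keyed by collection name. A minimal sketch of the idea (illustrative class and variable names, not Neutron's actual code):

```python
# Simplified sketch of how initialize_all() wires resources to plugins and
# controllers. FakePlugin and the plain dicts stand in for Neutron's real
# Ml2Plugin and the NeutronManager instance attributes.

class FakePlugin:
    """Stands in for a Core Plugin that implements the view logic."""
    def get_networks(self):
        return [{'id': 'net-1'}]

class CollectionsController:
    """Binds one resource collection to the plugin that serves it."""
    def __init__(self, collection, resource, plugin):
        self.collection = collection
        self.resource = resource
        self.plugin = plugin

resource_controller_mappings = {}
resource_plugin_mappings = {}

RESOURCES = {'network': 'networks', 'subnet': 'subnets',
             'subnetpool': 'subnetpools', 'port': 'ports'}

plugin = FakePlugin()
for resource, collection in RESOURCES.items():
    controller = CollectionsController(collection, resource, plugin)
    resource_controller_mappings[collection] = controller
    resource_plugin_mappings[collection] = plugin

# Later, request routing is only a dictionary lookup:
controller = resource_controller_mappings['networks']
print(controller.resource)               # network
print(controller.plugin.get_networks())  # [{'id': 'net-1'}]
```

Note that every Core Resource maps to the same plugin object, which matches the statement later in this article that all Core Resources share one Core Plugin.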

Neutron has come a long way: from the original separation of the Core API and the Extension API to the unified handling of Core and Extension Controllers.

Core API request processing

The root controller V2Controller does not explicitly declare the Core Resource Controllers. Instead, all URL paths that are not explicitly defined, which includes all Core API requests, are routed to the method def _lookup(self, collection, *remainder).

    @utils.expose()
    def _lookup(self, collection, *remainder):
        # if collection exists in the extension to service plugins map then
        # we are assuming that collection is the service plugin and
        # needs to be remapped.
        # Example: https://neutron.endpoint/v2.0/lbaas/loadbalancers
        if (remainder and
                manager.NeutronManager.get_resources_for_path_prefix(
                    collection)):
            collection = remainder[0]
            remainder = remainder[1:]
        # The collection argument is a Core Resource collection such as
        # networks, subnets, subnetpools or ports; get its Controller from
        # the NeutronManager instance object
        controller = manager.NeutronManager.get_controller_for_resource(
            collection)
        if not controller:
            LOG.warning("No controller found for: %s - returning response "
                        "code 404", collection)
            pecan.abort(404)
        # Store resource and collection names in pecan request context so that
        # hooks can leverage them if necessary. The following code uses
        # attributes from the controller instance to ensure names have been
        # properly sanitized (eg: replacing dashes with underscores)
        request.context['resource'] = controller.resource
        request.context['collection'] = controller.collection
        # NOTE(blogan): initialize a dict to store the ids of the items walked
        # in the path for example: /networks/1234 would cause uri_identifiers
        # to contain: {'network_id': '1234'}
        # This is for backwards compatibility with legacy extensions that
        # defined their own controllers and expected kwargs to be passed in
        # with the uri_identifiers
        request.context['uri_identifiers'] = {}
        return controller, remainder

The initialize_all phase registered the Core Controllers in the NeutronManager instance attribute self.resource_controller_mappings; _lookup simply extracts the matching Controller from that attribute according to the resource type of the API request (e.g. networks, subnets).

(Pdb) controller
<neutron.pecan_wsgi.controllers.resource.CollectionsController object at 0x7f0fc2b60e10>
(Pdb) controller.resource
'network'
(Pdb) controller.plugin
<weakproxy at 0x7f0fc2b69cb0 to Ml2Plugin at 0x7f0fc3343fd0>
(Pdb) controller.plugin_lister
<bound method Ml2Plugin.get_networks of <neutron.plugins.ml2.plugin.Ml2Plugin object at 0x7f0fc3343fd0>>

By printing the above controller instance properties, it can be seen that the Resource network is associated with the Core Plugin ML2, and that the "real view function" for this Resource is implemented in the Plugin class. For example, the view function for the API request GET /v2.0/networks is Ml2Plugin.get_networks. In fact, all Core Resources are associated with the same Core Plugin, while Extension Resources are associated with Service Plugins of different types. This is how Neutron implements the call encapsulation from the Neutron API layer to the Neutron Plugin layer.
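The binding between a Resource and its plugin view function follows a simple naming convention (plugin_lister above is the bound get_networks method). A minimal sketch of this pattern with stand-in classes (the real CollectionsController is considerably richer):

```python
# Sketch of how a controller resolves its "real view function" from the
# plugin by naming convention (get_<collection>). Ml2PluginStub and
# Controller are illustrative stand-ins, not Neutron's actual classes.

class Ml2PluginStub:
    def get_networks(self, context=None, filters=None):
        return [{'id': 'net-1', 'name': 'demo'}]

class Controller:
    def __init__(self, collection, resource, plugin):
        self.collection = collection
        self.resource = resource
        self.plugin = plugin
        # GET /v2.0/<collection> ultimately calls this bound method
        self.plugin_lister = getattr(plugin, 'get_%s' % collection)

    def index(self, context=None):
        return {self.collection: self.plugin_lister(context)}

ctrl = Controller('networks', 'network', Ml2PluginStub())
print(ctrl.index())  # {'networks': [{'id': 'net-1', 'name': 'demo'}]}
```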

# /opt/stack/neutron/neutron/plugins/ml2/plugin.py

    @db_api.retry_if_session_inactive()
    def get_networks(self, context, filters=None, fields=None,
                     sorts=None, limit=None, marker=None, page_reverse=False):
        # NOTE(ihrachys) use writer manager to be able to update mtu
        # TODO(ihrachys) remove in Queens when mtu is not nullable
        with db_api.CONTEXT_WRITER.using(context):
            nets_db = super(Ml2Plugin, self)._get_networks(
                context, filters, None, sorts, limit, marker, page_reverse)

            # NOTE(ihrachys) pre Pike networks may have null mtus; update them
            # in database if needed
            # TODO(ihrachys) remove in Queens+ when mtu is not nullable
            net_data = []
            for net in nets_db:
                if net.mtu is None:
                    net.mtu = self._get_network_mtu(net, validate=False)
                net_data.append(self._make_network_dict(net, context=context))

            self.type_manager.extend_networks_dict_provider(context, net_data)
            nets = self._filter_nets_provider(context, net_data, filters)
        return [db_utils.resource_fields(net, fields) for net in nets]

Extension API request processing

The essence of the Extension API is a WSGI Middleware rather than a WSGI Application.

# /opt/stack/neutron/neutron/api/extensions.py
import routes


def plugin_aware_extension_middleware_factory(global_config, **local_config):
    """Paste factory."""
    def _factory(app):
        ext_mgr = PluginAwareExtensionManager.get_instance()
        # ExtensionMiddleware is the WSGI encapsulation (routing mapper,
        # view functions) of the Extension API; it receives Extension
        # Resource requests, processes them and returns the response
        return ExtensionMiddleware(app, ext_mgr=ext_mgr)
    return _factory


class ExtensionMiddleware(base.ConfigurableMiddleware):
    """Extensions middleware for WSGI."""

    def __init__(self, application, ...):
        ...
        # extended resources
        for resource in self.ext_mgr.get_resources():
            ...
            # define Actions
            for action, method in resource.collection_actions.items():
                conditions = dict(method=[method])
                path = "/%s/%s" % (resource.collection, action)
                with mapper.submapper(controller=resource.controller,
                                      action=action,
                                      path_prefix=path_prefix,
                                      conditions=conditions) as submap:
                    submap.connect(path_prefix + path, path)
                    submap.connect(path_prefix + path + "_format",
                                   "%s.:(format)" % path)
            # set Methods
            for action, method in resource.collection_methods.items():
                conditions = dict(method=[method])
                path = "/%s" % resource.collection
                with mapper.submapper(controller=resource.controller,
                                      action=action,
                                      path_prefix=path_prefix,
                                      conditions=conditions) as submap:
                    submap.connect(path_prefix + path, path)
                    submap.connect(path_prefix + path + "_format",
                                   "%s.:(format)" % path)
            # map ResourceCollection, ResourceController, ResourceMemberAction
            mapper.resource(resource.collection, resource.collection,
                            controller=resource.controller,
                            member=resource.member_actions,
                            parent_resource=resource.parent,
                            path_prefix=path_prefix)
            ...

As can be seen from the above code, although the Core API uses the Pecan framework, the Extension API still uses the Routes library to maintain the Mapper.

(Pdb) resource.collection
'routers'
(Pdb) resource.member_actions
{'remove_router_interface': 'PUT', 'add_router_interface': 'PUT'}
(Pdb) resource.controller.__class__
<class 'webob.dec.wsgify'>
(Pdb) resource.controller.controller
<neutron.api.v2.base.Controller object at 0x7f81fd694ed0>
(Pdb) resource.controller.controller.plugin
<weakproxy at 0x7f81fd625158 to L3RouterPlugin at 0x7f81fd6c09d0>

The Plugin corresponding to the Extension Resource routers is L3RouterPlugin, and the real view function for the API request GET /v2.0/routers is neutron.services.l3_router.l3_router_plugin:L3RouterPlugin.get_routers.

# /opt/stack/neutron/neutron/db/l3_db.py
# L3RouterPlugin inherits from the parent class L3_NAT_dbonly_mixin

    @db_api.retry_if_session_inactive()
    def get_routers(self, context, filters=None, fields=None,
                    sorts=None, limit=None, marker=None,
                    page_reverse=False):
        marker_obj = lib_db_utils.get_marker_obj(
            self, context, 'router', limit, marker)
        return model_query.get_collection(context, l3_models.Router,
                                          self._make_router_dict,
                                          filters=filters, fields=fields,
                                          sorts=sorts, limit=limit,
                                          marker_obj=marker_obj,
                                          page_reverse=page_reverse)

Neutron Server summary

Neutron Server startup process:

  1. Load (instantiate) Core WSGI Application and Extension WSGI Middleware
  2. Load (instantiate) Core & Extension Plugins
  3. Start the Web Server service
  4. Start the Plugins RPC service

Developers familiar with OpenStack will sense that, compared with other projects such as Nova and Cinder, Neutron's code is written quite unconventionally. Even experienced developers find it hard to quickly grasp the essentials, which is one reason Neutron is considered difficult to get started with. That is obviously not a good thing, but then consider who made Neutron.

The Neutron API consists mainly of the Core API and the Extension API, which at the Web Server level (WSGI Server, WSGI Middleware, WSGI Application) correspond to a WSGI Application and a WSGI Middleware respectively. Both the Core API and the Extension API encapsulate Resources through Controller classes; the difference is that the former uses the Pecan framework, while the latter still uses the Routes library to complete the mapping among URL Router, View Function and HTTP Method. Although where the code lives and how it is implemented are not uniform, the end result is the same: the Request is passed from the API layer to the Plugin layer, which then forwards it asynchronously through MQ, using the RPC protocol, to the Agents service processes that actually perform the tasks.

NOTE: Not all requests are sent asynchronously to the Agents service process, some requests are completed in the Plugins layer, for example, to get network resource information.

Plug-ins and Agents

Neutron Plugins are part of Neutron Server, but they are discussed here because Plugins are closely related to Agents. As the "relay" layer of Neutron's internal invocation, the Neutron Plugins connect the Neutron API layer above with the Neutron Agents layer below, and the bridge in between is naturally the RPC protocol plus MQ.

OpenStack Networking Plug-ins and agents — Plug and unplug Ports, Create networks or subnets, and provide IP addressing. These plug-ins and agents differ depending on the vendor and technologies used in the particular cloud. OpenStack Networking ships with plug-ins and agents for Cisco virtual and physical switches, NEC OpenFlow products, Open vSwitch, Linux bridging, and the VMware NSX product. The common agents are L3 (layer 3), DHCP (dynamic host IP addressing), and a plug-in agent.

The Messaging queue — Used by most OpenStack Networking installations to route information between the neutron-server and various agents. Also acts as a database to store networking state for particular plug-ins.

Plugin RPC

The Plugin layer encapsulates the RPC protocol, and a Plugin acts as both RPC Producer and RPC Consumer.

  • RPC Producer: The Plugin sends messages to the Agent
  • RPC Consumer: Plugin receives messages sent by the Agent

First, to become a Consumer, a Plugin needs to register with the RPC server, a process I call Registered Endpoints. Through it, the Plugin registers the endpoints (calling interfaces) through which the corresponding Agents communicate with it.
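The endpoint mechanism itself is easy to picture: an RPC server holds a list of endpoint objects and dispatches each incoming message to the first endpoint that implements the requested method. A simplified sketch of that dispatch loop (illustrative classes, not oslo.messaging's actual implementation):

```python
# Minimal sketch of "registered endpoints" dispatch. The two callback
# classes are illustrative stand-ins for Neutron's real RPC callbacks;
# RpcDispatcher mirrors the idea behind oslo.messaging's endpoint list.

class DhcpRpcCallback:
    def get_active_networks(self, context):
        return ['net-1', 'net-2']

class AgentExtRpcCallback:
    def report_state(self, context, agent_state):
        return 'alive:%s' % agent_state['host']

class RpcDispatcher:
    def __init__(self, endpoints):
        self.endpoints = endpoints

    def dispatch(self, context, method, **kwargs):
        # Try each endpoint in order until one implements the method
        for endpoint in self.endpoints:
            func = getattr(endpoint, method, None)
            if callable(func):
                return func(context, **kwargs)
        raise AttributeError('no endpoint supports %r' % method)

dispatcher = RpcDispatcher([DhcpRpcCallback(), AgentExtRpcCallback()])
print(dispatcher.dispatch({}, 'get_active_networks'))  # ['net-1', 'net-2']
print(dispatcher.dispatch({}, 'report_state',
                          agent_state={'host': 'node1'}))  # alive:node1
```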

Registered Endpoints code logic:

# /opt/stack/neutron/neutron/plugins/ml2/plugin.py
# Take the Core Plugin's RPC Listeners startup method as an example

class Ml2Plugin(...):
    ...
    @log_helpers.log_method_call
    def start_rpc_listeners(self):
        """Start the RPC loop to let the plugin communicate with agents."""
        self._setup_rpc()
        self.topic = topics.PLUGIN
        self.conn = n_rpc.Connection()
        # Register the endpoints with the RPC server and create the
        # RPC Consumer instance objects
        self.conn.create_consumer(self.topic, self.endpoints, fanout=False)
        self.conn.create_consumer(
            topics.SERVER_RESOURCE_VERSIONS,
            [resources_rpc.ResourcesPushToServerRpcCallback()],
            fanout=True)
        # process state reports despite dedicated rpc workers
        self.conn.create_consumer(topics.REPORTS,
                                  [agents_db.AgentExtRpcCallback()],
                                  fanout=False)
        # Start the RPC server instance objects holding the endpoints
        # as threads
        return self.conn.consume_in_threads()

    def start_rpc_state_reports_listener(self):
        self.conn_reports = n_rpc.Connection()
        self.conn_reports.create_consumer(topics.REPORTS,
                                          [agents_db.AgentExtRpcCallback()],
                                          fanout=False)
        return self.conn_reports.consume_in_threads()

    def _setup_rpc(self):
        """Initialize components to support agent communication."""
        # The endpoints (calling interfaces) exposed to the Agents
        self.endpoints = [
            rpc.RpcCallbacks(self.notifier, self.type_manager),
            securitygroups_rpc.SecurityGroupServerRpcCallback(),
            dvr_rpc.DVRServerRpcCallback(),
            dhcp_rpc.DhcpRpcCallback(),
            agents_db.AgentExtRpcCallback(),
            metadata_rpc.MetadataRpcCallback(),
            resources_rpc.ResourcesPullRpcCallback()
        ]

The two most important functions, start_rpc_listeners and start_rpc_state_reports_listener, are invoked by the RpcWorker and RpcReportsWorker classes mentioned above, respectively, which is how the RPC Workers are loaded and run.

Printing self.endpoints:

(Pdb) pp self.endpoints
[<neutron.plugins.ml2.rpc.RpcCallbacks object at 0x7f17fcd9f350>,
 <neutron.api.rpc.handlers.securitygroups_rpc.SecurityGroupServerRpcCallback object at 0x7f17fcd9f390>,
 <neutron.api.rpc.handlers.dvr_rpc.DVRServerRpcCallback object at 0x7f17fcd9f3d0>,
 <neutron.api.rpc.handlers.dhcp_rpc.DhcpRpcCallback object at 0x7f17fcd9f410>,
 <neutron.db.agents_db.AgentExtRpcCallback object at 0x7f17fcd9f450>,
 <neutron.api.rpc.handlers.metadata_rpc.MetadataRpcCallback object at 0x7f17fcd9f5d0>,
 <neutron.api.rpc.handlers.resources_rpc.ResourcesPullRpcCallback object at 0x7f17fcd9f610>]

An example of calling RPC functions in the Create Port business process:

# /opt/stack/neutron/neutron/plugins/ml2/plugin.py

class Ml2Plugin(...):
    ...
    def create_port(self, context, port):
        ...
        return self._after_create_port(context, result, mech_context)

    def _after_create_port(self, context, result, mech_context):
        ...
        try:
            bound_context = self._bind_port_if_needed(mech_context)
        except ml2_exc.MechanismDriverError:
            ...
        return bound_context.current

    @db_api.retry_db_errors
    def _bind_port_if_needed(self, context, allow_notify=False,
                             need_notify=False, allow_commit=True):
        ...
        if not try_again:
            if allow_notify and need_notify:
                self._notify_port_updated(context)
            return context
        ...
        return context

    def _notify_port_updated(self, mech_context):
        port = mech_context.current
        segment = mech_context.bottom_bound_segment
        if not segment:
            # REVISIT(rkukura): This should notify agent to unplug port
            network = mech_context.network.current
            LOG.debug("In _notify_port_updated(), no bound segment for "
                      "port %(port_id)s on network %(network_id)s",
                      {'port_id': port['id'], 'network_id': network['id']})
            return
        self.notifier.port_update(mech_context._plugin_context, port,
                                  segment[api.NETWORK_TYPE],
                                  segment[api.SEGMENTATION_ID],
                                  segment[api.PHYSICAL_NETWORK])


# /opt/stack/neutron/neutron/plugins/ml2/rpc.py

class AgentNotifierApi(...):
    ...
    def port_update(self, context, port, network_type, segmentation_id,
                    physical_network):
        cctxt = self.client.prepare(topic=self.topic_port_update,
                                    fanout=True)
        cctxt.cast(context, 'port_update', port=port,
                   network_type=network_type,
                   segmentation_id=segmentation_id,
                   physical_network=physical_network)

Finally, the RPC message emitted by the ML2Plugin is received by the Agent (RPC consumer) subscribed to the Target and performs the final task.

(Pdb) self.client.target
<Target topic=q-agent-notifier, version=1.0>

For example, the OvS Agent receives this message:

# /opt/stack/neutron/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py

    def port_update(self, context, **kwargs):
        port = kwargs.get('port')
        # Put the port identifier in the updated_ports set.
        # Even if full port details might be provided to this call,
        # they are not used since there is no guarantee the notifications
        # are processed in the same order as the relevant API requests
        self.updated_ports.add(port['id'])

The Plugins Callback System

In addition to calling RPC functions (call, cast), Plugins also implement a Callback System mechanism; see the official documents Neutron Messaging Callback System and Neutron Callback System.

The Callback System, like RPC, exists for communication, but with a difference: RPC carries task messages between neutron-server and the Agents, while the Callback System implements communication between the core and Service components within the same process. Lifecycle events of a particular Resource (e.g. before creation, before deletion) need to be perceptible between the core and the Services, and among different Services. For example, the Neutron Network resource may be associated with multiple Services (VPN, Firewall, and Load Balancer); when a Network is operated on, each Service needs to learn the Network's correct state.

Suppose Services A, B, and C need to know about the Router creation event. If there is no intermediary to pass the message to the Services, then A, when creating a router, has to call B and C directly and say "I want to create a router". But with the mediator X (the Callback System), the process becomes:

  1. B and C subscribe to A's Create Router event with X
  2. A completes creating the Router
  3. A calls X (A holds a call handle to X)
  4. X notifies B and C of the router-created event (X -> notify)

During the whole process, A, B, and C never communicate directly, which decouples A, B, and C (the Services) from one another. This is what is called a Callback.
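The steps above can be sketched with a toy mediator (illustrative code; the real implementation lives in neutron_lib's callbacks registry):

```python
# Minimal sketch of the callback (mediator) pattern described above:
# B and C subscribe to a (resource, event) pair with the mediator X,
# and A only ever talks to X. CallbackRegistry is illustrative, not
# neutron_lib's actual registry.

from collections import defaultdict

class CallbackRegistry:
    """The mediator X: maps (resource, event) -> subscriber callbacks."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, callback, resource, event):
        self._subscribers[(resource, event)].append(callback)

    def notify(self, resource, event, trigger, **kwargs):
        for callback in self._subscribers[(resource, event)]:
            callback(resource, event, trigger, **kwargs)

registry = CallbackRegistry()
log = []

# Services B and C subscribe to the router create event
registry.subscribe(lambda r, e, t, **kw: log.append('B saw %s.%s' % (r, e)),
                   'router', 'after_create')
registry.subscribe(lambda r, e, t, **kw: log.append('C saw %s.%s' % (r, e)),
                   'router', 'after_create')

# Service A creates a router, then notifies only through the mediator
registry.notify('router', 'after_create', trigger='service_a',
                router_id='r-1')
print(log)  # ['B saw router.after_create', 'C saw router.after_create']
```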

Callback System is widely used in Neutron, and the code implementation is similar to the following example:

# /opt/stack/neutron/neutron/plugins/ml2/plugin.py

class Ml2Plugin(...):
    ...
    def create_network(self, context, network):
        self._before_create_network(context, network)
        ...

    def _before_create_network(self, context, network):
        net_data = network[net_def.RESOURCE_NAME]
        # Publish the BEFORE_CREATE event to the subscribed Services
        registry.notify(resources.NETWORK, events.BEFORE_CREATE, self,
                        context=context, network=net_data)

From the Callback System's point of view there are two roles: the event handler and the event publisher. The event handler subscribes to events via the registry.subscribe API, and the event publisher announces events via the registry.notify API. The concrete code and module usage are covered with many examples in the official Neutron documents, so they are not repeated here.

Agents



As can be seen from the Neutron deployment architecture, Neutron has a large number of Networking Agent service processes deployed across the various nodes. The objects they configure are the physical or virtual network elements (such as DHCP, Linux Bridge, Open vSwitch, and Router) running on those nodes. The Agents provide network element management and execution services for Neutron; through different combinations of Agents, users can flexibly construct the desired network topology.

[root@localhost ~]# openstack network agent list
+--------------------------------------+--------------------+-----------------------+-------------------+-------+-------+---------------------------+
| ID                                   | Agent Type         | Host                  | Availability Zone | Alive | State | Binary                    |
+--------------------------------------+--------------------+-----------------------+-------------------+-------+-------+---------------------------+
| 2698f558-6b20-407c-acf5-950e707432ed | Metadata agent     | localhost.localdomain | None              | :-)   | UP    | neutron-metadata-agent    |
| 7804fb5a-fe22-4f02-8e4c-5689744bb0aa | Open vSwitch agent | localhost.localdomain | None              | :-)   | UP    | neutron-openvswitch-agent |
| a7b30a22-0a8a-4d31-bf20-9d96dbe420bc | DHCP agent         | localhost.localdomain | nova              | :-)   | UP    | neutron-dhcp-agent        |
| eb1da27b-3fa2-4304-965a-f6b15c475419 | L3 agent           | localhost.localdomain | nova              | :-)   | UP    | neutron-l3-agent          |
+--------------------------------------+--------------------+-----------------------+-------------------+-------+-------+---------------------------+

The abstract architecture of an Agent can be divided into three layers:

  1. Northbound: provides the RPC interface for Neutron Server to invoke
  2. Southbound: configures the Neutron VNFs (virtual network elements) through CLIs or protocol stacks
  3. In the middle: the conversion between the two models, from the RPC model to the CLI model
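The middle layer, model conversion, can be pictured as translating an RPC-level port model into southbound CLI commands. The sketch below is purely illustrative: the command subset and fields are made up, and only the qbr/qvb/qvo device-naming convention mirrors Neutron's:

```python
# Illustrative sketch of an agent's "model conversion" layer: translate an
# RPC-level port model into southbound CLI command strings. The commands
# are a simplified, hypothetical subset; real agents do far more.

def port_to_commands(port):
    # Neutron truncates the port UUID to build device names, e.g.
    # qbr15c7b577-89 for port 15c7b577-89f5-...
    tap = port['id'][:11]
    qbr, qvb, qvo = 'qbr' + tap, 'qvb' + tap, 'qvo' + tap
    return [
        'brctl addbr %s' % qbr,
        'ip link add %s type veth peer name %s' % (qvb, qvo),
        'brctl addif %s %s' % (qbr, qvb),
        'ovs-vsctl add-port br-int %s' % qvo,
    ]

cmds = port_to_commands({'id': '15c7b577-89f5-46f6-8111-5f4e0c8ebaa1'})
for c in cmds:
    print(c)
```

Running this yields the same device names that appear in the real command trace below.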



For example, when Neutron creates and binds a Port to a VM, the Linux Bridge Agent and OvS Agent execute the following commands to support the Neutron network implementation model on compute nodes.

# Port UUID: 15c7b577-89f5-46f6-8111-5f4e0c8ebaa1
# VM UUID: 80996760-0c30-4e2a-847a-b9d882182df

brctl addbr qbr15c7b577-89
brctl setfd qbr15c7b577-89 0
brctl stp qbr15c7b577-89 off
brctl setageing qbr15c7b577-89 0

ip link add qvb15c7b577-89 type veth peer name qvo15c7b577-89
ip link set qvb15c7b577-89 up
ip link set qvb15c7b577-89 promisc on
ip link set qvb15c7b577-89 mtu 1450
ip link set qvo15c7b577-89 up
ip link set qvo15c7b577-89 promisc on
ip link set qvo15c7b577-89 mtu 1450
ip link set qbr15c7b577-89 up

brctl addif qbr15c7b577-89 qvb15c7b577-89

ovs-vsctl -- --may-exist add-br br-int -- set Bridge br-int datapath_type=system
ovs-vsctl --timeout=120 -- --if-exists del-port qvo15c7b577-89 -- add-port br-int qvo15c7b577-89 -- set Interface qvo15c7b577-89 external-ids:iface-id=15c7b577-89f5-46f6-8111-5f4e0c8ebaa1 external-ids:iface-status=active external-ids:attached-mac=fa:16:3e:d0:f6:a4 external-ids:vm-uuid=80996760-0c30-4e2a-847a-b9d882182df
ip link set qvo15c7b577-89 mtu 1450

The Neutron Agent program entries are likewise defined in the setup.cfg file. Take the OvS Agent, which starts as the neutron-openvswitch-agent.service service process, as an example.

Start the OvS Agent

# /opt/stack/neutron/setup.cfg
neutron-openvswitch-agent = neutron.cmd.eventlet.plugins.ovs_neutron_agent:main

Find the program entry function for the server process:

# /opt/stack/neutron/neutron/plugins/ml2/drivers/openvswitch/agent/main.py

_main_modules = {
    'ovs-ofctl': 'neutron.plugins.ml2.drivers.openvswitch.agent.openflow.'
                 'ovs_ofctl.main',
    'native': 'neutron.plugins.ml2.drivers.openvswitch.agent.openflow.'
                 'native.main',
}


def main():
    common_config.init(sys.argv[1:])
    driver_name = cfg.CONF.OVS.of_interface
    mod_name = _main_modules[driver_name]
    mod = importutils.import_module(mod_name)
    mod.init_config()
    common_config.setup_logging()
    profiler.setup("neutron-ovs-agent", cfg.CONF.host)
    mod.main()

Here you can see that the OvS Agent has two different startup modes, ovs-ofctl and native, selected by the of_interface configuration item.

# openvswitch_agent.ini

of_interface -- OpenFlow interface to use.
  Type:	string
  Default:	native
  Valid Values:	ovs-ofctl, native

In ovs-ofctl mode, the flow tables are operated through Open vSwitch's ovs-ofctl command-line tool.

# /opt/stack/neutron/neutron/plugins/ml2/drivers/openvswitch/agent/openflow/ovs_ofctl/main.py

def main():
    # Three different OvS Bridge type definitions, corresponding to
    # br-int, br-ethX and br-tun
    bridge_classes = {
        'br_int': br_int.OVSIntegrationBridge,
        'br_phys': br_phys.OVSPhysicalBridge,
        'br_tun': br_tun.OVSTunnelBridge,
    }
    ovs_neutron_agent.main(bridge_classes)


# /opt/stack/neutron/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py

def main(bridge_classes):
    ...
    try:
        agent = OVSNeutronAgent(bridge_classes, ext_mgr, cfg.CONF)
        capabilities.notify_init_event(n_const.AGENT_TYPE_OVS, agent)
    except (RuntimeError, ValueError) as e:
        LOG.error("%s Agent terminated!", e)
        ...
    agent.daemon_loop()


class OVSNeutronAgent(...):
    ...
    def __init__(self, bridge_classes, ext_manager, conf=None):
        ...
        # Create the RPC Consumers
        self.setup_rpc()
        ...

    def setup_rpc(self):
        self.plugin_rpc = OVSPluginApi(topics.PLUGIN)
        # allow us to receive port_update/delete callbacks from the cache
        self.plugin_rpc.register_legacy_notification_callbacks(self)
        self.sg_plugin_rpc = sg_rpc.SecurityGroupServerAPIShim(
            self.plugin_rpc.remote_resource_cache)
        self.dvr_plugin_rpc = dvr_rpc.DVRServerRpcApi(topics.PLUGIN)
        self.state_rpc = agent_rpc.PluginReportStateAPI(topics.REPORTS)

        # RPC network init
        self.context = context.get_admin_context_without_session()
        # Make a simple RPC call to the Neutron Server.
        while True:
            try:
                self.state_rpc.has_alive_neutron_server(self.context)
            except oslo_messaging.MessagingTimeout as e:
                LOG.warning('l2-agent cannot contact neutron server. '
                            'Check connectivity to neutron server. '
                            'Retrying... '
                            'Detailed message: %(msg)s.', {'msg': e})
                continue
            break

        # Define the listening consumers for the agent
        consumers = [[constants.TUNNEL, topics.UPDATE],
                     [constants.TUNNEL, topics.DELETE],
                     [topics.DVR, topics.UPDATE]]
        if self.l2_pop:
            consumers.append([topics.L2POPULATION, topics.UPDATE])
        self.connection = agent_rpc.create_consumers([self],
                                                     topics.AGENT,
                                                     consumers,
                                                     start_listening=False)

After the RPC Consumers are created, the RPC consumer functions defined by the OvS Agent can "consume" the messages sent from Neutron Server to the MQ, e.g.

    def port_update(self, context, **kwargs):
        port = kwargs.get('port')
        # Put the port identifier in the updated_ports set.
        # Even if full port details might be provided to this call,
        # they are not used since there is no guarantee the notifications
        # are processed in the same order as the relevant API requests
        self.updated_ports.add(port['id'])

    def port_delete(self, context, **kwargs):
        port_id = kwargs.get('port_id')
        self.deleted_ports.add(port_id)
        self.updated_ports.discard(port_id)

    def network_update(self, context, **kwargs):
        network_id = kwargs['network']['id']
        for port_id in self.network_ports[network_id]:
            # notifications could arrive out of order, if the port is deleted
            # we don't want to update it anymore
            if port_id not in self.deleted_ports:
                self.updated_ports.add(port_id)
        LOG.debug("network_update message processed for network "
                  "%(network_id)s, with ports: %(ports)s",
                  {'network_id': network_id,
                   'ports': self.network_ports[network_id]})
...

NOTE: The OvS Agent only listens for UPDATE and DELETE RPC messages, not CREATE, because the creation of a Neutron Port is not completed by the OvS Agent but by the L3 Agent and the DHCP Agent.

Please credit the author when reproducing this article: JmilkFan (Fan Guiju)
