This series of articles shows how to build a multi-node OpenStack environment from scratch. The version of OpenStack used in this tutorial is the 20th release, Train (T for short). Release notes: Train, originally released 16 October 2019; Ussuri, originally released 13 May 2020; Victoria, originally released 14 October 2020.

The Juejin community


OpenStack Ussuri Offline Deployment | OpenStack Train Offline Deployment

OpenStack Train Offline Deployment | 0 Local offline yum repository
OpenStack Train Offline Deployment | 1 Controller Node - Environment Preparation
OpenStack Train Offline Deployment | 2 Compute Node - Environment Preparation
OpenStack Train Offline Deployment | 3 Controller Node - Keystone identity service component
OpenStack Train Offline Deployment | 4 Controller Node - Glance image service component
OpenStack Train Offline Deployment | 5 Controller Node - Placement service component
OpenStack Train Offline Deployment | 6.1 Controller Node - Nova compute service component
OpenStack Train Offline Deployment | 6.2 Compute Node - Nova compute service component
OpenStack Train Offline Deployment | 6.3 Controller Node - Nova compute service component
OpenStack Train Offline Deployment | 7.1 Controller Node - Neutron network service component
OpenStack Train Offline Deployment | 7.2 Compute Node - Neutron network service component
OpenStack Train Offline Deployment | 7.3 Controller Node - Neutron network service component
OpenStack Train Offline Deployment | 8 Controller Node - Horizon service component
OpenStack Train Offline Deployment | 9 Start an OpenStack instance
OpenStack Train Offline Deployment | 10 Controller Node - Heat service component
OpenStack Train Offline Deployment | 11.1 Controller Node - Cinder storage service component
OpenStack Train Offline Deployment | 11.2 Storage Node - Cinder storage service component
OpenStack Train Offline Deployment | 11.3 Controller Node - Cinder storage service component
OpenStack Train Offline Deployment | 11.4 Compute Node - Cinder storage service component
OpenStack Train Offline Deployment | 11.5 Instances using the Cinder storage service component


Juejin community: Customizing OpenStack Images
Customizing OpenStack Images | Environment Preparation
Customizing OpenStack Images | Windows 7
Customizing OpenStack Images | Windows 10
Customizing OpenStack Images | Linux
Customizing OpenStack Images | Windows Server 2019


CSDN

OpenStack Ussuri Offline Installation and Deployment Series (full)
OpenStack Train Offline Installation and Deployment Series (full)
Looking forward to making progress together with you.


OpenStack Train Offline Deployment | 7.2 Compute Node - Neutron network service component

Official references: OpenStack Installation Guide; Neutron service component guides: neutron-install, neutron-install-rdo, neutron-install-controller-install-rdo. Blog: Install OpenStack (Rocky) on CentOS 7 - 06: Install the Neutron network service (controller node).

I. Components to be installed

yum install -y openstack-neutron-linuxbridge ebtables ipset
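
Before moving on, it can help to confirm that the packages actually landed from the local offline repository. A minimal check (package names taken from the install command above):

# list the installed packages required for the Linux bridge agent
rpm -qa | egrep 'openstack-neutron-linuxbridge|ebtables|ipset'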

II. Configure common components

The common network component configuration includes the authentication mechanism, the message queue, and plug-ins. The file to configure is /etc/neutron/neutron.conf.

cd 
touch compute-node-neutron.conf.sh
vim compute-node-neutron.conf.sh
bash compute-node-neutron.conf.sh

The contents of the compute-node-neutron.conf.sh file are as follows:

#!/bin/bash
#compute-node-neutron.conf.sh
openstack-config --set /etc/neutron/neutron.conf DEFAULT transport_url  rabbit://openstack:openstack@controller
openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone

openstack-config --set /etc/neutron/neutron.conf keystone_authtoken www_authenticate_uri  http://controller:5000
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_url http://controller:5000
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken memcached_servers controller:11211
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_type password
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_name service
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken username neutron
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken password neutron

openstack-config --set /etc/neutron/neutron.conf oslo_concurrency lock_path /var/lib/neutron/tmp

echo "Result of Configuration"
egrep -v '(^$|^#)' /etc/neutron/neutron.conf

bash compute-node-neutron.conf.sh
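
After the script runs, individual values can be spot-checked. openstack-config (the crudini wrapper shipped in openstack-utils) also supports --get; a minimal sketch of such a check:

# read back a few of the values just written
openstack-config --get /etc/neutron/neutron.conf DEFAULT transport_url
openstack-config --get /etc/neutron/neutron.conf keystone_authtoken username
openstack-config --get /etc/neutron/neutron.conf oslo_concurrency lock_path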

III. Configure network options

Choose the same networking option that you chose for the controller node, then configure the services specific to this node accordingly.

1. Configure network option 1: Provider networks

Configuration: docs.openstack.org/neutron/tra…

(1) Configure the Linux bridge agent

/etc/neutron/plugins/ml2/linuxbridge_agent.ini

The Linux bridge agent builds the layer-2 (bridging and switching) virtual network infrastructure for instances and handles security groups.

cd 
touch compute-node-linuxbridge_agent.ini.sh
vim compute-node-linuxbridge_agent.ini.sh
bash compute-node-linuxbridge_agent.ini.sh

The contents of the compute-node-linuxbridge_agent.ini.sh file are as follows:

#!/bin/bash
#compute-node-linuxbridge_agent.ini.sh

#Map the provider virtual network to the provider physical network interface; the value is the name of the underlying physical network interface
openstack-config --set   /etc/neutron/plugins/ml2/linuxbridge_agent.ini linux_bridge physical_interface_mappings  provider:ens34

openstack-config --set   /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan  enable_vxlan  False
openstack-config --set   /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup  enable_security_group  True 
openstack-config --set   /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup  firewall_driver neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

echo "Result of Configuration"
egrep -v '(^$|^#)' /etc/neutron/plugins/ml2/linuxbridge_agent.ini
bash compute-node-linuxbridge_agent.ini.sh

Note the first option: physical_interface_mappings specifies the NIC name of the compute node, here provider:ens34.
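
Before restarting anything, it is worth confirming that the mapped interface actually exists on this node. A quick sketch (ens34 is the interface name used throughout this series; substitute your own):

# list all interfaces, then inspect the provider interface
ip -o link show
ip addr show ens34   # ens34: the provider NIC assumed in this series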

Load the br_netfilter kernel module. To enable network bridge support, the br_netfilter kernel module usually needs to be loaded; see the configuration reference link.

echo net.bridge.bridge-nf-call-iptables = 1 >> /etc/sysctl.conf
echo net.bridge.bridge-nf-call-ip6tables = 1 >> /etc/sysctl.conf

cat /etc/sysctl.conf
sysctl -p
modprobe br_netfilter
ls /proc/sys/net/bridge
sysctl -p

sysctl net.bridge.bridge-nf-call-iptables
sysctl net.bridge.bridge-nf-call-ip6tables
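
Note that modprobe only loads the module for the current boot. On CentOS 7 (systemd), one way to load it automatically at every boot is a modules-load.d entry; a minimal sketch (the file name br_netfilter.conf is arbitrary):

# load br_netfilter automatically at boot
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
cat /etc/modules-load.d/br_netfilter.conf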

2. Configure network option 2: Self-service networks

Configuration: docs.openstack.org/neutron/tra…

(1) Configure the Linux bridge agent

/etc/neutron/plugins/ml2/linuxbridge_agent.ini

The Linux bridge agent builds the layer-2 (bridging and switching) virtual network infrastructure for instances and handles security groups.

#!/bin/bash
#compute-node-linuxbridge_agent.ini.sh

#Map the provider virtual network to the provider physical network interface; the value is the name of the underlying physical network interface
openstack-config --set   /etc/neutron/plugins/ml2/linuxbridge_agent.ini linux_bridge physical_interface_mappings  provider:ens34

#change to true
openstack-config --set   /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan  enable_vxlan  True

#add new
#local_ip = OVERLAY_INTERFACE_IP_ADDRESS; replace OVERLAY_INTERFACE_IP_ADDRESS with the management IP address of the compute node
openstack-config --set   /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan local_ip 192.168.232.111

#add new
openstack-config --set   /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan l2_population true
openstack-config --set   /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup  enable_security_group  True 
openstack-config --set   /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup  firewall_driver neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

echo "Result of Configuration"
egrep -v '(^$|^#)' /etc/neutron/plugins/ml2/linuxbridge_agent.ini
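
After running the script (bash compute-node-linuxbridge_agent.ini.sh, as in option 1), the [vxlan] section should now carry enable_vxlan, local_ip, and l2_population. A quick way to eyeball it:

# show the [vxlan] section and the lines that follow it
grep -A 4 '^\[vxlan\]' /etc/neutron/plugins/ml2/linuxbridge_agent.ini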

Load the br_netfilter kernel module. To enable network bridge support, the br_netfilter kernel module usually needs to be loaded; see the configuration reference link.

echo net.bridge.bridge-nf-call-iptables = 1 >> /etc/sysctl.conf
echo net.bridge.bridge-nf-call-ip6tables = 1 >> /etc/sysctl.conf

cat /etc/sysctl.conf
sysctl -p
modprobe br_netfilter
ls /proc/sys/net/bridge
sysctl -p

sysctl net.bridge.bridge-nf-call-iptables
sysctl net.bridge.bridge-nf-call-ip6tables

IV. Configure the Compute service to use the Network service

Configuration: docs.openstack.org/neutron/tra…

In the [neutron] section of /etc/nova/nova.conf, set the access parameters.

cd 
touch compute-node-neutron-nova.conf.sh
vim compute-node-neutron-nova.conf.sh
bash compute-node-neutron-nova.conf.sh

The contents of the compute-node-neutron-nova.conf.sh file are as follows:

#!/bin/bash
#compute-node-neutron-nova.conf.sh

openstack-config --set /etc/nova/nova.conf neutron url http://controller:9696
openstack-config --set /etc/nova/nova.conf neutron auth_url http://controller:5000
openstack-config --set /etc/nova/nova.conf neutron auth_type password
openstack-config --set /etc/nova/nova.conf neutron project_domain_name default
openstack-config --set /etc/nova/nova.conf neutron user_domain_name default
openstack-config --set /etc/nova/nova.conf neutron region_name RegionOne
openstack-config --set /etc/nova/nova.conf neutron project_name service
openstack-config --set /etc/nova/nova.conf neutron username neutron
openstack-config --set /etc/nova/nova.conf neutron password neutron

echo "Result of Configuration"
egrep -v '(^$|^#)' /etc/nova/nova.conf
bash compute-node-neutron-nova.conf.sh
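
The egrep in the script already prints the whole file; to narrow the check to just the section written here:

# show only the [neutron] section of nova.conf
grep -A 10 '^\[neutron\]' /etc/nova/nova.conf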

V. Complete the compute node network service installation

1. Restart the Compute service

systemctl restart openstack-nova-compute.service
systemctl status openstack-nova-compute.service

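If the restart fails or the service keeps flapping, the compute log is the first place to look. A minimal sketch, assuming the default RDO log location:

# inspect recent activity and errors of nova-compute
tail -n 50 /var/log/nova/nova-compute.log
grep ERROR /var/log/nova/nova-compute.log | tail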

2. Start the Linux bridge agent and configure it to start automatically at boot

systemctl restart neutron-linuxbridge-agent.service
systemctl status neutron-linuxbridge-agent.service

systemctl enable neutron-linuxbridge-agent.service
systemctl list-unit-files | grep neutron | grep enabled

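Likewise, on an RDO install the agent writes to /var/log/neutron/linuxbridge-agent.log; a quick error check:

# look for errors reported by the Linux bridge agent
grep ERROR /var/log/neutron/linuxbridge-agent.log | tail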

VI. Installation complete

At this point, the network configuration of the compute node is complete; verification of the Neutron network service is performed on the controller node.
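
For reference, the usual controller-side check is to list the network agents and confirm that this compute node's Linux bridge agent is alive. A sketch, assuming an admin credentials file named admin-openrc as used in the earlier parts of this series:

# run on the controller node with admin credentials
source admin-openrc          # admin-openrc: assumed credentials file name
openstack network agent list # the compute node's Linux bridge agent should show State=UP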