Pacemaker: the cluster resource manager (CRM), responsible for starting and stopping services; it sits in the resource-management and resource-agent layers of the HA cluster architecture

Corosync: a component of the messaging layer that manages membership, messaging, and quorum arbitration. It provides the communication services in a high-availability environment, sits at the bottom of the HA cluster architecture, and carries heartbeat information between nodes.

Resource-agents: resource agents, i.e. the tools (usually scripts) that receive scheduling from the CRM on a node and manage a resource.

PCS: the command-line toolset for managing the cluster.

Fence-agents: fencing shuts down a node when it is unstable or unresponsive so that it cannot damage other resources in the cluster; its main purpose is to eliminate split-brain.

# Install the related services on all controller nodes; controller160 is used as the example below.

3.1 Configuring Passwordless SSH

[root@controller160 ~]# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:WV0VywXmgnocShYxbETsggN5Y75Ls+jMZsvbu6iEvmc root@controller160
The key's randomart image is:
+---[RSA 3072]----+
| . =*. +++|
| o + +o o +. o|
| = o oo + o .o |
| + .o.* . . |
| o .S o |
|. + . |
|.. o + |
|o++Eo |
|.B%o+o |
+----[SHA256]-----+
[root@controller160 ~]# ssh-copy-id root@controller161
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'controller161 (172.16.1.161)' can't be established.
ECDSA key fingerprint is SHA256:7QfTWDISgUB5tbDsuL21tTBgWAfN+9kB2buwObFt32o.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@controller161's password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'root@controller161'"
and check to make sure that only the key(s) you wanted were added.
[root@controller160 ~]# ssh-copy-id root@controller162
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'controller162 (172.16.1.162)' can't be established.
ECDSA key fingerprint is SHA256:7QfTWDISgUB5tbDsuL21tTBgWAfN+9kB2buwObFt32o.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@controller162's password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'root@controller162'"
and check to make sure that only the key(s) you wanted were added.
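The same key distribution can be scripted from controller160; a minimal sketch, assuming the hostnames already resolve (for example via /etc/hosts) and that you are willing to enter each node's root password once:

# push the public key to the remaining controller nodes in one pass
for host in controller161 controller162; do
  ssh-copy-id -i /root/.ssh/id_rsa.pub root@${host}
done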

3.2 Enabling HighAvailability Repo

yum -y install yum-utils
dnf config-manager --set-enabled HighAvailability
dnf config-manager --set-enabled PowerTools

Install and enable the EPEL repo

rpm -Uvh https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm
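Before continuing, it is worth confirming that the repos are actually enabled; a quick check (repo ids may differ slightly between CentOS 8 point releases):

# list the enabled repos and look for the ones we just turned on
dnf repolist enabled | grep -iE 'highavailability|powertools|epel'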

3.3 Building a Pacemaker Cluster

yum install -y pacemaker pcs corosync fence-agents resource-agents
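A quick sanity check that the packages landed on each node (a minimal sketch; run it on every controller):

rpm -q pacemaker pcs corosync resource-agents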

# Build cluster

Start the pcsd service on all controller nodes; controller160 is shown as an example

[root@controller160 ~]# systemctl enable pcsd
[root@controller160 ~]# systemctl start pcsd

Change the password of the cluster administrator account hacluster (created automatically during installation) on all controller nodes; take controller160 as an example

[root@controller160 ~]# echo hacluster.123 | passwd --stdin hacluster
Changing password for user hacluster.
passwd: all authentication tokens updated successfully
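Because passwordless SSH is already in place, the same two steps can be pushed to the remaining nodes from controller160; a minimal sketch:

# enable/start pcsd and set the hacluster password on the other controllers
for host in controller161 controller162; do
  ssh root@${host} "systemctl enable --now pcsd && echo hacluster.123 | passwd --stdin hacluster"
done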

# Authenticate the nodes and build the cluster from any one node; controller160 is used as an example. Authentication uses the hacluster password set in the previous step

[root@controller160 ~]# pcs host auth controller160 controller161 controller162 -u hacluster -p hacluster.123
controller160: Authorized
controller161: Authorized
controller162: Authorized

Create and name the cluster on any one node, for example controller160; # this generates the configuration file /etc/corosync/corosync.conf

[root@controller160 ~]# pcs cluster setup openstack-u-cluster --start controller160 controller161 controller162
No addresses specified for host 'controller160', using 'controller160'
No addresses specified for host 'controller161', using 'controller161'
No addresses specified for host 'controller162', using 'controller162'
Destroying cluster on hosts: 'controller160', 'controller161', 'controller162'...
controller160: Successfully destroyed cluster
controller162: Successfully destroyed cluster
controller161: Successfully destroyed cluster
Requesting remove 'pcsd settings' from 'controller160', 'controller161', 'controller162'
controller160: successful removal of the file 'pcsd settings'
controller161: successful removal of the file 'pcsd settings'
controller162: successful removal of the file 'pcsd settings'
Sending 'corosync authkey', 'pacemaker authkey' to 'controller160', 'controller161', 'controller162'
controller160: successful distribution of the file 'corosync authkey'
controller160: successful distribution of the file 'pacemaker authkey'
controller161: successful distribution of the file 'corosync authkey'
controller161: successful distribution of the file 'pacemaker authkey'
controller162: successful distribution of the file 'corosync authkey'
controller162: successful distribution of the file 'pacemaker authkey'
Sending 'corosync.conf' to 'controller160', 'controller161', 'controller162'
controller160: successful distribution of the file 'corosync.conf'
controller161: successful distribution of the file 'corosync.conf'
controller162: successful distribution of the file 'corosync.conf'
Cluster has been successfully set up.
Starting cluster on hosts: 'controller160', 'controller161', 'controller162'...

3.4 Starting a Cluster

Enable the cluster service on all nodes (so it starts at boot) from the controller160 node, then check the cluster status

[root@controller160 ~]# pcs cluster enable --all
controller160: Cluster Enabled
controller161: Cluster Enabled
controller162: Cluster Enabled
[root@controller160 ~]# pcs cluster status
Cluster Status:
 Cluster Summary:
   * Stack: corosync
   * Current DC: controller161 (version 2.0.3-5.el8_2.1-4b1f869f0f) - partition with quorum
   * Last updated: Thu Jun 18 01:00:33 2020
   * Last change:  Thu Jun 18 00:58:20 2020 by hacluster via crmd on controller161
   * 3 nodes configured
   * 0 resource instances configured
 Node List:
   * Online: [ controller160 controller161 controller162 ]

PCSD Status:
  controller160: Online
  controller161: Online
  controller162: Online

Check the corosync status; # corosync synchronizes the underlying cluster state and membership information between nodes

[root@controller160 ~]# pcs status corosync

Membership information
----------------------
    Nodeid      Votes Name
         1          1 controller160 (local)
         2          1 controller161
         3          1 controller162

# View nodes

[root@controller160 ~]# corosync-cmapctl | grep members
runtime.members.1.config_version (u64) = 0
runtime.members.1.ip (str) = r(0) ip(172.16.1.160)
runtime.members.1.join_count (u32) = 1
runtime.members.1.status (str) = joined
runtime.members.2.config_version (u64) = 0
runtime.members.2.ip (str) = r(0) ip(172.16.1.161)
runtime.members.2.join_count (u32) = 1
runtime.members.2.status (str) = joined
runtime.members.3.config_version (u64) = 0
runtime.members.3.ip (str) = r(0) ip(172.16.1.162)
runtime.members.3.join_count (u32) = 1
runtime.members.3.status (str) = joined

# View cluster resources

[root@controller160 ~]# pcs resource
NO resources configured

Alternatively, access the web UI of any controller node at https://172.16.1.160:2224 ; log in with the hacluster account and the password set during cluster construction: hacluster/hacluster.123

3.5 High Availability Configuration

Set the cluster properties from any controller node, controller160 for example; # keep a suitable amount of input-processing history and of policy-engine generated errors and warnings, which is useful for troubleshooting

[root@controller160 ~]# pcs property set pe-warn-series-max=1000 \
> pe-input-series-max=1000 \
> pe-error-series-max=1000

# Pacemaker processes state in a time-driven way; "cluster-recheck-interval" defines the interval for certain Pacemaker operations and defaults to 15 minutes. It is recommended to lower it to 5 or 3 minutes

[root@controller160 ~]# pcs property set cluster-recheck-interval=5

# Corosync enables stonith by default, but the stonith mechanism (shutting a node down via IPMI or SSH) requires a configured stonith device (use "crm_verify -L -V" to check whether the configuration is valid); without one, Pacemaker refuses to start any resources. Tune this flexibly in production; in a verification environment it can simply be turned off

[root@controller160 ~]# pcs property set stonith-enabled=false

When more than half of the nodes are online, the cluster considers itself to have quorum and is "valid"; in this three-node cluster, for example, quorum is 2, so the cluster survives the loss of one node but becomes invalid once two nodes fail. With only two nodes, the failure of either node already breaks quorum, which is why a so-called "two-node cluster" is of little use; if a two-node cluster is used in a real production environment, quorum arbitration is effectively unavailable. For a three-node cluster, the quorum policy for node failures can be set flexibly

[root@controller160 ~]# pcs property set no-quorum-policy=ignore

To support multi-node clusters, heartbeat v2 introduced a scoring policy that controls how each resource is switched between the nodes of the cluster: a total score is computed per node, and the node with the highest score becomes active and manages the resource (or resource group).

The default initial score of each resource (the global parameter default-resource-stickiness, viewable with "pcs property list --all") is 0, and the score deducted on each failure of a resource (the global parameter default-resource-failure-stickiness) is also 0. In that case, no matter how many times a resource fails, heartbeat only restarts it in place. Setting resource-stickiness or resource-failure-stickiness on a particular resource gives that resource its own score;

In general, resource-stickiness is positive and resource-failure-stickiness is negative. Two special values, INFINITY and -INFINITY, mean "never switch" and "must switch on failure" respectively; they are simple configuration values that cover the extreme cases;

# If a node's score is negative, that node will never take over the resource under any circumstances (cold standby); if a node's score becomes greater than that of the node currently running the resource, heartbeat performs a switchover: the current node releases the resource and the higher-scoring node takes it over
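In this Pacemaker-based cluster the closest equivalents are set through pcs; a hedged sketch, with purely illustrative values (it uses the vip resource defined later in section 3.6):

# set a global default stickiness so resources prefer to stay where they are
pcs resource defaults resource-stickiness=100
# or give a single resource its own score as a meta attribute
pcs resource meta vip resource-stickiness=100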

# "pcs property list" only shows properties whose values have been changed; add "--all" to also show every property that is still at its default value.

[root@controller160 ~]# pcs property list
Cluster Properties:
 cluster-infrastructure: corosync
 cluster-name: openstack-u-cluster
 cluster-recheck-interval: 5
 dc-version: 2.0.3-5.el8_2.1-4b1f869f0f
 have-watchdog: false
 no-quorum-policy: ignore
 pe-error-series-max: 1000
 pe-input-series-max: 1000
 pe-warn-series-max: 1000
 stonith-enabled: false

# The property settings can also be inspected in /var/lib/pacemaker/cib/cib.xml, with "pcs cluster cib", or with "cibadmin --query --scope crm_config"; use "cibadmin --query --scope resources" to view the resource configuration

3.6 Configuring the VIP

# Create the VIP resource on any controller node and give it the resource_id "vip";

#ocf (the standard attribute): one class of resource agent; other classes include systemd, lsb, service, etc.

#heartbeat (the provider attribute): the OCF specification allows multiple vendors to provide the same resource agent, and most OCF resource agents use heartbeat as the provider.

#IPaddr2: The name of the resource agent (type attribute). IPaddr2 is the type of the resource.

# Define the resource attribute (standard:provider:type) to locate the RA script location corresponding to the “VIP” resource;

In CentOS, the OCF-compliant RA scripts live under /usr/lib/ocf/resource.d/, which contains one directory per provider; each provider directory holds multiple types.

#op: indicates Operations

[root@controller160 ~]# pcs resource create vip ocf:heartbeat:IPaddr2 ip=172.16.1.168 cidr_netmask=24 op monitor interval=30s
[root@controller160 ~]# ip a show eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether fa:f2:63:23:7a:e5 brd ff:ff:ff:ff:ff:ff
    inet 172.16.1.160/24 brd 172.16.1.255 scope global noprefixroute eth0
       valid_lft forever preferred_lft forever
    inet 172.16.1.166/24 brd 172.16.1.255 scope global secondary eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::f8f2:63ff:fe23:7ae5/64 scope link
       valid_lft forever preferred_lft forever

# Query the vip resource on the controller160 node with "pcs resource"

[root@controller160 ~]# pcs resource
  * vip	(ocf::heartbeat:IPaddr2):	Started controller160
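A quick way to confirm the VIP actually floats is to move it to another node and then clear the temporary location constraint; a sketch:

pcs resource move vip controller161
pcs resource          # vip should now report "Started controller161"
pcs resource clear vip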

If the APIs are split into admin/internal/public endpoints and only the public interface is exposed to clients, it is common to configure two VIPs, vip_management and vip_public;

# Constrain vip_management and vip_public to run on the same node

[root@controller160 ~]# pcs constraint colocation add vip_management with vip_public
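For reference, the two VIPs themselves would be created in the same way as the vip resource above before adding the colocation constraint; a sketch with purely illustrative addresses (not from this deployment):

# addresses below are placeholders only
pcs resource create vip_management ocf:heartbeat:IPaddr2 ip=192.168.100.168 cidr_netmask=24 op monitor interval=30s
pcs resource create vip_public ocf:heartbeat:IPaddr2 ip=10.0.0.168 cidr_netmask=24 op monitor interval=30s
pcs constraint colocation add vip_management with vip_public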

3.7 HA Management

To manage the cluster manually through the HA web management interface, you only need to add any one node of the cluster.

3.8 Deploying HAProxy

Install HAProxy on all controller nodes; controller160 is shown as an example.

yum install haproxy -y 

You are advised to enable HAProxy logging to simplify later troubleshooting. Create the HAProxy log directory and make it writable

mkdir /var/log/haproxy && chmod a+w /var/log/haproxy

vim /etc/rsyslog.conf

# Enable the UDP and TCP input modules
module(load="imudp") # needs to be done just once
input(type="imudp" port="514")

module(load="imtcp") # needs to be done just once
input(type="imtcp" port="514")

# Add the haproxy log rules
local0.=info    -/var/log/haproxy/haproxy-info.log
local0.=err     -/var/log/haproxy/haproxy-err.log
local0.notice;local0.!=err      -/var/log/haproxy/haproxy-notice.log


# restart rsyslog

systemctl restart rsyslog
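A quick way to confirm the local0 rules are active is to emit a test message and check the target file; a sketch:

logger -p local0.info "haproxy rsyslog test"
tail -n 1 /var/log/haproxy/haproxy-info.log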

Configure haproxy.cfg on all controller nodes; controller160 is shown as an example. # HAProxy relies on rsyslog for log output. # Back up the original haproxy.cfg file first

cp /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.bak

The haproxy configuration for the cluster involves many services; you can adjust it later according to what is actually deployed. Here it is completed in advance for the OpenStack services involved, as follows:

[root@controller160 ~]# grep -v ^# /etc/haproxy/haproxy.cfg
global
  chroot  /var/lib/haproxy
  daemon
  group  haproxy
  user  haproxy
  maxconn  4000
  pidfile  /var/run/haproxy.pid
  log 127.0.0.1 local0 info

defaults
  log  global
  maxconn  4000
  option  redispatch
  retries  3
  timeout  http-request 10s
  timeout  queue 1m
  timeout  connect 10s
  timeout  client 1m
  timeout  server 1m
  timeout  check 10s

# Haproxy monitoring page
listen stats
  bind 0.0.0.0:1080
  mode http
  stats enable
  stats uri /
  stats realm OpenStack\ Haproxy
  stats auth admin:admin
  stats  refresh 30s
  stats  show-node
  stats  show-legends
  stats  hide-version

# horizon service
 listen dashboard_cluster
  bind 172.16.1.168:80
  balance  source
  option  tcpka
  option  httpchk
  option  tcplog
  server controller160 172.16.1.160:80 check inter 2000 rise 2 fall 5
  server controller161 172.16.1.161:80 check inter 2000 rise 2 fall 5
  server controller162 172.16.1.162:80 check inter 2000 rise 2 fall 5

# mariadb service;
# controller160 is set as master and controller161/162 as backup;
# if the mariadb service goes down, the /usr/bin/clustercheck script cannot detect it while the xinetd-controlled port 9200 still answers, so haproxy would keep forwarding requests to the failed node; for now the listen simply checks port 3306 directly
listen galera_cluster
  bind 172.16.1.168:3306
  balance  source
  mode  tcp
  server controller160 172.16.1.160:3306 check inter 2000 rise 2 fall 5
  server controller161 172.16.1.161:3306 backup check inter 2000 rise 2 fall 5
  server controller162 172.16.1.162:3306 backup check inter 2000 rise 2 fall 5

# provide an HA cluster access port (5673) for RabbitMQ so that the OpenStack services can reach it;
# if the OpenStack services connect to the RabbitMQ cluster directly, there is no need to configure load balancing for RabbitMQ
 listen rabbitmq_cluster
   bind 172.16.1.168:5673
   mode tcp
   option tcpka
   balance roundrobin
   timeout client  3h
   timeout server  3h
   option  clitcpka
   server controller160 172.16.1.160:5672 check inter 10s rise 2 fall 5
   server controller161 172.16.1.161:5672 check inter 10s rise 2 fall 5
   server controller162 172.16.1.162:5672 check inter 10s rise 2 fall 5

# glance_api service
 listen glance_api_cluster
  bind 172.16.1.168:9292
  balance  source
  option  tcpka
  option  httpchk
  option  tcplog
  server controller160 172.16.1.160:9292 check inter 2000 rise 2 fall 5
  server controller161 172.16.1.161:9292 check inter 2000 rise 2 fall 5
  server controller162 172.16.1.162:9292 check inter 2000 rise 2 fall 5

# keystone_public_api service
 listen keystone_public_cluster
  bind 172.16.1.168:5000
  balance  source
  option  tcpka
  option  httpchk
  option  tcplog
  server controller160 172.16.1.160:5000 check inter 2000 rise 2 fall 5
  server controller161 172.16.1.161:5000 check inter 2000 rise 2 fall 5
  server controller162 172.16.1.162:5000 check inter 2000 rise 2 fall 5

 listen nova_compute_api_cluster
  bind 172.16.1.168:8774
  balance  source
  option  tcpka
  option  httpchk
  option  tcplog
  server controller160 172.16.1.160:8774 check inter 2000 rise 2 fall 5
  server controller161 172.16.1.161:8774 check inter 2000 rise 2 fall 5
  server controller162 172.16.1.162:8774 check inter 2000 rise 2 fall 5

 listen nova_placement_cluster
  bind 172.16.1.168:8778
  balance  source
  option  tcpka
  option  tcplog
  server controller160 172.16.1.160:8778 check inter 2000 rise 2 fall 5
  server controller161 172.16.1.161:8778 check inter 2000 rise 2 fall 5
  server controller162 172.16.1.162:8778 check inter 2000 rise 2 fall 5

 listen nova_metadata_api_cluster
  bind 172.16.1.168:8775
  balance  source
  option  tcpka
  option  tcplog
  server controller160 172.16.1.160:8775 check inter 2000 rise 2 fall 5
  server controller161 172.16.1.161:8775 check inter 2000 rise 2 fall 5
  server controller162 172.16.1.162:8775 check inter 2000 rise 2 fall 5

 listen nova_vncproxy_cluster
  bind 172.16.1.168:6080
  balance  source
  option  tcpka
  option  tcplog
  server controller160 172.16.1.160:6080 check inter 2000 rise 2 fall 5
  server controller161 172.16.1.161:6080 check inter 2000 rise 2 fall 5
  server controller162 172.16.1.162:6080 check inter 2000 rise 2 fall 5

 listen neutron_api_cluster
  bind 172.16.1.168:9696
  balance  source
  option  tcpka
  option  httpchk
  option  tcplog
  server controller160 172.16.1.160:9696 check inter 2000 rise 2 fall 5
  server controller161 172.16.1.161:9696 check inter 2000 rise 2 fall 5
  server controller162 172.16.1.162:9696 check inter 2000 rise 2 fall 5

 listen cinder_api_cluster
  bind 172.16.1.168:8776
  balance  source
  option  tcpka
  option  httpchk
  option  tcplog
  server controller160 172.16.1.160:8776 check inter 2000 rise 2 fall 5
  server controller161 172.16.1.161:8776 check inter 2000 rise 2 fall 5
  server controller162 172.16.1.162:8776 check inter 2000 rise 2 fall 5
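Before distributing the file, the syntax can be validated locally and the file copied to the other controllers over the existing SSH trust; a sketch:

# check the configuration syntax, then push it to the other nodes
haproxy -c -f /etc/haproxy/haproxy.cfg
for host in controller161 controller162; do scp /etc/haproxy/haproxy.cfg root@${host}:/etc/haproxy/; done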

3.9 Configuring kernel parameters for each node

Change the kernel parameters on all controller nodes; take controller160 as an example. #net.ipv4.ip_nonlocal_bind: whether to allow binding of non-local IP addresses, which determines whether the haproxy instances and the VIP can bind and fail over; #net.ipv4.ip_forward: whether to allow IP forwarding

[root@controller160 ~]# echo "net.ipv4.ip_nonlocal_bind = 1" >>/etc/sysctl.conf
[root@controller160 ~]# echo "net.ipv4.ip_forward = 1" >>/etc/sysctl.conf
[root@controller160 ~]# sysctl -p
net.ipv4.ip_nonlocal_bind = 1
net.ipv4.ip_forward = 1
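The same two parameters need to be applied on the other controller nodes; one way to push them over the existing SSH trust (a sketch):

for host in controller161 controller162; do
  ssh root@${host} "echo 'net.ipv4.ip_nonlocal_bind = 1' >> /etc/sysctl.conf; echo 'net.ipv4.ip_forward = 1' >> /etc/sysctl.conf; sysctl -p"
done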

You can choose whether to enable the haproxy service at boot; once the haproxy resource is managed by Pacemaker (see 3.10), Pacemaker controls whether the haproxy service runs on each node

[root@controller160 ~]# systemctl restart haproxy
[root@controller160 ~]# systemctl status haproxy
● haproxy.service - HAProxy Load Balancer
   Loaded: loaded (/usr/lib/systemd/system/haproxy.service; disabled; vendor preset: disabled)
   Active: active (running) since Thu 2020-06-18 01:29:43 CST; 4s ago
  Process: 19581 ExecStartPre=/usr/sbin/haproxy -f $CONFIG -c -q (code=exited, status=0/SUCCESS)
 Main PID: 19582 (haproxy)
    Tasks: 2 (limit: 11490)
   Memory: 4.4M
   CGroup: /system.slice/haproxy.service
           ├─19582 /usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid
           └─19584 /usr/sbin/haproxy -Ws -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid

Jun 18 01:29:43 controller160 haproxy[19582]: [WARNING] 169/012943 (19582) : config : log format ignored for proxy 'glance_api_cluster' since it has no log address.
Jun 18 01:29:43 controller160 haproxy[19582]: [WARNING] 169/012943 (19582) : config : log format ignored for proxy 'keystone_public_cluster' since it has no log address.
Jun 18 01:29:43 controller160 haproxy[19582]: [WARNING] 169/012943 (19582) : config : log format ignored for proxy 'nova_ec2_api_cluster' since it has no log address.
Jun 18 01:29:43 controller160 haproxy[19582]: [WARNING] 169/012943 (19582) : config : log format ignored for proxy 'nova_compute_api_cluster' since it has no log address.
Jun 18 01:29:43 controller160 haproxy[19582]: [WARNING] 169/012943 (19582) : config : log format ignored for proxy 'nova_placement_cluster' since it has no log address.
Jun 18 01:29:43 controller160 haproxy[19582]: [WARNING] 169/012943 (19582) : config : log format ignored for proxy 'nova_metadata_api_cluster' since it has no log address.
Jun 18 01:29:43 controller160 haproxy[19582]: [WARNING] 169/012943 (19582) : config : log format ignored for proxy 'nova_vncproxy_cluster' since it has no log address.
Jun 18 01:29:43 controller160 haproxy[19582]: [WARNING] 169/012943 (19582) : config : log format ignored for proxy 'neutron_api_cluster' since it has no log address.
Jun 18 01:29:43 controller160 haproxy[19582]: [WARNING] 169/012943 (19582) : config : log format ignored for proxy 'cinder_api_cluster' since it has no log address.
Jun 18 01:29:43 controller160 systemd[1]: Started HAProxy Load Balancer.


# Visit http://172.16.1.168:1080 ; user name and password: admin/admin

3.10 Setting pcs Resources: haproxy Follows the VIP

# Operate on any controller node; controller160 is used as an example. # Add the cloned resource lb-haproxy-clone

[root@controller160 ~]# pcs resource create lb-haproxy systemd:haproxy clone
[root@controller160 ~]# pcs resource
  * vip	(ocf::heartbeat:IPaddr2):	Started controller160
  * Clone Set: lb-haproxy-clone [lb-haproxy]:
    * Started: [ controller160 controller161 controller162 ]

# Set the resource start order: vip first, then lb-haproxy-clone; view the constraints with "cibadmin --query --scope constraints"

[root@controller160 ~]# pcs constraint order start vip then lb-haproxy-clone kind=Optional
Adding vip lb-haproxy-clone (kind: Optional) (Options: first-action=start then-action=start)

It is recommended that the VIP run on the node where haproxy is active; colocating lb-haproxy-clone with vip restricts the two resources to the same node. Seen from the resource side, haproxy on the other nodes, which have not obtained the VIP, is temporarily stopped by pcs

[root@controller160 ~]# pcs constraint colocation add lb-haproxy-clone with vip
[root@controller160 ~]# pcs resource
  * vip	(ocf::heartbeat:IPaddr2):	Started controller160
  * Clone Set: lb-haproxy-clone [lb-haproxy]:
    * Started: [ controller160 ]
    * Stopped: [ controller161 controller162 ]
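To verify that haproxy and the VIP fail over together, the active node can be put into standby and then brought back; a sketch:

pcs node standby controller160
pcs resource          # vip and lb-haproxy should both report Started on another node
pcs node unstandby controller160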

Use the high-availability web management interface to view the resource settings

At this point the high-availability base configuration (pacemaker & haproxy) has been deployed. If you find any problems, please contact me so I can correct them, thank you!

3.x Summary of Problems Encountered During Deployment

eg1. Installing fence-agents fails with a dependency error:
Error:
 Problem: package fence-agents-all-4.2.1-41.el8.x86_64 requires fence-agents-apc-snmp >= 4.2.1-41.el8, but none of the providers can be installed
  - package fence-agents-apc-snmp-4.2.1-41.el8.noarch requires net-snmp-utils, but none of the providers can be installed
  - conflicting requests
  - nothing provides net-snmp-libs(x86-64) = 1:5.8-14.el8 needed by net-snmp-utils-1:5.8-14.el8.x86_64
Solution: install the missing library manually:
rpm -ivh net-snmp-libs-5.8-14.el8.x86_64.rpm

eg2.
[root@controller160 ~]# dnf config-manager --set-enabled HighAvailability
Repository AppStream is listed more than once in the configuration
Repository extras is listed more than once in the configuration
Repository PowerTools is listed more than once in the configuration
Repository centosplus is listed more than once in the configuration
Solution: restore the CentOS 8 Base repo (move the backed-up CentOS-Base.repo back into place so the repos are no longer defined twice), then rebuild the cache:
mv /etc/yum.repos.d/CentOS-Base.repo.bak /etc/yum.repos.d/CentOS-Base.repo
yum makecache