Before we begin

With the rapid growth of the Internet, enterprises accumulate more and more data, which demands ever-greater scalability from the data storage layer. Most Internet companies today use MySQL to store relational data, so achieving high scalability in the MySQL storage layer has become a problem they must solve. How, then, do you achieve truly unlimited scaling of MySQL? Today, Glacier will talk with you about how to scale a MySQL database without limit.

This article has been included in github.com/sunshinelyz… and gitee.com/binghe001/t… Don't forget to give it a little Star!

Overview

This article builds on How to Ensure High Availability of Mycat under a Massive Data Architecture?, extending that setup so that every link in the data storage layer is highly available, thereby enabling unlimited scaling of MySQL.

Problems to be solved

In How to Ensure High Availability of Mycat under a Massive Data Architecture?, our architecture diagram was as follows:

As the figure above shows, HAProxy is a single point of failure: once the HAProxy service goes down, the entire architecture becomes unavailable. So how do we eliminate this single point of failure in HAProxy? That is what this post is about.

Software version

  • Operating system: CentOS-6.8-x86_64
  • JDK version: JDK 1.8
  • HAProxy version: haproxy-1.5.19.tar.gz
  • Mycat version: Mycat-server-1.6
  • Keepalived version: keepalived-1.2.18.tar.gz
  • MySQL version: mysql-5.7.tar.gz

The deployment plan

High availability load balancing cluster deployment architecture

The figure above simplifies the details of the data storage portion of the architecture. Each part of the architecture can be scaled out independently and provide service as a standalone cluster, with no single point of failure.

Illustration:

(1) HAProxy provides high availability and load balancing for the multi-node Mycat cluster, while HAProxy's own high availability is provided by Keepalived. Therefore, each HAProxy host should have both HAProxy and Keepalived installed. Keepalived is responsible for claiming the virtual IP (VIP, 192.168.209.130 in the figure) for its server. The host can then be reached either through its original IP address (192.168.209.135) or through the VIP (192.168.209.130).

(2) Keepalived claims the VIP according to priority, which is determined by the priority attribute in keepalived.conf. However, whichever host starts its Keepalived service first will claim the VIP — even a BACKUP node, as long as it starts first.

(3) HAProxy distributes requests arriving at the VIP across the Mycat cluster nodes, providing load balancing. HAProxy also detects whether each Mycat instance is alive and forwards requests only to live instances.

(4) If one server in the Keepalived + HAProxy high-availability cluster goes down, Keepalived on the other server immediately claims the VIP and takes over the service. The HAProxy node that now holds the VIP continues to provide service.

(5) If a Mycat server goes down, HAProxy stops forwarding requests to it, so the Mycat cluster as a whole remains available.

To sum up: high availability and load balancing of Mycat are provided by HAProxy, while high availability of HAProxy itself is provided by Keepalived.
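Once both nodes are up, you can check from the shell which host currently holds the VIP. A minimal sketch — the VIP and interface name are taken from this article's setup; adjust them to yours:

```shell
# Does this host currently hold the VIP? (VIP and NIC name from this article's setup.)
VIP=192.168.209.130
IFACE=eth3
if ip addr show "$IFACE" 2>/dev/null | grep -q "inet ${VIP}/"; then
  VIP_HERE=yes
else
  VIP_HERE=no    # VIP is on the other node, or the interface name differs
fi
echo "VIP on this node: $VIP_HERE"
```

Run it on both nodes: exactly one of them should report `yes` while the cluster is healthy.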

Deploying HAProxy node 2

For details on installing and deploying HAProxy on host 2 (liuyazhuang136, 192.168.209.136), see the post How to Ensure High Availability of Mycat under a Massive Data Architecture?. When deploying multiple nodes, adjust the node and description values in the haproxy.cfg configuration file accordingly.

The HAProxy configuration on HAProxy host 2 (Liuyazhuang136, 192.168.209.136) is as follows:

```
## global: process-level parameters, usually dependent on the operating system
global
    log 127.0.0.1 local0 info      ## at most two syslog servers may be defined
    ### local0 is the log facility, matching the configuration in /etc/rsyslog.conf;
    ### logs are recorded at the info level by default
    # log 127.0.0.1 local1 info
    chroot /usr/share/haproxy      ## change HAProxy's working directory to the given
                                   ## directory before privileges are dropped
    ### chroot() raises HAProxy's security level
    group haproxy                  ## run group (same effect as gid, but by name)
    user haproxy                   ## run user (same effect as uid, but by name)
    daemon                         ## run HAProxy as a daemon
    nbproc 1                       ## number of HAProxy processes to start
    ### usable only in daemon mode; 1 process is started by default. Multi-process mode
    ### is generally only useful where a single process may open only a limited number
    ### of file descriptors
    maxconn 4096                   ## maximum concurrent connections per HAProxy process
    ### equivalent to the "-n" command-line option; "ulimit -n" is computed from this setting
    # pidfile /var/run/haproxy.pid ## pid file (this is the default path)
    node liuyazhuang136            ## name of the current node, used when multiple HAProxy
                                   ## processes share one IP address in HA scenarios
    description liuyazhuang136     ## description of the current instance

## defaults: default parameters for all following sections; may be reset by the next "defaults"
defaults
    log global                 ## inherit the log definition from global
    mode http                  ## tcp: layer 4; http: layer 7; health: status check only (returns OK)
    ### tcp: the instance runs in pure TCP mode; a full-duplex connection is established
    ####   between client and server, with no layer-7 inspection (the default mode)
    ### http: the instance runs in HTTP mode; client requests are deeply parsed before
    ####   being forwarded, and any request not RFC-compliant is rejected
    ### health: the instance runs in health mode; it answers inbound requests with "OK",
    ####   closes the connection, and logs nothing; used for status probes by external components
    option httplog
    retries 3
    option redispatch          ## if the server bound to a serverId fails, force a redirect
                               ## to another healthy server
    maxconn 2000               ## maximum concurrent front-end connections (default: 2000)
    ### not usable in a backend section. For large sites, raise this as far as practical so
    ### HAProxy can queue connections rather than fail to answer user requests; it must not
    ### exceed the value defined in the "global" section. Also keep in mind that HAProxy
    ### maintains two 8KB buffers per connection; with other data, each connection takes
    ### roughly 17KB of RAM, so after proper tuning, 1GB of free RAM sustains about
    ### 40,000-50,000 concurrent connections. Setting this too high may, in extreme cases,
    ### consume more memory than the host has available, with unpredictable results, so
    ### choose a sensible value (the default is 2000)
    timeout connect 5000ms     ## connection timeout (units: us, ms, s, m, h, d)
    timeout client 50000ms     ## client timeout
    timeout server 50000ms     ## server timeout

## HAProxy status/statistics page
listen admin_stats
    bind :48800                ## bind port
    stats uri /admin-status    ## URI of the statistics page
    stats auth admin:admin     ## user and password for the statistics page;
                               ## add one line per additional user
    mode http
    option httplog             ## enable HTTP request logging

## listen: defines a complete proxy by tying a "frontend" to a "backend";
## usually useful only for TCP traffic
listen mycat_servers
    bind :3307                 ## bind port
    mode tcp
    option tcplog              ## log TCP requests
    option tcpka               ## allow sending keepalive messages to server and client
    option httpchk OPTIONS * HTTP/1.1\r\nHost:\ www  ## check back-end service status
    ### sends OPTIONS requests to port 48700 on each back-end server (that port is
    ### configured via xinetd on the back-end hosts); HAProxy judges availability from
    ### the response: 2xx and 3xx codes mean healthy; other codes or no response mean failure
    balance roundrobin         ## load-balancing algorithm; usable in the "defaults",
                               ## "listen" and "backend" sections; round robin is the default
    server mycat_01 192.168.209.133:8066 check port 48700 inter 2000ms rise 2 fall 3 weight 10
    server mycat_02 192.168.209.134:8066 check port 48700 inter 2000ms rise 2 fall 3 weight 10
```

About the server directive, server &lt;name&gt; &lt;address&gt;[:[port]] [param*]:

  • name: the internal name for this server; it appears in logs and warning messages. The server keyword may only be used in listen and backend sections.
  • address: the server's IPv4 address; resolvable hostnames are also supported, but must resolve to an IPv4 address at startup.
  • [:[port]]: optionally, the destination port to which client connection requests are forwarded on this server.
  • param*: per-server parameters:
    • weight: the server's weight; default 1, maximum 256.
    • backup: marks a standby server, used in load balancing only when all other servers are unavailable.
    • check: enables health monitoring of this server; can be tuned more finely with the parameters below.
    • inter: interval between health checks, in ms; default 2000 (fastinter and downinter can refine this timing based on the server's state).
    • rise: number of consecutive successful checks needed to move a server from offline to online; default 2.
    • fall: number of consecutive failed checks needed to move a server from online to offline; default 3.
    • cookie: sets the cookie value for this server; the value is checked on inbound requests, and subsequent requests carrying the value of the first server selected stick to that server (connection persistence).
    • maxconn: maximum concurrent connections this server accepts; connections beyond this limit are placed in a queue until other connections are released.
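The health check above assumes a small responder on each Mycat host answering on port 48700. The original article does not show it, so here is a hedged sketch of what such a script — exposed via xinetd — might look like; the script name, path, and process pattern are assumptions:

```shell
# Hypothetical /usr/local/bin/mycat_status, served by xinetd on port 48700.
# It answers HAProxy's "option httpchk" probe: 200 when a Mycat process exists,
# 503 otherwise, so HAProxy can judge node health from the response code.
mycat_alive() {
  # the [m] trick keeps grep from matching its own command line
  [ "$(ps -ef | grep -c '[m]ycat')" -gt 0 ]
}
if mycat_alive; then
  RESPONSE='HTTP/1.1 200 OK'
else
  RESPONSE='HTTP/1.1 503 Service Unavailable'
fi
printf '%s\r\nContent-Type: text/plain\r\n\r\n' "$RESPONSE"
```

The matching xinetd service stanza would bind this script to port 48700 on each Mycat host; consult the xinetd documentation for the exact syntax.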

HAProxy node 1 status page: http://192.168.209.135:48800/admin-status

HAProxy node 2 status page: http://192.168.209.136:48800/admin-status
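Before starting HAProxy with an edited file, it is worth validating it first. A minimal sketch — it assumes haproxy is on the PATH and the configuration lives at /etc/haproxy/haproxy.cfg; adjust the path to your layout:

```shell
# Validate the HAProxy configuration without starting the service.
CFG=/etc/haproxy/haproxy.cfg
if command -v haproxy >/dev/null 2>&1; then
  # -c puts haproxy in check mode: parse the config file and exit
  haproxy -f "$CFG" -c && CFG_CHECK=ok || CFG_CHECK=failed
else
  CFG_CHECK=skipped   # haproxy is not installed on this machine
fi
echo "config check: $CFG_CHECK"
```

Running this after every edit catches syntax mistakes before they take a node out of the cluster.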

Introducing Keepalived

Website: www.keepalived.org/

Keepalived is a high-performance high-availability / hot-standby solution for servers. Keepalived prevents single points of failure and, combined with HAProxy, provides high availability for web front-end services. Keepalived achieves high availability (HA) through the Virtual Router Redundancy Protocol (VRRP), a protocol designed for router redundancy: it virtualizes two or more router devices into one and exposes one or more virtual router IP addresses to the outside.

The router that actually serves the external IP is the MASTER, as long as it works properly; otherwise a MASTER is elected by the protocol. The MASTER performs all network functions for the virtual router's IP address, such as answering ARP requests and handling ICMP and data forwarding. The other devices do not hold the virtual IP and stay in the BACKUP state; apart from receiving VRRP status advertisements from the MASTER, they perform no external network functions. When the MASTER fails, a BACKUP takes over the original MASTER's network functions.

VRRP transmits its messages over multicast, using a special virtual source MAC address rather than the MAC address of the network interface. While VRRP is running, only the MASTER periodically sends VRRP advertisements, indicating that it is working properly and announcing the virtual router's IP address(es). The BACKUPs only receive VRRP messages and send nothing. If a BACKUP receives no advertisement from the MASTER within a set interval, each BACKUP declares itself MASTER and sends advertisements, re-running the MASTER election.
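For quick reference when writing the firewall rules later in this post, these are the VRRP constants involved — the multicast group appears in the firewall section below, and 112 is the IP protocol number assigned to VRRP:

```shell
VRRP_MCAST=224.0.0.18   # IPv4 multicast group that VRRP advertisements are sent to
VRRP_PROTO=112          # IP protocol number assigned to VRRP
echo "the firewall must allow protocol $VRRP_PROTO traffic to $VRRP_MCAST"
```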

Installing Keepalived

Note: Keepalived needs to be installed on two servers 192.168.209.135 and 192.168.209.136.

Download Keepalived from: www.keepalived.org/download.ht…

Upload or download Keepalived

Upload or download Keepalived (keepalived-1.2.18.tar.gz) to the /usr/local/src directory

Unpack the installation

OpenSSL is required to install Keepalived:

```
# yum install gcc gcc-c++ openssl openssl-devel
# cd /usr/local/src
# tar -zxvf keepalived-1.2.18.tar.gz
# cd keepalived-1.2.18
# ./configure --prefix=/usr/local/keepalived
# make && make install
```

Install Keepalived as a Linux service

Because Keepalived was not installed to the default path (the default prefix is /usr/local), some follow-up work is needed after installation: copy the default configuration file to its standard location.

```
# mkdir /etc/keepalived
# cp /usr/local/keepalived/etc/keepalived/keepalived.conf /etc/keepalived/
```

Copy the Keepalived service script to the default address

```
# cp /usr/local/keepalived/etc/rc.d/init.d/keepalived /etc/init.d/
# cp /usr/local/keepalived/etc/sysconfig/keepalived /etc/sysconfig/
# ln -s /usr/local/keepalived/sbin/keepalived /usr/sbin/
# ln -s /usr/local/keepalived/sbin/keepalived /sbin/
```

Set the Keepalived service to start on boot:

```
# chkconfig keepalived on
```

Modify Keepalived configuration file

(1) MASTER node configuration file (192.168.209.135)

```
! Configuration File for keepalived
global_defs {
    ## Keepalived has built-in email alerting, which requires the sendmail service;
    ## independent monitoring or a third-party SMTP server is recommended instead
    router_id liuyazhuang135    ## string identifying this node, usually the hostname
}
## Keepalived runs the script periodically and adjusts the vrrp_instance priority
## from the result:
##   if the script exits 0 and weight > 0, the priority is raised by weight;
##   if the script exits non-zero and weight < 0, the priority is lowered accordingly;
##   otherwise the originally configured priority (the "priority" value below) is kept.
vrrp_script chk_haproxy {
    script "/etc/keepalived/haproxy_check.sh"  ## path of the script that checks HAProxy's status
    interval 2    ## check interval
    weight 2      ## if the check succeeds, the priority is raised by 2
}
## Define the virtual router; VI_1 is the identifier of this instance
vrrp_instance VI_1 {
    state BACKUP    ## set BACKUP on both the master and the backup; priority then
                    ## decides the default master only when both start at the same time,
                    ## otherwise whichever starts first becomes master
    interface eth3  ## network interface the VIP binds to, the same interface as the
                    ## local IP address (eth3 on my machine)
    virtual_router_id 35    ## virtual router ID; must be identical on both nodes (the
                            ## last octet of the IP is a common choice). Nodes with the
                            ## same VRID form one group, and the VRID determines the
                            ## multicast MAC address
    priority 120    ## priority, 0-254; the MASTER's must be higher than the BACKUP's
    nopreempt       ## nopreempt must be set in the master's configuration,
                    ## otherwise non-preemption does not take effect
    advert_int 1    ## interval between multicast advertisements; must be the same on
                    ## both nodes (default 1s)
    ## authentication settings; must be identical on both nodes
    authentication {
        auth_type PASS
        auth_pass 1111    ## in real production, set this according to your needs
    }
    ## add the track_script block inside the instance configuration block
    track_script {
        chk_haproxy    ## check whether the HAProxy service is alive
    }
    ## the virtual IP address pool; must be identical on both nodes
    virtual_ipaddress {
        192.168.209.130    ## the VIP; multiple may be defined, one per line
    }
}
```

(2) BACKUP node configuration file (192.168.209.136)

```
! Configuration File for keepalived
global_defs {
    router_id liuyazhuang136
}
vrrp_script chk_haproxy {
    script "/etc/keepalived/haproxy_check.sh"
    interval 2
    weight 2
}
vrrp_instance VI_1 {
    state BACKUP
    interface eth3
    virtual_router_id 35
    priority 110
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    track_script {
        chk_haproxy
    }
    virtual_ipaddress {
        192.168.209.130
    }
}
```
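Given the two configurations above (master priority 120, backup priority 110, weight 2), the effective priorities work out as follows. The arithmetic also shows why the health-check script resorts to killing Keepalived rather than relying on the weight:

```shell
# Effective VRRP priority: a successful chk_haproxy adds weight; a failed one adds nothing.
MASTER_PRI=120; BACKUP_PRI=110; WEIGHT=2
MASTER_OK=$((MASTER_PRI + WEIGHT))    # master healthy: 120 + 2
BACKUP_OK=$((BACKUP_PRI + WEIGHT))    # backup healthy: 110 + 2
echo "healthy master: $MASTER_OK vs healthy backup: $BACKUP_OK"
# If HAProxy dies on the master, its priority falls back to 120 -- still above the
# backup's 112, so the VIP would NOT move on priority alone. That is why the
# haproxy_check.sh script kills keepalived outright when HAProxy cannot be
# restarted, forcing the VIP to fail over.
```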

Special note: if non-preemption were not configured, Keepalived would claim the VIP back as soon as the failed node recovered, causing a brief service interruption from the VIP switch. The configuration above sets up Keepalived's non-preemption mode; the key points are:

(1) Both the master and the backup set state BACKUP, and nopreempt is added on the default master (the node with the higher priority).
(2) Do not configure mcast_src_ip (the local IP address) on either the master or the backup.
(3) The default master is the Keepalived node with the higher priority value.
(4) The firewall must allow multicast (configure this on both the master and the backup; Keepalived uses 224.0.0.18 as the communication address for master/backup health checks):

```
# iptables -I INPUT -i eth3 -d 224.0.0.0/8 -p vrrp -j ACCEPT
# iptables -I OUTPUT -o eth3 -d 224.0.0.0/8 -p vrrp -j ACCEPT
```

(eth3 is the host's network adapter name; a production server can use a dedicated adapter to handle multicast and heartbeat detection.)

Save the rules and restart the firewall:

```
# service iptables save
# service iptables restart
```

Writing the HAProxy status detection script

The script is /etc/keepalived/haproxy_check.sh (already referenced in keepalived.conf). If HAProxy has stopped, the script tries to start it; if it cannot be started, the script kills the local Keepalived process, and Keepalived then binds the virtual IP to the BACKUP machine.

As follows:

```
# mkdir -p /usr/local/keepalived/log
# vi /etc/keepalived/haproxy_check.sh
```

The content of the haproxy_check.sh script is as follows:

```
#!/bin/bash
START_HAPROXY="/etc/rc.d/init.d/haproxy start"
STOP_HAPROXY="/etc/rc.d/init.d/haproxy stop"
LOG_FILE="/usr/local/keepalived/log/haproxy-check.log"
HAPS=`ps -C haproxy --no-header | wc -l`
date "+%Y-%m-%d %H:%M:%S" >> $LOG_FILE
echo "check haproxy status" >> $LOG_FILE
if [ $HAPS -eq 0 ]; then
    echo $START_HAPROXY >> $LOG_FILE
    $START_HAPROXY >> $LOG_FILE 2>&1
    sleep 3
    if [ `ps -C haproxy --no-header | wc -l` -eq 0 ]; then
        echo "start haproxy failed, killall keepalived" >> $LOG_FILE
        killall keepalived
    fi
fi
```
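Before relying on Keepalived to run the watchdog, it can be sanity-checked by hand. A small sketch — bash -n parses the file without executing it, so this is safe to run even when HAProxy is absent:

```shell
# Parse-check the watchdog script without running it.
SCRIPT=/etc/keepalived/haproxy_check.sh
if [ -f "$SCRIPT" ] && bash -n "$SCRIPT"; then
  SYNTAX=ok
else
  SYNTAX=missing-or-bad
fi
echo "watchdog syntax: $SYNTAX"
```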

After saving, grant execute permission to the script:

```
# chmod +x /etc/keepalived/haproxy_check.sh
```

Starting Keepalived

```
# service keepalived start
Starting keepalived:                                       [  OK  ]
```

Keepalived service management command:

  • Stop: service keepalived stop
  • Start: service keepalived start
  • Restart: service keepalived restart
  • Check status: service keepalived status

High availability testing

(1) Stop HAProxy on 192.168.209.135; Keepalived will restart it:

```
# service haproxy stop
```

(2) Stop Keepalived on 192.168.209.135; the VIP (192.168.209.130) will be claimed by 192.168.209.136:

```
# service keepalived stop
```

After Keepalived stops, the VIP (192.168.209.130) disappears from the network interface of 192.168.209.135.

As the figure above shows, the VIP (192.168.209.130) now appears on the network interface of node 192.168.209.136.

To check which MAC address the VIP maps to, open cmd on Windows:

This shows the VIP has migrated to physical host 192.168.209.136.

Accessing the HAProxy cluster through the VIP (192.168.209.130) now also reaches 192.168.209.136.

(3) Restart Keepalived on 192.168.209.135

After Keepalived restarts, 192.168.209.135 does not take the VIP back: because non-preemption is configured, the VIP (192.168.209.130) remains with 192.168.209.136.

```
# service keepalived start
```

(4) Simulate an HAProxy failure, with restart failing, on the node currently holding the VIP (192.168.209.136)

Method: on node 192.168.209.136, rename haproxy.cfg to haproxy.cfg_bak and kill the haproxy service. Keepalived will try to start HAProxy and fail (the configuration file is gone), so the killall keepalived command in haproxy_check.sh executes and stops Keepalived. 192.168.209.135 then claims the VIP again.

This shows the VIP has migrated back to physical host 192.168.209.135.

Accessing the HAProxy cluster through the VIP (192.168.209.130) now also reaches 192.168.209.135.
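To watch the failovers above from a client's point of view, one can poll the stats page through the VIP while performing the drills. A hedged sketch — it assumes curl is available, and the URL comes from the stats configuration earlier in this post:

```shell
# Poll the HAProxy stats page via the VIP a few times and tally the results.
# Any HTTP response (even a 401 from stats auth) counts as "up": it proves
# some HAProxy node behind the VIP answered.
UP=0; DOWN=0
for i in 1 2 3; do
  if curl -s -o /dev/null -m 1 "http://192.168.209.130:48800/admin-status" 2>/dev/null; then
    UP=$((UP + 1))
  else
    DOWN=$((DOWN + 1))    # unreachable, timed out, or curl missing
  fi
done
echo "polls: up=$UP down=$DOWN"
```

Running this in a loop during drills (2)-(4) shows how briefly, if at all, the service blips while the VIP moves.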

Verify database access

Access the database through the VIP, and verify that database access still works after the VIP switches.

(1) Command line access database

(2) Navicat accesses the database

At this point, the Mycat high-availability load-balancing cluster (HAProxy + Keepalived + Mycat) is fully built.

You can download the software for building the Mycat high-availability load-balancing cluster (HAProxy + Keepalived + Mycat) from: download.csdn.net/detail/l102…

Other recommended articles

  • How to Ensure high Availability of Mycat under Massive Data Architecture?
  • Glacier, can you tell me how Mycat implements read/write separation in MySQL?
  • How does MySQL implement terabyte data Storage?
  • Mycat core developers take you to easily master Mycat routing and forwarding!!
  • Mycat core developer will show you all the three core Mycat configuration files!!
  • As the core developer of Mycat, how can I not have a series of Mycat articles?

Glacier Original PDF

Follow the Glacier Technology WeChat official account:

Reply to “Concurrent Programming” to receive the PDF of In-depth Understanding of High Concurrent Programming (1st edition).

Reply “concurrent source code” to get the “Concurrent programming core Knowledge (source code Analysis first edition)” PDF document.

Reply to “Limit Traffic” to get the PDF document “Distributed Solution under 100 million Traffic”.

Reply to "design patterns" to get the PDF of Java's 23 Design Patterns Made Simple.

Reply "Java 8 new features" to get the PDF of the Java 8 New Features Tutorial.

Reply to “Distributed Storage” to receive the PDF of “Learn Distributed Storage Techniques from Glacier”.

Reply to “Nginx” to receive the PDF of Learn Nginx Technology from Glacier.

Reply to “Internet Engineering” to get the PDF of “Learn Internet Engineering techniques from Glacier”.

A big bonus

Search WeChat for the Glacier Technology official account and follow this in-depth programmer, who publishes hardcore technical reading every day. Reply "PDF" inside the account to get the first-tier-company interview materials and original, super-hardcore PDF technical documents I have prepared, along with the many resume templates I keep updated for you. I hope everyone can find the right job. Learning is a long road, sometimes hard and sometimes joyful — keep going. And if you have already worked your way into the company of your choice, don't slack off: career growth, like learning new technology, never stops. With luck, we will meet again down the road!

In addition, every PDF I release is open, and I will keep updating and maintaining them. Thank you all for your long-term support of Glacier!

In closing

If you think Glacier's writing is good, search for and follow the "Glacier Technology" WeChat official account, and learn high-concurrency, distributed systems, microservices, big data, Internet, and cloud-native technology with Glacier. The account is updated with a large number of technical topics, and every article is packed with substance! Many readers have read the articles there and successfully moved to major companies; many others have made a technical leap and become the technical backbone of their companies! If you also want to raise your abilities like they did — achieve a leap in technical skill, join a major company, and win promotions and raises — then follow the "Glacier Technology" official account. Super-hardcore technical content is updated every day, so you will no longer be confused about how to improve your technical ability!