1 Deployment architecture design

In the database cluster, each MySQL instance runs in its own Docker container, and load balancing spreads database requests across the instances. The front end and back end also achieve high availability through clustering: each is deployed on multiple nodes, so if one node goes down, the other nodes can still provide service. Nginx serves as the middleware.
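As a sketch of the Nginx side (node addresses and ports are hypothetical), an upstream block spreads requests across the back-end nodes and routes around a dead one:

# /etc/nginx/conf.d/backend.conf -- hypothetical addresses
upstream backend {
    server 192.168.3.101:8080;   # back-end node 1
    server 192.168.3.102:8080;   # back-end node 2; if one node is down, Nginx skips it
}
server {
    listen 80;
    location / {
        proxy_pass http://backend;
    }
}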

2 Preparation of experimental environment

2.1 The virtual machine

  1. Disk: ≥ 5 GB
  2. Memory: ≥ 2 GB
  3. Processor: 1 quad-core CPU
  4. CentOS benefits:
    • Cross-platform hardware support
    • Reliable security
    • Rich software support
    • Multi-user multi-task
    • Good stability
    • Perfect network function

2.2 Back-end projects

  1. Technology stack: Spring Boot + Shiro + SSM + JWT + Redis + Swagger
  2. Maven environment
  3. MySQL environment

2.3 Front-end Projects

  1. Technology stack: Vue + ElementUI
  2. Node.js environment

2.4 Docker VM

All virtual instances created by Docker share the same Linux kernel, so they consume little hardware; Docker is a lightweight virtual machine. A container is a virtual instance created from an image: the container is a read-write layer that runs programs, while the image is a read-only layer in which programs are installed.

  1. Update the yum package manager and install Docker
yum -y update
yum install -y docker
service docker start/stop/restart

[Error] [Errno 14] HTTP Error 404 - Not Found

[Solution]

yum clean all
rpm --rebuilddb
  2. Docker VM management commands

  3. DaoCloud Docker accelerator configuration

  4. Install the Java image online

docker search java
docker pull java
[root@localhost ~]# docker images
REPOSITORY   TAG      IMAGE ID       CREATED       SIZE
java         latest   d23bdf5b1b1b   4 years ago   643MB
# Save the image as a file
docker save docker.io/java > /home/java.tar.gz
# Remove the image
docker rmi java
# Load the image from the file
docker load < /home/java.tar.gz
  5. Start a container: starting an image creates a running container

docker run -it --name myjava java bash
docker run -it --name myjava -p 9000:8080 -p 9001:8085 java bash
docker run -it -p 9000:8080 -p 9001:8085 -v /home/project:/soft --privileged --name myjava docker.io/java bash

[Error] docker: Error response from daemon: Conflict. The container name "/myjava" is already in use by container "8301ced0145f6f10223f7403c222630118791de905eb9c119445495e67c7d7fd". You have to remove (or rename) that container to be able to reuse that name.

[Solution] Remove the old container with docker rm <id/name>; list all containers, including stopped ones, with docker ps -a.

  6. Pause and stop containers

[root@localhost /] docker pause myjava
[root@localhost /] docker unpause myjava
[root@localhost /] docker stop myjava
# Start the stopped container and attach to it
[root@localhost /] docker start -i myjava

3 Create a database cluster

3.1 Defects of the single-node database

  1. Large Internet applications have huge user bases; a single-node database cannot meet the performance requirements
  2. A single-node database has no redundancy design, so it cannot provide high availability

3.2 Common MySQL cluster Solutions

| Solution | Characteristics | Applicable scenarios |
| --- | --- | --- |
| Replication | Fast, weak consistency, low-value data | Logs, news, posts |
| PXC | Slow, strong consistency, high-value data | Orders, accounts, finance |

PXC principle

PXC stands for Percona XtraDB Cluster. It is a cluster solution based on Galera replication; any node in the cluster can be read from and written to.

The database instances can be stock MySQL, but Percona Server is recommended for PXC. Percona Server is an improved distribution of MySQL with good performance.

Comparison of PXC and Replication schemes

PXC solution: PXC has strong data consistency. Through synchronous replication, a transaction is either committed on all nodes at the same time or not committed at all.

Replication solution: Replication uses asynchronous Replication and cannot ensure data consistency.

3.3 Installing a PXC Cluster

The Docker image registry includes a PXC database image, which can be downloaded directly with docker pull docker.io/percona/percona-xtradb-cluster.
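The docker run commands later in this section refer to the image by the short name pxc; a likely prerequisite, assumed here rather than stated in the original steps, is retagging the pulled image:

docker pull percona/percona-xtradb-cluster
# give the image a short local tag so it can be referenced as "pxc"
docker tag percona/percona-xtradb-cluster pxc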

For security reasons, you need to create a Docker internal network for the PXC cluster instances.

[root@localhost ~]# docker network create --subnet=172.18.0.0/24 net1
9322b84026c901b5501d2e680d388e8dfcf6cd0fa54cbeeb38de82e3fa095859
[root@localhost ~]# docker network inspect net1
[
    {
        "Name": "net1",
        "Id": "9322b84026c901b5501d2e680d388e8dfcf6cd0fa54cbeeb38de82e3fa095859",
        "Created": "2021-04-11T08:24:17.009369885Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "172.18.0.0/24"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {},
        "Options": {},
        "Labels": {}
    }
]
# Remove the network when it is no longer needed
docker network rm net1

3.4 Creating a Docker Volume

The PXC nodes in the containers map their data directories to Docker volumes:

[root@localhost ~]#  docker volume create v1
v1
[root@localhost ~]# docker inspect v1
[
    {
        "CreatedAt": "2021-04-11T08:29:20Z",
        "Driver": "local",
        "Labels": {},
        "Mountpoint": "/var/lib/docker/volumes/v1/_data",
        "Name": "v1",
        "Options": {},
        "Scope": "local"
    }
]
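Each of the five PXC nodes created below needs its own volume. Following the same pattern as v1, the remaining volumes can be created with a loop (a sketch):

# create volumes v2..v5 for nodes 2-5
for v in v2 v3 v4 v5; do docker volume create $v; done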

3.5 Creating a PXC Container

Create a PXC container by passing run parameters to the PXC image.

Related parameters:

  • -d Run the created instance in the background
  • -p Port mapping (host port:container port)
  • -v Path mapping (data volume:directory in the container)
  • -e Startup parameter (environment variable)

Start 5 containers:

docker run -d -p 3306:3306 -e MYSQL_ROOT_PASSWORD=123456 -e CLUSTER_NAME=PXC -e XTRABACKUP_PASSWORD=123456 -v v1:/var/lib/mysql --privileged --name=node1 --net=net1 --ip 172.18.0.2 pxc
docker run -d -p 3307:3306 -e MYSQL_ROOT_PASSWORD=123456 -e CLUSTER_NAME=PXC -e XTRABACKUP_PASSWORD=123456 -e CLUSTER_JOIN=node1 -v v2:/var/lib/mysql --privileged --name=node2 --net=net1 --ip 172.18.0.3 pxc
docker run -d -p 3308:3306 -e MYSQL_ROOT_PASSWORD=123456 -e CLUSTER_NAME=PXC -e XTRABACKUP_PASSWORD=123456 -e CLUSTER_JOIN=node1 -v v3:/var/lib/mysql --privileged --name=node3 --net=net1 --ip 172.18.0.4 pxc
docker run -d -p 3309:3306 -e MYSQL_ROOT_PASSWORD=123456 -e CLUSTER_NAME=PXC -e XTRABACKUP_PASSWORD=123456 -e CLUSTER_JOIN=node1 -v v4:/var/lib/mysql --privileged --name=node4 --net=net1 --ip 172.18.0.5 pxc
docker run -d -p 3310:3306 -e MYSQL_ROOT_PASSWORD=123456 -e CLUSTER_NAME=PXC -e XTRABACKUP_PASSWORD=123456 -e CLUSTER_JOIN=node1 -v v5:/var/lib/mysql --privileged --name=node5 --net=net1 --ip 172.18.0.6 pxc

[Error] Docker fails to create the MySQL cluster nodes from the PXC image.

[Solution] Pull a specific version instead of the latest: docker pull percona/percona-xtradb-cluster:5.7.21. After starting node1, wait a while and connect to MySQL with a client; create node2 only after the client connects successfully. Also grant read/write access to the volume directory: chmod -R 777 /var/lib/docker/volumes/v1/_data

With that, all five nodes are started.
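A quick sanity check, as a sketch (the host IP matches the one used later in this article), is to ask any node for the Galera cluster size, which should report 5:

mysql -h 192.168.3.113 -P 3306 -u root -p123456 -e "SHOW STATUS LIKE 'wsrep_cluster_size';"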

3.6 Database Load Balancing

When building a cluster, database load balancing is required. Otherwise a single node handles all requests, giving it high load and poor performance. Middleware such as HAProxy can distribute requests evenly across the nodes, so each node carries a low load and performs well.

| Middleware | Free | Virtual hosts | HTTP | TCP/IP | Plug-ins | Performance |
| --- | --- | --- | --- | --- | --- | --- |
| HAProxy | Yes | Supported | Supported | Supported | Not supported | Good |
| Nginx | Yes | Supported | Supported | Supported | Supported | Good |
| Apache | Yes | Supported | Supported | Not supported | Not supported | Average |
| LVS | Yes | Not supported | Supported | Supported | Not supported | Best |

Pull HAProxy: docker pull haproxy

Create the HAProxy configuration file: touch /home/soft/haproxy.cfg

global
    # working directory
    chroot /usr/local/etc/haproxy
    # log file; uses the rsyslog service's local5 log device (/var/log/local5), level info
    log 127.0.0.1 local5 info
    # run as a daemon
    daemon

defaults
    log global
    mode http
    # log format
    option httplog
    # do not log the load balancer's heartbeat checks
    option dontlognull
    # connect timeout (ms)
    timeout connect 5000
    # client timeout (ms)
    timeout client 50000
    # server timeout (ms)
    timeout server 50000

# monitoring interface
listen admin_stats
    # access IP and port of the monitoring interface
    bind 0.0.0.0:8888
    # access protocol
    mode http
    # URI relative address
    stats uri /dbs
    # statistics report format
    stats realm Global\ statistics
    # login credentials for the statistics page
    stats auth admin:abc123456

# database load balancing
listen proxy-mysql
    # access IP and port
    bind 0.0.0.0:3306
    # network protocol
    mode tcp
    # load balancing algorithm (roundrobin: round robin; static-rr: weighted round robin; leastconn: least connections; source: by request source IP)
    balance roundrobin
    # heartbeat check; create a 'haproxy' user in MySQL with no privileges and an empty password
    option mysql-check user haproxy
    server MySQL_1 172.18.0.2:3306 check weight 1 maxconn 2000
    server MySQL_2 172.18.0.3:3306 check weight 1 maxconn 2000
    server MySQL_3 172.18.0.4:3306 check weight 1 maxconn 2000
    server MySQL_4 172.18.0.5:3306 check weight 1 maxconn 2000
    server MySQL_5 172.18.0.6:3306 check weight 1 maxconn 2000
    # use keepalive to detect dead connections
    option tcpka

Create the HAProxy container:

# The monitoring port is 8888 and the database port is 3306; the container joins the same network segment as the database instances
docker run -it -d -p 4001:8888 -p 4002:3306 -v /home/soft/haproxy:/usr/local/etc/haproxy --name h1 --privileged --net=net1 --ip 172.18.0.7 haproxy bash

Enter the HAProxy container: docker exec -it h1 bash

[Error] Error response from daemon: Container dd729aa0c8bdbbee428ceda7af5802dbedf454ee7c4fbee7a796ebedf36165c9 is not running

[Solution] docker ps -a shows that h1 is in the Exited state; the run command was missing bash at the end, so h1 exited about 2 seconds after starting. Re-create the container with bash appended, as in the command above.

Load the configuration file: haproxy -f /usr/local/etc/haproxy/haproxy.cfg

[Error] The file does not exist

[Solution] Copy the file from the host machine into the container: docker cp /home/soft/haproxy.cfg h1:/usr/local/etc/haproxy/

Create the heartbeat-check user in MySQL (no privileges, empty password): create user 'haproxy'@'%' identified by '';

Open the monitoring page defined in the configuration file at http://192.168.3.113:4001/dbs and enter the username and password (admin/abc123456).
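To confirm that load balancing works end to end, connect through the HAProxy container's mapped database port (4002 above) instead of a PXC node directly; a sketch, using the same host IP:

mysql -h 192.168.3.113 -P 4002 -u root -p123456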

4 Dual-system hot backup

A single-node HAProxy is not highly available, so a redundant design is required.

Linux can define multiple virtual IP addresses on a single network card, and these can be assigned to different applications.

4.1 Implement dual-system hot backup by Keepalived

Keepalived works by preempting a virtual IP address: the server that wins the virtual IP becomes the primary server, and the server waiting for it becomes the standby. Heartbeat detection runs between the two servers.

4.2 Haproxy Dual-system Hot Backup Solution

Create a database cluster from multiple PXC instances and forward requests through load balancing. Create two containers, each running HAProxy, and install Keepalived inside them so that if either container goes down, the other can take over. The two Keepalived instances preempt a 172.18.0.x virtual IP. A virtual IP inside Docker cannot be reached from the external network, so Keepalived on the host is also needed to map it to an externally reachable virtual IP.
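The host-side Keepalived configuration is not shown in the original steps; the sketch below illustrates the idea, with the host NIC name ens33 and the external virtual IP 192.168.99.150 both assumptions, forwarding the external VIP to the Docker-internal one via an LVS virtual_server rule:

vrrp_instance VI_1 {
    state MASTER
    interface ens33                  # host NIC; the name is an assumption
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 123456
    }
    virtual_ipaddress {
        192.168.99.150               # externally reachable VIP; the address is an assumption
    }
}

virtual_server 192.168.99.150 3306 {
    delay_loop 3
    lb_algo rr
    lb_kind NAT
    protocol TCP
    real_server 172.18.0.201 3306 {  # the Docker-internal VIP preempted by the two containers
        weight 1
    }
}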

4.3 Installing Keepalived

  1. Keepalived must be installed in the same container as HAProxy

[root@localhost ~]# docker exec -it h1 bash
# install inside h1
root@a5dfcd63c066:/# apt-get update
root@a5dfcd63c066:/# apt-get install keepalived
  2. Edit the Keepalived configuration file /etc/keepalived/keepalived.conf

vrrp_instance VI_1 {
    # Keepalived role (MASTER/BACKUP)
    state MASTER
    # NIC to bind the virtual IP to (see the fix below if binding fails)
    interface eth0
    # virtual route ID; must be identical on master and backup, range 0-255
    virtual_router_id 51
    # weight; the MASTER's must be higher than the BACKUP's
    priority 100
    # heartbeat interval, in seconds
    advert_int 1
    # authentication; master and backup must use the same password to communicate
    authentication {
        auth_type PASS
        auth_pass 123456
    }
    # virtual IP addresses; multiple virtual IPs can be set
    virtual_ipaddress {
        172.18.0.201
    }
}
  3. Start Keepalived: service keepalived start
  4. After that, the host can ping the virtual IP address.
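For example, with the virtual IP configured above:

ping 172.18.0.201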

[Error] Keepalived failed to bind the virtual IP address.

[Solution] Bind Keepalived to the correct network interface (eth1 here):

vrrp_instance VI_1 {
    state MASTER
    interface eth1
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 123456
    }
    virtual_ipaddress {
        172.18.0.201
    }
}

[Error] apt-get mirror source problem

W: GPG error: http://mirrors.ustc.edu.cn/ubuntu trusty Release: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 40976EAF437D05B5 NO_PUBKEY 3B4FE6ACC0B21F32
E: The repository 'http://mirrors.ustc.edu.cn/ubuntu trusty Release' is not signed.

[Solution]

sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 40976EAF437D05B5
sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 3B4FE6ACC0B21F32

4.4 Hot Backup Data

A cold backup is performed while the database is shut down, usually by copying its data files. It is the simplest and safest backup method, but large websites cannot take their services offline to back up data, so cold backup is not a good choice for them.

A hot backup backs up data while the system is running and is the most difficult kind of backup. The common MySQL hot backup schemes are LVM and XtraBackup.

LVM is Linux's built-in backup mechanism: Linux creates a snapshot of a partition to back it up. However, the database must be locked during the backup, so it can only be read, not written.

XtraBackup is an online hot backup tool based on InnoDB. It is open source and free, occupies little disk space, and can back up and restore a database quickly, so XtraBackup is recommended.

XtraBackup includes full backup and incremental backup. Full backup backs up all data, takes a long time, and occupies a large space. In incremental backup, only the changed data is backed up, which takes a short time and occupies a small space. Generally, full backup is performed once a week and incremental backup is performed once a day.
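The steps below only show a full backup; for reference, a sketch of the corresponding incremental command (the base-directory path is hypothetical and must point to the most recent full backup):

# incremental backup based on the most recent full backup
innobackupex --user=root --password=123456 --incremental /data/backup/incr --incremental-basedir=/data/backup/full/2021-04-12_02-58-43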

4.5 Full PXC Backup

Install XtraBackup in the PXC container and perform the backup. To add the backup volume mapping to node1, delete node1 and re-create it:

docker run -d -p 3306:3306 -e MYSQL_ROOT_PASSWORD=123456 -e CLUSTER_NAME=PXC -e XTRABACKUP_PASSWORD=123456 -e CLUSTER_JOIN=node2 -v v1:/var/lib/mysql -v backup:/data --privileged --name=node1 --net=net1 --ip 172.18.0.2 pxc
docker exec -it node1 bash
apt-get update
apt-get install percona-xtrabackup-24
innobackupex --user=root --password=123456 /data/backup/full

The full backup is complete.

4.6 PXC Full Restoration

The database can be hot backed up but not hot restored. To avoid data being synchronized into the cluster during recovery, restore the data into a blank MySQL instance first and then build the PXC cluster.

Before restoring data, roll back uncommitted transactions. After restoring data, restart MySQL.

rm -rf /var/lib/mysql/*
innobackupex --user=root --password=123456 --apply-log /data/backup/2020-4-12_02-58-43/
innobackupex --user=root --password=123456 --copy-back /data/backup/2020-4-12_02-58-43/

5 Redis cluster construction

Redis is an open-source, free key-value NoSQL cache product whose development was sponsored by VMware. Redis has very good performance and can serve up to 100,000 reads/writes per second.

Redis’s current cluster solutions include:

  • RedisCluster: officially recommended; no central node; clients connect directly to the Redis nodes without an intermediate proxy layer; data is stored in shards; easy to manage, and nodes can be added or removed.
  • Codis: Middleware product with a central node
  • Twemproxy: Middleware product with a central node

5.1 Redis Master/Slave Synchronization

Database replication in a Redis cluster is achieved through master/slave synchronization: the master node distributes data to the slave nodes. The advantage of master/slave synchronization is high availability, since the Redis nodes have redundancy.
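Outside of cluster mode, a replica can also be attached by hand; a minimal sketch (Redis 4.0 syntax; the IPs match the cluster built in 5.3, where redis-trib assigns replicas automatically instead):

# make 172.19.0.3 a slave of 172.19.0.2
redis-cli -h 172.19.0.3 -p 6379 slaveof 172.19.0.2 6379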

The Redis cluster should contain an odd number of master nodes, at least three. Redis has an election mechanism: if more than half of the nodes are down, an election cannot be held. Also, every master should have a slave.

5.2 Obtaining a Redis Image

5.2.1 Creating a Redis Docker base image

  1. Download the Redis installation package: wget http://download.redis.io/releases/redis-4.0.1.tar.gz
  2. Extract it: tar zxvf redis-4.0.1.tar.gz
  3. Go to the redis folder and compile: make
  4. Modify the Redis configuration: vi /home/soft/docker_redis/redis-4.0.1/redis.conf

daemonize yes                  # run in the background
cluster-enabled yes            # enable cluster mode
cluster-config-file nodes.conf # cluster configuration file
cluster-node-timeout 15000     # cluster node timeout (ms)
appendonly yes                 # enable AOF mode
  5. Go to /home/soft/docker_redis and create the image: vi Dockerfile
FROM centos                   # base image (an assumption; the yum commands below imply a CentOS base)
ENV REDIS_HOME /usr/local
# ADD unpacks the archive automatically after copying; the source must sit in the same path as the Dockerfile
ADD redis-4.0.1.tar.gz /
RUN mkdir -p $REDIS_HOME/redis
# copy the configuration file into the image
ADD redis-4.0.1/redis.conf $REDIS_HOME/redis
# install the build tools
RUN yum install -y gcc make
# compile Redis
WORKDIR /redis-4.0.1
RUN make
RUN mv /redis-4.0.1/src/redis-server $REDIS_HOME/redis/
WORKDIR /
# clean up the sources and build tools
RUN rm -rf /redis-4.0.1
RUN yum remove -y gcc make
# log volume
VOLUME ["/var/log/redis"]
# expose the Redis port
EXPOSE 6379
  6. Build the image: docker build -t xd1998/cluster-redis .

5.2.2 Pulling the official Redis image

Note that the official Redis image does not ship with a Redis configuration file, so one has to be supplied manually.

  1. Download the redis.conf file from the Redis website
  2. Modify redis.conf:

# bind 127.0.0.1        # comment out to allow external connections
daemonize no             # must stay "no" when running under Docker
appendonly yes           # enable Redis persistence (AOF)
tcp-keepalive 300        # prevents the remote host from forcibly closing idle connections; the default is 300
  3. Create the local Redis data directory: mkdir -p /data/redis/data
  4. Copy the configuration: cp -p redis.conf /data/redis/
  5. Start the container: docker run -p 6379:6379 --name redis -v /data/redis/redis.conf:/etc/redis/redis.conf -v /data/redis/data:/data -d redis redis-server /etc/redis/redis.conf --appendonly yes
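A quick check that the container is serving requests, as a sketch:

docker exec -it redis redis-cli ping
# expected reply: PONG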

5.3 Installing redis-trib.rb

redis-trib.rb is a Ruby-based Redis cluster command-line tool.

cp /usr/redis/src/redis-trib.rb /usr/redis/cluster/
cd /usr/redis/cluster
apt-get install ruby
apt-get install rubygems
gem install redis

Create the Redis cluster with redis-trib.rb: ./redis-trib.rb create --replicas 1 172.19.0.2:6379 172.19.0.3:6379 172.19.0.4:6379 172.19.0.5:6379 172.19.0.6:6379 172.19.0.7:6379
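After creation, the cluster can be inspected from any node; a sketch (-c enables cluster-aware redirection):

redis-cli -c -h 172.19.0.2 -p 6379
172.19.0.2:6379> cluster info    # expect cluster_state:ok and cluster_known_nodes:6
172.19.0.2:6379> cluster nodes   # lists the three masters and their replicas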