In China, no matter which cloud platform's MySQL database you use, be it Alibaba Cloud, Tencent Cloud, or Huawei Cloud, Percona XtraBackup is the tool you will end up using for data backup and restoration.

Tired of the long-winded and outdated backup and recovery guides floating around the Internet, I took the opportunity of helping a friend with yet another data migration to sort out and share my earlier field notes, hoping they help anyone who needs them.

Preface

There are only a handful of cloud platform vendors in China and the mature solutions are fairly fixed, so it is easy to see that the backup and recovery strategies, and even the documentation, of the "big three" clouds are remarkably similar:

  • Alibaba Cloud: "RDS MySQL physical backup files restored to a self-built database"
  • Tencent Cloud: “Cloud database MySQL – Using physical backup to restore database”
  • Huawei Cloud: Restoring a Self-built Database using backup Files (MySQL)

This article takes a container-based approach to the recovery process, to avoid pulling in unnecessary software dependencies.

Writing the configuration for the database recovery instance

In the container era, unless you do operations work in a container environment, you do not need to worry much about system configuration. We use the official image provided by Percona directly; the example below uses MySQL 5.7, and you can change the version number to suit your needs.

# https://hub.docker.com/r/percona/percona-xtradb-cluster/

version: "3"

services:
  percona:
    image: percona/percona-xtradb-cluster:5.7
    container_name: percona
    restart: always
    # Declare exposed ports according to your needs
    # ports:
    # - 3306:3306
    environment:
      - MYSQL_ALLOW_EMPTY_PASSWORD=1
    volumes:
      - ./node.cnf:/etc/mysql/node.cnf
      - ./data:/var/lib/mysql:rw
      - ./restore:/var/lib/mysql-files:rw

In the configuration above, I declared two directories to hold data: the restore directory holds the cloud database backup, and the data directory temporarily holds the restored database files. Save the above as docker-compose.yml for later use.
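If the two mounted directories do not exist yet, you can create them ahead of time; a minimal sketch, run from the directory that holds docker-compose.yml:

mkdir -p ./data ./restore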

Next, write a database configuration file that can be used for restoration:

[mysqld]

skip-grant-tables

ignore-db-dir=lost+found
datadir=/var/lib/mysql
socket=/tmp/mysql.sock
skip-host-cache

#coredumper
#server_id=0
binlog_format=ROW
default_storage_engine=InnoDB

innodb_flush_log_at_trx_commit  = 0
innodb_flush_method             = O_DIRECT
innodb_file_per_table           = 1
innodb_autoinc_lock_mode=2

bind_address = 0.0.0.0

wsrep_slave_threads=2
wsrep_cluster_address=gcomm://
wsrep_provider=/usr/lib64/galera3/libgalera_smm.so

wsrep_cluster_name=noname
wsrep_node_address=172.20.12.2
wsrep_node_incoming_address=0cdb19fc56e4:3306

wsrep_sst_method=xtrabackup-v2
wsrep_sst_auth='xtrabackup:xtrabackup'

[client]
socket=/tmp/mysql.sock

[sst]
progress=/var/lib/mysql/sst_in_progresss

Save the above configuration as node.cnf, put it in the same directory as docker-compose.yml, and start the database instance for data restoration with the familiar docker-compose up -d.

...
2021-10-12T06:08:37.329788Z 0 [Note] Server socket created on IP: '0.0.0.0'.
2021-10-12T06:08:37.385234Z 0 [Note] InnoDB: Buffer pool(s) load completed at 211012  6:08:37
2021-10-12T06:08:37.665867Z 0 [Note] mysqld: ready for connections.
Version: '5.7.33-36-57'  socket: '/tmp/mysql.sock'  port: 3306  Percona XtraDB Cluster (GPL), Release rel36, Revision a1ed9c3, WSREP version 31.49, wsrep_31.49
2021-10-12T06:08:37.666282Z 2 [Note] WSREP: Initialized wsrep sidno 2
...

Use docker-compose logs -f to check the running log. After a moment, once you see output similar to the above, containing "ready for connections", you can start the data recovery operations.
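If you would rather not watch the log scroll by, a hedged one-liner that blocks until the readiness message appears (the service name percona comes from the compose file above):

docker-compose logs -f percona | grep -m1 "ready for connections"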

Restoring data

Copy the data you need to restore into the local restore directory (the /var/lib/mysql-files/ directory inside the container). You can also use the docker cp command to copy the data directly into the container, but that is not a pleasant experience for large files.
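For example (the backup file name below is hypothetical; use whatever you downloaded from your cloud console):

cp /path/to/your-backup.tar ./restore/
# or, noticeably slower for large files:
docker cp /path/to/your-backup.tar percona:/var/lib/mysql-files/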

Data “decompression”

With the backup files ready, we enter the container for further operations:

docker exec -it percona bash

After entering the container, first switch to the working directory:

cd /var/lib/mysql-files/

Assume the backup file is in tar format and needs to be unpacked. For other formats such as .xb, the qpress and xbstream tools are built into the container and can be used directly; refer to the documentation provided by your cloud platform.

tar xvf *.tar
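For reference, a rough sketch of handling an .xb backup with the built-in tools; the file name is hypothetical and the exact steps may differ, so follow your cloud platform's documentation:

# unpack the xbstream archive into the current directory
cat your-backup_qp.xb | xbstream -x -C /var/lib/mysql-files/
# decompress the qpress-compressed files inside it
innobackupex --decompress /var/lib/mysql-files/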

After the backup files are uncompressed, we can start the data recovery operation in earnest.

innobackupex --defaults-file=/etc/mysql/node.cnf --apply-log /var/lib/mysql-files/
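For large backups, the prepare step can be given more working memory through the --use-memory option; a hedged variant of the same command (the 2G value is an arbitrary example):

innobackupex --defaults-file=/etc/mysql/node.cnf --apply-log --use-memory=2G /var/lib/mysql-files/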

How long the recovery takes depends on the size of your backup file.

InnoDB: 5.7.32 started; log sequence number 3006781461
xtrabackup: starting shutdown with innodb_fast_shutdown = 1
InnoDB: FTS optimize thread exiting.
InnoDB: Starting shutdown...
InnoDB: Shutdown completed; log sequence number 3006781480
211013 07:57:02 completed OK!

When you see the log output above, you should have all the data files in place for a healthy MySQL instance.

But in order to fully export the data, we need to do some additional things.

Exporting the data files

In the previous steps, the database instance needed to keep running steadily, which is why the backup was unpacked into the mysql-files directory rather than restored directly into the /var/lib/mysql directory.

To export the data correctly, the database instance needs to be able to read the data we recovered, so we overwrite the instance's data with the unpacked backup:

cp -r /var/lib/mysql-files/* /var/lib/mysql/
rm -rf /var/lib/mysql-files/*
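Depending on how the files were copied in, they may end up owned by root; if the new instance later fails to read them, fixing ownership inside the container is a hedged extra step that usually helps:

chown -R mysql:mysql /var/lib/mysql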

After these commands finish, switch back out of the container and run docker-compose down && docker-compose up -d to remove the old container and create a clean new one before continuing the recovery. Enter the container again using docker exec:

docker exec -it percona bash

Use the default user name to enter the MySQL interactive terminal:

mysql -u xtrabackup

Try listing the databases that are currently readable:

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| YOUR_DATABASE      |
| mysql              |
| performance_schema |
| sys                |
+--------------------+
5 rows in set (0.01 sec)

You will find that the MySQL database in the cloud has been correctly restored to the local server.

However, if you try to export data directly using mysqldump, you may run into a PXC strict mode error:

mysqldump: Got error: 1105: Percona-XtraDB-Cluster prohibits use of LOCK TABLE/FLUSH TABLE <table> WITH READ LOCK/FOR EXPORT with pxc_strict_mode = ENFORCING when using LOCK TABLES

To resolve this, adjust the global setting in the MySQL interactive terminal:

mysql> set global pxc_strict_mode=DISABLED ;
Query OK, 0 rows affected (0.00 sec)
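If you plan to keep this instance for anything more than a one-off export, it may be worth restoring the setting afterwards; a hedged reminder (ENFORCING is the usual default):

mysql> set global pxc_strict_mode=ENFORCING;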

Then the database export should not be a problem:

mysqldump -u xtrabackup YOUR_DATABASE > backup.sql
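If the database also contains stored procedures or scheduled events, it may be worth adding the corresponding mysqldump options; a hedged variant (triggers are already included by default):

mysqldump -u xtrabackup --routines --events YOUR_DATABASE > backup.sql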

Since what we have exported is a standard database dump, it is easy to continue the migration with something like:

mysql -u USER -p DATABASE_NAME < backup.sql

or a direct file load, to quickly restore and rebuild the database.

Finally

For engineers, laziness is a virtue, but you can only afford to be lazy if you are able to locate and solve problems correctly and easily.

— EOF


We have a small group of a few hundred people who enjoy building things.

There are no ads in the group; we talk about software and hardware, HomeLab, and programming problems together, and occasionally share information about technical meetups.

If you enjoy tinkering, you are welcome to scan the QR code and add me as a friend. (When adding, please note your real name, where you found this, and your purpose, otherwise the request will not be approved.)

Some notes about joining the group


If you found the content useful, you are welcome to share it with your friends. Thank you.


This article is published under the Attribution 4.0 International (CC BY 4.0) license.

Author: Su Yang

Created: October 13, 2021 | Word count: 4,300 words | Reading time: 9 minutes | Link to this article: soulteary.com/2021/10/13/…