First, what kind of backup is a database logical backup?

As we all know, the data a database returns to us is presented in a logical format that matches the expectations of our original design and carries specific business meaning, while at the physical storage level the database software stores it, after certain processing, in a particular format of the software's own design.

A database logical backup is one in which the backup software, following the logical relationships of the original design, takes logical database objects as its unit and writes the data out, according to predefined logical formats, into text files, thereby achieving the purpose of a backup.

Common logical backups

Logical backup is the simplest and most commonly used backup method for small and medium-sized systems. There are two main kinds of logical backup in MySQL: one generates INSERT statements that can completely reproduce the data currently in the database; the other uses logical backup software to export our table data into text files, with fields separated by specific delimiters.

① Generate a backup of INSERT statements

The two types of logical backup have their own advantages and disadvantages, and are used in slightly different scenarios. Let’s first look at logical backups that generate INSERT statements.

In MySQL, we generally use the mysqldump program shipped with the MySQL database software to produce a logical backup file made up of INSERT statements. Its basic usage is as follows:

Dumping definition and data mysql database or table
Usage: mysqldump [OPTIONS] database [tables]
OR mysqldump [OPTIONS] --databases [OPTIONS] DB1 [DB2 DB3...]
OR mysqldump [OPTIONS] --all-databases [OPTIONS]

Since mysqldump is relatively simple to use, most of the information you need can be obtained by running "mysqldump --help". Here I just want to introduce some of the related concepts and principles of the MySQL database and discuss a few techniques and caveats when using mysqldump to perform a logical backup of the database.

We all know that most software or web sites that use a database want that database to be as available as possible rather than going down from time to time, because once the database cannot provide service, the system can no longer access the data it needs to provide its dynamic functions.

Stopping the system for every backup would be unacceptable for most systems, yet mysqldump works by fetching data table by table according to its parameters and then generating INSERT statements to write to the backup file. The problem is that during normal operation there are likely to be continual data-change requests, which can leave the data backed up by mysqldump inconsistent.

This means the backup data may not come from the same point in time and may not even satisfy integrity constraints. Such a backup set may not matter much for some systems, but for systems with stricter requirements on data consistency and integrity it is a completely invalid backup set.

What should we do in this scenario? We know that there are only two situations in which consistent data can be obtained from a database:

  • First, all the data is taken out at the same point in time;
  • Second, the data in the database is in a static state.

For the first case, you might ask: is that even possible?

After all, as soon as there are two or more tables, no matter how we write the program, it is impossible to fetch the data at exactly the same instant. True, we cannot obtain the exact same point in time by conventional means, but remember that within a single transaction a database can read data as of the same point in time.

For InnoDB or BDB tables, we can run the entire backup process inside a single transaction to achieve consistency and integrity of the backup data. mysqldump provides the "--single-transaction" option for exactly this purpose, and using it does not affect any normal service of the database.
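
For example, here is a minimal sketch under assumed names (the database mydb, the user, and the output path are placeholders, not taken from the original text):

# Take a consistent InnoDB backup inside one transaction; reads come
# from a consistent snapshot, so normal writes are not blocked.
mysqldump -uroot -p --single-transaction mydb > /tmp/mydb_backup.sql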

In the second case, I think the first thing that comes to mind is locking the tables that need to be backed up so that they can only be read but not written.

Yes, that is essentially what we have to do. It is a compromise: during the backup the database provides only query service while writes are locked out, so the data temporarily stays in a consistent state and cannot be modified; once mysqldump finishes, the write lock is released and full service resumes.

mysqldump provides options such as "--lock-tables" and "--lock-all-tables" to lock the tables before the dump and release the locks automatically once it finishes.

It is important to note that "--lock-tables" does not lock all tables to be dumped at once; it locks only the tables of one database at a time. If the data across databases must be consistent as a whole, be sure to use "--lock-all-tables".
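
As a hedged sketch (db1 and db2 are placeholder database names):

# --lock-all-tables takes a global read lock for the whole dump, so the
# data of both databases is consistent; writes are blocked while it runs.
mysqldump -uroot -p --lock-all-tables --databases db1 db2 > /tmp/all_dbs.sql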

When generating logical backups of INSERT statements with mysqldump, there is another very useful option: "--master-data[=value]". With "--master-data=1" (or "--master-data" with no value), the name and position of the current MySQL binlog are recorded in the dump as an executable CHANGE MASTER TO statement; with "--master-data=2" the CHANGE MASTER TO statement is written as a comment. This option is very useful when building a slave online, and even when no slave is being built, it allows further recovery from the backed-up binlog position in some scenarios.
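
A sketch combining it with the option discussed earlier (mydb is again a placeholder):

# --master-data=2 writes the binlog file and position as a commented
# CHANGE MASTER TO line near the top of the dump; together with
# --single-transaction this yields a consistent snapshot plus a known
# replication / point-in-time starting position.
mysqldump -uroot -p --single-transaction --master-data=2 mydb > /tmp/mydb_with_pos.sql

# Inspect the recorded position:
grep "CHANGE MASTER" /tmp/mydb_with_pos.sql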

In some cases we may want to export a specific subset of data to another database without first creating a temporary table. mysqldump's "--where='where-condition'" option can do this, but it can only be used when dumping a single table.
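
For instance, a hypothetical sketch (the table orders and the column created_at are invented names for illustration):

# Dump only the rows of one table that match a WHERE condition.
mysqldump -uroot -p --where="created_at >= '2021-01-01'" mydb orders > /tmp/orders_2021.sql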

Beyond the tips above, mysqldump provides many other useful options for different scenarios, such as "--no-data", which dumps only the database structure as a creation script, and "--no-create-info", which dumps only the data without the table-creation statements. They can all be found with "mysqldump --help".
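
A short sketch of these two options (mydb remains a placeholder database name):

# Structure only (CREATE statements, no rows):
mysqldump -uroot -p --no-data mydb > /tmp/mydb_schema.sql

# Data only (INSERT statements, no CREATE TABLE):
mysqldump -uroot -p --no-create-info mydb > /tmp/mydb_data.sql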

② Generate plain-text backup data files in a specific format

In addition to creating a logical backup by generating INSERT statements, we can also export the data in the database into text files whose fields are separated by specific delimiters. Compared with a file of INSERT statements, such backup data takes less storage space, has a clearer data format, and is easier to edit. The disadvantages are that the backup data of multiple tables cannot live in the same backup file and that there are no statements for rebuilding the database structure. Having multiple files in a backup set merely raises the cost of maintaining and restoring the files, and that can be handled by writing a few simple scripts.

So what method can we generally use to generate such a backup set? In fact, MySQL already gives us the corresponding functionality.

In MySQL, the following two methods are commonly used to obtain plain-text backup files with customizable delimiters.

1. Use the "SELECT ... INTO OUTFILE ... FROM ..." statement

MySQL provides an extension to the SELECT syntax that lets users output selected data to a text file in a specified format through an SQL statement, and it also provides utilities and related commands to easily load the exported file back into the database as it was. Isn't that exactly what we need for a backup?

Note the following parameters:

  • "FIELDS ESCAPED BY 'name'" escapes the characters in the output that would need to be escaped in an SQL statement;
  • "FIELDS [OPTIONALLY] ENCLOSED BY 'name'" "wraps" the contents of each field with the given character; without OPTIONALLY, data of all types, including numeric types, is "wrapped", whereas with OPTIONALLY numeric data is not "wrapped" with the specified character;
  • "FIELDS TERMINATED BY" sets the separator between every two fields;
  • "LINES TERMINATED BY" tells MySQL which characters to append at the end of each record in the output file.

For example:

root@localhost : test 10:02:02> SELECT * INTO OUTFILE '/tmp/dump.text'
-> FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
-> LINES TERMINATED BY '\n'
-> FROM test_outfile limit 100;
Query OK, 100 rows affected (0.00 sec)
root@localhost : test 10:02:11> exit
Bye
root@sky:/tmp# cat dump.text
350021.21,"A","abcd"
350022.22,"B","abcd"
350023.23,"C","abcd"
350024.24,"D","abcd"
350025.25,"A","abcd"
......

2. Use mysqldump to export data

We said above that mysqldump can back up a database as INSERT statements, but it can also export the data as delimited plain-text files, in the same form as "SELECT ... INTO OUTFILE ... FROM ...", and at the same time generate a creation script for the corresponding database structure. This is done with its "-T" option.

For example:

root@sky:~# ls -l /tmp/mysqldump
total 0
root@sky:~# mysqldump -uroot -T/tmp/mysqldump test test_outfile --fields-enclosed-by=\" --fields-terminated-by=,
root@sky:~# ls -l /tmp/mysqldump
total 8
-rw-r--r-- 1 root root 1346 2021-4-20 22:18 test_outfile.sql
-rw-rw-rw- 1 mysql mysql 2521 2021-4-20 22:18 test_outfile.txt
root@sky:~# cat /tmp/mysqldump/test_outfile.txt
350021.21,"A","abcd"
350022.22,"B","abcd"
350023.23,"C","abcd"
350024.24,"D","abcd"
350025.25,"A","abcd"
......
root@sky:~# cat /tmp/mysqldump/test_outfile.sql
-- MySQL dump 10.11
--
-- Host: localhost    Database: test
-- ------------------------------------------------------
-- Server version       5.0.51a-log

/*!40101 SET @OLD_CHARACTER_SET_CLIENT=@@CHARACTER_SET_CLIENT */;
/*!40101 SET @OLD_CHARACTER_SET_RESULTS=@@CHARACTER_SET_RESULTS */;
/*!40101 SET @OLD_COLLATION_CONNECTION=@@COLLATION_CONNECTION */;
/*!40101 SET NAMES utf8 */;
/*!40103 SET @OLD_TIME_ZONE=@@TIME_ZONE */;
/*!40103 SET TIME_ZONE='+00:00' */;
/*!40101 SET @OLD_SQL_MODE=@@SQL_MODE, SQL_MODE='' */;
/*!40111 SET @OLD_SQL_NOTES=@@SQL_NOTES, SQL_NOTES=0 */;
--
-- Table structure for table `test_outfile`
--
DROP TABLE IF EXISTS `test_outfile`;
SET @saved_cs_client = @@character_set_client;
SET character_set_client = utf8;
CREATE TABLE `test_outfile` (
`id` int(11) NOT NULL default '0',
`t_id` int(11) default NULL,
`a` char(1) default NULL,
`mid` varchar(32) default NULL
) ENGINE=MyISAM DEFAULT CHARSET=utf8;
SET character_set_client = @saved_cs_client;
/*!40103 SET TIME_ZONE=@OLD_TIME_ZONE */;
/*!40101 SET SQL_MODE=@OLD_SQL_MODE */;
/*!40101 SET CHARACTER_SET_CLIENT=@OLD_CHARACTER_SET_CLIENT */;
/*!40101 SET CHARACTER_SET_RESULTS=@OLD_CHARACTER_SET_RESULTS */;
/*!40101 SET COLLATION_CONNECTION=@OLD_COLLATION_CONNECTION */;
/*!40111 SET SQL_NOTES=@OLD_SQL_NOTES */;
-- Dump completed on 2021-4-20 14:18:23

This output structure is suitable for use as a backup, but if multiple tables need to be dumped at once, two files will be generated for each table.
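
For reference, a hedged sketch of dumping several tables this way in one go (t1 and t2 are placeholder table names; the target directory must be writable by the MySQL server):

# Each table yields a pair of files: tN.sql (structure) and tN.txt (data).
mysqldump -uroot -p -T/tmp/mysqldump --fields-enclosed-by=\" --fields-terminated-by=, test t1 t2
ls -l /tmp/mysqldump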

3. Logical backup and restoration methods

We need to know how to use these backups. Now let’s look at how to restore a logical backup.

Since all the backup data is stored in a form closely tied to our original logical design, restoring a logical backup is relatively simple. Of course, the restore method differs slightly between the two forms of logical backup. The following briefly introduces how to restore each of the two kinds of logical backup file.

① Recovery of INSERT statement files

Restoring a backup file made of INSERT statements is the easiest: we simply execute all (or some) of the SQL statements in the backup file. For a full restore, we can run "mysql < backup.sql" to feed the backup file to mysql and execute every statement in it, restoring the data to its state at backup time. If we are already connected with the mysql client, we can also run "source /path/backup.sql" or "\. /path/backup.sql" to perform the restore.
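
A minimal sketch of both approaches (the user, database name mydb, and file path are placeholders):

# From the shell:
mysql -uroot -p mydb < /tmp/backup.sql

# Or, from inside an open mysql session:
# mysql> source /tmp/backup.sql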

② Recovery of plain-text data backups

If the backup is in the second form above, recovery is a bit more troublesome: the data has to be restored table by table with the relevant commands, although a script can make multi-table recovery quite convenient. There are two ways to restore the data: one uses MySQL's "LOAD DATA INFILE" statement, the other uses the mysqlimport tool that MySQL provides.

What can a logical backup do? What can't it do?

Now that we know how to use logical backups for recovery, we need to know what we can do with them.

  1. With a logical backup, the related data in the database can be fully restored to its state at backup time by executing the corresponding SQL statements or commands, without affecting unrelated data;
  2. With a full-database logical backup, we can completely rebuild a database in a new MySQL environment exactly as it was at backup time, and we are not limited by the platform MySQL runs on;
  3. Under certain conditions, a logical backup lets us easily migrate (or synchronize) some of the data to another MySQL instance or even another database environment;
  4. With a logical backup, we can restore only part of the data in the backup set rather than having to restore all of it.

Once we know what a logical backup can do, we must also know what it cannot do, so that we can judge for ourselves whether such a backup meets our expectations and is really what we want.

  1. A logical backup cannot restore data to any point in time other than the backup moment;

4. Logical backup and recovery test

Sometimes we hear of someone whose database has run into a problem, and who, confidently preparing to restore from the backup made earlier, finds that the backup set is unusable or cannot achieve the recovery that was expected when the backup was made. When this happens, I'm afraid everyone would be utterly dismayed, because one of the most important and critical uses of a database backup is precisely to recover the data when the database suffers some abnormal condition.

As maintainers, we should never make such a foolish mistake. So how do we avoid this kind of problem?

There is only one way to do this, and that is to periodically perform mock restore tests to verify that our backup sets really work and do restore as we expect them to.

But how do we run a recovery test? We obviously cannot restore the data into the production environment.

True, the data cannot be restored into the production environment, but why can't we do it in a test environment or somewhere else?

The restore test is just to verify that our backups work and live up to our expectations.

So before we do restore test, we must first know exactly what kind of scenario we are making backup for.

For example, if we made a full-database logical backup, the goal is probably to restore the entire database to its state at backup time after a logical or physical failure. In that case the recovery test only needs to restore the complete full-database logical backup and check whether a complete database can be successfully rebuilt.

As for whether the restored data matches the data at backup time, we can only rely on our own manual comparison and judgment.

In addition, we may want to be able to restore the data of a database object, such as a table, to the backup point as soon as possible after a problem occurs. We can then perform sample recovery tests against a single named table.

Let's take as an example a full-database recovery test for the scenario where the database host crashes with hardware damage and all database data is lost:

When our database suffers a hardware failure and all the data is lost, we have to find a new host as soon as possible to replace the damaged one and restore the corresponding service. Before we can restore the service, we first need to rebuild the damaged database. Let's say we have a new host with MySQL installed and its settings adjusted, ready for the database to be restored.

We need to retrieve the most recent full-database logical backup file from before the crash, copy it to the newly prepared host, and start the MySQL that has been installed.

Since we have two logical backup formats, and the recovery method for each format is different, we will sample the recovery for both.

① If it is a logical backup of INSERT statements

A. Prepare the backup file and copy it to a specific directory, for example /tmp;

B. Run the following command to execute related commands in the backup set:

mysql -uusername -p < backup.sql

Alternatively, log in to the database using mysql and run the following command:

root@localhost : (none) 09:59:40> source /tmp/backup.sql

C. Check the corresponding database objects in the database to see whether they are complete;

D. Spot-check the data in a few tables for manual verification, and notify the team to begin application-level test verification. Once all verification passes, the service can be provided externally again.

Of course, the steps above assume by default that every step goes smoothly. If a step has a problem, for example if step B raises an error and cannot continue, we first need to check, based on the error, whether there is anything wrong with our recovery command, whether something is wrong with our environment, and so on.

If we confirm that the backup file itself is the problem, then our backup is invalid and the test has failed. If the recovery process is normal, but during validation we find that database objects are missing, that the data in some objects is incorrect, or that there is no data at all, that also indicates the backup cannot meet expectations and the backup has failed the test.

Of course, if we run into a similar situation during an actual production restore, and an earlier backup set exists, we must fall back and perform the same restore operation with that earlier backup set. Although the data in an earlier backup set is further from the latest state, it at least allows partial recovery rather than losing all the data.

② If it is a backup of plain-text data files separated by special delimiters

A. The first step is no different from the INSERT-statement backup: prepare the backup file closest to the crash time;

B. Import data into a database using specific tools or commands:

Since the database structure creation script is stored separately from the plain-text data backup file, we need to execute the structure creation script first and then import the data. Executing the structure creation script works exactly the same way as step B in the restore test of the first backup form above.

Now that we have the database structure, we can import the backup data as follows:

mysqlimport --user=name --password=pwd test --fields-enclosed-by=\" --fields-terminated-by=, /tmp/test_outfile.txt

or

LOAD DATA INFILE '/tmp/test_outfile.txt' INTO TABLE test_outfile FIELDS TERMINATED BY ',' ENCLOSED BY '"';

The following steps are exactly the same as restoring the backup file for the INSERT statement, which is not covered here.