MySQL replication principle and process

Master-slave replication: DDL and DML operations on the master database are shipped to the slave database through the binary log, and the slave re-executes (redoes) those logs. This keeps the slave's data consistent with the master's.

The role of master-slave replication

  • If the primary database has a problem, you can switch to the secondary database.
  • Read/write separation can be performed at the database level.
  • Daily backups can be made on a slave database.

Problems MySQL master-slave replication solves

  • Data distribution: start or stop replication at will, and distribute data backups across geographic locations.
  • Load balancing: reduce the load on a single server.
  • High availability and failover: help the application avoid a single point of failure.
  • Upgrade testing: run a higher MySQL version on the slave to verify an upgrade before applying it to the master.

How MySQL master-slave replication works

  • The master records data changes in its binary log (binlog).
  • The slave copies the master's binlog events into its own relay log.
  • The slave reads events from the relay log and replays them against its own data.
  • The flow involves three threads and the associations between them (see the sketch after this list):
  • Master: binlog thread — records every statement that changes data into the master's binlog;
  • Slave: I/O thread — after START SLAVE is issued, it pulls binlog content from the master and writes it into the slave's relay log;
  • Slave: SQL thread — executes the statements in the relay log.
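
A minimal sketch of observing these threads from the mysql client; it assumes replication has already been configured on the slave:

```sql
-- On the slave: start replication (spawns the I/O and SQL threads).
START SLAVE;

-- Both Slave_IO_Running and Slave_SQL_Running should report "Yes";
-- Seconds_Behind_Master indicates replication lag.
SHOW SLAVE STATUS\G
```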

Replication process

  • Binary log: the binary log of the primary (master) database
  • Relay log: the relay log of the secondary (slave) server
  • Step 1: before each transaction that updates data completes, the master serially writes the operation record into its binlog file.
  • Step 2: the slave starts an I/O thread, which opens an ordinary client connection to the master (configured as in the sketch below). If it has caught up with the master, it goes to sleep and waits for the master to generate new events. The I/O thread's ultimate goal is to write these events into the relay log.
  • Step 3: the SQL thread reads the relay log and executes the SQL events in order, keeping the slave's data consistent with the primary database.
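
A minimal configuration sketch for this flow; the host, account, and binlog coordinates are placeholders, not values from this article:

```sql
-- On the master: note the current binlog file and position.
SHOW MASTER STATUS;

-- On the slave: point the I/O thread at the master's binlog.
CHANGE MASTER TO
  MASTER_HOST     = '192.168.1.10',     -- placeholder master address
  MASTER_USER     = 'repl',             -- placeholder replication account
  MASTER_PASSWORD = 'secret',
  MASTER_LOG_FILE = 'mysql-bin.000001',
  MASTER_LOG_POS  = 154;

START SLAVE;
```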

What are the solutions for read/write separation?

Read/write separation depends on master/slave replication, and master/slave replication in turn serves read/write separation. Master/slave replication requires that the slave only reads and never writes: if a conflicting write is performed on the slave, SHOW SLAVE STATUS will report Slave_SQL_Running: No, and the slave must then be resynchronized by hand (one common recovery is sketched below).
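
A sketch of one common recovery when the SQL thread has stopped; it assumes the offending event can safely be skipped, i.e. the slave's conflicting write can simply be discarded:

```sql
-- Inspect the error that stopped the SQL thread
-- (see Last_SQL_Errno / Last_SQL_Error in the output).
SHOW SLAVE STATUS\G

-- Skip the single offending event and resume replication.
STOP SLAVE;
SET GLOBAL SQL_SLAVE_SKIP_COUNTER = 1;
START SLAVE;
```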

Scheme 1

  • Use the mysql-proxy proxy.
  • Advantages: implements read/write separation and load balancing directly, without code changes; the master and slave use the same account. mysql-proxy itself, however, is not recommended for actual production use.
  • Disadvantages: reduced performance; no transaction support.

Scheme 2

  • Use AOP + AbstractRoutingDataSource + annotations at the DAO layer to decide the data source per call.
  • If MyBatis is used, read/write separation can be pushed down into the ORM layer. A MyBatis plugin can intercept SQL statements so that all insert/update/delete statements go to the master and all selects go to the slave, transparently to the DAO layer. The plugin can choose between master and slave either by annotation or by analyzing whether the statement is read-only. One problem remains: transactions are not supported, so DataSourceTransactionManager must be rewritten to route read-only transactions to the slave and any transaction containing writes to the master.

Scheme 3

  • Use AOP + AbstractRoutingDataSource + annotations at the service layer to decide the data source; this approach can support transactions.
  • Disadvantage: AOP does not intercept calls between methods of the same class made via this.xx(), which requires special handling.

Backup plan, and the implementation principles of mysqldump and Xtrabackup

(1) Backup plan

  • For smaller databases, mysqldump can be used for a daily full backup, since it is lighter and more flexible (and its dump files compress to a smaller size).
  • For databases over 100 GB, Xtrabackup backs up much faster than mysqldump: perform a full backup once a week and an incremental backup every other day, during off-peak service periods.

(2) Backup and restoration time

  • Physical backup (Xtrabackup) is fast; logical backup (mysqldump) is slow.
  • Timing depends on the machine, especially disk speed; the figures below are for reference only:
  • 20 GB: about 2 minutes (mysqldump)
  • 80 GB: about 30 minutes (mysqldump)
  • 111 GB: about 30 minutes (mysqldump)
  • 288 GB: about 3 hours (Xtrabackup)
  • 3 TB: about 4 hours (Xtrabackup)
  • Logical import (restore) usually takes more than five times as long as the backup itself.

(3) How to handle the backup and restoration failure

  • Make full preparations before recovery so that nothing goes wrong during it: after the backup, verify its validity, and check permissions and disk space before restoring. If an error does occur, adjust according to the error message.

(4) mysqldump and Xtrabackup implementation principles

mysqldump:

  • mysqldump is a logical backup. Adding the --single-transaction option yields a consistent backup: the backup session sets its transaction isolation level to REPEATABLE READ (SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ) and then explicitly starts a transaction (START TRANSACTION /*!40100 WITH CONSISTENT SNAPSHOT */), which guarantees that everything read inside the transaction comes from a single snapshot; the table data is then read out within that transaction. With --master-data=1, it first executes FLUSH TABLES WITH READ LOCK, records the binlog position with SHOW MASTER STATUS, then unlocks immediately and reads the tables. Once all the data has been dumped, the transaction ends. (The statement sequence is sketched below.)
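
Roughly the statement sequence issued with --single-transaction --master-data=1; this is a sketch, not a verbatim trace of any particular mysqldump version:

```sql
FLUSH TABLES WITH READ LOCK;                        -- only with --master-data
SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ;
START TRANSACTION /*!40100 WITH CONSISTENT SNAPSHOT */;
SHOW MASTER STATUS;                                 -- record binlog file + position
UNLOCK TABLES;                                      -- global lock held only briefly
-- ... SELECT the rows of each table inside the snapshot ...
COMMIT;
```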

Xtrabackup:

  • Xtrabackup is a physical backup. It copies tablespace files while continuously scanning the redo logs being generated. When the InnoDB backup finishes, it performs a flush engine logs operation to make sure all redo logs have been flushed to disk. Because Xtrabackup does not copy the binlog, it must ensure that all redo logs reach disk, otherwise the last group of committed transactions could be lost. This point in time is when InnoDB completes its backup: the data files are not consistent by themselves, but the redo collected over this period makes them consistent during recovery. For MyISAM and other engines, it executes FLUSH TABLES WITH READ LOCK and then copies their files (a rough sketch of this final phase follows). This yields a near-perfect hot backup.
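
A rough SQL-level sketch of that final locking phase; the exact statements and their order vary across Xtrabackup versions:

```sql
FLUSH TABLES WITH READ LOCK;   -- freeze MyISAM and other non-InnoDB tables
SHOW MASTER STATUS;            -- record the binlog position for the backup
FLUSH ENGINE LOGS;             -- force all InnoDB redo logs to disk
UNLOCK TABLES;                 -- release the global read lock
```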

What are the repair methods for data table corruption?

Use myisamchk to repair it:

  • 1) Stop the mysql service before repairing.
  • 2) Open a command line and change to MySQL's /bin directory.
  • 3) Run: myisamchk --recover /path/to/database/*.MYI

Use the REPAIR TABLE or OPTIMIZE TABLE command to repair:

  • REPAIR TABLE table_name; repairs a damaged table.
  • OPTIMIZE TABLE table_name; reclaims unused space: disk space is not reclaimed immediately when rows are deleted, and OPTIMIZE TABLE rearranges the remaining rows and returns the free space.
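
A short sketch of the check-then-repair flow, using a hypothetical table name t1:

```sql
CHECK TABLE t1;      -- reports the table's health
REPAIR TABLE t1;     -- rebuilds the (MyISAM) table if it is damaged
OPTIMIZE TABLE t1;   -- defragments rows and reclaims free space
```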