1. Common MySQL tools

1.1 mysql

mysql is not the MySQL server, but the client tool used to connect to and operate the MySQL server.

Syntax:

mysql [options] [database]

1.1.1 Connection options

Parameters:
	-u, --user=name        Specifies the user name
	-p, --password[=name]  Specifies the password
	-h, --host=name        Specifies the IP address or domain name of the server
	-P, --port=#           Specifies the connection port

Example: mysql -h 127.0.0.1 -P 3306 -u root -p

	mysql -h127.0.0.1 -P3306 -uroot -p2143   -- no space between an option and its value

1.1.2 Execution Options

-e, --execute=name Executes the SQL statement and exits

This option allows you to execute a SQL statement from the command line and exit without entering the mysql interactive client, which is especially convenient for batch scripts.

Example: mysql -uroot -p2143 db01 -e "select * from tb_book";

1.2 mysqladmin

Mysqladmin is a client program that performs administrative operations. You can use it to check the configuration and current state of the server, create and delete databases, and so on.

You can view the help documentation with the mysqladmin --help command.

Example: mysqladmin -uroot -p2143 create 'test01';
	mysqladmin -uroot -p2143 drop 'test01';
	mysqladmin -uroot -p2143 version;

1.3 mysqlbinlog

Because the binary log files generated by the server are stored in binary format, you need the mysqlbinlog log management tool if you want to view them as text.

Syntax:

mysqlbinlog [options] log-file1 log-file2 ...

Options:

	-d, --database=name : lists operations for the specified database only.

	-o, --offset=# : skips the first n entries in the log.

	-r, --result-file=name : outputs the text-format log to the specified file.

	-s, --short-form : displays a simple format, omitting some information.

	--start-datetime=date1 --stop-datetime=date2 : lists all logs within the specified date interval.

	--start-position=pos1 --stop-position=pos2 : lists all logs within the specified position interval.

1.4 mysqldump

The mysqldump client tool is used to back up databases or to migrate data between servers. The backup contains the SQL statements for creating tables and inserting data.

Syntax:

mysqldump [options] db_name [tables]

mysqldump [options] --databases/-B db1 [db2 db3...]

mysqldump [options] --all-databases/-A

1.4.1 Connection options

Parameters:
	-u, --user=name        Specifies the user name
	-p, --password[=name]  Specifies the password
	-h, --host=name        Specifies the IP address or domain name of the server
	-P, --port=#           Specifies the connection port

1.4.2 Output content options

Parameters:
	--add-drop-database  Adds a DROP DATABASE statement before each CREATE DATABASE statement
	--add-drop-table     Adds a DROP TABLE statement before each CREATE TABLE statement (enabled by default; disable with --skip-add-drop-table)

	-n, --no-create-db   Does not include CREATE DATABASE statements
	-t, --no-create-info Does not include CREATE TABLE statements
	-d, --no-data        Does not include data

	-T, --tab=name       Automatically generates two files: a .sql file with the CREATE TABLE statement, and a .txt data file, equivalent to SELECT ... INTO OUTFILE

Example: mysqldump -uroot -p2143 db01 tb_book --add-drop-database --add-drop-table > a

	mysqldump -uroot -p2143 -T /tmp test city   -- dumps the city table of the test database to the /tmp directory, generating two files: a .sql file and a .txt file

1.5 mysqlimport/source

mysqlimport is a client data import tool, used to import the text files exported by mysqldump with the -T option.

Syntax:

mysqlimport [options]  db_name  textfile1  [textfile2...]

Example:

mysqlimport -uroot -p2143 test /tmp/city.txt

If you need to import a SQL file, you can use the source command in mysql:

source /root/tb_book.sql

1.6 mysqlshow

mysqlshow is a client object lookup tool, used to quickly find which databases exist, as well as the tables in a database, the columns in a table, or the indexes on a table.

Syntax:

mysqlshow [options] [db_name [table_name [col_name]]]

Parameters:

--count Displays statistics about databases and tables.

-i Displays the status of the specified database or table.

Example:

# Query statistics for every database and every table in it
mysqlshow -uroot -p2143 --count

# Query statistics for every table in the test database
mysqlshow -uroot -p2143 test --count

# Query statistics for the book table in the test database
mysqlshow -uroot -p2143 test book --count

If --count above is replaced with -i, detailed status information is displayed.

2. MySQL logs

In any kind of database, there are various kinds of logs that record all aspects of the database work to help the database administrator keep track of the various events that have occurred in the database. MySQL is no exception. In MySQL, there are four different types of logs, namely error logs, binary logs, query logs, and slow query logs. These logs record the traces of the database in different aspects.

2.1 Error Logs

The error log is one of the most important logs in MySQL, recording information about when mysqld is started and stopped, and when any serious errors occur during the server’s run. You can view this log when the database cannot be used properly due to any fault.

The log file is saved in the MySQL data directory (/var/lib/mysql) by default. The default log file name is hostname.err (where hostname is the host name).

View log location directive:

show variables like 'log_error%';

View log content:

tail -f /var/lib/mysql/xaxh-server.err

2.2 Binary Logs

2.2.1 overview

The binary log records all DDL (Data Definition Language) statements and DML (Data Manipulation Language) statements, but not the data query statements. This log is very important for data recovery in the event of a disaster. The master/slave replication of MySQL is implemented through this log.

Binary logs are disabled by default. You need to enable binary logs in the MySQL configuration file and configure the format of MySQL logs.
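Before editing the configuration, you can check whether binary logging is already enabled; a quick check from the mysql client (the output depends on your server):

```sql
-- Returns OFF when binary logging is disabled and must be enabled in the configuration file
show variables like 'log_bin';

-- Once binlog is enabled, this lists the generated log files and their sizes
show binary logs;
```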

Configuration file location: /usr/my.cnf

Log storage location: If the file name is specified but the path is not specified, logs are written to the Mysql data directory by default.

# Enable binlog; the log file prefix is mysqlbin ------> generated file names such as mysqlbin.000001, mysqlbin.000002
log_bin=mysqlbin

# Set the binary log format
binlog_format=STATEMENT

2.2.2 Log Format

STATEMENT

In this log format, every SQL statement that modifies data is recorded in the log file. You can use the mysqlbinlog tool provided by MySQL to view the text of each statement. In master/slave replication, the slave parses the log back into the original SQL text and re-executes it.

ROW

This log format records the data change of each row in the log file, rather than the SQL statement. For example, run the SQL statement update tb_book set status='1'. With the STATEMENT format, one SQL statement is recorded; with the ROW format, the data change of every affected row is recorded.

MIXED

This is the default MySQL log format, which is a mixture of STATEMENT and ROW. Statements are used by default, but rows are used for recording in some special cases. MIXED formats take advantage of the best of both modes while avoiding their worst.
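To confirm which format the server is actually using, you can query the binlog_format system variable from the mysql client:

```sql
-- Returns STATEMENT, ROW, or MIXED
show variables like 'binlog_format';
```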

2.2.3 Reading Logs

Since logs are stored in binary mode, they cannot be read directly. You need to use the mysqlbinlog tool to view logs. The syntax is as follows:

mysqlbinlog log-file

View STATEMENT logs

Execute insert statement:

insert into tb_book values(null,'Lucene','2088-05-01','0');

To view the log file:

mysqlbin.index: the log index file, which records the names of the log files.

mysqlbin.000001: the log file

View log content:

mysqlbinlog mysqlbin.000001

View logs in ROW format

Configuration:

# Enable binlog; the log file prefix is mysqlbin ------> generated file names such as mysqlbin.000001, mysqlbin.000002
log_bin=mysqlbin

# Set the binary log format
binlog_format=ROW

Insert data:

insert into tb_book values(null,'SpringCloud practical','2088-05-05','0');

If the log format is ROW, you cannot view the data directly. You can add -vv to mysqlbinlog

mysqlbinlog -vv mysqlbin.000002

2.2.4 Deleting logs

For a busy system, a large number of logs are generated every day. If logs are not cleared for a long time, they occupy a large amount of disk space. Here are some common ways to delete logs:

Method 1

Run the reset master command to delete all binlog files. After deletion, log numbering restarts from XXXX.000001.

Before executing, check the log files:

Execute the delete command:

reset master

After execution, view the log file:

Method 2

Run the purge master logs to 'mysqlbin.******' command to delete all logs numbered before ******.

Method 3

Run the purge master logs before 'yyyy-mm-dd hh24:mi:ss' command to delete all logs generated before the date "yyyy-mm-dd hh24:mi:ss".
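As a sketch of methods 2 and 3, with an illustrative file number and date (substitute your own values):

```sql
-- Method 2: delete all binlog files numbered before mysqlbin.000010
purge master logs to 'mysqlbin.000010';

-- Method 3: delete all binlog files generated before the given date
purge master logs before '2088-05-01 00:00:00';
```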

Method 4

Set the parameter --expire_logs_days=#. This parameter specifies the number of days to keep logs; after that many days, logs are deleted automatically, which helps reduce the DBA's maintenance workload.

The configuration is as follows:

2.3 Query Logs

The query log records all of the client's operation statements, whereas the binary log does not include statements that merely query data.

By default, the query log is disabled. To enable it, set the following parameters:

# Whether to enable the query log. Optional values: 0 or 1; 0 means disabled, 1 means enabled
general_log=1

# Set the log file name. If not specified, the default file name is host_name.log
general_log_file=file_name

Add the configuration above to the MySQL configuration file (/usr/my.cnf) and restart the service.

After the configuration is complete, perform the following operations in the database:

select * from tb_book;
select * from tb_book where id = 1;
update tb_book set name = 'Lucene Guide to Getting Started' where id = 5;
select * from tb_book where id < 8;

After executing, view the query log file again:

2.4 Slow Query Logs

The slow query log records all SQL statements whose execution time exceeds long_query_time and which examine at least min_examined_row_limit rows. The default value of long_query_time is 10 seconds, the minimum value is 0, and the precision can reach microseconds.

2.4.1 File location and format

Slow query logs are disabled by default. Slow query logging can be controlled using two parameters:

# Whether to enable the slow query log. 1 means enabled, 0 means disabled
slow_query_log=1

# Specify the file name of the slow query log
slow_query_log_file=slow_query.log

# Set the time limit for queries; statements that exceed it are considered slow and are logged. The default is 10s
long_query_time=10

2.4.2 Reading Logs

Like error logs and query logs, slow query logs are in plain text format and can be read directly.

1) Query the value of long_query_time.
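For example, from the mysql client:

```sql
-- The default is 10.000000 (seconds) if it has not been changed
show variables like 'long_query_time';
```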

2) Perform query operations

select id, title,price,num ,status from tb_item where id = 1;

This statement is not recorded in the slow query log because its execution time is short, about 0s.

select * from tb_item where title like '% Alcatel (OT-927) Carbon Black Unicom 3G Mobile phone dual card dual waiting 165454%' ;

The execution time of this SQL statement is 26.77 seconds and exceeds 10 seconds, so it is recorded in the slow query log file.

3) View the slow query log file

Run the cat command to query the log file directly:

If the slow query log contains a large number of entries, it is difficult to read the file directly. In this case, you can use MySQL's mysqldumpslow tool to classify and summarize the slow query log, for example: mysqldumpslow -s t -t 10 /var/lib/mysql/slow_query.log (sort by query time and show the top 10 statements).

3. MySQL replication

3.1 Replication Overview

Replication refers to transferring the DDL and DML operations of the master database to the slave server via binary logs, and then re-executing (also called redoing) those logs on the slave to keep the data in sync with the master.

MySQL supports simultaneous replication from one primary database to multiple secondary databases. The secondary database can also function as the primary database of other secondary servers to implement chain replication.

3.2 Replication Principles

The primary/secondary replication of MySQL works as follows.

At a high level, replication involves three steps:

  • When a transaction is committed, the master records the data changes as events in its binary log file (binlog).
  • The log events in the master's binlog are copied to the slave's relay log.
  • The slave replays the events in the relay log, applying the changes to its own data.

3.3 Replication Advantages

The advantages of MySQL replication mainly include the following three aspects:

  • When a problem occurs on the master, services can quickly switch over to the slave.
  • Queries can run on the slave while updates go to the master, achieving read/write separation and reducing the load on the master.
  • Backups can be performed on the slave, avoiding any impact on the master's service during backup.

3.4 Construction Procedure

3.4.1 master

1) In the master configuration file (/usr/my.cnf), configure the following information:

# MySQL server ID, which must be unique in the entire cluster environment
server-id=1

# Path and file name of the binlog files
log-bin=/var/lib/mysql/mysqlbin

# Error log; enabled by default
#log-err

# MySQL installation directory
#basedir

# MySQL temporary directory
#tmpdir

# MySQL data directory
#datadir

# 1 means read-only, 0 means read/write
read-only=0

# Databases that do not need to be synchronized
binlog-ignore-db=mysql

# Specify the databases to synchronize
#binlog-do-db=db01

2) Restart the MySQL service:

service mysql restart

3) Create an account for data synchronization and grant it the privileges needed for master/slave replication:

grant replication slave on *.* to 'itcast'@'192.168.192.131' identified by 'itcast';	

flush privileges;

-- The IP address of the master server is 192.168.192.130
-- The IP address of the slave server is 192.168.192.131

4) Check the master status:

show master status;

Parameter meanings:

File: the log file from which logs are pushed
Position: the position from which logs are pushed
Binlog_Ignore_DB: the databases that do not need to be synchronized

3.4.2 slave

1) Configure the following information in the slave configuration file:

# MySQL server ID, unique
server-id=2

# Specify the binlog file
log-bin=/var/lib/mysql/mysqlbin

2) Restart the MySQL service:

service mysql restart

3) Execute the following commands:

change master to master_host= '192.168.192.130', master_user='itcast', master_password='itcast', master_log_file='mysqlbin.000001', master_log_pos=413;

-- Specify the master's IP address, account, password, binary log file name, and the position from which to start synchronizing

This specifies, for the current slave, the corresponding master's IP address, user name, and password, as well as the log file and position from which to start synchronizing.

4) Enable synchronization

start slave;

show slave status \G;

5) Stop the synchronization operation

stop slave;

3.4.3 Verifying synchronization operations

1) On the master, create a database and a table, and insert data:

create database db01;

use db01;

create table user(
	id int(11) not null auto_increment,
	name varchar(50) not null,
	sex varchar(1),
	primary key (id)
)engine=innodb default charset=utf8;

insert into user(id,name,sex) values(null,'Tom','1');
insert into user(id,name,sex) values(null,'Trigger','0');
insert into user(id,name,sex) values(null,'Dawn','1');

2) Query data from the library for verification:

From the library, you can view the database you just created:

In this database, query the data in the user table:

4. Comprehensive cases

4.1 Requirement Analysis

A service system needs to record the access logs of the current service system. The access logs include the operator, operation time, access type, access method, request parameters, request result, request result type, and request duration. Detailed system access logs help you trace user requests in the system and view user access records in the system management background.

Logging information in the system can be achieved through AOP of the Spring framework. The specific request processing flow is as follows:

4.2 Building a case environment

4.2.1 Database tables

CREATE DATABASE mysql_demo DEFAULT CHARACTER SET utf8mb4 ;

CREATE TABLE `brand` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT,
  `name` varchar(255) DEFAULT NULL COMMENT 'Brand Name',
  `first_char` varchar(1) DEFAULT NULL COMMENT 'Brand Initials',
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;



CREATE TABLE `item` (
  `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'commodity id',
  `title` varchar(100) NOT NULL COMMENT 'Commodity title',
  `price` double(10,2) NOT NULL COMMENT 'Commodity price, unit: yuan',
  `num` int(10) NOT NULL COMMENT 'Stock quantity',
  `categoryid` bigint(10) NOT NULL COMMENT 'Category, leaf category',
  `status` varchar(1) DEFAULT NULL COMMENT 'Product status, 1- Normal, 2- Removed, 3- Deleted',
  `sellerid` varchar(50) DEFAULT NULL COMMENT 'Merchant ID',
  `createtime` datetime DEFAULT NULL COMMENT 'Creation time',
  `updatetime` datetime DEFAULT NULL COMMENT 'Update Time',
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='List of Goods';



CREATE TABLE `user` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `username` varchar(45) NOT NULL,
  `password` varchar(96) NOT NULL,
  `name` varchar(45) NOT NULL,
  `birthday` datetime DEFAULT NULL,
  `sex` char(1) DEFAULT NULL,
  `email` varchar(45) DEFAULT NULL,
  `phone` varchar(45) DEFAULT NULL,
  `qq` varchar(32) DEFAULT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;


CREATE TABLE `operation_log` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT COMMENT 'ID',
  `operate_class` varchar(200) DEFAULT NULL COMMENT 'Action class',
  `operate_method` varchar(200) DEFAULT NULL COMMENT 'Operation method',
  `return_class` varchar(200) DEFAULT NULL COMMENT 'Return value type',
  `operate_user` varchar(20) DEFAULT NULL COMMENT 'Operating user',
  `operate_time` varchar(20) DEFAULT NULL COMMENT 'Operation time',
  `param_and_value` varchar(500) DEFAULT NULL COMMENT 'Request Parameter Name and parameter Value',
  `cost_time` bigint(20) DEFAULT NULL COMMENT 'Execution method time, in ms',
  `return_value` varchar(200) DEFAULT NULL COMMENT 'Return value',
  PRIMARY KEY (`id`)
) ENGINE=InnoDB  DEFAULT CHARSET=utf8mb4;

4.2.2 pom.xml

<properties>
  <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
  <maven.compiler.source>1.8</maven.compiler.source>
  <maven.compiler.target>1.8</maven.compiler.target>
  <spring.version>5.0.2.RELEASE</spring.version>
  <slf4j.version>1.6.6</slf4j.version>
  <log4j.version>1.2.12</log4j.version>
  <mybatis.version>3.4.5</mybatis.version>
</properties>

<dependencies>
  <!-- spring -->
  <dependency>
    <groupId>org.aspectj</groupId>
    <artifactId>aspectjweaver</artifactId>
    <version>1.6.8</version>
  </dependency>

  <dependency>
    <groupId>org.projectlombok</groupId>
    <artifactId>lombok</artifactId>
    <version>1.16.16</version>
  </dependency>

  <dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-context</artifactId>
    <version>${spring.version}</version>
  </dependency>

  <dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-context-support</artifactId>
    <version>${spring.version}</version>
  </dependency>

  <dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-orm</artifactId>
    <version>${spring.version}</version>
  </dependency>

  <dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-test</artifactId>
    <version>${spring.version}</version>
  </dependency>

  <dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-webmvc</artifactId>
    <version>${spring.version}</version>
  </dependency>

  <dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-tx</artifactId>
    <version>${spring.version}</version>
  </dependency>

  <dependency>
    <groupId>junit</groupId>
    <artifactId>junit</artifactId>
    <version>4.12</version>
    <scope>test</scope>
  </dependency>

  <dependency>
    <groupId>javax.servlet</groupId>
    <artifactId>javax.servlet-api</artifactId>
    <version>3.1.0</version>
    <scope>provided</scope>
  </dependency>

  <dependency>
    <groupId>javax.servlet.jsp</groupId>
    <artifactId>jsp-api</artifactId>
    <version>2.0</version>
    <scope>provided</scope>
  </dependency>


  <dependency>
    <groupId>log4j</groupId>
    <artifactId>log4j</artifactId>
    <version>${log4j.version}</version>
  </dependency>

  <dependency>
    <groupId>org.mybatis</groupId>
    <artifactId>mybatis</artifactId>
    <version>${mybatis.version}</version>
  </dependency>

  <dependency>
    <groupId>org.mybatis</groupId>
    <artifactId>mybatis-spring</artifactId>
    <version>1.3.0</version>
  </dependency>

  <dependency>
    <groupId>c3p0</groupId>
    <artifactId>c3p0</artifactId>
    <version>0.9.1.2</version>
  </dependency>

  <dependency>
    <groupId>mysql</groupId>
    <artifactId>mysql-connector-java</artifactId>
    <version>5.1.5</version>
  </dependency>

  <dependency>
    <groupId>com.fasterxml.jackson.core</groupId>
    <artifactId>jackson-core</artifactId>
    <version>2.9.0</version>
  </dependency>

  <dependency>
    <groupId>com.fasterxml.jackson.core</groupId>
    <artifactId>jackson-databind</artifactId>
    <version>2.9.0</version>
  </dependency>

  <dependency>
    <groupId>com.fasterxml.jackson.core</groupId>
    <artifactId>jackson-annotations</artifactId>
    <version>2.9.0</version>
  </dependency>
</dependencies>




 <build>
   <plugins>
     <plugin>
       <groupId>org.apache.tomcat.maven</groupId>
       <artifactId>tomcat7-maven-plugin</artifactId>
       <version>2.2</version>
       <configuration>
         <port>8080</port>
         <path>/</path>
         <uriEncoding>utf-8</uriEncoding>
       </configuration>
     </plugin>
   </plugins>
 </build>

4.2.3 web.xml


<?xml version="1.0" encoding="UTF-8"?>
<web-app xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://java.sun.com/xml/ns/javaee"
       xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd"
       version="2.5">

    <!-- Character encoding filter -->
    <filter>
        <filter-name>CharacterEncodingFilter</filter-name>
        <filter-class>org.springframework.web.filter.CharacterEncodingFilter</filter-class>
        <init-param>
            <param-name>encoding</param-name>
            <param-value>utf-8</param-value>
        </init-param>
        <init-param>
            <param-name>forceEncoding</param-name>
            <param-value>true</param-value>
        </init-param>
    </filter>
    <filter-mapping>
        <filter-name>CharacterEncodingFilter</filter-name>
        <url-pattern>/*</url-pattern>
    </filter-mapping>

    <context-param>
        <param-name>contextConfigLocation</param-name>
        <param-value>classpath:applicationContext.xml</param-value>
    </context-param>
    <listener>
        <listener-class>org.springframework.web.context.ContextLoaderListener</listener-class>
    </listener>


    <servlet>
        <servlet-name>springmvc</servlet-name>
        <servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class>
        <!-- Specify the configuration file to load with contextConfigLocation -->
        <init-param>
            <param-name>contextConfigLocation</param-name>
            <param-value>classpath:springmvc.xml</param-value>
        </init-param>
    </servlet>
    <servlet-mapping>
        <servlet-name>springmvc</servlet-name>
        <url-pattern>*.do</url-pattern>
    </servlet-mapping>

    <welcome-file-list>
      <welcome-file>log-datalist.html</welcome-file>
    </welcome-file-list>
</web-app>

4.2.4 db.properties

jdbc.driver=com.mysql.jdbc.Driver
jdbc.url=jdbc:mysql://192.168.142.128:3306/mysql_demo
jdbc.username=root
jdbc.password=itcast

4.2.5 applicationContext.xml


<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:aop="http://www.springframework.org/schema/aop"
       xmlns:tx="http://www.springframework.org/schema/tx"
       xmlns:context="http://www.springframework.org/schema/context"
	   xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx.xsd http://www.springframework.org/schema/aop http://www.springframework.org/schema/aop/spring-aop.xsd http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context.xsd">

    <!-- Load the configuration file -->
    <context:property-placeholder location="classpath:db.properties"/>

    <!-- Configure the packages to scan when Spring creates the container -->
    <context:component-scan base-package="cn.itcast">
        <context:exclude-filter type="annotation" expression="org.springframework.stereotype.Controller">	
        </context:exclude-filter>
    </context:component-scan>

    <!-- Configure the MyBatis session factory -->
    <bean id="sqlSessionFactory" class="org.mybatis.spring.SqlSessionFactoryBean">
        <property name="dataSource" ref="dataSource"/>
        <property name="typeAliasesPackage" value="cn.itcast.pojo"/>
     </bean>

    <!-- Configure the data source -->
    <bean id="dataSource" class="com.mchange.v2.c3p0.ComboPooledDataSource">
        <property name="driverClass" value="${jdbc.driver}"></property>
        <property name="jdbcUrl" value="${jdbc.url}"></property>
        <property name="user" value="${jdbc.username}"></property>
        <property name="password" value="${jdbc.password}"></property>
    </bean>

    <!-- Configure the mapper scanner -->
    <bean class="org.mybatis.spring.mapper.MapperScannerConfigurer">
        <property name="basePackage" value="cn.itcast.mapper"/>
    </bean>

    <!-- Configure the transaction manager -->
    <bean id="transactionManager" class="org.springframework.jdbc.datasource.DataSourceTransactionManager">
        <property name="dataSource" ref="dataSource"/>
    </bean>

    <!-- Configure annotation-driven transactions -->
    <tx:annotation-driven transaction-manager="transactionManager"></tx:annotation-driven>
</beans>

4.2.6 springmvc.xml


<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:mvc="http://www.springframework.org/schema/mvc"
       xmlns:context="http://www.springframework.org/schema/context"
       xmlns:aop="http://www.springframework.org/schema/aop"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd http://www.springframework.org/schema/mvc http://www.springframework.org/schema/mvc/spring-mvc.xsd http://www.springframework.org/schema/aop http://www.springframework.org/schema/aop/spring-aop.xsd http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context.xsd">

    <context:component-scan base-package="cn.itcast.controller"></context:component-scan>

    <mvc:annotation-driven></mvc:annotation-driven>

    <aop:aspectj-autoproxy />

</beans>

4.2.7 Importing Basic Projects

4.3 Logging operations using AOP

4.3.1 Custom annotations

Use a custom annotation to indicate whether a method needs to be logged or not. If the method needs to be logged during access, the annotation can be marked on the method.

@Inherited
@Documented
@Target(ElementType.METHOD)
@Retention(RetentionPolicy.RUNTIME)
public @interface OperateLog {
}

4.3.2 Defining notification classes

@Component
@Aspect
public class OperateAdvice {
   
   private static Logger log = Logger.getLogger(OperateAdvice.class);
   
   @Autowired
   private OperationLogService operationLogService;
   

   @Around("execution(* cn.itcast.controller.*.*(..)) && @annotation(operateLog)")
   public Object insertLogAround(ProceedingJoinPoint pjp , OperateLog operateLog) throws Throwable{
      System.out.println("************************ log [start] ************************");
      
      OperationLog op = new OperationLog();
      
      DateFormat sdf = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");

      op.setOperateTime(sdf.format(new Date()));
      op.setOperateUser(DataUtils.getRandStr(8));
      
      op.setOperateClass(pjp.getTarget().getClass().getName());
      op.setOperateMethod(pjp.getSignature().getName());
      
      // Gets the argument passed when the method is called
      Object[] args = pjp.getArgs();
      op.setParamAndValue(Arrays.toString(args));

      long start_time = System.currentTimeMillis();

      // proceed with the target method
      Object object = pjp.proceed();

      long end_time = System.currentTimeMillis();
      op.setCostTime(end_time - start_time);

      if(object != null){
         op.setReturnClass(object.getClass().getName());
         op.setReturnValue(object.toString());
      }else{
         op.setReturnClass("java.lang.Object");
         op.setReturnValue("void");
      }

      log.error(JsonUtils.obj2JsonString(op));

      operationLogService.insert(op);
      
      System.out.println("************************* log [end] *************************");

      return object;
   }
}

4.3.3 Annotate the method

Add the @OperateLog annotation to the methods that need to be logged.

@OperateLog
@RequestMapping("/insert")
public Result insert(@RequestBody Brand brand){
    try {
        brandService.insert(brand);
        return new Result(true, "Operation successful");
    } catch (Exception e) {
        e.printStackTrace();
        return new Result(false, "Operation failed");
    }
}

4.4 Back-end Code implementation of Log Query

4.4.1 Mapper interfaces

public interface OperationLogMapper {

    public void insert(OperationLog operationLog);

    public List<OperationLog> selectListByCondition(Map dataMap);

    public Long countByCondition(Map dataMap);

}

4.4.2 Mapper.xml mapping configuration file


<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE mapper PUBLIC "-//mybatis.org//DTD Mapper 3.0//EN" "http://mybatis.org/dtd/mybatis-3-mapper.dtd">
<mapper namespace="cn.itcast.mapper.OperationLogMapper" >

    <insert id="insert" parameterType="operationLog">
        INSERT INTO operation_log(id,return_value,return_class,operate_user,operate_time,param_and_value,
        operate_class,operate_method,cost_time)
      VALUES(NULL,#{returnValue},#{returnClass},#{operateUser},#{operateTime},#{paramAndValue},
        #{operateClass},#{operateMethod},#{costTime})
    </insert>

    <select id="selectListByCondition" parameterType="map" resultType="operationLog">
      select
        id ,
        operate_class as operateClass ,
        operate_method as operateMethod,
        return_class as returnClass,
        operate_user as operateUser,
        operate_time as operateTime,
        param_and_value as paramAndValue,
        cost_time as costTime,
        return_value as returnValue
      from operation_log
      <include refid="oplog_where"/>
      limit #{start},#{size}
    </select>


    <select id="countByCondition" resultType="long" parameterType="map">
        select count(*) from operation_log
        <include refid="oplog_where"/>
    </select>


    <sql id="oplog_where">
        <where>
            <if test="operateClass != null and operateClass != '' ">
                and operate_class = #{operateClass}
            </if>
            <if test="operateMethod != null and operateMethod != '' ">
                and operate_method = #{operateMethod}
            </if>
            <if test="returnClass != null and returnClass != '' ">
                and return_class = #{returnClass}
            </if>
            <if test="costTime != null">
                and cost_time =  #{costTime}
            </if>
        </where>
    </sql>

</mapper>

4.4.3 Service

@Service
@Transactional
public class OperationLogService {

    //private static Logger logger = Logger.getLogger(OperationLogService.class);

    @Autowired
    private OperationLogMapper operationLogMapper;

    // Insert data
    public void insert(OperationLog operationLog){
        operationLogMapper.insert(operationLog);
    }

    // Query by condition, with pagination
    public PageResult selectListByCondition(Map dataMap, Integer pageNum, Integer pageSize){

        if (dataMap == null) {
            dataMap = new HashMap();
        }
        dataMap.put("start", (pageNum - 1) * pageSize);
        dataMap.put("size", pageSize);

        Object costTime = dataMap.get("costTime");
        if (costTime != null) {
            if ("".equals(costTime.toString())) {
                dataMap.put("costTime", null);
            } else {
                dataMap.put("costTime", Long.valueOf(costTime.toString()));
            }
        }

        System.out.println(dataMap);

        long countStart = System.currentTimeMillis();
        Long count = operationLogMapper.countByCondition(dataMap);
        long countEnd = System.currentTimeMillis();
        System.out.println("Count Cost Time : " + (countEnd - countStart) + " ms");

        List<OperationLog> list = operationLogMapper.selectListByCondition(dataMap);
        long queryEnd = System.currentTimeMillis();
        System.out.println("Query Cost Time : " + (queryEnd - countEnd) + " ms");

        return new PageResult(count, list);
    }
}

4.4.4 Controller
@RestController
@RequestMapping("/operationLog")
public class OperationLogController {

    @Autowired
    private OperationLogService operationLogService;

    @RequestMapping("/findList")
    public PageResult findList(@RequestBody Map dataMap, Integer pageNum, Integer pageSize){
        PageResult page = operationLogService.selectListByCondition(dataMap, pageNum, pageSize);
        return page;
    }
}

4.5 Front-end code implementation of log query

The front-end layout uses Bootstrap + AdminLTE, with Vue.js for the view layer.

4.5.1 js

<script>
   var vm = new Vue({
       el: '#app',
       data: {
           dataList: [],
           searchEntity: {
               operateClass: '',
               operateMethod: '',
               returnClass: '',
               costTime: ''
           },
           page: 1,        // Current page number
           pageSize: 10,   // Number of entries per page
           total: 150,     // Total number of records
           maxPage: 8      // Maximum number of page links to show
       },
       methods: {
           pageHandler: function (page) {
               this.page = page;
               this.search();
           },

           search: function () {
               var _this = this;
               this.showLoading();
               axios.post('/operationLog/findList.do?pageNum=' + _this.page + "&pageSize=" + _this.pageSize, _this.searchEntity).then(function (response) {
                   if (response) {
                       _this.dataList = response.data.dataList;
                       _this.total = response.data.total;
                       _this.hideLoading();
                   }
               })
           },

           showLoading: function () {
               $('#loadingModal').modal({backdrop: 'static', keyboard: false});
           },

           hideLoading: function () {
               $('#loadingModal').modal('hide');
           }
       },
       created: function () {
           this.pageHandler(1);
       }
   });
</script>

4.5.2 Display of List Data

<tr v-for="item in dataList">
    <td><input name="ids" type="checkbox"></td>
    <td>{{item.id}}</td>
    <td>{{item.operateClass}}</td>
    <td>{{item.operateMethod}}</td>
    <td>{{item.returnClass}}</td>
    <td>{{item.returnValue}}</td>
    <td>{{item.operateUser}}</td>
    <td>{{item.operateTime}}</td>
    <td>{{item.costTime}}</td>
    <td class="text-center">
        <button type="button" class="btn bg-olive btn-xs">details</button>
        <button type="button" class="btn bg-olive btn-xs">delete</button>
    </td>
</tr>

4.5.3 Paging plug-ins

<div class="wrap" id="wrap">
    <zpagenav v-bind:page="page" v-bind:page-size="pageSize" v-bind:total="total"
              v-bind:max-page="maxPage"  v-on:pagehandler="pageHandler">
    </zpagenav>
</div>

4.6 Joint test

You can use Postman to access the service system, then check the database to verify that user access logs are being recorded.

4.7 Analyzing Performance Problems

As time goes by, the amount of data in this table will become larger and larger. Therefore, we need to optimize the performance of the log query module according to business requirements.

1) Paging query optimization

Since log queries are paginated, each page view requires at least two queries:

A. Query the total number of records that match the conditions. – > count operation

B. Query the list data that meets the conditions. –> Paging query limit operation
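The arithmetic behind these two queries is simple but worth pinning down. A minimal sketch, where the class and method names are illustrative and not part of the project code:

```java
public class PageMath {

    // Offset for "limit offset, size": number of records before the requested page
    public static int offset(int pageNum, int pageSize) {
        return (pageNum - 1) * pageSize;
    }

    // Total pages needed to show `total` records at `pageSize` per page
    public static long totalPages(long total, int pageSize) {
        return (total + pageSize - 1) / pageSize;
    }

    public static void main(String[] args) {
        // Page 3 with 10 records per page starts after record 20: limit 20, 10
        System.out.println(offset(3, 10));      // 20
        System.out.println(totalPages(95, 10)); // 10
    }
}
```

The count query supplies `total`, and `offset` feeds the limit clause of the list query.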

In general, count(*) has to scan a large number of rows (that is, access a large amount of data) to produce an accurate result, so the SQL itself is difficult to optimize. If count must be optimized, a different approach is needed: maintain a summary table, or a Redis counter, that records the number of rows in the table. Reading the total then becomes a cheap lookup and is very efficient, although such a counter cannot be guaranteed to be 100% accurate. For database operations, of "fast, accurate, simple to implement", only two can be satisfied at once; one must be given up.
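The counter idea can be sketched in plain Java before involving the database. Here an AtomicLong stands in for the summary table (or a Redis key); the class name is illustrative only:

```java
import java.util.concurrent.atomic.AtomicLong;

// Minimal sketch of the summary-counter idea: instead of running count(*)
// on the log table, keep a counter that is incremented on every insert.
public class LogCounter {

    private final AtomicLong count = new AtomicLong(0);

    // Called alongside each log insert
    public long recordInsert() {
        return count.incrementAndGet();
    }

    // Cheap total lookup for the paging query
    public long total() {
        return count.get();
    }
}
```

With a real summary table or Redis key, the increment and the lookup replace the expensive count(*) in exactly the same positions.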

2) Conditional query optimization

For conditional query, it is necessary to build indexes for query conditions and sorting fields.

3) Read-write separation

In a master/slave replication cluster, write operations go to the master node and read operations go to the slave nodes.

4) MySQL server optimization

5) Application optimization

4.8 Performance Optimization – Paging

4.8.1 Optimize count

Create a table to record the total amount of data in the log table:

create table log_counter(
	logcount bigint not null
)engine = innodb default CHARSET = utf8;

Update the table after each insert:

<update id="updateLogCounter" >
    update log_counter set logcount = logcount + 1
</update>

When paging, the total record count can then be read from this counter table instead of running count(*) against the log table:

<select id="countLogFromCounter" resultType="long">
    select logcount from log_counter limit 1
</select>

4.8.2 Optimize limit

When paging, performance can generally be improved by creating a covering index. A very common and troublesome paging scenario is "limit 1000000,10": MySQL has to read through the first 1000010 records but returns only records 1000001 to 1000010; the first 1000000 records are read and discarded, which makes the query very expensive.

The problem appears whenever a user jumps to a page deep in the result set, and the query becomes very slow.

Optimized SQL:

select * from operation_log limit 3000000, 10;

Optimize the above SQL to:

select * from operation_log t, (select id from operation_log order by id limit 3000000, 10) b where t.id = b.id;

<select id="selectListByCondition" parameterType="map" resultType="operationLog">
  select
    t.id,
    operate_class as operateClass,
    operate_method as operateMethod,
    return_class as returnClass,
    operate_user as operateUser,
    operate_time as operateTime,
    param_and_value as paramAndValue,
    cost_time as costTime,
    return_value as returnValue
  from operation_log t,
    (select id from operation_log
    <include refid="oplog_where"/>
    order by id limit #{start},#{size}) b
  where t.id = b.id
</select>

4.9 Performance Optimization – Indexes

Queries filtered by the operating user are inefficient and time-consuming, because no index was created on that column when the table structure was defined.

CREATE INDEX idx_user_method_return_cost ON operation_log(operate_user,operate_method,return_class,cost_time);

Similarly, indexes should be created on the operation method, return value type, and operation duration fields to speed up queries on those conditions.

CREATE INDEX idx_optlog_method_return_cost ON operation_log(operate_method,return_class,cost_time);

CREATE INDEX idx_optlog_return_cost ON operation_log(return_class,cost_time);

CREATE INDEX idx_optlog_cost ON operation_log(cost_time);

4.10 Performance Optimization – Sorting

If the business requires the query results to be sorted, appropriate indexes should also be created on the sort fields to improve sorting efficiency.

4.11 Performance Optimization – Read/write Separation

4.11.1 Overview

Building on MySQL master-slave replication, read/write separation reduces the load on a single MySQL node and improves access efficiency. The architecture is as follows:

Read/write separation can be implemented with Spring AOP by dynamically switching the data source per operation:

4.11.2 Implementation method

db.properties

# Write operations go to the master
jdbc.write.driver=com.mysql.jdbc.Driver
jdbc.write.url=jdbc:mysql://192.168.142.128:3306/mysql_demo
jdbc.write.username=root
jdbc.write.password=itcast

# Read operations go to the slave
jdbc.read.driver=com.mysql.jdbc.Driver
jdbc.read.url=jdbc:mysql://192.168.142.129:3306/mysql_demo
jdbc.read.username=root
jdbc.read.password=itcast

applicationContext-datasource.xml


<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:aop="http://www.springframework.org/schema/aop"
       xmlns:tx="http://www.springframework.org/schema/tx"
       xmlns:context="http://www.springframework.org/schema/context"
       xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx.xsd http://www.springframework.org/schema/aop http://www.springframework.org/schema/aop/spring-aop.xsd http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context.xsd">


    <!-- Configure the read data source -->
    <bean id="readDataSource" class="com.mchange.v2.c3p0.ComboPooledDataSource" destroy-method="close"  lazy-init="true">
        <property name="driverClass" value="${jdbc.read.driver}"></property>
        <property name="jdbcUrl" value="${jdbc.read.url}"></property>
        <property name="user" value="${jdbc.read.username}"></property>
        <property name="password" value="${jdbc.read.password}"></property>
    </bean>


    <!-- Configure the write data source -->
    <bean id="writeDataSource" class="com.mchange.v2.c3p0.ComboPooledDataSource"  destroy-method="close"  lazy-init="true">
        <property name="driverClass" value="${jdbc.write.driver}"></property>
        <property name="jdbcUrl" value="${jdbc.write.url}"></property>
        <property name="user" value="${jdbc.write.username}"></property>
        <property name="password" value="${jdbc.write.password}"></property>
    </bean>


    <!-- Configure the dynamically routed read/write data source; inherits from AbstractRoutingDataSource -->
    <bean id="dataSource" class="cn.itcast.aop.datasource.ChooseDataSource" lazy-init="true">
        <property name="targetDataSources">
            <map key-type="java.lang.String" value-type="javax.sql.DataSource">
                <entry key="write" value-ref="writeDataSource"/>
                <entry key="read" value-ref="readDataSource"/>
            </map>
        </property>

        <property name="defaultTargetDataSource" ref="writeDataSource"/>

        <property name="methodType">
            <map key-type="java.lang.String">
                <entry key="read" value=",get,select,count,list,query,find"/>
                <entry key="write" value=",add,create,update,delete,remove,insert"/>
            </map>
        </property>
    </bean>

</beans>

ChooseDataSource

public class ChooseDataSource extends AbstractRoutingDataSource {

    public static Map<String, List<String>> METHOD_TYPE_MAP = new HashMap<String, List<String>>();

    /**
     * Implements the abstract method of the parent class to determine the data source key
     * @return the data source key for the current operation
     */
    protected Object determineCurrentLookupKey() {
        return DataSourceHandler.getDataSource();
    }

    // Set the data source corresponding to each method name prefix
    public void setMethodType(Map<String, String> map) {
        for (String key : map.keySet()) {
            List<String> v = new ArrayList<String>();
            String[] types = map.get(key).split(",");
            for (String type : types) {
                if (!StringUtils.isEmpty(type)) {
                    v.add(type);
                }
            }
            METHOD_TYPE_MAP.put(key, v);
        }
        System.out.println("METHOD_TYPE_MAP : " + METHOD_TYPE_MAP);
    }
}
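The prefix rule configured through setMethodType can be sketched independently of Spring. The helper below mirrors the prefix lists from the applicationContext configuration; the class and method names are illustrative:

```java
import java.util.Arrays;
import java.util.List;

// Minimal sketch of the method-name-prefix routing rule:
// get/select/... prefixes route to the read source, the rest to write.
public class PrefixRouter {

    static final List<String> READ  = Arrays.asList("get", "select", "count", "list", "query", "find");
    static final List<String> WRITE = Arrays.asList("add", "create", "update", "delete", "remove", "insert");

    // Returns the data source key for a service method name
    public static String route(String methodName) {
        for (String p : READ)  if (methodName.startsWith(p)) return "read";
        for (String p : WRITE) if (methodName.startsWith(p)) return "write";
        return "write"; // unknown prefixes fall back to defaultTargetDataSource
    }
}
```

For example, `selectListByCondition` routes to "read" while `insert` routes to "write".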

DataSourceHandler

public class DataSourceHandler {

    // Holds the data source key for the current thread
    public static final ThreadLocal<String> holder = new ThreadLocal<String>();

    /**
     * Bind the data source key ("read" or "write") to the current thread
     */
    public static void putDataSource(String datasource) {
        holder.set(datasource);
    }

    /**
     * Get the data source key bound to the current thread
     */
    public static String getDataSource() {
        return holder.get();
    }
}
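The routing works because ThreadLocal keeps an independent value per thread, so concurrent requests cannot overwrite each other's data source choice. A minimal sketch of that guarantee, outside of Spring (names are illustrative):

```java
// Minimal sketch of the ThreadLocal behaviour the router relies on:
// each thread sees only the data source key it set itself.
public class ThreadLocalDemo {

    static final ThreadLocal<String> holder = new ThreadLocal<String>();

    // Runs holder.set/get in a fresh thread and reports what that thread saw
    public static String routeInThread(final String key) {
        final String[] seen = new String[1];
        Thread t = new Thread(new Runnable() {
            public void run() {
                holder.set(key);        // e.g. "read" or "write"
                seen[0] = holder.get(); // each thread sees only its own value
            }
        });
        t.start();
        try {
            t.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return seen[0];
    }

    public static void main(String[] args) {
        holder.set("write");                  // main thread chooses "write"
        String other = routeInThread("read"); // another thread chooses "read"
        // the main thread's binding is unaffected by the other thread
        System.out.println(holder.get() + " / " + other); // prints "write / read"
    }
}
```

This is why the advice can safely set the key before the mapper call: the binding lives and dies with the request's thread.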

DataSourceAspect

@Aspect
@Component
@Order(-9999)
@EnableAspectJAutoProxy(proxyTargetClass = true)
public class DataSourceAspect {

    protected Logger logger = LoggerFactory.getLogger(this.getClass());

    /**
     * Before advice: choose the data source based on the method name prefix
     */
    @Before("execution(* cn.itcast.service.*.*(..))")
    @Order(-9999)
    public void before(JoinPoint point) {

        String className = point.getTarget().getClass().getName();
        String method = point.getSignature().getName();
        logger.info(className + "." + method + "(" + Arrays.asList(point.getArgs()) + ")");

        try {
            for (String key : ChooseDataSource.METHOD_TYPE_MAP.keySet()) {
                for (String type : ChooseDataSource.METHOD_TYPE_MAP.get(key)) {
                    if (method.startsWith(type)) {
                        System.out.println("key : " + key);
                        DataSourceHandler.putDataSource(key);
                        break;
                    }
                }
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

The @Order(-9999) annotation controls the order in which the transaction manager and this advice are applied: the advice must run first, before a transaction begins, so that it can decide which data source to use.

4.11.3 Validation

Run the following SQL statement on both the master and slave instances and compare the counters: read operations should increase the read counters on the slave, and write operations should increase the write counters on the master.

show status like 'Innodb_rows_%' ;

4.11.4 Principle

4.12 Performance Optimization – Application optimization

4.12.1 Cache

Redis can be used as a cache in the business system to hold frequently accessed basic data, reducing the load on the relational database and improving access efficiency.
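A minimal cache-aside sketch of this pattern, with a HashMap standing in for Redis and a stub standing in for the real SQL query (all names are illustrative):

```java
import java.util.HashMap;
import java.util.Map;

// Minimal cache-aside sketch: read from the cache first; on a miss,
// load from the database and populate the cache for subsequent reads.
public class CacheAside {

    private final Map<String, String> cache = new HashMap<String, String>();
    private int dbHits = 0;

    // Stand-in for a slow SQL query
    private String loadFromDb(String key) {
        dbHits++;
        return "value-of-" + key;
    }

    public String get(String key) {
        String v = cache.get(key);
        if (v == null) {          // cache miss: load and populate
            v = loadFromDb(key);
            cache.put(key, v);
        }
        return v;
    }

    public int getDbHits() { return dbHits; }
}
```

With Redis in place of the map, repeated reads of the same basic data hit the cache and the database is queried only once per key (until the entry expires or is evicted).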

4.12.2 Full-text retrieval

If the business system holds a large amount of data (tens of millions of rows), database queries, especially paging queries, become very slow (paging requires a count first). To improve access efficiency, consider introducing Solr or Elasticsearch for full-text retrieval.

4.12.3 Non-relational databases

You can also consider storing non-core (critical) data in MongoDB to improve the efficiency of inserts and queries.