Preface

Some time ago I bought a book on TCP/IP and worked out how computers communicate with each other. The foundation of network communication is the TCP/IP protocol suite, also called the TCP/IP protocol stack, or simply TCP/IP. TCP/IP is not just the TCP and IP protocols; it is only that these two are used most often, so the whole suite is named after them.

Protocols we use today such as HTTP, FTP, SMTP, DNS, HTTPS, SSH, MQTT and RPC are all built on TCP/IP. The following figure shows TCP at the transport layer.

The Linux kernel hides the complexity of the TCP/IP communication model from us. Since everything in Linux is a file, the kernel abstracts the Socket as a file, and we actually program against the Socket through a handful of system calls.

In Java, Netty provides a lot of convenience for the network communication part, but once you understand these underlying principles, Netty is not hard to grasp.

Kernel Parameters

TCP/IP kernel parameters: /proc/sys/net/*

File system parameters: /proc/sys/fs/*

https://www.kernel.org/doc/Documentation/sysctl/net.txt
https://www.kernel.org/doc/Documentation/networking/ip-sysctl.txt
https://www.kernel.org/doc/Documentation/sysctl/fs.txt

Kernel parameters can be changed in two ways. Take tcp_syn_retries as an example:

  • Temporary change

# View the current value, e.g. net.ipv4.tcp_syn_retries = 6
sysctl -a | grep tcp_syn_retries
# Everything in Linux is a file, so the parameter is kept in a file under /proc/sys;
# the rest of the path comes from the parameter name net.ipv4.tcp_syn_retries.
# Replace the value with 5:
echo 5 > /proc/sys/net/ipv4/tcp_syn_retries
# Check the modified value
sysctl -a | grep tcp_syn_retries
  • Permanent change

# Append the setting to /etc/sysctl.conf
echo "net.ipv4.tcp_syn_retries = 7" >> /etc/sysctl.conf
# Make the change take effect
sysctl -p
# Check the modified value
sysctl -a | grep tcp_syn_retries

What this article covers

  • BIO communication model (illustrated) and Java code implementation
  • NIO communication model and Java code implementation
  • Multiplexing communication model (illustrated), mainly epoll, explained in detail

Driven by the Internet's demand for high concurrency, the communication model has evolved along the lines of BIO -> NIO -> multiplexing.

The code address used in this article

https://github.com/zhangpanqin/fly-java-socket

Environment for this article:

  • JDK 1.8
  • Linux version 3.10.0-693.5.2.el7.x86_64

BIO communication

In the BIO communication model, the server blocks in ServerSocket.accept, waiting for a new client to complete the TCP three-way handshake. When a client establishes a connection, ServerSocket.accept returns the Socket, and the server then reads and writes data through that Socket.

Reading or writing a Socket blocks the current thread until the operation completes, so we have to allocate a thread to each client and loop in that thread to read the data the client sends. We also need a thread pool for writing data to Sockets (sending data to clients).
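As a rough illustration of this thread-per-client structure, here is a minimal sketch; the class name, port and handler are illustrative and not taken from the article's repository:

import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

public class BioServerSketch {
    public static void main(String[] args) throws IOException {
        try (ServerSocket serverSocket = new ServerSocket(10222)) {
            while (true) {
                // Blocks until a client completes the TCP three-way handshake
                Socket socket = serverSocket.accept();
                // One thread per client; the thread blocks in read until data arrives
                new Thread(() -> handle(socket)).start();
            }
        }
    }

    private static void handle(Socket socket) {
        byte[] buffer = new byte[1024];
        try {
            int length;
            // read blocks the thread until the client sends data or closes the connection
            while ((length = socket.getInputStream().read(buffer)) != -1) {
                System.out.println("received " + length + " bytes");
            }
        } catch (IOException e) {
            e.printStackTrace();
        } finally {
            try {
                socket.close();
            } catch (IOException ignored) {
            }
        }
    }
}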

In BIO, the read system call that the application makes to copy data from kernel space to user space blocks. Since you never know when data will arrive, a thread must keep looping to check whether each Socket has readable data.

try {
    // When the kernel has no data ready, read blocks the current thread
    while ((length = inputStreamBySocket.read(data)) >= 0) {
        s = new String(data, 0, length, StandardCharsets.UTF_8);
        if (s.contains(EOF)) {
            this.close();
            return;
        }
        log.info("Received message from client, clientId: {}, message: {}", clientId, s);
    }
    if (length == -1) {
        log.info("Client closed, clientId: {}, server frees resources", clientId);
        this.close();
    }
} catch (IOException e) {
    if (length == -1) {
        this.close();
    }
}

When the server actively writes data to a client, the write call also blocks, so we hand it off to a thread pool. Each client is assigned an id, and the sessions are kept in a ConcurrentHashMap<Integer, SocketBioClient>. To write data to client 1, we simply take the client out of the map and write to it.

public void writeMessage(Integer clientId, String message) {
    Objects.requireNonNull(clientId);
    Objects.requireNonNull(message);
    // Fetch the client by its id
    final SocketBioClient socketBioClient = CLIENT.get(clientId);
    Optional.ofNullable(socketBioClient)
            .orElseThrow(() -> new RuntimeException("clientId: " + clientId + " is invalid"));
    // Write the data in the thread pool
    threadPoolExecutor.execute(() -> {
        if (socketBioClient.isClosed()) {
            CLIENT.remove(clientId);
            return;
        }
        socketBioClient.writeMessage(message);
    });
}

BIO communication struggles under high concurrency. With 50,000 connections you need 50,000 threads to maintain communication. Assuming each Java thread occupies 512 KB, that is about 24 GB of memory (50000 * 0.5 MB / 1024). Moreover, the CPU has to schedule 50,000 threads to read client data and reply, so most CPU resources are wasted on thread switching and real-time communication cannot be guaranteed.

Full connection queue and half-connection queue

1. The server binds a serverIp and serverPort; the Java API is ServerSocket.bind.

2. The server then listens on serverIp and serverPort for incoming client connections.

3. The client binds a clientIp and clientPort and calls Socket.connect(serverIp, serverPort); the kernel establishes the TCP connection.

4. The server calls ServerSocket.accept in an infinite loop to obtain the Socket of an established connection.

5. Socket.read reads data sent by the client; Socket.write writes data to the client.

Since serverIp and serverPort are fixed, any connection whose clientIp or clientPort differs can be regarded as a different client.

clientIp, clientPort, serverIp and serverPort are also called the 4-tuple in network communication. These four parameters are required to establish a TCP/IP connection.

For example, when our browser loads a page, it actually picks a random valid local port and, together with the local clientIp, requests data from serverIp and serverPort.
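A minimal client-side sketch of the 4-tuple; the addresses and ports are illustrative, and in practice the client normally lets the kernel pick a random local port instead of binding one explicitly:

import java.net.InetSocketAddress;
import java.net.Socket;

public class ClientTupleSketch {
    public static void main(String[] args) throws Exception {
        Socket socket = new Socket();
        // clientIp + clientPort (usually chosen automatically by the kernel)
        socket.bind(new InetSocketAddress("10.211.55.2", 40001));
        // serverIp + serverPort: the kernel performs the TCP three-way handshake here
        socket.connect(new InetSocketAddress("10.211.55.8", 10222));
        System.out.println("local: " + socket.getLocalSocketAddress()
                + " -> remote: " + socket.getRemoteSocketAddress());
        socket.close();
    }
}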

TCP three-way handshake between a client and a server:

1. The client sends a SYN packet to the server; running netstat -natp on the client shows the connection in the SYN_SENT state.

2. The server receives the client's SYN, places the connection in the half-connection queue, moves the connection to the SYN_RCVD state, and replies with a SYN+ACK packet.

3. The client receives the server's SYN+ACK, replies with an ACK, and its side of the TCP connection enters the ESTABLISHED state.

4. The server receives the ACK from the client, changes the TCP connection state to ESTABLISHED, and places the connection in the full connection queue.

Both queues are bounded. When the full connection queue or the half-connection queue overflows, the configured kernel parameters determine how the kernel handles it.

TCP packet capture

# tshark is the command-line counterpart of Wireshark
tshark -i eth0 port 10222
# Or capture with tcpdump
tcpdump -nn -i eth0 port 10222

The full connection queue overflows

While writing the verification code and capturing packets, I found that with the full connection queue length set to 10, 11 connections could still be established; the 12th connection caused a full connection queue overflow.
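For reference, a sketch of how the backlog is passed on the Java side; the value 10 matches the experiment above, while the port is illustrative (note that the effective queue length is also capped by net.core.somaxconn):

import java.net.InetSocketAddress;
import java.nio.channels.ServerSocketChannel;

public class BacklogSketch {
    public static void main(String[] args) throws Exception {
        ServerSocketChannel server = ServerSocketChannel.open();
        // The second argument is the backlog, which bounds the full connection queue
        server.bind(new InetSocketAddress(10222), 10);
        // Never call accept(), so completed connections pile up in the full connection queue
        Thread.sleep(Long.MAX_VALUE);
    }
}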

# Reply with RST when the full connection queue overflows
echo 1 > /proc/sys/net/ipv4/tcp_abort_on_overflow
# Number of SYN+ACK retries
echo 2 > /proc/sys/net/ipv4/tcp_synack_retries

When tcp_abort_on_overflow is 0 (the default) and the full connection queue is full at the third handshake (the client has sent its ACK), the server drops the ACK and retransmits SYN+ACK so that the client sends its ACK again. Run sysctl -a | grep tcp_synack_retries to see how many times the server retries the SYN+ACK; the default is 5.

In other words, when the full connection queue is full, the server discards the client's third-handshake ACK, and during the subsequent retransmissions the client ends up sending the ACK to the server again.

When tcp_abort_on_overflow is 1 and the full connection queue is full at the third handshake (the client has sent its ACK), the server replies with an RST packet to abort the connection.

The half-connection queue overflows

The length of the TCP half-connection queue is calculated from three values:

  • backlog, the value passed to listen; I pass in 10
  • somaxconn; mine is 128
  • tcp_max_syn_backlog; mine is 128

The meaning of the somaxconn and tcp_max_syn_backlog parameters:

# Check the Send-Q of the listening port
ss -lnt
# net.core.somaxconn = 128
sysctl -a | grep somaxconn
# net.ipv4.tcp_max_syn_backlog = 128
sysctl -a | grep tcp_max_syn_backlog

SYN flood attack: simulating half-connection queue overflow

# -S send only SYN packets
# --flood send packets as fast as possible
# --rand-source use random source addresses
# 10.211.55.8 is the destination IP, -p 10222 the destination port
hping3 -S --flood --rand-source -p 10222 10.211.55.8
# Count the half-open connections
netstat -natp | grep SYN | wc -l

I set the backlog to 7, 123 and 511 respectively and verified that the formula below is correct:

nr_table_entries = min(backlog, somaxconn, tcp_max_syn_backlog)
nr_table_entries = max(nr_table_entries, 8)
// roundup_pow_of_two: round (nr_table_entries + 1) up to the nearest power of two
nr_table_entries = roundup_pow_of_two(nr_table_entries + 1)
max_qlen_log = max(3, log2(nr_table_entries))
max_queue_length = 2^max_qlen_log
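To make the arithmetic concrete, here is a small Java sketch of the formula above, using somaxconn = 128 and tcp_max_syn_backlog = 128 (the values from my environment); the comments show what the formula yields for the backlogs tested above:

public class HalfConnQueueLength {

    // Smallest power of two >= n (same idea as the kernel's roundup_pow_of_two), for n >= 2
    static int roundupPowOfTwo(int n) {
        return Integer.highestOneBit(n - 1) << 1;
    }

    static int maxQueueLength(int backlog, int somaxconn, int tcpMaxSynBacklog) {
        int nrTableEntries = Math.min(backlog, Math.min(somaxconn, tcpMaxSynBacklog));
        nrTableEntries = Math.max(nrTableEntries, 8);
        nrTableEntries = roundupPowOfTwo(nrTableEntries + 1);
        // nrTableEntries is a power of two here, so log2 is the count of trailing zero bits
        int maxQlenLog = Math.max(3, Integer.numberOfTrailingZeros(nrTableEntries));
        return 1 << maxQlenLog;
    }

    public static void main(String[] args) {
        System.out.println(maxQueueLength(7, 128, 128));   // 16
        System.out.println(maxQueueLength(123, 128, 128)); // 128
        System.out.println(maxQueueLength(511, 128, 128)); // 256
    }
}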

SYN FLOOD attack defense

An attacker sends a large number of SYN packets and never completes the rest of the handshake, so the server's half-connection queue fills up and the handshakes of normal users cannot be accepted.

# Check whether syn cookies are enabled
cat /proc/sys/net/ipv4/tcp_syncookies
# Disable syn cookies
echo 0 > /proc/sys/net/ipv4/tcp_syncookies

The kernel parameter tcp_syncookies helps defend against SYN FLOOD attacks. When it is set to 0 and the half-connection queue is full, the server simply discards incoming SYN packets. A connecting client that receives no SYN+ACK keeps retrying its SYN, and once the retry limit is exceeded the connection fails.

In Linux, the kernel parameter net.ipv4.tcp_syn_retries = 6 limits how many times the client retries sending the SYN.

With tcp_syncookies set to 0, SYN FLOOD attacks cannot be defended against, and new legitimate users cannot establish connections.

With tcp_syncookies=1, new legitimate connections (complete three-way handshakes) can still be established; if the full connection queue is not full, they follow the normal full-connection-queue logic.

# Enable syn cookies
echo 1 > /proc/sys/net/ipv4/tcp_syncookies

If the full connection queue is not full, the server replies to the client with a SYN+ACK carrying a syncookie (a kind of session id embedded in the packet); the client must return an ACK carrying that syncookie to complete the three-way handshake.

If the full connection queue is full, handling follows the full-connection-queue overflow logic described above.

GitHub address of the BIO Socket communication code

NIO communication

The evolution from BIO to NIO adds synchronous non-blocking I/O. Do not underestimate the non-blocking part: it reduces our thread model to a single thread (ignoring how promptly each client is served). BIO always needs one reader thread per client no matter how you tweak it, while NIO can in principle manage n clients on one thread, performance aside.

ServerSocketChannel.accept does not block waiting for a client connection to be established:

while (true) {
    try {
        // BIO would block here waiting for a new client connection.
        // NIO does not block: if a connection has been established it returns the client,
        // otherwise it returns null.
        final SocketChannel accept = serverSocketChannel.accept();
        if (Objects.nonNull(accept)) {
            accept.configureBlocking(false);
            final int currentIdClient = CLIENT_ID.incrementAndGet();
            final SocketNioClient socketNioClient = new SocketNioClient(currentIdClient, accept);
            CLIENT.put(currentIdClient, socketNioClient);
            new Thread(socketNioClient, "client-" + currentIdClient).start();
        }
    } catch (IOException e) {
        log.info("Failed to accept a client", e);
    }
}

SocketChannel.read does not block waiting for data to be copied from kernel space to user space:

ByteBuffer byteBuffer = ByteBuffer.allocateDirect(1024);
while (true) {
    byteBuffer.clear();
    // In NIO, when the kernel has no data to read, read returns 0
    length = this.client.read(byteBuffer);
    if (length > 0) {
        byteBuffer.flip();
        s = StandardCharsets.UTF_8.decode(byteBuffer).toString();
        log.info("Received message from client, clientId: {}, message: {}", clientId, s);
        if (s.contains(EOF)) {
            this.close();
            return;
        }
    }
    if (length == -1) {
        log.info("Client closed, clientId: {}, server releases resources", clientId);
        this.close();
        return;
    }
    // Other business code can run here while the kernel has no data ready
}

Under the NIO model, a single thread can manage all reads and writes (ignoring how promptly each client is served).

package com.fly.socket.nio;

import com.fly.socket.nio.chat.model.ChatPushDTO;
import lombok.extern.slf4j.Slf4j;

import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Map;
import java.util.Objects;
import java.util.concurrent.ConcurrentLinkedDeque;

/**
 * @author Zhang Panqin
 * @date 2020-07-19-16:32
 */
@Slf4j
public class NioSingleThread implements AutoCloseable {

    // When a client sends this string, close its connection
    private static final String EOF = "exit";
    // Session management: clientId -> SocketChannel
    private static final Map<Integer, SocketChannel> MAP = new HashMap<>(16);
    // Messages pushed from the HTTP interface are queued here and written to clients by this thread
    private static final ConcurrentLinkedDeque<ChatPushDTO> QUEUE = new ConcurrentLinkedDeque<>();
    // Assume each client message fits in 1024 bytes
    final ByteBuffer byteBuffer = ByteBuffer.allocateDirect(1024);

    private int port;
    // backlog of the full connection queue
    private int backlog;
    private ServerSocketChannel open;
    // NioSingleThread is registered in the IoC container; closed marks whether the bean has been destroyed
    private boolean closed = false;

    public ServerSocketChannel getOpen() {
        return open;
    }

    public NioSingleThread(int port, int backlog) {
        this.port = port;
        this.backlog = backlog;
        try {
            open = ServerSocketChannel.open();
            // Use the NIO model: ServerSocketChannel.accept does not block
            open.configureBlocking(false);
            open.bind(new InetSocketAddress(port), backlog);
            this.init();
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    /**
     * @Bean(destroyMethod = "close")
     * public NioSingleThread nioSingleThread() {
     *     return new NioSingleThread(9998, 20);
     * }
     */
    @Override
    public void close() throws IOException {
        closed = true;
        if (Objects.nonNull(open)) {
            if (!open.socket().isClosed()) {
                open.close();
                log.info("Closed the server channel");
            }
        }
    }

    private void init() {
        new Thread(() -> {
            Integer clientIdAuto = 1;
            while (true) {
                // Check whether the bean has been destroyed; if so, the server is shutting down
                if (closed) {
                    if (!open.socket().isClosed()) {
                        try {
                            open.close();
                        } catch (IOException e) {
                            e.printStackTrace();
                        }
                    }
                    return;
                }
                try {
                    // Handle newly established client connections
                    final SocketChannel accept = open.accept();
                    if (Objects.nonNull(accept)) {
                        accept.configureBlocking(false);
                        MAP.put(clientIdAuto, accept);
                        clientIdAuto++;
                    }
                    // Handle read events
                    MAP.forEach((clientId, client) -> {
                        if (!client.socket().isClosed()) {
                            byteBuffer.clear();
                            try {
                                final int read = client.read(byteBuffer);
                                if (read == -1) {
                                    client.close();
                                    MAP.remove(clientId);
                                }
                                if (read > 0) {
                                    byteBuffer.flip();
                                    final String s = StandardCharsets.UTF_8.decode(byteBuffer).toString();
                                    log.info("Read data from client, clientId: {}: {}", clientId, s);
                                    if (s.contains(EOF)) {
                                        if (!client.socket().isClosed()) {
                                            client.close();
                                        }
                                    }
                                }
                            } catch (IOException e) {
                                log.error("Read error, clientId: {}", clientId);
                            }
                        }
                    });
                    // Handle write events
                    while (!QUEUE.isEmpty()) {
                        final ChatPushDTO peek = QUEUE.remove();
                        if (Objects.isNull(peek)) {
                            break;
                        }
                        final Integer chatId = peek.getChatId();
                        final String message = peek.getMessage();
                        final SocketChannel socketChannel = MAP.get(chatId);
                        if (Objects.isNull(socketChannel) || socketChannel.socket().isClosed()) {
                            continue;
                        }
                        byteBuffer.clear();
                        byteBuffer.put(message.getBytes(StandardCharsets.UTF_8));
                        byteBuffer.flip();
                        socketChannel.write(byteBuffer);
                    }
                } catch (IOException e) {
                    throw new RuntimeException("Server exception", e);
                }
            }
        }, "NioSingleThread").start();
    }

    public void writeMessage(ChatPushDTO chatPushDTO) {
        Objects.requireNonNull(chatPushDTO);
        QUEUE.add(chatPushDTO);
    }
}

NIO code GitHub address

The NIO model is already a big improvement, reducing both threads and memory footprint. One drawback remains: even when a client has no data, we still have to issue the read system call to check whether anything has arrived.

Every int read = client.read(byteBuffer) is a system call. With 50,000 connections, each polling round switches from user mode to kernel mode 50,000 times, which is not a small cost in computing resources.

The IO model therefore evolved into the multiplexing that is widely used today, which solves the problem of excessive system calls by reducing those 50,000 calls to one or a few.

IO multiplexing

The downside of NIO: whether or not a client has data, we issue a system call for it just to check.

After the client establishes a connection, the kernel assigns a FD (file descriptor) to the client.

IO multiplexing means the kernel monitors for us whether data has arrived on the clients (fds). When we want to know which clients have data, we call a system call provided by the multiplexer (select, poll or epoll), pass in the fds we care about, and the kernel returns which of them are ready. We go from 50,000 system calls to one, which significantly reduces overhead. Of the three multiplexers, epoll is the most efficient.

1. select: a single call can pass only a limited number of fds (1024 by default, depending on the kernel configuration), and the kernel still traverses all 50,000 connections to check whether data is readable. The application then issues the corresponding system calls on the fds reported ready, copying data from kernel space to user space for business processing.

2. poll is similar to select, except there is no limit on how many fds can be passed in one system call. poll and select only reduce the number of system calls; the kernel still traverses every connection to check readability, so efficiency degrades linearly with the total number of connections: the more connected clients, the slower it gets.

3. epoll does not have the kernel poll every fd to check readability. When client data arrives, the kernel copies it from the NIC into kernel memory and puts the connection that has data into a ready queue. A user-space application only needs to call the system call epoll provides to take the ready fds off that queue, so efficiency depends on the number of active connections rather than the total number of connections (of a million connections, perhaps only 20% are active).

Epoll-related system calls

Internally, epoll maintains a red-black tree recording which connections (and which operations: read, write, etc.) the multiplexer is currently monitoring, plus a ready queue of connections whose events have occurred.

epoll_create

// Returns the file descriptor of the epoll instance,
// which is used in subsequent epoll-related system calls
int epoll_create(int size);

epoll_create creates a multiplexer instance (an epoll instance) and returns an epfd pointing to it. The epfd is really just a file descriptor.

epoll_ctl

int epoll_ctl(int epfd, int op, int fd, struct epoll_event *event);

epoll_ctl registers a client or server socket fd with epoll. op specifies what the call does: add the fd to epoll, remove it, or modify the events being watched. An event is an IO operation (read, write, etc.).

In short, epoll_ctl tells the epoll instance which client or server sockets to watch and which IO operations to watch for.
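For Java readers, registering a channel with a Selector plays roughly the role of epoll_ctl. This is a loose, illustrative correspondence (it assumes the platform's SelectorProvider is epoll-based, as on Linux):

import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.SocketChannel;

public class EpollCtlAnalogy {
    public static void main(String[] args) throws Exception {
        Selector selector = Selector.open();

        SocketChannel ch = SocketChannel.open();
        ch.configureBlocking(false);

        // Roughly EPOLL_CTL_ADD: start watching read events on this channel
        SelectionKey key = ch.register(selector, SelectionKey.OP_READ);

        // Roughly EPOLL_CTL_MOD: change the set of watched events
        key.interestOps(SelectionKey.OP_READ | SelectionKey.OP_WRITE);

        // Roughly EPOLL_CTL_DEL: stop watching this channel
        key.cancel();
    }
}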

epoll_wait

int epoll_wait(int epfd, struct epoll_event *events, int maxevents, int timeout);

epoll_wait reports how many of the IO operations registered on the multiplexer (epfd) are ready. If no timeout is given, the call blocks until at least one IO operation is ready; if timeout is greater than 0, it returns 0 when the timeout expires with nothing ready.

The epoll_event array receives the ready events, and the corresponding client fd can be retrieved from the event's data structure.

epoll_wait blocks and returns when:

  • an IO operation becomes ready

  • the specified timeout expires

  • the call is interrupted
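Similarly, on the Java side the rough counterpart of epoll_wait is Selector.select. A short illustrative helper (again assuming an epoll-based SelectorProvider):

import java.io.IOException;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

public class EpollWaitAnalogy {
    // Roughly one epoll_wait round expressed with a Java Selector
    static void pollOnce(Selector selector) throws IOException {
        // Block until at least one registered channel is ready, or 100 ms elapse
        int ready = selector.select(100);
        if (ready <= 0) {
            return;
        }
        Iterator<SelectionKey> it = selector.selectedKeys().iterator();
        while (it.hasNext()) {
            SelectionKey key = it.next();
            // The selector does not clear the selected-key set itself
            it.remove();
            if (key.isReadable()) {
                SocketChannel channel = (SocketChannel) key.channel();
                // read from the channel here
            }
        }
    }
}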

Epoll Trigger mode

epoll monitors IO events on multiple file descriptors. Under what conditions epoll considers an fd readable or writable is determined by the trigger mode. epoll supports two trigger modes: edge-triggered (ET) and level-triggered (LT).

Each fd has buffers, which can be divided into a read buffer and a write buffer. Each client connection corresponds to one fd.

When client data arrives, the NIC writes it from the NIC's memory into the read buffer of the corresponding fd in the kernel. The application calls epoll_wait to learn that data has arrived on that connection, reads the data from kernel space into user space, and then processes it.

To write data to the client, the application calls the Socket API (the Socket corresponds to an fd) and copies the data from user space into the fd's write buffer in the kernel. The kernel then hands the data to the NIC, which sends it to the client at the appropriate time.

If the fd's write buffer is full, a blocking call to write will block until there is free space in the write buffer.

When sending data over a TCP connection, a sliding window controls how much can be sent. If the sender transmits faster than the receiver consumes and a segment falls outside the flow-control window, the receiver does not acknowledge it, and the server retransmits the segment because no ACK arrives from the client.

The following figure shows normal sending within the flow-control window: the server sends a segment, the client receives it and replies with an ACK.

This segment is outside the flow-control window and is not sent successfully; it waits for the next send.

If the client's receive buffer is full, the server's send will not succeed no matter how many times it tries.

The server writes data to the client

1. Level-trigger timing

  • For read operations, LT mode reports read-ready as long as the read buffer is not empty.

  • For write operations, LT mode reports write-ready as long as the write buffer is not full.

2. Edge-trigger timing

Read operations
  • When the buffer goes from unreadable to readable, that is, from empty to non-empty.

  • When new data arrives, so there is more data in the buffer to read.

Write operations
  • When the buffer goes from unwritable to writable.

  • When previously buffered data is sent out, freeing space in the buffer.

Edge triggering only fires on such changes, i.e. when readability or writability increases.
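Relating this to Java (covered in the next section): Java's Selector behaves as level-triggered, which is why applications usually keep OP_WRITE interest registered only while there is unsent data. A minimal sketch, illustrative and not from the article's repository:

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.SocketChannel;

public class LevelTriggerWriteSketch {
    // Write what the kernel buffer will take; keep OP_WRITE only while data remains
    static void writeNonBlocking(SocketChannel channel, SelectionKey key, ByteBuffer data) throws IOException {
        channel.write(data); // may write fewer bytes than data.remaining()
        if (data.hasRemaining()) {
            // The kernel write buffer is full; let the selector report when there is room again
            key.interestOps(key.interestOps() | SelectionKey.OP_WRITE);
        } else {
            // Done: drop OP_WRITE, otherwise a level-triggered selector keeps reporting write-ready
            key.interestOps(key.interestOps() & ~SelectionKey.OP_WRITE);
        }
    }
}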

Java multiplexing

Java's abstraction of the multiplexer is the Selector. Different platforms get different SelectorProvider implementations through SPI.

public abstract AbstractSelector openSelector() throws IOException;
public abstract ServerSocketChannel openServerSocketChannel() throws IOException;
public abstract SocketChannel openSocketChannel() throws IOException;
public abstract class Selector implements Closeable {
    // Corresponds to epoll_create
    public static Selector open() throws IOException {
        return SelectorProvider.provider().openSelector();
    }

    // Corresponds to epoll_wait. The select implementation uses synchronized locks,
    // so register blocks while select is blocking.
    public abstract int select(long timeout) throws IOException;
    public abstract int select() throws IOException;

    // Wakes up a blocked select
    public abstract Selector wakeup();

    public abstract void close() throws IOException;

    // Declared in AbstractSelector extends Selector
    protected abstract SelectionKey register(AbstractSelectableChannel ch, int ops, Object att);
}
public abstract class SocketChannel extends AbstractSelectableChannel
        implements ByteChannel, ScatteringByteChannel, GatheringByteChannel, NetworkChannel {

    /**
     * Reading data from the channel takes a lock, so the method is thread-safe.
     * synchronized (this.readLock)
     */
    @Override
    public abstract int read(ByteBuffer dst) throws IOException;

    /**
     * Writing buffer data to the channel also takes a lock.
     * synchronized (this.writeLock)
     */
    @Override
    public abstract int write(ByteBuffer src) throws IOException;
}

A simple Demo

import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.ClosedChannelException;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.nio.charset.StandardCharsets;
import java.util.Iterator;
import java.util.Objects;
import java.util.Set;
import java.util.concurrent.LinkedBlockingQueue;

/**
 * @author Zhang Panqin
 * @date 2020-07-26-16:15
 */
public class SocketDemo1 {
    public static void main(String[] args) throws IOException {
        // Call the socket() system call to get the server socket fd
        final ServerSocketChannel open = ServerSocketChannel.open();
        // A socket registered with a multiplexer must be non-blocking
        open.configureBlocking(false);
        open.bind(new InetSocketAddress("10.211.55.8", 10224), 8);

        // Call epoll_create to create a multiplexer (epoll) for accept events
        final Selector open1 = Selector.open();
        open.register(open1, SelectionKey.OP_ACCEPT);

        // Selector.register blocks while Selector.select is blocking,
        // so registrations are handed off through this queue.
        final LinkedBlockingQueue<Runnable> objects = new LinkedBlockingQueue<>(1024);
        // Multiplexer for read/write events
        final Selector open2 = Selector.open();

        // Thread handling read/write events
        new Thread(() -> {
            while (true) {
                try {
                    final int select = open2.select();
                    if (select > 0) {
                        final Set<SelectionKey> selectionKeys = open2.selectedKeys();
                        final Iterator<SelectionKey> iterator = selectionKeys.iterator();
                        while (iterator.hasNext()) {
                            System.out.println("Type any input to continue");
                            // Block here before copying data from kernel space to user space,
                            // mainly to observe the buffers and TCP's sliding window
                            System.in.read();
                            final SelectionKey next = iterator.next();
                            iterator.remove();
                            if (next.isReadable()) {
                                final SocketChannel channel = (SocketChannel) next.channel();
                                final ByteBuffer allocate = ByteBuffer.allocate(1024);
                                final int read = channel.read(allocate);
                                if (read == -1) {
                                    channel.close();
                                }
                                if (read > 0) {
                                    allocate.flip();
                                    System.out.println(StandardCharsets.UTF_8.decode(allocate).toString());
                                }
                            }
                        }
                    }
                    // Run pending registrations here to work around select blocking register
                    final Runnable poll = objects.poll();
                    if (Objects.nonNull(poll)) {
                        poll.run();
                    }
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
        }).start();

        // Thread handling accept events
        new Thread(() -> {
            while (true) {
                try {
                    if (open1.select(100) <= 0) {
                        continue;
                    }
                    final Set<SelectionKey> selectionKeys = open1.selectedKeys();
                    final Iterator<SelectionKey> iterator = selectionKeys.iterator();
                    while (iterator.hasNext()) {
                        final SelectionKey next = iterator.next();
                        iterator.remove();
                        if (next.isValid() && next.isAcceptable()) {
                            final ServerSocketChannel channel = (ServerSocketChannel) next.channel();
                            final SocketChannel accept = channel.accept();
                            if (Objects.nonNull(accept)) {
                                accept.configureBlocking(false);
                                objects.put(() -> {
                                    open2.wakeup();
                                    try {
                                        accept.register(open2, SelectionKey.OP_READ | SelectionKey.OP_WRITE);
                                    } catch (ClosedChannelException e) {
                                        e.printStackTrace();
                                    }
                                });
                                open2.wakeup();
                            }
                        }
                    }
                } catch (IOException | InterruptedException e) {
                    e.printStackTrace();
                }
            }
        }).start();
    }
}

References

Introduction to TCP/IP


This article was written by Zhang Panqin on his blog, www.mflyyou.cn/. It may be reproduced and quoted freely, but the author must be credited and the source of the article indicated.

If reprinting to a WeChat official account, please add the author's official QR code at the end of the article. WeChat official account name: Mflyyou.