What the hell is Netty

Starting from HTTP

With Netty, you can implement your own HTTP server, FTP server, UDP server, RPC server, WebSocket server, Redis Proxy server, MySQL Proxy server, etc.

Let’s review how a traditional HTTP server works (a minimal code sketch follows these steps):

1. Create a ServerSocket to listen to and bind a port

2. Clients make requests to this port

3. The server calls accept() to obtain a Socket connection from a client

4. Start a new thread to process the connection

4.1 Read the Socket to get the byte stream

4.2 Decode the protocol to get an HTTP request object

4.3 Process the HTTP request, produce a result, and wrap it in an HttpResponse object

4.4 Encode the protocol, serialize the result into a byte stream, write it to the Socket, and send it to the client

5. Continue with Step 3
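To make these steps concrete, here is a minimal sketch of such a blocking server in Java (a simplification for illustration only: it treats the request as opaque bytes and always returns a fixed response; the class name and port are illustrative):

import java.io.InputStream;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class SimpleBioServer {
    public static void main(String[] args) throws Exception {
        ServerSocket serverSocket = new ServerSocket(8080);      // 1. create and bind a listening port
        while (true) {
            Socket socket = serverSocket.accept();               // 3. blocks until a client connects
            new Thread(() -> handle(socket)).start();            // 4. one new thread per connection
        }
    }

    private static void handle(Socket socket) {
        try (Socket s = socket) {
            InputStream in = s.getInputStream();
            byte[] buf = new byte[4096];
            int n = in.read(buf);                                 // 4.1 read the byte stream (blocks)
            if (n <= 0) {
                return;
            }
            String request = new String(buf, 0, n, StandardCharsets.UTF_8); // 4.2 "decode" into text
            System.out.println(request.split("\r\n")[0]);         // 4.3 process it (here: just log the request line)
            String response = "HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nOK";
            OutputStream out = s.getOutputStream();
            out.write(response.getBytes(StandardCharsets.UTF_8)); // 4.4 encode and write the result back
            out.flush();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

Note that each connection ties up a thread for its whole lifetime, which is exactly the problem NIO addresses below.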

An HTTP server is called an HTTP server because the protocol it encodes and decodes is HTTP; if the protocol is Redis it becomes a Redis server, if the protocol is WebSocket it becomes a WebSocket server, and so on. With Netty you can customize the codec and implement a server for your own protocol.

NIO

That is how a traditional HTTP server works, but in a high-concurrency environment the number of threads becomes large and so does the system load, which is where NIO comes in.

NIO is not a concept unique to Java. It stands for IO multiplexing, a capability provided by the operating system through system calls. The early system call was select, whose performance was poor; it later evolved into epoll on Linux and kqueue on macOS/BSD. We usually just say epoll, because nobody uses an Apple machine as a production server.

Netty is a framework built on top of Java NIO. Why wrap it? Because native Java NIO is not easy to use and is notoriously buggy; Netty encapsulates it and provides a much friendlier usage pattern and interface.

Before we get to NIO, let’s talk about BIO (Blocking IO). What does “blocking” mean here?

  1. When the server listens, accept() blocks; only when a new connection arrives does it return, allowing the main thread to continue
  2. When reading from a Socket, read() blocks; it cannot return until the request data arrives, and only then can the worker thread continue processing
  3. When writing to a Socket, write() blocks; only after the client has received the data does write() return, and only then can the worker thread go on to read the next request

In traditional BIO mode, threads spend their time blocked from beginning to end; they just wait, doing nothing, while still occupying system resources.

So how does NIO do it? It uses an event mechanism. It lets a single thread handle the accept, read/write, and request-processing logic. When there is nothing to do it does not spin in a loop; it sleeps until the next event arrives. Such a thread is called a NIO thread. In pseudocode (a runnable Java NIO sketch follows it):


while true {
    events = takeEvents(fds)        // Get events; if there are none, the thread sleeps
    for event in events {
        if event.isAcceptable {
            doAccept()              // A new connection has arrived
        } elif event.isReadable {
            request = doRead()      // Read the message
            if request.isComplete() {
                doProcess()
            }
        } elif event.isWriteable {
            doWrite()               // Write the message
        }
    }
}
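For comparison, here is a rough, runnable version of the same loop using raw Java NIO (a sketch only: it echoes data back and omits error handling, partial writes, and OP_WRITE interest management; the port is illustrative):

import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

public class NioEchoServer {
    public static void main(String[] args) throws Exception {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(8080));
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);

        while (true) {
            selector.select();                      // sleeps until at least one event arrives
            Iterator<SelectionKey> it = selector.selectedKeys().iterator();
            while (it.hasNext()) {
                SelectionKey key = it.next();
                it.remove();
                if (key.isAcceptable()) {           // doAccept(): a new connection
                    SocketChannel ch = server.accept();
                    ch.configureBlocking(false);
                    ch.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {      // doRead(): read and echo back
                    SocketChannel ch = (SocketChannel) key.channel();
                    ByteBuffer buf = ByteBuffer.allocate(1024);
                    int n = ch.read(buf);
                    if (n > 0) {
                        buf.flip();
                        ch.write(buf);              // doProcess() + doWrite(), simplified
                    } else if (n < 0) {
                        ch.close();
                    }
                }
            }
        }
    }
}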

Reactor thread model

Reactor single-threaded model

A single thread acts as both the NIO thread and the accept thread:



Reactor multithreaded model

A dedicated acceptor thread accepts new connections, while a pool of NIO threads handles the reads and writes:



Reactor master-slave model

Master-slave Reactor multithreading: a pool of acceptor NIO threads accepts connections from clients; the accepted connections are then handled by a separate pool of NIO threads

Netty can be flexibly configured based on the above three models.
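As a sketch of how that configuration looks in practice (assuming Netty 4.x; the thread counts are illustrative), the choice of model mostly comes down to how the EventLoopGroups are created and handed to ServerBootstrap:

import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;

public class ReactorModels {
    public static void main(String[] args) {
        // Single-threaded Reactor: one NIO thread does accept, I/O and processing
        EventLoopGroup single = new NioEventLoopGroup(1);
        new ServerBootstrap().group(single, single);

        // Multithreaded Reactor: one acceptor thread, a pool of NIO threads for I/O
        EventLoopGroup acceptor = new NioEventLoopGroup(1);
        EventLoopGroup workers = new NioEventLoopGroup();   // defaults to 2 * CPU cores
        new ServerBootstrap().group(acceptor, workers);

        // Master-slave Reactor: a pool of acceptor threads plus a pool of I/O threads
        EventLoopGroup bossPool = new NioEventLoopGroup(2);
        EventLoopGroup workerPool = new NioEventLoopGroup();
        new ServerBootstrap().group(bossPool, workerPool);
    }
}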

Summary

Netty is based on NIO, and Netty provides a higher level of abstraction on top of NIO.

In Netty, accepting connections can be handled by one dedicated thread pool, and read/write operations by another.

Accepting connections and reading/writing can also share the same thread pool. The request-processing logic can run in yet another separate thread pool, or on the same threads that do the reading and writing. Each thread in these pools is a NIO thread. Users can compose a high-performance concurrency model to suit their actual situation.


Why Netty

If you use the native JDK NIO instead of Netty, you run into the following problems:

1. Complex API
2. You need to be familiar with multithreading, since NIO involves the Reactor pattern
3. High availability: you have to solve problems such as disconnection/reconnection, half-packet reads and writes, and caching of failed writes yourself
4. JDK NIO bugs, such as the notorious epoll bug that causes empty polling and can drive a CPU core to 100%

Netty, by contrast, has a simple API, high performance, and an active community (Dubbo, RocketMQ, and many others are built on it).

What are TCP packet sticking and unpacking

The phenomenon

ByteBuf is Netty’s byte container; it holds the data that needs to be sent. The client handler below writes the same message 1,000 times as soon as the connection becomes active:

import java.nio.charset.Charset;

import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;

public class FirstClientHandler extends ChannelInboundHandlerAdapter {

    @Override
    public void channelActive(ChannelHandlerContext ctx) {
        // Once the connection is active, write the same message 1000 times
        for (int i = 0; i < 1000; i++) {
            ByteBuf buffer = getByteBuf(ctx);
            ctx.channel().writeAndFlush(buffer);
        }
    }

    private ByteBuf getByteBuf(ChannelHandlerContext ctx) {
        byte[] bytes = "Hello, my name is 1234567!".getBytes(Charset.forName("utf-8"));
        ByteBuf buffer = ctx.alloc().buffer();
        buffer.writeBytes(bytes);
        return buffer;
    }
}
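The server-side handler is not shown in this article; as a hypothetical sketch (the class name and log format are mine, not from the original), a handler that simply prints whatever it reads could look like this:

import java.nio.charset.Charset;

import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;

public class FirstServerHandler extends ChannelInboundHandlerAdapter {
    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        ByteBuf buf = (ByteBuf) msg;
        // Any single call may carry one message, several messages stuck together, or only part of one
        System.out.println("Server received: " + buf.toString(Charset.forName("utf-8")));
        buf.release();
    }
}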

The data the server reads from the client is as follows:



As you can see from the server’s console output, there are three kinds of result:

  1. Normal string output.
  2. Multiple strings “stuck” together in one ByteBuf; we call this a sticky packet.
  3. A string split apart so that the ByteBuf holds only a fragment; we call this a half packet.

Analyzing the cause

The application layer uses Netty, but to the operating system there is only TCP. Although our application layer sends data in units of ByteBuf, and the server also reads data into ByteBuf, the underlying operating system still transmits the data as a byte stream. When the data reaches the server it is likewise read as a byte stream, and only then does the Netty application layer reassemble it into ByteBufs, which may not line up one-to-one with the ByteBufs the client sent. Therefore, the client needs to frame its application-layer packets according to a custom protocol, and the server needs to reassemble packets according to that same application-layer protocol. This process is usually called unpacking on the server side and packet sticking on the client side.

Unpacking and packet sticking are relative: if packets are stuck together on one end, the other end needs to unpack them. For example, the sending end may stick three application packets into two TCP packets and send them to the receiving end, which must then split them back into the original three.

How to solve it

Without Netty, if users need to do the unpacking themselves, the basic principle is: each time data is read from the TCP buffer, check whether it, together with the data read previously, forms a complete packet. If the data read so far is not yet enough to make up a complete business packet, keep it and continue reading from the TCP buffer until a complete packet is obtained. If the newly read data plus the previously kept data is enough, combine them into a complete business packet and hand it to the business logic, keeping any leftover bytes for the next read.
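Netty’s ByteToMessageDecoder base class encapsulates exactly this accumulate-and-check loop. As a sketch (assuming Netty 4.1), a custom decoder for a hypothetical length-prefixed protocol (a 4-byte length header, chosen only for illustration) might look like this:

import java.util.List;

import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;
import io.netty.handler.codec.ByteToMessageDecoder;

public class LengthPrefixedDecoder extends ByteToMessageDecoder {
    @Override
    protected void decode(ChannelHandlerContext ctx, ByteBuf in, List<Object> out) {
        if (in.readableBytes() < 4) {
            return;                             // not even a complete length header yet: keep the bytes
        }
        in.markReaderIndex();
        int length = in.readInt();              // read the 4-byte body length
        if (in.readableBytes() < length) {
            in.resetReaderIndex();              // half packet: wait for more data
            return;
        }
        out.add(in.readRetainedSlice(length));  // a complete packet: pass it downstream
    }
}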

Netty has many kinds of unpackers (frame decoders) built in that we can use directly, such as FixedLengthFrameDecoder, LineBasedFrameDecoder, DelimiterBasedFrameDecoder, and LengthFieldBasedFrameDecoder:



After choosing an unpacker, add it to the ChannelPipeline on both the client side and the server side:

In the example above:

Client:

ch.pipeline().addLast(new FixedLengthFrameDecoder(31));

Server:

ch.pipeline().addLast(new FixedLengthFrameDecoder(31));



Zero copy of Netty

Copying in the traditional sense

When sending data, the traditional implementation is:

1. file.read(bytes)
2. socket.send(bytes)

This approach requires four data copies and four context switches:

1. Data is read from disk into the kernel’s read buffer (DMA copy)
2. Data is copied from the kernel’s read buffer into the application’s user-space buffer (CPU copy)
3. Data is copied from the application’s buffer into the kernel’s socket buffer (CPU copy)
4. Data is copied from the kernel’s socket buffer to the NIC’s buffer (DMA copy)

The concept of zero copy

Obviously, steps 2 and 3 above are not necessary. Java’s FileChannel.transferTo method can avoid these two redundant copies (which, of course, requires support from the underlying operating system).

1. When transferTo is called, the DMA engine copies the data from the file into the kernel read buffer
2. DMA then copies the data from the kernel read buffer to the NIC buffer

Neither of these copies requires the CPU, which is why this is called zero copy.
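A minimal sketch of using FileChannel.transferTo directly in Java (the file name, host, and port are illustrative; whether the copies are fully offloaded depends on operating system support):

import java.io.FileInputStream;
import java.net.InetSocketAddress;
import java.nio.channels.FileChannel;
import java.nio.channels.SocketChannel;

public class TransferToExample {
    public static void main(String[] args) throws Exception {
        try (FileChannel file = new FileInputStream("data.bin").getChannel();
             SocketChannel socket = SocketChannel.open(new InetSocketAddress("localhost", 8080))) {
            long position = 0;
            long remaining = file.size();
            while (remaining > 0) {
                // transferTo may send fewer bytes than requested, so loop until everything is sent
                long sent = file.transferTo(position, remaining, socket);
                position += sent;
                remaining -= sent;
            }
        }
    }
}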

Zero copy in Netty

It is mainly reflected in three aspects (a small combined sketch follows this list):

1. ByteBuf with direct memory

Netty sends and receives messages using ByteBuf, and by default it uses direct memory (DirectMemory) to read and write Socket data.

Reason: if traditional heap memory were used for Socket reads and writes, the JVM would first copy the heap buffer into direct memory and then write it to the Socket, adding an extra memory copy of the buffer. Direct memory can be handed straight to the NIC via DMA.

2. CompositeByteBuf

With traditional ByteBuffers, if we want to combine the data of two buffers we first create a new array of size = size1 + size2 and then copy both arrays into it. Netty’s CompositeByteBuf avoids this: it does not actually merge the buffers, it only keeps references to them, so no data is copied, achieving zero copy.

3. FileChannel.transferTo

Netty’s file transfer uses FileChannel’s transferTo method, which relies on the operating system to achieve zero copy.
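A small combined sketch of these three points (assuming Netty 4.1.x; the channel, buffers, and file path are illustrative):

import java.io.File;

import io.netty.buffer.ByteBuf;
import io.netty.buffer.CompositeByteBuf;
import io.netty.buffer.Unpooled;
import io.netty.channel.Channel;
import io.netty.channel.DefaultFileRegion;

public class NettyZeroCopyDemo {
    static void demo(Channel channel, ByteBuf header, ByteBuf body) {
        // 1. Direct (off-heap) buffer: written to the Socket without an extra heap-to-native copy
        ByteBuf direct = channel.alloc().directBuffer(256);
        direct.release();                                   // shown only for illustration, released immediately

        // 2. CompositeByteBuf: "combines" header and body by keeping references to them, no copying
        CompositeByteBuf message = Unpooled.compositeBuffer();
        message.addComponents(true, header, body);          // true = advance the writer index
        channel.write(message);

        // 3. FileRegion: file-to-socket transfer backed by FileChannel.transferTo
        File file = new File("data.bin");                   // illustrative path
        channel.writeAndFlush(new DefaultFileRegion(file, 0, file.length()));
    }
}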

Netty internal execution process

Server:





1. Create a ServerBootstrap instance
2. Set and bind the Reactor thread pool (EventLoopGroup), which processes all the Channels registered to the Selectors of the Reactor threads
3. Set and bind the server-side Channel
4. Create the ChannelPipeline and ChannelHandlers used to handle network events; network events flow through the pipeline as a stream, and the handlers perform most of the custom logic, such as codecs and SSL authentication
5. Bind and start listening on the port
6. The Reactor thread (NioEventLoop) polls the Selector, runs the pipeline methods, and ultimately schedules and executes the ChannelHandlers
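Putting the steps together, a typical server startup sketch (assuming Netty 4.x; the port, frame length, and FirstServerHandler are illustrative, reusing the hypothetical handler from the sticky-packet section) looks like this:

import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioServerSocketChannel;
import io.netty.handler.codec.FixedLengthFrameDecoder;

public class NettyServer {
    public static void main(String[] args) throws Exception {
        EventLoopGroup boss = new NioEventLoopGroup(1);                // 2. Reactor thread pools
        EventLoopGroup worker = new NioEventLoopGroup();
        try {
            ServerBootstrap bootstrap = new ServerBootstrap();         // 1. ServerBootstrap instance
            bootstrap.group(boss, worker)
                     .channel(NioServerSocketChannel.class)            // 3. server-side Channel
                     .childHandler(new ChannelInitializer<SocketChannel>() {
                         @Override
                         protected void initChannel(SocketChannel ch) {
                             // 4. ChannelPipeline and handlers (codec + business logic)
                             ch.pipeline().addLast(new FixedLengthFrameDecoder(31));
                             ch.pipeline().addLast(new FirstServerHandler());
                         }
                     });
            ChannelFuture future = bootstrap.bind(8000).sync();        // 5. bind and start listening
            future.channel().closeFuture().sync();                     // 6. NioEventLoops drive the pipeline from here on
        } finally {
            boss.shutdownGracefully();
            worker.shutdownGracefully();
        }
    }
}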

Client:







Conclusion

The above is my understanding of Netty. If you have different views, you are welcome to discuss!