Due to limited space, please see the previous post: 30,000 Words and 50 Illustrations: In-depth Analysis of Netty Architecture and Principles (1)

2. Netty architecture and working principles

2.1. Why Netty

Why build Netty when Java already provides NIO? Mainly because Java NIO has the following disadvantages:

1) The Java NIO class library and API are large and complicated, cumbersome to use, and impose a heavy development workload.

2) To use Java NIO, programmers need excellent Java multithreading skills and must be very familiar with network programming, because they have to handle a series of difficult problems themselves, such as disconnection and reconnection, transient network drops, half-packet reads and writes, caching of failed sends, network congestion, and abnormal traffic.

3) Java NIO has bugs, such as the notorious epoll bug, which causes the Selector to spin on empty polls and consume CPU heavily.

Netty wraps the JDK's built-in NIO API to solve the above problems and to improve the development efficiency and reliability of IO programs. Netty's advantages:

1) Elegant design: a unified API for blocking and non-blocking sockets, a flexible and extensible event model, and a highly customizable threading model.

2) Higher performance and throughput: zero-copy techniques minimize unnecessary memory copying and reduce resource consumption.

3) Secure transmission features.

4) Support for a variety of mainstream protocols, with many built-in codecs, plus support for developing private protocols of your own.

Note: "supporting TCP, UDP, HTTP, WebSocket and other protocols" means that Netty provides the relevant programming classes and interfaces. The rest of this article therefore uses Netty-based TCP Server/Client development as the running example to show Netty's core principles; Server/Client development for other protocols is not covered. Helping readers build internal strength rather than teaching tricks is my starting point 🙂

The following figure shows the Netty architecture diagram on the Netty official website.

Netty's power can be seen in a few key words: zero-copy; an extensible event model; support for protocols such as TCP, UDP, HTTP, and WebSocket; secure transmission, compression, large-file transfer, codec support, and so on.

2.2. Several Reactor threading patterns

Traditional BIO server programming adopts a "one thread per connection" processing model. The disadvantages are obvious: when facing a large number of concurrent client connections, the server's resources come under great pressure, and thread utilization is very low, because if the current connection has no data to read, the thread blocks on the read operation. The basic form of this model is shown below (image from the internet).

NIO server programming instead uses the Reactor model (also known as the Dispatcher model). The Reactor model has two key elements:

1) IO multiplexing: multiple connections share one multiplexer, so the application thread does not need to block waiting on every connection; it only blocks waiting on the multiplexer. When new data is available on some connection, the thread returns from the blocked state and processes the business on that connection.

2) Thread pooling: instead of creating a dedicated thread for each connection, the application hands the business processing tasks of a connection to threads in a thread pool, so one thread can serve multiple connections. A minimal sketch of both elements follows this list.
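To make the two elements concrete, here is a minimal Java NIO sketch (plain NIO, not Netty): a single thread blocks only on the shared multiplexer (Selector), and business work on ready connections would be handed to a thread pool. The port and class name are arbitrary choices for illustration.

import java.net.InetSocketAddress;
import java.nio.channels.*;
import java.util.Iterator;

public class MultiplexerSketch {
    public static void main(String[] args) throws Exception {
        Selector selector = Selector.open();
        ServerSocketChannel serverChannel = ServerSocketChannel.open();
        serverChannel.bind(new InetSocketAddress(8080)); // port chosen arbitrarily for illustration
        serverChannel.configureBlocking(false);
        serverChannel.register(selector, SelectionKey.OP_ACCEPT);

        while (true) {
            // element 1: block on the multiplexer only, not on any individual connection
            selector.select();
            Iterator<SelectionKey> it = selector.selectedKeys().iterator();
            while (it.hasNext()) {
                SelectionKey key = it.next();
                it.remove();
                if (key.isAcceptable()) {
                    // new connection: accept it and register it with the same Selector
                    SocketChannel client = serverChannel.accept();
                    client.configureBlocking(false);
                    client.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {
                    // data is ready on some connection: in the Reactor model this business work
                    // would be handed to a thread pool (element 2), e.g. workerPool.submit(...)
                }
            }
        }
    }
}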

The following diagram illustrates the basic form of the Reactor model (image from the internet):

The Reactor model has two core components:

1) Reactor (also called the ServiceHandler): runs in a separate thread, listening for IO events and dispatching them to the appropriate handlers so that they can be reacted to.

2) Handlers: perform the actual processing in response to IO events, using non-blocking operations.

The Reactor model is the key to achieving high concurrency in network IO programs. It comes in three variants: single-Reactor single-thread, single-Reactor multi-thread, and master/slave Reactor multi-thread.

2.2.1. Single-reactor Single-thread model

The basic form of the single-Reactor single-thread model is as follows (image from the internet):

The basic workflow of this mode is as follows:

1) The Reactor listens for client request events through select and, after receiving an event, distributes it through dispatch.

2) If the event is a connection request, the Acceptor handles it through accept and then creates a Handler object to take care of subsequent business processing once the connection is established.

3) If the event is not a connection request, the Reactor dispatches it to the Handler of that connection for processing.

4) The Handler completes the full read -> business processing -> send cycle.

The advantage of this mode is simplicity: there are no multithreading, inter-process communication, or contention problems, and one thread completes all event responses and business processing. The disadvantages, of course, are obvious:

1) Performance: with only one thread, the model cannot exploit a multi-core CPU. While a Handler is processing business on one connection, the process cannot handle events on other connections, which easily becomes a performance bottleneck.

2) Reliability: if the thread terminates unexpectedly or enters an infinite loop, the whole communication module becomes unavailable, unable to receive and process external messages, resulting in node failure.

The single-Reactor single-thread model is suitable when the number of clients is limited and business processing is fast, for example Redis, whose commands are typically processed in O(1) time.

2.2.2. Single-reactor multithreaded model

The basic form of the single-Reactor multi-thread model is as follows (image from the internet):

The basic workflow of this mode is as follows:

1) The Reactor listens for client request events through select and, after receiving an event, distributes it through dispatch.

2) If the event is a connection request, the Acceptor handles it through accept and then creates a Handler object to take care of subsequent business processing once the connection is established.

3) If the event is not a connection request, the Reactor dispatches it to the Handler of that connection. The Handler is only responsible for responding to events, not for the actual business processing: after reading the request data through read, it hands the work off to the Worker thread pool behind it.

4) The Worker thread pool allocates a thread to do the real business processing and returns the result to the Handler, which then sends the response data to the client through send.

The advantage of this model is that it makes full use of a multi-core CPU. The disadvantages are that sharing and controlling data across multiple threads is relatively complex, and that the Reactor still handles the monitoring and dispatching of all events in a single thread, which easily becomes a bottleneck under high concurrency.

2.2.3. Master/slave Reactor multi-thread model

The basic form of the master/slave Reactor multi-thread model is as follows (the first image is from the internet; the second is from Scalable IO in Java by Doug Lea, the author of JUC):

In the single-Reactor multi-thread model, the Reactor runs in a single thread and easily becomes a bottleneck under high concurrency. The master/slave Reactor multi-thread model lets the Reactor run in multiple threads (a MainReactor thread and SubReactor threads). The basic workflow of this mode is as follows:

1) The Reactor main thread, MainReactor, listens for client connection events through select and handles them through the Acceptor.

2) When the Acceptor has processed a client connection (i.e., the Socket connection with the client is established), the MainReactor assigns the connection to a SubReactor. (In other words, the MainReactor only listens for client connection requests; subsequent IO events on the connection are listened for by the SubReactor.)

3) The SubReactor adds the connection to its connection queue for listening and creates a Handler to process various events.

4) When a new event occurs on the connection, the SubReactor calls the corresponding Handler.

5) Handler reads the request data from the connection through read and distributes the request data to the Worker thread pool for business processing.

6) The Worker thread pool allocates a thread to do the real business processing and returns the result to the Handler, which then sends the response data to the client through send.

7) A MainReactor can correspond to multiple SubReactors, that is, one MainReactor thread can correspond to multiple SubReactor threads.

The advantages of this model are:

1) The responsibilities of the MainReactor thread and the SubReactor threads are clear: the MainReactor thread only accepts new connections, and the SubReactor threads complete the subsequent business processing.

2) The data interaction between the MainReactor thread and the SubReactor threads is simple: the MainReactor thread only needs to hand the new connection to a SubReactor thread, and the SubReactor thread does not need to return any data.

3) Multiple SubReactor threads can handle higher concurrent requests.

The disadvantage of this pattern is high programming complexity. However, because of its obvious advantages, it is widely used in many projects, including Nginx, Memcached, Netty, etc.

This pattern is also known as the 1+M+N threading pattern for servers, meaning that a server developed with it has 1 (or more) connection-establishment thread + M IO threads + N business-processing threads. This is a mature server programming pattern in the industry.

2.3. What Netty looks like

Netty's design is based on the master/slave Reactor multi-thread model, with some improvements. This section presents Netty incrementally, starting with a simple version and gradually fleshing it out until the full picture emerges.

The simple version of Netty looks like this:

With regard to this picture, the following points should be made:

1) The BossGroup thread maintains a Selector with which the ServerSocketChannel is registered, and it cares only about connection request events (this is equivalent to the main Reactor).

2) When a connection request event arrives from a client, the corresponding SocketChannel is obtained through the ServerSocketChannel.accept() method, wrapped as a NioSocketChannel, and registered with the Selector of a WorkerGroup thread. Each such Selector runs in its own thread (equivalent to a Reactor).

3) When the Selector in the WorkerGroup thread listens for the I/O event it is interested in, it calls Handler to handle it.

Let’s add some details to this simple version of Netty:

With regard to this picture, the following points should be made:

1) There are two groups of thread pools: BossGroup and WorkerGroup. Threads in BossGroup are responsible for establishing connections with clients, while threads in WorkerGroup are responsible for reading and writing connections.

2) Both the BossGroup and the WorkerGroup can contain multiple threads that perform event processing in a continuous loop, and each thread contains a Selector that listens for the Channels registered with it.

3) Each thread in the BossGroup loops through the following three steps:

3.1) Poll for accept events (OP_ACCEPT) of the ServerSocketChannel registered with its Selector

3.2) Handle the accept event: establish the connection with the client, generate a NioSocketChannel, and register it with a Selector on one of the WorkerGroup threads

3.3) Process the tasks in this loop's task queue

4) Each thread in the WorkerGroup loops through the following three steps:

4.1) Poll for read/write events (OP_READ/OP_WRITE) of the NioSocketChannels registered with its Selector

4.2) Handle read/write events on the corresponding NioSocketChannel

4.3) Process the tasks in this loop's task queue

Let's take a look at the final version of Netty, as shown below.

With regard to this picture, the following points should be made:

1) Netty abstracts two groups of thread pools: the BossGroup and the WorkerGroup, also called the BossNioEventLoopGroup and WorkerNioEventLoopGroup. Each thread pool contains NioEventLoop threads. Threads in the BossGroup are responsible for establishing connections with clients, while threads in the WorkerGroup are responsible for reading and writing on those connections. Both BossGroup and WorkerGroup are of type NioEventLoopGroup.

2) A NioEventLoopGroup is equivalent to a group of event loops. This group contains multiple event loops, each of which is a NioEventLoop.

3) NioEventLoop represents a thread that performs event processing in a continuous loop. Each NioEventLoop contains a Selector that listens for the Socket network connection (Channel) registered on it.

4) NioEventLoopGroup can contain multiple threads, that is, multiple NioEventLoops.

5) Each BossNioEventLoop loops through the following three steps:

5.1) select: polls for accept events (OP_ACCEPT) of the ServerSocketChannel registered with its Selector

5.2) processSelectedKeys: handles the accept event, establishes the connection with the client, generates a NioSocketChannel, and registers it with a Selector on a WorkerNioEventLoop

5.3) runAllTasks: processes the remaining tasks in the task queue

6) Each WorkerNioEventLoop loops through the following three steps:

6.1) select: polls for read/write events (OP_READ/OP_WRITE) of the NioSocketChannels registered with its Selector

6.2) processSelectedKeys: processes read/write events on the corresponding NioSocketChannel

6.3) runAllTasks: processes the remaining tasks in the task queue

7) When each Worker NioEventLoop processes business on a NioSocketChannel (the processSelectedKeys step), it uses the channel's Pipeline. The Pipeline maintains a number of handlers (intercepting handlers, filtering handlers, custom handlers, and so on). Pipeline is not explained in detail here.
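To tie the three steps together, here is a simplified conceptual sketch of such an event loop. This is my own illustration, not Netty's source: the real NioEventLoop.run() additionally deals with wakeups, the epoll-bug workaround mentioned in Section 2.1, IO-ratio control, and more.

import java.nio.channels.Selector;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

class EventLoopSketch implements Runnable {
    private final Selector selector;
    private final Queue<Runnable> taskQueue = new ConcurrentLinkedQueue<>();
    private volatile boolean terminated;

    EventLoopSketch(Selector selector) { this.selector = selector; }

    @Override
    public void run() {
        while (!terminated) {
            try {
                selector.select(1000);   // 1. select: poll events registered on this loop's Selector
                processSelectedKeys();   // 2. handle OP_ACCEPT (boss) or OP_READ/OP_WRITE (worker) events
                runAllTasks();           // 3. drain this loop's task queue
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }

    private void processSelectedKeys() { /* dispatch each ready key to its handler/pipeline */ }

    private void runAllTasks() {
        Runnable task;
        while ((task = taskQueue.poll()) != null) {
            task.run();
        }
    }
}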

2.4. Netty-based TCP Server/Client Case

Let's write some code to get a feel for what Netty looks like. The following two pieces of code are a Netty-based TCP Server and a TCP Client, respectively.

The server code is:

/**
 * required dependency:
 * <dependency>
 *     <groupId>io.netty</groupId>
 *     <artifactId>netty-all</artifactId>
 *     <version>4.1.52.Final</version>
 * </dependency>
 */
public static void main(String[] args) throws InterruptedException {
    // Create BossGroup and WorkerGroup
    // 1. BossGroup only handles connection requests
    // 2. WorkerGroup handles the business on the established connections
    EventLoopGroup bossGroup = new NioEventLoopGroup();
    EventLoopGroup workerGroup = new NioEventLoopGroup();
    try {
        ServerBootstrap bootstrap = new ServerBootstrap();
        bootstrap
                // set the thread groups
                .group(bossGroup, workerGroup)
                // specify the server channel implementation class (for Netty to create by reflection)
                .channel(NioServerSocketChannel.class)
                // set the capacity of the pending-connection queue, used when clients send connection
                // requests faster than the NioServerSocketChannel can accept them; the option() method
                // adds configuration to the server-side ServerSocketChannel
                // (the backlog value was omitted in the original text; 1024 is used here as an example)
                .option(ChannelOption.SO_BACKLOG, 1024)
                // the childOption() method adds configuration to the SocketChannels accepted by the server
                .childOption(ChannelOption.SO_KEEPALIVE, true)
                // the handler() method sets the handler for the BossGroup;
                // the childHandler() method sets the handler for the WorkerGroup
                .childHandler(
                        // create a channel initialization object
                        new ChannelInitializer<SocketChannel>() {
                            // add business handlers to the Pipeline
                            @Override
                            protected void initChannel(SocketChannel socketChannel) throws Exception {
                                socketChannel.pipeline().addLast(
                                        new NettyServerHandler()
                                );
                                // call socketChannel.pipeline().addLast() again to add more handlers
                            }
                        }
                );
        System.out.println("server is ready...");
        // bind() is asynchronous; ChannelFuture reflects Netty's asynchronous model
        ChannelFuture channelFuture = bootstrap.bind(8080).sync();
        channelFuture.channel().closeFuture().sync();
    } finally {
        bossGroup.shutdownGracefully();
        workerGroup.shutdownGracefully();
    }
}

/**
 * A custom Handler must extend one of the HandlerAdapters specified by Netty.
 * InboundHandlers handle IO events when data flows into the local end (the server);
 * OutboundHandlers handle IO events when data flows out of the local end (the server).
 */
static class NettyServerHandler extends ChannelInboundHandlerAdapter {

    /**
     * Executed when the channel has data to read.
     *
     * @param ctx context object
     * @param msg data sent by the client
     * @throws Exception
     */
    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
        // receive data from the client
        System.out.println("client address: " + ctx.channel().remoteAddress());
        // ByteBuf is a Netty class with higher performance than NIO's ByteBuffer
        ByteBuf byteBuf = (ByteBuf) msg;
        System.out.println("data from client: " + byteBuf.toString(CharsetUtil.UTF_8));
    }

    /**
     * Executed when the data has been completely read.
     *
     * @param ctx context object
     * @throws Exception
     */
    @Override
    public void channelReadComplete(ChannelHandlerContext ctx) throws Exception {
        // send a response to the client
        ctx.writeAndFlush(
                // the Unpooled class is a Netty utility for operating on buffers
                Unpooled.copiedBuffer("hello client! i have got your data.", CharsetUtil.UTF_8)
        );
    }

    /**
     * Executed when an exception occurs.
     *
     * @param ctx   context object
     * @param cause exception object
     * @throws Exception
     */
    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception {
        // close the Socket connection with the client
        ctx.channel().close();
    }
}

The client side code is:

/**
 * required dependency:
 * <dependency>
 *     <groupId>io.netty</groupId>
 *     <artifactId>netty-all</artifactId>
 *     <version>4.1.52.Final</version>
 * </dependency>
 */
public static void main(String[] args) throws InterruptedException {
    // The client needs only one event loop group, i.e. the BossGroup
    EventLoopGroup eventLoopGroup = new NioEventLoopGroup();
    try {
        Bootstrap bootstrap = new Bootstrap();
        bootstrap
                // set the thread group
                .group(eventLoopGroup)
                // specify the client channel implementation class (for Netty to create by reflection)
                .channel(NioSocketChannel.class)
                // the handler() method sets the handler for the BossGroup
                .handler(
                        // create a channel initialization object
                        new ChannelInitializer<SocketChannel>() {
                            // add business handlers to the Pipeline
                            @Override
                            protected void initChannel(SocketChannel socketChannel) throws Exception {
                                socketChannel.pipeline().addLast(
                                        new NettyClientHandler()
                                );
                                // call socketChannel.pipeline().addLast() again to add more handlers
                            }
                        }
                );
        System.out.println("client is ready...");
        // connect() is asynchronous; ChannelFuture reflects Netty's asynchronous model
        ChannelFuture channelFuture = bootstrap.connect("127.0.0.1", 8080).sync();
        channelFuture.channel().closeFuture().sync();
    } finally {
        eventLoopGroup.shutdownGracefully();
    }
}

/**
 * A custom Handler must extend one of the HandlerAdapters specified by Netty.
 * InboundHandlers handle IO events when data flows into the local end (the client);
 * OutboundHandlers handle IO events when data flows out of the local end (the client).
 */
static class NettyClientHandler extends ChannelInboundHandlerAdapter {

    /**
     * Executed when the channel is ready (active).
     *
     * @param ctx context object
     * @throws Exception
     */
    @Override
    public void channelActive(ChannelHandlerContext ctx) throws Exception {
        // send data to the server
        ctx.writeAndFlush(
                // the Unpooled class is a Netty utility for operating on buffers;
                // copiedBuffer returns a ByteBuf object, similar to NIO's ByteBuffer
                Unpooled.copiedBuffer("hello server!", CharsetUtil.UTF_8)
        );
    }

    /**
     * Executed when the channel has data to read.
     *
     * @param ctx context object
     * @param msg data sent by the server
     * @throws Exception
     */
    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
        System.out.println("server address: " + ctx.channel().remoteAddress());
        // ByteBuf is a Netty class with higher performance than NIO's ByteBuffer
        ByteBuf byteBuf = (ByteBuf) msg;
        System.out.println("data from server: " + byteBuf.toString(CharsetUtil.UTF_8));
    }

    /**
     * Executed when an exception occurs.
     *
     * @param ctx   context object
     * @param cause exception object
     * @throws Exception
     */
    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception {
        // close the Socket connection with the server
        ctx.channel().close();
    }
}

What? Do you find programming with Netty harder and more work than you expected? Come on: in these two short pieces of code you have a server based on the master/slave Reactor multi-thread model, with high throughput, high concurrency, and asynchronous processing... What more do you want?

A few simple remarks about the two pieces of code above:

1) Bootstrap and ServerBootstrap are Bootstrap classes for the client and server respectively. A Netty application usually starts from a Bootstrap class, which is used to configure the entire Netty program, set the service processing class (Handler), bind ports, initiate connections, and so on.

2) The client creates a NioSocketChannel as the client-side channel to connect to the server.

3) The server first creates a NioServerSocketChannel as the server-side channel. Each time a client connection is accepted, a new NioSocketChannel is generated for that client.

4) When using Channel to build network IO programs, different protocols and blocking types correspond to different Channels in Netty. The commonly used channels are as follows:

  • NioSocketChannel: non-blocking TCP client Channel (the Channel used by the client in this case)
  • NioServerSocketChannel: non-blocking TCP server side Channel (the Channel used by the server side in this case)
  • NioDatagramChannel: non-blocking UDP Channel
  • NioSctpChannel: a non-blocking SCTP client Channel
  • NioSctpServerChannel: a non-blocking SCTP server Channel……

Start the server and the client, debug the server code above, and you will find that:

1) BossGroup and WorkerGroup each contain 16 threads (NioEventLoops) by default; this is because my PC has 8 cores and the default number of NioEventLoops is coreNum * 2. The BossGroup threads play the role of main Reactors and the WorkerGroup threads that of SubReactors.

You can specify the number of NioEventLoops when creating the BossGroup and the WorkerGroup, as follows:

EventLoopGroup bossGroup = new NioEventLoopGroup(1);
EventLoopGroup workerGroup = new NioEventLoopGroup(16);

This allows better allocation of thread resources.

2) Each NioEventLoop contains the following attributes (its own Selector, task queue, executor, and so on):

3) Set a breakpoint in the channelRead method of the server-side NettyServerHandler:

You can see that ctx contains the following attributes:

You can see:

  • The current ChannelHandlerContext ctx is one link in the ChannelHandlerContext responsibility chain; you can see its next and prev attributes
  • The current ChannelHandlerContext ctx contains a Handler
  • The current ChannelHandlerContext ctx contains a Pipeline
  • The Pipeline is essentially a doubly linked circular list; you can see its tail and head attributes
  • The Pipeline contains a Channel, and the Channel in turn refers back to the Pipeline…

Starting in the next section, I’ll take a closer look at these two pieces of code and show you Netty in more detail.

2.5. Netty Handler component

Both the custom NettyServerHandler in the server code and the custom NettyClientHandler in the client code extend ChannelInboundHandlerAdapter. ChannelInboundHandlerAdapter in turn extends ChannelHandlerAdapter, which implements ChannelHandler:

public class ChannelInboundHandlerAdapter 
    extends ChannelHandlerAdapter 
    implements ChannelInboundHandler {
    ......


public abstract class ChannelHandlerAdapter 
    implements ChannelHandler {
    ......

Therefore, both the custom NettyServerHandler in the server code and the custom NettyClientHandler in the client code can be collectively referred to as ChannelHandler.

A ChannelHandler processes the IO events it is responsible for and passes them on to the next ChannelHandler. Multiple ChannelHandlers therefore form a chain of responsibility, which lives in the ChannelPipeline.

The typical processing flow for data in a Netty server or client is: read data -> decode data -> process data -> encode data -> send data. Each of these steps is carried out by the ChannelHandler chain of responsibility; a sketch follows.
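For instance, the initChannel method from the Section 2.4 server could be extended roughly like this to show the decode and encode steps. StringDecoder and StringEncoder are codec handlers that ship with Netty; note that with the decoder in place the business handler would receive a decoded String rather than a ByteBuf.

@Override
protected void initChannel(SocketChannel socketChannel) throws Exception {
    socketChannel.pipeline()
            // inbound: bytes read from the wire are decoded into Strings
            .addLast(new StringDecoder(CharsetUtil.UTF_8))
            // outbound: Strings written back are encoded into bytes before being sent
            .addLast(new StringEncoder(CharsetUtil.UTF_8))
            // inbound: business processing on the decoded message
            .addLast(new NettyServerHandler());
}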

The ChannelHandler system in Netty is as follows (the first figure is from the internet):

Among them:

  • ChannelInboundHandler is used to handle inbound IO events
  • ChannelOutboundHandler is used to handle outbound IO events
  • ChannelInboundHandlerAdapter is an adapter for handling inbound IO events
  • ChannelOutboundHandlerAdapter is an adapter for handling outbound IO events

ChannelPipeline provides a container for the ChannelHandler chain. Take a client application as an example: if an event travels from the client to the server, we call it outbound, and the data the client sends to the server is processed by the ChannelOutboundHandlers in the Pipeline; if an event travels from the server to the client, we call it inbound, and the data the server sends to the client is processed by the ChannelInboundHandlers in the Pipeline.

As noted above, both the custom NettyServerHandler in the server code and the custom NettyClientHandler in the client code extend ChannelInboundHandlerAdapter, which provides the following methods:

As the method names suggest, they are fired by different events. For example, channelRegistered() is executed when a Channel is registered, handlerAdded() when a ChannelHandler is added, channelRead() when inbound data is received, channelReadComplete() after the inbound data has been fully read, and so on.
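A minimal sketch of a handler that overrides a few of these callbacks just to show when they fire (the class name and log messages are my own, for illustration only):

import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;

public class LifecycleLoggingHandler extends ChannelInboundHandlerAdapter {

    @Override
    public void handlerAdded(ChannelHandlerContext ctx) throws Exception {
        System.out.println("handlerAdded: this handler was added to a pipeline");
        super.handlerAdded(ctx);
    }

    @Override
    public void channelRegistered(ChannelHandlerContext ctx) throws Exception {
        System.out.println("channelRegistered: the channel was registered with an EventLoop's Selector");
        super.channelRegistered(ctx);
    }

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
        System.out.println("channelRead: inbound data arrived");
        super.channelRead(ctx, msg); // pass the event on to the next inbound handler
    }

    @Override
    public void channelReadComplete(ChannelHandlerContext ctx) throws Exception {
        System.out.println("channelReadComplete: the current read operation finished");
        super.channelReadComplete(ctx);
    }
}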

2.6. Netty Pipeline components

Netty’s ChannelPipeline, described in the previous section, maintains a chain of ChannelHandler responsibilities for intercepting or handling inbound and outbound events and operations. This section gives a further description.

ChannelPipeline implements an advanced form of the intercepting filter pattern, giving the user complete control over how events are handled and how the ChannelHandlers in a Channel interact with each other.

Each Netty Channel contains one ChannelPipeline (the Channel and the ChannelPipeline refer to each other). The ChannelPipeline in turn maintains a doubly linked circular list of ChannelHandlerContexts, and each ChannelHandlerContext contains a ChannelHandler. (Earlier sections said, for simplicity, that the ChannelPipeline maintains a chain of ChannelHandlers; the full picture is given here.)

As shown below (image from the internet):

Remember this picture? It is a debugger screenshot of the Netty-based Server application from above, showing what a ChannelHandlerContext contains:

The ChannelHandlerContext contains the ChannelHandler and is associated with the corresponding Channel and Pipeline. ChannelHandlerContext, ChannelHandler, Channel, and ChannelPipeline refer to each other as their own attributes.

When an inbound event is processed, the event and its data flow from the head ChannelHandlerContext of the Pipeline's doubly linked list towards the tail, being processed in turn by each ChannelInboundHandler along the way (for example, a decoding handler). Outbound events and data flow from the tail ChannelHandlerContext towards the head, being processed in turn by each ChannelOutboundHandler along the way (for example, an encoding handler). A small ordering sketch follows.
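A sketch of how handler order and direction interact (the handler names are arbitrary; this could sit inside the initChannel method from Section 2.4): inbound events visit the inbound handlers from head to tail, while a write issued at the tail end visits the outbound handlers from tail to head.

ChannelPipeline p = socketChannel.pipeline();
p.addLast("inboundA",  new ChannelInboundHandlerAdapter());   // sees inbound data 1st
p.addLast("outboundA", new ChannelOutboundHandlerAdapter());  // sees outbound data 2nd (last)
p.addLast("inboundB",  new ChannelInboundHandlerAdapter());   // sees inbound data 2nd
p.addLast("outboundB", new ChannelOutboundHandlerAdapter());  // sees outbound data 1st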

2.7. Netty’s EventLoopGroup component

In the Netty-based TCP Server code, there are two EventLoopGroups — bossGroup and workerGroup. An EventLoopGroup is an abstraction over a set of EventLoops.

Netty's EventLoop ultimately inherits from JUC's Executor, so an EventLoop is essentially a JUC Executor:

public interface Executor {
    /**
     * Executes the given command at some time in the future.
     */
    void execute(Runnable command);
}

To make better use of multi-core CPUs, Netty usually has multiple EventLoops working at the same time. Each EventLoop maintains a Selector instance, which listens for the IO events of the Channels registered with it.

The EventLoopGroup provides a next() method that selects one EventLoop from the group, according to certain rules, to handle IO events; see the small example below.
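For example, you can call next() yourself to grab one EventLoop from a group and run a task on its thread (a trivial sketch, assuming the workerGroup variable from the Section 2.4 server code):

// pick one EventLoop from the group (Netty chooses one for you, typically round-robin)
// and execute a task on that EventLoop's thread
EventLoop eventLoop = workerGroup.next();
eventLoop.execute(new Runnable() {
    @Override
    public void run() {
        System.out.println("running on " + Thread.currentThread().getName());
    }
});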

On the server side, the Boss EventLoopGroup usually contains only one Boss EventLoop (a single thread), which maintains the Selector instance that the ServerSocketChannel is registered with. This EventLoop continuously polls for OP_ACCEPT events (client connection events) and hands each accepted SocketChannel to the Worker EventLoopGroup; the Worker EventLoopGroup selects one of its Worker EventLoops via next() and registers the SocketChannel with that EventLoop's Selector, which is then responsible for all subsequent IO events on the SocketChannel. The whole process is shown below:

2.8. Netty TaskQueue

Each NioEventLoop in Netty has a task queue. It buffers tasks when they are submitted faster than the thread can process them, and it can also be used to handle an IO event picked up by the Selector asynchronously.

Task queues in Netty can be used in three scenarios:

1) To run ordinary user-defined tasks

2) To run user-defined scheduled tasks

3) When a thread other than the current Reactor thread calls methods of a Channel owned by the current Reactor

Take the first scenario: if the channelRead method of the Netty server-side Handler from Section 2.4 executes a time-consuming procedure, blocking as in the code below would clearly reduce the concurrency of the current NioEventLoop:

/**
 * Executed when the channel has data to read.
 *
 * @param ctx context object
 * @param msg data sent by the client
 * @throws Exception
 */
@Override
public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
    // simulate a time-consuming operation executed directly on the NioEventLoop thread
    Thread.sleep(LONG_TIME);
    ByteBuf byteBuf = (ByteBuf) msg;
    System.out.println("data from client: " + byteBuf.toString(CharsetUtil.UTF_8));
}

The improvement is to use the task queue; the code is as follows:

/**
 * Executed when the channel has data to read.
 *
 * @param ctx context object
 * @param msg data sent by the client
 * @throws Exception
 */
@Override
public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
    final Object finalMsg = msg;
    // execute asynchronously by putting the time-consuming operation into the task queue
    // via ctx.channel().eventLoop().execute()
    ctx.channel().eventLoop().execute(new Runnable() {
        public void run() {
            // simulate a time-consuming operation
            try {
                Thread.sleep(LONG_TIME);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            ByteBuf byteBuf = (ByteBuf) finalMsg;
            System.out.println("data from client: " + byteBuf.toString(CharsetUtil.UTF_8));
        }
    });
    // call ctx.channel().eventLoop().execute() again to put more operations into the task queue
    System.out.println("return right now.");
}

Tracing the execution of this method with a breakpoint shows that the time-consuming task really is placed in the taskQueue of the current NioEventLoop.

For the second scenario: if the procedure executed in the channelRead method of the server-side Handler from Section 2.4 does not need to run immediately but at a scheduled time, the code could be written like this:

/**
 * Executed when the channel has data to read.
 *
 * @param ctx context object
 * @param msg data sent by the client
 * @throws Exception
 */
@Override
public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
    final Object finalMsg = msg;
    // put the operation into the scheduled task queue via ctx.channel().eventLoop().schedule(),
    // to be executed 5 minutes from now
    ctx.channel().eventLoop().schedule(new Runnable() {
        public void run() {
            ByteBuf byteBuf = (ByteBuf) finalMsg;
            System.out.println("data from client: " + byteBuf.toString(CharsetUtil.UTF_8));
        }
    }, 5, TimeUnit.MINUTES);
    // call ctx.channel().eventLoop().schedule() again to queue more scheduled operations
    System.out.println("return right now.");
}

Tracing the execution of this method with a breakpoint shows that the scheduled task really is placed in the scheduledTaskQueue of the current NioEventLoop.

For the third scenario: in a push system built on Netty, a business thread looks up the SocketChannel reference for a given user id and then calls its write method to push a message to that user. The write task ends up in the task queue and is eventually consumed asynchronously. This scenario is an application of the previous two; it involves too much business-specific detail, so no full sample code is given here and interested readers can complete it themselves. A rough sketch of the idea follows.
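Purely as an illustration (a minimal sketch of my own, not code from the article; the user-id-to-Channel registry and the pushToUser method are assumed, hypothetical pieces):

// hypothetical registry mapping user ids to their Channels, populated elsewhere (e.g. on login)
private static final Map<Long, Channel> USER_CHANNELS = new ConcurrentHashMap<>();

/** Called from a business thread that is NOT the channel's own EventLoop thread. */
public static void pushToUser(long userId, String message) {
    Channel channel = USER_CHANNELS.get(userId);
    if (channel != null && channel.isActive()) {
        // writeAndFlush is called from a non-EventLoop thread, so Netty wraps the write as a task
        // and puts it on the channel's EventLoop task queue, where it is consumed asynchronously
        channel.writeAndFlush(Unpooled.copiedBuffer(message, CharsetUtil.UTF_8));
    }
}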

2.9. Netty’s Future and Promise

Most of the IO interfaces Netty exposes to users (i.e., the IO methods of Netty's Channel) are asynchronous: they immediately return a Netty Future while the IO operation itself proceeds asynchronously, so the caller cannot obtain the result of the IO operation directly. To get the result, the caller uses Netty's Future (ChannelFuture extends Netty's Future, which in turn extends JUC's Future) to query the execution status, wait for completion, and fetch the result. Anyone who has used the JUC Future interface will be familiar with this mechanism, so it is not described further here. Alternatively, you can add a callback via Netty Future's addListener() method to process the IO result asynchronously, as follows:

// since bootstrap.connect() is an asynchronous operation, add a listener to its ChannelFuture
final ChannelFuture channelFuture = bootstrap.connect("127.0.0.1", 8080).sync();
channelFuture.addListener(new ChannelFutureListener() {
    /**
     * Callback method, executed when the operation the future is listening on completes
     */
    public void operationComplete(ChannelFuture future) throws Exception {
        if (channelFuture.isSuccess()) {
            System.out.println("client has connected to server!");
        } else {
            System.out.println("connect to server fail!");
            // TODO other processing
        }
    }
});

Netty Future provides the following interfaces:

Note: some sources say that "all IO operations in Netty are asynchronous", which is obviously incorrect. Netty is based on Java NIO, which is synchronous, non-blocking IO. Netty wraps Java NIO and exposes asynchronous interfaces on top of it, which is why this article says that most of the IO interfaces Netty provides to users (i.e., the IO methods of Netty's Channel) are asynchronous. For example, most of the IO interfaces declared in io.netty.channel.ChannelOutboundInvoker (from which Netty's Channel inherits its IO methods through multiple interface inheritance) return a Netty Future:

A Promise is a writable Future. The Future itself has no interfaces for write operations; Netty extends Future with Promise so that the result of an IO operation can be set. The Promise interface definition is shown in the figure below: compared with the Future interface shown above, it has a number of additional setXXX methods.
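To see the writable side in isolation, here is a small standalone sketch using Netty's DefaultPromise directly (the DefaultEventExecutor and the string values are arbitrary choices for illustration):

import io.netty.util.concurrent.DefaultEventExecutor;
import io.netty.util.concurrent.DefaultPromise;
import io.netty.util.concurrent.EventExecutor;
import io.netty.util.concurrent.FutureListener;
import io.netty.util.concurrent.Promise;

public class PromiseSketch {
    public static void main(String[] args) {
        EventExecutor executor = new DefaultEventExecutor();
        // the producer side holds the Promise (the writable view)...
        Promise<String> promise = new DefaultPromise<>(executor);
        // ...while a consumer only needs the read-only Future view and its listeners
        promise.addListener((FutureListener<String>) future ->
                System.out.println("done, success=" + future.isSuccess() + ", result=" + future.getNow()));
        // later, when the asynchronous work finishes, the producer sets the result,
        // which completes the Future and notifies all listeners
        promise.setSuccess("hello");
        executor.shutdownGracefully();
    }
}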

When Netty initiates an IO write operation, it creates a new ChannelPromise object. For example, calling the write(Object msg) method of ChannelHandlerContext creates a new ChannelPromise. The relevant code is as follows:

@Override
public ChannelFuture write(Object msg) {
    return write(msg, newPromise());
}
...
@Override
public ChannelPromise newPromise() {
    return new DefaultChannelPromise(channel(), executor());
}
...

When the IO operation fails or completes, the result is set through promise.setSuccess() or promise.setFailure(), and all listeners are notified. I will take a source-level look at how Netty's Future/Promise works in the next article.
