
When a client makes an HTTP request, what happens on the server side?

Simply speaking, it can be divided into the following steps:

  1. Establish a network connection with the server based on the TCP protocol.
  2. Data is transferred to the server.
  3. The server receives the data, parses it, and processes the request logic.
  4. The server returns the result to the client.

In the traditional BIO model, when a client initiates a read request, the request blocks until the server has received the data and finished processing it. This is synchronous blocking I/O. To serve more than one client in the BIO model you have to fall back on a multi-threaded model, one thread per request, so that a single client does not monopolize the server connection and the number of connections can grow.

Synchronous blocking I/O has two blocking points:

  • The server blocks while waiting to accept a client connection.
  • The server blocks while waiting for data during I/O communication with the client.

This traditional BIO model has a serious problem, as shown in the following figure: if N clients make requests at the same time, the BIO model lets the server process only one request at a time, so client requests queue up and users wait a very long time for a response. In other words, the server has no concurrent processing capability, which is clearly unacceptable.

So how can the server be optimized?

Non-blocking IO

From the analysis above, we found that the server blocks while processing one request and cannot handle subsequent requests. Can we make the blocking points non-blocking? That is exactly what non-blocking I/O (NIO) does.

Non-blocking I/O: when a client sends a request to the server and the server's data is not ready, the call does not block; it returns immediately. But if the data is not ready, the client just receives an empty result, so how does the client eventually get the data?

As shown in the figure, the client can only obtain the result by polling. Compared with BIO, NIO blocks far less, which noticeably improves performance and the number of connections a server can sustain.

However, NIO still suffers from a lot of empty polling, and each poll is a system call (issuing a kernel instruction to load data from the NIC buffer and switching from user space to kernel space), so performance problems reappear as the number of connections grows.
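Below is a minimal sketch of what this polling looks like with plain non-blocking I/O (the host, port and buffer size are assumptions): `read()` returns immediately, so the caller keeps issuing system calls until data finally arrives.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;

public class NonBlockingPollingClient {
    public static void main(String[] args) throws IOException {
        SocketChannel channel = SocketChannel.open(new InetSocketAddress("localhost", 8080));
        channel.configureBlocking(false);          // read() now returns immediately instead of blocking
        ByteBuffer buffer = ByteBuffer.allocate(1024);
        while (true) {
            int read = channel.read(buffer);       // every iteration is a system call
            if (read > 0) {                        // data finally arrived
                buffer.flip();
                System.out.println("received " + read + " bytes");
                break;
            }
            if (read < 0) {                        // connection closed by the server
                break;
            }
            // read == 0: data not ready yet, poll again ("empty polling")
        }
        channel.close();
    }
}
```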

Multiplexing mechanism

The essence of I/O multiplexing is to allow a single process to monitor multiple file descriptors by means of a mechanism (the system kernel buffers I/O data), and to inform the program to perform read and write operations once a descriptor is ready (typically read or write)

What is an fd? In Linux, the kernel treats all external devices as files. Reading or writing a file goes through system calls provided by the kernel, which return an fd (file descriptor). Reads and writes on a socket likewise go through a file descriptor, the socketfd.

**select, poll and epoll** are the common I/O multiplexing mechanisms provided by the Linux API, so the following focuses on the select and epoll models.

  • **select**: the process passes one or more fds to the select system call and blocks on it; select then detects whether any of those fds are ready. This model has two drawbacks:

    • Because it can listen on many file descriptors at once (say 1000), when one of them becomes ready the process still has to scan all of them linearly, so the more fds it listens on, the higher the cost.
    • In addition, select limits the number of fds a single process can open, 1024 by default, which is rather low when a single machine needs to support thousands of TCP connections.
  • **epoll**: Linux also provides the epoll system call. epoll is event-driven rather than based on sequential scanning, so its performance is relatively higher. The key idea is that when one of the monitored fds becomes ready, epoll tells the process exactly which fd is ready, and the process only needs to read data from that fd. In addition, the maximum number of fds epoll can handle is the operating system's maximum number of file handles, which is far more than 1024.

(Because epoll can notify the application process via events which fd is readable, this kind of I/O is often called asynchronous non-blocking I/O. Strictly speaking it is pseudo-asynchronous, because the data still has to be copied synchronously from kernel space to user space. With truly asynchronous non-blocking I/O the data would already be fully ready, and the application would only need to read it from user space.)

The benefit of I/O multiplexing is that by multiplexing the blocking of many I/O operations onto a single select-style block, the system can process multiple client requests in a single thread. Its biggest advantage is low overhead: no new processes or threads need to be created, which reduces the system's resource cost. Figure 2-3 shows the overall idea.

When a client sends data to the server, the server registers the connection with the Selector multiplexer, so it does not block while the client is still transmitting. The server does not need to wait; it only needs to start one thread in which selector.select() blocks until some registered channel is ready. When a client's data transfer completes, select() returns the ready channel and the server performs the relevant processing.
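Here is a minimal sketch of such a select loop with the JDK Selector API, assuming a simple echo server (the port and buffer size are illustrative): one thread registers the listening channel and every client channel with a single Selector and handles whatever select() reports as ready.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.*;
import java.util.Iterator;

public class SelectorEchoServer {
    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(8080));
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);   // interested in new connections

        while (true) {
            selector.select();                               // blocks until at least one channel is ready
            Iterator<SelectionKey> it = selector.selectedKeys().iterator();
            while (it.hasNext()) {
                SelectionKey key = it.next();
                it.remove();
                if (key.isAcceptable()) {                    // a client finished connecting
                    SocketChannel client = server.accept();
                    client.configureBlocking(false);
                    client.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {               // a client's data is ready to read
                    SocketChannel client = (SocketChannel) key.channel();
                    ByteBuffer buffer = ByteBuffer.allocate(1024);
                    int read = client.read(buffer);
                    if (read < 0) { client.close(); continue; }
                    buffer.flip();
                    client.write(buffer);                    // echo the data back
                }
            }
        }
    }
}
```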

Asynchronous I/O

The main difference between asynchronous I/O and multiplexing is that once the data is ready, the application no longer has to issue a call to copy it out of kernel space. The system asynchronously copies the data into user space, and the application just uses it directly.

In Java we can use the NIO API to implement the multiplexing mechanism and this kind of pseudo-asynchronous I/O. The article on the evolution of network communication models demonstrates the Java API code for multiplexing, and that code turns out to be not only verbose but also cumbersome to use.

Netty's I/O model is built on non-blocking I/O; the underlying implementation relies on the multiplexer Selector of the JDK NIO framework.

A single multiplexer Selector can poll multiple channels at the same time, and with epoll mode, thousands of client connections can be accessed with only one thread polling the Selector.

Reactor model

Gee.cs.oswego.edu/dl/cpjslide…

Based on NIO multiplexing, the Reactor pattern was proposed as a high-performance I/O design pattern. Its core idea is to separate the response to I/O events from the business processing: one or more threads handle the I/O events and then dispatch the ready events to business handlers for asynchronous, non-blocking processing, as shown in Figure 2-5.

The Reactor model has three important components:

  • **Reactor**: dispatches I/O events to the corresponding Handler
  • **Acceptor**: handles client connection requests
  • **Handlers**: perform non-blocking read/write

This is the basic single-Reactor single-thread model (**the entire I/O operation is done by the same thread**).

The Reactor thread is responsible for multiplexing the sockets; when a new connection arrives it dispatches the CONNECT event to the Acceptor, and it dispatches I/O events to the Handler for processing.

The Acceptor builds a Handler: it fetches the SocketChannel associated with the client and binds it to the corresponding Handler. When that SocketChannel has a read/write event, the Reactor dispatches it and the Handler processes it (all I/O events are registered with the Selector and distributed by the Reactor).

The Reactor pattern is essentially the combination of I/O multiplexing and non-blocking I/O.
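The experiment a little further below refers to Reactor example code with a DispatchHandler; the original listing is not reproduced here, so the following is a minimal single-Reactor, single-thread sketch in the spirit of Doug Lea's slides. The class names (Reactor, Acceptor, DispatchHandler), the port and the echo logic are illustrative assumptions, not the article's exact code; the point is that accept, dispatch and handler execution all happen on one thread.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

public class Reactor implements Runnable {
    final Selector selector;
    final ServerSocketChannel serverChannel;

    Reactor(int port) throws IOException {
        selector = Selector.open();
        serverChannel = ServerSocketChannel.open();
        serverChannel.bind(new InetSocketAddress(port));
        serverChannel.configureBlocking(false);
        // The Acceptor is attached to the accept key; every ready key is dispatched to its attachment
        serverChannel.register(selector, SelectionKey.OP_ACCEPT, new Acceptor());
    }

    @Override
    public void run() {
        try {
            while (!Thread.interrupted()) {
                selector.select();                   // block until some registered channel is ready
                Iterator<SelectionKey> it = selector.selectedKeys().iterator();
                while (it.hasNext()) {
                    dispatch(it.next());             // the Reactor only dispatches, it does no business work
                    it.remove();
                }
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    void dispatch(SelectionKey key) {
        Runnable handler = (Runnable) key.attachment();
        if (handler != null) {
            handler.run();                           // runs on the Reactor thread itself
        }
    }

    // Acceptor: handles OP_ACCEPT and binds a DispatchHandler to every new connection
    class Acceptor implements Runnable {
        @Override
        public void run() {
            try {
                SocketChannel channel = serverChannel.accept();
                if (channel != null) {
                    new DispatchHandler(selector, channel);
                }
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }

    public static void main(String[] args) throws IOException {
        new Thread(new Reactor(8080)).start();
    }
}

// DispatchHandler: handles OP_READ for one connection, still on the Reactor thread
class DispatchHandler implements Runnable {
    final SocketChannel channel;
    final SelectionKey key;
    final ByteBuffer buffer = ByteBuffer.allocate(1024);

    DispatchHandler(Selector selector, SocketChannel channel) throws IOException {
        this.channel = channel;
        channel.configureBlocking(false);
        key = channel.register(selector, SelectionKey.OP_READ, this);
    }

    @Override
    public void run() {
        try {
            buffer.clear();
            int read = channel.read(buffer);
            if (read < 0) { channel.close(); return; }
            buffer.flip();
            channel.write(buffer);                   // echo back; a slow operation here blocks every client
        } catch (IOException e) {
            key.cancel();
        }
    }
}
```

Because every handler.run() executes on the Reactor thread, a single blocking call inside a handler stalls all other connections, which is exactly what the experiment below demonstrates.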

Multithreaded single Reactor model

The single-threaded Reactor implementation has obvious drawbacks. As the example code shows, the handlers execute sequentially: if one handler blocks, all other business processing blocks too, and because the handlers and the Reactor run on the same thread, the Reactor can no longer accept new requests either. Let's do a small experiment:

  • Add Thread.sleep() to the run method of the DispatchHandler in the Reactor code above.
  • Open several client windows and connect to the Reactor server. After one window sends a message it blocks, and the other windows cannot get their requests processed because the previous request is still blocking.

To solve this problem, the business processing can be made multi-threaded: the business logic is handed to a thread pool for asynchronous processing, so that the Reactor and the handlers run on different threads, as shown in Figure 4-7.
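A minimal sketch of that change, applied to the DispatchHandler from the sketch above: the read still happens on the Reactor thread, but the business processing (represented here by a placeholder process() method, which is an assumption) and the write-back are submitted to a fixed worker thread pool.

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.SocketChannel;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Same responsibilities as the DispatchHandler above, but the business work is pushed
// to a worker thread pool so a slow handler no longer blocks the Reactor thread.
class PooledDispatchHandler implements Runnable {
    static final ExecutorService WORKER_POOL =
            Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());

    final SocketChannel channel;
    final SelectionKey key;
    final ByteBuffer buffer = ByteBuffer.allocate(1024);

    PooledDispatchHandler(Selector selector, SocketChannel channel) throws IOException {
        this.channel = channel;
        channel.configureBlocking(false);
        key = channel.register(selector, SelectionKey.OP_READ, this);
    }

    @Override
    public void run() {
        try {
            buffer.clear();
            int read = channel.read(buffer);       // the read itself still happens on the Reactor thread
            if (read < 0) { channel.close(); return; }
            buffer.flip();
            byte[] data = new byte[buffer.remaining()];
            buffer.get(data);
            WORKER_POOL.submit(() -> {             // business processing and write-back run asynchronously
                byte[] result = process(data);
                try {
                    channel.write(ByteBuffer.wrap(result));
                } catch (IOException e) {
                    e.printStackTrace();
                }
            });
        } catch (IOException e) {
            key.cancel();
        }
    }

    // Placeholder for the real business logic
    private byte[] process(byte[] request) {
        return request;
    }
}
```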

Multithreaded multiple Reactor model

In the multithreaded single-Reactor model, all I/O operations are still performed by one Reactor running in a single thread, including accept(), read(), write() and connect() operations. For low-traffic scenarios this hardly matters, but in high-load, high-concurrency or large-data-volume scenarios it easily becomes a bottleneck, mainly because:

  • One NIO thread handling hundreds of connections at the same time cannot keep up; even at 100% CPU it cannot read and send massive numbers of messages fast enough.
  • Once the NIO thread is overloaded, processing slows down, which causes large numbers of client connections to time out. Timeouts trigger retransmissions, which increase the NIO thread's load further, and eventually a large backlog of messages and processing timeouts make it the system's performance bottleneck.

Therefore we can optimize further by introducing the multi-Reactor multi-thread model. As shown in Figure 2-7, the Main Reactor is responsible for accepting connection requests from clients and then hands the accepted connections over to a SubReactor (there may be several SubReactors), which performs the actual business I/O processing.

The multi-Reactor multi-thread model is also known as the Master-Workers model. Nginx and Memcached, for example, use this kind of multi-threaded model; the implementation details differ slightly between projects, but the model is essentially the same.

  • Acceptor: the request receiver. In practice it plays the role of the server; it does not actually establish the connection itself, but forwards the request to the Main Reactor thread pool, acting as a dispatcher.
  • Main Reactor: the Main Reactor thread group mainly handles connection events and forwards I/O reads and writes to the SubReactor thread pool.
  • Sub Reactor: the Main Reactor usually listens for client connections and forwards a channel's reads and writes to one thread in the Sub Reactor thread pool (load balancing), which is responsible for reading and writing the data. In NIO terms, the channel's read (OP_READ) and write (OP_WRITE) events are usually registered here. A raw NIO sketch of this main/sub split follows this list.
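The rough raw-NIO sketch below is only meant to make the structure concrete: one main reactor thread only accepts connections and hands each accepted channel, round-robin, to one of several sub reactor threads, each with its own Selector. The port, buffer size and echo behaviour are illustrative assumptions.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.*;
import java.util.Iterator;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

public class MultiReactorSketch {

    public static void main(String[] args) throws IOException {
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(8080));
        server.configureBlocking(false);

        // Start one sub reactor per CPU core, each with its own Selector and thread
        int n = Runtime.getRuntime().availableProcessors();
        SubReactor[] subs = new SubReactor[n];
        for (int i = 0; i < n; i++) {
            subs[i] = new SubReactor();
            new Thread(subs[i], "sub-reactor-" + i).start();
        }

        // Main reactor: only interested in accept events
        Selector mainSelector = Selector.open();
        server.register(mainSelector, SelectionKey.OP_ACCEPT);
        int next = 0;
        while (true) {
            mainSelector.select();
            Iterator<SelectionKey> it = mainSelector.selectedKeys().iterator();
            while (it.hasNext()) {
                SelectionKey key = it.next();
                it.remove();
                if (key.isAcceptable()) {
                    SocketChannel client = server.accept();
                    client.configureBlocking(false);
                    subs[next].register(client);     // hand the connection to a sub reactor (round-robin)
                    next = (next + 1) % n;
                }
            }
        }
    }

    // Sub reactor: owns a Selector and handles read/write for the channels assigned to it
    static class SubReactor implements Runnable {
        private final Selector selector;
        private final Queue<SocketChannel> pending = new ConcurrentLinkedQueue<>();

        SubReactor() throws IOException {
            selector = Selector.open();
        }

        void register(SocketChannel channel) {
            pending.add(channel);
            selector.wakeup();                       // break out of select() so the new channel gets registered
        }

        @Override
        public void run() {
            try {
                while (true) {
                    selector.select();
                    SocketChannel ch;
                    while ((ch = pending.poll()) != null) {
                        ch.register(selector, SelectionKey.OP_READ);
                    }
                    Iterator<SelectionKey> it = selector.selectedKeys().iterator();
                    while (it.hasNext()) {
                        SelectionKey key = it.next();
                        it.remove();
                        if (!key.isReadable()) continue;
                        SocketChannel client = (SocketChannel) key.channel();
                        ByteBuffer buffer = ByteBuffer.allocate(1024);
                        if (client.read(buffer) < 0) { client.close(); continue; }
                        buffer.flip();
                        client.write(buffer);        // business processing would happen here; we just echo
                    }
                }
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }
}
```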

Netty, a high performance communication framework

In Java there are many network programming frameworks, such as Java NIO, Mina, Netty, and Grizzly. However, of all the middleware you are likely to come across, the vast majority use Netty.

The reason is that Netty is currently the most popular high-performance Java network programming framework, widely used in middleware, live streaming, social and gaming applications. Among open-source middleware, Dubbo, RocketMQ, Elasticsearch, HBase and others are all implemented with Netty.

In everyday development, 99% of developers will never use Netty directly for network programming, so why bother learning it? There are several reasons:

  • Interviews at large companies often touch on the related knowledge points:
    • What makes Netty high performance
    • What the important components of Netty are
    • The design of Netty's memory pool and object pool
  • A lot of middleware uses Netty for network communication, so knowing Netty makes it much easier to read that middleware's source code
  • It rounds out your Java knowledge and gives you as complete a view of the technical architecture as possible

Why Netty

Netty is essentially a high-performance NIO framework: it is a layer of encapsulation on top of NIO that provides high-performance network I/O communication. Since we have already analyzed network communication in detail, learning Netty should now be easier.

Netty supports all three Reactor models described above; with the APIs Netty encapsulates we can quickly build any of them, which is one reason many people choose Netty. In addition, compared with the native NIO API, Netty has the following characteristics:

  • Provides an efficient I/O model, threading model and event handling mechanism
  • Provides a very simple and easy-to-use API, with a much higher level of encapsulation over Channel, Selector, Socket and Buffer than native NIO, hiding NIO's complexity
  • Good support for data protocols and serialization
  • Stability: Netty fixes known JDK NIO problems such as the epoll empty-polling bug that drives Selector.select() to 100% CPU, and handles TCP disconnect/reconnect, keep-alive detection, and so on
  • Excellent extensibility for a framework of its kind, for example a customizable threading model: the user can choose between Reactor models via startup parameters, and the extensible event-driven model separates business concerns from framework concerns
  • Performance optimization: as a network communication framework it must handle a large number of network requests, which inevitably means many objects are created and destroyed, and that is unfriendly to the JVM's GC. To reduce GC pressure, Netty introduces two optimizations:
    • Object pooling and reuse
    • Zero-copy techniques

Netty's ecosystem

First, we need to understand the functions provided by Netty. Figure 2-1 shows the functions provided by the Netty ecosystem. These features will be examined step by step in subsequent articles.

Basic use of Netty

To clarify, the Netty version we discuss here is 4.x. Netty once released a 5.x version, but it was officially abandoned: the use of ForkJoinPool increased complexity without a significant performance benefit, and keeping so many branches in sync at the same time was a lot of work for little gain.

Add jar package dependencies

Use version 4.1.66

```xml
<dependency>
    <groupId>io.netty</groupId>
    <artifactId>netty-all</artifactId>
    <version>4.1.66.Final</version>
</dependency>
```

Creating a Netty server

In most scenarios we use the master-slave multi-threaded Reactor model: the Boss thread group is the main Reactor and the Worker thread group is the sub Reactor, and they use different NioEventLoopGroups.

The primary Reactor handles Accept, and then registers the Channel with the secondary Reactor, which is responsible for all I/O events during the Channel’s life cycle.

```java
import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioServerSocketChannel;

public class NettyBasicServerExample {

    public void bind(int port) {
        // Create two EventLoopGroups:
        // the boss group accepts connections,
        // the worker group handles everything other than accept, i.e. the actual I/O subtasks.
        // The boss group is usually given one thread; more than one would not be used here anyway.
        // The worker group is usually tuned for the server; if unspecified it defaults to 2 * CPU cores.
        EventLoopGroup bossGroup = new NioEventLoopGroup();
        EventLoopGroup workerGroup = new NioEventLoopGroup();
        try {
            // To start the server, create a ServerBootstrap,
            // in which Netty encapsulates the NIO template code.
            ServerBootstrap bootstrap = new ServerBootstrap();
            bootstrap.group(bossGroup, workerGroup) // configure the boss and worker groups
                // Configure the server channel, the equivalent of ServerSocketChannel in NIO
                .channel(NioServerSocketChannel.class)
                // childHandler configures the handler used by the worker threads.
                // The ChannelInitializer sets up each accepted channel: when a client request
                // arrives, it is handed to the handlers registered here.
                .childHandler(new ChannelInitializer<SocketChannel>() {
                    @Override
                    protected void initChannel(SocketChannel socketChannel) throws Exception {
                        // Add the concrete I/O event handler
                        socketChannel.pipeline().addLast(new NormalMessageHandler());
                    }
                });
            // NIO is asynchronous and non-blocking by default, so after binding the port
            // we block with sync() until the whole startup process completes.
            ChannelFuture channelFuture = bootstrap.bind(port).sync();
            System.out.println("Netty Server Started, Listening on: " + port);
            // Wait until the server listening channel is closed
            channelFuture.channel().closeFuture().sync();
        } catch (InterruptedException e) {
            e.printStackTrace();
        } finally {
            // Release thread resources
            bossGroup.shutdownGracefully();
            workerGroup.shutdownGracefully();
        }
    }

    public static void main(String[] args) {
        new NettyBasicServerExample().bind(8080);
    }
}
```

The above code is described as follows:

  • EventLoopGroup defines the thread groups, equivalent to the threads we defined earlier when writing NIO code. Two groups are defined here: the boss group and the worker group. The boss threads are responsible for accepting connections, and the worker threads handle the I/O events. The boss group is usually set to one thread; even if more are configured only one will be used, which suits most current application scenarios. The worker group is usually tuned for the server and defaults to 2 * CPU cores if unspecified.
  • ServerBootstrap: to start the server you create a ServerBootstrap, in which Netty encapsulates the NIO template code.
  • ChannelOption.SO_BACKLOG: sets the maximum length of the queue of connections that the kernel has completed but the application has not yet accepted; see the sketch after this list.
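As a small illustration of the last bullet, the bootstrap from the example above could be extended with channel options. The values here are illustrative assumptions; option() configures the server (boss) channel, while childOption() configures each accepted child channel.

```java
bootstrap.option(ChannelOption.SO_BACKLOG, 1024)        // max length of the pending-connection queue
         .childOption(ChannelOption.TCP_NODELAY, true); // per-connection option: disable Nagle's algorithm
```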

Setting the Channel Type

The NIO model is the most mature and widely referenced model of Netty, so we use NioServerSocketChannel as the Channel type when using Netty.

```java
bootstrap.channel(NioServerSocketChannel.class);
```

Besides NioServerSocketChannel, Netty also provides:

  • EpollServerSocketChannel: uses the epoll model, which is supported only on Linux kernel 2.6 and above, not on Windows or macOS; if you configure epoll and run on Windows, an error is thrown (a configuration sketch follows this list).
  • OioServerSocketChannel: used by a server that accepts TCP connections in blocking mode.
  • KQueueServerSocketChannel: uses the kqueue model. I/O multiplexing is more efficient on Unix-like systems; the common multiplexing techniques are select, poll, epoll and kqueue. epoll is Linux-specific, while kqueue exists on many Unix systems.
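For example, switching the server example above to the native epoll transport on Linux might look like the following sketch. It assumes the netty-transport-native-epoll dependency is on the classpath and only works on Linux.

```java
// Assumption: netty-transport-native-epoll (with the matching Linux classifier) is on the classpath.
EventLoopGroup bossGroup = new EpollEventLoopGroup(1);
EventLoopGroup workerGroup = new EpollEventLoopGroup();
ServerBootstrap bootstrap = new ServerBootstrap();
bootstrap.group(bossGroup, workerGroup)
         .channel(EpollServerSocketChannel.class);   // must be paired with EpollEventLoopGroup, not NioEventLoopGroup
```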

Registering a ChannelHandler

In Netty, multiple ChannelHandlers can be registered through the ChannelPipeline. These are the handlers that the worker threads execute: when an I/O event is ready, the handlers configured here are called in turn.

You can register multiple ChannelHandlers, each for encoding and decoding, heartbeat, message processing, and so on. This maximizes code reuse.

```java
.childHandler(new ChannelInitializer<SocketChannel>() {
    @Override
    protected void initChannel(SocketChannel socketChannel) throws Exception {
        socketChannel.pipeline().addLast(new NormalMessageHandler());
    }
});
```

The childHandler method of ServerBootstrap needs a ChannelHandler. Here we configure an implementation of ChannelInitializer, which initializes each Channel when it is created.

When an I/O event is received, the data propagates through the chain of handlers. In the code above, a NormalMessageHandler is configured to receive client messages and print them.
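As a sketch of a fuller pipeline, the childHandler above could register several of Netty's built-in handlers in front of the business handler. Note that with the StringDecoder in place the business handler would receive a String rather than a ByteBuf, so NormalMessageHandler would need a small adjustment; the snippet is only meant to show the ordering of handlers.

```java
.childHandler(new ChannelInitializer<SocketChannel>() {
    @Override
    protected void initChannel(SocketChannel ch) {
        ch.pipeline()
          .addLast(new IdleStateHandler(60, 0, 0, TimeUnit.SECONDS)) // heartbeat: fires an event if nothing is read for 60s
          .addLast(new StringDecoder(CharsetUtil.UTF_8))             // inbound: ByteBuf -> String
          .addLast(new StringEncoder(CharsetUtil.UTF_8))             // outbound: String -> ByteBuf
          .addLast(new NormalMessageHandler());                      // business handler runs last on the inbound side
    }
});
```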

Binding port

After completing the basic configuration of Netty, the bind() method actually triggers startup, while the sync() method blocks until the entire startup process is complete.

```java
ChannelFuture channelFuture = bootstrap.bind(port).sync();
```

NormalMessageHandler

NormalMessageHandler extends ChannelInboundHandlerAdapter, one of Netty's event handlers. Netty handlers are divided into inbound and outbound handlers; more on that later.

```java
import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;
import io.netty.channel.ChannelFutureListener;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import java.nio.charset.StandardCharsets;

public class NormalMessageHandler extends ChannelInboundHandlerAdapter {

    // channelReadComplete is called when the current read is finished; writeAndFlush writes and sends the response
    @Override
    public void channelReadComplete(ChannelHandlerContext ctx) throws Exception {
        // Flush all pending messages back to the client. Unpooled.EMPTY_BUFFER is an empty message;
        // addListener(ChannelFutureListener.CLOSE) closes the connection once the flush completes.
        ctx.writeAndFlush(Unpooled.EMPTY_BUFFER).addListener(ChannelFutureListener.CLOSE);
    }

    // exceptionCaught handles exceptions
    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception {
        cause.printStackTrace();
        ctx.close();
    }

    // channelRead defines what to do once a message has been read
    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
        ByteBuf in = (ByteBuf) msg;
        byte[] req = new byte[in.readableBytes()];
        in.readBytes(req); // read the data into the byte array
        String body = new String(req, StandardCharsets.UTF_8);
        System.out.println("The server received a message: " + body);
        // Write data back to the client
        ByteBuf resp = Unpooled.copiedBuffer(("receive message:" + body).getBytes(StandardCharsets.UTF_8));
        ctx.write(resp);
        // ctx.write only puts the message into the outbound buffer; it is actually sent on flush
    }
}
```

The code above shows that very little code is needed to build an NIO server. Compared with a server written against the native NIO class library, both the amount of code and the development difficulty are greatly reduced.

How Netty maps onto the NIO API

  • TransportChannel: corresponds to a Channel in NIO
  • EventLoop: corresponds to the while loop in NIO
  • EventLoopGroup: multiple EventLoops
  • ChannelHandler and ChannelPipeline: correspond to the custom handleRead/handleWrite logic you would write with NIO (interceptor pattern)
  • ByteBuf: corresponds to ByteBuffer in NIO
  • Bootstrap and ServerBootstrap: create, configure and start the Selector and ServerSocketChannel of NIO

The overall workings of Netty

Netty's overall working mechanism is as follows. The overall design is the multithreaded Reactor model discussed earlier: listening for requests and processing requests are separated, and the concrete handlers are executed on multiple threads.

Network communication layer

The main responsibility of the network communication layer is to perform the network's I/O operations. It supports the connection operations of various network protocols and I/O models. When network data is read into the kernel buffer, read/write events are triggered and distributed to the event scheduler for processing.

Netty's network communication layer has three core components:

  • Bootstrap: the client-side startup API, used to connect to a remote Netty server; it binds only one EventLoopGroup (a client-side sketch follows this list)
  • ServerBootstrap: the server-side listening API, used to listen on a specified port; it binds two EventLoopGroups. With the bootstrap components a Netty application can be started easily and quickly
  • Channel: the carrier of network communication. Netty's Channel is built on the JDK NIO Channel; it provides a higher-level abstraction that hides the complexity of the underlying Socket and adds more powerful functionality
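For comparison with the server example earlier, a minimal client built with Bootstrap might look like the sketch below. The host, port and the reuse of NormalMessageHandler are illustrative assumptions; note that only one EventLoopGroup is bound.

```java
import io.netty.bootstrap.Bootstrap;
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioSocketChannel;

public class NettyBasicClientExample {
    public static void main(String[] args) throws InterruptedException {
        EventLoopGroup group = new NioEventLoopGroup();   // the single EventLoopGroup a client binds
        try {
            Bootstrap bootstrap = new Bootstrap();
            bootstrap.group(group)
                     .channel(NioSocketChannel.class)     // client-side channel type
                     .handler(new ChannelInitializer<SocketChannel>() {
                         @Override
                         protected void initChannel(SocketChannel ch) {
                             // reuse the server example's handler purely for illustration
                             ch.pipeline().addLast(new NormalMessageHandler());
                         }
                     });
            ChannelFuture future = bootstrap.connect("localhost", 8080).sync();
            future.channel().closeFuture().sync();
        } finally {
            group.shutdownGracefully();
        }
    }
}
```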

AbstractChannel is the base class of the whole Channel hierarchy, from which AbstractNioChannel (non-blocking I/O) and AbstractOioChannel (blocking I/O) are derived; each subclass represents a different I/O model and protocol type.

As connections and data change, a Channel can be in several states, such as connection established, connection registered, connection reading/writing, and connection destroyed. As the state changes, the Channel goes through different phases of its life cycle, and each state is bound to a corresponding event callback. The common event callbacks are listed below; the small logging handler after the list shows where each one fires.

  • channelRegistered: the Channel has been created and registered with an EventLoop
  • channelUnregistered: the Channel has been created but is not registered, or has been deregistered from its EventLoop
  • channelActive: the Channel is ready and can be read from or written to
  • channelInactive: the Channel is no longer in the ready state
  • channelRead: the Channel reads data from its source
  • channelReadComplete: the Channel has finished reading data
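A hypothetical handler that simply logs these callbacks makes it easy to see when each one fires; every method forwards the event so the rest of the pipeline is unaffected.

```java
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;

public class LifecycleLoggingHandler extends ChannelInboundHandlerAdapter {
    @Override
    public void channelRegistered(ChannelHandlerContext ctx) {
        System.out.println("channelRegistered: bound to an EventLoop");
        ctx.fireChannelRegistered();
    }

    @Override
    public void channelActive(ChannelHandlerContext ctx) {
        System.out.println("channelActive: connection ready for read/write");
        ctx.fireChannelActive();
    }

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        System.out.println("channelRead: received " + msg);
        ctx.fireChannelRead(msg); // pass the message to the next inbound handler
    }

    @Override
    public void channelReadComplete(ChannelHandlerContext ctx) {
        System.out.println("channelReadComplete");
        ctx.fireChannelReadComplete();
    }

    @Override
    public void channelInactive(ChannelHandlerContext ctx) {
        System.out.println("channelInactive: connection closed");
        ctx.fireChannelInactive();
    }

    @Override
    public void channelUnregistered(ChannelHandlerContext ctx) {
        System.out.println("channelUnregistered: detached from its EventLoop");
        ctx.fireChannelUnregistered();
    }
}
```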

To sum up, Bootstrap and ServerBootStrap are responsible for starting the client and server respectively. Channel is the carrier of network communication and provides the ability to interact with the underlying Socket.

When an event changes in a Channel’s life cycle, it triggers further processing, which is done by Netty’s event scheduler.

Event scheduler

The event scheduler aggregates all kinds of events using the Reactor thread model and collects the various events (I/O events and signal events) through the Selector main loop thread. When these events are triggered, their concrete handling is carried out by the relevant handlers in the service choreography layer.

Event scheduler core components:

  • EventLoopGroup. Equivalent to a thread pool

  • EventLoop. Equivalent to a thread in a thread pool

An EventLoopGroup is essentially a thread pool that receives I/O requests and allocates threads to execute processing requests. To better understand the relationship between EventLoopGroup, EventLoop, and Channel, let’s look at the process shown in Figure 2-4.

As can be seen from the picture

  • An EventLoopGroup can contain multiple EventLoops, and an EventLoop handles all the I/O events (accept, connect, read, write, etc.) within a Channel's life cycle
  • An EventLoop is bound to one thread at a time, and each EventLoop handles multiple Channels
  • Each time a Channel is created, the EventLoopGroup selects an EventLoop to bind it to; the Channel can register with and deregister from an EventLoop multiple times during its life cycle

Figure 2-5 shows the class diagram of The EventLoopGroup. Netty provides various implementations of the EventLoopGroup, such as NioEventLoop, EpollEventLoop, and NioEventLoopGroup.

As can be seen from the figure, EventLoop is a subinterface of EventLoopGroup. We can equate EventLoop to EventLoopGroup on the premise that the EventLoopGroup contains only one EventLoop.

The EventLoopGroup is Netty's core processing engine. How does it relate to the Reactor thread models described above? In fact we can regard the EventLoopGroup as the concrete implementation of Netty's Reactor thread model: by configuring different EventLoopGroups we make Netty support different Reactor models.

  • Single-threaded model: the EventLoopGroup contains only one EventLoop, and the Boss and the Worker use the same EventLoopGroup.
  • Multi-threaded model: the EventLoopGroup contains multiple EventLoops, and the Boss and the Worker still use the same EventLoopGroup.
  • Master-slave multi-threaded model: the EventLoopGroup contains multiple EventLoops; the Boss is the main Reactor and the Worker is the sub Reactor, and they use different EventLoopGroups. The main Reactor accepts new connection channels (i.e. handles connection events) from clients and hands them to the sub Reactor for processing, as sketched after this list.
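A compact sketch of how the three variants translate into EventLoopGroup configuration (the thread counts are illustrative):

```java
import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;

public class ReactorModelConfigs {
    public static void main(String[] args) {
        // 1. Single-threaded model: one EventLoop shared by accept and I/O handling
        EventLoopGroup single = new NioEventLoopGroup(1);
        ServerBootstrap singleThreaded = new ServerBootstrap().group(single);

        // 2. Multi-threaded model: several EventLoops, still one shared group
        EventLoopGroup shared = new NioEventLoopGroup(); // defaults to 2 * CPU cores
        ServerBootstrap multiThreaded = new ServerBootstrap().group(shared);

        // 3. Master-slave model: separate groups, the boss accepts, the workers do channel I/O
        EventLoopGroup boss = new NioEventLoopGroup(1);
        EventLoopGroup worker = new NioEventLoopGroup();
        ServerBootstrap masterSlave = new ServerBootstrap().group(boss, worker);
    }
}
```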

Service Choreography Layer

The responsibility of the service choreography layer is to assemble all kinds of services. In simple terms, after I/O events are triggered, there needs to be a Handler to deal with them. Therefore, the service choreography layer can realize the dynamic arrangement and orderly transmission of network events through a Handler processing chain.

It contains three components

  • ChannelPipeline: uses a doubly linked list to chain multiple ChannelHandlers together. When an I/O event is triggered, the ChannelPipeline calls the assembled ChannelHandlers in turn to process the Channel's data. ChannelPipeline is thread-safe because every new Channel is bound to its own new ChannelPipeline; a ChannelPipeline is associated with one EventLoop, and an EventLoop binds only one thread, as shown in Figure 2-6.

    Figure 2-6 ChannelPipeline

    As the figure shows, a ChannelPipeline contains inbound ChannelInboundHandlers and outbound ChannelOutboundHandlers: the former receive data and the latter write data out, much like an InputStream and an OutputStream. To understand this better, look at Figure 2-7.

  • ChannelHandler: processor for I/O data. After receiving data, the data is processed by the specified Handler.

  • ChannelHandlerContext: holds the context information of a ChannelHandler. When an event is triggered, the data passed between multiple handlers travels through the ChannelHandlerContext. Figure 2-8 shows the relationship between ChannelHandler and ChannelHandlerContext.

    Each ChannelHandler has its own ChannelHandlerContext, which holds the context information that the ChannelHandler needs; data passing between multiple ChannelHandlers is done through the ChannelHandlerContext, as the small sketch after this list illustrates.
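Here is a small hypothetical handler showing both uses of the context: storing per-channel state and forwarding the event to the next handler in the pipeline.

```java
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.util.AttributeKey;

// Hypothetical handler: stores per-channel state via the context and forwards the event
// to the next handler in the pipeline through fireChannelRead().
public class AuthHandler extends ChannelInboundHandlerAdapter {
    // AttributeKey used to share state between handlers on the same channel (name is illustrative)
    private static final AttributeKey<String> USER = AttributeKey.valueOf("user");

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        ctx.channel().attr(USER).set("anonymous"); // later handlers can read this attribute
        ctx.fireChannelRead(msg);                  // propagate the same message to the next inbound handler
    }
}
```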

The above introduces the features and working mechanism of Netty's core components; later content analyzes these components in detail. As you can see, Netty's layered architecture is very well designed: it hides the implementation details of the underlying NIO and of the framework layer, so business developers only need to care about orchestrating and implementing business logic.

Component relationship and principle summary

Figure 2-9 shows the coordination mechanisms of key Netty components. The coordination mechanisms are described as follows:

  • The Boss thread group listens for network connection events; when a new connection is established, the Boss thread registers the connection Channel and binds it to a Worker thread
  • The Worker thread group assigns an EventLoop to handle the Channel's read and write events; each EventLoop is the equivalent of one thread and listens for events in a loop through its Selector
  • When a client initiates an I/O event, the server-side EventLoop dispatches the ready Channel to the Pipeline for data processing
  • Once the data reaches the ChannelPipeline, processing starts from the first ChannelInboundHandler and the event is passed along the pipeline chain one handler at a time
  • When the server writes data back to the client, the data propagates through the chain of ChannelOutboundHandlers before reaching the client

This section describes the core components of Netty

After we get a global overview of Netty in Section 2.5, we’ll give a very detailed description of these components to help you understand them better.

Bootstrap and ServerBootstrap are the entry points of a Netty client and server respectively and the first step in writing a Netty network program. They let us assemble Netty's core components like building blocks. During the construction of a Netty server there are three important steps to focus on:

  • Configuring the thread pools
  • Initializing the Channel
  • Building the Handler
