preface

This is the last article in this Netty series. In it, I will summarize what I have learned and what I think of Netty, analyze several design patterns used in Netty, and finally close with a summary of the learning experience from this series.

A First Look at Netty

This section will review what Netty is, why the library was created, and what Netty is good for.

1.1 What is Netty

The definition in Netty in Action reads: “Netty is an asynchronous, event-driven network application framework that enables the rapid development of maintainable, high-performance protocol-oriented servers and clients.” As I learned along the way, Netty’s internal implementation is complex, but it provides easy-to-use APIs that decouple business logic from network-handling code. Because of this quality it has been proven in hundreds of commercial and open source projects: many RPC frameworks and other open source components build on Netty, such as Dubbo, Elasticsearch, ZooKeeper, Hadoop, RocketMQ and so on. Netty was started by Trustin Lee and was owned by JBoss until version 4.0, after which it has been managed by the Netty community. Its history is as follows:

  • Netty2 was released in June 2004, claiming at the time to be the first event-driven, easy-to-use networking framework in the Java community
  • Netty3 was released in October 2008
  • Netty4 was released in July 2013
  • 5.0.0.Alpha1 was released in December 2013
  • 5.0.0 was scrapped in January 2015

Version 5.0 being scrapped is a bit of a surprise, so let’s look at why.

The main reasons were that the use of ForkJoinPool made the code complex without any noticeable performance improvement, and that maintaining so many branches required a lot of largely unnecessary work. At its core, Netty is an encapsulation of native NIO, OIO, epoll and other low-level network programming facilities, so why not use those directly instead of Netty? Let’s take a look.

1.2 Why Netty Was Created

Netty is primarily an encapsulation of NIO, so let’s compare Netty with Java NIO to see what drove Netty’s creation. First, java.nio, whose full name is Java non-blocking IO (also called New IO), is an API introduced in JDK 1.4 that provides buffer support for all primitive types (except boolean) and provides non-blocking, highly scalable networking. Here is why you would favor Netty over native NIO. First of all, Netty better shields you from JDK NIO bugs. The classic example is the epoll bug: spurious wakeups of an idle Selector drive the CPU to 100%, as shown in the figure:

We can see that the JDK’s resolution for this issue was effectively “won’t fix”, while Netty works around the bug for us. Let’s look at the source code:

// NioEventLoop.java, around line 788
if (SELECTOR_AUTO_REBUILD_THRESHOLD > 0 &&
        selectCnt >= SELECTOR_AUTO_REBUILD_THRESHOLD) {
    // The selector returned prematurely many times in a row.
    // Rebuild the selector to work around the problem.
    logger.warn(
            "Selector.select() returned prematurely {} times in a row; rebuilding Selector {}.",
            selectCnt, selector);

    rebuildSelector();
    selector = this.selector;

    // Select again to populate selectedKeys.
    selector.selectNow();
    selectCnt = 1;
    break;
}

This code, which I analyzed in the third article, checks whether the number of consecutive empty polls exceeds SELECTOR_AUTO_REBUILD_THRESHOLD (512 by default). If it does, rebuildSelector() is called, which re-registers all keys from the old Selector onto a new one, so that blocking operations on the new Selector should no longer hit the empty-polling bug. Of course, this is not the only Java NIO bug Netty works around; there are others not covered here.

Another reason Netty does better is that Netty’s API is friendlier and more powerful. Two examples: first, the JDK’s ByteBuffer is not user-friendly and its features are weak, so Netty encapsulates its own ByteBuf, whose API is both easier to use and more capable. Second, Netty enhances some JDK classes, for example wrapping ThreadLocal into the faster FastThreadLocal.

Netty also hides many implementation details and isolates many sources of change. For example, it shields us from the JDK’s NIO implementation, and it isolates the differences between BIO (OIO), NIO, and epoll: switching between them takes only a few changed lines.

Finally, consider how many problems we would have to solve ourselves if we used JDK NIO directly. Let’s first look at how many issues Netty has fixed, as shown in the figure:
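To make the API-friendliness point concrete, here is a minimal plain-JDK sketch (no Netty required) of the read/write mode switch that ByteBuffer forces on callers; Netty’s ByteBuf avoids this by keeping separate reader and writer indices:

```java
import java.nio.ByteBuffer;

public class ByteBufferFlipDemo {
    public static void main(String[] args) {
        ByteBuffer buf = ByteBuffer.allocate(16);
        buf.put((byte) 1).put((byte) 2);  // write mode: position advances
        // Forgetting flip() here would read garbage past the written data.
        buf.flip();                       // switch to read mode: limit = position, position = 0
        System.out.println(buf.get());    // 1
        System.out.println(buf.get());    // 2
        // Netty's ByteBuf needs no flip(): writeByte() moves writerIndex,
        // readByte() moves readerIndex, and reads and writes can interleave freely.
    }
}
```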

Here we can see that Netty has resolved over 4,000 issues. Of course, a single network application may not hit as many bugs as Netty has fixed, but we are unlikely to do better than Netty, and we would probably introduce more bugs of our own. In addition, let’s see how many NIO bugs exist in the JDK itself, as shown in the figure:

Here we can see that the JDK has more than a thousand NIO bug reports. If we used native JDK NIO to implement a network program, we would have to work around these bugs ourselves, which would be a huge undertaking. All of the above explains why Netty’s emergence was both accidental and inevitable.

1.3 Netty Advantages

Given the analysis in the previous section, it’s easy to understand the advantages of Netty. Here are some of the advantages:

  • The API is simple to use and the barrier to entry is low.
  • Powerful: a variety of codecs are built in and many mainstream protocols are supported.
  • Highly customizable: the communication framework can be flexibly extended through ChannelHandlers.
  • High performance.
  • Mature and stable, with many JDK NIO bugs worked around.
  • The community is active.
  • Battle-tested in large-scale commercial applications; its quality has been verified.

This section focuses on some of Netty’s advantages, development history, etc. In the next chapter, I will analyze the overall architecture of Netty.

Netty Architecture

In this section I will focus on Netty’s overall framework. Having already analyzed Netty’s internals in detail in earlier articles, here I will start with Netty’s feature stack.

At the bottom, in Core, we can see zero-copy support, a unified communication API, and an extensible event model. Above that, transport services support NIO and OIO, and protocol support covers HTTP, Protobuf, MQTT and more, with corresponding implementations for security, container integration, and so on.

2.1 Task Processing Models

In this section, I will take a macro look at the threading model in Netty. Before that, we will take a look at several common task processing models.

2.1.1 Serial processing model

This model uses a thread to handle network connection requests and task processing. We can use a picture to see the details, as follows:

Here we can see that when the worker receives a task, it immediately processes it, that is to say, task acceptance and task processing are carried out in the same worker thread without distinction. A big problem with this is that you have to wait for one task to complete before you can accept the next task. Typically, the task is processed much slower than the task is received. For example, when working on a task, we may need to access a remote database, which is a type of network IO. In general, I/O operations are time-consuming, which directly affects the acceptance of the next task. Moreover, during I/O operations, the CPU is idle and resources are wasted. With this disadvantage, we can make the following improvements.

In this case, we separate the receiving and processing phases of the task. One thread receives the task and puts it on the task queue, while the other thread asynchronously processes the task in the task queue. This improves throughput a bit, but it’s not perfect.
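The separation described above can be sketched with a plain-Java producer/consumer pair (illustrative names, not Netty code): one thread only enqueues work, another only drains the queue.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class QueueHandoffDemo {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<Integer> tasks = new ArrayBlockingQueue<>(16);

        // "Acceptor" thread: only receives (enqueues) work, never processes it.
        Thread acceptor = new Thread(() -> {
            for (int i = 1; i <= 3; i++) {
                try { tasks.put(i); } catch (InterruptedException ignored) { }
            }
        });

        // Worker thread: drains the queue asynchronously.
        Thread worker = new Thread(() -> {
            for (int i = 0; i < 3; i++) {
                try {
                    System.out.println("processed task " + tasks.take());
                } catch (InterruptedException ignored) { }
            }
        });

        acceptor.start();
        worker.start();
        acceptor.join();
        worker.join();
    }
}
```

Even while the worker is busy with a slow task, the acceptor can keep receiving new work, which is exactly the throughput gain described above.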

2.1.2 Parallel processing model

With the optimization in the above section, worker threads can be further pooled, as shown in the following figure:

Because task processing is generally slow, tasks can back up in the queue and go unprocessed for a long time; multithreading addresses this. A common task queue is shared here, so in a multithreaded environment locking is unavoidable to ensure thread safety. A thread pool is typically used in this mode. The model can be improved further by maintaining a task queue per thread. With that in mind, let’s look at Netty’s threading model.
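A minimal sketch of the pooled variant using the JDK’s own ExecutorService (illustrative, not Netty code): the pool’s internal shared queue hands tasks to whichever worker thread is free.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class WorkerPoolDemo {
    public static void main(String[] args) throws Exception {
        // Four workers draining one shared (internally synchronized) task queue.
        ExecutorService pool = Executors.newFixedThreadPool(4);
        List<Callable<Integer>> tasks = new ArrayList<>();
        for (int i = 1; i <= 8; i++) {
            final int n = i;
            tasks.add(() -> n * n);   // stand-in for slow business logic
        }
        int sum = 0;
        for (Future<Integer> f : pool.invokeAll(tasks)) {
            sum += f.get();           // invokeAll preserves submission order
        }
        pool.shutdown();
        System.out.println("sum of squares = " + sum);
    }
}
```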

2.2 Netty’s threading model

There are several threading models relevant to Netty: Thread-Per-Connection, Reactor, and Proactor. The Thread-Per-Connection pattern corresponds to the BIO threading model, and also to the serial processing model above. The Reactor model corresponds to NIO’s threading model, and to the parallel processing model above. Finally, the Proactor model is the threading model of AIO; note that Netty has abandoned its AIO implementation, so we will not cover it here. This section focuses on the Reactor model.

2.2.1 Reactor model

The Reactor threading model splits a task into several steps after it is accepted, each step handled by a different thread: work that was originally handled by a single thread is now handled by several, and after each thread finishes its own step it forwards the task to the thread responsible for the next stage. Specifically, see the following picture:

2.2.2 Reactor thread model in Netty

Netty can run under three Reactor threading models: the single-threaded Reactor, the multithreaded Reactor, and the master/slave multithreaded Reactor. Let’s analyze them one by one, starting with the single-threaded Reactor, shown in the following picture:

The Acceptor is the component that accepts connections and dispatches tasks in Netty. The following configuration produces this threading model:

EventLoopGroup eventGroup = new NioEventLoopGroup(1);
ServerBootstrap serverBootstrap = new ServerBootstrap();
serverBootstrap.group(eventGroup);

So, building on the example above, what is the multithreaded Reactor? Again, I will start with a picture:

Here we see an additional ThreadPool for handling business logic, which resembles the improved serial processing model mentioned above. When does this happen in Netty? The example source code is as follows:

EventLoopGroup eventGroup = new NioEventLoopGroup();
ServerBootstrap serverBootstrap = new ServerBootstrap();
serverBootstrap.group(eventGroup);

The only difference is that NioEventLoopGroup, when no argument is passed, creates a thread pool with twice the number of CPU cores by default; I covered this in a previous article. Finally, the master/slave multithreaded model, which, as you may guess, corresponds to the parallel processing model mentioned above. The details are shown in the following picture:

Here mainReactor, subReactor, and ThreadPool are three thread pools. The mainReactor handles client connection requests and registers accepted connections with one of the subReactor threads. The subReactor handles reads and writes on the client channel. The ThreadPool is the business-logic thread pool that handles the actual work. The corresponding example source code is as follows:

EventLoopGroup bossGroup = new NioEventLoopGroup();
EventLoopGroup workerGroup = new NioEventLoopGroup(); 
ServerBootstrap serverBootstrap = new ServerBootstrap();
serverBootstrap.group(bossGroup, workerGroup); 

I believe many readers have used the code above. These are roughly the NIO threading models in Netty; next, let’s analyze how Netty actually runs on top of them.

2.3 Netty Running Process

To see how the threading model is used in Netty, we need to look at the following diagram:

There are a few points worth noting here, which I will analyze briefly.

First, NioEventLoop and NioEventLoopGroup: a NioEventLoop is essentially a worker thread and can be understood as a thread directly, while a NioEventLoopGroup is a thread pool whose threads are NioEventLoops. The bossGroup can actually contain multiple NioEventLoops, each bound to a port; in other words, if the application listens on only one port, the bossGroup only needs one NioEventLoop thread.

Each NioEventLoop is bound to a Selector, so in Netty’s threading model several Selectors listen for IO-ready events, and Channels register with those Selectors. A Channel being bound to a NioEventLoop is equivalent to a connection being bound to a thread: all of that connection’s ChannelHandlers execute on the one thread, avoiding multithreading issues. More importantly, the ChannelPipeline must be executed strictly in order, and the single-threaded design guarantees that ChannelHandlers run sequentially. One NioEventLoop’s Selector can have multiple Channels registered with it, meaning multiple Channels share one EventLoop, and that EventLoop’s Selector checks all of them.

I have analyzed the other related concepts in earlier articles, so rather than repeat them, I will now walk through Netty’s running process.
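The “one connection, one thread, no locks” point can be illustrated with a plain single-thread executor standing in for a NioEventLoop (illustrative code, not Netty’s): tasks submitted from anywhere run one at a time, in submission order, on the one bound thread.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class SingleThreadOrderDemo {
    public static void main(String[] args) throws InterruptedException {
        // Stand-in for a NioEventLoop: one thread, one FIFO task queue.
        ExecutorService eventLoop = Executors.newSingleThreadExecutor();
        StringBuilder trace = new StringBuilder(); // no lock needed: all mutation
        for (int i = 1; i <= 3; i++) {             // happens on the loop thread
            final int step = i;
            eventLoop.execute(() -> trace.append("handler").append(step).append(" "));
        }
        eventLoop.shutdown();
        eventLoop.awaitTermination(5, TimeUnit.SECONDS);
        System.out.println(trace.toString().trim());
    }
}
```

Because everything for one channel runs on one thread, handlers can use plain (unsynchronized) state, just as a ChannelPipeline’s handlers can.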

2.3.1 Starting the Service

This section summarizes Netty’s running process, starting with service startup. The earlier analysis already made this clear, so this is only a recap, viewed from the perspectives of the main thread and the boss thread.

main thread

  • Create a selector
  • Create a serverSocketChannel
  • Initialize the serverSocketChannel
  • Select a NioEventLoop from the bossGroup for the serverSocketChannel

boss thread

  • Register serverSocketChannel with the selector of the selected NioEventLoop
  • Bind address boot
  • Register the accept connection event (OP_ACCEPT) to selector

The bind() call here chains from ServerBootstrap.bind() through AbstractBootstrap.doBind(), which first runs initAndRegister() and then doBind0(), where the channel is actually bound.

The important things here are that the Selector is created when new NioEventLoopGroup() is called, that listening for OP_ACCEPT is ultimately triggered by fireChannelActive() after bind completes, and that the NioEventLoop thread is started by executing the register operation. That is about all that matters in this section.

2.3.2 Establishing a Connection

When the service is started, it needs to build connections and wait for connection access. The analysis in this section still refers to the previous mode, mainly analyzing worker threads and boss threads, as follows:

boss thread

  • Selector in NioEventLoop polls for connection creation events (OP_ACCEPT)
  • Create a socketChannel
  • Initialize socketChannel and select a NioEventLoop from the workerGroup

worker thread

  • Registers socketChannel with the selector of the selected NioEventLoop
  • Registers the read event (OP_READ) on selector

In fact, many of the steps in this section are similar to the knowledge points in the previous section, except that the focus of this section is on the workerGroup. Here we can combine these two processes and use a figure as follows:

Another point to note is that the NioEventLoop in the worker thread is also started by executing the register operation, which the figure in the previous section shows clearly. It is also worth noting that accepting a connection is treated as a kind of read, and Netty does not attempt more than 16 reads per event, which is the default maximum.

2.3.3 Receiving Data

The main consideration when receiving data is how large a container to allocate initially. Netty uses an adaptive buffer allocator, AdaptiveRecvByteBufAllocator, which predicts, based on the amount of data just received, how large the next container should be. Another consideration: when a container fills up, we will certainly want a fresh container to keep filling, but should that continue forever? As mentioned at the end of the previous section, the default cap is 16 reads. All of this happens on the worker thread. Let me walk through the flow:

  • First, the Selector receives an OP_READ event through the select strategy.
  • Then the event is handled as follows.
  • An initial 1024-byte ByteBuf is allocated to receive the data.
  • Data is read from the Channel into the ByteBuf; the actual number of bytes received is recorded and used to adjust the size of the next allocated ByteBuf.
  • pipeline.fireChannelRead(byteBuf) is then triggered to propagate the data that was read.
  • Finally, check whether the receiving ByteBuf was filled: if yes, keep reading until there is no more data or 16 reads have been made; if no, finish this round of reading and wait for the next OP_READ event.

A few points deserve attention. The essence of reading data is sun.nio.ch.SocketChannelImpl#read(java.nio.ByteBuffer). NioSocketChannel’s read() reads data, while NioServerSocketChannel’s read() creates connections. pipeline.fireChannelReadComplete() means one read event has been fully processed, while pipeline.fireChannelRead(byteBuf) means one read operation completed; a single read event may contain multiple read operations. Finally, note that AdaptiveRecvByteBufAllocator grows its guess decisively, enlarging immediately, but shrinks cautiously, requiring two consecutive undershoots before deciding it really should shrink.
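The grow-decisively, shrink-cautiously policy can be sketched in a few lines of plain Java. This is an illustrative simplification, not Netty’s actual AdaptiveRecvByteBufAllocator (which steps through a precomputed size table); the class and method names here are made up.

```java
public class AdaptiveSizeGuess {
    private int nextSize = 1024;              // initial guess, as in the flow above
    private boolean shrinkCandidate = false;  // require two small reads in a row

    int nextSize() { return nextSize; }

    void record(int actualBytes) {
        if (actualBytes >= nextSize) {
            nextSize *= 2;                    // buffer was filled: grow immediately
            shrinkCandidate = false;
        } else if (actualBytes < nextSize / 2) {
            if (shrinkCandidate) {            // shrink only on the second undershoot
                nextSize = Math.max(64, nextSize / 2);
                shrinkCandidate = false;
            } else {
                shrinkCandidate = true;
            }
        } else {
            shrinkCandidate = false;
        }
    }

    public static void main(String[] args) {
        AdaptiveSizeGuess g = new AdaptiveSizeGuess();
        g.record(1024); System.out.println(g.nextSize()); // grew: buffer was full
        g.record(100);  System.out.println(g.nextSize()); // one small read: no shrink yet
        g.record(100);  System.out.println(g.nextSize()); // second small read: shrink
    }
}
```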

2.3.4 Service Processing

This step deals with the essence of business processing: the data is passed through the channelRead() of all the handlers in the pipeline. Again, I refer to the diagram from the previous article:

Business logic here is mainly InBound event processing: the handler implements io.netty.channel.ChannelInboundHandler#channelRead(ChannelHandlerContext ctx, Object msg), and a handler annotated with @Skip will not be executed. By default, processing runs on the NioEventLoop thread bound to the Channel, but another executor can be configured, for example pipeline.addLast(new UnorderedThreadPoolEventExecutor(10), serverHandler).

2.3.5 Sending (Writing) Data

Before you know how to send data, here are a few operations to understand:

  • Write: Writes to a buffer
  • Flush: Sends the data in the buffer
  • WriteAndFlush: Write to buffer and send immediately
  • There is a ChannelOutboundBuffer between Write and Flush

When Netty writes data and the write fails (the socket buffer is full), it stops writing and registers an OP_WRITE event to be told when writing is possible again. When writing data in bulk, if Netty wants to write more per call, maxBytesPerGatheringWrite can be raised. As long as Netty has data to write and can write it out, it keeps trying, until it fails to write or reaches 16 attempts. If the data is still not fully written after 16 attempts, Netty schedules a task to continue writing rather than registering a write event to trigger it. Finally, if too much data is pending, beyond a certain water mark (writeBufferWaterMark.high()), Netty flips the channel’s writability flag to false and lets the application decide whether to keep sending. Let’s look at the concrete steps of sending data, as shown in the figure below:

  • Write: writes the data into the buffer by calling ChannelOutboundBuffer’s addMessage.
  • Flush: sends the data in the buffer, i.e., calls AbstractChannel.AbstractUnsafe’s flush. This involves two concrete steps: first ChannelOutboundBuffer’s addFlush is called to prepare the data, then NioSocketChannel’s doWrite is called to send it.

One more important point: channel.write() starts execution from the TailContext, while channelHandlerContext.write() starts from the current context.
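The water-mark idea above can be sketched as follows. This is an illustrative simplification, not Netty’s ChannelOutboundBuffer; the class name and thresholds here are made up.

```java
public class WaterMarkDemo {
    private final int low, high;
    private long pendingBytes;
    private boolean writable = true;

    WaterMarkDemo(int low, int high) { this.low = low; this.high = high; }

    void queued(int bytes) {                       // data written but not yet flushed
        pendingBytes += bytes;
        if (pendingBytes > high) writable = false; // above high mark: tell the app to stop
    }

    void flushed(int bytes) {                      // data actually sent to the socket
        pendingBytes -= bytes;
        if (pendingBytes < low) writable = true;   // below low mark: writable again
    }

    boolean isWritable() { return writable; }

    public static void main(String[] args) {
        WaterMarkDemo ch = new WaterMarkDemo(32 * 1024, 64 * 1024);
        ch.queued(70_000);
        System.out.println(ch.isWritable()); // above the high mark
        ch.flushed(50_000);
        System.out.println(ch.isWritable()); // dropped below the low mark
    }
}
```

Using two marks instead of one avoids the flag flapping rapidly when the pending size hovers around a single threshold.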

2.3.6 Disconnecting the server

Disconnecting is actually quite simple and is mainly distinguished by the OP_READ event received. The flow is as follows:

  • The Selector receives an OP_READ event.
  • The OP_READ event is handled in NioSocketChannel.NioSocketChannelUnsafe.read():
  • The data is received.
  • The number of bytes received is checked; if it is less than 0, the close begins.
  • The channel is closed (including cancelling the key on the multiplexer).
  • Messages are cleaned up: no new messages are accepted and everything in the queue is discarded.
  • fireChannelInactive and fireChannelUnregistered are fired.

A few things to note here: closing a connection triggers an OP_READ event, and a read that returns -1 bytes signals the close; alternatively an IOException is thrown during the read, after which the shutdown is performed.

2.3.7 Disabling the Service

Shutting down the service is more of a hassle: the shutdownGracefully() method is called on the bossGroup and workerGroup, which shuts down every NioEventLoop in both groups after a series of checks. See below:

The detailed analysis is not repeated here. The essence of shutting down the service is to close all connections and Selectors: first stop accepting new work, try to finish the work at hand, and finally close all threads and exit the event loop. That concludes my analysis of Netty’s entire running process; next I will introduce some design patterns used in Netty.

Design Patterns in Netty

As such an excellent library, Netty naturally uses many design patterns; here I select just a few typical examples to analyze. To begin, I’ll use a diagram to show the design patterns used in Netty, as follows:

In fact, Netty may use even more patterns than these. In this section I will not analyze all of the above; I will pick just a few. But first, a look at how design patterns are classified.

3.1 Classification of design patterns

Traditionally, design patterns fall into three categories: creational, structural, and behavioral, each focused on a different kind of problem.

Creational patterns solve object-creation problems: they encapsulate complex creation logic and decouple the code that creates objects from the code that uses them. These include the singleton, factory, builder, and prototype (uncommon) patterns.

Structural patterns summarize classic ways of composing classes or objects. These include the proxy, adapter, decorator, bridge, facade (uncommon), composite (uncommon), and flyweight (uncommon) patterns.

Behavioral patterns solve problems of interaction between classes or objects. There are eleven of them: observer, chain of responsibility, iterator, template method, strategy, state, visitor (uncommon), memento (uncommon), command (uncommon), interpreter (uncommon), and mediator (uncommon).

That covers most design patterns, and Netty uses nearly half of them. That is too many, and I have not fully absorbed them all, so I will analyze only a part here and perhaps write a dedicated article later. Here I will focus on the strategy, decorator, and template method patterns.

3.2 Decorator Pattern

As seen in the classification in the previous section, the decorator pattern is a structural pattern. It can be summed up as dynamically attaching responsibilities to objects, providing a more flexible alternative to inheritance for extending functionality. Let’s analyze the pattern directly from the source code of Netty’s WrappedByteBuf:

class WrappedByteBuf extends ByteBuf {
    protected final ByteBuf buf;

    protected WrappedByteBuf(ByteBuf buf) {
        if (buf == null) {
            throw new NullPointerException("buf");
        }
        this.buf = buf;
    }

    @Override
    public final boolean hasMemoryAddress() {
        return buf.hasMemoryAddress();
    }

    @Override
    public final long memoryAddress() {
        return buf.memoryAddress();
    }
    // ...
}

The constructor takes a ByteBuf instance: this instance is the decorated object, whose behavior the current class, WrappedByteBuf, can dynamically change. Since WrappedByteBuf is only the base class of the decorators, it simply forwards to the decorated object’s behavior without modifying it; the remaining methods likewise delegate directly. Here is a subclass of WrappedByteBuf:

final class UnreleasableByteBuf extends WrappedByteBuf {

    private SwappedByteBuf swappedBuf;

    UnreleasableByteBuf(ByteBuf buf) {
        super(buf);
    }
    // ...

    @Override
    public boolean release() {
        return false;
    }
    // ...
}

The constructor calls the parent WrappedByteBuf’s constructor to store the decorated object for later use. Looking at the release() method: no matter what is being released, it returns false, so the decorator changes the behavior of the decorated buf, making it unreleasable. Let’s look at another subclass of WrappedByteBuf, SimpleLeakAwareByteBuf:

final class SimpleLeakAwareByteBuf extends WrappedByteBuf {

    private final ResourceLeak leak;

    SimpleLeakAwareByteBuf(ByteBuf buf, ResourceLeak leak) {
        super(buf);
        this.leak = leak;
    }
    // ...

    @Override
    public boolean release() {
        boolean deallocated = super.release();
        if (deallocated) {
            leak.close();
        }
        return deallocated;
    }
    // ...
}

The constructor still calls the parent’s constructor, and in release(), if the buffer was actually deallocated, leak.close() is executed before returning: again the decorator dynamically changes the decorated object’s behavior. Finally, let’s look at AdvancedLeakAwareByteBuf:

final class AdvancedLeakAwareByteBuf extends WrappedByteBuf {

    private final ResourceLeak leak;

    AdvancedLeakAwareByteBuf(ByteBuf buf, ResourceLeak leak) {
        super(buf);
        this.leak = leak;
    }
    // ...

    @Override
    public boolean release() {
        boolean deallocated = super.release();
        if (deallocated) {
            leak.close();
        } else {
            leak.record();
        }
        return deallocated;
    }
    // ...
}

As you can see, this class follows the same pattern: the constructor calls the parent’s constructor, but release() is implemented differently. If the buffer is deallocated, leak.close() is executed before returning; if it is not, record() captures a stack trace for troubleshooting. Now let’s see where these ByteBuf decorators are used:

// AbstractByteBufAllocator.java, around line 53
protected static CompositeByteBuf toLeakAwareBuffer(CompositeByteBuf buf) {
    ResourceLeak leak;
    switch (ResourceLeakDetector.getLevel()) {
        case SIMPLE:
            leak = AbstractByteBuf.leakDetector.open(buf);
            if (leak != null) {
                buf = new SimpleLeakAwareCompositeByteBuf(buf, leak);
            }
            break;
        case ADVANCED:
        case PARANOID:
            leak = AbstractByteBuf.leakDetector.open(buf);
            if (leak != null) {
                buf = new AdvancedLeakAwareCompositeByteBuf(buf, leak);
            }
            break;
        default:
            break;
    }
    return buf;
}

Here you can see that the ByteBuf is decorated differently according to the configured leak-detection level. Two points are key to the decorator pattern: the decorator and the original class share the same parent type, so multiple decorator classes can be nested around the original object; and each decorator class is an enhancement of functionality, which is the defining feature of the pattern’s application scenarios. Since the example above selects different decorator classes according to different policies, let’s look at the strategy pattern next.
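The nesting property is easy to demonstrate outside Netty with a toy decorator hierarchy (hypothetical classes, but the same shape as WrappedByteBuf and its subclasses): every decorator implements the component interface, so decorators stack freely.

```java
public class DecoratorDemo {
    interface Message { String text(); }

    static class Plain implements Message {
        public String text() { return "hello"; }
    }

    // Decorators share the component interface, so they nest freely,
    // just as toLeakAwareBuffer() wraps one ByteBuf inside another.
    static class Upper implements Message {
        private final Message inner;
        Upper(Message inner) { this.inner = inner; }
        public String text() { return inner.text().toUpperCase(); }
    }

    static class Bracketed implements Message {
        private final Message inner;
        Bracketed(Message inner) { this.inner = inner; }
        public String text() { return "[" + inner.text() + "]"; }
    }

    public static void main(String[] args) {
        Message m = new Bracketed(new Upper(new Plain()));
        System.out.println(m.text());
    }
}
```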

3.3 Strategy Pattern

Like the template method pattern in the next section, the strategy pattern is a behavioral pattern. So what is it? The strategy pattern defines a family of algorithm classes and encapsulates each algorithm separately so they can be substituted for one another. It consists of a strategy interface and a set of strategy classes implementing that interface. Because all strategy classes implement the same interface, client code programs against the interface rather than an implementation, allowing strategies to be swapped flexibly. Concretely, we will use Netty’s EventExecutorChooser as the illustration. We analyzed the following method earlier:

public EventExecutorChooser newChooser(EventExecutor[] executors) {
    if (isPowerOfTwo(executors.length)) {
        return new PowerOfTwoEventExecutorChooser(executors);
    } else {
        return new GenericEventExecutorChooser(executors);
    }
}

This creates the EventExecutorChooser, which is used to pick a NioEventLoop to serve each new incoming connection. Different choosers are created, but both simply iterate over an array. For example, with a NioEventLoop array of length 8: when the first client connects, the chooser selects the first NioEventLoop to serve it; the second client gets the second NioEventLoop, and so on; the eighth client gets the eighth NioEventLoop, and the ninth client wraps around to the first again. After all, the Selector each NioEventLoop encapsulates is a multiplexer, and this is where that pays off. isPowerOfTwo() checks whether the array length is a power of two and selects the strategy accordingly. First, let’s look at the strategy interface:

// EventExecutorChooserFactory.java, around line 34
@UnstableApi
interface EventExecutorChooser {

    /**
     * Returns the new {@link EventExecutor} to use.
     */
    EventExecutor next();
}

This is a very simple next() function; now let’s look at its implementations:

// DefaultEventExecutorChooserFactory.java, around line 46
private static final class PowerOfTwoEventExecutorChooser implements EventExecutorChooser {
    private final AtomicInteger idx = new AtomicInteger();
    private final EventExecutor[] executors;

    PowerOfTwoEventExecutorChooser(EventExecutor[] executors) {
        this.executors = executors;
    }

    @Override
    public EventExecutor next() {
        return executors[idx.getAndIncrement() & executors.length - 1];
    }
}

private static final class GenericEventExecutorChooser implements EventExecutorChooser {
    private final AtomicInteger idx = new AtomicInteger();
    private final EventExecutor[] executors;

    GenericEventExecutorChooser(EventExecutor[] executors) {
        this.executors = executors;
    }

    @Override
    public EventExecutor next() {
        return executors[Math.abs(idx.getAndIncrement() % executors.length)];
    }
}

As you can see, both classes implement the EventExecutorChooser interface with different next() implementations: one uses the “%” operation and the other uses “&”, and the latter is considerably more efficient. This attention to detail is part of Netty’s strength. The above is a very simple application of the strategy pattern. In most cases the strategy pattern avoids if-else branching logic; its main purpose is to decouple the definition, creation, and use of strategies, controlling complexity so that no single part grows too complicated or too large. At the same time, for complex code, adding a new strategy under the strategy pattern keeps changes minimal and centralized.
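The equivalence of the “%” and “&” round-robin strategies for power-of-two lengths can be checked with plain ints (illustrative, no Netty required):

```java
public class RoundRobinDemo {
    public static void main(String[] args) {
        int length = 8;                       // a power of two, like 8 NioEventLoops
        for (int idx = 6; idx <= 10; idx++) {
            int byMod = Math.abs(idx % length);
            int byAnd = idx & (length - 1);   // valid only for power-of-two lengths
            System.out.println(byMod + " " + byAnd);
        }
    }
}
```

Both columns match, including the wrap-around from index 7 back to 0, which is exactly the ninth-client-returns-to-the-first-NioEventLoop behavior described above; the bitwise form simply does it without a division.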

3.4 Template Method Pattern

In this section, I examine the template method pattern, which defines an algorithm skeleton in a method and defers some steps to subclasses. The template method allows subclasses to redefine certain steps of an algorithm without changing its overall structure. Here we take Netty's ByteToMessageDecoder as an example; after the earlier articles, this class should be familiar. Let's review the implementation, starting with its channelRead() method.

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
        if (msg instanceof ByteBuf) {
            CodecOutputList out = CodecOutputList.newInstance();
            try {
                ByteBuf data = (ByteBuf) msg;
                first = cumulation == null;
                if (first) {
                    cumulation = data;
                } else {
                    cumulation = cumulator.cumulate(ctx.alloc(), cumulation, data);
                }
                callDecode(ctx, cumulation, out);
                ...
            }
        }
    }

Following into the callDecode() method:

    protected void callDecode(ChannelHandlerContext ctx, ByteBuf in, List<Object> out) {
        ...
        int oldInputLength = in.readableBytes();
        decode(ctx, in, out);
        ...
    }

The key call here is the decode() method, which we can look at next:

 /**
     * Decode the from one {@link ByteBuf} to an other. This method will be called till either the input
     * {@link ByteBuf} has nothing to read when return from this method or till nothing was read from the input
     * {@link ByteBuf}.
     *
     * @param ctx           the {@link ChannelHandlerContext} which this {@link ByteToMessageDecoder} belongs to
     * @param in            the {@link ByteBuf} from which to read data
     * @param out           the {@link List} to which decoded messages should be added
     * @throws Exception    is thrown if an error accour
     */
    protected abstract void decode(ChannelHandlerContext ctx, ByteBuf in, List<Object> out) throws Exception;

As you can see, decode() is an abstract method. The channelRead() and callDecode() logic shown above forms the algorithm skeleton, while the concrete decoding step is left to subclasses to implement. There are many such subclasses, for example the simplest fixed-length decoder, the HTTP decoder, the MQTT decoder, and so on. The template method pattern serves two main purposes: reuse and extension. Reuse means all subclasses share the code of the template method provided by the parent class. Extension means the framework exposes extension points through the template method, so users can customize the framework's behavior without modifying its source code. There are certainly more examples of these three patterns in Netty, as well as other design patterns; its source code involves so many design patterns and design ideas that they can be absorbed slowly over time.
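The skeleton-plus-hook structure can be boiled down to a few lines. The following is a minimal sketch with hypothetical names (SimpleDecoder and FixedLengthDecoder are illustrations, not Netty's API); it mirrors how ByteToMessageDecoder's loop repeatedly calls a subclass-supplied decode() step, in the spirit of a fixed-length frame decoder:

```java
// Minimal template-method sketch: the parent fixes the decoding
// skeleton, subclasses fill in only the decode() step.
import java.util.ArrayList;
import java.util.List;

abstract class SimpleDecoder {
    // Template method: the loop is the fixed skeleton, reused by all
    // subclasses. It keeps decoding while the subclass makes progress.
    public final List<String> channelRead(StringBuilder cumulation) {
        List<String> out = new ArrayList<>();
        int before;
        do {
            before = cumulation.length();
            decode(cumulation, out);          // step deferred to subclasses
        } while (cumulation.length() > 0 && cumulation.length() < before);
        return out;
    }

    // The hook: each concrete decoder defines only this step.
    protected abstract void decode(StringBuilder in, List<String> out);
}

// A fixed-length decoder, analogous in spirit to a fixed-length
// frame decoder: emit one frame whenever enough input has accumulated.
class FixedLengthDecoder extends SimpleDecoder {
    private final int frameLength;

    FixedLengthDecoder(int frameLength) {
        this.frameLength = frameLength;
    }

    @Override
    protected void decode(StringBuilder in, List<String> out) {
        if (in.length() >= frameLength) {
            out.add(in.substring(0, frameLength));
            in.delete(0, frameLength);
        }
    }
}
```

Feeding "abcdefgh" to a FixedLengthDecoder(3) yields the frames "abc" and "def", with "gh" left in the accumulation buffer awaiting more input, which is exactly the cumulation behavior channelRead() implements above.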

4 Summary

This concludes the first stage of learning Netty. The harvest has been substantial, and Netty's power is evident. Netty goes to great lengths for performance, such as object pooling and zero copy, which this series did not analyze. In addition, Netty author Trustin Lee published a performance analysis of Twitter's use of Netty, which can be seen in the figure below.

In addition, Netty's code style is quite consistent; some of its code is written very elegantly, and its use of design patterns is excellent. My study of Netty at this stage was rough, and many details remain to be analyzed and learned slowly. To get the most out of Netty, it also helps to see how open source projects like Zookeeper, Hadoop, and RocketMQ use it. Finally, I hope to keep drawing on Netty's excellent source code, learning its design ideas, code style, and design patterns, and perhaps later do secondary development on top of Netty and write some practical middleware tool libraries.

References

  • Netty In Action
  • The Definitive Netty Guide
  • Blog.csdn.net/u013857458/…
  • Blog.csdn.net/u013828625/…

Note: The source code analyzed here is from Netty 4.1.6.