This article is for my notes only.

The development of RPC

Before we get to Netty's high performance, let's take a look at what traditional RPC does and where it falls short.

Traditional RPC call defects

For traditional RPC, call performance is a significant bottleneck.

First, the network transport problem

Traditional RPC transport uses synchronous blocking I/O (BIO). When client concurrency or network latency is high, I/O threads block frequently. The biggest problem with a server built on the BIO communication model is that it cannot scale elastically: as concurrent requests increase, the number of threads expands with them and system performance drops sharply. If concurrency keeps climbing, problems such as file-handle exhaustion and thread stack overflows can follow.
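As a minimal sketch of the thread-per-connection BIO model described above (the class name and port are made up for illustration):

```java
import java.io.InputStream;
import java.net.ServerSocket;
import java.net.Socket;

// Sketch of the classic BIO server: one thread per connection.
// With N concurrent clients the JVM needs N threads, which is what
// limits scalability under heavy load.
public class BioEchoServer {
    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(8080)) {
            while (true) {
                Socket socket = server.accept();          // blocks until a client connects
                new Thread(() -> handle(socket)).start(); // one dedicated thread per connection
            }
        }
    }

    private static void handle(Socket socket) {
        try (Socket s = socket; InputStream in = s.getInputStream()) {
            byte[] buf = new byte[1024];
            int n;
            while ((n = in.read(buf)) != -1) {            // blocks while waiting for data
                s.getOutputStream().write(buf, 0, n);     // echo back
            }
        } catch (Exception ignored) {
        }
    }
}
```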

Second, poor serialization performance

Java's built-in serialization has some typical pitfalls.

First, Java's serialization mechanism is a Java-internal object codec and cannot be used across languages. Second, compared with other open-source serialization frameworks, the byte stream Java serialization produces is large, its performance is poor, and its resource consumption is high.
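A rough sketch of the size problem, comparing JDK serialization with a hand-rolled encoding of the same fields (the User class and its fields are hypothetical, used only for the comparison):

```java
import java.io.ByteArrayOutputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.nio.charset.StandardCharsets;

public class SerializationSizeDemo {
    // Hypothetical payload class used only for this comparison.
    static class User implements Serializable {
        private static final long serialVersionUID = 1L;
        String name = "alice";
        int age = 30;
    }

    public static void main(String[] args) throws Exception {
        // JDK serialization: writes class metadata in addition to field values.
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(new User());
        }
        System.out.println("JDK serialization: " + bos.size() + " bytes");

        // Hand-rolled encoding of the same two fields, for comparison.
        byte[] manual = "alice|30".getBytes(StandardCharsets.UTF_8);
        System.out.println("Manual encoding:   " + manual.length + " bytes");
    }
}
```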

Third, the threading model problem

Because the I/O is synchronous and blocking (BIO), each TCP connection occupies one thread. When reads or writes block, those threads cannot be released promptly. As a result, system performance deteriorates and the JVM may even fail to create new threads.

Netty’s high performance

As a high-performance NIO communication framework, Netty is widely used in Internet messaging middleware and in the gaming and financial industries.

The main advantages of Netty are the following:

1. Asynchronous non-blocking communication

When many requests need to be handled at the same time, a server can use either multithreading or I/O multiplexing.

I/O multiplexing multiplexes many blocking I/O operations onto a single blocking select call, so that a single thread can handle multiple requests at once. Its biggest advantage is low overhead: the system does not need to create a new process or thread for each connection.
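A minimal sketch of I/O multiplexing with plain Java NIO, assuming an echo server on a made-up port: one thread and one Selector serve every connection.

```java
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

// One thread, one Selector: many channels are registered, and the
// thread only wakes up for channels that are actually ready.
public class MultiplexedEchoServer {
    public static void main(String[] args) throws Exception {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(8080));
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);

        while (true) {
            selector.select();                       // blocks until at least one channel is ready
            Iterator<SelectionKey> it = selector.selectedKeys().iterator();
            while (it.hasNext()) {
                SelectionKey key = it.next();
                it.remove();
                if (key.isAcceptable()) {
                    SocketChannel client = server.accept();
                    client.configureBlocking(false);
                    client.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {
                    SocketChannel client = (SocketChannel) key.channel();
                    ByteBuffer buf = ByteBuffer.allocate(1024);
                    int n = client.read(buf);        // non-blocking read
                    if (n == -1) {
                        key.cancel();
                        client.close();
                    } else if (n > 0) {
                        buf.flip();
                        client.write(buf);           // echo back what was read
                    }
                }
            }
        }
    }
}
```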

Netty's NioEventLoop is an I/O thread that aggregates a multiplexer (Selector) and can serve many SocketChannels concurrently. Its read and write operations are non-blocking, which greatly improves the efficiency of I/O threads.

In addition, Netty uses asynchronous communication: a single I/O thread can concurrently serve N client connections and their read/write operations, which greatly improves performance, elasticity, and reliability.
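For comparison, a minimal Netty echo server sketch (Netty 4.x APIs; the port and the inline handler are illustrative): each worker NioEventLoop serves many channels, and writes complete asynchronously.

```java
import io.netty.bootstrap.ServerBootstrap;
import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioServerSocketChannel;

public class NettyEchoServer {
    public static void main(String[] args) throws Exception {
        EventLoopGroup boss = new NioEventLoopGroup(1);   // accepts new connections
        EventLoopGroup workers = new NioEventLoopGroup(); // each NioEventLoop serves many channels
        try {
            ServerBootstrap b = new ServerBootstrap();
            b.group(boss, workers)
             .channel(NioServerSocketChannel.class)
             .childHandler(new ChannelInitializer<SocketChannel>() {
                 @Override
                 protected void initChannel(SocketChannel ch) {
                     ch.pipeline().addLast(new ChannelInboundHandlerAdapter() {
                         @Override
                         public void channelRead(ChannelHandlerContext ctx, Object msg) {
                             // Non-blocking write; the returned ChannelFuture completes asynchronously.
                             ctx.writeAndFlush((ByteBuf) msg);
                         }
                     });
                 }
             });
            b.bind(8080).sync().channel().closeFuture().sync();
        } finally {
            boss.shutdownGracefully();
            workers.shutdownGracefully();
        }
    }
}
```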

2. Efficient Reactor thread model

There are three common Reactor thread models: (1) the Reactor single-thread model, (2) the Reactor multi-thread model, and (3) the master/slave (main/sub) Reactor multi-thread model.

1. Reactor single-thread model

The Reactor single-thread model means that all I/O operations are performed on a single NIO thread, including accepting TCP connections from clients, initiating TCP connections to servers, and reading and sending request or response messages.

Because the Reactor pattern uses asynchronous, non-blocking I/O, a single thread can in theory handle all I/O operations.
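Expressed with Netty (a configuration sketch under my own assumptions, not a recommended setup), the single-thread Reactor model corresponds to one single-threaded NioEventLoopGroup that both accepts connections and performs all I/O:

```java
import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioServerSocketChannel;

// Reactor single-thread model: the same single-threaded event loop group
// accepts new connections and handles every read/write event.
public class SingleThreadReactorServer {
    public static void main(String[] args) throws Exception {
        EventLoopGroup reactor = new NioEventLoopGroup(1); // exactly one NIO thread
        try {
            ServerBootstrap b = new ServerBootstrap();
            b.group(reactor)                               // same group as parent and child
             .channel(NioServerSocketChannel.class)
             .childHandler(new ChannelInitializer<SocketChannel>() {
                 @Override
                 protected void initChannel(SocketChannel ch) {
                     // business handlers would be added here
                 }
             });
            b.bind(8080).sync().channel().closeFuture().sync();
        } finally {
            reactor.shutdownGracefully();
        }
    }
}
```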

However, for high-load, high-concurrency applications, the single-thread model has serious drawbacks.

If one thread has to serve hundreds of connections at the same time, then even at 100% CPU load it cannot keep up with encoding, decoding, reading, and sending all of their messages.

In addition, once the NIO thread is overloaded, its processing slows down and large numbers of client connections time out. Timed-out requests are often retransmitted, which makes the NIO thread's load even worse.

There is also a reliability problem: with only one NIO thread, if it unexpectedly dies or enters an infinite loop, the entire communication module becomes unavailable.

To address these problems, the design evolved into the Reactor multi-thread model, which Netty also supports.

2. Reactor multi-thread model

To be continued