Overview of I/O threading models

Current I/O threading models can be broadly divided into three categories:

  1. Single-threaded blocking I/O service model
  2. Multi-threaded blocking I/O service model
  3. Reactor model

Based on the number of Reactors and the number of threads in the processing pool, there are three typical implementations of the Reactor model:

  1. Single Reactor single-thread
  2. Single Reactor multi-thread
  3. Master-slave Reactor multi-thread

Traditional blocking I/O threading model

In the traditional blocking I/O thread model, a single thread loops over a listening port, checking for new socket connections and handling each one as it arrives. When the business logic is complex and cannot return quickly, subsequent connections are blocked behind the one currently being processed, so efficiency is very low. Its thread model is shown in the figure below:



(Multi-threaded blocking I/O is not covered separately here; it simply dedicates one thread to each connection.)

Reactor model

There are generally two solutions to the shortcomings of the traditional blocking I/O service model:

  1. Based on the I/O multiplexing model: multiple connections share one blocking object, so the application waits on that single object instead of blocking on every connection. When new data is available on a connection, the operating system notifies the application, and the thread returns from the blocked state to begin business processing.
  2. Reuse of thread resources via a thread pool: there is no need to create a thread for each connection. After a connection is established, its business-processing tasks are assigned to pool threads, so one thread can handle the business of multiple connections.
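Solution 1 can be made concrete with the JDK's own java.nio classes. In the sketch below the demo wiring (in-process client, single message) is illustrative, but the Selector/Channel calls are standard JDK API: one thread blocks in a single select() call on behalf of every connection.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.nio.charset.StandardCharsets;

public class MultiplexDemo {

    // Accepts one connection, reads one message, and returns it.
    public static String serveOne() throws IOException {
        try (Selector selector = Selector.open();
             ServerSocketChannel server = ServerSocketChannel.open()) {
            server.bind(new InetSocketAddress("127.0.0.1", 0)); // ephemeral port
            server.configureBlocking(false);
            server.register(selector, SelectionKey.OP_ACCEPT);

            int port = ((InetSocketAddress) server.getLocalAddress()).getPort();
            // In-process client, just so the demo is self-contained.
            try (SocketChannel client = SocketChannel.open(new InetSocketAddress("127.0.0.1", port))) {
                client.write(ByteBuffer.wrap("ping".getBytes(StandardCharsets.UTF_8)));

                while (true) {
                    selector.select(); // the single blocking point for all connections
                    for (SelectionKey key : selector.selectedKeys()) {
                        if (key.isAcceptable()) {        // new connection: register it for reads
                            SocketChannel ch = server.accept();
                            ch.configureBlocking(false);
                            ch.register(selector, SelectionKey.OP_READ);
                        } else if (key.isReadable()) {   // data is ready: this read does not block
                            ByteBuffer buf = ByteBuffer.allocate(64);
                            ((SocketChannel) key.channel()).read(buf);
                            key.channel().close();
                            buf.flip();
                            return StandardCharsets.UTF_8.decode(buf).toString();
                        }
                    }
                    selector.selectedKeys().clear();
                }
            }
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println("server got: " + serveOne());
    }
}
```

Note how the application never waits on an individual connection; it waits once, in select(), and is told which channels are ready.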

Combining I/O multiplexing with a thread pool is the basic design idea of the Reactor model, as shown in the figure below:

Single Reactor single-thread model



Advantages: the model is simple; with no multithreading there are no inter-process communication or contention problems, and everything is completed in a single thread.

Disadvantages:

  • Performance: with only one thread, a multi-core CPU cannot be fully utilized. While the Handler is processing the business of one connection, the loop cannot process events on any other connection, which easily becomes a performance bottleneck.
  • Reliability: if the thread terminates unexpectedly or enters an infinite loop, the communication module of the whole system becomes unavailable and can no longer receive or process external messages, causing node failure.

Application scenario: the number of clients is limited and business processing is very fast, as in Redis, where a command is typically processed in O(1) time.
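The dispatch structure of this mode can be sketched without real sockets. In the toy model below (all names are illustrative, not a real NIO loop), one thread both demultiplexes events and runs every handler inline, which is exactly why a slow handler stalls all other connections:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

public class SingleReactor {
    private final Map<String, Consumer<String>> handlers = new HashMap<>();
    private final Deque<String[]> events = new ArrayDeque<>(); // [type, payload]
    final List<String> log = new ArrayList<>();

    void register(String type, Consumer<String> h) { handlers.put(type, h); }
    void offer(String type, String payload) { events.add(new String[]{type, payload}); }

    // The "event loop": dispatch each ready event to its handler, in order,
    // on this single thread -- there is nowhere else for the work to go.
    void runOnce() {
        while (!events.isEmpty()) {
            String[] e = events.poll();
            handlers.get(e[0]).accept(e[1]);
        }
    }

    public static void main(String[] args) {
        SingleReactor r = new SingleReactor();
        r.register("accept", c -> r.log.add("accepted " + c));
        r.register("read",   c -> r.log.add("read " + c));
        r.offer("accept", "conn-1");
        r.offer("read", "conn-1");
        r.runOnce();
        System.out.println(r.log); // [accepted conn-1, read conn-1]
    }
}
```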

Single Reactor multi-thread model



Advantage: fully utilizes the processing power of a multi-core CPU.

Disadvantages: sharing data across multiple threads complicates access control, and the single Reactor thread still monitors and responds to all events, so it is likely to become a performance bottleneck in high-concurrency scenarios.
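The essential change from the single-thread mode can be sketched in a few lines: the Reactor thread keeps the read/write work but hands decode-compute-encode to a worker pool. The class and method names below are illustrative assumptions, not a framework API:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ReactorWithPool {
    // Worker pool that runs business logic, keeping the Reactor thread free.
    static final ExecutorService workers = Executors.newFixedThreadPool(4);

    // Called on the Reactor thread when a connection becomes readable:
    // the (simulated) business work is submitted to the pool instead of
    // being executed inline on the event loop.
    static Future<String> onRead(String request) {
        return workers.submit(() -> "resp:" + request.toUpperCase());
    }

    public static void main(String[] args) throws Exception {
        Future<String> f = onRead("ping");
        System.out.println(f.get()); // resp:PING
        workers.shutdown();
    }
}
```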

Master-slave Reactor multi-thread model



Advantages: the division of labor between the parent and child threads is simple and the responsibilities are clear. The Reactor main (parent) thread only accepts new connections and passes each one to a child thread, which completes all subsequent business processing; the child thread does not need to return data to the parent.

Disadvantage: high programming complexity.

Examples in practice: this model is widely used in well-known projects, including Nginx's master-slave Reactor multi-process model, Memcached's master-slave multi-thread model, and Netty's master-slave multi-thread model.
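The master-slave split can be reduced to one core decision: the main Reactor accepts, then hands each new connection to one of N sub-Reactors, typically round-robin (similar in spirit to Netty's default EventLoopGroup chooser). A toy sketch with illustrative names:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

public class MasterSlaveSketch {
    static final int N_SUB = 3;
    // Each inner list stands in for the set of connections owned by one sub-Reactor.
    static final List<List<String>> subReactors = new ArrayList<>();
    static final AtomicInteger idx = new AtomicInteger();
    static { for (int i = 0; i < N_SUB; i++) subReactors.add(new ArrayList<>()); }

    // Main Reactor: accept, then register the connection with the next
    // sub-Reactor round-robin. From then on, that sub-Reactor owns all of
    // the connection's I/O; nothing is ever handed back.
    static int accept(String conn) {
        int i = Math.abs(idx.getAndIncrement() % N_SUB);
        subReactors.get(i).add(conn);
        return i;
    }

    public static void main(String[] args) {
        for (int c = 0; c < 6; c++) accept("conn-" + c);
        System.out.println(subReactors); // 6 connections spread evenly, 2 per sub-Reactor
    }
}
```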

Reactor model in Netty

Netty is based on the master-slave Reactor multi-thread model with some refinements, in which both the master and the slave Reactor are extended from a single thread to a group of threads.



The server side contains one Boss NioEventLoopGroup and one Worker NioEventLoopGroup. A NioEventLoopGroup is an event-loop group containing multiple event loops (NioEventLoops), and each NioEventLoop holds one Selector and one event-loop thread. Each Boss NioEventLoop loops over three steps:

  1. Polling Accept events;
  2. Handle the Accept I/O event: establish the connection with the client, create a NioSocketChannel, and register it with the Selector of one of the Worker NioEventLoops;
  3. Process tasks in the task queue (runAllTasks). These include tasks submitted by the user via eventLoop.execute() or eventLoop.schedule(), as well as tasks submitted to this EventLoop by other threads.
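Step 3 is worth illustrating: besides I/O, each loop iteration drains a task queue that any thread may submit to. The toy event loop below mimics the execute()/runAllTasks() idea with illustrative names; it is not Netty code, but it shows why a submitted task always runs on the loop's own thread:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

public class ToyEventLoop implements Runnable {
    private final BlockingQueue<Runnable> tasks = new LinkedBlockingQueue<>();
    private volatile boolean running = true;

    public void execute(Runnable task) { tasks.add(task); } // callable from any thread
    public void shutdown() { running = false; }

    @Override public void run() {
        while (running || !tasks.isEmpty()) {
            // A real NioEventLoop would do selector.select() + I/O handling here
            // (steps 1 and 2); we only model step 3, draining the task queue.
            Runnable t;
            try { t = tasks.poll(100, TimeUnit.MILLISECONDS); }
            catch (InterruptedException e) { break; }
            if (t != null) t.run();
        }
    }

    public static void main(String[] args) throws Exception {
        ToyEventLoop loop = new ToyEventLoop();
        Thread t = new Thread(loop, "event-loop");
        t.start();
        CompletableFuture<String> who = new CompletableFuture<>();
        loop.execute(() -> who.complete(Thread.currentThread().getName()));
        System.out.println("task ran on: " + who.get()); // task ran on: event-loop
        loop.shutdown();
        t.join();
    }
}
```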

Each Worker NioEventLoop likewise loops over three steps:

  1. Polling Read and Write events.
  2. Handle the I/O events, namely Read and Write, when readable or writable events occur on a NioSocketChannel;
  3. Process tasks in the task queue, runAllTasks.

Tasks in the task queue arise in three typical scenarios:

1. Ordinary tasks customized by user programs:

    ctx.channel().eventLoop().execute(new Runnable() {
        @Override
        public void run() {
            // custom task logic
        }
    });

2. Channel methods invoked by a thread other than the Channel's own Reactor (EventLoop) thread

For example, in a push system, a business thread looks up the corresponding Channel reference by the user's identity and then calls a write method to push a message to that user. The write is ultimately submitted to that Channel's task queue and consumed asynchronously.
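The same thread-confinement idea can be sketched without Netty: the business thread never touches the connection directly; it only submits a task to the connection's single event-loop thread (which is what Netty does internally when a write is issued from a non-EventLoop thread). All names below are illustrative:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;

public class CrossThreadWrite {
    // Stand-in for the connection's single event-loop thread.
    static final ExecutorService eventLoop =
            Executors.newSingleThreadExecutor(r -> new Thread(r, "event-loop-1"));
    // Stand-in for the connection's outbound buffer.
    static final BlockingQueue<String> outbound = new LinkedBlockingQueue<>();

    // Called from any business thread; the actual buffer mutation happens
    // on the event-loop thread, so no locking on the connection is needed.
    static void push(String userMsg) {
        eventLoop.execute(() ->
                outbound.add(Thread.currentThread().getName() + ": " + userMsg));
    }

    public static void main(String[] args) throws Exception {
        push("hello user-42");
        System.out.println(outbound.take()); // event-loop-1: hello user-42
        eventLoop.shutdown();
    }
}
```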

3. User-defined scheduled tasks

    ctx.channel().eventLoop().schedule(new Runnable() {
        @Override
        public void run() {
            // scheduled task logic
        }
    }, 60, TimeUnit.SECONDS);