Preface

This series is the WebFlux part of my Java Programming Methodology: Reactive Interpretation series, which I am now sharing. The prerequisite interpretations of RxJava 2 and Reactor have already been recorded and posted on Bilibili at the following addresses:

RxJava source code reading and sharing: www.bilibili.com/video/av345…

Reactor source code reading and sharing: www.bilibili.com/video/av353…

NIO source code reading video sharing: www.bilibili.com/video/av432…

Supporting articles for the NIO source code interpretation videos:

Some things about the source code from BIO to NIO: BIO

Some things about the source code from BIO to NIO: NIO (Part 1)

Some things about the source code from BIO to NIO: NIO (Part 2)

Some things about the source code from BIO to NIO: a look at the Selector

Java Programming Methodology - Spring WebFlux 01: Why Spring WebFlux

The RxJava and Reactor interpretations will not be published openly, as they form part of my book. If you are interested, you can spend some time watching the videos, in which I give a thorough, detailed interpretation of both libraries, including their design philosophy and the related methodology.

Servlet 3.1 with Spring MVC

With the introduction of Servlet 3.1, non-blocking behavior became possible through Spring MVC. However, the Servlet API still contains several blocking interfaces, and we may inadvertently use a blocking API in an application that is supposed to be non-blocking. In that case, the blocking call will inevitably degrade application performance. Consider the following code:

@GetMapping
void onResponse(HttpServletResponse response) throws IOException {
    try {
        // some logic here
    } catch (Exception e) {
        // sendError() is a blocking API
        response.sendError(500);
    }
}

This code is used in Spring MVC, where the Spring container blocks while rendering the corresponding error page, as follows:

@Controller
public class MyCustomErrorController implements ErrorController {
    @RequestMapping(path = "/error")
    public String greeting() {
        return "myerror";
    }
    @Override
    public String getErrorPath() {
        return "/error";
    }
}

The page rendered here is myerror.jsp; its code is omitted. Of course, there are ways to handle this error asynchronously, but the chances of making a mistake are higher. Remember that requests must still eventually pass through Servlet objects, and the Servlet API mixes blocking and non-blocking methods. Let's use a diagram to understand this.

When a request arrives and triggers an event, the event is processed as shown in the figure above (we focus only on the processing stage inside the Servlet container). As you can see, I/O blocking can occur throughout this flow, especially in the Filter chain. Based on the content of the previous section, we can use a diagram to show where non-blocking I/O can be guaranteed.

(Figure: the Spring MVC stack on Servlet 3.1+ containers versus the Spring WebFlux stack)

Asynchronous processing at the service layer is hard to get right

On the business side, most programmers are not skilled at concurrent programming, so it is hard to write code that performs well and follows good practice, which in turn makes it difficult to apply reasonable asynchronous handling to our business logic under Spring Web MVC. For example, we tend to bind I/O operations to the currently executing thread, bundling the two concerns of production and consumption together. If we then go asynchronous with both on the same thread, a high level of concurrency will spawn a large number of threads, and the CPU will spend more of its capacity switching between them, which is undesirable.

RxJava and Reactor provide scheduling APIs, such as publishOn in Reactor and observeOn in RxJava, to keep production and consumption separate. At the same time, a thread pool used for production or consumption is typically shared by multiple subscriptions, so each thread may serve several subscription relationships at once, and a single subscription does not permanently occupy a thread. When elements are emitted, whether multiple threads take part in distributing them depends on the subscriber's requested amount and the emission rate. Take Reactor's publishOn as an example: when the upstream is synchronous (judged by the requestFusion call in FluxPublishOn.PublishOnSubscriber#onSubscribe), consumption always stays on the same thread, controlled through the WIP (work-in-progress) counter inside FluxPublishOn.PublishOnSubscriber#trySchedule. Once we define a queue size for publishOn, when the elements in the queue are consumed but the upstream produces too slowly, the current consuming thread is released; when new elements are delivered, another thread is obtained from the pool to continue consumption.
If you have any questions here, please review the earlier content of the book (since the book is not yet published, please review the videos I shared). In this way, server performance can be maximized; this is genuinely hard to implement on our own in Spring MVC. In addition, through Reactor's implementation of back pressure, we can get a message backlog similar to that of messaging middleware, so that data is not lost during network transmission, which better copes with high-concurrency access. Next, let's take a quick look at the use of back pressure under WebFlux.
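To make the separation concrete, here is a minimal JDK-only sketch of the hand-off that publishOn/observeOn perform: a bounded queue between a producing thread and a consuming worker pool. This illustrates the idea only; it is not Reactor's implementation, and the queue size and element count are arbitrary.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicReference;

// Not Reactor itself: a JDK-only sketch of the idea behind publishOn/observeOn.
// Elements are produced on the caller thread and handed over through a bounded
// queue (the "prefetch" buffer) to a consumer running on a pool thread, so
// production and consumption are decoupled.
public class PublishOnSketch {
    public static void main(String[] args) throws Exception {
        BlockingQueue<Integer> buffer = new ArrayBlockingQueue<>(8); // bounded, like a prefetch queue
        ExecutorService pool = Executors.newFixedThreadPool(2);
        AtomicReference<String> consumerThread = new AtomicReference<>();

        pool.submit(() -> {
            try {
                for (int i = 0; i < 5; i++) {
                    buffer.take();                       // consume on a pool thread
                }
                consumerThread.set(Thread.currentThread().getName());
            } catch (InterruptedException ignored) { }
        });

        for (int i = 1; i <= 5; i++) {
            buffer.put(i);                               // produce on the caller thread
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);

        // The two stages ran on different threads.
        System.out.println(!Thread.currentThread().getName().equals(consumerThread.get())); // true
    }
}
```

Because the queue is bounded, a slow consumer eventually blocks the producer, which is the crude, thread-blocking analogue of what Reactor achieves without blocking via requests.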

The use of back pressure in WebFlux

To understand how back pressure works under WebFlux, we need to review the transport layer it uses by default, TCP/IP. As we know, normal communication between the browser and the server (and usually server-to-server communication as well) is done over a TCP connection; this also includes communication between WebClient and the server in WebFlux. In addition, we will review the meaning of back pressure from the perspective of the Reactive Streams specification in order to control it better.

In Reactive Streams, back pressure consists of a backlog of messages on the receiving end plus the consumer's ability to mediate demand by sending notifications expressing how many elements it can consume. The whole process operates on element objects, so here we run into a tricky problem: TCP is an abstraction over bytes, not over logical elements. What we commonly call back pressure control refers to the number of logical elements sent to or received from the network, while TCP's own flow control is based on bytes.
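The demand signaling described above can be sketched with the JDK's own java.util.concurrent.Flow types, which follow the same contract as Reactive Streams. The class name and the one-element-at-a-time request size here are purely illustrative:

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Flow;
import java.util.concurrent.SubmissionPublisher;

// Sketch of Reactive Streams demand: the subscriber tells the publisher how
// many elements it can consume, requesting one at a time.
public class DemandSketch {
    public static void main(String[] args) throws Exception {
        List<Integer> received = new CopyOnWriteArrayList<>();
        CountDownLatch done = new CountDownLatch(1);

        try (SubmissionPublisher<Integer> publisher = new SubmissionPublisher<>()) {
            publisher.subscribe(new Flow.Subscriber<Integer>() {
                private Flow.Subscription subscription;

                @Override public void onSubscribe(Flow.Subscription s) {
                    subscription = s;
                    s.request(1);                // initial demand: one element
                }
                @Override public void onNext(Integer item) {
                    received.add(item);          // consume the element ...
                    subscription.request(1);     // ... then ask for the next one
                }
                @Override public void onError(Throwable t) { done.countDown(); }
                @Override public void onComplete() { done.countDown(); }
            });
            for (int i = 1; i <= 5; i++) publisher.submit(i);
        }
        done.await();
        System.out.println(received);            // [1, 2, 3, 4, 5]
    }
}
```

Note that the unit of demand here is a logical element, which is exactly the abstraction TCP does not have.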

From the above, we can see that in the WebFlux implementation, back pressure is adjusted through transport-level flow control, but the receiver's actual demand is not exposed. We can see the interaction flow in the following figure:

The figure above shows the communication between two microservices, where the left side sends a data stream and the right side consumes the stream. Here is a brief explanation of the whole process:

  1. In WebFlux, logical object elements are converted into a byte stream and transferred to the TCP network, or a byte stream received from the TCP network is converted back into logical object elements.
  2. Processing an element takes some time, and the next element is requested only once processing of the current one is complete.
  3. Although there is no demand from the business logic yet, WebFlux still queues the bytes arriving from the TCP network, which involves the back pressure policies we covered in the earlier Reactor chapter.
  4. Due to TCP's own data flow control, service A can still send data to the network.
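Steps 3 and 4 above can be sketched with a bounded queue standing in for the TCP send window: the sender keeps writing as long as the window accepts bytes, regardless of the receiver's logical demand. The window size and element names are illustrative, not WebFlux internals.

```java
import java.nio.charset.StandardCharsets;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Sketch: a bounded byte buffer stands in for the TCP window. Service A
// encodes logical elements to bytes and writes for as long as the "network"
// accepts them; service B decodes and consumes at its own pace.
public class TcpBufferSketch {
    public static void main(String[] args) throws Exception {
        BlockingQueue<byte[]> network = new ArrayBlockingQueue<>(4); // TCP-window stand-in

        // Service A keeps offering elements; the writes succeed until the
        // window is full, no matter how many elements B has logically requested.
        int sent = 0;
        for (int i = 1; i <= 10; i++) {
            byte[] frame = ("element-" + i).getBytes(StandardCharsets.UTF_8);
            if (network.offer(frame)) sent++;            // non-blocking write attempt
        }
        System.out.println("accepted by network: " + sent); // 4, bounded by window size

        // Service B consumes one logical element at a time.
        String first = new String(network.take(), StandardCharsets.UTF_8);
        System.out.println("consumed: " + first);        // element-1
    }
}
```

The byte-level window fills independently of element-level demand, which is exactly why the receiver's true demand is not visible to the sender.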

As we can see from the figure, the receiver's demand (the logical elements of the Request in the figure) differs from the sender's, meaning the two demands are independent of each other. That is to say, in WebFlux we can express demand through business-logic (service) interaction, but few details of the back pressure between service A and service B are exposed. In other words, the back pressure design in WebFlux does not make the data-sending service truly demand-driven, which may not be as perfect or as fair as we would like.

Custom back pressure control

If we want to control back pressure in a simple way, we can control the number of requests through the relevant Reactor operators, or set a limit when defining subscribers. Here, we call limitRate(n) on a Flux. publishOn acts as an intermediate store that separates upstream from downstream and manages the number of downstream requests, with a limit on how many elements it requests from upstream at a time. For details on the publishOn source code, please review the previous section (since the book is not yet published, please review the videos I shared). That is, we just need a thin API wrapper on top of publishOn:

//reactor.core.publisher.Flux#limitRate(int)
public final Flux<T> limitRate(int prefetchRate) {
    return onAssembly(this.publishOn(Schedulers.immediate(), prefetchRate));
}

If we have a source producing questions and want to limit its flow because of our limited capacity to process them, we can do the following:

@PostMapping("/questions")
public Mono<Void> postAllQuestions(Flux<Question> questionsFlux) {

    return questionService.process(questionsFlux.limitRate(10))
                       .then();
}

Now that we are familiar with publishOn, we know that the limitRate() operator first fetches 10 elements from upstream and stores them in its internal queue. This means that even if our subscriber requests Long.MAX_VALUE elements, the limitRate operator will split that demand into chunks when requesting upstream. The relevant source is as follows; we can compare it to confirm:

//reactor.core.publisher.FluxPublishOn.PublishOnSubscriber#runAsync
if (e == limit) {
    if (r != Long.MAX_VALUE) {
        r = REQUESTED.addAndGet(this, -e);
    }
    s.request(e);
    e = 0L;
}
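The counting in that snippet can be replayed in plain Java. Assuming Reactor's default replenish point of 75% of the prefetch (limit = prefetch - (prefetch >> 2)), a limitRate(10) over 25 elements issues one initial request of 10 and then re-requests 8 at a time upstream, regardless of how much the downstream asked for. This is a dependency-free sketch of the counting logic, not Reactor code:

```java
import java.util.ArrayList;
import java.util.List;

// Plain-Java replay of the `e == limit` replenish logic shown above,
// assuming the default 75% replenish point. Purely illustrative.
public class ChunkedDemandSketch {
    public static void main(String[] args) {
        int prefetch = 10;
        int limit = prefetch - (prefetch >> 2);       // 8: the assumed replenish point
        List<Integer> upstreamRequests = new ArrayList<>();

        upstreamRequests.add(prefetch);               // initial request on subscribe
        int e = 0;                                    // elements consumed since the last request
        for (int consumed = 1; consumed <= 25; consumed++) {
            e++;
            if (e == limit) {                         // mirrors `if (e == limit)` in runAsync
                upstreamRequests.add(limit);          // mirrors s.request(e)
                e = 0;
            }
        }
        System.out.println(upstreamRequests);         // [10, 8, 8, 8]
    }
}
```

So the upstream never sees a request larger than the prefetch, no matter how greedy the final subscriber is.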

The above handles submitted data in chunks. Sometimes we also deal with database requests, such as queries, and the data sent back can likewise be delivered gradually by limiting the rate:

@GetMapping("/questions")
public Flux<Question> getAllQuestions() {

    return questionService.retrieveAll()
                       .limitRate(10);
}

With this, we can also understand the back pressure mechanism in WebFlux. Features like these are hard for Spring MVC to provide.

Summary

I believe you have now clearly felt the benefits of using Spring WebFlux, understood why Servlet 3.1+ is required, and gained a clearer understanding of the role of back pressure in WebFlux. Note, however, that according to the official documentation, Spring WebFlux can run on a Servlet container or on Netty, and this book focuses on Spring WebFlux running on a Netty server. So next, we'll get into the inner details of reactor-netty.