This article is part of a series about channels in .NET. It's best to start with Part 1, but you can jump anywhere you like using the links below. This series is my translation of the original; it is a free translation rather than a literal one, so if your English is good you may prefer the original. Feel free to reproduce this translation, but please credit the source!

  • Part 1 — Getting started with channels
  • Part 2 — Advanced channels
  • Part 3 — Understanding back pressure

1. Review

The last article covered unbounded queues, read/write separation, thread safety, elegantly looping over channel data, and getting a signal when a channel is closed. And of course, you remember how we create channels:

var myChannel = Channel.CreateUnbounded<int>();

“Well, is there a bounded queue too?”

“Yes, you’ve guessed it again!”

var myChannel = Channel.CreateBounded<int>(1000);

Isn't this very similar to creating other collection types with a limited capacity, such as lists or arrays?

In our example, we created a channel that can hold at most 1,000 items.
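To see the capacity limit in action, here is a minimal sketch (the variable names and the tiny capacity of 3 are mine, chosen so the channel overflows quickly):

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Channels;

// A bounded channel with capacity 3, small enough to overflow quickly.
var channel = Channel.CreateBounded<int>(3);

var results = new List<bool>();
for (int i = 0; i < 5; i++)
{
    // With the default FullMode (Wait), TryWrite succeeds only while
    // there is free capacity, and returns false once the channel is full.
    results.Add(channel.Writer.TryWrite(i));
}
Console.WriteLine(string.Join(", ", results));  // True, True, True, False, False
```

The first three writes fit; the last two are rejected because the channel is already at capacity.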

Well, why would we limit ourselves?

Hmm... that's where back pressure comes in.

2. What is back pressure?

In computing (especially in messaging and queuing), back pressure is the idea that the resources a system depends on, whether memory, RAM, network capacity, or the rate limits of an external API, are finite. So we should be able to apply "pressure" back up the chain to shed some of the load, or at the very least let others in the ecosystem know that we are under load and may need some time to process their requests.

Of course, wherever back pressure comes up, flow control is usually mentioned as well: back pressure is only one of several solutions to flow control.

It's like an elementary-school math problem: a sink with an inlet pipe and an outlet pipe. If water flows in faster than it flows out, the sink will eventually overflow. That's what happens when there is no flow control.

So what approaches are there to flow control?

  1. Back pressure: the producer produces only as much as the consumer asks for. This is similar to flow control in TCP, where the receiver controls the receive rate through its own receive window, and the ACK packets it sends back regulate the sender's send rate. This scheme only works for Cold Observables: sources that tolerate sending slowly, such as transferring a file from one machine to another, which can drop to a few bytes per second as long as you are willing to wait long enough. The counter-example is live audio/video streaming, where the whole feature breaks once the rate drops below a certain value (that is the nature of a Hot Observable).
  2. Throttling: in plain English, discarding. If you can't consume everything, keep some of it and throw away the rest. There are different strategies for what to keep and what to discard, namely sample (or throttleLast), throttleFirst, and debounce (or throttleWithTimeout). In live audio/video streaming, for example, packets must be discarded when the downstream can't keep up.
  3. Packing (Buffer and Window): Buffer and Window are essentially the same idea, differing only in output format. Both pack many small items from upstream into larger packages before handing them downstream, which reduces the number of items the downstream has to process.
  4. Call-stack blocking: a special case, because it only works when the entire call chain executes synchronously on one thread, which requires that no intermediate operator starts a new thread. In everyday use this is relatively rare, because we often switch execution threads using subscribeOn or observeOn, and some complex operators start new threads internally themselves. That said, if a fully synchronous call chain does occur, approaches (1), (2), and (3) might still apply, but blocking is simpler and needs no additional support.

To compare (1) with (4): (4) is like many cars driving along a winding mountain road with only one lane. The first car at the front blocks the whole road, and the cars behind have no choice but to queue up. (1) is like taking a number at a bank: a window actively calls a number (the equivalent of a request), and only then does that customer step up to be served.
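Strategy (3), packing, can be sketched with channels too. Here is a hypothetical helper of my own (ReadBatchAsync is not part of the channels API) that drains many small items into one batch before handing it downstream:

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Channels;
using System.Threading.Tasks;

// Pack upstream items into a batch of up to maxBatch before returning.
static async Task<List<int>> ReadBatchAsync(ChannelReader<int> reader, int maxBatch)
{
    var batch = new List<int>();
    // Wait until at least one item is available (false means the channel completed).
    if (await reader.WaitToReadAsync())
    {
        // Then drain up to maxBatch items without waiting again.
        while (batch.Count < maxBatch && reader.TryRead(out var item))
            batch.Add(item);
    }
    return batch;
}

var channel = Channel.CreateUnbounded<int>();
for (int i = 0; i < 10; i++) channel.Writer.TryWrite(i);

var firstBatch = await ReadBatchAsync(channel.Reader, 4);
Console.WriteLine(string.Join(", ", firstBatch));  // 0, 1, 2, 3
```

The downstream now processes one list of four items instead of four separate items.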

So what do .NET channels do about this?

3. Channel back pressure options

When using channels, we actually have a very simple way to increase back pressure. The code looks like this:

var channelOptions = new BoundedChannelOptions(5)
{
    FullMode = BoundedChannelFullMode.Wait
};

var myChannel = Channel.CreateBounded<int>(channelOptions);

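With Wait mode configured like this, a full channel simply leaves the writer pending. A minimal sketch (capacity and values are my own, not from the article):

```csharp
using System;
using System.Threading.Channels;

var options = new BoundedChannelOptions(2)
{
    FullMode = BoundedChannelFullMode.Wait
};
var channel = Channel.CreateBounded<int>(options);

// Fill the channel to its capacity of 2.
await channel.Writer.WriteAsync(1);
await channel.Writer.WriteAsync(2);

// The channel is full, so this write cannot complete yet.
var pendingWrite = channel.Writer.WriteAsync(3);
Console.WriteLine($"Write pending? {!pendingWrite.IsCompleted}");  // True

// The consumer reads one item, freeing a slot; the pending write then finishes.
int first = await channel.Reader.ReadAsync();
await pendingWrite;
```

This is back pressure in its purest form: the producer is slowed down to the consumer's pace.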
FullMode has the following options:

  • Wait

The publisher simply waits until its WriteAsync() call can complete.

  • DropNewest/DropOldest

Discard the newest/oldest messages to make room for the message being written.

  • DropWrite

Discard the message being written, since it won't fit in anyway.
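Here is a quick sketch of DropOldest (capacity and values are mine): five writes into a channel with room for three, after which only the newest three survive.

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Channels;

var options = new BoundedChannelOptions(3)
{
    FullMode = BoundedChannelFullMode.DropOldest
};
var channel = Channel.CreateBounded<int>(options);

// Every TryWrite "succeeds" in this mode; once the channel is full,
// each new write silently evicts the oldest buffered item.
for (int i = 1; i <= 5; i++)
    channel.Writer.TryWrite(i);

channel.Writer.Complete();

var survivors = new List<int>();
await foreach (var item in channel.Reader.ReadAllAsync())
    survivors.Add(item);
Console.WriteLine(string.Join(", ", survivors));  // 3, 4, 5 (1 and 2 were dropped)
```

DropNewest behaves the same way, except that each overflowing write evicts the most recently buffered item instead.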

4. Two more methods worth noting

There are two more methods worth a closer look:

await myChannel.Writer.WaitToWriteAsync();

This lets us "wait" until the channel's size limit allows a write.

For example, when a channel is full, this waits until space becomes available again. That means that even with FullMode set to DropWrite, we can limit how many messages get discarded by simply waiting for capacity before writing.
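A common producer pattern built on this method looks like the sketch below (the ProduceAsync helper name is my own):

```csharp
using System;
using System.Threading.Channels;
using System.Threading.Tasks;

// Instead of dropping messages, wait until the channel reports
// that writing may succeed.
static async Task<bool> ProduceAsync(ChannelWriter<int> writer, int value)
{
    // WaitToWriteAsync returns true when space may be available,
    // and false once the channel has been completed.
    while (await writer.WaitToWriteAsync())
    {
        if (writer.TryWrite(value))
            return true;          // written successfully
        // Another producer took the free slot first; wait again.
    }
    return false;                 // channel completed, nothing can be written
}

var channel = Channel.CreateBounded<int>(100);
bool ok = await ProduceAsync(channel.Writer, 7);
Console.WriteLine(ok);  // True
```

The loop matters because WaitToWriteAsync is only a hint: with multiple producers, another writer may grab the freed slot before this one does.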

Here's another snippet:

var success = myChannel.Writer.TryWrite(i);

This lets us attempt a write to the queue and returns success or failure.

Important note: This method is not asynchronous.

Either we can write to the channel or we can't. There is no "hmm... if you wait a little longer, you might be able to."
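A quick sketch of that synchronous behavior (capacity and values are mine):

```csharp
using System;
using System.Threading.Channels;

// TryWrite answers immediately; no waiting is involved.
var channel = Channel.CreateBounded<int>(1);

bool first = channel.Writer.TryWrite(42);   // there was room
bool second = channel.Writer.TryWrite(43);  // the channel is full

// The caller can react synchronously, e.g. log and drop the message,
// or fall back to an awaited WriteAsync when waiting is acceptable.
Console.WriteLine($"{first}, {second}");  // True, False
```

That immediate yes/no answer is exactly what makes TryWrite useful on hot paths where blocking or awaiting is not an option.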

5. Summary

I chose this topic so that we can get familiar with it together and make progress together! Follow me and you won't get lost. I'm Net Dust.