Recently it has become common to see live-streaming platforms punish certain streamers by restricting their traffic: the platform gives them less exposure, or even none at all. This article is about the back-end equivalent, the rate-limiting penalty that a service imposes on its callers, which is a bit similar to what happens to those streamers, but also a bit different.

What is the rate-limiting penalty discussed in this article?

If a caller invokes a service more times than the service allows, that is, it hits the rate-limiting threshold, the service not only returns a rate-limiting error to the caller but also imposes an additional penalty, such as blocking that caller's access to the service for 10 seconds.

Why impose an additional penalty on top of rate limiting?

The logic here is that a caller gets rate limited because it is not using the service reasonably (of course, what counts as reasonable is declared by the service provider). It is the provider's responsibility to push the caller to adjust how it uses the service, and a penalty is one way to do that. A penalty for improper use can noticeably affect the caller's own business, forcing the caller to think about improving how it accesses the service.

One cause of unreasonable invocation is a bug in the caller's program, such as an infinite loop that keeps hitting the service. In this case a rate-limiting penalty is needed to keep the server from being overwhelmed and overloaded by a large number of meaningless requests.

There are also design considerations. For example, if frequent queries of user information trigger the penalty, the service provider's goal may be to push the caller to cache user information instead of querying the service every time, which puts great pressure on it.

How to implement a rate-limiting penalty?

A natural approach is to flag the caller when rate limiting is triggered and give the flag an expiration time. If the caller calls again before the flag expires, an error is returned immediately; after it expires, normal rate-limiting counting resumes. This also further reduces the cost of rate limiting on the server, because a locked caller is rejected before any counting work is done.
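
To make the idea concrete, here is a minimal self-contained sketch in C#. It is not FireflySoft.RateLimit's code: the class name, its members, and the simple fixed-window counter are all made up for illustration.

using System;
using System.Collections.Concurrent;

// A made-up limiter that demonstrates the "flag with an expiration time" idea:
// while a caller's lock flag has not expired, every call is rejected without
// touching the counter; once it expires, normal counting resumes.
public class PenalizingLimiter
{
    private readonly int _limit;          // allowed calls per window
    private readonly TimeSpan _window;    // counting window
    private readonly TimeSpan _lockTime;  // penalty duration after a breach

    private readonly ConcurrentDictionary<string, DateTimeOffset> _locks = new();
    private readonly ConcurrentDictionary<string, (DateTimeOffset Start, int Count)> _counters = new();

    public PenalizingLimiter(int limit, TimeSpan window, TimeSpan lockTime)
    {
        _limit = limit;
        _window = window;
        _lockTime = lockTime;
    }

    public bool ShouldLimit(string caller)
    {
        var now = DateTimeOffset.UtcNow;

        // 1. Penalty check first: a locked caller is rejected immediately.
        if (_locks.TryGetValue(caller, out var lockedUntil) && lockedUntil > now)
        {
            return true;
        }

        // 2. Normal fixed-window counting.
        var entry = _counters.AddOrUpdate(
            caller,
            _ => (now, 1),
            (_, old) => now - old.Start >= _window ? (now, 1) : (old.Start, old.Count + 1));

        // 3. On a breach, store the lock flag with its expiration time (the penalty).
        if (entry.Count > _limit)
        {
            _locks[caller] = now.Add(_lockTime);
            return true;
        }

        return false;
    }
}

A real implementation would typically store the flag in a cache that supports expiration, as the library does below, instead of checking timestamps by hand.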

Let’s take a look at the implementation. In FireflySoft.RateLimit, both in-process rate limiting and Redis rate limiting support the rate-limiting penalty.

Define the rate-limiting penalty

Because every rate-limiting algorithm may need to support the penalty, a field is defined in the base class of the rate-limiting rule: LockSeconds, the number of seconds the caller is locked out after rate limiting is triggered, that is, how long the caller cannot continue to access the service.

public abstract class RateLimitRule
{
    /// <summary>
    /// The number of seconds locked after triggering rate limiting. 0 means not locked.
    /// </summary>
    public int LockSeconds { get; set; }
}
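
As a quick illustration of how this field might be set when configuring a rule, here is a hypothetical example. Only LockSeconds comes from the base class above; ExampleFixedWindowRule and its other properties are made up and are not the library's actual rule types.

using System;

// ExampleFixedWindowRule is a made-up rule type for illustration; only
// LockSeconds comes from the RateLimitRule base class shown above.
var rule = new ExampleFixedWindowRule
{
    LimitNumber = 100,                     // at most 100 calls... (illustrative)
    StatWindow = TimeSpan.FromSeconds(1),  // ...per 1-second window (illustrative)
    LockSeconds = 10                       // then reject this caller for 10 seconds
};

Console.WriteLine($"Penalty after a breach: {rule.LockSeconds} seconds");

public class ExampleFixedWindowRule : RateLimitRule
{
    public int LimitNumber { get; set; }
    public TimeSpan StatWindow { get; set; }
}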

Trigger the rate-limiting penalty

The following uses in-process rate limiting as an example. After rate limiting is triggered, a cache entry is added for the caller, and its expiration time is set using the LockSeconds defined above.

protected bool TryLock(string target, DateTimeOffset currentTime, TimeSpan expireTimeSpan)
{
    // Add a lock flag for this caller; the cache entry expires after LockSeconds.
    var expireTime = currentTime.Add(expireTimeSpan);
    return _cache.Add($"{target}-lock", 1, expireTime);
}
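
For context, here is roughly where a call like this fits in. This fragment is illustrative only, not the library's actual code: the method name, its parameters, and the count it receives are assumptions standing in for whatever the concrete algorithm uses.

// Illustrative fragment (not the library's actual code): roughly where TryLock
// is invoked once the counting step reports the current count for a caller.
protected bool HandleCountResult(string target, long currentCount, long limitNumber, int lockSeconds)
{
    if (currentCount <= limitNumber)
    {
        return false; // under the limit: allow the call
    }

    if (lockSeconds > 0)
    {
        // The limit was breached and a penalty is configured: lock the caller
        // so later calls are rejected immediately until the entry expires.
        TryLock(target, DateTimeOffset.UtcNow, TimeSpan.FromSeconds(lockSeconds));
    }

    return true; // limited
}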

The principle of the Redis-based penalty is similar, except that the flag is written as a Redis key-value entry. Those of you who want to know more can check it out here.
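
For those curious what that might look like, here is a rough sketch of the same lock flag stored in Redis, written with the StackExchange.Redis client. This is not the library's actual Redis implementation, just the underlying idea of a key with a TTL.

using System;
using StackExchange.Redis;

// A sketch of the lock flag stored in Redis instead of process memory:
// SET the key only if it does not exist yet, with an expiry of LockSeconds,
// and treat the key's existence as "this caller is currently locked".
public class RedisLockSketch
{
    private readonly IDatabase _db;

    public RedisLockSketch(IConnectionMultiplexer redis)
    {
        _db = redis.GetDatabase();
    }

    public bool TryLock(string target, TimeSpan lockTime)
    {
        // Equivalent to: SET {target}-lock 1 EX <seconds> NX
        return _db.StringSet($"{target}-lock", 1, expiry: lockTime, when: When.NotExists);
    }

    public bool CheckLocked(string target)
    {
        // The key disappears automatically when the lock expires.
        return _db.KeyExists($"{target}-lock");
    }
}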

Apply the rate-limiting penalty

Again taking in-process rate limiting as the example: before each rate-limiting count, check whether a penalty cache entry exists for the caller. If it does, an error is returned directly.

protected bool CheckLocked(string target)
{
    // The caller is locked as long as its lock flag is still in the cache.
    return _cache.Get($"{target}-lock") != null;
}

Well, that’s the main content of this article.

If you are a little bit interested in FireflySoft.RateLimit, please visit my GitHub: github.com/bosima/Fire… .
