Some time ago, I used JWT to implement email verification codes and user authentication/login, and wrote an article to sum it up.

That article touched on the question of how to rate limit.

For SMS and email verification codes, if you do not rate limit, a malicious attack can generate a huge QPS, which not only drags the service down but also burns money like running water. To guard against that, I added rate limiting to my email service.

For rate limiting there are two well-known algorithms, the leaky bucket algorithm and the token bucket algorithm. Here is a brief introduction to both, followed by how they are applied to my email API.

Below is the email API, limited to two requests per minute. You can experiment by substituting your own email address, or try it directly on my site.

curl 'https://graphql.xiange.tech/graphql' -H 'Content-Type: application/json' --data-binary '{"query":"mutation SEND($email: String!) {\n  sendEmailVerifyCode (email: $email)\n}","variables":{"email":"[email protected]"}}'

The following is my series of articles on login practices

  1. Implementing a Material Design login style
  2. Login: using JWT for login and verification codes
  3. Email sending, rate limiting, leaky buckets and token buckets

shanyue.tech/post/rate-l…

Leaky Bucket algorithm

In the leaky bucket algorithm, water droplets (requests) first flow into the bucket, and the bucket leaks water at a constant rate. When the bucket is full, no more water can be added, i.e. new requests are rejected.

  • Maintain a counter as the bucket, capped at the bucket size
  • Reject requests when the counter is full
  • Reset the counter at fixed intervals

Use option to describe the limit: at most option.max requests may be made within each option.window time window.

The following is pseudocode for rate limiting with a Redis counter:

const option = {
  max: 10,        // at most 10 requests per window
  window: 1000    // window size: 1s
}

function access(req) {
  // Generate a unique key for the requester
  const key = identity(req)
  // Increment the counter
  const counter = redis.incr(key)
  if (counter === 1) {
    // First request in the current window: set the expiry
    // (PEXPIRE takes milliseconds, EXPIRE takes seconds)
    redis.pexpire(key, option.window)
  }
  if (counter > option.max) {
    return false
  }
  return true
}

redis.io/commands/IN…
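For reference, here is a runnable version of the same counter. It is only a sketch: it assumes the ioredis client and a hypothetical identity() helper that keys requests by IP, not necessarily how the API above is actually implemented.

const Redis = require('ioredis')
const redis = new Redis()

const option = {
  max: 10,        // at most 10 requests per window
  window: 1000    // window size in milliseconds: 1s
}

// Hypothetical helper: key each requester by IP
function identity(req) {
  return `rate-limit:${req.ip}`
}

async function access(req) {
  const key = identity(req)
  // Increment the counter for this requester
  const counter = await redis.incr(key)
  if (counter === 1) {
    // First request in the window: start the window by setting the expiry in milliseconds
    await redis.pexpire(key, option.window)
  }
  return counter <= option.max
}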

One problem with this approach is that it is not a true sliding window, where one drop leaks out as another drops in; it is a fixed window that gets emptied all at once. Because of this, a client can send max requests near the end of one fixed window and another max requests right after it resets, so an arbitrary window of the same length may see nearly 2 * max requests. For example, with max = 10 and a 1 s window, 10 requests just before the reset and 10 more just after it all pass, putting about 20 requests inside a fraction of a second.

In addition, Redis INCR and EXPIRE are not atomic as a pair, which easily leads to a race condition; this can be solved with SET ... NX.

redis.set(key, 0, 'PX', option.window, 'NX')  // PX: expiry in milliseconds; NX: only set if the key does not exist
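Put together, the check could look like this (same pseudocode style as above); since NX turns the SET into a no-op when the key already exists, the counter and its expiry are created together:

function access(req) {
  const key = identity(req)
  // Create the counter together with its expiry; NX skips the SET if the key already exists
  redis.set(key, 0, 'PX', option.window, 'NX')
  const counter = redis.incr(key)
  return counter <= option.max
}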

It can also be done with a Lua script, though SET ... NX is clearly simpler:

local current
current = redis.call("incr", KEYS[1])
if tonumber(current) == 1 then
    redis.call("expire", KEYS[1], 1)
end
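The script can be sent to Redis with EVAL. Below is a sketch assuming the ioredis client from the earlier example, with a return current added so the caller can read the counter:

// Run the Lua script above atomically on the Redis server via EVAL
const counterScript = `
local current
current = redis.call("incr", KEYS[1])
if tonumber(current) == 1 then
    redis.call("expire", KEYS[1], 1)
end
return current`

async function access(req) {
  const key = identity(req)
  const counter = await redis.eval(counterScript, 1, key)
  return counter <= option.max
}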

To solve this near-2 * max problem, you can switch from maintaining a counter to maintaining a queue of request timestamps. The cost is a higher memory footprint, and the race condition becomes harder to solve.

Here is the rate limiting implemented with Redis GET/SET on a string:

const option = {
  max: 10,        // at most 10 requests per window
  window: 1000    // window size: 1s
}

function access(req) {
  // Generate a unique key for the requester
  const key = identity(req)
  const current = Date.now()
  // Redis is used here as a simple cache
  // Each request is recorded as a timestamp; keep only those inside the current window
  // For simplicity, JSON serialization/deserialization is omitted...
  const timestamps = [current].concat(redis.get(key)).filter(ts => ts + option.window > current)
  if (timestamps.length > option.max) {
    return false
  }
  // If the read and the write are not synchronized, a race condition occurs here
  redis.set(key, timestamps, 'PX', option.window)
  return true
}

Here is another Lua script, this time to solve the race condition:

TODO
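One possible shape for such a script, sketched here with the ioredis client and the option/identity helpers from the earlier example (an assumption, not necessarily the intended implementation): keep the timestamps in a sorted set and do the trim, count and insert inside a single Lua script, so there is no gap between the read and the write.

// Atomic sliding-window check: timestamps live in a sorted set, and trimming,
// counting and inserting all happen inside one Lua script
const slidingWindowScript = `
-- KEYS[1]: requester key; ARGV[1]: window in ms; ARGV[2]: max; ARGV[3]: now in ms
redis.call("zremrangebyscore", KEYS[1], 0, ARGV[3] - ARGV[1])
if redis.call("zcard", KEYS[1]) >= tonumber(ARGV[2]) then
  return 0
end
-- the timestamp doubles as the member, so requests in the same millisecond collapse into one entry
redis.call("zadd", KEYS[1], ARGV[3], ARGV[3])
redis.call("pexpire", KEYS[1], ARGV[1])
return 1`

async function access(req) {
  const key = identity(req)
  const allowed = await redis.eval(slidingWindowScript, 1, key, option.window, option.max, Date.now())
  return allowed === 1
}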

Token Bucket algorithm

Take a look at the differences between the token bucket and the leaky bucket:

  1. The token bucket starts full, while the leaky bucket starts empty
  2. The token bucket rejects new requests when the bucket is empty, while the leaky bucket rejects them when the bucket is full
  3. When a request comes in (assuming each request consumes one token), the token bucket removes one token, while the leaky bucket adds one drop of water

The token bucket is implemented using Redis below

TODO
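A minimal sketch of one way to do it, again assuming the ioredis client and identity() helper from the earlier example: keep the bucket state (remaining tokens and the time of the last refill) in a hash, refill lazily based on the elapsed time, and run the whole check in one Lua script for atomicity.

// Token bucket sketch: the bucket starts full, is refilled lazily over time,
// and each request consumes one token
const tokenBucketScript = `
-- KEYS[1]: bucket key; ARGV[1]: capacity; ARGV[2]: refill rate (tokens per ms); ARGV[3]: now in ms
local capacity = tonumber(ARGV[1])
local rate = tonumber(ARGV[2])
local now = tonumber(ARGV[3])
local bucket = redis.call("hmget", KEYS[1], "tokens", "last")
-- the bucket starts full
local tokens = tonumber(bucket[1]) or capacity
local last = tonumber(bucket[2]) or now
-- refill according to the elapsed time, capped at the capacity
tokens = math.min(capacity, tokens + (now - last) * rate)
local allowed = 0
if tokens >= 1 then
  tokens = tokens - 1
  allowed = 1
end
redis.call("hmset", KEYS[1], "tokens", tokens, "last", now)
-- drop idle buckets after a minute
redis.call("pexpire", KEYS[1], 60000)
return allowed`

async function acquire(req) {
  const key = identity(req)
  // e.g. a capacity of 10 tokens, refilled at 10 tokens per second (0.01 tokens per ms)
  const allowed = await redis.eval(tokenBucketScript, 1, key, 10, 0.01, Date.now())
  return allowed === 1
}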