Optimizing concurrent writes of real-time data with Redis

How do you update data quickly under concurrent requests without forcing a lock? Here is an approach that has proven popular in production.

Background

In the current architecture, concurrent request data is written to a queue, and a separate asynchronous thread processes the data serially. The advantage of this approach is that you don't have to worry about concurrency at all; the downside is equally obvious: every update waits in line behind a single serial consumer, so latency grows with queue depth and throughput is capped by that one thread.

Concurrent data updates with optimistic locking

The service currently updates data at roughly second-level intervals, so the key collision rate is low. The author therefore chose a CAS (compare-and-swap) optimistic-locking scheme: a Lua script updates the data in Redis atomically, and even under concurrency the performance is a level better than serialized processing. The flow of the CAS update is: read the current version, pass it to the script, and write the new value only if the version stored in Redis still matches (initializing the version when the key does not yet exist).

The Lua script is designed according to that flow:

local keys, values = KEYS, ARGV
local version = redis.call('GET', keys[1])

-- First write: no version stored yet and the caller passed an empty version.
-- Initialize the version to 1 and store the value.
if values[1] == '' and version == false
then
	redis.call('SET', keys[1], '1')
	redis.call('SET', keys[2], values[2])
	return 1
end

-- CAS: write only if the stored version still matches the one the caller read.
if version == values[1]
then
	redis.call('SET', keys[2], values[2])
	redis.call('INCR', keys[1])
	return 1
else
	return 0
end

Possible problems and their solutions

1. In a highly contended environment with a high probability of concurrent conflicts, a CAS that keeps failing causes the caller to retry indefinitely, burning CPU. One way to solve this is to introduce an exit mechanism, such as giving up once the number of retries exceeds a threshold. For example:

func main() {
	// Retry the CAS update at most 10 times before giving up.
	for i := 0; i < 10; i++ {
		isRetry := execLuaScript()
		if !isRetry {
			break
		}
	}
}

func execLuaScript() bool {
	ctx := context.Background()
	r := client.GetRedisKVClient(ctx)
	defer r.Close()

	luaScript := `local keys,values=KEYS,ARGV
local version = redis.call('GET',keys[1])
if values[1] == '' and version == false
then
	redis.call('SET',keys[1],'1')
	redis.call('SET',keys[2],values[2])
	return 1
end
if version == values[1]
then
	redis.call('SET',keys[2],values[2])
	redis.call('INCR',keys[1])
	return 1
else
	return 0
end`

	// Read the current version; it is passed back to the script for the CAS check.
	casVersion, err := r.Get("test_version")

	kvs := make([]redis.KeyAndValue, 0)
	kvs = append(kvs, redis.KeyAndValue{"test_version", casVersion.String()})
	kvs = append(kvs, redis.KeyAndValue{"test", "123123123"})
	mv, err := r.Eval(luaScript, kvs...)
	if err != nil {
		log.Errorf("%v", err)
	}

	val, _ := mv.Int64()
	log.Debugf(">>>>>> Lua script run result: %d", val)
	if val == 1 {
		// The Lua script succeeded; no retry needed.
		return false
	}
	// CAS failed (version mismatch); retry.
	return true
}

2. All keys touched by a Lua script must live on the same machine, so in a Redis cluster the related keys must be assigned to the same node. Redis executes commands single-threaded on each node; if the keys referenced by a Lua script lived on different machines, atomicity could not be guaranteed, which is why Redis Cluster rejects scripts whose keys map to different hash slots.

The solution follows from how sharding works: data sharding is a hashing process. A hash function such as MD5 or SHA1 (Redis Cluster itself uses CRC16 of the key modulo 16384 hash slots) is applied to the key, and the key is routed to a machine according to the hash value.

To route two keys to the same machine, they need the same hash value, which normally means the same key (you could change the hash algorithm instead, but that is more complicated). Identical keys are unrealistic, because each key serves a different purpose. In our business, however, we can arrange for keys to share a common part. So can we hash only that part of the key? The answer is yes:

That's the Hash Tag. It allows a substring of the key to be used when computing the hash. When a key contains {}, Redis does not hash the entire key, only the string inside the {}. Assuming the hash algorithm is SHA1, user:{user1}:ids and user:{user1}:tweets both hash to sha1(user1), so they land on the same machine.

Summary

The code refactoring for the optimization described above is complete, but it has not yet gone live. We will follow up with the measured performance improvement once it is launched.
