Cache consistency issues

When data has strong freshness requirements, the data in the cache must be kept consistent with the data in the database, and the data on cache nodes and their replicas must also be kept consistent. This depends largely on the cache's expiration and update strategies: typically, when the data changes, the cached entry is actively updated or removed.
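
A minimal cache-aside sketch of this idea, with hypothetical CacheClient and Database interfaces standing in for a real cache client (e.g. a memcached or Redis client) and data access layer: on writes, the database is updated first and the cached copy is then removed; on reads, a miss falls back to the database and repopulates the cache with a TTL.

    interface CacheClient {
        String get(String key);
        void set(String key, String value, int ttlSeconds);
        void delete(String key);
    }

    interface Database {
        String query(String id);
        void update(String id, String payload);
    }

    public class UserService {
        private final CacheClient cache;
        private final Database db;

        public UserService(CacheClient cache, Database db) {
            this.cache = cache;
            this.db = db;
        }

        // Write path: update the database first, then actively remove the stale copy.
        public void updateUser(String id, String payload) {
            db.update(id, payload);
            cache.delete("user:" + id);
        }

        // Read path: a miss falls back to the database and repopulates the cache.
        public String getUser(String id) {
            String value = cache.get("user:" + id);
            if (value == null) {
                value = db.query(id);
                cache.set("user:" + id, value, 300); // TTL bounds how stale a copy can get
            }
            return value;
        }
    }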

Cache concurrency issues

Falling back to the back-end database after the cache expires is a plausible approach. In a high-concurrency scenario, however, multiple requests may all fetch the data from the database at once, putting great pressure on the back end and possibly even triggering an "avalanche". In addition, while a cache key is being updated it may also be read by a large number of requests, which can lead to consistency problems. So how do we avoid such problems? We can use a "lock"-like mechanism: when the cache entry is being updated or has expired, a request first tries to acquire the lock and releases it once the update, or the read from the database, is complete; the other requests only sacrifice a certain amount of waiting time and can then continue to fetch the data directly from the cache.
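
A minimal sketch of this locking idea using an in-process, per-key lock; in a distributed deployment the same pattern is usually built on a distributed lock (e.g. one based on Redis SETNX), which is not shown here. The in-memory map and the dbLoader function are illustrative stand-ins:

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.locks.ReentrantLock;
    import java.util.function.Function;

    public class MutexCache {
        private final Map<String, String> cache = new ConcurrentHashMap<>();
        // One lock per key; a production version would also evict unused locks.
        private final Map<String, ReentrantLock> locks = new ConcurrentHashMap<>();

        public String get(String key, Function<String, String> dbLoader) {
            String value = cache.get(key);
            if (value != null) return value;               // hit: no lock needed

            ReentrantLock lock = locks.computeIfAbsent(key, k -> new ReentrantLock());
            lock.lock();                                   // only one rebuilder per key
            try {
                value = cache.get(key);                    // double-check: another thread
                if (value == null) {                       // may have rebuilt while we waited
                    value = dbLoader.apply(key);           // single trip to the database
                    if (value != null) cache.put(key, value);
                }
                return value;
            } finally {
                lock.unlock();
            }
        }
    }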

Cache penetration problem

Cache penetration is also called "breakdown" in some places. Many people understand cache penetration like this: a large number of requests reach the back-end database because the cache failed or expired, putting a huge load on the database.

This is actually a misunderstanding. True cache penetration looks like this:

In a high-concurrency scenario, when a heavily accessed key misses the cache, the system falls back to the back-end database for fault tolerance, so a large number of requests reach the database. When the data behind that key is itself empty, the cache can never be populated, and the database ends up concurrently executing a large number of pointless queries, causing tremendous impact and pressure.

Cache penetration can be avoided in a few common ways:

1. Cache empty objects

Cache objects whose query results are empty as well. If the result is a collection, an empty (non-null) collection can be cached; if it is a single object, it can be distinguished by a field identifier. This stops such requests from penetrating through to the back-end database. At the same time, the timeliness of this cached data must be ensured. This approach is cheaper to implement and suits data with a low hit rate that may be updated frequently.
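
A minimal sketch, assuming an in-memory map standing in for the real cache and a stubbed database call: empty results are stored under a sentinel value with a deliberately short TTL.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    public class NullCachingLookup {
        private static final String EMPTY = "\u0000EMPTY\u0000"; // sentinel for "no data"
        private record Entry(String value, long expiresAt) {}
        private final Map<String, Entry> cache = new ConcurrentHashMap<>();

        public String get(String key) {
            Entry e = cache.get(key);
            if (e != null && e.expiresAt() > System.currentTimeMillis()) {
                return EMPTY.equals(e.value()) ? null : e.value(); // served from cache
            }
            String v = queryDatabase(key);                          // stand-in DB call
            long ttlMs = (v == null) ? 60_000 : 600_000;            // short TTL for empties
            cache.put(key, new Entry(v == null ? EMPTY : v,
                    System.currentTimeMillis() + ttlMs));
            return v;
        }

        private String queryDatabase(String key) { return null; }  // stub for illustration
    }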

2. Filter them separately

Store all keys that may correspond to empty data in one place and intercept matching requests before they are made, preventing them from penetrating to the back-end database. This approach is more complex to implement and suits data with a low hit rate that is not updated frequently.
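
One common concrete realization of this interception step is a Bloom filter built from all keys that actually exist; a hand-rolled sketch follows, with illustrative (untuned) sizes. A key the filter reports as definitely absent never reaches the database, while false positives simply fall through to the normal lookup path.

    import java.util.BitSet;

    public class ExistingKeyFilter {
        private static final int BITS = 1 << 20;       // ~1M bits, illustrative
        private static final int HASH_COUNT = 3;
        private final BitSet bits = new BitSet(BITS);

        // Register every key that actually exists (e.g. at startup or on insert).
        public void add(String key) {
            for (int i = 0; i < HASH_COUNT; i++) bits.set(indexFor(key, i));
        }

        // false => key certainly absent: reject before touching the database.
        // true  => key may exist (false positives fall through to the normal path).
        public boolean mightContain(String key) {
            for (int i = 0; i < HASH_COUNT; i++) {
                if (!bits.get(indexFor(key, i))) return false;
            }
            return true;
        }

        private static int indexFor(String key, int seed) {
            int h = key.hashCode() * 31 + seed * 0x9E3779B9;
            return (h & 0x7fffffff) % BITS;
        }
    }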


Cache thrashing problem

Cache thrashing, sometimes referred to as "cache jitter", can be considered a milder failure than an "avalanche", but it still hits the system and degrades performance for a period of time. It is usually caused by a cache node failure. The commonly recommended remedy is the consistent hashing algorithm; I will not elaborate too much here, see the other chapters for details.
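
For quick reference, though, a compact sketch of a consistent hash ring with virtual replicas; the replica count and hash function are illustrative, and it assumes at least one node has been added. Each key maps to the first node clockwise from its hash, so adding or removing one node only remaps a small slice of keys.

    import java.util.SortedMap;
    import java.util.TreeMap;

    public class ConsistentHashRing {
        private static final int VIRTUAL_REPLICAS = 160; // illustrative replica count
        private final SortedMap<Integer, String> ring = new TreeMap<>();

        public void addNode(String node) {
            for (int i = 0; i < VIRTUAL_REPLICAS; i++)
                ring.put(hash(node + "#" + i), node);
        }

        public void removeNode(String node) {
            for (int i = 0; i < VIRTUAL_REPLICAS; i++)
                ring.remove(hash(node + "#" + i));
        }

        // First node at or after the key's hash, wrapping around the ring.
        public String nodeFor(String key) {
            SortedMap<Integer, String> tail = ring.tailMap(hash(key));
            return tail.isEmpty() ? ring.get(ring.firstKey()) : tail.get(tail.firstKey());
        }

        private static int hash(String s) {
            return s.hashCode() & 0x7fffffff; // stand-in for a real hash (e.g. MurmurHash)
        }
    }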

Cache avalanche

A cache avalanche means that, because the cache layer fails, a large number of requests reach the back-end database, and the database collapses, the system crashes, and disaster follows. There are many possible causes. The "cache concurrency", "cache penetration", and "cache thrashing" problems above can all lead to a cache avalanche, and they can also be exploited by malicious attackers. An avalanche can likewise occur when, for example, caches preloaded by the system all expire in the cluster at the same point in time. To avoid this kind of periodic invalidation, expiration times can be staggered by setting different TTLs, so the whole cache cluster never becomes invalid at once.
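
A minimal sketch of staggered expiration: each write gets a base TTL plus random jitter, so entries preloaded together do not all expire at the same instant. The base/jitter split is illustrative:

    import java.util.concurrent.ThreadLocalRandom;

    public class JitteredTtl {
        // Returns a TTL between base and base + 20%, spreading out expirations.
        public static int ttlSeconds(int baseSeconds) {
            int jitter = ThreadLocalRandom.current().nextInt(baseSeconds / 5 + 1);
            return baseSeconds + jitter;
        }
        // Usage: cache.set(key, value, JitteredTtl.ttlSeconds(300));
    }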

From the application architecture perspective, we can mitigate the impact with rate limiting, degradation, and circuit breaking, or avoid such a disaster altogether with multi-level caching.
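
As a sketch of the rate-limiting leg of this, a semaphore can cap concurrent trips to the database so overload degrades into fast-failing requests rather than a crashed back end; degradation and circuit breaking build on the same idea with richer state. The permit count and stubbed query are illustrative:

    import java.util.concurrent.Semaphore;

    public class DbGate {
        private final Semaphore permits = new Semaphore(100); // illustrative cap

        public String query(String key) {
            if (!permits.tryAcquire()) {
                return null; // degrade: fail fast / serve a default instead of hitting the DB
            }
            try {
                return doDatabaseQuery(key); // the protected back-end call
            } finally {
                permits.release();
            }
        }

        private String doDatabaseQuery(String key) { return "value"; } // stub for illustration
    }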

In addition, from the perspective of the whole R&D process, we should strengthen stress testing, simulate real scenarios as closely as possible, and expose problems as early as possible so they can be prevented.

Cache bottomless pit phenomenon

This problem was raised by the folks at Facebook, who had around 3,000 memcached nodes in about 2010, caching thousands of gigabytes of content.

They found that the frequency of memcached connections was dragging efficiency down, so they added more memcached nodes. After adding them, they found the connection-frequency problem was still there and not getting any better, and they called this the "bottomless pit".

Today's mainstream databases, caches, NoSQL stores, search middleware, and similar technology stacks all support "sharding" to meet requirements such as high performance, high concurrency, high availability, and scalability. Some map keys to different instances on the client side by taking a hash modulo the instance count (or by consistent hashing); some map ranges of key values on the client side; and some, of course, do the mapping on the server side. However, each operation may require network communication with a different node to complete, so the more instances there are, the greater the overhead and the bigger the impact on performance.
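
A sketch of the simplest client-side mapping mentioned above, hash modulo the instance count; note that changing the number of nodes remaps most keys, which is exactly the weakness consistent hashing addresses:

    public class ModuloSharding {
        private final String[] nodes; // e.g. {"cache-0:11211", "cache-1:11211", ...}

        public ModuloSharding(String[] nodes) { this.nodes = nodes; }

        // Key -> instance by hash modulo the number of instances.
        public String nodeFor(String key) {
            int h = key.hashCode() & 0x7fffffff;
            return nodes[h % nodes.length];
        }
    }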


The problem can be avoided and optimized from the following aspects:

1. Data distribution mode

Some business data may suit hash distribution, while other data may suit range distribution, avoiding network IO overhead to a certain extent.
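
A sketch of range distribution for contrast, assuming the registered ranges cover the whole key space: ordered key ranges map to nodes, so a scan over adjacent keys stays on one node instead of scattering across instances as hashing would.

    import java.util.TreeMap;

    public class RangeSharding {
        private final TreeMap<String, String> rangeStarts = new TreeMap<>();

        // Register the node owning all keys from startKeyInclusive upward.
        public void addRange(String startKeyInclusive, String node) {
            rangeStarts.put(startKeyInclusive, node);
        }

        // The node whose range start is the greatest one <= key
        // (assumes a range covering the minimum key was registered).
        public String nodeFor(String key) {
            return rangeStarts.floorEntry(key).getValue();
        }
    }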

2. IO optimization

Make full use of connection pooling, NIO, and similar techniques to minimize connection overhead and improve concurrent connection capacity.
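
As one concrete illustration, assuming the Jedis client for Redis as an example, a connection pool lets requests borrow warm connections instead of paying the connect cost each time; the pool sizes are illustrative:

    import redis.clients.jedis.Jedis;
    import redis.clients.jedis.JedisPool;
    import redis.clients.jedis.JedisPoolConfig;

    public class PooledAccess {
        public static void main(String[] args) {
            JedisPoolConfig config = new JedisPoolConfig();
            config.setMaxTotal(64); // cap on concurrent connections (illustrative)
            config.setMaxIdle(16);  // keep warm connections around between requests
            try (JedisPool pool = new JedisPool(config, "localhost", 6379)) {
                try (Jedis jedis = pool.getResource()) { // borrow, don't reconnect
                    System.out.println(jedis.get("user:42"));
                } // connection is returned to the pool here, not closed
            }
        }
    }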

3. Data access mode

Fetching a large data set in a single request costs less than fetching the same data as small data sets over several requests.
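
A sketch of the difference, with a hypothetical bulk-capable cache client (real clients expose similar operations, e.g. memcached's getMulti or Redis's MGET): the batched version pays one network round trip instead of N.

    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // Hypothetical bulk-capable cache client interface.
    interface BulkCacheClient {
        String get(String key);
        Map<String, String> getMulti(List<String> keys);
    }

    public class BatchFetch {
        // N round trips: pays per-request network latency N times.
        static Map<String, String> oneByOne(BulkCacheClient c, List<String> keys) {
            Map<String, String> out = new HashMap<>();
            for (String k : keys) out.put(k, c.get(k));
            return out;
        }

        // One round trip for the whole key set: same data, far less network IO.
        static Map<String, String> batched(BulkCacheClient c, List<String> keys) {
            return c.getMulti(keys);
        }
    }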

Of course, the bottomless pit phenomenon is not common; in the vast majority of companies you probably won't come across it at all.

Reference: https://blog.csdn.net/dinglang_2009/article/details/53464196
