Caching is one of the most important techniques in architectural design. It is technically simple, has a particularly significant effect on performance, and is used almost everywhere. Three key factors determine the effectiveness, efficiency, and implementation of a cache:

1. The size of the cache key set

2. The size of the cache space

3. The lifetime of cached objects

This article takes about five minutes to read and will help you improve your cache hit ratio.

0. What is the cache hit ratio?

The defining usage pattern of a cache is to write data once and read it many times. Data written to the cache once can serve many subsequent requests, reducing database load and improving performance. Whether a cache is effective therefore depends on whether data written once is actually read many times in response to business requests. The metric for this is the cache hit ratio: the number of queries answered from the cache divided by the total number of queries. For example, if nine out of ten queries get the correct result from the cache, the cache hit ratio is 90%.
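As a minimal sketch (the class and counter names below are illustrative, not from any particular cache library), a cache wrapper can track its own hit ratio:

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of a cache that tracks its own hit ratio.
// All names here are illustrative, not from a real library.
public class CountingCache<K, V> {
    private final Map<K, V> store = new HashMap<>();
    private long hits = 0;
    private long total = 0;

    public V get(K key) {
        total++;
        V value = store.get(key);
        if (value != null) {
            hits++;          // served from the cache
        }
        return value;        // null means a miss; the caller loads from the database
    }

    public void put(K key, V value) {
        store.put(key, value);
    }

    // hit ratio = cache hits / total lookups
    public double hitRatio() {
        return total == 0 ? 0.0 : (double) hits / total;
    }
}
```

If nine of ten calls to get return a cached value, hitRatio() reports 0.9, i.e. 90%.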

There are three main factors affecting the cache hit ratio: the size of the cache key set, the size of the available memory space, and the lifetime of cached objects.

1. Cache key set size

Each object in the cache is identified by its cache key. For example, in a key-value pair where the key is the string "abc" and the value is the string "hello", "abc" is the cache key. The key is the unique identifier of an object in the cache, and the only way to locate an object is by an exact match on its cache key.

For example, if we want to cache product information for each item in an online store, we use the item ID as the cache key. The cache key space is the set of all keys your application can generate. Statistically, the more unique keys an application generates, the lower the chance that any one of them is reused. Caching weather data by IP address, for example, could require more than four billion keys; caching weather data by country needs only a couple of hundred keys, since there are only about two hundred countries in the world.

Therefore, keep the number of cache keys as small as possible: the fewer the keys, the more efficient the cache. When designing a cache, pay attention to how the cache key is constructed, and limit the key space to the smallest set the application can use effectively; that is when cache performance is best.
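To make the weather example concrete, here is a sketch of two hypothetical key builders (the prefixes and method names are made up for illustration); only the choice of key component changes the size of the key space:

```java
// Hypothetical key builders for the weather example.
// Keying by IP can produce billions of distinct keys;
// keying by country keeps the key space to a few hundred.
public class WeatherCacheKeys {
    // ~4 billion possible keys: almost no reuse, poor hit ratio
    static String keyByIp(String clientIp) {
        return "weather:ip:" + clientIp;
    }

    // a couple of hundred possible keys: heavy reuse, high hit ratio
    static String keyByCountry(String countryCode) {
        return "weather:country:" + countryCode;
    }

    public static void main(String[] args) {
        System.out.println(keyByIp("203.0.113.7"));   // weather:ip:203.0.113.7
        System.out.println(keyByCountry("CN"));       // weather:country:CN
    }
}
```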

2. Size of the available cache space

The amount of memory available to the cache determines the average size and the number of objects it can hold. Because caches are typically stored in memory, which is relatively expensive, the space available for cached objects is strictly limited.

If the cache is full and you want to cache more objects, old objects must be evicted before new ones can be added, and each eviction can lower the cache hit ratio. So, all else being equal, the larger the cache space, the more objects it can hold and the higher the cache hit ratio.

3. Lifetime of cached objects

The lifetime of a cached object is called its TTL (time to live).

The longer an object stays in the cache, the more likely it is to be reused. There are two ways a cached object is invalidated:

1) Timeout expiration

With timeout expiration, a timeout period is set for each object when it is written to the cache. Accessing the object before the timeout returns the cached data; once the object expires, accessing it again returns null.
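A minimal sketch of this behavior, assuming nothing beyond the Java standard library (the class and method names are illustrative, not from a real cache product):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Minimal TTL cache sketch: each entry stores its expiry timestamp,
// and reads past that deadline return null, as if the entry were gone.
public class TtlCache<K, V> {
    private record Entry<V>(V value, long expiresAtMillis) {}

    private final Map<K, Entry<V>> store = new ConcurrentHashMap<>();

    public void put(K key, V value, long ttlMillis) {
        store.put(key, new Entry<>(value, System.currentTimeMillis() + ttlMillis));
    }

    public V get(K key) {
        Entry<V> e = store.get(key);
        if (e == null) return null;
        if (System.currentTimeMillis() >= e.expiresAtMillis()) {
            store.remove(key);   // expired: drop the stale entry
            return null;
        }
        return e.value();
    }
}
```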

2) Real-time invalidation

Real-time invalidation means that when a cached object is updated, the cache is notified immediately to clear the stale entry. The next time the application accesses that key, it finds nothing in the cache and has to read from the database, where it gets the latest data, because updates are always written to the database.
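A sketch of this update-then-invalidate flow, with hypothetical Database and Cache interfaces standing in for real components:

```java
// Update-then-invalidate sketch: write the database first, then clear
// the stale cache entry so the next read repopulates it with fresh data.
public class ProductService {
    interface Database { void updateProduct(String id, String data); String loadProduct(String id); }
    interface Cache { String get(String key); void put(String key, String value); void delete(String key); }

    private final Database db;
    private final Cache cache;

    ProductService(Database db, Cache cache) { this.db = db; this.cache = cache; }

    public void updateProduct(String id, String data) {
        db.updateProduct(id, data);      // updates always go to the database
        cache.delete("product:" + id);   // then clear the stale cached copy
    }

    public String getProduct(String id) {
        String key = "product:" + id;
        String cached = cache.get(key);
        if (cached != null) return cached;     // cache hit
        String fresh = db.loadProduct(id);     // miss: read from the database
        cache.put(key, fresh);                 // repopulate the cache
        return fresh;
    }
}
```

Deleting the entry, rather than updating it in place, keeps the logic simple: the next read repopulates the cache with whatever the database holds.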

The other case is when a new object needs to be written to the cache but the memory space is insufficient. Then some old objects must be evicted to make room for the new one.

The main algorithm used to free cache space is LRU (Least Recently Used), which evicts the objects that have gone the longest without being accessed. It is typically implemented with a linked list: all cached objects sit on one list, and whenever an object is accessed it is moved to the head of the list. When space must be reclaimed, eviction starts from the tail, because the objects at the tail are the ones that have not been accessed for the longest time; they are cleared first, freeing memory for new objects to join.
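The article describes the linked-list implementation; in Java the same behavior can be sketched with LinkedHashMap, which maintains exactly such an access-ordered list internally:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// LRU cache sketch built on LinkedHashMap's access-order mode:
// every get/put moves the entry toward the "recently used" end,
// and the eldest (least recently used) entry is evicted once the
// capacity is exceeded.
public class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    public LruCache(int capacity) {
        super(16, 0.75f, true);   // accessOrder = true enables LRU ordering
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity; // evict the least recently used entry
    }
}
```

With a capacity of 2, putting a and b, reading a, then putting c evicts b, the least recently used entry.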

These three factors are the key determinants of the cache hit ratio. Once you have mastered them, you will have a deeper understanding of caching.

The above content is excerpted from the Lagou course "The Architecture of Ali Predecessors, Lecture 02 (1): Distributed Caching".

Speaker: Li Zhihui, former technical expert at Alibaba and author of Technical Architecture of Large Websites.
