Envy not the mandarin ducks, envy not the immortals: one line of code can take half a day to debug. Original: Taste of Little Sister (WeChat official account: xjjdog). Sharing is welcome; please keep the attribution.

I want to cache my memories so I can recognize you when I see you again.

I can wax philosophical like that because I understand the value of caching. Without caching, my life would have no meaning.

Caching is hugely important; a large share of everyday work leans on it. And because it is used so widely, any optimization that makes a caching system even a sliver faster is exciting.

I’ve been using Guava’s LoadingCache for a long time. It is very similar to ConcurrentHashMap, but it layers good eviction strategies and concurrency optimizations on top, which makes it much easier to use.


Today’s protagonist is Caffeine. Its namesake, caffeine, is a substance that gets people excited; the library is a rewrite of Guava’s cache, and a very efficient one.

Below is a performance test of Caffeine. As you can see, it is far ahead of GuavaCache. Why?

(Figure: Caffeine vs. GuavaCache benchmark results.)

Let’s start with the author. Ben Manes (github.com/ben-manes) wrote ConcurrentLinkedHashMap, the class that in turn became the basis of GuavaCache. Then, on a whim, he decided to aim even higher.

Why is Caffeine good?

The moment Caffeine appeared, GuavaCache was out.

Unlike GuavaCache’s synchronous mode, Caffeine supports asynchronous loading: it returns a CompletableFuture directly instead of blocking while the data loads (see the code later in this article). Its programming model is also friendlier and eliminates a lot of repetitive work.

GuavaCache is based on LRU, while Caffeine combines LRU and LFU, taking the best of both. If you are not familiar with these two algorithms, see xjjdog’s earlier article, “Three In-heap Caching Algorithms: Source Code and Design Ideas”.

Combining the two produces a new algorithm, W-TinyLFU, which achieves a very high hit ratio with a smaller memory footprint. That is the main reason for Caffeine’s lead; a sketch of the idea follows.
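To make the idea concrete, here is a minimal, hypothetical sketch of a TinyLFU-style admission decision built on a count-min frequency estimate. It only illustrates the concept; Caffeine’s real implementation uses a compact 4-bit CountMin sketch plus a small LRU “window” segment (the W in W-TinyLFU), and periodically halves the counters so stale frequencies decay.

public class TinyLfuSketchDemo {

    // Count-min style sketch: several rows of counters, indexed by
    // different hashes of the key. An illustration, not Caffeine's code.
    static class FrequencySketch {
        private final int[][] rows = new int[4][1024];
        private static final int MASK = 1023; // table size is a power of two

        void recordAccess(Object key) {
            int h = key.hashCode();
            for (int i = 0; i < rows.length; i++) {
                rows[i][index(h, i)]++;
            }
        }

        // The estimate is the minimum over all rows, which bounds
        // over-counting caused by hash collisions.
        int frequency(Object key) {
            int h = key.hashCode(), min = Integer.MAX_VALUE;
            for (int i = 0; i < rows.length; i++) {
                min = Math.min(min, rows[i][index(h, i)]);
            }
            return min;
        }

        private static int index(int h, int seed) {
            h ^= (seed + 1) * 0x9E3779B9;
            h ^= h >>> 16;
            return (h * 0x85EBCA6B) & MASK;
        }
    }

    // TinyLFU admission: when the cache is full, a new key replaces the
    // eviction victim only if it has historically been used more often.
    static boolean admit(FrequencySketch sketch, Object candidate, Object victim) {
        return sketch.frequency(candidate) > sketch.frequency(victim);
    }

    public static void main(String[] args) {
        FrequencySketch sketch = new FrequencySketch();
        for (int i = 0; i < 5; i++) sketch.recordAccess("hot");
        sketch.recordAccess("cold");
        System.out.println(admit(sketch, "hot", "cold")); // true
        System.out.println(admit(sketch, "cold", "hot")); // false
    }
}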

Another reason Caffeine is relatively fast is that many of its operations are asynchronous: it submits these events to queues whose design draws on the ring buffer of LMAX Disruptor, a name that has become synonymous with lock-free high concurrency.

Test hit ratio

We decided to test it with production data. In fact, I had already migrated most of the important caches from Guava to Caffeine.

Because their APIs look so much alike, the migration was painless; no anesthetic required. The comparison below shows just how close they are.
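Here is a hedged illustration of the API parity; the loadUser function is made up for the demo and is not code from the actual migration.

import com.github.benmanes.caffeine.cache.Caffeine;
import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import java.util.concurrent.TimeUnit;

public class MigrationDemo {

    static String loadUser(String id) { // hypothetical slow lookup
        return "user:" + id;
    }

    public static void main(String[] args) throws Exception {
        // Guava
        com.google.common.cache.LoadingCache<String, String> guavaCache =
                CacheBuilder.newBuilder()
                        .maximumSize(10_000)
                        .expireAfterWrite(5, TimeUnit.MINUTES)
                        .build(new CacheLoader<String, String>() {
                            @Override public String load(String id) {
                                return loadUser(id);
                            }
                        });

        // Caffeine: same shape, different builder, plain lambda loader
        com.github.benmanes.caffeine.cache.LoadingCache<String, String> caffeineCache =
                Caffeine.newBuilder()
                        .maximumSize(10_000)
                        .expireAfterWrite(5, TimeUnit.MINUTES)
                        .build(id -> loadUser(id));

        System.out.println(guavaCache.get("1"));    // Guava's get() throws a checked exception
        System.out.println(caffeineCache.get("1")); // Caffeine's does not
    }
}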

One of our services keeps a large in-heap cache of user data: username, gender, address, credits and other attributes, forming a JSON object under 1 KB. Via a grayscale (canary) release, we measured the actual hit ratio under different strategies.

Strategy 1

  • Cache at most 10,000 users
  • Entries expire 5 minutes after being written (and must then be reloaded); a builder sketch for this strategy follows the results

Hit ratio:

  • Caffeine: 29.22%
  • Guava: 21.95%
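For reference, a minimal sketch of how Strategy 1 might be declared with Caffeine; the cache and loader names are made up, and recordStats() is what exposes the hit ratio we quote. Strategies 2 and 3 below only change the size and expiry numbers.

import com.github.benmanes.caffeine.cache.Caffeine;
import com.github.benmanes.caffeine.cache.LoadingCache;
import java.util.concurrent.TimeUnit;

// Strategy 1: at most 10,000 users; entries expire 5 minutes after write.
LoadingCache<String, String> userCache = Caffeine.newBuilder()
        .maximumSize(10_000)
        .expireAfterWrite(5, TimeUnit.MINUTES)
        .recordStats()                         // enables stats()
        .build(id -> loadUserJsonFromDb(id));  // hypothetical loader

// The hit ratios reported in this section come from:
double hitRate = userCache.stats().hitRate();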

Strategy 2

  • Increase the cache to 60,000 users
  • Entries expire 20 minutes after being written, roughly the lifetime of a session

Hit ratio (a bit better now):

  • Caffeine: 56.04%
  • Guava: 50.01%

Strategy 3

  • Raise the cache to 150,000 users
  • Entries expire 30 minutes after being written

Hit ratio at this time:

  • Caffeine: 71.10%
  • Guava: 62.76%

Caffeine led on hit ratio across the board. A higher hit ratio means higher efficiency, and once you get above 50%, the cache is doing a great deal of the work.

Asynchronous loading

Here are three of the official benchmark charts:

(1) Read (75%) / Write (25%) (Figure: throughput comparison)

(2) Write (100%) (Figure: throughput comparison)

(3) Read (100%) (Figure: throughput comparison)

Back to asynchronous loading in Caffeine: what does the code look like? The asynchronous loading cache follows a reactive programming model and returns a CompletableFuture. To be honest, the code looks a lot like Guava’s.

import com.github.benmanes.caffeine.cache.AsyncLoadingCache;
import com.github.benmanes.caffeine.cache.Caffeine;
import java.util.concurrent.CompletableFuture;

public static void main(String[] args) throws Exception {
    // Build an asynchronous cache; values load on a background executor.
    AsyncLoadingCache<String, String> asyncLoadingCache = Caffeine.newBuilder()
            .maximumSize(1000)
            .buildAsync(key -> slowMethod(key));

    // get() returns a CompletableFuture immediately, without blocking.
    CompletableFuture<String> g = asyncLoadingCache.get("test");
    String value = g.get(); // block here only to show the result
    System.out.println(value);
}

static String slowMethod(String key) throws Exception {
    Thread.sleep(1000);
    return key + ".result";
}
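Blocking on g.get() right away defeats the purpose of asynchrony; in real code you would compose on the future instead. A small sketch (the thread-pool size here is just an assumption for the demo):

// Attach a callback; it runs whenever the background load completes.
asyncLoadingCache.get("test").thenAccept(value ->
        System.out.println("loaded: " + value));

// Caffeine also accepts your own executor for the loads
// (import java.util.concurrent.Executors):
AsyncLoadingCache<String, String> custom = Caffeine.newBuilder()
        .maximumSize(1000)
        .executor(Executors.newFixedThreadPool(4))
        .buildAsync(key -> slowMethod(key));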

I remember browsing the Spring source code a while back and noticing this. (Figure: Caffeine support in Spring’s cache module.) In Spring Boot, simply providing a CacheManager bean integrates Caffeine with Spring Boot’s cache support (spring-boot-starter-cache); very convenient.

The critical code:

import org.springframework.cache.CacheManager;
import org.springframework.cache.caffeine.CaffeineCacheManager;
import com.github.benmanes.caffeine.cache.Caffeine;

// Generate the CacheManager bean
@Bean("caffeineCacheManager")
public CacheManager cacheManager() {
    CaffeineCacheManager cacheManager = new CaffeineCacheManager();
    cacheManager.setCaffeine(Caffeine.newBuilder().maximumSize(1000));
    return cacheManager;
}

// Bind a class's caches to the manager above
@CacheConfig(cacheManager = "caffeineCacheManager")

// Cache a method's result, keyed by its id parameter
@Cacheable(key = "#id")
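For context, here is a minimal, hypothetical service showing where those annotations live; the class, cache name, and method are made up, and @EnableCaching must be present on some configuration class.

import org.springframework.cache.annotation.CacheConfig;
import org.springframework.cache.annotation.Cacheable;
import org.springframework.stereotype.Service;

@Service
@CacheConfig(cacheNames = "users", cacheManager = "caffeineCacheManager")
public class UserService {

    // The first call for an id runs the method body; subsequent calls
    // are served from the Caffeine-backed "users" cache.
    @Cacheable(key = "#id")
    public String findUserJson(String id) {
        return "{\"id\":\"" + id + "\"}"; // hypothetical slow lookup
    }
}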

So many technical frameworks; when will it all end?

Welcome to star: github.com/xjjdog/bcma… It covers complex ToB business, high-concurrency Internet business, and cache applications; DDD and microservices guidance; model-driven and data-driven design; the evolution of large-scale services; coding skills; Linux; performance tuning; Docker/K8s, monitoring, log collection, and middleware; front-end technology, back-end practice, and more. Main stack: SpringBoot + JPA + Mybatis-plus + Antd + Vue3.

xjjdog is a WeChat public account that doesn’t let programmers take detours. It focuses on infrastructure and Linux. Ten years of architecture, tens of billions of daily requests: come explore the world of high concurrency with a different flavor. My personal WeChat is xjjdog0; feel free to add me as a friend for further discussion.
