I want to cache my memories so I can recognize you when I see you again.

I can say this philosophically because I understand and value caching. Without caching, my life has no meaning.

Caching is extremely important, and most of our work leans on it. Because it is so widely used, any optimization that shaves even a fraction of a second off a caching system is exciting.

I’ve been using Guava’s LoadingCache for a long time. It is very similar to ConcurrentHashMap, but it encapsulates good eviction strategies and concurrency optimizations on top of it, making it much easier to use.
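As a minimal sketch of what that looks like (the key format and the stub loader below are illustrative, not from any real service):

```java
import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;
import java.util.concurrent.TimeUnit;

public class GuavaDemo {
    // A LoadingCache behaves like a ConcurrentHashMap that can load
    // missing entries itself and evict old ones automatically.
    static LoadingCache<String, String> buildCache() {
        return CacheBuilder.newBuilder()
                .maximumSize(1000)                      // bounded size, LRU-style eviction
                .expireAfterWrite(5, TimeUnit.MINUTES)  // time-based eviction
                .build(new CacheLoader<String, String>() {
                    @Override
                    public String load(String key) {
                        return "value-" + key;          // invoked only on a cache miss
                    }
                });
    }

    public static void main(String[] args) {
        LoadingCache<String, String> cache = buildCache();
        System.out.println(cache.getUnchecked("user:1")); // value-user:1
    }
}
```

The caller just asks for a key; the cache decides whether to serve it from memory or invoke the loader.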

Today’s topic is Caffeine. Caffeine, the stimulant, is a substance that gets people wired; the library, essentially a rewrite of Guava’s cache, is every bit as energetic and very efficient.

Below are Caffeine’s benchmark results, and you can see for yourself: it is way ahead of GuavaCache. Why is that?

Let’s start with its author. His GitHub is github.com/ben-manes. He wrote ConcurrentLinkedHashMap, which in turn became the basis of GuavaCache. Then Ben Manes slapped his forehead and decided to aim higher.

Why is Caffeine good?

Once Caffeine hits, GuavaCache is OUT.

Caffeine supports asynchronous loading: it returns a CompletableFuture directly instead of blocking while data loads, unlike GuavaCache’s synchronous mode. Its programming model is also friendlier and eliminates a lot of repetitive work.

GuavaCache is based on LRU, while Caffeine is based on LRU and LFU, combining the best of both.

Combining the two yields a new algorithm, W-TinyLFU, which achieves a very high hit ratio with a smaller memory footprint. This is the main reason.
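Caffeine’s real implementation tracks frequencies in a compact CountMin-style sketch; as a loose conceptual toy (emphatically not Caffeine’s code), the TinyLFU admission idea can be sketched like this: a new entry only displaces an eviction candidate if it has been accessed more often.

```java
import java.util.HashMap;
import java.util.Map;

// Toy illustration of TinyLFU-style admission. NOT Caffeine's real code,
// which uses a compact frequency sketch instead of a HashMap.
public class TinyLfuToy {
    private final Map<String, Integer> freq = new HashMap<>();

    // Record one access to a key.
    public void recordAccess(String key) {
        freq.merge(key, 1, Integer::sum);
    }

    // Admission check: the incoming candidate enters the cache only if
    // its estimated frequency beats the eviction victim's.
    public boolean admit(String candidate, String victim) {
        return freq.getOrDefault(candidate, 0) > freq.getOrDefault(victim, 0);
    }

    public static void main(String[] args) {
        TinyLfuToy sketch = new TinyLfuToy();
        sketch.recordAccess("hot");
        sketch.recordAccess("hot");
        sketch.recordAccess("cold");
        // "hot" was seen twice, "cold" once: "hot" may displace "cold"...
        System.out.println(sketch.admit("hot", "cold")); // true
        // ...but not the other way around.
        System.out.println(sketch.admit("cold", "hot")); // false
    }
}
```

This is what protects a hot working set (LFU’s strength) while the window portion of W-TinyLFU still lets recent entries in (LRU’s strength).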

Another reason Caffeine is relatively fast is that many operations are handled asynchronously: events are submitted to ring-buffer-like queues, a design inspired by the LMAX Disruptor, which has become synonymous with lock-free high concurrency.

Test hit ratio

We decided to test it with production data. As a matter of fact, I have already upgraded most of our important caches from Guava to Caffeine.

Because their APIs look so much alike, the process is painless; no anesthetic required.
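To show how alike they are, here is a side-by-side sketch: mostly the builder’s class name changes (the stub loaders below are placeholders, not our real code):

```java
import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.github.benmanes.caffeine.cache.Caffeine;
import java.util.concurrent.TimeUnit;

public class Migration {
    // Before: Guava
    static String viaGuava(String key) {
        com.google.common.cache.LoadingCache<String, String> cache =
                CacheBuilder.newBuilder()
                        .maximumSize(10_000)
                        .expireAfterWrite(5, TimeUnit.MINUTES)
                        .build(CacheLoader.from(k -> "v-" + k));
        return cache.getUnchecked(key);
    }

    // After: Caffeine -- same builder shape, different class names
    static String viaCaffeine(String key) {
        com.github.benmanes.caffeine.cache.LoadingCache<String, String> cache =
                Caffeine.newBuilder()
                        .maximumSize(10_000)
                        .expireAfterWrite(5, TimeUnit.MINUTES)
                        .build(k -> "v-" + k);
        return cache.get(key);
    }

    public static void main(String[] args) {
        System.out.println(viaGuava("a"));    // v-a
        System.out.println(viaCaffeine("a")); // v-a
    }
}
```

In most call sites the swap really is a matter of changing the import and the builder entry point.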

There’s a business service with a large in-heap cache of user data. Each entry contains attributes such as username, gender, address, and credits, forming a JSON object under 1 KB. Via a gray (canary) release, we measured the actual hit ratio under different strategies.

Strategy 1

  • Maximum cache size: 10,000 (1w) users
  • Data expires 5 minutes after entering the cache (then must be reloaded)

Hit ratio:

  • Caffeine is 29.22%
  • Guava 21.95%

Strategy 2

  • Increase the cache size to 60,000 (6w) users
  • Data expires 20 minutes after entering the cache, roughly matching a session’s lifetime

Hit rate (still better) :

  • Caffeine is 56.04%
  • Guava 50.01%

Strategy 3

  • Increase the cache directly to 150,000 (15w) users
  • Data expires 30 minutes after entering the cache

Hit ratio at this time:

  • Caffeine is 71.10%
  • Guava 62.76%

Caffeine led on hit ratio throughout. A higher hit ratio means higher efficiency; once the hit ratio climbs above 50%, the cache is really earning its keep.
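For reference, the strategies above map directly to builder configurations, and Caffeine can report its own hit ratio once stats recording is enabled (sizes and expiry mirror strategy 3; the key type and stub loader are assumptions):

```java
import com.github.benmanes.caffeine.cache.Caffeine;
import com.github.benmanes.caffeine.cache.LoadingCache;
import java.util.concurrent.TimeUnit;

public class HitRatioDemo {
    // Strategy 3: up to 150,000 users, entries expire 30 minutes after write.
    static LoadingCache<Long, String> buildUserCache() {
        return Caffeine.newBuilder()
                .maximumSize(150_000)
                .expireAfterWrite(30, TimeUnit.MINUTES)
                .recordStats()                   // enable hit/miss counters
                .build(id -> "user-json-" + id); // stand-in for the real loader
    }

    public static void main(String[] args) {
        LoadingCache<Long, String> users = buildUserCache();
        users.get(1L); // miss: triggers the loader
        users.get(1L); // hit: served from memory
        // One hit out of two lookups:
        System.out.println(users.stats().hitRate()); // 0.5
    }
}
```

In a gray release you can expose `stats().hitRate()` through your metrics system and compare strategies without any extra bookkeeping.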

Asynchronous loading

Here are three of the official benchmark charts (throughput tests; the chart images are omitted here):

(1) Read (75%) / Write (25%)

(2) Write (100%)

(3) Read (100%)

Back to asynchronous loading in Caffeine. So what does the code look like? An asynchronously loading cache follows the reactive programming model and returns CompletableFuture objects. To be honest, the code looks a lot like Guava’s.

    import com.github.benmanes.caffeine.cache.AsyncLoadingCache;
    import com.github.benmanes.caffeine.cache.Caffeine;
    import java.util.concurrent.CompletableFuture;

    public class CaffeineAsyncDemo {
        public static void main(String[] args) throws Exception {
            AsyncLoadingCache<String, String> asyncLoadingCache = Caffeine.newBuilder()
                    .maximumSize(1000)
                    .buildAsync(key -> slowMethod(key));

            // get() returns a CompletableFuture immediately;
            // we only block when we actually need the value.
            CompletableFuture<String> g = asyncLoadingCache.get("test");
            String value = g.get();
        }

        static String slowMethod(String key) throws Exception {
            Thread.sleep(1000); // simulate a slow load
            return key + ".result";
        }
    }

I remember looking through the Spring source code a while back and seeing Caffeine there.

In Spring Boot, you can integrate with spring-boot-cache simply by providing a CacheManager bean, which is, so to speak, very convenient.

The critical code:

    // Build the Caffeine-backed CacheManager
    @Bean("caffeineCacheManager")
    public CacheManager cacheManager() {
        CaffeineCacheManager cacheManager = new CaffeineCacheManager();
        cacheManager.setCaffeine(Caffeine.newBuilder()
                .maximumSize(1000));
        return cacheManager;
    }

    @CacheConfig(cacheNames = "caffeineCacheManager")
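With that CacheManager in place, a service method can be cached declaratively. The service class, cache name, and method below are illustrative, not from the original article:

```java
import org.springframework.cache.annotation.Cacheable;
import org.springframework.stereotype.Service;

@Service
public class UserService {

    // Results land in the Caffeine-backed cache named "users";
    // repeated calls with the same id skip the method body entirely.
    @Cacheable(value = "users", cacheManager = "caffeineCacheManager")
    public String loadUser(long id) {
        return "user-json-" + id; // stand-in for a slow lookup
    }
}
```

Eviction and expiry then follow whatever the Caffeine builder in the CacheManager configured, with no caching logic inside the business method.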

So many technical frameworks; when will it ever end?
