As you can see, Caffeine is currently the best in-memory caching framework for Java, with read and write throughput far higher than other caches. Starting with Spring 5, Caffeine has replaced Google Guava as the default cache implementation.

Basic usage

<!-- https://mvnrepository.com/artifact/com.github.ben-manes.caffeine/caffeine -->
<dependency>
    <groupId>com.github.ben-manes.caffeine</groupId>
    <artifactId>caffeine</artifactId>
    <version>3.0.3</version>
</dependency>

Manually creating a cache

        Cache<Object, Object> cache = Caffeine.newBuilder()
                // Initial capacity
                .initialCapacity(10)
                // Maximum number of entries
                .maximumSize(10)
                // PS: if both expireAfterWrite and expireAfterAccess are set, expireAfterWrite takes precedence
                // Expire a fixed duration after the last write
                .expireAfterWrite(1, TimeUnit.SECONDS)
                // Expire a fixed duration after the last read or write
                .expireAfterAccess(1, TimeUnit.SECONDS)
                // Listener invoked when an entry is removed
                .removalListener((key, val, removalCause) -> { })
                // Record hit statistics
                .recordStats()
                .build();

        cache.put("1", "Zhang");
        System.out.println(cache.getIfPresent("1"));
        System.out.println(cache.get("2", o -> "Default"));

Automatically loading the cache

LoadingCache<String, String> loadingCache = Caffeine.newBuilder()
        // Refresh entries a fixed interval after creation or the last update; refreshAfterWrite is only supported by LoadingCache
        .refreshAfterWrite(10, TimeUnit.SECONDS)
        .expireAfterWrite(10, TimeUnit.SECONDS)
        .expireAfterAccess(10, TimeUnit.SECONDS)
        .maximumSize(10)
        // Loader function, e.g. look the key up in the database
        .build(key -> new Date().toString());

Asynchronous cache

This relies on JDK 8's CompletableFuture

AsyncLoadingCache<String, String> asyncLoadingCache = Caffeine.newBuilder()
        // Refresh entries a fixed interval after creation or the last update; only supported by LoadingCache
        .refreshAfterWrite(1, TimeUnit.SECONDS)
        .expireAfterWrite(1, TimeUnit.SECONDS)
        .expireAfterAccess(1, TimeUnit.SECONDS)
        .maximumSize(10)
        // Loader function, e.g. look the key up in the database
        .buildAsync(key -> {
            Thread.sleep(1000);
            return new Date().toString();
        });

// The asynchronous cache returns a CompletableFuture
CompletableFuture<String> future = asyncLoadingCache.get("1");
future.thenAccept(System.out::println);


PS: you can customize the thread pool with .executor()

Record hit data

LoadingCache<String, String> cache = Caffeine.newBuilder()
        // Refresh entries a fixed interval after creation or the last update; refreshAfterWrite is only supported by LoadingCache
        .refreshAfterWrite(1, TimeUnit.SECONDS)
        .expireAfterWrite(1, TimeUnit.SECONDS)
        .expireAfterAccess(1, TimeUnit.SECONDS)
        .maximumSize(10)
        // Enable recording of cache statistics such as the hit ratio
        .recordStats()
        // Loader function, e.g. look the key up in the database
        .build(key -> {
            Thread.sleep(1000);
            return new Date().toString();
        });


cache.put("1", "Xiao Ming");
cache.get("1");

/*
 * hitCount: number of hits
 * missCount: number of misses
 * requestCount: number of requests
 * hitRate: hit ratio
 * missRate: miss ratio
 * loadSuccessCount: number of values loaded successfully
 * loadFailureCount: number of values that failed to load
 * totalLoadCount: total number of loads
 * loadFailureRate: ratio of failed loads
 * totalLoadTime: total load time
 * evictionCount: number of evicted entries
 */
System.out.println(cache.stats());

PS: recording statistics may affect performance, so you are advised not to enable it in a production environment
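For reference, the ratios reported by stats() are simple arithmetic over the raw counters. The following is only a hand-rolled illustration of that bookkeeping, not Caffeine's implementation:

```java
// Minimal hit/miss bookkeeping, mirroring how hitRate and missRate
// are derived from the raw counters (illustration only).
class SimpleStats {
    private long hitCount;
    private long missCount;

    void recordHit() { hitCount++; }
    void recordMiss() { missCount++; }

    long requestCount() { return hitCount + missCount; }

    double hitRate() {
        long requests = requestCount();
        // By convention, a cache that has seen no requests reports a hit rate of 1.0
        return requests == 0 ? 1.0 : (double) hitCount / requests;
    }

    double missRate() {
        long requests = requestCount();
        return requests == 0 ? 0.0 : (double) missCount / requests;
    }
}
```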

Eviction strategies

Let's first review some common eviction strategies:

  • LRU (Least Recently Used): evicts the page that has gone unused the longest.
  • LFU (Least Frequently Used): evicts the page accessed least often over a period of time.
  • FIFO (First In, First Out): evicts the page that entered the cache earliest.
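To make LRU concrete: the JDK's LinkedHashMap can implement it in a few lines when constructed in access order. This is a textbook sketch, unrelated to Caffeine's internals:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Textbook LRU: an access-ordered LinkedHashMap that drops the
// least recently used entry once capacity is exceeded.
class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    LruCache(int capacity) {
        super(16, 0.75f, true); // accessOrder = true: get() moves the entry to the tail
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity; // evict from the head, i.e. the least recently used entry
    }
}
```

Note how touching an entry with get() protects it from eviction: that is exactly both the burst-friendliness and the "cold entry lingers after a single access" weakness discussed below.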

Advantages of LRU: compared with LFU, LRU performs better because the algorithm is simpler, it does not need to record access frequency, and it copes better with bursts of traffic.

Drawbacks of LRU: although it performs better, it is limited in predicting the future from history: it assumes the most recently accessed data is the most likely to be accessed again and gives it the highest priority. Cold data that happens to be accessed once gets a high priority and can sit in the cache for a long time, wasting space.

Advantages of LFU: LFU evicts based on access frequency, and in most scenarios hot data is accessed far more often than cold data, so its hit ratio is very high.

Disadvantages of LFU: the algorithm is more complex and performs worse than LRU. It also handles the following situation badly. Take Weibo as an example: a while ago some topic was so hot it could briefly take the service down, so it was hurriedly cached, and LFU recorded an access frequency for that hot word possibly reaching the billions. Long after the topic has cooled and new hot topics have arrived, the new topics can never accumulate a billion-plus accesses, i.e. their frequency never catches up with the old topic's, so the stale hot topic keeps occupying cache space and cannot be evicted for a long time.

Caffeine uses the W-TinyLFU eviction algorithm, which combines LFU and LRU to achieve both a better hit ratio and better performance; for details see: www.cnblogs.com/zhaoxinshan…
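The core idea behind TinyLFU's fix for the stale-hot-key problem is frequency aging: counters are periodically halved, so an old billion-hit topic decays over time. The toy sketch below only illustrates that aging step; Caffeine's real implementation uses a Count-Min Sketch with 4-bit counters, not a HashMap:

```java
import java.util.HashMap;
import java.util.Map;

// Toy frequency recorder with TinyLFU-style aging: after a fixed number
// of increments, every counter is halved, so stale hot keys decay.
class AgingFrequency<K> {
    private final Map<K, Integer> counts = new HashMap<>();
    private final int sampleSize;
    private int increments;

    AgingFrequency(int sampleSize) { this.sampleSize = sampleSize; }

    void increment(K key) {
        counts.merge(key, 1, Integer::sum);
        if (++increments >= sampleSize) {
            reset();
        }
    }

    int frequency(K key) { return counts.getOrDefault(key, 0); }

    // The aging step: halve every counter and restart the sample window
    private void reset() {
        counts.replaceAll((k, v) -> v / 2);
        increments = 0;
    }
}
```

With aging in place, a formerly hot key's count keeps shrinking once it stops being accessed, so newer hot keys can eventually outrank it.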

Four eviction modes and examples

Caffeine offers four kinds of eviction settings:

  1. Size (evicted with the W-TinyLFU algorithm described above)
  2. Weight (size and weight are mutually exclusive; you can only choose one)
  3. Time
  4. References (rarely used, not covered in this article)

Example:

import com.github.benmanes.caffeine.cache.Cache;
import com.github.benmanes.caffeine.cache.Caffeine;
import com.github.benmanes.caffeine.cache.Scheduler;
import com.github.benmanes.caffeine.cache.Weigher;
import lombok.extern.slf4j.Slf4j;
import org.junit.Test;

import java.util.concurrent.TimeUnit;

/**
 * @author yejunxi
 * @date 2021/7/23
 */
@Slf4j
public class CacheTest {


    /** Eviction by cache size */
    @Test
    public void maximumSizeTest() throws InterruptedException {
        Cache<Integer, Integer> cache = Caffeine.newBuilder()
                // Once there are more than 10 entries, the W-TinyLFU algorithm evicts some of them
                .maximumSize(10)
                .evictionListener((key, val, removalCause) -> {
                    log.info("Evicted key:{} val:{}", key, val);
                })
                .build();

        for (int i = 1; i < 20; i++) {
            cache.put(i, i);
        }
        Thread.sleep(500);// Eviction happens asynchronously

        // Print the entries that were not evicted
        System.out.println(cache.asMap());
    }

    /** Eviction by weight */
    @Test
    public void maximumWeightTest() throws InterruptedException {
        Cache<Integer, Integer> cache = Caffeine.newBuilder()
                // Limit the total weight; when the combined weight of all entries exceeds it, low-weight entries are evicted
                .maximumWeight(100)
                .weigher((Weigher<Integer, Integer>) (key, value) -> key)
                .evictionListener((key, val, removalCause) -> {
                    log.info("Evicted key:{} val:{}", key, val);
                })
                .build();

        // The combined weight of all cached entries
        int maximumWeight = 0;
        for (int i = 1; i < 20; i++) {
            cache.put(i, i);
            maximumWeight += i;
        }
        System.out.println("Total weight = " + maximumWeight);
        Thread.sleep(500);// Eviction happens asynchronously

        // Print the entries that were not evicted
        System.out.println(cache.asMap());
    }


    /** Expire after access (the timer resets on every access, so an entry that keeps being accessed is never evicted) */
    @Test
    public void expireAfterAccessTest() throws InterruptedException {
        Cache<Integer, Integer> cache = Caffeine.newBuilder()
                .expireAfterAccess(1, TimeUnit.SECONDS)
                // Instead of waiting for Caffeine's periodic maintenance, you can specify a scheduler to remove expired entries promptly
                // If no scheduler is set, expired entries are removed passively on the next cache access
                .scheduler(Scheduler.systemScheduler())
                .evictionListener((key, val, removalCause) -> {
                    log.info("Evicted key:{} val:{}", key, val);
                })
                .build();
        cache.put(1, 2);
        System.out.println(cache.getIfPresent(1));
        Thread.sleep(3000);
        System.out.println(cache.getIfPresent(1));// null
    }

    /** Expire after write */
    @Test
    public void expireAfterWriteTest() throws InterruptedException {
        Cache<Integer, Integer> cache = Caffeine.newBuilder()
                .expireAfterWrite(1, TimeUnit.SECONDS)
                // Instead of waiting for Caffeine's periodic maintenance, you can specify a scheduler to remove expired entries promptly
                // If no scheduler is set, expired entries are removed passively on the next cache access
                .scheduler(Scheduler.systemScheduler())
                .evictionListener((key, val, removalCause) -> {
                    log.info("Evicted key:{} val:{}", key, val);
                })
                .build();
        cache.put(1, 2);
        Thread.sleep(3000);
        System.out.println(cache.getIfPresent(1));// null
    }
}

There is also refreshAfterWrite(), which refreshes the cache automatically x seconds after a write:

private static int NUM = 0;

@Test
public void refreshAfterWriteTest() throws InterruptedException {
    LoadingCache<Integer, Integer> cache = Caffeine.newBuilder()
            .refreshAfterWrite(1, TimeUnit.SECONDS)
            // Simulate loading data; each load increments the value by 1
            .build(integer -> ++NUM);

    // Key 1 is not in the cache yet, so it is loaded and cached automatically
    System.out.println(cache.get(1));// 1

    // After a 2-second delay the entry is due for a refresh, so in theory the value should be 2
    // However, refreshAfterWrite refreshes lazily: the refresh only runs when the key is accessed again after the interval
    // So the first read after the interval still returns the old value and merely triggers the refresh
    Thread.sleep(2000);
    System.out.println(cache.getIfPresent(1));// 1

    // By now the refresh has run; only the first read returned the old value
    System.out.println(cache.getIfPresent(1));// 2
}
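That lazy-refresh behavior reduces to a small sketch: a read past the refresh interval returns the old value and reloads the entry for the next read. This is a synchronous simplification for illustration only (Caffeine performs the reload asynchronously), and time is passed in explicitly to keep it deterministic:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Simplified refresh-after-write: a stale read returns the old value
// and reloads the entry, so only the following read sees the new value.
// (Caffeine reloads asynchronously; this sketch reloads inline.)
class RefreshingCache<K, V> {
    private static final class Entry<V> {
        final V value;
        final long writtenAt;
        Entry(V value, long writtenAt) { this.value = value; this.writtenAt = writtenAt; }
    }

    private final Map<K, Entry<V>> map = new HashMap<>();
    private final long refreshAfterMillis;
    private final Function<K, V> loader;

    RefreshingCache(long refreshAfterMillis, Function<K, V> loader) {
        this.refreshAfterMillis = refreshAfterMillis;
        this.loader = loader;
    }

    V get(K key, long nowMillis) {
        Entry<V> e = map.get(key);
        if (e == null) {                       // miss: load and cache
            V v = loader.apply(key);
            map.put(key, new Entry<>(v, nowMillis));
            return v;
        }
        if (nowMillis - e.writtenAt >= refreshAfterMillis) {
            // Stale: reload for future reads, but still return the old value
            map.put(key, new Entry<>(loader.apply(key), nowMillis));
        }
        return e.value;
    }
}
```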

Best practices

Which eviction settings work best in real development? In my experience, size-based eviction is the most commonly used.

Practice 1

Configuration: set maximumSize and refreshAfterWrite; do not set expireAfterWrite/expireAfterAccess

Pros and cons: when expireAfterWrite is set, an expired entry is reloaded synchronously under a lock, so leaving it out in favor of refreshAfterWrite performs better; the trade-off is that stale data may be returned in some cases

Practice 2

Configuration: set maximumSize and expireAfterWrite/expireAfterAccess; do not set refreshAfterWrite

Pros and cons: the opposite of the above: good data consistency and no stale reads, but worse performance (by comparison); suitable for scenarios where loading the data is not time-consuming

Integrating with Spring Boot

<!-- cache -->
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-cache</artifactId>
</dependency>
<!-- https://mvnrepository.com/artifact/com.github.ben-manes.caffeine/caffeine -->
<dependency>
    <groupId>com.github.ben-manes.caffeine</groupId>
    <artifactId>caffeine</artifactId>
    <version>3.0.3</version>
</dependency>

There are two ways to configure it:

  1. yml — not recommended, because the eviction policy is then shared and you cannot configure a different policy for each cache; not demonstrated here
  2. @Configuration

The second approach is shown here.

  1. Enable caching with @EnableCaching

@Configuration
public class CaffeineConfig {

    @Bean
    public CacheManager caffeineCacheManager() {
        List<CaffeineCache> caffeineCaches = new ArrayList<>();

        // You can define multiple caches here, driven by yml or an enum, giving caches with different names different configurations
        caffeineCaches.add(new CaffeineCache("cache1",
                Caffeine.newBuilder()
                        .expireAfterWrite(10, TimeUnit.SECONDS)
                        .build())
        );

        SimpleCacheManager cacheManager = new SimpleCacheManager();
        cacheManager.setCaches(caffeineCaches);
        return cacheManager;
    }
}

You can then use Spring's cache annotations directly: @Cacheable, @CacheEvict, @CachePut, etc., which are not explained here

@Service
@Slf4j
public class StudentService {
    @Cacheable(value = "cache1")
    public String getNameById(int id) {
        log.info("select * from db where id = " + id);
        return new Date().toString();
    }
}

Using Redis as a second-level cache

There are generally three approaches to caching:

  1. Local in-memory caches, such as Caffeine and Ehcache: suited to single-node systems; the fastest option, but capacity is limited and the cache is lost when the system restarts
  2. Centralized caches, such as Redis and Memcached: suited to distributed systems, and they solve the capacity and restart-loss problems; but under very heavy traffic the bottleneck is often not the cache's performance but network bandwidth: the Redis server's load stays low while the machine's NIC bandwidth is saturated, making reads very slow
  3. The two-level cache, born from combining the two approaches above: the in-memory cache serves as the first level and the centralized cache as the second
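The read path of such a two-level cache can be sketched in a few lines: check L1, fall back to L2 and backfill L1, and only then hit the loader. This is purely an illustration of the principle, with plain maps standing in for Caffeine and Redis:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Two-level read-through: L1 (local memory) -> L2 (centralized, e.g. Redis) -> loader.
// Plain maps stand in for the real stores; this only shows the lookup order.
class TwoLevelCache<K, V> {
    final Map<K, V> l1 = new HashMap<>(); // local, fastest, lost on restart
    final Map<K, V> l2 = new HashMap<>(); // shared, survives local restarts

    V get(K key, Function<K, V> loader) {
        V v = l1.get(key);
        if (v != null) return v;          // L1 hit: no network round trip
        v = l2.get(key);
        if (v == null) {                  // both levels missed: load from the source
            v = loader.apply(key);
            l2.put(key, v);
        }
        l1.put(key, v);                   // backfill L1 for subsequent reads
        return v;
    }
}
```

The hard part a real framework solves is invalidation: when one node updates an entry, the L1 copies on other nodes must be cleared, which is why a broadcast channel (e.g. Redis pub/sub) is needed.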

A mature framework already exists on the market: J2Cache, an open-source tool officially released by OSChina

That’s the general principle

Use reference with Spring Boot:

<!-- You may omit the explicit Caffeine import; J2Cache defaults to a 2.x version -->
<dependency>
    <groupId>com.github.ben-manes.caffeine</groupId>
    <artifactId>caffeine</artifactId>
    <version>3.0.3</version>
</dependency>

<dependency>
    <groupId>net.oschina.j2cache</groupId>
    <artifactId>j2cache-spring-boot2-starter</artifactId>
    <version>2.8.0-release</version>
</dependency>

<dependency>
    <groupId>net.oschina.j2cache</groupId>
    <artifactId>j2cache-core</artifactId>
    <version>2.8.0-release</version>
    <exclusions>
        <exclusion>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-simple</artifactId>
        </exclusion>
    </exclusions>
</dependency>

bootstrap.yml

j2cache:
  config-location: classpath:/j2cache-${spring.profiles.active}.properties
  # Enable Spring Cache support
  open-spring-cache: true
  # jedis or lettuce, matching the configuration in j2cache.properties
  redis-client: lettuce
  # Allow null values
  allow-null-values: true
  # Whether to enable the second-level cache
  l2-cache-open: true
  # The following setting selects the cache clearing mode:
  # * active: active clearing; when a second-level cache entry expires, every node is actively notified to clear it, so all nodes clear the cache at the same time
  # * passive: passive clearing; when a first-level cache entry expires, every node is notified to clear both the first- and second-level caches
  # * blend: both modes work together; for nodes with high cache accuracy and timeliness requirements (one of the first two is recommended)
  cache-clean-mode: passive

Add two .properties configuration files (yml is not supported):

j2cache-test.properties

# Caffeine local cache definition file
caffeine.properties = /caffeine.properties
j2cache.broadcast = net.oschina.j2cache.cache.support.redis.SpringRedisPubSubPolicy
j2cache.L1.provider_class = caffeine
j2cache.L2.provider_class = net.oschina.j2cache.cache.support.redis.SpringRedisProvider
j2cache.L2.config_section = redis
# Serialization
j2cache.serialization = json

#########################################
# Redis connection configuration
#########################################
# Redis deployment mode:
# single  -> single Redis server
# sentinel -> master-slaves
# cluster -> cluster servers
# sharded -> sharded servers (password is given in the hosts, e.g. redis://user:[email protected]:6379/0)
redis.mode = single
redis.cluster_name = mymaster
# Name of the channel used to notify nodes to delete their local cache
redis.channel = j2cache
# Redis cache key prefix
redis.namespace = j2cache
## connection
#redis.hosts = 127.0.0.1:26378,127.0.0.1:26379,127.0.0.1:26380
redis.hosts = 127.0.0.1:6379
redis.timeout = 2000
redis.password = xfish311
redis.database = 1

caffeine.properties:

#########################################
# Caffeine configuration
# Define a local cache: [name] = size, xxxx[s|m|h|d] (expiration time)
#########################################
default = 1000, 30m
cache1 = 1000, 30m

Done. The configuration above is what I ended up with after some tidying; refer to the official documentation for the specifics.

If you are interested in the internals of two-level caching, I also suggest looking at another project, whose source code is easier to understand than J2Cache's; it is a stripped-down two-level cache: github.com/pig-mesh/mu…