When a MySQL query needs more than one fetch for the same input, or requires heavy computation, it is natural to reach for a Redis cache. However, when query concurrency is very high, the round trip to the Redis service itself becomes time-consuming. In this scenario, a common solution is to add a local cache in front of Redis to cut query time.

The basic architecture of multi-level caching

Description: Storage uses MySQL, Redis, and Guava Cache: MySQL for persistence, Redis as the distributed cache, and Guava Cache as the local cache. The second-level cache is simply a Guava Cache layer sitting on top of Redis.

Guava Cache

A Guava cache is similar to a ConcurrentHashMap in that both are key-value stores, but a ConcurrentHashMap can only remove elements explicitly, while a Guava cache evicts entries automatically when capacity is exceeded or entries time out.
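The size-based eviction described above can be illustrated with plain JDK code. The sketch below (a simplification, not Guava's actual implementation) uses `LinkedHashMap` in access order with `removeEldestEntry` to evict the least recently used entry once capacity is exceeded, which is conceptually what Guava Cache does for `maximumSize`:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class BoundedCacheDemo {
    /** A size-bounded LRU map: evicts the eldest entry once size exceeds maxSize. */
    public static <K, V> Map<K, V> boundedCache(int maxSize) {
        return new LinkedHashMap<K, V>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > maxSize;
            }
        };
    }

    public static void main(String[] args) {
        Map<Long, String> cache = boundedCache(2);
        cache.put(1L, "a");
        cache.put(2L, "b");
        cache.put(3L, "c");                        // evicts key 1, the least recently used
        System.out.println(cache.containsKey(1L)); // prints false
        System.out.println(cache.keySet());        // prints [2, 3]
    }
}
```

Guava adds time-based expiry and thread safety on top of this idea; the demo only shows the size bound.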

Encapsulating guava cache

  • Abstract class: SuperBaseGuavaCache.java

    @Slf4j
    public abstract class SuperBaseGuavaCache<K, V> {

        private LoadingCache<K, V> cache;

        /** Maximum cache size, default 10 */
        protected Integer maximumSize = 10;

        /** Cache expiration duration, default 10 */
        protected Long duration = 10L;

        /** Cache expiration time unit, default seconds */
        protected TimeUnit timeUnit = TimeUnit.SECONDS;

        /**
         * Load the cache (double-checked locking singleton)
         *
         * @return LoadingCache<K, V>
         */
        private LoadingCache<K, V> getCache() {
            if (cache == null) {
                synchronized (SuperBaseGuavaCache.class) {
                    if (cache == null) {
                        CacheBuilder<Object, Object> tempCache = null;
                        if (duration > 0 && timeUnit != null) {
                            tempCache = CacheBuilder.newBuilder()
                                    .expireAfterWrite(duration, timeUnit);
                        }
                        if (maximumSize > 0) {
                            tempCache.maximumSize(maximumSize);
                        }
                        cache = tempCache.build(new CacheLoader<K, V>() {
                            @Override
                            public V load(K key) throws Exception {
                                // a null return value is not allowed
                                V target = getLoadData(key) != null ? getLoadData(key) : getLoadDataIfNull(key);
                                return target;
                            }
                        });
                    }
                }
            }
            return cache;
        }

        /**
         * Load data into the cache (typically from Redis)
         */
        abstract V getLoadData(K key);

        /**
         * Load data when getLoadData() returns null (typically from the database)
         */
        abstract V getLoadDataIfNull(K key);

        /**
         * Clear a batch of keys, or all entries when keys is null
         */
        public void batchInvalidate(List<K> keys) {
            if (keys != null) {
                getCache().invalidateAll(keys);
                log.info("batch clear cache, keys: {}", keys);
            } else {
                getCache().invalidateAll();
                log.info("cleared all caches");
            }
        }

        /**
         * Actively invalidate the cache entry for one key
         */
        public void invalidateOne(K key) {
            getCache().invalidate(key);
            log.info("clear guava cache, key: {}", key);
        }

        /**
         * Put a key-value pair into the cache
         */
        public void putIntoCache(K key, V value) {
            getCache().put(key, value);
        }

        /**
         * Get the value for a key, loading it on a miss
         */
        public V getCacheValue(K key) {
            V cacheValue = null;
            try {
                cacheValue = getCache().get(key);
            } catch (ExecutionException e) {
                log.error("failed to get guava cache value, key: {}", key, e);
            }
            return cacheValue;
        }
    }
  • Abstract class description:

    • Double-checked locking is used so the LoadingCache singleton is built safely under concurrency
    • expireAfterWrite() sets how long a key-value pair lives in the guava cache after being written; the default here is 10 seconds
    • maximumSize() caps the number of key-value pairs held in memory; beyond that, guava cache evicts entries with an LRU-style algorithm
    • Values are loaded through a CacheLoader, which requires implementing load(). When guava cache's get() is called, an existing value is returned directly; otherwise load() is invoked to populate the cache. In this class, load() delegates to two abstract methods that subclasses implement: getLoadData(), which typically reads from an upstream store, and getLoadDataIfNull(), used when getLoadData() returns null, which typically reads the database. guava cache uses the null return to decide whether loading is needed, and a load() that returns null throws InvalidCacheLoadException, so null must never be returned
    • invalidateOne() actively invalidates the cache entry for one key
    • batchInvalidate() clears a batch of keys, or the whole cache, depending on the argument passed in
    • putIntoCache() caches a key-value pair directly
    • getCacheValue() returns the value in the cache, loading it on a miss
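The double-checked locking in getCache() above follows the standard pattern, sketched below with plain JDK code. Note that the textbook version declares the field volatile so that no thread can observe a half-constructed instance; the abstract class above omits volatile on its cache field, which is a well-known pitfall of this pattern:

```java
public class LazySingleton {
    /** volatile is required so a half-constructed instance is never observed */
    private static volatile LazySingleton instance;

    private LazySingleton() { }

    public static LazySingleton getInstance() {
        if (instance == null) {                      // first check, lock-free fast path
            synchronized (LazySingleton.class) {
                if (instance == null) {              // second check, under the lock
                    instance = new LazySingleton();
                }
            }
        }
        return instance;
    }

    public static void main(String[] args) {
        System.out.println(getInstance() == getInstance()); // prints true
    }
}
```

The same shape appears in getCache(): an unsynchronized null check, then the synchronized block, then a second null check before construction.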
  • The implementation of the abstract class: StudentGuavaCache.java

    @Component
    @Slf4j
    public class StudentGuavaCache extends SuperBaseGuavaCache<Long, Student> {

        @Resource
        private StudentDAO studentDao;

        @Resource
        private RedisService<Long, Student> redisService;

        /**
         * Load data from Redis
         */
        @Override
        Student getLoadData(Long key) {
            Student student = redisService.get(key);
            if (student != null) {
                log.info("load data from redis into guava cache, key: {}", key);
            }
            return student;
        }

        /**
         * Load data from MySQL when Redis misses
         *
         * @param key the cache key
         * @return the loaded Student, or an empty Student when MySQL also misses
         */
        @Override
        Student getLoadDataIfNull(Long key) {
            Student student = null;
            if (key != null) {
                Student studentTemp = studentDao.findStudent(key);
                student = studentTemp != null ? studentTemp : new Student();
            }
            log.info("load data from mysql into guava cache, key: {}", key);
            // write back to Redis so the next lookup hits the distributed cache
            redisService.set(key, student);
            return student;
        }
    }

    Implements the parent class's getLoadData() and getLoadDataIfNull() methods

    • getLoadData() returns the value from Redis
    • getLoadDataIfNull() queries MySQL when the Redis cache misses, and returns an empty object when MySQL also has no record (the empty object is written back, so missing keys are not reloaded on every lookup)
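The load path described above (Redis first, then MySQL, with a write-back and an empty object on a total miss) can be sketched with plain maps standing in for Redis and MySQL. All names here are hypothetical stand-ins, and String values replace the Student entity for brevity:

```java
import java.util.HashMap;
import java.util.Map;

public class TwoLevelLoader {
    // Maps standing in for the Redis cache and the MySQL table (assumption: String values)
    public final Map<Long, String> redis = new HashMap<>();
    public final Map<Long, String> mysql = new HashMap<>();
    /** Empty object cached for missing keys, mirroring new Student() above */
    public static final String EMPTY = "";

    /** Mirrors getLoadData(): try the distributed cache first. */
    public String getLoadData(Long key) {
        return redis.get(key);
    }

    /** Mirrors getLoadDataIfNull(): fall back to the database, write back to "Redis". */
    public String getLoadDataIfNull(Long key) {
        String value = mysql.getOrDefault(key, EMPTY); // empty object when mysql misses
        redis.put(key, value);                         // write-back for the next lookup
        return value;
    }

    /** Mirrors the CacheLoader.load() logic: never returns null. */
    public String load(Long key) {
        String v = getLoadData(key);
        return v != null ? v : getLoadDataIfNull(key);
    }
}
```

The key property is that load() can never return null: either Redis has a value, or the MySQL fallback produces a real record or the empty object.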

Query

  • Flow chart:

1. Query the local cache
2. On a local cache miss, query the redis cache
3. On a redis cache miss, query mysql
4. Load the query result into the local cache and return it

  • Code implementation:
    public Student findStudent(Long id) {
        if (id == null) {
            throw new ErrorException("id must not be null");
        }
        return studentGuavaCache.getCacheValue(id);
    }

Delete

  • Flow chart:

  • Code implementation:
    @Transactional(rollbackFor = Exception.class)
    public int removeStudent(Long id) {
        // 1. Remove the guava cache entry
        studentGuavaCache.invalidateOne(id);
        // 2. Remove the redis cache entry
        redisService.delete(id);
        // 3. Delete the row from mysql
        return studentDao.removeStudent(id);
    }

Update

  • Flow chart:

  • Code implementation:
    @Transactional(rollbackFor = Exception.class)
    public int updateStudent(Student student) {
        // 1. Remove the guava cache entry
        studentGuavaCache.invalidateOne(student.getId());
        // 2. Remove the redis cache entry
        redisService.delete(student.getId());
        // 3. Update the row in mysql
        return studentDao.updateStudent(student);
    }

    Update differs from delete only in the final mysql step; in both cases the two cache levels are deleted first.
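Since the two operations share their first two steps, the pattern can be factored into one helper. The sketch below uses plain maps as stand-ins for the two cache levels and the database (hypothetical names, not the article's classes), with the differing database write passed in as a Runnable:

```java
import java.util.HashMap;
import java.util.Map;

public class CacheEvictingWriter {
    public final Map<Long, String> guavaCache = new HashMap<>(); // stand-in for the local cache
    public final Map<Long, String> redis = new HashMap<>();      // stand-in for the distributed cache
    public final Map<Long, String> mysql = new HashMap<>();      // stand-in for the database

    /** Evict both cache levels, then apply the database write. */
    public void evictThenWrite(Long id, Runnable dbWrite) {
        guavaCache.remove(id);   // 1. remove the local cache entry
        redis.remove(id);        // 2. remove the distributed cache entry
        dbWrite.run();           // 3. the only step that differs between update and delete
    }

    public void removeStudent(Long id) {
        evictThenWrite(id, () -> mysql.remove(id));
    }

    public void updateStudent(Long id, String value) {
        evictThenWrite(id, () -> mysql.put(id, value));
    }
}
```

Evicting rather than updating the caches means the next read repopulates them through the normal load path, so the caches never hold a value the database write did not produce.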


Finally: the complete project address is attached; the code above lives on the master branch.

================= The following was updated on 2019.01.18 =================

Using multi-level caching in an annotation-based manner

  • Why provide an annotation-based way to use multi-level caching?
    1. Without annotations, business code and caching code are coupled; annotations decouple them, keeping business logic and cache logic separate
    2. It makes development easier
  • Definition of annotations
    @Target({ElementType.TYPE, ElementType.METHOD})
    @Retention(RetentionPolicy.RUNTIME)
    public @interface DoubleCacheDelete {
        /** SpEL expression for the cache key */
        String key();
    }

    Declares a @DoubleCacheDelete annotation

  • Interception of annotations
    @Aspect
    @Component
    public class DoubleCacheDeleteAspect {

        /** Discoverer used to obtain method parameter names */
        private LocalVariableTableParameterNameDiscoverer discoverer = new LocalVariableTableParameterNameDiscoverer();

        @Resource
        private StudentGuavaCache studentGuavaCache;

        @Resource
        private RedisService<Long, Student> redisService;

        /**
         * Process the annotation around the target method
         *
         * @param pjd               the join point
         * @param doubleCacheDelete the annotation instance
         * @return the target method's return value
         */
        @Around("@annotation(com.cqupt.study.annotation.DoubleCacheDelete) && @annotation(doubleCacheDelete)")
        @Transactional(rollbackFor = Exception.class)
        public Object dealProcess(ProceedingJoinPoint pjd, DoubleCacheDelete doubleCacheDelete) {
            Object result = null;
            Method method = ((MethodSignature) pjd.getSignature()).getMethod();
            // get parameter names and values
            String[] params = discoverer.getParameterNames(method);
            Object[] args = pjd.getArgs();
            SpelParser<String> spelParser = new SpelParser<>();
            EvaluationContext context = spelParser.setAndGetContextValue(params, args);
            if (doubleCacheDelete.key() == null) {
                throw new ErrorException("@DoubleCacheDelete key must not be null");
            }
            String key = spelParser.parse(doubleCacheDelete.key(), context);
            if (key != null) {
                // 1. Remove the guava cache entry
                studentGuavaCache.invalidateOne(Long.valueOf(key));
                // 2. Remove the redis cache entry
                redisService.delete(Long.valueOf(key));
            } else {
                throw new ErrorException("@DoubleCacheDelete key could not be resolved; check that it matches a method parameter");
            }
            // proceed to the target method
            try {
                result = pjd.proceed();
            } catch (Throwable throwable) {
                throwable.printStackTrace();
            }
            return result;
        }
    }

    Intercepts the annotation, parses the value of the SpEL expression, and removes the corresponding cache entries

  • SpEL expression parsing
    public class SpelParser<T> {

        /** SpEL expression parser */
        private ExpressionParser parser = new SpelExpressionParser();

        /**
         * Parse a SpEL expression against the given context
         *
         * @param spel    the SpEL expression
         * @param context the evaluation context holding parameter values
         * @return the parsed value
         */
        @SuppressWarnings("unchecked")
        public T parse(String spel, EvaluationContext context) {
            Class<T> keyClass = (Class<T>) ((ParameterizedType) getClass().getGenericSuperclass()).getActualTypeArguments()[0];
            return parser.parseExpression(spel).getValue(context, keyClass);
        }

        /**
         * Store parameter names and their values into an EvaluationContext
         *
         * @param params parameter names
         * @param object parameter values
         * @return the populated EvaluationContext
         */
        public EvaluationContext setAndGetContextValue(String[] params, Object[] object) {
            EvaluationContext context = new StandardEvaluationContext();
            for (int i = 0; i < params.length; i++) {
                context.setVariable(params[i], object[i]);
            }
            return context;
        }
    }

    Abstract out a special class for SpEL parsing
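The binding the parser performs (parameter names mapped to argument values, then evaluating a reference like `#id`) can be mimicked without Spring. The sketch below is a deliberately simplified illustration, not real SpEL: it only resolves plain `#name` references, whereas SpEL supports full expressions:

```java
import java.util.HashMap;
import java.util.Map;

public class SimpleKeyResolver {
    /** Bind parameter names to argument values, as setAndGetContextValue() does. */
    public static Map<String, Object> bind(String[] params, Object[] args) {
        Map<String, Object> context = new HashMap<>();
        for (int i = 0; i < params.length; i++) {
            context.put(params[i], args[i]);
        }
        return context;
    }

    /** Resolve a "#name" reference against the context; real SpEL handles far more. */
    public static Object resolve(String expression, Map<String, Object> context) {
        if (!expression.startsWith("#")) {
            throw new IllegalArgumentException("only #param references supported here");
        }
        return context.get(expression.substring(1));
    }

    public static void main(String[] args) {
        Map<String, Object> ctx = bind(new String[]{"id"}, new Object[]{42L});
        System.out.println(resolve("#id", ctx)); // prints 42
    }
}
```

This is the core of what the aspect does: look up the annotated method's parameter names, pair them with the call's arguments, and evaluate the annotation's key expression against that context.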

  • The delete method, rewritten to use the annotation:
    public int removeStudent(Long id) {
        return studentDao.removeStudent(id);
    }

    Compared with the original method, this one contains no cache-deletion code; removing the cache entries is left entirely to the annotation


At present, the following problems have been improved [updated on the yucache_annotation_20190114 branch]:

  • Handle cache avalanche and cache penetration
  • Add delayed cache double delete to ensure data consistency
  • Optimize the query logic to avoid invalid loads; see the first comment or the GitHub issue description for details of the problem
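The "double delete" mentioned above is commonly implemented as: delete the cache, write the database, then delete the cache again after a short delay, so that a stale value re-cached by a concurrent reader is also removed. A minimal sketch with maps standing in for Redis and MySQL (hypothetical names; the delay value is illustrative):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

public class DelayedDoubleDelete {
    public final Map<Long, String> cache = new ConcurrentHashMap<>(); // stand-in for redis
    public final Map<Long, String> db = new ConcurrentHashMap<>();    // stand-in for mysql
    public final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

    /** Delete cache, write the database, then delete the cache again after a delay. */
    public ScheduledFuture<?> update(Long id, String value, long delayMillis) {
        cache.remove(id);      // first delete
        db.put(id, value);     // database write
        // second delete catches a stale value re-cached by a concurrent reader
        return scheduler.schedule(() -> cache.remove(id), delayMillis, TimeUnit.MILLISECONDS);
    }

    /** Convenience for demos: block until the second delete has run. */
    public void updateAndWait(Long id, String value, long delayMillis) {
        try {
            update(id, value, delayMillis).get();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```

The delay is typically chosen to exceed one read round trip, so any reader that fetched the old value before the database write has finished re-caching it by the time the second delete fires.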