Concept: In a single-process system, when multiple threads can modify the same shared variable at the same time, the variable or the code block that touches it must be synchronized so that modifications happen one at a time; otherwise concurrent modification leads to data inconsistency or data pollution. For only one thread at a time to execute a block of code, there must be a flag that every thread can see and that a thread may set only when it does not yet exist. The remaining threads find that the flag already exists and wait until the thread holding it leaves the synchronized block and clears the flag before trying to set it themselves. This flag can be understood as a lock.

Note: On a single machine (a single JVM), memory is shared between threads, so concurrency can be resolved with an ordinary local lock. In a distributed deployment (multiple JVMs), however, thread A and thread B are probably not in the same JVM, so a thread-level lock no longer works and a distributed lock is needed instead.
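
To make the idea of the "flag" concrete, here is a minimal sketch of a local lock in Java (the class and field names are invented for illustration). The synchronized monitor object only exists inside one JVM, which is exactly why it cannot coordinate threads running in another JVM:

public class LocalLockDemo {

    private int sharedCounter = 0; // shared variable modified by many threads

    public void increment() {
        // 'this' acts as the flag: the first thread to enter sets it,
        // the others wait until it is released at the end of the block
        synchronized (this) {
            sharedCounter++;
        }
    }
}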

Let’s first test the unlocked, multi-threaded case on a single machine.

One: the code example is as follows

public Map<String, List<Catelog2Vo>> getCatalogJson() {
    String category = redisTemplate.opsForValue().get("category");
    Map<String, List<Catelog2Vo>> map = null;
    if (StringUtils.isEmpty(category)) {
        System.out.println("Missed cache --");
        // Query the database
        map = getCatalogJsonWithDb();
        // Insert the result into the cache
        String s = JSONObject.toJSONString(map);
        redisTemplate.opsForValue().set("category", s);
    } else {
        System.out.println("Cache hit --");
        // Deserialize using an anonymous inner TypeReference
        map = JSON.parseObject(category, new TypeReference<Map<String, List<Catelog2Vo>>>() {
        });
    }
    return map;
}

Two: start 200 threads, each looping 3 times
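
The load here is presumably driven by JMeter (it is used explicitly later in the article). If you want to reproduce the effect purely from code, a rough equivalent using an ExecutorService could look like the sketch below; categoryService is a hypothetical reference to the bean that exposes getCatalogJson():

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public void concurrentCacheTest() throws InterruptedException {
    ExecutorService pool = Executors.newFixedThreadPool(200);
    for (int i = 0; i < 200; i++) {
        pool.submit(() -> {
            // each of the 200 threads calls the query method 3 times, 600 requests in total
            for (int j = 0; j < 3; j++) {
                categoryService.getCatalogJson();
            }
        });
    }
    pool.shutdown();
    pool.awaitTermination(1, TimeUnit.MINUTES); // wait for all requests to finish
}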

Checking the background logs, we find that 100 cache misses and 500 cache hits are printed.

That is not the ideal situation. Ideally, no matter how many concurrent requests we fire, only one cache miss should be printed. Since we are in a stand-alone environment, let’s try to solve this with a local lock.

Three: modify the code to add a synchronized lock

public Map<String, List<Catelog2Vo>> getCatalogJsonWithDb() {
    synchronized (this) {
        System.out.println("Start query database --");
        // Query all categories at once to reduce database IO
        List<CategoryEntity> selectList = this.baseMapper.selectList(null); // No query criteria: select all rows
        //List<CategoryEntity> level1 = this.listWithTree1();
        // Collect all level 1 categories from the in-memory list instead of querying the database again
        List<CategoryEntity> level1 = getParentCid(selectList, 0L);
        Map<String, List<Catelog2Vo>> map = level1.stream().collect(Collectors.toMap(k -> k.getCatId().toString(), v -> {
            // Find all level 2 categories under the current level 1 category
            List<CategoryEntity> categoryEntities = getParentCid(selectList, v.getCatId());
            log.info("The current primary classification ID is: {}, and all corresponding secondary classifications are: {}", v.getCatId(), categoryEntities);
            // Encapsulate the result
            List<Catelog2Vo> catelog2Vos = null;
            if (categoryEntities != null) {
                catelog2Vos = categoryEntities.stream().map(l2 -> {
                    Catelog2Vo catelog2Vo = new Catelog2Vo(v.getCatId().toString(), l2.getCatId().toString(), l2.getName(), null);
                    // Find the level 3 categories and wrap them into vo objects
                    List<CategoryEntity> level3Catelog = getParentCid(selectList, l2.getCatId());
                    log.info("The current secondary classification ID is: {}, and all corresponding tertiary classifications are: {}", l2.getCatId(), level3Catelog);
                    if (level3Catelog != null) {
                        List<Catelog2Vo.Category3Vo> collect = level3Catelog.stream().map(l3 -> {
                            Catelog2Vo.Category3Vo category3Vo = new Catelog2Vo.Category3Vo(l2.getCatId().toString(), l3.getCatId().toString(), l3.getName());
                            return category3Vo;
                        }).collect(Collectors.toList());
                        catelog2Vo.setCatalog3List(collect);
                    }
                    return catelog2Vo;
                }).collect(Collectors.toList());
            }
            return catelog2Vos;
        }));
        return map;
    }
}
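
The getParentCid helper used above is not shown in the article. Judging from how it is called, it presumably just filters the already-loaded list by parent id in memory instead of issuing another SQL query. A minimal sketch, assuming CategoryEntity exposes getParentCid(), could look like this (placed in the same service class):

private List<CategoryEntity> getParentCid(List<CategoryEntity> selectList, Long parentCid) {
    // Filter the full category list in memory instead of hitting the database again
    return selectList.stream()
            .filter(item -> item.getParentCid().equals(parentCid))
            .collect(Collectors.toList());
}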

The locking mechanism is very simple: just add the synchronized keyword to the database query method. Remember to delete the key in Redis before the test. Because my back-end memory setting is not high, this time I set 50 threads looping twice and start the JMeter test.

Checking the background logs, the database query logic is still executed more than 20 times (the JMeter test was set to fire all requests within one second).

The real cause is the timing of the lock: colloquially, the lock is released at the wrong moment.

The solution is not to release the lock immediately after the thread queries the data; instead, the thread should write the result into the cache and only then release the lock.
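
To make the timing problem concrete, this is roughly what the first version does at the call site, with the cache write sitting outside the synchronized block (a sketch, not the full code):

// Thread A inside getCatalogJson():
Map<String, List<Catelog2Vo>> map = getCatalogJsonWithDb(); // the lock is released when this call returns
// <-- window: thread B acquires the lock here, finds the cache still empty,
//     and queries the database again before thread A has written the cache
redisTemplate.opsForValue().set("category", JSONObject.toJSONString(map));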

Four: solve the lock-timing problem

Sample code is as follows

public Map<String, List<Catelog2Vo>> getCatalogJsonWithDb() {
    synchronized (this) {
        // After acquiring the lock, check the cache again
        String category = redisTemplate.opsForValue().get("category");
        if (!StringUtils.isEmpty(category)) {
            Map<String, List<Catelog2Vo>> stringListMap = JSON.parseObject(category, new TypeReference<Map<String, List<Catelog2Vo>>>() {
            });
            return stringListMap;
        } else {
            System.out.println("Start query database --");
            // Query all categories at once to reduce database IO
            List<CategoryEntity> selectList = this.baseMapper.selectList(null); // No query criteria: select all rows
            //List<CategoryEntity> level1 = this.listWithTree1();
            // Collect all level 1 categories from the in-memory list instead of querying the database again
            List<CategoryEntity> level1 = getParentCid(selectList, 0L);
            Map<String, List<Catelog2Vo>> map = level1.stream().collect(Collectors.toMap(k -> k.getCatId().toString(), v -> {
                // Find all level 2 categories under the current level 1 category
                List<CategoryEntity> categoryEntities = getParentCid(selectList, v.getCatId());
                log.info("The current primary classification ID is: {}, and all corresponding secondary classifications are: {}", v.getCatId(), categoryEntities);
                // Encapsulate the result
                List<Catelog2Vo> catelog2Vos = null;
                if (categoryEntities != null) {
                    catelog2Vos = categoryEntities.stream().map(l2 -> {
                        Catelog2Vo catelog2Vo = new Catelog2Vo(v.getCatId().toString(), l2.getCatId().toString(), l2.getName(), null);
                        // Find the level 3 categories and wrap them into vo objects
                        List<CategoryEntity> level3Catelog = getParentCid(selectList, l2.getCatId());
                        log.info("The current secondary classification ID is: {}, and all corresponding tertiary classifications are: {}", l2.getCatId(), level3Catelog);
                        if (level3Catelog != null) {
                            List<Catelog2Vo.Category3Vo> collect = level3Catelog.stream().map(l3 -> {
                                Catelog2Vo.Category3Vo category3Vo = new Catelog2Vo.Category3Vo(l2.getCatId().toString(), l3.getCatId().toString(), l3.getName());
                                return category3Vo;
                            }).collect(Collectors.toList());
                            catelog2Vo.setCatalog3List(collect);
                        }
                        return catelog2Vo;
                    }).collect(Collectors.toList());
                }
                return catelog2Vos;
            }));
            // Write the result into the Redis cache BEFORE the lock is released
            String s = JSONObject.toJSONString(map);
            redisTemplate.opsForValue().set("category", s);
            return map;
        }
    }
}

There are two key points in this code: first, after obtaining the lock the thread goes back to Redis to confirm whether the cache already exists; second, the query result is written into the cache before the lock is released.

Five: restart the service and run the JMeter test again

As you can see, 300 threads making 600 requests in total result in only a single DB query, which satisfies the requirement.

Record the throughput, response time and other metrics, then compare the performance with a distributed lock!
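
As a preview of that comparison, a distributed lock is usually built on an atomic Redis SET ... NX EX command rather than on a JVM monitor. A very rough sketch with Spring Data Redis (key name and timeout are illustrative, and details such as safe token-checked unlock and lock renewal are deliberately omitted):

import java.util.UUID;
import java.util.concurrent.TimeUnit;

String token = UUID.randomUUID().toString();
// Equivalent to: SET catalog_lock <token> NX EX 30 -- only one JVM/thread can set the key
Boolean locked = redisTemplate.opsForValue()
        .setIfAbsent("catalog_lock", token, 30, TimeUnit.SECONDS);
if (Boolean.TRUE.equals(locked)) {
    try {
        // query the database and write the "category" cache here
    } finally {
        // naive unlock; a production version should verify the token atomically (e.g. with a Lua script)
        redisTemplate.delete("catalog_lock");
    }
} else {
    // someone else holds the lock: retry after a short sleep or fall back to reading the cache
}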