First, the heavy-client approach

Write cache:



The application updates both the database and the cache:

The application first updates the database; if the database update fails, the entire update fails.

If the database update succeeds, the application then updates the cache.

If the cache update succeeds, the call returns.

If the cache update fails, the data is sent to MQ instead.

The application listens on the MQ channel and, upon receiving a message, retries the Redis update.
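The steps above can be sketched as follows. This is a minimal in-memory sketch, not a production implementation: `db`, `cache`, and `mq` are hypothetical stand-ins for the real database, Redis, and message queue, and the `cache_ok` flag only exists to simulate a Redis failure.

```python
import json
import queue

# Hypothetical stand-ins for the real infrastructure.
db = {}             # the database
cache = {}          # Redis
mq = queue.Queue()  # the message queue

def update(key, value, cache_ok=True):
    """Write path: database first, then cache, with MQ as the fallback."""
    # 1. Update the database; if this raised, the entire update would fail.
    db[key] = value
    try:
        # 2. Try to update the cache (cache_ok=False simulates a Redis outage).
        if not cache_ok:
            raise ConnectionError("redis unavailable")
        cache[key] = value
    except ConnectionError:
        # 3. Cache update failed: hand the data to MQ for a later retry.
        mq.put(json.dumps({"key": key, "value": value}))

def mq_consumer():
    """The application listens on MQ and replays failed cache updates."""
    while not mq.empty():
        msg = json.loads(mq.get())
        cache[msg["key"]] = msg["value"]
```

Note that the retry only works if the message actually reaches MQ, which is exactly the weakness described next.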

Problem: if the Redis update fails and the application restarts before the data reaches MQ, the message is lost and there is nothing for the MQ consumer to replay. If the Redis data has no expiration time and the workload is read-heavy with few writes, the stale cache entry will never be overwritten and can only be fixed by manual intervention.

Read cache:

How can this be solved? When writing data to Redis, attach a timestamp to it. When reading from Redis, first check whether that timestamp has expired: if not, serve the cached value; if it has, read the latest data from the database, overwrite the Redis entry, and refresh the timestamp. The specific process is shown in the figure below:

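The read path can be sketched like this. Again an in-memory sketch under stated assumptions: `db` and `redis` are stand-ins, and `TTL` is a hypothetical freshness window rather than a value from the article.

```python
import time

db = {"user:1": "alice"}  # stand-in for the database
redis = {}                # stand-in for Redis: key -> (value, expires_at)
TTL = 60                  # hypothetical freshness window, in seconds

def read(key):
    """Read path: each cached value carries a timestamp that is checked on read."""
    entry = redis.get(key)
    if entry is not None:
        value, expires_at = entry
        # Timestamp still valid: serve from the cache.
        if time.time() < expires_at:
            return value
    # Expired or missing: read the latest data from the database,
    # overwrite the Redis entry, and refresh the timestamp.
    value = db[key]
    redis[key] = (value, time.time() + TTL)
    return value
```

This way a failed or missed cache update heals itself on the next read after the timestamp expires, instead of requiring human intervention.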

Second, client-side database and cache decoupling

The solution above is relatively heavy for application developers: they must handle the success and failure of both the database and Redis with separate code paths. How can developers focus only on the database layer and not care about the cache layer at all? See the picture below:



The application writes data directly to the database.

The database records the change in its binlog.

Canal middleware reads the binlog.

Canal sends the data to MQ through a rate-limiting component, at a controlled frequency.

The application listens on the MQ channel and writes the received data to the Redis cache.
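The last step, the application's MQ listener, might look roughly like this. The event shape below is an assumption for illustration, not Canal's actual wire format, and `redis` is again an in-memory stand-in.

```python
import json

redis = {}  # stand-in for the Redis cache

def on_mq_message(raw):
    """Apply one binlog-derived change event (as published to MQ) to Redis.

    Assumed event shape: {"type": "INSERT"|"UPDATE"|"DELETE",
                          "key": ..., "value": ...}
    """
    event = json.loads(raw)
    if event["type"] in ("INSERT", "UPDATE"):
        # Row was created or changed: overwrite the cached value.
        redis[event["key"]] = event["value"]
    elif event["type"] == "DELETE":
        # Row was removed: evict the cache entry if present.
        redis.pop(event["key"], None)
```

Because the events originate from the binlog, the application code that writes to the database never touches the cache at all.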

As you can see, this solution is lightweight for developers, who no longer need to worry about the cache layer. The overall architecture is heavier, but it is easy to standardize into a unified, company-wide solution.