Express, as a framework for Node.js, is in its heyday, and I like its flexible, easily extensible design. In particular, its middleware architecture makes adding new features to an application both standardized and cheap. In this article, I will write a very small middleware that implements server-side caching to improve performance.

About Middleware

When it comes to middleware, the Express website says:

“Express is a routing and middleware web framework that has minimal functionality of its own: an Express application is essentially a series of middleware function calls.”

You may have worked with all kinds of middleware during development, yet never looked at how middleware actually works or dug into the Express source code to explore its implementation. I am not going to give a lengthy analysis here; at the usage level, middleware can roughly be summarized by the following diagram:

(Figure: middleware principles)

Readers who are interested in analyzing it themselves are welcome to discuss any questions with me. Even if you don’t want to go deeper, it won’t affect your understanding of the middleware we write below.
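For orientation only, here is a minimal sketch of what a middleware looks like (the logger name and port 3000 are my own choices for illustration): a middleware is just a function of (req, res, next) that either ends the response or calls next() to hand control to the next middleware in the chain.

var express = require('express')
var app = express()

// A middleware is just a function of (req, res, next).
// Calling next() passes control to the next middleware in the chain.
var logger = (req, res, next) => {
  console.log(req.method + ' ' + req.originalUrl)
  next()
}

app.use(logger)

app.get('/', (req, res) => {
  res.send('Hello middleware')
})

app.listen(3000)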

About server-side caching

Caching is widely used to improve page performance. When caching comes up, readers probably think immediately of “client cache, CDN cache, server cache…”, and of concepts like “200 (from cache), Expires, ETag…”.

Of course, as front-end developers we have to understand these concepts, but they describe caching relative to a specific user’s access, i.e. performance optimization for a single user. For example, the first time I open page A it takes a long time; the next time I open page A, the time is shortened thanks to the cache.

But on the server side, there’s another dimension. Consider this scenario:

We have a static page B. To build it, the server needs to fetch some data B1 from the database, compute data B2 from B1, and perform various other expensive operations before finally “assembling” the complete page B to return. The whole process takes 2s.

The disaster is that user1 takes 2s to open the page, user2 takes 2s to open the page… yet every one of them receives the same static page B with exactly the same content. To fix this disaster, we again need caching, this time on the server: a server-side cache.

To summarize, the purpose of server-side caching is to return the same page content for the same page request. This process is completely independent of different users.

The above is a bit of a mouthful; the English expression is clearer:

The goal of a server-side cache is to respond with the same content for the same request, no matter which client the request comes from.

Therefore, in the demo shown below, the server takes 5 seconds to return the HTML when the first request arrives; every subsequent request for the page hits the cache, and whichever user accesses it gets the full page back in milliseconds.

Show me the code & Demo

In fact, the caching concept described above is very simple, and readers with some back-end experience will recognize it immediately. Beyond the basic concept, the main point of this article is to introduce the idea of Express middleware by implementing a server-side caching middleware.

Let’s get to work! The final demo code lives at its Github address, which you are welcome to visit.

I will use the memory-cache package from NPM to make cache reads and writes easy. The final middleware code is short:

'use strict'

var mcache = require('memory-cache');

var cache = (duration) => {
  return (req, res, next) => {
    // Build the cache key from the request URL (the parentheses matter: without
    // them, the || fallback to req.url would never apply).
    let key = '__express__' + (req.originalUrl || req.url)
    let cachedBody = mcache.get(key)
    if (cachedBody) {
      // Cache hit: return the stored body immediately.
      res.send(cachedBody)
      return
    } else {
      // Cache miss: intercept res.send so the body is stored before it is sent.
      res.sendResponse = res.send
      res.send = (body) => {
        mcache.put(key, body, duration * 1000);
        res.sendResponse(body)
      }
      next()
    }
  }
}

For simplicity, I used the request URL as the cache key:

  • If the cache key and its value exist, the value is returned.
  • When the cache key and its corresponding value do not exist, we put a layer of interception on Express’s res.send method: the key-value pair is stored just before the final response is sent.

The cache entry is valid for duration seconds (10 seconds in the demo below).

After the cache check, on a miss our middleware passes control to the next middleware (or route handler) by calling next().
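For reference, the only memory-cache calls used above are put(key, value, ttlInMilliseconds) and get(key), where get returns null once the entry has expired. A quick standalone check (just a sketch; run it in any Node.js script with the package installed):

var mcache = require('memory-cache')

mcache.put('greeting', 'hello', 2000) // keep the value for 2000 ms
console.log(mcache.get('greeting'))   // 'hello'

setTimeout(() => {
  console.log(mcache.get('greeting')) // null: the entry has expired
}, 3000)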

Finally, let’s use and test it with the following code:

app.get('/', cache(10), (req, res) => {
  setTimeout(() => {
    res.render('index', { title: 'Hey', message: 'Hello there', date: new Date()})
  }, 5000) //setTimeout was used to simulate a slow processing request
})

I used setTimeout to simulate an extremely long (5s) operation.

Open the browser’s developer panel and you can see the effect: requests made within the 10 seconds before the cache expires come back almost instantly:

(Figure: loading information)

I won’t go into why the cache middleware is written this way, or why next() passes control; interested readers can take a look at the Express source code.

There are a few minor problems

Take a closer look at our page and at the implementation code. If you are a careful reader, you will notice that in the implementation above we cached the entire page, and we passed date: new Date() into the Jade template index.jade. As a result, while the cache is being hit the date on the page cannot refresh; it stays frozen until the 10-second cache expires.

Also, when can the middleware above be used for server-side caching? Of course, only for static content. Besides that, PUT, DELETE, and POST requests should never be cached this way; as a rule, only GET requests are safe to cache (see the sketch below).
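As a sketch of that rule (my own extension, not part of the original demo), the middleware can simply skip anything that is not a GET request:

var mcache = require('memory-cache')

var cacheGetOnly = (duration) => {
  return (req, res, next) => {
    // Only cache idempotent GET requests; let PUT/DELETE/POST pass through untouched.
    if (req.method !== 'GET') {
      return next()
    }
    let key = '__express__' + (req.originalUrl || req.url)
    let cachedBody = mcache.get(key)
    if (cachedBody) {
      return res.send(cachedBody)
    }
    res.sendResponse = res.send
    res.send = (body) => {
      mcache.put(key, body, duration * 1000)
      res.sendResponse(body)
    }
    next()
  }
}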

Apart from that, the NPM module we use, memory-cache, has its own advantages and disadvantages:

  • Reading and writing are quick and easy, with no external service or dependency required;
  • When the server or the process dies, the contents of the cache are lost;
  • memory-cache stores cached content in the memory of its own process, so the cache cannot be shared between multiple Node.js processes.

If these drawbacks really matter for your case, in real development we can choose a distributed cache service such as Redis instead; the express-redis-cache module is also available on NPM.
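As an illustration only (this is a hand-written sketch using the plain redis client, not the express-redis-cache API), the same middleware idea can be backed by Redis so that multiple Node.js processes share one cache. It assumes a local Redis server and the callback-style redis@3 client:

var redis = require('redis')
var client = redis.createClient() // assumes a Redis server on localhost:6379

var redisCache = (duration) => {
  return (req, res, next) => {
    let key = '__express__' + (req.originalUrl || req.url)
    client.get(key, (err, cachedBody) => {
      if (!err && cachedBody) {
        // Cache hit: the body may have been stored by another process.
        return res.send(cachedBody)
      }
      res.sendResponse = res.send
      res.send = (body) => {
        client.setex(key, duration, body) // Redis TTL is given in seconds
        res.sendResponse(body)
      }
      next()
    })
  }
}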

Conclusion

Server-side caching is common sense in real development scenarios, but in the world of Node.js, it’s just as much fun to experience the middleware mindset and write your own services by hand.

Speaking from practice, I don’t actually recommend caching the entire page as in the demo, nor using the request URL as the cache key. For example, some static content on a page may also appear on other pages, and whole-page caching makes that reuse impossible; see the fragment-caching sketch below.
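A rough illustration of the fragment idea (the getB1 name and its data are hypothetical, and setTimeout again stands in for the slow database query): cache the expensive piece of data under its own key instead of caching a whole rendered page under its URL, so any route can reuse it.

var mcache = require('memory-cache')

// Hypothetical helper: cache the expensive data fragment B1 under its own key,
// so other routes can reuse it instead of re-computing it per page.
var getB1 = (callback) => {
  let cached = mcache.get('data:B1')
  if (cached) {
    return callback(null, cached)
  }
  // setTimeout stands in for the real, slow database query
  setTimeout(() => {
    let b1 = { items: [1, 2, 3] }
    mcache.put('data:B1', b1, 10 * 1000) // keep the fragment for 10 seconds
    callback(null, b1)
  }, 2000)
}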

In real life, every design and every piece of logic should answer to your actual business situation; talking about implementation divorced from requirements is meaningless. This demo is simple and light, and readers who need it can visit its Github address; you are welcome to play all kinds of tricks with it.

Happy Coding!

PS: My Github repository and the Zhihu Q&A link welcome all forms of communication.