Nuxt is a Vue-based SSR solution that lets you write isomorphic code for the front end and back end using Vue syntax.

Nuxt needs to generate the virtual DOM on the server side and then serialize it into an HTML string. The "high performance" often attributed to Node.js refers to asynchronous, IO-intensive scenarios rather than CPU-intensive ones. Node.js, after all, runs JavaScript on a single thread, and performance degrades under high concurrency, so a reasonable caching strategy is worth considering.

Nuxt caching can be divided into three levels: component-level, API-level, and page-level.
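All three levels rely on an in-memory LRU (least-recently-used) cache. To make the eviction policy concrete, here is a minimal hand-rolled sketch — illustrative only; the examples below use the real lru-cache package:

```javascript
// Minimal LRU sketch: a Map remembers insertion order, so the first
// key is always the least recently used one.
class TinyLRU {
  constructor (max) { this.max = max; this.map = new Map() }
  get (key) {
    if (!this.map.has(key)) return undefined
    const value = this.map.get(key)
    this.map.delete(key)        // re-insert to mark as most recently used
    this.map.set(key, value)
    return value
  }
  set (key, value) {
    if (this.map.has(key)) this.map.delete(key)
    this.map.set(key, value)
    if (this.map.size > this.max) {
      // Evict the least recently used entry (first key in insertion order)
      this.map.delete(this.map.keys().next().value)
    }
  }
}

const cache = new TinyLRU(2)
cache.set('a', 1)
cache.set('b', 2)
cache.get('a')      // touch 'a' so it becomes most recently used
cache.set('c', 3)   // capacity exceeded: 'b' is evicted
console.log(cache.get('b')) // undefined
console.log(cache.get('a')) // 1
```

This is the policy lru-cache implements (plus per-entry TTLs via maxAge), which is why a bounded max keeps memory in check even when keys churn.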

Component-level caching

The configuration in nuxt.config.js looks something like this:

const LRU = require('lru-cache')

module.exports = {
  render: {
    bundleRenderer: {
      cache: LRU({
        max: 1000,              // Maximum number of cached entries
        maxAge: 1000 * 60 * 15  // Cache for 15 minutes
      })
    }
  }
}

This configuration alone does not enable component-level caching. You also need to add name and serverCacheKey fields to the Vue components you want cached, which together determine the unique key for the cache entry, for example:

export default {
  name: 'AppHeader',
  props: ['type'],
  serverCacheKey: props => props.type
}

The cache key here is AppHeader::${props.type}. When a new request comes in, as long as the type prop passed by the parent has been rendered before, the cached render result can be reused, improving performance.

As this example shows, if the component depends on anything beyond the parent's type prop, the serverCacheKey must be extended accordingly. Therefore, if the component depends on a lot of global state, or the state it depends on can take many different values, the cache will be written frequently and overflow. In the lru-cache configuration above, the maximum number of entries is 1000, and anything beyond that is evicted.
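For instance, a hypothetical component (the name and props are made up for illustration) that depends on two props must fold both into its key, or requests differing only in the second prop would wrongly share a cached render:

```javascript
// Hypothetical component: the key must include every prop the
// rendered output depends on.
const GameCard = {
  name: 'GameCard',
  props: ['id', 'locale'],
  serverCacheKey: props => `${props.id}::${props.locale}`
}

console.log(GameCard.serverCacheKey({ id: 7, locale: 'en' })) // '7::en'
```

Note that each distinct (id, locale) pair occupies one cache slot, so high-cardinality props eat into the max: 1000 budget quickly.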

Second, you should not cache child components that have side effects on the rendering context. For example, the created and beforeCreate hooks also run on the server side, but once a component's render output is cached these hooks are skipped on subsequent requests, so any side effects they perform will be lost.
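The pitfall can be demonstrated with a simplified stand-in for the renderer (this is not real Vue internals, just a sketch of the caching behavior):

```javascript
// Illustrative sketch: once a component's render output is cached,
// its server-side lifecycle hooks no longer run per request.
let hookRuns = 0
const component = {
  name: 'VisitCounter',
  created () { hookRuns++ },           // side effect on the rendering context
  render () { return '<span>hi</span>' }
}

const renderCache = new Map()
function renderComponent (comp, key) {
  if (renderCache.has(key)) return renderCache.get(key) // hooks skipped!
  comp.created()
  const html = comp.render()
  renderCache.set(key, html)
  return html
}

renderComponent(component, 'VisitCounter')
renderComponent(component, 'VisitCounter')
console.log(hookRuns) // 1 — the hook ran only once despite two requests
```

If the hook's side effect matters on every request, the component must stay uncached.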

In general, a good use case is v-for rendering of large amounts of data, since the loop is CPU-intensive.

API-level caching

In an SSR scenario, API requests are usually made on the server before the rendered page is returned to the browser, and some interfaces are worth caching — for example, interfaces that depend neither on login state nor on many parameters, or interfaces that simply return configuration data. Handling these interfaces also takes time; caching them speeds up the processing of each request and releases requests sooner, improving performance.

The API requests use axios, which works both on the server and in the browser. The code looks something like this:

import axios from 'axios'
import md5 from 'md5'
import LRU from 'lru-cache'

// Add a 3-second cache to the API
const CACHED = LRU({
  max: 1000,
  maxAge: 1000 * 3
})

function request (config) {
  let key
  // Only cache on the server side; the browser does not need it
  if (config.cache && !process.browser) {
    const { params = {}, data = {} } = config
    key = md5(config.url + JSON.stringify(params) + JSON.stringify(data))
    if (CACHED.has(key)) {
      // Cache hit
      return Promise.resolve(CACHED.get(key))
    }
  }
  return axios(config)
    .then(rsp => {
      if (config.cache && !process.browser) {
        // Set the cache before returning the result
        CACHED.set(key, rsp.data)
      }
      return rsp.data
    })
}

Usage is the same as with plain axios, except for an extra cache attribute indicating whether the server should cache the response:

const api = {
  getGames: params => request({
    url: '/gameInfo/gatGames',
    params,
    cache: true
  })
}

Page-level caching

If a page does not depend on login state or on too many parameters, and concurrency is high, you can consider page-level caching by adding the serverMiddleware property in nuxt.config.js:

const nuxtPageCache = require('nuxt-page-cache')

module.exports = {
  serverMiddleware: [
    nuxtPageCache.cacheSeconds(1, req => {
      if (req.query && req.query.pageType) {
        return req.query.pageType
      }
      return false
    })
  ]
}

In the example above, if the request URL carries a pageType query parameter, the server renders each distinct pageType only once per second; requests with the same pageType within that window are served from the cache.
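Since the query string is user-controlled, an unbounded pageType would let arbitrary requests fill the cache. A hypothetical guard (the whitelist values are made up for illustration) only caches known page types:

```javascript
// Hypothetical key function with a whitelist: unknown pageType values
// return false, which skips caching for that request.
const CACHEABLE_TYPES = ['home', 'rank', 'detail']

function pageCacheKey (req) {
  const type = req.query && req.query.pageType
  return CACHEABLE_TYPES.includes(type) ? type : false
}

console.log(pageCacheKey({ query: { pageType: 'home' } })) // 'home'
console.log(pageCacheKey({ query: { pageType: 'evil' } })) // false
```

Passing this function as the second argument to cacheSeconds bounds the number of possible cache keys.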

Looking at the source of nuxt-page-cache, the rough idea is to hijack res.end: Nuxt returns the final data via res.end(html, 'utf-8'), so the middleware can intercept and cache the rendered HTML. A simplified version:

const LRU = require('lru-cache')

let cacheStore = new LRU({
  max: 100,   // Set the maximum number of cached pages
  maxAge: 200 // Default TTL; overridden per entry below
})

module.exports.cacheSeconds = function (secondsTTL, cacheKey) {
  // Cache time in milliseconds
  const ttl = secondsTTL * 1000
  return function (req, res, next) {
    // Get the cache key
    let key = req.originalUrl
    if (typeof cacheKey === 'function') {
      key = cacheKey(req, res)
      if (!key) { return next() }
    } else if (typeof cacheKey === 'string') {
      key = cacheKey
    }

    // If the cache hits, return directly
    const value = cacheStore.get(key)
    if (value) {
      return res.end(value, 'utf-8')
    }

    // Save the original end method
    res.original_end = res.end

    // Override res.end; when Nuxt calls res.end it actually invokes this method
    res.end = function (data) {
      if (res.statusCode === 200) {
        // Set the cache
        cacheStore.set(key, data, ttl)
      }
      // Finally return the result
      res.original_end(data, 'utf-8')
    }

    next()
  }
}

If the cache hits, the previously computed result is returned directly, which yields a significant performance gain since the rendering work is skipped entirely.
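The hijack pattern itself can be exercised outside of Nuxt with a stand-in response object — a sketch, where the fake ServerResponse and route are assumptions for illustration:

```javascript
// Minimal demonstration of the res.end hijack, independent of Nuxt.
const cache = new Map()

function wrap (res, key) {
  const originalEnd = res.end.bind(res)
  res.end = function (data) {
    // Cache successful responses before passing through to the real end
    if (res.statusCode === 200) cache.set(key, data)
    originalEnd(data)
  }
}

// Fake response object standing in for Node's ServerResponse
const sent = []
const res = { statusCode: 200, end: data => sent.push(data) }

wrap(res, '/home')
res.end('<html>home</html>')

console.log(sent[0])            // '<html>home</html>' — client still gets the body
console.log(cache.get('/home')) // '<html>home</html>' — and it was cached
```

The same body reaches the client as before; the wrapper only observes it on the way out.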

Conclusion

Under high concurrency, caching is worth considering; which strategy to use depends on the scenario and is not covered further here. We can also consider using PM2 in cluster mode to manage our processes and handle even higher concurrency.
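A minimal PM2 ecosystem file sketch for this — the app name and script path are assumptions and depend on your project layout:

```javascript
// ecosystem.config.js — hypothetical; adjust name/script to your project
module.exports = {
  apps: [{
    name: 'nuxt-app',                 // assumed app name
    script: './node_modules/nuxt/bin/nuxt.js',
    args: 'start',
    exec_mode: 'cluster',             // PM2 cluster mode
    instances: 'max'                  // one worker per CPU core
  }]
}
```

Started with pm2 start ecosystem.config.js, PM2 load-balances incoming requests across the workers, sidestepping the single-thread bottleneck noted at the top.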