Preface

I recently wrote and published a small static file server for Node, and I want to use it to illustrate some common problems in front-end/back-end HTTP interactions.

The code address

https://github.com/alive1541/static-server — the code quoted below lives in this repository.

Installation method

       npm install static-server2 -g

The Node version

The project uses async functions, so Node 7.6 or above is required.

Usage examples

server-start
localhost:8080

Serving static files

After the service starts successfully, you can visit localhost:8080 to browse the static files in the root directory. When starting from the command line, you can change the root directory by passing -d to server-start. You can also use -o to configure the host, -p to configure the port, and -h to view the help.

File upload

File uploads are supported, and an upload can be paused and resumed.

Let's talk about caching

Using this example, I will walk through a few problems in front-end/back-end interaction. Let's start with caching, and let's start with the code. The following method lives in index.js at the root of the example. It filters requests: it returns 304 on a cache hit, or the fresh resource on a miss. It handles both forced caching and comparison caching.

    // Cache handler
    handleCache(req, res, fileStat) {
        // Forced cache
        res.setHeader('Expires', new Date(Date.now() + 30 * 1000).toGMTString())
        res.setHeader('Cache-Control', 'private,max-age=30')
        // Comparison cache
        let ifModifiedSince = req.headers['if-modified-since']
        let ifNoneMatch = req.headers['if-none-match']
        let lastModified = fileStat.ctime.toGMTString()
        let eTag = fileStat.mtime.toGMTString()
        res.setHeader('Last-Modified', lastModified)
        res.setHeader('ETag', eTag)
        // If any comparison header does not match, do not use the cache
        if (ifModifiedSince && ifModifiedSince != lastModified) {
            return false
        }
        if (ifNoneMatch && ifNoneMatch != eTag) {
            return false
        }
        // If either comparison header is present (and matched), return 304; otherwise do not use the cache
        if (ifModifiedSince || ifNoneMatch) {
            res.writeHead(304)
            res.end()
            return true
        } else {
            return false
        }
    }

Forced caching

The advantage of forced caching is that the browser does not need to send an HTTP request at all, so pages that rarely change are typically given a long forced-cache lifetime. You can bypass it by clearing the browser cache or forcing a refresh (Ctrl+F5). It is implemented with two HTTP headers.

Cache-Control and Expires

These two headers do the same job: both tell the browser how long it may use the local cache without sending a request. Cache-Control comes from the HTTP/1.1 specification and Expires from HTTP/1.0, so Cache-Control takes precedence when both are present. It is common to set both, as in the code above, because very old browsers do not support Cache-Control. Cache-Control also offers finer-grained directives for more precise control. The rules are as follows:

public: the response may be cached by both the client and proxy servers
private: the response may be cached only by the client, not by proxy servers
no-cache: forbids forced caching (the client must revalidate with the server)
no-store: disables both forced and comparison caching
must-revalidate / proxy-revalidate: once the cached content expires, the request must be revalidated with the server/proxy
max-age=xxx: the cached content expires after xxx seconds
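As a minimal sketch of how a client could apply these rules (the names `isFresh` and `storedAt` are my own, not from the project), here is the precedence the text describes: no-cache/no-store forbid reuse, max-age wins over Expires, and Expires is the HTTP/1.0 fallback:

```javascript
// Decide whether a locally cached response is still fresh.
// headers: response headers (lowercase keys), storedAt/now: ms timestamps.
function isFresh(headers, storedAt, now) {
    const cacheControl = headers['cache-control'] || ''
    // no-cache / no-store forbid serving from the forced cache
    if (cacheControl.includes('no-cache') || cacheControl.includes('no-store')) {
        return false
    }
    // HTTP/1.1 max-age takes precedence over HTTP/1.0 Expires
    const match = cacheControl.match(/max-age=(\d+)/)
    if (match) {
        return (now - storedAt) / 1000 < Number(match[1])
    }
    if (headers['expires']) {
        return now < Date.parse(headers['expires'])
    }
    return false
}
```

With `private,max-age=30`, the cached copy is fresh for 30 seconds after it was stored, matching the 30-second lifetime set in the server code above.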

Comparison caching

Last-Modified / If-Modified-Since

Last-Modified is a response header carried by the server that gives the resource's last update time. If-Modified-Since is a request header carried by the client: when the browser requests a resource it has fetched before, it sends this header, and its value is the Last-Modified value the server returned last time.

ETag / If-None-Match

These two headers serve the same purpose as the pair above: validating the resource. They are designed to solve some problems with the previous two headers, such as:

1. The file timestamps on different servers in a cluster may differ.
2. A file may have been rewritten without its content actually changing.
3. Last-Modified has one-second precision, so a modification within the same second goes undetected.

An ETag is a tag for the resource: if the resource does not change, the tag does not change, which solves the three problems above. But while ETag fixes those issues, it introduces a new one: computing an ETag from the file contents costs extra I/O and time. So it is not a complete replacement for Last-Modified; choose between them based on actual needs. ETag algorithms also differ across real projects; my example simply uses mtime.

Let's talk about compression

Compression is negotiated with two headers: the client sends Accept-Encoding to declare which algorithms it supports, and the server replies with Content-Encoding to say which one it applied.

    // Handle compression
    handleZlib(req, res) {
        let acceptEncoding = req.headers['accept-encoding']
        if (/\bgzip\b/g.test(acceptEncoding)) {
            res.setHeader('Content-Encoding', 'gzip')
            // zlib is a core Node module
            return zlib.createGzip()
        } else if (/\bdeflate\b/g.test(acceptEncoding)) {
            res.setHeader('Content-Encoding', 'deflate')
            return zlib.createDeflate()
        } else {
            return null
        }
    }

Let's talk about resumable uploads

The idea behind resumable uploads is to use the Range header to tell the server which byte range of the file this request carries. Of course, different scenarios handle resumption differently; this is just a demonstration for this simple case. The front-end logic is:

1. Get the file the user wants to upload.
2. Slice it and take the first chunk.
3. Call the backend interface to upload that chunk.
4. After the interface reports success, slice off and upload the next chunk.
5. Send a Range header with every upload describing the chunk's byte range.

Below are the file-slicing and XHR upload code; the full code is in the project at src/template/list.html (it uses the Handlebars template engine).

    if (end > file.size) {
        end = file.size
    }
    var blob = file.slice(start, end)
    var formData = new FormData()
    formData.append('filechunk', blob)
    formData.append('filename', file.name)
    // Add the Range header
    var range = 'bytes=' + start + '-' + end
    xhr.setRequestHeader('Range', range)
    // Send
    xhr.send(formData)
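The slicing loop driving steps 2–4 can be sketched as a small helper (the name `nextRange` is assumed, not from list.html): given the current offset, it clamps the chunk end to the file size, just like the `if (end > file.size)` check above, and reports when the last chunk has been reached.

```javascript
// Compute the next chunk's byte range for a file of fileSize bytes.
function nextRange(start, chunkSize, fileSize) {
    let end = start + chunkSize
    if (end > fileSize) end = fileSize // clamp the final chunk
    return { start: start, end: end, done: end >= fileSize }
}
```

Each returned range becomes one `file.slice(start, end)` call and one `bytes=start-end` Range header.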

The backend logic is: (1) get the file name and the temporary file path from the form data; (2) read the chunk's start position from the Range header; (3) if this is the first chunk, delete any existing file with the same name; (4) append the chunk's contents to the target file.

let path = require('path')
let fs = require('fs')

function handleFile(req, res, fields, files, filepath) {
    // Get the file name
    let name = fields.filename[0]
    // File read path (temp file written by the multiparty plugin)
    let rdPath = files.filechunk[0].path
    // File write path
    let wsPath = path.join(filepath, name)
    // Determine the position of the uploaded chunk from the Range header
    let range = req.headers['range']
    let start = 0
    if (range) {
        start = range.split('=')[1].split('-')[0]
    }
    // Read the chunk's contents and append them to the local file
    let buf = fs.readFileSync(rdPath)
    fs.exists(wsPath, function (exists) {
        // If this is the first chunk, delete the file with the same name under public
        if (exists && start == 0) {
            fs.unlink(wsPath, function () {
                fs.writeFileSync(wsPath, buf, { flag: 'a+' })
                res.end()
            })
        } else {
            fs.writeFileSync(wsPath, buf, { flag: 'a+' })
            res.end()
        }
    })
}
module.exports = handleFile
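The offset extraction above can be isolated as a tiny helper (the name `rangeStart` is my own, for illustration): the client sends `Range: bytes=start-end`, and the server recovers the numeric start to decide whether this request is the first chunk.

```javascript
// Parse the start offset out of a "bytes=start-end" Range header.
// A missing header is treated as the beginning of the file.
function rangeStart(rangeHeader) {
    if (!rangeHeader) return 0
    return Number(rangeHeader.split('=')[1].split('-')[0])
}
```

Converting to a Number here also avoids the loose `start == 0` string comparison the handler relies on.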

My handling here is fairly rough, and real project requirements may not be this simple, but they all do their processing based on the Range header. I hope this description helps you.

Conclusion

This is the end of the article. All the code snippets quoted above only show the processing logic; if you are interested, you can check out the full code on GitHub.