
preface

Going from "getting started" to "proficient" in Node.js is a flag I planted long ago; it has been standing since last year. The Awakening of Insects has passed, the wind is warm and the clouds are clearing, and last year's flag can either be quietly taken down or finally achieved. I choose the latter.

At the very least, my exploration of Node.js should reach a satisfying milestone this year.

Specific goals

This beautiful spring day is perfect for reading by the window: dappled light and shadow, and a glass of water at hand. Recently I looked at the zlib compression API and found it hard to understand and use, so I picked a few APIs that seemed interesting for further exploration.

A time will come to ride the wind and cleave the waves, and set my cloud-high sail to cross the sea.

Zlib compression

Take a look: how much knowledge is packed into a single compression/decompression function?

Implementing file compression and decompression

const zlib = require('zlib');
const { createReadStream, createWriteStream } = require('fs');
const { pipeline } = require('stream');

/** Error handler for the compression pipeline */
const onError = (err) => {
  if (err) {
    console.error('An error occurred:', err);
    process.exitCode = 1;
  }
};

/**
 * Compress or decompress a file.
 * type 'zip' performs compression; type 'ungzip' performs decompression.
 */
function zipFunc(source, destination, type) {
  const gzip = zlib.createGzip();
  const ungzip = zlib.createGunzip();
  switch (type) {
    case 'zip':
      return pipeline(source, gzip, destination, onError);
    case 'ungzip':
      return source.pipe(ungzip).pipe(destination);
    default:
      return pipeline(source, gzip, destination, onError);
  }
}

// Compression
const source = createReadStream('./zlib/input.txt');
const destination = createWriteStream('./zlib/input.txt.gz');
zipFunc(source, destination, 'zip');

// Decompression (run this as a separate step; note that `source` and
// `destination` cannot simply be redeclared with const in the same file)
// const source = createReadStream('./zlib/input.txt.gz');
// const destination = createWriteStream('./zlib/input.txt');
// zipFunc(source, destination, 'ungzip');
  • When compressing, the input.txt.gz file is generated in the zlib directory.
  • When decompressing, the input.txt file is generated in the zlib directory.

pipeline

The stream.pipeline() method pipes between streams and generators, forwarding errors, cleaning up properly, and providing a callback when the pipeline completes.

A hard-to-understand feature description?

I read the function description above several times without really understanding it; the concept of a pipeline in particular remained vague. I searched for articles on streams and found several good ones, including this passage from "A Clean Stream in Node.js: Understanding the Basic Concepts of Stream":

A pipe is a mechanism that takes the output of one stream as the input of another. It is typically used to take data from one stream and pass that stream's output on to another stream. There is no limit on the number of pipe operations; in other words, pipes are used to process stream data step by step.

Therefore, stream.pipeline() provides a pipeline to process the data stream during file compression: multiple streams can flow through the pipeline, and a callback fires when the pipeline's work is done.

usage

stream.pipeline(source[, ...transforms], destination, callback)

parameters

  • source: a readable stream
  • ...transforms: duplex (readable and writable) streams
  • destination: a writable stream
  • callback: called when the pipeline completes

pipe

The readable.pipe() method attaches a writable stream to a readable stream, causing the readable to switch automatically into flowing mode and push all of its data to the attached writable. In short, the pipe() method reads data from a readable stream and writes it to a writable stream.

usage

readable.pipe(destination[, options])
Copy the code

The sample

You can look at the official example, which is straightforward and pipes all the data from readable into a file named file.txt:

const fs = require('fs');
const readable = getReadableStreamSomehow();
const writable = fs.createWriteStream('file.txt');
// All data for the readable stream goes to 'file.txt'.
readable.pipe(writable);

It is also possible to bind multiple Writable streams to a single Readable stream.

The readable.pipe() method returns a reference to the destination stream, so chains of piped streams can be set up:

const fs = require('fs');
const zlib = require('zlib');
const r = fs.createReadStream('file.txt');
const z = zlib.createGzip();
const w = fs.createWriteStream('file.txt.gz');
r.pipe(z).pipe(w);

Streams

What is a stream?

Take a look at the introduction on the official website.

Streams are an abstract interface for working with streaming data in Node.js. The stream module provides an API for implementing the stream interface.

Streams can be readable, writable, or both. All streams are instances of EventEmitter.

I read it, and felt as if I both understood and didn't. But I did find a great article, "How to Fix a Node.js Stream".

That article's introduction to streams helped things click for me:

A stream is an abstract data structure. Like an array or a string, a stream is a collection of data.

The difference is that a stream can emit a small amount of data at a time and does not need to hold everything in memory.

For example, a Request/Response object that makes an HTTP request to the server is a Stream.

In summary, streams split a file resource into smaller chunks for processing, which reduces the pressure on the server.

Now that you know what streams do, you can see why file compression uses the methods provided by the stream module. If you want to learn more about streams, I recommend reading "How to Fix a Node.js Stream", which is detailed and easy to read.

Compress HTTP requests and responses

Gzip, Deflate and BR

  • Gzip is a data format; by default it compresses the data portion with Deflate, which is currently the only algorithm it uses;
  • Deflate is a lossless data compression algorithm that combines the LZ77 algorithm with Huffman coding;
  • Brotli compresses data with the LZ77 algorithm, Huffman coding, and second-order context modeling; compared with the other algorithms, it generally achieves higher compression ratios.

Local experiments with examples from the official website

Based on the example provided on the official website, I wrote the HTTP response body to different files. You can see that the compressed and uncompressed files differ in size.

The sample code

// Example client request
const zlib = require('zlib');
const http = require('http');
const fs = require('fs');
const { pipeline } = require('stream');

const request = http.get({
  host: 'example.com',
  path: '/',
  port: 80,
  headers: { 'Accept-Encoding': 'br,gzip,deflate' }
});

request.on('response', (response) => {
  console.log(response.headers['content-encoding'], 'headers');
  const output = fs.createWriteStream('./zlib/example.com_index_default.html');
  // const output = fs.createWriteStream('./zlib/example.com_index_gzip.html');
  // const output = fs.createWriteStream('./zlib/example.com_index_deflate.html');

  const onError = (err) => {
    if (err) {
      console.error('An error occurred:', err);
      process.exitCode = 1;
    }
  };

  // Decompress according to the Content-Encoding response header
  switch (response.headers['content-encoding']) {
    case 'br':
      pipeline(response, zlib.createBrotliDecompress(), output, onError);
      break;
    case 'gzip':
      pipeline(response, zlib.createGunzip(), output, onError);
      break;
    case 'deflate':
      pipeline(response, zlib.createInflate(), output, onError);
      break;
    default:
      pipeline(response, output, onError);
      break;
  }
});
  • The uncompressed file is about 1.2 KB;
  • The compressed file is a bit over 600 B.

summary

As for HTTP request and response compression, I still need to study and practice it in real application scenarios; simply reproducing the official example does not feel like full mastery.

conclusion

Although my learning over the past month was fragmented, the accumulated techniques have gradually formed a learning system of my own, which helps me master new technology faster and better.

Next comes the practice stage. Although there is no Node.js development scenario at work, I can create projects of my own; I already have a ready-made mini program, so I can build an article-management back-end system for it.