A Stream is an abstract interface implemented by many objects in Node. For example, a request to an HTTP server is a stream, and stdout is a stream. Streams are readable, writable, or both. The stream idea was first introduced in the early days of Unix, and decades of practice have shown that it makes it simple to compose small pieces into large systems. In Unix, streams are connected with the pipe operator "|"; in Node, the built-in stream module is used by many core modules and third-party modules. As in Unix, the main operation on a Node stream is .pipe(), which lets users balance reads against writes through a backpressure mechanism. Streams give developers a single, reusable abstract interface for controlling this read/write balance. Note that a TCP connection is both a readable stream and a writable stream, whereas an HTTP request object is only a readable stream and an HTTP response object is only a writable stream. Streams deliver data as Buffers by default, unless a different encoding is set. Here is an example:

    var http = require('http');

    var server = http.createServer(function (req, res) {
      res.writeHead(200, { 'Content-Type': 'text/plain' });
      res.end("Hello,大熊!");
    });

    server.listen(8888);
    console.log('HTTP server running on port 8888...');

The response shows garbled characters because no character set is specified. Setting it, for example to "UTF-8", fixes the problem:


    var http = require('http');

    var server = http.createServer(function (req, res) {
      res.writeHead(200, { 'Content-Type': 'text/plain;charset=utf-8' });
      res.end("Hello,大熊!");
    });

    server.listen(8888);
    console.log('HTTP server running on port 8888...');
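As noted above, an HTTP request object is a readable stream and the response object is a writable stream, and a readable stream hands out Buffer chunks unless an encoding is set. The following minimal sketch is not from the original article (the port 8889 and the echo behaviour are illustrative assumptions); it simply shows both points in one place:

    var http = require('http');

    // Hypothetical echo server: the request body (a readable stream) is piped
    // straight back out through the response (a writable stream).
    var echo = http.createServer(function (req, res) {
      res.writeHead(200, { 'Content-Type': 'text/plain;charset=utf-8' });

      // By default 'data' delivers Buffer chunks; setEncoding() switches the
      // stream to deliver strings in the given encoding instead.
      req.setEncoding('utf8');
      req.on('data', function (chunk) {
        console.log('received %d characters', chunk.length);
      });

      req.pipe(res); // .pipe() handles the read/write balancing for us
    });

    echo.listen(8889);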

I/O in Node is asynchronous, so reading from and writing to the disk or the network require callbacks to receive the data. Here is a sample file-download server:


    var http = require('http');
    var fs = require('fs');

    var server = http.createServer(function (req, res) {
      fs.readFile(__dirname + '/data.txt', function (err, data) {
        res.end(data);
      });
    });

    server.listen(8888);

The code does what it needs to do, but the server has to buffer the entire file in memory before sending it. If data.txt is very large and there are many concurrent requests, a lot of memory is wasted, and because users receive nothing until the whole file has been read into memory, the experience is poor. Fortunately, both arguments (req, res) are streams, so we can use fs.createReadStream() instead of fs.readFile():


    var http = require('http');
    var fs = require('fs');

    var server = http.createServer(function (req, res) {
      var stream = fs.createReadStream(__dirname + '/data.txt');
      stream.pipe(res);
    });

    server.listen(8888);
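One detail worth adding to the example above (a hedged sketch, not part of the original code): fs.createReadStream() emits an 'error' event if data.txt cannot be opened, and .pipe() does not forward that error to the response, so without a handler the error is thrown as an uncaught exception:

    var http = require('http');
    var fs = require('fs');

    var server = http.createServer(function (req, res) {
      var stream = fs.createReadStream(__dirname + '/data.txt');

      // Handle read errors on the source stream; .pipe() will not do it for us.
      stream.on('error', function (err) {
        res.statusCode = 500;
        res.end('Error reading file: ' + err.code);
      });

      stream.pipe(res);
    });

    server.listen(8888);

Since .pipe() returns its destination stream, calls can also be chained, for example through a Transform stream such as zlib.createGzip().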

The .pipe() method listens for the 'data' and 'end' events of fs.createReadStream(), so data.txt no longer has to be buffered in its entirety: a chunk can be sent to the client as soon as it has been read. Another benefit of .pipe() is that it handles the read/write imbalance that arises when the client's latency is very high. There are five basic kinds of stream: Readable, Writable, Transform, Duplex, and "Classic" (please refer to the API documentation for details).

2. Examples

Streams are used when the data to be processed cannot be held in memory all at once, or when it is more efficient to read and process the data at the same time. NodeJS provides these operations through its various stream types. Taking a large-file copy program as an example, we can create a read-only stream for the data source as follows:


    var fs = require('fs');
    var rs = fs.createReadStream(pathname);

    rs.on('data', function (chunk) {
      doSomething(chunk);
    });

    rs.on('end', function () {
      cleanUp();
    });

The 'data' events in this code fire continuously, whether or not doSomething can keep up with them. The code can be modified as follows to solve this problem:

    var fs = require('fs');
    var rs = fs.createReadStream(src);

    rs.on('data', function (chunk) {
      rs.pause();
      doSomething(chunk, function () {
        rs.resume();
      });
    });

    rs.on('end', function () {
      cleanUp();
    });

A callback has been added to doSomething so that we pause reading before processing a chunk and resume reading once processing is done. In addition, we can create a write-only stream for the data target as follows:


    var fs = require('fs');
    var rs = fs.createReadStream(src);
    var ws = fs.createWriteStream(dst);

    rs.on('data', function (chunk) {
      ws.write(chunk);
    });

    rs.on('end', function () {
      ws.end();
    });

With doSomething replaced by a write to the write-only stream, the code above looks like a file copy program. However, it still has the problem mentioned earlier: if the write side cannot keep up with the read side, the writable stream's internal buffer will keep growing. We can use the return value of .write() to tell whether the incoming data was written to the target or merely buffered, and use the 'drain' event to learn when the writable stream has flushed its buffer and is ready for the next piece of data:


    var fs = require('fs');
    var rs = fs.createReadStream(src);
    var ws = fs.createWriteStream(dst);

    rs.on('data', function (chunk) {
      if (ws.write(chunk) === false) {
        rs.pause();
      }
    });

    rs.on('end', function () {
      ws.end();
    });

    ws.on('drain', function () {
      rs.resume();
    });

This completes the transfer of data from the read-only stream to the write-only stream, including the flow control that keeps the writable stream's buffer from overflowing. Because this pattern is needed so often (the large-file copy above, for example), NodeJS provides the .pipe() method to do it, and its internal implementation is similar to the code above. The following, fuller example copies a large file while reporting progress:


    var fs = require('fs'),
        path = require('path'),
        out = process.stdout;

    var filePath = '/bb/bigbear.mkv';

    var readStream = fs.createReadStream(filePath);
    var writeStream = fs.createWriteStream('file.mkv');

    var stat = fs.statSync(filePath);
    var totalSize = stat.size;
    var passedLength = 0;
    var lastSize = 0;
    var startTime = Date.now();

    readStream.on('data', function (chunk) {
      passedLength += chunk.length;
      if (writeStream.write(chunk) === false) {
        readStream.pause();
      }
    });

    readStream.on('end', function () {
      writeStream.end();
    });

    writeStream.on('drain', function () {
      readStream.resume();
    });

    // Report progress every 500ms: completed size, percentage and copy speed.
    setTimeout(function show() {
      var percent = Math.ceil((passedLength / totalSize) * 100);
      var size = Math.ceil(passedLength / 1000000);
      var diff = size - lastSize;
      lastSize = size;
      out.clearLine();
      out.cursorTo(0);
      out.write('Completed ' + size + 'MB, ' + percent + '%, speed: ' + diff * 2 + 'MB/s');
      if (passedLength < totalSize) {
        setTimeout(show, 500);
      } else {
        var endTime = Date.now();
        console.log();
        console.log('Total time: ' + (endTime - startTime) / 1000 + ' seconds.');
      }
    }, 500);

We added a recursive setTimeout (a setInterval would also work) as an observer: every 500ms it writes the completed size, the percentage, and the copy speed to the console, and when the copy finishes it reports the total time taken. To summarize:

(1) Understand the concept of a stream.
(2) Be proficient with the relevant stream APIs.
(3) Pay attention to details, such as copying large files chunk by chunk rather than all at once.
(4) Use .pipe() where possible (see the closing sketch below).
(5) Once again, a TCP connection is both a readable stream and a writable stream, while HTTP is different: an HTTP request object is a readable stream and an HTTP response object is a writable stream.
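As a closing sketch (an addition, not part of the original article), here is the same large-file copy expressed with .pipe(), which performs the pause()/resume() bookkeeping internally; an extra 'data' listener can still be attached for progress tracking. The file paths are the same hypothetical ones used in the example above:

    var fs = require('fs');

    var filePath = '/bb/bigbear.mkv';
    var readStream = fs.createReadStream(filePath);
    var writeStream = fs.createWriteStream('file.mkv');

    var totalSize = fs.statSync(filePath).size;
    var passedLength = 0;

    // Additional 'data' listeners still receive chunks alongside .pipe(),
    // so progress can be tracked without manual pause/resume logic.
    readStream.on('data', function (chunk) {
      passedLength += chunk.length;
    });

    readStream.on('end', function () {
      console.log('Copied ' + passedLength + ' of ' + totalSize + ' bytes.');
    });

    // .pipe() wires up 'data', 'end' and 'drain' for us and balances reads
    // against writes, just like the manual version above.
    readStream.pipe(writeStream);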