Stream#
Source Code: lib/stream.js
A stream is an abstract interface for working with streaming data in Node.js. The `node:stream` module provides an API for implementing the stream interface.

There are many stream objects provided by Node.js. For instance, a request to an HTTP server and `process.stdout` are both stream instances.

Streams can be readable, writable, or both. All streams are instances of `EventEmitter`.

To access the `node:stream` module:

```js
const stream = require('node:stream');
```

The `node:stream` module is useful for creating new types of stream instances. It is usually not necessary to use the `node:stream` module to consume streams.
Organization of this document#
This document contains two primary sections and a third section for notes. The first section explains how to use existing streams within an application. The second section explains how to create new types of streams.
Types of streams#
There are four fundamental stream types within Node.js:
- `Writable`: streams to which data can be written (for example, `fs.createWriteStream()`).
- `Readable`: streams from which data can be read (for example, `fs.createReadStream()`).
- `Duplex`: streams that are both `Readable` and `Writable` (for example, `net.Socket`).
- `Transform`: `Duplex` streams that can modify or transform the data as it is written and read (for example, `zlib.createDeflate()`).
Additionally, this module includes the utility functions `stream.duplexPair()`, `stream.pipeline()`, `stream.finished()`, `stream.Readable.from()`, and `stream.addAbortSignal()`.
Streams Promises API#
The `stream/promises` API provides an alternative set of asynchronous utility functions for streams that return `Promise` objects rather than using callbacks. The API is accessible via `require('node:stream/promises')` or `require('node:stream').promises`.
`stream.pipeline(source[, ...transforms], destination[, options])`#

`stream.pipeline(streams[, options])`#
History
Version | Changes |
---|---|
v18.0.0, v17.2.0, v16.14.0 | Add the |
v15.0.0 | Added in: v15.0.0 |
- `streams` <Stream[]> | <Iterable[]> | <AsyncIterable[]> | <Function[]>
- `source` <Stream> | <Iterable> | <AsyncIterable> | <Function>
  - Returns: <Promise> | <AsyncIterable>
- `...transforms` <Stream> | <Function>
  - `source` <AsyncIterable>
  - Returns: <Promise> | <AsyncIterable>
- `destination` <Stream> | <Function>
  - `source` <AsyncIterable>
  - Returns: <Promise> | <AsyncIterable>
- `options` <Object> Pipeline options
  - `signal` <AbortSignal>
  - `end` <boolean> End the destination stream when the source stream ends. Transform streams are always ended, even if this value is `false`. Default: `true`.
- Returns: <Promise> Fulfills when the pipeline is complete.
```js
const { pipeline } = require('node:stream/promises');
const fs = require('node:fs');
const zlib = require('node:zlib');

async function run() {
  await pipeline(
    fs.createReadStream('archive.tar'),
    zlib.createGzip(),
    fs.createWriteStream('archive.tar.gz'),
  );
  console.log('Pipeline succeeded.');
}

run().catch(console.error);
```
```js
import { pipeline } from 'node:stream/promises';
import { createReadStream, createWriteStream } from 'node:fs';
import { createGzip } from 'node:zlib';

await pipeline(
  createReadStream('archive.tar'),
  createGzip(),
  createWriteStream('archive.tar.gz'),
);
console.log('Pipeline succeeded.');
```
To use an `AbortSignal`, pass it inside an options object, as the last argument. When the signal is aborted, `destroy` will be called on the underlying pipeline, with an `AbortError`.
```js
const { pipeline } = require('node:stream/promises');
const fs = require('node:fs');
const zlib = require('node:zlib');

async function run() {
  const ac = new AbortController();
  const signal = ac.signal;

  setImmediate(() => ac.abort());
  await pipeline(
    fs.createReadStream('archive.tar'),
    zlib.createGzip(),
    fs.createWriteStream('archive.tar.gz'),
    { signal },
  );
}

run().catch(console.error); // AbortError
```
```js
import { pipeline } from 'node:stream/promises';
import { createReadStream, createWriteStream } from 'node:fs';
import { createGzip } from 'node:zlib';

const ac = new AbortController();
const { signal } = ac;
setImmediate(() => ac.abort());
try {
  await pipeline(
    createReadStream('archive.tar'),
    createGzip(),
    createWriteStream('archive.tar.gz'),
    { signal },
  );
} catch (err) {
  console.error(err); // AbortError
}
```
The `pipeline` API also supports async generators:
```js
const { pipeline } = require('node:stream/promises');
const fs = require('node:fs');

async function run() {
  await pipeline(
    fs.createReadStream('lowercase.txt'),
    async function* (source, { signal }) {
      source.setEncoding('utf8');  // Work with strings rather than `Buffer`s.
      for await (const chunk of source) {
        yield await processChunk(chunk, { signal });
      }
    },
    fs.createWriteStream('uppercase.txt'),
  );
  console.log('Pipeline succeeded.');
}

run().catch(console.error);
```
```js
import { pipeline } from 'node:stream/promises';
import { createReadStream, createWriteStream } from 'node:fs';

await pipeline(
  createReadStream('lowercase.txt'),
  async function* (source, { signal }) {
    source.setEncoding('utf8');  // Work with strings rather than `Buffer`s.
    for await (const chunk of source) {
      yield await processChunk(chunk, { signal });
    }
  },
  createWriteStream('uppercase.txt'),
);
console.log('Pipeline succeeded.');
```
Remember to handle the `signal` argument passed into the async generator, especially in the case where the async generator is the source for the pipeline (i.e. the first argument), or the pipeline will never complete.
```js
const { pipeline } = require('node:stream/promises');
const fs = require('node:fs');

async function run() {
  await pipeline(
    async function* ({ signal }) {
      await someLongRunningfn({ signal });
      yield 'asd';
    },
    fs.createWriteStream('uppercase.txt'),
  );
  console.log('Pipeline succeeded.');
}

run().catch(console.error);
```
```js
import { pipeline } from 'node:stream/promises';
import fs from 'node:fs';

await pipeline(
  async function* ({ signal }) {
    await someLongRunningfn({ signal });
    yield 'asd';
  },
  fs.createWriteStream('uppercase.txt'),
);
console.log('Pipeline succeeded.');
```
The `pipeline` API also provides a callback version:
`stream.finished(stream[, options])`#
History
Version | Changes |
---|---|
v19.5.0, v18.14.0 | Added support for |
v19.1.0, v18.13.0 | The |
v15.0.0 | Added in: v15.0.0 |
- `stream` <Stream> | <ReadableStream> | <WritableStream> A readable and/or writable stream/webstream.
- `options` <Object>
  - `error` <boolean> | <undefined>
  - `readable` <boolean> | <undefined>
  - `writable` <boolean> | <undefined>
  - `signal` <AbortSignal> | <undefined>
  - `cleanup` <boolean> | <undefined> If `true`, removes the listeners registered by this function before the promise is fulfilled. Default: `false`.
- Returns: <Promise> Fulfills when the stream is no longer readable or writable.
```js
const { finished } = require('node:stream/promises');
const fs = require('node:fs');

const rs = fs.createReadStream('archive.tar');

async function run() {
  await finished(rs);
  console.log('Stream is done reading.');
}

run().catch(console.error);
rs.resume(); // Drain the stream.
```
```js
import { finished } from 'node:stream/promises';
import { createReadStream } from 'node:fs';

const rs = createReadStream('archive.tar');

async function run() {
  await finished(rs);
  console.log('Stream is done reading.');
}

run().catch(console.error);
rs.resume(); // Drain the stream.
```
The `finished` API also provides a callback version.
`stream.finished()` leaves dangling event listeners (in particular `'error'`, `'end'`, `'finish'` and `'close'`) after the returned promise is resolved or rejected. The reason for this is so that unexpected `'error'` events (due to incorrect stream implementations) do not cause unexpected crashes. If this is unwanted behavior then `options.cleanup` should be set to `true`:
```js
await finished(rs, { cleanup: true });
```
Object mode#
All streams created by Node.js APIs operate exclusively on strings, <Buffer>, <TypedArray> and <DataView> objects:

- `Strings` and `Buffers` are the most common types used with streams.
- `TypedArray` and `DataView` let you handle binary data with types like `Int32Array` or `Uint8Array`. When you write a TypedArray or DataView to a stream, Node.js processes the raw bytes.
It is possible, however, for stream implementations to work with other types of JavaScript values (with the exception of `null`, which serves a special purpose within streams). Such streams are considered to operate in "object mode".
Stream instances are switched into object mode using the `objectMode` option when the stream is created. Attempting to switch an existing stream into object mode is not safe.
Buffering#
Both `Writable` and `Readable` streams will store data in an internal buffer.
The amount of data potentially buffered depends on the `highWaterMark` option passed into the stream's constructor. For normal streams, the `highWaterMark` option specifies a total number of bytes. For streams operating in object mode, the `highWaterMark` specifies a total number of objects. For streams operating on (but not decoding) strings, the `highWaterMark` specifies a total number of UTF-16 code units.
Data is buffered in `Readable` streams when the implementation calls `stream.push(chunk)`. If the consumer of the stream does not call `stream.read()`, the data will sit in the internal queue until it is consumed.
Once the total size of the internal read buffer reaches the threshold specified by `highWaterMark`, the stream will temporarily stop reading data from the underlying resource until the data currently buffered can be consumed (that is, the stream will stop calling the internal `readable._read()` method that is used to fill the read buffer).
Data is buffered in `Writable` streams when the `writable.write(chunk)` method is called repeatedly. While the total size of the internal write buffer is below the threshold set by `highWaterMark`, calls to `writable.write()` will return `true`. Once the size of the internal buffer reaches or exceeds the `highWaterMark`, `false` will be returned.
A key goal of the `stream` API, particularly the `stream.pipe()` method, is to limit the buffering of data to acceptable levels such that sources and destinations of differing speeds will not overwhelm the available memory.
The `highWaterMark` option is a threshold, not a limit: it dictates the amount of data that a stream buffers before it stops asking for more data. It does not enforce a strict memory limitation in general. Specific stream implementations may choose to enforce stricter limits but doing so is optional.
Because `Duplex` and `Transform` streams are both `Readable` and `Writable`, each maintains two separate internal buffers used for reading and writing, allowing each side to operate independently of the other while maintaining an appropriate and efficient flow of data. For example, `net.Socket` instances are `Duplex` streams whose `Readable` side allows consumption of data received from the socket and whose `Writable` side allows writing data to the socket. Because data may be written to the socket at a faster or slower rate than data is received, each side should operate (and buffer) independently of the other.
The mechanics of the internal buffering are an internal implementation detail and may be changed at any time. However, for certain advanced implementations, the internal buffers can be retrieved using `writable.writableBuffer` or `readable.readableBuffer`. Use of these undocumented properties is discouraged.
API for stream consumers#
Almost all Node.js applications, no matter how simple, use streams in some manner. The following is an example of using streams in a Node.js application that implements an HTTP server:
```js
const http = require('node:http');

const server = http.createServer((req, res) => {
  // `req` is an http.IncomingMessage, which is a readable stream.
  // `res` is an http.ServerResponse, which is a writable stream.

  let body = '';
  // Get the data as utf8 strings.
  // If an encoding is not set, Buffer objects will be received.
  req.setEncoding('utf8');

  // Readable streams emit 'data' events once a listener is added.
  req.on('data', (chunk) => {
    body += chunk;
  });

  // The 'end' event indicates that the entire body has been received.
  req.on('end', () => {
    try {
      const data = JSON.parse(body);
      // Write back something interesting to the user:
      res.write(typeof data);
      res.end();
    } catch (er) {
      // uh oh! bad json!
      res.statusCode = 400;
      return res.end(`error: ${er.message}`);
    }
  });
});

server.listen(1337);

// $ curl localhost:1337 -d "{}"
// object
// $ curl localhost:1337 -d "\"foo\""
// string
// $ curl localhost:1337 -d "not json"
// error: Unexpected token 'o', "not json" is not valid JSON
```
`Writable` streams (such as `res` in the example) expose methods such as `write()` and `end()` that are used to write data onto the stream.
`Readable` streams use the `EventEmitter` API for notifying application code when data is available to be read off the stream. That available data can be read from the stream in multiple ways.
Both `Writable` and `Readable` streams use the `EventEmitter` API in various ways to communicate the current state of the stream.
`Duplex` and `Transform` streams are both `Writable` and `Readable`.
Applications that are either writing data to or consuming data from a stream are not required to implement the stream interfaces directly and will generally have no reason to call `require('node:stream')`.
Developers wishing to implement new types of streams should refer to the section API for stream implementers.
Writable streams#
Writable streams are an abstraction for a destination to which data is written.
Examples of `Writable` streams include:
- HTTP requests, on the client
- HTTP responses, on the server
- fs write streams
- zlib streams
- crypto streams
- TCP sockets
- child process stdin
- `process.stdout`, `process.stderr`
Some of these examples are actually `Duplex` streams that implement the `Writable` interface.
All `Writable` streams implement the interface defined by the `stream.Writable` class.
While specific instances of `Writable` streams may differ in various ways, all `Writable` streams follow the same fundamental usage pattern as illustrated in the example below:
```js
const myStream = getWritableStreamSomehow();
myStream.write('some data');
myStream.write('some more data');
myStream.end('done writing data');
```
Class: `stream.Writable`#

Event: `'close'`#
History
Version | Changes |
---|---|
v10.0.0 | Add |
v0.9.4 | Added in: v0.9.4 |
The `'close'` event is emitted when the stream and any of its underlying resources (a file descriptor, for example) have been closed. The event indicates that no more events will be emitted, and no further computation will occur.

A `Writable` stream will always emit the `'close'` event if it is created with the `emitClose` option.
Event: `'drain'`#
If a call to `stream.write(chunk)` returns `false`, the `'drain'` event will be emitted when it is appropriate to resume writing data to the stream.
```js
// Write the data to the supplied writable stream one million times.
// Be attentive to back-pressure.
function writeOneMillionTimes(writer, data, encoding, callback) {
  let i = 1000000;
  write();
  function write() {
    let ok = true;
    do {
      i--;
      if (i === 0) {
        // Last time!
        writer.write(data, encoding, callback);
      } else {
        // See if we should continue, or wait.
        // Don't pass the callback, because we're not done yet.
        ok = writer.write(data, encoding);
      }
    } while (i > 0 && ok);
    if (i > 0) {
      // Had to stop early!
      // Write some more once it drains.
      writer.once('drain', write);
    }
  }
}
```
Event: `'error'`#
The `'error'` event is emitted if an error occurred while writing or piping data. The listener callback is passed a single `Error` argument when called.

The stream is closed when the `'error'` event is emitted unless the `autoDestroy` option was set to `false` when creating the stream.

After `'error'`, no further events other than `'close'` should be emitted (including `'error'` events).
Event: `'finish'`#
The `'finish'` event is emitted after the `stream.end()` method has been called, and all data has been flushed to the underlying system.
```js
const writer = getWritableStreamSomehow();
for (let i = 0; i < 100; i++) {
  writer.write(`hello, #${i}!\n`);
}
writer.on('finish', () => {
  console.log('All writes are now complete.');
});
writer.end('This is the end\n');
```
Event: `'pipe'`#

- `src` <stream.Readable> source stream that is piping to this writable
The `'pipe'` event is emitted when the `stream.pipe()` method is called on a readable stream, adding this writable to its set of destinations.
```js
const writer = getWritableStreamSomehow();
const reader = getReadableStreamSomehow();
writer.on('pipe', (src) => {
  console.log('Something is piping into the writer.');
  assert.equal(src, reader);
});
reader.pipe(writer);
```
Event: `'unpipe'`#

- `src` <stream.Readable> The source stream that unpiped this writable
The `'unpipe'` event is emitted when the `stream.unpipe()` method is called on a `Readable` stream, removing this `Writable` from its set of destinations.

This is also emitted in case this `Writable` stream emits an error when a `Readable` stream pipes into it.
```js
const writer = getWritableStreamSomehow();
const reader = getReadableStreamSomehow();
writer.on('unpipe', (src) => {
  console.log('Something has stopped piping into the writer.');
  assert.equal(src, reader);
});
reader.pipe(writer);
reader.unpipe(writer);
```
`writable.cork()`#

The `writable.cork()` method forces all written data to be buffered in memory. The buffered data will be flushed when either the `stream.uncork()` or `stream.end()` methods are called.

The primary intent of `writable.cork()` is to accommodate a situation in which several small chunks are written to the stream in rapid succession. Instead of immediately forwarding them to the underlying destination, `writable.cork()` buffers all the chunks until `writable.uncork()` is called, which will pass them all to `writable._writev()`, if present. This prevents a head-of-line blocking situation where data is being buffered while waiting for the first small chunk to be processed. However, use of `writable.cork()` without implementing `writable._writev()` may have an adverse effect on throughput.

See also: `writable.uncork()`, `writable._writev()`.
`writable.destroy([error])`#
History
Version | Changes |
---|---|
v14.0.0 | Work as a no-op on a stream that has already been destroyed. |
v8.0.0 | Added in: v8.0.0 |
Destroy the stream. Optionally emit an `'error'` event, and emit a `'close'` event (unless `emitClose` is set to `false`). After this call, the writable stream has ended and subsequent calls to `write()` or `end()` will result in an `ERR_STREAM_DESTROYED` error.

This is a destructive and immediate way to destroy a stream. Previous calls to `write()` may not have drained, and may trigger an `ERR_STREAM_DESTROYED` error. Use `end()` instead of destroy if data should flush before close, or wait for the `'drain'` event before destroying the stream.
```js
const { Writable } = require('node:stream');

const myStream = new Writable();

const fooErr = new Error('foo error');
myStream.destroy(fooErr);
myStream.on('error', (fooErr) => console.error(fooErr.message)); // foo error
```
```js
const { Writable } = require('node:stream');

const myStream = new Writable();

myStream.destroy();
myStream.on('error', function wontHappen() {});
```
```js
const { Writable } = require('node:stream');

const myStream = new Writable();
myStream.destroy();

myStream.write('foo', (error) => console.error(error.code));
// ERR_STREAM_DESTROYED
```
Once `destroy()` has been called any further calls will be a no-op and no further errors except from `_destroy()` may be emitted as `'error'`.

Implementors should not override this method, but instead implement `writable._destroy()`.
`writable.destroyed`#

Is `true` after `writable.destroy()` has been called.
```js
const { Writable } = require('node:stream');

const myStream = new Writable();

console.log(myStream.destroyed); // false
myStream.destroy();
console.log(myStream.destroyed); // true
```
`writable.end([chunk[, encoding]][, callback])`#
History
Version | Changes |
---|---|
v22.0.0, v20.13.0 | The |
v15.0.0 | The |
v14.0.0 | The |
v10.0.0 | This method now returns a reference to |
v8.0.0 | The |
v0.9.4 | Added in: v0.9.4 |
- `chunk` <string> | <Buffer> | <TypedArray> | <DataView> | <any> Optional data to write. For streams not operating in object mode, `chunk` must be a <string>, <Buffer>, <TypedArray> or <DataView>. For object mode streams, `chunk` may be any JavaScript value other than `null`.
- `encoding` <string> The encoding, if `chunk` is a string
- `callback` <Function> Callback for when the stream is finished.
- Returns: <this>
Calling the `writable.end()` method signals that no more data will be written to the `Writable`. The optional `chunk` and `encoding` arguments allow one final additional chunk of data to be written immediately before closing the stream.

Calling the `stream.write()` method after calling `stream.end()` will raise an error.
```js
// Write 'hello, ' and then end with 'world!'.
const fs = require('node:fs');
const file = fs.createWriteStream('example.txt');
file.write('hello, ');
file.end('world!');
// Writing more now is not allowed!
```
`writable.setDefaultEncoding(encoding)`#
History
Version | Changes |
---|---|
v6.1.0 | This method now returns a reference to |
v0.11.15 | Added in: v0.11.15 |
The `writable.setDefaultEncoding()` method sets the default `encoding` for a `Writable` stream.
`writable.uncork()`#

The `writable.uncork()` method flushes all data buffered since `stream.cork()` was called.

When using `writable.cork()` and `writable.uncork()` to manage the buffering of writes to a stream, defer calls to `writable.uncork()` using `process.nextTick()`. Doing so allows batching of all `writable.write()` calls that occur within a given Node.js event loop phase.
```js
stream.cork();
stream.write('some ');
stream.write('data ');
process.nextTick(() => stream.uncork());
```
If the `writable.cork()` method is called multiple times on a stream, the same number of calls to `writable.uncork()` must be called to flush the buffered data.
```js
stream.cork();
stream.write('some ');
stream.cork();
stream.write('data ');
process.nextTick(() => {
  stream.uncork();
  // The data will not be flushed until uncork() is called a second time.
  stream.uncork();
});
```
See also: `writable.cork()`.

`writable.writable`#

Is `true` if it is safe to call `writable.write()`, which means the stream has not been destroyed, errored, or ended.
`writable.writableAborted`#
History
Version | Changes |
---|---|
v24.0.0 | Marking the API stable. |
v18.0.0, v16.17.0 | Added in: v18.0.0, v16.17.0 |
Returns whether the stream was destroyed or errored before emitting `'finish'`.
`writable.writableEnded`#

Is `true` after `writable.end()` has been called. This property does not indicate whether the data has been flushed; for this use `writable.writableFinished` instead.
`writable.writableCorked`#

Number of times `writable.uncork()` needs to be called in order to fully uncork the stream.
`writable.writableFinished`#

Is set to `true` immediately before the `'finish'` event is emitted.

`writable.writableHighWaterMark`#

Return the value of `highWaterMark` passed when creating this `Writable`.

`writable.writableLength`#

This property contains the number of bytes (or objects) in the queue ready to be written. The value provides introspection data regarding the status of the `highWaterMark`.

`writable.writableNeedDrain`#

Is `true` if the stream's buffer has been full and stream will emit `'drain'`.

`writable.writableObjectMode`#

Getter for the property `objectMode` of a given `Writable` stream.
`writable[Symbol.asyncDispose]()`#
History
Version | Changes |
---|---|
v24.2.0 | No longer experimental. |
v22.4.0, v20.16.0 | Added in: v22.4.0, v20.16.0 |
Calls `writable.destroy()` with an `AbortError` and returns a promise that fulfills when the stream is finished.
`writable.write(chunk[, encoding][, callback])`#
History
Version | Changes |
---|---|
v22.0.0, v20.13.0 | The |
v8.0.0 | The |
v6.0.0 | Passing |
v0.9.4 | Added in: v0.9.4 |
- `chunk` <string> | <Buffer> | <TypedArray> | <DataView> | <any> Optional data to write. For streams not operating in object mode, `chunk` must be a <string>, <Buffer>, <TypedArray> or <DataView>. For object mode streams, `chunk` may be any JavaScript value other than `null`.
- `encoding` <string> | <null> The encoding, if `chunk` is a string. Default: `'utf8'`
- `callback` <Function> Callback for when this chunk of data is flushed.
- Returns: <boolean> `false` if the stream wishes for the calling code to wait for the `'drain'` event to be emitted before continuing to write additional data; otherwise `true`.
The `writable.write()` method writes some data to the stream, and calls the supplied `callback` once the data has been fully handled. If an error occurs, the `callback` will be called with the error as its first argument. The `callback` is called asynchronously and before `'error'` is emitted.

The return value is `true` if the internal buffer is less than the `highWaterMark` configured when the stream was created after admitting `chunk`. If `false` is returned, further attempts to write data to the stream should stop until the `'drain'` event is emitted.

While a stream is not draining, calls to `write()` will buffer `chunk`, and return false. Once all currently buffered chunks are drained (accepted for delivery by the operating system), the `'drain'` event will be emitted. Once `write()` returns false, do not write more chunks until the `'drain'` event is emitted. While calling `write()` on a stream that is not draining is allowed, Node.js will buffer all written chunks until maximum memory usage occurs, at which point it will abort unconditionally. Even before it aborts, high memory usage will cause poor garbage collector performance and high RSS (which is not typically released back to the system, even after the memory is no longer required). Since TCP sockets may never drain if the remote peer does not read the data, writing a socket that is not draining may lead to a remotely exploitable vulnerability.

Writing data while the stream is not draining is particularly problematic for a `Transform`, because the `Transform` streams are paused by default until they are piped or a `'data'` or `'readable'` event handler is added.

If the data to be written can be generated or fetched on demand, it is recommended to encapsulate the logic into a `Readable` and use `stream.pipe()`. However, if calling `write()` is preferred, it is possible to respect backpressure and avoid memory issues using the `'drain'` event:
```js
function write(data, cb) {
  if (!stream.write(data)) {
    stream.once('drain', cb);
  } else {
    process.nextTick(cb);
  }
}

// Wait for cb to be called before doing any other write.
write('hello', () => {
  console.log('Write completed, do more writes now.');
});
```
A `Writable` stream in object mode will always ignore the `encoding` argument.

Readable streams#

Readable streams are an abstraction for a source from which data is consumed.

Examples of `Readable` streams include:
- HTTP responses, on the client
- HTTP requests, on the server
- fs read streams
- zlib streams
- crypto streams
- TCP sockets
- child process stdout and stderr
- `process.stdin`

All `Readable` streams implement the interface defined by the `stream.Readable` class.
Two reading modes#
`Readable` streams effectively operate in one of two modes: flowing and paused. These modes are separate from object mode. A `Readable` stream can be in object mode or not, regardless of whether it is in flowing mode or paused mode.

- In flowing mode, data is read from the underlying system automatically and provided to an application as quickly as possible using events via the `EventEmitter` interface.
- In paused mode, the `stream.read()` method must be called explicitly to read chunks of data from the stream.

All `Readable` streams begin in paused mode but can be switched to flowing mode in one of the following ways:
- Adding a `'data'` event handler.
- Calling the `stream.resume()` method.
- Calling the `stream.pipe()` method to send the data to a `Writable`.
The `Readable` can switch back to paused mode using one of the following:

- If there are no pipe destinations, by calling the `stream.pause()` method.
- If there are pipe destinations, by removing all pipe destinations. Multiple pipe destinations may be removed by calling the `stream.unpipe()` method.
The important concept to remember is that a `Readable` will not generate data until a mechanism for either consuming or ignoring that data is provided. If the consuming mechanism is disabled or taken away, the `Readable` will attempt to stop generating the data.

For backward compatibility reasons, removing `'data'` event handlers will not automatically pause the stream. Also, if there are piped destinations, then calling `stream.pause()` will not guarantee that the stream will remain paused once those destinations drain and ask for more data.

If a `Readable` is switched into flowing mode and there are no consumers available to handle the data, that data will be lost. This can occur, for instance, when the `readable.resume()` method is called without a listener attached to the `'data'` event, or when a `'data'` event handler is removed from the stream.

Adding a `'readable'` event handler automatically makes the stream stop flowing, and the data has to be consumed via `readable.read()`. If the `'readable'` event handler is removed, then the stream will start flowing again if there is a `'data'` event handler.
Three states#
The "two modes" of operation for aReadable
stream are a simplifiedabstraction for the more complicated internal state management that is happeningwithin theReadable
stream implementation.
Specifically, at any given point in time, everyReadable
is in one of threepossible states:
readable.readableFlowing === null
readable.readableFlowing === false
readable.readableFlowing === true
When `readable.readableFlowing` is `null`, no mechanism for consuming the stream's data is provided. Therefore, the stream will not generate data. While in this state, attaching a listener for the `'data'` event, calling the `readable.pipe()` method, or calling the `readable.resume()` method will switch `readable.readableFlowing` to `true`, causing the `Readable` to begin actively emitting events as data is generated.

Calling `readable.pause()`, `readable.unpipe()`, or receiving backpressure will cause the `readable.readableFlowing` to be set as `false`, temporarily halting the flowing of events but not halting the generation of data. While in this state, attaching a listener for the `'data'` event will not switch `readable.readableFlowing` to `true`.
```js
const { PassThrough, Writable } = require('node:stream');
const pass = new PassThrough();
const writable = new Writable();

pass.pipe(writable);
pass.unpipe(writable);
// readableFlowing is now false.

pass.on('data', (chunk) => { console.log(chunk.toString()); });
// readableFlowing is still false.
pass.write('ok');  // Will not emit 'data'.
pass.resume();     // Must be called to make stream emit 'data'.
// readableFlowing is now true.
```
While `readable.readableFlowing` is `false`, data may be accumulating within the stream's internal buffer.

Choose one API style#

The `Readable` stream API evolved across multiple Node.js versions and provides multiple methods of consuming stream data. In general, developers should choose one of the methods of consuming data and should never use multiple methods to consume data from a single stream. Specifically, using a combination of `on('data')`, `on('readable')`, `pipe()`, or async iterators could lead to unintuitive behavior.
Class: `stream.Readable`#

Event: `'close'`#
History
Version | Changes |
---|---|
v10.0.0 | Add |
v0.9.4 | Added in: v0.9.4 |
The `'close'` event is emitted when the stream and any of its underlying resources (a file descriptor, for example) have been closed. The event indicates that no more events will be emitted, and no further computation will occur.

A `Readable` stream will always emit the `'close'` event if it is created with the `emitClose` option.
Event: `'data'`#

- `chunk` <Buffer> | <string> | <any> The chunk of data. For streams that are not operating in object mode, the chunk will be either a string or `Buffer`. For streams that are in object mode, the chunk can be any JavaScript value other than `null`.
The `'data'` event is emitted whenever the stream is relinquishing ownership of a chunk of data to a consumer. This may occur whenever the stream is switched into flowing mode by calling `readable.pipe()`, `readable.resume()`, or by attaching a listener callback to the `'data'` event. The `'data'` event will also be emitted whenever the `readable.read()` method is called and a chunk of data is available to be returned.

Attaching a `'data'` event listener to a stream that has not been explicitly paused will switch the stream into flowing mode. Data will then be passed as soon as it is available.

The listener callback will be passed the chunk of data as a string if a default encoding has been specified for the stream using the `readable.setEncoding()` method; otherwise the data will be passed as a `Buffer`.
```js
const readable = getReadableStreamSomehow();
readable.on('data', (chunk) => {
  console.log(`Received ${chunk.length} bytes of data.`);
});
```
Event: `'end'`#

The `'end'` event is emitted when there is no more data to be consumed from the stream.

The `'end'` event will not be emitted unless the data is completely consumed. This can be accomplished by switching the stream into flowing mode, or by calling `stream.read()` repeatedly until all data has been consumed.
```js
const readable = getReadableStreamSomehow();
readable.on('data', (chunk) => {
  console.log(`Received ${chunk.length} bytes of data.`);
});
readable.on('end', () => {
  console.log('There will be no more data.');
});
```
Event: `'error'`#

The `'error'` event may be emitted by a `Readable` implementation at any time. Typically, this may occur if the underlying stream is unable to generate data due to an underlying internal failure, or when a stream implementation attempts to push an invalid chunk of data.

The listener callback will be passed a single `Error` object.
Event: `'pause'`#

The `'pause'` event is emitted when `stream.pause()` is called and `readableFlowing` is not `false`.
Event: `'readable'`#
History
Version | Changes |
---|---|
v10.0.0 | The |
v10.0.0 | Using |
v0.9.4 | Added in: v0.9.4 |
The `'readable'` event is emitted when there is data available to be read from the stream, up to the configured high water mark (`state.highWaterMark`). Effectively, it indicates that the stream has new information within the buffer. If data is available within this buffer, `stream.read()` can be called to retrieve that data. Additionally, the `'readable'` event may also be emitted when the end of the stream has been reached.
```js
const readable = getReadableStreamSomehow();
readable.on('readable', function() {
  // There is some data to read now.
  let data;

  while ((data = this.read()) !== null) {
    console.log(data);
  }
});
```
If the end of the stream has been reached, calling `stream.read()` will return `null` and trigger the `'end'` event. This is also true if there never was any data to be read. For instance, in the following example, `foo.txt` is an empty file:
```js
const fs = require('node:fs');
const rr = fs.createReadStream('foo.txt');
rr.on('readable', () => {
  console.log(`readable: ${rr.read()}`);
});
rr.on('end', () => {
  console.log('end');
});
```
The output of running this script is:
```console
$ node test.js
readable: null
end
```
In some cases, attaching a listener for the `'readable'` event will cause some amount of data to be read into an internal buffer.

In general, the `readable.pipe()` and `'data'` event mechanisms are easier to understand than the `'readable'` event. However, handling `'readable'` might result in increased throughput.

If both `'readable'` and `'data'` are used at the same time, `'readable'` takes precedence in controlling the flow, i.e. `'data'` will be emitted only when `stream.read()` is called. The `readableFlowing` property would become `false`. If there are `'data'` listeners when `'readable'` is removed, the stream will start flowing, i.e. `'data'` events will be emitted without calling `.resume()`.
Event: `'resume'`#

The `'resume'` event is emitted when `stream.resume()` is called and `readableFlowing` is not `true`.
`readable.destroy([error])`#
History
Version | Changes |
---|---|
v14.0.0 | Work as a no-op on a stream that has already been destroyed. |
v8.0.0 | Added in: v8.0.0 |
Destroy the stream. Optionally emit an `'error'` event, and emit a `'close'` event (unless `emitClose` is set to `false`). After this call, the readable stream will release any internal resources and subsequent calls to `push()` will be ignored.

Once `destroy()` has been called any further calls will be a no-op and no further errors except from `_destroy()` may be emitted as `'error'`.

Implementors should not override this method, but instead implement `readable._destroy()`.
`readable.isPaused()`#

- Returns: <boolean>

The `readable.isPaused()` method returns the current operating state of the `Readable`. This is used primarily by the mechanism that underlies the `readable.pipe()` method. In most typical cases, there will be no reason to use this method directly.
```js
const readable = new stream.Readable();

readable.isPaused(); // === false
readable.pause();
readable.isPaused(); // === true
readable.resume();
readable.isPaused(); // === false
```
`readable.pause()`#

- Returns: <this>

The `readable.pause()` method will cause a stream in flowing mode to stop emitting `'data'` events, switching out of flowing mode. Any data that becomes available will remain in the internal buffer.
```js
const readable = getReadableStreamSomehow();
readable.on('data', (chunk) => {
  console.log(`Received ${chunk.length} bytes of data.`);
  readable.pause();
  console.log('There will be no additional data for 1 second.');
  setTimeout(() => {
    console.log('Now data will start flowing again.');
    readable.resume();
  }, 1000);
});
```
The `readable.pause()` method has no effect if there is a `'readable'` event listener.
`readable.pipe(destination[, options])`#

- `destination` <stream.Writable> The destination for writing data
- `options` <Object> Pipe options
  - `end` <boolean> End the writer when the reader ends. Default: `true`.
- Returns: <stream.Writable> The destination, allowing for a chain of pipes if it is a `Duplex` or a `Transform` stream
The `readable.pipe()` method attaches a `Writable` stream to the `readable`, causing it to switch automatically into flowing mode and push all of its data to the attached `Writable`. The flow of data will be automatically managed so that the destination `Writable` stream is not overwhelmed by a faster `Readable` stream.

The following example pipes all of the data from the `readable` into a file named `file.txt`:
```js
const fs = require('node:fs');
const readable = getReadableStreamSomehow();
const writable = fs.createWriteStream('file.txt');
// All the data from readable goes into 'file.txt'.
readable.pipe(writable);
```
It is possible to attach multiple `Writable` streams to a single `Readable` stream.

The `readable.pipe()` method returns a reference to the destination stream making it possible to set up chains of piped streams:
```js
const fs = require('node:fs');
const zlib = require('node:zlib');
const r = fs.createReadStream('file.txt');
const z = zlib.createGzip();
const w = fs.createWriteStream('file.txt.gz');
r.pipe(z).pipe(w);
```
By default, `stream.end()` is called on the destination `Writable` stream when the source `Readable` stream emits `'end'`, so that the destination is no longer writable. To disable this default behavior, the `end` option can be passed as `false`, causing the destination stream to remain open:
```js
reader.pipe(writer, { end: false });
reader.on('end', () => {
  writer.end('Goodbye\n');
});
```
One important caveat is that if the `Readable` stream emits an error during processing, the `Writable` destination is not closed automatically. If an error occurs, it will be necessary to manually close each stream in order to prevent memory leaks.

The `process.stderr` and `process.stdout` `Writable` streams are never closed until the Node.js process exits, regardless of the specified options.
`readable.read([size])`#

- `size` <number> Optional argument to specify how much data to read.
- Returns: <string> | <Buffer> | <null> | <any>
The `readable.read()` method reads data out of the internal buffer and returns it. If no data is available to be read, `null` is returned. By default, the data is returned as a `Buffer` object unless an encoding has been specified using the `readable.setEncoding()` method or the stream is operating in object mode.

The optional `size` argument specifies a specific number of bytes to read. If `size` bytes are not available to be read, `null` will be returned unless the stream has ended, in which case all of the data remaining in the internal buffer will be returned.

If the `size` argument is not specified, all of the data contained in the internal buffer will be returned.

The `size` argument must be less than or equal to 1 GiB.

The `readable.read()` method should only be called on `Readable` streams operating in paused mode. In flowing mode, `readable.read()` is called automatically until the internal buffer is fully drained.
```js
const readable = getReadableStreamSomehow();

// 'readable' may be triggered multiple times as data is buffered in
readable.on('readable', () => {
  let chunk;
  console.log('Stream is readable (new data received in buffer)');
  // Use a loop to make sure we read all currently available data
  while (null !== (chunk = readable.read())) {
    console.log(`Read ${chunk.length} bytes of data...`);
  }
});

// 'end' will be triggered once when there is no more data available
readable.on('end', () => {
  console.log('Reached end of stream.');
});
```
Each call to `readable.read()` returns a chunk of data or `null`, signifying that there's no more data to read at that moment. These chunks aren't automatically concatenated. Because a single `read()` call does not return all the data, using a while loop may be necessary to continuously read chunks until all data is retrieved. When reading a large file, `.read()` might return `null` temporarily, indicating that it has consumed all buffered content but there may be more data yet to be buffered. In such cases, a new `'readable'` event is emitted once there's more data in the buffer, and the `'end'` event signifies the end of data transmission.

Therefore to read a file's whole contents from a `readable`, it is necessary to collect chunks across multiple `'readable'` events:
```js
const chunks = [];

readable.on('readable', () => {
  let chunk;
  while (null !== (chunk = readable.read())) {
    chunks.push(chunk);
  }
});

readable.on('end', () => {
  const content = chunks.join('');
});
```
A `Readable` stream in object mode will always return a single item from a call to `readable.read(size)`, regardless of the value of the `size` argument.
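A short sketch (with hypothetical objects pushed into the stream) illustrating that `size` is ignored in object mode:

```js
const { Readable } = require('node:stream');

const readable = new Readable({
  objectMode: true,
  read() {},
});

readable.push({ answer: 42 });
readable.push({ answer: 43 });
readable.push(null);

readable.on('readable', () => {
  let item;
  // The size argument is ignored; each call returns a single object.
  while ((item = readable.read(1024)) !== null) {
    console.log(item);
  }
});
```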
If the `readable.read()` method returns a chunk of data, a `'data'` event will also be emitted.

Calling `stream.read([size])` after the `'end'` event has been emitted will return `null`. No runtime error will be raised.
`readable.readable`#

Is `true` if it is safe to call `readable.read()`, which means the stream has not been destroyed or emitted `'error'` or `'end'`.
`readable.readableAborted`#
History
Version | Changes |
---|---|
v24.0.0 | Marking the API stable. |
v16.8.0 | Added in: v16.8.0 |
Returns whether the stream was destroyed or errored before emitting `'end'`.
`readable.readableDidRead`#
History
Version | Changes |
---|---|
v24.0.0 | Marking the API stable. |
v16.7.0, v14.18.0 | Added in: v16.7.0, v14.18.0 |
Returns whether `'data'` has been emitted.
`readable.readableEncoding`#

Getter for the property `encoding` of a given `Readable` stream. The `encoding` property can be set using the `readable.setEncoding()` method.

`readable.readableFlowing`#

This property reflects the current state of a `Readable` stream as described in the Three states section.

`readable.readableHighWaterMark`#

Returns the value of `highWaterMark` passed when creating this `Readable`.

`readable.readableLength`#

This property contains the number of bytes (or objects) in the queue ready to be read. The value provides introspection data regarding the status of the `highWaterMark`.

`readable.readableObjectMode`#

Getter for the property `objectMode` of a given `Readable` stream.
`readable.resume()`#
History
Version | Changes |
---|---|
v10.0.0 | The |
v0.9.4 | Added in: v0.9.4 |
- Returns: <this>

The `readable.resume()` method causes an explicitly paused `Readable` stream to resume emitting `'data'` events, switching the stream into flowing mode.

The `readable.resume()` method can be used to fully consume the data from a stream without actually processing any of that data:
```js
getReadableStreamSomehow()
  .resume()
  .on('end', () => {
    console.log('Reached the end, but did not read anything.');
  });
```
The `readable.resume()` method has no effect if there is a `'readable'` event listener.
`readable.setEncoding(encoding)`#

The `readable.setEncoding()` method sets the character encoding for data read from the `Readable` stream.

By default, no encoding is assigned and stream data will be returned as `Buffer` objects. Setting an encoding causes the stream data to be returned as strings of the specified encoding rather than as `Buffer` objects. For instance, calling `readable.setEncoding('utf8')` will cause the output data to be interpreted as UTF-8 data, and passed as strings. Calling `readable.setEncoding('hex')` will cause the data to be encoded in hexadecimal string format.

The `Readable` stream will properly handle multi-byte characters delivered through the stream that would otherwise become improperly decoded if simply pulled from the stream as `Buffer` objects.
```js
const readable = getReadableStreamSomehow();
readable.setEncoding('utf8');
readable.on('data', (chunk) => {
  assert.equal(typeof chunk, 'string');
  console.log('Got %d characters of string data:', chunk.length);
});
```
`readable.unpipe([destination])`#

- `destination` <stream.Writable> Optional specific stream to unpipe
- Returns: <this>

The `readable.unpipe()` method detaches a `Writable` stream previously attached using the `stream.pipe()` method.

If the `destination` is not specified, then all pipes are detached.

If the `destination` is specified, but no pipe is set up for it, then the method does nothing.
```js
const fs = require('node:fs');
const readable = getReadableStreamSomehow();
const writable = fs.createWriteStream('file.txt');
// All the data from readable goes into 'file.txt',
// but only for the first second.
readable.pipe(writable);
setTimeout(() => {
  console.log('Stop writing to file.txt.');
  readable.unpipe(writable);
  console.log('Manually close the file stream.');
  writable.end();
}, 1000);
```
`readable.unshift(chunk[, encoding])`#
History
Version | Changes |
---|---|
v22.0.0, v20.13.0 | The |
v8.0.0 | The |
v0.9.11 | Added in: v0.9.11 |
- `chunk` <Buffer> | <TypedArray> | <DataView> | <string> | <null> | <any> Chunk of data to unshift onto the read queue. For streams not operating in object mode, `chunk` must be a <string>, <Buffer>, <TypedArray>, <DataView> or `null`. For object mode streams, `chunk` may be any JavaScript value.
- `encoding` <string> Encoding of string chunks. Must be a valid `Buffer` encoding, such as `'utf8'` or `'ascii'`.

Passing `chunk` as `null` signals the end of the stream (EOF) and behaves the same as `readable.push(null)`, after which no more data can be written. The EOF signal is put at the end of the buffer and any buffered data will still be flushed.

The `readable.unshift()` method pushes a chunk of data back into the internal buffer. This is useful in certain situations where a stream is being consumed by code that needs to "un-consume" some amount of data that it has optimistically pulled out of the source, so that the data can be passed on to some other party.

The `stream.unshift(chunk)` method cannot be called after the `'end'` event has been emitted or a runtime error will be thrown.

Developers using `stream.unshift()` often should consider switching to use of a `Transform` stream instead. See the API for stream implementers section for more information.
```js
// Pull off a header delimited by \n\n.
// Use unshift() if we get too much.
// Call the callback with (error, header, stream).
const { StringDecoder } = require('node:string_decoder');
function parseHeader(stream, callback) {
  stream.on('error', callback);
  stream.on('readable', onReadable);
  const decoder = new StringDecoder('utf8');
  let header = '';
  function onReadable() {
    let chunk;
    while (null !== (chunk = stream.read())) {
      const str = decoder.write(chunk);
      if (str.includes('\n\n')) {
        // Found the header boundary.
        const split = str.split(/\n\n/);
        header += split.shift();
        const remaining = split.join('\n\n');
        const buf = Buffer.from(remaining, 'utf8');
        stream.removeListener('error', callback);
        // Remove the 'readable' listener before unshifting.
        stream.removeListener('readable', onReadable);
        if (buf.length)
          stream.unshift(buf);
        // Now the body of the message can be read from the stream.
        callback(null, header, stream);
        return;
      }
      // Still reading the header.
      header += str;
    }
  }
}
```
Unlike `stream.push(chunk)`, `stream.unshift(chunk)` will not end the reading process by resetting the internal reading state of the stream. This can cause unexpected results if `readable.unshift()` is called during a read (i.e. from within a `stream._read()` implementation on a custom stream). Following the call to `readable.unshift()` with an immediate `stream.push('')` will reset the reading state appropriately, however it is best to simply avoid calling `readable.unshift()` while in the process of performing a read.
`readable.wrap(stream)`#

Prior to Node.js 0.10, streams did not implement the entire `node:stream` module API as it is currently defined. (See Compatibility for more information.)

When using an older Node.js library that emits `'data'` events and has a `stream.pause()` method that is advisory only, the `readable.wrap()` method can be used to create a `Readable` stream that uses the old stream as its data source.

It will rarely be necessary to use `readable.wrap()` but the method has been provided as a convenience for interacting with older Node.js applications and libraries.
```js
const { OldReader } = require('./old-api-module.js');
const { Readable } = require('node:stream');
const oreader = new OldReader();
const myReader = new Readable().wrap(oreader);

myReader.on('readable', () => {
  myReader.read(); // etc.
});
```
`readable[Symbol.asyncIterator]()`#
History
Version | Changes |
---|---|
v11.14.0 | Symbol.asyncIterator support is no longer experimental. |
v10.0.0 | Added in: v10.0.0 |
- Returns: <AsyncIterator> to fully consume the stream.
```js
const fs = require('node:fs');

async function print(readable) {
  readable.setEncoding('utf8');
  let data = '';
  for await (const chunk of readable) {
    data += chunk;
  }
  console.log(data);
}

print(fs.createReadStream('file')).catch(console.error);
```
If the loop terminates with a `break`, `return`, or a `throw`, the stream will be destroyed. In other terms, iterating over a stream will consume the stream fully. The stream will be read in chunks of size equal to the `highWaterMark` option. In the code example above, data will be in a single chunk if the file has less than 64 KiB of data because no `highWaterMark` option is provided to `fs.createReadStream()`.
`readable[Symbol.asyncDispose]()`#
History
Version | Changes |
---|---|
v24.2.0 | No longer experimental. |
v20.4.0, v18.18.0 | Added in: v20.4.0, v18.18.0 |
Calls `readable.destroy()` with an `AbortError` and returns a promise that fulfills when the stream is finished.
`readable.compose(stream[, options])`#
History
Version | Changes |
---|---|
v24.0.0 | Marking the API stable. |
v19.1.0, v18.13.0 | Added in: v19.1.0, v18.13.0 |
- `stream` <Stream> | <Iterable> | <AsyncIterable> | <Function>
- `options` <Object>
  - `signal` <AbortSignal> allows destroying the stream if the signal is aborted.
- Returns: <Duplex> a stream composed with the stream `stream`.
```js
import { Readable } from 'node:stream';

async function* splitToWords(source) {
  for await (const chunk of source) {
    const words = String(chunk).split(' ');

    for (const word of words) {
      yield word;
    }
  }
}

const wordsStream = Readable.from(['this is', 'compose as operator']).compose(splitToWords);
const words = await wordsStream.toArray();

console.log(words); // prints ['this', 'is', 'compose', 'as', 'operator']
```
See `stream.compose` for more information.
`readable.iterator([options])`#
History
Version | Changes |
---|---|
v24.0.0 | Marking the API stable. |
v16.3.0 | Added in: v16.3.0 |
- `options` <Object>
  - `destroyOnReturn` <boolean> When set to `false`, calling `return` on the async iterator, or exiting a `for await...of` iteration using a `break`, `return`, or `throw` will not destroy the stream. Default: `true`.
- Returns: <AsyncIterator> to consume the stream.

The iterator created by this method gives users the option to cancel the destruction of the stream if the `for await...of` loop is exited by `return`, `break`, or `throw`, or if the iterator should destroy the stream if the stream emitted an error during iteration.
```js
const { Readable } = require('node:stream');

async function printIterator(readable) {
  for await (const chunk of readable.iterator({ destroyOnReturn: false })) {
    console.log(chunk); // 1
    break;
  }

  console.log(readable.destroyed); // false

  for await (const chunk of readable.iterator({ destroyOnReturn: false })) {
    console.log(chunk); // Will print 2 and then 3
  }

  console.log(readable.destroyed); // True, stream was totally consumed
}

async function printSymbolAsyncIterator(readable) {
  for await (const chunk of readable) {
    console.log(chunk); // 1
    break;
  }

  console.log(readable.destroyed); // true
}

async function showBoth() {
  await printIterator(Readable.from([1, 2, 3]));
  await printSymbolAsyncIterator(Readable.from([1, 2, 3]));
}

showBoth();
```
`readable.map(fn[, options])`#
History
Version | Changes |
---|---|
v20.7.0, v18.19.0 | added |
v17.4.0, v16.14.0 | Added in: v17.4.0, v16.14.0 |
- `fn` <Function> | <AsyncFunction> a function to map over every chunk in the stream.
  - `data` <any> a chunk of data from the stream.
  - `options` <Object>
    - `signal` <AbortSignal> aborted if the stream is destroyed allowing to abort the `fn` call early.
- `options` <Object>
  - `concurrency` <number> the maximum concurrent invocation of `fn` to call on the stream at once. Default: `1`.
  - `highWaterMark` <number> how many items to buffer while waiting for user consumption of the mapped items. Default: `concurrency * 2 - 1`.
  - `signal` <AbortSignal> allows destroying the stream if the signal is aborted.
- Returns: <Readable> a stream mapped with the function `fn`.

This method allows mapping over the stream. The `fn` function will be called for every chunk in the stream. If the `fn` function returns a promise - that promise will be `await`ed before being passed to the result stream.
```js
import { Readable } from 'node:stream';
import { Resolver } from 'node:dns/promises';

// With a synchronous mapper.
for await (const chunk of Readable.from([1, 2, 3, 4]).map((x) => x * 2)) {
  console.log(chunk); // 2, 4, 6, 8
}
// With an asynchronous mapper, making at most 2 queries at a time.
const resolver = new Resolver();
const dnsResults = Readable.from([
  'nodejs.org',
  'openjsf.org',
  'www.linuxfoundation.org',
]).map((domain) => resolver.resolve4(domain), { concurrency: 2 });
for await (const result of dnsResults) {
  console.log(result); // Logs the DNS result of resolver.resolve4.
}
```
`readable.filter(fn[, options])`#
History
Version | Changes |
---|---|
v20.7.0, v18.19.0 | added |
v17.4.0, v16.14.0 | Added in: v17.4.0, v16.14.0 |
- `fn` <Function> | <AsyncFunction> a function to filter chunks from the stream.
  - `data` <any> a chunk of data from the stream.
  - `options` <Object>
    - `signal` <AbortSignal> aborted if the stream is destroyed allowing to abort the `fn` call early.
- `options` <Object>
  - `concurrency` <number> the maximum concurrent invocation of `fn` to call on the stream at once. Default: `1`.
  - `highWaterMark` <number> how many items to buffer while waiting for user consumption of the filtered items. Default: `concurrency * 2 - 1`.
  - `signal` <AbortSignal> allows destroying the stream if the signal is aborted.
- Returns: <Readable> a stream filtered with the predicate `fn`.

This method allows filtering the stream. For each chunk in the stream the `fn` function will be called and if it returns a truthy value, the chunk will be passed to the result stream. If the `fn` function returns a promise - that promise will be `await`ed.
```js
import { Readable } from 'node:stream';
import { Resolver } from 'node:dns/promises';

// With a synchronous predicate.
for await (const chunk of Readable.from([1, 2, 3, 4]).filter((x) => x > 2)) {
  console.log(chunk); // 3, 4
}
// With an asynchronous predicate, making at most 2 queries at a time.
const resolver = new Resolver();
const dnsResults = Readable.from([
  'nodejs.org',
  'openjsf.org',
  'www.linuxfoundation.org',
]).filter(async (domain) => {
  const { address } = await resolver.resolve4(domain, { ttl: true });
  return address.ttl > 60;
}, { concurrency: 2 });
for await (const result of dnsResults) {
  // Logs domains with more than 60 seconds on the resolved dns record.
  console.log(result);
}
```
`readable.forEach(fn[, options])`#

- `fn` <Function> | <AsyncFunction> a function to call on each chunk of the stream.
  - `data` <any> a chunk of data from the stream.
  - `options` <Object>
    - `signal` <AbortSignal> aborted if the stream is destroyed allowing to abort the `fn` call early.
- `options` <Object>
  - `concurrency` <number> the maximum concurrent invocation of `fn` to call on the stream at once. Default: `1`.
  - `signal` <AbortSignal> allows destroying the stream if the signal is aborted.
- Returns: <Promise> a promise for when the stream has finished.

This method allows iterating a stream. For each chunk in the stream the `fn` function will be called. If the `fn` function returns a promise - that promise will be `await`ed.

This method is different from `for await...of` loops in that it can optionally process chunks concurrently. In addition, a `forEach` iteration can only be stopped by having passed a `signal` option and aborting the related `AbortController` while `for await...of` can be stopped with `break` or `return`. In either case the stream will be destroyed.

This method is different from listening to the `'data'` event in that it uses the `readable` event in the underlying machinery and can limit the number of concurrent `fn` calls.
```js
import { Readable } from 'node:stream';
import { Resolver } from 'node:dns/promises';

// With a synchronous predicate.
for await (const chunk of Readable.from([1, 2, 3, 4]).filter((x) => x > 2)) {
  console.log(chunk); // 3, 4
}
// With an asynchronous predicate, making at most 2 queries at a time.
const resolver = new Resolver();
const dnsResults = Readable.from([
  'nodejs.org',
  'openjsf.org',
  'www.linuxfoundation.org',
]).map(async (domain) => {
  const { address } = await resolver.resolve4(domain, { ttl: true });
  return address;
}, { concurrency: 2 });
await dnsResults.forEach((result) => {
  // Logs result, similar to `for await (const result of dnsResults)`
  console.log(result);
});
console.log('done'); // Stream has finished
```
`readable.toArray([options])`#

- `options` <Object>
  - `signal` <AbortSignal> allows cancelling the toArray operation if the signal is aborted.
- Returns: <Promise> a promise containing an array with the contents of the stream.
This method allows easily obtaining the contents of a stream.
As this method reads the entire stream into memory, it negates the benefits ofstreams. It's intended for interoperability and convenience, not as the primaryway to consume streams.
import {Readable }from'node:stream';import {Resolver }from'node:dns/promises';awaitReadable.from([1,2,3,4]).toArray();// [1, 2, 3, 4]// Make dns queries concurrently using .map and collect// the results into an array using toArrayconst dnsResults =awaitReadable.from(['nodejs.org','openjsf.org','www.linuxfoundation.org',]).map(async (domain) => {const { address } =await resolver.resolve4(domain, {ttl:true });return address;}, {concurrency:2 }).toArray();
readable.some(fn[, options])
#
- fn <Function> | <AsyncFunction> a function to call on each chunk of the stream.
  - data <any> a chunk of data from the stream.
  - options <Object>
    - signal <AbortSignal> aborted if the stream is destroyed, allowing to abort the fn call early.
- options <Object>
  - concurrency <number> the maximum concurrent invocation of fn to call on the stream at once. Default: 1.
  - signal <AbortSignal> allows destroying the stream if the signal is aborted.
- Returns: <Promise> a promise evaluating to true if fn returned a truthy value for at least one of the chunks.
This method is similar to Array.prototype.some and calls fn on each chunk in the stream until the awaited return value is true (or any truthy value). Once an fn call's awaited return value on a chunk is truthy, the stream is destroyed and the promise is fulfilled with true. If none of the fn calls on the chunks return a truthy value, the promise is fulfilled with false.
import {Readable }from'node:stream';import { stat }from'node:fs/promises';// With a synchronous predicate.awaitReadable.from([1,2,3,4]).some((x) => x >2);// trueawaitReadable.from([1,2,3,4]).some((x) => x <0);// false// With an asynchronous predicate, making at most 2 file checks at a time.const anyBigFile =awaitReadable.from(['file1','file2','file3',]).some(async (fileName) => {const stats =awaitstat(fileName);return stats.size >1024 *1024;}, {concurrency:2 });console.log(anyBigFile);// `true` if any file in the list is bigger than 1MBconsole.log('done');// Stream has finished
readable.find(fn[, options])
#
- fn <Function> | <AsyncFunction> a function to call on each chunk of the stream.
  - data <any> a chunk of data from the stream.
  - options <Object>
    - signal <AbortSignal> aborted if the stream is destroyed, allowing to abort the fn call early.
- options <Object>
  - concurrency <number> the maximum concurrent invocation of fn to call on the stream at once. Default: 1.
  - signal <AbortSignal> allows destroying the stream if the signal is aborted.
- Returns: <Promise> a promise evaluating to the first chunk for which fn evaluated with a truthy value, or undefined if no element was found.
This method is similar to Array.prototype.find and calls fn on each chunk in the stream to find a chunk with a truthy value for fn. Once an fn call's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with the value for which fn returned a truthy value. If all of the fn calls on the chunks return a falsy value, the promise is fulfilled with undefined.
import {Readable }from'node:stream';import { stat }from'node:fs/promises';// With a synchronous predicate.awaitReadable.from([1,2,3,4]).find((x) => x >2);// 3awaitReadable.from([1,2,3,4]).find((x) => x >0);// 1awaitReadable.from([1,2,3,4]).find((x) => x >10);// undefined// With an asynchronous predicate, making at most 2 file checks at a time.const foundBigFile =awaitReadable.from(['file1','file2','file3',]).find(async (fileName) => {const stats =awaitstat(fileName);return stats.size >1024 *1024;}, {concurrency:2 });console.log(foundBigFile);// File name of large file, if any file in the list is bigger than 1MBconsole.log('done');// Stream has finished
readable.every(fn[, options])
#
- fn <Function> | <AsyncFunction> a function to call on each chunk of the stream.
  - data <any> a chunk of data from the stream.
  - options <Object>
    - signal <AbortSignal> aborted if the stream is destroyed, allowing to abort the fn call early.
- options <Object>
  - concurrency <number> the maximum concurrent invocation of fn to call on the stream at once. Default: 1.
  - signal <AbortSignal> allows destroying the stream if the signal is aborted.
- Returns: <Promise> a promise evaluating to true if fn returned a truthy value for all of the chunks.
This method is similar to Array.prototype.every and calls fn on each chunk in the stream to check whether all awaited return values are truthy for fn. Once an fn call's awaited return value on a chunk is falsy, the stream is destroyed and the promise is fulfilled with false. If all of the fn calls on the chunks return a truthy value, the promise is fulfilled with true.
import {Readable }from'node:stream';import { stat }from'node:fs/promises';// With a synchronous predicate.awaitReadable.from([1,2,3,4]).every((x) => x >2);// falseawaitReadable.from([1,2,3,4]).every((x) => x >0);// true// With an asynchronous predicate, making at most 2 file checks at a time.const allBigFiles =awaitReadable.from(['file1','file2','file3',]).every(async (fileName) => {const stats =awaitstat(fileName);return stats.size >1024 *1024;}, {concurrency:2 });// `true` if all files in the list are bigger than 1MiBconsole.log(allBigFiles);console.log('done');// Stream has finished
readable.flatMap(fn[, options])
#
- fn <Function> | <AsyncGeneratorFunction> | <AsyncFunction> a function to map over every chunk in the stream.
  - data <any> a chunk of data from the stream.
  - options <Object>
    - signal <AbortSignal> aborted if the stream is destroyed, allowing to abort the fn call early.
- options <Object>
  - concurrency <number> the maximum concurrent invocation of fn to call on the stream at once. Default: 1.
  - signal <AbortSignal> allows destroying the stream if the signal is aborted.
- Returns: <Readable> a stream flat-mapped with the function fn.
This method returns a new stream by applying the given callback to each chunk of the stream and then flattening the result.
It is possible to return a stream or another iterable or async iterable from fn and the result streams will be merged (flattened) into the returned stream.
import {Readable }from'node:stream';import { createReadStream }from'node:fs';// With a synchronous mapper.forawait (const chunkofReadable.from([1,2,3,4]).flatMap((x) => [x, x])) {console.log(chunk);// 1, 1, 2, 2, 3, 3, 4, 4}// With an asynchronous mapper, combine the contents of 4 filesconst concatResult =Readable.from(['./1.mjs','./2.mjs','./3.mjs','./4.mjs',]).flatMap((fileName) =>createReadStream(fileName));forawait (const resultof concatResult) {// This will contain the contents (all chunks) of all 4 filesconsole.log(result);}
readable.drop(limit[, options])
#
- limit <number> the number of chunks to drop from the readable.
- options <Object>
  - signal <AbortSignal> allows destroying the stream if the signal is aborted.
- Returns: <Readable> a stream with limit chunks dropped.
This method returns a new stream with the first limit chunks dropped.
import { Readable } from 'node:stream';

await Readable.from([1, 2, 3, 4]).drop(2).toArray(); // [3, 4]
readable.take(limit[, options])
#
- limit <number> the number of chunks to take from the readable.
- options <Object>
  - signal <AbortSignal> allows destroying the stream if the signal is aborted.
- Returns: <Readable> a stream with limit chunks taken.
This method returns a new stream with the first limit chunks.
import { Readable } from 'node:stream';

await Readable.from([1, 2, 3, 4]).take(2).toArray(); // [1, 2]
readable.reduce(fn[, initial[, options]])
#
- fn <Function> | <AsyncFunction> a reducer function to call over every chunk in the stream.
  - previous <any> the value obtained from the last call to fn, or the initial value if specified, or the first chunk of the stream otherwise.
  - data <any> a chunk of data from the stream.
  - options <Object>
    - signal <AbortSignal> aborted if the stream is destroyed, allowing to abort the fn call early.
- initial <any> the initial value to use in the reduction.
- options <Object>
  - signal <AbortSignal> allows destroying the stream if the signal is aborted.
- Returns: <Promise> a promise for the final value of the reduction.
This method calls fn on each chunk of the stream in order, passing it the result from the calculation on the previous element. It returns a promise for the final value of the reduction.
If no initial value is supplied, the first chunk of the stream is used as the initial value. If the stream is empty, the promise is rejected with a TypeError with the ERR_INVALID_ARGS code property.
import {Readable }from'node:stream';import { readdir, stat }from'node:fs/promises';import { join }from'node:path';const directoryPath ='./src';const filesInDir =awaitreaddir(directoryPath);const folderSize =awaitReadable.from(filesInDir) .reduce(async (totalSize, file) => {const { size } =awaitstat(join(directoryPath, file));return totalSize + size; },0);console.log(folderSize);
The reducer function iterates the stream element by element, which means that there is no concurrency parameter or parallelism. To perform a reduce concurrently, you can extract the async function to the readable.map method.
import {Readable }from'node:stream';import { readdir, stat }from'node:fs/promises';import { join }from'node:path';const directoryPath ='./src';const filesInDir =awaitreaddir(directoryPath);const folderSize =awaitReadable.from(filesInDir) .map((file) =>stat(join(directoryPath, file)), {concurrency:2 }) .reduce((totalSize, { size }) => totalSize + size,0);console.log(folderSize);
Duplex and transform streams#
Class:stream.Duplex
#
History
Version | Changes |
---|---|
v6.8.0 | Instances of |
v0.9.4 | Added in: v0.9.4 |
Duplex streams are streams that implement both the Readable and Writable interfaces.
Examples of Duplex streams include:
- TCP sockets
- zlib streams
- crypto streams
duplex.allowHalfOpen
#
If false, then the stream will automatically end the writable side when the readable side ends. Set initially by the allowHalfOpen constructor option, which defaults to true.
This can be changed manually to change the half-open behavior of an existing Duplex stream instance, but must be changed before the 'end' event is emitted.
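As a minimal sketch (the no-op Duplex below is purely illustrative), the flag can be flipped on an existing instance before its readable side ends:

const { Duplex } = require('node:stream');

// A do-nothing Duplex used only to illustrate toggling allowHalfOpen.
const duplex = new Duplex({
  read() {},
  write(chunk, encoding, callback) { callback(); },
});

// End the writable side automatically once the readable side ends.
duplex.allowHalfOpen = false;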
Class:stream.Transform
#
Transform streams are Duplex streams where the output is in some way related to the input. Like all Duplex streams, Transform streams implement both the Readable and Writable interfaces.
Examples of Transform streams include:
- zlib streams
- crypto streams
transform.destroy([error])
#
History
Version | Changes |
---|---|
v14.0.0 | Work as a no-op on a stream that has already been destroyed. |
v8.0.0 | Added in: v8.0.0 |
Destroy the stream, and optionally emit an 'error' event. After this call, the transform stream releases any internal resources. Implementors should not override this method, but instead implement readable._destroy(). The default implementation of _destroy() for Transform also emits 'close' unless emitClose is set to false.
Once destroy() has been called, any further calls will be a no-op and no further errors except from _destroy() may be emitted as 'error'.
stream.duplexPair([options])
#
- options <Object> A value to pass to both Duplex constructors, to set options such as buffering.
- Returns: <Array> of two Duplex instances.
The utility function duplexPair returns an Array with two items, each being a Duplex stream connected to the other side:
const [ sideA, sideB ] = duplexPair();
Whatever is written to one stream is made readable on the other. It provides behavior analogous to a network connection, where the data written by the client becomes readable by the server, and vice-versa.
The Duplex streams are symmetrical; one or the other may be used without any difference in behavior.
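A minimal usage sketch, with a console sink for illustration only:

const { duplexPair } = require('node:stream');

const [sideA, sideB] = duplexPair();

// Data written to sideA becomes readable on sideB, and vice-versa.
sideB.on('data', (chunk) => {
  console.log(chunk.toString()); // 'hello'
});
sideA.write('hello');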
stream.finished(stream[, options], callback)
#
History
Version | Changes |
---|---|
v19.5.0 | Added support for |
v15.11.0 | The |
v14.0.0 | The |
v14.0.0 | Emitting |
v14.0.0 | Callback will be invoked on streams which have already finished before the call to |
v10.0.0 | Added in: v10.0.0 |
- stream <Stream> | <ReadableStream> | <WritableStream> A readable and/or writable stream/webstream.
- options <Object>
  - error <boolean> If set to false, then a call to emit('error', err) is not treated as finished. Default: true.
  - readable <boolean> When set to false, the callback will be called when the stream ends even though the stream might still be readable. Default: true.
  - writable <boolean> When set to false, the callback will be called when the stream ends even though the stream might still be writable. Default: true.
  - signal <AbortSignal> allows aborting the wait for the stream finish. The underlying stream will not be aborted if the signal is aborted. The callback will get called with an AbortError. All registered listeners added by this function will also be removed.
- callback <Function> A callback function that takes an optional error argument.
- Returns: <Function> A cleanup function which removes all registered listeners.
A function to get notified when a stream is no longer readable or writable, or has experienced an error or a premature close event.
const { finished } =require('node:stream');const fs =require('node:fs');const rs = fs.createReadStream('archive.tar');finished(rs,(err) => {if (err) {console.error('Stream failed.', err); }else {console.log('Stream is done reading.'); }});rs.resume();// Drain the stream.
Especially useful in error handling scenarios where a stream is destroyed prematurely (like an aborted HTTP request), and will not emit 'end' or 'finish'.
The finished API provides a promise version.
stream.finished() leaves dangling event listeners (in particular 'error', 'end', 'finish' and 'close') after callback has been invoked. The reason for this is so that unexpected 'error' events (due to incorrect stream implementations) do not cause unexpected crashes. If this is unwanted behavior then the returned cleanup function needs to be invoked in the callback:
const cleanup = finished(rs, (err) => {
  cleanup();
  // ...
});
stream.pipeline(source[, ...transforms], destination, callback)
#
stream.pipeline(streams, callback)
#
History
Version | Changes |
---|---|
v19.7.0, v18.16.0 | Added support for webstreams. |
v18.0.0 | Passing an invalid callback to the |
v14.0.0 | The |
v13.10.0 | Add support for async generators. |
v10.0.0 | Added in: v10.0.0 |
- streams <Stream[]> | <Iterable[]> | <AsyncIterable[]> | <Function[]> | <ReadableStream[]> | <WritableStream[]> | <TransformStream[]>
- source <Stream> | <Iterable> | <AsyncIterable> | <Function> | <ReadableStream>
  - Returns: <Iterable> | <AsyncIterable>
- ...transforms <Stream> | <Function> | <TransformStream>
  - source <AsyncIterable>
  - Returns: <AsyncIterable>
- destination <Stream> | <Function> | <WritableStream>
  - source <AsyncIterable>
  - Returns: <AsyncIterable> | <Promise>
- callback <Function> Called when the pipeline is fully done.
  - err <Error>
  - val Resolved value of Promise returned by destination.
- Returns: <Stream>
A module method to pipe between streams and generators, forwarding errors, properly cleaning up, and providing a callback when the pipeline is complete.
const { pipeline } =require('node:stream');const fs =require('node:fs');const zlib =require('node:zlib');// Use the pipeline API to easily pipe a series of streams// together and get notified when the pipeline is fully done.// A pipeline to gzip a potentially huge tar file efficiently:pipeline( fs.createReadStream('archive.tar'), zlib.createGzip(), fs.createWriteStream('archive.tar.gz'),(err) => {if (err) {console.error('Pipeline failed.', err); }else {console.log('Pipeline succeeded.'); } },);
The pipeline API provides a promise version.
stream.pipeline() will call stream.destroy(err) on all streams except:
- Readable streams which have emitted 'end' or 'close'.
- Writable streams which have emitted 'finish' or 'close'.
stream.pipeline() leaves dangling event listeners on the streams after the callback has been invoked. In the case of reuse of streams after failure, this can cause event listener leaks and swallowed errors. If the last stream is readable, dangling event listeners will be removed so that the last stream can be consumed later.
stream.pipeline() closes all the streams when an error is raised. Using IncomingRequest with pipeline could lead to unexpected behavior, because it would destroy the socket without sending the expected response. See the example below:
const fs =require('node:fs');const http =require('node:http');const { pipeline } =require('node:stream');const server = http.createServer((req, res) => {const fileStream = fs.createReadStream('./fileNotExist.txt');pipeline(fileStream, res,(err) => {if (err) {console.log(err);// No such file// this message can't be sent once `pipeline` already destroyed the socketreturn res.end('error!!!'); } });});
stream.compose(...streams)
#
History
Version | Changes |
---|---|
v21.1.0, v20.10.0 | Added support for stream class. |
v19.8.0, v18.16.0 | Added support for webstreams. |
v16.9.0 | Added in: v16.9.0 |
stream.compose is experimental.
- streams <Stream[]> | <Iterable[]> | <AsyncIterable[]> | <Function[]> | <ReadableStream[]> | <WritableStream[]> | <TransformStream[]> | <Duplex[]> | <Function>
- Returns: <stream.Duplex>
Combines two or more streams into a Duplex stream that writes to the first stream and reads from the last. Each provided stream is piped into the next, using stream.pipeline. If any of the streams error then all are destroyed, including the outer Duplex stream.
Because stream.compose returns a new stream that in turn can (and should) be piped into other streams, it enables composition. In contrast, when passing streams to stream.pipeline, typically the first stream is a readable stream and the last a writable stream, forming a closed circuit.
If passed a Function it must be a factory method taking a source Iterable.
import { compose,Transform }from'node:stream';const removeSpaces =newTransform({transform(chunk, encoding, callback) {callback(null,String(chunk).replace(' ','')); },});asyncfunction*toUpper(source) {forawait (const chunkof source) {yieldString(chunk).toUpperCase(); }}let res ='';forawait (const bufofcompose(removeSpaces, toUpper).end('hello world')) { res += buf;}console.log(res);// prints 'HELLOWORLD'
stream.compose can be used to convert async iterables, generators and functions into streams.
- AsyncIterable converts into a readable Duplex. Cannot yield null.
- AsyncGeneratorFunction converts into a readable/writable transform Duplex. Must take a source AsyncIterable as first parameter. Cannot yield null.
- AsyncFunction converts into a writable Duplex. Must return either null or undefined.
import { compose }from'node:stream';import { finished }from'node:stream/promises';// Convert AsyncIterable into readable Duplex.const s1 =compose(asyncfunction*() {yield'Hello';yield'World';}());// Convert AsyncGenerator into transform Duplex.const s2 =compose(asyncfunction*(source) {forawait (const chunkof source) {yieldString(chunk).toUpperCase(); }});let res ='';// Convert AsyncFunction into writable Duplex.const s3 =compose(asyncfunction(source) {forawait (const chunkof source) { res += chunk; }});awaitfinished(compose(s1, s2, s3));console.log(res);// prints 'HELLOWORLD'
See readable.compose(stream) for stream.compose as operator.
stream.isErrored(stream)
#
History
Version | Changes |
---|---|
v24.0.0, v22.17.0 | Marking the API stable. |
v17.3.0, v16.14.0 | Added in: v17.3.0, v16.14.0 |
- stream <Readable> | <Writable> | <Duplex> | <WritableStream> | <ReadableStream>
- Returns: <boolean>
Returns whether the stream has encountered an error.
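A minimal sketch (the in-memory Readable and the error are illustrative only):

const { Readable, isErrored } = require('node:stream');

const readable = Readable.from(['some data']);
console.log(isErrored(readable)); // false

readable.on('error', () => {
  console.log(isErrored(readable)); // true
});
readable.destroy(new Error('boom'));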
stream.isReadable(stream)
#
History
Version | Changes |
---|---|
v24.0.0, v22.17.0 | Marking the API stable. |
v17.4.0, v16.14.0 | Added in: v17.4.0, v16.14.0 |
- stream <Readable> | <Duplex> | <ReadableStream>
- Returns: <boolean>
Returns whether the stream is readable.
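As a rough sketch, assuming the readable has not yet ended or been destroyed when first checked:

const { Readable, isReadable } = require('node:stream');

const readable = Readable.from(['hello']);
console.log(isReadable(readable)); // true

readable.destroy();
console.log(isReadable(readable)); // false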
stream.Readable.from(iterable[, options])
#
- iterable <Iterable> Object implementing the Symbol.asyncIterator or Symbol.iterator iterable protocol. Emits an 'error' event if a null value is passed.
- options <Object> Options provided to new stream.Readable([options]). By default, Readable.from() will set options.objectMode to true, unless this is explicitly opted out by setting options.objectMode to false.
- Returns: <stream.Readable>
A utility method for creating readable streams out of iterators.
const {Readable } =require('node:stream');asyncfunction *generate() {yield'hello';yield'streams';}const readable =Readable.from(generate());readable.on('data',(chunk) => {console.log(chunk);});
Calling Readable.from(string) or Readable.from(buffer) will not have the strings or buffers be iterated to match the other streams' semantics, for performance reasons.
If an Iterable object containing promises is passed as an argument, it might result in unhandled rejection.
const {Readable } =require('node:stream');Readable.from([newPromise((resolve) =>setTimeout(resolve('1'),1500)),newPromise((_, reject) =>setTimeout(reject(newError('2')),1000)),// Unhandled rejection]);
stream.Readable.fromWeb(readableStream[, options])
#
History
Version | Changes |
---|---|
v24.0.0 | Marking the API stable. |
v17.0.0 | Added in: v17.0.0 |
- readableStream <ReadableStream>
- options <Object>
  - encoding <string>
  - highWaterMark <number>
  - objectMode <boolean>
  - signal <AbortSignal>
- Returns: <stream.Readable>
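A brief sketch of converting a web ReadableStream into a Node.js Readable (the inline ReadableStream is illustrative only):

import { Readable } from 'node:stream';
import { ReadableStream } from 'node:stream/web';

const webReadable = new ReadableStream({
  start(controller) {
    controller.enqueue('hello');
    controller.enqueue('world');
    controller.close();
  },
});

const nodeReadable = Readable.fromWeb(webReadable, { objectMode: true });
for await (const chunk of nodeReadable) {
  console.log(chunk); // 'hello', then 'world'
}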
stream.Readable.isDisturbed(stream)
#
History
Version | Changes |
---|---|
v24.0.0 | Marking the API stable. |
v16.8.0 | Added in: v16.8.0 |
- stream <stream.Readable> | <ReadableStream>
- Returns: boolean
Returns whether the stream has been read from or cancelled.
stream.Readable.toWeb(streamReadable[, options])
#
History
Version | Changes |
---|---|
v24.0.0 | Marking the API stable. |
v18.7.0 | include strategy options on Readable. |
v17.0.0 | Added in: v17.0.0 |
- streamReadable <stream.Readable>
- options <Object>
  - strategy <Object>
    - highWaterMark <number> The maximum internal queue size (of the created ReadableStream) before backpressure is applied in reading from the given stream.Readable. If no value is provided, it will be taken from the given stream.Readable.
    - size <Function> A function that computes the size of the given chunk of data. If no value is provided, the size will be 1 for all the chunks.
- Returns: <ReadableStream>
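A minimal sketch of converting a Node.js Readable into a web ReadableStream:

import { Readable } from 'node:stream';

const nodeReadable = Readable.from(['hello', 'world']);
const webReadable = Readable.toWeb(nodeReadable);

const reader = webReadable.getReader();
console.log(await reader.read()); // { value: 'hello', done: false }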
stream.Writable.fromWeb(writableStream[, options])
#
History
Version | Changes |
---|---|
v24.0.0 | Marking the API stable. |
v17.0.0 | Added in: v17.0.0 |
- writableStream <WritableStream>
- options <Object>
  - decodeStrings <boolean>
  - highWaterMark <number>
  - objectMode <boolean>
  - signal <AbortSignal>
- Returns: <stream.Writable>
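A sketch of wrapping a web WritableStream so that Node.js stream code can write to it (the logging sink is illustrative only):

import { Writable } from 'node:stream';
import { WritableStream } from 'node:stream/web';

const webWritable = new WritableStream({
  write(chunk) {
    console.log('writable', chunk);
  },
});

const nodeWritable = Writable.fromWeb(webWritable, { objectMode: true });
nodeWritable.write('hello');
nodeWritable.end();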
stream.Writable.toWeb(streamWritable)
#
History
Version | Changes |
---|---|
v24.0.0 | Marking the API stable. |
v17.0.0 | Added in: v17.0.0 |
- streamWritable <stream.Writable>
- Returns: <WritableStream>
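A sketch of the reverse conversion, exposing a Node.js Writable as a web WritableStream (the console sink is illustrative):

import { Writable } from 'node:stream';

const nodeWritable = new Writable({
  write(chunk, encoding, callback) {
    console.log('received:', chunk.toString());
    callback();
  },
});

const webWritable = Writable.toWeb(nodeWritable);
await webWritable.getWriter().write('hello');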
stream.Duplex.from(src)
#
History
Version | Changes |
---|---|
v19.5.0, v18.17.0 | The |
v16.8.0 | Added in: v16.8.0 |
- src <Stream> | <Blob> | <ArrayBuffer> | <string> | <Iterable> | <AsyncIterable> | <AsyncGeneratorFunction> | <AsyncFunction> | <Promise> | <Object> | <ReadableStream> | <WritableStream>
A utility method for creating duplex streams.
- Stream converts writable stream into writable Duplex and readable stream to Duplex.
- Blob converts into readable Duplex.
- string converts into readable Duplex.
- ArrayBuffer converts into readable Duplex.
- AsyncIterable converts into a readable Duplex. Cannot yield null.
- AsyncGeneratorFunction converts into a readable/writable transform Duplex. Must take a source AsyncIterable as first parameter. Cannot yield null.
- AsyncFunction converts into a writable Duplex. Must return either null or undefined.
- Object ({ writable, readable }) converts readable and writable into Stream and then combines them into Duplex where the Duplex will write to the writable and read from the readable.
- Promise converts into readable Duplex. Value null is ignored.
- ReadableStream converts into readable Duplex.
- WritableStream converts into writable Duplex.
- Returns: <stream.Duplex>
If an Iterable object containing promises is passed as an argument, it might result in unhandled rejection.
const {Duplex } =require('node:stream');Duplex.from([newPromise((resolve) =>setTimeout(resolve('1'),1500)),newPromise((_, reject) =>setTimeout(reject(newError('2')),1000)),// Unhandled rejection]);
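As a hedged sketch of the AsyncGeneratorFunction conversion listed above (the uppercasing transform and the stdout pipeline are purely illustrative):

const { Duplex, pipeline } = require('node:stream');

// An async generator function becomes a readable/writable transform Duplex.
const upper = Duplex.from(async function* (source) {
  for await (const chunk of source) {
    yield String(chunk).toUpperCase();
  }
});

pipeline(['hello', 'world'], upper, process.stdout, (err) => {
  if (err) console.error(err);
});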
stream.Duplex.fromWeb(pair[, options])
#
History
Version | Changes |
---|---|
v24.0.0 | Marking the API stable. |
v17.0.0 | Added in: v17.0.0 |
- pair <Object>
  - readable <ReadableStream>
  - writable <WritableStream>
- options <Object>
- Returns: <stream.Duplex>
import {Duplex }from'node:stream';import {ReadableStream,WritableStream,}from'node:stream/web';const readable =newReadableStream({start(controller) { controller.enqueue('world'); },});const writable =newWritableStream({write(chunk) {console.log('writable', chunk); },});const pair = { readable, writable,};const duplex =Duplex.fromWeb(pair, {encoding:'utf8',objectMode:true });duplex.write('hello');forawait (const chunkof duplex) {console.log('readable', chunk);}
const {Duplex } =require('node:stream');const {ReadableStream,WritableStream,} =require('node:stream/web');const readable =newReadableStream({start(controller) { controller.enqueue('world'); },});const writable =newWritableStream({write(chunk) {console.log('writable', chunk); },});const pair = { readable, writable,};const duplex =Duplex.fromWeb(pair, {encoding:'utf8',objectMode:true });duplex.write('hello');duplex.once('readable',() =>console.log('readable', duplex.read()));
stream.Duplex.toWeb(streamDuplex)
#
History
Version | Changes |
---|---|
v24.0.0 | Marking the API stable. |
v17.0.0 | Added in: v17.0.0 |
- streamDuplex <stream.Duplex>
- Returns: <Object>
  - readable <ReadableStream>
  - writable <WritableStream>
import {Duplex }from'node:stream';const duplex =Duplex({objectMode:true,read() {this.push('world');this.push(null); },write(chunk, encoding, callback) {console.log('writable', chunk);callback(); },});const { readable, writable } =Duplex.toWeb(duplex);writable.getWriter().write('hello');const { value } =await readable.getReader().read();console.log('readable', value);
const {Duplex } =require('node:stream');const duplex =Duplex({objectMode:true,read() {this.push('world');this.push(null); },write(chunk, encoding, callback) {console.log('writable', chunk);callback(); },});const { readable, writable } =Duplex.toWeb(duplex);writable.getWriter().write('hello');readable.getReader().read().then((result) => {console.log('readable', result.value);});
stream.addAbortSignal(signal, stream)
#
History
Version | Changes |
---|---|
v19.7.0, v18.16.0 | Added support for |
v15.4.0 | Added in: v15.4.0 |
- signal <AbortSignal> A signal representing possible cancellation
- stream <Stream> | <ReadableStream> | <WritableStream> A stream to attach a signal to.
Attaches an AbortSignal to a readable or writable stream. This lets code control stream destruction using an AbortController.
Calling abort on the AbortController corresponding to the passed AbortSignal will behave the same way as calling .destroy(new AbortError()) on the stream, and controller.error(new AbortError()) for webstreams.
const fs =require('node:fs');const controller =newAbortController();const read =addAbortSignal( controller.signal, fs.createReadStream(('object.json')),);// Later, abort the operation closing the streamcontroller.abort();
Or using an AbortSignal with a readable stream as an async iterable:
const controller =newAbortController();setTimeout(() => controller.abort(),10_000);// set a timeoutconst stream =addAbortSignal( controller.signal, fs.createReadStream(('object.json')),);(async () => {try {forawait (const chunkof stream) {awaitprocess(chunk); } }catch (e) {if (e.name ==='AbortError') {// The operation was cancelled }else {throw e; } }})();
Or using an AbortSignal with a ReadableStream:
const controller =newAbortController();const rs =newReadableStream({start(controller) { controller.enqueue('hello'); controller.enqueue('world'); controller.close(); },});addAbortSignal(controller.signal, rs);finished(rs,(err) => {if (err) {if (err.name ==='AbortError') {// The operation was cancelled } }});const reader = rs.getReader();reader.read().then(({ value, done }) => {console.log(value);// helloconsole.log(done);// false controller.abort();});
stream.getDefaultHighWaterMark(objectMode)
#
Returns the default highWaterMark used by streams. Defaults to 65536 (64 KiB), or 16 for objectMode.
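A quick sketch of querying both defaults (the printed values assume a Node.js version matching this document):

const { getDefaultHighWaterMark } = require('node:stream');

console.log(getDefaultHighWaterMark(false)); // 65536 (bytes)
console.log(getDefaultHighWaterMark(true));  // 16 (objects)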
API for stream implementers#
The node:stream module API has been designed to make it possible to easily implement streams using JavaScript's prototypal inheritance model.
First, a stream developer would declare a new JavaScript class that extends one of the four basic stream classes (stream.Writable, stream.Readable, stream.Duplex, or stream.Transform), making sure they call the appropriate parent class constructor:
const {Writable } =require('node:stream');classMyWritableextendsWritable {constructor({ highWaterMark, ...options }) {super({ highWaterMark });// ... }}
When extending streams, keep in mind what options the user can and should provide before forwarding these to the base constructor. For example, if the implementation makes assumptions in regard to the autoDestroy and emitClose options, do not allow the user to override these. Be explicit about what options are forwarded instead of implicitly forwarding all options.
The new stream class must then implement one or more specific methods, depending on the type of stream being created, as detailed in the chart below:
Use-case | Class | Method(s) to implement |
---|---|---|
Reading only | Readable | _read() |
Writing only | Writable | _write() ,_writev() ,_final() |
Reading and writing | Duplex | _read() ,_write() ,_writev() ,_final() |
Operate on written data, then read the result | Transform | _transform() ,_flush() ,_final() |
The implementation code for a stream should never call the "public" methods of a stream that are intended for use by consumers (as described in the API for stream consumers section). Doing so may lead to adverse side effects in application code consuming the stream.
Avoid overriding public methods such as write(), end(), cork(), uncork(), read() and destroy(), or emitting internal events such as 'error', 'data', 'end', 'finish' and 'close' through .emit(). Doing so can break current and future stream invariants leading to behavior and/or compatibility issues with other streams, stream utilities, and user expectations.
Simplified construction#
For many simple cases, it is possible to create a stream without relying on inheritance. This can be accomplished by directly creating instances of the stream.Writable, stream.Readable, stream.Duplex, or stream.Transform objects and passing appropriate methods as constructor options.
const {Writable } =require('node:stream');const myWritable =newWritable({construct(callback) {// Initialize state and load resources... },write(chunk, encoding, callback) {// ... },destroy() {// Free resources... },});
Implementing a writable stream#
The stream.Writable class is extended to implement a Writable stream.
Custom Writable streams must call the new stream.Writable([options]) constructor and implement the writable._write() and/or writable._writev() method.
new stream.Writable([options])
#
History
Version | Changes |
---|---|
v22.0.0 | bump default highWaterMark. |
v15.5.0 | support passing in an AbortSignal. |
v14.0.0 | Change |
v11.2.0, v10.16.0 | Add |
v10.0.0 | Add |
- options <Object>
  - highWaterMark <number> Buffer level when stream.write() starts returning false. Default: 65536 (64 KiB), or 16 for objectMode streams.
  - decodeStrings <boolean> Whether to encode strings passed to stream.write() to Buffers (with the encoding specified in the stream.write() call) before passing them to stream._write(). Other types of data are not converted (i.e. Buffers are not decoded into strings). Setting to false will prevent strings from being converted. Default: true.
  - defaultEncoding <string> The default encoding that is used when no encoding is specified as an argument to stream.write(). Default: 'utf8'.
  - objectMode <boolean> Whether or not the stream.write(anyObj) is a valid operation. When set, it becomes possible to write JavaScript values other than string, <Buffer>, <TypedArray> or <DataView> if supported by the stream implementation. Default: false.
  - emitClose <boolean> Whether or not the stream should emit 'close' after it has been destroyed. Default: true.
  - write <Function> Implementation for the stream._write() method.
  - writev <Function> Implementation for the stream._writev() method.
  - destroy <Function> Implementation for the stream._destroy() method.
  - final <Function> Implementation for the stream._final() method.
  - construct <Function> Implementation for the stream._construct() method.
  - autoDestroy <boolean> Whether this stream should automatically call .destroy() on itself after ending. Default: true.
  - signal <AbortSignal> A signal representing possible cancellation.
const {Writable } =require('node:stream');classMyWritableextendsWritable {constructor(options) {// Calls the stream.Writable() constructor.super(options);// ... }}
Or, when using pre-ES6 style constructors:
const {Writable } =require('node:stream');const util =require('node:util');functionMyWritable(options) {if (!(thisinstanceofMyWritable))returnnewMyWritable(options);Writable.call(this, options);}util.inherits(MyWritable,Writable);
Or, using the simplified constructor approach:
const {Writable } =require('node:stream');const myWritable =newWritable({write(chunk, encoding, callback) {// ... },writev(chunks, callback) {// ... },});
Calling abort on the AbortController corresponding to the passed AbortSignal will behave the same way as calling .destroy(new AbortError()) on the writable stream.
const {Writable } =require('node:stream');const controller =newAbortController();const myWritable =newWritable({write(chunk, encoding, callback) {// ... },writev(chunks, callback) {// ... },signal: controller.signal,});// Later, abort the operation closing the streamcontroller.abort();
writable._construct(callback)
#
- callback <Function> Call this function (optionally with an error argument) when the stream has finished initializing.
The _construct() method MUST NOT be called directly. It may be implemented by child classes, and if so, will be called by the internal Writable class methods only.
This optional function will be called in a tick after the stream constructor has returned, delaying any _write(), _final() and _destroy() calls until callback is called. This is useful to initialize state or asynchronously initialize resources before the stream can be used.
const {Writable } =require('node:stream');const fs =require('node:fs');classWriteStreamextendsWritable {constructor(filename) {super();this.filename = filename;this.fd =null; }_construct(callback) { fs.open(this.filename,'w',(err, fd) => {if (err) {callback(err); }else {this.fd = fd;callback(); } }); }_write(chunk, encoding, callback) { fs.write(this.fd, chunk, callback); }_destroy(err, callback) {if (this.fd) { fs.close(this.fd,(er) =>callback(er || err)); }else {callback(err); } }}
writable._write(chunk, encoding, callback)
#
History
Version | Changes |
---|---|
v12.11.0 | _write() is optional when providing _writev(). |
- chunk <Buffer> | <string> | <any> The Buffer to be written, converted from the string passed to stream.write(). If the stream's decodeStrings option is false or the stream is operating in object mode, the chunk will not be converted and will be whatever was passed to stream.write().
- encoding <string> If the chunk is a string, then encoding is the character encoding of that string. If chunk is a Buffer, or if the stream is operating in object mode, encoding may be ignored.
- callback <Function> Call this function (optionally with an error argument) when processing is complete for the supplied chunk.
All Writable stream implementations must provide a writable._write() and/or writable._writev() method to send data to the underlying resource.
Transform streams provide their own implementation of the writable._write().
This function MUST NOT be called by application code directly. It should be implemented by child classes, and called by the internal Writable class methods only.
The callback function must be called synchronously inside of writable._write() or asynchronously (i.e. in a different tick) to signal either that the write completed successfully or failed with an error. The first argument passed to the callback must be the Error object if the call failed or null if the write succeeded.
All calls to writable.write() that occur between the time writable._write() is called and the callback is called will cause the written data to be buffered. When the callback is invoked, the stream might emit a 'drain' event. If a stream implementation is capable of processing multiple chunks of data at once, the writable._writev() method should be implemented.
If the decodeStrings property is explicitly set to false in the constructor options, then chunk will remain the same object that is passed to .write(), and may be a string rather than a Buffer. This is to support implementations that have optimized handling for certain string data encodings. In that case, the encoding argument will indicate the character encoding of the string. Otherwise, the encoding argument can be safely ignored.
The writable._write() method is prefixed with an underscore because it is internal to the class that defines it, and should never be called directly by user programs.
writable._writev(chunks, callback)
#
- chunks <Object[]> The data to be written. The value is an array of <Object> that each represent a discrete chunk of data to write. The properties of these objects are:
  - chunk <Buffer> | <string> A buffer instance or string containing the data to be written. The chunk will be a string if the Writable was created with the decodeStrings option set to false and a string was passed to write().
  - encoding <string> The character encoding of the chunk. If chunk is a Buffer, the encoding will be 'buffer'.
- callback <Function> A callback function (optionally with an error argument) to be invoked when processing is complete for the supplied chunks.
This function MUST NOT be called by application code directly. It should be implemented by child classes, and called by the internal Writable class methods only.
The writable._writev() method may be implemented in addition or alternatively to writable._write() in stream implementations that are capable of processing multiple chunks of data at once. If implemented and if there is buffered data from previous writes, _writev() will be called instead of _write().
The writable._writev() method is prefixed with an underscore because it is internal to the class that defines it, and should never be called directly by user programs.
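A hedged sketch of a Writable that batches buffered chunks in _writev(); the logging "sink" and the cork()/uncork() driver are illustrative only:

const { Writable } = require('node:stream');

class BatchingWritable extends Writable {
  _writev(chunks, callback) {
    // `chunks` is an array of { chunk, encoding } objects.
    const combined = Buffer.concat(chunks.map(({ chunk }) => chunk));
    // Illustrative sink: a real implementation would hand the batch to the
    // underlying resource here.
    console.log(`flushing ${combined.length} bytes in one batch`);
    callback();
  }
}

const w = new BatchingWritable();
w.cork();
w.write('a');
w.write('b');
process.nextTick(() => w.uncork()); // buffered writes are passed to _writev()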
writable._destroy(err, callback)
#
- err <Error> A possible error.
- callback <Function> A callback function that takes an optional error argument.
The _destroy() method is called by writable.destroy(). It can be overridden by child classes but it must not be called directly.
writable._final(callback)
#
- callback <Function> Call this function (optionally with an error argument) when finished writing any remaining data.
The _final() method must not be called directly. It may be implemented by child classes, and if so, will be called by the internal Writable class methods only.
This optional function will be called before the stream closes, delaying the 'finish' event until callback is called. This is useful to close resources or write buffered data before a stream ends.
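A brief sketch, assuming a hypothetical in-memory buffer, of flushing remaining data from final() before 'finish' is emitted:

const { Writable } = require('node:stream');

const myWritable = new Writable({
  construct(callback) {
    this.pending = [];
    callback();
  },
  write(chunk, encoding, callback) {
    // Accumulate chunks instead of writing them immediately.
    this.pending.push(chunk);
    callback();
  },
  final(callback) {
    // Flush everything still buffered before 'finish' is emitted.
    console.log(`flushing ${this.pending.length} buffered chunk(s)`);
    this.pending = [];
    callback();
  },
});

myWritable.write('a');
myWritable.end('b');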
Errors while writing#
Errors occurring during the processing of the writable._write(), writable._writev() and writable._final() methods must be propagated by invoking the callback and passing the error as the first argument. Throwing an Error from within these methods or manually emitting an 'error' event results in undefined behavior.
If a Readable stream pipes into a Writable stream when Writable emits an error, the Readable stream will be unpiped.
const {Writable } =require('node:stream');const myWritable =newWritable({write(chunk, encoding, callback) {if (chunk.toString().indexOf('a') >=0) {callback(newError('chunk is invalid')); }else {callback(); } },});
An example writable stream#
The following illustrates a rather simplistic (and somewhat pointless) custom Writable stream implementation. While this specific Writable stream instance is not of any real particular usefulness, the example illustrates each of the required elements of a custom Writable stream instance:
const {Writable } =require('node:stream');classMyWritableextendsWritable {_write(chunk, encoding, callback) {if (chunk.toString().indexOf('a') >=0) {callback(newError('chunk is invalid')); }else {callback(); } }}
Decoding buffers in a writable stream#
Decoding buffers is a common task, for instance, when using transformers whose input is a string. This is not a trivial process when using multi-byte character encodings, such as UTF-8. The following example shows how to decode multi-byte strings using StringDecoder and Writable.
const {Writable } =require('node:stream');const {StringDecoder } =require('node:string_decoder');classStringWritableextendsWritable {constructor(options) {super(options);this._decoder =newStringDecoder(options?.defaultEncoding);this.data =''; }_write(chunk, encoding, callback) {if (encoding ==='buffer') { chunk =this._decoder.write(chunk); }this.data += chunk;callback(); }_final(callback) {this.data +=this._decoder.end();callback(); }}const euro = [[0xE2,0x82], [0xAC]].map(Buffer.from);const w =newStringWritable();w.write('currency: ');w.write(euro[0]);w.end(euro[1]);console.log(w.data);// currency: €
Implementing a readable stream#
The stream.Readable class is extended to implement a Readable stream.
Custom Readable streams must call the new stream.Readable([options]) constructor and implement the readable._read() method.
new stream.Readable([options])
#
History
Version | Changes |
---|---|
v22.0.0 | bump default highWaterMark. |
v15.5.0 | support passing in an AbortSignal. |
v14.0.0 | Change |
v11.2.0, v10.16.0 | Add |
- options <Object>
  - highWaterMark <number> The maximum number of bytes to store in the internal buffer before ceasing to read from the underlying resource. Default: 65536 (64 KiB), or 16 for objectMode streams.
  - encoding <string> If specified, then buffers will be decoded to strings using the specified encoding. Default: null.
  - objectMode <boolean> Whether this stream should behave as a stream of objects, meaning that stream.read(n) returns a single value instead of a Buffer of size n. Default: false.
  - emitClose <boolean> Whether or not the stream should emit 'close' after it has been destroyed. Default: true.
  - read <Function> Implementation for the stream._read() method.
  - destroy <Function> Implementation for the stream._destroy() method.
  - construct <Function> Implementation for the stream._construct() method.
  - autoDestroy <boolean> Whether this stream should automatically call .destroy() on itself after ending. Default: true.
  - signal <AbortSignal> A signal representing possible cancellation.
const {Readable } =require('node:stream');classMyReadableextendsReadable {constructor(options) {// Calls the stream.Readable(options) constructor.super(options);// ... }}
Or, when using pre-ES6 style constructors:
const {Readable } =require('node:stream');const util =require('node:util');functionMyReadable(options) {if (!(thisinstanceofMyReadable))returnnewMyReadable(options);Readable.call(this, options);}util.inherits(MyReadable,Readable);
Or, using the simplified constructor approach:
const {Readable } =require('node:stream');const myReadable =newReadable({read(size) {// ... },});
Calling abort on the AbortController corresponding to the passed AbortSignal will behave the same way as calling .destroy(new AbortError()) on the readable created.
const {Readable } =require('node:stream');const controller =newAbortController();const read =newReadable({read(size) {// ... },signal: controller.signal,});// Later, abort the operation closing the streamcontroller.abort();
readable._construct(callback)
#
- callback <Function> Call this function (optionally with an error argument) when the stream has finished initializing.
The _construct() method MUST NOT be called directly. It may be implemented by child classes, and if so, will be called by the internal Readable class methods only.
This optional function will be scheduled in the next tick by the stream constructor, delaying any _read() and _destroy() calls until callback is called. This is useful to initialize state or asynchronously initialize resources before the stream can be used.
const {Readable } =require('node:stream');const fs =require('node:fs');classReadStreamextendsReadable {constructor(filename) {super();this.filename = filename;this.fd =null; }_construct(callback) { fs.open(this.filename,(err, fd) => {if (err) {callback(err); }else {this.fd = fd;callback(); } }); }_read(n) {const buf =Buffer.alloc(n); fs.read(this.fd, buf,0, n,null,(err, bytesRead) => {if (err) {this.destroy(err); }else {this.push(bytesRead >0 ? buf.slice(0, bytesRead) :null); } }); }_destroy(err, callback) {if (this.fd) { fs.close(this.fd,(er) =>callback(er || err)); }else {callback(err); } }}
readable._read(size)
#
- size <number> Number of bytes to read asynchronously
This function MUST NOT be called by application code directly. It should be implemented by child classes, and called by the internal Readable class methods only.
All Readable stream implementations must provide an implementation of the readable._read() method to fetch data from the underlying resource.
When readable._read() is called, if data is available from the resource, the implementation should begin pushing that data into the read queue using the this.push(dataChunk) method. _read() will be called again after each call to this.push(dataChunk) once the stream is ready to accept more data. _read() may continue reading from the resource and pushing data until readable.push() returns false. Only when _read() is called again after it has stopped should it resume pushing additional data into the queue.
Once the readable._read() method has been called, it will not be called again until more data is pushed through the readable.push() method. Empty data such as empty buffers and strings will not cause readable._read() to be called.
The size argument is advisory. Implementations where a "read" is a single operation that returns data can use the size argument to determine how much data to fetch. Other implementations may ignore this argument and simply provide data whenever it becomes available. There is no need to "wait" until size bytes are available before calling stream.push(chunk).
The readable._read() method is prefixed with an underscore because it is internal to the class that defines it, and should never be called directly by user programs.
readable._destroy(err, callback)
#
- err <Error> A possible error.
- callback <Function> A callback function that takes an optional error argument.
The _destroy() method is called by readable.destroy(). It can be overridden by child classes but it must not be called directly.
readable.push(chunk[, encoding])
#
History
Version | Changes |
---|---|
v22.0.0, v20.13.0 | The |
v8.0.0 | The |
- chunk <Buffer> | <TypedArray> | <DataView> | <string> | <null> | <any> Chunk of data to push into the read queue. For streams not operating in object mode, chunk must be a <string>, <Buffer>, <TypedArray> or <DataView>. For object mode streams, chunk may be any JavaScript value.
- encoding <string> Encoding of string chunks. Must be a valid Buffer encoding, such as 'utf8' or 'ascii'.
- Returns: <boolean> true if additional chunks of data may continue to be pushed; false otherwise.
When chunk is a <Buffer>, <TypedArray>, <DataView> or <string>, the chunk of data will be added to the internal queue for users of the stream to consume. Passing chunk as null signals the end of the stream (EOF), after which no more data can be written.
When the Readable is operating in paused mode, the data added with readable.push() can be read out by calling the readable.read() method when the 'readable' event is emitted.
When the Readable is operating in flowing mode, the data added with readable.push() will be delivered by emitting a 'data' event.
The readable.push() method is designed to be as flexible as possible. For example, when wrapping a lower-level source that provides some form of pause/resume mechanism, and a data callback, the low-level source can be wrapped by the custom Readable instance:
// `_source` is an object with readStop() and readStart() methods,// and an `ondata` member that gets called when it has data, and// an `onend` member that gets called when the data is over.classSourceWrapperextendsReadable {constructor(options) {super(options);this._source =getLowLevelSourceObject();// Every time there's data, push it into the internal buffer.this._source.ondata =(chunk) => {// If push() returns false, then stop reading from source.if (!this.push(chunk))this._source.readStop(); };// When the source ends, push the EOF-signaling `null` chunk.this._source.onend =() => {this.push(null); }; }// _read() will be called when the stream wants to pull more data in.// The advisory size argument is ignored in this case._read(size) {this._source.readStart(); }}
The readable.push() method is used to push the content into the internal buffer. It can be driven by the readable._read() method.
For streams not operating in object mode, if the chunk parameter of readable.push() is undefined, it will be treated as an empty string or buffer. See readable.push('') for more information.
Errors while reading#
Errors occurring during processing of the readable._read() must be propagated through the readable.destroy(err) method. Throwing an Error from within readable._read() or manually emitting an 'error' event results in undefined behavior.
const {Readable } =require('node:stream');const myReadable =newReadable({read(size) {const err =checkSomeErrorCondition();if (err) {this.destroy(err); }else {// Do some work. } },});
An example counting stream#
The following is a basic example of a Readable stream that emits the numerals from 1 to 1,000,000 in ascending order, and then ends.
const {Readable } =require('node:stream');classCounterextendsReadable {constructor(opt) {super(opt);this._max =1000000;this._index =1; }_read() {const i =this._index++;if (i >this._max)this.push(null);else {const str =String(i);const buf =Buffer.from(str,'ascii');this.push(buf); } }}
Implementing a duplex stream#
A Duplex stream is one that implements both Readable and Writable, such as a TCP socket connection.
Because JavaScript does not have support for multiple inheritance, the stream.Duplex class is extended to implement a Duplex stream (as opposed to extending the stream.Readable and stream.Writable classes).
The stream.Duplex class prototypically inherits from stream.Readable and parasitically from stream.Writable, but instanceof will work properly for both base classes due to overriding Symbol.hasInstance on stream.Writable.
Custom Duplex streams must call the new stream.Duplex([options]) constructor and implement both the readable._read() and writable._write() methods.
new stream.Duplex(options)
#
History
Version | Changes |
---|---|
v8.4.0 | The |
- options <Object> Passed to both Writable and Readable constructors. Also has the following fields:
  - allowHalfOpen <boolean> If set to false, then the stream will automatically end the writable side when the readable side ends. Default: true.
  - readable <boolean> Sets whether the Duplex should be readable. Default: true.
  - writable <boolean> Sets whether the Duplex should be writable. Default: true.
  - readableObjectMode <boolean> Sets objectMode for the readable side of the stream. Has no effect if objectMode is true. Default: false.
  - writableObjectMode <boolean> Sets objectMode for the writable side of the stream. Has no effect if objectMode is true. Default: false.
  - readableHighWaterMark <number> Sets highWaterMark for the readable side of the stream. Has no effect if highWaterMark is provided.
  - writableHighWaterMark <number> Sets highWaterMark for the writable side of the stream. Has no effect if highWaterMark is provided.
const {Duplex } =require('node:stream');classMyDuplexextendsDuplex {constructor(options) {super(options);// ... }}
Or, when using pre-ES6 style constructors:
const {Duplex } =require('node:stream');const util =require('node:util');functionMyDuplex(options) {if (!(thisinstanceofMyDuplex))returnnewMyDuplex(options);Duplex.call(this, options);}util.inherits(MyDuplex,Duplex);
Or, using the simplified constructor approach:
const {Duplex } =require('node:stream');const myDuplex =newDuplex({read(size) {// ... },write(chunk, encoding, callback) {// ... },});
When using pipeline:
const {Transform, pipeline } =require('node:stream');const fs =require('node:fs');pipeline( fs.createReadStream('object.json') .setEncoding('utf8'),newTransform({decodeStrings:false,// Accept string input rather than Buffersconstruct(callback) {this.data ='';callback(); },transform(chunk, encoding, callback) {this.data += chunk;callback(); },flush(callback) {try {// Make sure is valid json.JSON.parse(this.data);this.push(this.data);callback(); }catch (err) {callback(err); } }, }), fs.createWriteStream('valid-object.json'),(err) => {if (err) {console.error('failed', err); }else {console.log('completed'); } },);
An example duplex stream#
The following illustrates a simple example of a Duplex stream that wraps a hypothetical lower-level source object to which data can be written, and from which data can be read, albeit using an API that is not compatible with Node.js streams.
const {Duplex } =require('node:stream');const kSource =Symbol('source');classMyDuplexextendsDuplex {constructor(source, options) {super(options);this[kSource] = source; }_write(chunk, encoding, callback) {// The underlying source only deals with strings.if (Buffer.isBuffer(chunk)) chunk = chunk.toString();this[kSource].writeSomeData(chunk);callback(); }_read(size) {this[kSource].fetchSomeData(size,(data, encoding) => {this.push(Buffer.from(data, encoding)); }); }}
The most important aspect of a Duplex stream is that the Readable and Writable sides operate independently of one another despite co-existing within a single object instance.
Object mode duplex streams#
For Duplex streams, objectMode can be set exclusively for either the Readable or Writable side using the readableObjectMode and writableObjectMode options respectively.
In the following example, for instance, a new Transform stream (which is a type of Duplex stream) is created that has an object mode Writable side that accepts JavaScript numbers that are converted to hexadecimal strings on the Readable side.
const {Transform } =require('node:stream');// All Transform streams are also Duplex Streams.const myTransform =newTransform({writableObjectMode:true,transform(chunk, encoding, callback) {// Coerce the chunk to a number if necessary. chunk |=0;// Transform the chunk into something else.const data = chunk.toString(16);// Push the data onto the readable queue.callback(null,'0'.repeat(data.length %2) + data); },});myTransform.setEncoding('ascii');myTransform.on('data',(chunk) =>console.log(chunk));myTransform.write(1);// Prints: 01myTransform.write(10);// Prints: 0amyTransform.write(100);// Prints: 64
Implementing a transform stream#
A Transform stream is a Duplex stream where the output is computed in some way from the input. Examples include zlib streams or crypto streams that compress, encrypt, or decrypt data.
There is no requirement that the output be the same size as the input, the same number of chunks, or arrive at the same time. For example, a Hash stream will only ever have a single chunk of output which is provided when the input is ended. A zlib stream will produce output that is either much smaller or much larger than its input.
The stream.Transform class is extended to implement a Transform stream.
The stream.Transform class prototypically inherits from stream.Duplex and implements its own versions of the writable._write() and readable._read() methods. Custom Transform implementations must implement the transform._transform() method and may also implement the transform._flush() method.
Care must be taken when using Transform streams in that data written to the stream can cause the Writable side of the stream to become paused if the output on the Readable side is not consumed.
new stream.Transform([options])
#
- options <Object> Passed to both Writable and Readable constructors. Also has the following fields:
  - transform <Function> Implementation for the stream._transform() method.
  - flush <Function> Implementation for the stream._flush() method.
const {Transform } =require('node:stream');classMyTransformextendsTransform {constructor(options) {super(options);// ... }}
Or, when using pre-ES6 style constructors:
const {Transform } =require('node:stream');const util =require('node:util');functionMyTransform(options) {if (!(thisinstanceofMyTransform))returnnewMyTransform(options);Transform.call(this, options);}util.inherits(MyTransform,Transform);
Or, using the simplified constructor approach:
const {Transform } =require('node:stream');const myTransform =newTransform({transform(chunk, encoding, callback) {// ... },});
Event:'end'
#
The 'end' event is from the stream.Readable class. The 'end' event is emitted after all data has been output, which occurs after the callback in transform._flush() has been called. In the case of an error, 'end' should not be emitted.
Event: 'finish'#
The 'finish' event is from the stream.Writable class. The 'finish' event is emitted after stream.end() is called and all chunks have been processed by stream._transform(). In the case of an error, 'finish' should not be emitted.
transform._flush(callback)#
- callback <Function> A callback function (optionally with an error argument and data) to be called when remaining data has been flushed.
This function MUST NOT be called by application code directly. It should be implemented by child classes, and called by the internal Readable class methods only.
In some cases, a transform operation may need to emit an additional bit of data at the end of the stream. For example, a zlib compression stream will store an amount of internal state used to optimally compress the output. When the stream ends, however, that additional data needs to be flushed so that the compressed data will be complete.
Custom Transform implementations may implement the transform._flush() method. This will be called when there is no more written data to be consumed, but before the 'end' event is emitted signaling the end of the Readable stream.
Within the transform._flush() implementation, the transform.push() method may be called zero or more times, as appropriate. The callback function must be called when the flush operation is complete.
The transform._flush() method is prefixed with an underscore because it is internal to the class that defines it, and should never be called directly by user programs.
transform._transform(chunk, encoding, callback)#
- chunk <Buffer> | <string> | <any> The Buffer to be transformed, converted from the string passed to stream.write(). If the stream's decodeStrings option is false or the stream is operating in object mode, the chunk will not be converted & will be whatever was passed to stream.write().
- encoding <string> If the chunk is a string, then this is the encoding type. If chunk is a buffer, then this is the special value 'buffer'. Ignore it in that case.
- callback <Function> A callback function (optionally with an error argument and data) to be called after the supplied chunk has been processed.
This function MUST NOT be called by application code directly. It should be implemented by child classes, and called by the internal Readable class methods only.
All Transform stream implementations must provide a _transform() method to accept input and produce output. The transform._transform() implementation handles the bytes being written, computes an output, then passes that output off to the readable portion using the transform.push() method.
The transform.push() method may be called zero or more times to generate output from a single input chunk, depending on how much is to be output as a result of the chunk.
It is possible that no output is generated from any given chunk of input data.
The callback function must be called only when the current chunk is completely consumed. The first argument passed to the callback must be an Error object if an error occurred while processing the input or null otherwise. If a second argument is passed to the callback, it will be forwarded on to the transform.push() method, but only if the first argument is falsy. In other words, the following are equivalent:
transform.prototype._transform = function(data, encoding, callback) {
  this.push(data);
  callback();
};

transform.prototype._transform = function(data, encoding, callback) {
  callback(null, data);
};
The transform._transform() method is prefixed with an underscore because it is internal to the class that defines it, and should never be called directly by user programs.
transform._transform() is never called in parallel; streams implement a queue mechanism, and to receive the next chunk, callback must be called, either synchronously or asynchronously.
Class: stream.PassThrough#
The stream.PassThrough class is a trivial implementation of a Transform stream that simply passes the input bytes across to the output. Its purpose is primarily for examples and testing, but there are some use cases where stream.PassThrough is useful as a building block for novel sorts of streams.
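A minimal sketch of such a use (the values written are illustrative):

const { PassThrough } = require('node:stream');

// Everything written to the PassThrough comes out of its Readable side
// unchanged, which makes it convenient for tests and for plumbing streams
// together.
const pass = new PassThrough();
pass.on('data', (chunk) => console.log(chunk.toString()));
pass.write('hello ');
pass.end('world');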
Additional notes#
Streams compatibility with async generators and async iterators#
With the support of async generators and iterators in JavaScript, async generators are effectively a first-class language-level stream construct at this point.
Some common interop cases of using Node.js streams with async generators and async iterators are provided below.
Consuming readable streams with async iterators#
(async function() {
  for await (const chunk of readable) {
    console.log(chunk);
  }
})();
Async iterators register a permanent error handler on the stream to prevent any unhandled post-destroy errors.
Creating readable streams with async generators#
A Node.js readable stream can be created from an asynchronous generator using the Readable.from() utility method:
const { Readable } = require('node:stream');

const ac = new AbortController();
const signal = ac.signal;

async function * generate() {
  yield 'a';
  await someLongRunningFn({ signal });
  yield 'b';
  yield 'c';
}

const readable = Readable.from(generate());
readable.on('close', () => {
  ac.abort();
});

readable.on('data', (chunk) => {
  console.log(chunk);
});
Piping to writable streams from async iterators#
When writing to a writable stream from an async iterator, ensure correct handling of backpressure and errors. stream.pipeline() abstracts away the handling of backpressure and backpressure-related errors:
const fs = require('node:fs');
const { pipeline } = require('node:stream');
const { pipeline: pipelinePromise } = require('node:stream/promises');

const writable = fs.createWriteStream('./file');

const ac = new AbortController();
const signal = ac.signal;

const iterator = createIterator({ signal });

// Callback Pattern
pipeline(iterator, writable, (err, value) => {
  if (err) {
    console.error(err);
  } else {
    console.log(value, 'value returned');
  }
}).on('close', () => {
  ac.abort();
});

// Promise Pattern
pipelinePromise(iterator, writable)
  .then((value) => {
    console.log(value, 'value returned');
  })
  .catch((err) => {
    console.error(err);
    ac.abort();
  });
Compatibility with older Node.js versions#
Prior to Node.js 0.10, the Readable stream interface was simpler, but also less powerful and less useful.
- Rather than waiting for calls to the stream.read() method, 'data' events would begin emitting immediately. Applications that would need to perform some amount of work to decide how to handle data were required to store read data into buffers so the data would not be lost.
- The stream.pause() method was advisory, rather than guaranteed. This meant that it was still necessary to be prepared to receive 'data' events even when the stream was in a paused state.
In Node.js 0.10, the Readable class was added. For backward compatibility with older Node.js programs, Readable streams switch into "flowing mode" when a 'data' event handler is added, or when the stream.resume() method is called. The effect is that, even when not using the new stream.read() method and 'readable' event, it is no longer necessary to worry about losing 'data' chunks.
While most applications will continue to function normally, this introduces an edge case in the following conditions:
- No 'data' event listener is added.
- The stream.resume() method is never called.
- The stream is not piped to any writable destination.
For example, consider the following code:
const net = require('node:net');

// WARNING!  BROKEN!
net.createServer((socket) => {
  // We add an 'end' listener, but never consume the data.
  socket.on('end', () => {
    // It will never get here.
    socket.end('The message was received but was not processed.\n');
  });
}).listen(1337);
Prior to Node.js 0.10, the incoming message data would be simply discarded. However, in Node.js 0.10 and beyond, the socket remains paused forever.
The workaround in this situation is to call the stream.resume() method to begin the flow of data:
const net = require('node:net');

// Workaround.
net.createServer((socket) => {
  socket.on('end', () => {
    socket.end('The message was received but was not processed.\n');
  });

  // Start the flow of data, discarding it.
  socket.resume();
}).listen(1337);
In addition to new Readable streams switching into flowing mode, pre-0.10 style streams can be wrapped in a Readable class using the readable.wrap() method.
readable.read(0)#
There are some cases where it is necessary to trigger a refresh of the underlying readable stream mechanisms, without actually consuming any data. In such cases, it is possible to call readable.read(0), which will always return null.
If the internal read buffer is below the highWaterMark, and the stream is not currently reading, then calling stream.read(0) will trigger a low-level stream._read() call.
While most applications will almost never need to do this, there are situations within Node.js where this is done, particularly in the Readable stream class internals.
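A minimal sketch of that behavior (the logging is illustrative only):

const { Readable } = require('node:stream');

const readable = new Readable({
  read(size) {
    // Triggered by read(0) below because the internal buffer is empty and
    // the stream is not already reading.
    console.log('_read() was triggered with size', size);
  },
});

console.log(readable.read(0));  // Prints: null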
readable.push('')#
Use of readable.push('') is not recommended.
Pushing a zero-byte <string>, <Buffer>, <TypedArray> or <DataView> to a stream that is not in object mode has an interesting side effect. Because it is a call to readable.push(), the call will end the reading process. However, because the argument is an empty string, no data is added to the readable buffer so there is nothing for a user to consume.
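A hedged sketch of that side effect (the stream below is illustrative only):

const { Readable } = require('node:stream');

const readable = new Readable({
  read() {
    // Satisfies the pending read request, but queues zero bytes.
    this.push('');
  },
});

readable.on('data', (chunk) => {
  // Never reached: nothing was added to the readable buffer.
  console.log(chunk);
});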
highWaterMark discrepancy after calling readable.setEncoding()#
The use of readable.setEncoding() will change the behavior of how the highWaterMark operates in non-object mode.
Typically, the size of the current buffer is measured against the highWaterMark in bytes. However, after setEncoding() is called, the comparison function will begin to measure the buffer's size in characters.
This is not a problem in common cases with latin1 or ascii. But it is advised to be mindful about this behavior when working with strings that could contain multi-byte characters.
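A hedged sketch of the discrepancy, using the euro sign ('€', three bytes in UTF-8 but a single character once decoded):

const { Readable } = require('node:stream');

const readable = new Readable({ read() {} });
readable.setEncoding('utf8');

readable.push(Buffer.from('€'));       // Three bytes are pushed...
console.log(readable.readableLength);  // ...but the buffered size is measured as 1 character.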