About this documentation#
Welcome to the official API reference documentation for Node.js!
Node.js is a JavaScript runtime built on the V8 JavaScript engine.
Contributing#
Report errors in this documentation in the issue tracker. See the contributing guide for directions on how to submit pull requests.
Stability index#
Throughout the documentation are indications of a section's stability. Some APIs are so proven and so relied upon that they are unlikely to ever change at all. Others are brand new and experimental, or known to be hazardous.

The stability indexes are as follows:

- 0 - Deprecated. The feature may emit warnings. Backward compatibility is not guaranteed.
- 1 - Experimental. The feature is not subject to semantic versioning rules. Non-backward compatible changes or removal may occur in any future release. Use of the feature is not recommended in production environments.
- 2 - Stable. Compatibility with the npm ecosystem is a high priority.
- 3 - Legacy. Although this feature is unlikely to be removed and is still covered by semantic versioning guarantees, it is no longer actively maintained, and other alternatives are available.

Experimental features are subdivided into stages:
- 1.0 - Early development. Experimental features at this stage are unfinished and subject to substantial change.
- 1.1 - Active development. Experimental features at this stage are nearing minimum viability.
- 1.2 - Release candidate. Experimental features at this stage are hopefully ready to become stable. No further breaking changes are anticipated but may still occur in response to user feedback or the feature's underlying specification development. We encourage user testing and feedback so that we can know that this feature is ready to be marked as stable.

Experimental features typically leave experimental status either by graduating to stable or by being removed without a deprecation cycle.
Features are marked as legacy rather than being deprecated if their use does no harm and they are widely relied upon within the npm ecosystem. Bugs found in legacy features are unlikely to be fixed.

Use caution when making use of Experimental features, particularly when authoring libraries. Users may not be aware that experimental features are being used. Bugs or behavior changes may surprise users when Experimental API modifications occur. To avoid surprises, use of an Experimental feature may need a command-line flag. Experimental features may also emit a warning.
Stability overview#
JSON output#
Every .html document has a corresponding .json document. This is for IDEs and other utilities that consume the documentation.
System calls and man pages#
Node.js functions which wrap a system call will document that. The docs link to the corresponding man pages which describe how the system call works.

Most Unix system calls have Windows analogues. Still, behavior differences may be unavoidable.
Usage and example#
Usage#
```bash
node [options] [V8 options] [script.js | -e "script" | - ] [arguments]
```
Please see the Command-line options document for more information.
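For instance, the -e form from the usage line above runs an inline script without a script file; a minimal sketch (the printed value depends on the installed Node.js version):

```shell
# Run an inline script with -e instead of a script file.
node -e "console.log(process.version)"
```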
Example#
An example of a web server written with Node.js which responds with 'Hello, World!':

Commands in this document start with $ or > to replicate how they would appear in a user's terminal. Do not include the $ and > characters. They are there to show the start of each command.

Lines that don't start with a $ or > character show the output of the previous command.
First, make sure to have downloaded and installed Node.js. See Installing Node.js via package manager for further install information.

Now, create an empty project folder called projects, then navigate into it.
Linux and Mac:

```bash
mkdir ~/projects
cd ~/projects
```

Windows CMD:

```bat
mkdir %USERPROFILE%\projects
cd %USERPROFILE%\projects
```

Windows PowerShell:

```powershell
mkdir $env:USERPROFILE\projects
cd $env:USERPROFILE\projects
```

Next, create a new source file in the projects folder and call it hello-world.js.
Open hello-world.js in any preferred text editor and paste in the following content:

```js
const http = require('node:http');

const hostname = '127.0.0.1';
const port = 3000;

const server = http.createServer((req, res) => {
  res.statusCode = 200;
  res.setHeader('Content-Type', 'text/plain');
  res.end('Hello, World!\n');
});

server.listen(port, hostname, () => {
  console.log(`Server running at http://${hostname}:${port}/`);
});
```

Save the file. Then, in the terminal window, to run the hello-world.js file, enter:

```bash
node hello-world.js
```

Output like this should appear in the terminal:

```text
Server running at http://127.0.0.1:3000/
```

Now, open any preferred web browser and visit http://127.0.0.1:3000.

If the browser displays the string Hello, World!, that indicates the server is working.
Assert#
Source Code: lib/assert.js

The node:assert module provides a set of assertion functions for verifying invariants.
Strict assertion mode#
History
| Version | Changes |
|---|---|
| v15.0.0 | Exposed as require('node:assert/strict'). |
| v13.9.0, v12.16.2 | Changed "strict mode" to "strict assertion mode" and "legacy mode" to "legacy assertion mode" to avoid confusion with the more usual meaning of "strict mode". |
| v9.9.0 | Added error diffs to the strict assertion mode. |
| v9.9.0 | Added strict assertion mode to the assert module. |
In strict assertion mode, non-strict methods behave like their corresponding strict methods. For example, assert.deepEqual() will behave like assert.deepStrictEqual().
In strict assertion mode, error messages for objects display a diff. In legacyassertion mode, error messages for objects display the objects, often truncated.
To use strict assertion mode:
```js
import { strict as assert } from 'node:assert';
```

```js
const assert = require('node:assert').strict;
```

```js
import assert from 'node:assert/strict';
```

```js
const assert = require('node:assert/strict');
```
Example error diff:
```js
import { strict as assert } from 'node:assert';

assert.deepEqual([[[1, 2, 3]], 4, 5], [[[1, 2, '3']], 4, 5]);
// AssertionError: Expected inputs to be strictly deep-equal:
// + actual - expected ... Lines skipped
//
//   [
//     [
// ...
//       2,
// +     3
// -     '3'
//     ],
// ...
//     5
//   ]
```
To deactivate the colors, use the NO_COLOR or NODE_DISABLE_COLORS environment variables. This will also deactivate the colors in the REPL. For more on color support in terminal environments, read the tty getColorDepth() documentation.
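For example, a failing strict assertion run with NO_COLOR set prints a plain-text diff; the inline one-liner here is just an illustration:

```shell
# Disable colored diffs; NODE_DISABLE_COLORS=1 works the same way.
NO_COLOR=1 node -e "const assert = require('node:assert/strict'); try { assert.deepEqual([1], [2]); } catch (err) { console.log(err.message); }"
```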
Legacy assertion mode#
Legacy assertion mode uses the == operator in:

- assert.deepEqual()
- assert.equal()
- assert.notDeepEqual()
- assert.notEqual()
To use legacy assertion mode:
```js
import assert from 'node:assert';
```

```js
const assert = require('node:assert');
```

Legacy assertion mode may have surprising results, especially when using assert.deepEqual():

```js
// WARNING: This does not throw an AssertionError in legacy assertion mode!
assert.deepEqual(/a/gi, new Date());
```

Class: assert.AssertionError#
- Extends:<errors.Error>
Indicates the failure of an assertion. All errors thrown by the node:assert module will be instances of the AssertionError class.
new assert.AssertionError(options)#
- options <Object>
  - message <string> If provided, the error message is set to this value.
  - actual <any> The actual property on the error instance.
  - expected <any> The expected property on the error instance.
  - operator <string> The operator property on the error instance.
  - stackStartFn <Function> If provided, the generated stack trace omits frames before this function.
  - diff <string> If set to 'full', shows the full diff in assertion errors. Defaults to 'simple'. Accepted values: 'simple', 'full'.
A subclass of<Error> that indicates the failure of an assertion.
All instances contain the built-in Error properties (message and name) and:

- actual <any> Set to the actual argument for methods such as assert.strictEqual().
- expected <any> Set to the expected value for methods such as assert.strictEqual().
- generatedMessage <boolean> Indicates if the message was auto-generated (true) or not.
- code <string> Value is always ERR_ASSERTION to show that the error is an assertion error.
- operator <string> Set to the passed in operator value.

```js
import assert from 'node:assert';

// Generate an AssertionError to compare the error message later:
const { message } = new assert.AssertionError({
  actual: 1,
  expected: 2,
  operator: 'strictEqual',
});

// Verify error output:
try {
  assert.strictEqual(1, 2);
} catch (err) {
  assert(err instanceof assert.AssertionError);
  assert.strictEqual(err.message, message);
  assert.strictEqual(err.name, 'AssertionError');
  assert.strictEqual(err.actual, 1);
  assert.strictEqual(err.expected, 2);
  assert.strictEqual(err.code, 'ERR_ASSERTION');
  assert.strictEqual(err.operator, 'strictEqual');
  assert.strictEqual(err.generatedMessage, true);
}
```
Class:assert.Assert#
The Assert class allows creating independent assertion instances with custom options.
new assert.Assert([options])#
History
| Version | Changes |
|---|---|
| v24.9.0 | Added in: v24.9.0 |
- options <Object>
  - diff <string> If set to 'full', shows the full diff in assertion errors. Defaults to 'simple'. Accepted values: 'simple', 'full'.
  - strict <boolean> If set to true, non-strict methods behave like their corresponding strict methods. Defaults to true.
  - skipPrototype <boolean> If set to true, skips prototype and constructor comparison in deep equality checks. Defaults to false.

Creates a new assertion instance. The diff option controls the verbosity of diffs in assertion error messages.

```js
const { Assert } = require('node:assert');

const assertInstance = new Assert({ diff: 'full' });
assertInstance.deepStrictEqual({ a: 1 }, { a: 2 });
// Shows a full diff in the error message.
```

Important: When destructuring assertion methods from an Assert instance, the methods lose their connection to the instance's configuration options (such as diff, strict, and skipPrototype settings). The destructured methods will fall back to default behavior instead.

```js
const myAssert = new Assert({ diff: 'full' });

// This works as expected - uses 'full' diff
myAssert.strictEqual({ a: 1 }, { b: { c: 1 } });

// This loses the 'full' diff setting - falls back to default 'simple' diff
const { strictEqual } = myAssert;
strictEqual({ a: 1 }, { b: { c: 1 } });
```

The skipPrototype option affects all deep equality methods:

```js
class Foo {
  constructor(a) {
    this.a = a;
  }
}

class Bar {
  constructor(a) {
    this.a = a;
  }
}

const foo = new Foo(1);
const bar = new Bar(1);

// Default behavior - fails due to different constructors
const assert1 = new Assert();
assert1.deepStrictEqual(foo, bar);
// AssertionError

// Skip prototype comparison - passes if properties are equal
const assert2 = new Assert({ skipPrototype: true });
assert2.deepStrictEqual(foo, bar);
// OK
```

When destructured, methods lose access to the instance's this context and revert to default assertion behavior (diff: 'simple', non-strict mode). To maintain custom options, avoid destructuring and call methods directly on the instance.
assert(value[, message])#
An alias of assert.ok().
assert.deepEqual(actual, expected[, message])#
History
| Version | Changes |
|---|---|
| v25.0.0 | Promises are not considered equal anymore if they are not of the same instance. |
| v25.0.0 | Invalid dates are now considered equal. |
| v24.0.0 | Recursion now stops when either side encounters a circular reference. |
| v22.2.0, v20.15.0 | Error cause and errors properties are now compared as well. |
| v18.0.0 | Regular expressions lastIndex property is now compared as well. |
| v16.0.0, v14.18.0 | In Legacy assertion mode, changed status from Deprecated to Legacy. |
| v14.0.0 | NaN is now treated as being identical if both sides are NaN. |
| v12.0.0 | The type tags are now properly compared and there are a couple minor comparison adjustments to make the check less surprising. |
| v9.0.0 | The Error names and messages are now properly compared. |
| v8.0.0 | The Set and Map content is also compared. |
| v6.4.0, v4.7.1 | Typed array slices are handled correctly now. |
| v6.1.0, v4.5.0 | Objects with circular references can be used as inputs now. |
| v5.10.1, v4.4.3 | Handle non-Uint8Array typed arrays correctly. |
| v0.1.21 | Added in: v0.1.21 |
Strict assertion mode
An alias ofassert.deepStrictEqual().
Legacy assertion mode
Stability: 3 - Legacy: Use assert.deepStrictEqual() instead.

Tests for deep equality between the actual and expected parameters. Consider using assert.deepStrictEqual() instead. assert.deepEqual() can have surprising results.

Deep equality means that the enumerable "own" properties of child objects are also recursively evaluated by the following rules.
Comparison details#
- Primitive values are compared with the == operator, with the exception of <NaN>. It is treated as being identical in case both sides are <NaN>.
- Type tags of objects should be the same.
- Only enumerable "own" properties are considered.
- Object constructors are compared when available.
- <Error> names, messages, causes, and errors are always compared, even if these are not enumerable properties.
- Object wrappers are compared both as objects and unwrapped values.
- Object properties are compared unordered.
- <Map> keys and <Set> items are compared unordered.
- Recursion stops when both sides differ or either side encounters a circular reference.
- Implementation does not test the [[Prototype]] of objects.
- <Symbol> properties are not compared.
- <WeakMap>, <WeakSet> and <Promise> instances are not compared structurally. They are only equal if they reference the same object. Any comparison between different WeakMap, WeakSet, or Promise instances will result in inequality, even if they contain the same content.
- <RegExp> lastIndex, flags, and source are always compared, even if these are not enumerable properties.
The following example does not throw an AssertionError because the primitives are compared using the == operator.

```js
import assert from 'node:assert';

// WARNING: This does not throw an AssertionError!
assert.deepEqual('+00000000', false);
```
"Deep" equality means that the enumerable "own" properties of child objectsare evaluated also:
import assertfrom'node:assert';const obj1 = {a: {b:1, },};const obj2 = {a: {b:2, },};const obj3 = {a: {b:1, },};const obj4 = {__proto__: obj1 };assert.deepEqual(obj1, obj1);// OK// Values of b are different:assert.deepEqual(obj1, obj2);// AssertionError: { a: { b: 1 } } deepEqual { a: { b: 2 } }assert.deepEqual(obj1, obj3);// OK// Prototypes are ignored:assert.deepEqual(obj1, obj4);// AssertionError: { a: { b: 1 } } deepEqual {}const assert =require('node:assert');const obj1 = {a: {b:1, },};const obj2 = {a: {b:2, },};const obj3 = {a: {b:1, },};const obj4 = {__proto__: obj1 };assert.deepEqual(obj1, obj1);// OK// Values of b are different:assert.deepEqual(obj1, obj2);// AssertionError: { a: { b: 1 } } deepEqual { a: { b: 2 } }assert.deepEqual(obj1, obj3);// OK// Prototypes are ignored:assert.deepEqual(obj1, obj4);// AssertionError: { a: { b: 1 } } deepEqual {}
If the values are not equal, an AssertionError is thrown with a message property set equal to the value of the message parameter. If the message parameter is undefined, a default error message is assigned. If the message parameter is an instance of <Error> then it will be thrown instead of the AssertionError.
assert.deepStrictEqual(actual, expected[, message])#
History
| Version | Changes |
|---|---|
| v25.0.0 | Promises are not considered equal anymore if they are not of the same instance. |
| v25.0.0 | Invalid dates are now considered equal. |
| v24.0.0 | Recursion now stops when either side encounters a circular reference. |
| v22.2.0, v20.15.0 | Error cause and errors properties are now compared as well. |
| v18.0.0 | Regular expressions lastIndex property is now compared as well. |
| v9.0.0 | Enumerable symbol properties are now compared. |
| v9.0.0 | The NaN is now compared using the SameValueZero comparison. |
| v8.5.0 | The Error names and messages are now properly compared. |
| v8.0.0 | The Set and Map content is also compared. |
| v6.1.0 | Objects with circular references can be used as inputs now. |
| v6.4.0, v4.7.1 | Typed array slices are handled correctly now. |
| v5.10.1, v4.4.3 | Handle non-Uint8Array typed arrays correctly. |
| v1.2.0 | Added in: v1.2.0 |
Tests for deep equality between the actual and expected parameters. "Deep" equality means that the enumerable "own" properties of child objects are also recursively evaluated by the following rules.
Comparison details#
- Primitive values are compared using Object.is().
- Type tags of objects should be the same.
- [[Prototype]] of objects are compared using the === operator.
- Only enumerable "own" properties are considered.
- <Error> names, messages, causes, and errors are always compared, even if these are not enumerable properties.
- Enumerable own <Symbol> properties are compared as well.
- Object wrappers are compared both as objects and unwrapped values.
- Object properties are compared unordered.
- <Map> keys and <Set> items are compared unordered.
- Recursion stops when both sides differ or either side encounters a circular reference.
- <WeakMap>, <WeakSet> and <Promise> instances are not compared structurally. They are only equal if they reference the same object. Any comparison between different WeakMap, WeakSet, or Promise instances will result in inequality, even if they contain the same content.
- <RegExp> lastIndex, flags, and source are always compared, even if these are not enumerable properties.
```js
import assert from 'node:assert/strict';

// This fails because 1 !== '1'.
assert.deepStrictEqual({ a: 1 }, { a: '1' });
// AssertionError: Expected inputs to be strictly deep-equal:
// + actual - expected
//
//   {
// +   a: 1
// -   a: '1'
//   }

// The following objects don't have own properties
const date = new Date();
const object = {};
const fakeDate = {};
Object.setPrototypeOf(fakeDate, Date.prototype);

// Different [[Prototype]]:
assert.deepStrictEqual(object, fakeDate);
// AssertionError: Expected inputs to be strictly deep-equal:
// + actual - expected
//
// + {}
// - Date {}

// Different type tags:
assert.deepStrictEqual(date, fakeDate);
// AssertionError: Expected inputs to be strictly deep-equal:
// + actual - expected
//
// + 2018-04-26T00:49:08.604Z
// - Date {}

assert.deepStrictEqual(NaN, NaN);
// OK because Object.is(NaN, NaN) is true.

// Different unwrapped numbers:
assert.deepStrictEqual(new Number(1), new Number(2));
// AssertionError: Expected inputs to be strictly deep-equal:
// + actual - expected
//
// + [Number: 1]
// - [Number: 2]

assert.deepStrictEqual(new String('foo'), Object('foo'));
// OK because the object and the string are identical when unwrapped.

assert.deepStrictEqual(-0, -0);
// OK

// Different zeros:
assert.deepStrictEqual(0, -0);
// AssertionError: Expected inputs to be strictly deep-equal:
// + actual - expected
//
// + 0
// - -0

const symbol1 = Symbol();
const symbol2 = Symbol();
assert.deepStrictEqual({ [symbol1]: 1 }, { [symbol1]: 1 });
// OK, because it is the same symbol on both objects.

assert.deepStrictEqual({ [symbol1]: 1 }, { [symbol2]: 1 });
// AssertionError [ERR_ASSERTION]: Inputs identical but not reference equal:
//
// {
//   Symbol(): 1
// }

const weakMap1 = new WeakMap();
const weakMap2 = new WeakMap();
const obj = {};
weakMap1.set(obj, 'value');
weakMap2.set(obj, 'value');

// Comparing different instances fails, even with same contents
assert.deepStrictEqual(weakMap1, weakMap2);
// AssertionError: Values have same structure but are not reference-equal:
//
// WeakMap {
//   <items unknown>
// }

// Comparing the same instance to itself succeeds
assert.deepStrictEqual(weakMap1, weakMap1);
// OK

const weakSet1 = new WeakSet();
const weakSet2 = new WeakSet();
weakSet1.add(obj);
weakSet2.add(obj);

// Comparing different instances fails, even with same contents
assert.deepStrictEqual(weakSet1, weakSet2);
// AssertionError: Values have same structure but are not reference-equal:
// + actual - expected
//
// WeakSet {
//   <items unknown>
// }

// Comparing the same instance to itself succeeds
assert.deepStrictEqual(weakSet1, weakSet1);
// OK
```
If the values are not equal, an AssertionError is thrown with a message property set equal to the value of the message parameter. If the message parameter is undefined, a default error message is assigned. If the message parameter is an instance of <Error> then it will be thrown instead of the AssertionError.
assert.doesNotMatch(string, regexp[, message])#
History
| Version | Changes |
|---|---|
| v16.0.0 | This API is no longer experimental. |
| v13.6.0, v12.16.0 | Added in: v13.6.0, v12.16.0 |
Expects the string input not to match the regular expression.

```js
import assert from 'node:assert/strict';

assert.doesNotMatch('I will fail', /fail/);
// AssertionError [ERR_ASSERTION]: The input was expected to not match the ...

assert.doesNotMatch(123, /pass/);
// AssertionError [ERR_ASSERTION]: The "string" argument must be of type string.

assert.doesNotMatch('I will pass', /different/);
// OK
```

If the values do match, or if the string argument is of another type than string, an AssertionError is thrown with a message property set equal to the value of the message parameter. If the message parameter is undefined, a default error message is assigned. If the message parameter is an instance of <Error> then it will be thrown instead of the AssertionError.
assert.doesNotReject(asyncFn[, error][, message])#
- asyncFn <Function> | <Promise>
- error <RegExp> | <Function>
- message <string>
- Returns: <Promise>

Awaits the asyncFn promise or, if asyncFn is a function, immediately calls the function and awaits the returned promise to complete. It will then check that the promise is not rejected.

If asyncFn is a function and it throws an error synchronously, assert.doesNotReject() will return a rejected Promise with that error. If the function does not return a promise, assert.doesNotReject() will return a rejected Promise with an ERR_INVALID_RETURN_VALUE error. In both cases the error handler is skipped.
Using assert.doesNotReject() is actually not useful because there is little benefit in catching a rejection and then rejecting it again. Instead, consider adding a comment next to the specific code path that should not reject and keep error messages as expressive as possible.

If specified, error can be a Class, <RegExp> or a validation function. See assert.throws() for more details.
Other than the async nature of awaiting the completion, it behaves identically to assert.doesNotThrow().
```js
import assert from 'node:assert/strict';

await assert.doesNotReject(
  async () => {
    throw new TypeError('Wrong value');
  },
  SyntaxError,
);
```

```js
import assert from 'node:assert/strict';

assert.doesNotReject(Promise.reject(new TypeError('Wrong value')))
  .then(() => {
    // ...
  });
```
assert.doesNotThrow(fn[, error][, message])#
History
| Version | Changes |
|---|---|
| v5.11.0, v4.4.5 | The message parameter is respected now. |
| v4.2.0 | The error parameter can now be an arrow function. |
| v0.1.21 | Added in: v0.1.21 |
- fn <Function>
- error <RegExp> | <Function>
- message <string>
Asserts that the functionfn does not throw an error.
Using assert.doesNotThrow() is actually not useful because there is no benefit in catching an error and then rethrowing it. Instead, consider adding a comment next to the specific code path that should not throw and keep error messages as expressive as possible.

When assert.doesNotThrow() is called, it will immediately call the fn function.

If an error is thrown and it is the same type as that specified by the error parameter, then an AssertionError is thrown. If the error is of a different type, or if the error parameter is undefined, the error is propagated back to the caller.

If specified, error can be a Class, <RegExp>, or a validation function. See assert.throws() for more details.

The following, for instance, will throw the <TypeError> because there is no matching error type in the assertion:
```js
import assert from 'node:assert/strict';

assert.doesNotThrow(
  () => {
    throw new TypeError('Wrong value');
  },
  SyntaxError,
);
```
However, the following will result in an AssertionError with the message 'Got unwanted exception...':

```js
import assert from 'node:assert/strict';

assert.doesNotThrow(
  () => {
    throw new TypeError('Wrong value');
  },
  TypeError,
);
```

If an AssertionError is thrown and a value is provided for the message parameter, the value of message will be appended to the AssertionError message:

```js
import assert from 'node:assert/strict';

assert.doesNotThrow(
  () => {
    throw new TypeError('Wrong value');
  },
  /Wrong value/,
  'Whoops',
);
// Throws: AssertionError: Got unwanted exception: Whoops
```
assert.equal(actual, expected[, message])#
History
| Version | Changes |
|---|---|
| v16.0.0, v14.18.0 | In Legacy assertion mode, changed status from Deprecated to Legacy. |
| v14.0.0 | NaN is now treated as being identical if both sides are NaN. |
| v0.1.21 | Added in: v0.1.21 |
Strict assertion mode
An alias of assert.strictEqual().
Legacy assertion mode
Stability: 3 - Legacy: Use assert.strictEqual() instead.

Tests shallow, coercive equality between the actual and expected parameters using the == operator. NaN is specially handled and treated as being identical if both sides are NaN.

```js
import assert from 'node:assert';

assert.equal(1, 1);
// OK, 1 == 1
assert.equal(1, '1');
// OK, 1 == '1'
assert.equal(NaN, NaN);
// OK

assert.equal(1, 2);
// AssertionError: 1 == 2
assert.equal({ a: { b: 1 } }, { a: { b: 1 } });
// AssertionError: { a: { b: 1 } } == { a: { b: 1 } }
```

If the values are not equal, an AssertionError is thrown with a message property set equal to the value of the message parameter. If the message parameter is undefined, a default error message is assigned. If the message parameter is an instance of <Error> then it will be thrown instead of the AssertionError.
assert.fail([message])#
Throws an AssertionError with the provided error message or a default error message. If the message parameter is an instance of <Error> then it will be thrown instead of the AssertionError.

```js
import assert from 'node:assert/strict';

assert.fail();
// AssertionError [ERR_ASSERTION]: Failed

assert.fail('boom');
// AssertionError [ERR_ASSERTION]: boom

assert.fail(new TypeError('need array'));
// TypeError: need array
```
assert.ifError(value)#
History
| Version | Changes |
|---|---|
| v10.0.0 | Instead of throwing the original error it is now wrapped into an AssertionError that contains the full stack trace. |
| v10.0.0 | Value may now only be undefined or null. Before all falsy values were handled the same as null and did not throw. |
| v0.1.97 | Added in: v0.1.97 |
- value <any>

Throws value if value is not undefined or null. This is useful when testing the error argument in callbacks. The stack trace contains all frames from the error passed to ifError() including the potential new frames for ifError() itself.

```js
import assert from 'node:assert/strict';

assert.ifError(null);
// OK
assert.ifError(0);
// AssertionError [ERR_ASSERTION]: ifError got unwanted exception: 0
assert.ifError('error');
// AssertionError [ERR_ASSERTION]: ifError got unwanted exception: 'error'
assert.ifError(new Error());
// AssertionError [ERR_ASSERTION]: ifError got unwanted exception: Error

// Create some random error frames.
let err;
(function errorFrame() {
  err = new Error('test error');
})();

(function ifErrorFrame() {
  assert.ifError(err);
})();
// AssertionError [ERR_ASSERTION]: ifError got unwanted exception: test error
//     at ifErrorFrame
//     at errorFrame
```
assert.match(string, regexp[, message])#
History
| Version | Changes |
|---|---|
| v16.0.0 | This API is no longer experimental. |
| v13.6.0, v12.16.0 | Added in: v13.6.0, v12.16.0 |
Expects the string input to match the regular expression.

```js
import assert from 'node:assert/strict';

assert.match('I will fail', /pass/);
// AssertionError [ERR_ASSERTION]: The input did not match the regular ...

assert.match(123, /pass/);
// AssertionError [ERR_ASSERTION]: The "string" argument must be of type string.

assert.match('I will pass', /pass/);
// OK
```

If the values do not match, or if the string argument is of another type than string, an AssertionError is thrown with a message property set equal to the value of the message parameter. If the message parameter is undefined, a default error message is assigned. If the message parameter is an instance of <Error> then it will be thrown instead of the AssertionError.
assert.notDeepEqual(actual, expected[, message])#
History
| Version | Changes |
|---|---|
| v16.0.0, v14.18.0 | In Legacy assertion mode, changed status from Deprecated to Legacy. |
| v14.0.0 | NaN is now treated as being identical if both sides are NaN. |
| v9.0.0 | The |
| v8.0.0 | The |
| v6.4.0, v4.7.1 | Typed array slices are handled correctly now. |
| v6.1.0, v4.5.0 | Objects with circular references can be used as inputs now. |
| v5.10.1, v4.4.3 | Handle non- |
| v0.1.21 | Added in: v0.1.21 |
Strict assertion mode
An alias of assert.notDeepStrictEqual().
Legacy assertion mode
Use assert.notDeepStrictEqual() instead.

Tests for any deep inequality. Opposite of assert.deepEqual().

```js
import assert from 'node:assert';

const obj1 = {
  a: {
    b: 1,
  },
};
const obj2 = {
  a: {
    b: 2,
  },
};
const obj3 = {
  a: {
    b: 1,
  },
};
const obj4 = { __proto__: obj1 };

assert.notDeepEqual(obj1, obj1);
// AssertionError: { a: { b: 1 } } notDeepEqual { a: { b: 1 } }

assert.notDeepEqual(obj1, obj2);
// OK

assert.notDeepEqual(obj1, obj3);
// AssertionError: { a: { b: 1 } } notDeepEqual { a: { b: 1 } }

assert.notDeepEqual(obj1, obj4);
// OK
```

If the values are deeply equal, an AssertionError is thrown with a message property set equal to the value of the message parameter. If the message parameter is undefined, a default error message is assigned. If the message parameter is an instance of <Error> then it will be thrown instead of the AssertionError.
assert.notDeepStrictEqual(actual, expected[, message])#
History
| Version | Changes |
|---|---|
| v9.0.0 | The |
| v9.0.0 | The |
| v9.0.0 | The |
| v8.0.0 | The |
| v6.1.0 | Objects with circular references can be used as inputs now. |
| v6.4.0, v4.7.1 | Typed array slices are handled correctly now. |
| v5.10.1, v4.4.3 | Handle non- |
| v1.2.0 | Added in: v1.2.0 |
Tests for deep strict inequality. Opposite of assert.deepStrictEqual().

```js
import assert from 'node:assert/strict';

assert.notDeepStrictEqual({ a: 1 }, { a: '1' });
// OK
```

If the values are deeply and strictly equal, an AssertionError is thrown with a message property set equal to the value of the message parameter. If the message parameter is undefined, a default error message is assigned. If the message parameter is an instance of <Error> then it will be thrown instead of the AssertionError.
assert.notEqual(actual, expected[, message])#
History
| Version | Changes |
|---|---|
| v16.0.0, v14.18.0 | In Legacy assertion mode, changed status from Deprecated to Legacy. |
| v14.0.0 | NaN is now treated as being identical if both sides are NaN. |
| v0.1.21 | Added in: v0.1.21 |
Strict assertion mode
An alias of assert.notStrictEqual().
Legacy assertion mode
Use assert.notStrictEqual() instead.

Tests shallow, coercive inequality with the != operator. NaN is specially handled and treated as being identical if both sides are NaN.

```js
import assert from 'node:assert';

assert.notEqual(1, 2);
// OK
assert.notEqual(1, 1);
// AssertionError: 1 != 1
assert.notEqual(1, '1');
// AssertionError: 1 != '1'
```

If the values are equal, an AssertionError is thrown with a message property set equal to the value of the message parameter. If the message parameter is undefined, a default error message is assigned. If the message parameter is an instance of <Error> then it will be thrown instead of the AssertionError.
assert.notStrictEqual(actual, expected[, message])#
History
| Version | Changes |
|---|---|
| v10.0.0 | Used comparison changed from Strict Equality to |
| v0.1.21 | Added in: v0.1.21 |
Tests strict inequality between the actual and expected parameters as determined by Object.is().

```js
import assert from 'node:assert/strict';

assert.notStrictEqual(1, 2);
// OK

assert.notStrictEqual(1, 1);
// AssertionError [ERR_ASSERTION]: Expected "actual" to be strictly unequal to:
//
// 1

assert.notStrictEqual(1, '1');
// OK
```

If the values are strictly equal, an AssertionError is thrown with a message property set equal to the value of the message parameter. If the message parameter is undefined, a default error message is assigned. If the message parameter is an instance of <Error> then it will be thrown instead of the AssertionError.
assert.ok(value[, message])#
History
| Version | Changes |
|---|---|
| v10.0.0 | The |
| v0.1.21 | Added in: v0.1.21 |
Tests if value is truthy. It is equivalent to assert.equal(!!value, true, message).

If value is not truthy, an AssertionError is thrown with a message property set equal to the value of the message parameter. If the message parameter is undefined, a default error message is assigned. If the message parameter is an instance of <Error> then it will be thrown instead of the AssertionError. If no arguments are passed in at all, message will be set to the string: 'No value argument passed to `assert.ok()`'.

Be aware that in the REPL the error message will be different from the one thrown in a file! See below for further details.

```js
import assert from 'node:assert/strict';

assert.ok(true);
// OK
assert.ok(1);
// OK

assert.ok();
// AssertionError: No value argument passed to `assert.ok()`

assert.ok(false, 'it\'s false');
// AssertionError: it's false

// In the repl:
assert.ok(typeof 123 === 'string');
// AssertionError: false == true

// In a file (e.g. test.js):
assert.ok(typeof 123 === 'string');
// AssertionError: The expression evaluated to a falsy value:
//
//   assert.ok(typeof 123 === 'string')

assert.ok(false);
// AssertionError: The expression evaluated to a falsy value:
//
//   assert.ok(false)

assert.ok(0);
// AssertionError: The expression evaluated to a falsy value:
//
//   assert.ok(0)
```

```js
import assert from 'node:assert/strict';

// Using `assert()` works the same:
assert(2 + 2 > 5);
// AssertionError: The expression evaluated to a falsy value:
//
//   assert(2 + 2 > 5)
```

assert.rejects(asyncFn[, error][, message])#
- asyncFn <Function> | <Promise>
- error <RegExp> | <Function> | <Object> | <Error>
- message <string>
- Returns: <Promise>
Awaits the asyncFn promise or, if asyncFn is a function, immediately calls the function and awaits the returned promise to complete. It will then check that the promise is rejected.

If asyncFn is a function and it throws an error synchronously, assert.rejects() will return a rejected Promise with that error. If the function does not return a promise, assert.rejects() will return a rejected Promise with an ERR_INVALID_RETURN_VALUE error. In both cases the error handler is skipped.
Apart from the async nature of awaiting the completion, it behaves identically to assert.throws().
If specified, error can be a Class, <RegExp>, a validation function, an object where each property will be tested for, or an instance of error where each property will be tested for, including the non-enumerable message and name properties.

If specified, message will be the message provided by the AssertionError if the asyncFn fails to reject.
```js
import assert from 'node:assert/strict';

await assert.rejects(
  async () => {
    throw new TypeError('Wrong value');
  },
  {
    name: 'TypeError',
    message: 'Wrong value',
  },
);
```
```js
import assert from 'node:assert/strict';

await assert.rejects(
  async () => {
    throw new TypeError('Wrong value');
  },
  (err) => {
    assert.strictEqual(err.name, 'TypeError');
    assert.strictEqual(err.message, 'Wrong value');
    return true;
  },
);
```
```js
import assert from 'node:assert/strict';

assert.rejects(
  Promise.reject(new Error('Wrong value')),
  Error,
).then(() => {
  // ...
});
```
error cannot be a string. If a string is provided as the second argument, then error is assumed to be omitted and the string will be used for message instead. This can lead to easy-to-miss mistakes. Please read the example in assert.throws() carefully if you are considering using a string as the second argument.
assert.strictEqual(actual, expected[, message])#
History
| Version | Changes |
|---|---|
| v10.0.0 | Used comparison changed from Strict Equality to |
| v0.1.21 | Added in: v0.1.21 |
Tests strict equality between the actual and expected parameters as determined by Object.is().

```js
import assert from 'node:assert/strict';

assert.strictEqual(1, 2);
// AssertionError [ERR_ASSERTION]: Expected inputs to be strictly equal:
//
// 1 !== 2

assert.strictEqual(1, 1);
// OK

assert.strictEqual('Hello foobar', 'Hello World!');
// AssertionError [ERR_ASSERTION]: Expected inputs to be strictly equal:
// + actual - expected
//
// + 'Hello foobar'
// - 'Hello World!'
//          ^

const apples = 1;
const oranges = 2;
assert.strictEqual(apples, oranges, `apples ${apples} !== oranges ${oranges}`);
// AssertionError [ERR_ASSERTION]: apples 1 !== oranges 2

assert.strictEqual(1, '1', new TypeError('Inputs are not identical'));
// TypeError: Inputs are not identical
```

If the values are not strictly equal, an AssertionError is thrown with a message property set equal to the value of the message parameter. If the message parameter is undefined, a default error message is assigned. If the message parameter is an instance of <Error> then it will be thrown instead of the AssertionError.
assert.throws(fn[, error][, message])#
History
| Version | Changes |
|---|---|
| v10.2.0 | The |
| v9.9.0 | The |
| v4.2.0 | The |
| v0.1.21 | Added in: v0.1.21 |
- fn <Function>
- error <RegExp> | <Function> | <Object> | <Error>
- message <string>

Expects the function fn to throw an error.

If specified, error can be a Class, <RegExp>, a validation function, a validation object where each property will be tested for strict deep equality, or an instance of error where each property will be tested for strict deep equality, including the non-enumerable message and name properties. When using an object, it is also possible to use a regular expression when validating against a string property. See below for examples.

If specified, message will be appended to the message provided by the AssertionError if the fn call fails to throw or in case the error validation fails.
Custom validation object/error instance:
```js
import assert from 'node:assert/strict';

const err = new TypeError('Wrong value');
err.code = 404;
err.foo = 'bar';
err.info = {
  nested: true,
  baz: 'text',
};
err.reg = /abc/i;

assert.throws(
  () => {
    throw err;
  },
  {
    name: 'TypeError',
    message: 'Wrong value',
    info: {
      nested: true,
      baz: 'text',
    },
    // Only properties on the validation object will be tested for.
    // Using nested objects requires all properties to be present. Otherwise
    // the validation is going to fail.
  },
);

// Using regular expressions to validate error properties:
assert.throws(
  () => {
    throw err;
  },
  {
    // The `name` and `message` properties are strings and using regular
    // expressions on those will match against the string. If they fail, an
    // error is thrown.
    name: /^TypeError$/,
    message: /Wrong/,
    foo: 'bar',
    info: {
      nested: true,
      // It is not possible to use regular expressions for nested properties!
      baz: 'text',
    },
    // The `reg` property contains a regular expression and only if the
    // validation object contains an identical regular expression, it is going
    // to pass.
    reg: /abc/i,
  },
);

// Fails due to the different `message` and `name` properties:
assert.throws(
  () => {
    const otherErr = new Error('Not found');
    // Copy all enumerable properties from `err` to `otherErr`.
    for (const [key, value] of Object.entries(err)) {
      otherErr[key] = value;
    }
    throw otherErr;
  },
  // The error's `message` and `name` properties will also be checked when using
  // an error as validation object.
  err,
);
```
Validate instanceof using constructor:
```js
import assert from 'node:assert/strict';

assert.throws(
  () => {
    throw new Error('Wrong value');
  },
  Error,
);
```

Validate error message using <RegExp>:

Using a regular expression runs .toString on the error object, and will therefore also include the error name.

```js
import assert from 'node:assert/strict';

assert.throws(
  () => {
    throw new Error('Wrong value');
  },
  /^Error: Wrong value$/,
);
```
Custom error validation:
The function must return true to indicate all internal validations passed. It will otherwise fail with an AssertionError.

```js
import assert from 'node:assert/strict';

assert.throws(
  () => {
    throw new Error('Wrong value');
  },
  (err) => {
    assert(err instanceof Error);
    assert(/value/.test(err));
    // Avoid returning anything from validation functions besides `true`.
    // Otherwise, it's not clear what part of the validation failed. Instead,
    // throw an error about the specific validation that failed (as done in this
    // example) and add as much helpful debugging information to that error as
    // possible.
    return true;
  },
  'unexpected error',
);
```
error cannot be a string. If a string is provided as the second argument, then error is assumed to be omitted and the string will be used for message instead. This can lead to easy-to-miss mistakes. Using the same message as the thrown error message is going to result in an ERR_AMBIGUOUS_ARGUMENT error. Please read the example below carefully if you are considering using a string as the second argument:
```js
import assert from 'node:assert/strict';

function throwingFirst() {
  throw new Error('First');
}

function throwingSecond() {
  throw new Error('Second');
}

function notThrowing() {}

// The second argument is a string and the input function threw an Error.
// The first case will not throw as it does not match for the error message
// thrown by the input function!
assert.throws(throwingFirst, 'Second');
// In the next example the message has no benefit over the message from the
// error and since it is not clear if the user intended to actually match
// against the error message, Node.js throws an `ERR_AMBIGUOUS_ARGUMENT` error.
assert.throws(throwingSecond, 'Second');
// TypeError [ERR_AMBIGUOUS_ARGUMENT]

// The string is only used (as message) in case the function does not throw:
assert.throws(notThrowing, 'Second');
// AssertionError [ERR_ASSERTION]: Missing expected exception: Second

// If it was intended to match for the error message do this instead:
// It does not throw because the error messages match.
assert.throws(throwingSecond, /Second$/);

// If the error message does not match, an AssertionError is thrown.
assert.throws(throwingFirst, /Second$/);
// AssertionError [ERR_ASSERTION]
```
Due to the confusing, error-prone notation, avoid using a string as the second argument.
assert.partialDeepStrictEqual(actual, expected[, message])#
History
| Version | Changes |
|---|---|
| v25.0.0 | Promises are not considered equal anymore if they are not of the same instance. |
| v25.0.0 | Invalid dates are now considered equal. |
| v24.0.0, v22.17.0 | partialDeepStrictEqual is now Stable. Previously, it had been Experimental. |
| v23.4.0, v22.13.0 | Added in: v23.4.0, v22.13.0 |
Tests for partial deep equality between the actual and expected parameters. "Deep" equality means that the enumerable "own" properties of child objects are recursively evaluated also by the following rules. "Partial" equality means that only properties that exist on the expected parameter are going to be compared.

This method always passes the same test cases as assert.deepStrictEqual(), behaving as a superset of it.
Comparison details#
- Primitive values are compared using Object.is().
- Type tags of objects should be the same.
- [[Prototype]] of objects are not compared.
- Only enumerable "own" properties are considered.
- <Error> names, messages, causes, and errors are always compared, even if these are not enumerable properties (errors is also compared).
- Enumerable own <Symbol> properties are compared as well.
- Object wrappers are compared both as objects and unwrapped values.
- Object properties are compared unordered.
- <Map> keys and <Set> items are compared unordered.
- Recursion stops when both sides differ or both sides encounter a circular reference.
- <WeakMap>, <WeakSet>, and <Promise> instances are not compared structurally. They are only equal if they reference the same object. Any comparison between different WeakMap, WeakSet, or Promise instances will result in inequality, even if they contain the same content.
- <RegExp> lastIndex, flags, and source are always compared, even if these are not enumerable properties.
- Holes in sparse arrays are ignored.
```js
import assert from 'node:assert';

assert.partialDeepStrictEqual(
  { a: { b: { c: 1 } } },
  { a: { b: { c: 1 } } },
);
// OK

assert.partialDeepStrictEqual(
  { a: 1, b: 2, c: 3 },
  { b: 2 },
);
// OK

assert.partialDeepStrictEqual(
  [1, 2, 3, 4, 5, 6, 7, 8, 9],
  [4, 5, 8],
);
// OK

assert.partialDeepStrictEqual(
  new Set([{ a: 1 }, { b: 1 }]),
  new Set([{ a: 1 }]),
);
// OK

assert.partialDeepStrictEqual(
  new Map([['key1', 'value1'], ['key2', 'value2']]),
  new Map([['key2', 'value2']]),
);
// OK

assert.partialDeepStrictEqual(123n, 123n);
// OK

assert.partialDeepStrictEqual(
  [1, 2, 3, 4, 5, 6, 7, 8, 9],
  [5, 4, 8],
);
// AssertionError

assert.partialDeepStrictEqual(
  { a: 1 },
  { a: 1, b: 2 },
);
// AssertionError

assert.partialDeepStrictEqual(
  { a: { b: 2 } },
  { a: { b: '2' } },
);
// AssertionError
```
Asynchronous context tracking#
Source Code:lib/async_hooks.js
Introduction#
These classes are used to associate state and propagate it throughout callbacks and promise chains. They allow storing data throughout the lifetime of a web request or any other asynchronous duration. It is similar to thread-local storage in other languages.

The AsyncLocalStorage and AsyncResource classes are part of the node:async_hooks module:

```js
import { AsyncLocalStorage, AsyncResource } from 'node:async_hooks';
```
Class:AsyncLocalStorage#
History
| Version | Changes |
|---|---|
| v16.4.0 | AsyncLocalStorage is now Stable. Previously, it had been Experimental. |
| v13.10.0, v12.17.0 | Added in: v13.10.0, v12.17.0 |
This class creates stores that stay coherent through asynchronous operations.
While you can create your own implementation on top of the node:async_hooks module, AsyncLocalStorage should be preferred as it is a performant and memory-safe implementation that involves significant optimizations that are non-obvious to implement.

The following example uses AsyncLocalStorage to build a simple logger that assigns IDs to incoming HTTP requests and includes them in messages logged within each request.

```js
import http from 'node:http';
import { AsyncLocalStorage } from 'node:async_hooks';

const asyncLocalStorage = new AsyncLocalStorage();

function logWithId(msg) {
  const id = asyncLocalStorage.getStore();
  console.log(`${id !== undefined ? id : '-'}:`, msg);
}

let idSeq = 0;
http.createServer((req, res) => {
  asyncLocalStorage.run(idSeq++, () => {
    logWithId('start');
    // Imagine any chain of async operations here
    setImmediate(() => {
      logWithId('finish');
      res.end();
    });
  });
}).listen(8080);

http.get('http://localhost:8080');
http.get('http://localhost:8080');
// Prints:
//   0: start
//   0: finish
//   1: start
//   1: finish
```

Each instance of AsyncLocalStorage maintains an independent storage context. Multiple instances can safely exist simultaneously without risk of interfering with each other's data.
new AsyncLocalStorage([options])#
History
| Version | Changes |
|---|---|
| v24.0.0 | Add |
| v19.7.0, v18.16.0 | Removed experimental onPropagate option. |
| v19.2.0, v18.13.0 | Add option onPropagate. |
| v13.10.0, v12.17.0 | Added in: v13.10.0, v12.17.0 |
- options <Object>

Creates a new instance of AsyncLocalStorage. Store is only provided within a run() call or after an enterWith() call.
Static method:AsyncLocalStorage.bind(fn)#
History
| Version | Changes |
|---|---|
| v23.11.0, v22.15.0 | Marking the API stable. |
| v19.8.0, v18.16.0 | Added in: v19.8.0, v18.16.0 |
- fn <Function> The function to bind to the current execution context.
- Returns: <Function> A new function that calls fn within the captured execution context.
Binds the given function to the current execution context.
Static method:AsyncLocalStorage.snapshot()#
History
| Version | Changes |
|---|---|
| v23.11.0, v22.15.0 | Marking the API stable. |
| v19.8.0, v18.16.0 | Added in: v19.8.0, v18.16.0 |
- Returns: <Function> A new function with the signature (fn: (...args) : R, ...args) : R.

Captures the current execution context and returns a function that accepts a function as an argument. Whenever the returned function is called, it calls the function passed to it within the captured context.
```js
const asyncLocalStorage = new AsyncLocalStorage();
const runInAsyncScope = asyncLocalStorage.run(123, () => AsyncLocalStorage.snapshot());
const result = asyncLocalStorage.run(321, () => runInAsyncScope(() => asyncLocalStorage.getStore()));
console.log(result);  // returns 123
```

AsyncLocalStorage.snapshot() can replace the use of AsyncResource for simple async context tracking purposes, for example:
```js
class Foo {
  #runInAsyncScope = AsyncLocalStorage.snapshot();

  get() {
    return this.#runInAsyncScope(() => asyncLocalStorage.getStore());
  }
}

const foo = asyncLocalStorage.run(123, () => new Foo());
console.log(asyncLocalStorage.run(321, () => foo.get()));  // returns 123
```

asyncLocalStorage.disable()#
Disables the instance of AsyncLocalStorage. All subsequent calls to asyncLocalStorage.getStore() will return undefined until asyncLocalStorage.run() or asyncLocalStorage.enterWith() is called again.

When calling asyncLocalStorage.disable(), all current contexts linked to the instance will be exited.

Calling asyncLocalStorage.disable() is required before the asyncLocalStorage can be garbage collected. This does not apply to stores provided by the asyncLocalStorage, as those objects are garbage collected along with the corresponding async resources.

Use this method when the asyncLocalStorage is not in use anymore in the current process.
asyncLocalStorage.getStore()#
- Returns: <any>

Returns the current store. If called outside of an asynchronous context initialized by calling asyncLocalStorage.run() or asyncLocalStorage.enterWith(), it returns undefined.
asyncLocalStorage.enterWith(store)#
- store <any>

Transitions into the context for the remainder of the current synchronous execution and then persists the store through any following asynchronous calls.
Example:
```js
const store = { id: 1 };
// Replaces previous store with the given store object
asyncLocalStorage.enterWith(store);
asyncLocalStorage.getStore();  // Returns the store object
someAsyncOperation(() => {
  asyncLocalStorage.getStore();  // Returns the same object
});
```

This transition will continue for the entire synchronous execution. This means that if, for example, the context is entered within an event handler, subsequent event handlers will also run within that context unless specifically bound to another context with an AsyncResource. That is why run() should be preferred over enterWith() unless there are strong reasons to use the latter method.
```js
const store = { id: 1 };

emitter.on('my-event', () => {
  asyncLocalStorage.enterWith(store);
});
emitter.on('my-event', () => {
  asyncLocalStorage.getStore();  // Returns the same object
});

asyncLocalStorage.getStore();  // Returns undefined
emitter.emit('my-event');
asyncLocalStorage.getStore();  // Returns the same object
```

asyncLocalStorage.name#
- Type: <string>

The name of the AsyncLocalStorage instance if provided.
asyncLocalStorage.run(store, callback[, ...args])#
- store <any>
- callback <Function>
- ...args <any>

Runs a function synchronously within a context and returns its return value. The store is not accessible outside of the callback function. The store is accessible to any asynchronous operations created within the callback.

The optional args are passed to the callback function.

If the callback function throws an error, the error is thrown by run() too. The stacktrace is not impacted by this call and the context is exited.
Example:
```js
const store = { id: 2 };
try {
  asyncLocalStorage.run(store, () => {
    asyncLocalStorage.getStore();  // Returns the store object
    setTimeout(() => {
      asyncLocalStorage.getStore();  // Returns the store object
    }, 200);
    throw new Error();
  });
} catch (e) {
  asyncLocalStorage.getStore();  // Returns undefined
  // The error will be caught here
}
```

asyncLocalStorage.exit(callback[, ...args])#
- callback <Function>
- ...args <any>

Runs a function synchronously outside of a context and returns its return value. The store is not accessible within the callback function or the asynchronous operations created within the callback. Any getStore() call done within the callback function will always return undefined.

The optional args are passed to the callback function.

If the callback function throws an error, the error is thrown by exit() too. The stacktrace is not impacted by this call and the context is re-entered.
Example:
```js
// Within a call to run
try {
  asyncLocalStorage.getStore();  // Returns the store object or value
  asyncLocalStorage.exit(() => {
    asyncLocalStorage.getStore();  // Returns undefined
    throw new Error();
  });
} catch (e) {
  asyncLocalStorage.getStore();  // Returns the same object or value
  // The error will be caught here
}
```

Usage with async/await#
If, within an async function, only one await call is to run within a context, the following pattern should be used:

```js
async function fn() {
  await asyncLocalStorage.run(new Map(), () => {
    asyncLocalStorage.getStore().set('key', value);
    return foo();  // The return value of foo will be awaited
  });
}
```

In this example, the store is only available in the callback function and the functions called by foo. Outside of run, calling getStore will return undefined.
Troubleshooting: Context loss#
In most cases, AsyncLocalStorage works without issues. In rare situations, the current store is lost in one of the asynchronous operations.

If your code is callback-based, it is enough to promisify it with util.promisify() so it starts working with native promises.
If you need to use a callback-based API or your code assumesa custom thenable implementation, use theAsyncResource classto associate the asynchronous operation with the correct execution context.Find the function call responsible for the context loss by logging the contentofasyncLocalStorage.getStore() after the calls you suspect are responsiblefor the loss. When the code logsundefined, the last callback called isprobably responsible for the context loss.
Class: AsyncResource#
History
| Version | Changes |
|---|---|
| v16.4.0 | AsyncResource is now Stable. Previously, it had been Experimental. |
The class `AsyncResource` is designed to be extended by the embedder's async resources. Using this, users can easily trigger the lifetime events of their own resources.
The `init` hook will trigger when an `AsyncResource` is instantiated.
The following is an overview of the `AsyncResource` API.
```mjs
import { AsyncResource, executionAsyncId } from 'node:async_hooks';

// AsyncResource() is meant to be extended. Instantiating a
// new AsyncResource() also triggers init. If triggerAsyncId is omitted then
// async_hook.executionAsyncId() is used.
const asyncResource = new AsyncResource(
  type, { triggerAsyncId: executionAsyncId(), requireManualDestroy: false },
);

// Run a function in the execution context of the resource. This will
// * establish the context of the resource
// * trigger the AsyncHooks before callbacks
// * call the provided function `fn` with the supplied arguments
// * trigger the AsyncHooks after callbacks
// * restore the original execution context
asyncResource.runInAsyncScope(fn, thisArg, ...args);

// Call AsyncHooks destroy callbacks.
asyncResource.emitDestroy();

// Return the unique ID assigned to the AsyncResource instance.
asyncResource.asyncId();

// Return the trigger ID for the AsyncResource instance.
asyncResource.triggerAsyncId();
```

```cjs
const { AsyncResource, executionAsyncId } = require('node:async_hooks');

// AsyncResource() is meant to be extended. Instantiating a
// new AsyncResource() also triggers init. If triggerAsyncId is omitted then
// async_hook.executionAsyncId() is used.
const asyncResource = new AsyncResource(
  type, { triggerAsyncId: executionAsyncId(), requireManualDestroy: false },
);

// Run a function in the execution context of the resource. This will
// * establish the context of the resource
// * trigger the AsyncHooks before callbacks
// * call the provided function `fn` with the supplied arguments
// * trigger the AsyncHooks after callbacks
// * restore the original execution context
asyncResource.runInAsyncScope(fn, thisArg, ...args);

// Call AsyncHooks destroy callbacks.
asyncResource.emitDestroy();

// Return the unique ID assigned to the AsyncResource instance.
asyncResource.asyncId();

// Return the trigger ID for the AsyncResource instance.
asyncResource.triggerAsyncId();
```
new AsyncResource(type[, options])#
- `type` <string> The type of async event.
- `options` <Object>
  - `triggerAsyncId` <number> The ID of the execution context that created this async event. **Default:** `executionAsyncId()`.
  - `requireManualDestroy` <boolean> If set to `true`, disables `emitDestroy` when the object is garbage collected. This usually does not need to be set (even if `emitDestroy` is called manually), unless the resource's `asyncId` is retrieved and the sensitive API's `emitDestroy` is called with it. When set to `false`, the `emitDestroy` call on garbage collection will only take place if there is at least one active `destroy` hook. **Default:** `false`.
Example usage:
```js
class DBQuery extends AsyncResource {
  constructor(db) {
    super('DBQuery');
    this.db = db;
  }

  getInfo(query, callback) {
    this.db.get(query, (err, data) => {
      this.runInAsyncScope(callback, null, err, data);
    });
  }

  close() {
    this.db = null;
    this.emitDestroy();
  }
}
```
Static method: AsyncResource.bind(fn[, type[, thisArg]])#
History
| Version | Changes |
|---|---|
| v20.0.0 | The |
| v17.8.0, v16.15.0 | Changed the default when |
| v16.0.0 | Added optional thisArg. |
| v14.8.0, v12.19.0 | Added in: v14.8.0, v12.19.0 |
- `fn` <Function> The function to bind to the current execution context.
- `type` <string> An optional name to associate with the underlying `AsyncResource`.
- `thisArg` <any>
Binds the given function to the current execution context.
asyncResource.bind(fn[, thisArg])#
History
| Version | Changes |
|---|---|
| v20.0.0 | The |
| v17.8.0, v16.15.0 | Changed the default when |
| v16.0.0 | Added optional thisArg. |
| v14.8.0, v12.19.0 | Added in: v14.8.0, v12.19.0 |
- `fn` <Function> The function to bind to the current `AsyncResource`.
- `thisArg` <any>
Binds the given function to execute to thisAsyncResource's scope.
asyncResource.runInAsyncScope(fn[, thisArg, ...args])#
- `fn` <Function> The function to call in the execution context of this async resource.
- `thisArg` <any> The receiver to be used for the function call.
- `...args` <any> Optional arguments to pass to the function.
Call the provided function with the provided arguments in the execution contextof the async resource. This will establish the context, trigger the AsyncHooksbefore callbacks, call the function, trigger the AsyncHooks after callbacks, andthen restore the original execution context.
asyncResource.emitDestroy()#
- Returns: <AsyncResource> A reference to `asyncResource`.
Call all `destroy` hooks. This should only ever be called once. An error will be thrown if it is called more than once. This **must** be manually called. If the resource is left to be collected by the GC then the `destroy` hooks will never be called.
asyncResource.triggerAsyncId()#
- Returns: <number> The same `triggerAsyncId` that is passed to the `AsyncResource` constructor.
Using AsyncResource for a Worker thread pool#
The following example shows how to use the `AsyncResource` class to properly provide async tracking for a `Worker` pool. Other resource pools, such as database connection pools, can follow a similar model.
Assuming that the task is adding two numbers, using a file named `task_processor.js` with the following content:
```mjs
import { parentPort } from 'node:worker_threads';
parentPort.on('message', (task) => {
  parentPort.postMessage(task.a + task.b);
});
```

```cjs
const { parentPort } = require('node:worker_threads');
parentPort.on('message', (task) => {
  parentPort.postMessage(task.a + task.b);
});
```
a Worker pool around it could use the following structure:
```mjs
import { AsyncResource } from 'node:async_hooks';
import { EventEmitter } from 'node:events';
import { Worker } from 'node:worker_threads';

const kTaskInfo = Symbol('kTaskInfo');
const kWorkerFreedEvent = Symbol('kWorkerFreedEvent');

class WorkerPoolTaskInfo extends AsyncResource {
  constructor(callback) {
    super('WorkerPoolTaskInfo');
    this.callback = callback;
  }

  done(err, result) {
    this.runInAsyncScope(this.callback, null, err, result);
    this.emitDestroy();  // `TaskInfo`s are used only once.
  }
}

export default class WorkerPool extends EventEmitter {
  constructor(numThreads) {
    super();
    this.numThreads = numThreads;
    this.workers = [];
    this.freeWorkers = [];
    this.tasks = [];

    for (let i = 0; i < numThreads; i++)
      this.addNewWorker();

    // Any time the kWorkerFreedEvent is emitted, dispatch
    // the next task pending in the queue, if any.
    this.on(kWorkerFreedEvent, () => {
      if (this.tasks.length > 0) {
        const { task, callback } = this.tasks.shift();
        this.runTask(task, callback);
      }
    });
  }

  addNewWorker() {
    const worker = new Worker(new URL('task_processor.js', import.meta.url));
    worker.on('message', (result) => {
      // In case of success: Call the callback that was passed to `runTask`,
      // remove the `TaskInfo` associated with the Worker, and mark it as free
      // again.
      worker[kTaskInfo].done(null, result);
      worker[kTaskInfo] = null;
      this.freeWorkers.push(worker);
      this.emit(kWorkerFreedEvent);
    });
    worker.on('error', (err) => {
      // In case of an uncaught exception: Call the callback that was passed to
      // `runTask` with the error.
      if (worker[kTaskInfo])
        worker[kTaskInfo].done(err, null);
      else
        this.emit('error', err);
      // Remove the worker from the list and start a new Worker to replace the
      // current one.
      this.workers.splice(this.workers.indexOf(worker), 1);
      this.addNewWorker();
    });
    this.workers.push(worker);
    this.freeWorkers.push(worker);
    this.emit(kWorkerFreedEvent);
  }

  runTask(task, callback) {
    if (this.freeWorkers.length === 0) {
      // No free threads, wait until a worker thread becomes free.
      this.tasks.push({ task, callback });
      return;
    }

    const worker = this.freeWorkers.pop();
    worker[kTaskInfo] = new WorkerPoolTaskInfo(callback);
    worker.postMessage(task);
  }

  close() {
    for (const worker of this.workers) worker.terminate();
  }
}
```

```cjs
const { AsyncResource } = require('node:async_hooks');
const { EventEmitter } = require('node:events');
const path = require('node:path');
const { Worker } = require('node:worker_threads');

const kTaskInfo = Symbol('kTaskInfo');
const kWorkerFreedEvent = Symbol('kWorkerFreedEvent');

class WorkerPoolTaskInfo extends AsyncResource {
  constructor(callback) {
    super('WorkerPoolTaskInfo');
    this.callback = callback;
  }

  done(err, result) {
    this.runInAsyncScope(this.callback, null, err, result);
    this.emitDestroy();  // `TaskInfo`s are used only once.
  }
}

class WorkerPool extends EventEmitter {
  constructor(numThreads) {
    super();
    this.numThreads = numThreads;
    this.workers = [];
    this.freeWorkers = [];
    this.tasks = [];

    for (let i = 0; i < numThreads; i++)
      this.addNewWorker();

    // Any time the kWorkerFreedEvent is emitted, dispatch
    // the next task pending in the queue, if any.
    this.on(kWorkerFreedEvent, () => {
      if (this.tasks.length > 0) {
        const { task, callback } = this.tasks.shift();
        this.runTask(task, callback);
      }
    });
  }

  addNewWorker() {
    const worker = new Worker(path.resolve(__dirname, 'task_processor.js'));
    worker.on('message', (result) => {
      // In case of success: Call the callback that was passed to `runTask`,
      // remove the `TaskInfo` associated with the Worker, and mark it as free
      // again.
      worker[kTaskInfo].done(null, result);
      worker[kTaskInfo] = null;
      this.freeWorkers.push(worker);
      this.emit(kWorkerFreedEvent);
    });
    worker.on('error', (err) => {
      // In case of an uncaught exception: Call the callback that was passed to
      // `runTask` with the error.
      if (worker[kTaskInfo])
        worker[kTaskInfo].done(err, null);
      else
        this.emit('error', err);
      // Remove the worker from the list and start a new Worker to replace the
      // current one.
      this.workers.splice(this.workers.indexOf(worker), 1);
      this.addNewWorker();
    });
    this.workers.push(worker);
    this.freeWorkers.push(worker);
    this.emit(kWorkerFreedEvent);
  }

  runTask(task, callback) {
    if (this.freeWorkers.length === 0) {
      // No free threads, wait until a worker thread becomes free.
      this.tasks.push({ task, callback });
      return;
    }

    const worker = this.freeWorkers.pop();
    worker[kTaskInfo] = new WorkerPoolTaskInfo(callback);
    worker.postMessage(task);
  }

  close() {
    for (const worker of this.workers) worker.terminate();
  }
}

module.exports = WorkerPool;
```
Without the explicit tracking added by the `WorkerPoolTaskInfo` objects, it would appear that the callbacks are associated with the individual `Worker` objects. However, the creation of the `Worker`s is not associated with the creation of the tasks and does not provide information about when tasks were scheduled.
This pool could be used as follows:
```mjs
import WorkerPool from './worker_pool.js';
import os from 'node:os';

const pool = new WorkerPool(os.availableParallelism());

let finished = 0;
for (let i = 0; i < 10; i++) {
  pool.runTask({ a: 42, b: 100 }, (err, result) => {
    console.log(i, err, result);
    if (++finished === 10)
      pool.close();
  });
}
```

```cjs
const WorkerPool = require('./worker_pool.js');
const os = require('node:os');

const pool = new WorkerPool(os.availableParallelism());

let finished = 0;
for (let i = 0; i < 10; i++) {
  pool.runTask({ a: 42, b: 100 }, (err, result) => {
    console.log(i, err, result);
    if (++finished === 10)
      pool.close();
  });
}
```
Integrating AsyncResource with EventEmitter#
Event listeners triggered by an `EventEmitter` may be run in a different execution context than the one that was active when `eventEmitter.on()` was called.
The following example shows how to use the `AsyncResource` class to properly associate an event listener with the correct execution context. The same approach can be applied to a `Stream` or a similar event-driven class.
```mjs
import { createServer } from 'node:http';
import { AsyncResource, executionAsyncId } from 'node:async_hooks';

const server = createServer((req, res) => {
  req.on('close', AsyncResource.bind(() => {
    // Execution context is bound to the current outer scope.
  }));
  req.on('close', () => {
    // Execution context is bound to the scope that caused 'close' to emit.
  });
  res.end();
}).listen(3000);
```

```cjs
const { createServer } = require('node:http');
const { AsyncResource, executionAsyncId } = require('node:async_hooks');

const server = createServer((req, res) => {
  req.on('close', AsyncResource.bind(() => {
    // Execution context is bound to the current outer scope.
  }));
  req.on('close', () => {
    // Execution context is bound to the scope that caused 'close' to emit.
  });
  res.end();
}).listen(3000);
```
Async hooks#
We do not recommend using the `createHook`, `AsyncHook`, and `executionAsyncResource` APIs as they have usability issues, safety risks, and performance implications. Async context tracking use cases are better served by the stable `AsyncLocalStorage` API. If you have a use case for `createHook`, `AsyncHook`, or `executionAsyncResource` beyond the context tracking need solved by `AsyncLocalStorage` or diagnostics data currently provided by Diagnostics Channel, please open an issue at https://github.com/nodejs/node/issues describing your use case so we can create a more purpose-focused API.
Source Code: lib/async_hooks.js
We strongly discourage the use of the `async_hooks` API. Other APIs that can cover most of its use cases include:
- `AsyncLocalStorage` tracks async context
- `process.getActiveResourcesInfo()` tracks active resources

The `node:async_hooks` module provides an API to track asynchronous resources. It can be accessed using:
```mjs
import async_hooks from 'node:async_hooks';
```

```cjs
const async_hooks = require('node:async_hooks');
```
Terminology#
An asynchronous resource represents an object with an associated callback. This callback may be called multiple times, such as the `'connection'` event in `net.createServer()`, or just a single time like in `fs.open()`. A resource can also be closed before the callback is called. `AsyncHook` does not explicitly distinguish between these different cases but will represent them as the abstract concept that is a resource.
If `Worker`s are used, each thread has an independent `async_hooks` interface, and each thread will use a new set of async IDs.
Overview#
Following is a simple overview of the public API.
```mjs
import async_hooks from 'node:async_hooks';

// Return the ID of the current execution context.
const eid = async_hooks.executionAsyncId();

// Return the ID of the handle responsible for triggering the callback of the
// current execution scope to call.
const tid = async_hooks.triggerAsyncId();

// Create a new AsyncHook instance. All of these callbacks are optional.
const asyncHook =
    async_hooks.createHook({ init, before, after, destroy, promiseResolve });

// Allow callbacks of this AsyncHook instance to call. This is not an implicit
// action after running the constructor, and must be explicitly run to begin
// executing callbacks.
asyncHook.enable();

// Disable listening for new asynchronous events.
asyncHook.disable();

//
// The following are the callbacks that can be passed to createHook().
//

// init() is called during object construction. The resource may not have
// completed construction when this callback runs. Therefore, all fields of the
// resource referenced by "asyncId" may not have been populated.
function init(asyncId, type, triggerAsyncId, resource) { }

// before() is called just before the resource's callback is called. It can be
// called 0-N times for handles (such as TCPWrap), and will be called exactly 1
// time for requests (such as FSReqCallback).
function before(asyncId) { }

// after() is called just after the resource's callback has finished.
function after(asyncId) { }

// destroy() is called when the resource is destroyed.
function destroy(asyncId) { }

// promiseResolve() is called only for promise resources, when the
// resolve() function passed to the Promise constructor is invoked
// (either directly or through other means of resolving a promise).
function promiseResolve(asyncId) { }
```

```cjs
const async_hooks = require('node:async_hooks');

// Return the ID of the current execution context.
const eid = async_hooks.executionAsyncId();

// Return the ID of the handle responsible for triggering the callback of the
// current execution scope to call.
const tid = async_hooks.triggerAsyncId();

// Create a new AsyncHook instance. All of these callbacks are optional.
const asyncHook =
    async_hooks.createHook({ init, before, after, destroy, promiseResolve });

// Allow callbacks of this AsyncHook instance to call. This is not an implicit
// action after running the constructor, and must be explicitly run to begin
// executing callbacks.
asyncHook.enable();

// Disable listening for new asynchronous events.
asyncHook.disable();

//
// The following are the callbacks that can be passed to createHook().
//

// init() is called during object construction. The resource may not have
// completed construction when this callback runs. Therefore, all fields of the
// resource referenced by "asyncId" may not have been populated.
function init(asyncId, type, triggerAsyncId, resource) { }

// before() is called just before the resource's callback is called. It can be
// called 0-N times for handles (such as TCPWrap), and will be called exactly 1
// time for requests (such as FSReqCallback).
function before(asyncId) { }

// after() is called just after the resource's callback has finished.
function after(asyncId) { }

// destroy() is called when the resource is destroyed.
function destroy(asyncId) { }

// promiseResolve() is called only for promise resources, when the
// resolve() function passed to the Promise constructor is invoked
// (either directly or through other means of resolving a promise).
function promiseResolve(asyncId) { }
```
async_hooks.createHook(options)#
- `options` <Object> The Hook Callbacks to register
  - `init` <Function> The `init` callback.
  - `before` <Function> The `before` callback.
  - `after` <Function> The `after` callback.
  - `destroy` <Function> The `destroy` callback.
  - `promiseResolve` <Function> The `promiseResolve` callback.
  - `trackPromises` <boolean> Whether the hook should track `Promise`s. Cannot be `false` if `promiseResolve` is set. **Default:** `true`.
- Returns: <AsyncHook> Instance used for disabling and enabling hooks
Registers functions to be called for different lifetime events of each asyncoperation.
The callbacks `init()`/`before()`/`after()`/`destroy()` are called for the respective asynchronous event during a resource's lifetime.
All callbacks are optional. For example, if only resource cleanup needs to be tracked, then only the `destroy` callback needs to be passed. The specifics of all functions that can be passed to `callbacks` are in the Hook Callbacks section.
```mjs
import { createHook } from 'node:async_hooks';

const asyncHook = createHook({
  init(asyncId, type, triggerAsyncId, resource) { },
  destroy(asyncId) { },
});
```

```cjs
const async_hooks = require('node:async_hooks');

const asyncHook = async_hooks.createHook({
  init(asyncId, type, triggerAsyncId, resource) { },
  destroy(asyncId) { },
});
```
The callbacks will be inherited via the prototype chain:
```js
class MyAsyncCallbacks {
  init(asyncId, type, triggerAsyncId, resource) { }
  destroy(asyncId) {}
}

class MyAddedCallbacks extends MyAsyncCallbacks {
  before(asyncId) { }
  after(asyncId) { }
}

const asyncHook = async_hooks.createHook(new MyAddedCallbacks());
```
Because promises are asynchronous resources whose lifecycle is tracked via the async hooks mechanism, the `init()`, `before()`, `after()`, and `destroy()` callbacks **must not** be async functions that return promises.
Error handling#
If any `AsyncHook` callbacks throw, the application will print the stack trace and exit. The exit path does follow that of an uncaught exception, but all `'uncaughtException'` listeners are removed, thus forcing the process to exit. The `'exit'` callbacks will still be called unless the application is run with `--abort-on-uncaught-exception`, in which case a stack trace will be printed and the application exits, leaving a core file.
The reason for this error handling behavior is that these callbacks are runningat potentially volatile points in an object's lifetime, for example duringclass construction and destruction. Because of this, it is deemed necessary tobring down the process quickly in order to prevent an unintentional abort in thefuture. This is subject to change in the future if a comprehensive analysis isperformed to ensure an exception can follow the normal control flow withoutunintentional side effects.
Printing in AsyncHook callbacks#
Because printing to the console is an asynchronous operation, `console.log()` will cause `AsyncHook` callbacks to be called. Using `console.log()` or similar asynchronous operations inside an `AsyncHook` callback function will cause an infinite recursion. An easy solution to this when debugging is to use a synchronous logging operation such as `fs.writeFileSync(file, msg, flag)`. This will print to the file and will not invoke `AsyncHook` recursively because it is synchronous.
```mjs
import { writeFileSync } from 'node:fs';
import { format } from 'node:util';

function debug(...args) {
  // Use a function like this one when debugging inside an AsyncHook callback
  writeFileSync('log.out', `${format(...args)}\n`, { flag: 'a' });
}
```

```cjs
const fs = require('node:fs');
const util = require('node:util');

function debug(...args) {
  // Use a function like this one when debugging inside an AsyncHook callback
  fs.writeFileSync('log.out', `${util.format(...args)}\n`, { flag: 'a' });
}
```
If an asynchronous operation is needed for logging, it is possible to keeptrack of what caused the asynchronous operation using the informationprovided byAsyncHook itself. The logging should then be skipped whenit was the logging itself that caused theAsyncHook callback to be called. Bydoing this, the otherwise infinite recursion is broken.
Class: AsyncHook#
The classAsyncHook exposes an interface for tracking lifetime eventsof asynchronous operations.
asyncHook.enable()#
- Returns: <AsyncHook> A reference to `asyncHook`.
Enable the callbacks for a givenAsyncHook instance. If no callbacks areprovided, enabling is a no-op.
TheAsyncHook instance is disabled by default. If theAsyncHook instanceshould be enabled immediately after creation, the following pattern can be used.
```mjs
import { createHook } from 'node:async_hooks';

const hook = createHook(callbacks).enable();
```

```cjs
const async_hooks = require('node:async_hooks');

const hook = async_hooks.createHook(callbacks).enable();
```
asyncHook.disable()#
- Returns: <AsyncHook> A reference to `asyncHook`.
Disable the callbacks for a givenAsyncHook instance from the global pool ofAsyncHook callbacks to be executed. Once a hook has been disabled it will notbe called again until enabled.
For API consistency `disable()` also returns the `AsyncHook` instance.
Hook callbacks#
Key events in the lifetime of asynchronous events have been categorized intofour areas: instantiation, before/after the callback is called, and when theinstance is destroyed.
init(asyncId, type, triggerAsyncId, resource)#
- `asyncId` <number> A unique ID for the async resource.
- `type` <string> The type of the async resource.
- `triggerAsyncId` <number> The unique ID of the async resource in whose execution context this async resource was created.
- `resource` <Object> Reference to the resource representing the async operation, needs to be released during `destroy`.

Called when a class is constructed that has the *possibility* to emit an asynchronous event. This *does not* mean the instance must call `before`/`after` before `destroy` is called, only that the possibility exists.
This behavior can be observed by doing something like opening a resource thenclosing it before the resource can be used. The following snippet demonstratesthis.
```mjs
import { createServer } from 'node:net';

createServer().listen(function() { this.close(); });
// OR
clearTimeout(setTimeout(() => {}, 10));
```

```cjs
require('node:net').createServer().listen(function() { this.close(); });
// OR
clearTimeout(setTimeout(() => {}, 10));
```
Every new resource is assigned an ID that is unique within the scope of thecurrent Node.js instance.
type#
The `type` is a string identifying the type of resource that caused `init` to be called. Generally, it will correspond to the name of the resource's constructor.
The `type` of resources created by Node.js itself can change in any Node.js release. Valid values include `TLSWRAP`, `TCPWRAP`, `TCPSERVERWRAP`, `GETADDRINFOREQWRAP`, `FSREQCALLBACK`, `Microtask`, and `Timeout`. Inspect the source code of the Node.js version used to get the full list.
Furthermore, users of `AsyncResource` create async resources independent of Node.js itself.
There is also the `PROMISE` resource type, which is used to track `Promise` instances and asynchronous work scheduled by them. The `Promise`s are only tracked when the `trackPromises` option is set to `true`.
Users are able to define their owntype when using the public embedder API.
It is possible to have type name collisions. Embedders are encouraged to useunique prefixes, such as the npm package name, to prevent collisions whenlistening to the hooks.
triggerAsyncId#
`triggerAsyncId` is the `asyncId` of the resource that caused (or "triggered") the new resource to initialize and that caused `init` to be called. This is different from `async_hooks.executionAsyncId()` that only shows *when* a resource was created, while `triggerAsyncId` shows *why* a resource was created.
The following is a simple demonstration oftriggerAsyncId:
```mjs
import { createHook, executionAsyncId } from 'node:async_hooks';
import { stdout } from 'node:process';
import net from 'node:net';
import fs from 'node:fs';

createHook({
  init(asyncId, type, triggerAsyncId) {
    const eid = executionAsyncId();
    fs.writeSync(
      stdout.fd,
      `${type}(${asyncId}): trigger: ${triggerAsyncId} execution: ${eid}\n`);
  },
}).enable();

net.createServer((conn) => {}).listen(8080);
```

```cjs
const { createHook, executionAsyncId } = require('node:async_hooks');
const { stdout } = require('node:process');
const net = require('node:net');
const fs = require('node:fs');

createHook({
  init(asyncId, type, triggerAsyncId) {
    const eid = executionAsyncId();
    fs.writeSync(
      stdout.fd,
      `${type}(${asyncId}): trigger: ${triggerAsyncId} execution: ${eid}\n`);
  },
}).enable();

net.createServer((conn) => {}).listen(8080);
```
Output when hitting the server with `nc localhost 8080`:
```
TCPSERVERWRAP(5): trigger: 1 execution: 1
TCPWRAP(7): trigger: 5 execution: 0
```
The `TCPSERVERWRAP` is the server which receives the connections.
The `TCPWRAP` is the new connection from the client. When a new connection is made, the `TCPWrap` instance is immediately constructed. This happens outside of any JavaScript stack. (An `executionAsyncId()` of `0` means that it is being executed from C++ with no JavaScript stack above it.) With only that information, it would be impossible to link resources together in terms of what caused them to be created, so `triggerAsyncId` is given the task of propagating what resource is responsible for the new resource's existence.
resource#
`resource` is an object that represents the actual async resource that has been initialized. The API to access the object may be specified by the creator of the resource. Resources created by Node.js itself are internal and may change at any time. Therefore no API is specified for these.
In some cases the resource object is reused for performance reasons; it is thus not safe to use it as a key in a `WeakMap` or add properties to it.
Asynchronous context example#
The context tracking use case is covered by the stable API `AsyncLocalStorage`. This example only illustrates async hooks operation, but `AsyncLocalStorage` fits this use case better.
The following is an example with additional information about the calls to `init` between the `before` and `after` calls, specifically what the callback to `listen()` will look like. The output formatting is slightly more elaborate to make calling context easier to see.
```mjs
import async_hooks from 'node:async_hooks';
import fs from 'node:fs';
import net from 'node:net';
import { stdout } from 'node:process';
const { fd } = stdout;

let indent = 0;
async_hooks.createHook({
  init(asyncId, type, triggerAsyncId) {
    const eid = async_hooks.executionAsyncId();
    const indentStr = ' '.repeat(indent);
    fs.writeSync(
      fd,
      `${indentStr}${type}(${asyncId}):` +
      ` trigger: ${triggerAsyncId} execution: ${eid}\n`);
  },
  before(asyncId) {
    const indentStr = ' '.repeat(indent);
    fs.writeSync(fd, `${indentStr}before: ${asyncId}\n`);
    indent += 2;
  },
  after(asyncId) {
    indent -= 2;
    const indentStr = ' '.repeat(indent);
    fs.writeSync(fd, `${indentStr}after: ${asyncId}\n`);
  },
  destroy(asyncId) {
    const indentStr = ' '.repeat(indent);
    fs.writeSync(fd, `${indentStr}destroy: ${asyncId}\n`);
  },
}).enable();

net.createServer(() => {}).listen(8080, () => {
  // Let's wait 10ms before logging the server started.
  setTimeout(() => {
    console.log('>>>', async_hooks.executionAsyncId());
  }, 10);
});
```

```cjs
const async_hooks = require('node:async_hooks');
const fs = require('node:fs');
const net = require('node:net');
const { fd } = process.stdout;

let indent = 0;
async_hooks.createHook({
  init(asyncId, type, triggerAsyncId) {
    const eid = async_hooks.executionAsyncId();
    const indentStr = ' '.repeat(indent);
    fs.writeSync(
      fd,
      `${indentStr}${type}(${asyncId}):` +
      ` trigger: ${triggerAsyncId} execution: ${eid}\n`);
  },
  before(asyncId) {
    const indentStr = ' '.repeat(indent);
    fs.writeSync(fd, `${indentStr}before: ${asyncId}\n`);
    indent += 2;
  },
  after(asyncId) {
    indent -= 2;
    const indentStr = ' '.repeat(indent);
    fs.writeSync(fd, `${indentStr}after: ${asyncId}\n`);
  },
  destroy(asyncId) {
    const indentStr = ' '.repeat(indent);
    fs.writeSync(fd, `${indentStr}destroy: ${asyncId}\n`);
  },
}).enable();

net.createServer(() => {}).listen(8080, () => {
  // Let's wait 10ms before logging the server started.
  setTimeout(() => {
    console.log('>>>', async_hooks.executionAsyncId());
  }, 10);
});
```
Output from only starting the server:
```
TCPSERVERWRAP(5): trigger: 1 execution: 1
TickObject(6): trigger: 5 execution: 1
before: 6
  Timeout(7): trigger: 6 execution: 6
after: 6
destroy: 6
before: 7
>>> 7
  TickObject(8): trigger: 7 execution: 7
after: 7
before: 8
after: 8
```
As illustrated in the example, `executionAsyncId()` and `execution` each specify the value of the current execution context; which is delineated by calls to `before` and `after`.
Only using `execution` to graph resource allocation results in the following:
```
root(1)
     ^
     |
TickObject(6)
     ^
     |
 Timeout(7)
```
The `TCPSERVERWRAP` is not part of this graph, even though it was the reason for `console.log()` being called. This is because binding to a port without a host name is a *synchronous* operation, but to maintain a completely asynchronous API the user's callback is placed in a `process.nextTick()`. Which is why `TickObject` is present in the output and is a 'parent' for the `.listen()` callback.
The graph only shows *when* a resource was created, not *why*, so to track the *why* use `triggerAsyncId`. Which can be represented with the following graph:
```
 bootstrap(1)
     |
     ˅
TCPSERVERWRAP(5)
     |
     ˅
 TickObject(6)
     |
     ˅
  Timeout(7)
```
before(asyncId)#
- `asyncId` <number>
When an asynchronous operation is initiated (such as a TCP server receiving anew connection) or completes (such as writing data to disk) a callback iscalled to notify the user. Thebefore callback is called just before saidcallback is executed.asyncId is the unique identifier assigned to theresource about to execute the callback.
Thebefore callback will be called 0 to N times. Thebefore callbackwill typically be called 0 times if the asynchronous operation was cancelledor, for example, if no connections are received by a TCP server. Persistentasynchronous resources like a TCP server will typically call thebeforecallback multiple times, while other operations likefs.open() will callit only once.
after(asyncId)#
- `asyncId` <number>
Called immediately after the callback specified inbefore is completed.
If an uncaught exception occurs during execution of the callback, then `after` will run *after* the `'uncaughtException'` event is emitted or a `domain`'s handler runs.
destroy(asyncId)#
- `asyncId` <number>

Called after the resource corresponding to `asyncId` is destroyed. It is also called asynchronously from the embedder API `emitDestroy()`.
Some resources depend on garbage collection for cleanup, so if a reference is made to the `resource` object passed to `init` it is possible that `destroy` will never be called, causing a memory leak in the application. If the resource does not depend on garbage collection, then this will not be an issue.
Using the destroy hook results in additional overhead because it enables tracking of `Promise` instances via the garbage collector.
promiseResolve(asyncId)#
- asyncId <number>
Called when the resolve function passed to the Promise constructor is invoked (either directly or through other means of resolving a promise).
resolve() does not do any observable synchronous work.
The Promise is not necessarily fulfilled or rejected at this point if the Promise was resolved by assuming the state of another Promise.
```js
new Promise((resolve) => resolve(true)).then((a) => {});
```
calls the following callbacks:
```text
init for PROMISE with id 5, trigger id: 1
  promise resolve 5      # corresponds to resolve(true)
init for PROMISE with id 6, trigger id: 5  # the Promise returned by then()
  before 6               # the then() callback is entered
  promise resolve 6      # the then() callback resolves the promise
                         #   by returning
  after 6
```
async_hooks.executionAsyncResource()#
- Returns: <Object> The resource representing the current execution. Useful to store data within the resource.
Resource objects returned by executionAsyncResource() are most often internal Node.js handle objects with undocumented APIs. Using any functions or properties on the object is likely to crash your application and should be avoided.
Using executionAsyncResource() in the top-level execution context will return an empty object as there is no handle or request object to use, but having an object representing the top-level can be helpful.
```js
import { open } from 'node:fs';
import { executionAsyncId, executionAsyncResource } from 'node:async_hooks';

console.log(executionAsyncId(), executionAsyncResource());  // 1 {}
open(new URL(import.meta.url), 'r', (err, fd) => {
  console.log(executionAsyncId(), executionAsyncResource());  // 7 FSReqWrap
});
```
```js
const { open } = require('node:fs');
const { executionAsyncId, executionAsyncResource } = require('node:async_hooks');

console.log(executionAsyncId(), executionAsyncResource());  // 1 {}
open(__filename, 'r', (err, fd) => {
  console.log(executionAsyncId(), executionAsyncResource());  // 7 FSReqWrap
});
```
This can be used to implement continuation local storage without the use of a tracking Map to store the metadata:
```js
import { createServer } from 'node:http';
import {
  executionAsyncId,
  executionAsyncResource,
  createHook,
} from 'node:async_hooks';
const sym = Symbol('state'); // Private symbol to avoid pollution

createHook({
  init(asyncId, type, triggerAsyncId, resource) {
    const cr = executionAsyncResource();
    if (cr) {
      resource[sym] = cr[sym];
    }
  },
}).enable();

const server = createServer((req, res) => {
  executionAsyncResource()[sym] = { state: req.url };
  setTimeout(function() {
    res.end(JSON.stringify(executionAsyncResource()[sym]));
  }, 100);
}).listen(3000);
```
```js
const { createServer } = require('node:http');
const {
  executionAsyncId,
  executionAsyncResource,
  createHook,
} = require('node:async_hooks');
const sym = Symbol('state'); // Private symbol to avoid pollution

createHook({
  init(asyncId, type, triggerAsyncId, resource) {
    const cr = executionAsyncResource();
    if (cr) {
      resource[sym] = cr[sym];
    }
  },
}).enable();

const server = createServer((req, res) => {
  executionAsyncResource()[sym] = { state: req.url };
  setTimeout(function() {
    res.end(JSON.stringify(executionAsyncResource()[sym]));
  }, 100);
}).listen(3000);
```
async_hooks.executionAsyncId()#
History
| Version | Changes |
|---|---|
| v8.2.0 | Renamed from |
| v8.1.0 | Added in: v8.1.0 |
- Returns: <number> The asyncId of the current execution context. Useful to track when something calls.
```js
import { executionAsyncId } from 'node:async_hooks';
import fs from 'node:fs';

console.log(executionAsyncId());  // 1 - bootstrap
const path = '.';
fs.open(path, 'r', (err, fd) => {
  console.log(executionAsyncId());  // 6 - open()
});
```
```js
const async_hooks = require('node:async_hooks');
const fs = require('node:fs');

console.log(async_hooks.executionAsyncId());  // 1 - bootstrap
const path = '.';
fs.open(path, 'r', (err, fd) => {
  console.log(async_hooks.executionAsyncId());  // 6 - open()
});
```
The ID returned from executionAsyncId() is related to execution timing, not causality (which is covered by triggerAsyncId()):
```js
const server = net.createServer((conn) => {
  // Returns the ID of the server, not of the new connection, because the
  // callback runs in the execution scope of the server's MakeCallback().
  async_hooks.executionAsyncId();
}).listen(port, () => {
  // Returns the ID of a TickObject (process.nextTick()) because all
  // callbacks passed to .listen() are wrapped in a nextTick().
  async_hooks.executionAsyncId();
});
```
Promise contexts may not get precise executionAsyncIds by default. See the section on promise execution tracking.
async_hooks.triggerAsyncId()#
- Returns: <number> The ID of the resource responsible for calling the callback that is currently being executed.
```js
const server = net.createServer((conn) => {
  // The resource that caused (or triggered) this callback to be called
  // was that of the new connection. Thus the return value of triggerAsyncId()
  // is the asyncId of "conn".
  async_hooks.triggerAsyncId();
}).listen(port, () => {
  // Even though all callbacks passed to .listen() are wrapped in a nextTick()
  // the callback itself exists because the call to the server's .listen()
  // was made. So the return value would be the ID of the server.
  async_hooks.triggerAsyncId();
});
```
Promise contexts may not get valid triggerAsyncIds by default. See the section on promise execution tracking.
async_hooks.asyncWrapProviders#
- Returns: A map of provider types to the corresponding numeric id. This map contains all the event types that might be emitted by the async_hooks.init() event.
This feature suppresses the deprecated usage of process.binding('async_wrap').Providers. See: DEP0111
Promise execution tracking#
By default, promise executions are not assigned asyncIds due to the relatively expensive nature of the promise introspection API provided by V8. This means that programs using promises or async/await will not get correct execution and trigger ids for promise callback contexts by default.
```js
import { executionAsyncId, triggerAsyncId } from 'node:async_hooks';

Promise.resolve(1729).then(() => {
  console.log(`eid ${executionAsyncId()} tid ${triggerAsyncId()}`);
});
// produces:
// eid 1 tid 0
```
```js
const { executionAsyncId, triggerAsyncId } = require('node:async_hooks');

Promise.resolve(1729).then(() => {
  console.log(`eid ${executionAsyncId()} tid ${triggerAsyncId()}`);
});
// produces:
// eid 1 tid 0
```
Observe that the then() callback claims to have executed in the context of the outer scope even though there was an asynchronous hop involved. Also, the triggerAsyncId value is 0, which means that we are missing context about the resource that caused (triggered) the then() callback to be executed.
Installing async hooks via async_hooks.createHook enables promise execution tracking:
```js
import { createHook, executionAsyncId, triggerAsyncId } from 'node:async_hooks';

createHook({ init() {} }).enable(); // forces PromiseHooks to be enabled.
Promise.resolve(1729).then(() => {
  console.log(`eid ${executionAsyncId()} tid ${triggerAsyncId()}`);
});
// produces:
// eid 7 tid 6
```
```js
const { createHook, executionAsyncId, triggerAsyncId } = require('node:async_hooks');

createHook({ init() {} }).enable(); // forces PromiseHooks to be enabled.
Promise.resolve(1729).then(() => {
  console.log(`eid ${executionAsyncId()} tid ${triggerAsyncId()}`);
});
// produces:
// eid 7 tid 6
```
In this example, adding any actual hook function enabled the tracking of promises. There are two promises in the example above; the promise created by Promise.resolve() and the promise returned by the call to then(). In the example above, the first promise got the asyncId 6 and the latter got asyncId 7. During the execution of the then() callback, we are executing in the context of promise with asyncId 7. This promise was triggered by async resource 6.
Another subtlety with promises is that before and after callbacks are run only on chained promises. That means promises not created by then()/catch() will not have the before and after callbacks fired on them. For more details see the details of the V8 PromiseHooks API.
Disabling promise execution tracking#
Tracking promise execution can cause a significant performance overhead. To opt out of promise tracking, set trackPromises to false:
```js
const { createHook } = require('node:async_hooks');
const { writeSync } = require('node:fs');

createHook({
  init(asyncId, type, triggerAsyncId, resource) {
    // This init hook does not get called when trackPromises is set to false.
    writeSync(1, `init hook triggered for ${type}\n`);
  },
  trackPromises: false, // Do not track promises.
}).enable();

Promise.resolve(1729);
```
```js
import { createHook } from 'node:async_hooks';
import { writeSync } from 'node:fs';

createHook({
  init(asyncId, type, triggerAsyncId, resource) {
    // This init hook does not get called when trackPromises is set to false.
    writeSync(1, `init hook triggered for ${type}\n`);
  },
  trackPromises: false, // Do not track promises.
}).enable();

Promise.resolve(1729);
```
JavaScript embedder API#
Library developers that handle their own asynchronous resources performing tasks like I/O, connection pooling, or managing callback queues may use the AsyncResource JavaScript API so that all the appropriate callbacks are called.
Class: AsyncResource#
The documentation for this class has moved to AsyncResource.
Class: AsyncLocalStorage#
The documentation for this class has moved to AsyncLocalStorage.
Buffer#
Source Code: lib/buffer.js
Buffer objects are used to represent a fixed-length sequence of bytes. Many Node.js APIs support Buffers.
The Buffer class is a subclass of JavaScript's <Uint8Array> class and extends it with methods that cover additional use cases. Node.js APIs accept plain <Uint8Array>s wherever Buffers are supported as well.
While the Buffer class is available within the global scope, it is still recommended to explicitly reference it via an import or require statement.
```js
import { Buffer } from 'node:buffer';

// Creates a zero-filled Buffer of length 10.
const buf1 = Buffer.alloc(10);

// Creates a Buffer of length 10,
// filled with bytes which all have the value `1`.
const buf2 = Buffer.alloc(10, 1);

// Creates an uninitialized buffer of length 10.
// This is faster than calling Buffer.alloc() but the returned
// Buffer instance might contain old data that needs to be
// overwritten using fill(), write(), or other functions that fill the Buffer's
// contents.
const buf3 = Buffer.allocUnsafe(10);

// Creates a Buffer containing the bytes [1, 2, 3].
const buf4 = Buffer.from([1, 2, 3]);

// Creates a Buffer containing the bytes [1, 1, 1, 1] – the entries
// are all truncated using `(value & 255)` to fit into the range 0–255.
const buf5 = Buffer.from([257, 257.5, -255, '1']);

// Creates a Buffer containing the UTF-8-encoded bytes for the string 'tést':
// [0x74, 0xc3, 0xa9, 0x73, 0x74] (in hexadecimal notation)
// [116, 195, 169, 115, 116] (in decimal notation)
const buf6 = Buffer.from('tést');

// Creates a Buffer containing the Latin-1 bytes [0x74, 0xe9, 0x73, 0x74].
const buf7 = Buffer.from('tést', 'latin1');
```
```js
const { Buffer } = require('node:buffer');

// Creates a zero-filled Buffer of length 10.
const buf1 = Buffer.alloc(10);

// Creates a Buffer of length 10,
// filled with bytes which all have the value `1`.
const buf2 = Buffer.alloc(10, 1);

// Creates an uninitialized buffer of length 10.
// This is faster than calling Buffer.alloc() but the returned
// Buffer instance might contain old data that needs to be
// overwritten using fill(), write(), or other functions that fill the Buffer's
// contents.
const buf3 = Buffer.allocUnsafe(10);

// Creates a Buffer containing the bytes [1, 2, 3].
const buf4 = Buffer.from([1, 2, 3]);

// Creates a Buffer containing the bytes [1, 1, 1, 1] – the entries
// are all truncated using `(value & 255)` to fit into the range 0–255.
const buf5 = Buffer.from([257, 257.5, -255, '1']);

// Creates a Buffer containing the UTF-8-encoded bytes for the string 'tést':
// [0x74, 0xc3, 0xa9, 0x73, 0x74] (in hexadecimal notation)
// [116, 195, 169, 115, 116] (in decimal notation)
const buf6 = Buffer.from('tést');

// Creates a Buffer containing the Latin-1 bytes [0x74, 0xe9, 0x73, 0x74].
const buf7 = Buffer.from('tést', 'latin1');
```
Buffers and character encodings#
History
| Version | Changes |
|---|---|
| v15.7.0, v14.18.0 | Introduced |
| v6.4.0 | Introduced |
| v5.0.0 | Removed the deprecated |
When converting between Buffers and strings, a character encoding may be specified. If no character encoding is specified, UTF-8 will be used as the default.
```js
import { Buffer } from 'node:buffer';

const buf = Buffer.from('hello world', 'utf8');

console.log(buf.toString('hex'));
// Prints: 68656c6c6f20776f726c64
console.log(buf.toString('base64'));
// Prints: aGVsbG8gd29ybGQ=

console.log(Buffer.from('fhqwhgads', 'utf8'));
// Prints: <Buffer 66 68 71 77 68 67 61 64 73>
console.log(Buffer.from('fhqwhgads', 'utf16le'));
// Prints: <Buffer 66 00 68 00 71 00 77 00 68 00 67 00 61 00 64 00 73 00>
```
```js
const { Buffer } = require('node:buffer');

const buf = Buffer.from('hello world', 'utf8');

console.log(buf.toString('hex'));
// Prints: 68656c6c6f20776f726c64
console.log(buf.toString('base64'));
// Prints: aGVsbG8gd29ybGQ=

console.log(Buffer.from('fhqwhgads', 'utf8'));
// Prints: <Buffer 66 68 71 77 68 67 61 64 73>
console.log(Buffer.from('fhqwhgads', 'utf16le'));
// Prints: <Buffer 66 00 68 00 71 00 77 00 68 00 67 00 61 00 64 00 73 00>
```
Node.js buffers accept all case variations of encoding strings that they receive. For example, UTF-8 can be specified as 'utf8', 'UTF8', or 'uTf8'.
The character encodings currently supported by Node.js are the following:
- 'utf8' (alias: 'utf-8'): Multi-byte encoded Unicode characters. Many web pages and other document formats use UTF-8. This is the default character encoding. When decoding a Buffer into a string that does not exclusively contain valid UTF-8 data, the Unicode replacement character U+FFFD � will be used to represent those errors.
- 'utf16le' (alias: 'utf-16le'): Multi-byte encoded Unicode characters. Unlike 'utf8', each character in the string will be encoded using either 2 or 4 bytes. Node.js only supports the little-endian variant of UTF-16.
- 'latin1': Latin-1 stands for ISO-8859-1. This character encoding only supports the Unicode characters from U+0000 to U+00FF. Each character is encoded using a single byte. Characters that do not fit into that range are truncated and will be mapped to characters in that range.
Converting a Buffer into a string using one of the above is referred to as decoding, and converting a string into a Buffer is referred to as encoding.
Node.js also supports the following binary-to-text encodings. For binary-to-text encodings, the naming convention is reversed: Converting a Buffer into a string is typically referred to as encoding, and converting a string into a Buffer as decoding.
- 'base64': Base64 encoding. When creating a Buffer from a string, this encoding will also correctly accept "URL and Filename Safe Alphabet" as specified in RFC 4648, Section 5. Whitespace characters such as spaces, tabs, and new lines contained within the base64-encoded string are ignored.
- 'base64url': base64url encoding as specified in RFC 4648, Section 5. When creating a Buffer from a string, this encoding will also correctly accept regular base64-encoded strings. When encoding a Buffer to a string, this encoding will omit padding.
- 'hex': Encode each byte as two hexadecimal characters. Data truncation may occur when decoding strings that do not exclusively consist of an even number of hexadecimal characters. See below for an example.
The following legacy character encodings are also supported:
- 'ascii': For 7-bit ASCII data only. When encoding a string into a Buffer, this is equivalent to using 'latin1'. When decoding a Buffer into a string, using this encoding will additionally unset the highest bit of each byte before decoding as 'latin1'. Generally, there should be no reason to use this encoding, as 'utf8' (or, if the data is known to always be ASCII-only, 'latin1') will be a better choice when encoding or decoding ASCII-only text. It is only provided for legacy compatibility.
- 'binary': Alias for 'latin1'. The name of this encoding can be very misleading, as all of the encodings listed here convert between strings and binary data. For converting between strings and Buffers, typically 'utf8' is the right choice.
- 'ucs2', 'ucs-2': Aliases of 'utf16le'. UCS-2 used to refer to a variant of UTF-16 that did not support characters that had code points larger than U+FFFF. In Node.js, these code points are always supported.
```js
import { Buffer } from 'node:buffer';

Buffer.from('1ag123', 'hex');
// Prints <Buffer 1a>, data truncated when first non-hexadecimal value
// ('g') encountered.

Buffer.from('1a7', 'hex');
// Prints <Buffer 1a>, data truncated when data ends in single digit ('7').

Buffer.from('1634', 'hex');
// Prints <Buffer 16 34>, all data represented.
```
```js
const { Buffer } = require('node:buffer');

Buffer.from('1ag123', 'hex');
// Prints <Buffer 1a>, data truncated when first non-hexadecimal value
// ('g') encountered.

Buffer.from('1a7', 'hex');
// Prints <Buffer 1a>, data truncated when data ends in single digit ('7').

Buffer.from('1634', 'hex');
// Prints <Buffer 16 34>, all data represented.
```
Modern Web browsers follow the WHATWG Encoding Standard which aliases both 'latin1' and 'ISO-8859-1' to 'win-1252'. This means that while doing something like http.get(), if the returned charset is one of those listed in the WHATWG specification it is possible that the server actually returned 'win-1252'-encoded data, and using 'latin1' encoding may incorrectly decode the characters.
Buffers and TypedArrays#
History
| Version | Changes |
|---|---|
| v3.0.0 | The |
Buffer instances are also JavaScript <Uint8Array> and <TypedArray> instances. All <TypedArray> methods and properties are available on Buffers. There are, however, subtle incompatibilities between the Buffer API and the <TypedArray> API.
In particular:
- While TypedArray.prototype.slice() creates a copy of part of the TypedArray, Buffer.prototype.slice() creates a view over the existing Buffer without copying. This behavior can be surprising, and only exists for legacy compatibility. TypedArray.prototype.subarray() can be used to achieve the behavior of Buffer.prototype.slice() on both Buffers and other TypedArrays and should be preferred.
- buf.toString() is incompatible with its TypedArray equivalent.
- A number of methods, e.g. buf.indexOf(), support additional arguments.
There are two ways to create new <TypedArray> instances from a Buffer:
- Passing a Buffer to a <TypedArray> constructor will copy the Buffer's contents, interpreted as an array of integers, and not as a byte sequence of the target type.
```js
import { Buffer } from 'node:buffer';

const buf = Buffer.from([1, 2, 3, 4]);
const uint32array = new Uint32Array(buf);

console.log(uint32array);
// Prints: Uint32Array(4) [ 1, 2, 3, 4 ]
```
```js
const { Buffer } = require('node:buffer');

const buf = Buffer.from([1, 2, 3, 4]);
const uint32array = new Uint32Array(buf);

console.log(uint32array);
// Prints: Uint32Array(4) [ 1, 2, 3, 4 ]
```
- Passing the Buffer's underlying <ArrayBuffer> will create a <TypedArray> that shares its memory with the Buffer.
```js
import { Buffer } from 'node:buffer';

const buf = Buffer.from('hello', 'utf16le');
const uint16array = new Uint16Array(
  buf.buffer,
  buf.byteOffset,
  buf.length / Uint16Array.BYTES_PER_ELEMENT);

console.log(uint16array);
// Prints: Uint16Array(5) [ 104, 101, 108, 108, 111 ]
```
```js
const { Buffer } = require('node:buffer');

const buf = Buffer.from('hello', 'utf16le');
const uint16array = new Uint16Array(
  buf.buffer,
  buf.byteOffset,
  buf.length / Uint16Array.BYTES_PER_ELEMENT);

console.log(uint16array);
// Prints: Uint16Array(5) [ 104, 101, 108, 108, 111 ]
```
It is possible to create a new Buffer that shares the same allocated memory as a <TypedArray> instance by using the TypedArray object's .buffer property in the same way. Buffer.from() behaves like new Uint8Array() in this context.
```js
import { Buffer } from 'node:buffer';

const arr = new Uint16Array(2);

arr[0] = 5000;
arr[1] = 4000;

// Copies the contents of `arr`.
const buf1 = Buffer.from(arr);

// Shares memory with `arr`.
const buf2 = Buffer.from(arr.buffer);

console.log(buf1);
// Prints: <Buffer 88 a0>
console.log(buf2);
// Prints: <Buffer 88 13 a0 0f>

arr[1] = 6000;

console.log(buf1);
// Prints: <Buffer 88 a0>
console.log(buf2);
// Prints: <Buffer 88 13 70 17>
```
```js
const { Buffer } = require('node:buffer');

const arr = new Uint16Array(2);

arr[0] = 5000;
arr[1] = 4000;

// Copies the contents of `arr`.
const buf1 = Buffer.from(arr);

// Shares memory with `arr`.
const buf2 = Buffer.from(arr.buffer);

console.log(buf1);
// Prints: <Buffer 88 a0>
console.log(buf2);
// Prints: <Buffer 88 13 a0 0f>

arr[1] = 6000;

console.log(buf1);
// Prints: <Buffer 88 a0>
console.log(buf2);
// Prints: <Buffer 88 13 70 17>
```
When creating a Buffer using a <TypedArray>'s .buffer, it is possible to use only a portion of the underlying <ArrayBuffer> by passing in byteOffset and length parameters.
```js
import { Buffer } from 'node:buffer';

const arr = new Uint16Array(20);
const buf = Buffer.from(arr.buffer, 0, 16);

console.log(buf.length);
// Prints: 16
```
```js
const { Buffer } = require('node:buffer');

const arr = new Uint16Array(20);
const buf = Buffer.from(arr.buffer, 0, 16);

console.log(buf.length);
// Prints: 16
```
Buffer.from() and TypedArray.from() have different signatures and implementations. Specifically, the <TypedArray> variants accept a second argument that is a mapping function, TypedArray.from(source[, mapFn[, thisArg]]), that is invoked on every element of the typed array.
The Buffer.from() method, however, does not support the use of a mapping function:
- Buffer.from(array)
- Buffer.from(buffer)
- Buffer.from(arrayBuffer[, byteOffset[, length]])
- Buffer.from(string[, encoding])
Buffer methods are callable with Uint8Array instances#
All methods on the Buffer prototype are callable with a Uint8Array instance.
```js
const { toString, write } = Buffer.prototype;

const uint8array = new Uint8Array(5);
write.call(uint8array, 'hello', 0, 5, 'utf8');
// 5
// <Uint8Array 68 65 6c 6c 6f>

toString.call(uint8array, 'utf8');
// 'hello'
```
Buffers and iteration#
Buffer instances can be iterated over using for..of syntax:
```js
import { Buffer } from 'node:buffer';

const buf = Buffer.from([1, 2, 3]);

for (const b of buf) {
  console.log(b);
}
// Prints:
// 1
// 2
// 3
```
```js
const { Buffer } = require('node:buffer');

const buf = Buffer.from([1, 2, 3]);

for (const b of buf) {
  console.log(b);
}
// Prints:
// 1
// 2
// 3
```
Additionally, the buf.values(), buf.keys(), and buf.entries() methods can be used to create iterators.
Class: Blob#
History
| Version | Changes |
|---|---|
| v18.0.0, v16.17.0 | No longer experimental. |
| v15.7.0, v14.18.0 | Added in: v15.7.0, v14.18.0 |
A <Blob> encapsulates immutable, raw data that can be safely shared across multiple worker threads.
new buffer.Blob([sources[, options]])#
History
| Version | Changes |
|---|---|
| v16.7.0 | Added the standard |
| v15.7.0, v14.18.0 | Added in: v15.7.0, v14.18.0 |
- sources <string[]> | <ArrayBuffer[]> | <TypedArray[]> | <DataView[]> | <Blob[]> An array of string, <ArrayBuffer>, <TypedArray>, <DataView>, or <Blob> objects, or any mix of such objects, that will be stored within the Blob.
- options <Object>
  - endings <string> One of either 'transparent' or 'native'. When set to 'native', line endings in string source parts will be converted to the platform native line-ending as specified by require('node:os').EOL.
  - type <string> The Blob content-type. The intent is for type to convey the MIME media type of the data, however no validation of the type format is performed.
Creates a new Blob object containing a concatenation of the given sources.
<ArrayBuffer>, <TypedArray>, <DataView>, and <Buffer> sources are copied into the Blob and can therefore be safely modified after the Blob is created.
String sources are encoded as UTF-8 byte sequences and copied into the Blob. Unmatched surrogate pairs within each string part will be replaced by Unicode U+FFFD replacement characters.
blob.arrayBuffer()#
- Returns: <Promise>
Returns a promise that fulfills with an <ArrayBuffer> containing a copy of the Blob data.
blob.bytes()#
The blob.bytes() method returns the bytes of the Blob object as a Promise<Uint8Array>.
```js
const blob = new Blob(['hello']);
blob.bytes().then((bytes) => {
  console.log(bytes); // Outputs: Uint8Array(5) [ 104, 101, 108, 108, 111 ]
});
```
blob.slice([start[, end[, type]]])#
- start <number> The starting index.
- end <number> The ending index.
- type <string> The content-type for the new Blob
Creates and returns a new Blob containing a subset of this Blob object's data. The original Blob is not altered.
blob.stream()#
- Returns: <ReadableStream>
Returns a new ReadableStream that allows the content of the Blob to be read.
blob.text()#
- Returns: <Promise>
Returns a promise that fulfills with the contents of the Blob decoded as a UTF-8 string.
Blob objects and MessageChannel#
Once a <Blob> object is created, it can be sent via MessagePort to multiple destinations without transferring or immediately copying the data. The data contained by the Blob is copied only when the arrayBuffer() or text() methods are called.
```js
import { Blob } from 'node:buffer';
import { setTimeout as delay } from 'node:timers/promises';

const blob = new Blob(['hello there']);

const mc1 = new MessageChannel();
const mc2 = new MessageChannel();

mc1.port1.onmessage = async ({ data }) => {
  console.log(await data.arrayBuffer());
  mc1.port1.close();
};

mc2.port1.onmessage = async ({ data }) => {
  await delay(1000);
  console.log(await data.arrayBuffer());
  mc2.port1.close();
};

mc1.port2.postMessage(blob);
mc2.port2.postMessage(blob);

// The Blob is still usable after posting.
blob.text().then(console.log);
```
```js
const { Blob } = require('node:buffer');
const { setTimeout: delay } = require('node:timers/promises');

const blob = new Blob(['hello there']);

const mc1 = new MessageChannel();
const mc2 = new MessageChannel();

mc1.port1.onmessage = async ({ data }) => {
  console.log(await data.arrayBuffer());
  mc1.port1.close();
};

mc2.port1.onmessage = async ({ data }) => {
  await delay(1000);
  console.log(await data.arrayBuffer());
  mc2.port1.close();
};

mc1.port2.postMessage(blob);
mc2.port2.postMessage(blob);

// The Blob is still usable after posting.
blob.text().then(console.log);
```
Class: Buffer#
The Buffer class is a global type for dealing with binary data directly. It can be constructed in a variety of ways.
Static method: Buffer.alloc(size[, fill[, encoding]])#
History
| Version | Changes |
|---|---|
| v20.0.0 | Throw ERR_INVALID_ARG_TYPE or ERR_OUT_OF_RANGE instead of ERR_INVALID_ARG_VALUE for invalid input arguments. |
| v15.0.0 | Throw ERR_INVALID_ARG_VALUE instead of ERR_INVALID_OPT_VALUE for invalid input arguments. |
| v10.0.0 | Attempting to fill a non-zero length buffer with a zero length buffer triggers a thrown exception. |
| v10.0.0 | Specifying an invalid string for |
| v8.9.3 | Specifying an invalid string for |
| v5.10.0 | Added in: v5.10.0 |
- size <integer> The desired length of the new Buffer.
- fill <string> | <Buffer> | <Uint8Array> | <integer> A value to pre-fill the new Buffer with. Default: 0.
- encoding <string> If fill is a string, this is its encoding. Default: 'utf8'.
- Returns: <Buffer>
Allocates a new Buffer of size bytes. If fill is undefined, the Buffer will be zero-filled.
```js
import { Buffer } from 'node:buffer';

const buf = Buffer.alloc(5);

console.log(buf);
// Prints: <Buffer 00 00 00 00 00>
```
```js
const { Buffer } = require('node:buffer');

const buf = Buffer.alloc(5);

console.log(buf);
// Prints: <Buffer 00 00 00 00 00>
```
If size is larger than buffer.constants.MAX_LENGTH or smaller than 0, ERR_OUT_OF_RANGE is thrown.
If fill is specified, the allocated Buffer will be initialized by calling buf.fill(fill).
```js
import { Buffer } from 'node:buffer';

const buf = Buffer.alloc(5, 'a');

console.log(buf);
// Prints: <Buffer 61 61 61 61 61>
```
```js
const { Buffer } = require('node:buffer');

const buf = Buffer.alloc(5, 'a');

console.log(buf);
// Prints: <Buffer 61 61 61 61 61>
```
If both fill and encoding are specified, the allocated Buffer will be initialized by calling buf.fill(fill, encoding).
```js
import { Buffer } from 'node:buffer';

const buf = Buffer.alloc(11, 'aGVsbG8gd29ybGQ=', 'base64');

console.log(buf);
// Prints: <Buffer 68 65 6c 6c 6f 20 77 6f 72 6c 64>
```
```js
const { Buffer } = require('node:buffer');

const buf = Buffer.alloc(11, 'aGVsbG8gd29ybGQ=', 'base64');

console.log(buf);
// Prints: <Buffer 68 65 6c 6c 6f 20 77 6f 72 6c 64>
```
Calling Buffer.alloc() can be measurably slower than the alternative Buffer.allocUnsafe() but ensures that the newly created Buffer instance contents will never contain sensitive data from previous allocations, including data that might not have been allocated for Buffers.
A TypeError will be thrown if size is not a number.
Static method: Buffer.allocUnsafe(size)#
History
| Version | Changes |
|---|---|
| v20.0.0 | Throw ERR_INVALID_ARG_TYPE or ERR_OUT_OF_RANGE instead of ERR_INVALID_ARG_VALUE for invalid input arguments. |
| v15.0.0 | Throw ERR_INVALID_ARG_VALUE instead of ERR_INVALID_OPT_VALUE for invalid input arguments. |
| v7.0.0 | Passing a negative |
| v5.10.0 | Added in: v5.10.0 |
Allocates a new Buffer of size bytes. If size is larger than buffer.constants.MAX_LENGTH or smaller than 0, ERR_OUT_OF_RANGE is thrown.
The underlying memory for Buffer instances created in this way is not initialized. The contents of the newly created Buffer are unknown and may contain sensitive data. Use Buffer.alloc() instead to initialize Buffer instances with zeroes.
```js
import { Buffer } from 'node:buffer';

const buf = Buffer.allocUnsafe(10);

console.log(buf);
// Prints (contents may vary): <Buffer a0 8b 28 3f 01 00 00 00 50 32>

buf.fill(0);

console.log(buf);
// Prints: <Buffer 00 00 00 00 00 00 00 00 00 00>
```
```js
const { Buffer } = require('node:buffer');

const buf = Buffer.allocUnsafe(10);

console.log(buf);
// Prints (contents may vary): <Buffer a0 8b 28 3f 01 00 00 00 50 32>

buf.fill(0);

console.log(buf);
// Prints: <Buffer 00 00 00 00 00 00 00 00 00 00>
```
A TypeError will be thrown if size is not a number.
The Buffer module pre-allocates an internal Buffer instance of size Buffer.poolSize that is used as a pool for the fast allocation of new Buffer instances created using Buffer.allocUnsafe(), Buffer.from(array), Buffer.from(string), and Buffer.concat() only when size is less than Buffer.poolSize >>> 1 (floor of Buffer.poolSize divided by two).
Use of this pre-allocated internal memory pool is a key difference between calling Buffer.alloc(size, fill) vs. Buffer.allocUnsafe(size).fill(fill). Specifically, Buffer.alloc(size, fill) will never use the internal Buffer pool, while Buffer.allocUnsafe(size).fill(fill) will use the internal Buffer pool if size is less than or equal to half Buffer.poolSize. The difference is subtle but can be important when an application requires the additional performance that Buffer.allocUnsafe() provides.
Static method: Buffer.allocUnsafeSlow(size)#
History
| Version | Changes |
|---|---|
| v20.0.0 | Throw ERR_INVALID_ARG_TYPE or ERR_OUT_OF_RANGE instead of ERR_INVALID_ARG_VALUE for invalid input arguments. |
| v15.0.0 | Throw ERR_INVALID_ARG_VALUE instead of ERR_INVALID_OPT_VALUE for invalid input arguments. |
| v5.12.0 | Added in: v5.12.0 |
Allocates a new Buffer of size bytes. If size is larger than buffer.constants.MAX_LENGTH or smaller than 0, ERR_OUT_OF_RANGE is thrown. A zero-length Buffer is created if size is 0.
The underlying memory for Buffer instances created in this way is not initialized. The contents of the newly created Buffer are unknown and may contain sensitive data. Use buf.fill(0) to initialize such Buffer instances with zeroes.
When using Buffer.allocUnsafe() to allocate new Buffer instances, allocations less than Buffer.poolSize >>> 1 (4 KiB when the default poolSize is used) are sliced from a single pre-allocated Buffer. This allows applications to avoid the garbage collection overhead of creating many individually allocated Buffer instances. This approach improves both performance and memory usage by eliminating the need to track and clean up as many individual ArrayBuffer objects.
However, in the case where a developer may need to retain a small chunk of memory from a pool for an indeterminate amount of time, it may be appropriate to create an un-pooled Buffer instance using Buffer.allocUnsafeSlow() and then copy out the relevant bits.
```js
import { Buffer } from 'node:buffer';

// Need to keep around a few small chunks of memory.
const store = [];

socket.on('readable', () => {
  let data;
  while (null !== (data = socket.read())) {
    // Allocate for retained data.
    const sb = Buffer.allocUnsafeSlow(10);

    // Copy the data into the new allocation.
    data.copy(sb, 0, 0, 10);

    store.push(sb);
  }
});
```
```js
const { Buffer } = require('node:buffer');

// Need to keep around a few small chunks of memory.
const store = [];

socket.on('readable', () => {
  let data;
  while (null !== (data = socket.read())) {
    // Allocate for retained data.
    const sb = Buffer.allocUnsafeSlow(10);

    // Copy the data into the new allocation.
    data.copy(sb, 0, 0, 10);

    store.push(sb);
  }
});
```
A `TypeError` will be thrown if `size` is not a number.

Static method: `Buffer.byteLength(string[, encoding])`#
History
| Version | Changes |
|---|---|
| v7.0.0 | Passing invalid input will now throw an error. |
| v5.10.0 | The |
| v0.1.90 | Added in: v0.1.90 |
- `string` <string> | <Buffer> | <TypedArray> | <DataView> | <ArrayBuffer> | <SharedArrayBuffer> A value to calculate the length of.
- `encoding` <string> If `string` is a string, this is its encoding. **Default:** `'utf8'`.
- Returns: <integer> The number of bytes contained within `string`.

Returns the byte length of a string when encoded using `encoding`. This is not the same as `String.prototype.length`, which does not account for the encoding that is used to convert the string into bytes.

For `'base64'`, `'base64url'`, and `'hex'`, this function assumes valid input. For strings that contain non-base64/hex-encoded data (e.g. whitespace), the return value might be greater than the length of a `Buffer` created from the string.
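As a small illustrative sketch (not part of the original docs): because the base64 length is computed from the string length rather than by decoding, the estimate can only be greater than or equal to the actual decoded size:

```js
import { Buffer } from 'node:buffer';

// 'aGVsbG8=' is the base64 encoding of 'hello' (5 bytes).
console.log(Buffer.byteLength('aGVsbG8=', 'base64'));
// Prints: 5

// For any base64 input, the estimate is at least the actual decoded length.
const s = 'aGVsbG8=';
console.log(Buffer.byteLength(s, 'base64') >= Buffer.from(s, 'base64').length);
// Prints: true
```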
```js
import { Buffer } from 'node:buffer';

const str = '\u00bd + \u00bc = \u00be';

console.log(`${str}: ${str.length} characters, ` +
            `${Buffer.byteLength(str, 'utf8')} bytes`);
// Prints: ½ + ¼ = ¾: 9 characters, 12 bytes
```

When `string` is a <Buffer> | <DataView> | <TypedArray> | <ArrayBuffer> | <SharedArrayBuffer>, the byte length as reported by `.byteLength` is returned.

Static method: `Buffer.compare(buf1, buf2)`#
History
| Version | Changes |
|---|---|
| v8.0.0 | The arguments can now be |
| v0.11.13 | Added in: v0.11.13 |
- `buf1` <Buffer> | <Uint8Array>
- `buf2` <Buffer> | <Uint8Array>
- Returns: <integer> Either `-1`, `0`, or `1`, depending on the result of the comparison. See `buf.compare()` for details.

Compares `buf1` to `buf2`, typically for the purpose of sorting arrays of `Buffer` instances. This is equivalent to calling `buf1.compare(buf2)`.

```js
import { Buffer } from 'node:buffer';

const buf1 = Buffer.from('1234');
const buf2 = Buffer.from('0123');
const arr = [buf1, buf2];

console.log(arr.sort(Buffer.compare));
// Prints: [ <Buffer 30 31 32 33>, <Buffer 31 32 33 34> ]
// (This result is equal to: [buf2, buf1].)
```

Static method: `Buffer.concat(list[, totalLength])`#
History
| Version | Changes |
|---|---|
| v8.0.0 | The elements of |
| v0.7.11 | Added in: v0.7.11 |
- `list` <Buffer[]> | <Uint8Array[]> List of `Buffer` or <Uint8Array> instances to concatenate.
- `totalLength` <integer> Total length of the `Buffer` instances in `list` when concatenated.
- Returns: <Buffer>

Returns a new `Buffer` which is the result of concatenating all the `Buffer` instances in the `list` together.

If the list has no items, or if the `totalLength` is 0, then a new zero-length `Buffer` is returned.

If `totalLength` is not provided, it is calculated from the `Buffer` instances in `list` by adding their lengths.

If `totalLength` is provided, it must be an unsigned integer. If the combined length of the `Buffer`s in `list` exceeds `totalLength`, the result is truncated to `totalLength`. If the combined length of the `Buffer`s in `list` is less than `totalLength`, the remaining space is filled with zeros.
```js
import { Buffer } from 'node:buffer';

// Create a single `Buffer` from a list of three `Buffer` instances.

const buf1 = Buffer.alloc(10);
const buf2 = Buffer.alloc(14);
const buf3 = Buffer.alloc(18);
const totalLength = buf1.length + buf2.length + buf3.length;

console.log(totalLength);
// Prints: 42

const bufA = Buffer.concat([buf1, buf2, buf3], totalLength);

console.log(bufA);
// Prints: <Buffer 00 00 00 00 ...>

console.log(bufA.length);
// Prints: 42
```
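The truncation and zero-fill behavior described above can be sketched with small illustrative values (not from the original docs):

```js
import { Buffer } from 'node:buffer';

const parts = [Buffer.from([1, 2, 3]), Buffer.from([4, 5])];

// Combined length (5) exceeds totalLength (4): the result is truncated.
console.log(Buffer.concat(parts, 4));
// Prints: <Buffer 01 02 03 04>

// Combined length (5) is less than totalLength (8): the rest is zero-filled.
console.log(Buffer.concat(parts, 8));
// Prints: <Buffer 01 02 03 04 05 00 00 00>
```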
`Buffer.concat()` may also use the internal `Buffer` pool like `Buffer.allocUnsafe()` does.

Static method: `Buffer.copyBytesFrom(view[, offset[, length]])`#

- `view` <TypedArray> The <TypedArray> to copy.
- `offset` <integer> The starting offset within `view`. **Default:** `0`.
- `length` <integer> The number of elements from `view` to copy. **Default:** `view.length - offset`.
- Returns: <Buffer>

Copies the underlying memory of `view` into a new `Buffer`.
```js
const u16 = new Uint16Array([0, 0xffff]);
const buf = Buffer.copyBytesFrom(u16, 1, 1);
u16[1] = 0;
console.log(buf.length); // 2
console.log(buf[0]); // 255
console.log(buf[1]); // 255
```

Static method: `Buffer.from(array)`#

- `array` <integer[]>
- Returns: <Buffer>

Allocates a new `Buffer` using an `array` of bytes in the range 0 – 255. Array entries outside that range will be truncated to fit into it.
```js
import { Buffer } from 'node:buffer';

// Creates a new Buffer containing the UTF-8 bytes of the string 'buffer'.
const buf = Buffer.from([0x62, 0x75, 0x66, 0x66, 0x65, 0x72]);
```
If `array` is an `Array`-like object (that is, one with a `length` property of type `number`), it is treated as if it is an array, unless it is a `Buffer` or a `Uint8Array`. This means all other `TypedArray` variants get treated as an `Array`. To create a `Buffer` from the bytes backing a `TypedArray`, use `Buffer.copyBytesFrom()`.
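As a sketch of the difference (illustrative values, not from the original docs): each element of a non-`Uint8Array` `TypedArray` is truncated to a byte, while `Buffer.copyBytesFrom()` copies the raw backing bytes in the platform's byte order (little-endian on most systems):

```js
import { Buffer } from 'node:buffer';

const u16 = new Uint16Array([0x0102, 0x0304]);

// Treated as an array of numbers: each element is truncated to a byte.
console.log(Buffer.from(u16));
// Prints: <Buffer 02 04>

// Copies the raw backing bytes instead (byte order is platform endianness).
console.log(Buffer.copyBytesFrom(u16));
```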
A `TypeError` will be thrown if `array` is not an `Array` or another type appropriate for `Buffer.from()` variants.

`Buffer.from(array)` and `Buffer.from(string)` may also use the internal `Buffer` pool like `Buffer.allocUnsafe()` does.
Static method: `Buffer.from(arrayBuffer[, byteOffset[, length]])`#

- `arrayBuffer` <ArrayBuffer> | <SharedArrayBuffer> An <ArrayBuffer> or <SharedArrayBuffer>, for example the `.buffer` property of a <TypedArray>.
- `byteOffset` <integer> Index of first byte to expose. **Default:** `0`.
- `length` <integer> Number of bytes to expose. **Default:** `arrayBuffer.byteLength - byteOffset`.
- Returns: <Buffer>

This creates a view of the <ArrayBuffer> without copying the underlying memory. For example, when passed a reference to the `.buffer` property of a <TypedArray> instance, the newly created `Buffer` will share the same allocated memory as the <TypedArray>'s underlying `ArrayBuffer`.

```js
import { Buffer } from 'node:buffer';

const arr = new Uint16Array(2);

arr[0] = 5000;
arr[1] = 4000;

// Shares memory with `arr`.
const buf = Buffer.from(arr.buffer);

console.log(buf);
// Prints: <Buffer 88 13 a0 0f>

// Changing the original Uint16Array changes the Buffer also.
arr[1] = 6000;

console.log(buf);
// Prints: <Buffer 88 13 70 17>
```

The optional `byteOffset` and `length` arguments specify a memory range within the `arrayBuffer` that will be shared by the `Buffer`.
```js
import { Buffer } from 'node:buffer';

const ab = new ArrayBuffer(10);
const buf = Buffer.from(ab, 0, 2);

console.log(buf.length);
// Prints: 2
```

A `TypeError` will be thrown if `arrayBuffer` is not an <ArrayBuffer> or a <SharedArrayBuffer> or another type appropriate for `Buffer.from()` variants.

It is important to remember that a backing `ArrayBuffer` can cover a range of memory that extends beyond the bounds of a `TypedArray` view. A new `Buffer` created using the `buffer` property of a `TypedArray` may extend beyond the range of the `TypedArray`:
```js
import { Buffer } from 'node:buffer';

const arrA = Uint8Array.from([0x63, 0x64, 0x65, 0x66]); // 4 elements
const arrB = new Uint8Array(arrA.buffer, 1, 2); // 2 elements
console.log(arrA.buffer === arrB.buffer); // true

const buf = Buffer.from(arrB.buffer);
console.log(buf);
// Prints: <Buffer 63 64 65 66>
```

Static method: `Buffer.from(buffer)`#

- `buffer` <Buffer> | <Uint8Array> An existing `Buffer` or <Uint8Array> from which to copy data.
- Returns: <Buffer>

Copies the passed `buffer` data onto a new `Buffer` instance.
```js
import { Buffer } from 'node:buffer';

const buf1 = Buffer.from('buffer');
const buf2 = Buffer.from(buf1);

buf1[0] = 0x61;

console.log(buf1.toString());
// Prints: auffer
console.log(buf2.toString());
// Prints: buffer
```

A `TypeError` will be thrown if `buffer` is not a `Buffer` or another type appropriate for `Buffer.from()` variants.

Static method: `Buffer.from(object[, offsetOrEncoding[, length]])`#

- `object` <Object> An object supporting `Symbol.toPrimitive` or `valueOf()`.
- `offsetOrEncoding` <integer> | <string> A byte-offset or encoding.
- `length` <integer> A length.
- Returns: <Buffer>

For objects whose `valueOf()` function returns a value not strictly equal to `object`, returns `Buffer.from(object.valueOf(), offsetOrEncoding, length)`.
```js
import { Buffer } from 'node:buffer';

const buf = Buffer.from(new String('this is a test'));
// Prints: <Buffer 74 68 69 73 20 69 73 20 61 20 74 65 73 74>
```

For objects that support `Symbol.toPrimitive`, returns `Buffer.from(object[Symbol.toPrimitive]('string'), offsetOrEncoding)`.

```js
import { Buffer } from 'node:buffer';

class Foo {
  [Symbol.toPrimitive]() {
    return 'this is a test';
  }
}

const buf = Buffer.from(new Foo(), 'utf8');
// Prints: <Buffer 74 68 69 73 20 69 73 20 61 20 74 65 73 74>
```

A `TypeError` will be thrown if `object` does not have the mentioned methods or is not of another type appropriate for `Buffer.from()` variants.
Static method: `Buffer.from(string[, encoding])`#

- `string` <string> A string to encode.
- `encoding` <string> The encoding of `string`. **Default:** `'utf8'`.
- Returns: <Buffer>

Creates a new `Buffer` containing `string`. The `encoding` parameter identifies the character encoding to be used when converting `string` into bytes.
```js
import { Buffer } from 'node:buffer';

const buf1 = Buffer.from('this is a tést');
const buf2 = Buffer.from('7468697320697320612074c3a97374', 'hex');

console.log(buf1.toString());
// Prints: this is a tést
console.log(buf2.toString());
// Prints: this is a tést
console.log(buf1.toString('latin1'));
// Prints: this is a tÃ©st
```
A `TypeError` will be thrown if `string` is not a string or another type appropriate for `Buffer.from()` variants.

`Buffer.from(string)` may also use the internal `Buffer` pool like `Buffer.allocUnsafe()` does.
Static method: `Buffer.isBuffer(obj)`#

Returns `true` if `obj` is a `Buffer`, `false` otherwise.

```js
import { Buffer } from 'node:buffer';

Buffer.isBuffer(Buffer.alloc(10)); // true
Buffer.isBuffer(Buffer.from('foo')); // true
Buffer.isBuffer('a string'); // false
Buffer.isBuffer([]); // false
Buffer.isBuffer(new Uint8Array(1024)); // false
```
Static method: `Buffer.isEncoding(encoding)`#

Returns `true` if `encoding` is the name of a supported character encoding, or `false` otherwise.

```js
import { Buffer } from 'node:buffer';

console.log(Buffer.isEncoding('utf8'));
// Prints: true

console.log(Buffer.isEncoding('hex'));
// Prints: true

console.log(Buffer.isEncoding('utf/8'));
// Prints: false

console.log(Buffer.isEncoding(''));
// Prints: false
```
Buffer.poolSize#
- Type: <integer> **Default:** `8192`

This is the size (in bytes) of pre-allocated internal `Buffer` instances used for pooling. This value may be modified.
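A minimal sketch of modifying the pool size (illustrative, not from the original docs): setting `Buffer.poolSize` to `0` disables pooling, so every `Buffer.allocUnsafe()` call gets its own allocation:

```js
import { Buffer } from 'node:buffer';

console.log(Buffer.poolSize);
// Prints: 8192

const before = Buffer.poolSize;

// Setting the pool size to 0 disables pooling for `Buffer.allocUnsafe()`.
Buffer.poolSize = 0;
const buf = Buffer.allocUnsafe(10);
console.log(buf.length);
// Prints: 10

// Restore the default.
Buffer.poolSize = before;
```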
buf[index]#
- `index` <integer>

The index operator `[index]` can be used to get and set the octet at position `index` in `buf`. The values refer to individual bytes, so the legal value range is between `0x00` and `0xFF` (hex) or `0` and `255` (decimal).

This operator is inherited from `Uint8Array`, so its behavior on out-of-bounds access is the same as `Uint8Array`. In other words, `buf[index]` returns `undefined` when `index` is negative or greater than or equal to `buf.length`, and `buf[index] = value` does not modify the buffer if `index` is negative or `>= buf.length`.
```js
import { Buffer } from 'node:buffer';

// Copy an ASCII string into a `Buffer` one byte at a time.
// (This only works for ASCII-only strings. In general, one should use
// `Buffer.from()` to perform this conversion.)

const str = 'Node.js';
const buf = Buffer.allocUnsafe(str.length);

for (let i = 0; i < str.length; i++) {
  buf[i] = str.charCodeAt(i);
}

console.log(buf.toString('utf8'));
// Prints: Node.js
```
buf.buffer#
- Type: <ArrayBuffer> The underlying `ArrayBuffer` object based on which this `Buffer` object is created.

This `ArrayBuffer` is not guaranteed to correspond exactly to the original `Buffer`. See the notes on `buf.byteOffset` for details.

```js
import { Buffer } from 'node:buffer';

const arrayBuffer = new ArrayBuffer(16);
const buffer = Buffer.from(arrayBuffer);

console.log(buffer.buffer === arrayBuffer);
// Prints: true
```
buf.byteOffset#
- Type: <integer> The `byteOffset` of the `Buffer`'s underlying `ArrayBuffer` object.

When setting `byteOffset` in `Buffer.from(ArrayBuffer, byteOffset, length)`, or sometimes when allocating a `Buffer` smaller than `Buffer.poolSize`, the buffer does not start from a zero offset on the underlying `ArrayBuffer`.

This can cause problems when accessing the underlying `ArrayBuffer` directly using `buf.buffer`, as other parts of the `ArrayBuffer` may be unrelated to the `Buffer` object itself.

A common issue when creating a `TypedArray` object that shares its memory with a `Buffer` is that in this case one needs to specify the `byteOffset` correctly:

```js
import { Buffer } from 'node:buffer';

// Create a buffer smaller than `Buffer.poolSize`.
const nodeBuffer = Buffer.from([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]);

// When casting the Node.js Buffer to an Int8Array, use the byteOffset
// to refer only to the part of `nodeBuffer.buffer` that contains the memory
// for `nodeBuffer`.
new Int8Array(nodeBuffer.buffer, nodeBuffer.byteOffset, nodeBuffer.length);
```
buf.compare(target[, targetStart[, targetEnd[, sourceStart[, sourceEnd]]]])#
History
| Version | Changes |
|---|---|
| v8.0.0 | The |
| v5.11.0 | Additional parameters for specifying offsets are supported now. |
| v0.11.13 | Added in: v0.11.13 |
- `target` <Buffer> | <Uint8Array> A `Buffer` or <Uint8Array> with which to compare `buf`.
- `targetStart` <integer> The offset within `target` at which to begin comparison. **Default:** `0`.
- `targetEnd` <integer> The offset within `target` at which to end comparison (not inclusive). **Default:** `target.length`.
- `sourceStart` <integer> The offset within `buf` at which to begin comparison. **Default:** `0`.
- `sourceEnd` <integer> The offset within `buf` at which to end comparison (not inclusive). **Default:** `buf.length`.
- Returns: <integer>

Compares `buf` with `target` and returns a number indicating whether `buf` comes before, after, or is the same as `target` in sort order. Comparison is based on the actual sequence of bytes in each `Buffer`.

- `0` is returned if `target` is the same as `buf`.
- `1` is returned if `target` should come before `buf` when sorted.
- `-1` is returned if `target` should come after `buf` when sorted.

```js
import { Buffer } from 'node:buffer';

const buf1 = Buffer.from('ABC');
const buf2 = Buffer.from('BCD');
const buf3 = Buffer.from('ABCD');

console.log(buf1.compare(buf1));
// Prints: 0
console.log(buf1.compare(buf2));
// Prints: -1
console.log(buf1.compare(buf3));
// Prints: -1
console.log(buf2.compare(buf1));
// Prints: 1
console.log(buf2.compare(buf3));
// Prints: 1
console.log([buf1, buf2, buf3].sort(Buffer.compare));
// Prints: [ <Buffer 41 42 43>, <Buffer 41 42 43 44>, <Buffer 42 43 44> ]
// (This result is equal to: [buf1, buf3, buf2].)
```

The optional `targetStart`, `targetEnd`, `sourceStart`, and `sourceEnd` arguments can be used to limit the comparison to specific ranges within `target` and `buf` respectively.

```js
import { Buffer } from 'node:buffer';

const buf1 = Buffer.from([1, 2, 3, 4, 5, 6, 7, 8, 9]);
const buf2 = Buffer.from([5, 6, 7, 8, 9, 1, 2, 3, 4]);

console.log(buf1.compare(buf2, 5, 9, 0, 4));
// Prints: 0
console.log(buf1.compare(buf2, 0, 6, 4));
// Prints: -1
console.log(buf1.compare(buf2, 5, 6, 5));
// Prints: 1
```

`ERR_OUT_OF_RANGE` is thrown if `targetStart < 0`, `sourceStart < 0`, `targetEnd > target.byteLength`, or `sourceEnd > source.byteLength`.
buf.copy(target[, targetStart[, sourceStart[, sourceEnd]]])#
- `target` <Buffer> | <Uint8Array> A `Buffer` or <Uint8Array> to copy into.
- `targetStart` <integer> The offset within `target` at which to begin writing. **Default:** `0`.
- `sourceStart` <integer> The offset within `buf` from which to begin copying. **Default:** `0`.
- `sourceEnd` <integer> The offset within `buf` at which to stop copying (not inclusive). **Default:** `buf.length`.
- Returns: <integer> The number of bytes copied.

Copies data from a region of `buf` to a region in `target`, even if the `target` memory region overlaps with `buf`.

`TypedArray.prototype.set()` performs the same operation, and is available for all TypedArrays, including Node.js `Buffer`s, although it takes different function arguments.

```js
import { Buffer } from 'node:buffer';

// Create two `Buffer` instances.
const buf1 = Buffer.allocUnsafe(26);
const buf2 = Buffer.allocUnsafe(26).fill('!');

for (let i = 0; i < 26; i++) {
  // 97 is the decimal ASCII value for 'a'.
  buf1[i] = i + 97;
}

// Copy `buf1` bytes 16 through 19 into `buf2` starting at byte 8 of `buf2`.
buf1.copy(buf2, 8, 16, 20);
// This is equivalent to:
// buf2.set(buf1.subarray(16, 20), 8);

console.log(buf2.toString('ascii', 0, 25));
// Prints: !!!!!!!!qrst!!!!!!!!!!!!!
```

```js
import { Buffer } from 'node:buffer';

// Create a `Buffer` and copy data from one region to an overlapping region
// within the same `Buffer`.

const buf = Buffer.allocUnsafe(26);

for (let i = 0; i < 26; i++) {
  // 97 is the decimal ASCII value for 'a'.
  buf[i] = i + 97;
}

buf.copy(buf, 0, 4, 10);

console.log(buf.toString());
// Prints: efghijghijklmnopqrstuvwxyz
```
buf.entries()#
- Returns: <Iterator>

Creates and returns an iterator of `[index, byte]` pairs from the contents of `buf`.

```js
import { Buffer } from 'node:buffer';

// Log the entire contents of a `Buffer`.

const buf = Buffer.from('buffer');

for (const pair of buf.entries()) {
  console.log(pair);
}
// Prints:
//   [0, 98]
//   [1, 117]
//   [2, 102]
//   [3, 102]
//   [4, 101]
//   [5, 114]
```
buf.equals(otherBuffer)#
History
| Version | Changes |
|---|---|
| v8.0.0 | The arguments can now be |
| v0.11.13 | Added in: v0.11.13 |
- `otherBuffer` <Buffer> | <Uint8Array> A `Buffer` or <Uint8Array> with which to compare `buf`.
- Returns: <boolean>

Returns `true` if both `buf` and `otherBuffer` have exactly the same bytes, `false` otherwise. Equivalent to `buf.compare(otherBuffer) === 0`.

```js
import { Buffer } from 'node:buffer';

const buf1 = Buffer.from('ABC');
const buf2 = Buffer.from('414243', 'hex');
const buf3 = Buffer.from('ABCD');

console.log(buf1.equals(buf2));
// Prints: true
console.log(buf1.equals(buf3));
// Prints: false
```
buf.fill(value[, offset[, end]][, encoding])#
History
| Version | Changes |
|---|---|
| v11.0.0 | Throws |
| v10.0.0 | Negative |
| v10.0.0 | Attempting to fill a non-zero length buffer with a zero length buffer triggers a thrown exception. |
| v10.0.0 | Specifying an invalid string for |
| v5.7.0 | The |
| v0.5.0 | Added in: v0.5.0 |
- `value` <string> | <Buffer> | <Uint8Array> | <integer> The value with which to fill `buf`. An empty value (string, `Uint8Array`, `Buffer`) is coerced to `0`.
- `offset` <integer> Number of bytes to skip before starting to fill `buf`. **Default:** `0`.
- `end` <integer> Where to stop filling `buf` (not inclusive). **Default:** `buf.length`.
- `encoding` <string> The encoding for `value` if `value` is a string. **Default:** `'utf8'`.
- Returns: <Buffer> A reference to `buf`.

Fills `buf` with the specified `value`. If the `offset` and `end` are not given, the entire `buf` will be filled:

```js
import { Buffer } from 'node:buffer';

// Fill a `Buffer` with the ASCII character 'h'.

const b = Buffer.allocUnsafe(50).fill('h');

console.log(b.toString());
// Prints: hhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhh

// Fill a buffer with an empty string.
const c = Buffer.allocUnsafe(5).fill('');

console.log(c.fill(''));
// Prints: <Buffer 00 00 00 00 00>
```
`value` is coerced to a `uint32` value if it is not a string, `Buffer`, or integer. If the resulting integer is greater than `255` (decimal), `buf` will be filled with `value & 255`.
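For example (a minimal sketch, not from the original docs), a numeric `value` above `255` wraps around to `value & 255`:

```js
import { Buffer } from 'node:buffer';

// 300 & 255 === 44 (0x2c), so the buffer is filled with 0x2c.
console.log(Buffer.alloc(3).fill(300));
// Prints: <Buffer 2c 2c 2c>
```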
If the final write of a `fill()` operation falls on a multi-byte character, then only the bytes of that character that fit into `buf` are written:

```js
import { Buffer } from 'node:buffer';

// Fill a `Buffer` with a character that takes up two bytes in UTF-8.

console.log(Buffer.allocUnsafe(5).fill('\u0222'));
// Prints: <Buffer c8 a2 c8 a2 c8>
```

If `value` contains invalid characters, it is truncated; if no valid fill data remains, an exception is thrown:

```js
import { Buffer } from 'node:buffer';

const buf = Buffer.allocUnsafe(5);

console.log(buf.fill('a'));
// Prints: <Buffer 61 61 61 61 61>
console.log(buf.fill('aazz', 'hex'));
// Prints: <Buffer aa aa aa aa aa>
console.log(buf.fill('zz', 'hex'));
// Throws an exception.
```
buf.includes(value[, byteOffset][, encoding])#
History
| Version | Changes |
|---|---|
| v25.5.0 | supports Uint8Array as |
| v5.3.0 | Added in: v5.3.0 |
- `value` <string> | <Buffer> | <Uint8Array> | <integer> What to search for.
- `byteOffset` <integer> Where to begin searching in `buf`. If negative, then offset is calculated from the end of `buf`. **Default:** `0`.
- `encoding` <string> If `value` is a string, this is its encoding. **Default:** `'utf8'`.
- Returns: <boolean> `true` if `value` was found in `buf`, `false` otherwise.

Equivalent to `buf.indexOf() !== -1`.

```js
import { Buffer } from 'node:buffer';

const buf = Buffer.from('this is a buffer');

console.log(buf.includes('this'));
// Prints: true
console.log(buf.includes('is'));
// Prints: true
console.log(buf.includes(Buffer.from('a buffer')));
// Prints: true
console.log(buf.includes(97));
// Prints: true (97 is the decimal ASCII value for 'a')
console.log(buf.includes(Buffer.from('a buffer example')));
// Prints: false
console.log(buf.includes(Buffer.from('a buffer example').slice(0, 8)));
// Prints: true
console.log(buf.includes('this', 4));
// Prints: false
```
buf.indexOf(value[, byteOffset][, encoding])#
History
| Version | Changes |
|---|---|
| v8.0.0 | The |
| v5.7.0, v4.4.0 | When |
| v1.5.0 | Added in: v1.5.0 |
- `value` <string> | <Buffer> | <Uint8Array> | <integer> What to search for.
- `byteOffset` <integer> Where to begin searching in `buf`. If negative, then offset is calculated from the end of `buf`. **Default:** `0`.
- `encoding` <string> If `value` is a string, this is the encoding used to determine the binary representation of the string that will be searched for in `buf`. **Default:** `'utf8'`.
- Returns: <integer> The index of the first occurrence of `value` in `buf`, or `-1` if `buf` does not contain `value`.

If `value` is:

- a string, `value` is interpreted according to the character encoding in `encoding`.
- a `Buffer` or <Uint8Array>, `value` will be used in its entirety. To compare a partial `Buffer`, use `buf.subarray`.
- a number, `value` will be interpreted as an unsigned 8-bit integer value between `0` and `255`.

```js
import { Buffer } from 'node:buffer';

const buf = Buffer.from('this is a buffer');

console.log(buf.indexOf('this'));
// Prints: 0
console.log(buf.indexOf('is'));
// Prints: 2
console.log(buf.indexOf(Buffer.from('a buffer')));
// Prints: 8
console.log(buf.indexOf(97));
// Prints: 8 (97 is the decimal ASCII value for 'a')
console.log(buf.indexOf(Buffer.from('a buffer example')));
// Prints: -1
console.log(buf.indexOf(Buffer.from('a buffer example').slice(0, 8)));
// Prints: 8

const utf16Buffer = Buffer.from('\u039a\u0391\u03a3\u03a3\u0395', 'utf16le');

console.log(utf16Buffer.indexOf('\u03a3', 0, 'utf16le'));
// Prints: 4
console.log(utf16Buffer.indexOf('\u03a3', -4, 'utf16le'));
// Prints: 6
```

If `value` is not a string, number, or `Buffer`, this method will throw a `TypeError`. If `value` is a number, it will be coerced to a valid byte value, an integer between 0 and 255.

If `byteOffset` is not a number, it will be coerced to a number. If the result of coercion is `NaN` or `0`, then the entire buffer will be searched. This behavior matches `String.prototype.indexOf()`.

```js
import { Buffer } from 'node:buffer';

const b = Buffer.from('abcdef');

// Passing a value that's a number, but not a valid byte.
// Prints: 2, equivalent to searching for 99 or 'c'.
console.log(b.indexOf(99.9));
console.log(b.indexOf(256 + 99));

// Passing a byteOffset that coerces to NaN or 0.
// Prints: 1, searching the whole buffer.
console.log(b.indexOf('b', undefined));
console.log(b.indexOf('b', {}));
console.log(b.indexOf('b', null));
console.log(b.indexOf('b', []));
```
If `value` is an empty string or empty `Buffer` and `byteOffset` is less than `buf.length`, `byteOffset` will be returned. If `value` is empty and `byteOffset` is at least `buf.length`, `buf.length` will be returned.
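The empty-value rule above can be sketched as follows (illustrative values, not from the original docs):

```js
import { Buffer } from 'node:buffer';

const buf = Buffer.from('abc');

// An empty value "matches" at the given offset while it is within bounds.
console.log(buf.indexOf(''));
// Prints: 0
console.log(buf.indexOf('', 2));
// Prints: 2

// Once the offset reaches or passes buf.length, buf.length is returned.
console.log(buf.indexOf('', 5));
// Prints: 3
```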
buf.keys()#
- Returns: <Iterator>

Creates and returns an iterator of `buf` keys (indexes).

```js
import { Buffer } from 'node:buffer';

const buf = Buffer.from('buffer');

for (const key of buf.keys()) {
  console.log(key);
}
// Prints:
//   0
//   1
//   2
//   3
//   4
//   5
```
buf.lastIndexOf(value[, byteOffset][, encoding])#
History
| Version | Changes |
|---|---|
| v8.0.0 | The |
| v6.0.0 | Added in: v6.0.0 |
- `value` <string> | <Buffer> | <Uint8Array> | <integer> What to search for.
- `byteOffset` <integer> Where to begin searching in `buf`. If negative, then offset is calculated from the end of `buf`. **Default:** `buf.length - 1`.
- `encoding` <string> If `value` is a string, this is the encoding used to determine the binary representation of the string that will be searched for in `buf`. **Default:** `'utf8'`.
- Returns: <integer> The index of the last occurrence of `value` in `buf`, or `-1` if `buf` does not contain `value`.
Identical to `buf.indexOf()`, except the last occurrence of `value` is found rather than the first occurrence.
```mjs
import { Buffer } from 'node:buffer';

const buf = Buffer.from('this buffer is a buffer');

console.log(buf.lastIndexOf('this'));
// Prints: 0
console.log(buf.lastIndexOf('buffer'));
// Prints: 17
console.log(buf.lastIndexOf(Buffer.from('buffer')));
// Prints: 17
console.log(buf.lastIndexOf(97));
// Prints: 15 (97 is the decimal ASCII value for 'a')
console.log(buf.lastIndexOf(Buffer.from('yolo')));
// Prints: -1
console.log(buf.lastIndexOf('buffer', 5));
// Prints: 5
console.log(buf.lastIndexOf('buffer', 4));
// Prints: -1

const utf16Buffer = Buffer.from('\u039a\u0391\u03a3\u03a3\u0395', 'utf16le');

console.log(utf16Buffer.lastIndexOf('\u03a3', undefined, 'utf16le'));
// Prints: 6
console.log(utf16Buffer.lastIndexOf('\u03a3', -5, 'utf16le'));
// Prints: 4
```

```cjs
const { Buffer } = require('node:buffer');

const buf = Buffer.from('this buffer is a buffer');

console.log(buf.lastIndexOf('this'));
// Prints: 0
console.log(buf.lastIndexOf('buffer'));
// Prints: 17
console.log(buf.lastIndexOf(Buffer.from('buffer')));
// Prints: 17
console.log(buf.lastIndexOf(97));
// Prints: 15 (97 is the decimal ASCII value for 'a')
console.log(buf.lastIndexOf(Buffer.from('yolo')));
// Prints: -1
console.log(buf.lastIndexOf('buffer', 5));
// Prints: 5
console.log(buf.lastIndexOf('buffer', 4));
// Prints: -1

const utf16Buffer = Buffer.from('\u039a\u0391\u03a3\u03a3\u0395', 'utf16le');

console.log(utf16Buffer.lastIndexOf('\u03a3', undefined, 'utf16le'));
// Prints: 6
console.log(utf16Buffer.lastIndexOf('\u03a3', -5, 'utf16le'));
// Prints: 4
```
If `value` is not a string, number, or `Buffer`, this method will throw a `TypeError`. If `value` is a number, it will be coerced to a valid byte value, an integer between 0 and 255.

If `byteOffset` is not a number, it will be coerced to a number. Any arguments that coerce to `NaN`, like `{}` or `undefined`, will search the whole buffer. This behavior matches `String.prototype.lastIndexOf()`.
```mjs
import { Buffer } from 'node:buffer';

const b = Buffer.from('abcdef');

// Passing a value that's a number, but not a valid byte.
// Prints: 2, equivalent to searching for 99 or 'c'.
console.log(b.lastIndexOf(99.9));
console.log(b.lastIndexOf(256 + 99));

// Passing a byteOffset that coerces to NaN.
// Prints: 1, searching the whole buffer.
console.log(b.lastIndexOf('b', undefined));
console.log(b.lastIndexOf('b', {}));

// Passing a byteOffset that coerces to 0.
// Prints: -1, equivalent to passing 0.
console.log(b.lastIndexOf('b', null));
console.log(b.lastIndexOf('b', []));
```

```cjs
const { Buffer } = require('node:buffer');

const b = Buffer.from('abcdef');

// Passing a value that's a number, but not a valid byte.
// Prints: 2, equivalent to searching for 99 or 'c'.
console.log(b.lastIndexOf(99.9));
console.log(b.lastIndexOf(256 + 99));

// Passing a byteOffset that coerces to NaN.
// Prints: 1, searching the whole buffer.
console.log(b.lastIndexOf('b', undefined));
console.log(b.lastIndexOf('b', {}));

// Passing a byteOffset that coerces to 0.
// Prints: -1, equivalent to passing 0.
console.log(b.lastIndexOf('b', null));
console.log(b.lastIndexOf('b', []));
```
If `value` is an empty string or empty `Buffer`, `byteOffset` will be returned.
buf.length#
- Type: <integer>

Returns the number of bytes in `buf`.

```mjs
import { Buffer } from 'node:buffer';

// Create a `Buffer` and write a shorter string to it using UTF-8.
const buf = Buffer.alloc(1234);

console.log(buf.length);
// Prints: 1234

buf.write('some string', 0, 'utf8');

console.log(buf.length);
// Prints: 1234
```

```cjs
const { Buffer } = require('node:buffer');

// Create a `Buffer` and write a shorter string to it using UTF-8.
const buf = Buffer.alloc(1234);

console.log(buf.length);
// Prints: 1234

buf.write('some string', 0, 'utf8');

console.log(buf.length);
// Prints: 1234
```
buf.parent#
Stability: 0 - Deprecated: Use `buf.buffer` instead.

The `buf.parent` property is a deprecated alias for `buf.buffer`.
buf.readBigInt64BE([offset])#
- `offset` <integer> Number of bytes to skip before starting to read. Must satisfy: `0 <= offset <= buf.length - 8`. **Default:** `0`.
- Returns: <bigint>

Reads a signed, big-endian 64-bit integer from `buf` at the specified `offset`.

Integers read from a `Buffer` are interpreted as two's complement signed values.
buf.readBigInt64LE([offset])#
- `offset` <integer> Number of bytes to skip before starting to read. Must satisfy: `0 <= offset <= buf.length - 8`. **Default:** `0`.
- Returns: <bigint>

Reads a signed, little-endian 64-bit integer from `buf` at the specified `offset`.

Integers read from a `Buffer` are interpreted as two's complement signed values.
buf.readBigUInt64BE([offset])#
History
| Version | Changes |
|---|---|
| v14.10.0, v12.19.0 | This function is also available as `buf.readBigUint64BE()`. |
| v12.0.0, v10.20.0 | Added in: v12.0.0, v10.20.0 |
- `offset` <integer> Number of bytes to skip before starting to read. Must satisfy: `0 <= offset <= buf.length - 8`. **Default:** `0`.
- Returns: <bigint>

Reads an unsigned, big-endian 64-bit integer from `buf` at the specified `offset`.

This function is also available under the `readBigUint64BE` alias.

```mjs
import { Buffer } from 'node:buffer';

const buf = Buffer.from([0x00, 0x00, 0x00, 0x00, 0xff, 0xff, 0xff, 0xff]);

console.log(buf.readBigUInt64BE(0));
// Prints: 4294967295n
```

```cjs
const { Buffer } = require('node:buffer');

const buf = Buffer.from([0x00, 0x00, 0x00, 0x00, 0xff, 0xff, 0xff, 0xff]);

console.log(buf.readBigUInt64BE(0));
// Prints: 4294967295n
```
buf.readBigUInt64LE([offset])#
History
| Version | Changes |
|---|---|
| v14.10.0, v12.19.0 | This function is also available as `buf.readBigUint64LE()`. |
| v12.0.0, v10.20.0 | Added in: v12.0.0, v10.20.0 |
- `offset` <integer> Number of bytes to skip before starting to read. Must satisfy: `0 <= offset <= buf.length - 8`. **Default:** `0`.
- Returns: <bigint>

Reads an unsigned, little-endian 64-bit integer from `buf` at the specified `offset`.

This function is also available under the `readBigUint64LE` alias.

```mjs
import { Buffer } from 'node:buffer';

const buf = Buffer.from([0x00, 0x00, 0x00, 0x00, 0xff, 0xff, 0xff, 0xff]);

console.log(buf.readBigUInt64LE(0));
// Prints: 18446744069414584320n
```

```cjs
const { Buffer } = require('node:buffer');

const buf = Buffer.from([0x00, 0x00, 0x00, 0x00, 0xff, 0xff, 0xff, 0xff]);

console.log(buf.readBigUInt64LE(0));
// Prints: 18446744069414584320n
```
buf.readDoubleBE([offset])#
History
| Version | Changes |
|---|---|
| v10.0.0 | Removed `noAssert` and no implicit coercion of the offset to `uint32` anymore. |
| v0.11.15 | Added in: v0.11.15 |
- `offset` <integer> Number of bytes to skip before starting to read. Must satisfy `0 <= offset <= buf.length - 8`. **Default:** `0`.
- Returns: <number>

Reads a 64-bit, big-endian double from `buf` at the specified `offset`.

```mjs
import { Buffer } from 'node:buffer';

const buf = Buffer.from([1, 2, 3, 4, 5, 6, 7, 8]);

console.log(buf.readDoubleBE(0));
// Prints: 8.20788039913184e-304
```

```cjs
const { Buffer } = require('node:buffer');

const buf = Buffer.from([1, 2, 3, 4, 5, 6, 7, 8]);

console.log(buf.readDoubleBE(0));
// Prints: 8.20788039913184e-304
```
buf.readDoubleLE([offset])#
History
| Version | Changes |
|---|---|
| v10.0.0 | Removed `noAssert` and no implicit coercion of the offset to `uint32` anymore. |
| v0.11.15 | Added in: v0.11.15 |
- `offset` <integer> Number of bytes to skip before starting to read. Must satisfy `0 <= offset <= buf.length - 8`. **Default:** `0`.
- Returns: <number>

Reads a 64-bit, little-endian double from `buf` at the specified `offset`.

```mjs
import { Buffer } from 'node:buffer';

const buf = Buffer.from([1, 2, 3, 4, 5, 6, 7, 8]);

console.log(buf.readDoubleLE(0));
// Prints: 5.447603722011605e-270
console.log(buf.readDoubleLE(1));
// Throws ERR_OUT_OF_RANGE.
```

```cjs
const { Buffer } = require('node:buffer');

const buf = Buffer.from([1, 2, 3, 4, 5, 6, 7, 8]);

console.log(buf.readDoubleLE(0));
// Prints: 5.447603722011605e-270
console.log(buf.readDoubleLE(1));
// Throws ERR_OUT_OF_RANGE.
```
buf.readFloatBE([offset])#
History
| Version | Changes |
|---|---|
| v10.0.0 | Removed `noAssert` and no implicit coercion of the offset to `uint32` anymore. |
| v0.11.15 | Added in: v0.11.15 |
- `offset` <integer> Number of bytes to skip before starting to read. Must satisfy `0 <= offset <= buf.length - 4`. **Default:** `0`.
- Returns: <number>

Reads a 32-bit, big-endian float from `buf` at the specified `offset`.

```mjs
import { Buffer } from 'node:buffer';

const buf = Buffer.from([1, 2, 3, 4]);

console.log(buf.readFloatBE(0));
// Prints: 2.387939260590663e-38
```

```cjs
const { Buffer } = require('node:buffer');

const buf = Buffer.from([1, 2, 3, 4]);

console.log(buf.readFloatBE(0));
// Prints: 2.387939260590663e-38
```
buf.readFloatLE([offset])#
History
| Version | Changes |
|---|---|
| v10.0.0 | Removed `noAssert` and no implicit coercion of the offset to `uint32` anymore. |
| v0.11.15 | Added in: v0.11.15 |
- `offset` <integer> Number of bytes to skip before starting to read. Must satisfy `0 <= offset <= buf.length - 4`. **Default:** `0`.
- Returns: <number>

Reads a 32-bit, little-endian float from `buf` at the specified `offset`.

```mjs
import { Buffer } from 'node:buffer';

const buf = Buffer.from([1, 2, 3, 4]);

console.log(buf.readFloatLE(0));
// Prints: 1.539989614439558e-36
console.log(buf.readFloatLE(1));
// Throws ERR_OUT_OF_RANGE.
```

```cjs
const { Buffer } = require('node:buffer');

const buf = Buffer.from([1, 2, 3, 4]);

console.log(buf.readFloatLE(0));
// Prints: 1.539989614439558e-36
console.log(buf.readFloatLE(1));
// Throws ERR_OUT_OF_RANGE.
```
buf.readInt8([offset])#
History
| Version | Changes |
|---|---|
| v10.0.0 | Removed `noAssert` and no implicit coercion of the offset to `uint32` anymore. |
| v0.5.0 | Added in: v0.5.0 |
- `offset` <integer> Number of bytes to skip before starting to read. Must satisfy `0 <= offset <= buf.length - 1`. **Default:** `0`.
- Returns: <integer>

Reads a signed 8-bit integer from `buf` at the specified `offset`.

Integers read from a `Buffer` are interpreted as two's complement signed values.

```mjs
import { Buffer } from 'node:buffer';

const buf = Buffer.from([-1, 5]);

console.log(buf.readInt8(0));
// Prints: -1
console.log(buf.readInt8(1));
// Prints: 5
console.log(buf.readInt8(2));
// Throws ERR_OUT_OF_RANGE.
```

```cjs
const { Buffer } = require('node:buffer');

const buf = Buffer.from([-1, 5]);

console.log(buf.readInt8(0));
// Prints: -1
console.log(buf.readInt8(1));
// Prints: 5
console.log(buf.readInt8(2));
// Throws ERR_OUT_OF_RANGE.
```
buf.readInt16BE([offset])#
History
| Version | Changes |
|---|---|
| v10.0.0 | Removed `noAssert` and no implicit coercion of the offset to `uint32` anymore. |
| v0.5.5 | Added in: v0.5.5 |
- `offset` <integer> Number of bytes to skip before starting to read. Must satisfy `0 <= offset <= buf.length - 2`. **Default:** `0`.
- Returns: <integer>

Reads a signed, big-endian 16-bit integer from `buf` at the specified `offset`.

Integers read from a `Buffer` are interpreted as two's complement signed values.

```mjs
import { Buffer } from 'node:buffer';

const buf = Buffer.from([0, 5]);

console.log(buf.readInt16BE(0));
// Prints: 5
```

```cjs
const { Buffer } = require('node:buffer');

const buf = Buffer.from([0, 5]);

console.log(buf.readInt16BE(0));
// Prints: 5
```
buf.readInt16LE([offset])#
History
| Version | Changes |
|---|---|
| v10.0.0 | Removed `noAssert` and no implicit coercion of the offset to `uint32` anymore. |
| v0.5.5 | Added in: v0.5.5 |
- `offset` <integer> Number of bytes to skip before starting to read. Must satisfy `0 <= offset <= buf.length - 2`. **Default:** `0`.
- Returns: <integer>

Reads a signed, little-endian 16-bit integer from `buf` at the specified `offset`.

Integers read from a `Buffer` are interpreted as two's complement signed values.

```mjs
import { Buffer } from 'node:buffer';

const buf = Buffer.from([0, 5]);

console.log(buf.readInt16LE(0));
// Prints: 1280
console.log(buf.readInt16LE(1));
// Throws ERR_OUT_OF_RANGE.
```

```cjs
const { Buffer } = require('node:buffer');

const buf = Buffer.from([0, 5]);

console.log(buf.readInt16LE(0));
// Prints: 1280
console.log(buf.readInt16LE(1));
// Throws ERR_OUT_OF_RANGE.
```
buf.readInt32BE([offset])#
History
| Version | Changes |
|---|---|
| v10.0.0 | Removed `noAssert` and no implicit coercion of the offset to `uint32` anymore. |
| v0.5.5 | Added in: v0.5.5 |
- `offset` <integer> Number of bytes to skip before starting to read. Must satisfy `0 <= offset <= buf.length - 4`. **Default:** `0`.
- Returns: <integer>

Reads a signed, big-endian 32-bit integer from `buf` at the specified `offset`.

Integers read from a `Buffer` are interpreted as two's complement signed values.

```mjs
import { Buffer } from 'node:buffer';

const buf = Buffer.from([0, 0, 0, 5]);

console.log(buf.readInt32BE(0));
// Prints: 5
```

```cjs
const { Buffer } = require('node:buffer');

const buf = Buffer.from([0, 0, 0, 5]);

console.log(buf.readInt32BE(0));
// Prints: 5
```
buf.readInt32LE([offset])#
History
| Version | Changes |
|---|---|
| v10.0.0 | Removed `noAssert` and no implicit coercion of the offset to `uint32` anymore. |
| v0.5.5 | Added in: v0.5.5 |
- `offset` <integer> Number of bytes to skip before starting to read. Must satisfy `0 <= offset <= buf.length - 4`. **Default:** `0`.
- Returns: <integer>

Reads a signed, little-endian 32-bit integer from `buf` at the specified `offset`.

Integers read from a `Buffer` are interpreted as two's complement signed values.

```mjs
import { Buffer } from 'node:buffer';

const buf = Buffer.from([0, 0, 0, 5]);

console.log(buf.readInt32LE(0));
// Prints: 83886080
console.log(buf.readInt32LE(1));
// Throws ERR_OUT_OF_RANGE.
```

```cjs
const { Buffer } = require('node:buffer');

const buf = Buffer.from([0, 0, 0, 5]);

console.log(buf.readInt32LE(0));
// Prints: 83886080
console.log(buf.readInt32LE(1));
// Throws ERR_OUT_OF_RANGE.
```
buf.readIntBE(offset, byteLength)#
History
| Version | Changes |
|---|---|
| v25.5.0 | supports Uint8Array as |
| v10.0.0 | Removed `noAssert` and no implicit coercion of the offset to `uint32` anymore. |
| v0.11.15 | Added in: v0.11.15 |
- `offset` <integer> Number of bytes to skip before starting to read. Must satisfy `0 <= offset <= buf.length - byteLength`.
- `byteLength` <integer> Number of bytes to read. Must satisfy `0 < byteLength <= 6`.
- Returns: <integer>

Reads `byteLength` number of bytes from `buf` at the specified `offset` and interprets the result as a big-endian, two's complement signed value supporting up to 48 bits of accuracy.

```mjs
import { Buffer } from 'node:buffer';

const buf = Buffer.from([0x12, 0x34, 0x56, 0x78, 0x90, 0xab]);

console.log(buf.readIntBE(0, 6).toString(16));
// Prints: 1234567890ab
console.log(buf.readIntBE(1, 6).toString(16));
// Throws ERR_OUT_OF_RANGE.
console.log(buf.readIntBE(1, 0).toString(16));
// Throws ERR_OUT_OF_RANGE.
```

```cjs
const { Buffer } = require('node:buffer');

const buf = Buffer.from([0x12, 0x34, 0x56, 0x78, 0x90, 0xab]);

console.log(buf.readIntBE(0, 6).toString(16));
// Prints: 1234567890ab
console.log(buf.readIntBE(1, 6).toString(16));
// Throws ERR_OUT_OF_RANGE.
console.log(buf.readIntBE(1, 0).toString(16));
// Throws ERR_OUT_OF_RANGE.
```
buf.readIntLE(offset, byteLength)#
History
| Version | Changes |
|---|---|
| v25.5.0 | supports Uint8Array as |
| v10.0.0 | Removed `noAssert` and no implicit coercion of the offset to `uint32` anymore. |
| v0.11.15 | Added in: v0.11.15 |
- `offset` <integer> Number of bytes to skip before starting to read. Must satisfy `0 <= offset <= buf.length - byteLength`.
- `byteLength` <integer> Number of bytes to read. Must satisfy `0 < byteLength <= 6`.
- Returns: <integer>

Reads `byteLength` number of bytes from `buf` at the specified `offset` and interprets the result as a little-endian, two's complement signed value supporting up to 48 bits of accuracy.

```mjs
import { Buffer } from 'node:buffer';

const buf = Buffer.from([0x12, 0x34, 0x56, 0x78, 0x90, 0xab]);

console.log(buf.readIntLE(0, 6).toString(16));
// Prints: -546f87a9cbee
```

```cjs
const { Buffer } = require('node:buffer');

const buf = Buffer.from([0x12, 0x34, 0x56, 0x78, 0x90, 0xab]);

console.log(buf.readIntLE(0, 6).toString(16));
// Prints: -546f87a9cbee
```
buf.readUInt8([offset])#
History
| Version | Changes |
|---|---|
| v14.9.0, v12.19.0 | This function is also available as `buf.readUint8()`. |
| v10.0.0 | Removed `noAssert` and no implicit coercion of the offset to `uint32` anymore. |
| v0.5.0 | Added in: v0.5.0 |
- `offset` <integer> Number of bytes to skip before starting to read. Must satisfy `0 <= offset <= buf.length - 1`. **Default:** `0`.
- Returns: <integer>

Reads an unsigned 8-bit integer from `buf` at the specified `offset`.

This function is also available under the `readUint8` alias.

```mjs
import { Buffer } from 'node:buffer';

const buf = Buffer.from([1, -2]);

console.log(buf.readUInt8(0));
// Prints: 1
console.log(buf.readUInt8(1));
// Prints: 254
console.log(buf.readUInt8(2));
// Throws ERR_OUT_OF_RANGE.
```

```cjs
const { Buffer } = require('node:buffer');

const buf = Buffer.from([1, -2]);

console.log(buf.readUInt8(0));
// Prints: 1
console.log(buf.readUInt8(1));
// Prints: 254
console.log(buf.readUInt8(2));
// Throws ERR_OUT_OF_RANGE.
```
buf.readUInt16BE([offset])#
History
| Version | Changes |
|---|---|
| v14.9.0, v12.19.0 | This function is also available as `buf.readUint16BE()`. |
| v10.0.0 | Removed `noAssert` and no implicit coercion of the offset to `uint32` anymore. |
| v0.5.5 | Added in: v0.5.5 |
- `offset` <integer> Number of bytes to skip before starting to read. Must satisfy `0 <= offset <= buf.length - 2`. **Default:** `0`.
- Returns: <integer>

Reads an unsigned, big-endian 16-bit integer from `buf` at the specified `offset`.

This function is also available under the `readUint16BE` alias.

```mjs
import { Buffer } from 'node:buffer';

const buf = Buffer.from([0x12, 0x34, 0x56]);

console.log(buf.readUInt16BE(0).toString(16));
// Prints: 1234
console.log(buf.readUInt16BE(1).toString(16));
// Prints: 3456
```

```cjs
const { Buffer } = require('node:buffer');

const buf = Buffer.from([0x12, 0x34, 0x56]);

console.log(buf.readUInt16BE(0).toString(16));
// Prints: 1234
console.log(buf.readUInt16BE(1).toString(16));
// Prints: 3456
```
buf.readUInt16LE([offset])#
History
| Version | Changes |
|---|---|
| v14.9.0, v12.19.0 | This function is also available as `buf.readUint16LE()`. |
| v10.0.0 | Removed `noAssert` and no implicit coercion of the offset to `uint32` anymore. |
| v0.5.5 | Added in: v0.5.5 |
- `offset` <integer> Number of bytes to skip before starting to read. Must satisfy `0 <= offset <= buf.length - 2`. **Default:** `0`.
- Returns: <integer>

Reads an unsigned, little-endian 16-bit integer from `buf` at the specified `offset`.

This function is also available under the `readUint16LE` alias.

```mjs
import { Buffer } from 'node:buffer';

const buf = Buffer.from([0x12, 0x34, 0x56]);

console.log(buf.readUInt16LE(0).toString(16));
// Prints: 3412
console.log(buf.readUInt16LE(1).toString(16));
// Prints: 5634
console.log(buf.readUInt16LE(2).toString(16));
// Throws ERR_OUT_OF_RANGE.
```

```cjs
const { Buffer } = require('node:buffer');

const buf = Buffer.from([0x12, 0x34, 0x56]);

console.log(buf.readUInt16LE(0).toString(16));
// Prints: 3412
console.log(buf.readUInt16LE(1).toString(16));
// Prints: 5634
console.log(buf.readUInt16LE(2).toString(16));
// Throws ERR_OUT_OF_RANGE.
```
buf.readUInt32BE([offset])#
History
| Version | Changes |
|---|---|
| v14.9.0, v12.19.0 | This function is also available as `buf.readUint32BE()`. |
| v10.0.0 | Removed `noAssert` and no implicit coercion of the offset to `uint32` anymore. |
| v0.5.5 | Added in: v0.5.5 |
- `offset` <integer> Number of bytes to skip before starting to read. Must satisfy `0 <= offset <= buf.length - 4`. **Default:** `0`.
- Returns: <integer>

Reads an unsigned, big-endian 32-bit integer from `buf` at the specified `offset`.

This function is also available under the `readUint32BE` alias.

```mjs
import { Buffer } from 'node:buffer';

const buf = Buffer.from([0x12, 0x34, 0x56, 0x78]);

console.log(buf.readUInt32BE(0).toString(16));
// Prints: 12345678
```

```cjs
const { Buffer } = require('node:buffer');

const buf = Buffer.from([0x12, 0x34, 0x56, 0x78]);

console.log(buf.readUInt32BE(0).toString(16));
// Prints: 12345678
```
buf.readUInt32LE([offset])#
History
| Version | Changes |
|---|---|
| v14.9.0, v12.19.0 | This function is also available as `buf.readUint32LE()`. |
| v10.0.0 | Removed `noAssert` and no implicit coercion of the offset to `uint32` anymore. |
| v0.5.5 | Added in: v0.5.5 |
- `offset` <integer> Number of bytes to skip before starting to read. Must satisfy `0 <= offset <= buf.length - 4`. **Default:** `0`.
- Returns: <integer>

Reads an unsigned, little-endian 32-bit integer from `buf` at the specified `offset`.

This function is also available under the `readUint32LE` alias.

```mjs
import { Buffer } from 'node:buffer';

const buf = Buffer.from([0x12, 0x34, 0x56, 0x78]);

console.log(buf.readUInt32LE(0).toString(16));
// Prints: 78563412
console.log(buf.readUInt32LE(1).toString(16));
// Throws ERR_OUT_OF_RANGE.
```

```cjs
const { Buffer } = require('node:buffer');

const buf = Buffer.from([0x12, 0x34, 0x56, 0x78]);

console.log(buf.readUInt32LE(0).toString(16));
// Prints: 78563412
console.log(buf.readUInt32LE(1).toString(16));
// Throws ERR_OUT_OF_RANGE.
```
buf.readUIntBE(offset, byteLength)#
History
| Version | Changes |
|---|---|
| v25.5.0 | supports Uint8Array as |
| v14.9.0, v12.19.0 | This function is also available as `buf.readUintBE()`. |
| v10.0.0 | Removed `noAssert` and no implicit coercion of the offset to `uint32` anymore. |
| v0.11.15 | Added in: v0.11.15 |
- `offset` <integer> Number of bytes to skip before starting to read. Must satisfy `0 <= offset <= buf.length - byteLength`.
- `byteLength` <integer> Number of bytes to read. Must satisfy `0 < byteLength <= 6`.
- Returns: <integer>

Reads `byteLength` number of bytes from `buf` at the specified `offset` and interprets the result as an unsigned big-endian integer supporting up to 48 bits of accuracy.

This function is also available under the `readUintBE` alias.

```mjs
import { Buffer } from 'node:buffer';

const buf = Buffer.from([0x12, 0x34, 0x56, 0x78, 0x90, 0xab]);

console.log(buf.readUIntBE(0, 6).toString(16));
// Prints: 1234567890ab
console.log(buf.readUIntBE(1, 6).toString(16));
// Throws ERR_OUT_OF_RANGE.
```

```cjs
const { Buffer } = require('node:buffer');

const buf = Buffer.from([0x12, 0x34, 0x56, 0x78, 0x90, 0xab]);

console.log(buf.readUIntBE(0, 6).toString(16));
// Prints: 1234567890ab
console.log(buf.readUIntBE(1, 6).toString(16));
// Throws ERR_OUT_OF_RANGE.
```
buf.readUIntLE(offset, byteLength)#
History
| Version | Changes |
|---|---|
| v25.5.0 | supports Uint8Array as |
| v14.9.0, v12.19.0 | This function is also available as `buf.readUintLE()`. |
| v10.0.0 | Removed `noAssert` and no implicit coercion of the offset to `uint32` anymore. |
| v0.11.15 | Added in: v0.11.15 |
- `offset` <integer> Number of bytes to skip before starting to read. Must satisfy `0 <= offset <= buf.length - byteLength`.
- `byteLength` <integer> Number of bytes to read. Must satisfy `0 < byteLength <= 6`.
- Returns: <integer>

Reads `byteLength` number of bytes from `buf` at the specified `offset` and interprets the result as an unsigned, little-endian integer supporting up to 48 bits of accuracy.

This function is also available under the `readUintLE` alias.

```mjs
import { Buffer } from 'node:buffer';

const buf = Buffer.from([0x12, 0x34, 0x56, 0x78, 0x90, 0xab]);

console.log(buf.readUIntLE(0, 6).toString(16));
// Prints: ab9078563412
```

```cjs
const { Buffer } = require('node:buffer');

const buf = Buffer.from([0x12, 0x34, 0x56, 0x78, 0x90, 0xab]);

console.log(buf.readUIntLE(0, 6).toString(16));
// Prints: ab9078563412
```
buf.subarray([start[, end]])#
- `start` <integer> Where the new `Buffer` will start. **Default:** `0`.
- `end` <integer> Where the new `Buffer` will end (not inclusive). **Default:** `buf.length`.
- Returns: <Buffer>

Returns a new `Buffer` that references the same memory as the original, but offset and cropped by the `start` and `end` indexes.

Specifying `end` greater than `buf.length` will return the same result as that of `end` equal to `buf.length`.

This method is inherited from `TypedArray.prototype.subarray()`.

Modifying the new `Buffer` slice will modify the memory in the original `Buffer` because the allocated memory of the two objects overlap.

```mjs
import { Buffer } from 'node:buffer';

// Create a `Buffer` with the ASCII alphabet, take a slice, and modify one byte
// from the original `Buffer`.

const buf1 = Buffer.allocUnsafe(26);

for (let i = 0; i < 26; i++) {
  // 97 is the decimal ASCII value for 'a'.
  buf1[i] = i + 97;
}

const buf2 = buf1.subarray(0, 3);

console.log(buf2.toString('ascii', 0, buf2.length));
// Prints: abc

buf1[0] = 33;

console.log(buf2.toString('ascii', 0, buf2.length));
// Prints: !bc
```

```cjs
const { Buffer } = require('node:buffer');

// Create a `Buffer` with the ASCII alphabet, take a slice, and modify one byte
// from the original `Buffer`.

const buf1 = Buffer.allocUnsafe(26);

for (let i = 0; i < 26; i++) {
  // 97 is the decimal ASCII value for 'a'.
  buf1[i] = i + 97;
}

const buf2 = buf1.subarray(0, 3);

console.log(buf2.toString('ascii', 0, buf2.length));
// Prints: abc

buf1[0] = 33;

console.log(buf2.toString('ascii', 0, buf2.length));
// Prints: !bc
```
Specifying negative indexes causes the slice to be generated relative to the end of `buf` rather than the beginning.

```mjs
import { Buffer } from 'node:buffer';

const buf = Buffer.from('buffer');

console.log(buf.subarray(-6, -1).toString());
// Prints: buffe
// (Equivalent to buf.subarray(0, 5).)

console.log(buf.subarray(-6, -2).toString());
// Prints: buff
// (Equivalent to buf.subarray(0, 4).)

console.log(buf.subarray(-5, -2).toString());
// Prints: uff
// (Equivalent to buf.subarray(1, 4).)
```

```cjs
const { Buffer } = require('node:buffer');

const buf = Buffer.from('buffer');

console.log(buf.subarray(-6, -1).toString());
// Prints: buffe
// (Equivalent to buf.subarray(0, 5).)

console.log(buf.subarray(-6, -2).toString());
// Prints: buff
// (Equivalent to buf.subarray(0, 4).)

console.log(buf.subarray(-5, -2).toString());
// Prints: uff
// (Equivalent to buf.subarray(1, 4).)
```
buf.slice([start[, end]])#
History
| Version | Changes |
|---|---|
| v17.5.0, v16.15.0 | The buf.slice() method has been deprecated. |
| v7.0.0 | All offsets are now coerced to integers before doing any calculations with them. |
| v7.1.0, v6.9.2 | Coercing the offsets to integers now handles values outside the 32-bit integer range properly. |
| v0.3.0 | Added in: v0.3.0 |
- `start` <integer> Where the new `Buffer` will start. **Default:** `0`.
- `end` <integer> Where the new `Buffer` will end (not inclusive). **Default:** `buf.length`.
- Returns: <Buffer>
Stability: 0 - Deprecated: Use `buf.subarray` instead.

Returns a new `Buffer` that references the same memory as the original, but offset and cropped by the `start` and `end` indexes.
This method is not compatible with `Uint8Array.prototype.slice()`, which is a superclass of `Buffer`. To copy the slice, use `Uint8Array.prototype.slice()`.

```mjs
import { Buffer } from 'node:buffer';

const buf = Buffer.from('buffer');

const copiedBuf = Uint8Array.prototype.slice.call(buf);
copiedBuf[0]++;
console.log(copiedBuf.toString());
// Prints: cuffer

console.log(buf.toString());
// Prints: buffer

// With buf.slice(), the original buffer is modified.
const notReallyCopiedBuf = buf.slice();
notReallyCopiedBuf[0]++;
console.log(notReallyCopiedBuf.toString());
// Prints: cuffer
console.log(buf.toString());
// Also prints: cuffer (!)
```

```cjs
const { Buffer } = require('node:buffer');

const buf = Buffer.from('buffer');

const copiedBuf = Uint8Array.prototype.slice.call(buf);
copiedBuf[0]++;
console.log(copiedBuf.toString());
// Prints: cuffer

console.log(buf.toString());
// Prints: buffer

// With buf.slice(), the original buffer is modified.
const notReallyCopiedBuf = buf.slice();
notReallyCopiedBuf[0]++;
console.log(notReallyCopiedBuf.toString());
// Prints: cuffer
console.log(buf.toString());
// Also prints: cuffer (!)
```
buf.swap16()#
- Returns: <Buffer> A reference to `buf`.

Interprets `buf` as an array of unsigned 16-bit integers and swaps the byte order in-place. Throws `ERR_INVALID_BUFFER_SIZE` if `buf.length` is not a multiple of 2.

```mjs
import { Buffer } from 'node:buffer';

const buf1 = Buffer.from([0x1, 0x2, 0x3, 0x4, 0x5, 0x6, 0x7, 0x8]);

console.log(buf1);
// Prints: <Buffer 01 02 03 04 05 06 07 08>

buf1.swap16();

console.log(buf1);
// Prints: <Buffer 02 01 04 03 06 05 08 07>

const buf2 = Buffer.from([0x1, 0x2, 0x3]);

buf2.swap16();
// Throws ERR_INVALID_BUFFER_SIZE.
```

```cjs
const { Buffer } = require('node:buffer');

const buf1 = Buffer.from([0x1, 0x2, 0x3, 0x4, 0x5, 0x6, 0x7, 0x8]);

console.log(buf1);
// Prints: <Buffer 01 02 03 04 05 06 07 08>

buf1.swap16();

console.log(buf1);
// Prints: <Buffer 02 01 04 03 06 05 08 07>

const buf2 = Buffer.from([0x1, 0x2, 0x3]);

buf2.swap16();
// Throws ERR_INVALID_BUFFER_SIZE.
```
One convenient use of `buf.swap16()` is to perform a fast in-place conversion between UTF-16 little-endian and UTF-16 big-endian:

```mjs
import { Buffer } from 'node:buffer';

const buf = Buffer.from('This is little-endian UTF-16', 'utf16le');
buf.swap16(); // Convert to big-endian UTF-16 text.
```

```cjs
const { Buffer } = require('node:buffer');

const buf = Buffer.from('This is little-endian UTF-16', 'utf16le');
buf.swap16(); // Convert to big-endian UTF-16 text.
```
buf.swap32()#
- Returns: <Buffer> A reference to `buf`.

Interprets `buf` as an array of unsigned 32-bit integers and swaps the byte order in-place. Throws `ERR_INVALID_BUFFER_SIZE` if `buf.length` is not a multiple of 4.

```mjs
import { Buffer } from 'node:buffer';

const buf1 = Buffer.from([0x1, 0x2, 0x3, 0x4, 0x5, 0x6, 0x7, 0x8]);

console.log(buf1);
// Prints: <Buffer 01 02 03 04 05 06 07 08>

buf1.swap32();

console.log(buf1);
// Prints: <Buffer 04 03 02 01 08 07 06 05>

const buf2 = Buffer.from([0x1, 0x2, 0x3]);

buf2.swap32();
// Throws ERR_INVALID_BUFFER_SIZE.
```

```cjs
const { Buffer } = require('node:buffer');

const buf1 = Buffer.from([0x1, 0x2, 0x3, 0x4, 0x5, 0x6, 0x7, 0x8]);

console.log(buf1);
// Prints: <Buffer 01 02 03 04 05 06 07 08>

buf1.swap32();

console.log(buf1);
// Prints: <Buffer 04 03 02 01 08 07 06 05>

const buf2 = Buffer.from([0x1, 0x2, 0x3]);

buf2.swap32();
// Throws ERR_INVALID_BUFFER_SIZE.
```
buf.swap64()#
- Returns: <Buffer> A reference to `buf`.

Interprets `buf` as an array of 64-bit numbers and swaps byte order in-place. Throws `ERR_INVALID_BUFFER_SIZE` if `buf.length` is not a multiple of 8.

```mjs
import { Buffer } from 'node:buffer';

const buf1 = Buffer.from([0x1, 0x2, 0x3, 0x4, 0x5, 0x6, 0x7, 0x8]);

console.log(buf1);
// Prints: <Buffer 01 02 03 04 05 06 07 08>

buf1.swap64();

console.log(buf1);
// Prints: <Buffer 08 07 06 05 04 03 02 01>

const buf2 = Buffer.from([0x1, 0x2, 0x3]);

buf2.swap64();
// Throws ERR_INVALID_BUFFER_SIZE.
```

```cjs
const { Buffer } = require('node:buffer');

const buf1 = Buffer.from([0x1, 0x2, 0x3, 0x4, 0x5, 0x6, 0x7, 0x8]);

console.log(buf1);
// Prints: <Buffer 01 02 03 04 05 06 07 08>

buf1.swap64();

console.log(buf1);
// Prints: <Buffer 08 07 06 05 04 03 02 01>

const buf2 = Buffer.from([0x1, 0x2, 0x3]);

buf2.swap64();
// Throws ERR_INVALID_BUFFER_SIZE.
```
buf.toJSON()#
- Returns: <Object>

Returns a JSON representation of `buf`. `JSON.stringify()` implicitly calls this function when stringifying a `Buffer` instance.

`Buffer.from()` accepts objects in the format returned from this method. In particular, `Buffer.from(buf.toJSON())` works like `Buffer.from(buf)`.

```mjs
import { Buffer } from 'node:buffer';

const buf = Buffer.from([0x1, 0x2, 0x3, 0x4, 0x5]);
const json = JSON.stringify(buf);

console.log(json);
// Prints: {"type":"Buffer","data":[1,2,3,4,5]}

const copy = JSON.parse(json, (key, value) => {
  return value && value.type === 'Buffer' ?
    Buffer.from(value) :
    value;
});

console.log(copy);
// Prints: <Buffer 01 02 03 04 05>
```

```cjs
const { Buffer } = require('node:buffer');

const buf = Buffer.from([0x1, 0x2, 0x3, 0x4, 0x5]);
const json = JSON.stringify(buf);

console.log(json);
// Prints: {"type":"Buffer","data":[1,2,3,4,5]}

const copy = JSON.parse(json, (key, value) => {
  return value && value.type === 'Buffer' ?
    Buffer.from(value) :
    value;
});

console.log(copy);
// Prints: <Buffer 01 02 03 04 05>
```
buf.toString([encoding[, start[, end]]])#
History
| Version | Changes |
|---|---|
| v25.5.0 | supports Uint8Array as |
| v0.1.90 | Added in: v0.1.90 |
- `encoding` <string> The character encoding to use. **Default:** `'utf8'`.
- `start` <integer> The byte offset to start decoding at. **Default:** `0`.
- `end` <integer> The byte offset to stop decoding at (not inclusive). **Default:** `buf.length`.
- Returns: <string>

Decodes `buf` to a string according to the specified character encoding in `encoding`. `start` and `end` may be passed to decode only a subset of `buf`.

If `encoding` is `'utf8'` and a byte sequence in the input is not valid UTF-8, then each invalid byte is replaced with the replacement character `U+FFFD`.

The maximum length of a string instance (in UTF-16 code units) is available as `buffer.constants.MAX_STRING_LENGTH`.
```mjs
import { Buffer } from 'node:buffer';

const buf1 = Buffer.allocUnsafe(26);

for (let i = 0; i < 26; i++) {
  // 97 is the decimal ASCII value for 'a'.
  buf1[i] = i + 97;
}

console.log(buf1.toString('utf8'));
// Prints: abcdefghijklmnopqrstuvwxyz
console.log(buf1.toString('utf8', 0, 5));
// Prints: abcde

const buf2 = Buffer.from('tést');

console.log(buf2.toString('hex'));
// Prints: 74c3a97374
console.log(buf2.toString('utf8', 0, 3));
// Prints: té
console.log(buf2.toString(undefined, 0, 3));
// Prints: té
```

```cjs
const { Buffer } = require('node:buffer');

const buf1 = Buffer.allocUnsafe(26);

for (let i = 0; i < 26; i++) {
  // 97 is the decimal ASCII value for 'a'.
  buf1[i] = i + 97;
}

console.log(buf1.toString('utf8'));
// Prints: abcdefghijklmnopqrstuvwxyz
console.log(buf1.toString('utf8', 0, 5));
// Prints: abcde

const buf2 = Buffer.from('tést');

console.log(buf2.toString('hex'));
// Prints: 74c3a97374
console.log(buf2.toString('utf8', 0, 3));
// Prints: té
console.log(buf2.toString(undefined, 0, 3));
// Prints: té
```
buf.values()#
- Returns: <Iterator>

Creates and returns an iterator for `buf` values (bytes). This function is called automatically when a `Buffer` is used in a `for..of` statement.

```mjs
import { Buffer } from 'node:buffer';

const buf = Buffer.from('buffer');

for (const value of buf.values()) {
  console.log(value);
}
// Prints:
//   98
//   117
//   102
//   102
//   101
//   114

for (const value of buf) {
  console.log(value);
}
// Prints:
//   98
//   117
//   102
//   102
//   101
//   114
```

```cjs
const { Buffer } = require('node:buffer');

const buf = Buffer.from('buffer');

for (const value of buf.values()) {
  console.log(value);
}
// Prints:
//   98
//   117
//   102
//   102
//   101
//   114

for (const value of buf) {
  console.log(value);
}
// Prints:
//   98
//   117
//   102
//   102
//   101
//   114
```
buf.write(string[, offset[, length]][, encoding])#
History
| Version | Changes |
|---|---|
| v25.5.0 | supports Uint8Array as |
| v0.1.90 | Added in: v0.1.90 |
- `string` <string> String to write to `buf`.
- `offset` <integer> Number of bytes to skip before starting to write `string`. **Default:** `0`.
- `length` <integer> Maximum number of bytes to write (written bytes will not exceed `buf.length - offset`). **Default:** `buf.length - offset`.
- `encoding` <string> The character encoding of `string`. **Default:** `'utf8'`.
- Returns: <integer> Number of bytes written.

Writes `string` to `buf` at `offset` according to the character encoding in `encoding`. The `length` parameter is the number of bytes to write. If `buf` did not contain enough space to fit the entire string, only part of `string` will be written. However, partially encoded characters will not be written.

```mjs
import { Buffer } from 'node:buffer';

const buf = Buffer.alloc(256);

const len = buf.write('\u00bd + \u00bc = \u00be', 0);

console.log(`${len} bytes: ${buf.toString('utf8', 0, len)}`);
// Prints: 12 bytes: ½ + ¼ = ¾

const buffer = Buffer.alloc(10);

const length = buffer.write('abcd', 8);

console.log(`${length} bytes: ${buffer.toString('utf8', 8, 10)}`);
// Prints: 2 bytes : ab
```

```cjs
const { Buffer } = require('node:buffer');

const buf = Buffer.alloc(256);

const len = buf.write('\u00bd + \u00bc = \u00be', 0);

console.log(`${len} bytes: ${buf.toString('utf8', 0, len)}`);
// Prints: 12 bytes: ½ + ¼ = ¾

const buffer = Buffer.alloc(10);

const length = buffer.write('abcd', 8);

console.log(`${length} bytes: ${buffer.toString('utf8', 8, 10)}`);
// Prints: 2 bytes : ab
```
buf.writeBigInt64BE(value[, offset])#
- `value` <bigint> Number to be written to `buf`.
- `offset` <integer> Number of bytes to skip before starting to write. Must satisfy: `0 <= offset <= buf.length - 8`. **Default:** `0`.
- Returns: <integer> `offset` plus the number of bytes written.

Writes `value` to `buf` at the specified `offset` as big-endian.

`value` is interpreted and written as a two's complement signed integer.

```js
import { Buffer } from 'node:buffer';

const buf = Buffer.allocUnsafe(8);

buf.writeBigInt64BE(0x0102030405060708n, 0);

console.log(buf);
// Prints: <Buffer 01 02 03 04 05 06 07 08>
```
buf.writeBigInt64LE(value[, offset])#
- `value` <bigint> Number to be written to `buf`.
- `offset` <integer> Number of bytes to skip before starting to write. Must satisfy: `0 <= offset <= buf.length - 8`. **Default:** `0`.
- Returns: <integer> `offset` plus the number of bytes written.

Writes `value` to `buf` at the specified `offset` as little-endian.

`value` is interpreted and written as a two's complement signed integer.

```js
import { Buffer } from 'node:buffer';

const buf = Buffer.allocUnsafe(8);

buf.writeBigInt64LE(0x0102030405060708n, 0);

console.log(buf);
// Prints: <Buffer 08 07 06 05 04 03 02 01>
```
buf.writeBigUInt64BE(value[, offset])#
History
| Version | Changes |
|---|---|
| v14.10.0, v12.19.0 | This function is also available as `writeBigUint64BE`. |
| v12.0.0, v10.20.0 | Added in: v12.0.0, v10.20.0 |
- `value` <bigint> Number to be written to `buf`.
- `offset` <integer> Number of bytes to skip before starting to write. Must satisfy: `0 <= offset <= buf.length - 8`. **Default:** `0`.
- Returns: <integer> `offset` plus the number of bytes written.

Writes `value` to `buf` at the specified `offset` as big-endian.

This function is also available under the `writeBigUint64BE` alias.

```js
import { Buffer } from 'node:buffer';

const buf = Buffer.allocUnsafe(8);

buf.writeBigUInt64BE(0xdecafafecacefaden, 0);

console.log(buf);
// Prints: <Buffer de ca fa fe ca ce fa de>
```
buf.writeBigUInt64LE(value[, offset])#
History
| Version | Changes |
|---|---|
| v14.10.0, v12.19.0 | This function is also available as `writeBigUint64LE`. |
| v12.0.0, v10.20.0 | Added in: v12.0.0, v10.20.0 |
- `value` <bigint> Number to be written to `buf`.
- `offset` <integer> Number of bytes to skip before starting to write. Must satisfy: `0 <= offset <= buf.length - 8`. **Default:** `0`.
- Returns: <integer> `offset` plus the number of bytes written.

Writes `value` to `buf` at the specified `offset` as little-endian.

```js
import { Buffer } from 'node:buffer';

const buf = Buffer.allocUnsafe(8);

buf.writeBigUInt64LE(0xdecafafecacefaden, 0);

console.log(buf);
// Prints: <Buffer de fa ce ca fe fa ca de>
```

This function is also available under the `writeBigUint64LE` alias.
buf.writeDoubleBE(value[, offset])#
History
| Version | Changes |
|---|---|
| v10.0.0 | Removed |
| v0.11.15 | Added in: v0.11.15 |
- `value` <number> Number to be written to `buf`.
- `offset` <integer> Number of bytes to skip before starting to write. Must satisfy `0 <= offset <= buf.length - 8`. **Default:** `0`.
- Returns: <integer> `offset` plus the number of bytes written.

Writes `value` to `buf` at the specified `offset` as big-endian. The `value` must be a JavaScript number. Behavior is undefined when `value` is anything other than a JavaScript number.

```js
import { Buffer } from 'node:buffer';

const buf = Buffer.allocUnsafe(8);

buf.writeDoubleBE(123.456, 0);

console.log(buf);
// Prints: <Buffer 40 5e dd 2f 1a 9f be 77>
```
buf.writeDoubleLE(value[, offset])#
History
| Version | Changes |
|---|---|
| v10.0.0 | Removed |
| v0.11.15 | Added in: v0.11.15 |
- `value` <number> Number to be written to `buf`.
- `offset` <integer> Number of bytes to skip before starting to write. Must satisfy `0 <= offset <= buf.length - 8`. **Default:** `0`.
- Returns: <integer> `offset` plus the number of bytes written.

Writes `value` to `buf` at the specified `offset` as little-endian. The `value` must be a JavaScript number. Behavior is undefined when `value` is anything other than a JavaScript number.

```js
import { Buffer } from 'node:buffer';

const buf = Buffer.allocUnsafe(8);

buf.writeDoubleLE(123.456, 0);

console.log(buf);
// Prints: <Buffer 77 be 9f 1a 2f dd 5e 40>
```
buf.writeFloatBE(value[, offset])#
History
| Version | Changes |
|---|---|
| v10.0.0 | Removed |
| v0.11.15 | Added in: v0.11.15 |
- `value` <number> Number to be written to `buf`.
- `offset` <integer> Number of bytes to skip before starting to write. Must satisfy `0 <= offset <= buf.length - 4`. **Default:** `0`.
- Returns: <integer> `offset` plus the number of bytes written.

Writes `value` to `buf` at the specified `offset` as big-endian. Behavior is undefined when `value` is anything other than a JavaScript number.

```js
import { Buffer } from 'node:buffer';

const buf = Buffer.allocUnsafe(4);

buf.writeFloatBE(0xcafebabe, 0);

console.log(buf);
// Prints: <Buffer 4f 4a fe bb>
```
buf.writeFloatLE(value[, offset])#
History
| Version | Changes |
|---|---|
| v10.0.0 | Removed |
| v0.11.15 | Added in: v0.11.15 |
- `value` <number> Number to be written to `buf`.
- `offset` <integer> Number of bytes to skip before starting to write. Must satisfy `0 <= offset <= buf.length - 4`. **Default:** `0`.
- Returns: <integer> `offset` plus the number of bytes written.

Writes `value` to `buf` at the specified `offset` as little-endian. Behavior is undefined when `value` is anything other than a JavaScript number.

```js
import { Buffer } from 'node:buffer';

const buf = Buffer.allocUnsafe(4);

buf.writeFloatLE(0xcafebabe, 0);

console.log(buf);
// Prints: <Buffer bb fe 4a 4f>
```
buf.writeInt8(value[, offset])#
History
| Version | Changes |
|---|---|
| v10.0.0 | Removed |
| v0.5.0 | Added in: v0.5.0 |
- `value` <integer> Number to be written to `buf`.
- `offset` <integer> Number of bytes to skip before starting to write. Must satisfy `0 <= offset <= buf.length - 1`. **Default:** `0`.
- Returns: <integer> `offset` plus the number of bytes written.

Writes `value` to `buf` at the specified `offset`. `value` must be a valid signed 8-bit integer. Behavior is undefined when `value` is anything other than a signed 8-bit integer.

`value` is interpreted and written as a two's complement signed integer.

```js
import { Buffer } from 'node:buffer';

const buf = Buffer.allocUnsafe(2);

buf.writeInt8(2, 0);
buf.writeInt8(-2, 1);

console.log(buf);
// Prints: <Buffer 02 fe>
```
buf.writeInt16BE(value[, offset])#
History
| Version | Changes |
|---|---|
| v10.0.0 | Removed |
| v0.5.5 | Added in: v0.5.5 |
- `value` <integer> Number to be written to `buf`.
- `offset` <integer> Number of bytes to skip before starting to write. Must satisfy `0 <= offset <= buf.length - 2`. **Default:** `0`.
- Returns: <integer> `offset` plus the number of bytes written.

Writes `value` to `buf` at the specified `offset` as big-endian. The `value` must be a valid signed 16-bit integer. Behavior is undefined when `value` is anything other than a signed 16-bit integer.

The `value` is interpreted and written as a two's complement signed integer.

```js
import { Buffer } from 'node:buffer';

const buf = Buffer.allocUnsafe(2);

buf.writeInt16BE(0x0102, 0);

console.log(buf);
// Prints: <Buffer 01 02>
```
buf.writeInt16LE(value[, offset])#
History
| Version | Changes |
|---|---|
| v10.0.0 | Removed |
| v0.5.5 | Added in: v0.5.5 |
- `value` <integer> Number to be written to `buf`.
- `offset` <integer> Number of bytes to skip before starting to write. Must satisfy `0 <= offset <= buf.length - 2`. **Default:** `0`.
- Returns: <integer> `offset` plus the number of bytes written.

Writes `value` to `buf` at the specified `offset` as little-endian. The `value` must be a valid signed 16-bit integer. Behavior is undefined when `value` is anything other than a signed 16-bit integer.

The `value` is interpreted and written as a two's complement signed integer.

```js
import { Buffer } from 'node:buffer';

const buf = Buffer.allocUnsafe(2);

buf.writeInt16LE(0x0304, 0);

console.log(buf);
// Prints: <Buffer 04 03>
```
buf.writeInt32BE(value[, offset])#
History
| Version | Changes |
|---|---|
| v10.0.0 | Removed |
| v0.5.5 | Added in: v0.5.5 |
- `value` <integer> Number to be written to `buf`.
- `offset` <integer> Number of bytes to skip before starting to write. Must satisfy `0 <= offset <= buf.length - 4`. **Default:** `0`.
- Returns: <integer> `offset` plus the number of bytes written.

Writes `value` to `buf` at the specified `offset` as big-endian. The `value` must be a valid signed 32-bit integer. Behavior is undefined when `value` is anything other than a signed 32-bit integer.

The `value` is interpreted and written as a two's complement signed integer.

```js
import { Buffer } from 'node:buffer';

const buf = Buffer.allocUnsafe(4);

buf.writeInt32BE(0x01020304, 0);

console.log(buf);
// Prints: <Buffer 01 02 03 04>
```
buf.writeInt32LE(value[, offset])#
History
| Version | Changes |
|---|---|
| v10.0.0 | Removed |
| v0.5.5 | Added in: v0.5.5 |
- `value` <integer> Number to be written to `buf`.
- `offset` <integer> Number of bytes to skip before starting to write. Must satisfy `0 <= offset <= buf.length - 4`. **Default:** `0`.
- Returns: <integer> `offset` plus the number of bytes written.

Writes `value` to `buf` at the specified `offset` as little-endian. The `value` must be a valid signed 32-bit integer. Behavior is undefined when `value` is anything other than a signed 32-bit integer.

The `value` is interpreted and written as a two's complement signed integer.

```js
import { Buffer } from 'node:buffer';

const buf = Buffer.allocUnsafe(4);

buf.writeInt32LE(0x05060708, 0);

console.log(buf);
// Prints: <Buffer 08 07 06 05>
```
buf.writeIntBE(value, offset, byteLength)#
History
| Version | Changes |
|---|---|
| v10.0.0 | Removed |
| v0.11.15 | Added in: v0.11.15 |
- `value` <integer> Number to be written to `buf`.
- `offset` <integer> Number of bytes to skip before starting to write. Must satisfy `0 <= offset <= buf.length - byteLength`.
- `byteLength` <integer> Number of bytes to write. Must satisfy `0 < byteLength <= 6`.
- Returns: <integer> `offset` plus the number of bytes written.

Writes `byteLength` bytes of `value` to `buf` at the specified `offset` as big-endian. Supports up to 48 bits of accuracy. Behavior is undefined when `value` is anything other than a signed integer.

```js
import { Buffer } from 'node:buffer';

const buf = Buffer.allocUnsafe(6);

buf.writeIntBE(0x1234567890ab, 0, 6);

console.log(buf);
// Prints: <Buffer 12 34 56 78 90 ab>
```
buf.writeIntLE(value, offset, byteLength)#
History
| Version | Changes |
|---|---|
| v10.0.0 | Removed |
| v0.11.15 | Added in: v0.11.15 |
- `value` <integer> Number to be written to `buf`.
- `offset` <integer> Number of bytes to skip before starting to write. Must satisfy `0 <= offset <= buf.length - byteLength`.
- `byteLength` <integer> Number of bytes to write. Must satisfy `0 < byteLength <= 6`.
- Returns: <integer> `offset` plus the number of bytes written.

Writes `byteLength` bytes of `value` to `buf` at the specified `offset` as little-endian. Supports up to 48 bits of accuracy. Behavior is undefined when `value` is anything other than a signed integer.

```js
import { Buffer } from 'node:buffer';

const buf = Buffer.allocUnsafe(6);

buf.writeIntLE(0x1234567890ab, 0, 6);

console.log(buf);
// Prints: <Buffer ab 90 78 56 34 12>
```
buf.writeUInt8(value[, offset])#
History
| Version | Changes |
|---|---|
| v14.9.0, v12.19.0 | This function is also available as `writeUint8`. |
| v10.0.0 | Removed |
| v0.5.0 | Added in: v0.5.0 |
- `value` <integer> Number to be written to `buf`.
- `offset` <integer> Number of bytes to skip before starting to write. Must satisfy `0 <= offset <= buf.length - 1`. **Default:** `0`.
- Returns: <integer> `offset` plus the number of bytes written.

Writes `value` to `buf` at the specified `offset`. `value` must be a valid unsigned 8-bit integer. Behavior is undefined when `value` is anything other than an unsigned 8-bit integer.

This function is also available under the `writeUint8` alias.

```js
import { Buffer } from 'node:buffer';

const buf = Buffer.allocUnsafe(4);

buf.writeUInt8(0x3, 0);
buf.writeUInt8(0x4, 1);
buf.writeUInt8(0x23, 2);
buf.writeUInt8(0x42, 3);

console.log(buf);
// Prints: <Buffer 03 04 23 42>
```
buf.writeUInt16BE(value[, offset])#
History
| Version | Changes |
|---|---|
| v14.9.0, v12.19.0 | This function is also available as `writeUint16BE`. |
| v10.0.0 | Removed |
| v0.5.5 | Added in: v0.5.5 |
- `value` <integer> Number to be written to `buf`.
- `offset` <integer> Number of bytes to skip before starting to write. Must satisfy `0 <= offset <= buf.length - 2`. **Default:** `0`.
- Returns: <integer> `offset` plus the number of bytes written.

Writes `value` to `buf` at the specified `offset` as big-endian. The `value` must be a valid unsigned 16-bit integer. Behavior is undefined when `value` is anything other than an unsigned 16-bit integer.

This function is also available under the `writeUint16BE` alias.

```js
import { Buffer } from 'node:buffer';

const buf = Buffer.allocUnsafe(4);

buf.writeUInt16BE(0xdead, 0);
buf.writeUInt16BE(0xbeef, 2);

console.log(buf);
// Prints: <Buffer de ad be ef>
```
buf.writeUInt16LE(value[, offset])#
History
| Version | Changes |
|---|---|
| v14.9.0, v12.19.0 | This function is also available as `writeUint16LE`. |
| v10.0.0 | Removed |
| v0.5.5 | Added in: v0.5.5 |
- `value` <integer> Number to be written to `buf`.
- `offset` <integer> Number of bytes to skip before starting to write. Must satisfy `0 <= offset <= buf.length - 2`. **Default:** `0`.
- Returns: <integer> `offset` plus the number of bytes written.

Writes `value` to `buf` at the specified `offset` as little-endian. The `value` must be a valid unsigned 16-bit integer. Behavior is undefined when `value` is anything other than an unsigned 16-bit integer.

This function is also available under the `writeUint16LE` alias.

```js
import { Buffer } from 'node:buffer';

const buf = Buffer.allocUnsafe(4);

buf.writeUInt16LE(0xdead, 0);
buf.writeUInt16LE(0xbeef, 2);

console.log(buf);
// Prints: <Buffer ad de ef be>
```
buf.writeUInt32BE(value[, offset])#
History
| Version | Changes |
|---|---|
| v14.9.0, v12.19.0 | This function is also available as `writeUint32BE`. |
| v10.0.0 | Removed |
| v0.5.5 | Added in: v0.5.5 |
- `value` <integer> Number to be written to `buf`.
- `offset` <integer> Number of bytes to skip before starting to write. Must satisfy `0 <= offset <= buf.length - 4`. **Default:** `0`.
- Returns: <integer> `offset` plus the number of bytes written.

Writes `value` to `buf` at the specified `offset` as big-endian. The `value` must be a valid unsigned 32-bit integer. Behavior is undefined when `value` is anything other than an unsigned 32-bit integer.

This function is also available under the `writeUint32BE` alias.

```js
import { Buffer } from 'node:buffer';

const buf = Buffer.allocUnsafe(4);

buf.writeUInt32BE(0xfeedface, 0);

console.log(buf);
// Prints: <Buffer fe ed fa ce>
```
buf.writeUInt32LE(value[, offset])#
History
| Version | Changes |
|---|---|
| v14.9.0, v12.19.0 | This function is also available as `writeUint32LE`. |
| v10.0.0 | Removed |
| v0.5.5 | Added in: v0.5.5 |
- `value` <integer> Number to be written to `buf`.
- `offset` <integer> Number of bytes to skip before starting to write. Must satisfy `0 <= offset <= buf.length - 4`. **Default:** `0`.
- Returns: <integer> `offset` plus the number of bytes written.

Writes `value` to `buf` at the specified `offset` as little-endian. The `value` must be a valid unsigned 32-bit integer. Behavior is undefined when `value` is anything other than an unsigned 32-bit integer.

This function is also available under the `writeUint32LE` alias.

```js
import { Buffer } from 'node:buffer';

const buf = Buffer.allocUnsafe(4);

buf.writeUInt32LE(0xfeedface, 0);

console.log(buf);
// Prints: <Buffer ce fa ed fe>
```
buf.writeUIntBE(value, offset, byteLength)#
History
| Version | Changes |
|---|---|
| v14.9.0, v12.19.0 | This function is also available as `writeUintBE`. |
| v10.0.0 | Removed |
| v0.5.5 | Added in: v0.5.5 |
- `value` <integer> Number to be written to `buf`.
- `offset` <integer> Number of bytes to skip before starting to write. Must satisfy `0 <= offset <= buf.length - byteLength`.
- `byteLength` <integer> Number of bytes to write. Must satisfy `0 < byteLength <= 6`.
- Returns: <integer> `offset` plus the number of bytes written.

Writes `byteLength` bytes of `value` to `buf` at the specified `offset` as big-endian. Supports up to 48 bits of accuracy. Behavior is undefined when `value` is anything other than an unsigned integer.

This function is also available under the `writeUintBE` alias.

```js
import { Buffer } from 'node:buffer';

const buf = Buffer.allocUnsafe(6);

buf.writeUIntBE(0x1234567890ab, 0, 6);

console.log(buf);
// Prints: <Buffer 12 34 56 78 90 ab>
```
buf.writeUIntLE(value, offset, byteLength)#
History
| Version | Changes |
|---|---|
| v14.9.0, v12.19.0 | This function is also available as `writeUintLE`. |
| v10.0.0 | Removed |
| v0.5.5 | Added in: v0.5.5 |
- `value` <integer> Number to be written to `buf`.
- `offset` <integer> Number of bytes to skip before starting to write. Must satisfy `0 <= offset <= buf.length - byteLength`.
- `byteLength` <integer> Number of bytes to write. Must satisfy `0 < byteLength <= 6`.
- Returns: <integer> `offset` plus the number of bytes written.

Writes `byteLength` bytes of `value` to `buf` at the specified `offset` as little-endian. Supports up to 48 bits of accuracy. Behavior is undefined when `value` is anything other than an unsigned integer.

This function is also available under the `writeUintLE` alias.

```js
import { Buffer } from 'node:buffer';

const buf = Buffer.allocUnsafe(6);

buf.writeUIntLE(0x1234567890ab, 0, 6);

console.log(buf);
// Prints: <Buffer ab 90 78 56 34 12>
```
new Buffer(array)#
History
| Version | Changes |
|---|---|
| v10.0.0 | Calling this constructor emits a deprecation warning when run from code outside the |
| v7.2.1 | Calling this constructor no longer emits a deprecation warning. |
| v7.0.0 | Calling this constructor emits a deprecation warning now. |
| v6.0.0 | Deprecated since: v6.0.0 |
Deprecated: Use `Buffer.from(array)` instead.

- `array` <integer[]> An array of bytes to copy from.
new Buffer(arrayBuffer[, byteOffset[, length]])#
History
| Version | Changes |
|---|---|
| v10.0.0 | Calling this constructor emits a deprecation warning when run from code outside the |
| v7.2.1 | Calling this constructor no longer emits a deprecation warning. |
| v7.0.0 | Calling this constructor emits a deprecation warning now. |
| v6.0.0 | The |
| v6.0.0 | Deprecated since: v6.0.0 |
| v3.0.0 | Added in: v3.0.0 |
Deprecated: Use `Buffer.from(arrayBuffer[, byteOffset[, length]])` instead.

- `arrayBuffer` <ArrayBuffer> | <SharedArrayBuffer> An <ArrayBuffer>, <SharedArrayBuffer>, or the `.buffer` property of a <TypedArray>.
- `byteOffset` <integer> Index of first byte to expose. **Default:** `0`.
- `length` <integer> Number of bytes to expose. **Default:** `arrayBuffer.byteLength - byteOffset`.
new Buffer(buffer)#
History
| Version | Changes |
|---|---|
| v10.0.0 | Calling this constructor emits a deprecation warning when run from code outside the |
| v7.2.1 | Calling this constructor no longer emits a deprecation warning. |
| v7.0.0 | Calling this constructor emits a deprecation warning now. |
| v6.0.0 | Deprecated since: v6.0.0 |
Deprecated: Use `Buffer.from(buffer)` instead.

- `buffer` <Buffer> | <Uint8Array> An existing `Buffer` or <Uint8Array> from which to copy data.
new Buffer(size)#
History
| Version | Changes |
|---|---|
| v10.0.0 | Calling this constructor emits a deprecation warning when run from code outside the |
| v8.0.0 | The |
| v7.2.1 | Calling this constructor no longer emits a deprecation warning. |
| v7.0.0 | Calling this constructor emits a deprecation warning now. |
| v6.0.0 | Deprecated since: v6.0.0 |
- `size` <integer> The desired length of the new `Buffer`.

See `Buffer.alloc()` and `Buffer.allocUnsafe()`. This variant of the constructor is equivalent to `Buffer.alloc()`.
new Buffer(string[, encoding])#
History
| Version | Changes |
|---|---|
| v10.0.0 | Calling this constructor emits a deprecation warning when run from code outside the |
| v7.2.1 | Calling this constructor no longer emits a deprecation warning. |
| v7.0.0 | Calling this constructor emits a deprecation warning now. |
| v6.0.0 | Deprecated since: v6.0.0 |
Deprecated: Use `Buffer.from(string[, encoding])` instead.

Class: File#
History
| Version | Changes |
|---|---|
| v23.0.0 | Makes File instances cloneable. |
| v20.0.0 | No longer experimental. |
| v19.2.0, v18.13.0 | Added in: v19.2.0, v18.13.0 |
- Extends:<Blob>
A <File> provides information about files.
new buffer.File(sources, fileName[, options])#
- `sources` <string[]> | <ArrayBuffer[]> | <TypedArray[]> | <DataView[]> | <Blob[]> | <File[]> An array of string, <ArrayBuffer>, <TypedArray>, <DataView>, <File>, or <Blob> objects, or any mix of such objects, that will be stored within the `File`.
- `fileName` <string> The name of the file.
- `options` <Object>
  - `endings` <string> One of either `'transparent'` or `'native'`. When set to `'native'`, line endings in string source parts will be converted to the platform native line ending as specified by `require('node:os').EOL`.
  - `type` <string> The File content-type.
  - `lastModified` <number> The last modified date of the file. **Default:** `Date.now()`.
node:buffer module APIs#
While the `Buffer` object is available as a global, there are additional `Buffer`-related APIs that are available only via the `node:buffer` module, accessed using `require('node:buffer')`.
buffer.atob(data)#
Legacy: Use `Buffer.from(data, 'base64')` instead.

- `data` <any> The Base64-encoded input string.
Decodes a string of Base64-encoded data into bytes, and encodes those bytes into a string using Latin-1 (ISO-8859-1).

The `data` may be any JavaScript value that can be coerced into a string.

This function is only provided for compatibility with legacy web platform APIs and should never be used in new code, because they use strings to represent binary data and predate the introduction of typed arrays in JavaScript. For code running using Node.js APIs, converting between Base64-encoded strings and binary data should be performed using `Buffer.from(str, 'base64')` and `buf.toString('base64')`.
An automated migration is available:

```bash
npx codemod@latest @nodejs/buffer-atob-btoa
```

buffer.btoa(data)#
Legacy: Use `buf.toString('base64')` instead.

- `data` <any> An ASCII (Latin-1) string.
Decodes a string into bytes using Latin-1 (ISO-8859-1), and encodes those bytes into a string using Base64.

The `data` may be any JavaScript value that can be coerced into a string.

This function is only provided for compatibility with legacy web platform APIs and should never be used in new code, because they use strings to represent binary data and predate the introduction of typed arrays in JavaScript. For code running using Node.js APIs, converting between Base64-encoded strings and binary data should be performed using `Buffer.from(str, 'base64')` and `buf.toString('base64')`.
An automated migration is available:

```bash
npx codemod@latest @nodejs/buffer-atob-btoa
```

buffer.isAscii(input)#
- `input` <Buffer> | <ArrayBuffer> | <TypedArray> The input to validate.
- Returns: <boolean>

This function returns `true` if `input` contains only valid ASCII-encoded data, including the case in which `input` is empty.

Throws if the `input` is a detached array buffer.
buffer.isUtf8(input)#
- `input` <Buffer> | <ArrayBuffer> | <TypedArray> The input to validate.
- Returns: <boolean>

This function returns `true` if `input` contains only valid UTF-8-encoded data, including the case in which `input` is empty.

Throws if the `input` is a detached array buffer.
buffer.INSPECT_MAX_BYTES#
- Type: <integer> **Default:** `50`

Returns the maximum number of bytes that will be returned when `buf.inspect()` is called. This can be overridden by user modules. See `util.inspect()` for more details on `buf.inspect()` behavior.
buffer.kMaxLength#
- Type: <integer> The largest size allowed for a single `Buffer` instance.

An alias for `buffer.constants.MAX_LENGTH`.
buffer.kStringMaxLength#
- Type: <integer> The largest length allowed for a single `string` instance.

An alias for `buffer.constants.MAX_STRING_LENGTH`.
buffer.resolveObjectURL(id)#
History
| Version | Changes |
|---|---|
| v24.0.0, v22.17.0 | Marking the API stable. |
| v16.7.0 | Added in: v16.7.0 |
- `id` <string> A `'blob:nodedata:...'` URL string returned by a prior call to `URL.createObjectURL()`.
- Returns: <Blob>

Resolves a `'blob:nodedata:...'` URL to an associated <Blob> object registered using a prior call to `URL.createObjectURL()`.
buffer.transcode(source, fromEnc, toEnc)#
History
| Version | Changes |
|---|---|
| v8.0.0 | The |
| v7.1.0 | Added in: v7.1.0 |
- `source` <Buffer> | <Uint8Array> A `Buffer` or `Uint8Array` instance.
- `fromEnc` <string> The current encoding.
- `toEnc` <string> The target encoding.
- Returns: <Buffer>

Re-encodes the given `Buffer` or `Uint8Array` instance from one character encoding to another. Returns a new `Buffer` instance.

Throws if `fromEnc` or `toEnc` specify invalid character encodings or if conversion from `fromEnc` to `toEnc` is not permitted.

Encodings supported by `buffer.transcode()` are: `'ascii'`, `'utf8'`, `'utf16le'`, `'ucs2'`, `'latin1'`, and `'binary'`.
The transcoding process will use substitution characters if a given byte sequence cannot be adequately represented in the target encoding. For instance:
```js
import { Buffer, transcode } from 'node:buffer';

const newBuf = transcode(Buffer.from('€'), 'utf8', 'ascii');
console.log(newBuf.toString('ascii'));
// Prints: '?'
```
Because the Euro (€) sign is not representable in US-ASCII, it is replaced with `?` in the transcoded `Buffer`.
Buffer constants#
buffer.constants.MAX_LENGTH#
History
| Version | Changes |
|---|---|
| v22.0.0 | Value is changed to 2^53 - 1 on 64-bit architectures, and 2^31 - 1 on 32-bit architectures. |
| v15.0.0 | Value is changed to 2^32 on 64-bit architectures. |
| v14.0.0 | Value is changed from 2^31 - 1 to 2^32 - 1 on 64-bit architectures. |
| v8.2.0 | Added in: v8.2.0 |
- Type: <integer> The largest size allowed for a single `Buffer` instance.

On 32-bit architectures, this value is equal to 2^31 - 1 (about 2 GiB).

On 64-bit architectures, this value is equal to `Number.MAX_SAFE_INTEGER` (2^53 - 1, about 8 PiB).
It reflects `v8::Uint8Array::kMaxLength` under the hood.

This value is also available as `buffer.kMaxLength`.
Buffer.from(),Buffer.alloc(), andBuffer.allocUnsafe()#
In versions of Node.js prior to 6.0.0, `Buffer` instances were created using the `Buffer` constructor function, which allocates the returned `Buffer` differently based on what arguments are provided:
- Passing a number as the first argument to `Buffer()` (e.g. `new Buffer(10)`) allocates a new `Buffer` object of the specified size. Prior to Node.js 8.0.0, the memory allocated for such `Buffer` instances is not initialized and can contain sensitive data. Such `Buffer` instances must be subsequently initialized by using either `buf.fill(0)` or by writing to the entire `Buffer` before reading data from the `Buffer`. While this behavior is intentional to improve performance, development experience has demonstrated that a more explicit distinction is required between creating a fast-but-uninitialized `Buffer` versus creating a slower-but-safer `Buffer`. Since Node.js 8.0.0, `Buffer(num)` and `new Buffer(num)` return a `Buffer` with initialized memory.
- Passing a string, array, or `Buffer` as the first argument copies the passed object's data into the `Buffer`.
- Passing an <ArrayBuffer> or a <SharedArrayBuffer> returns a `Buffer` that shares allocated memory with the given array buffer.
Because the behavior of `new Buffer()` is different depending on the type of the first argument, security and reliability issues can be inadvertently introduced into applications when argument validation or `Buffer` initialization is not performed.
For example, if an attacker can cause an application to receive a number where a string is expected, the application may call `new Buffer(100)` instead of `new Buffer("100")`, leading it to allocate a 100-byte buffer instead of allocating a 3-byte buffer with content `"100"`. This is commonly possible using JSON API calls. Since JSON distinguishes between numeric and string types, it allows injection of numbers where a naively written application that does not validate its input sufficiently might expect to always receive a string. Before Node.js 8.0.0, the 100-byte buffer might contain arbitrary pre-existing in-memory data, and so may be used to expose in-memory secrets to a remote attacker. Since Node.js 8.0.0, exposure of memory cannot occur because the data is zero-filled. However, other attacks are still possible, such as causing very large buffers to be allocated by the server, leading to performance degradation or crashing on memory exhaustion.
To make the creation of `Buffer` instances more reliable and less error-prone, the various forms of the `new Buffer()` constructor have been deprecated and replaced by separate `Buffer.from()`, `Buffer.alloc()`, and `Buffer.allocUnsafe()` methods.

Developers should migrate all existing uses of the `new Buffer()` constructors to one of these new APIs.
- `Buffer.from(array)` returns a new `Buffer` that contains a copy of the provided octets.
- `Buffer.from(arrayBuffer[, byteOffset[, length]])` returns a new `Buffer` that shares the same allocated memory as the given <ArrayBuffer>.
- `Buffer.from(buffer)` returns a new `Buffer` that contains a copy of the contents of the given `Buffer`.
- `Buffer.from(string[, encoding])` returns a new `Buffer` that contains a copy of the provided string.
- `Buffer.alloc(size[, fill[, encoding]])` returns a new initialized `Buffer` of the specified size. This method is slower than `Buffer.allocUnsafe(size)` but guarantees that newly created `Buffer` instances never contain old data that is potentially sensitive. A `TypeError` will be thrown if `size` is not a number.
- `Buffer.allocUnsafe(size)` and `Buffer.allocUnsafeSlow(size)` each return a new uninitialized `Buffer` of the specified `size`. Because the `Buffer` is uninitialized, the allocated segment of memory might contain old data that is potentially sensitive.
`Buffer` instances returned by `Buffer.allocUnsafe()`, `Buffer.from(string)`, `Buffer.concat()`, and `Buffer.from(array)` may be allocated off a shared internal memory pool if `size` is less than or equal to half `Buffer.poolSize`. Instances returned by `Buffer.allocUnsafeSlow()` never use the shared internal memory pool.
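Pooling can be observed indirectly through the backing `ArrayBuffer` (a sketch only; the exact pool layout is an internal detail and not part of the stable API):

```javascript
// A small allocUnsafe() buffer may be carved out of the shared pool, so its
// backing ArrayBuffer can be larger than the buffer itself.
const pooled = Buffer.allocUnsafe(10);
console.log(pooled.buffer.byteLength >= 10); // true; typically the pool size

// allocUnsafeSlow() always gets its own backing allocation of exactly `size`.
const unpooled = Buffer.allocUnsafeSlow(10);
console.log(unpooled.buffer.byteLength); // 10
```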
The `--zero-fill-buffers` command-line option#
Node.js can be started using the `--zero-fill-buffers` command-line option to cause all newly-allocated `Buffer` instances to be zero-filled upon creation by default. Without the option, buffers created with `Buffer.allocUnsafe()` and `Buffer.allocUnsafeSlow()` are not zero-filled. Use of this flag can have a measurable negative impact on performance. Use the `--zero-fill-buffers` option only when necessary to enforce that newly allocated `Buffer` instances cannot contain old data that is potentially sensitive.
```bash
$ node --zero-fill-buffers
> Buffer.allocUnsafe(5);
<Buffer 00 00 00 00 00>
```
What makes `Buffer.allocUnsafe()` and `Buffer.allocUnsafeSlow()` "unsafe"?#
When calling `Buffer.allocUnsafe()` and `Buffer.allocUnsafeSlow()`, the segment of allocated memory is uninitialized (it is not zeroed-out). While this design makes the allocation of memory quite fast, the allocated segment of memory might contain old data that is potentially sensitive. Using a `Buffer` created by `Buffer.allocUnsafe()` without completely overwriting the memory can allow this old data to be leaked when the `Buffer` memory is read.
While there are clear performance advantages to using `Buffer.allocUnsafe()`, extra care must be taken in order to avoid introducing security vulnerabilities into an application.
C++ addons#
Addons are dynamically-linked shared objects written in C++. The `require()` function can load addons as ordinary Node.js modules. Addons provide an interface between JavaScript and C/C++ libraries.
There are three options for implementing addons:
- Node-API
- `nan` (Native Abstractions for Node.js)
- direct use of internal V8, libuv, and Node.js libraries
Unless there is a need for direct access to functionality which is not exposed by Node-API, use Node-API. Refer to C/C++ addons with Node-API for more information on Node-API.
When not using Node-API, implementing addons becomes more complex, requiring
knowledge of multiple components and APIs:
- V8: the C++ library Node.js uses to provide the JavaScript implementation. It provides the mechanisms for creating objects, calling functions, etc. V8's API is documented mostly in the `v8.h` header file (`deps/v8/include/v8.h` in the Node.js source tree), and is also available online.
- libuv: the C library that implements the Node.js event loop, its worker threads, and all of the asynchronous behaviors of the platform. It also serves as a cross-platform abstraction library, giving easy, POSIX-like access across all major operating systems to many common system tasks, such as interacting with the file system, sockets, timers, and system events. libuv also provides a threading abstraction similar to POSIX threads for more sophisticated asynchronous addons that need to move beyond the standard event loop. Addon authors should avoid blocking the event loop with I/O or other time-intensive tasks by offloading work via libuv to non-blocking system operations, worker threads, or a custom use of libuv threads.
- Internal Node.js libraries: Node.js itself exports C++ APIs that addons can use, the most important of which is the `node::ObjectWrap` class.
- Other statically linked libraries (including OpenSSL): these other libraries are located in the `deps/` directory in the Node.js source tree. Only the libuv, OpenSSL, V8, and zlib symbols are purposefully re-exported by Node.js and may be used to various extents by addons. See Linking to libraries included with Node.js for additional information.
All of the following examples are available for download and may be used as the starting point for an addon.
Hello world#
This "Hello world" example is a simple addon, written in C++, that is the equivalent of the following JavaScript code:
```js
module.exports.hello = () => 'world';
```
First, create the file `hello.cc`:
```cpp
// hello.cc
#include <node.h>

namespace demo {

using v8::FunctionCallbackInfo;
using v8::Isolate;
using v8::Local;
using v8::NewStringType;
using v8::Object;
using v8::String;
using v8::Value;

void Method(const FunctionCallbackInfo<Value>& args) {
  Isolate* isolate = args.GetIsolate();
  args.GetReturnValue().Set(String::NewFromUtf8(
      isolate, "world", NewStringType::kNormal).ToLocalChecked());
}

void Initialize(Local<Object> exports) {
  NODE_SET_METHOD(exports, "hello", Method);
}

NODE_MODULE(NODE_GYP_MODULE_NAME, Initialize)

}  // namespace demo
```
All Node.js addons must export an initialization function following the pattern:
```cpp
void Initialize(Local<Object> exports);
NODE_MODULE(NODE_GYP_MODULE_NAME, Initialize)
```
There is no semicolon after `NODE_MODULE` as it's not a function (see `node.h`).
The `module_name` must match the filename of the final binary (excluding the `.node` suffix).
In the `hello.cc` example, then, the initialization function is `Initialize` and the addon module name is `addon`.
When building addons with `node-gyp`, using the macro `NODE_GYP_MODULE_NAME` as the first parameter of `NODE_MODULE()` will ensure that the name of the final binary will be passed to `NODE_MODULE()`.
Addons defined with `NODE_MODULE()` cannot be loaded in multiple contexts or multiple threads at the same time.
Context-aware addons#
There are environments in which Node.js addons may need to be loaded multiple times in multiple contexts. For example, the Electron runtime runs multiple instances of Node.js in a single process. Each instance will have its own `require()` cache, and thus each instance will need a native addon to behave correctly when loaded via `require()`. This means that the addon must support multiple initializations.
A context-aware addon can be constructed by using the macro `NODE_MODULE_INITIALIZER`, which expands to the name of a function that Node.js will expect to find when it loads an addon. An addon can thus be initialized as in the following example:
```cpp
using namespace v8;

extern "C" NODE_MODULE_EXPORT void
NODE_MODULE_INITIALIZER(Local<Object> exports,
                        Local<Value> module,
                        Local<Context> context) {
  /* Perform addon initialization steps here. */
}
```
Another option is to use the macro `NODE_MODULE_INIT()`, which will also construct a context-aware addon. Unlike `NODE_MODULE()`, which is used to construct an addon around a given addon initializer function, `NODE_MODULE_INIT()` serves as the declaration of such an initializer to be followed by a function body.
The following three variables may be used inside the function body following an invocation of `NODE_MODULE_INIT()`:

- `Local<Object> exports`
- `Local<Value> module`
- `Local<Context> context`
Building a context-aware addon requires careful management of global static data to ensure stability and correctness. Since the addon may be loaded multiple times, potentially even from different threads, any global static data stored in the addon must be properly protected, and must not contain any persistent references to JavaScript objects. The reason for this is that JavaScript objects are only valid in one context, and will likely cause a crash when accessed from the wrong context or from a different thread than the one on which they were created.
The context-aware addon can be structured to avoid global static data by performing the following steps:
- Define a class which will hold per-addon-instance data and which has a static member of the form
  ```cpp
  static void DeleteInstance(void* data) {
    // Cast `data` to an instance of the class and delete it.
  }
  ```
- Heap-allocate an instance of this class in the addon initializer. This can be accomplished using the `new` keyword.
- Call `node::AddEnvironmentCleanupHook()`, passing it the above-created instance and a pointer to `DeleteInstance()`. This will ensure the instance is deleted when the environment is torn down.
- Store the instance of the class in a `v8::External`, and
- Pass the `v8::External` to all methods exposed to JavaScript by passing it to `v8::FunctionTemplate::New()` or `v8::Function::New()` which creates the native-backed JavaScript functions. The third parameter of `v8::FunctionTemplate::New()` or `v8::Function::New()` accepts the `v8::External` and makes it available in the native callback using the `v8::FunctionCallbackInfo::Data()` method.
This will ensure that the per-addon-instance data reaches each binding that can be called from JavaScript. The per-addon-instance data must also be passed into any asynchronous callbacks the addon may create.
The following example illustrates the implementation of a context-aware addon:
```cpp
#include <node.h>

using namespace v8;

class AddonData {
 public:
  explicit AddonData(Isolate* isolate) : call_count(0) {
    // Ensure this per-addon-instance data is deleted at environment cleanup.
    node::AddEnvironmentCleanupHook(isolate, DeleteInstance, this);
  }

  // Per-addon data.
  int call_count;

  static void DeleteInstance(void* data) {
    delete static_cast<AddonData*>(data);
  }
};

static void Method(const v8::FunctionCallbackInfo<v8::Value>& info) {
  // Retrieve the per-addon-instance data.
  AddonData* data =
      reinterpret_cast<AddonData*>(info.Data().As<External>()->Value());
  data->call_count++;
  info.GetReturnValue().Set((double)data->call_count);
}

// Initialize this addon to be context-aware.
NODE_MODULE_INIT(/* exports, module, context */) {
  Isolate* isolate = Isolate::GetCurrent();

  // Create a new instance of `AddonData` for this instance of the addon and
  // tie its life cycle to that of the Node.js environment.
  AddonData* data = new AddonData(isolate);

  // Wrap the data in a `v8::External` so we can pass it to the method we
  // expose.
  Local<External> external = External::New(isolate, data);

  // Expose the method `Method` to JavaScript, and make sure it receives the
  // per-addon-instance data we created above by passing `external` as the
  // third parameter to the `FunctionTemplate` constructor.
  exports->Set(context,
               String::NewFromUtf8(isolate, "method").ToLocalChecked(),
               FunctionTemplate::New(isolate, Method, external)
                   ->GetFunction(context).ToLocalChecked()).FromJust();
}
```
Worker support#
History
| Version | Changes |
|---|---|
| v14.8.0, v12.19.0 | Cleanup hooks may now be asynchronous. |
In order to be loaded from multiple Node.js environments, such as a main thread and a Worker thread, an addon needs to either:
- Be a Node-API addon, or
- Be declared as context-aware using `NODE_MODULE_INIT()` as described above
In order to support Worker threads, addons need to clean up any resources they may have allocated when such a thread exits. This can be achieved through the usage of the `AddEnvironmentCleanupHook()` function:
```cpp
void AddEnvironmentCleanupHook(v8::Isolate* isolate,
                               void (*fun)(void* arg),
                               void* arg);
```
This function adds a hook that will run before a given Node.js instance shuts down. If necessary, such hooks can be removed before they are run using `RemoveEnvironmentCleanupHook()`, which has the same signature. Callbacks are run in last-in first-out order.
If necessary, there is an additional pair of `AddEnvironmentCleanupHook()` and `RemoveEnvironmentCleanupHook()` overloads, where the cleanup hook takes a callback function. This can be used for shutting down asynchronous resources, such as any libuv handles registered by the addon.
The following `addon.cc` uses `AddEnvironmentCleanupHook`:
```cpp
// addon.cc
#include <node.h>
#include <assert.h>
#include <stdlib.h>

using node::AddEnvironmentCleanupHook;
using v8::HandleScope;
using v8::Isolate;
using v8::Local;
using v8::Object;

// Note: In a real-world application, do not rely on static/global data.
static char cookie[] = "yum yum";
static int cleanup_cb1_called = 0;
static int cleanup_cb2_called = 0;

static void cleanup_cb1(void* arg) {
  Isolate* isolate = static_cast<Isolate*>(arg);
  HandleScope scope(isolate);
  Local<Object> obj = Object::New(isolate);
  assert(!obj.IsEmpty());  // assert VM is still alive
  assert(obj->IsObject());
  cleanup_cb1_called++;
}

static void cleanup_cb2(void* arg) {
  assert(arg == static_cast<void*>(cookie));
  cleanup_cb2_called++;
}

static void sanity_check(void*) {
  assert(cleanup_cb1_called == 1);
  assert(cleanup_cb2_called == 1);
}

// Initialize this addon to be context-aware.
NODE_MODULE_INIT(/* exports, module, context */) {
  Isolate* isolate = Isolate::GetCurrent();

  AddEnvironmentCleanupHook(isolate, sanity_check, nullptr);
  AddEnvironmentCleanupHook(isolate, cleanup_cb2, cookie);
  AddEnvironmentCleanupHook(isolate, cleanup_cb1, isolate);
}
```
Test in JavaScript by running:
```js
// test.js
require('./build/Release/addon');
```
Building#
Once the source code has been written, it must be compiled into the binary `addon.node` file. To do so, create a file called `binding.gyp` in the top level of the project describing the build configuration of the module using a JSON-like format. This file is used by `node-gyp`, a tool written specifically to compile Node.js addons.
```json
{
  "targets": [
    {
      "target_name": "addon",
      "sources": [ "hello.cc" ]
    }
  ]
}
```
A version of the `node-gyp` utility is bundled and distributed with Node.js as part of `npm`. This version is not made directly available for developers to use and is intended only to support the ability to use the `npm install` command to compile and install addons. Developers who wish to use `node-gyp` directly can install it using the command `npm install -g node-gyp`. See the `node-gyp` installation instructions for more information, including platform-specific requirements.
Once the `binding.gyp` file has been created, use `node-gyp configure` to generate the appropriate project build files for the current platform. This will generate either a `Makefile` (on Unix platforms) or a `vcxproj` file (on Windows) in the `build/` directory.
Next, invoke the `node-gyp build` command to generate the compiled `addon.node` file. This will be put into the `build/Release/` directory.
When using `npm install` to install a Node.js addon, npm uses its own bundled version of `node-gyp` to perform this same set of actions, generating a compiled version of the addon for the user's platform on demand.
Once built, the binary addon can be used from within Node.js by pointing `require()` to the built `addon.node` module:
```js
// hello.js
const addon = require('./build/Release/addon');

console.log(addon.hello());
// Prints: 'world'
```
Because the exact path to the compiled addon binary can vary depending on how it is compiled (i.e. sometimes it may be in `./build/Debug/`), addons can use the `bindings` package to load the compiled module.
While the `bindings` package implementation is more sophisticated in how it locates addon modules, it is essentially using a `try…catch` pattern similar to:
```js
try {
  return require('./build/Release/addon.node');
} catch (err) {
  return require('./build/Debug/addon.node');
}
```
Linking to libraries included with Node.js#
Node.js uses statically linked libraries such as V8, libuv, and OpenSSL. All addons are required to link to V8 and may link to any of the other dependencies as well. Typically, this is as simple as including the appropriate `#include <...>` statements (e.g. `#include <v8.h>`) and `node-gyp` will locate the appropriate headers automatically. However, there are a few caveats to be aware of:
- When `node-gyp` runs, it will detect the specific release version of Node.js and download either the full source tarball or just the headers. If the full source is downloaded, addons will have complete access to the full set of Node.js dependencies. However, if only the Node.js headers are downloaded, then only the symbols exported by Node.js will be available.
- `node-gyp` can be run using the `--nodedir` flag pointing at a local Node.js source image. Using this option, the addon will have access to the full set of dependencies.
Loading addons using `require()`#
The filename extension of the compiled addon binary is `.node` (as opposed to `.dll` or `.so`). The `require()` function is written to look for files with the `.node` file extension and initialize those as dynamically-linked libraries.
When calling `require()`, the `.node` extension can usually be omitted and Node.js will still find and initialize the addon. One caveat, however, is that Node.js will first attempt to locate and load modules or JavaScript files that happen to share the same base name. For instance, if there is a file `addon.js` in the same directory as the binary `addon.node`, then `require('addon')` will give precedence to the `addon.js` file and load it instead.
Native abstractions for Node.js#
Each of the examples illustrated in this document directly uses the Node.js and V8 APIs for implementing addons. The V8 API can, and has, changed dramatically from one V8 release to the next (and from one major Node.js release to the next). With each change, addons may need to be updated and recompiled in order to continue functioning. The Node.js release schedule is designed to minimize the frequency and impact of such changes, but there is little that Node.js can do to ensure stability of the V8 APIs.
The Native Abstractions for Node.js (or `nan`) provide a set of tools that addon developers are recommended to use to keep compatibility between past and future releases of V8 and Node.js. See the `nan` examples for an illustration of how it can be used.
Node-API#
Node-API is an API for building native addons. It is independent of the underlying JavaScript runtime (e.g. V8) and is maintained as part of Node.js itself. This API will be Application Binary Interface (ABI) stable across versions of Node.js. It is intended to insulate addons from changes in the underlying JavaScript engine and allow modules compiled for one version to run on later versions of Node.js without recompilation. Addons are built/packaged with the same approach/tools outlined in this document (`node-gyp`, etc.). The only difference is the set of APIs that are used by the native code. Instead of using the V8 or Native Abstractions for Node.js APIs, the functions available in Node-API are used.
Creating and maintaining an addon that benefits from the ABI stabilityprovided by Node-API carries with it certainimplementation considerations.
To use Node-API in the above "Hello world" example, replace the content of `hello.cc` with the following. All other instructions remain the same.
```cpp
// hello.cc using Node-API
#include <node_api.h>

namespace demo {

napi_value Method(napi_env env, napi_callback_info args) {
  napi_value greeting;
  napi_status status;

  status = napi_create_string_utf8(env, "world", NAPI_AUTO_LENGTH, &greeting);
  if (status != napi_ok) return nullptr;
  return greeting;
}

napi_value init(napi_env env, napi_value exports) {
  napi_status status;
  napi_value fn;

  status = napi_create_function(env, nullptr, 0, Method, nullptr, &fn);
  if (status != napi_ok) return nullptr;

  status = napi_set_named_property(env, exports, "hello", fn);
  if (status != napi_ok) return nullptr;
  return exports;
}

NAPI_MODULE(NODE_GYP_MODULE_NAME, init)

}  // namespace demo
```
The functions available and how to use them are documented in C/C++ addons with Node-API.
Addon examples#
Following are some example addons intended to help developers get started. The examples use the V8 APIs. Refer to the online V8 reference for help with the various V8 calls, and to V8's Embedder's Guide for an explanation of several concepts used, such as handles, scopes, function templates, etc.
Each of these examples uses the following `binding.gyp` file:
```json
{
  "targets": [
    {
      "target_name": "addon",
      "sources": [ "addon.cc" ]
    }
  ]
}
```
In cases where there is more than one `.cc` file, simply add the additional filename to the `sources` array:
```json
"sources": ["addon.cc", "myexample.cc"]
```
Once the `binding.gyp` file is ready, the example addons can be configured and built using `node-gyp`:
```bash
node-gyp configure build
```
Function arguments#
Addons will typically expose objects and functions that can be accessed from JavaScript running within Node.js. When functions are invoked from JavaScript, the input arguments and return value must be mapped to and from the C/C++ code.
The following example illustrates how to read function arguments passed from JavaScript and how to return a result:
```cpp
// addon.cc
#include <node.h>

namespace demo {

using v8::Exception;
using v8::FunctionCallbackInfo;
using v8::Isolate;
using v8::Local;
using v8::Number;
using v8::Object;
using v8::String;
using v8::Value;

// This is the implementation of the "add" method.
// Input arguments are passed using the
// const FunctionCallbackInfo<Value>& args struct.
void Add(const FunctionCallbackInfo<Value>& args) {
  Isolate* isolate = args.GetIsolate();

  // Check the number of arguments passed.
  if (args.Length() < 2) {
    // Throw an Error that is passed back to JavaScript.
    isolate->ThrowException(Exception::TypeError(
        String::NewFromUtf8(isolate,
                            "Wrong number of arguments").ToLocalChecked()));
    return;
  }

  // Check the argument types.
  if (!args[0]->IsNumber() || !args[1]->IsNumber()) {
    isolate->ThrowException(Exception::TypeError(
        String::NewFromUtf8(isolate,
                            "Wrong arguments").ToLocalChecked()));
    return;
  }

  // Perform the operation.
  double value =
      args[0].As<Number>()->Value() + args[1].As<Number>()->Value();
  Local<Number> num = Number::New(isolate, value);

  // Set the return value (using the passed in
  // FunctionCallbackInfo<Value>&).
  args.GetReturnValue().Set(num);
}

void Init(Local<Object> exports) {
  NODE_SET_METHOD(exports, "add", Add);
}

NODE_MODULE(NODE_GYP_MODULE_NAME, Init)

}  // namespace demo
```
Once compiled, the example addon can be required and used from within Node.js:
```js
// test.js
const addon = require('./build/Release/addon');

console.log('This should be eight:', addon.add(3, 5));
```
Callbacks#
It is common practice within addons to pass JavaScript functions to a C++ function and execute them from there. The following example illustrates how to invoke such callbacks:
```cpp
// addon.cc
#include <node.h>

namespace demo {

using v8::Context;
using v8::Function;
using v8::FunctionCallbackInfo;
using v8::Isolate;
using v8::Local;
using v8::Null;
using v8::Object;
using v8::String;
using v8::Value;

void RunCallback(const FunctionCallbackInfo<Value>& args) {
  Isolate* isolate = args.GetIsolate();
  Local<Context> context = isolate->GetCurrentContext();
  Local<Function> cb = Local<Function>::Cast(args[0]);
  const unsigned argc = 1;
  Local<Value> argv[argc] = {
      String::NewFromUtf8(isolate, "hello world").ToLocalChecked() };
  cb->Call(context, Null(isolate), argc, argv).ToLocalChecked();
}

void Init(Local<Object> exports, Local<Object> module) {
  NODE_SET_METHOD(module, "exports", RunCallback);
}

NODE_MODULE(NODE_GYP_MODULE_NAME, Init)

}  // namespace demo
```
This example uses a two-argument form of `Init()` that receives the full `module` object as the second argument. This allows the addon to completely overwrite `exports` with a single function instead of adding the function as a property of `exports`.
To test it, run the following JavaScript:

```js
// test.js
const addon = require('./build/Release/addon');

addon((msg) => {
  console.log(msg);
  // Prints: 'hello world'
});
```
Object factory#
Addons can create and return new objects from within a C++ function asillustrated in the following example. An object is created and returned with apropertymsg that echoes the string passed tocreateObject():
```cpp
// addon.cc
#include <node.h>

namespace demo {

using v8::Context;
using v8::FunctionCallbackInfo;
using v8::Isolate;
using v8::Local;
using v8::Object;
using v8::String;
using v8::Value;

void CreateObject(const FunctionCallbackInfo<Value>& args) {
  Isolate* isolate = args.GetIsolate();
  Local<Context> context = isolate->GetCurrentContext();

  Local<Object> obj = Object::New(isolate);
  obj->Set(context,
           String::NewFromUtf8(isolate, "msg").ToLocalChecked(),
           args[0]->ToString(context).ToLocalChecked())
      .FromJust();

  args.GetReturnValue().Set(obj);
}

void Init(Local<Object> exports, Local<Object> module) {
  NODE_SET_METHOD(module, "exports", CreateObject);
}

NODE_MODULE(NODE_GYP_MODULE_NAME, Init)

}  // namespace demo
```
To test it in JavaScript:

```js
// test.js
const addon = require('./build/Release/addon');

const obj1 = addon('hello');
const obj2 = addon('world');
console.log(obj1.msg, obj2.msg);
// Prints: 'hello world'
```
Function factory#
Another common scenario is creating JavaScript functions that wrap C++ functions and returning those back to JavaScript:

```cpp
// addon.cc
#include <node.h>

namespace demo {

using v8::Context;
using v8::Function;
using v8::FunctionCallbackInfo;
using v8::FunctionTemplate;
using v8::Isolate;
using v8::Local;
using v8::Object;
using v8::String;
using v8::Value;

void MyFunction(const FunctionCallbackInfo<Value>& args) {
  Isolate* isolate = args.GetIsolate();
  args.GetReturnValue().Set(String::NewFromUtf8(
      isolate, "hello world").ToLocalChecked());
}

void CreateFunction(const FunctionCallbackInfo<Value>& args) {
  Isolate* isolate = args.GetIsolate();
  Local<Context> context = isolate->GetCurrentContext();

  Local<FunctionTemplate> tpl = FunctionTemplate::New(isolate, MyFunction);
  Local<Function> fn = tpl->GetFunction(context).ToLocalChecked();

  // omit this to make it anonymous
  fn->SetName(String::NewFromUtf8(
      isolate, "theFunction").ToLocalChecked());

  args.GetReturnValue().Set(fn);
}

void Init(Local<Object> exports, Local<Object> module) {
  NODE_SET_METHOD(module, "exports", CreateFunction);
}

NODE_MODULE(NODE_GYP_MODULE_NAME, Init)

}  // namespace demo
```
To test:
```js
// test.js
const addon = require('./build/Release/addon');

const fn = addon();
console.log(fn());
// Prints: 'hello world'
```
Wrapping C++ objects#
It is also possible to wrap C++ objects/classes in a way that allows new instances to be created using the JavaScript `new` operator:
```cpp
// addon.cc
#include <node.h>
#include "myobject.h"

namespace demo {

using v8::Local;
using v8::Object;

void InitAll(Local<Object> exports) {
  MyObject::Init(exports);
}

NODE_MODULE(NODE_GYP_MODULE_NAME, InitAll)

}  // namespace demo
```
Then, in `myobject.h`, the wrapper class inherits from `node::ObjectWrap`:
```cpp
// myobject.h
#ifndef MYOBJECT_H
#define MYOBJECT_H

#include <node.h>
#include <node_object_wrap.h>

namespace demo {

class MyObject : public node::ObjectWrap {
 public:
  static void Init(v8::Local<v8::Object> exports);

 private:
  explicit MyObject(double value = 0);
  ~MyObject();

  static void New(const v8::FunctionCallbackInfo<v8::Value>& args);
  static void PlusOne(const v8::FunctionCallbackInfo<v8::Value>& args);

  double value_;
};

}  // namespace demo

#endif
```
In `myobject.cc`, implement the various methods that are to be exposed. In the following code, the method `plusOne()` is exposed by adding it to the constructor's prototype:
```cpp
// myobject.cc
#include "myobject.h"

namespace demo {

using v8::Context;
using v8::Function;
using v8::FunctionCallbackInfo;
using v8::FunctionTemplate;
using v8::Isolate;
using v8::Local;
using v8::Number;
using v8::Object;
using v8::ObjectTemplate;
using v8::String;
using v8::Value;

MyObject::MyObject(double value) : value_(value) {
}

MyObject::~MyObject() {
}

void MyObject::Init(Local<Object> exports) {
  Isolate* isolate = Isolate::GetCurrent();
  Local<Context> context = isolate->GetCurrentContext();

  Local<ObjectTemplate> addon_data_tpl = ObjectTemplate::New(isolate);
  addon_data_tpl->SetInternalFieldCount(1);  // 1 field for the MyObject::New()
  Local<Object> addon_data =
      addon_data_tpl->NewInstance(context).ToLocalChecked();

  // Prepare constructor template
  Local<FunctionTemplate> tpl = FunctionTemplate::New(isolate, New, addon_data);
  tpl->SetClassName(String::NewFromUtf8(isolate, "MyObject").ToLocalChecked());
  tpl->InstanceTemplate()->SetInternalFieldCount(1);

  // Prototype
  NODE_SET_PROTOTYPE_METHOD(tpl, "plusOne", PlusOne);

  Local<Function> constructor = tpl->GetFunction(context).ToLocalChecked();
  addon_data->SetInternalField(0, constructor);
  exports->Set(context, String::NewFromUtf8(
      isolate, "MyObject").ToLocalChecked(),
      constructor).FromJust();
}

void MyObject::New(const FunctionCallbackInfo<Value>& args) {
  Isolate* isolate = args.GetIsolate();
  Local<Context> context = isolate->GetCurrentContext();

  if (args.IsConstructCall()) {
    // Invoked as constructor: `new MyObject(...)`
    double value = args[0]->IsUndefined() ?
        0 : args[0]->NumberValue(context).FromMaybe(0);
    MyObject* obj = new MyObject(value);
    obj->Wrap(args.This());
    args.GetReturnValue().Set(args.This());
  } else {
    // Invoked as plain function `MyObject(...)`, turn into construct call.
    const int argc = 1;
    Local<Value> argv[argc] = { args[0] };
    Local<Function> cons =
        args.Data().As<Object>()->GetInternalField(0)
            .As<Value>().As<Function>();
    Local<Object> result =
        cons->NewInstance(context, argc, argv).ToLocalChecked();
    args.GetReturnValue().Set(result);
  }
}

void MyObject::PlusOne(const FunctionCallbackInfo<Value>& args) {
  Isolate* isolate = args.GetIsolate();

  MyObject* obj = ObjectWrap::Unwrap<MyObject>(args.This());
  obj->value_ += 1;

  args.GetReturnValue().Set(Number::New(isolate, obj->value_));
}

}  // namespace demo
```
To build this example, the `myobject.cc` file must be added to the `binding.gyp`:
```json
{
  "targets": [
    {
      "target_name": "addon",
      "sources": [ "addon.cc", "myobject.cc" ]
    }
  ]
}
```
Test it with:
```js
// test.js
const addon = require('./build/Release/addon');

const obj = new addon.MyObject(10);
console.log(obj.plusOne());
// Prints: 11
console.log(obj.plusOne());
// Prints: 12
console.log(obj.plusOne());
// Prints: 13
```
The destructor for a wrapper object will run when the object is garbage-collected. For destructor testing, there are command-line flags that can be used to make it possible to force garbage collection. These flags are provided by the underlying V8 JavaScript engine. They are subject to change or removal at any time. They are not documented by Node.js or V8, and they should never be used outside of testing.
During shutdown of the process or worker threads, destructors are not called by the JS engine. Therefore it is the responsibility of the user to track these objects and ensure proper destruction to avoid resource leaks.
Factory of wrapped objects#
Alternatively, it is possible to use a factory pattern to avoid explicitly creating object instances using the JavaScript `new` operator:
```js
const obj = addon.createObject();
// instead of:
// const obj = new addon.Object();
```
First, the `createObject()` method is implemented in `addon.cc`:
```cpp
// addon.cc
#include <node.h>
#include "myobject.h"

namespace demo {

using v8::FunctionCallbackInfo;
using v8::Isolate;
using v8::Local;
using v8::Object;
using v8::String;
using v8::Value;

void CreateObject(const FunctionCallbackInfo<Value>& args) {
  MyObject::NewInstance(args);
}

void InitAll(Local<Object> exports, Local<Object> module) {
  MyObject::Init();

  NODE_SET_METHOD(module, "exports", CreateObject);
}

NODE_MODULE(NODE_GYP_MODULE_NAME, InitAll)

}  // namespace demo
```
In `myobject.h`, the static method `NewInstance()` is added to handle instantiating the object. This method takes the place of using `new` in JavaScript:

```cpp
// myobject.h
#ifndef MYOBJECT_H
#define MYOBJECT_H

#include <node.h>
#include <node_object_wrap.h>

namespace demo {

class MyObject : public node::ObjectWrap {
 public:
  static void Init();
  static void NewInstance(const v8::FunctionCallbackInfo<v8::Value>& args);

 private:
  explicit MyObject(double value = 0);
  ~MyObject();

  static void New(const v8::FunctionCallbackInfo<v8::Value>& args);
  static void PlusOne(const v8::FunctionCallbackInfo<v8::Value>& args);
  static v8::Global<v8::Function> constructor;
  double value_;
};

}  // namespace demo

#endif
```
The implementation in `myobject.cc` is similar to the previous example:

```cpp
// myobject.cc
#include <node.h>
#include "myobject.h"

namespace demo {

using node::AddEnvironmentCleanupHook;
using v8::Context;
using v8::Function;
using v8::FunctionCallbackInfo;
using v8::FunctionTemplate;
using v8::Global;
using v8::Isolate;
using v8::Local;
using v8::Number;
using v8::Object;
using v8::String;
using v8::Value;

// Warning! This is not thread-safe, this addon cannot be used for worker
// threads.
Global<Function> MyObject::constructor;

MyObject::MyObject(double value) : value_(value) {
}

MyObject::~MyObject() {
}

void MyObject::Init() {
  Isolate* isolate = Isolate::GetCurrent();

  // Prepare constructor template
  Local<FunctionTemplate> tpl = FunctionTemplate::New(isolate, New);
  tpl->SetClassName(String::NewFromUtf8(isolate, "MyObject").ToLocalChecked());
  tpl->InstanceTemplate()->SetInternalFieldCount(1);

  // Prototype
  NODE_SET_PROTOTYPE_METHOD(tpl, "plusOne", PlusOne);

  Local<Context> context = isolate->GetCurrentContext();
  constructor.Reset(isolate, tpl->GetFunction(context).ToLocalChecked());

  AddEnvironmentCleanupHook(isolate, [](void*) {
    constructor.Reset();
  }, nullptr);
}

void MyObject::New(const FunctionCallbackInfo<Value>& args) {
  Isolate* isolate = args.GetIsolate();
  Local<Context> context = isolate->GetCurrentContext();

  if (args.IsConstructCall()) {
    // Invoked as constructor: `new MyObject(...)`
    double value = args[0]->IsUndefined() ?
        0 : args[0]->NumberValue(context).FromMaybe(0);
    MyObject* obj = new MyObject(value);
    obj->Wrap(args.This());
    args.GetReturnValue().Set(args.This());
  } else {
    // Invoked as plain function `MyObject(...)`, turn into construct call.
    const int argc = 1;
    Local<Value> argv[argc] = { args[0] };
    Local<Function> cons = Local<Function>::New(isolate, constructor);
    Local<Object> instance =
        cons->NewInstance(context, argc, argv).ToLocalChecked();
    args.GetReturnValue().Set(instance);
  }
}

void MyObject::NewInstance(const FunctionCallbackInfo<Value>& args) {
  Isolate* isolate = args.GetIsolate();

  const unsigned argc = 1;
  Local<Value> argv[argc] = { args[0] };
  Local<Function> cons = Local<Function>::New(isolate, constructor);
  Local<Context> context = isolate->GetCurrentContext();
  Local<Object> instance =
      cons->NewInstance(context, argc, argv).ToLocalChecked();

  args.GetReturnValue().Set(instance);
}

void MyObject::PlusOne(const FunctionCallbackInfo<Value>& args) {
  Isolate* isolate = args.GetIsolate();

  MyObject* obj = ObjectWrap::Unwrap<MyObject>(args.This());
  obj->value_ += 1;

  args.GetReturnValue().Set(Number::New(isolate, obj->value_));
}

}  // namespace demo
```
Once again, to build this example, the `myobject.cc` file must be added to the `binding.gyp`:
```json
{
  "targets": [
    {
      "target_name": "addon",
      "sources": [ "addon.cc", "myobject.cc" ]
    }
  ]
}
```
Test it with:
```js
// test.js
const createObject = require('./build/Release/addon');

const obj = createObject(10);
console.log(obj.plusOne());
// Prints: 11
console.log(obj.plusOne());
// Prints: 12
console.log(obj.plusOne());
// Prints: 13

const obj2 = createObject(20);
console.log(obj2.plusOne());
// Prints: 21
console.log(obj2.plusOne());
// Prints: 22
console.log(obj2.plusOne());
// Prints: 23
```
Passing wrapped objects around#
In addition to wrapping and returning C++ objects, it is possible to pass wrapped objects around by unwrapping them with the Node.js helper function `node::ObjectWrap::Unwrap`. The following example shows a function `add()` that can take two `MyObject` objects as input arguments:
```cpp
// addon.cc
#include <node.h>
#include <node_object_wrap.h>
#include "myobject.h"

namespace demo {

using v8::Context;
using v8::FunctionCallbackInfo;
using v8::Isolate;
using v8::Local;
using v8::Number;
using v8::Object;
using v8::String;
using v8::Value;

void CreateObject(const FunctionCallbackInfo<Value>& args) {
  MyObject::NewInstance(args);
}

void Add(const FunctionCallbackInfo<Value>& args) {
  Isolate* isolate = args.GetIsolate();
  Local<Context> context = isolate->GetCurrentContext();

  MyObject* obj1 = node::ObjectWrap::Unwrap<MyObject>(
      args[0]->ToObject(context).ToLocalChecked());
  MyObject* obj2 = node::ObjectWrap::Unwrap<MyObject>(
      args[1]->ToObject(context).ToLocalChecked());

  double sum = obj1->value() + obj2->value();
  args.GetReturnValue().Set(Number::New(isolate, sum));
}

void InitAll(Local<Object> exports) {
  MyObject::Init();

  NODE_SET_METHOD(exports, "createObject", CreateObject);
  NODE_SET_METHOD(exports, "add", Add);
}

NODE_MODULE(NODE_GYP_MODULE_NAME, InitAll)

}  // namespace demo
```

In `myobject.h`, a new public method is added to allow access to private values after unwrapping the object.
```cpp
// myobject.h
#ifndef MYOBJECT_H
#define MYOBJECT_H

#include <node.h>
#include <node_object_wrap.h>

namespace demo {

class MyObject : public node::ObjectWrap {
 public:
  static void Init();
  static void NewInstance(const v8::FunctionCallbackInfo<v8::Value>& args);
  inline double value() const { return value_; }

 private:
  explicit MyObject(double value = 0);
  ~MyObject();

  static void New(const v8::FunctionCallbackInfo<v8::Value>& args);
  static v8::Global<v8::Function> constructor;
  double value_;
};

}  // namespace demo

#endif
```

The implementation of `myobject.cc` remains similar to the previous version:
```cpp
// myobject.cc
#include <node.h>
#include "myobject.h"

namespace demo {

using node::AddEnvironmentCleanupHook;
using v8::Context;
using v8::Function;
using v8::FunctionCallbackInfo;
using v8::FunctionTemplate;
using v8::Global;
using v8::Isolate;
using v8::Local;
using v8::Object;
using v8::String;
using v8::Value;

// Warning! This is not thread-safe, this addon cannot be used for worker
// threads.
Global<Function> MyObject::constructor;

MyObject::MyObject(double value) : value_(value) {
}

MyObject::~MyObject() {
}

void MyObject::Init() {
  Isolate* isolate = Isolate::GetCurrent();
  // Prepare constructor template
  Local<FunctionTemplate> tpl = FunctionTemplate::New(isolate, New);
  tpl->SetClassName(String::NewFromUtf8(isolate, "MyObject").ToLocalChecked());
  tpl->InstanceTemplate()->SetInternalFieldCount(1);

  Local<Context> context = isolate->GetCurrentContext();
  constructor.Reset(isolate, tpl->GetFunction(context).ToLocalChecked());

  AddEnvironmentCleanupHook(isolate, [](void*) {
    constructor.Reset();
  }, nullptr);
}

void MyObject::New(const FunctionCallbackInfo<Value>& args) {
  Isolate* isolate = args.GetIsolate();
  Local<Context> context = isolate->GetCurrentContext();

  if (args.IsConstructCall()) {
    // Invoked as constructor: `new MyObject(...)`
    double value = args[0]->IsUndefined() ?
        0 : args[0]->NumberValue(context).FromMaybe(0);
    MyObject* obj = new MyObject(value);
    obj->Wrap(args.This());
    args.GetReturnValue().Set(args.This());
  } else {
    // Invoked as plain function `MyObject(...)`, turn into construct call.
    const int argc = 1;
    Local<Value> argv[argc] = { args[0] };
    Local<Function> cons = Local<Function>::New(isolate, constructor);
    Local<Object> instance =
        cons->NewInstance(context, argc, argv).ToLocalChecked();
    args.GetReturnValue().Set(instance);
  }
}

void MyObject::NewInstance(const FunctionCallbackInfo<Value>& args) {
  Isolate* isolate = args.GetIsolate();

  const unsigned argc = 1;
  Local<Value> argv[argc] = { args[0] };
  Local<Function> cons = Local<Function>::New(isolate, constructor);
  Local<Context> context = isolate->GetCurrentContext();
  Local<Object> instance =
      cons->NewInstance(context, argc, argv).ToLocalChecked();

  args.GetReturnValue().Set(instance);
}

}  // namespace demo
```

Test it with:
```js
// test.js
const addon = require('./build/Release/addon');

const obj1 = addon.createObject(10);
const obj2 = addon.createObject(20);
const result = addon.add(obj1, obj2);

console.log(result);
// Prints: 30
```

Node-API#
Node-API (formerly N-API) is an API for building native Addons. It is independent from the underlying JavaScript runtime (for example, V8) and is maintained as part of Node.js itself. This API will be Application Binary Interface (ABI) stable across versions of Node.js. It is intended to insulate addons from changes in the underlying JavaScript engine and allow modules compiled for one major version to run on later major versions of Node.js without recompilation. The ABI Stability guide provides a more in-depth explanation.

Addons are built/packaged with the same approach/tools outlined in the section titled C++ Addons. The only difference is the set of APIs that are used by the native code. Instead of using the V8 or Native Abstractions for Node.js APIs, the functions available in Node-API are used.

APIs exposed by Node-API are generally used to create and manipulate JavaScript values. Concepts and operations generally map to ideas specified in the ECMA-262 Language Specification. The APIs have the following properties:
- All Node-API calls return a status code of type `napi_status`. This status indicates whether the API call succeeded or failed.
- The API's return value is passed via an out parameter.
- All JavaScript values are abstracted behind an opaque type named `napi_value`.
- In case of an error status code, additional information can be obtained using `napi_get_last_error_info`. More information can be found in the Error handling section.
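The calling convention described above can be illustrated with a small self-contained sketch. The `demo_*` names below are stand-ins, not real Node-API functions; the point is the shape shared by all Node-API calls: a status-code return value for success or failure, with the actual result delivered through an out parameter.

```c
#include <stddef.h>

/* Stand-in status enum mirroring the shape of napi_status. */
typedef enum { demo_ok, demo_invalid_arg } demo_status;

/* Every call returns a status; the result comes back via an out parameter. */
static demo_status demo_create_int32(int value, int* result) {
  if (result == NULL)
    return demo_invalid_arg;  /* failure is reported through the status code */
  *result = value;            /* success: result written to the out parameter */
  return demo_ok;
}
```

A caller checks the status before touching the out parameter, exactly as real Node-API code checks for `napi_ok` before using a `napi_value`.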
Writing addons in various programming languages#
Node-API is a C API that ensures ABI stability across Node.js versions and different compiler levels. With this stability guarantee, it is possible to write addons in other programming languages on top of Node-API. Refer to language and engine bindings for details on support for other programming languages and engines.

node-addon-api is the official C++ binding that provides a more efficient way to write C++ code that calls Node-API. This wrapper is a header-only library that offers an inlinable C++ API. Binaries built with `node-addon-api` will depend on the symbols of the Node-API C-based functions exported by Node.js. The following code snippet is an example of `node-addon-api`:
```cpp
Object obj = Object::New(env);
obj["foo"] = String::New(env, "bar");
```

The above `node-addon-api` C++ code is equivalent to the following C-based Node-API code:
```c
napi_status status;
napi_value object, string;

status = napi_create_object(env, &object);
if (status != napi_ok) {
  napi_throw_error(env, ...);
  return;
}

status = napi_create_string_utf8(env, "bar", NAPI_AUTO_LENGTH, &string);
if (status != napi_ok) {
  napi_throw_error(env, ...);
  return;
}

status = napi_set_named_property(env, object, "foo", string);
if (status != napi_ok) {
  napi_throw_error(env, ...);
  return;
}
```

The end result is that the addon only uses the exported C APIs. Even though the addon is written in C++, it still gets the benefits of the ABI stability provided by the C Node-API.
When using `node-addon-api` instead of the C APIs, start with the API docs for `node-addon-api`.

The Node-API Resource offers an excellent orientation and tips for developers just getting started with Node-API and `node-addon-api`. Additional media resources can be found on the Node-API Media page.
Implications of ABI stability#
Although Node-API provides an ABI stability guarantee, other parts of Node.js do not, and any external libraries used from the addon may not. In particular, none of the following APIs provide an ABI stability guarantee across major versions:

- the Node.js C++ APIs available via any of

  ```cpp
  #include <node.h>
  #include <node_buffer.h>
  #include <node_version.h>
  #include <node_object_wrap.h>
  ```

- the libuv APIs which are also included with Node.js and available via

  ```cpp
  #include <uv.h>
  ```

- the V8 API available via

  ```cpp
  #include <v8.h>
  ```

Thus, for an addon to remain ABI-compatible across Node.js major versions, it must use Node-API exclusively by restricting itself to using

```c
#include <node_api.h>
```

and by checking, for all external libraries that it uses, that the external library makes ABI stability guarantees similar to Node-API.
Enum values in ABI stability#
All enum data types defined in Node-API should be considered as fixed-size `int32_t` values. Bit-flag enum types should be explicitly documented; their values work with bit operators such as bitwise OR (`|`). Unless otherwise documented, an enum type should be considered to be extensible.
A new enum value will be added at the end of the enum definition. An enum value will not be removed or renamed.
For an enum type returned from a Node-API function, or provided as an out parameter of a Node-API function, the value is an integer value and an addon should handle unknown values. New values are allowed to be introduced without a version guard. For example, when checking `napi_status` in switch statements, an addon should include a default branch, as new status codes may be introduced in newer Node.js versions.
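As a concrete illustration of the default-branch guidance, the enum below reproduces only the first few `napi_status` values from this document so the sketch compiles standalone; in a real addon the definition comes from `node_api.h`.

```c
/* Subset of napi_status, reproduced from the definition in this document
 * so the example is self-contained. */
typedef enum {
  napi_ok,
  napi_invalid_arg,
  napi_object_expected,
  napi_string_expected
} napi_status;

/* Map a status to a message. The default branch keeps the addon working
 * when a newer Node.js version returns a status code that was added after
 * this code was written. */
static const char* status_to_message(napi_status status) {
  switch (status) {
    case napi_ok:              return "ok";
    case napi_invalid_arg:     return "invalid argument";
    case napi_object_expected: return "object expected";
    case napi_string_expected: return "string expected";
    default:                   return "unknown status";  /* forward compatible */
  }
}
```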
For an enum type used in an in-parameter, the result of passing an unknown integer value to Node-API functions is undefined unless otherwise documented. A new value is added with a version guard to indicate the Node-API version in which it was introduced. For example, `napi_get_all_property_names` can be extended with a new enum value of `napi_key_filter`.

For an enum type used in both in-parameters and out-parameters, new values are allowed to be introduced without a version guard.
Building#
Unlike modules written in JavaScript, developing and deploying Node.js native addons using Node-API requires an additional set of tools. Besides the basic tools required to develop for Node.js, the native addon developer requires a toolchain that can compile C and C++ code into a binary. In addition, depending upon how the native addon is deployed, the user of the native addon will also need to have a C/C++ toolchain installed.

For Linux developers, the necessary C/C++ toolchain packages are readily available. GCC is widely used in the Node.js community to build and test across a variety of platforms. For many developers, the LLVM compiler infrastructure is also a good choice.

For Mac developers, Xcode offers all the required compiler tools. However, it is not necessary to install the entire Xcode IDE. The following command installs the necessary toolchain:
```bash
xcode-select --install
```

For Windows developers, Visual Studio offers all the required compiler tools. However, it is not necessary to install the entire Visual Studio IDE. The following command installs the necessary toolchain:

```bash
npm install --global windows-build-tools
```

The sections below describe the additional tools available for developing and deploying Node.js native addons.
Build tools#
Both of the tools listed here require that users of the native addon have a C/C++ toolchain installed in order to successfully install the native addon.
node-gyp#
node-gyp is a build system based on the gyp-next fork of Google's GYP tool and comes bundled with npm. GYP, and therefore node-gyp, requires that Python be installed.
Historically, node-gyp has been the tool of choice for building native addons. It has widespread adoption and documentation. However, some developers have run into limitations in node-gyp.
CMake.js#
CMake.js is an alternative build system based on CMake.

CMake.js is a good choice for projects that already use CMake or for developers affected by limitations in node-gyp. build_with_cmake is an example of a CMake-based native addon project.
Uploading precompiled binaries#
The three tools listed here permit native addon developers and maintainers to create and upload binaries to public or private servers. These tools are typically integrated with CI/CD build systems like Travis CI and AppVeyor to build and upload binaries for a variety of platforms and architectures. These binaries are then available for download by users who do not need to have a C/C++ toolchain installed.
node-pre-gyp#
node-pre-gyp is a tool based on node-gyp that adds the ability to upload binaries to a server of the developer's choice. node-pre-gyp has particularly good support for uploading binaries to Amazon S3.
prebuild#
prebuild is a tool that supports builds using either node-gyp or CMake.js. Unlike node-pre-gyp which supports a variety of servers, prebuild uploads binaries only to GitHub releases. prebuild is a good choice for GitHub projects using CMake.js.
prebuildify#
prebuildify is a tool based on node-gyp. The advantage of prebuildify is that the built binaries are bundled with the native addon when it's uploaded to npm. The binaries are downloaded from npm and are immediately available to the module user when the native addon is installed.
Usage#
In order to use the Node-API functions, include the file `node_api.h` which is located in the src directory in the node development tree:

```c
#include <node_api.h>
```

This will opt into the default `NAPI_VERSION` for the given release of Node.js. In order to ensure compatibility with specific versions of Node-API, the version can be specified explicitly when including the header:

```c
#define NAPI_VERSION 3
#include <node_api.h>
```

This restricts the Node-API surface to just the functionality that was available in the specified (and earlier) versions.

Some of the Node-API surface is experimental and requires explicit opt-in:

```c
#define NAPI_EXPERIMENTAL
#include <node_api.h>
```

In this case the entire API surface, including any experimental APIs, will be available to the module code.

Occasionally, experimental features are introduced that affect already-released and stable APIs. These features can be disabled by an opt-out:

```c
#define NAPI_EXPERIMENTAL
#define NODE_API_EXPERIMENTAL_<FEATURE_NAME>_OPT_OUT
#include <node_api.h>
```

where `<FEATURE_NAME>` is the name of an experimental feature that affects both experimental and stable APIs.
Node-API version matrix#
Up until version 9, Node-API versions were additive and versioned independently from Node.js. This meant that any version was an extension to the previous version in that it had all of the APIs from the previous version with some additions. Each Node.js version only supported a single Node-API version. For example, v18.15.0 supports only Node-API version 8. ABI stability was achieved because 8 was a strict superset of all previous versions.

As of version 9, while Node-API versions continue to be versioned independently, an add-on that ran with Node-API version 9 may need code updates to run with Node-API version 10. ABI stability is maintained, however, because Node.js versions that support Node-API versions higher than 8 will support all versions between 8 and the highest version they support, and will default to providing the version 8 APIs unless an add-on opts into a higher Node-API version. This approach provides the flexibility of better optimizing existing Node-API functions while maintaining ABI stability. Existing add-ons can continue to run without recompilation using an earlier version of Node-API. If an add-on needs functionality from a newer Node-API version, changes to existing code and recompilation will be needed to use those new functions anyway.

In versions of Node.js that support Node-API version 9 and later, defining `NAPI_VERSION=X` and using the existing add-on initialization macros will bake the requested Node-API version to be used at runtime into the add-on. If `NAPI_VERSION` is not set, it will default to 8.

This table may not be up to date in older release streams; the most up-to-date information is in the latest API documentation: Node-API version matrix.
| Node-API version | Supported In |
|---|---|
| 10 | v22.14.0+, v23.6.0+ and all later versions |
| 9 | v18.17.0+, v20.3.0+, v21.0.0 and all later versions |
| 8 | v12.22.0+, v14.17.0+, v15.12.0+, v16.0.0 and all later versions |
| 7 | v10.23.0+, v12.19.0+, v14.12.0+, v15.0.0 and all later versions |
| 6 | v10.20.0+, v12.17.0+, v14.0.0 and all later versions |
| 5 | v10.17.0+, v12.11.0+, v13.0.0 and all later versions |
| 4 | v10.16.0+, v11.8.0+, v12.0.0 and all later versions |
| 3 | v6.14.2*, v8.11.2+, v9.11.0+*, v10.0.0 and all later versions |
| 2 | v8.10.0+*, v9.3.0+*, v10.0.0 and all later versions |
| 1 | v8.6.0+**, v9.0.0+*, v10.0.0 and all later versions |
* Node-API was experimental.
** Node.js 8.0.0 included Node-API as experimental. It was released as Node-API version 1 but continued to evolve until Node.js 8.6.0. The API is different in versions prior to Node.js 8.6.0. We recommend Node-API version 3 or later.

Each API documented for Node-API will have a header named `added in:`, and APIs which are stable will have the additional header `Node-API version:`. APIs are directly usable when using a Node.js version which supports the Node-API version shown in `Node-API version:` or higher. When using a Node.js version that does not support the `Node-API version:` listed, or if there is no `Node-API version:` listed, then the API will only be available if `#define NAPI_EXPERIMENTAL` precedes the inclusion of `node_api.h` or `js_native_api.h`. If an API appears not to be available on a version of Node.js which is later than the one shown in `added in:` then this is most likely the reason for the apparent absence.

The Node-APIs associated strictly with accessing ECMAScript features from native code can be found separately in `js_native_api.h` and `js_native_api_types.h`. The APIs defined in these headers are included in `node_api.h` and `node_api_types.h`. The headers are structured in this way in order to allow implementations of Node-API outside of Node.js. For those implementations the Node.js specific APIs may not be applicable.

The Node.js-specific parts of an addon can be separated from the code that exposes the actual functionality to the JavaScript environment so that the latter may be used with multiple implementations of Node-API. In the example below, `addon.c` and `addon.h` refer only to `js_native_api.h`. This ensures that `addon.c` can be reused to compile against either the Node.js implementation of Node-API or any implementation of Node-API outside of Node.js.

`addon_node.c` is a separate file that contains the Node.js specific entry point to the addon and which instantiates the addon by calling into `addon.c` when the addon is loaded into a Node.js environment.
```c
// addon.h
#ifndef _ADDON_H_
#define _ADDON_H_
#include <js_native_api.h>
napi_value create_addon(napi_env env);
#endif  // _ADDON_H_
```

```c
// addon.c
#include "addon.h"

#define NODE_API_CALL(env, call)                                  \
  do {                                                            \
    napi_status status = (call);                                  \
    if (status != napi_ok) {                                      \
      const napi_extended_error_info* error_info = NULL;          \
      napi_get_last_error_info((env), &error_info);               \
      const char* err_message = error_info->error_message;        \
      bool is_pending;                                            \
      napi_is_exception_pending((env), &is_pending);              \
      /* If an exception is already pending, don't rethrow it */  \
      if (!is_pending) {                                          \
        const char* message = (err_message == NULL)               \
            ? "empty error message"                               \
            : err_message;                                        \
        napi_throw_error((env), NULL, message);                   \
      }                                                           \
      return NULL;                                                \
    }                                                             \
  } while (0)

static napi_value
DoSomethingUseful(napi_env env, napi_callback_info info) {
  // Do something useful.
  return NULL;
}

napi_value create_addon(napi_env env) {
  napi_value result;
  NODE_API_CALL(env, napi_create_object(env, &result));

  napi_value exported_function;
  NODE_API_CALL(env, napi_create_function(env,
                                          "doSomethingUseful",
                                          NAPI_AUTO_LENGTH,
                                          DoSomethingUseful,
                                          NULL,
                                          &exported_function));

  NODE_API_CALL(env, napi_set_named_property(env,
                                             result,
                                             "doSomethingUseful",
                                             exported_function));

  return result;
}
```

```c
// addon_node.c
#include <node_api.h>
#include "addon.h"

NAPI_MODULE_INIT(/* napi_env env, napi_value exports */) {
  // This function body is expected to return a `napi_value`.
  // The variables `napi_env env` and `napi_value exports` may be used within
  // the body, as they are provided by the definition of `NAPI_MODULE_INIT()`.
  return create_addon(env);
}
```

Environment life cycle APIs#
Section Agents of the ECMAScript Language Specification defines the concept of an "Agent" as a self-contained environment in which JavaScript code runs. Multiple such Agents may be started and terminated either concurrently or in sequence by the process.

A Node.js environment corresponds to an ECMAScript Agent. In the main process, an environment is created at startup, and additional environments can be created on separate threads to serve as worker threads. When Node.js is embedded in another application, the main thread of the application may construct and destroy a Node.js environment multiple times during the life cycle of the application process. Each Node.js environment created by the application may, in turn, create and destroy additional environments as worker threads during its own life cycle.

From the perspective of a native addon this means that the bindings it provides may be called multiple times, from multiple contexts, and even concurrently from multiple threads.

Native addons may need to allocate global state which they use during the life cycle of a Node.js environment, such that the state can be unique to each instance of the addon.

To this end, Node-API provides a way to associate data such that its life cycle is tied to the life cycle of a Node.js environment.
napi_set_instance_data#
```c
napi_status napi_set_instance_data(node_api_basic_env env,
                                   void* data,
                                   napi_finalize finalize_cb,
                                   void* finalize_hint);
```

- `[in] env`: The environment that the Node-API call is invoked under.
- `[in] data`: The data item to make available to bindings of this instance.
- `[in] finalize_cb`: The function to call when the environment is being torn down. The function receives `data` so that it might free it. `napi_finalize` provides more details.
- `[in] finalize_hint`: Optional hint to pass to the finalize callback during collection.

Returns `napi_ok` if the API succeeded.

This API associates `data` with the currently running Node.js environment. `data` can later be retrieved using `napi_get_instance_data()`. Any existing data associated with the currently running Node.js environment which was set by means of a previous call to `napi_set_instance_data()` will be overwritten. If a `finalize_cb` was provided by the previous call, it will not be called.
napi_get_instance_data#
```c
napi_status napi_get_instance_data(node_api_basic_env env,
                                   void** data);
```

- `[in] env`: The environment that the Node-API call is invoked under.
- `[out] data`: The data item that was previously associated with the currently running Node.js environment by a call to `napi_set_instance_data()`.

Returns `napi_ok` if the API succeeded.

This API retrieves data that was previously associated with the currently running Node.js environment via `napi_set_instance_data()`. If no data is set, the call will succeed and `data` will be set to `NULL`.
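The instance-data life cycle can be modeled in a self-contained sketch. The `demo_*` names are stand-ins, not the real API; the sketch mirrors the documented behavior: setting new data overwrites the old pointer without invoking the previous finalizer, and the stored finalizer runs on environment teardown.

```c
#include <stddef.h>

typedef void (*finalize_cb)(void* data, void* hint);

/* Minimal stand-in for the per-environment slot managed by Node-API. */
typedef struct {
  void* instance_data;
  finalize_cb finalizer;
  void* finalize_hint;
} demo_env;

static void demo_set_instance_data(demo_env* env, void* data,
                                   finalize_cb cb, void* hint) {
  /* A finalizer from a previous call is NOT invoked on overwrite. */
  env->instance_data = data;
  env->finalizer = cb;
  env->finalize_hint = hint;
}

static void* demo_get_instance_data(const demo_env* env) {
  return env->instance_data;  /* NULL if nothing was set */
}

static void demo_teardown(demo_env* env) {
  /* On environment teardown the finalizer runs with the stored data/hint. */
  if (env->finalizer != NULL)
    env->finalizer(env->instance_data, env->finalize_hint);
}

/* Sample finalizer that records that it ran. */
static int demo_finalized = 0;
static void demo_finalizer(void* data, void* hint) {
  (void)data;
  (void)hint;
  demo_finalized = 1;
}
```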
Basic Node-API data types#
Node-API exposes the following fundamental data types as abstractions that are consumed by the various APIs. These APIs should be treated as opaque, introspectable only with other Node-API calls.
napi_status#
Integral status code indicating the success or failure of a Node-API call.Currently, the following status codes are supported.
```c
typedef enum {
  napi_ok,
  napi_invalid_arg,
  napi_object_expected,
  napi_string_expected,
  napi_name_expected,
  napi_function_expected,
  napi_number_expected,
  napi_boolean_expected,
  napi_array_expected,
  napi_generic_failure,
  napi_pending_exception,
  napi_cancelled,
  napi_escape_called_twice,
  napi_handle_scope_mismatch,
  napi_callback_scope_mismatch,
  napi_queue_full,
  napi_closing,
  napi_bigint_expected,
  napi_date_expected,
  napi_arraybuffer_expected,
  napi_detachable_arraybuffer_expected,
  napi_would_deadlock,  /* unused */
  napi_no_external_buffers_allowed,
  napi_cannot_run_js
} napi_status;
```

If additional information is required upon an API returning a failed status, it can be obtained by calling `napi_get_last_error_info`.
napi_extended_error_info#
```c
typedef struct {
  const char* error_message;
  void* engine_reserved;
  uint32_t engine_error_code;
  napi_status error_code;
} napi_extended_error_info;
```

- `error_message`: UTF8-encoded string containing a VM-neutral description of the error.
- `engine_reserved`: Reserved for VM-specific error details. This is currently not implemented for any VM.
- `engine_error_code`: VM-specific error code. This is currently not implemented for any VM.
- `error_code`: The Node-API status code that originated with the last error.

See the Error handling section for additional information.
napi_env#
`napi_env` is used to represent a context that the underlying Node-API implementation can use to persist VM-specific state. This structure is passed to native functions when they're invoked, and it must be passed back when making Node-API calls. Specifically, the same `napi_env` that was passed in when the initial native function was called must be passed to any subsequent nested Node-API calls. Caching the `napi_env` for the purpose of general reuse, and passing the `napi_env` between instances of the same addon running on different Worker threads is not allowed. The `napi_env` becomes invalid when an instance of a native addon is unloaded. Notification of this event is delivered through the callbacks given to `napi_add_env_cleanup_hook` and `napi_set_instance_data`.
node_api_basic_env#
This variant of `napi_env` is passed to synchronous finalizers (`node_api_basic_finalize`). There is a subset of Node-APIs which accept a parameter of type `node_api_basic_env` as their first argument. These APIs do not access the state of the JavaScript engine and are thus safe to call from synchronous finalizers. Passing a parameter of type `napi_env` to these APIs is allowed; however, passing a parameter of type `node_api_basic_env` to APIs that access the JavaScript engine state is not allowed. Attempting to do so without a cast will produce a compiler warning or an error when add-ons are compiled with flags which cause them to emit warnings and/or errors when incorrect pointer types are passed into a function. Calling such APIs from a synchronous finalizer will ultimately result in the termination of the application.
napi_value#
This is an opaque pointer that is used to represent a JavaScript value.
napi_threadsafe_function#
This is an opaque pointer that represents a JavaScript function which can be called asynchronously from multiple threads via `napi_call_threadsafe_function()`.
napi_threadsafe_function_release_mode#
A value to be given to `napi_release_threadsafe_function()` to indicate whether the thread-safe function is to be closed immediately (`napi_tsfn_abort`) or merely released (`napi_tsfn_release`) and thus available for subsequent use via `napi_acquire_threadsafe_function()` and `napi_call_threadsafe_function()`.

```c
typedef enum {
  napi_tsfn_release,
  napi_tsfn_abort
} napi_threadsafe_function_release_mode;
```

napi_threadsafe_function_call_mode#

A value to be given to `napi_call_threadsafe_function()` to indicate whether the call should block whenever the queue associated with the thread-safe function is full.

```c
typedef enum {
  napi_tsfn_nonblocking,
  napi_tsfn_blocking
} napi_threadsafe_function_call_mode;
```

Node-API memory management types#
napi_handle_scope#
This is an abstraction used to control and modify the lifetime of objects created within a particular scope. In general, Node-API values are created within the context of a handle scope. When a native method is called from JavaScript, a default handle scope will exist. If the user does not explicitly create a new handle scope, Node-API values will be created in the default handle scope. For any invocations of code outside the execution of a native method (for instance, during a libuv callback invocation), the module is required to create a scope before invoking any functions that can result in the creation of JavaScript values.

Handle scopes are created using `napi_open_handle_scope` and are destroyed using `napi_close_handle_scope`. Closing the scope can indicate to the GC that all `napi_value`s created during the lifetime of the handle scope are no longer referenced from the current stack frame.

For more details, review the Object lifetime management section.
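The scope semantics can be modeled in a small self-contained sketch. The `demo_*` names are illustrative stand-ins for `napi_open_handle_scope`/`napi_close_handle_scope`: values created while a scope is open are tracked by that scope, and closing the scope releases them (modeled here as freeing heap allocations).

```c
#include <stdlib.h>

#define SCOPE_CAPACITY 16

/* Stand-in for a handle scope: tracks values created while it is open. */
typedef struct {
  void* handles[SCOPE_CAPACITY];
  int count;
} demo_handle_scope;

static void demo_open_scope(demo_handle_scope* scope) {
  scope->count = 0;
}

/* Creating a value registers it with the innermost open scope. */
static void* demo_create_value(demo_handle_scope* scope, int v) {
  int* value = malloc(sizeof(int));
  *value = v;
  scope->handles[scope->count++] = value;
  return value;
}

/* Closing the scope releases every value it tracked; returns how many. */
static int demo_close_scope(demo_handle_scope* scope) {
  int released = scope->count;
  for (int i = 0; i < scope->count; i++)
    free(scope->handles[i]);
  scope->count = 0;
  return released;
}
```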
napi_escapable_handle_scope#
Escapable handle scopes are a special type of handle scope to return values created within a particular handle scope to a parent scope.
napi_ref#
This is the abstraction to use to reference a `napi_value`. This allows for users to manage the lifetimes of JavaScript values, including defining their minimum lifetimes explicitly.

For more details, review the Object lifetime management section.
napi_type_tag#
A 128-bit value stored as two unsigned 64-bit integers. It serves as a UUID with which JavaScript objects or externals can be "tagged" in order to ensure that they are of a certain type. This is a stronger check than `napi_instanceof`, because the latter can report a false positive if the object's prototype has been manipulated. Type-tagging is most useful in conjunction with `napi_wrap` because it ensures that the pointer retrieved from a wrapped object can be safely cast to the native type corresponding to the type tag that had been previously applied to the JavaScript object.
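The check can be modeled in a self-contained sketch. The `demo_*` names below are illustrative stand-ins (not the real `napi_type_tag_object()`/`napi_check_object_type_tag()` machinery): a native pointer stored with an object is only trusted if the 128-bit tag recorded with it matches exactly.

```c
#include <stdbool.h>
#include <stdint.h>

/* Same shape as napi_type_tag: two unsigned 64-bit halves. */
typedef struct {
  uint64_t lower;
  uint64_t upper;
} demo_type_tag;

/* Stand-in for a wrapped object: the tag applied when wrapping, plus the
 * native pointer it protects. */
typedef struct {
  demo_type_tag tag;
  void* native_ptr;
} demo_wrapped_object;

/* Both halves must match; a randomly generated 128-bit tag makes an
 * accidental match essentially impossible. */
static bool demo_check_type_tag(const demo_wrapped_object* obj,
                                const demo_type_tag* expected) {
  return obj->tag.lower == expected->lower &&
         obj->tag.upper == expected->upper;
}
```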
```c
typedef struct {
  uint64_t lower;
  uint64_t upper;
} napi_type_tag;
```

napi_async_cleanup_hook_handle#

An opaque value returned by `napi_add_async_cleanup_hook`. It must be passed to `napi_remove_async_cleanup_hook` when the chain of asynchronous cleanup events completes.
Node-API callback types#
napi_callback_info#
Opaque datatype that is passed to a callback function. It can be used for getting additional information about the context in which the callback was invoked.
napi_callback#
Function pointer type for user-provided native functions which are to be exposed to JavaScript via Node-API. Callback functions should satisfy the following signature:

```c
typedef napi_value (*napi_callback)(napi_env, napi_callback_info);
```

Unless for reasons discussed in Object Lifetime Management, creating a handle and/or callback scope inside a `napi_callback` is not necessary.
node_api_basic_finalize#
Function pointer type for add-on provided functions that allow the user to be notified when externally-owned data is ready to be cleaned up because the object it was associated with has been garbage-collected. The user must provide a function satisfying the following signature which would get called upon the object's collection. Currently, `node_api_basic_finalize` can be used for finding out when objects that have external data are collected.

```c
typedef void (*node_api_basic_finalize)(node_api_basic_env env,
                                        void* finalize_data,
                                        void* finalize_hint);
```

Unless for reasons discussed in Object Lifetime Management, creating a handle and/or callback scope inside the function body is not necessary.

Since these functions may be called while the JavaScript engine is in a state where it cannot execute JavaScript code, only Node-APIs which accept a `node_api_basic_env` as their first parameter may be called. `node_api_post_finalizer` can be used to schedule Node-API calls that require access to the JavaScript engine's state to run after the current garbage collection cycle has completed.

In the case of `node_api_create_external_string_latin1` and `node_api_create_external_string_utf16` the `env` parameter may be null, because external strings can be collected during the latter part of environment shutdown.
Change History:

- experimental (`NAPI_EXPERIMENTAL`): Only Node-API calls that accept a `node_api_basic_env` as their first parameter may be called, otherwise the application will be terminated with an appropriate error message. This feature can be turned off by defining `NODE_API_EXPERIMENTAL_BASIC_ENV_OPT_OUT`.
napi_finalize#
Function pointer type for add-on provided functions that allow the user to schedule a group of calls to Node-APIs in response to a garbage collection event, after the garbage collection cycle has completed. These function pointers can be used with `node_api_post_finalizer`.
```c
typedef void (*napi_finalize)(napi_env env,
                              void* finalize_data,
                              void* finalize_hint);
```

Change History:

- experimental (`NAPI_EXPERIMENTAL` is defined): A function of this type may no longer be used as a finalizer, except with `node_api_post_finalizer`. `node_api_basic_finalize` must be used instead. This feature can be turned off by defining `NODE_API_EXPERIMENTAL_BASIC_ENV_OPT_OUT`.
napi_async_execute_callback#
Function pointer used with functions that support asynchronousoperations. Callback functions must satisfy the following signature:
```c
typedef void (*napi_async_execute_callback)(napi_env env, void* data);
```

Implementations of this function must avoid making Node-API calls that execute JavaScript or interact with JavaScript objects. Node-API calls should be in the `napi_async_complete_callback` instead. Do not use the `napi_env` parameter as it will likely result in execution of JavaScript.
napi_async_complete_callback#
Function pointer used with functions that support asynchronous operations. Callback functions must satisfy the following signature:
```c
typedef void (*napi_async_complete_callback)(napi_env env,
                                             napi_status status,
                                             void* data);
```

Unless for reasons discussed in Object Lifetime Management, creating a handle and/or callback scope inside the function body is not necessary.
napi_threadsafe_function_call_js#
Function pointer used with asynchronous thread-safe function calls. The callback will be called on the main thread. Its purpose is to use a data item arriving via the queue from one of the secondary threads to construct the parameters necessary for a call into JavaScript, usually via `napi_call_function`, and then make the call into JavaScript.
The data arriving from the secondary thread via the queue is given in the `data` parameter and the JavaScript function to call is given in the `js_callback` parameter.
Node-API sets up the environment prior to calling this callback, so it is sufficient to call the JavaScript function via `napi_call_function` rather than via `napi_make_callback`.
Callback functions must satisfy the following signature:
```c
typedef void (*napi_threadsafe_function_call_js)(napi_env env,
                                                 napi_value js_callback,
                                                 void* context,
                                                 void* data);
```

- `[in] env`: The environment to use for API calls, or `NULL` if the thread-safe function is being torn down and `data` may need to be freed.
- `[in] js_callback`: The JavaScript function to call, or `NULL` if the thread-safe function is being torn down and `data` may need to be freed. It may also be `NULL` if the thread-safe function was created without `js_callback`.
- `[in] context`: The optional data with which the thread-safe function was created.
- `[in] data`: Data created by the secondary thread. It is the responsibility of the callback to convert this native data to JavaScript values (with Node-API functions) that can be passed as parameters when `js_callback` is invoked. This pointer is managed entirely by the threads and this callback. Thus this callback should free the data.
Unless for reasons discussed in Object Lifetime Management, creating a handle and/or callback scope inside the function body is not necessary.
napi_cleanup_hook#
Function pointer used with `napi_add_env_cleanup_hook`. It will be called when the environment is being torn down.
Callback functions must satisfy the following signature:
```c
typedef void (*napi_cleanup_hook)(void* data);
```

- `[in] data`: The data that was passed to `napi_add_env_cleanup_hook`.
napi_async_cleanup_hook#
Function pointer used with `napi_add_async_cleanup_hook`. It will be called when the environment is being torn down.
Callback functions must satisfy the following signature:
```c
typedef void (*napi_async_cleanup_hook)(napi_async_cleanup_hook_handle handle,
                                        void* data);
```

- `[in] handle`: The handle that must be passed to `napi_remove_async_cleanup_hook` after completion of the asynchronous cleanup.
- `[in] data`: The data that was passed to `napi_add_async_cleanup_hook`.
The body of the function should initiate the asynchronous cleanup actions at the end of which `handle` must be passed in a call to `napi_remove_async_cleanup_hook`.
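The hand-the-handle-back contract above can be sketched as follows. This is a compilable sketch, not real addon code: the type declarations and the recording stub for `napi_remove_async_cleanup_hook` stand in for `<node_api.h>`, and the cleanup runs synchronously only for brevity.

```c
#include <stddef.h>

/* Stand-ins for Node-API declarations; real code includes <node_api.h>. */
typedef struct napi_async_cleanup_hook_handle__* napi_async_cleanup_hook_handle;
typedef enum { napi_ok = 0 } napi_status;

static int remove_called = 0;

/* Stub standing in for the real napi_remove_async_cleanup_hook;
   it only records that the hook signalled completion. */
static napi_status napi_remove_async_cleanup_hook(
    napi_async_cleanup_hook_handle remove_handle) {
  (void)remove_handle;
  remove_called = 1;
  return napi_ok;
}

/* An async cleanup hook: perform the cleanup (synchronously here, for
   brevity), then pass the handle back so Node.js knows the hook finished. */
static void my_async_cleanup(napi_async_cleanup_hook_handle handle,
                             void* data) {
  (void)data;  /* addon resources would be released here */
  napi_remove_async_cleanup_hook(handle);
}
```

In a real addon the `napi_remove_async_cleanup_hook` call would sit at the end of whatever asynchronous work the hook started, not in the hook body itself.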
Error handling#
Node-API uses both return values and JavaScript exceptions for error handling. The following sections explain the approach for each case.
Return values#
All of the Node-API functions share the same error handling pattern. The return type of all API functions is `napi_status`.

The return value will be `napi_ok` if the request was successful and no uncaught JavaScript exception was thrown. If an error occurred AND an exception was thrown, the `napi_status` value for the error will be returned. If an exception was thrown, and no error occurred, `napi_pending_exception` will be returned.

In cases where a return value other than `napi_ok` or `napi_pending_exception` is returned, `napi_is_exception_pending` must be called to check if an exception is pending. See the section on exceptions for more details.

The full set of possible `napi_status` values is defined in `napi_api_types.h`.
The `napi_status` return value provides a VM-independent representation of the error which occurred. In some cases it is useful to be able to get more detailed information, including a string representing the error as well as VM (engine)-specific information.

In order to retrieve this information `napi_get_last_error_info` is provided, which returns a `napi_extended_error_info` structure. The format of the `napi_extended_error_info` structure is as follows:
```c
typedef struct napi_extended_error_info {
  const char* error_message;
  void* engine_reserved;
  uint32_t engine_error_code;
  napi_status error_code;
} napi_extended_error_info;
```

- `error_message`: Textual representation of the error that occurred.
- `engine_reserved`: Opaque handle reserved for engine use only.
- `engine_error_code`: VM specific error code.
- `error_code`: Node-API status code for the last error.
`napi_get_last_error_info` returns the information for the last Node-API call that was made.

Do not rely on the content or format of any of the extended information as it is not subject to SemVer and may change at any time. It is intended only for logging purposes.
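A logging-only use of the structure might look like the sketch below. The stub implementation of `napi_get_last_error_info` (returning a fixed `napi_invalid_arg` record) and the abbreviated type declarations are stand-ins so the snippet compiles without `<node_api.h>`; only the shape of `log_last_error` reflects real usage.

```c
#include <stdio.h>

/* Stand-ins mirroring <node_api.h>; real addons include that header. */
typedef enum { napi_ok = 0, napi_invalid_arg = 1 } napi_status;
typedef struct napi_env__* napi_env;
typedef struct {
  const char* error_message;
  void* engine_reserved;
  unsigned engine_error_code;
  napi_status error_code;
} napi_extended_error_info;

/* Stub standing in for the real napi_get_last_error_info. */
static napi_status napi_get_last_error_info(
    napi_env env, const napi_extended_error_info** result) {
  static const napi_extended_error_info last = {
    "Invalid argument", NULL, 0, napi_invalid_arg };
  (void)env;
  *result = &last;
  return napi_ok;
}

/* Log the last error for diagnostics only; the fields are not covered by
   SemVer, so never branch on the message text. */
static napi_status log_last_error(napi_env env) {
  const napi_extended_error_info* info;
  napi_status s = napi_get_last_error_info(env, &info);
  if (s != napi_ok) return s;
  fprintf(stderr, "Node-API error %d: %s\n",
          (int)info->error_code,
          info->error_message ? info->error_message : "(no message)");
  return info->error_code;
}
```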
napi_get_last_error_info#
```c
napi_status
napi_get_last_error_info(node_api_basic_env env,
                         const napi_extended_error_info** result);
```

- `[in] env`: The environment that the API is invoked under.
- `[out] result`: The `napi_extended_error_info` structure with more information about the error.

Returns `napi_ok` if the API succeeded.

This API retrieves a `napi_extended_error_info` structure with information about the last error that occurred.

The content of the `napi_extended_error_info` returned is only valid up until a Node-API function is called on the same `env`. This includes a call to `napi_is_exception_pending` so it may often be necessary to make a copy of the information so that it can be used later. The pointer returned in `error_message` points to a statically-defined string so it is safe to use that pointer if you have copied it out of the `error_message` field (which will be overwritten) before another Node-API function was called.
Do not rely on the content or format of any of the extended information as it is not subject to SemVer and may change at any time. It is intended only for logging purposes.
This API can be called even if there is a pending JavaScript exception.
Exceptions#
Any Node-API function call may result in a pending JavaScript exception. This is the case for any of the API functions, even those that may not cause the execution of JavaScript.

If the `napi_status` returned by a function is `napi_ok` then no exception is pending and no additional action is required. If the `napi_status` returned is anything other than `napi_ok` or `napi_pending_exception`, in order to try to recover and continue instead of simply returning immediately, `napi_is_exception_pending` must be called in order to determine if an exception is pending or not.

In many cases when a Node-API function is called and an exception is already pending, the function will return immediately with a `napi_status` of `napi_pending_exception`. However, this is not the case for all functions. Node-API allows a subset of the functions to be called to allow for some minimal cleanup before returning to JavaScript. In that case, `napi_status` will reflect the status for the function. It will not reflect previous pending exceptions. To avoid confusion, check the error status after every function call.
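A common way to check the status after every call is a small bail-out macro, a widespread idiom in Node-API addons. The sketch below compiles standalone because the `napi_status`/`napi_env`/`napi_value` declarations and the failing call are stand-ins (real code includes `<node_api.h>`); the macro itself is the point.

```c
#include <stddef.h>

/* Stand-ins for Node-API declarations; real code includes <node_api.h>. */
typedef enum { napi_ok = 0, napi_generic_failure = 9 } napi_status;
typedef struct napi_env__* napi_env;
typedef struct napi_value__* napi_value;

/* Check every Node-API call; on failure, bail out of the native method so
   any pending exception propagates back to JavaScript. */
#define NAPI_CALL(env, call)              \
  do {                                    \
    napi_status status_ = (call);         \
    if (status_ != napi_ok) return NULL;  \
  } while (0)

/* Hypothetical failing Node-API call, for demonstration only. */
static napi_status failing_call(napi_env env) {
  (void)env;
  return napi_generic_failure;
}

static napi_value MyMethod(napi_env env) {
  NAPI_CALL(env, failing_call(env));  /* fails, so MyMethod returns NULL */
  return (napi_value)1;               /* not reached in this sketch */
}
```

Returning `NULL` to JavaScript with an exception pending causes that exception to be thrown at the call site of the native method.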
When an exception is pending one of two approaches can be employed.
The first approach is to do any appropriate cleanup and then return so that execution will return to JavaScript. As part of the transition back to JavaScript, the exception will be thrown at the point in the JavaScript code where the native method was invoked. The behavior of most Node-API calls is unspecified while an exception is pending, and many will simply return `napi_pending_exception`, so do as little as possible and then return to JavaScript where the exception can be handled.

The second approach is to try to handle the exception. There will be cases where the native code can catch the exception, take the appropriate action, and then continue. This is only recommended in specific cases where it is known that the exception can be safely handled. In these cases `napi_get_and_clear_last_exception` can be used to get and clear the exception. On success, result will contain the handle to the last JavaScript `Object` thrown. If it is determined, after retrieving the exception, that the exception cannot be handled after all, it can be re-thrown with `napi_throw` where error is the JavaScript value to be thrown.

The following utility functions are also available in case native code needs to throw an exception or determine if a `napi_value` is an instance of a JavaScript `Error` object: `napi_throw_error`, `napi_throw_type_error`, `napi_throw_range_error`, `node_api_throw_syntax_error` and `napi_is_error`.

The following utility functions are also available in case native code needs to create an `Error` object: `napi_create_error`, `napi_create_type_error`, `napi_create_range_error` and `node_api_create_syntax_error`, where result is the `napi_value` that refers to the newly created JavaScript `Error` object.

The Node.js project is adding error codes to all of the errors generated internally. The goal is for applications to use these error codes for all error checking. The associated error messages will remain, but will only be meant to be used for logging and display with the expectation that the message can change without SemVer applying. In order to support this model with Node-API, both in internal functionality and for module specific functionality (as it's good practice), the `throw_` and `create_` functions take an optional code parameter which is the string for the code to be added to the error object. If the optional parameter is `NULL` then no code will be associated with the error. If a code is provided, the name associated with the error is also updated to be:
```text
originalName [code]
```

where `originalName` is the original name associated with the error and `code` is the code that was provided. For example, if the code is `'ERR_ERROR_1'` and a `TypeError` is being created the name will be:

```text
TypeError [ERR_ERROR_1]
```

napi_throw#
```c
NAPI_EXTERN napi_status napi_throw(napi_env env, napi_value error);
```

- `[in] env`: The environment that the API is invoked under.
- `[in] error`: The JavaScript value to be thrown.

Returns `napi_ok` if the API succeeded.

This API throws the JavaScript value provided.
napi_throw_error#
```c
NAPI_EXTERN napi_status napi_throw_error(napi_env env,
                                         const char* code,
                                         const char* msg);
```

- `[in] env`: The environment that the API is invoked under.
- `[in] code`: Optional error code to be set on the error.
- `[in] msg`: C string representing the text to be associated with the error.

Returns `napi_ok` if the API succeeded.

This API throws a JavaScript `Error` with the text provided.
napi_throw_type_error#
```c
NAPI_EXTERN napi_status napi_throw_type_error(napi_env env,
                                              const char* code,
                                              const char* msg);
```

- `[in] env`: The environment that the API is invoked under.
- `[in] code`: Optional error code to be set on the error.
- `[in] msg`: C string representing the text to be associated with the error.

Returns `napi_ok` if the API succeeded.

This API throws a JavaScript `TypeError` with the text provided.
napi_throw_range_error#
```c
NAPI_EXTERN napi_status napi_throw_range_error(napi_env env,
                                               const char* code,
                                               const char* msg);
```

- `[in] env`: The environment that the API is invoked under.
- `[in] code`: Optional error code to be set on the error.
- `[in] msg`: C string representing the text to be associated with the error.

Returns `napi_ok` if the API succeeded.

This API throws a JavaScript `RangeError` with the text provided.
node_api_throw_syntax_error#
```c
NAPI_EXTERN napi_status node_api_throw_syntax_error(napi_env env,
                                                    const char* code,
                                                    const char* msg);
```

- `[in] env`: The environment that the API is invoked under.
- `[in] code`: Optional error code to be set on the error.
- `[in] msg`: C string representing the text to be associated with the error.

Returns `napi_ok` if the API succeeded.

This API throws a JavaScript `SyntaxError` with the text provided.
napi_is_error#
```c
NAPI_EXTERN napi_status napi_is_error(napi_env env,
                                      napi_value value,
                                      bool* result);
```

- `[in] env`: The environment that the API is invoked under.
- `[in] value`: The `napi_value` to be checked.
- `[out] result`: Boolean value that is set to true if `napi_value` represents an error, false otherwise.

Returns `napi_ok` if the API succeeded.

This API queries a `napi_value` to check if it represents an error object.
napi_create_error#
```c
NAPI_EXTERN napi_status napi_create_error(napi_env env,
                                          napi_value code,
                                          napi_value msg,
                                          napi_value* result);
```

- `[in] env`: The environment that the API is invoked under.
- `[in] code`: Optional `napi_value` with the string for the error code to be associated with the error.
- `[in] msg`: `napi_value` that references a JavaScript `string` to be used as the message for the `Error`.
- `[out] result`: `napi_value` representing the error created.

Returns `napi_ok` if the API succeeded.

This API returns a JavaScript `Error` with the text provided.
napi_create_type_error#
```c
NAPI_EXTERN napi_status napi_create_type_error(napi_env env,
                                               napi_value code,
                                               napi_value msg,
                                               napi_value* result);
```

- `[in] env`: The environment that the API is invoked under.
- `[in] code`: Optional `napi_value` with the string for the error code to be associated with the error.
- `[in] msg`: `napi_value` that references a JavaScript `string` to be used as the message for the `Error`.
- `[out] result`: `napi_value` representing the error created.

Returns `napi_ok` if the API succeeded.

This API returns a JavaScript `TypeError` with the text provided.
napi_create_range_error#
```c
NAPI_EXTERN napi_status napi_create_range_error(napi_env env,
                                                napi_value code,
                                                napi_value msg,
                                                napi_value* result);
```

- `[in] env`: The environment that the API is invoked under.
- `[in] code`: Optional `napi_value` with the string for the error code to be associated with the error.
- `[in] msg`: `napi_value` that references a JavaScript `string` to be used as the message for the `Error`.
- `[out] result`: `napi_value` representing the error created.

Returns `napi_ok` if the API succeeded.

This API returns a JavaScript `RangeError` with the text provided.
node_api_create_syntax_error#
```c
NAPI_EXTERN napi_status node_api_create_syntax_error(napi_env env,
                                                     napi_value code,
                                                     napi_value msg,
                                                     napi_value* result);
```

- `[in] env`: The environment that the API is invoked under.
- `[in] code`: Optional `napi_value` with the string for the error code to be associated with the error.
- `[in] msg`: `napi_value` that references a JavaScript `string` to be used as the message for the `Error`.
- `[out] result`: `napi_value` representing the error created.

Returns `napi_ok` if the API succeeded.

This API returns a JavaScript `SyntaxError` with the text provided.
napi_get_and_clear_last_exception#
```c
napi_status napi_get_and_clear_last_exception(napi_env env,
                                              napi_value* result);
```

- `[in] env`: The environment that the API is invoked under.
- `[out] result`: The exception if one is pending, `NULL` otherwise.

Returns `napi_ok` if the API succeeded.

This API can be called even if there is a pending JavaScript exception.
napi_is_exception_pending#
```c
napi_status napi_is_exception_pending(napi_env env, bool* result);
```

- `[in] env`: The environment that the API is invoked under.
- `[out] result`: Boolean value that is set to true if an exception is pending.

Returns `napi_ok` if the API succeeded.

This API can be called even if there is a pending JavaScript exception.
napi_fatal_exception#
```c
napi_status napi_fatal_exception(napi_env env, napi_value err);
```

- `[in] env`: The environment that the API is invoked under.
- `[in] err`: The error that is passed to `'uncaughtException'`.

Trigger an `'uncaughtException'` in JavaScript. Useful if an async callback throws an exception with no way to recover.
Fatal errors#
In the event of an unrecoverable error in a native addon, a fatal error can be thrown to immediately terminate the process.
napi_fatal_error#
```c
NAPI_NO_RETURN void napi_fatal_error(const char* location,
                                     size_t location_len,
                                     const char* message,
                                     size_t message_len);
```

- `[in] location`: Optional location at which the error occurred.
- `[in] location_len`: The length of the location in bytes, or `NAPI_AUTO_LENGTH` if it is null-terminated.
- `[in] message`: The message associated with the error.
- `[in] message_len`: The length of the message in bytes, or `NAPI_AUTO_LENGTH` if it is null-terminated.

The function call does not return; the process will be terminated.

This API can be called even if there is a pending JavaScript exception.
Object lifetime management#
As Node-API calls are made, handles to objects in the heap for the underlying VM may be returned as `napi_value`s. These handles must hold the objects 'live' until they are no longer required by the native code, otherwise the objects could be collected before the native code was finished using them.

As object handles are returned they are associated with a 'scope'. The lifespan for the default scope is tied to the lifespan of the native method call. The result is that, by default, handles remain valid and the objects associated with these handles will be held live for the lifespan of the native method call.

In many cases, however, it is necessary that the handles remain valid for either a shorter or longer lifespan than that of the native method. The sections which follow describe the Node-API functions that can be used to change the handle lifespan from the default.
Making handle lifespan shorter than that of the native method#
It is often necessary to make the lifespan of handles shorter than the lifespan of a native method. For example, consider a native method that has a loop which iterates through the elements in a large array:
```c
for (int i = 0; i < 1000000; i++) {
  napi_value result;
  napi_status status = napi_get_element(env, object, i, &result);
  if (status != napi_ok) {
    break;
  }
  // do something with element
}
```

This would result in a large number of handles being created, consuming substantial resources. In addition, even though the native code could only use the most recent handle, all of the associated objects would also be kept alive since they all share the same scope.
To handle this case, Node-API provides the ability to establish a new 'scope' to which newly created handles will be associated. Once those handles are no longer required, the scope can be 'closed' and any handles associated with the scope are invalidated. The methods available to open/close scopes are `napi_open_handle_scope` and `napi_close_handle_scope`.

Node-API only supports a single nested hierarchy of scopes. There is only one active scope at any time, and all new handles will be associated with that scope while it is active. Scopes must be closed in the reverse order from which they are opened. In addition, all scopes created within a native method must be closed before returning from that method.

Taking the earlier example, adding calls to `napi_open_handle_scope` and `napi_close_handle_scope` would ensure that at most a single handle is valid throughout the execution of the loop:
```c
for (int i = 0; i < 1000000; i++) {
  napi_handle_scope scope;
  napi_status status = napi_open_handle_scope(env, &scope);
  if (status != napi_ok) {
    break;
  }
  napi_value result;
  status = napi_get_element(env, object, i, &result);
  if (status != napi_ok) {
    break;
  }
  // do something with element
  status = napi_close_handle_scope(env, scope);
  if (status != napi_ok) {
    break;
  }
}
```

When nesting scopes, there are cases where a handle from an inner scope needs to live beyond the lifespan of that scope. Node-API supports an 'escapable scope' in order to support this case. An escapable scope allows one handle to be 'promoted' so that it 'escapes' the current scope and the lifespan of the handle changes from the current scope to that of the outer scope.
The methods available to open/close escapable scopes are `napi_open_escapable_handle_scope` and `napi_close_escapable_handle_scope`.

The request to promote a handle is made through `napi_escape_handle`, which can only be called once.
napi_open_handle_scope#
```c
NAPI_EXTERN napi_status napi_open_handle_scope(napi_env env,
                                               napi_handle_scope* result);
```

- `[in] env`: The environment that the API is invoked under.
- `[out] result`: `napi_value` representing the new scope.

Returns `napi_ok` if the API succeeded.

This API opens a new scope.
napi_close_handle_scope#
```c
NAPI_EXTERN napi_status napi_close_handle_scope(napi_env env,
                                                napi_handle_scope scope);
```

- `[in] env`: The environment that the API is invoked under.
- `[in] scope`: `napi_value` representing the scope to be closed.

Returns `napi_ok` if the API succeeded.

This API closes the scope passed in. Scopes must be closed in the reverse order from which they were created.

This API can be called even if there is a pending JavaScript exception.
napi_open_escapable_handle_scope#
```c
NAPI_EXTERN napi_status
    napi_open_escapable_handle_scope(napi_env env,
                                     napi_handle_scope* result);
```

- `[in] env`: The environment that the API is invoked under.
- `[out] result`: `napi_value` representing the new scope.

Returns `napi_ok` if the API succeeded.

This API opens a new scope from which one object can be promoted to the outer scope.
napi_close_escapable_handle_scope#
```c
NAPI_EXTERN napi_status
    napi_close_escapable_handle_scope(napi_env env,
                                      napi_handle_scope scope);
```

- `[in] env`: The environment that the API is invoked under.
- `[in] scope`: `napi_value` representing the scope to be closed.

Returns `napi_ok` if the API succeeded.

This API closes the scope passed in. Scopes must be closed in the reverse order from which they were created.

This API can be called even if there is a pending JavaScript exception.
napi_escape_handle#
```c
napi_status napi_escape_handle(napi_env env,
                               napi_escapable_handle_scope scope,
                               napi_value escapee,
                               napi_value* result);
```

- `[in] env`: The environment that the API is invoked under.
- `[in] scope`: `napi_value` representing the current scope.
- `[in] escapee`: `napi_value` representing the JavaScript `Object` to be escaped.
- `[out] result`: `napi_value` representing the handle to the escaped `Object` in the outer scope.

Returns `napi_ok` if the API succeeded.

This API promotes the handle to the JavaScript object so that it is valid for the lifetime of the outer scope. It can only be called once per scope. If it is called more than once an error will be returned.

This API can be called even if there is a pending JavaScript exception.
References to values with a lifespan longer than that of the native method#
In some cases, an addon will need to be able to create and reference values with a lifespan longer than that of a single native method invocation. For example, to create a constructor and later use that constructor in a request to create instances, it must be possible to reference the constructor object across many different instance creation requests. This would not be possible with a normal handle returned as a `napi_value` as described in the earlier section. The lifespan of a normal handle is managed by scopes and all scopes must be closed before the end of a native method.

Node-API provides methods for creating persistent references to values. Currently Node-API only allows references to be created for a limited set of value types, including object, external, function, and symbol.

Each reference has an associated count with a value of 0 or higher, which determines whether the reference will keep the corresponding value alive. References with a count of 0 do not prevent values from being collected. Values of object (object, function, external) and symbol types become 'weak' references and can still be accessed while they are not collected. Any count greater than 0 will prevent the values from being collected.

Symbol values have different flavors. The true weak reference behavior is only supported by local symbols created with the `napi_create_symbol` function or the JavaScript `Symbol()` constructor calls. Globally registered symbols created with the `node_api_symbol_for` function or JavaScript `Symbol.for()` function calls always remain strong references because the garbage collector does not collect them. The same is true for well-known symbols such as `Symbol.iterator`. They are also never collected by the garbage collector.

References can be created with an initial reference count. The count can then be modified through `napi_reference_ref` and `napi_reference_unref`. If an object is collected while the count for a reference is 0, all subsequent calls to get the object associated with the reference (`napi_get_reference_value`) will return `NULL` for the returned `napi_value`. An attempt to call `napi_reference_ref` for a reference whose object has been collected results in an error.

References must be deleted once they are no longer required by the addon. When a reference is deleted, it will no longer prevent the corresponding object from being collected. Failure to delete a persistent reference results in a 'memory leak' with both the native memory for the persistent reference and the corresponding object on the heap being retained forever.

There can be multiple persistent references created which refer to the same object, each of which will either keep the object live or not based on its individual count. Multiple persistent references to the same object can result in unexpectedly keeping alive native memory. The native structures for a persistent reference must be kept alive until finalizers for the referenced object are executed. If a new persistent reference is created for the same object, the finalizers for that object will not be run and the native memory pointed to by the earlier persistent reference will not be freed. This can be avoided by calling `napi_delete_reference` in addition to `napi_reference_unref` when possible.
Change History:

- Version 10 (`NAPI_VERSION` is defined as `10` or higher): References can be created for all value types. The new supported value types do not support weak reference semantics and the values of these types are released when the reference count becomes 0 and cannot be accessed from the reference anymore.
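The counting rules above can be pictured with a toy model. This is deliberately not Node-API code: `ref_model`, `gc_pass`, and `ref_get` are invented names that simulate a reference, a garbage-collection pass, and the `napi_get_reference_value` behavior, just to make the 0-count-means-collectable rule concrete.

```c
#include <stddef.h>

/* Toy model (not Node-API itself) of the counting rules above: a count of
   0 leaves the value collectable; any count above 0 keeps it alive. */
typedef struct {
  unsigned count;
  int value_collected;  /* set by the simulated GC pass below */
} ref_model;

/* Simulated GC pass: a value held only by a 0-count reference is collected. */
static void gc_pass(ref_model* r) {
  if (r->count == 0) r->value_collected = 1;
}

/* Analogue of napi_get_reference_value: NULL once the value is gone. */
static const char* ref_get(ref_model* r, const char* value) {
  return r->value_collected ? NULL : value;
}
```

In a real addon the equivalents are `napi_create_reference` with an initial count, `napi_reference_ref`/`napi_reference_unref` to move the count, and `napi_get_reference_value` to retrieve the value while it is still alive.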
napi_create_reference#
```c
NAPI_EXTERN napi_status napi_create_reference(napi_env env,
                                              napi_value value,
                                              uint32_t initial_refcount,
                                              napi_ref* result);
```

- `[in] env`: The environment that the API is invoked under.
- `[in] value`: The `napi_value` for which a reference is being created.
- `[in] initial_refcount`: Initial reference count for the new reference.
- `[out] result`: `napi_ref` pointing to the new reference.

Returns `napi_ok` if the API succeeded.

This API creates a new reference with the specified reference count to the value passed in.
napi_delete_reference#
```c
NAPI_EXTERN napi_status napi_delete_reference(napi_env env, napi_ref ref);
```

- `[in] env`: The environment that the API is invoked under.
- `[in] ref`: `napi_ref` to be deleted.

Returns `napi_ok` if the API succeeded.

This API deletes the reference passed in.

This API can be called even if there is a pending JavaScript exception.
napi_reference_ref#
```c
NAPI_EXTERN napi_status napi_reference_ref(napi_env env,
                                           napi_ref ref,
                                           uint32_t* result);
```

- `[in] env`: The environment that the API is invoked under.
- `[in] ref`: `napi_ref` for which the reference count will be incremented.
- `[out] result`: The new reference count.

Returns `napi_ok` if the API succeeded.

This API increments the reference count for the reference passed in and returns the resulting reference count.
napi_reference_unref#
```c
NAPI_EXTERN napi_status napi_reference_unref(napi_env env,
                                             napi_ref ref,
                                             uint32_t* result);
```

- `[in] env`: The environment that the API is invoked under.
- `[in] ref`: `napi_ref` for which the reference count will be decremented.
- `[out] result`: The new reference count.

Returns `napi_ok` if the API succeeded.

This API decrements the reference count for the reference passed in and returns the resulting reference count.
napi_get_reference_value#
```c
NAPI_EXTERN napi_status napi_get_reference_value(napi_env env,
                                                 napi_ref ref,
                                                 napi_value* result);
```

- `[in] env`: The environment that the API is invoked under.
- `[in] ref`: The `napi_ref` for which the corresponding value is being requested.
- `[out] result`: The `napi_value` referenced by the `napi_ref`.

Returns `napi_ok` if the API succeeded.

If still valid, this API returns the `napi_value` representing the JavaScript value associated with the `napi_ref`. Otherwise, result will be `NULL`.
Cleanup on exit of the current Node.js environment#
While a Node.js process typically releases all its resources when exiting, embedders of Node.js, or future Worker support, may require addons to register clean-up hooks that will be run once the current Node.js environment exits.

Node-API provides functions for registering and un-registering such callbacks. When those callbacks are run, all resources that are being held by the addon should be freed up.
napi_add_env_cleanup_hook#
```c
NODE_EXTERN napi_status napi_add_env_cleanup_hook(node_api_basic_env env,
                                                  napi_cleanup_hook fun,
                                                  void* arg);
```

Registers `fun` as a function to be run with the `arg` parameter once the current Node.js environment exits.

A function can safely be specified multiple times with different `arg` values. In that case, it will be called multiple times as well. Providing the same `fun` and `arg` values multiple times is not allowed and will lead the process to abort.

The hooks will be called in reverse order, i.e. the most recently added one will be called first.

Removing this hook can be done by using `napi_remove_env_cleanup_hook`. Typically, that happens when the resource for which this hook was added is being torn down anyway.

For asynchronous cleanup, `napi_add_async_cleanup_hook` is available.
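The reverse-order (LIFO) invocation can be seen in a small toy registry. This is not the real Node.js implementation; `add_hook`, `run_hooks`, and the `tag_hook` demo hook are invented here purely to make the ordering observable.

```c
#include <string.h>

/* Toy model (not the real registry) of the LIFO ordering described above:
   hooks run in reverse registration order. */
#define MAX_HOOKS 8
typedef void (*hook_fn)(void* arg);

static struct { hook_fn fn; void* arg; } hooks[MAX_HOOKS];
static int hook_count = 0;

static void add_hook(hook_fn fn, void* arg) {
  hooks[hook_count].fn = fn;
  hooks[hook_count].arg = arg;
  hook_count++;
}

static void run_hooks(void) {
  while (hook_count > 0) {  /* most recently added runs first */
    hook_count--;
    hooks[hook_count].fn(hooks[hook_count].arg);
  }
}

/* Demo hook: appends its tag to a log so the ordering is observable. */
static char hook_log[32];
static void tag_hook(void* arg) { strcat(hook_log, (const char*)arg); }
```

Registering tags "a" then "b" and running the hooks produces "ba", mirroring how the environment tears addon resources down in the opposite order they were set up.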
napi_remove_env_cleanup_hook#
```c
NAPI_EXTERN napi_status napi_remove_env_cleanup_hook(node_api_basic_env env,
                                                     void (*fun)(void* arg),
                                                     void* arg);
```

Unregisters `fun` as a function to be run with the `arg` parameter once the current Node.js environment exits. Both the argument and the function value need to be exact matches.

The function must have originally been registered with `napi_add_env_cleanup_hook`, otherwise the process will abort.
napi_add_async_cleanup_hook#
History
| Version | Changes |
|---|---|
| v14.10.0, v12.19.0 | Changed signature of the |
| v14.8.0, v12.19.0 | Added in: v14.8.0, v12.19.0 |
```c
NAPI_EXTERN napi_status napi_add_async_cleanup_hook(
    node_api_basic_env env,
    napi_async_cleanup_hook hook,
    void* arg,
    napi_async_cleanup_hook_handle* remove_handle);
```

- `[in] env`: The environment that the API is invoked under.
- `[in] hook`: The function pointer to call at environment teardown.
- `[in] arg`: The pointer to pass to `hook` when it gets called.
- `[out] remove_handle`: Optional handle that refers to the asynchronous cleanup hook.

Registers `hook`, which is a function of type `napi_async_cleanup_hook`, as a function to be run with the `remove_handle` and `arg` parameters once the current Node.js environment exits.

Unlike `napi_add_env_cleanup_hook`, the hook is allowed to be asynchronous.

Otherwise, behavior generally matches that of `napi_add_env_cleanup_hook`.

If `remove_handle` is not `NULL`, an opaque value will be stored in it that must later be passed to `napi_remove_async_cleanup_hook`, regardless of whether the hook has already been invoked. Typically, that happens when the resource for which this hook was added is being torn down anyway.
napi_remove_async_cleanup_hook#
History
| Version | Changes |
|---|---|
| v14.10.0, v12.19.0 | Removed |
| v14.8.0, v12.19.0 | Added in: v14.8.0, v12.19.0 |
```c
NAPI_EXTERN napi_status napi_remove_async_cleanup_hook(
    napi_async_cleanup_hook_handle remove_handle);
```

- `[in] remove_handle`: The handle to an asynchronous cleanup hook that was created with `napi_add_async_cleanup_hook`.

Unregisters the cleanup hook corresponding to `remove_handle`. This will prevent the hook from being executed, unless it has already started executing. This must be called on any `napi_async_cleanup_hook_handle` value obtained from `napi_add_async_cleanup_hook`.
Finalization on the exit of the Node.js environment#
The Node.js environment may be torn down at an arbitrary time, as soon as possible, with JavaScript execution disallowed, like on the request of `worker.terminate()`. When the environment is being torn down, the registered `napi_finalize` callbacks of JavaScript objects, thread-safe functions and environment instance data are invoked immediately and independently.

The invocation of `napi_finalize` callbacks is scheduled after the manually registered cleanup hooks. In order to ensure a proper order of addon finalization during environment shutdown and to avoid use-after-free in the `napi_finalize` callback, addons should register a cleanup hook with `napi_add_env_cleanup_hook` and `napi_add_async_cleanup_hook` to manually release the allocated resource in a proper order.
Module registration#
Node-API modules are registered in a manner similar to other modules except that instead of using the `NODE_MODULE` macro the following is used:

```c
NAPI_MODULE(NODE_GYP_MODULE_NAME, Init)
```

The next difference is the signature for the `Init` method. For a Node-API module it is as follows:

```c
napi_value Init(napi_env env, napi_value exports);
```

The return value from `Init` is treated as the `exports` object for the module. The `Init` method is passed an empty object via the `exports` parameter as a convenience. If `Init` returns `NULL`, the parameter passed as `exports` is exported by the module. Node-API modules cannot modify the `module` object but can specify anything as the `exports` property of the module.
To add the method `hello` as a function so that it can be called as a method provided by the addon:
napi_valueInit(napi_env env, napi_value exports) { napi_status status; napi_property_descriptor desc = {"hello",NULL, Method,NULL,NULL,NULL, napi_writable | napi_enumerable | napi_configurable,NULL }; status = napi_define_properties(env, exports,1, &desc);if (status != napi_ok)returnNULL;return exports;}To set a function to be returned by therequire() for the addon:
napi_valueInit(napi_env env, napi_value exports) { napi_value method; napi_status status; status = napi_create_function(env,"exports", NAPI_AUTO_LENGTH, Method,NULL, &method);if (status != napi_ok)returnNULL;return method;}To define a class so that new instances can be created (often used withObject wrap):
```c
// NOTE: partial example, not all referenced code is included
napi_value Init(napi_env env, napi_value exports) {
  napi_status status;
  napi_property_descriptor properties[] = {
    { "value", NULL, NULL, GetValue, SetValue, NULL,
      napi_writable | napi_configurable, NULL },
    DECLARE_NAPI_METHOD("plusOne", PlusOne),
    DECLARE_NAPI_METHOD("multiply", Multiply),
  };

  napi_value cons;
  status = napi_define_class(env, "MyObject", NAPI_AUTO_LENGTH, New, NULL, 3,
                             properties, &cons);
  if (status != napi_ok) return NULL;

  status = napi_create_reference(env, cons, 1, &constructor);
  if (status != napi_ok) return NULL;

  status = napi_set_named_property(env, exports, "MyObject", cons);
  if (status != napi_ok) return NULL;

  return exports;
}
```

You can also use the `NAPI_MODULE_INIT` macro, which acts as a shorthand for `NAPI_MODULE` and defining an `Init` function:
```c
NAPI_MODULE_INIT(/* napi_env env, napi_value exports */) {
  napi_value answer;
  napi_status status;

  status = napi_create_int64(env, 42, &answer);
  if (status != napi_ok) return NULL;

  status = napi_set_named_property(env, exports, "answer", answer);
  if (status != napi_ok) return NULL;

  return exports;
}
```

The parameters `env` and `exports` are provided to the body of the `NAPI_MODULE_INIT` macro.
All Node-API addons are context-aware, meaning they may be loaded multiple times. There are a few design considerations when declaring such a module. The documentation on context-aware addons provides more details.
The variables `env` and `exports` will be available inside the function body following the macro invocation.
For more details on setting properties on objects, see the section on Working with JavaScript properties.

For more details on building addon modules in general, refer to the existing API.
Working with JavaScript values#
Node-API exposes a set of APIs to create all types of JavaScript values. Some of these types are documented under Section language types of the ECMAScript Language Specification.
Fundamentally, these APIs are used to do one of the following:
- Create a new JavaScript object
- Convert from a primitive C type to a Node-API value
- Convert from a Node-API value to a primitive C type
- Get global instances including `undefined` and `null`
Node-API values are represented by the type `napi_value`. Any Node-API call that requires a JavaScript value takes in a `napi_value`. In some cases, the API does check the type of the `napi_value` up-front. However, for better performance, it's better for the caller to make sure that the `napi_value` in question is of the JavaScript type expected by the API.
Enum types#
napi_key_collection_mode#
```c
typedef enum {
  napi_key_include_prototypes,
  napi_key_own_only
} napi_key_collection_mode;
```

Describes the `Keys/Properties` filter enums:

`napi_key_collection_mode` limits the range of collected properties.

`napi_key_own_only` limits the collected properties to the given object only. `napi_key_include_prototypes` will include all keys of the object's prototype chain as well.
napi_key_filter#
```c
typedef enum {
  napi_key_all_properties = 0,
  napi_key_writable = 1,
  napi_key_enumerable = 1 << 1,
  napi_key_configurable = 1 << 2,
  napi_key_skip_strings = 1 << 3,
  napi_key_skip_symbols = 1 << 4
} napi_key_filter;
```

Property filter bit flags. These can be combined with bitwise operators to build a composite filter.
napi_key_conversion#
```c
typedef enum {
  napi_key_keep_numbers,
  napi_key_numbers_to_strings
} napi_key_conversion;
```

`napi_key_numbers_to_strings` will convert integer indexes to strings. `napi_key_keep_numbers` will return numbers for integer indexes.
napi_valuetype#
```c
typedef enum {
  // ES6 types (corresponds to typeof)
  napi_undefined,
  napi_null,
  napi_boolean,
  napi_number,
  napi_string,
  napi_symbol,
  napi_object,
  napi_function,
  napi_external,
  napi_bigint,
} napi_valuetype;
```

Describes the type of a `napi_value`. This generally corresponds to the types described in Section language types of the ECMAScript Language Specification. In addition to types in that section, `napi_valuetype` can also represent `Function`s and `Object`s with external data.

A JavaScript value of type `napi_external` appears in JavaScript as a plain object on which no properties can be set and which has no prototype.
napi_typedarray_type#
History
| Version | Changes |
|---|---|
| v25.4.0 | Added |
```c
typedef enum {
  napi_int8_array,
  napi_uint8_array,
  napi_uint8_clamped_array,
  napi_int16_array,
  napi_uint16_array,
  napi_int32_array,
  napi_uint32_array,
  napi_float32_array,
  napi_float64_array,
  napi_bigint64_array,
  napi_biguint64_array,
  napi_float16_array,
} napi_typedarray_type;
```

This represents the underlying binary scalar datatype of the `TypedArray`. Elements of this enum correspond to Section TypedArray objects of the ECMAScript Language Specification.
Object creation functions#
napi_create_array#
```c
napi_status napi_create_array(napi_env env, napi_value* result)
```

- `[in] env`: The environment that the Node-API call is invoked under.
- `[out] result`: A `napi_value` representing a JavaScript `Array`.
Returns `napi_ok` if the API succeeded.

This API returns a Node-API value corresponding to a JavaScript `Array` type. JavaScript arrays are described in Section Array objects of the ECMAScript Language Specification.
napi_create_array_with_length#
```c
napi_status napi_create_array_with_length(napi_env env,
                                          size_t length,
                                          napi_value* result)
```

- `[in] env`: The environment that the API is invoked under.
- `[in] length`: The initial length of the `Array`.
- `[out] result`: A `napi_value` representing a JavaScript `Array`.
Returns `napi_ok` if the API succeeded.

This API returns a Node-API value corresponding to a JavaScript `Array` type. The `Array`'s length property is set to the passed-in length parameter. However, the underlying buffer is not guaranteed to be pre-allocated by the VM when the array is created. That behavior is left to the underlying VM implementation. If the buffer must be a contiguous block of memory that can be directly read and/or written via C, consider using `napi_create_external_arraybuffer`.

JavaScript arrays are described in Section Array objects of the ECMAScript Language Specification.
napi_create_arraybuffer#
```c
napi_status napi_create_arraybuffer(napi_env env,
                                    size_t byte_length,
                                    void** data,
                                    napi_value* result)
```

- `[in] env`: The environment that the API is invoked under.
- `[in] byte_length`: The length in bytes of the array buffer to create.
- `[out] data`: Pointer to the underlying byte buffer of the `ArrayBuffer`. `data` can optionally be ignored by passing `NULL`.
- `[out] result`: A `napi_value` representing a JavaScript `ArrayBuffer`.
Returns `napi_ok` if the API succeeded.

This API returns a Node-API value corresponding to a JavaScript `ArrayBuffer`. `ArrayBuffer`s are used to represent fixed-length binary data buffers. They are normally used as a backing-buffer for `TypedArray` objects. The `ArrayBuffer` allocated will have an underlying byte buffer whose size is determined by the `byte_length` parameter that's passed in. The underlying buffer is optionally returned back to the caller in case the caller wants to directly manipulate the buffer. This buffer can only be written to directly from native code. To write to this buffer from JavaScript, a typed array or `DataView` object would need to be created.

JavaScript `ArrayBuffer` objects are described in Section ArrayBuffer objects of the ECMAScript Language Specification.
napi_create_buffer#
```c
napi_status napi_create_buffer(napi_env env,
                               size_t size,
                               void** data,
                               napi_value* result)
```

- `[in] env`: The environment that the API is invoked under.
- `[in] size`: Size in bytes of the underlying buffer.
- `[out] data`: Raw pointer to the underlying buffer. `data` can optionally be ignored by passing `NULL`.
- `[out] result`: A `napi_value` representing a `node::Buffer`.
Returns `napi_ok` if the API succeeded.

This API allocates a `node::Buffer` object. While this is still a fully-supported data structure, in most cases using a `TypedArray` will suffice.
napi_create_buffer_copy#
```c
napi_status napi_create_buffer_copy(napi_env env,
                                    size_t length,
                                    const void* data,
                                    void** result_data,
                                    napi_value* result)
```

- `[in] env`: The environment that the API is invoked under.
- `[in] length`: Size in bytes of the input buffer (should be the same as the size of the new buffer).
- `[in] data`: Raw pointer to the underlying buffer to copy from.
- `[out] result_data`: Pointer to the new `Buffer`'s underlying data buffer. `result_data` can optionally be ignored by passing `NULL`.
- `[out] result`: A `napi_value` representing a `node::Buffer`.
Returns `napi_ok` if the API succeeded.

This API allocates a `node::Buffer` object and initializes it with data copied from the passed-in buffer. While this is still a fully-supported data structure, in most cases using a `TypedArray` will suffice.
napi_create_date#
```c
napi_status napi_create_date(napi_env env,
                             double time,
                             napi_value* result);
```

- `[in] env`: The environment that the API is invoked under.
- `[in] time`: ECMAScript time value in milliseconds since 01 January, 1970 UTC.
- `[out] result`: A `napi_value` representing a JavaScript `Date`.
Returns `napi_ok` if the API succeeded.

This API does not observe leap seconds; they are ignored, as ECMAScript aligns with the POSIX specification of time.

This API allocates a JavaScript `Date` object.

JavaScript `Date` objects are described in Section Date objects of the ECMAScript Language Specification.
napi_create_external#
```c
napi_status napi_create_external(napi_env env,
                                 void* data,
                                 napi_finalize finalize_cb,
                                 void* finalize_hint,
                                 napi_value* result)
```

- `[in] env`: The environment that the API is invoked under.
- `[in] data`: Raw pointer to the external data.
- `[in] finalize_cb`: Optional callback to call when the external value is being collected. `napi_finalize` provides more details.
- `[in] finalize_hint`: Optional hint to pass to the finalize callback during collection.
- `[out] result`: A `napi_value` representing an external value.
Returns `napi_ok` if the API succeeded.

This API allocates a JavaScript value with external data attached to it. This is used to pass external data through JavaScript code, so it can be retrieved later by native code using `napi_get_value_external`.

The API adds a `napi_finalize` callback which will be called when the JavaScript object just created has been garbage collected.

The created value is not an object, and therefore does not support additional properties. It is considered a distinct value type: calling `napi_typeof()` with an external value yields `napi_external`.
napi_create_external_arraybuffer#
```c
napi_status napi_create_external_arraybuffer(napi_env env,
                                             void* external_data,
                                             size_t byte_length,
                                             napi_finalize finalize_cb,
                                             void* finalize_hint,
                                             napi_value* result)
```

- `[in] env`: The environment that the API is invoked under.
- `[in] external_data`: Pointer to the underlying byte buffer of the `ArrayBuffer`.
- `[in] byte_length`: The length in bytes of the underlying buffer.
- `[in] finalize_cb`: Optional callback to call when the `ArrayBuffer` is being collected. `napi_finalize` provides more details.
- `[in] finalize_hint`: Optional hint to pass to the finalize callback during collection.
- `[out] result`: A `napi_value` representing a JavaScript `ArrayBuffer`.
Returns `napi_ok` if the API succeeded.

Some runtimes other than Node.js have dropped support for external buffers. On runtimes other than Node.js this method may return `napi_no_external_buffers_allowed` to indicate that external buffers are not supported. One such runtime is Electron, as described in issue electron/issues/35801.

In order to maintain the broadest compatibility with all runtimes you may define `NODE_API_NO_EXTERNAL_BUFFERS_ALLOWED` in your addon before the includes for the node-api headers. Doing so will hide the two functions that create external buffers. This will ensure a compilation error occurs if you accidentally use one of these methods.

This API returns a Node-API value corresponding to a JavaScript `ArrayBuffer`. The underlying byte buffer of the `ArrayBuffer` is externally allocated and managed. The caller must ensure that the byte buffer remains valid until the finalize callback is called.

The API adds a `napi_finalize` callback which will be called when the JavaScript object just created has been garbage collected.

JavaScript `ArrayBuffer`s are described in Section ArrayBuffer objects of the ECMAScript Language Specification.
napi_create_external_buffer#
```c
napi_status napi_create_external_buffer(napi_env env,
                                        size_t length,
                                        void* data,
                                        napi_finalize finalize_cb,
                                        void* finalize_hint,
                                        napi_value* result)
```

- `[in] env`: The environment that the API is invoked under.
- `[in] length`: Size in bytes of the input buffer (should be the same as the size of the new buffer).
- `[in] data`: Raw pointer to the underlying buffer to expose to JavaScript.
- `[in] finalize_cb`: Optional callback to call when the `ArrayBuffer` is being collected. `napi_finalize` provides more details.
- `[in] finalize_hint`: Optional hint to pass to the finalize callback during collection.
- `[out] result`: A `napi_value` representing a `node::Buffer`.
Returns `napi_ok` if the API succeeded.

Some runtimes other than Node.js have dropped support for external buffers. On runtimes other than Node.js this method may return `napi_no_external_buffers_allowed` to indicate that external buffers are not supported. One such runtime is Electron, as described in issue electron/issues/35801.

In order to maintain the broadest compatibility with all runtimes you may define `NODE_API_NO_EXTERNAL_BUFFERS_ALLOWED` in your addon before the includes for the node-api headers. Doing so will hide the two functions that create external buffers. This will ensure a compilation error occurs if you accidentally use one of these methods.

This API allocates a `node::Buffer` object and initializes it with data backed by the passed-in buffer. While this is still a fully-supported data structure, in most cases using a `TypedArray` will suffice.

The API adds a `napi_finalize` callback which will be called when the JavaScript object just created has been garbage collected.

For Node.js >= 4, `Buffer`s are `Uint8Array`s.
napi_create_object#
```c
napi_status napi_create_object(napi_env env, napi_value* result)
```

- `[in] env`: The environment that the API is invoked under.
- `[out] result`: A `napi_value` representing a JavaScript `Object`.
Returns `napi_ok` if the API succeeded.

This API allocates a default JavaScript `Object`. It is the equivalent of doing `new Object()` in JavaScript.

The JavaScript `Object` type is described in Section object type of the ECMAScript Language Specification.
node_api_create_object_with_properties#
```c
napi_status node_api_create_object_with_properties(napi_env env,
                                                   napi_value prototype_or_null,
                                                   const napi_value* property_names,
                                                   const napi_value* property_values,
                                                   size_t property_count,
                                                   napi_value* result)
```

- `[in] env`: The environment that the API is invoked under.
- `[in] prototype_or_null`: The prototype object for the new object. Can be a `napi_value` representing a JavaScript object to use as the prototype, a `napi_value` representing JavaScript `null`, or a `nullptr` that will be converted to `null`.
- `[in] property_names`: Array of `napi_value` representing the property names.
- `[in] property_values`: Array of `napi_value` representing the property values.
- `[in] property_count`: Number of properties in the arrays.
- `[out] result`: A `napi_value` representing a JavaScript `Object`.
Returns `napi_ok` if the API succeeded.

This API creates a JavaScript `Object` with the specified prototype and properties. This is more efficient than calling `napi_create_object` followed by multiple `napi_set_property` calls, as it can create the object with all properties atomically, avoiding potential V8 map transitions.

The arrays `property_names` and `property_values` must have the same length specified by `property_count`. The properties are added to the object in the order they appear in the arrays.
napi_create_symbol#
```c
napi_status napi_create_symbol(napi_env env,
                               napi_value description,
                               napi_value* result)
```

- `[in] env`: The environment that the API is invoked under.
- `[in] description`: Optional `napi_value` which refers to a JavaScript `string` to be set as the description for the symbol.
- `[out] result`: A `napi_value` representing a JavaScript `symbol`.
Returns `napi_ok` if the API succeeded.

This API creates a JavaScript `symbol` value, optionally using the given JavaScript `string` as its description.

The JavaScript `symbol` type is described in Section symbol type of the ECMAScript Language Specification.
node_api_symbol_for#
```c
napi_status node_api_symbol_for(napi_env env,
                                const char* utf8description,
                                size_t length,
                                napi_value* result)
```

- `[in] env`: The environment that the API is invoked under.
- `[in] utf8description`: UTF-8 C string representing the text to be used as the description for the symbol.
- `[in] length`: The length of the description string in bytes, or `NAPI_AUTO_LENGTH` if it is null-terminated.
- `[out] result`: A `napi_value` representing a JavaScript `symbol`.
Returns `napi_ok` if the API succeeded.

This API searches in the global registry for an existing symbol with the given description. If the symbol already exists it will be returned, otherwise a new symbol will be created in the registry.

The JavaScript `symbol` type is described in Section symbol type of the ECMAScript Language Specification.
napi_create_typedarray#
```c
napi_status napi_create_typedarray(napi_env env,
                                   napi_typedarray_type type,
                                   size_t length,
                                   napi_value arraybuffer,
                                   size_t byte_offset,
                                   napi_value* result)
```

- `[in] env`: The environment that the API is invoked under.
- `[in] type`: Scalar datatype of the elements within the `TypedArray`.
- `[in] length`: Number of elements in the `TypedArray`.
- `[in] arraybuffer`: `ArrayBuffer` underlying the typed array.
- `[in] byte_offset`: The byte offset within the `ArrayBuffer` from which to start projecting the `TypedArray`.
- `[out] result`: A `napi_value` representing a JavaScript `TypedArray`.
Returns `napi_ok` if the API succeeded.

This API creates a JavaScript `TypedArray` object over an existing `ArrayBuffer`. `TypedArray` objects provide an array-like view over an underlying data buffer where each element has the same underlying binary scalar datatype.

It is required that `(length * size_of_element) + byte_offset` be less than or equal to the size in bytes of the array passed in. If not, a `RangeError` exception is raised.
JavaScript `TypedArray` objects are described in Section TypedArray objects of the ECMAScript Language Specification.
node_api_create_buffer_from_arraybuffer#
```c
napi_status NAPI_CDECL node_api_create_buffer_from_arraybuffer(napi_env env,
                                                               napi_value arraybuffer,
                                                               size_t byte_offset,
                                                               size_t byte_length,
                                                               napi_value* result)
```

- `[in] env`: The environment that the API is invoked under.
- `[in] arraybuffer`: The `ArrayBuffer` from which the buffer will be created.
- `[in] byte_offset`: The byte offset within the `ArrayBuffer` from which to start creating the buffer.
- `[in] byte_length`: The length in bytes of the buffer to be created from the `ArrayBuffer`.
- `[out] result`: A `napi_value` representing the created JavaScript `Buffer` object.
Returns `napi_ok` if the API succeeded.

This API creates a JavaScript `Buffer` object from an existing `ArrayBuffer`. The `Buffer` object is a Node.js-specific class that provides a way to work with binary data directly in JavaScript.

The byte range `[byte_offset, byte_offset + byte_length)` must be within the bounds of the `ArrayBuffer`. If `byte_offset + byte_length` exceeds the size of the `ArrayBuffer`, a `RangeError` exception is raised.
napi_create_dataview#
History
| Version | Changes |
|---|---|
| v25.4.0 | Added support for |
| v8.3.0 | Added in: v8.3.0 |
```c
napi_status napi_create_dataview(napi_env env,
                                 size_t byte_length,
                                 napi_value arraybuffer,
                                 size_t byte_offset,
                                 napi_value* result)
```

- `[in] env`: The environment that the API is invoked under.
- `[in] byte_length`: The length in bytes of the `DataView`.
- `[in] arraybuffer`: `ArrayBuffer` or `SharedArrayBuffer` underlying the `DataView`.
- `[in] byte_offset`: The byte offset within the `ArrayBuffer` from which to start projecting the `DataView`.
- `[out] result`: A `napi_value` representing a JavaScript `DataView`.
Returns `napi_ok` if the API succeeded.

This API creates a JavaScript `DataView` object over an existing `ArrayBuffer` or `SharedArrayBuffer`. `DataView` objects provide an array-like view over an underlying data buffer, but one which allows items of different size and type in the `ArrayBuffer` or `SharedArrayBuffer`.

It is required that `byte_length + byte_offset` is less than or equal to the size in bytes of the array passed in. If not, a `RangeError` exception is raised.

JavaScript `DataView` objects are described in Section DataView objects of the ECMAScript Language Specification.
Functions to convert from C types to Node-API#
napi_create_int32#
```c
napi_status napi_create_int32(napi_env env, int32_t value, napi_value* result)
```

- `[in] env`: The environment that the API is invoked under.
- `[in] value`: Integer value to be represented in JavaScript.
- `[out] result`: A `napi_value` representing a JavaScript `number`.
Returns `napi_ok` if the API succeeded.

This API is used to convert from the C `int32_t` type to the JavaScript `number` type.

The JavaScript `number` type is described in Section number type of the ECMAScript Language Specification.
napi_create_uint32#
```c
napi_status napi_create_uint32(napi_env env, uint32_t value, napi_value* result)
```

- `[in] env`: The environment that the API is invoked under.
- `[in] value`: Unsigned integer value to be represented in JavaScript.
- `[out] result`: A `napi_value` representing a JavaScript `number`.
Returns `napi_ok` if the API succeeded.

This API is used to convert from the C `uint32_t` type to the JavaScript `number` type.

The JavaScript `number` type is described in Section number type of the ECMAScript Language Specification.
napi_create_int64#
```c
napi_status napi_create_int64(napi_env env, int64_t value, napi_value* result)
```

- `[in] env`: The environment that the API is invoked under.
- `[in] value`: Integer value to be represented in JavaScript.
- `[out] result`: A `napi_value` representing a JavaScript `number`.
Returns `napi_ok` if the API succeeded.

This API is used to convert from the C `int64_t` type to the JavaScript `number` type.

The JavaScript `number` type is described in Section number type of the ECMAScript Language Specification. Note the complete range of `int64_t` cannot be represented with full precision in JavaScript. Integer values outside the range [`Number.MIN_SAFE_INTEGER`, `Number.MAX_SAFE_INTEGER`], that is, outside `-(2**53 - 1)` to `2**53 - 1`, will lose precision.
napi_create_double#
```c
napi_status napi_create_double(napi_env env, double value, napi_value* result)
```

- `[in] env`: The environment that the API is invoked under.
- `[in] value`: Double-precision value to be represented in JavaScript.
- `[out] result`: A `napi_value` representing a JavaScript `number`.
Returns `napi_ok` if the API succeeded.

This API is used to convert from the C `double` type to the JavaScript `number` type.

The JavaScript `number` type is described in Section number type of the ECMAScript Language Specification.
napi_create_bigint_int64#
```c
napi_status napi_create_bigint_int64(napi_env env,
                                     int64_t value,
                                     napi_value* result);
```

- `[in] env`: The environment that the API is invoked under.
- `[in] value`: Integer value to be represented in JavaScript.
- `[out] result`: A `napi_value` representing a JavaScript `BigInt`.
Returns `napi_ok` if the API succeeded.

This API converts the C `int64_t` type to the JavaScript `BigInt` type.
napi_create_bigint_uint64#
```c
napi_status napi_create_bigint_uint64(napi_env env,
                                      uint64_t value,
                                      napi_value* result);
```

- `[in] env`: The environment that the API is invoked under.
- `[in] value`: Unsigned integer value to be represented in JavaScript.
- `[out] result`: A `napi_value` representing a JavaScript `BigInt`.
Returns `napi_ok` if the API succeeded.

This API converts the C `uint64_t` type to the JavaScript `BigInt` type.
napi_create_bigint_words#
```c
napi_status napi_create_bigint_words(napi_env env,
                                     int sign_bit,
                                     size_t word_count,
                                     const uint64_t* words,
                                     napi_value* result);
```

- `[in] env`: The environment that the API is invoked under.
- `[in] sign_bit`: Determines if the resulting `BigInt` will be positive or negative.
- `[in] word_count`: The length of the `words` array.
- `[in] words`: An array of `uint64_t` little-endian 64-bit words.
- `[out] result`: A `napi_value` representing a JavaScript `BigInt`.
Returns `napi_ok` if the API succeeded.

This API converts an array of unsigned 64-bit words into a single `BigInt` value.

The resulting `BigInt` is calculated as: `(-1)^sign_bit × (words[0] × (2^64)^0 + words[1] × (2^64)^1 + …)`
napi_create_string_latin1#
```c
napi_status napi_create_string_latin1(napi_env env,
                                      const char* str,
                                      size_t length,
                                      napi_value* result);
```

- `[in] env`: The environment that the API is invoked under.
- `[in] str`: Character buffer representing an ISO-8859-1-encoded string.
- `[in] length`: The length of the string in bytes, or `NAPI_AUTO_LENGTH` if it is null-terminated.
- `[out] result`: A `napi_value` representing a JavaScript `string`.
Returns `napi_ok` if the API succeeded.

This API creates a JavaScript `string` value from an ISO-8859-1-encoded C string. The native string is copied.

The JavaScript `string` type is described in Section string type of the ECMAScript Language Specification.
node_api_create_external_string_latin1#
```c
napi_status node_api_create_external_string_latin1(napi_env env,
                                                   char* str,
                                                   size_t length,
                                                   napi_finalize finalize_callback,
                                                   void* finalize_hint,
                                                   napi_value* result,
                                                   bool* copied);
```

- `[in] env`: The environment that the API is invoked under.
- `[in] str`: Character buffer representing an ISO-8859-1-encoded string.
- `[in] length`: The length of the string in bytes, or `NAPI_AUTO_LENGTH` if it is null-terminated.
- `[in] finalize_callback`: The function to call when the string is being collected. The function will be called with the following parameters:
  - `[in] env`: The environment in which the add-on is running. This value may be null if the string is being collected as part of the termination of the worker or the main Node.js instance.
  - `[in] data`: This is the value `str` as a `void*` pointer.
  - `[in] finalize_hint`: This is the value `finalize_hint` that was given to the API.

  `napi_finalize` provides more details. This parameter is optional. Passing a null value means that the add-on doesn't need to be notified when the corresponding JavaScript string is collected.
- `[in] finalize_hint`: Optional hint to pass to the finalize callback during collection.
- `[out] result`: A `napi_value` representing a JavaScript `string`.
- `[out] copied`: Whether the string was copied. If it was, the finalizer will already have been invoked to destroy `str`.
Returns `napi_ok` if the API succeeded.

This API creates a JavaScript `string` value from an ISO-8859-1-encoded C string. The native string may not be copied and must thus exist for the entire life cycle of the JavaScript value.

The JavaScript `string` type is described in Section string type of the ECMAScript Language Specification.
napi_create_string_utf16#
```c
napi_status napi_create_string_utf16(napi_env env,
                                     const char16_t* str,
                                     size_t length,
                                     napi_value* result)
```

- `[in] env`: The environment that the API is invoked under.
- `[in] str`: Character buffer representing a UTF16-LE-encoded string.
- `[in] length`: The length of the string in two-byte code units, or `NAPI_AUTO_LENGTH` if it is null-terminated.
- `[out] result`: A `napi_value` representing a JavaScript `string`.
Returns `napi_ok` if the API succeeded.

This API creates a JavaScript `string` value from a UTF16-LE-encoded C string. The native string is copied.

The JavaScript `string` type is described in Section string type of the ECMAScript Language Specification.
node_api_create_external_string_utf16#
```c
napi_status node_api_create_external_string_utf16(napi_env env,
                                                  char16_t* str,
                                                  size_t length,
                                                  napi_finalize finalize_callback,
                                                  void* finalize_hint,
                                                  napi_value* result,
                                                  bool* copied);
```

- `[in] env`: The environment that the API is invoked under.
- `[in] str`: Character buffer representing a UTF16-LE-encoded string.
- `[in] length`: The length of the string in two-byte code units, or `NAPI_AUTO_LENGTH` if it is null-terminated.
- `[in] finalize_callback`: The function to call when the string is being collected. The function will be called with the following parameters:
  - `[in] env`: The environment in which the add-on is running. This value may be null if the string is being collected as part of the termination of the worker or the main Node.js instance.
  - `[in] data`: This is the value `str` as a `void*` pointer.
  - `[in] finalize_hint`: This is the value `finalize_hint` that was given to the API.

  `napi_finalize` provides more details. This parameter is optional. Passing a null value means that the add-on doesn't need to be notified when the corresponding JavaScript string is collected.
- `[in] finalize_hint`: Optional hint to pass to the finalize callback during collection.
- `[out] result`: A `napi_value` representing a JavaScript `string`.
- `[out] copied`: Whether the string was copied. If it was, the finalizer will already have been invoked to destroy `str`.
Returns `napi_ok` if the API succeeded.

This API creates a JavaScript `string` value from a UTF16-LE-encoded C string. The native string may not be copied and must thus exist for the entire life cycle of the JavaScript value.

The JavaScript `string` type is described in Section string type of the ECMAScript Language Specification.
napi_create_string_utf8#
```c
napi_status napi_create_string_utf8(napi_env env,
                                    const char* str,
                                    size_t length,
                                    napi_value* result)
```

- `[in] env`: The environment that the API is invoked under.
- `[in] str`: Character buffer representing a UTF8-encoded string.
- `[in] length`: The length of the string in bytes, or `NAPI_AUTO_LENGTH` if it is null-terminated.
- `[out] result`: A `napi_value` representing a JavaScript `string`.
Returns `napi_ok` if the API succeeded.

This API creates a JavaScript `string` value from a UTF8-encoded C string. The native string is copied.

The JavaScript `string` type is described in Section string type of the ECMAScript Language Specification.
Functions to create optimized property keys#
Many JavaScript engines, including V8, use internalized strings as keys to set and get property values. They typically use a hash table to create and look up such strings. While this adds some cost per key creation, it improves performance after that by enabling comparison of string pointers instead of the whole strings.

If a new JavaScript string is intended to be used as a property key, then for some JavaScript engines it will be more efficient to use the functions in this section. Otherwise, use the `napi_create_string_utf8` or `node_api_create_external_string_utf8` series functions as there may be additional overhead in creating/storing strings with the property key creation methods.
node_api_create_property_key_latin1#
```c
napi_status NAPI_CDECL node_api_create_property_key_latin1(napi_env env,
                                                           const char* str,
                                                           size_t length,
                                                           napi_value* result);
```

- `[in] env`: The environment that the API is invoked under.
- `[in] str`: Character buffer representing an ISO-8859-1-encoded string.
- `[in] length`: The length of the string in bytes, or `NAPI_AUTO_LENGTH` if it is null-terminated.
- `[out] result`: A `napi_value` representing an optimized JavaScript `string` to be used as a property key for objects.
Returns `napi_ok` if the API succeeded.

This API creates an optimized JavaScript `string` value from an ISO-8859-1-encoded C string to be used as a property key for objects. The native string is copied. In contrast with `napi_create_string_latin1`, subsequent calls to this function with the same `str` pointer may benefit from a speedup in the creation of the requested `napi_value`, depending on the engine.

The JavaScript `string` type is described in Section string type of the ECMAScript Language Specification.
node_api_create_property_key_utf16#
```c
napi_status NAPI_CDECL node_api_create_property_key_utf16(napi_env env,
                                                          const char16_t* str,
                                                          size_t length,
                                                          napi_value* result);
```

- `[in] env`: The environment that the API is invoked under.
- `[in] str`: Character buffer representing a UTF16-LE-encoded string.
- `[in] length`: The length of the string in two-byte code units, or `NAPI_AUTO_LENGTH` if it is null-terminated.
- `[out] result`: A `napi_value` representing an optimized JavaScript `string` to be used as a property key for objects.
Returnsnapi_ok if the API succeeded.
This API creates an optimized JavaScriptstring value froma UTF16-LE-encoded C string to be used as a property key for objects.The native string is copied.
The JavaScriptstring type is described inSection string type of the ECMAScript Language Specification.
node_api_create_property_key_utf8#
```c
napi_status NAPI_CDECL node_api_create_property_key_utf8(napi_env env,
                                                         const char* str,
                                                         size_t length,
                                                         napi_value* result);
```

- `[in] env`: The environment that the API is invoked under.
- `[in] str`: Character buffer representing a UTF8-encoded string.
- `[in] length`: The length of the string in bytes, or `NAPI_AUTO_LENGTH` if it is null-terminated.
- `[out] result`: A `napi_value` representing an optimized JavaScript `string` to be used as a property key for objects.

Returns `napi_ok` if the API succeeded.

This API creates an optimized JavaScript `string` value from a UTF8-encoded C string to be used as a property key for objects. The native string is copied.

The JavaScript `string` type is described in the string type section of the ECMAScript Language Specification.
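To illustrate, here is a minimal sketch (not part of the official API docs; it assumes a Node-API addon context where `env` and `obj` are valid, and the property name `"count"` is purely illustrative) that uses an optimized property key to set a property:

```c
// Sketch: set obj.count = 42 using an optimized property key.
// Assumes `env` and `obj` come from a Node-API callback.
static napi_status set_count(napi_env env, napi_value obj) {
  napi_value key, value;
  napi_status status;

  // Create (or look up) the optimized property key "count".
  status = node_api_create_property_key_utf8(env, "count",
                                             NAPI_AUTO_LENGTH, &key);
  if (status != napi_ok) return status;

  status = napi_create_int32(env, 42, &value);
  if (status != napi_ok) return status;

  // obj.count = 42
  return napi_set_property(env, obj, key, value);
}
```

Because the same `str` pointer is passed on every call, repeated invocations may benefit from the engine's internalized-string cache.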
Functions to convert from Node-API to C types#
napi_get_array_length#
```c
napi_status napi_get_array_length(napi_env env,
                                  napi_value value,
                                  uint32_t* result)
```

- `[in] env`: The environment that the API is invoked under.
- `[in] value`: `napi_value` representing the JavaScript `Array` whose length is being queried.
- `[out] result`: `uint32` representing the length of the array.

Returns `napi_ok` if the API succeeded.

This API returns the length of an array.

Array length is described in the Array instance length section of the ECMAScript Language Specification.
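As a short sketch (assuming a Node-API addon context where `env` is valid and `arr` is a JavaScript `Array`), the length is typically queried once before iterating with `napi_get_element`:

```c
// Sketch: iterate over a JavaScript array from native code.
// Assumes `env` and `arr` come from a Node-API callback.
static napi_status visit_elements(napi_env env, napi_value arr) {
  uint32_t length;
  napi_status status = napi_get_array_length(env, arr, &length);
  if (status != napi_ok) return status;

  for (uint32_t i = 0; i < length; i++) {
    napi_value element;
    status = napi_get_element(env, arr, i, &element);
    if (status != napi_ok) return status;
    // ... use `element` ...
  }
  return napi_ok;
}
```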
napi_get_arraybuffer_info#
History
| Version | Changes |
|---|---|
| v24.9.0 | Added support for `SharedArrayBuffer` |
| v8.0.0 | Added in: v8.0.0 |
```c
napi_status napi_get_arraybuffer_info(napi_env env,
                                      napi_value arraybuffer,
                                      void** data,
                                      size_t* byte_length)
```

- `[in] env`: The environment that the API is invoked under.
- `[in] arraybuffer`: `napi_value` representing the `ArrayBuffer` or `SharedArrayBuffer` being queried.
- `[out] data`: The underlying data buffer of the `ArrayBuffer` or `SharedArrayBuffer`. If `byte_length` is `0`, this may be `NULL` or any other pointer value.
- `[out] byte_length`: Length in bytes of the underlying data buffer.

Returns `napi_ok` if the API succeeded.

This API is used to retrieve the underlying data buffer of an `ArrayBuffer` or `SharedArrayBuffer` and its length.

WARNING: Use caution while using this API. The lifetime of the underlying data buffer is managed by the `ArrayBuffer` or `SharedArrayBuffer` even after it's returned. A possible safe way to use this API is in conjunction with `napi_create_reference`, which can be used to guarantee control over the lifetime of the `ArrayBuffer` or `SharedArrayBuffer`. It's also safe to use the returned data buffer within the same callback as long as there are no calls to other APIs that might trigger a GC.
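The "same callback, no GC" pattern described above can be sketched as follows (illustrative only; assumes `env` and `buffer` come from a Node-API callback):

```c
#include <string.h>

// Sketch: zero the contents of an ArrayBuffer within a single callback.
// This is safe because no other Node-API calls that might trigger a GC
// occur while `data` is in use.
static napi_status zero_arraybuffer(napi_env env, napi_value buffer) {
  void* data;
  size_t byte_length;
  napi_status status =
      napi_get_arraybuffer_info(env, buffer, &data, &byte_length);
  if (status != napi_ok) return status;

  // When byte_length is 0, data may be NULL, so guard before touching it.
  if (byte_length > 0)
    memset(data, 0, byte_length);
  return napi_ok;
}
```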
napi_get_buffer_info#
```c
napi_status napi_get_buffer_info(napi_env env,
                                 napi_value value,
                                 void** data,
                                 size_t* length)
```

- `[in] env`: The environment that the API is invoked under.
- `[in] value`: `napi_value` representing the `node::Buffer` or `Uint8Array` being queried.
- `[out] data`: The underlying data buffer of the `node::Buffer` or `Uint8Array`. If length is `0`, this may be `NULL` or any other pointer value.
- `[out] length`: Length in bytes of the underlying data buffer.

Returns `napi_ok` if the API succeeded.

This method returns the same `data` and `byte_length` as `napi_get_typedarray_info`, which also accepts a `node::Buffer` (a `Uint8Array`) as the value.

This API is used to retrieve the underlying data buffer of a `node::Buffer` and its length.

Warning: Use caution while using this API since the underlying data buffer's lifetime is not guaranteed if it's managed by the VM.
napi_get_prototype#
```c
napi_status napi_get_prototype(napi_env env,
                               napi_value object,
                               napi_value* result)
```

- `[in] env`: The environment that the API is invoked under.
- `[in] object`: `napi_value` representing the JavaScript `Object` whose prototype to return. This returns the equivalent of `Object.getPrototypeOf` (which is not the same as the function's `prototype` property).
- `[out] result`: `napi_value` representing the prototype of the given object.

Returns `napi_ok` if the API succeeded.
napi_get_typedarray_info#
```c
napi_status napi_get_typedarray_info(napi_env env,
                                     napi_value typedarray,
                                     napi_typedarray_type* type,
                                     size_t* length,
                                     void** data,
                                     napi_value* arraybuffer,
                                     size_t* byte_offset)
```

- `[in] env`: The environment that the API is invoked under.
- `[in] typedarray`: `napi_value` representing the `TypedArray` whose properties to query.
- `[out] type`: Scalar datatype of the elements within the `TypedArray`.
- `[out] length`: The number of elements in the `TypedArray`.
- `[out] data`: The data buffer underlying the `TypedArray` adjusted by the `byte_offset` value so that it points to the first element in the `TypedArray`. If the length of the array is `0`, this may be `NULL` or any other pointer value.
- `[out] arraybuffer`: The `ArrayBuffer` underlying the `TypedArray`.
- `[out] byte_offset`: The byte offset within the underlying native array at which the first element of the arrays is located. The value for the data parameter has already been adjusted so that data points to the first element in the array. Therefore, the first byte of the native array would be at `data - byte_offset`.

Returns `napi_ok` if the API succeeded.

This API returns various properties of a typed array.

Any of the out parameters may be `NULL` if that property is unneeded.

Warning: Use caution while using this API since the underlying data buffer is managed by the VM.
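Since any unneeded out parameter may be `NULL`, a caller interested only in the element type, count, and data can skip the rest. A sketch (assuming `env` and `typedarray` come from a Node-API callback):

```c
// Sketch: sum the elements of a Float64Array natively.
// Assumes `env` and `typedarray` come from a Node-API callback.
static napi_status sum_float64(napi_env env,
                               napi_value typedarray,
                               double* out) {
  napi_typedarray_type type;
  size_t length;
  void* data;
  // Pass NULL for arraybuffer and byte_offset: they are not needed here.
  napi_status status = napi_get_typedarray_info(env, typedarray, &type,
                                                &length, &data, NULL, NULL);
  if (status != napi_ok) return status;
  if (type != napi_float64_array) return napi_invalid_arg;

  // `data` is already adjusted by byte_offset to the first element.
  const double* values = (const double*)data;
  double sum = 0;
  for (size_t i = 0; i < length; i++) sum += values[i];
  *out = sum;
  return napi_ok;
}
```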
napi_get_dataview_info#
```c
napi_status napi_get_dataview_info(napi_env env,
                                   napi_value dataview,
                                   size_t* byte_length,
                                   void** data,
                                   napi_value* arraybuffer,
                                   size_t* byte_offset)
```

- `[in] env`: The environment that the API is invoked under.
- `[in] dataview`: `napi_value` representing the `DataView` whose properties to query.
- `[out] byte_length`: Number of bytes in the `DataView`.
- `[out] data`: The data buffer underlying the `DataView`. If `byte_length` is `0`, this may be `NULL` or any other pointer value.
- `[out] arraybuffer`: `ArrayBuffer` underlying the `DataView`.
- `[out] byte_offset`: The byte offset within the data buffer from which to start projecting the `DataView`.

Returns `napi_ok` if the API succeeded.

Any of the out parameters may be `NULL` if that property is unneeded.

This API returns various properties of a `DataView`.
napi_get_date_value#
```c
napi_status napi_get_date_value(napi_env env,
                                napi_value value,
                                double* result)
```

- `[in] env`: The environment that the API is invoked under.
- `[in] value`: `napi_value` representing a JavaScript `Date`.
- `[out] result`: Time value as a `double` represented as milliseconds since midnight at the beginning of 01 January, 1970 UTC.

This API does not observe leap seconds; they are ignored, as ECMAScript aligns with the POSIX time specification.

Returns `napi_ok` if the API succeeded. If a non-date `napi_value` is passed in it returns `napi_date_expected`.

This API returns the C double primitive of the time value for the given JavaScript `Date`.
napi_get_value_bool#
```c
napi_status napi_get_value_bool(napi_env env, napi_value value, bool* result)
```

- `[in] env`: The environment that the API is invoked under.
- `[in] value`: `napi_value` representing a JavaScript `Boolean`.
- `[out] result`: C boolean primitive equivalent of the given JavaScript `Boolean`.

Returns `napi_ok` if the API succeeded. If a non-boolean `napi_value` is passed in it returns `napi_boolean_expected`.

This API returns the C boolean primitive equivalent of the given JavaScript `Boolean`.
napi_get_value_double#
```c
napi_status napi_get_value_double(napi_env env,
                                  napi_value value,
                                  double* result)
```

- `[in] env`: The environment that the API is invoked under.
- `[in] value`: `napi_value` representing a JavaScript `number`.
- `[out] result`: C double primitive equivalent of the given JavaScript `number`.

Returns `napi_ok` if the API succeeded. If a non-number `napi_value` is passed in it returns `napi_number_expected`.

This API returns the C double primitive equivalent of the given JavaScript `number`.
napi_get_value_bigint_int64#
```c
napi_status napi_get_value_bigint_int64(napi_env env,
                                        napi_value value,
                                        int64_t* result,
                                        bool* lossless);
```

- `[in] env`: The environment that the API is invoked under.
- `[in] value`: `napi_value` representing a JavaScript `BigInt`.
- `[out] result`: C `int64_t` primitive equivalent of the given JavaScript `BigInt`.
- `[out] lossless`: Indicates whether the `BigInt` value was converted losslessly.

Returns `napi_ok` if the API succeeded. If a non-`BigInt` is passed in it returns `napi_bigint_expected`.

This API returns the C `int64_t` primitive equivalent of the given JavaScript `BigInt`. If needed it will truncate the value, setting `lossless` to `false`.
napi_get_value_bigint_uint64#
```c
napi_status napi_get_value_bigint_uint64(napi_env env,
                                         napi_value value,
                                         uint64_t* result,
                                         bool* lossless);
```

- `[in] env`: The environment that the API is invoked under.
- `[in] value`: `napi_value` representing a JavaScript `BigInt`.
- `[out] result`: C `uint64_t` primitive equivalent of the given JavaScript `BigInt`.
- `[out] lossless`: Indicates whether the `BigInt` value was converted losslessly.

Returns `napi_ok` if the API succeeded. If a non-`BigInt` is passed in it returns `napi_bigint_expected`.

This API returns the C `uint64_t` primitive equivalent of the given JavaScript `BigInt`. If needed it will truncate the value, setting `lossless` to `false`.
napi_get_value_bigint_words#
```c
napi_status napi_get_value_bigint_words(napi_env env,
                                        napi_value value,
                                        int* sign_bit,
                                        size_t* word_count,
                                        uint64_t* words);
```

- `[in] env`: The environment that the API is invoked under.
- `[in] value`: `napi_value` representing a JavaScript `BigInt`.
- `[out] sign_bit`: Integer representing if the JavaScript `BigInt` is positive or negative.
- `[in/out] word_count`: Must be initialized to the length of the `words` array. Upon return, it will be set to the actual number of words that would be needed to store this `BigInt`.
- `[out] words`: Pointer to a pre-allocated 64-bit word array.

Returns `napi_ok` if the API succeeded.

This API converts a single `BigInt` value into a sign bit, 64-bit little-endian array, and the number of elements in the array. `sign_bit` and `words` may both be set to `NULL`, in order to get only `word_count`.
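The usual pattern is two calls: first pass `NULL` for `sign_bit` and `words` to learn the required word count, then allocate and fetch. A sketch (assuming `env` and `value` come from a Node-API callback):

```c
#include <stdlib.h>

// Sketch: extract an arbitrary-precision BigInt into 64-bit words.
// Assumes `env` and `value` come from a Node-API callback.
static napi_status read_bigint(napi_env env, napi_value value) {
  size_t word_count = 0;
  // First call: retrieve only the required word count.
  napi_status status =
      napi_get_value_bigint_words(env, value, NULL, &word_count, NULL);
  if (status != napi_ok) return status;

  uint64_t* words = malloc(word_count * sizeof(uint64_t));
  if (words == NULL) return napi_generic_failure;

  int sign_bit;
  // Second call: fill the pre-allocated array.
  status = napi_get_value_bigint_words(env, value, &sign_bit,
                                       &word_count, words);
  // ... use sign_bit and words[0 .. word_count) ...
  free(words);
  return status;
}
```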
napi_get_value_external#
```c
napi_status napi_get_value_external(napi_env env,
                                    napi_value value,
                                    void** result)
```

- `[in] env`: The environment that the API is invoked under.
- `[in] value`: `napi_value` representing a JavaScript external value.
- `[out] result`: Pointer to the data wrapped by the JavaScript external value.

Returns `napi_ok` if the API succeeded. If a non-external `napi_value` is passed in it returns `napi_invalid_arg`.

This API retrieves the external data pointer that was previously passed to `napi_create_external()`.
napi_get_value_int32#
```c
napi_status napi_get_value_int32(napi_env env,
                                 napi_value value,
                                 int32_t* result)
```

- `[in] env`: The environment that the API is invoked under.
- `[in] value`: `napi_value` representing a JavaScript `number`.
- `[out] result`: C `int32` primitive equivalent of the given JavaScript `number`.

Returns `napi_ok` if the API succeeded. If a non-number `napi_value` is passed in it returns `napi_number_expected`.

This API returns the C `int32` primitive equivalent of the given JavaScript `number`.

If the number exceeds the range of the 32-bit integer, then the result is truncated to the equivalent of the bottom 32 bits. This can result in a large positive number becoming a negative number if the value is > `2**31 - 1`.

Non-finite number values (`NaN`, `+Infinity`, or `-Infinity`) set the result to zero.
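When the full integer range matters, one option (a sketch, not the only approach; assumes `env` and `value` come from a Node-API callback) is to read the value as an `int64` first and range-check before narrowing:

```c
#include <stdint.h>

// Sketch: read a JS number as int32, rejecting out-of-range values
// instead of silently truncating to the bottom 32 bits.
static napi_status get_checked_int32(napi_env env,
                                     napi_value value,
                                     int32_t* out) {
  int64_t wide;
  napi_status status = napi_get_value_int64(env, value, &wide);
  if (status != napi_ok) return status;

  if (wide < INT32_MIN || wide > INT32_MAX)
    return napi_invalid_arg;  // caller decides how to report this

  *out = (int32_t)wide;
  return napi_ok;
}
```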
napi_get_value_int64#
```c
napi_status napi_get_value_int64(napi_env env,
                                 napi_value value,
                                 int64_t* result)
```

- `[in] env`: The environment that the API is invoked under.
- `[in] value`: `napi_value` representing a JavaScript `number`.
- `[out] result`: C `int64` primitive equivalent of the given JavaScript `number`.

Returns `napi_ok` if the API succeeded. If a non-number `napi_value` is passed in it returns `napi_number_expected`.

This API returns the C `int64` primitive equivalent of the given JavaScript `number`.

`number` values outside the range of `Number.MIN_SAFE_INTEGER` `-(2**53 - 1)` to `Number.MAX_SAFE_INTEGER` `(2**53 - 1)` will lose precision.

Non-finite number values (`NaN`, `+Infinity`, or `-Infinity`) set the result to zero.
napi_get_value_string_latin1#
```c
napi_status napi_get_value_string_latin1(napi_env env,
                                         napi_value value,
                                         char* buf,
                                         size_t bufsize,
                                         size_t* result)
```

- `[in] env`: The environment that the API is invoked under.
- `[in] value`: `napi_value` representing a JavaScript string.
- `[in] buf`: Buffer to write the ISO-8859-1-encoded string into. If `NULL` is passed in, the length of the string in bytes and excluding the null terminator is returned in `result`.
- `[in] bufsize`: Size of the destination buffer. When this value is insufficient, the returned string is truncated and null-terminated. If this value is zero, then the string is not returned and no changes are done to the buffer.
- `[out] result`: Number of bytes copied into the buffer, excluding the null terminator.

Returns `napi_ok` if the API succeeded. If a non-string `napi_value` is passed in it returns `napi_string_expected`.

This API returns the ISO-8859-1-encoded string corresponding to the value passed in.
napi_get_value_string_utf8#
```c
napi_status napi_get_value_string_utf8(napi_env env,
                                       napi_value value,
                                       char* buf,
                                       size_t bufsize,
                                       size_t* result)
```

- `[in] env`: The environment that the API is invoked under.
- `[in] value`: `napi_value` representing a JavaScript string.
- `[in] buf`: Buffer to write the UTF8-encoded string into. If `NULL` is passed in, the length of the string in bytes and excluding the null terminator is returned in `result`.
- `[in] bufsize`: Size of the destination buffer. When this value is insufficient, the returned string is truncated and null-terminated. If this value is zero, then the string is not returned and no changes are done to the buffer.
- `[out] result`: Number of bytes copied into the buffer, excluding the null terminator.

Returns `napi_ok` if the API succeeded. If a non-string `napi_value` is passed in it returns `napi_string_expected`.

This API returns the UTF8-encoded string corresponding to the value passed in.
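The `NULL`-buffer behavior enables a common two-call pattern: measure first, then allocate and copy. A sketch (assuming `env` and `value` come from a Node-API callback; the caller frees the result):

```c
#include <stdlib.h>

// Sketch: copy a JavaScript string into a freshly allocated UTF-8 buffer.
static napi_status copy_utf8(napi_env env, napi_value value, char** out) {
  size_t length = 0;
  // First call with a NULL buffer: get the length in bytes,
  // excluding the null terminator.
  napi_status status =
      napi_get_value_string_utf8(env, value, NULL, 0, &length);
  if (status != napi_ok) return status;

  char* buf = malloc(length + 1);  // +1 for the null terminator
  if (buf == NULL) return napi_generic_failure;

  // Second call: copy the string into the buffer.
  status = napi_get_value_string_utf8(env, value, buf, length + 1, &length);
  if (status != napi_ok) {
    free(buf);
    return status;
  }

  *out = buf;
  return napi_ok;
}
```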
napi_get_value_string_utf16#
```c
napi_status napi_get_value_string_utf16(napi_env env,
                                        napi_value value,
                                        char16_t* buf,
                                        size_t bufsize,
                                        size_t* result)
```

- `[in] env`: The environment that the API is invoked under.
- `[in] value`: `napi_value` representing a JavaScript string.
- `[in] buf`: Buffer to write the UTF16-LE-encoded string into. If `NULL` is passed in, the length of the string in 2-byte code units and excluding the null terminator is returned.
- `[in] bufsize`: Size of the destination buffer. When this value is insufficient, the returned string is truncated and null-terminated. If this value is zero, then the string is not returned and no changes are done to the buffer.
- `[out] result`: Number of 2-byte code units copied into the buffer, excluding the null terminator.

Returns `napi_ok` if the API succeeded. If a non-string `napi_value` is passed in it returns `napi_string_expected`.

This API returns the UTF16-encoded string corresponding to the value passed in.
napi_get_value_uint32#
```c
napi_status napi_get_value_uint32(napi_env env,
                                  napi_value value,
                                  uint32_t* result)
```

- `[in] env`: The environment that the API is invoked under.
- `[in] value`: `napi_value` representing a JavaScript `number`.
- `[out] result`: C primitive equivalent of the given `napi_value` as a `uint32_t`.

Returns `napi_ok` if the API succeeded. If a non-number `napi_value` is passed in it returns `napi_number_expected`.

This API returns the C primitive equivalent of the given `napi_value` as a `uint32_t`.
Functions to get global instances#
napi_get_boolean#
```c
napi_status napi_get_boolean(napi_env env, bool value, napi_value* result)
```

- `[in] env`: The environment that the API is invoked under.
- `[in] value`: The value of the boolean to retrieve.
- `[out] result`: `napi_value` representing the JavaScript `Boolean` singleton to retrieve.

Returns `napi_ok` if the API succeeded.

This API is used to return the JavaScript singleton object that is used to represent the given boolean value.
napi_get_global#
```c
napi_status napi_get_global(napi_env env, napi_value* result)
```

- `[in] env`: The environment that the API is invoked under.
- `[out] result`: `napi_value` representing the JavaScript `global` object.

Returns `napi_ok` if the API succeeded.

This API returns the `global` object.
napi_get_null#
```c
napi_status napi_get_null(napi_env env, napi_value* result)
```

- `[in] env`: The environment that the API is invoked under.
- `[out] result`: `napi_value` representing the JavaScript `null` object.

Returns `napi_ok` if the API succeeded.

This API returns the `null` object.
napi_get_undefined#
```c
napi_status napi_get_undefined(napi_env env, napi_value* result)
```

- `[in] env`: The environment that the API is invoked under.
- `[out] result`: `napi_value` representing the JavaScript Undefined value.

Returns `napi_ok` if the API succeeded.

This API returns the Undefined object.
Working with JavaScript values and abstract operations#
Node-API exposes a set of APIs to perform some abstract operations on JavaScript values.

These APIs support doing one of the following:

- Coerce JavaScript values to specific JavaScript types (such as `number` or `string`).
- Check the type of a JavaScript value.
- Check for equality between two JavaScript values.
napi_coerce_to_bool#
```c
napi_status napi_coerce_to_bool(napi_env env,
                                napi_value value,
                                napi_value* result)
```

- `[in] env`: The environment that the API is invoked under.
- `[in] value`: The JavaScript value to coerce.
- `[out] result`: `napi_value` representing the coerced JavaScript `Boolean`.

Returns `napi_ok` if the API succeeded.

This API implements the abstract operation `ToBoolean()` as defined in the `ToBoolean` section of the ECMAScript Language Specification.
napi_coerce_to_number#
```c
napi_status napi_coerce_to_number(napi_env env,
                                  napi_value value,
                                  napi_value* result)
```

- `[in] env`: The environment that the API is invoked under.
- `[in] value`: The JavaScript value to coerce.
- `[out] result`: `napi_value` representing the coerced JavaScript `number`.

Returns `napi_ok` if the API succeeded.

This API implements the abstract operation `ToNumber()` as defined in the `ToNumber` section of the ECMAScript Language Specification. This function potentially runs JS code if the passed-in value is an object.
napi_coerce_to_object#
```c
napi_status napi_coerce_to_object(napi_env env,
                                  napi_value value,
                                  napi_value* result)
```

- `[in] env`: The environment that the API is invoked under.
- `[in] value`: The JavaScript value to coerce.
- `[out] result`: `napi_value` representing the coerced JavaScript `Object`.

Returns `napi_ok` if the API succeeded.

This API implements the abstract operation `ToObject()` as defined in the `ToObject` section of the ECMAScript Language Specification.
napi_coerce_to_string#
```c
napi_status napi_coerce_to_string(napi_env env,
                                  napi_value value,
                                  napi_value* result)
```

- `[in] env`: The environment that the API is invoked under.
- `[in] value`: The JavaScript value to coerce.
- `[out] result`: `napi_value` representing the coerced JavaScript `string`.

Returns `napi_ok` if the API succeeded.

This API implements the abstract operation `ToString()` as defined in the `ToString` section of the ECMAScript Language Specification. This function potentially runs JS code if the passed-in value is an object.
napi_typeof#
```c
napi_status napi_typeof(napi_env env, napi_value value, napi_valuetype* result)
```

- `[in] env`: The environment that the API is invoked under.
- `[in] value`: The JavaScript value whose type to query.
- `[out] result`: The type of the JavaScript value.

Returns `napi_ok` if the API succeeded.

Returns `napi_invalid_arg` if the type of `value` is not a known ECMAScript type and `value` is not an External value.

This API represents behavior similar to invoking the `typeof` operator on the object as defined in the typeof operator section of the ECMAScript Language Specification. However, there are some differences:

- It has support for detecting an External value.
- It detects `null` as a separate type, while ECMAScript `typeof` would detect `object`.

If `value` has a type that is invalid, an error is returned.
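A typical use is dispatching on the type of an incoming argument. A sketch (assuming `env` and `value` come from a Node-API callback):

```c
// Sketch: map a value's napi_valuetype to a descriptive string.
// Note the two differences from JS typeof mentioned above:
// null and external values are detected as their own types.
static const char* describe(napi_env env, napi_value value) {
  napi_valuetype type;
  if (napi_typeof(env, value, &type) != napi_ok) return "error";

  switch (type) {
    case napi_undefined: return "undefined";
    case napi_null:      return "null";      // JS typeof would say "object"
    case napi_boolean:   return "boolean";
    case napi_number:    return "number";
    case napi_string:    return "string";
    case napi_symbol:    return "symbol";
    case napi_object:    return "object";
    case napi_function:  return "function";
    case napi_external:  return "external";  // not detectable via JS typeof
    case napi_bigint:    return "bigint";
    default:             return "unknown";
  }
}
```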
napi_instanceof#
```c
napi_status napi_instanceof(napi_env env,
                            napi_value object,
                            napi_value constructor,
                            bool* result)
```

- `[in] env`: The environment that the API is invoked under.
- `[in] object`: The JavaScript value to check.
- `[in] constructor`: The JavaScript function object of the constructor function to check against.
- `[out] result`: Boolean that is set to true if `object instanceof constructor` is true.

Returns `napi_ok` if the API succeeded.

This API represents invoking the `instanceof` operator on the object as defined in the instanceof operator section of the ECMAScript Language Specification.
napi_is_array#
```c
napi_status napi_is_array(napi_env env, napi_value value, bool* result)
```

- `[in] env`: The environment that the API is invoked under.
- `[in] value`: The JavaScript value to check.
- `[out] result`: Whether the given object is an array.

Returns `napi_ok` if the API succeeded.

This API represents invoking the `IsArray` operation on the object as defined in the `IsArray` section of the ECMAScript Language Specification.
napi_is_arraybuffer#
```c
napi_status napi_is_arraybuffer(napi_env env, napi_value value, bool* result)
```

- `[in] env`: The environment that the API is invoked under.
- `[in] value`: The JavaScript value to check.
- `[out] result`: Whether the given object is an `ArrayBuffer`.

Returns `napi_ok` if the API succeeded.

This API checks if the `Object` passed in is an array buffer.
napi_is_buffer#
```c
napi_status napi_is_buffer(napi_env env, napi_value value, bool* result)
```

- `[in] env`: The environment that the API is invoked under.
- `[in] value`: The JavaScript value to check.
- `[out] result`: Whether the given `napi_value` represents a `node::Buffer` or `Uint8Array` object.

Returns `napi_ok` if the API succeeded.

This API checks if the `Object` passed in is a buffer or `Uint8Array`. `napi_is_typedarray` should be preferred if the caller needs to check if the value is a `Uint8Array`.
napi_is_date#
```c
napi_status napi_is_date(napi_env env, napi_value value, bool* result)
```

- `[in] env`: The environment that the API is invoked under.
- `[in] value`: The JavaScript value to check.
- `[out] result`: Whether the given `napi_value` represents a JavaScript `Date` object.

Returns `napi_ok` if the API succeeded.

This API checks if the `Object` passed in is a date.
napi_is_error#
```c
napi_status napi_is_error(napi_env env, napi_value value, bool* result)
```

- `[in] env`: The environment that the API is invoked under.
- `[in] value`: The JavaScript value to check.
- `[out] result`: Whether the given `napi_value` represents an `Error` object.

Returns `napi_ok` if the API succeeded.

This API checks if the `Object` passed in is an `Error`.
napi_is_typedarray#
```c
napi_status napi_is_typedarray(napi_env env, napi_value value, bool* result)
```

- `[in] env`: The environment that the API is invoked under.
- `[in] value`: The JavaScript value to check.
- `[out] result`: Whether the given `napi_value` represents a `TypedArray`.

Returns `napi_ok` if the API succeeded.

This API checks if the `Object` passed in is a typed array.
napi_is_dataview#
```c
napi_status napi_is_dataview(napi_env env, napi_value value, bool* result)
```

- `[in] env`: The environment that the API is invoked under.
- `[in] value`: The JavaScript value to check.
- `[out] result`: Whether the given `napi_value` represents a `DataView`.

Returns `napi_ok` if the API succeeded.

This API checks if the `Object` passed in is a `DataView`.
napi_strict_equals#
```c
napi_status napi_strict_equals(napi_env env,
                               napi_value lhs,
                               napi_value rhs,
                               bool* result)
```

- `[in] env`: The environment that the API is invoked under.
- `[in] lhs`: The JavaScript value to check.
- `[in] rhs`: The JavaScript value to check against.
- `[out] result`: Whether the two `napi_value` objects are equal.

Returns `napi_ok` if the API succeeded.

This API represents the invocation of the Strict Equality algorithm as defined in the `IsStrictlyEqual` section of the ECMAScript Language Specification.
napi_detach_arraybuffer#
```c
napi_status napi_detach_arraybuffer(napi_env env, napi_value arraybuffer)
```

- `[in] env`: The environment that the API is invoked under.
- `[in] arraybuffer`: The JavaScript `ArrayBuffer` to be detached.

Returns `napi_ok` if the API succeeded. If a non-detachable `ArrayBuffer` is passed in it returns `napi_detachable_arraybuffer_expected`.

Generally, an `ArrayBuffer` is non-detachable if it has been detached before. The engine may impose additional conditions on whether an `ArrayBuffer` is detachable. For example, V8 requires that the `ArrayBuffer` be external, that is, created with `napi_create_external_arraybuffer`.

This API represents the invocation of the `ArrayBuffer` detach operation as defined in the `DetachArrayBuffer` section of the ECMAScript Language Specification.
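One way to handle the detachability constraint above is to tolerate the non-detachable case explicitly. A sketch (assuming `env` and `arraybuffer` come from a Node-API callback):

```c
// Sketch: detach an ArrayBuffer, treating a non-detachable buffer
// (e.g. one that was already detached) as a no-op rather than an error.
static napi_status try_detach(napi_env env, napi_value arraybuffer) {
  napi_status status = napi_detach_arraybuffer(env, arraybuffer);
  if (status == napi_detachable_arraybuffer_expected) {
    // Already detached, or the engine considers it non-detachable.
    return napi_ok;
  }
  return status;
}
```

Whether silently ignoring a non-detachable buffer is acceptable depends on the caller; propagating the status instead is equally valid.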
napi_is_detached_arraybuffer#
```c
napi_status napi_is_detached_arraybuffer(napi_env env,
                                         napi_value arraybuffer,
                                         bool* result)
```

- `[in] env`: The environment that the API is invoked under.
- `[in] arraybuffer`: The JavaScript `ArrayBuffer` to be checked.
- `[out] result`: Whether the `arraybuffer` is detached.

Returns `napi_ok` if the API succeeded.

The `ArrayBuffer` is considered detached if its internal data is `null`.

This API represents the invocation of the `ArrayBuffer` `IsDetachedBuffer` operation as defined in the `IsDetachedBuffer` section of the ECMAScript Language Specification.
node_api_is_sharedarraybuffer#
```c
napi_status node_api_is_sharedarraybuffer(napi_env env,
                                          napi_value value,
                                          bool* result)
```

- `[in] env`: The environment that the API is invoked under.
- `[in] value`: The JavaScript value to check.
- `[out] result`: Whether the given `napi_value` represents a `SharedArrayBuffer`.

Returns `napi_ok` if the API succeeded.

This API checks if the `Object` passed in is a `SharedArrayBuffer`.
node_api_create_sharedarraybuffer#
```c
napi_status node_api_create_sharedarraybuffer(napi_env env,
                                              size_t byte_length,
                                              void** data,
                                              napi_value* result)
```

- `[in] env`: The environment that the API is invoked under.
- `[in] byte_length`: The length in bytes of the shared array buffer to create.
- `[out] data`: Pointer to the underlying byte buffer of the `SharedArrayBuffer`. `data` can optionally be ignored by passing `NULL`.
- `[out] result`: A `napi_value` representing a JavaScript `SharedArrayBuffer`.

Returns `napi_ok` if the API succeeded.

This API returns a Node-API value corresponding to a JavaScript `SharedArrayBuffer`. `SharedArrayBuffer`s are used to represent fixed-length binary data buffers that can be shared across multiple workers.

The `SharedArrayBuffer` allocated will have an underlying byte buffer whose size is determined by the `byte_length` parameter that's passed in. The underlying buffer is optionally returned back to the caller in case the caller wants to directly manipulate the buffer. This buffer can only be written to directly from native code. To write to this buffer from JavaScript, a typed array or `DataView` object would need to be created.

JavaScript `SharedArrayBuffer` objects are described in the SharedArrayBuffer objects section of the ECMAScript Language Specification.
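As a sketch of the direct-native-write behavior described above (assuming `env` comes from a Node-API callback; the 1 KiB size is arbitrary):

```c
#include <string.h>

// Sketch: create a 1 KiB SharedArrayBuffer and zero-fill it from native code.
static napi_status make_shared(napi_env env, napi_value* result) {
  void* data = NULL;
  napi_status status =
      node_api_create_sharedarraybuffer(env, 1024, &data, result);
  if (status != napi_ok) return status;

  // Direct writes are allowed only from native code; JavaScript would
  // need a typed array or DataView over the buffer.
  memset(data, 0, 1024);
  return napi_ok;
}
```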
Working with JavaScript properties#
Node-API exposes a set of APIs to get and set properties on JavaScript objects.

Properties in JavaScript are represented as a tuple of a key and a value. Fundamentally, all property keys in Node-API can be represented in one of the following forms:

- Named: a simple UTF8-encoded string
- Integer-Indexed: an index value represented by `uint32_t`
- JavaScript value: these are represented in Node-API by `napi_value`. This can be a `napi_value` representing a `string`, `number`, or `symbol`.

Node-API values are represented by the type `napi_value`. Any Node-API call that requires a JavaScript value takes in a `napi_value`. However, it's the caller's responsibility to make sure that the `napi_value` in question is of the JavaScript type expected by the API.

The APIs documented in this section provide a simple interface to get and set properties on arbitrary JavaScript objects represented by `napi_value`.
For instance, consider the following JavaScript code snippet:
```js
const obj = {};
obj.myProp = 123;
```

The equivalent can be done using Node-API values with the following snippet:

```c
napi_status status = napi_generic_failure;

// const obj = {}
napi_value obj, value;
status = napi_create_object(env, &obj);
if (status != napi_ok) return status;

// Create a napi_value for 123
status = napi_create_int32(env, 123, &value);
if (status != napi_ok) return status;

// obj.myProp = 123
status = napi_set_named_property(env, obj, "myProp", value);
if (status != napi_ok) return status;
```

Indexed properties can be set in a similar manner. Consider the following JavaScript snippet:
```js
const arr = [];
arr[123] = 'hello';
```

The equivalent can be done using Node-API values with the following snippet:

```c
napi_status status = napi_generic_failure;

// const arr = [];
napi_value arr, value;
status = napi_create_array(env, &arr);
if (status != napi_ok) return status;

// Create a napi_value for 'hello'
status = napi_create_string_utf8(env, "hello", NAPI_AUTO_LENGTH, &value);
if (status != napi_ok) return status;

// arr[123] = 'hello';
status = napi_set_element(env, arr, 123, value);
if (status != napi_ok) return status;
```

Properties can be retrieved using the APIs described in this section. Consider the following JavaScript snippet:
```js
const arr = [];
const value = arr[123];
```

The following is the approximate equivalent of the Node-API counterpart:

```c
napi_status status = napi_generic_failure;

// const arr = []
napi_value arr, value;
status = napi_create_array(env, &arr);
if (status != napi_ok) return status;

// const value = arr[123]
status = napi_get_element(env, arr, 123, &value);
if (status != napi_ok) return status;
```

Finally, multiple properties can also be defined on an object for performance reasons. The equivalent can be done using Node-API values with the following snippet:

```c
napi_status status = napi_generic_failure;

napi_value obj;
status = napi_create_object(env, &obj);
if (status != napi_ok) return status;

// Create napi_values for the property values
napi_value fooValue, barValue;
status = napi_create_int32(env, 123, &fooValue);
if (status != napi_ok) return status;
status = napi_create_int32(env, 456, &barValue);
if (status != napi_ok) return status;

// Set the properties
napi_property_descriptor descriptors[] = {
  { "foo", NULL, NULL, NULL, NULL, fooValue,
    napi_writable | napi_configurable, NULL },
  { "bar", NULL, NULL, NULL, NULL, barValue,
    napi_writable | napi_configurable, NULL }
};
status = napi_define_properties(env,
                                obj,
                                sizeof(descriptors) / sizeof(descriptors[0]),
                                descriptors);
if (status != napi_ok) return status;
```
Structures#
napi_property_attributes#
History
| Version | Changes |
|---|---|
| v14.12.0 | added `napi_default_method` and `napi_default_jsproperty` |
```c
typedef enum {
  napi_default = 0,
  napi_writable = 1 << 0,
  napi_enumerable = 1 << 1,
  napi_configurable = 1 << 2,

  // Used with napi_define_class to distinguish static properties
  // from instance properties. Ignored by napi_define_properties.
  napi_static = 1 << 10,

  // Default for class methods.
  napi_default_method = napi_writable | napi_configurable,

  // Default for object properties, like in JS obj[prop].
  napi_default_jsproperty = napi_writable |
                            napi_enumerable |
                            napi_configurable,
} napi_property_attributes;
```

`napi_property_attributes` are bit flags used to control the behavior of properties set on a JavaScript object. Other than `napi_static` they correspond to the attributes listed in the property attributes section of the ECMAScript Language Specification. They can be one or more of the following bit flags:

- `napi_default`: No explicit attributes are set on the property. By default, a property is read only, not enumerable and not configurable.
- `napi_writable`: The property is writable.
- `napi_enumerable`: The property is enumerable.
- `napi_configurable`: The property is configurable as defined in the property attributes section of the ECMAScript Language Specification.
- `napi_static`: The property will be defined as a static property on a class as opposed to an instance property, which is the default. This is used only by `napi_define_class`. It is ignored by `napi_define_properties`.
- `napi_default_method`: Like a method in a JS class, the property is configurable and writeable, but not enumerable.
- `napi_default_jsproperty`: Like a property set via assignment in JavaScript, the property is writable, enumerable, and configurable.
napi_property_descriptor#
```c
typedef struct {
  // One of utf8name or name should be NULL.
  const char* utf8name;
  napi_value name;

  napi_callback method;
  napi_callback getter;
  napi_callback setter;
  napi_value value;

  napi_property_attributes attributes;
  void* data;
} napi_property_descriptor;
```

- `utf8name`: Optional string describing the key for the property, encoded as UTF8. One of `utf8name` or `name` must be provided for the property.
- `name`: Optional `napi_value` that points to a JavaScript string or symbol to be used as the key for the property. One of `utf8name` or `name` must be provided for the property.
- `value`: The value that's retrieved by a get access of the property if the property is a data property. If this is passed in, set `getter`, `setter`, `method` and `data` to `NULL` (since these members won't be used).
- `getter`: A function to call when a get access of the property is performed. If this is passed in, set `value` and `method` to `NULL` (since these members won't be used). The given function is called implicitly by the runtime when the property is accessed from JavaScript code (or if a get on the property is performed using a Node-API call). `napi_callback` provides more details.
- `setter`: A function to call when a set access of the property is performed. If this is passed in, set `value` and `method` to `NULL` (since these members won't be used). The given function is called implicitly by the runtime when the property is set from JavaScript code (or if a set on the property is performed using a Node-API call). `napi_callback` provides more details.
- `method`: Set this to make the property descriptor object's `value` property a JavaScript function represented by `method`. If this is passed in, set `value`, `getter` and `setter` to `NULL` (since these members won't be used). `napi_callback` provides more details.
- `attributes`: The attributes associated with the particular property. See `napi_property_attributes`.
- `data`: The callback data passed into `method`, `getter` and `setter` if this function is invoked.
Functions#
napi_get_property_names#
```c
napi_status napi_get_property_names(napi_env env,
                                    napi_value object,
                                    napi_value* result);
```

- `[in] env`: The environment that the Node-API call is invoked under.
- `[in] object`: The object from which to retrieve the properties.
- `[out] result`: A `napi_value` representing an array of JavaScript values that represent the property names of the object. The API can be used to iterate over `result` using `napi_get_array_length` and `napi_get_element`.

Returns `napi_ok` if the API succeeded.

This API returns the names of the enumerable properties of `object` as an array of strings. The properties of `object` whose key is a symbol will not be included.
napi_get_all_property_names#
```c
napi_status napi_get_all_property_names(napi_env env,
                                        napi_value object,
                                        napi_key_collection_mode key_mode,
                                        napi_key_filter key_filter,
                                        napi_key_conversion key_conversion,
                                        napi_value* result);
```

- `[in] env`: The environment that the Node-API call is invoked under.
- `[in] object`: The object from which to retrieve the properties.
- `[in] key_mode`: Whether to retrieve prototype properties as well.
- `[in] key_filter`: Which properties to retrieve (enumerable/readable/writable).
- `[in] key_conversion`: Whether to convert numbered property keys to strings.
- `[out] result`: A `napi_value` representing an array of JavaScript values that represent the property names of the object. `napi_get_array_length` and `napi_get_element` can be used to iterate over `result`.

Returns `napi_ok` if the API succeeded.

This API returns an array containing the names of the available properties of this object.
napi_set_property#
```c
napi_status napi_set_property(napi_env env,
                              napi_value object,
                              napi_value key,
                              napi_value value);
```

- `[in] env`: The environment that the Node-API call is invoked under.
- `[in] object`: The object on which to set the property.
- `[in] key`: The name of the property to set.
- `[in] value`: The property value.

Returns `napi_ok` if the API succeeded.

This API sets a property on the `Object` passed in.
napi_get_property#
```c
napi_status napi_get_property(napi_env env,
                              napi_value object,
                              napi_value key,
                              napi_value* result);
```

- `[in] env`: The environment that the Node-API call is invoked under.
- `[in] object`: The object from which to retrieve the property.
- `[in] key`: The name of the property to retrieve.
- `[out] result`: The value of the property.

Returns `napi_ok` if the API succeeded.

This API gets the requested property from the `Object` passed in.
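A minimal sketch combining the two calls above: set a property, then read it back. It assumes a valid `env` and `object` obtained inside a Node-API callback; the function name is hypothetical and error handling is abbreviated.

```c
#include <node_api.h>

// Sketch: object.answer = 42, then read it back.
// Assumes `env` and `object` are valid inside a Node-API callback.
static napi_status set_and_get_answer(napi_env env, napi_value object) {
  napi_value key, value, read_back;
  napi_status status;

  status = napi_create_string_utf8(env, "answer", NAPI_AUTO_LENGTH, &key);
  if (status != napi_ok) return status;

  status = napi_create_int32(env, 42, &value);
  if (status != napi_ok) return status;

  // object.answer = 42
  status = napi_set_property(env, object, key, value);
  if (status != napi_ok) return status;

  // const readBack = object.answer
  return napi_get_property(env, object, key, &read_back);
}
```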
napi_has_property#
```c
napi_status napi_has_property(napi_env env,
                              napi_value object,
                              napi_value key,
                              bool* result);
```

- `[in] env`: The environment that the Node-API call is invoked under.
- `[in] object`: The object to query.
- `[in] key`: The name of the property whose existence to check.
- `[out] result`: Whether the property exists on the object or not.

Returns `napi_ok` if the API succeeded.

This API checks if the `Object` passed in has the named property.
napi_delete_property#
```c
napi_status napi_delete_property(napi_env env,
                                 napi_value object,
                                 napi_value key,
                                 bool* result);
```

- `[in] env`: The environment that the Node-API call is invoked under.
- `[in] object`: The object to query.
- `[in] key`: The name of the property to delete.
- `[out] result`: Whether the property deletion succeeded or not. `result` can optionally be ignored by passing `NULL`.

Returns `napi_ok` if the API succeeded.

This API attempts to delete the `key` own property from `object`.
napi_has_own_property#
```c
napi_status napi_has_own_property(napi_env env,
                                  napi_value object,
                                  napi_value key,
                                  bool* result);
```

- `[in] env`: The environment that the Node-API call is invoked under.
- `[in] object`: The object to query.
- `[in] key`: The name of the own property whose existence to check.
- `[out] result`: Whether the own property exists on the object or not.

Returns `napi_ok` if the API succeeded.

This API checks if the `Object` passed in has the named own property. `key` must be a `string` or a `symbol`, or an error will be thrown. Node-API will not perform any conversion between data types.
napi_set_named_property#
```c
napi_status napi_set_named_property(napi_env env,
                                    napi_value object,
                                    const char* utf8Name,
                                    napi_value value);
```

- `[in] env`: The environment that the Node-API call is invoked under.
- `[in] object`: The object on which to set the property.
- `[in] utf8Name`: The name of the property to set.
- `[in] value`: The property value.

Returns `napi_ok` if the API succeeded.

This method is equivalent to calling `napi_set_property` with a `napi_value` created from the string passed in as `utf8Name`.
napi_get_named_property#
```c
napi_status napi_get_named_property(napi_env env,
                                    napi_value object,
                                    const char* utf8Name,
                                    napi_value* result);
```

- `[in] env`: The environment that the Node-API call is invoked under.
- `[in] object`: The object from which to retrieve the property.
- `[in] utf8Name`: The name of the property to get.
- `[out] result`: The value of the property.

Returns `napi_ok` if the API succeeded.

This method is equivalent to calling `napi_get_property` with a `napi_value` created from the string passed in as `utf8Name`.
napi_has_named_property#
```c
napi_status napi_has_named_property(napi_env env,
                                    napi_value object,
                                    const char* utf8Name,
                                    bool* result);
```

- `[in] env`: The environment that the Node-API call is invoked under.
- `[in] object`: The object to query.
- `[in] utf8Name`: The name of the property whose existence to check.
- `[out] result`: Whether the property exists on the object or not.

Returns `napi_ok` if the API succeeded.

This method is equivalent to calling `napi_has_property` with a `napi_value` created from the string passed in as `utf8Name`.
napi_set_element#
```c
napi_status napi_set_element(napi_env env,
                             napi_value object,
                             uint32_t index,
                             napi_value value);
```

- `[in] env`: The environment that the Node-API call is invoked under.
- `[in] object`: The object on which to set the property.
- `[in] index`: The index of the property to set.
- `[in] value`: The property value.

Returns `napi_ok` if the API succeeded.

This API sets an element on the `Object` passed in.
napi_get_element#
```c
napi_status napi_get_element(napi_env env,
                             napi_value object,
                             uint32_t index,
                             napi_value* result);
```

- `[in] env`: The environment that the Node-API call is invoked under.
- `[in] object`: The object from which to retrieve the property.
- `[in] index`: The index of the property to get.
- `[out] result`: The value of the property.

Returns `napi_ok` if the API succeeded.

This API gets the element at the requested index.
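The element APIs are commonly used together to build and read JavaScript arrays from native code. A sketch under the usual assumptions (valid `env` inside a Node-API callback; helper name is hypothetical; error handling abbreviated):

```c
#include <node_api.h>

// Sketch: build the array [10, 20, 30] and read one element back.
static napi_status build_array(napi_env env, napi_value* out) {
  napi_value arr, elem;
  napi_status status = napi_create_array(env, &arr);
  if (status != napi_ok) return status;

  for (uint32_t i = 0; i < 3; i++) {
    status = napi_create_int32(env, (int32_t)((i + 1) * 10), &elem);
    if (status != napi_ok) return status;
    status = napi_set_element(env, arr, i, elem);
    if (status != napi_ok) return status;
  }

  // const second = arr[1];
  napi_value second;
  status = napi_get_element(env, arr, 1, &second);
  if (status != napi_ok) return status;

  *out = arr;
  return napi_ok;
}
```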
napi_has_element#
```c
napi_status napi_has_element(napi_env env,
                             napi_value object,
                             uint32_t index,
                             bool* result);
```

- `[in] env`: The environment that the Node-API call is invoked under.
- `[in] object`: The object to query.
- `[in] index`: The index of the property whose existence to check.
- `[out] result`: Whether the property exists on the object or not.

Returns `napi_ok` if the API succeeded.

This API returns whether the `Object` passed in has an element at the requested index.
napi_delete_element#
```c
napi_status napi_delete_element(napi_env env,
                                napi_value object,
                                uint32_t index,
                                bool* result);
```

- `[in] env`: The environment that the Node-API call is invoked under.
- `[in] object`: The object to query.
- `[in] index`: The index of the property to delete.
- `[out] result`: Whether the element deletion succeeded or not. `result` can optionally be ignored by passing `NULL`.

Returns `napi_ok` if the API succeeded.

This API attempts to delete the specified `index` from `object`.
napi_define_properties#
```c
napi_status napi_define_properties(napi_env env,
                                   napi_value object,
                                   size_t property_count,
                                   const napi_property_descriptor* properties);
```

- `[in] env`: The environment that the Node-API call is invoked under.
- `[in] object`: The object on which to define the properties.
- `[in] property_count`: The number of elements in the `properties` array.
- `[in] properties`: The array of property descriptors.

Returns `napi_ok` if the API succeeded.

This method allows the efficient definition of multiple properties on a given object. The properties are defined using property descriptors (see `napi_property_descriptor`). Given an array of such property descriptors, this API will set the properties on the object one at a time, as defined by `DefineOwnProperty()` (described in Section DefineOwnProperty of the ECMA-262 specification).
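A sketch of defining a data property and a method in one call, following the descriptor layout shown earlier. `Multiply` and the helper name are hypothetical; a valid `env`, `object`, and `the_value` from a Node-API callback are assumed.

```c
#include <node_api.h>

// Hypothetical napi_callback implemented elsewhere in the add-on.
static napi_value Multiply(napi_env env, napi_callback_info info);

// Sketch: define a data property and a method on `object` in one call.
static napi_status define_props(napi_env env, napi_value object,
                                napi_value the_value) {
  // Fields: utf8name, name, method, getter, setter, value, attributes, data.
  napi_property_descriptor descriptors[] = {
    { "value", NULL, NULL, NULL, NULL, the_value,
      napi_default_jsproperty, NULL },
    { "multiply", NULL, Multiply, NULL, NULL, NULL,
      napi_default_method, NULL },
  };
  return napi_define_properties(
      env, object,
      sizeof(descriptors) / sizeof(descriptors[0]),
      descriptors);
}
```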
napi_object_freeze#
```c
napi_status napi_object_freeze(napi_env env,
                               napi_value object);
```

- `[in] env`: The environment that the Node-API call is invoked under.
- `[in] object`: The object to freeze.

Returns `napi_ok` if the API succeeded.

This method freezes a given object. This prevents new properties from being added to it, existing properties from being removed, prevents changing the enumerability, configurability, or writability of existing properties, and prevents the values of existing properties from being changed. It also prevents the object's prototype from being changed. This is described in Section 19.1.2.6 of the ECMA-262 specification.
napi_object_seal#
```c
napi_status napi_object_seal(napi_env env,
                             napi_value object);
```

- `[in] env`: The environment that the Node-API call is invoked under.
- `[in] object`: The object to seal.

Returns `napi_ok` if the API succeeded.

This method seals a given object. This prevents new properties from being added to it, as well as marking all existing properties as non-configurable. This is described in Section 19.1.2.20 of the ECMA-262 specification.
node_api_set_prototype#
```c
napi_status node_api_set_prototype(napi_env env,
                                   napi_value object,
                                   napi_value value);
```

- `[in] env`: The environment that the Node-API call is invoked under.
- `[in] object`: The object on which to set the prototype.
- `[in] value`: The prototype value.

Returns `napi_ok` if the API succeeded.

This API sets the prototype of the `Object` passed in.
Working with JavaScript functions#
Node-API provides a set of APIs that allow JavaScript code to call back into native code. Node-APIs that support calling back into native code take in a callback function represented by the `napi_callback` type. When the JavaScript VM calls back to native code, the `napi_callback` function provided is invoked. The APIs documented in this section allow the callback function to do the following:
- Get information about the context in which the callback was invoked.
- Get the arguments passed into the callback.
- Return a `napi_value` back from the callback.
Additionally, Node-API provides a set of functions which allow calling JavaScript functions from native code. One can either call a function like a regular JavaScript function call, or as a constructor function.

Any non-`NULL` data which is passed to this API via the `data` field of the `napi_property_descriptor` items can be associated with `object` and freed whenever `object` is garbage-collected by passing both `object` and the data to `napi_add_finalizer`.
napi_call_function#
```c
NAPI_EXTERN napi_status napi_call_function(napi_env env,
                                           napi_value recv,
                                           napi_value func,
                                           size_t argc,
                                           const napi_value* argv,
                                           napi_value* result);
```

- `[in] env`: The environment that the API is invoked under.
- `[in] recv`: The `this` value passed to the called function.
- `[in] func`: `napi_value` representing the JavaScript function to be invoked.
- `[in] argc`: The count of elements in the `argv` array.
- `[in] argv`: Array of `napi_value`s representing JavaScript values passed in as arguments to the function.
- `[out] result`: `napi_value` representing the JavaScript object returned.

Returns `napi_ok` if the API succeeded.

This method allows a JavaScript function object to be called from a native add-on. This is the primary mechanism of calling back from the add-on's native code into JavaScript. For the special case of calling into JavaScript after an async operation, see `napi_make_callback`.
A sample use case might look as follows. Consider the following JavaScriptsnippet:
```js
function AddTwo(num) {
  return num + 2;
}
global.AddTwo = AddTwo;
```

Then, the above function can be invoked from a native add-on using the following code:

```c
// Get the function named "AddTwo" on the global object
napi_value global, add_two, arg;
napi_status status = napi_get_global(env, &global);
if (status != napi_ok) return;

status = napi_get_named_property(env, global, "AddTwo", &add_two);
if (status != napi_ok) return;

// const arg = 1337
status = napi_create_int32(env, 1337, &arg);
if (status != napi_ok) return;

napi_value* argv = &arg;
size_t argc = 1;

// AddTwo(arg);
napi_value return_val;
status = napi_call_function(env, global, add_two, argc, argv, &return_val);
if (status != napi_ok) return;

// Convert the result back to a native type
int32_t result;
status = napi_get_value_int32(env, return_val, &result);
if (status != napi_ok) return;
```

napi_create_function#
```c
napi_status napi_create_function(napi_env env,
                                 const char* utf8name,
                                 size_t length,
                                 napi_callback cb,
                                 void* data,
                                 napi_value* result);
```

- `[in] env`: The environment that the API is invoked under.
- `[in] utf8name`: Optional name of the function encoded as UTF8. This is visible within JavaScript as the new function object's `name` property.
- `[in] length`: The length of the `utf8name` in bytes, or `NAPI_AUTO_LENGTH` if it is null-terminated.
- `[in] cb`: The native function which should be called when this function object is invoked. `napi_callback` provides more details.
- `[in] data`: User-provided data context. This will be passed back into the function when invoked later.
- `[out] result`: `napi_value` representing the JavaScript function object for the newly created function.

Returns `napi_ok` if the API succeeded.

This API allows an add-on author to create a function object in native code. This is the primary mechanism to allow calling into the add-on's native code from JavaScript.
The newly created function is not automatically visible from script after this call. Instead, a property must be explicitly set on any object that is visible to JavaScript, in order for the function to be accessible from script.

In order to expose a function as part of the add-on's module exports, set the newly created function on the exports object. A sample module might look as follows:
```c
napi_value SayHello(napi_env env, napi_callback_info info) {
  printf("Hello\n");
  return NULL;
}

napi_value Init(napi_env env, napi_value exports) {
  napi_status status;
  napi_value fn;

  status = napi_create_function(env, NULL, 0, SayHello, NULL, &fn);
  if (status != napi_ok) return NULL;

  status = napi_set_named_property(env, exports, "sayHello", fn);
  if (status != napi_ok) return NULL;

  return exports;
}

NAPI_MODULE(NODE_GYP_MODULE_NAME, Init)
```

Given the above code, the add-on can be used from JavaScript as follows:

```js
const myaddon = require('./addon');
myaddon.sayHello();
```

The string passed to `require()` is the name of the target in `binding.gyp` responsible for creating the `.node` file.
Any non-`NULL` data which is passed to this API via the `data` parameter can be associated with the resulting JavaScript function (which is returned in the `result` parameter) and freed whenever the function is garbage-collected by passing both the JavaScript function and the data to `napi_add_finalizer`.

JavaScript `Function`s are described in Section Function objects of the ECMAScript Language Specification.
napi_get_cb_info#
```c
napi_status napi_get_cb_info(napi_env env,
                             napi_callback_info cbinfo,
                             size_t* argc,
                             napi_value* argv,
                             napi_value* thisArg,
                             void** data)
```

- `[in] env`: The environment that the API is invoked under.
- `[in] cbinfo`: The callback info passed into the callback function.
- `[in-out] argc`: Specifies the length of the provided `argv` array and receives the actual count of arguments. `argc` can optionally be ignored by passing `NULL`.
- `[out] argv`: C array of `napi_value`s to which the arguments will be copied. If there are more arguments than the provided count, only the requested number of arguments are copied. If there are fewer arguments provided than claimed, the rest of `argv` is filled with `napi_value` values that represent `undefined`. `argv` can optionally be ignored by passing `NULL`.
- `[out] thisArg`: Receives the JavaScript `this` argument for the call. `thisArg` can optionally be ignored by passing `NULL`.
- `[out] data`: Receives the data pointer for the callback. `data` can optionally be ignored by passing `NULL`.

Returns `napi_ok` if the API succeeded.

This method is used within a callback function to retrieve details about the call, like the arguments and the `this` pointer, from a given callback info.
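A sketch of the typical callback prologue that uses this API. The callback name is hypothetical; it assumes it is registered as a `napi_callback` elsewhere in the add-on.

```c
#include <node_api.h>

// Typical napi_callback prologue: fetch up to 2 arguments, `this`,
// and the data pointer supplied when the function was created.
static napi_value Example(napi_env env, napi_callback_info info) {
  size_t argc = 2;          // in: capacity of argv; out: actual argument count
  napi_value argv[2];
  napi_value js_this;
  void* data;

  napi_status status =
      napi_get_cb_info(env, info, &argc, argv, &js_this, &data);
  if (status != napi_ok) return NULL;

  // Missing arguments arrive as `undefined`, so argv[0] and argv[1]
  // are always safe to inspect here.
  return argv[0];
}
```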
napi_get_new_target#
```c
napi_status napi_get_new_target(napi_env env,
                                napi_callback_info cbinfo,
                                napi_value* result)
```

- `[in] env`: The environment that the API is invoked under.
- `[in] cbinfo`: The callback info passed into the callback function.
- `[out] result`: The `new.target` of the constructor call.

Returns `napi_ok` if the API succeeded.

This API returns the `new.target` of the constructor call. If the current callback is not a constructor call, the result is `NULL`.
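A common use is rejecting plain calls to a constructor callback, i.e. requiring `new`. A sketch (callback name hypothetical; construction logic elided):

```c
#include <node_api.h>

// Sketch: require that this constructor callback was invoked with `new`.
static napi_value Constructor(napi_env env, napi_callback_info info) {
  napi_value new_target;
  napi_status status = napi_get_new_target(env, info, &new_target);
  if (status != napi_ok) return NULL;

  if (new_target == NULL) {
    // Called without `new`.
    napi_throw_type_error(env, NULL, "Constructor requires 'new'");
    return NULL;
  }

  // ... construct and return the instance ...
  return NULL;
}
```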
napi_new_instance#
```c
napi_status napi_new_instance(napi_env env,
                              napi_value cons,
                              size_t argc,
                              napi_value* argv,
                              napi_value* result)
```

- `[in] env`: The environment that the API is invoked under.
- `[in] cons`: `napi_value` representing the JavaScript function to be invoked as a constructor.
- `[in] argc`: The count of elements in the `argv` array.
- `[in] argv`: Array of JavaScript values as `napi_value` representing the arguments to the constructor. If `argc` is zero this parameter may be omitted by passing in `NULL`.
- `[out] result`: `napi_value` representing the JavaScript object returned, which in this case is the constructed object.

Returns `napi_ok` if the API succeeded.

This method is used to instantiate a new JavaScript value using a given `napi_value` that represents the constructor for the object. For example, consider the following snippet:

```js
function MyObject(param) {
  this.param = param;
}

const arg = 'hello';
const value = new MyObject(arg);
```

The following can be approximated in Node-API using the following snippet:

```c
// Get the constructor function MyObject
napi_value global, constructor, arg, value;
napi_status status = napi_get_global(env, &global);
if (status != napi_ok) return;

status = napi_get_named_property(env, global, "MyObject", &constructor);
if (status != napi_ok) return;

// const arg = "hello"
status = napi_create_string_utf8(env, "hello", NAPI_AUTO_LENGTH, &arg);
if (status != napi_ok) return;

napi_value* argv = &arg;
size_t argc = 1;

// const value = new MyObject(arg)
status = napi_new_instance(env, constructor, argc, argv, &value);
```
Object wrap#
Node-API offers a way to "wrap" C++ classes and instances so that the classconstructor and methods can be called from JavaScript.
- The `napi_define_class` API defines a JavaScript class with constructor, static properties and methods, and instance properties and methods that correspond to the C++ class.
- When JavaScript code invokes the constructor, the constructor callback uses `napi_wrap` to wrap a new C++ instance in a JavaScript object, then returns the wrapper object.
- When JavaScript code invokes a method or property accessor on the class, the corresponding `napi_callback` C++ function is invoked. For an instance callback, `napi_unwrap` obtains the C++ instance that is the target of the call.
For wrapped objects it may be difficult to distinguish between a function called on a class prototype and a function called on an instance of a class. A common pattern used to address this problem is to save a persistent reference to the class constructor for later `instanceof` checks.

```cpp
napi_value MyClass_constructor = NULL;
status = napi_get_reference_value(env, MyClass::es_constructor, &MyClass_constructor);
assert(napi_ok == status);
bool is_instance = false;
status = napi_instanceof(env, es_this, MyClass_constructor, &is_instance);
assert(napi_ok == status);
if (is_instance) {
  // napi_unwrap() ...
} else {
  // otherwise...
}
```

The reference must be freed once it is no longer needed.
There are occasions where `napi_instanceof()` is insufficient for ensuring that a JavaScript object is a wrapper for a certain native type. This is the case especially when wrapped JavaScript objects are passed back into the addon via static methods rather than as the `this` value of prototype methods. In such cases there is a chance that they may be unwrapped incorrectly.

```js
const myAddon = require('./build/Release/my_addon.node');

// `openDatabase()` returns a JavaScript object that wraps a native database
// handle.
const dbHandle = myAddon.openDatabase();

// `query()` returns a JavaScript object that wraps a native query handle.
const queryHandle = myAddon.query(dbHandle, 'Gimme ALL the things!');

// There is an accidental error in the line below. The first parameter to
// `myAddon.queryHasRecords()` should be the database handle (`dbHandle`), not
// the query handle (`query`), so the correct condition for the while-loop
// should be
//
// myAddon.queryHasRecords(dbHandle, queryHandle)
//
while (myAddon.queryHasRecords(queryHandle, dbHandle)) {
  // retrieve records
}
```

In the above example `myAddon.queryHasRecords()` is a method that accepts two arguments. The first is a database handle and the second is a query handle. Internally, it unwraps the first argument and casts the resulting pointer to a native database handle. It then unwraps the second argument and casts the resulting pointer to a query handle. If the arguments are passed in the wrong order, the casts will work; however, there is a good chance that the underlying database operation will fail, or will even cause an invalid memory access.

To ensure that the pointer retrieved from the first argument is indeed a pointer to a database handle and, similarly, that the pointer retrieved from the second argument is indeed a pointer to a query handle, the implementation of `queryHasRecords()` has to perform a type validation. Retaining the JavaScript class constructor from which the database handle was instantiated and the constructor from which the query handle was instantiated in `napi_ref`s can help, because `napi_instanceof()` can then be used to ensure that the instances passed into `queryHasRecords()` are indeed of the correct type.
Unfortunately, `napi_instanceof()` does not protect against prototype manipulation. For example, the prototype of the database handle instance can be set to the prototype of the constructor for query handle instances. In this case, the database handle instance can appear as a query handle instance, and it will pass the `napi_instanceof()` test for a query handle instance, while still containing a pointer to a database handle.
To this end, Node-API provides type-tagging capabilities.
A type tag is a 128-bit integer unique to the addon. Node-API provides the `napi_type_tag` structure for storing a type tag. When such a value is passed along with a JavaScript object or external stored in a `napi_value` to `napi_type_tag_object()`, the JavaScript object will be "marked" with the type tag. The "mark" is invisible on the JavaScript side. When a JavaScript object arrives into a native binding, `napi_check_object_type_tag()` can be used along with the original type tag to determine whether the JavaScript object was previously "marked" with the type tag. This creates a type-checking capability of a higher fidelity than `napi_instanceof()` can provide, because such type-tagging survives prototype manipulation and addon unloading/reloading.

Continuing the above example, the following skeleton addon implementation illustrates the use of `napi_type_tag_object()` and `napi_check_object_type_tag()`.
```c
// This value is the type tag for a database handle. The command
//
//   uuidgen | sed -r -e 's/-//g' -e 's/(.{16})(.*)/0x\1, 0x\2/'
//
// can be used to obtain the two values with which to initialize the structure.
static const napi_type_tag DatabaseHandleTypeTag = {
  0x1edf75a38336451d, 0xa5ed9ce2e4c00c38
};

// This value is the type tag for a query handle.
static const napi_type_tag QueryHandleTypeTag = {
  0x9c73317f9fad44a3, 0x93c3920bf3b0ad6a
};

static napi_value openDatabase(napi_env env, napi_callback_info info) {
  napi_status status;
  napi_value result;

  // Perform the underlying action which results in a database handle.
  DatabaseHandle* dbHandle = open_database();

  // Create a new, empty JS object.
  status = napi_create_object(env, &result);
  if (status != napi_ok) return NULL;

  // Tag the object to indicate that it holds a pointer to a `DatabaseHandle`.
  status = napi_type_tag_object(env, result, &DatabaseHandleTypeTag);
  if (status != napi_ok) return NULL;

  // Store the pointer to the `DatabaseHandle` structure inside the JS object.
  status = napi_wrap(env, result, dbHandle, NULL, NULL, NULL);
  if (status != napi_ok) return NULL;

  return result;
}

// Later when we receive a JavaScript object purporting to be a database handle
// we can use `napi_check_object_type_tag()` to ensure that it is indeed such a
// handle.
static napi_value query(napi_env env, napi_callback_info info) {
  napi_status status;
  size_t argc = 2;
  napi_value argv[2];
  bool is_db_handle;

  status = napi_get_cb_info(env, info, &argc, argv, NULL, NULL);
  if (status != napi_ok) return NULL;

  // Check that the object passed as the first parameter has the previously
  // applied tag.
  status = napi_check_object_type_tag(env, argv[0], &DatabaseHandleTypeTag,
                                      &is_db_handle);
  if (status != napi_ok) return NULL;

  // Throw a `TypeError` if it doesn't.
  if (!is_db_handle) {
    // Throw a TypeError.
    return NULL;
  }
}
```

napi_define_class#
```c
napi_status napi_define_class(napi_env env,
                              const char* utf8name,
                              size_t length,
                              napi_callback constructor,
                              void* data,
                              size_t property_count,
                              const napi_property_descriptor* properties,
                              napi_value* result);
```

- `[in] env`: The environment that the API is invoked under.
- `[in] utf8name`: Name of the JavaScript constructor function. For clarity, it is recommended to use the C++ class name when wrapping a C++ class.
- `[in] length`: The length of the `utf8name` in bytes, or `NAPI_AUTO_LENGTH` if it is null-terminated.
- `[in] constructor`: Callback function that handles constructing instances of the class. When wrapping a C++ class, this method must be a static member with the `napi_callback` signature. A C++ class constructor cannot be used. `napi_callback` provides more details.
- `[in] data`: Optional data to be passed to the constructor callback as the `data` property of the callback info.
- `[in] property_count`: Number of items in the `properties` array argument.
- `[in] properties`: Array of property descriptors describing static and instance data properties, accessors, and methods on the class. See `napi_property_descriptor`.
- `[out] result`: A `napi_value` representing the constructor function for the class.

Returns `napi_ok` if the API succeeded.
Defines a JavaScript class, including:
- A JavaScript constructor function that has the class name. When wrapping a corresponding C++ class, the callback passed via `constructor` can be used to instantiate a new C++ class instance, which can then be placed inside the JavaScript object instance being constructed using `napi_wrap`.
- Properties on the constructor function whose implementation can call corresponding static data properties, accessors, and methods of the C++ class (defined by property descriptors with the `napi_static` attribute).
- Properties on the constructor function's `prototype` object. When wrapping a C++ class, non-static data properties, accessors, and methods of the C++ class can be called from the static functions given in the property descriptors without the `napi_static` attribute, after retrieving the C++ class instance placed inside the JavaScript object instance by using `napi_unwrap`.
When wrapping a C++ class, the C++ constructor callback passed via `constructor` should be a static method on the class that calls the actual class constructor, then wraps the new C++ instance in a JavaScript object, and returns the wrapper object. See `napi_wrap` for details.

The JavaScript constructor function returned from `napi_define_class` is often saved and used later to construct new instances of the class from native code, and/or to check whether provided values are instances of the class. In that case, to prevent the function value from being garbage-collected, a strong persistent reference to it can be created using `napi_create_reference`, ensuring that the reference count is kept >= 1.

Any non-`NULL` data which is passed to this API via the `data` parameter or via the `data` field of the `napi_property_descriptor` array items can be associated with the resulting JavaScript constructor (which is returned in the `result` parameter) and freed whenever the class is garbage-collected by passing both the JavaScript function and the data to `napi_add_finalizer`.
napi_wrap#
```c
napi_status napi_wrap(napi_env env,
                      napi_value js_object,
                      void* native_object,
                      napi_finalize finalize_cb,
                      void* finalize_hint,
                      napi_ref* result);
```

- `[in] env`: The environment that the API is invoked under.
- `[in] js_object`: The JavaScript object that will be the wrapper for the native object.
- `[in] native_object`: The native instance that will be wrapped in the JavaScript object.
- `[in] finalize_cb`: Optional native callback that can be used to free the native instance when the JavaScript object has been garbage-collected. `napi_finalize` provides more details.
- `[in] finalize_hint`: Optional contextual hint that is passed to the finalize callback.
- `[out] result`: Optional reference to the wrapped object.

Returns `napi_ok` if the API succeeded.

Wraps a native instance in a JavaScript object. The native instance can be retrieved later using `napi_unwrap()`.
When JavaScript code invokes a constructor for a class that was defined using `napi_define_class()`, the `napi_callback` for the constructor is invoked. After constructing an instance of the native class, the callback must then call `napi_wrap()` to wrap the newly constructed instance in the already-created JavaScript object that is the `this` argument to the constructor callback. (That `this` object was created from the constructor function's `prototype`, so it already has definitions of all the instance properties and methods.)

Typically when wrapping a class instance, a finalize callback should be provided that simply deletes the native instance that is received as the `data` argument to the finalize callback.
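A sketch of such a finalize callback. The `NativeInstance` type and function names are hypothetical stand-ins for the add-on's own types:

```c
#include <node_api.h>
#include <stdlib.h>

// Hypothetical native type wrapped by the add-on.
typedef struct { int value; } NativeInstance;

// Typical finalize callback: free the native instance that was passed
// to napi_wrap() as `native_object`.
static void FinalizeInstance(napi_env env, void* data, void* hint) {
  (void)env;
  (void)hint;
  free((NativeInstance*)data);
}

// Inside the constructor callback, after creating the native instance:
//   status = napi_wrap(env, js_this, instance, FinalizeInstance, NULL, NULL);
```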
The optional returned reference is initially a weak reference, meaning it has a reference count of 0. Typically this reference count would be incremented temporarily during async operations that require the instance to remain valid.

Caution: The optional returned reference (if obtained) should be deleted via `napi_delete_reference` ONLY in response to the finalize callback invocation. If it is deleted before then, then the finalize callback may never be invoked. Therefore, when obtaining a reference a finalize callback is also required in order to enable correct disposal of the reference.

Finalizer callbacks may be deferred, leaving a window where the object has been garbage-collected (and the weak reference is invalid) but the finalizer hasn't been called yet. When using `napi_get_reference_value()` on weak references returned by `napi_wrap()`, you should still handle an empty result.

Calling `napi_wrap()` a second time on an object will return an error. To associate another native instance with the object, use `napi_remove_wrap()` first.
napi_unwrap#
```c
napi_status napi_unwrap(napi_env env,
                        napi_value js_object,
                        void** result);
```

- `[in] env`: The environment that the API is invoked under.
- `[in] js_object`: The object associated with the native instance.
- `[out] result`: Pointer to the wrapped native instance.

Returns `napi_ok` if the API succeeded.

Retrieves a native instance that was previously wrapped in a JavaScript object using `napi_wrap()`.

When JavaScript code invokes a method or property accessor on the class, the corresponding `napi_callback` is invoked. If the callback is for an instance method or accessor, then the `this` argument to the callback is the wrapper object; the wrapped C++ instance that is the target of the call can then be obtained by calling `napi_unwrap()` on the wrapper object.
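A sketch of an instance method that recovers the native instance from `this`. The wrapped type and method name are hypothetical:

```c
#include <node_api.h>

// Hypothetical wrapped type.
typedef struct { int value; } NativeInstance;

// Sketch of an instance method: recover the C instance from `this`.
static napi_value GetValue(napi_env env, napi_callback_info info) {
  napi_value js_this;
  napi_status status =
      napi_get_cb_info(env, info, NULL, NULL, &js_this, NULL);
  if (status != napi_ok) return NULL;

  NativeInstance* instance;
  status = napi_unwrap(env, js_this, (void**)&instance);
  if (status != napi_ok) return NULL;

  napi_value result;
  status = napi_create_int32(env, instance->value, &result);
  return status == napi_ok ? result : NULL;
}
```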
napi_remove_wrap#
```c
napi_status napi_remove_wrap(napi_env env,
                             napi_value js_object,
                             void** result);
```

- `[in] env`: The environment that the API is invoked under.
- `[in] js_object`: The object associated with the native instance.
- `[out] result`: Pointer to the wrapped native instance.

Returns `napi_ok` if the API succeeded.

Retrieves a native instance that was previously wrapped in the JavaScript object `js_object` using `napi_wrap()` and removes the wrapping. If a finalize callback was associated with the wrapping, it will no longer be called when the JavaScript object becomes garbage-collected.
napi_type_tag_object#
```c
napi_status napi_type_tag_object(napi_env env,
                                 napi_value js_object,
                                 const napi_type_tag* type_tag);
```

- `[in] env`: The environment that the API is invoked under.
- `[in] js_object`: The JavaScript object or external to be marked.
- `[in] type_tag`: The tag with which the object is to be marked.

Returns `napi_ok` if the API succeeded.

Associates the value of the `type_tag` pointer with the JavaScript object or external. `napi_check_object_type_tag()` can then be used to compare the tag that was attached to the object with one owned by the addon to ensure that the object has the right type.

If the object already has an associated type tag, this API will return `napi_invalid_arg`.
napi_check_object_type_tag#
```c
napi_status napi_check_object_type_tag(napi_env env,
                                       napi_value js_object,
                                       const napi_type_tag* type_tag,
                                       bool* result);
```

- `[in] env`: The environment that the API is invoked under.
- `[in] js_object`: The JavaScript object or external whose type tag to examine.
- `[in] type_tag`: The tag with which to compare any tag found on the object.
- `[out] result`: Whether the type tag given matched the type tag on the object. `false` is also returned if no type tag was found on the object.

Returns `napi_ok` if the API succeeded.

Compares the pointer given as `type_tag` with any that can be found on `js_object`. If no tag is found on `js_object` or, if a tag is found but it does not match `type_tag`, then `result` is set to `false`. If a tag is found and it matches `type_tag`, then `result` is set to `true`.
napi_add_finalizer#
```c
napi_status napi_add_finalizer(napi_env env,
                               napi_value js_object,
                               void* finalize_data,
                               node_api_basic_finalize finalize_cb,
                               void* finalize_hint,
                               napi_ref* result);
```

- `[in] env`: The environment that the API is invoked under.
- `[in] js_object`: The JavaScript object to which the native data will be attached.
- `[in] finalize_data`: Optional data to be passed to `finalize_cb`.
- `[in] finalize_cb`: Native callback that will be used to free the native data when the JavaScript object has been garbage-collected. `napi_finalize` provides more details.
- `[in] finalize_hint`: Optional contextual hint that is passed to the finalize callback.
- `[out] result`: Optional reference to the JavaScript object.

Returns `napi_ok` if the API succeeded.

Adds a `napi_finalize` callback which will be called when the JavaScript object in `js_object` has been garbage-collected.

This API can be called multiple times on a single JavaScript object.

Caution: The optional returned reference (if obtained) should be deleted via `napi_delete_reference` ONLY in response to the finalize callback invocation. If it is deleted before then, then the finalize callback may never be invoked. Therefore, when obtaining a reference a finalize callback is also required in order to enable correct disposal of the reference.
node_api_post_finalizer#
```c
napi_status node_api_post_finalizer(node_api_basic_env env,
                                    napi_finalize finalize_cb,
                                    void* finalize_data,
                                    void* finalize_hint);
```

- `[in] env`: The environment that the API is invoked under.
- `[in] finalize_cb`: Native callback that will be used to free the native data when the JavaScript object has been garbage-collected. `napi_finalize` provides more details.
- `[in] finalize_data`: Optional data to be passed to `finalize_cb`.
- `[in] finalize_hint`: Optional contextual hint that is passed to the finalize callback.

Returns `napi_ok` if the API succeeded.

Schedules a `napi_finalize` callback to be called asynchronously in the event loop.

Normally, finalizers are called while the GC (garbage collector) collects objects. At that point calling any Node-API that may cause changes in the GC state will be disabled and will crash Node.js.

`node_api_post_finalizer` helps to work around this limitation by allowing the add-on to defer calls to such Node-APIs to a point in time outside of the GC finalization.
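A minimal sketch of this deferral, assuming the finalize data happens to be a `napi_ref` that must be deleted (the function names are illustrative):

```c
#include <node_api.h>

// Runs later on the event loop, outside GC finalization: the full
// Node-API surface is available here.
static void DeferredCleanup(napi_env env, void* data, void* hint) {
  napi_delete_reference(env, (napi_ref)data);
}

// Runs during garbage collection: only "basic" Node-APIs may be called,
// so the heavier cleanup is posted to run afterwards.
static void GcFinalizer(node_api_basic_env env, void* data, void* hint) {
  node_api_post_finalizer(env, DeferredCleanup, data, hint);
}
```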
Simple asynchronous operations#
Addon modules often need to leverage async helpers from libuv as part of their implementation. This allows them to schedule work to be executed asynchronously so that their methods can return before the work completes, avoiding blocking the overall execution of the Node.js application.

Node-API provides an ABI-stable interface for these supporting functions which covers the most common asynchronous use cases.

Node-API defines the `napi_async_work` structure which is used to manage asynchronous workers. Instances are created/deleted with `napi_create_async_work` and `napi_delete_async_work`.

The `execute` and `complete` callbacks are functions that will be invoked when the executor is ready to execute and when it completes its task, respectively.

The `execute` function should avoid making any Node-API calls that could result in the execution of JavaScript or interaction with JavaScript objects. Most often, any code that needs to make Node-API calls should be made in the `complete` callback instead. Avoid using the `napi_env` parameter in the execute callback as it will likely execute JavaScript.
These functions implement the following interfaces:
These functions implement the following interfaces:

```c
typedef void (*napi_async_execute_callback)(napi_env env, void* data);
typedef void (*napi_async_complete_callback)(napi_env env,
                                             napi_status status,
                                             void* data);
```

When these methods are invoked, the `data` parameter passed will be the addon-provided `void*` data that was passed into the `napi_create_async_work` call.

Once created the async worker can be queued for execution using the `napi_queue_async_work` function:

```c
napi_status napi_queue_async_work(node_api_basic_env env,
                                  napi_async_work work);
```

`napi_cancel_async_work` can be used if the work needs to be cancelled before the work has started execution.

After calling `napi_cancel_async_work`, the `complete` callback will be invoked with a status value of `napi_cancelled`. The work should not be deleted before the `complete` callback invocation, even when it was cancelled.
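The lifecycle described above can be sketched as follows; the `AddonData` struct and function names are illustrative, and error handling is abbreviated:

```c
#include <stdlib.h>
#include <node_api.h>

typedef struct {
  int input;
  int result;
  napi_async_work work;
} AddonData;

// Runs on a worker-pool thread: no Node-API calls that touch JavaScript.
static void Execute(napi_env env, void* data) {
  AddonData* d = (AddonData*)data;
  d->result = d->input * 2;
}

// Runs on the main event loop thread: Node-API calls are safe here.
static void Complete(napi_env env, napi_status status, void* data) {
  AddonData* d = (AddonData*)data;
  // ... convert d->result to a napi_value and hand it to JavaScript ...
  napi_delete_async_work(env, d->work);
  free(d);
}

static napi_value StartWork(napi_env env, napi_callback_info info) {
  AddonData* d = (AddonData*)malloc(sizeof(AddonData));
  d->input = 21;

  napi_value name;
  napi_create_string_utf8(env, "example:double", NAPI_AUTO_LENGTH, &name);
  napi_create_async_work(env, NULL, name, Execute, Complete, d, &d->work);
  napi_queue_async_work(env, d->work);
  return NULL;
}
```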
napi_create_async_work#
History
| Version | Changes |
|---|---|
| v8.6.0 | Added |
| v8.0.0 | Added in: v8.0.0 |
```c
napi_status napi_create_async_work(napi_env env,
                                   napi_value async_resource,
                                   napi_value async_resource_name,
                                   napi_async_execute_callback execute,
                                   napi_async_complete_callback complete,
                                   void* data,
                                   napi_async_work* result);
```

- `[in] env`: The environment that the API is invoked under.
- `[in] async_resource`: An optional object associated with the async work that will be passed to possible `async_hooks` `init` hooks.
- `[in] async_resource_name`: Identifier for the kind of resource that is being provided for diagnostic information exposed by the `async_hooks` API.
- `[in] execute`: The native function which should be called to execute the logic asynchronously. The given function is called from a worker pool thread and can execute in parallel with the main event loop thread.
- `[in] complete`: The native function which will be called when the asynchronous logic is completed or is cancelled. The given function is called from the main event loop thread. `napi_async_complete_callback` provides more details.
- `[in] data`: User-provided data context. This will be passed back into the execute and complete functions.
- `[out] result`: `napi_async_work*` which is the handle to the newly created async work.

Returns `napi_ok` if the API succeeded.

This API allocates a work object that is used to execute logic asynchronously. It should be freed using `napi_delete_async_work` once the work is no longer required.

`async_resource_name` should be a null-terminated, UTF-8-encoded string.

The `async_resource_name` identifier is provided by the user and should be representative of the type of async work being performed. It is also recommended to apply namespacing to the identifier, e.g. by including the module name. See the `async_hooks` documentation for more information.
napi_delete_async_work#
```c
napi_status napi_delete_async_work(napi_env env,
                                   napi_async_work work);
```

- `[in] env`: The environment that the API is invoked under.
- `[in] work`: The handle returned by the call to `napi_create_async_work`.

Returns `napi_ok` if the API succeeded.
This API frees a previously allocated work object.
This API can be called even if there is a pending JavaScript exception.
napi_queue_async_work#
```c
napi_status napi_queue_async_work(node_api_basic_env env,
                                  napi_async_work work);
```

- `[in] env`: The environment that the API is invoked under.
- `[in] work`: The handle returned by the call to `napi_create_async_work`.

Returns `napi_ok` if the API succeeded.

This API requests that the previously allocated work be scheduled for execution. Once it returns successfully, this API must not be called again with the same `napi_async_work` item or the result will be undefined.
napi_cancel_async_work#
```c
napi_status napi_cancel_async_work(node_api_basic_env env,
                                   napi_async_work work);
```

- `[in] env`: The environment that the API is invoked under.
- `[in] work`: The handle returned by the call to `napi_create_async_work`.

Returns `napi_ok` if the API succeeded.

This API cancels queued work if it has not yet been started. If it has already started executing, it cannot be cancelled and `napi_generic_failure` will be returned. If successful, the `complete` callback will be invoked with a status value of `napi_cancelled`. The work should not be deleted before the `complete` callback invocation, even if it has been successfully cancelled.
This API can be called even if there is a pending JavaScript exception.
Custom asynchronous operations#
The simple asynchronous work APIs above may not be appropriate for everyscenario. When using any other asynchronous mechanism, the following APIsare necessary to ensure an asynchronous operation is properly tracked bythe runtime.
napi_async_init#
History
| Version | Changes |
|---|---|
| v25.0.0 | The |
| v8.6.0 | Added in: v8.6.0 |
```c
napi_status napi_async_init(napi_env env,
                            napi_value async_resource,
                            napi_value async_resource_name,
                            napi_async_context* result)
```

- `[in] env`: The environment that the API is invoked under.
- `[in] async_resource`: Object associated with the async work that will be passed to possible `async_hooks` `init` hooks and can be accessed by `async_hooks.executionAsyncResource()`.
- `[in] async_resource_name`: Identifier for the kind of resource that is being provided for diagnostic information exposed by the `async_hooks` API.
- `[out] result`: The initialized async context.

Returns `napi_ok` if the API succeeded.

In order to retain ABI compatibility with previous versions, passing `NULL` for `async_resource` does not result in an error. However, this is not recommended as this will result in undesirable behavior with `async_hooks` `init` hooks and `async_hooks.executionAsyncResource()`, as the resource is now required by the underlying `async_hooks` implementation in order to provide the linkage between async callbacks.

Previous versions of this API did not maintain a strong reference to `async_resource` while the `napi_async_context` object existed and instead expected the caller to hold a strong reference. This has been changed, as a corresponding call to `napi_async_destroy` for every call to `napi_async_init()` is a requirement in any case to avoid memory leaks.
napi_async_destroy#
```c
napi_status napi_async_destroy(napi_env env,
                               napi_async_context async_context);
```

- `[in] env`: The environment that the API is invoked under.
- `[in] async_context`: The async context to be destroyed.

Returns `napi_ok` if the API succeeded.
This API can be called even if there is a pending JavaScript exception.
napi_make_callback#
History
| Version | Changes |
|---|---|
| v8.6.0 | Added |
| v8.0.0 | Added in: v8.0.0 |
```c
NAPI_EXTERN napi_status napi_make_callback(napi_env env,
                                           napi_async_context async_context,
                                           napi_value recv,
                                           napi_value func,
                                           size_t argc,
                                           const napi_value* argv,
                                           napi_value* result);
```

- `[in] env`: The environment that the API is invoked under.
- `[in] async_context`: Context for the async operation that is invoking the callback. This should normally be a value previously obtained from `napi_async_init`. In order to retain ABI compatibility with previous versions, passing `NULL` for `async_context` does not result in an error. However, this results in incorrect operation of async hooks. Potential issues include loss of async context when using the `AsyncLocalStorage` API.
- `[in] recv`: The `this` value passed to the called function.
- `[in] func`: `napi_value` representing the JavaScript function to be invoked.
- `[in] argc`: The count of elements in the `argv` array.
- `[in] argv`: Array of JavaScript values as `napi_value` representing the arguments to the function. If `argc` is zero this parameter may be omitted by passing in `NULL`.
- `[out] result`: `napi_value` representing the JavaScript object returned.

Returns `napi_ok` if the API succeeded.

This method allows a JavaScript function object to be called from a native add-on. This API is similar to `napi_call_function`. However, it is used to call from native code back into JavaScript after returning from an async operation (when there is no other script on the stack). It is a fairly simple wrapper around `node::MakeCallback`.

Note it is not necessary to use `napi_make_callback` from within a `napi_async_complete_callback`; in that situation the callback's async context has already been set up, so a direct call to `napi_call_function` is sufficient and appropriate. Use of the `napi_make_callback` function may be required when implementing custom async behavior that does not use `napi_create_async_work`.

Any `process.nextTick`s or Promises scheduled on the microtask queue by JavaScript during the callback are run before returning back to C/C++.
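The custom-async pattern can be sketched as follows; `resource` and `callback_fn` are assumed to be `napi_value`s held by the addon, and the resource name is illustrative:

```c
#include <node_api.h>

// Sketch: set up an async context, call back into JavaScript when the
// custom async operation completes, then tear the context down.
static void run_operation(napi_env env, napi_value resource,
                          napi_value callback_fn) {
  napi_value resource_name;
  napi_create_string_utf8(env, "addon:operation", NAPI_AUTO_LENGTH,
                          &resource_name);

  napi_async_context context;
  napi_async_init(env, resource, resource_name, &context);

  // ...later, when the async operation completes and no other script is
  // on the stack:
  napi_value global, result;
  napi_get_global(env, &global);
  napi_make_callback(env, context, global, callback_fn, 0, NULL, &result);

  // Once the async operation is fully finished:
  napi_async_destroy(env, context);
}
```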
napi_open_callback_scope#
```c
NAPI_EXTERN napi_status napi_open_callback_scope(napi_env env,
                                                 napi_value resource_object,
                                                 napi_async_context context,
                                                 napi_callback_scope* result)
```

- `[in] env`: The environment that the API is invoked under.
- `[in] resource_object`: An object associated with the async work that will be passed to possible `async_hooks` `init` hooks. This parameter has been deprecated and is ignored at runtime. Use the `async_resource` parameter in `napi_async_init` instead.
- `[in] context`: Context for the async operation that is invoking the callback. This should be a value previously obtained from `napi_async_init`.
- `[out] result`: The newly created scope.

There are cases (for example, resolving promises) where it is necessary to have the equivalent of the scope associated with a callback in place when making certain Node-API calls. If there is no other script on the stack, the `napi_open_callback_scope` and `napi_close_callback_scope` functions can be used to open/close the required scope.
napi_close_callback_scope#
```c
NAPI_EXTERN napi_status napi_close_callback_scope(napi_env env,
                                                  napi_callback_scope scope)
```

- `[in] env`: The environment that the API is invoked under.
- `[in] scope`: The scope to be closed.
This API can be called even if there is a pending JavaScript exception.
Version management#
napi_get_node_version#
```c
typedef struct {
  uint32_t major;
  uint32_t minor;
  uint32_t patch;
  const char* release;
} napi_node_version;

napi_status napi_get_node_version(node_api_basic_env env,
                                  const napi_node_version** version);
```

- `[in] env`: The environment that the API is invoked under.
- `[out] version`: A pointer to version information for Node.js itself.

Returns `napi_ok` if the API succeeded.

This function fills the `version` struct with the major, minor, and patch version of Node.js that is currently running, and the `release` field with the value of `process.release.name`.
The returned buffer is statically allocated and does not need to be freed.
napi_get_version#
```c
napi_status napi_get_version(node_api_basic_env env,
                             uint32_t* result);
```

- `[in] env`: The environment that the API is invoked under.
- `[out] result`: The highest version of Node-API supported.

Returns `napi_ok` if the API succeeded.

This API returns the highest Node-API version supported by the Node.js runtime. Node-API is planned to be additive such that newer releases of Node.js may support additional API functions. In order to allow an addon to use a newer function when running with versions of Node.js that support it, while providing fallback behavior when running with Node.js versions that don't support it:
- Call `napi_get_version()` to determine if the API is available.
- If available, dynamically load a pointer to the function using `uv_dlsym()`.
- Use the dynamically loaded pointer to invoke the function.
- If the function is not available, provide an alternate implementation that does not use the function.
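The steps above can be sketched as follows. `node_api_newer_fn` is a hypothetical function assumed to have been added in a newer Node-API version; resolving the symbol from the running executable via `uv_dlopen(NULL, ...)` is the usual libuv idiom:

```c
#include <node_api.h>
#include <uv.h>

static napi_status call_newer_or_fallback(napi_env env, napi_value* result) {
  uint32_t version;
  napi_get_version(env, &version);

  if (version >= 10) {  // version threshold is illustrative
    // Resolve the newer symbol at runtime so the addon still loads on
    // older Node.js versions that lack it.
    napi_status (*newer_fn)(napi_env, napi_value*);
    uv_lib_t self;
    if (uv_dlopen(NULL, &self) == 0 &&
        uv_dlsym(&self, "node_api_newer_fn", (void**)&newer_fn) == 0) {
      return newer_fn(env, result);
    }
  }
  // Older runtime (or symbol lookup failed): alternate implementation
  // that does not use the newer function.
  return napi_get_undefined(env, result);
}
```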
Memory management#
napi_adjust_external_memory#
```c
NAPI_EXTERN napi_status napi_adjust_external_memory(node_api_basic_env env,
                                                    int64_t change_in_bytes,
                                                    int64_t* result);
```

- `[in] env`: The environment that the API is invoked under.
- `[in] change_in_bytes`: The change in externally allocated memory that is kept alive by JavaScript objects.
- `[out] result`: The adjusted value. This value should reflect the total amount of external memory with the given `change_in_bytes` included. The absolute value of the returned value should not be depended on. For example, implementations may use a single counter for all addons, or a counter for each addon.

Returns `napi_ok` if the API succeeded.

This function gives the runtime an indication of the amount of externally allocated memory that is kept alive by JavaScript objects (i.e. a JavaScript object that points to its own memory allocated by a native addon). Registering externally allocated memory may, but is not guaranteed to, trigger global garbage collections more often than it would otherwise.

This function is expected to be called in a manner such that an addon does not decrease the external memory more than it has increased the external memory.
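A minimal sketch of balanced accounting, assuming a hypothetical `buffer_size` for a native allocation owned by a JavaScript object:

```c
// Report the externally allocated memory when it is created...
int64_t adjusted;
napi_adjust_external_memory(env, (int64_t)buffer_size, &adjusted);

// ...and subtract the same amount in the owning object's finalize
// callback, once the native buffer has been freed, so the decrease never
// exceeds the earlier increase.
napi_adjust_external_memory(env, -(int64_t)buffer_size, &adjusted);
```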
Promises#
Node-API provides facilities for creating `Promise` objects as described in Section Promise objects of the ECMA specification. It implements promises as a pair of objects. When a promise is created by `napi_create_promise()`, a "deferred" object is created and returned alongside the `Promise`. The deferred object is bound to the created `Promise` and is the only means to resolve or reject the `Promise` using `napi_resolve_deferred()` or `napi_reject_deferred()`. The deferred object that is created by `napi_create_promise()` is freed by `napi_resolve_deferred()` or `napi_reject_deferred()`. The `Promise` object may be returned to JavaScript where it can be used in the usual fashion.
For example, to create a promise and pass it to an asynchronous worker:
```c
napi_deferred deferred;
napi_value promise;
napi_status status;

// Create the promise.
status = napi_create_promise(env, &deferred, &promise);
if (status != napi_ok) return NULL;

// Pass the deferred to a function that performs an asynchronous action.
do_something_asynchronous(deferred);

// Return the promise to JS
return promise;
```

The above function `do_something_asynchronous()` would perform its asynchronous action and then it would resolve or reject the deferred, thereby concluding the promise and freeing the deferred:

```c
napi_deferred deferred;
napi_value undefined;
napi_status status;

// Create a value with which to conclude the deferred.
status = napi_get_undefined(env, &undefined);
if (status != napi_ok) return NULL;

// Resolve or reject the promise associated with the deferred depending on
// whether the asynchronous action succeeded.
if (asynchronous_action_succeeded) {
  status = napi_resolve_deferred(env, deferred, undefined);
} else {
  status = napi_reject_deferred(env, deferred, undefined);
}
if (status != napi_ok) return NULL;

// At this point the deferred has been freed, so we should assign NULL to it.
deferred = NULL;
```

napi_create_promise#
```c
napi_status napi_create_promise(napi_env env,
                                napi_deferred* deferred,
                                napi_value* promise);
```

- `[in] env`: The environment that the API is invoked under.
- `[out] deferred`: A newly created deferred object which can later be passed to `napi_resolve_deferred()` or `napi_reject_deferred()` to resolve resp. reject the associated promise.
- `[out] promise`: The JavaScript promise associated with the deferred object.

Returns `napi_ok` if the API succeeded.
This API creates a deferred object and a JavaScript promise.
napi_resolve_deferred#
```c
napi_status napi_resolve_deferred(napi_env env,
                                  napi_deferred deferred,
                                  napi_value resolution);
```

- `[in] env`: The environment that the API is invoked under.
- `[in] deferred`: The deferred object whose associated promise to resolve.
- `[in] resolution`: The value with which to resolve the promise.

This API resolves a JavaScript promise by way of the deferred object with which it is associated. Thus, it can only be used to resolve JavaScript promises for which the corresponding deferred object is available. This effectively means that the promise must have been created using `napi_create_promise()` and the deferred object returned from that call must have been retained in order to be passed to this API.
The deferred object is freed upon successful completion.
napi_reject_deferred#
```c
napi_status napi_reject_deferred(napi_env env,
                                 napi_deferred deferred,
                                 napi_value rejection);
```

- `[in] env`: The environment that the API is invoked under.
- `[in] deferred`: The deferred object whose associated promise to reject.
- `[in] rejection`: The value with which to reject the promise.

This API rejects a JavaScript promise by way of the deferred object with which it is associated. Thus, it can only be used to reject JavaScript promises for which the corresponding deferred object is available. This effectively means that the promise must have been created using `napi_create_promise()` and the deferred object returned from that call must have been retained in order to be passed to this API.
The deferred object is freed upon successful completion.
napi_is_promise#
```c
napi_status napi_is_promise(napi_env env,
                            napi_value value,
                            bool* is_promise);
```

- `[in] env`: The environment that the API is invoked under.
- `[in] value`: The value to examine.
- `[out] is_promise`: Flag indicating whether `promise` is a native promise object (that is, a promise object created by the underlying engine).
Script execution#
Node-API provides an API for executing a string containing JavaScript using the underlying JavaScript engine.
napi_run_script#
```c
NAPI_EXTERN napi_status napi_run_script(napi_env env,
                                        napi_value script,
                                        napi_value* result);
```

- `[in] env`: The environment that the API is invoked under.
- `[in] script`: A JavaScript string containing the script to execute.
- `[out] result`: The value resulting from having executed the script.

This function executes a string of JavaScript code and returns its result with the following caveats:
- Unlike `eval`, this function does not allow the script to access the current lexical scope, and therefore also does not allow it to access the module scope, meaning that pseudo-globals such as `require` will not be available.
- The script can access the global scope. Function and `var` declarations in the script will be added to the `global` object. Variable declarations made using `let` and `const` will be visible globally, but will not be added to the `global` object.
- The value of `this` is `global` within the script.
libuv event loop#
Node-API provides a function for getting the current event loop associated with a specific `napi_env`.
napi_get_uv_event_loop#
```c
NAPI_EXTERN napi_status napi_get_uv_event_loop(node_api_basic_env env,
                                               struct uv_loop_s** loop);
```

- `[in] env`: The environment that the API is invoked under.
- `[out] loop`: The current libuv loop instance.

Note: While libuv has been relatively stable over time, it does not provide an ABI stability guarantee. Use of this function should be avoided. Its use may result in an addon that does not work across Node.js versions. Asynchronous thread-safe function calls are an alternative for many use cases.
Asynchronous thread-safe function calls#
JavaScript functions can normally only be called from a native addon's main thread. If an addon creates additional threads, then Node-API functions that require a `napi_env`, `napi_value`, or `napi_ref` must not be called from those threads.

When an addon has additional threads and JavaScript functions need to be invoked based on the processing completed by those threads, those threads must communicate with the addon's main thread so that the main thread can invoke the JavaScript function on their behalf. The thread-safe function APIs provide an easy way to do this.

These APIs provide the type `napi_threadsafe_function` as well as APIs to create, destroy, and call objects of this type. `napi_create_threadsafe_function()` creates a persistent reference to a `napi_value` that holds a JavaScript function which can be called from multiple threads. The calls happen asynchronously. This means that values with which the JavaScript callback is to be called will be placed in a queue, and, for each value in the queue, a call will eventually be made to the JavaScript function.

Upon creation of a `napi_threadsafe_function` a `napi_finalize` callback can be provided. This callback will be invoked on the main thread when the thread-safe function is about to be destroyed. It receives the context and the finalize data given during construction, and provides an opportunity for cleaning up after the threads, e.g. by calling `uv_thread_join()`. Aside from the main loop thread, no threads should be using the thread-safe function after the finalize callback completes.

The `context` given during the call to `napi_create_threadsafe_function()` can be retrieved from any thread with a call to `napi_get_threadsafe_function_context()`.
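Creation and use of a thread-safe function can be sketched as follows; `js_callback` is assumed to be a JavaScript function held by the addon, the resource name is illustrative, and error handling is abbreviated:

```c
#include <node_api.h>

static napi_threadsafe_function create_tsfn(napi_env env,
                                            napi_value js_callback) {
  napi_value resource_name;
  napi_create_string_utf8(env, "example:tsfn", NAPI_AUTO_LENGTH,
                          &resource_name);

  napi_threadsafe_function tsfn;
  napi_create_threadsafe_function(env, js_callback,
                                  NULL,           // async_resource
                                  resource_name,
                                  0,              // unlimited queue
                                  1,              // one initial thread
                                  NULL, NULL,     // finalize data/callback
                                  NULL,           // context
                                  NULL,           // default call_js_cb
                                  &tsfn);
  return tsfn;
}

// From a worker thread, each queued item eventually triggers a call to
// the JavaScript function on the main thread:
//   napi_call_threadsafe_function(tsfn, item, napi_tsfn_blocking);
// And when the thread stops using it:
//   napi_release_threadsafe_function(tsfn, napi_tsfn_release);
```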
Calling a thread-safe function#
`napi_call_threadsafe_function()` can be used for initiating a call into JavaScript. `napi_call_threadsafe_function()` accepts a parameter which controls whether the API behaves blockingly. If set to `napi_tsfn_nonblocking`, the API behaves non-blockingly, returning `napi_queue_full` if the queue was full, preventing data from being successfully added to the queue. If set to `napi_tsfn_blocking`, the API blocks until space becomes available in the queue. `napi_call_threadsafe_function()` never blocks if the thread-safe function was created with a maximum queue size of 0.

`napi_call_threadsafe_function()` should not be called with `napi_tsfn_blocking` from a JavaScript thread, because, if the queue is full, it may cause the JavaScript thread to deadlock.

The actual call into JavaScript is controlled by the callback given via the `call_js_cb` parameter. `call_js_cb` is invoked on the main thread once for each value that was placed into the queue by a successful call to `napi_call_threadsafe_function()`. If such a callback is not given, a default callback will be used, and the resulting JavaScript call will have no arguments. The `call_js_cb` callback receives the JavaScript function to call as a `napi_value` in its parameters, as well as the `void*` context pointer used when creating the `napi_threadsafe_function`, and the next data pointer that was created by one of the secondary threads. The callback can then use an API such as `napi_call_function()` to call into JavaScript.

The callback may also be invoked with `env` and `call_js_cb` both set to `NULL` to indicate that calls into JavaScript are no longer possible, while items remain in the queue that may need to be freed. This normally occurs when the Node.js process exits while there is a thread-safe function still active.

It is not necessary to call into JavaScript via `napi_make_callback()` because Node-API runs `call_js_cb` in a context appropriate for callbacks.

Zero or more queued items may be invoked in each tick of the event loop. Applications should not depend on a specific behavior other than that progress in invoking callbacks will be made and events will be invoked as time moves forward.
Reference counting of thread-safe functions#
Threads can be added to and removed from a `napi_threadsafe_function` object during its existence. Thus, in addition to specifying an initial number of threads upon creation, `napi_acquire_threadsafe_function` can be called to indicate that a new thread will start making use of the thread-safe function. Similarly, `napi_release_threadsafe_function` can be called to indicate that an existing thread will stop making use of the thread-safe function.

`napi_threadsafe_function` objects are destroyed when every thread which uses the object has called `napi_release_threadsafe_function()` or has received a return status of `napi_closing` in response to a call to `napi_call_threadsafe_function`. The queue is emptied before the `napi_threadsafe_function` is destroyed. `napi_release_threadsafe_function()` should be the last API call made in conjunction with a given `napi_threadsafe_function`, because after the call completes, there is no guarantee that the `napi_threadsafe_function` is still allocated. For the same reason, do not use a thread-safe function after receiving a return value of `napi_closing` in response to a call to `napi_call_threadsafe_function`. Data associated with the `napi_threadsafe_function` can be freed in its `napi_finalize` callback which was passed to `napi_create_threadsafe_function()`. The parameter `initial_thread_count` of `napi_create_threadsafe_function` marks the initial number of acquisitions of the thread-safe functions, instead of calling `napi_acquire_threadsafe_function` multiple times at creation.

Once the number of threads making use of a `napi_threadsafe_function` reaches zero, no further threads can start making use of it by calling `napi_acquire_threadsafe_function()`. In fact, all subsequent API calls associated with it, except `napi_release_threadsafe_function()`, will return an error value of `napi_closing`.

The thread-safe function can be "aborted" by giving a value of `napi_tsfn_abort` to `napi_release_threadsafe_function()`. This will cause all subsequent APIs associated with the thread-safe function except `napi_release_threadsafe_function()` to return `napi_closing` even before its reference count reaches zero. In particular, `napi_call_threadsafe_function()` will return `napi_closing`, thus informing the threads that it is no longer possible to make asynchronous calls to the thread-safe function. This can be used as a criterion for terminating the thread. Upon receiving a return value of `napi_closing` from `napi_call_threadsafe_function()` a thread must not use the thread-safe function anymore because it is no longer guaranteed to be allocated.
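A worker-thread loop that uses `napi_closing` as its termination criterion might look like this sketch; `has_more_work()` and `produce_item()` are hypothetical helpers:

```c
#include <node_api.h>

static void worker(void* arg) {
  napi_threadsafe_function tsfn = (napi_threadsafe_function)arg;

  while (has_more_work()) {
    void* item = produce_item();
    napi_status status =
        napi_call_threadsafe_function(tsfn, item, napi_tsfn_blocking);
    if (status == napi_closing) {
      // Receiving napi_closing counts as this thread's release; the
      // function may already be gone, so do not touch tsfn again.
      return;
    }
  }
  // Normal shutdown: this thread is done with the function.
  napi_release_threadsafe_function(tsfn, napi_tsfn_release);
}
```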
Deciding whether to keep the process running#
Similarly to libuv handles, thread-safe functions can be "referenced" and "unreferenced". A "referenced" thread-safe function will cause the event loop on the thread on which it is created to remain alive until the thread-safe function is destroyed. In contrast, an "unreferenced" thread-safe function will not prevent the event loop from exiting. The APIs `napi_ref_threadsafe_function` and `napi_unref_threadsafe_function` exist for this purpose.

Neither does `napi_unref_threadsafe_function` mark the thread-safe functions as able to be destroyed nor does `napi_ref_threadsafe_function` prevent it from being destroyed.
napi_create_threadsafe_function#
History
| Version | Changes |
|---|---|
| v12.6.0, v10.17.0 | Made |
| v10.6.0 | Added in: v10.6.0 |
```c
NAPI_EXTERN napi_status
napi_create_threadsafe_function(napi_env env,
                                napi_value func,
                                napi_value async_resource,
                                napi_value async_resource_name,
                                size_t max_queue_size,
                                size_t initial_thread_count,
                                void* thread_finalize_data,
                                napi_finalize thread_finalize_cb,
                                void* context,
                                napi_threadsafe_function_call_js call_js_cb,
                                napi_threadsafe_function* result);
```

- `[in] env`: The environment that the API is invoked under.
- `[in] func`: An optional JavaScript function to call from another thread. It must be provided if `NULL` is passed to `call_js_cb`.
- `[in] async_resource`: An optional object associated with the async work that will be passed to possible `async_hooks` `init` hooks.
- `[in] async_resource_name`: A JavaScript string to provide an identifier for the kind of resource that is being provided for diagnostic information exposed by the `async_hooks` API.
- `[in] max_queue_size`: Maximum size of the queue. `0` for no limit.
- `[in] initial_thread_count`: The initial number of acquisitions, i.e. the initial number of threads, including the main thread, which will be making use of this function.
- `[in] thread_finalize_data`: Optional data to be passed to `thread_finalize_cb`.
- `[in] thread_finalize_cb`: Optional function to call when the `napi_threadsafe_function` is being destroyed.
- `[in] context`: Optional data to attach to the resulting `napi_threadsafe_function`.
- `[in] call_js_cb`: Optional callback which calls the JavaScript function in response to a call on a different thread. This callback will be called on the main thread. If not given, the JavaScript function will be called with no parameters and with `undefined` as its `this` value. `napi_threadsafe_function_call_js` provides more details.
- `[out] result`: The asynchronous thread-safe JavaScript function.

Change History:

- Version 10 (`NAPI_VERSION` is defined as `10` or higher):

  Uncaught exceptions thrown in `call_js_cb` are handled with the `'uncaughtException'` event, instead of being ignored.
napi_get_threadsafe_function_context#
```c
NAPI_EXTERN napi_status
napi_get_threadsafe_function_context(napi_threadsafe_function func,
                                     void** result);
```

- `[in] func`: The thread-safe function for which to retrieve the context.
- `[out] result`: The location where to store the context.

This API may be called from any thread which makes use of `func`.
napi_call_threadsafe_function#
History
| Version | Changes |
|---|---|
| v14.5.0 | Support for |
| v14.1.0 | Return |
| v10.6.0 | Added in: v10.6.0 |
NAPI_EXTERN napi_statusnapi_call_threadsafe_function(napi_threadsafe_function func,void* data, napi_threadsafe_function_call_mode is_blocking);[in] func: The asynchronous thread-safe JavaScript function to invoke.[in] data: Data to send into JavaScript via the callbackcall_js_cbprovided during the creation of the thread-safe JavaScript function.[in] is_blocking: Flag whose value can be eithernapi_tsfn_blockingtoindicate that the call should block if the queue is full ornapi_tsfn_nonblockingto indicate that the call should return immediatelywith a status ofnapi_queue_fullwhenever the queue is full.
This API should not be called with `napi_tsfn_blocking` from a JavaScript thread, because, if the queue is full, it may cause the JavaScript thread to deadlock.

This API will return `napi_closing` if `napi_release_threadsafe_function()` was called with `abort` set to `napi_tsfn_abort` from any thread. The value is only added to the queue if the API returns `napi_ok`.

This API may be called from any thread which makes use of `func`.
napi_acquire_threadsafe_function#
```c
NAPI_EXTERN napi_status
napi_acquire_threadsafe_function(napi_threadsafe_function func);
```

- `[in] func`: The asynchronous thread-safe JavaScript function to start making use of.

A thread should call this API before passing `func` to any other thread-safe function APIs to indicate that it will be making use of `func`. This prevents `func` from being destroyed when all other threads have stopped making use of it.

This API may be called from any thread which will start making use of `func`.
napi_release_threadsafe_function#
```c
NAPI_EXTERN napi_status
napi_release_threadsafe_function(napi_threadsafe_function func,
                                 napi_threadsafe_function_release_mode mode);
```

- `[in] func`: The asynchronous thread-safe JavaScript function whose reference count to decrement.
- `[in] mode`: Flag whose value can be either `napi_tsfn_release` to indicate that the current thread will make no further calls to the thread-safe function, or `napi_tsfn_abort` to indicate that in addition to the current thread, no other thread should make any further calls to the thread-safe function. If set to `napi_tsfn_abort`, further calls to `napi_call_threadsafe_function()` will return `napi_closing`, and no further values will be placed in the queue.

A thread should call this API when it stops making use of `func`. Passing `func` to any thread-safe APIs after having called this API has undefined results, as `func` may have been destroyed.

This API may be called from any thread which will stop making use of `func`.
napi_ref_threadsafe_function#
```c
NAPI_EXTERN napi_status
napi_ref_threadsafe_function(node_api_basic_env env,
                             napi_threadsafe_function func);
```

- `[in] env`: The environment that the API is invoked under.
- `[in] func`: The thread-safe function to reference.

This API is used to indicate that the event loop running on the main thread should not exit until `func` has been destroyed. Similar to `uv_ref` it is also idempotent.
`napi_unref_threadsafe_function` does not mark the thread-safe function as able to be destroyed, and `napi_ref_threadsafe_function` does not prevent it from being destroyed; `napi_acquire_threadsafe_function` and `napi_release_threadsafe_function` are available for that purpose.
This API may only be called from the main thread.
napi_unref_threadsafe_function#
```c
NAPI_EXTERN napi_status
napi_unref_threadsafe_function(node_api_basic_env env,
                               napi_threadsafe_function func);
```

- `[in] env`: The environment that the API is invoked under.
- `[in] func`: The thread-safe function to unreference.

This API is used to indicate that the event loop running on the main thread may exit before `func` is destroyed. Similar to `uv_unref` it is also idempotent.
This API may only be called from the main thread.
Miscellaneous utilities#
node_api_get_module_file_name#
```c
NAPI_EXTERN napi_status
node_api_get_module_file_name(node_api_basic_env env, const char** result);
```

- `[in] env`: The environment that the API is invoked under.
- `[out] result`: A URL containing the absolute path of the location from which the add-on was loaded. For a file on the local file system it will start with `file://`. The string is null-terminated and owned by `env` and must thus not be modified or freed.

`result` may be an empty string if the add-on loading process fails to establish the add-on's file name during loading.
C++ embedder API#
Node.js provides a number of C++ APIs that can be used to execute JavaScriptin a Node.js environment from other C++ software.
The documentation for these APIs can be found in `src/node.h` in the Node.js source tree. In addition to the APIs exposed by Node.js, some required concepts are provided by the V8 embedder API.
Because using Node.js as an embedded library is different from writing codethat is executed by Node.js, breaking changes do not follow typical Node.jsdeprecation policy and may occur on each semver-major release without priorwarning.
Example embedding application#
The following sections provide an overview of how to use these APIs to create an application from scratch that will perform the equivalent of `node -e <code>`, i.e. that will take a piece of JavaScript and run it in a Node.js-specific environment.

The full code can be found in the Node.js source tree.
Setting up a per-process state#
Node.js requires some per-process state management in order to run:
- Arguments parsing for Node.js CLI options,
- V8 per-process requirements, such as a `v8::Platform` instance.
The following example shows how these can be set up. Some class names are from the `node` and `v8` C++ namespaces, respectively.
```cpp
int main(int argc, char** argv) {
  argv = uv_setup_args(argc, argv);
  std::vector<std::string> args(argv, argv + argc);
  // Parse Node.js CLI options, and print any errors that have occurred while
  // trying to parse them.
  std::unique_ptr<node::InitializationResult> result =
      node::InitializeOncePerProcess(args, {
        node::ProcessInitializationFlags::kNoInitializeV8,
        node::ProcessInitializationFlags::kNoInitializeNodeV8Platform
      });

  for (const std::string& error : result->errors())
    fprintf(stderr, "%s: %s\n", args[0].c_str(), error.c_str());
  if (result->early_return() != 0) {
    return result->exit_code();
  }

  // Create a v8::Platform instance. `MultiIsolatePlatform::Create()` is a way
  // to create a v8::Platform instance that Node.js can use when creating
  // Worker threads. When no `MultiIsolatePlatform` instance is present,
  // Worker threads are disabled.
  std::unique_ptr<MultiIsolatePlatform> platform =
      MultiIsolatePlatform::Create(4);
  V8::InitializePlatform(platform.get());
  V8::Initialize();

  // See below for the contents of this function.
  int ret = RunNodeInstance(
      platform.get(), result->args(), result->exec_args());

  V8::Dispose();
  V8::DisposePlatform();

  node::TearDownOncePerProcess();
  return ret;
}
```

Setting up a per-instance state#
History
| Version | Changes |
|---|---|
| v15.0.0 | The |
Node.js has a concept of a “Node.js instance”, commonly referred to as `node::Environment`. Each `node::Environment` is associated with:
- Exactly one `v8::Isolate`, i.e. one JS Engine instance,
- Exactly one `uv_loop_t`, i.e. one event loop,
- A number of `v8::Context`s, but exactly one main `v8::Context`, and
- One `node::IsolateData` instance that contains information that could be shared by multiple `node::Environment`s. The embedder should make sure that `node::IsolateData` is shared only among `node::Environment`s that use the same `v8::Isolate`; Node.js does not perform this check.
In order to set up a `v8::Isolate`, a `v8::ArrayBuffer::Allocator` needs to be provided. One possible choice is the default Node.js allocator, which can be created through `node::ArrayBufferAllocator::Create()`. Using the Node.js allocator allows minor performance optimizations when addons use the Node.js C++ `Buffer` API, and is required in order to track `ArrayBuffer` memory in `process.memoryUsage()`.
Additionally, each `v8::Isolate` that is used for a Node.js instance needs to be registered and unregistered with the `MultiIsolatePlatform` instance, if one is being used, in order for the platform to know which event loop to use for tasks scheduled by the `v8::Isolate`.

The `node::NewIsolate()` helper function creates a `v8::Isolate`, sets it up with some Node.js-specific hooks (e.g. the Node.js error handler), and registers it with the platform automatically.
```cpp
int RunNodeInstance(MultiIsolatePlatform* platform,
                    const std::vector<std::string>& args,
                    const std::vector<std::string>& exec_args) {
  int exit_code = 0;

  // Setup up a libuv event loop, v8::Isolate, and Node.js Environment.
  std::vector<std::string> errors;
  std::unique_ptr<CommonEnvironmentSetup> setup =
      CommonEnvironmentSetup::Create(platform, &errors, args, exec_args);
  if (!setup) {
    for (const std::string& err : errors)
      fprintf(stderr, "%s: %s\n", args[0].c_str(), err.c_str());
    return 1;
  }

  Isolate* isolate = setup->isolate();
  Environment* env = setup->env();

  {
    Locker locker(isolate);
    Isolate::Scope isolate_scope(isolate);
    HandleScope handle_scope(isolate);
    // The v8::Context needs to be entered when node::CreateEnvironment() and
    // node::LoadEnvironment() are being called.
    Context::Scope context_scope(setup->context());

    // Set up the Node.js instance for execution, and run code inside of it.
    // There is also a variant that takes a callback and provides it with
    // the `require` and `process` objects, so that it can manually compile
    // and run scripts as needed.
    // The `require` function inside this script does *not* access the file
    // system, and can only load built-in Node.js modules.
    // `module.createRequire()` is being used to create one that is able to
    // load files from the disk, and uses the standard CommonJS file loader
    // instead of the internal-only `require` function.
    MaybeLocal<Value> loadenv_ret = node::LoadEnvironment(
        env,
        "const publicRequire ="
        "  require('node:module').createRequire(process.cwd() + '/');"
        "globalThis.require = publicRequire;"
        "require('node:vm').runInThisContext(process.argv[1]);");

    if (loadenv_ret.IsEmpty())  // There has been a JS exception.
      return 1;

    exit_code = node::SpinEventLoop(env).FromMaybe(1);

    // node::Stop() can be used to explicitly stop the event loop and keep
    // further JavaScript from running. It can be called from any thread,
    // and will act like worker.terminate() if called from another thread.
    node::Stop(env);
  }

  return exit_code;
}
```

Child process#
Source Code: lib/child_process.js

The `node:child_process` module provides the ability to spawn subprocesses in a manner that is similar, but not identical, to popen(3). This capability is primarily provided by the `child_process.spawn()` function:
```cjs
const { spawn } = require('node:child_process');
const ls = spawn('ls', ['-lh', '/usr']);

ls.stdout.on('data', (data) => {
  console.log(`stdout: ${data}`);
});

ls.stderr.on('data', (data) => {
  console.error(`stderr: ${data}`);
});

ls.on('close', (code) => {
  console.log(`child process exited with code ${code}`);
});
```

```mjs
import { spawn } from 'node:child_process';
import { once } from 'node:events';
const ls = spawn('ls', ['-lh', '/usr']);

ls.stdout.on('data', (data) => {
  console.log(`stdout: ${data}`);
});

ls.stderr.on('data', (data) => {
  console.error(`stderr: ${data}`);
});

const [code] = await once(ls, 'close');
console.log(`child process exited with code ${code}`);
```
By default, pipes for `stdin`, `stdout`, and `stderr` are established between the parent Node.js process and the spawned subprocess. These pipes have limited (and platform-specific) capacity. If the subprocess writes to `stdout` in excess of that limit without the output being captured, the subprocess blocks, waiting for the pipe buffer to accept more data. This is identical to the behavior of pipes in the shell. Use the `{ stdio: 'ignore' }` option if the output will not be consumed.
The command lookup is performed using the `options.env.PATH` environment variable if `env` is in the `options` object. Otherwise, `process.env.PATH` is used. If `options.env` is set without `PATH`, lookup on Unix is performed on a default search path of `/usr/bin:/bin` (see your operating system's manual for execvpe/execvp); on Windows, the current process's environment variable `PATH` is used.

On Windows, environment variables are case-insensitive. Node.js lexicographically sorts the `env` keys and uses the first one that case-insensitively matches; only the first (in lexicographic order) entry will be passed to the subprocess. This might lead to issues on Windows when passing objects to the `env` option that have multiple variants of the same key, such as `PATH` and `Path`.
The `child_process.spawn()` method spawns the child process asynchronously, without blocking the Node.js event loop. The `child_process.spawnSync()` function provides equivalent functionality in a synchronous manner that blocks the event loop until the spawned process either exits or is terminated.

For convenience, the `node:child_process` module provides a handful of synchronous and asynchronous alternatives to `child_process.spawn()` and `child_process.spawnSync()`. Each of these alternatives is implemented on top of `child_process.spawn()` or `child_process.spawnSync()`.
- `child_process.exec()`: spawns a shell and runs a command within that shell, passing the `stdout` and `stderr` to a callback function when complete.
- `child_process.execFile()`: similar to `child_process.exec()` except that it spawns the command directly without first spawning a shell by default.
- `child_process.fork()`: spawns a new Node.js process and invokes a specified module with an IPC communication channel established that allows sending messages between parent and child.
- `child_process.execSync()`: a synchronous version of `child_process.exec()` that will block the Node.js event loop.
- `child_process.execFileSync()`: a synchronous version of `child_process.execFile()` that will block the Node.js event loop.
For certain use cases, such as automating shell scripts, thesynchronous counterparts may be more convenient. In many cases, however,the synchronous methods can have significant impact on performance due tostalling the event loop while spawned processes complete.
Asynchronous process creation#
The `child_process.spawn()`, `child_process.fork()`, `child_process.exec()`, and `child_process.execFile()` methods all follow the idiomatic asynchronous programming pattern typical of other Node.js APIs.

Each of the methods returns a `ChildProcess` instance. These objects implement the Node.js `EventEmitter` API, allowing the parent process to register listener functions that are called when certain events occur during the life cycle of the child process.

The `child_process.exec()` and `child_process.execFile()` methods additionally allow for an optional `callback` function to be specified that is invoked when the child process terminates.
Spawning.bat and.cmd files on Windows#
The importance of the distinction between `child_process.exec()` and `child_process.execFile()` can vary based on platform. On Unix-type operating systems (Unix, Linux, macOS) `child_process.execFile()` can be more efficient because it does not spawn a shell by default. On Windows, however, `.bat` and `.cmd` files are not executable on their own without a terminal, and therefore cannot be launched using `child_process.execFile()`. When running on Windows, `.bat` and `.cmd` files can be invoked using `child_process.spawn()` with the `shell` option set, with `child_process.exec()`, or by spawning `cmd.exe` and passing the `.bat` or `.cmd` file as an argument (which is what the `shell` option and `child_process.exec()` do). In any case, if the script filename contains spaces it needs to be quoted.
```cjs
// OR...
const { exec, spawn } = require('node:child_process');
exec('my.bat', (err, stdout, stderr) => {
  if (err) {
    console.error(err);
    return;
  }
  console.log(stdout);
});

// Script with spaces in the filename:
const bat = spawn('"my script.cmd" a b', { shell: true });
// or:
exec('"my script.cmd" a b', (err, stdout, stderr) => {
  // ...
});
```

```mjs
// OR...
import { exec, spawn } from 'node:child_process';
exec('my.bat', (err, stdout, stderr) => {
  if (err) {
    console.error(err);
    return;
  }
  console.log(stdout);
});

// Script with spaces in the filename:
const bat = spawn('"my script.cmd" a b', { shell: true });
// or:
exec('"my script.cmd" a b', (err, stdout, stderr) => {
  // ...
});
```
child_process.exec(command[, options][, callback])#
History
| Version | Changes |
|---|---|
| v15.4.0 | AbortSignal support was added. |
| v16.4.0, v14.18.0 | The |
| v8.8.0 | The |
| v0.1.90 | Added in: v0.1.90 |
- `command` <string> The command to run, with space-separated arguments.
- `options` <Object>
  - `cwd` <string> | <URL> Current working directory of the child process. Default: `process.cwd()`.
  - `env` <Object> Environment key-value pairs. Default: `process.env`.
  - `encoding` <string> Default: `'utf8'`
  - `shell` <string> Shell to execute the command with. See Shell requirements and Default Windows shell. Default: `'/bin/sh'` on Unix, `process.env.ComSpec` on Windows.
  - `signal` <AbortSignal> Allows aborting the child process using an `AbortSignal`.
  - `timeout` <number> Default: `0`
  - `maxBuffer` <number> Largest amount of data in bytes allowed on stdout or stderr. If exceeded, the child process is terminated and any output is truncated. See caveat at `maxBuffer` and Unicode. Default: `1024 * 1024`.
  - `killSignal` <string> | <integer> Default: `'SIGTERM'`
  - `uid` <number> Sets the user identity of the process (see setuid(2)).
  - `gid` <number> Sets the group identity of the process (see setgid(2)).
  - `windowsHide` <boolean> Hide the subprocess console window that would normally be created on Windows systems. Default: `false`.
- `callback` <Function> Called with the output when the process terminates.
- Returns: <ChildProcess>
Spawns a shell then executes the `command` within that shell, buffering any generated output. The `command` string passed to the exec function is processed directly by the shell and special characters (vary based on shell) need to be dealt with accordingly:
```cjs
const { exec } = require('node:child_process');

exec('"/path/to/test file/test.sh" arg1 arg2');
// Double quotes are used so that the space in the path is not interpreted as
// a delimiter of multiple arguments.

exec('echo "The \\$HOME variable is $HOME"');
// The $HOME variable is escaped in the first instance, but not in the second.
```

```mjs
import { exec } from 'node:child_process';

exec('"/path/to/test file/test.sh" arg1 arg2');
// Double quotes are used so that the space in the path is not interpreted as
// a delimiter of multiple arguments.

exec('echo "The \\$HOME variable is $HOME"');
// The $HOME variable is escaped in the first instance, but not in the second.
```
Never pass unsanitized user input to this function. Any input containing shellmetacharacters may be used to trigger arbitrary command execution.
If a `callback` function is provided, it is called with the arguments `(error, stdout, stderr)`. On success, `error` will be `null`. On error, `error` will be an instance of `Error`. The `error.code` property will be the exit code of the process. By convention, any exit code other than `0` indicates an error. `error.signal` will be the signal that terminated the process.

The `stdout` and `stderr` arguments passed to the callback will contain the stdout and stderr output of the child process. By default, Node.js will decode the output as UTF-8 and pass strings to the callback. The `encoding` option can be used to specify the character encoding used to decode the stdout and stderr output. If `encoding` is `'buffer'`, or an unrecognized character encoding, `Buffer` objects will be passed to the callback instead.
```cjs
const { exec } = require('node:child_process');
exec('cat *.js missing_file | wc -l', (error, stdout, stderr) => {
  if (error) {
    console.error(`exec error: ${error}`);
    return;
  }
  console.log(`stdout: ${stdout}`);
  console.error(`stderr: ${stderr}`);
});
```

```mjs
import { exec } from 'node:child_process';
exec('cat *.js missing_file | wc -l', (error, stdout, stderr) => {
  if (error) {
    console.error(`exec error: ${error}`);
    return;
  }
  console.log(`stdout: ${stdout}`);
  console.error(`stderr: ${stderr}`);
});
```
If `timeout` is greater than `0`, the parent process will send the signal identified by the `killSignal` property (the default is `'SIGTERM'`) if the child process runs longer than `timeout` milliseconds.
Unlike the exec(3) POSIX system call, `child_process.exec()` does not replace the existing process and uses a shell to execute the command.

If this method is invoked as its `util.promisify()`ed version, it returns a `Promise` for an `Object` with `stdout` and `stderr` properties. The returned `ChildProcess` instance is attached to the `Promise` as a `child` property. In case of an error (including any error resulting in an exit code other than 0), a rejected promise is returned, with the same `error` object given in the callback, but with two additional properties `stdout` and `stderr`.
```cjs
const util = require('node:util');
const exec = util.promisify(require('node:child_process').exec);

async function lsExample() {
  const { stdout, stderr } = await exec('ls');
  console.log('stdout:', stdout);
  console.error('stderr:', stderr);
}
lsExample();
```

```mjs
import { promisify } from 'node:util';
import child_process from 'node:child_process';
const exec = promisify(child_process.exec);

async function lsExample() {
  const { stdout, stderr } = await exec('ls');
  console.log('stdout:', stdout);
  console.error('stderr:', stderr);
}
lsExample();
```
If the `signal` option is enabled, calling `.abort()` on the corresponding `AbortController` is similar to calling `.kill()` on the child process except the error passed to the callback will be an `AbortError`:
```cjs
const { exec } = require('node:child_process');
const controller = new AbortController();
const { signal } = controller;
const child = exec('grep ssh', { signal }, (error) => {
  console.error(error); // an AbortError
});
controller.abort();
```

```mjs
import { exec } from 'node:child_process';
const controller = new AbortController();
const { signal } = controller;
const child = exec('grep ssh', { signal }, (error) => {
  console.error(error); // an AbortError
});
controller.abort();
```
child_process.execFile(file[, args][, options][, callback])#
History
| Version | Changes |
|---|---|
| v23.11.0, v22.15.0 | Passing |
| v16.4.0, v14.18.0 | The |
| v15.4.0, v14.17.0 | AbortSignal support was added. |
| v8.8.0 | The |
| v0.1.91 | Added in: v0.1.91 |
- `file` <string> The name or path of the executable file to run.
- `args` <string[]> List of string arguments.
- `options` <Object>
  - `cwd` <string> | <URL> Current working directory of the child process.
  - `env` <Object> Environment key-value pairs. Default: `process.env`.
  - `encoding` <string> Default: `'utf8'`
  - `timeout` <number> Default: `0`
  - `maxBuffer` <number> Largest amount of data in bytes allowed on stdout or stderr. If exceeded, the child process is terminated and any output is truncated. See caveat at `maxBuffer` and Unicode. Default: `1024 * 1024`.
  - `killSignal` <string> | <integer> Default: `'SIGTERM'`
  - `uid` <number> Sets the user identity of the process (see setuid(2)).
  - `gid` <number> Sets the group identity of the process (see setgid(2)).
  - `windowsHide` <boolean> Hide the subprocess console window that would normally be created on Windows systems. Default: `false`.
  - `windowsVerbatimArguments` <boolean> No quoting or escaping of arguments is done on Windows. Ignored on Unix. Default: `false`.
  - `shell` <boolean> | <string> If `true`, runs `command` inside of a shell. Uses `'/bin/sh'` on Unix, and `process.env.ComSpec` on Windows. A different shell can be specified as a string. See Shell requirements and Default Windows shell. Default: `false` (no shell).
  - `signal` <AbortSignal> Allows aborting the child process using an `AbortSignal`.
- `callback` <Function> Called with the output when the process terminates.
- Returns: <ChildProcess>
The `child_process.execFile()` function is similar to `child_process.exec()` except that it does not spawn a shell by default. Rather, the specified executable `file` is spawned directly as a new process, making it slightly more efficient than `child_process.exec()`.

The same options as `child_process.exec()` are supported. Since a shell is not spawned, behaviors such as I/O redirection and file globbing are not supported.
```cjs
const { execFile } = require('node:child_process');
const child = execFile('node', ['--version'], (error, stdout, stderr) => {
  if (error) {
    throw error;
  }
  console.log(stdout);
});
```

```mjs
import { execFile } from 'node:child_process';
const child = execFile('node', ['--version'], (error, stdout, stderr) => {
  if (error) {
    throw error;
  }
  console.log(stdout);
});
```
The `stdout` and `stderr` arguments passed to the callback will contain the stdout and stderr output of the child process. By default, Node.js will decode the output as UTF-8 and pass strings to the callback. The `encoding` option can be used to specify the character encoding used to decode the stdout and stderr output. If `encoding` is `'buffer'`, or an unrecognized character encoding, `Buffer` objects will be passed to the callback instead.

If this method is invoked as its `util.promisify()`ed version, it returns a `Promise` for an `Object` with `stdout` and `stderr` properties. The returned `ChildProcess` instance is attached to the `Promise` as a `child` property. In case of an error (including any error resulting in an exit code other than 0), a rejected promise is returned, with the same `error` object given in the callback, but with two additional properties `stdout` and `stderr`.
```cjs
const util = require('node:util');
const execFile = util.promisify(require('node:child_process').execFile);

async function getVersion() {
  const { stdout } = await execFile('node', ['--version']);
  console.log(stdout);
}
getVersion();
```

```mjs
import { promisify } from 'node:util';
import child_process from 'node:child_process';
const execFile = promisify(child_process.execFile);

async function getVersion() {
  const { stdout } = await execFile('node', ['--version']);
  console.log(stdout);
}
getVersion();
```
If the `shell` option is enabled, do not pass unsanitized user input to this function. Any input containing shell metacharacters may be used to trigger arbitrary command execution.

If the `signal` option is enabled, calling `.abort()` on the corresponding `AbortController` is similar to calling `.kill()` on the child process except the error passed to the callback will be an `AbortError`:
```cjs
const { execFile } = require('node:child_process');
const controller = new AbortController();
const { signal } = controller;
const child = execFile('node', ['--version'], { signal }, (error) => {
  console.error(error); // an AbortError
});
controller.abort();
```

```mjs
import { execFile } from 'node:child_process';
const controller = new AbortController();
const { signal } = controller;
const child = execFile('node', ['--version'], { signal }, (error) => {
  console.error(error); // an AbortError
});
controller.abort();
```
child_process.fork(modulePath[, args][, options])#
History
| Version | Changes |
|---|---|
| v17.4.0, v16.14.0 | The |
| v16.4.0, v14.18.0 | The |
| v15.13.0, v14.18.0 | timeout was added. |
| v15.11.0, v14.18.0 | killSignal for AbortSignal was added. |
| v15.6.0, v14.17.0 | AbortSignal support was added. |
| v13.2.0, v12.16.0 | The |
| v8.0.0 | The |
| v6.4.0 | The |
| v0.5.0 | Added in: v0.5.0 |
- `modulePath` <string> | <URL> The module to run in the child.
- `args` <string[]> List of string arguments.
- `options` <Object>
  - `cwd` <string> | <URL> Current working directory of the child process.
  - `detached` <boolean> Prepare child process to run independently of its parent process. Specific behavior depends on the platform (see `options.detached`).
  - `env` <Object> Environment key-value pairs. Default: `process.env`.
  - `execPath` <string> Executable used to create the child process.
  - `execArgv` <string[]> List of string arguments passed to the executable. Default: `process.execArgv`.
  - `gid` <number> Sets the group identity of the process (see setgid(2)).
  - `serialization` <string> Specify the kind of serialization used for sending messages between processes. Possible values are `'json'` and `'advanced'`. See Advanced serialization for more details. Default: `'json'`.
  - `signal` <AbortSignal> Allows closing the child process using an `AbortSignal`.
  - `killSignal` <string> | <integer> The signal value to be used when the spawned process will be killed by timeout or abort signal. Default: `'SIGTERM'`.
  - `silent` <boolean> If `true`, stdin, stdout, and stderr of the child process will be piped to the parent process, otherwise they will be inherited from the parent process; see the `'pipe'` and `'inherit'` options for `child_process.spawn()`'s `stdio` for more details. Default: `false`.
  - `stdio` <Array> | <string> See `child_process.spawn()`'s `stdio`. When this option is provided, it overrides `silent`. If the array variant is used, it must contain exactly one item with value `'ipc'` or an error will be thrown. For instance `[0, 1, 2, 'ipc']`.
  - `uid` <number> Sets the user identity of the process (see setuid(2)).
  - `windowsVerbatimArguments` <boolean> No quoting or escaping of arguments is done on Windows. Ignored on Unix. Default: `false`.
  - `timeout` <number> In milliseconds the maximum amount of time the process is allowed to run. Default: `undefined`.
- Returns: <ChildProcess>
The `child_process.fork()` method is a special case of `child_process.spawn()` used specifically to spawn new Node.js processes. Like `child_process.spawn()`, a `ChildProcess` object is returned. The returned `ChildProcess` will have an additional communication channel built-in that allows messages to be passed back and forth between the parent and child. See `subprocess.send()` for details.
Keep in mind that spawned Node.js child processes are independent of the parent with the exception of the IPC communication channel that is established between the two. Each process has its own memory, with its own V8 instance. Because of the additional resource allocations required, spawning a large number of child Node.js processes is not recommended.
By default, `child_process.fork()` will spawn new Node.js instances using the `process.execPath` of the parent process. The `execPath` property in the `options` object allows for an alternative execution path to be used.

Node.js processes launched with a custom `execPath` will communicate with the parent process using the file descriptor (fd) identified using the environment variable `NODE_CHANNEL_FD` on the child process.

Unlike the fork(2) POSIX system call, `child_process.fork()` does not clone the current process.

The `shell` option available in `child_process.spawn()` is not supported by `child_process.fork()` and will be ignored if set.

If the `signal` option is enabled, calling `.abort()` on the corresponding `AbortController` is similar to calling `.kill()` on the child process except the error passed to the callback will be an `AbortError`:
```cjs
const { fork } = require('node:child_process');
const process = require('node:process');

if (process.argv[2] === 'child') {
  setTimeout(() => {
    console.log(`Hello from ${process.argv[2]}!`);
  }, 1_000);
} else {
  const controller = new AbortController();
  const { signal } = controller;
  const child = fork(__filename, ['child'], { signal });
  child.on('error', (err) => {
    // This will be called with err being an AbortError if the controller aborts
  });
  controller.abort(); // Stops the child process
}
```

```mjs
import { fork } from 'node:child_process';
import process from 'node:process';

if (process.argv[2] === 'child') {
  setTimeout(() => {
    console.log(`Hello from ${process.argv[2]}!`);
  }, 1_000);
} else {
  const controller = new AbortController();
  const { signal } = controller;
  const child = fork(import.meta.url, ['child'], { signal });
  child.on('error', (err) => {
    // This will be called with err being an AbortError if the controller aborts
  });
  controller.abort(); // Stops the child process
}
```
child_process.spawn(command[, args][, options])#
History
| Version | Changes |
|---|---|
| v23.11.0, v22.15.0 | Passing |
| v16.4.0, v14.18.0 | The |
| v15.13.0, v14.18.0 | timeout was added. |
| v15.11.0, v14.18.0 | killSignal for AbortSignal was added. |
| v15.5.0, v14.17.0 | AbortSignal support was added. |
| v13.2.0, v12.16.0 | The |
| v8.8.0 | The |
| v6.4.0 | The |
| v5.7.0 | The |
| v0.1.90 | Added in: v0.1.90 |
- `command` <string> The command to run.
- `args` <string[]> List of string arguments.
- `options` <Object>
  - `cwd` <string> | <URL> Current working directory of the child process.
  - `env` <Object> Environment key-value pairs. Default: `process.env`.
  - `argv0` <string> Explicitly set the value of `argv[0]` sent to the child process. This will be set to `command` if not specified.
  - `stdio` <Array> | <string> Child's stdio configuration (see `options.stdio`).
  - `detached` <boolean> Prepare child process to run independently of its parent process. Specific behavior depends on the platform (see `options.detached`).
  - `uid` <number> Sets the user identity of the process (see setuid(2)).
  - `gid` <number> Sets the group identity of the process (see setgid(2)).
  - `serialization` <string> Specify the kind of serialization used for sending messages between processes. Possible values are `'json'` and `'advanced'`. See Advanced serialization for more details. Default: `'json'`.
  - `shell` <boolean> | <string> If `true`, runs `command` inside of a shell. Uses `'/bin/sh'` on Unix, and `process.env.ComSpec` on Windows. A different shell can be specified as a string. See Shell requirements and Default Windows shell. Default: `false` (no shell).
  - `windowsVerbatimArguments` <boolean> No quoting or escaping of arguments is done on Windows. Ignored on Unix. This is set to `true` automatically when `shell` is specified and is CMD. Default: `false`.
  - `windowsHide` <boolean> Hide the subprocess console window that would normally be created on Windows systems. Default: `false`.
  - `signal` <AbortSignal> Allows aborting the child process using an `AbortSignal`.
  - `timeout` <number> In milliseconds the maximum amount of time the process is allowed to run. Default: `undefined`.
  - `killSignal` <string> | <integer> The signal value to be used when the spawned process will be killed by timeout or abort signal. Default: `'SIGTERM'`.
- Returns: <ChildProcess>
The `child_process.spawn()` method spawns a new process using the given `command`, with command-line arguments in `args`. If omitted, `args` defaults to an empty array.

If the `shell` option is enabled, do not pass unsanitized user input to this function. Any input containing shell metacharacters may be used to trigger arbitrary command execution.
A third argument may be used to specify additional options, with these defaults:
```js
const defaults = {
  cwd: undefined,
  env: process.env,
};
```

Use `cwd` to specify the working directory from which the process is spawned. If not given, the default is to inherit the current working directory. If given, but the path does not exist, the child process emits an `ENOENT` error and exits immediately. `ENOENT` is also emitted when the command does not exist.
Use `env` to specify environment variables that will be visible to the new process; the default is `process.env`.

`undefined` values in `env` will be ignored.

Example of running `ls -lh /usr`, capturing `stdout`, `stderr`, and the exit code:
```js
const { spawn } = require('node:child_process');
const ls = spawn('ls', ['-lh', '/usr']);

ls.stdout.on('data', (data) => {
  console.log(`stdout: ${data}`);
});

ls.stderr.on('data', (data) => {
  console.error(`stderr: ${data}`);
});

ls.on('close', (code) => {
  console.log(`child process exited with code ${code}`);
});
```
Example: A very elaborate way to run `ps ax | grep ssh`:
```js
const { spawn } = require('node:child_process');
const ps = spawn('ps', ['ax']);
const grep = spawn('grep', ['ssh']);

ps.stdout.on('data', (data) => {
  grep.stdin.write(data);
});

ps.stderr.on('data', (data) => {
  console.error(`ps stderr: ${data}`);
});

ps.on('close', (code) => {
  if (code !== 0) {
    console.log(`ps process exited with code ${code}`);
  }
  grep.stdin.end();
});

grep.stdout.on('data', (data) => {
  console.log(data.toString());
});

grep.stderr.on('data', (data) => {
  console.error(`grep stderr: ${data}`);
});

grep.on('close', (code) => {
  if (code !== 0) {
    console.log(`grep process exited with code ${code}`);
  }
});
```
Example of checking for failed `spawn`:
```js
const { spawn } = require('node:child_process');
const subprocess = spawn('bad_command');

subprocess.on('error', (err) => {
  console.error('Failed to start subprocess.');
});
```
Certain platforms (macOS, Linux) will use the value of `argv[0]` for the process title, while others (Windows, SunOS) will use `command`.

Node.js overwrites `argv[0]` with `process.execPath` on startup, so `process.argv[0]` in a Node.js child process will not match the `argv0` parameter passed to `spawn` from the parent. Retrieve it with the `process.argv0` property instead.
If the `signal` option is enabled, calling `.abort()` on the corresponding `AbortController` is similar to calling `.kill()` on the child process, except the error passed to the callback will be an `AbortError`:
```js
const { spawn } = require('node:child_process');
const controller = new AbortController();
const { signal } = controller;
const grep = spawn('grep', ['ssh'], { signal });

grep.on('error', (err) => {
  // This will be called with err being an AbortError if the controller aborts
});

controller.abort(); // Stops the child process
```
options.detached#
On Windows, setting `options.detached` to `true` makes it possible for the child process to continue running after the parent exits. The child process will have its own console window. Once enabled for a child process, it cannot be disabled.

On non-Windows platforms, if `options.detached` is set to `true`, the child process will be made the leader of a new process group and session. Child processes may continue running after the parent exits regardless of whether they are detached or not. See setsid(2) for more information.

By default, the parent will wait for the detached child process to exit. To prevent the parent process from waiting for a given `subprocess` to exit, use the `subprocess.unref()` method. Doing so will cause the parent's event loop to not include the child process in its reference count, allowing the parent to exit independently of the child, unless there is an established IPC channel between the child and the parent processes.

When using the `detached` option to start a long-running process, the process will not stay running in the background after the parent exits unless it is provided with a `stdio` configuration that is not connected to the parent. If the parent's `stdio` is inherited, the child process will remain attached to the controlling terminal.

Example of a long-running process, by detaching and also ignoring its parent `stdio` file descriptors, in order to ignore the parent's termination:
```js
const { spawn } = require('node:child_process');
const process = require('node:process');

const subprocess = spawn(process.argv[0], ['child_program.js'], {
  detached: true,
  stdio: 'ignore',
});

subprocess.unref();
```
Alternatively, one can redirect the child process' output into files:
```js
const { openSync } = require('node:fs');
const { spawn } = require('node:child_process');

const out = openSync('./out.log', 'a');
const err = openSync('./out.log', 'a');

const subprocess = spawn('prg', [], {
  detached: true,
  stdio: ['ignore', out, err],
});

subprocess.unref();
```
options.stdio#
History
| Version | Changes |
|---|---|
| v15.6.0, v14.18.0 | Added the |
| v3.3.1 | The value |
| v0.7.10 | Added in: v0.7.10 |
The `options.stdio` option is used to configure the pipes that are established between the parent and child process. By default, the child's stdin, stdout, and stderr are redirected to corresponding `subprocess.stdin`, `subprocess.stdout`, and `subprocess.stderr` streams on the `ChildProcess` object. This is equivalent to setting `options.stdio` equal to `['pipe', 'pipe', 'pipe']`.

For convenience, `options.stdio` may be one of the following strings:

- `'pipe'`: equivalent to `['pipe', 'pipe', 'pipe']` (the default)
- `'overlapped'`: equivalent to `['overlapped', 'overlapped', 'overlapped']`
- `'ignore'`: equivalent to `['ignore', 'ignore', 'ignore']`
- `'inherit'`: equivalent to `['inherit', 'inherit', 'inherit']` or `[0, 1, 2]`

Otherwise, the value of `options.stdio` is an array where each index corresponds to an fd in the child. The fds 0, 1, and 2 correspond to stdin, stdout, and stderr, respectively. Additional fds can be specified to create additional pipes between the parent and child. The value is one of the following:
- `'pipe'`: Create a pipe between the child process and the parent process. The parent end of the pipe is exposed to the parent as a property on the `child_process` object as `subprocess.stdio[fd]`. Pipes created for fds 0, 1, and 2 are also available as `subprocess.stdin`, `subprocess.stdout`, and `subprocess.stderr`, respectively. These are not actual Unix pipes, and therefore the child process cannot use them by their descriptor files, e.g. `/dev/fd/2` or `/dev/stdout`.
- `'overlapped'`: Same as `'pipe'` except that the `FILE_FLAG_OVERLAPPED` flag is set on the handle. This is necessary for overlapped I/O on the child process's stdio handles. See the docs for more details. This is exactly the same as `'pipe'` on non-Windows systems.
- `'ipc'`: Create an IPC channel for passing messages/file descriptors between parent and child. A `ChildProcess` may have at most one IPC stdio file descriptor. Setting this option enables the `subprocess.send()` method. If the child process is a Node.js instance, the presence of an IPC channel will enable the `process.send()` and `process.disconnect()` methods, as well as `'disconnect'` and `'message'` events within the child process. Accessing the IPC channel fd in any way other than `process.send()`, or using the IPC channel with a child process that is not a Node.js instance, is not supported.
- `'ignore'`: Instructs Node.js to ignore the fd in the child. While Node.js will always open fds 0, 1, and 2 for the processes it spawns, setting the fd to `'ignore'` will cause Node.js to open `/dev/null` and attach it to the child's fd.
- `'inherit'`: Pass through the corresponding stdio stream to/from the parent process. In the first three positions, this is equivalent to `process.stdin`, `process.stdout`, and `process.stderr`, respectively. In any other position, equivalent to `'ignore'`.
- <Stream> object: Share a readable or writable stream that refers to a tty, file, socket, or a pipe with the child process. The stream's underlying file descriptor is duplicated in the child process to the fd that corresponds to the index in the `stdio` array. The stream must have an underlying descriptor (file streams do not start until the `'open'` event has occurred). NOTE: While it is technically possible to pass `stdin` as a writable or `stdout`/`stderr` as readable, it is not recommended. Readable and writable streams are designed with distinct behaviors, and using them incorrectly (e.g. passing a readable stream where a writable stream is expected) can lead to unexpected results, errors, or dropped callbacks. Always ensure that `stdin` is used as readable and `stdout`/`stderr` as writable to maintain the intended flow of data between the parent and child processes.
- Positive integer: The integer value is interpreted as a file descriptor that is open in the parent process. It is shared with the child process, similar to how <Stream> objects can be shared. Passing sockets is not supported on Windows.
- `null`, `undefined`: Use default value. For stdio fds 0, 1, and 2 (in other words, stdin, stdout, and stderr) a pipe is created. For fd 3 and up, the default is `'ignore'`.
```js
const { spawn } = require('node:child_process');
const process = require('node:process');

// Child will use parent's stdios.
spawn('prg', [], { stdio: 'inherit' });

// Spawn child sharing only stderr.
spawn('prg', [], { stdio: ['pipe', 'pipe', process.stderr] });

// Open an extra fd=4, to interact with programs presenting a
// startd-style interface.
spawn('prg', [], { stdio: ['pipe', null, null, null, 'pipe'] });
```
It is worth noting that when an IPC channel is established between the parent and child processes, and the child process is a Node.js instance, the child process is launched with the IPC channel unreferenced (using `unref()`) until the child process registers an event handler for the `'disconnect'` event or the `'message'` event. This allows the child process to exit normally without the process being held open by the open IPC channel.

See also: `child_process.exec()` and `child_process.fork()`.
Synchronous process creation#
The `child_process.spawnSync()`, `child_process.execSync()`, and `child_process.execFileSync()` methods are synchronous and will block the Node.js event loop, pausing execution of any additional code until the spawned process exits.

Blocking calls like these are mostly useful for simplifying general-purpose scripting tasks and for simplifying the loading/processing of application configuration at startup.
child_process.execFileSync(file[, args][, options])#
History
| Version | Changes |
|---|---|
| v16.4.0, v14.18.0 | The |
| v10.10.0 | The |
| v8.8.0 | The |
| v8.0.0 | The |
| v6.2.1, v4.5.0 | The |
| v0.11.12 | Added in: v0.11.12 |
- `file` <string> The name or path of the executable file to run.
- `args` <string[]> List of string arguments.
- `options` <Object>
  - `cwd` <string> | <URL> Current working directory of the child process.
  - `input` <string> | <Buffer> | <TypedArray> | <DataView> The value which will be passed as stdin to the spawned process. If `stdio[0]` is set to `'pipe'`, supplying this value will override `stdio[0]`.
  - `stdio` <string> | <Array> Child's stdio configuration. See `child_process.spawn()`'s `stdio`. `stderr` by default will be output to the parent process' stderr unless `stdio` is specified. Default: `'pipe'`.
  - `env` <Object> Environment key-value pairs. Default: `process.env`.
  - `uid` <number> Sets the user identity of the process (see setuid(2)).
  - `gid` <number> Sets the group identity of the process (see setgid(2)).
  - `timeout` <number> The maximum amount of time the process is allowed to run, in milliseconds. Default: `undefined`.
  - `killSignal` <string> | <integer> The signal value to be used when the spawned process will be killed. Default: `'SIGTERM'`.
  - `maxBuffer` <number> Largest amount of data in bytes allowed on stdout or stderr. If exceeded, the child process is terminated. See caveat at maxBuffer and Unicode. Default: `1024 * 1024`.
  - `encoding` <string> The encoding used for all stdio inputs and outputs. Default: `'buffer'`.
  - `windowsHide` <boolean> Hide the subprocess console window that would normally be created on Windows systems. Default: `false`.
  - `shell` <boolean> | <string> If `true`, runs `command` inside of a shell. Uses `'/bin/sh'` on Unix, and `process.env.ComSpec` on Windows. A different shell can be specified as a string. See Shell requirements and Default Windows shell. Default: `false` (no shell).
- Returns: <Buffer> | <string> The stdout from the command.
The `child_process.execFileSync()` method is generally identical to `child_process.execFile()` with the exception that the method will not return until the child process has fully closed. When a timeout has been encountered and `killSignal` is sent, the method won't return until the process has completely exited.

If the child process intercepts and handles the `SIGTERM` signal and does not exit, the parent process will still wait until the child process has exited.

If the process times out or has a non-zero exit code, this method will throw an Error that will include the full result of the underlying `child_process.spawnSync()`.

If the `shell` option is enabled, do not pass unsanitized user input to this function. Any input containing shell metacharacters may be used to trigger arbitrary command execution.
```js
const { execFileSync } = require('node:child_process');

try {
  const stdout = execFileSync('my-script.sh', ['my-arg'], {
    // Capture stdout and stderr from child process. Overrides the
    // default behavior of streaming child stderr to the parent stderr
    stdio: 'pipe',

    // Use utf8 encoding for stdio pipes
    encoding: 'utf8',
  });

  console.log(stdout);
} catch (err) {
  if (err.code) {
    // Spawning child process failed
    console.error(err.code);
  } else {
    // Child was spawned but exited with non-zero exit code
    // Error contains any stdout and stderr from the child
    const { stdout, stderr } = err;
    console.error({ stdout, stderr });
  }
}
```
child_process.execSync(command[, options])#
History
| Version | Changes |
|---|---|
| v16.4.0, v14.18.0 | The |
| v10.10.0 | The |
| v8.8.0 | The |
| v8.0.0 | The |
| v0.11.12 | Added in: v0.11.12 |
- `command` <string> The command to run.
- `options` <Object>
  - `cwd` <string> | <URL> Current working directory of the child process.
  - `input` <string> | <Buffer> | <TypedArray> | <DataView> The value which will be passed as stdin to the spawned process. If `stdio[0]` is set to `'pipe'`, supplying this value will override `stdio[0]`.
  - `stdio` <string> | <Array> Child's stdio configuration. See `child_process.spawn()`'s `stdio`. `stderr` by default will be output to the parent process' stderr unless `stdio` is specified. Default: `'pipe'`.
  - `env` <Object> Environment key-value pairs. Default: `process.env`.
  - `shell` <string> Shell to execute the command with. See Shell requirements and Default Windows shell. Default: `'/bin/sh'` on Unix, `process.env.ComSpec` on Windows.
  - `uid` <number> Sets the user identity of the process (see setuid(2)).
  - `gid` <number> Sets the group identity of the process (see setgid(2)).
  - `timeout` <number> The maximum amount of time the process is allowed to run, in milliseconds. Default: `undefined`.
  - `killSignal` <string> | <integer> The signal value to be used when the spawned process will be killed. Default: `'SIGTERM'`.
  - `maxBuffer` <number> Largest amount of data in bytes allowed on stdout or stderr. If exceeded, the child process is terminated and any output is truncated. See caveat at maxBuffer and Unicode. Default: `1024 * 1024`.
  - `encoding` <string> The encoding used for all stdio inputs and outputs. Default: `'buffer'`.
  - `windowsHide` <boolean> Hide the subprocess console window that would normally be created on Windows systems. Default: `false`.
- Returns: <Buffer> | <string> The stdout from the command.
The `child_process.execSync()` method is generally identical to `child_process.exec()` with the exception that the method will not return until the child process has fully closed. When a timeout has been encountered and `killSignal` is sent, the method won't return until the process has completely exited. If the child process intercepts and handles the `SIGTERM` signal and doesn't exit, the parent process will wait until the child process has exited.

If the process times out or has a non-zero exit code, this method will throw. The Error object will contain the entire result from `child_process.spawnSync()`.

Never pass unsanitized user input to this function. Any input containing shell metacharacters may be used to trigger arbitrary command execution.
child_process.spawnSync(command[, args][, options])#
History
| Version | Changes |
|---|---|
| v16.4.0, v14.18.0 | The |
| v10.10.0 | The |
| v8.8.0 | The |
| v8.0.0 | The |
| v5.7.0 | The |
| v6.2.1, v4.5.0 | The |
| v0.11.12 | Added in: v0.11.12 |
- `command` <string> The command to run.
- `args` <string[]> List of string arguments.
- `options` <Object>
  - `cwd` <string> | <URL> Current working directory of the child process.
  - `input` <string> | <Buffer> | <TypedArray> | <DataView> The value which will be passed as stdin to the spawned process. If `stdio[0]` is set to `'pipe'`, supplying this value will override `stdio[0]`.
  - `argv0` <string> Explicitly set the value of `argv[0]` sent to the child process. This will be set to `command` if not specified.
  - `stdio` <string> | <Array> Child's stdio configuration. See `child_process.spawn()`'s `stdio`. Default: `'pipe'`.
  - `env` <Object> Environment key-value pairs. Default: `process.env`.
  - `uid` <number> Sets the user identity of the process (see setuid(2)).
  - `gid` <number> Sets the group identity of the process (see setgid(2)).
  - `timeout` <number> The maximum amount of time the process is allowed to run, in milliseconds. Default: `undefined`.
  - `killSignal` <string> | <integer> The signal value to be used when the spawned process will be killed. Default: `'SIGTERM'`.
  - `maxBuffer` <number> Largest amount of data in bytes allowed on stdout or stderr. If exceeded, the child process is terminated and any output is truncated. See caveat at maxBuffer and Unicode. Default: `1024 * 1024`.
  - `encoding` <string> The encoding used for all stdio inputs and outputs. Default: `'buffer'`.
  - `shell` <boolean> | <string> If `true`, runs `command` inside of a shell. Uses `'/bin/sh'` on Unix, and `process.env.ComSpec` on Windows. A different shell can be specified as a string. See Shell requirements and Default Windows shell. Default: `false` (no shell).
  - `windowsVerbatimArguments` <boolean> No quoting or escaping of arguments is done on Windows. Ignored on Unix. This is set to `true` automatically when `shell` is specified and is CMD. Default: `false`.
  - `windowsHide` <boolean> Hide the subprocess console window that would normally be created on Windows systems. Default: `false`.
- Returns: <Object>
  - `pid` <number> Pid of the child process.
  - `output` <Array> Array of results from stdio output.
  - `stdout` <Buffer> | <string> The contents of `output[1]`.
  - `stderr` <Buffer> | <string> The contents of `output[2]`.
  - `status` <number> | <null> The exit code of the subprocess, or `null` if the subprocess terminated due to a signal.
  - `signal` <string> | <null> The signal used to kill the subprocess, or `null` if the subprocess did not terminate due to a signal.
  - `error` <Error> The error object if the child process failed or timed out.
The `child_process.spawnSync()` method is generally identical to `child_process.spawn()` with the exception that the function will not return until the child process has fully closed. When a timeout has been encountered and `killSignal` is sent, the method won't return until the process has completely exited. If the process intercepts and handles the `SIGTERM` signal and doesn't exit, the parent process will wait until the child process has exited.

If the `shell` option is enabled, do not pass unsanitized user input to this function. Any input containing shell metacharacters may be used to trigger arbitrary command execution.
Class: ChildProcess#

- Extends: <EventEmitter>

Instances of `ChildProcess` represent spawned child processes.

Instances of `ChildProcess` are not intended to be created directly. Rather, use the `child_process.spawn()`, `child_process.exec()`, `child_process.execFile()`, or `child_process.fork()` methods to create instances of `ChildProcess`.
Event:'close'#
- `code` <number> The exit code if the child process exited on its own, or `null` if the child process terminated due to a signal.
- `signal` <string> The signal by which the child process was terminated, or `null` if the child process did not terminate due to a signal.
The `'close'` event is emitted after a process has ended and the stdio streams of a child process have been closed. This is distinct from the `'exit'` event, since multiple processes might share the same stdio streams. The `'close'` event will always emit after `'exit'` was already emitted, or `'error'` if the child process failed to spawn.

If the process exited, `code` is the final exit code of the process, otherwise `null`. If the process terminated due to receipt of a signal, `signal` is the string name of the signal, otherwise `null`. One of the two will always be non-null.
```js
const { spawn } = require('node:child_process');
const ls = spawn('ls', ['-lh', '/usr']);

ls.stdout.on('data', (data) => {
  console.log(`stdout: ${data}`);
});

ls.on('close', (code) => {
  console.log(`child process close all stdio with code ${code}`);
});

ls.on('exit', (code) => {
  console.log(`child process exited with code ${code}`);
});
```
Event:'disconnect'#
The `'disconnect'` event is emitted after calling the `subprocess.disconnect()` method in the parent process or `process.disconnect()` in the child process. After disconnecting it is no longer possible to send or receive messages, and the `subprocess.connected` property is `false`.
Event:'error'#
- `err` <Error> The error.
The'error' event is emitted whenever:
- The process could not be spawned.
- The process could not be killed.
- Sending a message to the child process failed.
- The child process was aborted via the `signal` option.
The `'exit'` event may or may not fire after an error has occurred. When listening to both the `'exit'` and `'error'` events, guard against accidentally invoking handler functions multiple times.

See also `subprocess.kill()` and `subprocess.send()`.
Event:'exit'#
- `code` <number> The exit code if the child process exited on its own, or `null` if the child process terminated due to a signal.
- `signal` <string> The signal by which the child process was terminated, or `null` if the child process did not terminate due to a signal.
The `'exit'` event is emitted after the child process ends. If the process exited, `code` is the final exit code of the process, otherwise `null`. If the process terminated due to receipt of a signal, `signal` is the string name of the signal, otherwise `null`. One of the two will always be non-null.

When the `'exit'` event is triggered, child process stdio streams might still be open.

Node.js establishes signal handlers for `SIGINT` and `SIGTERM`, and Node.js processes will not terminate immediately due to receipt of those signals. Rather, Node.js will perform a sequence of cleanup actions and then will re-raise the handled signal.

See waitpid(2).

When `code` is `null` due to signal termination, you can use `util.convertProcessSignalToExitCode()` to convert the signal to a POSIX exit code.
Event:'message'#
- `message` <Object> A parsed JSON object or primitive value.
- `sendHandle` <Handle> | <undefined> `undefined` or a `net.Socket`, `net.Server`, or `dgram.Socket` object.
The `'message'` event is triggered when a child process uses `process.send()` to send messages.

The message goes through serialization and parsing. The resulting message might not be the same as what is originally sent.

If the `serialization` option was set to `'advanced'` when spawning the child process, the `message` argument can contain data that JSON is not able to represent. See Advanced serialization for more details.
Event:'spawn'#
The `'spawn'` event is emitted once the child process has spawned successfully. If the child process does not spawn successfully, the `'spawn'` event is not emitted and the `'error'` event is emitted instead.

If emitted, the `'spawn'` event comes before all other events and before any data is received via `stdout` or `stderr`.

The `'spawn'` event will fire regardless of whether an error occurs within the spawned process. For example, if `bash some-command` spawns successfully, the `'spawn'` event will fire, though `bash` may fail to spawn `some-command`. This caveat also applies when using `{ shell: true }`.
subprocess.channel#
History
| Version | Changes |
|---|---|
| v14.0.0 | The object no longer accidentally exposes native C++ bindings. |
| v7.1.0 | Added in: v7.1.0 |
- Type: <Object> A pipe representing the IPC channel to the child process.

The `subprocess.channel` property is a reference to the child's IPC channel. If no IPC channel exists, this property is `undefined`.
subprocess.channel.ref()#
This method makes the IPC channel keep the event loop of the parent processrunning if.unref() has been called before.
subprocess.channel.unref()#
This method makes the IPC channel not keep the event loop of the parent processrunning, and lets it finish even while the channel is open.
subprocess.connected#
- Type: <boolean> Set to `false` after `subprocess.disconnect()` is called.

The `subprocess.connected` property indicates whether it is still possible to send and receive messages from a child process. When `subprocess.connected` is `false`, it is no longer possible to send or receive messages.
subprocess.disconnect()#
Closes the IPC channel between the parent and child processes, allowing the child process to exit gracefully once there are no other connections keeping it alive. After calling this method, the `subprocess.connected` and `process.connected` properties in the parent and child processes (respectively) will be set to `false`, and it will no longer be possible to pass messages between the processes.

The `'disconnect'` event will be emitted when there are no messages in the process of being received. This will most often be triggered immediately after calling `subprocess.disconnect()`.

When the child process is a Node.js instance (e.g. spawned using `child_process.fork()`), the `process.disconnect()` method can be invoked within the child process to close the IPC channel as well.
subprocess.exitCode#
- Type: <integer>

The `subprocess.exitCode` property indicates the exit code of the child process. If the child process is still running, the field will be `null`.

When the child process is terminated by a signal, `subprocess.exitCode` will be `null` and `subprocess.signalCode` will be set. To get the corresponding POSIX exit code, use `util.convertProcessSignalToExitCode(subprocess.signalCode)`.
subprocess.kill([signal])#
The `subprocess.kill()` method sends a signal to the child process. If no argument is given, the process will be sent the `'SIGTERM'` signal. See signal(7) for a list of available signals. This function returns `true` if kill(2) succeeds, and `false` otherwise.
```js
const { spawn } = require('node:child_process');
const grep = spawn('grep', ['ssh']);

grep.on('close', (code, signal) => {
  console.log(`child process terminated due to receipt of signal ${signal}`);
});

// Send SIGHUP to process.
grep.kill('SIGHUP');
```
The `ChildProcess` object may emit an `'error'` event if the signal cannot be delivered. Sending a signal to a child process that has already exited is not an error but may have unforeseen consequences. Specifically, if the process identifier (PID) has been reassigned to another process, the signal will be delivered to that process instead, which can have unexpected results.

While the function is called `kill`, the signal delivered to the child process may not actually terminate the process.

See kill(2) for reference.

On Windows, where POSIX signals do not exist, the `signal` argument will be ignored except for `'SIGKILL'`, `'SIGTERM'`, `'SIGINT'`, and `'SIGQUIT'`, and the process will always be killed forcefully and abruptly (similar to `'SIGKILL'`). See Signal Events for more details.

On Linux, child processes of child processes will not be terminated when attempting to kill their parent. This is likely to happen when running a new process in a shell or with the use of the `shell` option of `ChildProcess`:
```js
const { spawn } = require('node:child_process');

const subprocess = spawn(
  'sh',
  [
    '-c',
    `node -e "setInterval(() => { console.log(process.pid, 'is alive') }, 500);"`,
  ],
  {
    stdio: ['inherit', 'inherit', 'inherit'],
  },
);

setTimeout(() => {
  subprocess.kill(); // Does not terminate the Node.js process in the shell.
}, 2000);
```
subprocess[Symbol.dispose]()#
History
| Version | Changes |
|---|---|
| v24.2.0 | No longer experimental. |
| v20.5.0, v18.18.0 | Added in: v20.5.0, v18.18.0 |
Calls `subprocess.kill()` with `'SIGTERM'`.
subprocess.killed#
- Type: <boolean> Set to `true` after `subprocess.kill()` is used to successfully send a signal to the child process.

The `subprocess.killed` property indicates whether the child process successfully received a signal from `subprocess.kill()`. The `killed` property does not indicate that the child process has been terminated.
subprocess.pid#
- Type: <integer> | <undefined>
Returns the process identifier (PID) of the child process. If the child process fails to spawn due to errors, then the value is undefined and 'error' is emitted.
```js
const { spawn } = require('node:child_process');
const grep = spawn('grep', ['ssh']);

console.log(`Spawned child pid: ${grep.pid}`);
grep.stdin.end();
```

```js
import { spawn } from 'node:child_process';
const grep = spawn('grep', ['ssh']);

console.log(`Spawned child pid: ${grep.pid}`);
grep.stdin.end();
```
subprocess.ref()#
Calling subprocess.ref() after making a call to subprocess.unref() will restore the removed reference count for the child process, forcing the parent process to wait for the child process to exit before exiting itself.

```js
const { spawn } = require('node:child_process');
const process = require('node:process');

const subprocess = spawn(process.argv[0], ['child_program.js'], {
  detached: true,
  stdio: 'ignore',
});

subprocess.unref();
subprocess.ref();
```

```js
import { spawn } from 'node:child_process';
import process from 'node:process';

const subprocess = spawn(process.argv[0], ['child_program.js'], {
  detached: true,
  stdio: 'ignore',
});

subprocess.unref();
subprocess.ref();
```
subprocess.send(message[, sendHandle[, options]][, callback])#
History
| Version | Changes |
|---|---|
| v5.8.0 | The |
| v5.0.0 | This method returns a boolean for flow control now. |
| v4.0.0 | The |
| v0.5.9 | Added in: v0.5.9 |
- message <Object>
- sendHandle <Handle> | <undefined> undefined, or a net.Socket, net.Server, or dgram.Socket object.
- options <Object> The options argument, if present, is an object used to parameterize the sending of certain types of handles. options supports the following properties:
  - keepOpen <boolean> A value that can be used when passing instances of net.Socket. When true, the socket is kept open in the sending process. Default: false.
- callback <Function>
- Returns: <boolean>

When an IPC channel has been established between the parent and child processes (i.e. when using child_process.fork()), the subprocess.send() method can be used to send messages to the child process. When the child process is a Node.js instance, these messages can be received via the 'message' event.

The message goes through serialization and parsing. The resulting message might not be the same as what was originally sent.
For example, in the parent script:
```js
const { fork } = require('node:child_process');
const forkedProcess = fork(`${__dirname}/sub.js`);

forkedProcess.on('message', (message) => {
  console.log('PARENT got message:', message);
});

// Causes the child to print: CHILD got message: { hello: 'world' }
forkedProcess.send({ hello: 'world' });
```

```js
import { fork } from 'node:child_process';
const forkedProcess = fork(`${import.meta.dirname}/sub.js`);

forkedProcess.on('message', (message) => {
  console.log('PARENT got message:', message);
});

// Causes the child to print: CHILD got message: { hello: 'world' }
forkedProcess.send({ hello: 'world' });
```
And then the child script, 'sub.js', might look like this:

```js
process.on('message', (message) => {
  console.log('CHILD got message:', message);
});

// Causes the parent to print: PARENT got message: { foo: 'bar', baz: null }
process.send({ foo: 'bar', baz: NaN });
```

Child Node.js processes will have a process.send() method of their own that allows the child process to send messages back to the parent process.

There is a special case when sending a {cmd: 'NODE_foo'} message. Messages containing a NODE_ prefix in the cmd property are reserved for use within Node.js core and will not be emitted in the child's 'message' event. Rather, such messages are emitted using the 'internalMessage' event and are consumed internally by Node.js. Applications should avoid using such messages or listening for 'internalMessage' events as they are subject to change without notice.

The optional sendHandle argument that may be passed to subprocess.send() is for passing a TCP server or socket object to the child process. The child process will receive the object as the second argument passed to the callback function registered on the 'message' event. Any data that is received and buffered in the socket will not be sent to the child. Sending IPC sockets is not supported on Windows.

The optional callback is a function that is invoked after the message is sent but before the child process may have received it. The function is called with a single argument: null on success, or an Error object on failure.

If no callback function is provided and the message cannot be sent, an 'error' event will be emitted by the ChildProcess object. This can happen, for instance, when the child process has already exited.

subprocess.send() will return false if the channel has closed or when the backlog of unsent messages exceeds a threshold that makes it unwise to send more. Otherwise, the method returns true. The callback function can be used to implement flow control.
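As a sketch of callback-based flow control, the helper below (hypothetical, not a Node.js API) sends messages one at a time, only issuing the next send after the previous one completes; the fakeSubprocess stub stands in for a real ChildProcess with an IPC channel:

```javascript
// Hypothetical helper: serialize sends through the completion callback so
// the backlog of unsent messages cannot grow without bound.
function sendAll(target, messages, done) {
  let i = 0;
  (function next(err) {
    if (err) return done(err);
    if (i === messages.length) return done(null);
    target.send(messages[i++], next);
  })();
}

// Stub standing in for a real ChildProcess IPC channel, for illustration.
const received = [];
const fakeSubprocess = {
  send(message, callback) {
    received.push(message);
    callback(null); // a real channel invokes this asynchronously
    return true;
  },
};

sendAll(fakeSubprocess, ['a', 'b', 'c'], (err) => {
  if (err) throw err;
});
console.log(received); // [ 'a', 'b', 'c' ]
```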
Example: sending a server object#
The sendHandle argument can be used, for instance, to pass the handle of a TCP server object to the child process as illustrated in the example below:

```js
const { fork } = require('node:child_process');
const { createServer } = require('node:net');

const subprocess = fork('subprocess.js');

// Open up the server object and send the handle.
const server = createServer();
server.on('connection', (socket) => {
  socket.end('handled by parent');
});
server.listen(1337, () => {
  subprocess.send('server', server);
});
```

```js
import { fork } from 'node:child_process';
import { createServer } from 'node:net';

const subprocess = fork('subprocess.js');

// Open up the server object and send the handle.
const server = createServer();
server.on('connection', (socket) => {
  socket.end('handled by parent');
});
server.listen(1337, () => {
  subprocess.send('server', server);
});
```
The child process would then receive the server object as:
```js
process.on('message', (m, server) => {
  if (m === 'server') {
    server.on('connection', (socket) => {
      socket.end('handled by child');
    });
  }
});
```

Now that the server is shared between the parent and child, some connections can be handled by the parent and some by the child.

While the example above uses a server created using the node:net module, node:dgram module servers use exactly the same workflow with the exceptions of listening on a 'message' event instead of 'connection' and using server.bind() instead of server.listen(). This is, however, only supported on Unix platforms.
Example: sending a socket object#
Similarly, the sendHandle argument can be used to pass the handle of a socket to the child process. The example below spawns two children that each handle connections with "normal" or "special" priority:

```js
const { fork } = require('node:child_process');
const { createServer } = require('node:net');

const normal = fork('subprocess.js', ['normal']);
const special = fork('subprocess.js', ['special']);

// Open up the server and send sockets to child. Use pauseOnConnect to prevent
// the sockets from being read before they are sent to the child process.
const server = createServer({ pauseOnConnect: true });
server.on('connection', (socket) => {
  // If this is special priority...
  if (socket.remoteAddress === '74.125.127.100') {
    special.send('socket', socket);
    return;
  }
  // This is normal priority.
  normal.send('socket', socket);
});
server.listen(1337);
```

```js
import { fork } from 'node:child_process';
import { createServer } from 'node:net';

const normal = fork('subprocess.js', ['normal']);
const special = fork('subprocess.js', ['special']);

// Open up the server and send sockets to child. Use pauseOnConnect to prevent
// the sockets from being read before they are sent to the child process.
const server = createServer({ pauseOnConnect: true });
server.on('connection', (socket) => {
  // If this is special priority...
  if (socket.remoteAddress === '74.125.127.100') {
    special.send('socket', socket);
    return;
  }
  // This is normal priority.
  normal.send('socket', socket);
});
server.listen(1337);
```

The subprocess.js would receive the socket handle as the second argument passed to the event callback function:

```js
process.on('message', (m, socket) => {
  if (m === 'socket') {
    if (socket) {
      // Check that the client socket exists.
      // It is possible for the socket to be closed between the time it is
      // sent and the time it is received in the child process.
      socket.end(`Request handled with ${process.argv[2]} priority`);
    }
  }
});
```

Do not use .maxConnections on a socket that has been passed to a subprocess. The parent cannot track when the socket is destroyed.

Any 'message' handlers in the subprocess should verify that socket exists, as the connection may have been closed during the time it takes to send the connection to the child.
subprocess.signalCode#
The subprocess.signalCode property indicates the signal received by the child process if any, else null.

When the child process is terminated by a signal, subprocess.exitCode will be null. To get the corresponding POSIX exit code, use util.convertProcessSignalToExitCode(subprocess.signalCode).
subprocess.spawnargs#
- Type: <Array>

The subprocess.spawnargs property represents the full list of command-line arguments the child process was launched with.
subprocess.spawnfile#
- Type: <string>

The subprocess.spawnfile property indicates the executable file name of the child process that is launched.

For child_process.fork(), its value will be equal to process.execPath. For child_process.spawn(), its value will be the name of the executable file. For child_process.exec(), its value will be the name of the shell in which the child process is launched.
subprocess.stderr#
- Type: <stream.Readable> | <null> | <undefined>

A Readable Stream that represents the child process's stderr.

If the child process was spawned with stdio[2] set to anything other than 'pipe', then this will be null.

subprocess.stderr is an alias for subprocess.stdio[2]. Both properties will refer to the same value.

The subprocess.stderr property can be null or undefined if the child process could not be successfully spawned.
subprocess.stdin#
- Type: <stream.Writable> | <null> | <undefined>

A Writable Stream that represents the child process's stdin.

If a child process waits to read all of its input, the child process will not continue until this stream has been closed via end().

If the child process was spawned with stdio[0] set to anything other than 'pipe', then this will be null.

subprocess.stdin is an alias for subprocess.stdio[0]. Both properties will refer to the same value.

The subprocess.stdin property can be null or undefined if the child process could not be successfully spawned.
subprocess.stdio#
- Type: <Array>

A sparse array of pipes to the child process, corresponding with positions in the stdio option passed to child_process.spawn() that have been set to the value 'pipe'. subprocess.stdio[0], subprocess.stdio[1], and subprocess.stdio[2] are also available as subprocess.stdin, subprocess.stdout, and subprocess.stderr, respectively.

In the following example, only the child's fd 1 (stdout) is configured as a pipe, so only the parent's subprocess.stdio[1] is a stream; all other values in the array are null.

```js
const assert = require('node:assert');
const fs = require('node:fs');
const child_process = require('node:child_process');

const subprocess = child_process.spawn('ls', {
  stdio: [
    0, // Use parent's stdin for child.
    'pipe', // Pipe child's stdout to parent.
    fs.openSync('err.out', 'w'), // Direct child's stderr to a file.
  ],
});

assert.strictEqual(subprocess.stdio[0], null);
assert.strictEqual(subprocess.stdio[0], subprocess.stdin);

assert(subprocess.stdout);
assert.strictEqual(subprocess.stdio[1], subprocess.stdout);

assert.strictEqual(subprocess.stdio[2], null);
assert.strictEqual(subprocess.stdio[2], subprocess.stderr);
```

```js
import assert from 'node:assert';
import fs from 'node:fs';
import child_process from 'node:child_process';

const subprocess = child_process.spawn('ls', {
  stdio: [
    0, // Use parent's stdin for child.
    'pipe', // Pipe child's stdout to parent.
    fs.openSync('err.out', 'w'), // Direct child's stderr to a file.
  ],
});

assert.strictEqual(subprocess.stdio[0], null);
assert.strictEqual(subprocess.stdio[0], subprocess.stdin);

assert(subprocess.stdout);
assert.strictEqual(subprocess.stdio[1], subprocess.stdout);

assert.strictEqual(subprocess.stdio[2], null);
assert.strictEqual(subprocess.stdio[2], subprocess.stderr);
```

The subprocess.stdio property can be undefined if the child process could not be successfully spawned.
subprocess.stdout#
- Type: <stream.Readable> | <null> | <undefined>

A Readable Stream that represents the child process's stdout.

If the child process was spawned with stdio[1] set to anything other than 'pipe', then this will be null.

subprocess.stdout is an alias for subprocess.stdio[1]. Both properties will refer to the same value.

```js
const { spawn } = require('node:child_process');

const subprocess = spawn('ls');

subprocess.stdout.on('data', (data) => {
  console.log(`Received chunk ${data}`);
});
```

```js
import { spawn } from 'node:child_process';

const subprocess = spawn('ls');

subprocess.stdout.on('data', (data) => {
  console.log(`Received chunk ${data}`);
});
```

The subprocess.stdout property can be null or undefined if the child process could not be successfully spawned.
subprocess.unref()#
By default, the parent process will wait for the detached child process to exit. To prevent the parent process from waiting for a given subprocess to exit, use the subprocess.unref() method. Doing so will cause the parent's event loop to not include the child process in its reference count, allowing the parent to exit independently of the child, unless there is an established IPC channel between the child and the parent processes.

```js
const { spawn } = require('node:child_process');
const process = require('node:process');

const subprocess = spawn(process.argv[0], ['child_program.js'], {
  detached: true,
  stdio: 'ignore',
});

subprocess.unref();
```

```js
import { spawn } from 'node:child_process';
import process from 'node:process';

const subprocess = spawn(process.argv[0], ['child_program.js'], {
  detached: true,
  stdio: 'ignore',
});

subprocess.unref();
```
maxBuffer and Unicode#
The maxBuffer option specifies the largest number of bytes allowed on stdout or stderr. If this value is exceeded, then the child process is terminated. This impacts output that includes multibyte character encodings such as UTF-8 or UTF-16. For instance, console.log('中文测试') will send 13 UTF-8 encoded bytes to stdout although there are only 4 characters.
Shell requirements#
The shell should understand the -c switch. If the shell is 'cmd.exe', it should understand the /d /s /c switches, and command-line parsing should be compatible.
Default Windows shell#
Although Microsoft specifies %COMSPEC% must contain the path to 'cmd.exe' in the root environment, child processes are not always subject to the same requirement. Thus, in child_process functions where a shell can be spawned, 'cmd.exe' is used as a fallback if process.env.ComSpec is unavailable.
Advanced serialization#
Child processes support a serialization mechanism for IPC that is based on the serialization API of the node:v8 module, based on the HTML structured clone algorithm. This is generally more powerful and supports more built-in JavaScript object types, such as BigInt, Map and Set, ArrayBuffer and TypedArray, Buffer, Error, RegExp etc.

However, this format is not a full superset of JSON, and e.g. properties set on objects of such built-in types will not be passed on through the serialization step. Additionally, performance may not be equivalent to that of JSON, depending on the structure of the passed data. Therefore, this feature requires opting in by setting the serialization option to 'advanced' when calling child_process.spawn() or child_process.fork().
Cluster#
Source Code: lib/cluster.js

Clusters of Node.js processes can be used to run multiple instances of Node.js that can distribute workloads among their application threads. When process isolation is not needed, use the worker_threads module instead, which allows running multiple application threads within a single Node.js instance.
The cluster module allows easy creation of child processes that all shareserver ports.
```js
import cluster from 'node:cluster';
import http from 'node:http';
import { availableParallelism } from 'node:os';
import process from 'node:process';

const numCPUs = availableParallelism();

if (cluster.isPrimary) {
  console.log(`Primary ${process.pid} is running`);

  // Fork workers.
  for (let i = 0; i < numCPUs; i++) {
    cluster.fork();
  }

  cluster.on('exit', (worker, code, signal) => {
    console.log(`worker ${worker.process.pid} died`);
  });
} else {
  // Workers can share any TCP connection
  // In this case it is an HTTP server
  http.createServer((req, res) => {
    res.writeHead(200);
    res.end('hello world\n');
  }).listen(8000);

  console.log(`Worker ${process.pid} started`);
}
```

```js
const cluster = require('node:cluster');
const http = require('node:http');
const numCPUs = require('node:os').availableParallelism();
const process = require('node:process');

if (cluster.isPrimary) {
  console.log(`Primary ${process.pid} is running`);

  // Fork workers.
  for (let i = 0; i < numCPUs; i++) {
    cluster.fork();
  }

  cluster.on('exit', (worker, code, signal) => {
    console.log(`worker ${worker.process.pid} died`);
  });
} else {
  // Workers can share any TCP connection
  // In this case it is an HTTP server
  http.createServer((req, res) => {
    res.writeHead(200);
    res.end('hello world\n');
  }).listen(8000);

  console.log(`Worker ${process.pid} started`);
}
```
Running Node.js will now share port 8000 between the workers:
```console
$ node server.js
Primary 3596 is running
Worker 4324 started
Worker 4520 started
Worker 6056 started
Worker 5644 started
```

On Windows, it is not yet possible to set up a named pipe server in a worker.
How it works#
The worker processes are spawned using the child_process.fork() method, so that they can communicate with the parent via IPC and pass server handles back and forth.
The cluster module supports two methods of distributing incomingconnections.
The first one (and the default one on all platforms except Windows) is the round-robin approach, where the primary process listens on a port, accepts new connections and distributes them across the workers in a round-robin fashion, with some built-in smarts to avoid overloading a worker process.

The second approach is where the primary process creates the listen socket and sends it to interested workers. The workers then accept incoming connections directly.

The second approach should, in theory, give the best performance. In practice however, distribution tends to be very unbalanced due to operating system scheduler vagaries. Loads have been observed where over 70% of all connections ended up in just two processes, out of a total of eight.

Because server.listen() hands off most of the work to the primary process, there are three cases where the behavior between a normal Node.js process and a cluster worker differs:

- server.listen({fd: 7}) Because the message is passed to the primary, file descriptor 7 in the parent will be listened on, and the handle passed to the worker, rather than listening to the worker's idea of what the number 7 file descriptor references.
- server.listen(handle) Listening on handles explicitly will cause the worker to use the supplied handle, rather than talk to the primary process.
- server.listen(0) Normally, this will cause servers to listen on a random port. However, in a cluster, each worker will receive the same "random" port each time they do listen(0). In essence, the port is random the first time, but predictable thereafter. To listen on a unique port, generate a port number based on the cluster worker ID.
Node.js does not provide routing logic. It is therefore important to design an application such that it does not rely too heavily on in-memory data objects for things like sessions and login.

Because workers are all separate processes, they can be killed or re-spawned depending on a program's needs, without affecting other workers. As long as there are some workers still alive, the server will continue to accept connections. If no workers are alive, existing connections will be dropped and new connections will be refused. Node.js does not automatically manage the number of workers, however. It is the application's responsibility to manage the worker pool based on its own needs.

Although a primary use case for the node:cluster module is networking, it can also be used for other use cases requiring worker processes.
Class:Worker#
- Extends: <EventEmitter>

A Worker object contains all public information and methods about a worker. In the primary it can be obtained using cluster.workers. In a worker it can be obtained using cluster.worker.
Event:'disconnect'#
Similar to the cluster.on('disconnect') event, but specific to this worker.

```js
cluster.fork().on('disconnect', () => {
  // Worker has disconnected
});
```

Event: 'error'#
This event is the same as the one provided bychild_process.fork().
Within a worker, process.on('error') may also be used.
Event:'exit'#
- code <number> The exit code, if it exited normally.
- signal <string> The name of the signal (e.g. 'SIGHUP') that caused the process to be killed.

Similar to the cluster.on('exit') event, but specific to this worker.

```js
import cluster from 'node:cluster';

if (cluster.isPrimary) {
  const worker = cluster.fork();

  worker.on('exit', (code, signal) => {
    if (signal) {
      console.log(`worker was killed by signal: ${signal}`);
    } else if (code !== 0) {
      console.log(`worker exited with error code: ${code}`);
    } else {
      console.log('worker success!');
    }
  });
}
```

```js
const cluster = require('node:cluster');

if (cluster.isPrimary) {
  const worker = cluster.fork();

  worker.on('exit', (code, signal) => {
    if (signal) {
      console.log(`worker was killed by signal: ${signal}`);
    } else if (code !== 0) {
      console.log(`worker exited with error code: ${code}`);
    } else {
      console.log('worker success!');
    }
  });
}
```
Event:'listening'#
- address <Object>
Similar to the cluster.on('listening') event, but specific to this worker.

```js
cluster.fork().on('listening', (address) => {
  // Worker is listening
});
```
It is not emitted in the worker.
Event:'message'#
- message <Object>
- handle <undefined> | <Object>

Similar to the 'message' event of cluster, but specific to this worker.
Within a worker,process.on('message') may also be used.
Here is an example using the message system. It keeps a count in the primary process of the number of HTTP requests received by the workers:

```js
import cluster from 'node:cluster';
import http from 'node:http';
import { availableParallelism } from 'node:os';
import process from 'node:process';

if (cluster.isPrimary) {

  // Keep track of http requests
  let numReqs = 0;
  setInterval(() => {
    console.log(`numReqs = ${numReqs}`);
  }, 1000);

  // Count requests
  function messageHandler(msg) {
    if (msg.cmd && msg.cmd === 'notifyRequest') {
      numReqs += 1;
    }
  }

  // Start workers and listen for messages containing notifyRequest
  const numCPUs = availableParallelism();
  for (let i = 0; i < numCPUs; i++) {
    cluster.fork();
  }

  for (const id in cluster.workers) {
    cluster.workers[id].on('message', messageHandler);
  }

} else {

  // Worker processes have a http server.
  http.Server((req, res) => {
    res.writeHead(200);
    res.end('hello world\n');

    // Notify primary about the request
    process.send({ cmd: 'notifyRequest' });
  }).listen(8000);
}
```

```js
const cluster = require('node:cluster');
const http = require('node:http');
const numCPUs = require('node:os').availableParallelism();
const process = require('node:process');

if (cluster.isPrimary) {

  // Keep track of http requests
  let numReqs = 0;
  setInterval(() => {
    console.log(`numReqs = ${numReqs}`);
  }, 1000);

  // Count requests
  function messageHandler(msg) {
    if (msg.cmd && msg.cmd === 'notifyRequest') {
      numReqs += 1;
    }
  }

  // Start workers and listen for messages containing notifyRequest
  for (let i = 0; i < numCPUs; i++) {
    cluster.fork();
  }

  for (const id in cluster.workers) {
    cluster.workers[id].on('message', messageHandler);
  }

} else {

  // Worker processes have a http server.
  http.Server((req, res) => {
    res.writeHead(200);
    res.end('hello world\n');

    // Notify primary about the request
    process.send({ cmd: 'notifyRequest' });
  }).listen(8000);
}
```
Event:'online'#
Similar to the cluster.on('online') event, but specific to this worker.

```js
cluster.fork().on('online', () => {
  // Worker is online
});
```

It is not emitted in the worker.
worker.disconnect()#
History
| Version | Changes |
|---|---|
| v7.3.0 | This method now returns a reference to worker. |
| v0.7.7 | Added in: v0.7.7 |
- Returns: <cluster.Worker> A reference to worker.

In a worker, this function will close all servers, wait for the 'close' event on those servers, and then disconnect the IPC channel.

In the primary, an internal message is sent to the worker causing it to call .disconnect() on itself.

Causes .exitedAfterDisconnect to be set.

After a server is closed, it will no longer accept new connections, but connections may be accepted by any other listening worker. Existing connections will be allowed to close as usual. When no more connections exist, see server.close(), the IPC channel to the worker will close allowing it to die gracefully.

The above applies only to server connections; client connections are not automatically closed by workers, and disconnect does not wait for them to close before exiting.

In a worker, process.disconnect exists, but it is not this function; it is disconnect().

Because long living server connections may block workers from disconnecting, it may be useful to send a message, so application specific actions may be taken to close them. It also may be useful to implement a timeout, killing a worker if the 'disconnect' event has not been emitted after some time.
```js
if (cluster.isPrimary) {
  const worker = cluster.fork();
  let timeout;

  worker.on('listening', (address) => {
    worker.send('shutdown');
    worker.disconnect();
    timeout = setTimeout(() => {
      worker.kill();
    }, 2000);
  });

  worker.on('disconnect', () => {
    clearTimeout(timeout);
  });
} else if (cluster.isWorker) {
  const net = require('node:net');
  const server = net.createServer((socket) => {
    // Connections never end
  });

  server.listen(8000);

  process.on('message', (msg) => {
    if (msg === 'shutdown') {
      // Initiate graceful close of any connections to server
    }
  });
}
```

worker.exitedAfterDisconnect#
- Type: <boolean>

This property is true if the worker exited due to .disconnect(). If the worker exited any other way, it is false. If the worker has not exited, it is undefined.

The boolean worker.exitedAfterDisconnect allows distinguishing between voluntary and accidental exit; the primary may choose not to respawn a worker based on this value.

```js
cluster.on('exit', (worker, code, signal) => {
  if (worker.exitedAfterDisconnect === true) {
    console.log('Oh, it was just voluntary – no need to worry');
  }
});

// kill worker
worker.kill();
```

worker.id#
- Type: <integer>

Each new worker is given its own unique id; this id is stored in id.

While a worker is alive, this is the key that indexes it in cluster.workers.
worker.isConnected()#
This function returns true if the worker is connected to its primary via its IPC channel, false otherwise. A worker is connected to its primary after it has been created. It is disconnected after the 'disconnect' event is emitted.
worker.isDead()#
This function returns true if the worker's process has terminated (either because of exiting or being signaled). Otherwise, it returns false.

```js
import cluster from 'node:cluster';
import http from 'node:http';
import { availableParallelism } from 'node:os';
import process from 'node:process';

const numCPUs = availableParallelism();

if (cluster.isPrimary) {
  console.log(`Primary ${process.pid} is running`);

  // Fork workers.
  for (let i = 0; i < numCPUs; i++) {
    cluster.fork();
  }

  cluster.on('fork', (worker) => {
    console.log('worker is dead:', worker.isDead());
  });

  cluster.on('exit', (worker, code, signal) => {
    console.log('worker is dead:', worker.isDead());
  });
} else {
  // Workers can share any TCP connection. In this case, it is an HTTP server.
  http.createServer((req, res) => {
    res.writeHead(200);
    res.end(`Current process\n${process.pid}`);
    process.kill(process.pid);
  }).listen(8000);
}
```

```js
const cluster = require('node:cluster');
const http = require('node:http');
const numCPUs = require('node:os').availableParallelism();
const process = require('node:process');

if (cluster.isPrimary) {
  console.log(`Primary ${process.pid} is running`);

  // Fork workers.
  for (let i = 0; i < numCPUs; i++) {
    cluster.fork();
  }

  cluster.on('fork', (worker) => {
    console.log('worker is dead:', worker.isDead());
  });

  cluster.on('exit', (worker, code, signal) => {
    console.log('worker is dead:', worker.isDead());
  });
} else {
  // Workers can share any TCP connection. In this case, it is an HTTP server.
  http.createServer((req, res) => {
    res.writeHead(200);
    res.end(`Current process\n${process.pid}`);
    process.kill(process.pid);
  }).listen(8000);
}
```
worker.kill([signal])#
- signal <string> Name of the kill signal to send to the worker process. Default: 'SIGTERM'

This function will kill the worker. In the primary, it does this by disconnecting the worker.process, and once disconnected, killing with signal. In the worker, it does it by killing the process with signal.

The kill() function kills the worker process without waiting for a graceful disconnect; it has the same behavior as worker.process.kill().

This method is aliased as worker.destroy() for backwards compatibility.

In a worker, process.kill() exists, but it is not this function; it is kill().
worker.process#
- Type: <ChildProcess>

All workers are created using child_process.fork(); the returned object from this function is stored as .process. In a worker, the global process is stored.

See: Child Process module.

Workers will call process.exit(0) if the 'disconnect' event occurs on process and .exitedAfterDisconnect is not true. This protects against accidental disconnection.
worker.send(message[, sendHandle[, options]][, callback])#
History
| Version | Changes |
|---|---|
| v4.0.0 | The |
| v0.7.0 | Added in: v0.7.0 |
- message <Object>
- sendHandle <Handle>
- options <Object> The options argument, if present, is an object used to parameterize the sending of certain types of handles. options supports the following properties:
  - keepOpen <boolean> A value that can be used when passing instances of net.Socket. When true, the socket is kept open in the sending process. Default: false.
- callback <Function>
- Returns: <boolean>
Send a message to a worker or primary, optionally with a handle.
In the primary, this sends a message to a specific worker. It is identical to ChildProcess.send().

In a worker, this sends a message to the primary. It is identical to process.send().
This example will echo back all messages from the primary:
```js
if (cluster.isPrimary) {
  const worker = cluster.fork();
  worker.send('hi there');
} else if (cluster.isWorker) {
  process.on('message', (msg) => {
    process.send(msg);
  });
}
```

Event: 'disconnect'#

- worker <cluster.Worker>

Emitted after the worker IPC channel has disconnected. This can occur when a worker exits gracefully, is killed, or is disconnected manually (such as with worker.disconnect()).

There may be a delay between the 'disconnect' and 'exit' events. These events can be used to detect if the process is stuck in a cleanup or if there are long-living connections.
```js
cluster.on('disconnect', (worker) => {
  console.log(`The worker #${worker.id} has disconnected`);
});
```

Event: 'exit'#

- worker <cluster.Worker>
- code <number> The exit code, if it exited normally.
- signal <string> The name of the signal (e.g. 'SIGHUP') that caused the process to be killed.

When any of the workers die, the cluster module will emit the 'exit' event.

This can be used to restart the worker by calling .fork() again.
```js
cluster.on('exit', (worker, code, signal) => {
  console.log('worker %d died (%s). restarting...',
              worker.process.pid, signal || code);
  cluster.fork();
});
```

Event: 'fork'#

- worker <cluster.Worker>

When a new worker is forked, the cluster module will emit a 'fork' event. This can be used to log worker activity, and create a custom timeout.

```js
const timeouts = [];
function errorMsg() {
  console.error('Something must be wrong with the connection ...');
}

cluster.on('fork', (worker) => {
  timeouts[worker.id] = setTimeout(errorMsg, 2000);
});
cluster.on('listening', (worker, address) => {
  clearTimeout(timeouts[worker.id]);
});
cluster.on('exit', (worker, code, signal) => {
  clearTimeout(timeouts[worker.id]);
  errorMsg();
});
```

Event: 'listening'#
worker<cluster.Worker>address<Object>
After callinglisten() from a worker, when the'listening' event is emittedon the server, a'listening' event will also be emitted oncluster in theprimary.
The event handler is executed with two arguments, theworker contains theworker object and theaddress object contains the following connectionproperties:address,port, andaddressType. This is very useful if theworker is listening on more than one address.
```js
cluster.on('listening', (worker, address) => {
  console.log(`A worker is now connected to ${address.address}:${address.port}`);
});
```

The addressType is one of:

- 4 (TCPv4)
- 6 (TCPv6)
- -1 (Unix domain socket)
- 'udp4' or 'udp6' (UDPv4 or UDPv6)
Event: 'message'#
History
| Version | Changes |
|---|---|
| v6.0.0 | The |
| v2.5.0 | Added in: v2.5.0 |
- worker <cluster.Worker>
- message <Object>
- handle <undefined> | <Object>
Emitted when the cluster primary receives a message from any worker.
Event: 'online'#
- worker <cluster.Worker>

After forking a new worker, the worker should respond with an online message. When the primary receives an online message it will emit this event. The difference between 'fork' and 'online' is that 'fork' is emitted when the primary forks a worker, and 'online' is emitted when the worker is running.
```js
cluster.on('online', (worker) => {
  console.log('Yay, the worker responded after it was forked');
});
```

Event: 'setup'#
- settings <Object>

Emitted every time .setupPrimary() is called.

The settings object is the cluster.settings object at the time .setupPrimary() was called, and is advisory only, since multiple calls to .setupPrimary() can be made in a single tick.

If accuracy is important, use cluster.settings.
cluster.disconnect([callback])#
- callback <Function> Called when all workers are disconnected and handles are closed.

Calls .disconnect() on each worker in cluster.workers.

When they are disconnected, all internal handles will be closed, allowing the primary process to die gracefully if no other event is waiting.

The method takes an optional callback argument which will be called when finished.
This can only be called from the primary process.
cluster.fork([env])#
- env <Object> Key/value pairs to add to worker process environment.
- Returns: <cluster.Worker>
Spawn a new worker process.
This can only be called from the primary process.
cluster.isMaster#
Deprecated alias for cluster.isPrimary.
cluster.isPrimary#
- Type: <boolean>
True if the process is a primary. This is determined by process.env.NODE_UNIQUE_ID. If process.env.NODE_UNIQUE_ID is undefined, then isPrimary is true.
cluster.isWorker#
- Type: <boolean>
True if the process is not a primary (it is the negation of cluster.isPrimary).
cluster.schedulingPolicy#
The scheduling policy, either cluster.SCHED_RR for round-robin or cluster.SCHED_NONE to leave it to the operating system. This is a global setting and is effectively frozen once either the first worker is spawned or .setupPrimary() is called, whichever comes first.

SCHED_RR is the default on all operating systems except Windows. Windows will change to SCHED_RR once libuv is able to effectively distribute IOCP handles without incurring a large performance hit.

cluster.schedulingPolicy can also be set through the NODE_CLUSTER_SCHED_POLICY environment variable. Valid values are 'rr' and 'none'.
cluster.settings#
History
| Version | Changes |
|---|---|
| v13.2.0, v12.16.0 | The |
| v9.5.0 | The |
| v9.4.0 | The |
| v8.2.0 | The |
| v6.4.0 | The |
| v0.7.1 | Added in: v0.7.1 |
- Type: <Object>
- execArgv <string[]> List of string arguments passed to the Node.js executable. Default: process.execArgv.
- exec <string> File path to worker file. Default: process.argv[1].
- args <string[]> String arguments passed to worker. Default: process.argv.slice(2).
- cwd <string> Current working directory of the worker process. Default: undefined (inherits from parent process).
- serialization <string> Specify the kind of serialization used for sending messages between processes. Possible values are 'json' and 'advanced'. See Advanced serialization for child_process for more details. Default: 'json'.
- silent <boolean> Whether or not to send output to parent's stdio. Default: false.
- stdio <Array> Configures the stdio of forked processes. Because the cluster module relies on IPC to function, this configuration must contain an 'ipc' entry. When this option is provided, it overrides silent. See child_process.spawn()'s stdio.
- uid <number> Sets the user identity of the process. (See setuid(2).)
- gid <number> Sets the group identity of the process. (See setgid(2).)
- inspectPort <number> | <Function> Sets inspector port of worker. This can be a number, or a function that takes no arguments and returns a number. By default each worker gets its own port, incremented from the primary's process.debugPort.
- windowsHide <boolean> Hide the forked processes' console window that would normally be created on Windows systems. Default: false.
After calling .setupPrimary() (or .fork()), this settings object will contain the settings, including the default values.

This object is not intended to be changed or set manually.
cluster.setupMaster([settings])#
History
| Version | Changes |
|---|---|
| v16.0.0 | Deprecated since: v16.0.0 |
| v6.4.0 | The |
| v0.7.1 | Added in: v0.7.1 |
Deprecated alias for .setupPrimary().
cluster.setupPrimary([settings])#
- settings <Object> See cluster.settings.

setupPrimary is used to change the default 'fork' behavior. Once called, the settings will be present in cluster.settings.

Any settings changes only affect future calls to .fork() and have no effect on workers that are already running.

The only attribute of a worker that cannot be set via .setupPrimary() is the env passed to .fork().

The defaults above apply to the first call only; the defaults for later calls are the current values at the time cluster.setupPrimary() is called.
```mjs
import cluster from 'node:cluster';

cluster.setupPrimary({
  exec: 'worker.js',
  args: ['--use', 'https'],
  silent: true,
});
cluster.fork(); // https worker
cluster.setupPrimary({
  exec: 'worker.js',
  args: ['--use', 'http'],
});
cluster.fork(); // http worker
```

```cjs
const cluster = require('node:cluster');

cluster.setupPrimary({
  exec: 'worker.js',
  args: ['--use', 'https'],
  silent: true,
});
cluster.fork(); // https worker
cluster.setupPrimary({
  exec: 'worker.js',
  args: ['--use', 'http'],
});
cluster.fork(); // http worker
```
This can only be called from the primary process.
cluster.worker#
- Type: <Object>
A reference to the current worker object. Not available in the primary process.
```mjs
import cluster from 'node:cluster';

if (cluster.isPrimary) {
  console.log('I am primary');
  cluster.fork();
  cluster.fork();
} else if (cluster.isWorker) {
  console.log(`I am worker #${cluster.worker.id}`);
}
```

```cjs
const cluster = require('node:cluster');

if (cluster.isPrimary) {
  console.log('I am primary');
  cluster.fork();
  cluster.fork();
} else if (cluster.isWorker) {
  console.log(`I am worker #${cluster.worker.id}`);
}
```
cluster.workers#
- Type: <Object>
A hash that stores the active worker objects, keyed by the id field. This makes it easy to loop through all the workers. It is only available in the primary process.

A worker is removed from cluster.workers after the worker has disconnected and exited. The order between these two events cannot be determined in advance. However, it is guaranteed that the removal from the cluster.workers list happens before the last 'disconnect' or 'exit' event is emitted.
```mjs
import cluster from 'node:cluster';

for (const worker of Object.values(cluster.workers)) {
  worker.send('big announcement to all workers');
}
```

```cjs
const cluster = require('node:cluster');

for (const worker of Object.values(cluster.workers)) {
  worker.send('big announcement to all workers');
}
```
Command-line API#
Node.js comes with a variety of CLI options. These options expose built-in debugging, multiple ways to execute scripts, and other helpful runtime options.

To view this documentation as a manual page in a terminal, run man node.
Synopsis#
```console
node [options] [V8 options] [<program-entry-point> | -e "script" | -] [--] [arguments]
node inspect [<program-entry-point> | -e "script" | <host>:<port>] …
node --v8-options
```
Execute without arguments to start the REPL.

For more info about node inspect, see the debugger documentation.
Program entry point#
The program entry point is a specifier-like string. If the string is not an absolute path, it's resolved as a relative path from the current working directory. That entry point string is then resolved as if it had been requested by require() from the current working directory. If no corresponding file is found, an error is thrown.

By default, the resolved path is also loaded as if it had been requested by require(), unless one of the conditions below applies, in which case it's loaded as if it had been requested by import():

- The program was started with a command-line flag that forces the entry point to be loaded with the ECMAScript module loader, such as --import.
- The file has an .mjs, .mts, or .wasm extension.
- The file does not have a .cjs extension, and the nearest parent package.json file contains a top-level "type" field with a value of "module".

See module resolution and loading for more details.
Options#
History
| Version | Changes |
|---|---|
| v10.12.0 | Underscores instead of dashes are now allowed for Node.js options as well, in addition to V8 options. |
All options, including V8 options, allow words to be separated by both dashes (-) or underscores (_). For example, --pending-deprecation is equivalent to --pending_deprecation.

If an option that takes a single value (such as --max-http-header-size) is passed more than once, then the last passed value is used. Options from the command line take precedence over options passed through the NODE_OPTIONS environment variable.
-#
Alias for stdin. Analogous to the use of - in other command-line utilities, meaning that the script is read from stdin, and the rest of the options are passed to that script.
--#
Indicate the end of node options. Pass the rest of the arguments to the script. If no script filename or eval/print script is supplied prior to this, then the next argument is used as a script filename.
--abort-on-uncaught-exception#
Aborting instead of exiting causes a core file to be generated for post-mortem analysis using a debugger (such as lldb, gdb, and mdb).

If this flag is passed, the behavior can still be set to not abort through process.setUncaughtExceptionCaptureCallback() (and through usage of the node:domain module that uses it).
--allow-addons#
When using the Permission Model, the process will not be able to use native addons by default. Attempts to do so will throw an ERR_DLOPEN_DISABLED unless the user explicitly passes the --allow-addons flag when starting Node.js.
Example:
```js
// Attempt to require a native addon
require('nodejs-addon-example');
```

```console
$ node --permission --allow-fs-read=* index.js
node:internal/modules/cjs/loader:1319
  return process.dlopen(module, path.toNamespacedPath(filename));
                 ^

Error: Cannot load native addon because loading addons is disabled.
    at Module._extensions..node (node:internal/modules/cjs/loader:1319:18)
    at Module.load (node:internal/modules/cjs/loader:1091:32)
    at Module._load (node:internal/modules/cjs/loader:938:12)
    at Module.require (node:internal/modules/cjs/loader:1115:19)
    at require (node:internal/modules/helpers:130:18)
    at Object.<anonymous> (/home/index.js:1:15)
    at Module._compile (node:internal/modules/cjs/loader:1233:14)
    at Module._extensions..js (node:internal/modules/cjs/loader:1287:10)
    at Module.load (node:internal/modules/cjs/loader:1091:32)
    at Module._load (node:internal/modules/cjs/loader:938:12) {
  code: 'ERR_DLOPEN_DISABLED'
}
```

--allow-child-process#
History
| Version | Changes |
|---|---|
| v24.4.0, v22.18.0 | When spawning a process with the permission model enabled, the flags are inherited by the child Node.js process through the NODE_OPTIONS environment variable. |
| v20.0.0 | Added in: v20.0.0 |
When using the Permission Model, the process will not be able to spawn any child process by default. Attempts to do so will throw an ERR_ACCESS_DENIED unless the user explicitly passes the --allow-child-process flag when starting Node.js.
Example:
```js
const childProcess = require('node:child_process');
// Attempt to bypass the permission
childProcess.spawn('node', ['-e', 'require("fs").writeFileSync("/new-file", "example")']);
```

```console
$ node --permission --allow-fs-read=* index.js
node:internal/child_process:388
  const err = this._handle.spawn(options);
                           ^
Error: Access to this API has been restricted
    at ChildProcess.spawn (node:internal/child_process:388:28)
    at node:internal/main/run_main_module:17:47 {
  code: 'ERR_ACCESS_DENIED',
  permission: 'ChildProcess'
}
```

The child_process.fork() API inherits the execution arguments from the parent process. This means that if Node.js is started with the Permission Model enabled and the --allow-child-process flag is set, any child process created using child_process.fork() will automatically receive all relevant Permission Model flags.

This behavior also applies to child_process.spawn(), but in that case, the flags are propagated via the NODE_OPTIONS environment variable rather than directly through the process arguments.
--allow-fs-read#
History
| Version | Changes |
|---|---|
| v24.2.0, v22.17.0 | Entrypoints of your application are allowed to be read implicitly. |
| v23.5.0, v22.13.0 | Permission Model and --allow-fs flags are stable. |
| v20.7.0 | Paths delimited by comma ( |
| v20.0.0 | Added in: v20.0.0 |
This flag configures file system read permissions using the Permission Model.

The valid arguments for the --allow-fs-read flag are:

- * - To allow all FileSystemRead operations.
- Multiple paths can be allowed using multiple --allow-fs-read flags. Example: --allow-fs-read=/folder1/ --allow-fs-read=/folder1/
Examples can be found in theFile System Permissions documentation.
The initializer module and custom --require modules have an implicit read permission.

```console
$ node --permission -r custom-require.js -r custom-require-2.js index.js
```

The custom-require.js, custom-require-2.js, and index.js files will be in the allowed read list by default.
```js
process.permission.has('fs.read', 'index.js'); // true
process.permission.has('fs.read', 'custom-require.js'); // true
process.permission.has('fs.read', 'custom-require-2.js'); // true
```

--allow-fs-write#
History
| Version | Changes |
|---|---|
| v23.5.0, v22.13.0 | Permission Model and --allow-fs flags are stable. |
| v20.7.0 | Paths delimited by comma ( |
| v20.0.0 | Added in: v20.0.0 |
This flag configures file system write permissions using the Permission Model.

The valid arguments for the --allow-fs-write flag are:

- * - To allow all FileSystemWrite operations.
- Multiple paths can be allowed using multiple --allow-fs-write flags. Example: --allow-fs-write=/folder1/ --allow-fs-write=/folder1/

Paths delimited by comma (,) are no longer allowed. When passing a single flag with a comma, a warning will be displayed.
Examples can be found in theFile System Permissions documentation.
--allow-inspector#
When using the Permission Model, the process will not be able to connect through the inspector protocol.

Attempts to do so will throw an ERR_ACCESS_DENIED unless the user explicitly passes the --allow-inspector flag when starting Node.js.
Example:
```js
const { Session } = require('node:inspector/promises');
const session = new Session();
session.connect();
```

```console
$ node --permission index.js
Error: connect ERR_ACCESS_DENIED Access to this API has been restricted. Use --allow-inspector to manage permissions.
  code: 'ERR_ACCESS_DENIED',
}
```

--allow-net#
When using the Permission Model, the process will not be able to access the network by default. Attempts to do so will throw an ERR_ACCESS_DENIED unless the user explicitly passes the --allow-net flag when starting Node.js.
Example:
```js
const http = require('node:http');
// Attempt to bypass the permission
const req = http.get('http://example.com', () => {});
req.on('error', (err) => {
  console.log('err', err);
});
```

```console
$ node --permission index.js
Error: connect ERR_ACCESS_DENIED Access to this API has been restricted. Use --allow-net to manage permissions.
  code: 'ERR_ACCESS_DENIED',
}
```

--allow-wasi#
When using the Permission Model, the process will not be capable of creating any WASI instances by default. For security reasons, the call will throw an ERR_ACCESS_DENIED unless the user explicitly passes the flag --allow-wasi in the main Node.js process.
Example:
```js
const { WASI } = require('node:wasi');
// Attempt to bypass the permission
new WASI({
  version: 'preview1',
  // Attempt to mount the whole filesystem
  preopens: {
    '/': '/',
  },
});
```

```console
$ node --permission --allow-fs-read=* index.js
Error: Access to this API has been restricted
    at node:internal/main/run_main_module:30:49 {
  code: 'ERR_ACCESS_DENIED',
  permission: 'WASI',
}
```

--allow-worker#
When using the Permission Model, the process will not be able to create any worker threads by default. For security reasons, the call will throw an ERR_ACCESS_DENIED unless the user explicitly passes the flag --allow-worker in the main Node.js process.
Example:
```js
const { Worker } = require('node:worker_threads');
// Attempt to bypass the permission
new Worker(__filename);
```

```console
$ node --permission --allow-fs-read=* index.js
Error: Access to this API has been restricted
    at node:internal/main/run_main_module:17:47 {
  code: 'ERR_ACCESS_DENIED',
  permission: 'WorkerThreads'
}
```

--build-sea=config#
Generates a single executable application from a JSON configuration file. The argument must be a path to the configuration file. If the path is not absolute, it is resolved relative to the current working directory.

For configuration fields, cross-platform notes, and asset APIs, see the single executable application documentation.
--build-snapshot#
History
| Version | Changes |
|---|---|
| v25.4.0 | The snapshot building process is no longer experimental. |
| v18.8.0 | Added in: v18.8.0 |
Generates a snapshot blob when the process exits and writes it to disk, which can be loaded later with --snapshot-blob.

When building the snapshot, if --snapshot-blob is not specified, the generated blob will be written, by default, to snapshot.blob in the current working directory. Otherwise it will be written to the path specified by --snapshot-blob.
```console
$ echo "globalThis.foo = 'I am from the snapshot'" > snapshot.js

# Run snapshot.js to initialize the application and snapshot the
# state of it into snapshot.blob.
$ node --snapshot-blob snapshot.blob --build-snapshot snapshot.js

$ echo "console.log(globalThis.foo)" > index.js

# Load the generated snapshot and start the application from index.js.
$ node --snapshot-blob snapshot.blob index.js
I am from the snapshot
```

The v8.startupSnapshot API can be used to specify an entry point at snapshot building time, thus avoiding the need for an additional entry script at deserialization time:

```console
$ echo "require('v8').startupSnapshot.setDeserializeMainFunction(() => console.log('I am from the snapshot'))" > snapshot.js
$ node --snapshot-blob snapshot.blob --build-snapshot snapshot.js
$ node --snapshot-blob snapshot.blob
I am from the snapshot
```

For more information, check out the v8.startupSnapshot API documentation.
The snapshot currently only supports loading a single entry point during the snapshot building process, which can load built-in modules, but not additional user-land modules. Users can bundle their applications into a single script with their bundler of choice before building a snapshot.

As it's complicated to ensure the serializability of all built-in modules, which are also growing over time, only a subset of the built-in modules are well tested to be serializable during the snapshot building process. The Node.js core test suite checks that a few fairly complex applications can be snapshotted. The list of built-in modules being captured by the built-in snapshot of Node.js is considered supported. When the snapshot builder encounters a built-in module that cannot be serialized, it may crash the snapshot building process. In that case a typical workaround would be to delay loading that module until runtime, using either v8.startupSnapshot.setDeserializeMainFunction() or v8.startupSnapshot.addDeserializeCallback(). If serialization of an additional module during the snapshot building process is needed, please file a request in the Node.js issue tracker and link to it in the tracking issue for user-land snapshots.
--build-snapshot-config#
History
| Version | Changes |
|---|---|
| v25.4.0 | The snapshot building process is no longer experimental. |
| v21.6.0, v20.12.0 | Added in: v21.6.0, v20.12.0 |
Specifies the path to a JSON configuration file which configures snapshotcreation behavior.
The following options are currently supported:
- builder <string> Required. Provides the name of the script that is executed before building the snapshot, as if --build-snapshot had been passed with builder as the main script name.
- withoutCodeCache <boolean> Optional. Including the code cache reduces the time spent on compiling functions included in the snapshot at the expense of a bigger snapshot size and potentially breaking portability of the snapshot.

When using this flag, additional script files provided on the command line will not be executed and will instead be interpreted as regular command line arguments.
-c, --check#
History
| Version | Changes |
|---|---|
| v10.0.0 | The |
| v5.0.0, v4.2.0 | Added in: v5.0.0, v4.2.0 |
Syntax check the script without executing it.
--completion-bash#
Print source-able bash completion script for Node.js.
```console
$ node --completion-bash > node_bash_completion
$ source node_bash_completion
```

-C condition, --conditions=condition#
History
| Version | Changes |
|---|---|
| v22.9.0, v20.18.0 | The flag is no longer experimental. |
| v14.9.0, v12.19.0 | Added in: v14.9.0, v12.19.0 |
Provide customconditional exports resolution conditions.
Any number of custom string condition names are permitted.
The default Node.js conditions of "node", "default", "import", and "require" will always apply as defined.
For example, to run a module with "development" resolutions:
```console
$ node -C development app.js
```

--cpu-prof#
History
| Version | Changes |
|---|---|
| v22.4.0, v20.16.0 | The |
| v12.0.0 | Added in: v12.0.0 |
Starts the V8 CPU profiler on start up, and writes the CPU profile to disk before exit.

If --cpu-prof-dir is not specified, the generated profile is placed in the current working directory.

If --cpu-prof-name is not specified, the generated profile is named CPU.${yyyymmdd}.${hhmmss}.${pid}.${tid}.${seq}.cpuprofile.
```console
$ node --cpu-prof index.js
$ ls *.cpuprofile
CPU.20190409.202950.15293.0.0.cpuprofile
```

If --cpu-prof-name is specified, the provided value is used as a template for the file name. The following placeholder is supported and will be substituted at runtime:

- ${pid} — the current process ID

```console
$ node --cpu-prof --cpu-prof-name 'CPU.${pid}.cpuprofile' index.js
$ ls *.cpuprofile
CPU.15293.cpuprofile
```

--cpu-prof-dir#
History
| Version | Changes |
|---|---|
| v22.4.0, v20.16.0 | The |
| v12.0.0 | Added in: v12.0.0 |
Specify the directory where the CPU profiles generated by --cpu-prof will be placed.
The default value is controlled by the--diagnostic-dir command-line option.
--cpu-prof-interval#
History
| Version | Changes |
|---|---|
| v22.4.0, v20.16.0 | The |
| v12.2.0 | Added in: v12.2.0 |
Specify the sampling interval in microseconds for the CPU profiles generated by --cpu-prof. The default is 1000 microseconds.
--cpu-prof-name#
History
| Version | Changes |
|---|---|
| v22.4.0, v20.16.0 | The |
| v12.0.0 | Added in: v12.0.0 |
Specify the file name of the CPU profile generated by --cpu-prof.
--diagnostic-dir=directory#
Set the directory to which all diagnostic output files are written. Defaults to the current working directory.
Affects the default output directory of:
--disable-proto=mode#
Disable the Object.prototype.__proto__ property. If mode is delete, the property is removed entirely. If mode is throw, accesses to the property throw an exception with the code ERR_PROTO_ACCESS.
--disable-sigusr1#
History
| Version | Changes |
|---|---|
| v24.8.0, v22.20.0 | The option is no longer experimental. |
| v23.7.0, v22.14.0 | Added in: v23.7.0, v22.14.0 |
Disable the ability to start a debugging session by sending a SIGUSR1 signal to the process.
--disable-warning=code-or-type#
Disable specific process warnings by code or type.

Warnings emitted from process.emitWarning() may contain a code and a type. This option suppresses warnings that have a matching code or type.

List of deprecation warnings.

The Node.js core warning types are: DeprecationWarning and ExperimentalWarning.

For example, the following script will not emit DEP0025 require('node:sys') when executed with node --disable-warning=DEP0025:
```mjs
import sys from 'node:sys';
```

```cjs
const sys = require('node:sys');
```
For example, the following script will emit the DEP0025 require('node:sys'), but not any ExperimentalWarnings (such as ExperimentalWarning: vm.measureMemory is an experimental feature in <=v21) when executed with node --disable-warning=ExperimentalWarning:
```mjs
import sys from 'node:sys';
import vm from 'node:vm';

vm.measureMemory();
```

```cjs
const sys = require('node:sys');
const vm = require('node:vm');

vm.measureMemory();
```
--disable-wasm-trap-handler#
By default, Node.js enables trap-handler-based WebAssembly bounds checks. As a result, V8 does not need to insert inline bounds checks into the code compiled from WebAssembly, which may speed up WebAssembly execution significantly, but this optimization requires allocating a big virtual memory cage (currently 10GB). If the Node.js process does not have access to a large enough virtual memory address space due to system configurations or hardware limitations, users won't be able to run any WebAssembly that involves allocation in this virtual memory cage and will see an out-of-memory error.
```console
$ ulimit -v 5000000
$ node -p "new WebAssembly.Memory({ initial: 10, maximum: 100 });"
[eval]:1
new WebAssembly.Memory({ initial: 10, maximum: 100 });
^

RangeError: WebAssembly.Memory(): could not allocate memory
    at [eval]:1:1
    at runScriptInThisContext (node:internal/vm:209:10)
    at node:internal/process/execution:118:14
    at [eval]-wrapper:6:24
    at runScript (node:internal/process/execution:101:62)
    at evalScript (node:internal/process/execution:136:3)
    at node:internal/main/eval_string:49:3
```

--disable-wasm-trap-handler disables this optimization so that users can at least run WebAssembly (with less optimal performance) when the virtual memory address space available to their Node.js process is lower than what the V8 WebAssembly memory cage needs.
--disallow-code-generation-from-strings#
Make built-in language features like eval and new Function that generate code from strings throw an exception instead. This does not affect the Node.js node:vm module.
--dns-result-order=order#
History
| Version | Changes |
|---|---|
| v22.1.0, v20.13.0 | The |
| v17.0.0 | Changed default value to |
| v16.4.0, v14.18.0 | Added in: v16.4.0, v14.18.0 |
Set the default value of order in dns.lookup() and dnsPromises.lookup(). The value could be:

- ipv4first: sets default order to ipv4first.
- ipv6first: sets default order to ipv6first.
- verbatim: sets default order to verbatim.

The default is verbatim and dns.setDefaultResultOrder() has higher priority than --dns-result-order.
--enable-fips#
Enable FIPS-compliant crypto at startup. (Requires Node.js to be builtagainst FIPS-compatible OpenSSL.)
--enable-source-maps#
History
| Version | Changes |
|---|---|
| v15.11.0, v14.18.0 | This API is no longer experimental. |
| v12.12.0 | Added in: v12.12.0 |
EnableSource Map support for stack traces.
When using a transpiler, such as TypeScript, stack traces thrown by an application reference the transpiled code, not the original source position. --enable-source-maps enables caching of Source Maps and makes a best effort to report stack traces relative to the original source file.

Overriding Error.prepareStackTrace may prevent --enable-source-maps from modifying the stack trace. Call and return the results of the original Error.prepareStackTrace in the overriding function to modify the stack trace with source maps.
```js
const originalPrepareStackTrace = Error.prepareStackTrace;
Error.prepareStackTrace = (error, trace) => {
  // Modify error and trace and format stack trace with
  // original Error.prepareStackTrace.
  return originalPrepareStackTrace(error, trace);
};
```

Note, enabling source maps can introduce latency to your application when Error.stack is accessed. If you access Error.stack frequently in your application, take into account the performance implications of --enable-source-maps.
--entry-url#
When present, Node.js will interpret the entry point as a URL, rather than a path.

Follows ECMAScript module resolution rules.

Any query parameter or hash in the URL will be accessible via import.meta.url.
```console
$ node --entry-url 'file:///path/to/file.js?queryparams=work#and-hashes-too'
$ node --entry-url 'file.ts?query#hash'
$ node --entry-url 'data:text/javascript,console.log("Hello")'
```

--env-file-if-exists=file#
History
| Version | Changes |
|---|---|
| v24.10.0 | The |
| v22.9.0 | Added in: v22.9.0 |
Behavior is the same as --env-file, but an error is not thrown if the file does not exist.
--env-file=file#
History
| Version | Changes |
|---|---|
| v24.10.0 | The |
| v21.7.0, v20.12.0 | Add support to multi-line values. |
| v20.6.0 | Added in: v20.6.0 |
Loads environment variables from a file relative to the current directory, making them available to applications on process.env. The environment variables which configure Node.js, such as NODE_OPTIONS, are parsed and applied. If the same variable is defined in the environment and in the file, the value from the environment takes precedence.

You can pass multiple --env-file arguments. Subsequent files override pre-existing variables defined in previous files.

An error is thrown if the file does not exist.
```console
$ node --env-file=.env --env-file=.development.env index.js
```

The format of the file should be one line per key-value pair of environment variable name and value separated by =:

```text
PORT=3000
```

Any text after a # is treated as a comment:

```text
# This is a comment
PORT=3000 # This is also a comment
```

Values can start and end with the following quotes: `, " or '. They are omitted from the values.

```text
USERNAME="nodejs" # will result in `nodejs` as the value.
```

Multi-line values are supported:

```text
MULTI_LINE="THIS IS
A MULTILINE"
# will result in `THIS IS\nA MULTILINE` as the value.
```

The export keyword before a key is ignored:

```text
export USERNAME="nodejs" # will result in `nodejs` as the value.
```

If you want to load environment variables from a file that may not exist, you can use the --env-file-if-exists flag instead.
-e, --eval "script"#
History
| Version | Changes |
|---|---|
| v22.6.0 | Eval now supports experimental type-stripping. |
| v5.11.0 | Built-in libraries are now available as predefined variables. |
| v0.5.2 | Added in: v0.5.2 |
Evaluate the following argument as JavaScript. The modules which are predefined in the REPL can also be used in script.

On Windows, using cmd.exe a single quote will not work correctly because it only recognizes double " for quoting. In PowerShell or Git Bash, both ' and " are usable.

It is possible to run code containing inline types unless the --no-strip-types flag is provided.
--experimental-addon-modules#
Enable experimental import support for .node addons.
--experimental-config-file=config#
If present, Node.js will look for a configuration file at the specified path. Node.js will read the configuration file and apply the settings. The configuration file should be a JSON file with the following structure. vX.Y.Z in the $schema must be replaced with the version of Node.js you are using.

```json
{
  "$schema": "https://nodejs.org/dist/vX.Y.Z/docs/node-config-schema.json",
  "nodeOptions": {
    "import": [
      "amaro/strip"
    ],
    "watch-path": "src",
    "watch-preserve-output": true
  },
  "test": {
    "test-isolation": "process"
  },
  "watch": {
    "watch-preserve-output": true
  }
}
```

The configuration file supports namespace-specific options:
- The nodeOptions field contains CLI flags that are allowed in NODE_OPTIONS.
- Namespace fields like test, watch, and permission contain configuration specific to that subsystem.
When a namespace is present in the configuration file, Node.js automatically enables the corresponding flag (e.g., --test, --watch, --permission). This allows you to configure subsystem-specific options without explicitly passing the flag on the command line.
For example:
```json
{
  "test": {
    "test-isolation": "process"
  }
}
```

is equivalent to:

```console
$ node --test --test-isolation=process
```

To disable the automatic flag while still using namespace options, you can explicitly set the flag to false within the namespace:

```json
{
  "test": {
    "test": false,
    "test-isolation": "process"
  }
}
```

No-op flags are not supported. Not all V8 flags are currently supported.
It is possible to use the official JSON schema to validate the configuration file, which may vary depending on the Node.js version. Each key in the configuration file corresponds to a flag that can be passed as a command-line argument. The value of the key is the value that would be passed to the flag.
For example, the configuration file above is equivalent to the following command-line arguments:
```shell
node --import amaro/strip --watch-path=src --watch-preserve-output --test-isolation=process
```
The priority in configuration is as follows:
- NODE_OPTIONS and command-line options
- Configuration file
- Dotenv NODE_OPTIONS
Values in the configuration file will not override the values in the environment variables and command-line options, but will override the values in the NODE_OPTIONS env file parsed by the --env-file flag.
Keys cannot be duplicated within the same or different namespaces.
The configuration parser will throw an error if the configuration file containsunknown keys or keys that cannot be used in a namespace.
Node.js will not sanitize or perform validation on the user-provided configuration, so NEVER use untrusted configuration files.
--experimental-default-config-file#
If the --experimental-default-config-file flag is present, Node.js will look for a node.config.json file in the current working directory and load it as a configuration file.
--experimental-eventsource#
Enable exposition of the EventSource Web API on the global scope.
--experimental-import-meta-resolve#
History
| Version | Changes |
|---|---|
| v20.6.0, v18.19.0 | synchronous import.meta.resolve made available by default, with the flag retained for enabling the experimental second argument as previously supported. |
| v13.9.0, v12.16.2 | Added in: v13.9.0, v12.16.2 |
Enable experimental import.meta.resolve() parent URL support, which allows passing a second parentURL argument for contextual resolution.
Previously, this flag gated the entire import.meta.resolve feature.
--experimental-inspector-network-resource#
Enable experimental support for inspector network resources.
--experimental-loader=module#
History
| Version | Changes |
|---|---|
| v23.6.1, v22.13.1, v20.18.2 | Using this feature with the permission model enabled requires passing |
| v12.11.1 | This flag was renamed from |
| v8.8.0 | Added in: v8.8.0 |
This flag is discouraged and may be removed in a future version of Node.js. Please use --import with register() instead.
Specify the module containing exported asynchronous module customization hooks. module may be any string accepted as an import specifier.
This feature requires --allow-worker if used with the Permission Model.
--experimental-network-inspection#
Enable experimental support for network inspection with Chrome DevTools.
--experimental-print-required-tla#
If the ES module being require()'d contains top-level await, this flag allows Node.js to evaluate the module, try to locate the top-level awaits, and print their location to help users find them.
--experimental-quic#
Enable experimental support for the QUIC protocol.
--experimental-sea-config#
Use this flag to generate a blob that can be injected into the Node.js binary to produce a single executable application. See the documentation about this configuration for details.
--experimental-storage-inspection#
Enable experimental support for storage inspection.
--experimental-test-coverage#
History
| Version | Changes |
|---|---|
| v20.1.0, v18.17.0 | This option can be used with |
| v19.7.0, v18.15.0 | Added in: v19.7.0, v18.15.0 |
When used in conjunction with the node:test module, a code coverage report is generated as part of the test runner output. If no tests are run, a coverage report is not generated. See the documentation on collecting code coverage from tests for more details.
--experimental-test-module-mocks#
History
| Version | Changes |
|---|---|
| v23.6.1, v22.13.1, v20.18.2 | Using this feature with the permission model enabled requires passing |
| v22.3.0, v20.18.0 | Added in: v22.3.0, v20.18.0 |
Enable module mocking in the test runner.
This feature requires --allow-worker if used with the Permission Model.
--experimental-transform-types#
Enables the transformation of TypeScript-only syntax into JavaScript code. Implies --enable-source-maps.
--experimental-vm-modules#
Enable experimental ES Module support in the node:vm module.
--experimental-wasi-unstable-preview1#
History
| Version | Changes |
|---|---|
| v20.0.0, v18.17.0 | This option is no longer required as WASI is enabled by default, but can still be passed. |
| v13.6.0 | changed from |
| v13.3.0, v12.16.0 | Added in: v13.3.0, v12.16.0 |
Enable experimental WebAssembly System Interface (WASI) support.
--experimental-worker-inspection#
Enable experimental support for worker inspection with Chrome DevTools.
--expose-gc#
This flag will expose the gc extension from V8.
```js
if (globalThis.gc) {
  globalThis.gc();
}
```
--force-fips#
Force FIPS-compliant crypto on startup. (Cannot be disabled from script code.) (Same requirements as --enable-fips.)
--force-node-api-uncaught-exceptions-policy#
Enforces uncaughtException event on Node-API asynchronous callbacks.
To prevent an existing add-on from crashing the process, this flag is not enabled by default. In the future, this flag will be enabled by default to enforce the correct behavior.
--frozen-intrinsics#
Enable experimental frozen intrinsics like Array and Object.
Only the root context is supported. There is no guarantee that globalThis.Array is indeed the default intrinsic reference. Code may break under this flag.
To allow polyfills to be added, --require and --import both run before freezing intrinsics.
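The effect of this flag can be observed at runtime; a minimal sketch (the file name check.js is illustrative, not from the docs):

```javascript
// Sketch: Object.isFrozen reports whether intrinsics were frozen at startup.
// Run as `node --frozen-intrinsics check.js` to see true; under default
// options this prints false.
const frozen = Object.isFrozen(Array.prototype);
console.log(frozen);
```
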
--heap-prof#
History
| Version | Changes |
|---|---|
| v22.4.0, v20.16.0 | The |
| v12.4.0 | Added in: v12.4.0 |
Starts the V8 heap profiler on startup, and writes the heap profile to disk before exit.
If --heap-prof-dir is not specified, the generated profile is placed in the current working directory.
If --heap-prof-name is not specified, the generated profile is named Heap.${yyyymmdd}.${hhmmss}.${pid}.${tid}.${seq}.heapprofile.
```shell
$ node --heap-prof index.js
$ ls *.heapprofile
Heap.20190409.202950.15293.0.001.heapprofile
```
--heap-prof-dir#
History
| Version | Changes |
|---|---|
| v22.4.0, v20.16.0 | The |
| v12.4.0 | Added in: v12.4.0 |
Specify the directory where the heap profiles generated by --heap-prof will be placed.
The default value is controlled by the--diagnostic-dir command-line option.
--heap-prof-interval#
History
| Version | Changes |
|---|---|
| v22.4.0, v20.16.0 | The |
| v12.4.0 | Added in: v12.4.0 |
Specify the average sampling interval in bytes for the heap profiles generated by --heap-prof. The default is 512 * 1024 bytes.
--heap-prof-name#
History
| Version | Changes |
|---|---|
| v22.4.0, v20.16.0 | The |
| v12.4.0 | Added in: v12.4.0 |
Specify the file name of the heap profile generated by --heap-prof.
--heapsnapshot-near-heap-limit=max_count#
History
| Version | Changes |
|---|---|
| v25.4.0 | The flag is no longer experimental. |
| v15.1.0, v14.18.0 | Added in: v15.1.0, v14.18.0 |
Writes a V8 heap snapshot to disk when the V8 heap usage is approaching the heap limit. max_count should be a non-negative integer (in which case Node.js will write no more than max_count snapshots to disk).
When generating snapshots, garbage collection may be triggered and bring the heap usage down. Therefore multiple snapshots may be written to disk before the Node.js instance finally runs out of memory. These heap snapshots can be compared to determine what objects are being allocated during the time consecutive snapshots are taken. It's not guaranteed that Node.js will write exactly max_count snapshots to disk, but it will try its best to generate at least one and up to max_count snapshots before the Node.js instance runs out of memory when max_count is greater than 0.
Generating V8 snapshots takes time and memory (both memory managed by the V8 heap and native memory outside the V8 heap). The bigger the heap is, the more resources it needs. Node.js will adjust the V8 heap to accommodate the additional V8 heap memory overhead, and try its best to avoid using up all the memory available to the process. When the process uses more memory than the system deems appropriate, the process may be terminated abruptly by the system, depending on the system configuration.
```shell
$ node --max-old-space-size=100 --heapsnapshot-near-heap-limit=3 index.js
Wrote snapshot to Heap.20200430.100036.49580.0.001.heapsnapshot
Wrote snapshot to Heap.20200430.100037.49580.0.002.heapsnapshot
Wrote snapshot to Heap.20200430.100038.49580.0.003.heapsnapshot
<--- Last few GCs --->
[49580:0x110000000]     4826 ms: Mark-sweep 130.6 (147.8) -> 130.5 (147.8) MB, 27.4 / 0.0 ms  (average mu = 0.126, current mu = 0.034) allocation failure scavenge might not succeed
[49580:0x110000000]     4845 ms: Mark-sweep 130.6 (147.8) -> 130.6 (147.8) MB, 18.8 / 0.0 ms  (average mu = 0.088, current mu = 0.031) allocation failure scavenge might not succeed
<--- JS stacktrace --->
FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory
....
```
--heapsnapshot-signal=signal#
Enables a signal handler that causes the Node.js process to write a heap dump when the specified signal is received. signal must be a valid signal name. Disabled by default.
```shell
$ node --heapsnapshot-signal=SIGUSR2 index.js &
$ ps aux
USER       PID %CPU %MEM    VSZ    RSS TTY  STAT START TIME COMMAND
node         1 5.5  6.1  787252 247004 ?    Ssl  16:43 0:02 node --heapsnapshot-signal=SIGUSR2 index.js
$ kill -USR2 1
$ ls
Heap.20190718.133405.15554.0.001.heapsnapshot
```
-h, --help#
Print node command-line options. The output of this option is less detailed than this document.
--import=module#
Preload the specified module at startup. If the flag is provided several times, each module will be executed sequentially in the order they appear, starting with the ones provided in NODE_OPTIONS.
Follows ECMAScript module resolution rules. Use --require to load a CommonJS module. Modules preloaded with --require will run before modules preloaded with --import.
Modules are preloaded into the main thread as well as any worker threads, forked processes, or clustered processes.
--input-type=type#
History
| Version | Changes |
|---|---|
| v23.6.0, v22.18.0 | Add support for |
| v22.7.0, v20.19.0 | ESM syntax detection is enabled by default. |
| v12.0.0 | Added in: v12.0.0 |
This configures Node.js to interpret --eval or STDIN input as CommonJS or as an ES module. Valid values are "commonjs", "module", "module-typescript" and "commonjs-typescript". The "-typescript" values are not available with the flag --no-strip-types. The default is no value, or "commonjs" if --no-experimental-detect-module is passed.
If --input-type is not provided, Node.js will try to detect the syntax with the following steps:
1. Run the input as CommonJS.
2. If step 1 fails, run the input as an ES module.
3. If step 2 fails with a SyntaxError, strip the types.
4. If step 3 fails with an error code ERR_UNSUPPORTED_TYPESCRIPT_SYNTAX or ERR_INVALID_TYPESCRIPT_SYNTAX, throw the error from step 2, including the TypeScript error in the message, else run as CommonJS.
5. If step 4 fails, run the input as an ES module.
To avoid the delay of multiple syntax detection passes, the --input-type=type flag can be used to specify how the --eval input should be interpreted.
The REPL does not support this option. Usage of --input-type=module with --print will throw an error, as --print does not support ES module syntax.
--insecure-http-parser#
Enable leniency flags on the HTTP parser. This may allow interoperability with non-conformant HTTP implementations.
When enabled, the parser will accept the following:
- Invalid HTTP header values.
- Invalid HTTP versions.
- Allow messages containing both Transfer-Encoding and Content-Length headers.
- Allow extra data after the message when Connection: close is present.
- Allow extra transfer encodings after chunked has been provided.
- Allow \n to be used as a token separator instead of \r\n.
- Allow \r\n not to be provided after a chunk.
- Allow spaces to be present after a chunk size and before \r\n.

All of the above will expose your application to request smuggling or poisoning attacks. Avoid using this option.
--inspect-brk[=[host:]port]#
Activate inspector on host:port and break at start of user script. Default host:port is 127.0.0.1:9229. If port 0 is specified, a random available port will be used.
See V8 Inspector integration for Node.js for further explanation on the Node.js debugger.
See the security warning below regarding the host parameter usage.
--inspect-port=[host:]port#
Set the host:port to be used when the inspector is activated. Useful when activating the inspector by sending the SIGUSR1 signal, except when --disable-sigusr1 is passed.
Default host is 127.0.0.1. If port 0 is specified, a random available port will be used.
See the security warning below regarding the host parameter usage.
--inspect-publish-uid=stderr,http#
Specify how the inspector WebSocket URL is exposed.
By default the inspector WebSocket URL is available in stderr and under the /json/list endpoint on http://host:port/json/list.
--inspect-wait[=[host:]port]#
Activate inspector on host:port and wait for debugger to be attached. Default host:port is 127.0.0.1:9229. If port 0 is specified, a random available port will be used.
See V8 Inspector integration for Node.js for further explanation on the Node.js debugger.
See the security warning below regarding the host parameter usage.
--inspect[=[host:]port]#
Activate inspector on host:port. Default is 127.0.0.1:9229. If port 0 is specified, a random available port will be used.
V8 inspector integration allows tools such as Chrome DevTools and IDEs to debug and profile Node.js instances. The tools attach to Node.js instances via a TCP port and communicate using the Chrome DevTools Protocol. See V8 Inspector integration for Node.js for further explanation on the Node.js debugger.
Warning: binding inspector to a public IP:port combination is insecure#
Binding the inspector to a public IP (including 0.0.0.0) with an open port is insecure, as it allows external hosts to connect to the inspector and perform a remote code execution attack.
If specifying a host, make sure that either:
- The host is not accessible from public networks.
- A firewall disallows unwanted connections on the port.
More specifically, --inspect=0.0.0.0 is insecure if the port (9229 by default) is not firewall-protected.
See the debugging security implications section for more information.
--jitless#
Disable runtime allocation of executable memory. This may be required on some platforms for security reasons. It can also reduce attack surface on other platforms, but the performance impact may be severe.
--localstorage-file=file#
The file used to store localStorage data. If the file does not exist, it is created the first time localStorage is accessed. The same file may be shared between multiple Node.js processes concurrently.
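A guarded sketch of a script meant to run as node --localstorage-file=./local.json app.js (the file name and key are illustrative; Web Storage availability varies by Node.js version and flags, so the script degrades gracefully):

```javascript
// Sketch: persist a value across runs via localStorage. The backing file is
// selected by the --localstorage-file flag; without Web Storage support the
// script reports why it could not store anything.
let status;
try {
  if (typeof localStorage === 'undefined') {
    status = 'web storage unavailable';
  } else {
    localStorage.setItem('lastRun', new Date().toISOString());
    status = 'stored';
  }
} catch (err) {
  status = 'localStorage not usable: ' + err.message;
}
console.log(status);
```
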
--max-http-header-size=size#
History
| Version | Changes |
|---|---|
| v13.13.0 | Change maximum default size of HTTP headers from 8 KiB to 16 KiB. |
| v11.6.0, v10.15.0 | Added in: v11.6.0, v10.15.0 |
Specify the maximum size, in bytes, of HTTP headers. Defaults to 16 KiB.
--max-old-space-size-percentage=percentage#
Sets the maximum memory size of V8's old memory section as a percentage of available system memory. This flag takes precedence over --max-old-space-size when both are specified.
The percentage parameter must be a number greater than 0 and up to 100, representing the percentage of available system memory to allocate to the V8 heap.
Note: This flag utilizes --max-old-space-size, which may be unreliable on 32-bit platforms due to integer overflow issues.
```shell
# Using 50% of available system memory
node --max-old-space-size-percentage=50 index.js
# Using 75% of available system memory
node --max-old-space-size-percentage=75 index.js
```
--network-family-autoselection-attempt-timeout#
Sets the default value for the network family autoselection attempt timeout. For more information, see net.getDefaultAutoSelectFamilyAttemptTimeout().
--no-addons#
Disable the node-addons exports condition as well as disable loading native addons. When --no-addons is specified, calling process.dlopen or requiring a native C++ addon will fail and throw an exception.
--no-async-context-frame#
Disables the use of AsyncLocalStorage backed by AsyncContextFrame and uses the prior implementation which relied on async_hooks. The previous model is retained for compatibility with Electron and for cases where the context flow may differ. However, if a difference in flow is found please report it.
--no-experimental-detect-module#
History
| Version | Changes |
|---|---|
| v22.7.0, v20.19.0 | Syntax detection is enabled by default. |
| v21.1.0, v20.10.0 | Added in: v21.1.0, v20.10.0 |
Disable using syntax detection to determine module type.
--no-experimental-global-navigator#
Disable exposition of the Navigator API on the global scope.
--no-experimental-require-module#
History
| Version | Changes |
|---|---|
| v25.4.0 | The flag was renamed from |
| v23.0.0, v22.12.0, v20.19.0 | This is now false by default. |
| v22.0.0, v20.17.0 | Added in: v22.0.0, v20.17.0 |
Legacy alias for --no-require-module. Use --no-require-module instead.
--no-experimental-sqlite#
History
| Version | Changes |
|---|---|
| v23.4.0, v22.13.0 | SQLite is unflagged but still experimental. |
| v22.5.0 | Added in: v22.5.0 |
Disable the experimental node:sqlite module.
--no-experimental-webstorage#
History
| Version | Changes |
|---|---|
| v25.0.0 | The feature is now enabled by default. |
| v22.4.0 | Added in: v22.4.0 |
Disable Web Storage support.
--no-extra-info-on-fatal-exception#
Hide extra information on fatal exceptions that cause exit.
--no-force-async-hooks-checks#
Disables runtime checks for async_hooks. These will still be enabled dynamically when async_hooks is enabled.
--no-global-search-paths#
Do not search modules from global paths like $HOME/.node_modules and $NODE_PATH.
--no-network-family-autoselection#
History
| Version | Changes |
|---|---|
| v20.0.0 | The flag was renamed from |
| v19.4.0 | Added in: v19.4.0 |
Disables the family autoselection algorithm unless connection options explicitly enable it.
--no-require-module#
History
| Version | Changes |
|---|---|
| v25.4.0 | This flag is no longer experimental. |
| v25.4.0 | This flag was renamed from |
| v23.0.0, v22.12.0, v20.19.0 | This is now false by default. |
| v22.0.0, v20.17.0 | Added in: v22.0.0, v20.17.0 |
Disable support for loading a synchronous ES module graph in require().
--no-strip-types#
History
| Version | Changes |
|---|---|
| v25.2.0 | Type stripping is now stable, the flag was renamed from |
| v23.6.0, v22.18.0 | Type stripping is enabled by default. |
| v22.6.0 | Added in: v22.6.0 |
Disable type-stripping for TypeScript files. For more information, see the TypeScript type-stripping documentation.
--node-memory-debug#
Enable extra debug checks for memory leaks in Node.js internals. This is usually only useful for developers debugging Node.js itself.
--openssl-config=file#
Load an OpenSSL configuration file on startup. Among other uses, this can be used to enable FIPS-compliant crypto if Node.js is built against FIPS-enabled OpenSSL.
--openssl-legacy-provider#
Enable OpenSSL 3.0 legacy provider. For more information please see OSSL_PROVIDER-legacy.
--openssl-shared-config#
Enable the OpenSSL default configuration section, openssl_conf, to be read from the OpenSSL configuration file. The default configuration file is named openssl.cnf but this can be changed using the environment variable OPENSSL_CONF, or by using the command line option --openssl-config. The location of the default OpenSSL configuration file depends on how OpenSSL is being linked to Node.js. Sharing the OpenSSL configuration may have unwanted implications and it is recommended to use a configuration section specific to Node.js, which is nodejs_conf and is the default when this option is not used.
--pending-deprecation#
Emit pending deprecation warnings.
Pending deprecations are generally identical to a runtime deprecation with the notable exception that they are turned off by default and will not be emitted unless either the --pending-deprecation command-line flag, or the NODE_PENDING_DEPRECATION=1 environment variable, is set. Pending deprecations are used to provide a kind of selective "early warning" mechanism that developers may leverage to detect deprecated API usage.
--permission#
History
| Version | Changes |
|---|---|
| v23.5.0, v22.13.0 | Permission Model is now stable. |
| v20.0.0 | Added in: v20.0.0 |
Enable the Permission Model for the current process. When enabled, the following permissions are restricted:
- File System - manageable through the --allow-fs-read and --allow-fs-write flags
- Network - manageable through the --allow-net flag
- Child Process - manageable through the --allow-child-process flag
- Worker Threads - manageable through the --allow-worker flag
- WASI - manageable through the --allow-wasi flag
- Addons - manageable through the --allow-addons flag
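These restrictions can also be queried from inside the process via the process.permission API; a minimal sketch (the fs.read scope and the /tmp reference are illustrative):

```javascript
// Sketch: query the Permission Model at runtime.
// process.permission is only defined when Node.js was started with
// --permission; otherwise the script reports that the model is off.
const enabled = typeof process.permission !== 'undefined';
if (enabled) {
  console.log('fs.read on /tmp allowed:', process.permission.has('fs.read', '/tmp'));
} else {
  console.log('Permission Model not enabled');
}
```
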
--preserve-symlinks#
Instructs the module loader to preserve symbolic links when resolving and caching modules.
By default, when Node.js loads a module from a path that is symbolically linked to a different on-disk location, Node.js will dereference the link and use the actual on-disk "real path" of the module as both an identifier and as a root path to locate other dependency modules. In most cases, this default behavior is acceptable. However, when using symbolically linked peer dependencies, as illustrated in the example below, the default behavior causes an exception to be thrown if moduleA attempts to require moduleB as a peer dependency:
```text
{appDir}
 ├── app
 │   ├── index.js
 │   └── node_modules
 │       ├── moduleA -> {appDir}/moduleA
 │       └── moduleB
 │           ├── index.js
 │           └── package.json
 └── moduleA
     ├── index.js
     └── package.json
```
The --preserve-symlinks command-line flag instructs Node.js to use the symlink path for modules as opposed to the real path, allowing symbolically linked peer dependencies to be found.
Note, however, that using --preserve-symlinks can have other side effects. Specifically, symbolically linked native modules can fail to load if those are linked from more than one location in the dependency tree (Node.js would see those as two separate modules and would attempt to load the module multiple times, causing an exception to be thrown).
The --preserve-symlinks flag does not apply to the main module, which allows node --preserve-symlinks node_module/.bin/<foo> to work. To apply the same behavior for the main module, also use --preserve-symlinks-main.
--preserve-symlinks-main#
Instructs the module loader to preserve symbolic links when resolving and caching the main module (require.main).
This flag exists so that the main module can be opted-in to the same behavior that --preserve-symlinks gives to all other imports; they are separate flags, however, for backward compatibility with older Node.js versions.
--preserve-symlinks-main does not imply --preserve-symlinks; use --preserve-symlinks-main in addition to --preserve-symlinks when it is not desirable to follow symlinks before resolving relative paths.
See --preserve-symlinks for more information.
-p,--print "script"#
History
| Version | Changes |
|---|---|
| v5.11.0 | Built-in libraries are now available as predefined variables. |
| v0.6.4 | Added in: v0.6.4 |
Identical to -e but prints the result.
--redirect-warnings=file#
Write process warnings to the given file instead of printing to stderr. The file will be created if it does not exist, and will be appended to if it does. If an error occurs while attempting to write the warning to the file, the warning will be written to stderr instead.
The file name may be an absolute path. If it is not, the default directory it will be written to is controlled by the --diagnostic-dir command-line option.
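A sketch of a script that emits a warning subject to this redirection (the warning text and code are made up for illustration):

```javascript
// Sketch: emit a custom process warning. When run as
// `node --redirect-warnings=warnings.log app.js`, this text is appended to
// warnings.log instead of being printed to stderr.
process.emitWarning('cache nearly full', { code: 'HYPOTHETICAL_WARN' });
```
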
--report-compact#
Write reports in a compact format, single-line JSON, more easily consumable by log processing systems than the default multi-line format designed for human consumption.
--report-dir=directory,--report-directory=directory#
History
| Version | Changes |
|---|---|
| v13.12.0, v12.17.0 | This option is no longer experimental. |
| v12.0.0 | Changed from |
| v11.8.0 | Added in: v11.8.0 |
Location at which the report will be generated.
--report-exclude-env#
When --report-exclude-env is passed the diagnostic report generated will not contain the environmentVariables data.
--report-exclude-network#
Exclude header.networkInterfaces from the diagnostic report. By default this is not set and the network interfaces are included.
--report-filename=filename#
History
| Version | Changes |
|---|---|
| v13.12.0, v12.17.0 | This option is no longer experimental. |
| v12.0.0 | changed from |
| v11.8.0 | Added in: v11.8.0 |
Name of the file to which the report will be written.
If the filename is set to 'stdout' or 'stderr', the report is written to the stdout or stderr of the process respectively.
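At runtime, the values configured by --report-dir and --report-filename are mirrored on the process.report object; a brief sketch:

```javascript
// Sketch: inspect the diagnostic report configuration set by the --report-*
// flags. Both fields default to the empty string when the flags are absent.
console.log('report directory:', process.report.directory);
console.log('report filename:', process.report.filename);
```
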
--report-on-fatalerror#
History
| Version | Changes |
|---|---|
| v14.0.0, v13.14.0, v12.17.0 | This option is no longer experimental. |
| v12.0.0 | changed from |
| v11.8.0 | Added in: v11.8.0 |
Enables the report to be triggered on fatal errors (internal errors within the Node.js runtime such as out of memory) that lead to termination of the application. Useful to inspect various diagnostic data elements such as heap, stack, event loop state, resource consumption etc. to reason about the fatal error.
--report-on-signal#
History
| Version | Changes |
|---|---|
| v13.12.0, v12.17.0 | This option is no longer experimental. |
| v12.0.0 | changed from |
| v11.8.0 | Added in: v11.8.0 |
Enables a report to be generated upon receiving the specified (or predefined) signal to the running Node.js process. The signal to trigger the report is specified through --report-signal.
--report-signal=signal#
History
| Version | Changes |
|---|---|
| v13.12.0, v12.17.0 | This option is no longer experimental. |
| v12.0.0 | changed from |
| v11.8.0 | Added in: v11.8.0 |
Sets or resets the signal for report generation (not supported on Windows). The default signal is SIGUSR2.
--report-uncaught-exception#
History
| Version | Changes |
|---|---|
| v18.8.0, v16.18.0 | Report is not generated if the uncaught exception is handled. |
| v13.12.0, v12.17.0 | This option is no longer experimental. |
| v12.0.0 | changed from |
| v11.8.0 | Added in: v11.8.0 |
Enables a report to be generated when the process exits due to an uncaught exception. Useful when inspecting the JavaScript stack in conjunction with native stack and other runtime environment data.
-r,--require module#
History
| Version | Changes |
|---|---|
| v23.0.0, v22.12.0, v20.19.0 | This option also supports ECMAScript module. |
| v1.6.0 | Added in: v1.6.0 |
Preload the specified module at startup.
Follows require()'s module resolution rules. module may be either a path to a file, or a node module name.
Modules preloaded with --require will run before modules preloaded with --import.
Modules are preloaded into the main thread as well as any worker threads,forked processes, or clustered processes.
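A sketch of a preload file (the name preload.cjs and the environment variable are illustrative) usable as node --require ./preload.cjs app.js:

```javascript
// preload.cjs — runs before the application entry point; here it stamps an
// environment variable that the main module (and any workers) can read.
process.env.APP_PRELOADED = '1';
```
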
--run#
History
| Version | Changes |
|---|---|
| v22.3.0 | NODE_RUN_SCRIPT_NAME environment variable is added. |
| v22.3.0 | NODE_RUN_PACKAGE_JSON_PATH environment variable is added. |
| v22.3.0 | Traverses up to the root directory and finds a |
| v22.0.0 | Added in: v22.0.0 |
This runs a specified command from a package.json's "scripts" object. If a missing "command" is provided, it will list the available scripts.
--run will traverse up to the root directory and find a package.json file to run the command from.
--run prepends ./node_modules/.bin for each ancestor of the current directory to the PATH, in order to execute the binaries from different folders where multiple node_modules directories are present, if ancestor-folder/node_modules/.bin is a directory.
--run executes the command in the directory containing the related package.json.
For example, the following command will run the test script of the package.json in the current folder:
```shell
$ node --run test
```
You can also pass arguments to the command. Any argument after -- will be appended to the script:
```shell
$ node --run test -- --verbose
```
Intentional limitations#
node --run is not meant to match the behaviors of npm run or of the run commands of other package managers. The Node.js implementation is intentionally more limited, in order to focus on top performance for the most common use cases. Some features of other run implementations that are intentionally excluded are:
- Running pre or post scripts in addition to the specified script.
- Defining package manager-specific environment variables.
Environment variables#
The following environment variables are set when running a script with --run:
- NODE_RUN_SCRIPT_NAME: The name of the script being run. For example, if --run is used to run test, the value of this variable will be test.
- NODE_RUN_PACKAGE_JSON_PATH: The path to the package.json that is being processed.
--secure-heap-min=n#
When using --secure-heap, the --secure-heap-min flag specifies the minimum allocation from the secure heap. The minimum value is 2. The maximum value is the lesser of --secure-heap or 2147483647. The value given must be a power of two.
--secure-heap=n#
Initializes an OpenSSL secure heap of n bytes. When initialized, the secure heap is used for selected types of allocations within OpenSSL during key generation and other operations. This is useful, for instance, to prevent sensitive information from leaking due to pointer overruns or underruns.
The secure heap is a fixed size and cannot be resized at runtime so, if used, it is important to select a large enough heap to cover all application uses.
The heap size given must be a power of two. Any value less than 2will disable the secure heap.
The secure heap is disabled by default.
The secure heap is not available on Windows.
SeeCRYPTO_secure_malloc_init for more details.
--snapshot-blob=path#
When used with --build-snapshot, --snapshot-blob specifies the path where the generated snapshot blob is written to. If not specified, the generated blob is written to snapshot.blob in the current working directory.
When used without --build-snapshot, --snapshot-blob specifies the path to the blob that is used to restore the application state.
When loading a snapshot, Node.js checks that:
- The version, architecture, and platform of the running Node.js binary are exactly the same as that of the binary that generates the snapshot.
- The V8 flags and CPU features are compatible with that of the binary that generates the snapshot.
If they don't match, Node.js refuses to load the snapshot and exits withstatus code 1.
--test#
History
| Version | Changes |
|---|---|
| v20.0.0 | The test runner is now stable. |
| v19.2.0, v18.13.0 | Test runner now supports running in watch mode. |
| v18.1.0, v16.17.0 | Added in: v18.1.0, v16.17.0 |
Starts the Node.js command line test runner. This flag cannot be combined with --watch-path, --check, --eval, --interactive, or the inspector. See the documentation on running tests from the command line for more details.
--test-concurrency#
The maximum number of test files that the test runner CLI will execute concurrently. If --test-isolation is set to 'none', this flag is ignored and concurrency is one. Otherwise, concurrency defaults to os.availableParallelism() - 1.
--test-coverage-branches=threshold#
Require a minimum percent of covered branches. If code coverage does not reach the threshold specified, the process will exit with code 1.
--test-coverage-exclude#
Excludes specific files from code coverage using a glob pattern, which can match both absolute and relative file paths.
This option may be specified multiple times to exclude multiple glob patterns.
If both --test-coverage-exclude and --test-coverage-include are provided, files must meet both criteria to be included in the coverage report.
By default all the matching test files are excluded from the coverage report. Specifying this option will override the default behavior.
--test-coverage-functions=threshold#
Require a minimum percent of covered functions. If code coverage does not reach the threshold specified, the process will exit with code 1.
--test-coverage-include#
Includes specific files in code coverage using a glob pattern, which can match both absolute and relative file paths.
This option may be specified multiple times to include multiple glob patterns.
If both --test-coverage-exclude and --test-coverage-include are provided, files must meet both criteria to be included in the coverage report.
--test-coverage-lines=threshold#
Require a minimum percent of covered lines. If code coverage does not reach the threshold specified, the process will exit with code 1.
--test-force-exit#
Configures the test runner to exit the process once all known tests have finished executing even if the event loop would otherwise remain active.
--test-global-setup=module#
Specify a module that will be evaluated before all tests are executed and can be used to set up global state or fixtures for tests.
See the documentation on global setup and teardown for more details.
--test-isolation=mode#
History
| Version | Changes |
|---|---|
| v23.6.0 | This flag was renamed from --experimental-test-isolation. |
| v22.8.0 | Added in: v22.8.0 |
Configures the type of test isolation used in the test runner. When mode is 'process', each test file is run in a separate child process. When mode is 'none', all test files run in the same process as the test runner. The default isolation mode is 'process'. This flag is ignored if the --test flag is not present. See the test runner execution model section for more information.
--test-name-pattern#
History
| Version | Changes |
|---|---|
| v20.0.0 | The test runner is now stable. |
| v18.11.0 | Added in: v18.11.0 |
A regular expression that configures the test runner to only execute tests whose name matches the provided pattern. See the documentation on filtering tests by name for more details.
If both --test-name-pattern and --test-skip-pattern are supplied, tests must satisfy both requirements in order to be executed.
--test-only#
History
| Version | Changes |
|---|---|
| v20.0.0 | The test runner is now stable. |
| v18.0.0, v16.17.0 | Added in: v18.0.0, v16.17.0 |
Configures the test runner to only execute top-level tests that have the only option set. This flag is not necessary when test isolation is disabled.
--test-reporter#
History
| Version | Changes |
|---|---|
| v20.0.0 | The test runner is now stable. |
| v19.6.0, v18.15.0 | Added in: v19.6.0, v18.15.0 |
A test reporter to use when running tests. See the documentation on test reporters for more details.
--test-reporter-destination#
History
| Version | Changes |
|---|---|
| v20.0.0 | The test runner is now stable. |
| v19.6.0, v18.15.0 | Added in: v19.6.0, v18.15.0 |
The destination for the corresponding test reporter. See the documentation on test reporters for more details.
--test-rerun-failures#
A path to a file allowing the test runner to persist the state of the test suite between runs. The test runner will use this file to determine which tests have already succeeded or failed, allowing for re-running of failed tests without having to re-run the entire test suite. The test runner will create this file if it does not exist. See the documentation on test reruns for more details.
--test-shard#
Test suite shard to execute in a format of <index>/<total>, where:
- index is a positive integer, the index of the divided parts to run.
- total is a positive integer, the total number of divided parts.
This command will divide all test files into total equal parts, and will run only those that happen to be in the index part.
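The partitioning can be sketched in a few lines. The round-robin split below is illustrative only (the runner's actual distribution strategy is not specified here), and shardFiles is a hypothetical helper:

```javascript
// Hypothetical round-robin partition for --test-shard=<index>/<total>.
// index is 1-based, matching the CLI syntax.
function shardFiles(files, index, total) {
  return files.filter((_, i) => i % total === index - 1);
}

const files = ['a.test.js', 'b.test.js', 'c.test.js', 'd.test.js'];
console.log(shardFiles(files, 1, 3)); // → [ 'a.test.js', 'd.test.js' ]
console.log(shardFiles(files, 2, 3)); // → [ 'b.test.js' ]
```

Each shard sees a disjoint subset, and the union of all shards covers every file, which is what makes the three commands below equivalent to one full run.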
For example, to split your test suite into three parts, use this:
node --test --test-shard=1/3
node --test --test-shard=2/3
node --test --test-shard=3/3
--test-skip-pattern#
A regular expression that configures the test runner to skip tests whose name matches the provided pattern. See the documentation on filtering tests by name for more details.
If both --test-name-pattern and --test-skip-pattern are supplied, tests must satisfy both requirements in order to be executed.
--test-timeout#
The number of milliseconds after which test execution will fail. If unspecified, subtests inherit this value from their parent. The default value is Infinity.
--test-update-snapshots#
History
| Version | Changes |
|---|---|
| v23.4.0, v22.13.0 | Snapshot testing is no longer experimental. |
| v22.3.0 | Added in: v22.3.0 |
Regenerates the snapshot files used by the test runner for snapshot testing.
--tls-cipher-list=list#
Specify an alternative default TLS cipher list. Requires Node.js to be built with crypto support (default).
--tls-keylog=file#
Log TLS key material to a file. The key material is in NSS SSLKEYLOGFILE format and can be used by software (such as Wireshark) to decrypt the TLS traffic.
--tls-max-v1.2#
Set tls.DEFAULT_MAX_VERSION to 'TLSv1.2'. Use to disable support for TLSv1.3.
--tls-max-v1.3#
Set default tls.DEFAULT_MAX_VERSION to 'TLSv1.3'. Use to enable support for TLSv1.3.
--tls-min-v1.0#
Set default tls.DEFAULT_MIN_VERSION to 'TLSv1'. Use for compatibility with old TLS clients or servers.
--tls-min-v1.1#
Set default tls.DEFAULT_MIN_VERSION to 'TLSv1.1'. Use for compatibility with old TLS clients or servers.
--tls-min-v1.2#
Set default tls.DEFAULT_MIN_VERSION to 'TLSv1.2'. This is the default for 12.x and later, but the option is supported for compatibility with older Node.js versions.
--tls-min-v1.3#
Set default tls.DEFAULT_MIN_VERSION to 'TLSv1.3'. Use to disable support for TLSv1.2, which is not as secure as TLSv1.3.
--trace-env#
Print information about any access to environment variables done in the current Node.js instance to stderr, including:
- The environment variable reads that Node.js does internally.
- Writes in the form of process.env.KEY = "SOME VALUE".
- Reads in the form of process.env.KEY.
- Definitions in the form of Object.defineProperty(process.env, 'KEY', {...}).
- Queries in the form of Object.hasOwn(process.env, 'KEY'), process.env.hasOwnProperty('KEY') or 'KEY' in process.env.
- Deletions in the form of delete process.env.KEY.
- Enumerations in the form of ...process.env or Object.keys(process.env).
Only the names of the environment variables being accessed are printed. The values are not printed.
To print the stack trace of the access, use --trace-env-js-stack and/or --trace-env-native-stack.
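The access forms listed above are ordinary JavaScript operations on process.env; running a script like the following under --trace-env would report each of them (DEMO_KEY is an arbitrary name chosen for illustration):

```javascript
// Each statement below is one of the access forms --trace-env reports.
process.env.DEMO_KEY = 'value';             // write
const read = process.env.DEMO_KEY;          // read
const query = 'DEMO_KEY' in process.env;    // query
delete process.env.DEMO_KEY;                // deletion
const stillThere = Object.keys(process.env) // enumeration
  .includes('DEMO_KEY');

console.log(read, query, stillThere); // → value true false
```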
--trace-env-js-stack#
In addition to what --trace-env does, this prints the JavaScript stack trace of the access.
--trace-env-native-stack#
In addition to what --trace-env does, this prints the native stack trace of the access.
--trace-event-categories#
A comma-separated list of categories that should be traced when trace event tracing is enabled using --trace-events-enabled.
--trace-event-file-pattern#
Template string specifying the file path for the trace event data. It supports ${rotation} and ${pid}.
--trace-exit#
Prints a stack trace whenever an environment is exited proactively, i.e. invoking process.exit().
--trace-require-module=mode#
Prints information about usage of loading ECMAScript modules using require().
When mode is all, all usage is printed. When mode is no-node-modules, usage from the node_modules folder is excluded.
--trace-sync-io#
Prints a stack trace whenever synchronous I/O is detected after the first turn of the event loop.
--trace-tls#
Prints TLS packet trace information to stderr. This can be used to debug TLS connection problems.
--trace-uncaught#
Print stack traces for uncaught exceptions; usually, the stack trace associated with the creation of an Error is printed, whereas this makes Node.js also print the stack trace associated with throwing the value (which does not need to be an Error instance).
Enabling this option may affect garbage collection behavior negatively.
--unhandled-rejections=mode#
History
| Version | Changes |
|---|---|
| v15.0.0 | Changed default mode to throw. |
| v12.0.0, v10.17.0 | Added in: v12.0.0, v10.17.0 |
Using this flag allows changing what should happen when an unhandled rejection occurs. One of the following modes can be chosen:
- throw: Emit unhandledRejection. If this hook is not set, raise the unhandled rejection as an uncaught exception. This is the default.
- strict: Raise the unhandled rejection as an uncaught exception. If the exception is handled, unhandledRejection is emitted.
- warn: Always trigger a warning, no matter if the unhandledRejection hook is set or not, but do not print the deprecation warning.
- warn-with-error-code: Emit unhandledRejection. If this hook is not set, trigger a warning, and set the process exit code to 1.
- none: Silence all warnings.
If a rejection happens during the command line entry point's ES module static loading phase, it will always be raised as an uncaught exception.
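For example, installing an unhandledRejection hook changes what the default throw mode does: the rejection is delivered to the hook instead of crashing the process. A minimal sketch:

```javascript
// With a hook installed, mode 'throw' (the default) emits
// 'unhandledRejection' rather than raising an uncaught exception.
process.on('unhandledRejection', (reason) => {
  console.log('handled rejection:', reason.message);
});

Promise.reject(new Error('boom'));
// → handled rejection: boom
```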
--use-bundled-ca,--use-openssl-ca#
Use the bundled Mozilla CA store as supplied by the current Node.js version, or use OpenSSL's default CA store. The default store is selectable at build time.
The bundled CA store, as supplied by Node.js, is a snapshot of the Mozilla CA store that is fixed at release time. It is identical on all supported platforms.
Using the OpenSSL store allows for external modifications of the store. For most Linux and BSD distributions, this store is maintained by the distribution maintainers and system administrators. The OpenSSL CA store location is dependent on the configuration of the OpenSSL library, but this can be altered at runtime using environment variables.
See SSL_CERT_DIR and SSL_CERT_FILE.
--use-env-proxy#
When enabled, Node.js parses the HTTP_PROXY, HTTPS_PROXY and NO_PROXY environment variables during startup, and tunnels requests over the specified proxy.
This is equivalent to setting the NODE_USE_ENV_PROXY=1 environment variable. When both are set, --use-env-proxy takes precedence.
--use-largepages=mode#
Re-map the Node.js static code to large memory pages at startup. If supported on the target system, this will cause the Node.js static code to be moved onto 2 MiB pages instead of 4 KiB pages.
The following values are valid for mode:
- off: No mapping will be attempted. This is the default.
- on: If supported by the OS, mapping will be attempted. Failure to map will be ignored and a message will be printed to standard error.
- silent: If supported by the OS, mapping will be attempted. Failure to map will be ignored and will not be reported.
--use-system-ca#
History
| Version | Changes |
|---|---|
| v23.9.0 | Added support on non-Windows and non-macOS. |
| v23.8.0 | Added in: v23.8.0 |
Node.js uses the trusted CA certificates present in the system store along with the --use-bundled-ca option and the NODE_EXTRA_CA_CERTS environment variable. On platforms other than Windows and macOS, this loads certificates from the directory and file trusted by OpenSSL, similar to --use-openssl-ca, with the difference being that it caches the certificates after first load.
On Windows and macOS, the certificate trust policy is similar to Chromium's policy for locally trusted certificates, but with some differences:
On macOS, the following settings are respected:
- Default and System Keychains
  - Trust:
    - Any certificate where the “When using this certificate” flag is set to “Always Trust”, or
    - Any certificate where the “Secure Sockets Layer (SSL)” flag is set to “Always Trust”.
  - The certificate must also be valid, with "X.509 Basic Policy" set to “Always Trust”.
On Windows, the following settings are respected:
- Local Machine (accessed via certlm.msc)
  - Trust:
    - Trusted Root Certification Authorities
    - Trusted People
    - Enterprise Trust -> Enterprise -> Trusted Root Certification Authorities
    - Enterprise Trust -> Enterprise -> Trusted People
    - Enterprise Trust -> Group Policy -> Trusted Root Certification Authorities
    - Enterprise Trust -> Group Policy -> Trusted People
- Current User (accessed via certmgr.msc)
  - Trust:
    - Trusted Root Certification Authorities
    - Enterprise Trust -> Group Policy -> Trusted Root Certification Authorities
On Windows and macOS, Node.js checks that the user settings for the trusted certificates do not forbid them for TLS server authentication before using them.
Node.js currently does not support distrust/revocation of certificatesfrom another source based on system settings.
On other systems, Node.js loads certificates from the default certificate file (typically /etc/ssl/cert.pem) and default certificate directory (typically /etc/ssl/certs) that the version of OpenSSL that Node.js links to respects. This typically works with the convention on major Linux distributions and other Unix-like systems. If the overriding OpenSSL environment variables (typically SSL_CERT_FILE and SSL_CERT_DIR, depending on the configuration of the OpenSSL that Node.js links to) are set, the specified paths will be used to load certificates instead. These environment variables can be used as workarounds if the conventional paths used by the version of OpenSSL that Node.js links to are not consistent with the system configuration.
--v8-pool-size=num#
Set V8's thread pool size which will be used to allocate background jobs.
If set to 0, then Node.js will choose an appropriate size of the thread pool based on an estimate of the amount of parallelism.
The amount of parallelism refers to the number of computations that can be carried out simultaneously in a given machine. In general, it's the same as the number of CPUs, but it may diverge in environments such as VMs or containers.
--watch#
History
| Version | Changes |
|---|---|
| v22.0.0, v20.13.0 | Watch mode is now stable. |
| v19.2.0, v18.13.0 | Test runner now supports running in watch mode. |
| v18.11.0, v16.19.0 | Added in: v18.11.0, v16.19.0 |
Starts Node.js in watch mode. When in watch mode, changes in the watched files cause the Node.js process to restart. By default, watch mode will watch the entry point and any required or imported module. Use --watch-path to specify what paths to watch.
This flag cannot be combined with --check, --eval, --interactive, or the REPL.
Note: The --watch flag requires a file path as an argument and is incompatible with --run or inline script input, as --run takes precedence and ignores watch mode. If no file is provided, Node.js will exit with status code 9.
node --watch index.js
--watch-kill-signal#
Customizes the signal sent to the process on watch mode restarts.
node --watch --watch-kill-signal SIGINT test.js
--watch-path#
History
| Version | Changes |
|---|---|
| v22.0.0, v20.13.0 | Watch mode is now stable. |
| v18.11.0, v16.19.0 | Added in: v18.11.0, v16.19.0 |
Starts Node.js in watch mode and specifies what paths to watch. When in watch mode, changes in the watched paths cause the Node.js process to restart. This will turn off watching of required or imported modules, even when used in combination with --watch.
This flag cannot be combined with --check, --eval, --interactive, --test, or the REPL.
Note: Using --watch-path implicitly enables --watch, which requires a file path and is incompatible with --run, as --run takes precedence and ignores watch mode.
node --watch-path=./src --watch-path=./tests index.js
This option is only supported on macOS and Windows. An ERR_FEATURE_UNAVAILABLE_ON_PLATFORM exception will be thrown when the option is used on a platform that does not support it.
--watch-preserve-output#
Disable the clearing of the console when watch mode restarts the process.
node --watch --watch-preserve-output test.js
Environment variables#
FORCE_COLOR=[1, 2, 3]#
The FORCE_COLOR environment variable is used to enable ANSI colorized output. The value may be:
- 1, true, or the empty string '' to indicate 16-color support,
- 2 to indicate 256-color support, or
- 3 to indicate 16 million-color support.
When FORCE_COLOR is used and set to a supported value, both the NO_COLOR and NODE_DISABLE_COLORS environment variables are ignored.
Any other value will result in colorized output being disabled.
NODE_COMPILE_CACHE=dir#
History
| Version | Changes |
|---|---|
| v25.4.0 | This feature is no longer experimental. |
| v22.1.0 | Added in: v22.1.0 |
Enable the module compile cache for the Node.js instance. See the documentation of module compile cache for details.
NODE_COMPILE_CACHE_PORTABLE=1#
When set to 1, the module compile cache can be reused across different directory locations as long as the module layout relative to the cache directory remains the same.
NODE_DEBUG=module[,…]#
','-separated list of core modules that should print debug information.
NODE_DEBUG_NATIVE=module[,…]#
','-separated list of core C++ modules that should print debug information.
NODE_DISABLE_COMPILE_CACHE=1#
Disable the module compile cache for the Node.js instance. See the documentation of module compile cache for details.
NODE_EXTRA_CA_CERTS=file#
When set, the well known "root" CAs (like VeriSign) will be extended with the extra certificates in file. The file should consist of one or more trusted certificates in PEM format. A message will be emitted (once) with process.emitWarning() if the file is missing or malformed, but any errors are otherwise ignored.
Neither the well known nor extra certificates are used when the ca options property is explicitly specified for a TLS or HTTPS client or server.
This environment variable is ignored when node runs as setuid root or has Linux file capabilities set.
The NODE_EXTRA_CA_CERTS environment variable is only read when the Node.js process is first launched. Changing the value at runtime using process.env.NODE_EXTRA_CA_CERTS has no effect on the current process.
NODE_ICU_DATA=file#
Data path for ICU (Intl object) data. Will extend linked-in data when compiledwith small-icu support.
NODE_OPTIONS=options...#
A space-separated list of command-line options. options... are interpreted before command-line options, so command-line options will override or compound after anything in options.... Node.js will exit with an error if an option that is not allowed in the environment is used, such as -p or a script file.
If an option value contains a space, it can be escaped using double quotes:
NODE_OPTIONS='--require "./my path/file.js"'
A singleton flag passed as a command-line option will override the same flag passed into NODE_OPTIONS:
# The inspector will be available on port 5555
NODE_OPTIONS='--inspect=localhost:4444' node --inspect=localhost:5555
A flag that can be passed multiple times will be treated as if its NODE_OPTIONS instances were passed first, and then its command-line instances afterwards:
NODE_OPTIONS='--require "./a.js"' node --require "./b.js"
# is equivalent to:
node --require "./a.js" --require "./b.js"
Node.js options that are allowed are in the following list. If an option supports both --XX and --no-XX variants, they are both supported but only one is included in the list below.
- --allow-addons
- --allow-child-process
- --allow-fs-read
- --allow-fs-write
- --allow-inspector
- --allow-net
- --allow-wasi
- --allow-worker
- --conditions, -C
- --cpu-prof-dir
- --cpu-prof-interval
- --cpu-prof-name
- --cpu-prof
- --diagnostic-dir
- --disable-proto
- --disable-sigusr1
- --disable-warning
- --disable-wasm-trap-handler
- --dns-result-order
- --enable-fips
- --enable-network-family-autoselection
- --enable-source-maps
- --entry-url
- --experimental-abortcontroller
- --experimental-addon-modules
- --experimental-detect-module
- --experimental-eventsource
- --experimental-import-meta-resolve
- --experimental-json-modules
- --experimental-loader
- --experimental-modules
- --experimental-print-required-tla
- --experimental-quic
- --experimental-require-module
- --experimental-shadow-realm
- --experimental-specifier-resolution
- --experimental-test-isolation
- --experimental-top-level-await
- --experimental-transform-types
- --experimental-vm-modules
- --experimental-wasi-unstable-preview1
- --force-context-aware
- --force-fips
- --force-node-api-uncaught-exceptions-policy
- --frozen-intrinsics
- --heap-prof-dir
- --heap-prof-interval
- --heap-prof-name
- --heap-prof
- --heapsnapshot-near-heap-limit
- --heapsnapshot-signal
- --http-parser
- --icu-data-dir
- --import
- --input-type
- --insecure-http-parser
- --inspect-brk
- --inspect-port, --debug-port
- --inspect-publish-uid
- --inspect-wait
- --inspect
- --localstorage-file
- --max-http-header-size
- --max-old-space-size-percentage
- --napi-modules
- --network-family-autoselection-attempt-timeout
- --no-addons
- --no-async-context-frame
- --no-deprecation
- --no-experimental-global-navigator
- --no-experimental-repl-await
- --no-experimental-sqlite
- --no-experimental-strip-types
- --no-experimental-websocket
- --no-experimental-webstorage
- --no-extra-info-on-fatal-exception
- --no-force-async-hooks-checks
- --no-global-search-paths
- --no-network-family-autoselection
- --no-strip-types
- --no-warnings
- --no-webstorage
- --node-memory-debug
- --openssl-config
- --openssl-legacy-provider
- --openssl-shared-config
- --pending-deprecation
- --permission
- --preserve-symlinks-main
- --preserve-symlinks
- --prof-process
- --redirect-warnings
- --report-compact
- --report-dir, --report-directory
- --report-exclude-env
- --report-exclude-network
- --report-filename
- --report-on-fatalerror
- --report-on-signal
- --report-signal
- --report-uncaught-exception
- --require-module
- --require, -r
- --secure-heap-min
- --secure-heap
- --snapshot-blob
- --test-coverage-branches
- --test-coverage-exclude
- --test-coverage-functions
- --test-coverage-include
- --test-coverage-lines
- --test-global-setup
- --test-isolation
- --test-name-pattern
- --test-only
- --test-reporter-destination
- --test-reporter
- --test-rerun-failures
- --test-shard
- --test-skip-pattern
- --throw-deprecation
- --title
- --tls-cipher-list
- --tls-keylog
- --tls-max-v1.2
- --tls-max-v1.3
- --tls-min-v1.0
- --tls-min-v1.1
- --tls-min-v1.2
- --tls-min-v1.3
- --trace-deprecation
- --trace-env-js-stack
- --trace-env-native-stack
- --trace-env
- --trace-event-categories
- --trace-event-file-pattern
- --trace-events-enabled
- --trace-exit
- --trace-require-module
- --trace-sigint
- --trace-sync-io
- --trace-tls
- --trace-uncaught
- --trace-warnings
- --track-heap-objects
- --unhandled-rejections
- --use-bundled-ca
- --use-env-proxy
- --use-largepages
- --use-openssl-ca
- --use-system-ca
- --v8-pool-size
- --watch-kill-signal
- --watch-path
- --watch-preserve-output
- --watch
- --zero-fill-buffers
V8 options that are allowed are:
- --abort-on-uncaught-exception
- --disallow-code-generation-from-strings
- --enable-etw-stack-walking
- --expose-gc
- --interpreted-frames-native-stack
- --jitless
- --max-old-space-size
- --max-semi-space-size
- --perf-basic-prof-only-functions
- --perf-basic-prof
- --perf-prof-unwinding-info
- --perf-prof
- --stack-trace-limit
--perf-basic-prof-only-functions, --perf-basic-prof, --perf-prof-unwinding-info, and --perf-prof are only available on Linux.
--enable-etw-stack-walking is only available on Windows.
NODE_PATH=path[:…]#
':'-separated list of directories prefixed to the module search path.
On Windows, this is a';'-separated list instead.
NODE_PENDING_DEPRECATION=1#
When set to 1, emit pending deprecation warnings.
Pending deprecations are generally identical to a runtime deprecation with the notable exception that they are turned off by default and will not be emitted unless either the --pending-deprecation command-line flag, or the NODE_PENDING_DEPRECATION=1 environment variable, is set. Pending deprecations are used to provide a kind of selective "early warning" mechanism that developers may leverage to detect deprecated API usage.
NODE_PENDING_PIPE_INSTANCES=instances#
Set the number of pending pipe instance handles when the pipe server is waitingfor connections. This setting applies to Windows only.
NODE_PRESERVE_SYMLINKS=1#
When set to 1, instructs the module loader to preserve symbolic links when resolving and caching modules.
NODE_REDIRECT_WARNINGS=file#
When set, process warnings will be emitted to the given file instead of printing to stderr. The file will be created if it does not exist, and will be appended to if it does. If an error occurs while attempting to write the warning to the file, the warning will be written to stderr instead. This is equivalent to using the --redirect-warnings=file command-line flag.
NODE_REPL_EXTERNAL_MODULE=file#
History
| Version | Changes |
|---|---|
| v22.3.0, v20.16.0 | Remove the possibility to use this env var with kDisableNodeOptionsEnv for embedders. |
| v13.0.0, v12.16.0 | Added in: v13.0.0, v12.16.0 |
Path to a Node.js module which will be loaded in place of the built-in REPL. Overriding this value to an empty string ('') will use the built-in REPL.
NODE_REPL_HISTORY=file#
Path to the file used to store the persistent REPL history. The default path is ~/.node_repl_history, which is overridden by this variable. Setting the value to an empty string ('' or ' ') disables persistent REPL history.
NODE_SKIP_PLATFORM_CHECK=value#
If value equals '1', the check for a supported platform is skipped during Node.js startup. Node.js might not execute correctly. Any issues encountered on unsupported platforms will not be fixed.
NODE_TEST_CONTEXT=value#
If value equals 'child', test reporter options will be overridden and test output will be sent to stdout in the TAP format. If any other value is provided, Node.js makes no guarantees about the reporter format used or its stability.
NODE_TLS_REJECT_UNAUTHORIZED=value#
If value equals '0', certificate validation is disabled for TLS connections. This makes TLS, and HTTPS by extension, insecure. The use of this environment variable is strongly discouraged.
NODE_USE_ENV_PROXY=1#
When enabled, Node.js parses the HTTP_PROXY, HTTPS_PROXY and NO_PROXY environment variables during startup, and tunnels requests over the specified proxy.
This can also be enabled using the --use-env-proxy command-line flag. When both are set, --use-env-proxy takes precedence.
NODE_USE_SYSTEM_CA=1#
Node.js uses the trusted CA certificates present in the system store along with the --use-bundled-ca option and the NODE_EXTRA_CA_CERTS environment variable.
This can also be enabled using the --use-system-ca command-line flag. When both are set, --use-system-ca takes precedence.
NODE_V8_COVERAGE=dir#
When set, Node.js will begin outputting V8 JavaScript code coverage and Source Map data to the directory provided as an argument (coverage information is written as JSON to files with a coverage prefix).
NODE_V8_COVERAGE will automatically propagate to subprocesses, making it easier to instrument applications that call the child_process.spawn() family of functions. NODE_V8_COVERAGE can be set to an empty string to prevent propagation.
Coverage output#
Coverage is output as an array of ScriptCoverage objects on the top-level key result:
{"result": [{"scriptId": "67", "url": "internal/tty.js", "functions": []}]}
Source map cache#
If found, source map data is appended to the top-level key source-map-cache on the JSON coverage object.
source-map-cache is an object with keys representing the files source maps were extracted from, and values which include the raw source-map URL (in the key url), the parsed Source Map v3 information (in the key data), and the line lengths of the source file (in the key lineLengths).
{
  "result": [
    {
      "scriptId": "68",
      "url": "file:///absolute/path/to/source.js",
      "functions": []
    }
  ],
  "source-map-cache": {
    "file:///absolute/path/to/source.js": {
      "url": "./path-to-map.json",
      "data": {
        "version": 3,
        "sources": ["file:///absolute/path/to/original.js"],
        "names": ["Foo", "console", "info"],
        "mappings": "MAAMA,IACJC,YAAaC",
        "sourceRoot": "./"
      },
      "lineLengths": [13, 62, 38, 27]
    }
  }
}
NO_COLOR=<any>#
NO_COLOR is an alias for NODE_DISABLE_COLORS. The value of the environment variable is arbitrary.
OPENSSL_CONF=file#
Load an OpenSSL configuration file on startup. Among other uses, this can be used to enable FIPS-compliant crypto if Node.js is built with ./configure --openssl-fips.
If the --openssl-config command-line option is used, the environment variable is ignored.
SSL_CERT_DIR=dir#
If --use-openssl-ca is enabled, or if --use-system-ca is enabled on platforms other than macOS and Windows, this overrides and sets OpenSSL's directory containing trusted certificates.
Be aware that unless the child environment is explicitly set, this environmentvariable will be inherited by any child processes, and if they use OpenSSL, itmay cause them to trust the same CAs as node.
SSL_CERT_FILE=file#
If --use-openssl-ca is enabled, or if --use-system-ca is enabled on platforms other than macOS and Windows, this overrides and sets OpenSSL's file containing trusted certificates.
Be aware that unless the child environment is explicitly set, this environmentvariable will be inherited by any child processes, and if they use OpenSSL, itmay cause them to trust the same CAs as node.
TZ#
History
| Version | Changes |
|---|---|
| v16.2.0 | Changing the TZ variable using process.env.TZ = changes the timezone on Windows as well. |
| v13.0.0 | Changing the TZ variable using process.env.TZ = changes the timezone on POSIX systems. |
| v0.0.1 | Added in: v0.0.1 |
The TZ environment variable is used to specify the timezone configuration.
While Node.js does not support all of the various ways that TZ is handled in other environments, it does support basic timezone IDs (such as 'Etc/UTC', 'Europe/Paris', or 'America/New_York'). It may support a few other abbreviations or aliases, but these are strongly discouraged and not guaranteed.
$ TZ=Europe/Dublin node -pe "new Date().toString()"
Wed May 12 2021 20:30:48 GMT+0100 (Irish Standard Time)
UV_THREADPOOL_SIZE=size#
Set the number of threads used in libuv's threadpool to size threads.
Asynchronous system APIs are used by Node.js whenever possible, but where they do not exist, libuv's threadpool is used to create asynchronous node APIs based on synchronous system APIs. Node.js APIs that use the threadpool are:
- all fs APIs, other than the file watcher APIs and those that are explicitly synchronous
- asynchronous crypto APIs such as crypto.pbkdf2(), crypto.scrypt(), crypto.randomBytes(), crypto.randomFill(), crypto.generateKeyPair()
- dns.lookup()
- all zlib APIs, other than those that are explicitly synchronous
Because libuv's threadpool has a fixed size, if for whatever reason any of these APIs takes a long time, other (seemingly unrelated) APIs that run in libuv's threadpool will experience degraded performance. In order to mitigate this issue, one potential solution is to increase the size of libuv's threadpool by setting the 'UV_THREADPOOL_SIZE' environment variable to a value greater than 4 (its current default value). However, setting this from inside the process using process.env.UV_THREADPOOL_SIZE=size is not guaranteed to work, as the threadpool would have been created as part of the runtime initialization well before user code is run. For more information, see the libuv threadpool documentation.
Useful V8 options#
V8 has its own set of CLI options. Any V8 CLI option that is provided to node will be passed on to V8 to handle. V8's options have no stability guarantee. The V8 team themselves don't consider them to be part of their formal API, and reserve the right to change them at any time. Likewise, they are not covered by the Node.js stability guarantees. Many of the V8 options are of interest only to V8 developers. Despite this, there is a small set of V8 options that are widely applicable to Node.js, and they are documented here:
--abort-on-uncaught-exception#
--disallow-code-generation-from-strings#
--enable-etw-stack-walking#
--expose-gc#
--harmony-shadow-realm#
--heap-snapshot-on-oom#
--interpreted-frames-native-stack#
--jitless#
--max-old-space-size=SIZE (in MiB)#
Sets the max memory size of V8's old memory section. As memory consumption approaches the limit, V8 will spend more time on garbage collection in an effort to free unused memory.
On a machine with 2 GiB of memory, consider setting this to1536 (1.5 GiB) to leave some memory for other uses and avoid swapping.
node --max-old-space-size=1536 index.js
--max-semi-space-size=SIZE (in MiB)#
Sets the maximum semi-space size for V8's scavenge garbage collector in MiB (mebibytes). Increasing the max size of a semi-space may improve throughput for Node.js at the cost of more memory consumption.
Since the young generation size of the V8 heap is three times (see YoungGenerationSizeFromSemiSpaceSize in V8) the size of the semi-space, an increase of 1 MiB to semi-space applies to each of the three individual semi-spaces and causes the heap size to increase by 3 MiB. The throughput improvement depends on your workload (see #42511).
The default value depends on the memory limit. For example, on 64-bit systems with a memory limit of 512 MiB, the max size of a semi-space defaults to 1 MiB. For memory limits up to and including 2 GiB, the default max size of a semi-space will be less than 16 MiB on 64-bit systems.
To get the best configuration for your application, you should try differentmax-semi-space-size values when running benchmarks for your application.
For example, a benchmark on a 64-bit system:
for MiB in 16 32 64 128; do
  node --max-semi-space-size=$MiB index.js
done
--perf-basic-prof#
--perf-basic-prof-only-functions#
--perf-prof#
--perf-prof-unwinding-info#
--prof#
--security-revert#
--stack-trace-limit=limit#
The maximum number of stack frames to collect in an error's stack trace.Setting it to 0 disables stack trace collection. The default value is 10.
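The same limit is readable and writable at runtime through Error.stackTraceLimit, which this flag initializes; a minimal sketch:

```javascript
// Error.stackTraceLimit caps how many frames a new Error captures.
Error.stackTraceLimit = 2;

function inner() { return new Error('demo'); }
function outer() { return inner(); }

// The first line of .stack is the message; the rest are frames.
const frames = outer().stack.split('\n').slice(1);
console.log(frames.length); // → 2
```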
node --stack-trace-limit=12 -p -e "Error.stackTraceLimit"
# prints 12
Console#
Source Code: lib/console.js
The node:console module provides a simple debugging console that is similar to the JavaScript console mechanism provided by web browsers.
The module exports two specific components:
- A Console class with methods such as console.log(), console.error(), and console.warn() that can be used to write to any Node.js stream.
- A global console instance configured to write to process.stdout and process.stderr. The global console can be used without calling require('node:console').
Warning: The global console object's methods are neither consistently synchronous like the browser APIs they resemble, nor are they consistently asynchronous like all other Node.js streams. Programs that desire to depend on the synchronous / asynchronous behavior of the console functions should first figure out the nature of console's backing stream. This is because the stream is dependent on the underlying platform and standard stream configuration of the current process. See the note on process I/O for more information.
Example using the global console:
```js
console.log('hello world');
// Prints: hello world, to stdout
console.log('hello %s', 'world');
// Prints: hello world, to stdout
console.error(new Error('Whoops, something bad happened'));
// Prints error message and stack trace to stderr:
//   Error: Whoops, something bad happened
//     at [eval]:5:15
//     at Script.runInThisContext (node:vm:132:18)
//     at Object.runInThisContext (node:vm:309:38)
//     at node:internal/process/execution:77:19
//     at [eval]-wrapper:6:22
//     at evalScript (node:internal/process/execution:76:60)
//     at node:internal/main/eval_string:23:3

const name = 'Will Robinson';
console.warn(`Danger ${name}! Danger!`);
// Prints: Danger Will Robinson! Danger!, to stderr
```
Example using the Console class:
```js
const out = getStreamSomehow();
const err = getStreamSomehow();
const myConsole = new console.Console(out, err);

myConsole.log('hello world');
// Prints: hello world, to out
myConsole.log('hello %s', 'world');
// Prints: hello world, to out
myConsole.error(new Error('Whoops, something bad happened'));
// Prints: [Error: Whoops, something bad happened], to err

const name = 'Will Robinson';
myConsole.warn(`Danger ${name}! Danger!`);
// Prints: Danger Will Robinson! Danger!, to err
```
Class: Console#
History
| Version | Changes |
|---|---|
| v8.0.0 | Errors that occur while writing to the underlying streams will now be ignored by default. |
The Console class can be used to create a simple logger with configurable output streams and can be accessed using either require('node:console').Console or console.Console (or their destructured counterparts):
```js
// ESM
import { Console } from 'node:console';
```

```js
// CommonJS
const { Console } = require('node:console');
```

```js
const { Console } = console;
```
new Console(stdout[, stderr][, ignoreErrors])#
new Console(options)#
History
| Version | Changes |
|---|---|
| v24.10.0 | The |
| v14.2.0, v12.17.0 | The |
| v11.7.0 | The |
| v10.0.0 | The |
| v8.0.0 | The |
- options <Object>
  - stdout <stream.Writable>
  - stderr <stream.Writable>
  - ignoreErrors <boolean> Ignore errors when writing to the underlying streams. Default: true.
  - colorMode <boolean> | <string> Set color support for this Console instance. Setting to true enables coloring while inspecting values. Setting to false disables coloring while inspecting values. Setting to 'auto' makes color support depend on the value of the isTTY property and the value returned by getColorDepth() on the respective stream. This option can not be used if inspectOptions.colors is set as well. Default: 'auto'.
  - inspectOptions <Object> | <Map> Specifies options that are passed along to util.inspect(). Can be an options object or, if different options for stdout and stderr are desired, a Map from stream objects to options.
  - groupIndentation <number> Set group indentation. Default: 2.

Creates a new Console with one or two writable stream instances. stdout is a writable stream to print log or info output. stderr is used for warning or error output. If stderr is not provided, stdout is used for stderr.
```js
// ESM
import { createWriteStream } from 'node:fs';
import { Console } from 'node:console';
// Alternatively
// const { Console } = console;

const output = createWriteStream('./stdout.log');
const errorOutput = createWriteStream('./stderr.log');
// Custom simple logger
const logger = new Console({ stdout: output, stderr: errorOutput });
// use it like console
const count = 5;
logger.log('count: %d', count);
// In stdout.log: count 5
```

```js
// CommonJS
const fs = require('node:fs');
const { Console } = require('node:console');
// Alternatively
// const { Console } = console;

const output = fs.createWriteStream('./stdout.log');
const errorOutput = fs.createWriteStream('./stderr.log');
// Custom simple logger
const logger = new Console({ stdout: output, stderr: errorOutput });
// use it like console
const count = 5;
logger.log('count: %d', count);
// In stdout.log: count 5
```
The global console is a special Console whose output is sent to process.stdout and process.stderr. It is equivalent to calling:
```js
new Console({ stdout: process.stdout, stderr: process.stderr });
```
console.assert(value[, ...message])#
History
| Version | Changes |
|---|---|
| v10.0.0 | The implementation is now spec compliant and does not throw anymore. |
| v0.1.101 | Added in: v0.1.101 |
- value <any> The value tested for being truthy.
- ...message <any> All arguments besides value are used as error message.

console.assert() writes a message if value is falsy or omitted. It only writes a message and does not otherwise affect execution. The output always starts with "Assertion failed". If provided, message is formatted using util.format().
If value is truthy, nothing happens.
```js
console.assert(true, 'does nothing');

console.assert(false, 'Whoops %s work', 'didn\'t');
// Assertion failed: Whoops didn't work

console.assert();
// Assertion failed
```
console.clear()#
When stdout is a TTY, calling console.clear() will attempt to clear the TTY. When stdout is not a TTY, this method does nothing.
The specific operation of console.clear() can vary across operating systems and terminal types. For most Linux operating systems, console.clear() operates similarly to the clear shell command. On Windows, console.clear() will clear only the output in the current terminal viewport for the Node.js binary.
console.count([label])#
- label <string> The display label for the counter. Default: 'default'.

Maintains an internal counter specific to label and outputs to stdout the number of times console.count() has been called with the given label.
```console
> console.count()
default: 1
undefined
> console.count('default')
default: 2
undefined
> console.count('abc')
abc: 1
undefined
> console.count('xyz')
xyz: 1
undefined
> console.count('abc')
abc: 2
undefined
> console.count()
default: 3
undefined
```
console.countReset([label])#
- label <string> The display label for the counter. Default: 'default'.

Resets the internal counter specific to label.
```console
> console.count('abc');
abc: 1
undefined
> console.countReset('abc');
undefined
> console.count('abc');
abc: 1
undefined
```
console.debug(data[, ...args])#
History
| Version | Changes |
|---|---|
| v8.10.0 |
|
| v8.0.0 | Added in: v8.0.0 |
The console.debug() function is an alias for console.log().
console.dir(obj[, options])#
- obj <any>
- options <Object>
  - showHidden <boolean> If true then the object's non-enumerable and symbol properties will be shown too. Default: false.
  - depth <number> Tells util.inspect() how many times to recurse while formatting the object. This is useful for inspecting large complicated objects. To make it recurse indefinitely, pass null. Default: 2.
  - colors <boolean> If true, then the output will be styled with ANSI color codes. Colors are customizable; see customizing util.inspect() colors. Default: false.

Uses util.inspect() on obj and prints the resulting string to stdout. This function bypasses any custom inspect() function defined on obj.
console.dirxml(...data)#
History
| Version | Changes |
|---|---|
| v9.3.0 |
|
| v8.0.0 | Added in: v8.0.0 |
- ...data <any>

This method calls console.log() passing it the arguments received. This method does not produce any XML formatting.
console.error([data][, ...args])#
Prints to stderr with newline. Multiple arguments can be passed, with the first used as the primary message and all additional used as substitution values similar to printf(3) (the arguments are all passed to util.format()).
```js
const code = 5;
console.error('error #%d', code);
// Prints: error #5, to stderr
console.error('error', code);
// Prints: error 5, to stderr
```
If formatting elements (e.g. %d) are not found in the first string then util.inspect() is called on each argument and the resulting string values are concatenated. See util.format() for more information.
console.group([...label])#
- ...label <any>

Increases indentation of subsequent lines by spaces for groupIndentation length.
If one or more labels are provided, those are printed first without the additional indentation.
console.groupEnd()#
Decreases indentation of subsequent lines by spaces for groupIndentation length.
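A minimal sketch of how console.group() and console.groupEnd() pair up, assuming the default groupIndentation of 2:

```js
console.log('no indentation');
console.group('outer');
console.log('indented by 2 spaces');
console.group('inner');
console.log('indented by 4 spaces');
console.groupEnd();
console.groupEnd();
console.log('back to no indentation');
```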
console.info([data][, ...args])#
The console.info() function is an alias for console.log().
console.log([data][, ...args])#
Prints to stdout with newline. Multiple arguments can be passed, with the first used as the primary message and all additional used as substitution values similar to printf(3) (the arguments are all passed to util.format()).
```js
const count = 5;
console.log('count: %d', count);
// Prints: count: 5, to stdout
console.log('count:', count);
// Prints: count: 5, to stdout
```
See util.format() for more information.
console.table(tabularData[, properties])#
- tabularData <any>
- properties <string[]> Alternate properties for constructing the table.

Try to construct a table with the columns of the properties of tabularData (or use properties) and rows of tabularData and log it. Falls back to just logging the argument if it can't be parsed as tabular.
```js
// These can't be parsed as tabular data
console.table(Symbol());
// Symbol()

console.table(undefined);
// undefined

console.table([{ a: 1, b: 'Y' }, { a: 'Z', b: 2 }]);
// ┌─────────┬─────┬─────┐
// │ (index) │ a   │ b   │
// ├─────────┼─────┼─────┤
// │ 0       │ 1   │ 'Y' │
// │ 1       │ 'Z' │ 2   │
// └─────────┴─────┴─────┘

console.table([{ a: 1, b: 'Y' }, { a: 'Z', b: 2 }], ['a']);
// ┌─────────┬─────┐
// │ (index) │ a   │
// ├─────────┼─────┤
// │ 0       │ 1   │
// │ 1       │ 'Z' │
// └─────────┴─────┘
```
console.time([label])#
- label <string> Default: 'default'

Starts a timer that can be used to compute the duration of an operation. Timers are identified by a unique label. Use the same label when calling console.timeEnd() to stop the timer and output the elapsed time in suitable time units to stdout. For example, if the elapsed time is 3869ms, console.timeEnd() displays "3.869s".
console.timeEnd([label])#
History
| Version | Changes |
|---|---|
| v13.0.0 | The elapsed time is displayed with a suitable time unit. |
| v6.0.0 | This method no longer supports multiple calls that don't map to individual |
| v0.1.104 | Added in: v0.1.104 |
- label <string> Default: 'default'

Stops a timer that was previously started by calling console.time() and prints the result to stdout:
```js
console.time('bunch-of-stuff');
// Do a bunch of stuff.
console.timeEnd('bunch-of-stuff');
// Prints: bunch-of-stuff: 225.438ms
```
console.timeLog([label][, ...data])#
For a timer that was previously started by calling console.time(), prints the elapsed time and other data arguments to stdout:
```js
console.time('process');
const value = expensiveProcess1(); // Returns 42
console.timeLog('process', value);
// Prints "process: 365.227ms 42".
doExpensiveProcess2(value);
console.timeEnd('process');
```
console.trace([message][, ...args])#
Prints to stderr the string 'Trace: ', followed by the util.format() formatted message and stack trace to the current position in the code.
```js
console.trace('Show me');
// Prints: (stack trace will vary based on where trace is called)
//  Trace: Show me
//    at repl:2:9
//    at REPLServer.defaultEval (repl.js:248:27)
//    at bound (domain.js:287:14)
//    at REPLServer.runBound [as eval] (domain.js:300:12)
//    at REPLServer.<anonymous> (repl.js:412:12)
//    at emitOne (events.js:82:20)
//    at REPLServer.emit (events.js:169:7)
//    at REPLServer.Interface._onLine (readline.js:210:10)
//    at REPLServer.Interface._line (readline.js:549:8)
//    at REPLServer.Interface._ttyWrite (readline.js:826:14)
```
console.warn([data][, ...args])#
The console.warn() function is an alias for console.error().
Inspector only methods#
The following methods are exposed by the V8 engine in the general API but do not display anything unless used in conjunction with the inspector (--inspect flag).
console.profile([label])#
- label <string>

This method does not display anything unless used in the inspector. The console.profile() method starts a JavaScript CPU profile with an optional label until console.profileEnd() is called. The profile is then added to the Profile panel of the inspector.
```js
console.profile('MyLabel');
// Some code
console.profileEnd('MyLabel');
// Adds the profile 'MyLabel' to the Profiles panel of the inspector.
```
console.profileEnd([label])#
- label <string>

This method does not display anything unless used in the inspector. Stops the current JavaScript CPU profiling session if one has been started and prints the report to the Profiles panel of the inspector. See console.profile() for an example.
If this method is called without a label, the most recently started profile isstopped.
Crypto#
Source Code: lib/crypto.js
The node:crypto module provides cryptographic functionality that includes a set of wrappers for OpenSSL's hash, HMAC, cipher, decipher, sign, and verify functions.
```js
// ESM
const { createHmac } = await import('node:crypto');

const secret = 'abcdefg';
const hash = createHmac('sha256', secret)
  .update('I love cupcakes')
  .digest('hex');
console.log(hash);
// Prints:
//   c0fa1bc00531bd78ef38c628449c5102aeabd49b5dc3a2a516ea6ea959d6658e
```

```js
// CommonJS
const { createHmac } = require('node:crypto');

const secret = 'abcdefg';
const hash = createHmac('sha256', secret)
  .update('I love cupcakes')
  .digest('hex');
console.log(hash);
// Prints:
//   c0fa1bc00531bd78ef38c628449c5102aeabd49b5dc3a2a516ea6ea959d6658e
```
Determining if crypto support is unavailable#
It is possible for Node.js to be built without including support for the node:crypto module. In such cases, attempting to import from crypto or calling require('node:crypto') will result in an error being thrown.
When using CommonJS, the error thrown can be caught using try/catch:
```js
let crypto;
try {
  crypto = require('node:crypto');
} catch (err) {
  console.error('crypto support is disabled!');
}
```
When using the lexical ESM import keyword, the error can only be caught if a handler for process.on('uncaughtException') is registered before any attempt to load the module is made (using, for instance, a preload module).
When using ESM, if there is a chance that the code may be run on a build of Node.js where crypto support is not enabled, consider using the import() function instead of the lexical import keyword:
```js
let crypto;
try {
  crypto = await import('node:crypto');
} catch (err) {
  console.error('crypto support is disabled!');
}
```
Asymmetric key types#
The following table lists the asymmetric key types recognized by the KeyObject API:
| Key Type | Description | OID |
|---|---|---|
| 'dh' | Diffie-Hellman | 1.2.840.113549.1.3.1 |
| 'dsa' | DSA | 1.2.840.10040.4.1 |
| 'ec' | Elliptic curve | 1.2.840.10045.2.1 |
| 'ed25519' | Ed25519 | 1.3.101.112 |
| 'ed448' | Ed448 | 1.3.101.113 |
| 'ml-dsa-44'1 | ML-DSA-44 | 2.16.840.1.101.3.4.3.17 |
| 'ml-dsa-65'1 | ML-DSA-65 | 2.16.840.1.101.3.4.3.18 |
| 'ml-dsa-87'1 | ML-DSA-87 | 2.16.840.1.101.3.4.3.19 |
| 'ml-kem-512'1 | ML-KEM-512 | 2.16.840.1.101.3.4.4.1 |
| 'ml-kem-768'1 | ML-KEM-768 | 2.16.840.1.101.3.4.4.2 |
| 'ml-kem-1024'1 | ML-KEM-1024 | 2.16.840.1.101.3.4.4.3 |
| 'rsa-pss' | RSA PSS | 1.2.840.113549.1.1.10 |
| 'rsa' | RSA | 1.2.840.113549.1.1.1 |
| 'slh-dsa-sha2-128f'1 | SLH-DSA-SHA2-128f | 2.16.840.1.101.3.4.3.21 |
| 'slh-dsa-sha2-128s'1 | SLH-DSA-SHA2-128s | 2.16.840.1.101.3.4.3.20 |
| 'slh-dsa-sha2-192f'1 | SLH-DSA-SHA2-192f | 2.16.840.1.101.3.4.3.23 |
| 'slh-dsa-sha2-192s'1 | SLH-DSA-SHA2-192s | 2.16.840.1.101.3.4.3.22 |
| 'slh-dsa-sha2-256f'1 | SLH-DSA-SHA2-256f | 2.16.840.1.101.3.4.3.25 |
| 'slh-dsa-sha2-256s'1 | SLH-DSA-SHA2-256s | 2.16.840.1.101.3.4.3.24 |
| 'slh-dsa-shake-128f'1 | SLH-DSA-SHAKE-128f | 2.16.840.1.101.3.4.3.27 |
| 'slh-dsa-shake-128s'1 | SLH-DSA-SHAKE-128s | 2.16.840.1.101.3.4.3.26 |
| 'slh-dsa-shake-192f'1 | SLH-DSA-SHAKE-192f | 2.16.840.1.101.3.4.3.29 |
| 'slh-dsa-shake-192s'1 | SLH-DSA-SHAKE-192s | 2.16.840.1.101.3.4.3.28 |
| 'slh-dsa-shake-256f'1 | SLH-DSA-SHAKE-256f | 2.16.840.1.101.3.4.3.31 |
| 'slh-dsa-shake-256s'1 | SLH-DSA-SHAKE-256s | 2.16.840.1.101.3.4.3.30 |
| 'x25519' | X25519 | 1.3.101.110 |
| 'x448' | X448 | 1.3.101.111 |
Class: Certificate#
SPKAC is a Certificate Signing Request mechanism originally implemented by Netscape and was specified formally as part of HTML5's keygen element.
<keygen> is deprecated since HTML 5.2 and new projects should not use this element anymore.
The node:crypto module provides the Certificate class for working with SPKAC data. The most common usage is handling output generated by the HTML5 <keygen> element. Node.js uses OpenSSL's SPKAC implementation internally.
Static method: Certificate.exportChallenge(spkac[, encoding])#
History
| Version | Changes |
|---|---|
| v15.0.0 | The spkac argument can be an ArrayBuffer. Limited the size of the spkac argument to a maximum of 2**31 - 1 bytes. |
| v9.0.0 | Added in: v9.0.0 |
- spkac <string> | <ArrayBuffer> | <Buffer> | <TypedArray> | <DataView>
- encoding <string> The encoding of the spkac string.
- Returns: <Buffer> The challenge component of the spkac data structure, which includes a public key and a challenge.

```js
// ESM
const { Certificate } = await import('node:crypto');
const spkac = getSpkacSomehow();
const challenge = Certificate.exportChallenge(spkac);
console.log(challenge.toString('utf8'));
// Prints: the challenge as a UTF8 string
```

```js
// CommonJS
const { Certificate } = require('node:crypto');
const spkac = getSpkacSomehow();
const challenge = Certificate.exportChallenge(spkac);
console.log(challenge.toString('utf8'));
// Prints: the challenge as a UTF8 string
```
Static method: Certificate.exportPublicKey(spkac[, encoding])#
History
| Version | Changes |
|---|---|
| v15.0.0 | The spkac argument can be an ArrayBuffer. Limited the size of the spkac argument to a maximum of 2**31 - 1 bytes. |
| v9.0.0 | Added in: v9.0.0 |
- spkac <string> | <ArrayBuffer> | <Buffer> | <TypedArray> | <DataView>
- encoding <string> The encoding of the spkac string.
- Returns: <Buffer> The public key component of the spkac data structure, which includes a public key and a challenge.

```js
// ESM
const { Certificate } = await import('node:crypto');
const spkac = getSpkacSomehow();
const publicKey = Certificate.exportPublicKey(spkac);
console.log(publicKey);
// Prints: the public key as <Buffer ...>
```

```js
// CommonJS
const { Certificate } = require('node:crypto');
const spkac = getSpkacSomehow();
const publicKey = Certificate.exportPublicKey(spkac);
console.log(publicKey);
// Prints: the public key as <Buffer ...>
```
Static method: Certificate.verifySpkac(spkac[, encoding])#
History
| Version | Changes |
|---|---|
| v15.0.0 | The spkac argument can be an ArrayBuffer. Added encoding. Limited the size of the spkac argument to a maximum of 2**31 - 1 bytes. |
| v9.0.0 | Added in: v9.0.0 |
- spkac <string> | <ArrayBuffer> | <Buffer> | <TypedArray> | <DataView>
- encoding <string> The encoding of the spkac string.
- Returns: <boolean> true if the given spkac data structure is valid, false otherwise.

```js
// ESM
import { Buffer } from 'node:buffer';
const { Certificate } = await import('node:crypto');

const spkac = getSpkacSomehow();
console.log(Certificate.verifySpkac(Buffer.from(spkac)));
// Prints: true or false
```

```js
// CommonJS
const { Buffer } = require('node:buffer');
const { Certificate } = require('node:crypto');

const spkac = getSpkacSomehow();
console.log(Certificate.verifySpkac(Buffer.from(spkac)));
// Prints: true or false
```
Legacy API#
As a legacy interface, it is possible to create new instances of the crypto.Certificate class as illustrated in the examples below.
new crypto.Certificate()#
Instances of the Certificate class can be created using the new keyword or by calling crypto.Certificate() as a function:
```js
// ESM
const { Certificate } = await import('node:crypto');

const cert1 = new Certificate();
const cert2 = Certificate();
```

```js
// CommonJS
const { Certificate } = require('node:crypto');

const cert1 = new Certificate();
const cert2 = Certificate();
```
certificate.exportChallenge(spkac[, encoding])#
- spkac <string> | <ArrayBuffer> | <Buffer> | <TypedArray> | <DataView>
- encoding <string> The encoding of the spkac string.
- Returns: <Buffer> The challenge component of the spkac data structure, which includes a public key and a challenge.

```js
// ESM
const { Certificate } = await import('node:crypto');
const cert = Certificate();
const spkac = getSpkacSomehow();
const challenge = cert.exportChallenge(spkac);
console.log(challenge.toString('utf8'));
// Prints: the challenge as a UTF8 string
```

```js
// CommonJS
const { Certificate } = require('node:crypto');
const cert = Certificate();
const spkac = getSpkacSomehow();
const challenge = cert.exportChallenge(spkac);
console.log(challenge.toString('utf8'));
// Prints: the challenge as a UTF8 string
```
certificate.exportPublicKey(spkac[, encoding])#
- spkac <string> | <ArrayBuffer> | <Buffer> | <TypedArray> | <DataView>
- encoding <string> The encoding of the spkac string.
- Returns: <Buffer> The public key component of the spkac data structure, which includes a public key and a challenge.

```js
// ESM
const { Certificate } = await import('node:crypto');
const cert = Certificate();
const spkac = getSpkacSomehow();
const publicKey = cert.exportPublicKey(spkac);
console.log(publicKey);
// Prints: the public key as <Buffer ...>
```

```js
// CommonJS
const { Certificate } = require('node:crypto');
const cert = Certificate();
const spkac = getSpkacSomehow();
const publicKey = cert.exportPublicKey(spkac);
console.log(publicKey);
// Prints: the public key as <Buffer ...>
```
certificate.verifySpkac(spkac[, encoding])#
- spkac <string> | <ArrayBuffer> | <Buffer> | <TypedArray> | <DataView>
- encoding <string> The encoding of the spkac string.
- Returns: <boolean> true if the given spkac data structure is valid, false otherwise.

```js
// ESM
import { Buffer } from 'node:buffer';
const { Certificate } = await import('node:crypto');

const cert = Certificate();
const spkac = getSpkacSomehow();
console.log(cert.verifySpkac(Buffer.from(spkac)));
// Prints: true or false
```

```js
// CommonJS
const { Buffer } = require('node:buffer');
const { Certificate } = require('node:crypto');

const cert = Certificate();
const spkac = getSpkacSomehow();
console.log(cert.verifySpkac(Buffer.from(spkac)));
// Prints: true or false
```
Class: Cipheriv#
- Extends: <stream.Transform>

Instances of the Cipheriv class are used to encrypt data. The class can be used in one of two ways:
- As a stream that is both readable and writable, where plain unencrypted data is written to produce encrypted data on the readable side, or
- Using the cipher.update() and cipher.final() methods to produce the encrypted data.

The crypto.createCipheriv() method is used to create Cipheriv instances. Cipheriv objects are not to be created directly using the new keyword.
Example: Using Cipheriv objects as streams:
```js
// ESM
const {
  scrypt,
  randomFill,
  createCipheriv,
} = await import('node:crypto');

const algorithm = 'aes-192-cbc';
const password = 'Password used to generate key';

// First, we'll generate the key. The key length is dependent on the algorithm.
// In this case for aes192, it is 24 bytes (192 bits).
scrypt(password, 'salt', 24, (err, key) => {
  if (err) throw err;
  // Then, we'll generate a random initialization vector
  randomFill(new Uint8Array(16), (err, iv) => {
    if (err) throw err;

    // Once we have the key and iv, we can create and use the cipher...
    const cipher = createCipheriv(algorithm, key, iv);

    let encrypted = '';
    cipher.setEncoding('hex');

    cipher.on('data', (chunk) => encrypted += chunk);
    cipher.on('end', () => console.log(encrypted));

    cipher.write('some clear text data');
    cipher.end();
  });
});
```

```js
// CommonJS
const {
  scrypt,
  randomFill,
  createCipheriv,
} = require('node:crypto');

const algorithm = 'aes-192-cbc';
const password = 'Password used to generate key';

// First, we'll generate the key. The key length is dependent on the algorithm.
// In this case for aes192, it is 24 bytes (192 bits).
scrypt(password, 'salt', 24, (err, key) => {
  if (err) throw err;
  // Then, we'll generate a random initialization vector
  randomFill(new Uint8Array(16), (err, iv) => {
    if (err) throw err;

    // Once we have the key and iv, we can create and use the cipher...
    const cipher = createCipheriv(algorithm, key, iv);

    let encrypted = '';
    cipher.setEncoding('hex');

    cipher.on('data', (chunk) => encrypted += chunk);
    cipher.on('end', () => console.log(encrypted));

    cipher.write('some clear text data');
    cipher.end();
  });
});
```
Example: Using Cipheriv and piped streams:
```js
// ESM
import {
  createReadStream,
  createWriteStream,
} from 'node:fs';
import {
  pipeline,
} from 'node:stream';
const {
  scrypt,
  randomFill,
  createCipheriv,
} = await import('node:crypto');

const algorithm = 'aes-192-cbc';
const password = 'Password used to generate key';

// First, we'll generate the key. The key length is dependent on the algorithm.
// In this case for aes192, it is 24 bytes (192 bits).
scrypt(password, 'salt', 24, (err, key) => {
  if (err) throw err;
  // Then, we'll generate a random initialization vector
  randomFill(new Uint8Array(16), (err, iv) => {
    if (err) throw err;

    const cipher = createCipheriv(algorithm, key, iv);

    const input = createReadStream('test.js');
    const output = createWriteStream('test.enc');

    pipeline(input, cipher, output, (err) => {
      if (err) throw err;
    });
  });
});
```

```js
// CommonJS
const {
  createReadStream,
  createWriteStream,
} = require('node:fs');
const {
  pipeline,
} = require('node:stream');
const {
  scrypt,
  randomFill,
  createCipheriv,
} = require('node:crypto');

const algorithm = 'aes-192-cbc';
const password = 'Password used to generate key';

// First, we'll generate the key. The key length is dependent on the algorithm.
// In this case for aes192, it is 24 bytes (192 bits).
scrypt(password, 'salt', 24, (err, key) => {
  if (err) throw err;
  // Then, we'll generate a random initialization vector
  randomFill(new Uint8Array(16), (err, iv) => {
    if (err) throw err;

    const cipher = createCipheriv(algorithm, key, iv);

    const input = createReadStream('test.js');
    const output = createWriteStream('test.enc');

    pipeline(input, cipher, output, (err) => {
      if (err) throw err;
    });
  });
});
```
Example: Using the cipher.update() and cipher.final() methods:
```js
// ESM
const {
  scrypt,
  randomFill,
  createCipheriv,
} = await import('node:crypto');

const algorithm = 'aes-192-cbc';
const password = 'Password used to generate key';

// First, we'll generate the key. The key length is dependent on the algorithm.
// In this case for aes192, it is 24 bytes (192 bits).
scrypt(password, 'salt', 24, (err, key) => {
  if (err) throw err;
  // Then, we'll generate a random initialization vector
  randomFill(new Uint8Array(16), (err, iv) => {
    if (err) throw err;

    const cipher = createCipheriv(algorithm, key, iv);

    let encrypted = cipher.update('some clear text data', 'utf8', 'hex');
    encrypted += cipher.final('hex');
    console.log(encrypted);
  });
});
```

```js
// CommonJS
const {
  scrypt,
  randomFill,
  createCipheriv,
} = require('node:crypto');

const algorithm = 'aes-192-cbc';
const password = 'Password used to generate key';

// First, we'll generate the key. The key length is dependent on the algorithm.
// In this case for aes192, it is 24 bytes (192 bits).
scrypt(password, 'salt', 24, (err, key) => {
  if (err) throw err;
  // Then, we'll generate a random initialization vector
  randomFill(new Uint8Array(16), (err, iv) => {
    if (err) throw err;

    const cipher = createCipheriv(algorithm, key, iv);

    let encrypted = cipher.update('some clear text data', 'utf8', 'hex');
    encrypted += cipher.final('hex');
    console.log(encrypted);
  });
});
```
cipher.final([outputEncoding])#
- outputEncoding <string> The encoding of the return value.
- Returns: <Buffer> | <string> Any remaining enciphered contents. If outputEncoding is specified, a string is returned. If an outputEncoding is not provided, a Buffer is returned.

Once the cipher.final() method has been called, the Cipheriv object can no longer be used to encrypt data. Attempts to call cipher.final() more than once will result in an error being thrown.
cipher.getAuthTag()#
- Returns: <Buffer> When using an authenticated encryption mode (GCM, CCM, OCB, and chacha20-poly1305 are currently supported), the cipher.getAuthTag() method returns a Buffer containing the authentication tag that has been computed from the given data.

The cipher.getAuthTag() method should only be called after encryption has been completed using the cipher.final() method.
If the authTagLength option was set during the cipher instance's creation, this function will return exactly authTagLength bytes.
cipher.setAAD(buffer[, options])#
- buffer <string> | <ArrayBuffer> | <Buffer> | <TypedArray> | <DataView>
- options <Object> stream.transform options
- Returns: <Cipheriv> The same Cipheriv instance for method chaining.

When using an authenticated encryption mode (GCM, CCM, OCB, and chacha20-poly1305 are currently supported), the cipher.setAAD() method sets the value used for the additional authenticated data (AAD) input parameter.
The plaintextLength option is optional for GCM and OCB. When using CCM, the plaintextLength option must be specified and its value must match the length of the plaintext in bytes. See CCM mode.
The cipher.setAAD() method must be called before cipher.update().
cipher.setAutoPadding([autoPadding])#
- autoPadding <boolean> Default: true
- Returns: <Cipheriv> The same Cipheriv instance for method chaining.

When using block encryption algorithms, the Cipheriv class will automatically add padding to the input data to the appropriate block size. To disable the default padding call cipher.setAutoPadding(false).
When autoPadding is false, the length of the entire input data must be a multiple of the cipher's block size or cipher.final() will throw an error. Disabling automatic padding is useful for non-standard padding, for instance using 0x0 instead of PKCS padding.
The cipher.setAutoPadding() method must be called before cipher.final().
cipher.update(data[, inputEncoding][, outputEncoding])#
History
| Version | Changes |
|---|---|
| v6.0.0 | The default |
| v0.1.94 | Added in: v0.1.94 |
- data <string> | <Buffer> | <TypedArray> | <DataView>
- inputEncoding <string> The encoding of the data.
- outputEncoding <string> The encoding of the return value.
- Returns: <Buffer> | <string>

Updates the cipher with data. If the inputEncoding argument is given, the data argument is a string using the specified encoding. If the inputEncoding argument is not given, data must be a Buffer, TypedArray, or DataView. If data is a Buffer, TypedArray, or DataView, then inputEncoding is ignored.
The outputEncoding specifies the output format of the enciphered data. If the outputEncoding is specified, a string using the specified encoding is returned. If no outputEncoding is provided, a Buffer is returned.
The cipher.update() method can be called multiple times with new data until cipher.final() is called. Calling cipher.update() after cipher.final() will result in an error being thrown.
Class: Decipheriv#
- Extends: <stream.Transform>

Instances of the Decipheriv class are used to decrypt data. The class can be used in one of two ways:
- As a stream that is both readable and writable, where plain encrypted data is written to produce unencrypted data on the readable side, or
- Using the decipher.update() and decipher.final() methods to produce the unencrypted data.

The crypto.createDecipheriv() method is used to create Decipheriv instances. Decipheriv objects are not to be created directly using the new keyword.
Example: Using Decipheriv objects as streams:
```js
// ESM
import { Buffer } from 'node:buffer';
const {
  scryptSync,
  createDecipheriv,
} = await import('node:crypto');

const algorithm = 'aes-192-cbc';
const password = 'Password used to generate key';
// Key length is dependent on the algorithm. In this case for aes192, it is
// 24 bytes (192 bits).
// Use the async `crypto.scrypt()` instead.
const key = scryptSync(password, 'salt', 24);
// The IV is usually passed along with the ciphertext.
const iv = Buffer.alloc(16, 0); // Initialization vector.

const decipher = createDecipheriv(algorithm, key, iv);

let decrypted = '';
decipher.on('readable', () => {
  let chunk;
  while (null !== (chunk = decipher.read())) {
    decrypted += chunk.toString('utf8');
  }
});
decipher.on('end', () => {
  console.log(decrypted);
  // Prints: some clear text data
});

// Encrypted with same algorithm, key and iv.
const encrypted =
  'e5f79c5915c02171eec6b212d5520d44480993d7d622a7c4c2da32f6efda0ffa';
decipher.write(encrypted, 'hex');
decipher.end();
```

```js
// CommonJS
const {
  scryptSync,
  createDecipheriv,
} = require('node:crypto');
const { Buffer } = require('node:buffer');

const algorithm = 'aes-192-cbc';
const password = 'Password used to generate key';
// Key length is dependent on the algorithm. In this case for aes192, it is
// 24 bytes (192 bits).
// Use the async `crypto.scrypt()` instead.
const key = scryptSync(password, 'salt', 24);
// The IV is usually passed along with the ciphertext.
const iv = Buffer.alloc(16, 0); // Initialization vector.

const decipher = createDecipheriv(algorithm, key, iv);

let decrypted = '';
decipher.on('readable', () => {
  let chunk;
  while (null !== (chunk = decipher.read())) {
    decrypted += chunk.toString('utf8');
  }
});
decipher.on('end', () => {
  console.log(decrypted);
  // Prints: some clear text data
});

// Encrypted with same algorithm, key and iv.
const encrypted =
  'e5f79c5915c02171eec6b212d5520d44480993d7d622a7c4c2da32f6efda0ffa';
decipher.write(encrypted, 'hex');
decipher.end();
```
Example: Using `Decipheriv` and piped streams:

```js
import {
  createReadStream,
  createWriteStream,
} from 'node:fs';
import { Buffer } from 'node:buffer';
const {
  scryptSync,
  createDecipheriv,
} = await import('node:crypto');

const algorithm = 'aes-192-cbc';
const password = 'Password used to generate key';
// Use the async `crypto.scrypt()` instead.
const key = scryptSync(password, 'salt', 24);
// The IV is usually passed along with the ciphertext.
const iv = Buffer.alloc(16, 0); // Initialization vector.

const decipher = createDecipheriv(algorithm, key, iv);

const input = createReadStream('test.enc');
const output = createWriteStream('test.js');

input.pipe(decipher).pipe(output);
```
Example: Using the `decipher.update()` and `decipher.final()` methods:

```js
import { Buffer } from 'node:buffer';
const {
  scryptSync,
  createDecipheriv,
} = await import('node:crypto');

const algorithm = 'aes-192-cbc';
const password = 'Password used to generate key';
// Use the async `crypto.scrypt()` instead.
const key = scryptSync(password, 'salt', 24);
// The IV is usually passed along with the ciphertext.
const iv = Buffer.alloc(16, 0); // Initialization vector.

const decipher = createDecipheriv(algorithm, key, iv);

// Encrypted using same algorithm, key and iv.
const encrypted =
  'e5f79c5915c02171eec6b212d5520d44480993d7d622a7c4c2da32f6efda0ffa';
let decrypted = decipher.update(encrypted, 'hex', 'utf8');
decrypted += decipher.final('utf8');
console.log(decrypted);
// Prints: some clear text data
```
decipher.final([outputEncoding])#
- `outputEncoding` <string> The encoding of the return value.
- Returns: <Buffer> | <string> Any remaining deciphered contents. If `outputEncoding` is specified, a string is returned. If an `outputEncoding` is not provided, a `Buffer` is returned.

Once the `decipher.final()` method has been called, the `Decipheriv` object can no longer be used to decrypt data. Attempts to call `decipher.final()` more than once will result in an error being thrown.
decipher.setAAD(buffer[, options])#
History
| Version | Changes |
|---|---|
| v15.0.0 | The buffer argument can be a string or ArrayBuffer and is limited to no more than 2 ** 31 - 1 bytes. |
| v7.2.0 | This method now returns a reference to `decipher`. |
| v1.0.0 | Added in: v1.0.0 |
- `buffer` <string> | <ArrayBuffer> | <Buffer> | <TypedArray> | <DataView>
- `options` <Object> `stream.transform` options
- Returns: <Decipheriv> The same Decipher for method chaining.

When using an authenticated encryption mode (`GCM`, `CCM`, `OCB`, and `chacha20-poly1305` are currently supported), the `decipher.setAAD()` method sets the value used for the additional authenticated data (AAD) input parameter.

The `options` argument is optional for `GCM`. When using `CCM`, the `plaintextLength` option must be specified and its value must match the length of the ciphertext in bytes. See CCM mode.

The `decipher.setAAD()` method must be called before `decipher.update()`.

When passing a string as the `buffer`, please consider caveats when using strings as inputs to cryptographic APIs.
decipher.setAuthTag(buffer[, encoding])#
History
| Version | Changes |
|---|---|
| v22.0.0, v20.13.0 | Using GCM tag lengths other than 128 bits without specifying the `authTagLength` option when creating `decipher` is deprecated. |
| v15.0.0 | The buffer argument can be a string or ArrayBuffer and is limited to no more than 2 ** 31 - 1 bytes. |
| v11.0.0 | This method now throws if the GCM tag length is invalid. |
| v7.2.0 | This method now returns a reference to `decipher`. |
| v1.0.0 | Added in: v1.0.0 |
- `buffer` <string> | <Buffer> | <ArrayBuffer> | <TypedArray> | <DataView>
- `encoding` <string> String encoding to use when `buffer` is a string.
- Returns: <Decipheriv> The same Decipher for method chaining.

When using an authenticated encryption mode (`GCM`, `CCM`, `OCB`, and `chacha20-poly1305` are currently supported), the `decipher.setAuthTag()` method is used to pass in the received authentication tag. If no tag is provided, or if the cipher text has been tampered with, `decipher.final()` will throw, indicating that the cipher text should be discarded due to failed authentication. If the tag length is invalid according to NIST SP 800-38D or does not match the value of the `authTagLength` option, `decipher.setAuthTag()` will throw an error.

The `decipher.setAuthTag()` method must be called before `decipher.update()` for `CCM` mode, or before `decipher.final()` for `GCM` and `OCB` modes and `chacha20-poly1305`. `decipher.setAuthTag()` can only be called once.

Because the `node:crypto` module was originally designed to closely mirror OpenSSL's behavior, this function permits short GCM authentication tags unless an explicit authentication tag length was passed to `crypto.createDecipheriv()` when the `decipher` object was created. This behavior is deprecated and subject to change (see DEP0182). In the meantime, applications should either set the `authTagLength` option when calling `createDecipheriv()` or check the actual authentication tag length before passing it to `setAuthTag()`.

When passing a string as the authentication tag, please consider caveats when using strings as inputs to cryptographic APIs.
decipher.setAutoPadding([autoPadding])#
- `autoPadding` <boolean> Default: `true`
- Returns: <Decipheriv> The same Decipher for method chaining.

When data has been encrypted without standard block padding, calling `decipher.setAutoPadding(false)` will disable automatic padding to prevent `decipher.final()` from checking for and removing padding.

Turning auto padding off will only work if the input data's length is a multiple of the cipher's block size.

The `decipher.setAutoPadding()` method must be called before `decipher.final()`.
decipher.update(data[, inputEncoding][, outputEncoding])#
History
| Version | Changes |
|---|---|
| v6.0.0 | The default `inputEncoding` changed from `binary` to `utf8`. |
| v0.1.94 | Added in: v0.1.94 |
- `data` <string> | <Buffer> | <TypedArray> | <DataView>
- `inputEncoding` <string> The encoding of the `data` string.
- `outputEncoding` <string> The encoding of the return value.
- Returns: <Buffer> | <string>

Updates the decipher with `data`. If the `inputEncoding` argument is given, the `data` argument is a string using the specified encoding. If the `inputEncoding` argument is not given, `data` must be a `Buffer`. If `data` is a `Buffer` then `inputEncoding` is ignored.

The `outputEncoding` specifies the output format of the deciphered data. If `outputEncoding` is specified, a string using the specified encoding is returned. If no `outputEncoding` is provided, a `Buffer` is returned.

The `decipher.update()` method can be called multiple times with new data until `decipher.final()` is called. Calling `decipher.update()` after `decipher.final()` will result in an error being thrown.

Even if the underlying cipher implements authentication, the authenticity and integrity of the plaintext returned from this function may be uncertain at this time. For authenticated encryption algorithms, authenticity is generally only established when the application calls `decipher.final()`.
Class:DiffieHellman#
The `DiffieHellman` class is a utility for creating Diffie-Hellman key exchanges.

Instances of the `DiffieHellman` class can be created using the `crypto.createDiffieHellman()` function.

```js
import assert from 'node:assert';
const {
  createDiffieHellman,
} = await import('node:crypto');

// Generate Alice's keys...
const alice = createDiffieHellman(2048);
const aliceKey = alice.generateKeys();

// Generate Bob's keys...
const bob = createDiffieHellman(alice.getPrime(), alice.getGenerator());
const bobKey = bob.generateKeys();

// Exchange and generate the secret...
const aliceSecret = alice.computeSecret(bobKey);
const bobSecret = bob.computeSecret(aliceKey);

// OK
assert.strictEqual(aliceSecret.toString('hex'), bobSecret.toString('hex'));
```
diffieHellman.computeSecret(otherPublicKey[, inputEncoding][, outputEncoding])#
- `otherPublicKey` <string> | <ArrayBuffer> | <Buffer> | <TypedArray> | <DataView>
- `inputEncoding` <string> The encoding of an `otherPublicKey` string.
- `outputEncoding` <string> The encoding of the return value.
- Returns: <Buffer> | <string>

Computes the shared secret using `otherPublicKey` as the other party's public key and returns the computed shared secret. The supplied key is interpreted using the specified `inputEncoding`, and the secret is encoded using the specified `outputEncoding`. If the `inputEncoding` is not provided, `otherPublicKey` is expected to be a `Buffer`, `TypedArray`, or `DataView`.

If `outputEncoding` is given a string is returned; otherwise, a `Buffer` is returned.
diffieHellman.generateKeys([encoding])#
Generates private and public Diffie-Hellman key values unless they have been generated or computed already, and returns the public key in the specified `encoding`. This key should be transferred to the other party. If `encoding` is provided a string is returned; otherwise a `Buffer` is returned.

This function is a thin wrapper around `DH_generate_key()`. In particular, once a private key has been generated or set, calling this function only updates the public key but does not generate a new private key.
diffieHellman.getGenerator([encoding])#
Returns the Diffie-Hellman generator in the specified `encoding`. If `encoding` is provided a string is returned; otherwise a `Buffer` is returned.
diffieHellman.getPrime([encoding])#
Returns the Diffie-Hellman prime in the specified `encoding`. If `encoding` is provided a string is returned; otherwise a `Buffer` is returned.
diffieHellman.getPrivateKey([encoding])#
Returns the Diffie-Hellman private key in the specified `encoding`. If `encoding` is provided a string is returned; otherwise a `Buffer` is returned.
diffieHellman.getPublicKey([encoding])#
Returns the Diffie-Hellman public key in the specified `encoding`. If `encoding` is provided a string is returned; otherwise a `Buffer` is returned.
diffieHellman.setPrivateKey(privateKey[, encoding])#
- `privateKey` <string> | <ArrayBuffer> | <Buffer> | <TypedArray> | <DataView>
- `encoding` <string> The encoding of the `privateKey` string.

Sets the Diffie-Hellman private key. If the `encoding` argument is provided, `privateKey` is expected to be a string. If no `encoding` is provided, `privateKey` is expected to be a `Buffer`, `TypedArray`, or `DataView`.

This function does not automatically compute the associated public key. Either `diffieHellman.setPublicKey()` or `diffieHellman.generateKeys()` can be used to manually provide the public key or to automatically derive it.
diffieHellman.setPublicKey(publicKey[, encoding])#
- `publicKey` <string> | <ArrayBuffer> | <Buffer> | <TypedArray> | <DataView>
- `encoding` <string> The encoding of the `publicKey` string.

Sets the Diffie-Hellman public key. If the `encoding` argument is provided, `publicKey` is expected to be a string. If no `encoding` is provided, `publicKey` is expected to be a `Buffer`, `TypedArray`, or `DataView`.
diffieHellman.verifyError#
A bit field containing any warnings and/or errors resulting from a check performed during initialization of the `DiffieHellman` object.

The following values are valid for this property (as defined in the `node:constants` module):

- `DH_CHECK_P_NOT_SAFE_PRIME`
- `DH_CHECK_P_NOT_PRIME`
- `DH_UNABLE_TO_CHECK_GENERATOR`
- `DH_NOT_SUITABLE_GENERATOR`
Class:DiffieHellmanGroup#
The `DiffieHellmanGroup` class takes a well-known modp group as its argument. It works the same as `DiffieHellman`, except that it does not allow changing its keys after creation. In other words, it does not implement `setPublicKey()` or `setPrivateKey()` methods.

```js
const { createDiffieHellmanGroup } = await import('node:crypto');
const dh = createDiffieHellmanGroup('modp16');
```
The following groups are supported:
- `'modp14'` (2048 bits, RFC 3526 Section 3)
- `'modp15'` (3072 bits, RFC 3526 Section 4)
- `'modp16'` (4096 bits, RFC 3526 Section 5)
- `'modp17'` (6144 bits, RFC 3526 Section 6)
- `'modp18'` (8192 bits, RFC 3526 Section 7)
The following groups are still supported but deprecated (seeCaveats):
- `'modp1'` (768 bits, RFC 2409 Section 6.1)
- `'modp2'` (1024 bits, RFC 2409 Section 6.2)
- `'modp5'` (1536 bits, RFC 3526 Section 2)
These deprecated groups might be removed in future versions of Node.js.
Class:ECDH#
The `ECDH` class is a utility for creating Elliptic Curve Diffie-Hellman (ECDH) key exchanges.

Instances of the `ECDH` class can be created using the `crypto.createECDH()` function.

```js
import assert from 'node:assert';
const {
  createECDH,
} = await import('node:crypto');

// Generate Alice's keys...
const alice = createECDH('secp521r1');
const aliceKey = alice.generateKeys();

// Generate Bob's keys...
const bob = createECDH('secp521r1');
const bobKey = bob.generateKeys();

// Exchange and generate the secret...
const aliceSecret = alice.computeSecret(bobKey);
const bobSecret = bob.computeSecret(aliceKey);

assert.strictEqual(aliceSecret.toString('hex'), bobSecret.toString('hex'));
// OK
```
Static method:ECDH.convertKey(key, curve[, inputEncoding[, outputEncoding[, format]]])#
- `key` <string> | <ArrayBuffer> | <Buffer> | <TypedArray> | <DataView>
- `curve` <string>
- `inputEncoding` <string> The encoding of the `key` string.
- `outputEncoding` <string> The encoding of the return value.
- `format` <string> Default: `'uncompressed'`
- Returns: <Buffer> | <string>

Converts the EC Diffie-Hellman public key specified by `key` and `curve` to the format specified by `format`. The `format` argument specifies point encoding and can be `'compressed'`, `'uncompressed'` or `'hybrid'`. The supplied key is interpreted using the specified `inputEncoding`, and the returned key is encoded using the specified `outputEncoding`.

Use `crypto.getCurves()` to obtain a list of available curve names. On recent OpenSSL releases, `openssl ecparam -list_curves` will also display the name and description of each available elliptic curve.

If `format` is not specified the point will be returned in `'uncompressed'` format.

If the `inputEncoding` is not provided, `key` is expected to be a `Buffer`, `TypedArray`, or `DataView`.
Example (uncompressing a key):
```js
const {
  createECDH,
  ECDH,
} = await import('node:crypto');

const ecdh = createECDH('secp256k1');
ecdh.generateKeys();

const compressedKey = ecdh.getPublicKey('hex', 'compressed');

const uncompressedKey = ECDH.convertKey(compressedKey,
                                        'secp256k1',
                                        'hex',
                                        'hex',
                                        'uncompressed');

// The converted key and the uncompressed public key should be the same
console.log(uncompressedKey === ecdh.getPublicKey('hex'));
```
ecdh.computeSecret(otherPublicKey[, inputEncoding][, outputEncoding])#
History
| Version | Changes |
|---|---|
| v10.0.0 | Changed error format to better support invalid public key error. |
| v6.0.0 | The default `inputEncoding` changed from `binary` to `utf8`. |
| v0.11.14 | Added in: v0.11.14 |
- `otherPublicKey` <string> | <ArrayBuffer> | <Buffer> | <TypedArray> | <DataView>
- `inputEncoding` <string> The encoding of the `otherPublicKey` string.
- `outputEncoding` <string> The encoding of the return value.
- Returns: <Buffer> | <string>

Computes the shared secret using `otherPublicKey` as the other party's public key and returns the computed shared secret. The supplied key is interpreted using the specified `inputEncoding`, and the returned secret is encoded using the specified `outputEncoding`. If the `inputEncoding` is not provided, `otherPublicKey` is expected to be a `Buffer`, `TypedArray`, or `DataView`.

If `outputEncoding` is given a string will be returned; otherwise a `Buffer` is returned.

`ecdh.computeSecret` will throw an `ERR_CRYPTO_ECDH_INVALID_PUBLIC_KEY` error when `otherPublicKey` lies outside of the elliptic curve. Since `otherPublicKey` is usually supplied from a remote user over an insecure network, be sure to handle this exception accordingly.
ecdh.generateKeys([encoding[, format]])#
- `encoding` <string> The encoding of the return value.
- `format` <string> Default: `'uncompressed'`
- Returns: <Buffer> | <string>

Generates private and public EC Diffie-Hellman key values, and returns the public key in the specified `format` and `encoding`. This key should be transferred to the other party.

The `format` argument specifies point encoding and can be `'compressed'` or `'uncompressed'`. If `format` is not specified, the point will be returned in `'uncompressed'` format.

If `encoding` is provided a string is returned; otherwise a `Buffer` is returned.
ecdh.getPrivateKey([encoding])#
- `encoding` <string> The encoding of the return value.
- Returns: <Buffer> | <string> The EC Diffie-Hellman private key in the specified `encoding`.

If `encoding` is specified, a string is returned; otherwise a `Buffer` is returned.
ecdh.getPublicKey([encoding][, format])#
- `encoding` <string> The encoding of the return value.
- `format` <string> Default: `'uncompressed'`
- Returns: <Buffer> | <string> The EC Diffie-Hellman public key in the specified `encoding` and `format`.

The `format` argument specifies point encoding and can be `'compressed'` or `'uncompressed'`. If `format` is not specified the point will be returned in `'uncompressed'` format.

If `encoding` is specified, a string is returned; otherwise a `Buffer` is returned.
ecdh.setPrivateKey(privateKey[, encoding])#
- `privateKey` <string> | <ArrayBuffer> | <Buffer> | <TypedArray> | <DataView>
- `encoding` <string> The encoding of the `privateKey` string.

Sets the EC Diffie-Hellman private key. If `encoding` is provided, `privateKey` is expected to be a string; otherwise `privateKey` is expected to be a `Buffer`, `TypedArray`, or `DataView`.

If `privateKey` is not valid for the curve specified when the `ECDH` object was created, an error is thrown. Upon setting the private key, the associated public point (key) is also generated and set in the `ECDH` object.
ecdh.setPublicKey(publicKey[, encoding])#
- `publicKey` <string> | <ArrayBuffer> | <Buffer> | <TypedArray> | <DataView>
- `encoding` <string> The encoding of the `publicKey` string.

Sets the EC Diffie-Hellman public key. If `encoding` is provided `publicKey` is expected to be a string; otherwise a `Buffer`, `TypedArray`, or `DataView` is expected.

There is not normally a reason to call this method because `ECDH` only requires a private key and the other party's public key to compute the shared secret. Typically either `ecdh.generateKeys()` or `ecdh.setPrivateKey()` will be called. The `ecdh.setPrivateKey()` method attempts to generate the public point/key associated with the private key being set.
Example (obtaining a shared secret):
```js
const {
  createECDH,
  createHash,
} = await import('node:crypto');

const alice = createECDH('secp256k1');
const bob = createECDH('secp256k1');

// This is a shortcut way of specifying one of Alice's previous private
// keys. It would be unwise to use such a predictable private key in a real
// application.
alice.setPrivateKey(
  createHash('sha256').update('alice', 'utf8').digest(),
);

// Bob uses a newly generated cryptographically strong
// pseudorandom key pair
bob.generateKeys();

const aliceSecret = alice.computeSecret(bob.getPublicKey(), null, 'hex');
const bobSecret = bob.computeSecret(alice.getPublicKey(), null, 'hex');

// aliceSecret and bobSecret should be the same shared secret value
console.log(aliceSecret === bobSecret);
```
Class:Hash#
- Extends:<stream.Transform>
The `Hash` class is a utility for creating hash digests of data. It can be used in one of two ways:

- As a stream that is both readable and writable, where data is written to produce a computed hash digest on the readable side, or
- Using the `hash.update()` and `hash.digest()` methods to produce the computed hash.

The `crypto.createHash()` method is used to create `Hash` instances. `Hash` objects are not to be created directly using the `new` keyword.
Example: Using `Hash` objects as streams:

```js
const {
  createHash,
} = await import('node:crypto');

const hash = createHash('sha256');

hash.on('readable', () => {
  // Only one element is going to be produced by the
  // hash stream.
  const data = hash.read();
  if (data) {
    console.log(data.toString('hex'));
    // Prints:
    //   6a2da20943931e9834fc12cfe5bb47bbd9ae43489a30726962b576f4e3993e50
  }
});

hash.write('some data to hash');
hash.end();
```
Example: Using `Hash` and piped streams:

```js
import { createReadStream } from 'node:fs';
import { stdout } from 'node:process';
const { createHash } = await import('node:crypto');

const hash = createHash('sha256');

const input = createReadStream('test.js');
input.pipe(hash).setEncoding('hex').pipe(stdout);
```
Example: Using the `hash.update()` and `hash.digest()` methods:

```js
const {
  createHash,
} = await import('node:crypto');

const hash = createHash('sha256');

hash.update('some data to hash');
console.log(hash.digest('hex'));
// Prints:
//   6a2da20943931e9834fc12cfe5bb47bbd9ae43489a30726962b576f4e3993e50
```
hash.copy([options])#
- `options` <Object> `stream.transform` options
- Returns: <Hash>

Creates a new `Hash` object that contains a deep copy of the internal state of the current `Hash` object.

The optional `options` argument controls stream behavior. For XOF hash functions such as `'shake256'`, the `outputLength` option can be used to specify the desired output length in bytes.

An error is thrown when an attempt is made to copy the `Hash` object after its `hash.digest()` method has been called.

```js
// Calculate a rolling hash.
const {
  createHash,
} = await import('node:crypto');

const hash = createHash('sha256');

hash.update('one');
console.log(hash.copy().digest('hex'));

hash.update('two');
console.log(hash.copy().digest('hex'));

hash.update('three');
console.log(hash.copy().digest('hex'));

// Etc.
```
hash.digest([encoding])#
Calculates the digest of all of the data passed to be hashed (using the `hash.update()` method). If `encoding` is provided a string will be returned; otherwise a `Buffer` is returned.

The `Hash` object cannot be used again after the `hash.digest()` method has been called. Multiple calls will cause an error to be thrown.
hash.update(data[, inputEncoding])#
History
| Version | Changes |
|---|---|
| v6.0.0 | The default `inputEncoding` changed from `binary` to `utf8`. |
| v0.1.92 | Added in: v0.1.92 |
- `data` <string> | <Buffer> | <TypedArray> | <DataView>
- `inputEncoding` <string> The encoding of the `data` string.

Updates the hash content with the given `data`, the encoding of which is given in `inputEncoding`. If `inputEncoding` is not provided, and the `data` is a string, an encoding of `'utf8'` is enforced. If `data` is a `Buffer`, `TypedArray`, or `DataView`, then `inputEncoding` is ignored.
This can be called many times with new data as it is streamed.
Class:Hmac#
- Extends:<stream.Transform>
The `Hmac` class is a utility for creating cryptographic HMAC digests. It can be used in one of two ways:

- As a stream that is both readable and writable, where data is written to produce a computed HMAC digest on the readable side, or
- Using the `hmac.update()` and `hmac.digest()` methods to produce the computed HMAC digest.

The `crypto.createHmac()` method is used to create `Hmac` instances. `Hmac` objects are not to be created directly using the `new` keyword.
Example: Using `Hmac` objects as streams:

```js
const {
  createHmac,
} = await import('node:crypto');

const hmac = createHmac('sha256', 'a secret');

hmac.on('readable', () => {
  // Only one element is going to be produced by the
  // hash stream.
  const data = hmac.read();
  if (data) {
    console.log(data.toString('hex'));
    // Prints:
    //   7fd04df92f636fd450bc841c9418e5825c17f33ad9c87c518115a45971f7f77e
  }
});

hmac.write('some data to hash');
hmac.end();
```
Example: Using `Hmac` and piped streams:

```js
import { createReadStream } from 'node:fs';
import { stdout } from 'node:process';
const {
  createHmac,
} = await import('node:crypto');

const hmac = createHmac('sha256', 'a secret');

const input = createReadStream('test.js');
input.pipe(hmac).pipe(stdout);
```
Example: Using the `hmac.update()` and `hmac.digest()` methods:

```js
const {
  createHmac,
} = await import('node:crypto');

const hmac = createHmac('sha256', 'a secret');

hmac.update('some data to hash');
console.log(hmac.digest('hex'));
// Prints:
//   7fd04df92f636fd450bc841c9418e5825c17f33ad9c87c518115a45971f7f77e
```
hmac.digest([encoding])#
Calculates the HMAC digest of all of the data passed using `hmac.update()`. If `encoding` is provided a string is returned; otherwise a `Buffer` is returned.

The `Hmac` object cannot be used again after `hmac.digest()` has been called. Multiple calls to `hmac.digest()` will result in an error being thrown.
hmac.update(data[, inputEncoding])#
History
| Version | Changes |
|---|---|
| v6.0.0 | The default `inputEncoding` changed from `binary` to `utf8`. |
| v0.1.94 | Added in: v0.1.94 |
- `data` <string> | <Buffer> | <TypedArray> | <DataView>
- `inputEncoding` <string> The encoding of the `data` string.

Updates the `Hmac` content with the given `data`, the encoding of which is given in `inputEncoding`. If `inputEncoding` is not provided, and the `data` is a string, an encoding of `'utf8'` is enforced. If `data` is a `Buffer`, `TypedArray`, or `DataView`, then `inputEncoding` is ignored.
This can be called many times with new data as it is streamed.
Class:KeyObject#
History
| Version | Changes |
|---|---|
| v24.6.0 | Add support for ML-DSA keys. |
| v14.5.0, v12.19.0 | Instances of this class can now be passed to worker threads using `postMessage()`. |
| v11.13.0 | This class is now exported. |
| v11.6.0 | Added in: v11.6.0 |
Node.js uses a `KeyObject` class to represent a symmetric or asymmetric key, and each kind of key exposes different functions. The `crypto.createSecretKey()`, `crypto.createPublicKey()` and `crypto.createPrivateKey()` methods are used to create `KeyObject` instances. `KeyObject` objects are not to be created directly using the `new` keyword.

Most applications should consider using the new `KeyObject` API instead of passing keys as strings or `Buffer`s due to improved security features.

`KeyObject` instances can be passed to other threads via `postMessage()`. The receiver obtains a cloned `KeyObject`, and the `KeyObject` does not need to be listed in the `transferList` argument.
Static method:KeyObject.from(key)#
- `key` <CryptoKey>
- Returns: <KeyObject>

Example: Converting a `CryptoKey` instance to a `KeyObject`:

```js
const { KeyObject } = await import('node:crypto');
const { subtle } = globalThis.crypto;

const key = await subtle.generateKey({
  name: 'HMAC',
  hash: 'SHA-256',
  length: 256,
}, true, ['sign', 'verify']);

const keyObject = KeyObject.from(key);
console.log(keyObject.symmetricKeySize);
// Prints: 32 (symmetric key size in bytes)
```
keyObject.asymmetricKeyDetails#
History
| Version | Changes |
|---|---|
| v16.9.0 | Expose `RSASSA-PSS-params` sequence parameters for RSA-PSS keys. |
| v15.7.0 | Added in: v15.7.0 |
- Type:<Object>
- `modulusLength` <number> Key size in bits (RSA, DSA).
- `publicExponent` <bigint> Public exponent (RSA).
- `hashAlgorithm` <string> Name of the message digest (RSA-PSS).
- `mgf1HashAlgorithm` <string> Name of the message digest used by MGF1 (RSA-PSS).
- `saltLength` <number> Minimal salt length in bytes (RSA-PSS).
- `divisorLength` <number> Size of `q` in bits (DSA).
- `namedCurve` <string> Name of the curve (EC).

This property exists only on asymmetric keys. Depending on the type of the key, this object contains information about the key. None of the information obtained through this property can be used to uniquely identify a key or to compromise the security of the key.

For RSA-PSS keys, if the key material contains an `RSASSA-PSS-params` sequence, the `hashAlgorithm`, `mgf1HashAlgorithm`, and `saltLength` properties will be set.

Other key details might be exposed via this API using additional attributes.
keyObject.asymmetricKeyType#
History
| Version | Changes |
|---|---|
| v24.8.0 | Add support for SLH-DSA keys. |
| v24.7.0 | Add support for ML-KEM keys. |
| v24.6.0 | Add support for ML-DSA keys. |
| v13.9.0, v12.17.0 | Added support for |
| v12.0.0 | Added support for |
| v12.0.0 | This property now returns |
| v12.0.0 | Added support for |
| v12.0.0 | Added support for |
| v11.6.0 | Added in: v11.6.0 |
- Type:<string>
For asymmetric keys, this property represents the type of the key. See the supported asymmetric key types.

This property is `undefined` for unrecognized `KeyObject` types and symmetric keys.
keyObject.equals(otherKeyObject)#
- `otherKeyObject` <KeyObject> A `KeyObject` with which to compare `keyObject`.
- Returns: <boolean>
Returns `true` or `false` depending on whether the keys have exactly the same type, value, and parameters. This method is not constant time.
keyObject.export([options])#
History
| Version | Changes |
|---|---|
| v15.9.0 | Added support for |
| v11.6.0 | Added in: v11.6.0 |
For symmetric keys, the following encoding options can be used:
- `format` <string> Must be `'buffer'` (default) or `'jwk'`.
For public keys, the following encoding options can be used:
For private keys, the following encoding options can be used:
- `type` <string> Must be one of `'pkcs1'` (RSA only), `'pkcs8'`, or `'sec1'` (EC only).
- `format` <string> Must be `'pem'`, `'der'`, or `'jwk'`.
- `cipher` <string> If specified, the private key will be encrypted with the given `cipher` and `passphrase` using PKCS#5 v2.0 password based encryption.
- `passphrase` <string> | <Buffer> The passphrase to use for encryption, see `cipher`.
The result type depends on the selected encoding format: when PEM the result is a string, when DER it will be a buffer containing the data encoded as DER, and when JWK it will be an object.
When the JWK encoding format is selected, all other encoding options are ignored.
PKCS#1, SEC1, and PKCS#8 type keys can be encrypted by using a combination of the `cipher` and `format` options. The PKCS#8 `type` can be used with any `format` to encrypt any key algorithm (RSA, EC, or DH) by specifying a `cipher`. PKCS#1 and SEC1 can only be encrypted by specifying a `cipher` when the PEM `format` is used. For maximum compatibility, use PKCS#8 for encrypted private keys. Since PKCS#8 defines its own encryption mechanism, PEM-level encryption is not supported when encrypting a PKCS#8 key. See RFC 5208 for PKCS#8 encryption and RFC 1421 for PKCS#1 and SEC1 encryption.
keyObject.symmetricKeySize#
- Type:<number>
For secret keys, this property represents the size of the key in bytes. This property is `undefined` for asymmetric keys.
keyObject.toCryptoKey(algorithm, extractable, keyUsages)#
- `extractable` <boolean>
- `keyUsages` <string[]> See Key usages.
- Returns: <CryptoKey>
Converts a `KeyObject` instance to a `CryptoKey`.
Class:Sign#
- Extends:<stream.Writable>
The `Sign` class is a utility for generating signatures. It can be used in one of two ways:
- As a writable stream, where data to be signed is written and the `sign.sign()` method is used to generate and return the signature, or
- Using the `sign.update()` and `sign.sign()` methods to produce the signature.
The `crypto.createSign()` method is used to create `Sign` instances. The argument is the string name of the hash function to use. `Sign` objects are not to be created directly using the `new` keyword.
Example: Using `Sign` and `Verify` objects as streams:
```js
// ESM
const {
  generateKeyPairSync,
  createSign,
  createVerify,
} = await import('node:crypto');

const { privateKey, publicKey } = generateKeyPairSync('ec', {
  namedCurve: 'sect239k1',
});

const sign = createSign('SHA256');
sign.write('some data to sign');
sign.end();
const signature = sign.sign(privateKey, 'hex');

const verify = createVerify('SHA256');
verify.write('some data to sign');
verify.end();
console.log(verify.verify(publicKey, signature, 'hex'));
// Prints: true
```

```js
// CommonJS
const {
  generateKeyPairSync,
  createSign,
  createVerify,
} = require('node:crypto');

const { privateKey, publicKey } = generateKeyPairSync('ec', {
  namedCurve: 'sect239k1',
});

const sign = createSign('SHA256');
sign.write('some data to sign');
sign.end();
const signature = sign.sign(privateKey, 'hex');

const verify = createVerify('SHA256');
verify.write('some data to sign');
verify.end();
console.log(verify.verify(publicKey, signature, 'hex'));
// Prints: true
```
Example: Using the `sign.update()` and `verify.update()` methods:
```js
// ESM
const {
  generateKeyPairSync,
  createSign,
  createVerify,
} = await import('node:crypto');

const { privateKey, publicKey } = generateKeyPairSync('rsa', {
  modulusLength: 2048,
});

const sign = createSign('SHA256');
sign.update('some data to sign');
sign.end();
const signature = sign.sign(privateKey);

const verify = createVerify('SHA256');
verify.update('some data to sign');
verify.end();
console.log(verify.verify(publicKey, signature));
// Prints: true
```

```js
// CommonJS
const {
  generateKeyPairSync,
  createSign,
  createVerify,
} = require('node:crypto');

const { privateKey, publicKey } = generateKeyPairSync('rsa', {
  modulusLength: 2048,
});

const sign = createSign('SHA256');
sign.update('some data to sign');
sign.end();
const signature = sign.sign(privateKey);

const verify = createVerify('SHA256');
verify.update('some data to sign');
verify.end();
console.log(verify.verify(publicKey, signature));
// Prints: true
```
sign.sign(privateKey[, outputEncoding])#
History
| Version | Changes |
|---|---|
| v15.0.0 | The privateKey can also be an ArrayBuffer and CryptoKey. |
| v13.2.0, v12.16.0 | This function now supports IEEE-P1363 DSA and ECDSA signatures. |
| v12.0.0 | This function now supports RSA-PSS keys. |
| v11.6.0 | This function now supports key objects. |
| v8.0.0 | Support for RSASSA-PSS and additional options was added. |
| v0.1.92 | Added in: v0.1.92 |
- `privateKey` <Object> | <string> | <ArrayBuffer> | <Buffer> | <TypedArray> | <DataView> | <KeyObject> | <CryptoKey>
- `outputEncoding` <string> The encoding of the return value.
- Returns: <Buffer> | <string>
Calculates the signature on all the data passed through using either `sign.update()` or `sign.write()`.
If `privateKey` is not a `KeyObject`, this function behaves as if `privateKey` had been passed to `crypto.createPrivateKey()`. If it is an object, the following additional properties can be passed:
- `dsaEncoding` <string> For DSA and ECDSA, this option specifies the format of the generated signature. It can be one of the following:
  - `'der'` (default): DER-encoded ASN.1 signature structure encoding `(r, s)`.
  - `'ieee-p1363'`: Signature format `r || s` as proposed in IEEE-P1363.
- `padding` <integer> Optional padding value for RSA, one of the following:
  - `crypto.constants.RSA_PKCS1_PADDING` (default)
  - `crypto.constants.RSA_PKCS1_PSS_PADDING`

  `RSA_PKCS1_PSS_PADDING` will use MGF1 with the same hash function used to sign the message as specified in section 3.1 of RFC 4055, unless an MGF1 hash function has been specified as part of the key in compliance with section 3.3 of RFC 4055.
- `saltLength` <integer> Salt length for when padding is `RSA_PKCS1_PSS_PADDING`. The special value `crypto.constants.RSA_PSS_SALTLEN_DIGEST` sets the salt length to the digest size, `crypto.constants.RSA_PSS_SALTLEN_MAX_SIGN` (default) sets it to the maximum permissible value.
If `outputEncoding` is provided a string is returned; otherwise a `Buffer` is returned.
The `Sign` object cannot be used again after the `sign.sign()` method has been called. Multiple calls to `sign.sign()` will result in an error being thrown.
sign.update(data[, inputEncoding])#
History
| Version | Changes |
|---|---|
| v6.0.0 | The default |
| v0.1.92 | Added in: v0.1.92 |
- `data` <string> | <Buffer> | <TypedArray> | <DataView>
- `inputEncoding` <string> The encoding of the `data` string.
Updates the `Sign` content with the given `data`, the encoding of which is given in `inputEncoding`. If `inputEncoding` is not provided, and the `data` is a string, an encoding of `'utf8'` is enforced. If `data` is a `Buffer`, `TypedArray`, or `DataView`, then `inputEncoding` is ignored.
This can be called many times with new data as it is streamed.
Class:Verify#
- Extends:<stream.Writable>
The `Verify` class is a utility for verifying signatures. It can be used in one of two ways:
- As a writable stream where written data is used to validate against the supplied signature, or
- Using the `verify.update()` and `verify.verify()` methods to verify the signature.
The `crypto.createVerify()` method is used to create `Verify` instances. `Verify` objects are not to be created directly using the `new` keyword.
See `Sign` for examples.
verify.update(data[, inputEncoding])#
History
| Version | Changes |
|---|---|
| v6.0.0 | The default |
| v0.1.92 | Added in: v0.1.92 |
- `data` <string> | <Buffer> | <TypedArray> | <DataView>
- `inputEncoding` <string> The encoding of the `data` string.

Updates the `Verify` content with the given `data`, the encoding of which is given in `inputEncoding`. If `inputEncoding` is not provided, and the `data` is a string, an encoding of `'utf8'` is enforced. If `data` is a `Buffer`, `TypedArray`, or `DataView`, then `inputEncoding` is ignored.
This can be called many times with new data as it is streamed.
verify.verify(object, signature[, signatureEncoding])#
History
| Version | Changes |
|---|---|
| v15.0.0 | The object can also be an ArrayBuffer and CryptoKey. |
| v13.2.0, v12.16.0 | This function now supports IEEE-P1363 DSA and ECDSA signatures. |
| v12.0.0 | This function now supports RSA-PSS keys. |
| v11.7.0 | The key can now be a private key. |
| v8.0.0 | Support for RSASSA-PSS and additional options was added. |
| v0.1.92 | Added in: v0.1.92 |
- `object` <Object> | <string> | <ArrayBuffer> | <Buffer> | <TypedArray> | <DataView> | <KeyObject> | <CryptoKey>
- `signature` <string> | <ArrayBuffer> | <Buffer> | <TypedArray> | <DataView>
- `signatureEncoding` <string> The encoding of the `signature` string.
- Returns: <boolean> `true` or `false` depending on the validity of the signature for the data and public key.
Verifies the provided data using the given `object` and `signature`.
If `object` is not a `KeyObject`, this function behaves as if `object` had been passed to `crypto.createPublicKey()`. If it is an object, the following additional properties can be passed:
- `dsaEncoding` <string> For DSA and ECDSA, this option specifies the format of the signature. It can be one of the following:
  - `'der'` (default): DER-encoded ASN.1 signature structure encoding `(r, s)`.
  - `'ieee-p1363'`: Signature format `r || s` as proposed in IEEE-P1363.
- `padding` <integer> Optional padding value for RSA, one of the following:
  - `crypto.constants.RSA_PKCS1_PADDING` (default)
  - `crypto.constants.RSA_PKCS1_PSS_PADDING`

  `RSA_PKCS1_PSS_PADDING` will use MGF1 with the same hash function used to verify the message as specified in section 3.1 of RFC 4055, unless an MGF1 hash function has been specified as part of the key in compliance with section 3.3 of RFC 4055.
- `saltLength` <integer> Salt length for when padding is `RSA_PKCS1_PSS_PADDING`. The special value `crypto.constants.RSA_PSS_SALTLEN_DIGEST` sets the salt length to the digest size, `crypto.constants.RSA_PSS_SALTLEN_AUTO` (default) causes it to be determined automatically.
The `signature` argument is the previously calculated signature for the data, in the `signatureEncoding`. If a `signatureEncoding` is specified, the `signature` is expected to be a string; otherwise `signature` is expected to be a `Buffer`, `TypedArray`, or `DataView`.
The `Verify` object cannot be used again after `verify.verify()` has been called. Multiple calls to `verify.verify()` will result in an error being thrown.
Because public keys can be derived from private keys, a private key may be passed instead of a public key.
Class:X509Certificate#
Encapsulates an X509 certificate and provides read-only access to its information.
```js
// ESM
const { X509Certificate } = await import('node:crypto');

const x509 = new X509Certificate('{... pem encoded cert ...}');

console.log(x509.subject);
```

```js
// CommonJS
const { X509Certificate } = require('node:crypto');

const x509 = new X509Certificate('{... pem encoded cert ...}');

console.log(x509.subject);
```
new X509Certificate(buffer)#
- `buffer` <string> | <TypedArray> | <Buffer> | <DataView> A PEM or DER encoded X509 Certificate.
x509.ca#
- Type: <boolean> Will be `true` if this is a Certificate Authority (CA) certificate.
x509.checkEmail(email[, options])#
History
| Version | Changes |
|---|---|
| v18.0.0 | The subject option now defaults to |
| v17.5.0, v16.15.0 | The subject option can now be set to |
| v17.5.0, v16.14.1 | The |
| v15.6.0 | Added in: v15.6.0 |
- `email` <string>
- `options` <Object>
  - `subject` <string> `'default'`, `'always'`, or `'never'`. Default: `'default'`.
- Returns: <string> | <undefined> Returns `email` if the certificate matches, `undefined` if it does not.
Checks whether the certificate matches the given email address.
If the `'subject'` option is undefined or set to `'default'`, the certificate subject is only considered if the subject alternative name extension either does not exist or does not contain any email addresses.
If the `'subject'` option is set to `'always'` and if the subject alternative name extension either does not exist or does not contain a matching email address, the certificate subject is considered.
If the `'subject'` option is set to `'never'`, the certificate subject is never considered, even if the certificate contains no subject alternative names.
x509.checkHost(name[, options])#
History
| Version | Changes |
|---|---|
| v18.0.0 | The subject option now defaults to |
| v17.5.0, v16.15.0 | The subject option can now be set to |
| v15.6.0 | Added in: v15.6.0 |
- `name` <string>
- `options` <Object>
- Returns: <string> | <undefined> Returns a subject name that matches `name`, or `undefined` if no subject name matches `name`.
Checks whether the certificate matches the given host name.
If the certificate matches the given host name, the matching subject name is returned. The returned name might be an exact match (e.g., `foo.example.com`) or it might contain wildcards (e.g., `*.example.com`). Because host name comparisons are case-insensitive, the returned subject name might also differ from the given `name` in capitalization.
If the `'subject'` option is undefined or set to `'default'`, the certificate subject is only considered if the subject alternative name extension either does not exist or does not contain any DNS names. This behavior is consistent with RFC 2818 ("HTTP Over TLS").
If the `'subject'` option is set to `'always'` and if the subject alternative name extension either does not exist or does not contain a matching DNS name, the certificate subject is considered.
If the `'subject'` option is set to `'never'`, the certificate subject is never considered, even if the certificate contains no subject alternative names.
x509.checkIP(ip)#
History
| Version | Changes |
|---|---|
| v17.5.0, v16.14.1 | The |
| v15.6.0 | Added in: v15.6.0 |
- `ip` <string>
- Returns: <string> | <undefined> Returns `ip` if the certificate matches, `undefined` if it does not.
Checks whether the certificate matches the given IP address (IPv4 or IPv6).
Only RFC 5280 `iPAddress` subject alternative names are considered, and they must match the given `ip` address exactly. Other subject alternative names as well as the subject field of the certificate are ignored.
x509.checkIssued(otherCert)#
- `otherCert` <X509Certificate>
- Returns: <boolean>
Checks whether this certificate was potentially issued by the given `otherCert` by comparing the certificate metadata.
This is useful for pruning a list of possible issuer certificates which have been selected using a more rudimentary filtering routine, i.e. just based on subject and issuer names.
Finally, to verify that this certificate's signature was produced by a private key corresponding to `otherCert`'s public key, use `x509.verify(publicKey)` with `otherCert`'s public key represented as a `KeyObject`, like so:
```js
if (!x509.verify(otherCert.publicKey)) {
  throw new Error('otherCert did not issue x509');
}
```

x509.checkPrivateKey(privateKey)#
- `privateKey` <KeyObject> A private key.
- Returns: <boolean>
Checks whether the public key for this certificate is consistent withthe given private key.
x509.fingerprint#
- Type:<string>
The SHA-1 fingerprint of this certificate.
Because SHA-1 is cryptographically broken and because the security of SHA-1 is significantly worse than that of algorithms that are commonly used to sign certificates, consider using `x509.fingerprint256` instead.
x509.fingerprint512#
- Type:<string>
The SHA-512 fingerprint of this certificate.
Because computing the SHA-256 fingerprint is usually faster and because it is only half the size of the SHA-512 fingerprint, `x509.fingerprint256` may be a better choice. While SHA-512 presumably provides a higher level of security in general, the security of SHA-256 matches that of most algorithms that are commonly used to sign certificates.
x509.infoAccess#
History
| Version | Changes |
|---|---|
| v17.3.1, v16.13.2 | Parts of this string may be encoded as JSON string literals in response to CVE-2021-44532. |
| v15.6.0 | Added in: v15.6.0 |
- Type:<string>
A textual representation of the certificate's authority information access extension.
This is a line feed separated list of access descriptions. Each line begins with the access method and the kind of the access location, followed by a colon and the value associated with the access location.
After the prefix denoting the access method and the kind of the access location, the remainder of each line might be enclosed in quotes to indicate that the value is a JSON string literal. For backward compatibility, Node.js only uses JSON string literals within this property when necessary to avoid ambiguity. Third-party code should be prepared to handle both possible entry formats.
x509.issuerCertificate#
- Type:<X509Certificate>
The issuer certificate or `undefined` if the issuer certificate is not available.
x509.keyUsage#
- Type:<string[]>
An array detailing the key extended usages for this certificate.
x509.serialNumber#
- Type:<string>
The serial number of this certificate.
Serial numbers are assigned by certificate authorities and do not uniquely identify certificates. Consider using `x509.fingerprint256` as a unique identifier instead.
x509.subjectAltName#
History
| Version | Changes |
|---|---|
| v17.3.1, v16.13.2 | Parts of this string may be encoded as JSON string literals in response to CVE-2021-44532. |
| v15.6.0 | Added in: v15.6.0 |
- Type:<string>
The subject alternative name specified for this certificate.
This is a comma-separated list of subject alternative names. Each entry begins with a string identifying the kind of the subject alternative name followed by a colon and the value associated with the entry.
Earlier versions of Node.js incorrectly assumed that it is safe to split this property at the two-character sequence `', '` (see CVE-2021-44532). However, both malicious and legitimate certificates can contain subject alternative names that include this sequence when represented as a string.
After the prefix denoting the type of the entry, the remainder of each entry might be enclosed in quotes to indicate that the value is a JSON string literal. For backward compatibility, Node.js only uses JSON string literals within this property when necessary to avoid ambiguity. Third-party code should be prepared to handle both possible entry formats.
x509.toJSON()#
- Type:<string>
There is no standard JSON encoding for X509 certificates. The `toJSON()` method returns a string containing the PEM encoded certificate.
x509.toLegacyObject()#
- Type:<Object>
Returns information about this certificate using the legacy certificate object encoding.
x509.validFromDate#
- Type:<Date>
The date/time from which this certificate is valid, encapsulated in a `Date` object.
x509.validToDate#
- Type:<Date>
The date/time until which this certificate is valid, encapsulated in a `Date` object.
x509.signatureAlgorithm#
- Type:<string> |<undefined>
The algorithm used to sign the certificate, or `undefined` if the signature algorithm is unknown to OpenSSL.
x509.signatureAlgorithmOid#
- Type:<string>
The OID of the algorithm used to sign the certificate.
x509.verify(publicKey)#
- `publicKey` <KeyObject> A public key.
- Returns: <boolean>
Verifies that this certificate was signed by the given public key. Does not perform any other validation checks on the certificate.
node:crypto module methods and properties#
crypto.argon2(algorithm, parameters, callback)#
- `algorithm` <string> Variant of Argon2, one of `"argon2d"`, `"argon2i"`, or `"argon2id"`.
- `parameters` <Object>
  - `message` <string> | <ArrayBuffer> | <Buffer> | <TypedArray> | <DataView> REQUIRED, this is the password for password hashing applications of Argon2.
  - `nonce` <string> | <ArrayBuffer> | <Buffer> | <TypedArray> | <DataView> REQUIRED, must be at least 8 bytes long. This is the salt for password hashing applications of Argon2.
  - `parallelism` <number> REQUIRED, degree of parallelism; determines how many computational chains (lanes) can be run. Must be greater than 1 and less than `2**24-1`.
  - `tagLength` <number> REQUIRED, the length of the key to generate. Must be greater than 4 and less than `2**32-1`.
  - `memory` <number> REQUIRED, memory cost in 1KiB blocks. Must be greater than `8 * parallelism` and less than `2**32-1`. The actual number of blocks is rounded down to the nearest multiple of `4 * parallelism`.
  - `passes` <number> REQUIRED, number of passes (iterations). Must be greater than 1 and less than `2**32-1`.
  - `secret` <string> | <ArrayBuffer> | <Buffer> | <TypedArray> | <DataView> | <undefined> OPTIONAL, random additional input, similar to the salt, that should NOT be stored with the derived key. This is known as pepper in password hashing applications. If used, must have a length not greater than `2**32-1` bytes.
  - `associatedData` <string> | <ArrayBuffer> | <Buffer> | <TypedArray> | <DataView> | <undefined> OPTIONAL, additional data to be added to the hash, functionally equivalent to salt or secret, but meant for non-random data. If used, must have a length not greater than `2**32-1` bytes.
- `callback` <Function>
Provides an asynchronous Argon2 implementation. Argon2 is a password-based key derivation function that is designed to be expensive computationally and memory-wise in order to make brute-force attacks unrewarding.
The `nonce` should be as unique as possible. It is recommended that a nonce is random and at least 16 bytes long. See NIST SP 800-132 for details.
When passing strings for `message`, `nonce`, `secret`, or `associatedData`, please consider caveats when using strings as inputs to cryptographic APIs.
The `callback` function is called with two arguments: `err` and `derivedKey`. `err` is an exception object when key derivation fails, otherwise `err` is `null`. `derivedKey` is passed to the callback as a `Buffer`.
An exception is thrown when any of the input arguments specify invalid values or types.
```js
// ESM
const { argon2, randomBytes } = await import('node:crypto');

const parameters = {
  message: 'password',
  nonce: randomBytes(16),
  parallelism: 4,
  tagLength: 64,
  memory: 65536,
  passes: 3,
};

argon2('argon2id', parameters, (err, derivedKey) => {
  if (err) throw err;
  console.log(derivedKey.toString('hex'));  // 'af91dad...9520f15'
});
```

```js
// CommonJS
const { argon2, randomBytes } = require('node:crypto');

const parameters = {
  message: 'password',
  nonce: randomBytes(16),
  parallelism: 4,
  tagLength: 64,
  memory: 65536,
  passes: 3,
};

argon2('argon2id', parameters, (err, derivedKey) => {
  if (err) throw err;
  console.log(derivedKey.toString('hex'));  // 'af91dad...9520f15'
});
```
crypto.argon2Sync(algorithm, parameters)#
- `algorithm` <string> Variant of Argon2, one of `"argon2d"`, `"argon2i"`, or `"argon2id"`.
- `parameters` <Object>
  - `message` <string> | <ArrayBuffer> | <Buffer> | <TypedArray> | <DataView> REQUIRED, this is the password for password hashing applications of Argon2.
  - `nonce` <string> | <ArrayBuffer> | <Buffer> | <TypedArray> | <DataView> REQUIRED, must be at least 8 bytes long. This is the salt for password hashing applications of Argon2.
  - `parallelism` <number> REQUIRED, degree of parallelism; determines how many computational chains (lanes) can be run. Must be greater than 1 and less than `2**24-1`.
  - `tagLength` <number> REQUIRED, the length of the key to generate. Must be greater than 4 and less than `2**32-1`.
  - `memory` <number> REQUIRED, memory cost in 1KiB blocks. Must be greater than `8 * parallelism` and less than `2**32-1`. The actual number of blocks is rounded down to the nearest multiple of `4 * parallelism`.
  - `passes` <number> REQUIRED, number of passes (iterations). Must be greater than 1 and less than `2**32-1`.
  - `secret` <string> | <ArrayBuffer> | <Buffer> | <TypedArray> | <DataView> | <undefined> OPTIONAL, random additional input, similar to the salt, that should NOT be stored with the derived key. This is known as pepper in password hashing applications. If used, must have a length not greater than `2**32-1` bytes.
  - `associatedData` <string> | <ArrayBuffer> | <Buffer> | <TypedArray> | <DataView> | <undefined> OPTIONAL, additional data to be added to the hash, functionally equivalent to salt or secret, but meant for non-random data. If used, must have a length not greater than `2**32-1` bytes.
- Returns: <Buffer>
Provides a synchronous Argon2 implementation. Argon2 is a password-based key derivation function that is designed to be expensive computationally and memory-wise in order to make brute-force attacks unrewarding.
The `nonce` should be as unique as possible. It is recommended that a nonce is random and at least 16 bytes long. See NIST SP 800-132 for details.
When passing strings for `message`, `nonce`, `secret`, or `associatedData`, please consider caveats when using strings as inputs to cryptographic APIs.
An exception is thrown when key derivation fails, otherwise the derived key is returned as a `Buffer`.
An exception is thrown when any of the input arguments specify invalid values or types.
```js
// ESM
const { argon2Sync, randomBytes } = await import('node:crypto');

const parameters = {
  message: 'password',
  nonce: randomBytes(16),
  parallelism: 4,
  tagLength: 64,
  memory: 65536,
  passes: 3,
};

const derivedKey = argon2Sync('argon2id', parameters);
console.log(derivedKey.toString('hex'));  // 'af91dad...9520f15'
```

```js
// CommonJS
const { argon2Sync, randomBytes } = require('node:crypto');

const parameters = {
  message: 'password',
  nonce: randomBytes(16),
  parallelism: 4,
  tagLength: 64,
  memory: 65536,
  passes: 3,
};

const derivedKey = argon2Sync('argon2id', parameters);
console.log(derivedKey.toString('hex'));  // 'af91dad...9520f15'
```
crypto.checkPrime(candidate[, options], callback)#
History
| Version | Changes |
|---|---|
| v18.0.0 | Passing an invalid callback to the |
| v15.8.0 | Added in: v15.8.0 |
- `candidate` <ArrayBuffer> | <SharedArrayBuffer> | <TypedArray> | <Buffer> | <DataView> | <bigint> A possible prime encoded as a sequence of big endian octets of arbitrary length.
- `options` <Object>
  - `checks` <number> The number of Miller-Rabin probabilistic primality iterations to perform. When the value is `0` (zero), a number of checks is used that yields a false positive rate of at most 2^-64 for random input. Care must be used when selecting a number of checks. Refer to the OpenSSL documentation for the `BN_is_prime_ex` function `nchecks` options for more details. Default: `0`
- `callback` <Function>
Checks the primality of the `candidate`.
crypto.checkPrimeSync(candidate[, options])#
- `candidate` <ArrayBuffer> | <SharedArrayBuffer> | <TypedArray> | <Buffer> | <DataView> | <bigint> A possible prime encoded as a sequence of big endian octets of arbitrary length.
- `options` <Object>
  - `checks` <number> The number of Miller-Rabin probabilistic primality iterations to perform. When the value is `0` (zero), a number of checks is used that yields a false positive rate of at most 2^-64 for random input. Care must be used when selecting a number of checks. Refer to the OpenSSL documentation for the `BN_is_prime_ex` function `nchecks` options for more details. Default: `0`
- Returns: <boolean> `true` if the candidate is a prime with an error probability less than `0.25 ** options.checks`.
Checks the primality of the `candidate`.
crypto.constants#
- Type:<Object>
An object containing commonly used constants for crypto and security related operations. The specific constants currently defined are described in Crypto constants.
crypto.createCipheriv(algorithm, key, iv[, options])#
History
| Version | Changes |
|---|---|
| v17.9.0, v16.17.0 | The |
| v15.0.0 | The password and iv arguments can be an ArrayBuffer and are each limited to a maximum of 2 ** 31 - 1 bytes. |
| v11.6.0 | The |
| v11.2.0, v10.17.0 | The cipher |
| v10.10.0 | Ciphers in OCB mode are now supported. |
| v10.2.0 | The |
| v9.9.0 | The |
| v0.1.94 | Added in: v0.1.94 |
- `algorithm` <string>
- `key` <string> | <ArrayBuffer> | <Buffer> | <TypedArray> | <DataView> | <KeyObject> | <CryptoKey>
- `iv` <string> | <ArrayBuffer> | <Buffer> | <TypedArray> | <DataView> | <null>
- `options` <Object> `stream.transform` options
- Returns: <Cipheriv>
Creates and returns a `Cipheriv` object, with the given `algorithm`, `key` and initialization vector (`iv`).
The `options` argument controls stream behavior and is optional except when a cipher in CCM or OCB mode (e.g. `'aes-128-ccm'`) is used. In that case, the `authTagLength` option is required and specifies the length of the authentication tag in bytes, see CCM mode. In GCM mode, the `authTagLength` option is not required but can be used to set the length of the authentication tag that will be returned by `getAuthTag()` and defaults to 16 bytes. For `chacha20-poly1305`, the `authTagLength` option defaults to 16 bytes.
The `algorithm` is dependent on OpenSSL, examples are `'aes192'`, etc. On recent OpenSSL releases, `openssl list -cipher-algorithms` will display the available cipher algorithms.
The `key` is the raw key used by the `algorithm` and `iv` is an initialization vector. Both arguments must be `'utf8'` encoded strings, `Buffer`s, `TypedArray`, or `DataView`s. The `key` may optionally be a `KeyObject` of type `secret`. If the cipher does not need an initialization vector, `iv` may be `null`.
When passing strings for `key` or `iv`, please consider caveats when using strings as inputs to cryptographic APIs.
Initialization vectors should be unpredictable and unique; ideally, they will be cryptographically random. They do not have to be secret: IVs are typically just added to ciphertext messages unencrypted. It may sound contradictory that something has to be unpredictable and unique, but does not have to be secret; remember that an attacker must not be able to predict ahead of time what a given IV will be.
crypto.createDecipheriv(algorithm, key, iv[, options])#
History
| Version | Changes |
|---|---|
| v17.9.0, v16.17.0 | The |
| v11.6.0 | The |
| v11.2.0, v10.17.0 | The cipher |
| v10.10.0 | Ciphers in OCB mode are now supported. |
| v10.2.0 | The |
| v9.9.0 | The |
| v0.1.94 | Added in: v0.1.94 |
- `algorithm` <string>
- `key` <string> | <ArrayBuffer> | <Buffer> | <TypedArray> | <DataView> | <KeyObject> | <CryptoKey>
- `iv` <string> | <ArrayBuffer> | <Buffer> | <TypedArray> | <DataView> | <null>
- `options` <Object> `stream.transform` options
- Returns: <Decipheriv>
Creates and returns a `Decipheriv` object that uses the given `algorithm`, `key` and initialization vector (`iv`).
The `options` argument controls stream behavior and is optional except when a cipher in CCM or OCB mode (e.g. `'aes-128-ccm'`) is used. In that case, the `authTagLength` option is required and specifies the length of the authentication tag in bytes, see CCM mode. For `chacha20-poly1305`, the `authTagLength` option defaults to 16 bytes and must be set to a different value if a different length is used. For AES-GCM, the `authTagLength` option has no default value when decrypting, and `setAuthTag()` will accept arbitrarily short authentication tags. This behavior is deprecated and subject to change (see DEP0182). In the meantime, applications should either set the `authTagLength` option or check the actual authentication tag length before passing it to `setAuthTag()`.
The `algorithm` is dependent on OpenSSL, examples are `'aes192'`, etc. On recent OpenSSL releases, `openssl list -cipher-algorithms` will display the available cipher algorithms.
The `key` is the raw key used by the `algorithm` and `iv` is an initialization vector. Both arguments must be `'utf8'` encoded strings, `Buffer`s, `TypedArray`, or `DataView`s. The `key` may optionally be a `KeyObject` of type `secret`. If the cipher does not need an initialization vector, `iv` may be `null`.
When passing strings for `key` or `iv`, please consider caveats when using strings as inputs to cryptographic APIs.
Initialization vectors should be unpredictable and unique; ideally, they will be cryptographically random. They do not have to be secret: IVs are typically just added to ciphertext messages unencrypted. It may sound contradictory that something has to be unpredictable and unique, but does not have to be secret; remember that an attacker must not be able to predict ahead of time what a given IV will be.
crypto.createDiffieHellman(prime[, primeEncoding][, generator][, generatorEncoding])#
History
| Version | Changes |
|---|---|
| v8.0.0 | The |
| v8.0.0 | The |
| v6.0.0 | The default for the encoding parameters changed from |
| v0.11.12 | Added in: v0.11.12 |
- `prime` <string> | <ArrayBuffer> | <Buffer> | <TypedArray> | <DataView>
- `primeEncoding` <string> The encoding of the `prime` string.
- `generator` <number> | <string> | <ArrayBuffer> | <Buffer> | <TypedArray> | <DataView> Default: `2`
- `generatorEncoding` <string> The encoding of the `generator` string.
- Returns: <DiffieHellman>
Creates a `DiffieHellman` key exchange object using the supplied `prime` and an optional specific `generator`.
The `generator` argument can be a number, string, or `Buffer`. If `generator` is not specified, the value `2` is used.
If `primeEncoding` is specified, `prime` is expected to be a string; otherwise a `Buffer`, `TypedArray`, or `DataView` is expected.
If `generatorEncoding` is specified, `generator` is expected to be a string; otherwise a number, `Buffer`, `TypedArray`, or `DataView` is expected.
crypto.createDiffieHellman(primeLength[, generator])#
- primeLength <number>
- generator <number> Default: 2
- Returns: <DiffieHellman>

Creates a DiffieHellman key exchange object and generates a prime of primeLength bits using an optional specific numeric generator. If generator is not specified, the value 2 is used.
crypto.createDiffieHellmanGroup(name)#
- name <string>
- Returns: <DiffieHellmanGroup>

An alias for crypto.getDiffieHellman().
crypto.createECDH(curveName)#
Creates an Elliptic Curve Diffie-Hellman (ECDH) key exchange object using a predefined curve specified by the curveName string. Use crypto.getCurves() to obtain a list of available curve names. On recent OpenSSL releases, openssl ecparam -list_curves will also display the name and description of each available elliptic curve.
crypto.createHash(algorithm[, options])#
History
| Version | Changes |
|---|---|
| v12.8.0 | The |
| v0.1.92 | Added in: v0.1.92 |
- algorithm <string>
- options <Object> stream.transform options
- Returns: <Hash>

Creates and returns a Hash object that can be used to generate hash digests using the given algorithm. The optional options argument controls stream behavior. For XOF hash functions such as 'shake256', the outputLength option can be used to specify the desired output length in bytes.

The algorithm is dependent on the available algorithms supported by the version of OpenSSL on the platform. Examples are 'sha256', 'sha512', etc. On recent releases of OpenSSL, openssl list -digest-algorithms will display the available digest algorithms.
Example: generating the sha256 sum of a file
```mjs
import { createReadStream } from 'node:fs';
import { argv } from 'node:process';
const { createHash } = await import('node:crypto');

const filename = argv[2];

const hash = createHash('sha256');

const input = createReadStream(filename);
input.on('readable', () => {
  // Only one element is going to be produced by the
  // hash stream.
  const data = input.read();
  if (data)
    hash.update(data);
  else {
    console.log(`${hash.digest('hex')} ${filename}`);
  }
});
```

```cjs
const { createReadStream } = require('node:fs');
const { createHash } = require('node:crypto');
const { argv } = require('node:process');

const filename = argv[2];

const hash = createHash('sha256');

const input = createReadStream(filename);
input.on('readable', () => {
  // Only one element is going to be produced by the
  // hash stream.
  const data = input.read();
  if (data)
    hash.update(data);
  else {
    console.log(`${hash.digest('hex')} ${filename}`);
  }
});
```
crypto.createHmac(algorithm, key[, options])#
History
| Version | Changes |
|---|---|
| v15.0.0 | The key can also be an ArrayBuffer or CryptoKey. The encoding option was added. The key cannot contain more than 2 ** 32 - 1 bytes. |
| v11.6.0 | The |
| v0.1.94 | Added in: v0.1.94 |
- algorithm <string>
- key <string> | <ArrayBuffer> | <Buffer> | <TypedArray> | <DataView> | <KeyObject> | <CryptoKey>
- options <Object> stream.transform options
  - encoding <string> The string encoding to use when key is a string.
- Returns: <Hmac>
Creates and returns an Hmac object that uses the given algorithm and key. The optional options argument controls stream behavior.

The algorithm is dependent on the available algorithms supported by the version of OpenSSL on the platform. Examples are 'sha256', 'sha512', etc. On recent releases of OpenSSL, openssl list -digest-algorithms will display the available digest algorithms.

The key is the HMAC key used to generate the cryptographic HMAC hash. If it is a KeyObject, its type must be secret. If it is a string, please consider caveats when using strings as inputs to cryptographic APIs. If it was obtained from a cryptographically secure source of entropy, such as crypto.randomBytes() or crypto.generateKey(), its length should not exceed the block size of algorithm (e.g., 512 bits for SHA-256).
Example: generating the sha256 HMAC of a file
```mjs
import { createReadStream } from 'node:fs';
import { argv } from 'node:process';
const { createHmac } = await import('node:crypto');

const filename = argv[2];

const hmac = createHmac('sha256', 'a secret');

const input = createReadStream(filename);
input.on('readable', () => {
  // Only one element is going to be produced by the
  // hash stream.
  const data = input.read();
  if (data)
    hmac.update(data);
  else {
    console.log(`${hmac.digest('hex')} ${filename}`);
  }
});
```

```cjs
const { createReadStream } = require('node:fs');
const { createHmac } = require('node:crypto');
const { argv } = require('node:process');

const filename = argv[2];

const hmac = createHmac('sha256', 'a secret');

const input = createReadStream(filename);
input.on('readable', () => {
  // Only one element is going to be produced by the
  // hash stream.
  const data = input.read();
  if (data)
    hmac.update(data);
  else {
    console.log(`${hmac.digest('hex')} ${filename}`);
  }
});
```
crypto.createPrivateKey(key)#
History
| Version | Changes |
|---|---|
| v24.6.0 | Add support for ML-DSA keys. |
| v15.12.0 | The key can also be a JWK object. |
| v15.0.0 | The key can also be an ArrayBuffer. The encoding option was added. The key cannot contain more than 2 ** 32 - 1 bytes. |
| v11.6.0 | Added in: v11.6.0 |
- key <Object> | <string> | <ArrayBuffer> | <Buffer> | <TypedArray> | <DataView>
  - key <string> | <ArrayBuffer> | <Buffer> | <TypedArray> | <DataView> | <Object> The key material, either in PEM, DER, or JWK format.
  - format <string> Must be 'pem', 'der', or 'jwk'. Default: 'pem'.
  - type <string> Must be 'pkcs1', 'pkcs8', or 'sec1'. This option is required only if the format is 'der' and ignored otherwise.
  - passphrase <string> | <Buffer> The passphrase to use for decryption.
  - encoding <string> The string encoding to use when key is a string.
- Returns: <KeyObject>
Creates and returns a new key object containing a private key. If key is a string or Buffer, format is assumed to be 'pem'; otherwise, key must be an object with the properties described above.

If the private key is encrypted, a passphrase must be specified. The length of the passphrase is limited to 1024 bytes.
crypto.createPublicKey(key)#
History
| Version | Changes |
|---|---|
| v24.6.0 | Add support for ML-DSA keys. |
| v15.12.0 | The key can also be a JWK object. |
| v15.0.0 | The key can also be an ArrayBuffer. The encoding option was added. The key cannot contain more than 2 ** 32 - 1 bytes. |
| v11.13.0 | The |
| v11.7.0 | The |
| v11.6.0 | Added in: v11.6.0 |
- key <Object> | <string> | <ArrayBuffer> | <Buffer> | <TypedArray> | <DataView>
  - key <string> | <ArrayBuffer> | <Buffer> | <TypedArray> | <DataView> | <Object> The key material, either in PEM, DER, or JWK format.
  - format <string> Must be 'pem', 'der', or 'jwk'. Default: 'pem'.
  - type <string> Must be 'pkcs1' or 'spki'. This option is required only if the format is 'der' and ignored otherwise.
  - encoding <string> The string encoding to use when key is a string.
- Returns: <KeyObject>
Creates and returns a new key object containing a public key. If key is a string or Buffer, format is assumed to be 'pem'; if key is a KeyObject with type 'private', the public key is derived from the given private key; otherwise, key must be an object with the properties described above.

If the format is 'pem', the key may also be an X.509 certificate.

Because public keys can be derived from private keys, a private key may be passed instead of a public key. In that case, this function behaves as if crypto.createPrivateKey() had been called, except that the type of the returned KeyObject will be 'public' and that the private key cannot be extracted from the returned KeyObject. Similarly, if a KeyObject with type 'private' is given, a new KeyObject with type 'public' will be returned and it will be impossible to extract the private key from the returned object.
crypto.createSecretKey(key[, encoding])#
History
| Version | Changes |
|---|---|
| v18.8.0, v16.18.0 | The key can now be zero-length. |
| v15.0.0 | The key can also be an ArrayBuffer or string. The encoding argument was added. The key cannot contain more than 2 ** 32 - 1 bytes. |
| v11.6.0 | Added in: v11.6.0 |
- key <string> | <ArrayBuffer> | <Buffer> | <TypedArray> | <DataView>
- encoding <string> The string encoding when key is a string.
- Returns: <KeyObject>

Creates and returns a new key object containing a secret key for symmetric encryption or Hmac.
crypto.createSign(algorithm[, options])#
- algorithm <string>
- options <Object> stream.Writable options
- Returns: <Sign>

Creates and returns a Sign object that uses the given algorithm. Use crypto.getHashes() to obtain the names of the available digest algorithms. The optional options argument controls the stream.Writable behavior.

In some cases, a Sign instance can be created using the name of a signature algorithm, such as 'RSA-SHA256', instead of a digest algorithm. This will use the corresponding digest algorithm. This does not work for all signature algorithms, such as 'ecdsa-with-SHA256', so it is best to always use digest algorithm names.
crypto.createVerify(algorithm[, options])#
- algorithm <string>
- options <Object> stream.Writable options
- Returns: <Verify>

Creates and returns a Verify object that uses the given algorithm. Use crypto.getHashes() to obtain an array of names of the available signing algorithms. The optional options argument controls the stream.Writable behavior.

In some cases, a Verify instance can be created using the name of a signature algorithm, such as 'RSA-SHA256', instead of a digest algorithm. This will use the corresponding digest algorithm. This does not work for all signature algorithms, such as 'ecdsa-with-SHA256', so it is best to always use digest algorithm names.
crypto.decapsulate(key, ciphertext[, callback])#
- key <Object> | <string> | <ArrayBuffer> | <Buffer> | <TypedArray> | <DataView> | <KeyObject> Private key
- ciphertext <ArrayBuffer> | <Buffer> | <TypedArray> | <DataView>
- callback <Function>
- Returns: <Buffer> if the callback function is not provided.
Key decapsulation using a KEM algorithm with a private key.
Supported key types and their KEM algorithms are:
- 'rsa' (Stability: 2): RSA Secret Value Encapsulation
- 'ec' (Stability: 3): DHKEM(P-256, HKDF-SHA256), DHKEM(P-384, HKDF-SHA384), DHKEM(P-521, HKDF-SHA512)
- 'x25519' (Stability: 3): DHKEM(X25519, HKDF-SHA256)
- 'x448' (Stability: 3): DHKEM(X448, HKDF-SHA512)
- 'ml-kem-512' (Stability: 1): ML-KEM
- 'ml-kem-768' (Stability: 1): ML-KEM
- 'ml-kem-1024' (Stability: 1): ML-KEM
If key is not a KeyObject, this function behaves as if key had been passed to crypto.createPrivateKey().

If the callback function is provided, this function uses libuv's threadpool.
crypto.diffieHellman(options[, callback])#
History
| Version | Changes |
|---|---|
| v23.11.0 | Optional callback argument added. |
| v13.9.0, v12.17.0 | Added in: v13.9.0, v12.17.0 |
- options <Object>
  - privateKey <KeyObject>
  - publicKey <KeyObject>
- callback <Function>
- Returns: <Buffer> if the callback function is not provided.
Computes the Diffie-Hellman shared secret based on a privateKey and a publicKey. Both keys must have the same asymmetricKeyType and must support either the DH or ECDH operation.

If the callback function is provided, this function uses libuv's threadpool.
crypto.encapsulate(key[, callback])#
- key <Object> | <string> | <ArrayBuffer> | <Buffer> | <TypedArray> | <DataView> | <KeyObject> Public key
- callback <Function>
- Returns: <Object> if the callback function is not provided.
Key encapsulation using a KEM algorithm with a public key.
Supported key types and their KEM algorithms are:
- 'rsa' (Stability: 2): RSA Secret Value Encapsulation
- 'ec' (Stability: 3): DHKEM(P-256, HKDF-SHA256), DHKEM(P-384, HKDF-SHA384), DHKEM(P-521, HKDF-SHA512)
- 'x25519' (Stability: 3): DHKEM(X25519, HKDF-SHA256)
- 'x448' (Stability: 3): DHKEM(X448, HKDF-SHA512)
- 'ml-kem-512' (Stability: 1): ML-KEM
- 'ml-kem-768' (Stability: 1): ML-KEM
- 'ml-kem-1024' (Stability: 1): ML-KEM
If key is not a KeyObject, this function behaves as if key had been passed to crypto.createPublicKey().

If the callback function is provided, this function uses libuv's threadpool.
crypto.fips#
Property for checking and controlling whether a FIPS-compliant crypto provider is currently in use. Setting to true requires a FIPS build of Node.js.
This property is deprecated. Please usecrypto.setFips() andcrypto.getFips() instead.
crypto.generateKey(type, options, callback)#
History
| Version | Changes |
|---|---|
| v18.0.0 | Passing an invalid callback to the |
| v15.0.0 | Added in: v15.0.0 |
- type <string> The intended use of the generated secret key. Currently accepted values are 'hmac' and 'aes'.
- options <Object>
  - length <number> The bit length of the key to generate. This must be a value greater than 0.
    - If type is 'hmac', the minimum is 8, and the maximum length is 2**31-1. If the value is not a multiple of 8, the generated key will be truncated to Math.floor(length / 8).
    - If type is 'aes', the length must be one of 128, 192, or 256.
- callback <Function>
  - err <Error>
  - key <KeyObject>
Asynchronously generates a new random secret key of the given length. The type will determine which validations will be performed on the length.

```mjs
const { generateKey } = await import('node:crypto');

generateKey('hmac', { length: 512 }, (err, key) => {
  if (err) throw err;
  console.log(key.export().toString('hex'));  // 46e..........620
});
```

```cjs
const { generateKey } = require('node:crypto');

generateKey('hmac', { length: 512 }, (err, key) => {
  if (err) throw err;
  console.log(key.export().toString('hex'));  // 46e..........620
});
```

The size of a generated HMAC key should not exceed the block size of the underlying hash function. See crypto.createHmac() for more information.
crypto.generateKeyPair(type, options, callback)#
History
| Version | Changes |
|---|---|
| v24.8.0 | Add support for SLH-DSA key pairs. |
| v24.7.0 | Add support for ML-KEM key pairs. |
| v24.6.0 | Add support for ML-DSA key pairs. |
| v18.0.0 | Passing an invalid callback to the |
| v16.10.0 | Add ability to define |
| v13.9.0, v12.17.0 | Add support for Diffie-Hellman. |
| v12.0.0 | Add support for RSA-PSS key pairs. |
| v12.0.0 | Add ability to generate X25519 and X448 key pairs. |
| v12.0.0 | Add ability to generate Ed25519 and Ed448 key pairs. |
| v11.6.0 | The |
| v10.12.0 | Added in: v10.12.0 |
- type <string> The asymmetric key type to generate. See the supported asymmetric key types.
- options <Object>
  - modulusLength <number> Key size in bits (RSA, DSA).
  - publicExponent <number> Public exponent (RSA). Default: 0x10001.
  - hashAlgorithm <string> Name of the message digest (RSA-PSS).
  - mgf1HashAlgorithm <string> Name of the message digest used by MGF1 (RSA-PSS).
  - saltLength <number> Minimal salt length in bytes (RSA-PSS).
  - divisorLength <number> Size of q in bits (DSA).
  - namedCurve <string> Name of the curve to use (EC).
  - prime <Buffer> The prime parameter (DH).
  - primeLength <number> Prime length in bits (DH).
  - generator <number> Custom generator (DH). Default: 2.
  - groupName <string> Diffie-Hellman group name (DH). See crypto.getDiffieHellman().
  - paramEncoding <string> Must be 'named' or 'explicit' (EC). Default: 'named'.
  - publicKeyEncoding <Object> See keyObject.export().
  - privateKeyEncoding <Object> See keyObject.export().
- callback <Function>
  - err <Error>
  - publicKey <string> | <Buffer> | <KeyObject>
  - privateKey <string> | <Buffer> | <KeyObject>
Generates a new asymmetric key pair of the given type. See the supported asymmetric key types.

If a publicKeyEncoding or privateKeyEncoding was specified, this function behaves as if keyObject.export() had been called on its result. Otherwise, the respective part of the key is returned as a KeyObject.

It is recommended to encode public keys as 'spki' and private keys as 'pkcs8' with encryption for long-term storage:
```mjs
const { generateKeyPair } = await import('node:crypto');

generateKeyPair('rsa', {
  modulusLength: 4096,
  publicKeyEncoding: {
    type: 'spki',
    format: 'pem',
  },
  privateKeyEncoding: {
    type: 'pkcs8',
    format: 'pem',
    cipher: 'aes-256-cbc',
    passphrase: 'top secret',
  },
}, (err, publicKey, privateKey) => {
  // Handle errors and use the generated key pair.
});
```

```cjs
const { generateKeyPair } = require('node:crypto');

generateKeyPair('rsa', {
  modulusLength: 4096,
  publicKeyEncoding: {
    type: 'spki',
    format: 'pem',
  },
  privateKeyEncoding: {
    type: 'pkcs8',
    format: 'pem',
    cipher: 'aes-256-cbc',
    passphrase: 'top secret',
  },
}, (err, publicKey, privateKey) => {
  // Handle errors and use the generated key pair.
});
```
On completion, callback will be called with err set to undefined and publicKey / privateKey representing the generated key pair.

If this method is invoked as its util.promisify()ed version, it returns a Promise for an Object with publicKey and privateKey properties.
crypto.generateKeyPairSync(type, options)#
History
| Version | Changes |
|---|---|
| v24.8.0 | Add support for SLH-DSA key pairs. |
| v24.7.0 | Add support for ML-KEM key pairs. |
| v24.6.0 | Add support for ML-DSA key pairs. |
| v16.10.0 | Add ability to define |
| v13.9.0, v12.17.0 | Add support for Diffie-Hellman. |
| v12.0.0 | Add support for RSA-PSS key pairs. |
| v12.0.0 | Add ability to generate X25519 and X448 key pairs. |
| v12.0.0 | Add ability to generate Ed25519 and Ed448 key pairs. |
| v11.6.0 | The |
| v10.12.0 | Added in: v10.12.0 |
- type <string> The asymmetric key type to generate. See the supported asymmetric key types.
- options <Object>
  - modulusLength <number> Key size in bits (RSA, DSA).
  - publicExponent <number> Public exponent (RSA). Default: 0x10001.
  - hashAlgorithm <string> Name of the message digest (RSA-PSS).
  - mgf1HashAlgorithm <string> Name of the message digest used by MGF1 (RSA-PSS).
  - saltLength <number> Minimal salt length in bytes (RSA-PSS).
  - divisorLength <number> Size of q in bits (DSA).
  - namedCurve <string> Name of the curve to use (EC).
  - prime <Buffer> The prime parameter (DH).
  - primeLength <number> Prime length in bits (DH).
  - generator <number> Custom generator (DH). Default: 2.
  - groupName <string> Diffie-Hellman group name (DH). See crypto.getDiffieHellman().
  - paramEncoding <string> Must be 'named' or 'explicit' (EC). Default: 'named'.
  - publicKeyEncoding <Object> See keyObject.export().
  - privateKeyEncoding <Object> See keyObject.export().
- Returns: <Object>
  - publicKey <string> | <Buffer> | <KeyObject>
  - privateKey <string> | <Buffer> | <KeyObject>
Generates a new asymmetric key pair of the given type. See the supported asymmetric key types.

If a publicKeyEncoding or privateKeyEncoding was specified, this function behaves as if keyObject.export() had been called on its result. Otherwise, the respective part of the key is returned as a KeyObject.

When encoding public keys, it is recommended to use 'spki'. When encoding private keys, it is recommended to use 'pkcs8' with a strong passphrase, and to keep the passphrase confidential.
```mjs
const { generateKeyPairSync } = await import('node:crypto');

const { publicKey, privateKey } = generateKeyPairSync('rsa', {
  modulusLength: 4096,
  publicKeyEncoding: {
    type: 'spki',
    format: 'pem',
  },
  privateKeyEncoding: {
    type: 'pkcs8',
    format: 'pem',
    cipher: 'aes-256-cbc',
    passphrase: 'top secret',
  },
});
```

```cjs
const { generateKeyPairSync } = require('node:crypto');

const { publicKey, privateKey } = generateKeyPairSync('rsa', {
  modulusLength: 4096,
  publicKeyEncoding: {
    type: 'spki',
    format: 'pem',
  },
  privateKeyEncoding: {
    type: 'pkcs8',
    format: 'pem',
    cipher: 'aes-256-cbc',
    passphrase: 'top secret',
  },
});
```

The return value { publicKey, privateKey } represents the generated key pair. When PEM encoding was selected, the respective key will be a string; otherwise it will be a buffer containing the data encoded as DER.
crypto.generateKeySync(type, options)#
- type <string> The intended use of the generated secret key. Currently accepted values are 'hmac' and 'aes'.
- options <Object>
  - length <number> The bit length of the key to generate.
    - If type is 'hmac', the minimum is 8, and the maximum length is 2**31-1. If the value is not a multiple of 8, the generated key will be truncated to Math.floor(length / 8).
    - If type is 'aes', the length must be one of 128, 192, or 256.
- Returns: <KeyObject>

Synchronously generates a new random secret key of the given length. The type will determine which validations will be performed on the length.

```mjs
const { generateKeySync } = await import('node:crypto');

const key = generateKeySync('hmac', { length: 512 });
console.log(key.export().toString('hex'));  // e89..........41e
```

```cjs
const { generateKeySync } = require('node:crypto');

const key = generateKeySync('hmac', { length: 512 });
console.log(key.export().toString('hex'));  // e89..........41e
```

The size of a generated HMAC key should not exceed the block size of the underlying hash function. See crypto.createHmac() for more information.
crypto.generatePrime(size[, options], callback)#
History
| Version | Changes |
|---|---|
| v18.0.0 | Passing an invalid callback to the |
| v15.8.0 | Added in: v15.8.0 |
- size <number> The size (in bits) of the prime to generate.
- options <Object>
  - add <ArrayBuffer> | <SharedArrayBuffer> | <TypedArray> | <Buffer> | <DataView> | <bigint>
  - rem <ArrayBuffer> | <SharedArrayBuffer> | <TypedArray> | <Buffer> | <DataView> | <bigint>
  - safe <boolean> Default: false.
  - bigint <boolean> When true, the generated prime is returned as a bigint.
- callback <Function>
  - err <Error>
  - prime <ArrayBuffer> | <bigint>

Generates a pseudorandom prime of size bits.

If options.safe is true, the prime will be a safe prime -- that is, (prime - 1) / 2 will also be a prime.

The options.add and options.rem parameters can be used to enforce additional requirements, e.g., for Diffie-Hellman:

- If options.add and options.rem are both set, the prime will satisfy the condition that prime % add = rem.
- If only options.add is set and options.safe is not true, the prime will satisfy the condition that prime % add = 1.
- If only options.add is set and options.safe is set to true, the prime will instead satisfy the condition that prime % add = 3. This is necessary because prime % add = 1 for options.add > 2 would contradict the condition enforced by options.safe.
- options.rem is ignored if options.add is not given.

Both options.add and options.rem must be encoded as big-endian sequences if given as an ArrayBuffer, SharedArrayBuffer, TypedArray, Buffer, or DataView.

By default, the prime is encoded as a big-endian sequence of octets in an <ArrayBuffer>. If the bigint option is true, then a <bigint> is provided.

The size of the prime will have a direct impact on how long it takes to generate the prime. The larger the size, the longer it will take. Because we use OpenSSL's BN_generate_prime_ex function, which provides only minimal control over our ability to interrupt the generation process, it is not recommended to generate overly large primes, as doing so may make the process unresponsive.
crypto.generatePrimeSync(size[, options])#
- size <number> The size (in bits) of the prime to generate.
- options <Object>
  - add <ArrayBuffer> | <SharedArrayBuffer> | <TypedArray> | <Buffer> | <DataView> | <bigint>
  - rem <ArrayBuffer> | <SharedArrayBuffer> | <TypedArray> | <Buffer> | <DataView> | <bigint>
  - safe <boolean> Default: false.
  - bigint <boolean> When true, the generated prime is returned as a bigint.
- Returns: <ArrayBuffer> | <bigint>

Generates a pseudorandom prime of size bits.

If options.safe is true, the prime will be a safe prime -- that is, (prime - 1) / 2 will also be a prime.

The options.add and options.rem parameters can be used to enforce additional requirements, e.g., for Diffie-Hellman:

- If options.add and options.rem are both set, the prime will satisfy the condition that prime % add = rem.
- If only options.add is set and options.safe is not true, the prime will satisfy the condition that prime % add = 1.
- If only options.add is set and options.safe is set to true, the prime will instead satisfy the condition that prime % add = 3. This is necessary because prime % add = 1 for options.add > 2 would contradict the condition enforced by options.safe.
- options.rem is ignored if options.add is not given.

Both options.add and options.rem must be encoded as big-endian sequences if given as an ArrayBuffer, SharedArrayBuffer, TypedArray, Buffer, or DataView.

By default, the prime is encoded as a big-endian sequence of octets in an <ArrayBuffer>. If the bigint option is true, then a <bigint> is provided.

The size of the prime will have a direct impact on how long it takes to generate the prime. The larger the size, the longer it will take. Because we use OpenSSL's BN_generate_prime_ex function, which provides only minimal control over our ability to interrupt the generation process, it is not recommended to generate overly large primes, as doing so may make the process unresponsive.
crypto.getCipherInfo(nameOrNid[, options])#
- nameOrNid <string> | <number> The name or nid of the cipher to query.
- options <Object>
- Returns: <Object>
  - name <string> The name of the cipher
  - nid <number> The nid of the cipher
  - blockSize <number> The block size of the cipher in bytes. This property is omitted when mode is 'stream'.
  - ivLength <number> The expected or default initialization vector length in bytes. This property is omitted if the cipher does not use an initialization vector.
  - keyLength <number> The expected or default key length in bytes.
  - mode <string> The cipher mode. One of 'cbc', 'ccm', 'cfb', 'ctr', 'ecb', 'gcm', 'ocb', 'ofb', 'stream', 'wrap', 'xts'.
Returns information about a given cipher.
Some ciphers accept variable length keys and initialization vectors. By default, the crypto.getCipherInfo() method will return the default values for these ciphers. To test if a given key length or IV length is acceptable for a given cipher, use the keyLength and ivLength options. If the given values are unacceptable, undefined will be returned.
crypto.getCiphers()#
- Returns: <string[]> An array with the names of the supported cipher algorithms.

```mjs
const { getCiphers } = await import('node:crypto');

console.log(getCiphers()); // ['aes-128-cbc', 'aes-128-ccm', ...]
```

```cjs
const { getCiphers } = require('node:crypto');

console.log(getCiphers()); // ['aes-128-cbc', 'aes-128-ccm', ...]
```
crypto.getCurves()#
- Returns: <string[]> An array with the names of the supported elliptic curves.

```mjs
const { getCurves } = await import('node:crypto');

console.log(getCurves()); // ['Oakley-EC2N-3', 'Oakley-EC2N-4', ...]
```

```cjs
const { getCurves } = require('node:crypto');

console.log(getCurves()); // ['Oakley-EC2N-3', 'Oakley-EC2N-4', ...]
```
crypto.getDiffieHellman(groupName)#
- groupName <string>
- Returns: <DiffieHellmanGroup>

Creates a predefined DiffieHellmanGroup key exchange object. The supported groups are listed in the documentation for DiffieHellmanGroup.

The returned object mimics the interface of objects created by crypto.createDiffieHellman(), but will not allow changing the keys (with diffieHellman.setPublicKey(), for example). The advantage of using this method is that the parties do not have to generate or exchange a group modulus beforehand, saving both processor and communication time.

Example (obtaining a shared secret):

```mjs
const { getDiffieHellman } = await import('node:crypto');

const alice = getDiffieHellman('modp14');
const bob = getDiffieHellman('modp14');
alice.generateKeys();
bob.generateKeys();

const aliceSecret = alice.computeSecret(bob.getPublicKey(), null, 'hex');
const bobSecret = bob.computeSecret(alice.getPublicKey(), null, 'hex');

/* aliceSecret and bobSecret should be the same */
console.log(aliceSecret === bobSecret);
```

```cjs
const { getDiffieHellman } = require('node:crypto');

const alice = getDiffieHellman('modp14');
const bob = getDiffieHellman('modp14');
alice.generateKeys();
bob.generateKeys();

const aliceSecret = alice.computeSecret(bob.getPublicKey(), null, 'hex');
const bobSecret = bob.computeSecret(alice.getPublicKey(), null, 'hex');

/* aliceSecret and bobSecret should be the same */
console.log(aliceSecret === bobSecret);
```
crypto.getFips()#
crypto.getHashes()#
- Returns: <string[]> An array of the names of the supported hash algorithms, such as 'RSA-SHA256'. Hash algorithms are also called "digest" algorithms.

```mjs
const { getHashes } = await import('node:crypto');

console.log(getHashes()); // ['DSA', 'DSA-SHA', 'DSA-SHA1', ...]
```

```cjs
const { getHashes } = require('node:crypto');

console.log(getHashes()); // ['DSA', 'DSA-SHA', 'DSA-SHA1', ...]
```
crypto.getRandomValues(typedArray)#
- typedArray <Buffer> | <TypedArray> | <DataView> | <ArrayBuffer>
- Returns: <Buffer> | <TypedArray> | <DataView> | <ArrayBuffer> Returns typedArray.

A convenient alias for crypto.webcrypto.getRandomValues(). This implementation is not compliant with the Web Crypto spec; to write web-compatible code, use crypto.webcrypto.getRandomValues() instead.
crypto.hash(algorithm, data[, options])#
History
| Version | Changes |
|---|---|
| v25.4.0 | This API is no longer experimental. |
| v24.4.0 | The |
| v21.7.0, v20.12.0 | Added in: v21.7.0, v20.12.0 |
- algorithm <string> | <undefined>
- data <string> | <Buffer> | <TypedArray> | <DataView> When data is a string, it will be encoded as UTF-8 before being hashed. If a different input encoding is desired for a string input, the user could encode the string into a TypedArray using either TextEncoder or Buffer.from() and pass the encoded TypedArray into this API instead.
- options <Object> | <string>
- Returns: <string> | <Buffer>

A utility for creating one-shot hash digests of data. It can be faster than the object-based crypto.createHash() when hashing a smaller amount of data (<= 5MB) that's readily available. If the data can be big or if it is streamed, it's still recommended to use crypto.createHash() instead.

The algorithm is dependent on the available algorithms supported by the version of OpenSSL on the platform. Examples are 'sha256', 'sha512', etc. On recent releases of OpenSSL, openssl list -digest-algorithms will display the available digest algorithms.

If options is a string, then it specifies the outputEncoding.
Example:
```mjs
import crypto from 'node:crypto';
import { Buffer } from 'node:buffer';

// Hash a string and return the result as a hex-encoded string.
const string = 'Node.js';
// 10b3493287f831e81a438811a1ffba01f8cec4b7
console.log(crypto.hash('sha1', string));

// Decode a base64-encoded string into a Buffer, hash it, and return
// the result as a buffer.
const base64 = 'Tm9kZS5qcw==';
// <Buffer 10 b3 49 32 87 f8 31 e8 1a 43 88 11 a1 ff ba 01 f8 ce c4 b7>
console.log(crypto.hash('sha1', Buffer.from(base64, 'base64'), 'buffer'));
```

```cjs
const crypto = require('node:crypto');
const { Buffer } = require('node:buffer');

// Hash a string and return the result as a hex-encoded string.
const string = 'Node.js';
// 10b3493287f831e81a438811a1ffba01f8cec4b7
console.log(crypto.hash('sha1', string));

// Decode a base64-encoded string into a Buffer, hash it, and return
// the result as a buffer.
const base64 = 'Tm9kZS5qcw==';
// <Buffer 10 b3 49 32 87 f8 31 e8 1a 43 88 11 a1 ff ba 01 f8 ce c4 b7>
console.log(crypto.hash('sha1', Buffer.from(base64, 'base64'), 'buffer'));
```
crypto.hkdf(digest, ikm, salt, info, keylen, callback)#
History
| Version | Changes |
|---|---|
| v18.0.0 | Passing an invalid callback to the |
| v18.8.0, v16.18.0 | The input keying material can now be zero-length. |
| v15.0.0 | Added in: v15.0.0 |
- digest <string> The digest algorithm to use.
- ikm <string> | <ArrayBuffer> | <Buffer> | <TypedArray> | <DataView> | <KeyObject> The input keying material. Must be provided but can be zero-length.
- salt <string> | <ArrayBuffer> | <Buffer> | <TypedArray> | <DataView> The salt value. Must be provided but can be zero-length.
- info <string> | <ArrayBuffer> | <Buffer> | <TypedArray> | <DataView> Additional info value. Must be provided but can be zero-length, and cannot be more than 1024 bytes.
- keylen <number> The length of the key to generate. Must be greater than 0. The maximum allowable value is 255 times the number of bytes produced by the selected digest function (e.g. sha512 generates 64-byte hashes, making the maximum HKDF output 16320 bytes).
- callback <Function>
  - err <Error>
  - derivedKey <ArrayBuffer>

HKDF is a simple key derivation function defined in RFC 5869. The given ikm, salt, and info are used with the digest to derive a key of keylen bytes.

The supplied callback function is called with two arguments: err and derivedKey. If an error occurs while deriving the key, err will be set; otherwise err will be null. The successfully generated derivedKey will be passed to the callback as an <ArrayBuffer>. An error will be thrown if any of the input arguments specify invalid values or types.

```mjs
import { Buffer } from 'node:buffer';
const { hkdf } = await import('node:crypto');

hkdf('sha512', 'key', 'salt', 'info', 64, (err, derivedKey) => {
  if (err) throw err;
  console.log(Buffer.from(derivedKey).toString('hex'));  // '24156e2...5391653'
});
```

```cjs
const { hkdf } = require('node:crypto');
const { Buffer } = require('node:buffer');

hkdf('sha512', 'key', 'salt', 'info', 64, (err, derivedKey) => {
  if (err) throw err;
  console.log(Buffer.from(derivedKey).toString('hex'));  // '24156e2...5391653'
});
```
crypto.hkdfSync(digest, ikm, salt, info, keylen)#
History
| Version | Changes |
|---|---|
| v18.8.0, v16.18.0 | The input keying material can now be zero-length. |
| v15.0.0 | Added in: v15.0.0 |
digest<string> The digest algorithm to use.ikm<string> |<ArrayBuffer> |<Buffer> |<TypedArray> |<DataView> |<KeyObject> The inputkeying material. Must be provided but can be zero-length.salt<string> |<ArrayBuffer> |<Buffer> |<TypedArray> |<DataView> The salt value. Mustbe provided but can be zero-length.info<string> |<ArrayBuffer> |<Buffer> |<TypedArray> |<DataView> Additional info value.Must be provided but can be zero-length, and cannot be more than 1024 bytes.keylen<number> The length of the key to generate. Must be greater than 0.The maximum allowable value is255times the number of bytes produced bythe selected digest function (e.g.sha512generates 64-byte hashes, makingthe maximum HKDF output 16320 bytes).- Returns:<ArrayBuffer>
Provides a synchronous HKDF key derivation function as defined in RFC 5869. The given `ikm`, `salt` and `info` are used with the `digest` to derive a key of `keylen` bytes.

The successfully generated `derivedKey` will be returned as an <ArrayBuffer>.

An error will be thrown if any of the input arguments specify invalid values or types, or if the derived key cannot be generated.
```mjs
import { Buffer } from 'node:buffer';
const {
  hkdfSync,
} = await import('node:crypto');

const derivedKey = hkdfSync('sha512', 'key', 'salt', 'info', 64);
console.log(Buffer.from(derivedKey).toString('hex'));  // '24156e2...5391653'
```

```cjs
const {
  hkdfSync,
} = require('node:crypto');
const { Buffer } = require('node:buffer');

const derivedKey = hkdfSync('sha512', 'key', 'salt', 'info', 64);
console.log(Buffer.from(derivedKey).toString('hex'));  // '24156e2...5391653'
```
crypto.pbkdf2(password, salt, iterations, keylen, digest, callback)#
History
| Version | Changes |
|---|---|
| v18.0.0 | Passing an invalid callback to the |
| v15.0.0 | The password and salt arguments can also be ArrayBuffer instances. |
| v14.0.0 | The |
| v8.0.0 | The |
| v6.0.0 | Calling this function without passing the |
| v6.0.0 | The default encoding for |
| v0.5.5 | Added in: v0.5.5 |
- `password` <string> | <ArrayBuffer> | <Buffer> | <TypedArray> | <DataView>
- `salt` <string> | <ArrayBuffer> | <Buffer> | <TypedArray> | <DataView>
- `iterations` <number>
- `keylen` <number>
- `digest` <string>
- `callback` <Function>
Provides an asynchronous Password-Based Key Derivation Function 2 (PBKDF2) implementation. A selected HMAC digest algorithm specified by `digest` is applied to derive a key of the requested byte length (`keylen`) from the `password`, `salt` and `iterations`.

The supplied `callback` function is called with two arguments: `err` and `derivedKey`. If an error occurs while deriving the key, `err` will be set; otherwise `err` will be `null`. By default, the successfully generated `derivedKey` will be passed to the callback as a `Buffer`. An error will be thrown if any of the input arguments specify invalid values or types.

The `iterations` argument must be a number set as high as possible. The higher the number of iterations, the more secure the derived key will be, but the derivation will take longer to complete.

The `salt` should be as unique as possible. It is recommended that a salt is random and at least 16 bytes long. See NIST SP 800-132 for details.

When passing strings for `password` or `salt`, please consider caveats when using strings as inputs to cryptographic APIs.
```mjs
const {
  pbkdf2,
} = await import('node:crypto');

pbkdf2('secret', 'salt', 100000, 64, 'sha512', (err, derivedKey) => {
  if (err) throw err;
  console.log(derivedKey.toString('hex'));  // '3745e48...08d59ae'
});
```

```cjs
const {
  pbkdf2,
} = require('node:crypto');

pbkdf2('secret', 'salt', 100000, 64, 'sha512', (err, derivedKey) => {
  if (err) throw err;
  console.log(derivedKey.toString('hex'));  // '3745e48...08d59ae'
});
```
An array of supported digest functions can be retrieved using `crypto.getHashes()`.

This API uses libuv's threadpool, which can have surprising and negative performance implications for some applications; see the `UV_THREADPOOL_SIZE` documentation for more information.
crypto.pbkdf2Sync(password, salt, iterations, keylen, digest)#
History
| Version | Changes |
|---|---|
| v14.0.0 | The |
| v6.0.0 | Calling this function without passing the |
| v6.0.0 | The default encoding for |
| v0.9.3 | Added in: v0.9.3 |
- `password` <string> | <Buffer> | <TypedArray> | <DataView>
- `salt` <string> | <Buffer> | <TypedArray> | <DataView>
- `iterations` <number>
- `keylen` <number>
- `digest` <string>
- Returns: <Buffer>
Provides a synchronous Password-Based Key Derivation Function 2 (PBKDF2) implementation. A selected HMAC digest algorithm specified by `digest` is applied to derive a key of the requested byte length (`keylen`) from the `password`, `salt` and `iterations`.

If an error occurs, an `Error` will be thrown; otherwise the derived key will be returned as a `Buffer`.

The `iterations` argument must be a number set as high as possible. The higher the number of iterations, the more secure the derived key will be, but the derivation will take longer to complete.

The `salt` should be as unique as possible. It is recommended that a salt is random and at least 16 bytes long. See NIST SP 800-132 for details.

When passing strings for `password` or `salt`, please consider caveats when using strings as inputs to cryptographic APIs.
```mjs
const {
  pbkdf2Sync,
} = await import('node:crypto');

const key = pbkdf2Sync('secret', 'salt', 100000, 64, 'sha512');
console.log(key.toString('hex'));  // '3745e48...08d59ae'
```

```cjs
const {
  pbkdf2Sync,
} = require('node:crypto');

const key = pbkdf2Sync('secret', 'salt', 100000, 64, 'sha512');
console.log(key.toString('hex'));  // '3745e48...08d59ae'
```

An array of supported digest functions can be retrieved using `crypto.getHashes()`.
crypto.privateDecrypt(privateKey, buffer)#
History
| Version | Changes |
|---|---|
| v21.6.2, v20.11.1, v18.19.1 | The |
| v15.0.0 | Added string, ArrayBuffer, and CryptoKey as allowable key types. The oaepLabel can be an ArrayBuffer. The buffer can be a string or ArrayBuffer. All types that accept buffers are limited to a maximum of 2 ** 31 - 1 bytes. |
| v12.11.0 | The |
| v12.9.0 | The |
| v11.6.0 | This function now supports key objects. |
| v0.11.14 | Added in: v0.11.14 |
- `privateKey` <Object> | <string> | <ArrayBuffer> | <Buffer> | <TypedArray> | <DataView> | <KeyObject> | <CryptoKey>
  - `oaepHash` <string> The hash function to use for OAEP padding and MGF1. Default: `'sha1'`
  - `oaepLabel` <string> | <ArrayBuffer> | <Buffer> | <TypedArray> | <DataView> The label to use for OAEP padding. If not specified, no label is used.
  - `padding` <crypto.constants> An optional padding value defined in `crypto.constants`, which may be: `crypto.constants.RSA_NO_PADDING`, `crypto.constants.RSA_PKCS1_PADDING`, or `crypto.constants.RSA_PKCS1_OAEP_PADDING`.
- `buffer` <string> | <ArrayBuffer> | <Buffer> | <TypedArray> | <DataView>
- Returns: <Buffer> A new `Buffer` with the decrypted content.
Decrypts `buffer` with `privateKey`. `buffer` was previously encrypted using the corresponding public key, for example using `crypto.publicEncrypt()`.

If `privateKey` is not a `KeyObject`, this function behaves as if `privateKey` had been passed to `crypto.createPrivateKey()`. If it is an object, the `padding` property can be passed. Otherwise, this function uses `RSA_PKCS1_OAEP_PADDING`.

Using `crypto.constants.RSA_PKCS1_PADDING` in `crypto.privateDecrypt()` requires OpenSSL to support implicit rejection (`rsa_pkcs1_implicit_rejection`). If the version of OpenSSL used by Node.js does not support this feature, attempting to use `RSA_PKCS1_PADDING` will fail.
crypto.privateEncrypt(privateKey, buffer)#
History
| Version | Changes |
|---|---|
| v15.0.0 | Added string, ArrayBuffer, and CryptoKey as allowable key types. The passphrase can be an ArrayBuffer. The buffer can be a string or ArrayBuffer. All types that accept buffers are limited to a maximum of 2 ** 31 - 1 bytes. |
| v11.6.0 | This function now supports key objects. |
| v1.1.0 | Added in: v1.1.0 |
- `privateKey` <Object> | <string> | <ArrayBuffer> | <Buffer> | <TypedArray> | <DataView> | <KeyObject> | <CryptoKey>
  - `key` <string> | <ArrayBuffer> | <Buffer> | <TypedArray> | <DataView> | <KeyObject> | <CryptoKey> A PEM encoded private key.
  - `passphrase` <string> | <ArrayBuffer> | <Buffer> | <TypedArray> | <DataView> An optional passphrase for the private key.
  - `padding` <crypto.constants> An optional padding value defined in `crypto.constants`, which may be: `crypto.constants.RSA_NO_PADDING` or `crypto.constants.RSA_PKCS1_PADDING`.
  - `encoding` <string> The string encoding to use when `buffer`, `key`, or `passphrase` are strings.
- `buffer` <string> | <ArrayBuffer> | <Buffer> | <TypedArray> | <DataView>
- Returns: <Buffer> A new `Buffer` with the encrypted content.
Encrypts `buffer` with `privateKey`. The returned data can be decrypted using the corresponding public key, for example using `crypto.publicDecrypt()`.

If `privateKey` is not a `KeyObject`, this function behaves as if `privateKey` had been passed to `crypto.createPrivateKey()`. If it is an object, the `padding` property can be passed. Otherwise, this function uses `RSA_PKCS1_PADDING`.
crypto.publicDecrypt(key, buffer)#
History
| Version | Changes |
|---|---|
| v15.0.0 | Added string, ArrayBuffer, and CryptoKey as allowable key types. The passphrase can be an ArrayBuffer. The buffer can be a string or ArrayBuffer. All types that accept buffers are limited to a maximum of 2 ** 31 - 1 bytes. |
| v11.6.0 | This function now supports key objects. |
| v1.1.0 | Added in: v1.1.0 |
- `key` <Object> | <string> | <ArrayBuffer> | <Buffer> | <TypedArray> | <DataView> | <KeyObject> | <CryptoKey>
  - `passphrase` <string> | <ArrayBuffer> | <Buffer> | <TypedArray> | <DataView> An optional passphrase for the private key.
  - `padding` <crypto.constants> An optional padding value defined in `crypto.constants`, which may be: `crypto.constants.RSA_NO_PADDING` or `crypto.constants.RSA_PKCS1_PADDING`.
  - `encoding` <string> The string encoding to use when `buffer`, `key`, or `passphrase` are strings.
- `buffer` <string> | <ArrayBuffer> | <Buffer> | <TypedArray> | <DataView>
- Returns: <Buffer> A new `Buffer` with the decrypted content.
Decrypts `buffer` with `key`. `buffer` was previously encrypted using the corresponding private key, for example using `crypto.privateEncrypt()`.

If `key` is not a `KeyObject`, this function behaves as if `key` had been passed to `crypto.createPublicKey()`. If it is an object, the `padding` property can be passed. Otherwise, this function uses `RSA_PKCS1_PADDING`.

Because RSA public keys can be derived from private keys, a private key may be passed instead of a public key.
crypto.publicEncrypt(key, buffer)#
History
| Version | Changes |
|---|---|
| v15.0.0 | Added string, ArrayBuffer, and CryptoKey as allowable key types. The oaepLabel and passphrase can be ArrayBuffers. The buffer can be a string or ArrayBuffer. All types that accept buffers are limited to a maximum of 2 ** 31 - 1 bytes. |
| v12.11.0 | The |
| v12.9.0 | The |
| v11.6.0 | This function now supports key objects. |
| v0.11.14 | Added in: v0.11.14 |
- `key` <Object> | <string> | <ArrayBuffer> | <Buffer> | <TypedArray> | <DataView> | <KeyObject> | <CryptoKey>
  - `key` <string> | <ArrayBuffer> | <Buffer> | <TypedArray> | <DataView> | <KeyObject> | <CryptoKey> A PEM encoded public or private key, <KeyObject>, or <CryptoKey>.
  - `oaepHash` <string> The hash function to use for OAEP padding and MGF1. Default: `'sha1'`
  - `oaepLabel` <string> | <ArrayBuffer> | <Buffer> | <TypedArray> | <DataView> The label to use for OAEP padding. If not specified, no label is used.
  - `passphrase` <string> | <ArrayBuffer> | <Buffer> | <TypedArray> | <DataView> An optional passphrase for the private key.
  - `padding` <crypto.constants> An optional padding value defined in `crypto.constants`, which may be: `crypto.constants.RSA_NO_PADDING`, `crypto.constants.RSA_PKCS1_PADDING`, or `crypto.constants.RSA_PKCS1_OAEP_PADDING`.
  - `encoding` <string> The string encoding to use when `buffer`, `key`, `oaepLabel`, or `passphrase` are strings.
- `buffer` <string> | <ArrayBuffer> | <Buffer> | <TypedArray> | <DataView>
- Returns: <Buffer> A new `Buffer` with the encrypted content.
Encrypts the content of `buffer` with `key` and returns a new `Buffer` with encrypted content. The returned data can be decrypted using the corresponding private key, for example using `crypto.privateDecrypt()`.

If `key` is not a `KeyObject`, this function behaves as if `key` had been passed to `crypto.createPublicKey()`. If it is an object, the `padding` property can be passed. Otherwise, this function uses `RSA_PKCS1_OAEP_PADDING`.

Because RSA public keys can be derived from private keys, a private key may be passed instead of a public key.
crypto.randomBytes(size[, callback])#
History
| Version | Changes |
|---|---|
| v18.0.0 | Passing an invalid callback to the |
| v9.0.0 | Passing |
| v0.5.8 | Added in: v0.5.8 |
- `size` <number> The number of bytes to generate. The `size` must not be larger than `2**31 - 1`.
- `callback` <Function>
- Returns: <Buffer> if the `callback` function is not provided.
Generates cryptographically strong pseudorandom data. The `size` argument is a number indicating the number of bytes to generate.

If a `callback` function is provided, the bytes are generated asynchronously and the `callback` function is invoked with two arguments: `err` and `buf`. If an error occurs, `err` will be an `Error` object; otherwise it is `null`. The `buf` argument is a `Buffer` containing the generated bytes.
```mjs
// Asynchronous
const {
  randomBytes,
} = await import('node:crypto');

randomBytes(256, (err, buf) => {
  if (err) throw err;
  console.log(`${buf.length} bytes of random data: ${buf.toString('hex')}`);
});
```

```cjs
// Asynchronous
const {
  randomBytes,
} = require('node:crypto');

randomBytes(256, (err, buf) => {
  if (err) throw err;
  console.log(`${buf.length} bytes of random data: ${buf.toString('hex')}`);
});
```
If the `callback` function is not provided, the random bytes are generated synchronously and returned as a `Buffer`. An error will be thrown if there is a problem generating the bytes.
```mjs
// Synchronous
const {
  randomBytes,
} = await import('node:crypto');

const buf = randomBytes(256);
console.log(`${buf.length} bytes of random data: ${buf.toString('hex')}`);
```

```cjs
// Synchronous
const {
  randomBytes,
} = require('node:crypto');

const buf = randomBytes(256);
console.log(`${buf.length} bytes of random data: ${buf.toString('hex')}`);
```
The `crypto.randomBytes()` method will not complete until there is sufficient entropy available. This should normally never take longer than a few milliseconds. The only time when generating the random bytes may conceivably block for a longer period of time is right after boot, when the whole system is still low on entropy.

This API uses libuv's threadpool, which can have surprising and negative performance implications for some applications; see the `UV_THREADPOOL_SIZE` documentation for more information.

The asynchronous version of `crypto.randomBytes()` is carried out in a single threadpool request. To minimize threadpool task length variation, partition large `randomBytes` requests when doing so as part of fulfilling a client request.
crypto.randomFill(buffer[, offset][, size], callback)#
History
| Version | Changes |
|---|---|
| v18.0.0 | Passing an invalid callback to the |
| v9.0.0 | The |
| v7.10.0, v6.13.0 | Added in: v7.10.0, v6.13.0 |
- `buffer` <ArrayBuffer> | <Buffer> | <TypedArray> | <DataView> Must be supplied. The size of the provided `buffer` must not be larger than `2**31 - 1`.
- `offset` <number> Default: `0`
- `size` <number> Default: `buffer.length - offset`. The `size` must not be larger than `2**31 - 1`.
- `callback` <Function> `function(err, buf) {}`.
This function is similar to `crypto.randomBytes()` but requires the first argument to be a `Buffer` that will be filled. It also requires that a callback is passed in.

If the `callback` function is not provided, an error will be thrown.
```mjs
import { Buffer } from 'node:buffer';
const { randomFill } = await import('node:crypto');

const buf = Buffer.alloc(10);
randomFill(buf, (err, buf) => {
  if (err) throw err;
  console.log(buf.toString('hex'));
});

randomFill(buf, 5, (err, buf) => {
  if (err) throw err;
  console.log(buf.toString('hex'));
});

// The above is equivalent to the following:
randomFill(buf, 5, 5, (err, buf) => {
  if (err) throw err;
  console.log(buf.toString('hex'));
});
```

```cjs
const { randomFill } = require('node:crypto');
const { Buffer } = require('node:buffer');

const buf = Buffer.alloc(10);
randomFill(buf, (err, buf) => {
  if (err) throw err;
  console.log(buf.toString('hex'));
});

randomFill(buf, 5, (err, buf) => {
  if (err) throw err;
  console.log(buf.toString('hex'));
});

// The above is equivalent to the following:
randomFill(buf, 5, 5, (err, buf) => {
  if (err) throw err;
  console.log(buf.toString('hex'));
});
```
Any `ArrayBuffer`, `TypedArray`, or `DataView` instance may be passed as `buffer`.

While this includes instances of `Float32Array` and `Float64Array`, this function should not be used to generate random floating-point numbers. The result may contain `+Infinity`, `-Infinity`, and `NaN`, and even if the array contains finite numbers only, they are not drawn from a uniform random distribution and have no meaningful lower or upper bounds.
```mjs
import { Buffer } from 'node:buffer';
const { randomFill } = await import('node:crypto');

const a = new Uint32Array(10);
randomFill(a, (err, buf) => {
  if (err) throw err;
  console.log(Buffer.from(buf.buffer, buf.byteOffset, buf.byteLength)
    .toString('hex'));
});

const b = new DataView(new ArrayBuffer(10));
randomFill(b, (err, buf) => {
  if (err) throw err;
  console.log(Buffer.from(buf.buffer, buf.byteOffset, buf.byteLength)
    .toString('hex'));
});

const c = new ArrayBuffer(10);
randomFill(c, (err, buf) => {
  if (err) throw err;
  console.log(Buffer.from(buf).toString('hex'));
});
```

```cjs
const { randomFill } = require('node:crypto');
const { Buffer } = require('node:buffer');

const a = new Uint32Array(10);
randomFill(a, (err, buf) => {
  if (err) throw err;
  console.log(Buffer.from(buf.buffer, buf.byteOffset, buf.byteLength)
    .toString('hex'));
});

const b = new DataView(new ArrayBuffer(10));
randomFill(b, (err, buf) => {
  if (err) throw err;
  console.log(Buffer.from(buf.buffer, buf.byteOffset, buf.byteLength)
    .toString('hex'));
});

const c = new ArrayBuffer(10);
randomFill(c, (err, buf) => {
  if (err) throw err;
  console.log(Buffer.from(buf).toString('hex'));
});
```
This API uses libuv's threadpool, which can have surprising and negative performance implications for some applications; see the `UV_THREADPOOL_SIZE` documentation for more information.

The asynchronous version of `crypto.randomFill()` is carried out in a single threadpool request. To minimize threadpool task length variation, partition large `randomFill` requests when doing so as part of fulfilling a client request.
crypto.randomFillSync(buffer[, offset][, size])#
History
| Version | Changes |
|---|---|
| v9.0.0 | The |
| v7.10.0, v6.13.0 | Added in: v7.10.0, v6.13.0 |
- `buffer` <ArrayBuffer> | <Buffer> | <TypedArray> | <DataView> Must be supplied. The size of the provided `buffer` must not be larger than `2**31 - 1`.
- `offset` <number> Default: `0`
- `size` <number> Default: `buffer.length - offset`. The `size` must not be larger than `2**31 - 1`.
- Returns: <ArrayBuffer> | <Buffer> | <TypedArray> | <DataView> The object passed as `buffer` argument.
Synchronous version of `crypto.randomFill()`.

```mjs
import { Buffer } from 'node:buffer';
const { randomFillSync } = await import('node:crypto');

const buf = Buffer.alloc(10);
console.log(randomFillSync(buf).toString('hex'));

randomFillSync(buf, 5);
console.log(buf.toString('hex'));

// The above is equivalent to the following:
randomFillSync(buf, 5, 5);
console.log(buf.toString('hex'));
```

```cjs
const { randomFillSync } = require('node:crypto');
const { Buffer } = require('node:buffer');

const buf = Buffer.alloc(10);
console.log(randomFillSync(buf).toString('hex'));

randomFillSync(buf, 5);
console.log(buf.toString('hex'));

// The above is equivalent to the following:
randomFillSync(buf, 5, 5);
console.log(buf.toString('hex'));
```

Any `ArrayBuffer`, `TypedArray` or `DataView` instance may be passed as `buffer`.

```mjs
import { Buffer } from 'node:buffer';
const { randomFillSync } = await import('node:crypto');

const a = new Uint32Array(10);
console.log(Buffer.from(randomFillSync(a).buffer,
                        a.byteOffset, a.byteLength).toString('hex'));

const b = new DataView(new ArrayBuffer(10));
console.log(Buffer.from(randomFillSync(b).buffer,
                        b.byteOffset, b.byteLength).toString('hex'));

const c = new ArrayBuffer(10);
console.log(Buffer.from(randomFillSync(c)).toString('hex'));
```

```cjs
const { randomFillSync } = require('node:crypto');
const { Buffer } = require('node:buffer');

const a = new Uint32Array(10);
console.log(Buffer.from(randomFillSync(a).buffer,
                        a.byteOffset, a.byteLength).toString('hex'));

const b = new DataView(new ArrayBuffer(10));
console.log(Buffer.from(randomFillSync(b).buffer,
                        b.byteOffset, b.byteLength).toString('hex'));

const c = new ArrayBuffer(10);
console.log(Buffer.from(randomFillSync(c)).toString('hex'));
```
crypto.randomInt([min, ]max[, callback])#
History
| Version | Changes |
|---|---|
| v18.0.0 | Passing an invalid callback to the |
| v14.10.0, v12.19.0 | Added in: v14.10.0, v12.19.0 |
- `min` <integer> Start of random range (inclusive). Default: `0`.
- `max` <integer> End of random range (exclusive).
- `callback` <Function> `function(err, n) {}`.
Return a random integer `n` such that `min <= n < max`. This implementation avoids modulo bias.

The range (`max - min`) must be less than `2**48`. `min` and `max` must be safe integers.

If the `callback` function is not provided, the random integer is generated synchronously.
```mjs
// Asynchronous
const {
  randomInt,
} = await import('node:crypto');

randomInt(3, (err, n) => {
  if (err) throw err;
  console.log(`Random number chosen from (0, 1, 2): ${n}`);
});
```

```cjs
// Asynchronous
const {
  randomInt,
} = require('node:crypto');

randomInt(3, (err, n) => {
  if (err) throw err;
  console.log(`Random number chosen from (0, 1, 2): ${n}`);
});
```
```mjs
// Synchronous
const {
  randomInt,
} = await import('node:crypto');

const n = randomInt(3);
console.log(`Random number chosen from (0, 1, 2): ${n}`);
```

```cjs
// Synchronous
const {
  randomInt,
} = require('node:crypto');

const n = randomInt(3);
console.log(`Random number chosen from (0, 1, 2): ${n}`);
```
```mjs
// With `min` argument
const {
  randomInt,
} = await import('node:crypto');

const n = randomInt(1, 7);
console.log(`The dice rolled: ${n}`);
```

```cjs
// With `min` argument
const {
  randomInt,
} = require('node:crypto');

const n = randomInt(1, 7);
console.log(`The dice rolled: ${n}`);
```
crypto.randomUUID([options])#
- `options` <Object>
  - `disableEntropyCache` <boolean> By default, to improve performance, Node.js generates and caches enough random data to generate up to 128 random UUIDs. To generate a UUID without using the cache, set `disableEntropyCache` to `true`. Default: `false`.
- Returns: <string>
Generates a randomRFC 4122 version 4 UUID. The UUID is generated using acryptographic pseudorandom number generator.
crypto.scrypt(password, salt, keylen[, options], callback)#
History
| Version | Changes |
|---|---|
| v18.0.0 | Passing an invalid callback to the |
| v15.0.0 | The password and salt arguments can also be ArrayBuffer instances. |
| v12.8.0, v10.17.0 | The |
| v10.9.0 | The |
| v10.5.0 | Added in: v10.5.0 |
- `password` <string> | <ArrayBuffer> | <Buffer> | <TypedArray> | <DataView>
- `salt` <string> | <ArrayBuffer> | <Buffer> | <TypedArray> | <DataView>
- `keylen` <number>
- `options` <Object>
  - `cost` <number> CPU/memory cost parameter. Must be a power of two greater than one. Default: `16384`.
  - `blockSize` <number> Block size parameter. Default: `8`.
  - `parallelization` <number> Parallelization parameter. Default: `1`.
  - `N` <number> Alias for `cost`. Only one of both may be specified.
  - `r` <number> Alias for `blockSize`. Only one of both may be specified.
  - `p` <number> Alias for `parallelization`. Only one of both may be specified.
  - `maxmem` <number> Memory upper bound. It is an error when (approximately) `128 * N * r > maxmem`. Default: `32 * 1024 * 1024`.
- `callback` <Function>
Provides an asynchronous scrypt implementation. Scrypt is a password-based key derivation function that is designed to be expensive computationally and memory-wise in order to make brute-force attacks unrewarding.

The `salt` should be as unique as possible. It is recommended that a salt is random and at least 16 bytes long. See NIST SP 800-132 for details.

When passing strings for `password` or `salt`, please consider caveats when using strings as inputs to cryptographic APIs.

The `callback` function is called with two arguments: `err` and `derivedKey`. `err` is an exception object when key derivation fails, otherwise `err` is `null`. `derivedKey` is passed to the callback as a `Buffer`.

An exception is thrown when any of the input arguments specify invalid values or types.
```mjs
const {
  scrypt,
} = await import('node:crypto');

// Using the factory defaults.
scrypt('password', 'salt', 64, (err, derivedKey) => {
  if (err) throw err;
  console.log(derivedKey.toString('hex'));  // '3745e48...08d59ae'
});
// Using a custom N parameter. Must be a power of two.
scrypt('password', 'salt', 64, { N: 1024 }, (err, derivedKey) => {
  if (err) throw err;
  console.log(derivedKey.toString('hex'));  // '3745e48...aa39b34'
});
```

```cjs
const {
  scrypt,
} = require('node:crypto');

// Using the factory defaults.
scrypt('password', 'salt', 64, (err, derivedKey) => {
  if (err) throw err;
  console.log(derivedKey.toString('hex'));  // '3745e48...08d59ae'
});
// Using a custom N parameter. Must be a power of two.
scrypt('password', 'salt', 64, { N: 1024 }, (err, derivedKey) => {
  if (err) throw err;
  console.log(derivedKey.toString('hex'));  // '3745e48...aa39b34'
});
```
crypto.scryptSync(password, salt, keylen[, options])#
History
| Version | Changes |
|---|---|
| v12.8.0, v10.17.0 | The |
| v10.9.0 | The |
| v10.5.0 | Added in: v10.5.0 |
- `password` <string> | <Buffer> | <TypedArray> | <DataView>
- `salt` <string> | <Buffer> | <TypedArray> | <DataView>
- `keylen` <number>
- `options` <Object>
  - `cost` <number> CPU/memory cost parameter. Must be a power of two greater than one. Default: `16384`.
  - `blockSize` <number> Block size parameter. Default: `8`.
  - `parallelization` <number> Parallelization parameter. Default: `1`.
  - `N` <number> Alias for `cost`. Only one of both may be specified.
  - `r` <number> Alias for `blockSize`. Only one of both may be specified.
  - `p` <number> Alias for `parallelization`. Only one of both may be specified.
  - `maxmem` <number> Memory upper bound. It is an error when (approximately) `128 * N * r > maxmem`. Default: `32 * 1024 * 1024`.
- Returns: <Buffer>
Provides a synchronous scrypt implementation. Scrypt is a password-based key derivation function that is designed to be expensive computationally and memory-wise in order to make brute-force attacks unrewarding.

The `salt` should be as unique as possible. It is recommended that a salt is random and at least 16 bytes long. See NIST SP 800-132 for details.

When passing strings for `password` or `salt`, please consider caveats when using strings as inputs to cryptographic APIs.

An exception is thrown when key derivation fails, otherwise the derived key is returned as a `Buffer`.

An exception is thrown when any of the input arguments specify invalid values or types.
```mjs
const {
  scryptSync,
} = await import('node:crypto');

// Using the factory defaults.
const key1 = scryptSync('password', 'salt', 64);
console.log(key1.toString('hex'));  // '3745e48...08d59ae'
// Using a custom N parameter. Must be a power of two.
const key2 = scryptSync('password', 'salt', 64, { N: 1024 });
console.log(key2.toString('hex'));  // '3745e48...aa39b34'
```

```cjs
const {
  scryptSync,
} = require('node:crypto');

// Using the factory defaults.
const key1 = scryptSync('password', 'salt', 64);
console.log(key1.toString('hex'));  // '3745e48...08d59ae'
// Using a custom N parameter. Must be a power of two.
const key2 = scryptSync('password', 'salt', 64, { N: 1024 });
console.log(key2.toString('hex'));  // '3745e48...aa39b34'
```
crypto.secureHeapUsed()#
- Returns: <Object>
  - `total` <number> The total allocated secure heap size as specified using the `--secure-heap=n` command-line flag.
  - `min` <number> The minimum allocation from the secure heap as specified using the `--secure-heap-min` command-line flag.
  - `used` <number> The total number of bytes currently allocated from the secure heap.
  - `utilization` <number> The calculated ratio of `used` to `total` allocated bytes.
crypto.setEngine(engine[, flags])#
History
| Version | Changes |
|---|---|
| v22.4.0, v20.16.0 | Custom engine support in OpenSSL 3 is deprecated. |
| v0.11.11 | Added in: v0.11.11 |
- `engine` <string>
- `flags` <crypto.constants> Default: `crypto.constants.ENGINE_METHOD_ALL`

Load and set the `engine` for some or all OpenSSL functions (selected by flags). Support for custom engines in OpenSSL is deprecated from OpenSSL 3.

`engine` could be either an id or a path to the engine's shared library.

The optional `flags` argument uses `ENGINE_METHOD_ALL` by default. `flags` is a bit field taking one of or a mix of the following flags (defined in `crypto.constants`):
- `crypto.constants.ENGINE_METHOD_RSA`
- `crypto.constants.ENGINE_METHOD_DSA`
- `crypto.constants.ENGINE_METHOD_DH`
- `crypto.constants.ENGINE_METHOD_RAND`
- `crypto.constants.ENGINE_METHOD_EC`
- `crypto.constants.ENGINE_METHOD_CIPHERS`
- `crypto.constants.ENGINE_METHOD_DIGESTS`
- `crypto.constants.ENGINE_METHOD_PKEY_METHS`
- `crypto.constants.ENGINE_METHOD_PKEY_ASN1_METHS`
- `crypto.constants.ENGINE_METHOD_ALL`
- `crypto.constants.ENGINE_METHOD_NONE`
crypto.setFips(bool)#
- `bool` <boolean> `true` to enable FIPS mode.
Enables the FIPS compliant crypto provider in a FIPS-enabled Node.js build.Throws an error if FIPS mode is not available.
crypto.sign(algorithm, data, key[, callback])#
History
| Version | Changes |
|---|---|
| v24.8.0 | Add support for ML-DSA, Ed448, and SLH-DSA context parameter. |
| v24.8.0 | Add support for SLH-DSA signing. |
| v24.6.0 | Add support for ML-DSA signing. |
| v18.0.0 | Passing an invalid callback to the |
| v15.12.0 | Optional callback argument added. |
| v13.2.0, v12.16.0 | This function now supports IEEE-P1363 DSA and ECDSA signatures. |
| v12.0.0 | Added in: v12.0.0 |
- `algorithm` <string> | <null> | <undefined>
- `data` <ArrayBuffer> | <Buffer> | <TypedArray> | <DataView>
- `key` <Object> | <string> | <ArrayBuffer> | <Buffer> | <TypedArray> | <DataView> | <KeyObject> | <CryptoKey>
- `callback` <Function>
- Returns: <Buffer> if the `callback` function is not provided.
Calculates and returns the signature for `data` using the given private key and algorithm. If `algorithm` is `null` or `undefined`, then the algorithm is dependent upon the key type.

`algorithm` is required to be `null` or `undefined` for Ed25519, Ed448, and ML-DSA.

If `key` is not a `KeyObject`, this function behaves as if `key` had been passed to `crypto.createPrivateKey()`. If it is an object, the following additional properties can be passed:
- `dsaEncoding` <string> For DSA and ECDSA, this option specifies the format of the generated signature. It can be one of the following:
  - `'der'` (default): DER-encoded ASN.1 signature structure encoding `(r, s)`.
  - `'ieee-p1363'`: Signature format `r || s` as proposed in IEEE-P1363.
- `padding` <integer> Optional padding value for RSA, one of the following:
  - `crypto.constants.RSA_PKCS1_PADDING` (default)
  - `crypto.constants.RSA_PKCS1_PSS_PADDING`

  `RSA_PKCS1_PSS_PADDING` will use MGF1 with the same hash function used to sign the message as specified in section 3.1 of RFC 4055.
- `saltLength` <integer> Salt length for when padding is `RSA_PKCS1_PSS_PADDING`. The special value `crypto.constants.RSA_PSS_SALTLEN_DIGEST` sets the salt length to the digest size, `crypto.constants.RSA_PSS_SALTLEN_MAX_SIGN` (default) sets it to the maximum permissible value.
- `context` <ArrayBuffer> | <Buffer> | <TypedArray> | <DataView> For Ed448, ML-DSA, and SLH-DSA, this option specifies the optional context to differentiate signatures generated for different purposes with the same key.
If the `callback` function is provided, this function uses libuv's threadpool.
crypto.timingSafeEqual(a, b)#
History
| Version | Changes |
|---|---|
| v15.0.0 | The a and b arguments can also be ArrayBuffer. |
| v6.6.0 | Added in: v6.6.0 |
- `a` <ArrayBuffer> | <Buffer> | <TypedArray> | <DataView>
- `b` <ArrayBuffer> | <Buffer> | <TypedArray> | <DataView>
- Returns: <boolean>
This function compares the underlying bytes that represent the given `ArrayBuffer`, `TypedArray`, or `DataView` instances using a constant-time algorithm.

This function does not leak timing information that would allow an attacker to guess one of the values. This is suitable for comparing HMAC digests or secret values like authentication cookies or capability URLs.

`a` and `b` must both be `Buffer`s, `TypedArray`s, or `DataView`s, and they must have the same byte length. An error is thrown if `a` and `b` have different byte lengths.

If at least one of `a` and `b` is a `TypedArray` with more than one byte per entry, such as `Uint16Array`, the result will be computed using the platform byte order.

When both of the inputs are `Float32Array`s or `Float64Array`s, this function might return unexpected results due to IEEE 754 encoding of floating-point numbers. In particular, neither `x === y` nor `Object.is(x, y)` implies that the byte representations of two floating-point numbers `x` and `y` are equal.

Use of `crypto.timingSafeEqual` does not guarantee that the surrounding code is timing-safe. Care should be taken to ensure that the surrounding code does not introduce timing vulnerabilities.
crypto.verify(algorithm, data, key, signature[, callback])#
History
| Version | Changes |
|---|---|
| v24.8.0 | Add support for ML-DSA, Ed448, and SLH-DSA context parameter. |
| v24.8.0 | Add support for SLH-DSA signature verification. |
| v24.6.0 | Add support for ML-DSA signature verification. |
| v18.0.0 | Passing an invalid callback to the |
| v15.12.0 | Optional callback argument added. |
| v15.0.0 | The data, key, and signature arguments can also be ArrayBuffer. |
| v13.2.0, v12.16.0 | This function now supports IEEE-P1363 DSA and ECDSA signatures. |
| v12.0.0 | Added in: v12.0.0 |
- `algorithm` <string> | <null> | <undefined>
- `data` <ArrayBuffer> | <Buffer> | <TypedArray> | <DataView>
- `key` <Object> | <string> | <ArrayBuffer> | <Buffer> | <TypedArray> | <DataView> | <KeyObject> | <CryptoKey>
- `signature` <ArrayBuffer> | <Buffer> | <TypedArray> | <DataView>
- `callback` <Function>
- Returns: <boolean> `true` or `false` depending on the validity of the signature for the data and public key if the `callback` function is not provided.
Verifies the given signature for `data` using the given key and algorithm. If `algorithm` is `null` or `undefined`, then the algorithm is dependent upon the key type.

`algorithm` is required to be `null` or `undefined` for Ed25519, Ed448, and ML-DSA.

If `key` is not a `KeyObject`, this function behaves as if `key` had been passed to `crypto.createPublicKey()`. If it is an object, the following additional properties can be passed:
- `dsaEncoding` <string> For DSA and ECDSA, this option specifies the format of the signature. It can be one of the following:
  - `'der'` (default): DER-encoded ASN.1 signature structure encoding `(r, s)`.
  - `'ieee-p1363'`: Signature format `r || s` as proposed in IEEE-P1363.
- `padding` <integer> Optional padding value for RSA, one of the following:
  - `crypto.constants.RSA_PKCS1_PADDING` (default)
  - `crypto.constants.RSA_PKCS1_PSS_PADDING`

  `RSA_PKCS1_PSS_PADDING` will use MGF1 with the same hash function used to sign the message as specified in section 3.1 of RFC 4055.
- `saltLength` <integer> Salt length for when padding is `RSA_PKCS1_PSS_PADDING`. The special value `crypto.constants.RSA_PSS_SALTLEN_DIGEST` sets the salt length to the digest size, `crypto.constants.RSA_PSS_SALTLEN_MAX_SIGN` (default) sets it to the maximum permissible value.
- `context` <ArrayBuffer> | <Buffer> | <TypedArray> | <DataView> For Ed448, ML-DSA, and SLH-DSA, this option specifies the optional context to differentiate signatures generated for different purposes with the same key.
The `signature` argument is the previously calculated signature for the `data`.

Because public keys can be derived from private keys, a private key or a public key may be passed for `key`.

If the `callback` function is provided, this function uses libuv's threadpool.
crypto.webcrypto#
Type: <Crypto> An implementation of the Web Crypto API standard.

See the Web Crypto API documentation for details.
Notes#
Using strings as inputs to cryptographic APIs#
For historical reasons, many cryptographic APIs provided by Node.js accept strings as inputs where the underlying cryptographic algorithm works on byte sequences. These instances include plaintexts, ciphertexts, symmetric keys, initialization vectors, passphrases, salts, authentication tags, and additional authenticated data.
When passing strings to cryptographic APIs, consider the following factors.
- Not all byte sequences are valid UTF-8 strings. Therefore, when a byte sequence of length `n` is derived from a string, its entropy is generally lower than the entropy of a random or pseudorandom `n` byte sequence. For example, no UTF-8 string will result in the byte sequence `c0 af`. Secret keys should almost exclusively be random or pseudorandom byte sequences.
- Similarly, when converting random or pseudorandom byte sequences to UTF-8 strings, subsequences that do not represent valid code points may be replaced by the Unicode replacement character (`U+FFFD`). The byte representation of the resulting Unicode string may, therefore, not be equal to the byte sequence that the string was created from.

  ```js
  const original = [0xc0, 0xaf];
  const bytesAsString = Buffer.from(original).toString('utf8');
  const stringAsBytes = Buffer.from(bytesAsString, 'utf8');
  console.log(stringAsBytes);
  // Prints '<Buffer ef bf bd ef bf bd>'.
  ```

  The outputs of ciphers, hash functions, signature algorithms, and key derivation functions are pseudorandom byte sequences and should not be used as Unicode strings.
- When strings are obtained from user input, some Unicode characters can be represented in multiple equivalent ways that result in different byte sequences. For example, when passing a user passphrase to a key derivation function, such as PBKDF2 or scrypt, the result of the key derivation function depends on whether the string uses composed or decomposed characters. Node.js does not normalize character representations. Developers should consider using `String.prototype.normalize()` on user inputs before passing them to cryptographic APIs.
Legacy streams API (prior to Node.js 0.10)#
The Crypto module was added to Node.js before there was the concept of a unified Stream API, and before there were `Buffer` objects for handling binary data. As such, many `crypto` classes have methods not typically found on other Node.js classes that implement the streams API (e.g. `update()`, `final()`, or `digest()`). Also, many methods accepted and returned `'latin1'` encoded strings by default rather than `Buffer`s. This default was changed in Node.js 0.9.3 to use `Buffer` objects by default instead.
Support for weak or compromised algorithms#
The `node:crypto` module still supports some algorithms which are already compromised and are not recommended for use. The API also allows the use of ciphers and hashes with a small key size that are too weak for safe use.
Users should take full responsibility for selecting the cryptoalgorithm and key size according to their security requirements.
Based on the recommendations of NIST SP 800-131A:

- MD5 and SHA-1 are no longer acceptable where collision resistance is required, such as digital signatures.
- The key used with RSA, DSA, and DH algorithms is recommended to have at least 2048 bits and that of the curve of ECDSA and ECDH at least 224 bits, to be safe to use for several years.
- The DH groups of `modp1`, `modp2` and `modp5` have a key size smaller than 2048 bits and are not recommended.

See the reference for other recommendations and details.
Some algorithms that have known weaknesses and are of little relevance in practice are only available through the legacy provider, which is not enabled by default.
CCM mode#
CCM is one of the supported AEAD algorithms. Applications which use this mode must adhere to certain restrictions when using the cipher API:

- The authentication tag length must be specified during cipher creation by setting the `authTagLength` option and must be one of 4, 6, 8, 10, 12, 14 or 16 bytes.
- The length of the initialization vector (nonce) `N` must be between 7 and 13 bytes (7 ≤ N ≤ 13).
- The length of the plaintext is limited to `2 ** (8 * (15 - N))` bytes.
- When decrypting, the authentication tag must be set via `setAuthTag()` before calling `update()`. Otherwise, decryption will fail and `final()` will throw an error in compliance with section 2.6 of RFC 3610.
- Using stream methods such as `write(data)`, `end(data)` or `pipe()` in CCM mode might fail as CCM cannot handle more than one chunk of data per instance.
- When passing additional authenticated data (AAD), the length of the actual message in bytes must be passed to `setAAD()` via the `plaintextLength` option. Many crypto libraries include the authentication tag in the ciphertext, which means that they produce ciphertexts of the length `plaintextLength + authTagLength`. Node.js does not include the authentication tag, so the ciphertext length is always `plaintextLength`. This is not necessary if no AAD is used.
- As CCM processes the whole message at once, `update()` must be called exactly once.
- Even though calling `update()` is sufficient to encrypt/decrypt the message, applications must call `final()` to compute or verify the authentication tag.
```mjs
import { Buffer } from 'node:buffer';
const {
  createCipheriv,
  createDecipheriv,
  randomBytes,
} = await import('node:crypto');

const key = 'keykeykeykeykeykeykeykey';
const nonce = randomBytes(12);

const aad = Buffer.from('0123456789', 'hex');

const cipher = createCipheriv('aes-192-ccm', key, nonce, {
  authTagLength: 16,
});
const plaintext = 'Hello world';
cipher.setAAD(aad, {
  plaintextLength: Buffer.byteLength(plaintext),
});
const ciphertext = cipher.update(plaintext, 'utf8');
cipher.final();
const tag = cipher.getAuthTag();

// Now transmit { ciphertext, nonce, tag }.

const decipher = createDecipheriv('aes-192-ccm', key, nonce, {
  authTagLength: 16,
});
decipher.setAuthTag(tag);
decipher.setAAD(aad, {
  plaintextLength: ciphertext.length,
});
const receivedPlaintext = decipher.update(ciphertext, null, 'utf8');

try {
  decipher.final();
} catch (err) {
  throw new Error('Authentication failed!', { cause: err });
}

console.log(receivedPlaintext);
```

```cjs
const { Buffer } = require('node:buffer');
const {
  createCipheriv,
  createDecipheriv,
  randomBytes,
} = require('node:crypto');

const key = 'keykeykeykeykeykeykeykey';
const nonce = randomBytes(12);

const aad = Buffer.from('0123456789', 'hex');

const cipher = createCipheriv('aes-192-ccm', key, nonce, {
  authTagLength: 16,
});
const plaintext = 'Hello world';
cipher.setAAD(aad, {
  plaintextLength: Buffer.byteLength(plaintext),
});
const ciphertext = cipher.update(plaintext, 'utf8');
cipher.final();
const tag = cipher.getAuthTag();

// Now transmit { ciphertext, nonce, tag }.

const decipher = createDecipheriv('aes-192-ccm', key, nonce, {
  authTagLength: 16,
});
decipher.setAuthTag(tag);
decipher.setAAD(aad, {
  plaintextLength: ciphertext.length,
});
const receivedPlaintext = decipher.update(ciphertext, null, 'utf8');

try {
  decipher.final();
} catch (err) {
  throw new Error('Authentication failed!', { cause: err });
}

console.log(receivedPlaintext);
```
FIPS mode#
When using OpenSSL 3, Node.js supports FIPS 140-2 when used with an appropriate OpenSSL 3 provider, such as the FIPS provider from OpenSSL 3 which can be installed by following the instructions in OpenSSL's FIPS README file.
For FIPS support in Node.js you will need:
- A correctly installed OpenSSL 3 FIPS provider.
- An OpenSSL 3 FIPS module configuration file.
- An OpenSSL 3 configuration file that references the FIPS module configuration file.
Node.js will need to be configured with an OpenSSL configuration file that points to the FIPS provider. An example configuration file looks like this:
```text
nodejs_conf = nodejs_init

.include /<absolute path>/fipsmodule.cnf

[nodejs_init]
providers = provider_sect

[provider_sect]
default = default_sect
# The fips section name should match the section name inside the
# included fipsmodule.cnf.
fips = fips_sect

[default_sect]
activate = 1
```

where `fipsmodule.cnf` is the FIPS module configuration file generated from the FIPS provider installation step:

```bash
openssl fipsinstall
```

Set the `OPENSSL_CONF` environment variable to point to your configuration file and `OPENSSL_MODULES` to the location of the FIPS provider dynamic library, e.g.

```bash
export OPENSSL_CONF=/<path to configuration file>/nodejs.cnf
export OPENSSL_MODULES=/<path to openssl lib>/ossl-modules
```

FIPS mode can then be enabled in Node.js either by:
- Starting Node.js with the `--enable-fips` or `--force-fips` command line flags.
- Programmatically calling `crypto.setFips(true)`.
Optionally, FIPS mode can be enabled in Node.js via the OpenSSL configuration file, e.g.

```text
nodejs_conf = nodejs_init

.include /<absolute path>/fipsmodule.cnf

[nodejs_init]
providers = provider_sect
alg_section = algorithm_sect

[provider_sect]
default = default_sect
# The fips section name should match the section name inside the
# included fipsmodule.cnf.
fips = fips_sect

[default_sect]
activate = 1

[algorithm_sect]
default_properties = fips=yes
```

Crypto constants#
The following constants exported by `crypto.constants` apply to various uses of the `node:crypto`, `node:tls`, and `node:https` modules and are generally specific to OpenSSL.
OpenSSL options#
See the list of SSL OP Flags for details.
| Constant | Description |
|---|---|
SSL_OP_ALL | Applies multiple bug workarounds within OpenSSL. See https://www.openssl.org/docs/man3.0/man3/SSL_CTX_set_options.html for detail. |
SSL_OP_ALLOW_NO_DHE_KEX | Instructs OpenSSL to allow a non-[EC]DHE-based key exchange mode for TLS v1.3 |
SSL_OP_ALLOW_UNSAFE_LEGACY_RENEGOTIATION | Allows legacy insecure renegotiation between OpenSSL and unpatched clients or servers. See https://www.openssl.org/docs/man3.0/man3/SSL_CTX_set_options.html. |
SSL_OP_CIPHER_SERVER_PREFERENCE | Attempts to use the server's preferences instead of the client's when selecting a cipher. Behavior depends on protocol version. See https://www.openssl.org/docs/man3.0/man3/SSL_CTX_set_options.html. |
SSL_OP_CISCO_ANYCONNECT | Instructs OpenSSL to use Cisco's version identifier of DTLS_BAD_VER. |
SSL_OP_COOKIE_EXCHANGE | Instructs OpenSSL to turn on cookie exchange. |
SSL_OP_CRYPTOPRO_TLSEXT_BUG | Instructs OpenSSL to add server-hello extension from an early version of the cryptopro draft. |
SSL_OP_DONT_INSERT_EMPTY_FRAGMENTS | Instructs OpenSSL to disable an SSL 3.0/TLS 1.0 vulnerability workaround added in OpenSSL 0.9.6d. |
SSL_OP_LEGACY_SERVER_CONNECT | Allows initial connection to servers that do not support RI. |
SSL_OP_NO_COMPRESSION | Instructs OpenSSL to disable support for SSL/TLS compression. |
SSL_OP_NO_ENCRYPT_THEN_MAC | Instructs OpenSSL to disable encrypt-then-MAC. |
SSL_OP_NO_QUERY_MTU | |
SSL_OP_NO_RENEGOTIATION | Instructs OpenSSL to disable renegotiation. |
SSL_OP_NO_SESSION_RESUMPTION_ON_RENEGOTIATION | Instructs OpenSSL to always start a new session when performing renegotiation. |
SSL_OP_NO_SSLv2 | Instructs OpenSSL to turn off SSL v2 |
SSL_OP_NO_SSLv3 | Instructs OpenSSL to turn off SSL v3 |
SSL_OP_NO_TICKET | Instructs OpenSSL to disable use of RFC4507bis tickets. |
SSL_OP_NO_TLSv1 | Instructs OpenSSL to turn off TLS v1 |
SSL_OP_NO_TLSv1_1 | Instructs OpenSSL to turn off TLS v1.1 |
SSL_OP_NO_TLSv1_2 | Instructs OpenSSL to turn off TLS v1.2 |
SSL_OP_NO_TLSv1_3 | Instructs OpenSSL to turn off TLS v1.3 |
SSL_OP_PRIORITIZE_CHACHA | Instructs OpenSSL server to prioritize ChaCha20-Poly1305 when the client does. This option has no effect if SSL_OP_CIPHER_SERVER_PREFERENCE is not enabled. |
SSL_OP_TLS_ROLLBACK_BUG | Instructs OpenSSL to disable version rollback attack detection. |
OpenSSL engine constants#
| Constant | Description |
|---|---|
ENGINE_METHOD_RSA | Limit engine usage to RSA |
ENGINE_METHOD_DSA | Limit engine usage to DSA |
ENGINE_METHOD_DH | Limit engine usage to DH |
ENGINE_METHOD_RAND | Limit engine usage to RAND |
ENGINE_METHOD_EC | Limit engine usage to EC |
ENGINE_METHOD_CIPHERS | Limit engine usage to CIPHERS |
ENGINE_METHOD_DIGESTS | Limit engine usage to DIGESTS |
ENGINE_METHOD_PKEY_METHS | Limit engine usage to PKEY_METHS |
ENGINE_METHOD_PKEY_ASN1_METHS | Limit engine usage to PKEY_ASN1_METHS |
ENGINE_METHOD_ALL | |
ENGINE_METHOD_NONE |
Other OpenSSL constants#
| Constant | Description |
|---|---|
DH_CHECK_P_NOT_SAFE_PRIME | |
DH_CHECK_P_NOT_PRIME | |
DH_UNABLE_TO_CHECK_GENERATOR | |
DH_NOT_SUITABLE_GENERATOR | |
RSA_PKCS1_PADDING | |
RSA_SSLV23_PADDING | |
RSA_NO_PADDING | |
RSA_PKCS1_OAEP_PADDING | |
RSA_X931_PADDING | |
RSA_PKCS1_PSS_PADDING | |
RSA_PSS_SALTLEN_DIGEST | Sets the salt length forRSA_PKCS1_PSS_PADDING to the digest size when signing or verifying. |
RSA_PSS_SALTLEN_MAX_SIGN | Sets the salt length forRSA_PKCS1_PSS_PADDING to the maximum permissible value when signing data. |
RSA_PSS_SALTLEN_AUTO | Causes the salt length forRSA_PKCS1_PSS_PADDING to be determined automatically when verifying a signature. |
POINT_CONVERSION_COMPRESSED | |
POINT_CONVERSION_UNCOMPRESSED | |
POINT_CONVERSION_HYBRID |
Node.js crypto constants#
| Constant | Description |
|---|---|
defaultCoreCipherList | Specifies the built-in default cipher list used by Node.js. |
defaultCipherList | Specifies the active default cipher list used by the current Node.js process. |
Debugger#
Node.js includes a command-line debugging utility. The Node.js debugger clientis not a full-featured debugger, but simple stepping and inspection arepossible.
To use it, start Node.js with the `inspect` argument followed by the path to the script to debug.
```console
$ node inspect myscript.js
< Debugger listening on ws://127.0.0.1:9229/621111f9-ffcb-4e82-b718-48a145fa5db8
< For help, see: https://nodejs.org/en/docs/inspector
< connecting to 127.0.0.1:9229 ... ok
< Debugger attached.
< ok
Break on start in myscript.js:2
  1 // myscript.js
> 2 global.x = 5;
  3 setTimeout(() => {
  4   debugger;
debug>
```

The debugger automatically breaks on the first executable line. To instead run until the first breakpoint (specified by a `debugger` statement), set the `NODE_INSPECT_RESUME_ON_START` environment variable to `1`.
```console
$ cat myscript.js
// myscript.js
global.x = 5;
setTimeout(() => {
  debugger;
  console.log('world');
}, 1000);
console.log('hello');

$ NODE_INSPECT_RESUME_ON_START=1 node inspect myscript.js
< Debugger listening on ws://127.0.0.1:9229/f1ed133e-7876-495b-83ae-c32c6fc319c2
< For help, see: https://nodejs.org/en/docs/inspector
< connecting to 127.0.0.1:9229 ... ok
< Debugger attached.
<
< hello
<
break in myscript.js:4
  2 global.x = 5;
  3 setTimeout(() => {
> 4   debugger;
  5   console.log('world');
  6 }, 1000);
debug> next
break in myscript.js:5
  3 setTimeout(() => {
  4   debugger;
> 5   console.log('world');
  6 }, 1000);
  7 console.log('hello');
debug> repl
Press Ctrl+C to leave debug repl
> x
5
> 2 + 2
4
debug> next
< world
<
break in myscript.js:6
  4   debugger;
  5   console.log('world');
> 6 }, 1000);
  7 console.log('hello');
  8
debug> .exit
$
```

The `repl` command allows code to be evaluated remotely. The `next` command steps to the next line. Type `help` to see what other commands are available.
Pressing `enter` without typing a command will repeat the previous debugger command.
Watchers#
It is possible to watch expression and variable values while debugging. Onevery breakpoint, each expression from the watchers list will be evaluatedin the current context and displayed immediately before the breakpoint'ssource code listing.
To begin watching an expression, type `watch('my_expression')`. The command `watchers` will print the active watchers. To remove a watcher, type `unwatch('my_expression')`.
Command reference#
Stepping#
- `cont`, `c`: Continue execution
- `next`, `n`: Step next
- `step`, `s`: Step in
- `out`, `o`: Step out
- `pause`: Pause running code (like pause button in Developer Tools)
Breakpoints#
- `setBreakpoint()`, `sb()`: Set breakpoint on current line
- `setBreakpoint(line)`, `sb(line)`: Set breakpoint on specific line
- `setBreakpoint('fn()')`, `sb(...)`: Set breakpoint on a first statement in function's body
- `setBreakpoint('script.js', 1)`, `sb(...)`: Set breakpoint on first line of `script.js`
- `setBreakpoint('script.js', 1, 'num < 4')`, `sb(...)`: Set conditional breakpoint on first line of `script.js` that only breaks when `num < 4` evaluates to `true`
- `clearBreakpoint('script.js', 1)`, `cb(...)`: Clear breakpoint in `script.js` on line 1
It is also possible to set a breakpoint in a file (module) that is not loaded yet:
```console
$ node inspect main.js
< Debugger listening on ws://127.0.0.1:9229/48a5b28a-550c-471b-b5e1-d13dd7165df9
< For help, see: https://nodejs.org/en/docs/inspector
< connecting to 127.0.0.1:9229 ... ok
< Debugger attached.
<
Break on start in main.js:1
> 1 const mod = require('./mod.js');
  2 mod.hello();
  3 mod.hello();
debug> setBreakpoint('mod.js', 22)
Warning: script 'mod.js' was not loaded yet.
debug> c
break in mod.js:22
 20 // USE OR OTHER DEALINGS IN THE SOFTWARE.
 21
>22 exports.hello = function() {
 23   return 'hello from module';
 24 };
debug>
```

It is also possible to set a conditional breakpoint that only breaks when a given expression evaluates to `true`:
```console
$ node inspect main.js
< Debugger listening on ws://127.0.0.1:9229/ce24daa8-3816-44d4-b8ab-8273c8a66d35
< For help, see: https://nodejs.org/en/docs/inspector
< connecting to 127.0.0.1:9229 ... ok
< Debugger attached.
Break on start in main.js:7
  5 }
  6
> 7 addOne(10);
  8 addOne(-1);
  9
debug> setBreakpoint('main.js', 4, 'num < 0')
  1 'use strict';
  2
  3 function addOne(num) {
> 4   return num + 1;
  5 }
  6
  7 addOne(10);
  8 addOne(-1);
  9
debug> cont
break in main.js:4
  2
  3 function addOne(num) {
> 4   return num + 1;
  5 }
  6
debug> exec('num')
-1
debug>
```

Information#
- `backtrace`, `bt`: Print backtrace of current execution frame
- `list(5)`: List scripts source code with 5 line context (5 lines before and after)
- `watch(expr)`: Add expression to watch list
- `unwatch(expr)`: Remove expression from watch list
- `unwatch(index)`: Remove expression at specific index from watch list
- `watchers`: List all watchers and their values (automatically listed on each breakpoint)
- `repl`: Open debugger's repl for evaluation in debugging script's context
- `exec expr`, `p expr`: Execute an expression in debugging script's context and print its value
- `profile`: Start CPU profiling session
- `profileEnd`: Stop current CPU profiling session
- `profiles`: List all completed CPU profiling sessions
- `profiles[n].save(filepath = 'node.cpuprofile')`: Save CPU profiling session to disk as JSON
- `takeHeapSnapshot(filepath = 'node.heapsnapshot')`: Take a heap snapshot and save to disk as JSON
Execution control#
- `run`: Run script (automatically runs on debugger's start)
- `restart`: Restart script
- `kill`: Kill script
Various#
- `scripts`: List all loaded scripts
- `version`: Display V8's version
Advanced usage#
V8 inspector integration for Node.js#
V8 Inspector integration allows attaching Chrome DevTools to Node.js instances for debugging and profiling. It uses the Chrome DevTools Protocol.
V8 Inspector can be enabled by passing the `--inspect` flag when starting a Node.js application. It is also possible to supply a custom port with that flag, e.g. `--inspect=9222` will accept DevTools connections on port 9222.
Using the `--inspect` flag will start executing the code immediately, before the debugger is connected. This means that the code will start running before you can start debugging, which might not be ideal if you want to debug from the very beginning.
In such cases, you have two alternatives:
- `--inspect-wait` flag: This flag will wait for the debugger to be attached before executing the code. This allows you to start debugging right from the beginning of the execution.
- `--inspect-brk` flag: Unlike `--inspect`, this flag will break on the first line of the code as soon as the debugger is attached. This is useful when you want to debug the code step by step from the very beginning, without any code execution prior to debugging.
So, when deciding between `--inspect`, `--inspect-wait`, and `--inspect-brk`, consider whether you want the code to start executing immediately, wait for the debugger to be attached before execution, or break on the first line for step-by-step debugging.
```console
$ node --inspect index.js
Debugger listening on ws://127.0.0.1:9229/dc9010dd-f8b8-4ac5-a510-c1a114ec7d29
For help, see: https://nodejs.org/en/docs/inspector
```

(In the example above, the UUID dc9010dd-f8b8-4ac5-a510-c1a114ec7d29 at the end of the URL is generated on the fly; it varies in different debugging sessions.)
If the Chrome browser is older than 66.0.3345.0, use `inspector.html` instead of `js_app.html` in the above URL.
Chrome DevTools doesn't support debugging worker threads yet. `ndb` can be used to debug them.
Deprecated APIs#
Node.js APIs might be deprecated for any of the following reasons:
- Use of the API is unsafe.
- An improved alternative API is available.
- Breaking changes to the API are expected in a future major release.
Node.js uses four kinds of deprecations:
- Documentation-only
- Application (non-`node_modules` code only)
- Runtime (all code)
- End-of-Life
A Documentation-only deprecation is one that is expressed only within the Node.js API docs. These generate no side effects while running Node.js. Some Documentation-only deprecations trigger a runtime warning when launched with the `--pending-deprecation` flag (or its alternative, the `NODE_PENDING_DEPRECATION=1` environment variable), similarly to Runtime deprecations below. Documentation-only deprecations that support that flag are explicitly labeled as such in the list of Deprecated APIs.
An Application deprecation for only non-`node_modules` code will, by default, generate a process warning that will be printed to `stderr` the first time the deprecated API is used in code that's not loaded from `node_modules`. When the `--throw-deprecation` command-line flag is used, a Runtime deprecation will cause an error to be thrown. When `--pending-deprecation` is used, warnings will also be emitted for code loaded from `node_modules`.
A Runtime deprecation for all code is similar to the Runtime deprecation for non-`node_modules` code, except that it also emits a warning for code loaded from `node_modules`.
An End-of-Life deprecation is used when functionality is or will soon be removedfrom Node.js.
Revoking deprecations#
Occasionally, the deprecation of an API might be reversed. In such situations,this document will be updated with information relevant to the decision.However, the deprecation identifier will not be modified.
List of deprecated APIs#
DEP0001: `http.OutgoingMessage.prototype.flush`#
History
| Version | Changes |
|---|---|
| v14.0.0 | End-of-Life. |
| v6.12.0, v4.8.6 | A deprecation code has been assigned. |
| v1.6.0 | Runtime deprecation. |
Type: End-of-Life
`OutgoingMessage.prototype.flush()` has been removed. Use `OutgoingMessage.prototype.flushHeaders()` instead.
DEP0002: `require('_linklist')`#
History
| Version | Changes |
|---|---|
| v8.0.0 | End-of-Life. |
| v6.12.0 | A deprecation code has been assigned. |
| v5.0.0 | Runtime deprecation. |
Type: End-of-Life
The `_linklist` module is deprecated. Please use a userland alternative.
DEP0003: `_writableState.buffer`#
History
| Version | Changes |
|---|---|
| v14.0.0 | End-of-Life. |
| v6.12.0, v4.8.6 | A deprecation code has been assigned. |
| v0.11.15 | Runtime deprecation. |
Type: End-of-Life
The `_writableState.buffer` has been removed. Use `_writableState.getBuffer()` instead.
DEP0004: `CryptoStream.prototype.readyState`#
History
| Version | Changes |
|---|---|
| v10.0.0 | End-of-Life. |
| v6.12.0, v4.8.6 | A deprecation code has been assigned. |
| v0.4.0 | Documentation-only deprecation. |
Type: End-of-Life
The `CryptoStream.prototype.readyState` property was removed.
DEP0005: `Buffer()` constructor#
History
| Version | Changes |
|---|---|
| v10.0.0 | Runtime deprecation. |
| v6.12.0 | A deprecation code has been assigned. |
| v6.0.0 | Documentation-only deprecation. |
Type: Application (non-node_modules code only)
The `Buffer()` function and `new Buffer()` constructor are deprecated due to API usability issues that can lead to accidental security issues.
As an alternative, use one of the following methods of constructing `Buffer` objects:

- `Buffer.alloc(size[, fill[, encoding]])`: Create a `Buffer` with initialized memory.
- `Buffer.allocUnsafe(size)`: Create a `Buffer` with uninitialized memory.
- `Buffer.allocUnsafeSlow(size)`: Create a `Buffer` with uninitialized memory.
- `Buffer.from(array)`: Create a `Buffer` with a copy of `array`.
- `Buffer.from(arrayBuffer[, byteOffset[, length]])`: Create a `Buffer` that wraps the given `arrayBuffer`.
- `Buffer.from(buffer)`: Create a `Buffer` that copies `buffer`.
- `Buffer.from(string[, encoding])`: Create a `Buffer` that copies `string`.
Without `--pending-deprecation`, runtime warnings occur only for code not in `node_modules`. This means there will not be deprecation warnings for `Buffer()` usage in dependencies. With `--pending-deprecation`, a runtime warning results no matter where the `Buffer()` usage occurs.
DEP0006: `child_process` `options.customFds`#
History
| Version | Changes |
|---|---|
| v12.0.0 | End-of-Life. |
| v6.12.0, v4.8.6 | A deprecation code has been assigned. |
| v0.11.14 | Runtime deprecation. |
| v0.5.10 | Documentation-only deprecation. |
Type: End-of-Life
Within the `child_process` module's `spawn()`, `fork()`, and `exec()` methods, the `options.customFds` option is deprecated. The `options.stdio` option should be used instead.
DEP0007: Replace `cluster` `worker.suicide` with `worker.exitedAfterDisconnect`#
History
| Version | Changes |
|---|---|
| v9.0.0 | End-of-Life. |
| v7.0.0 | Runtime deprecation. |
| v6.12.0 | A deprecation code has been assigned. |
| v6.0.0 | Documentation-only deprecation. |
Type: End-of-Life
In an earlier version of the Node.js `cluster`, a boolean property with the name `suicide` was added to the `Worker` object. The intent of this property was to provide an indication of how and why the `Worker` instance exited. In Node.js 6.0.0, the old property was deprecated and replaced with a new `worker.exitedAfterDisconnect` property. The old property name did not precisely describe the actual semantics and was unnecessarily emotion-laden.
DEP0008: `require('node:constants')`#
History
| Version | Changes |
|---|---|
| v6.12.0 | A deprecation code has been assigned. |
| v6.3.0 | Documentation-only deprecation. |
Type: Documentation-only
The `node:constants` module is deprecated. When requiring access to constants relevant to specific Node.js builtin modules, developers should instead refer to the `constants` property exposed by the relevant module. For instance, `require('node:fs').constants` and `require('node:os').constants`.
DEP0009: `crypto.pbkdf2` without digest#
History
| Version | Changes |
|---|---|
| v14.0.0 | End-of-Life (for digest === null). |
| v11.0.0 | Runtime deprecation (for digest === null). |
| v8.0.0 | End-of-Life (for digest === undefined). |
| v6.12.0 | A deprecation code has been assigned. |
| v6.0.0 | Runtime deprecation (for digest === undefined). |
Type: End-of-Life
Use of the `crypto.pbkdf2()` API without specifying a digest was deprecated in Node.js 6.0 because the method defaulted to using the non-recommended `'SHA1'` digest. Previously, a deprecation warning was printed. Starting in Node.js 8.0.0, calling `crypto.pbkdf2()` or `crypto.pbkdf2Sync()` with `digest` set to `undefined` will throw a `TypeError`.

Beginning in Node.js 11.0.0, calling these functions with `digest` set to `null` would print a deprecation warning to align with the behavior when `digest` is `undefined`.

Now, however, passing either `undefined` or `null` will throw a `TypeError`.
DEP0010: `crypto.createCredentials`#
History
| Version | Changes |
|---|---|
| v11.0.0 | End-of-Life. |
| v6.12.0, v4.8.6 | A deprecation code has been assigned. |
| v0.11.13 | Runtime deprecation. |
Type: End-of-Life
The `crypto.createCredentials()` API was removed. Please use `tls.createSecureContext()` instead.
DEP0011: `crypto.Credentials`#
History
| Version | Changes |
|---|---|
| v11.0.0 | End-of-Life. |
| v6.12.0, v4.8.6 | A deprecation code has been assigned. |
| v0.11.13 | Runtime deprecation. |
Type: End-of-Life
The `crypto.Credentials` class was removed. Please use `tls.SecureContext` instead.
DEP0012: `Domain.dispose`#
History
| Version | Changes |
|---|---|
| v9.0.0 | End-of-Life. |
| v6.12.0, v4.8.6 | A deprecation code has been assigned. |
| v0.11.7 | Runtime deprecation. |
Type: End-of-Life
`Domain.dispose()` has been removed. Recover from failed I/O actions explicitly via error event handlers set on the domain instead.
DEP0013: `fs` asynchronous function without callback#
History
| Version | Changes |
|---|---|
| v10.0.0 | End-of-Life. |
| v7.0.0 | Runtime deprecation. |
Type: End-of-Life
Calling an asynchronous function without a callback throws a `TypeError` in Node.js 10.0.0 onwards. See https://github.com/nodejs/node/pull/12562.
DEP0014: `fs.read` legacy String interface#
History
| Version | Changes |
|---|---|
| v8.0.0 | End-of-Life. |
| v6.0.0 | Runtime deprecation. |
| v6.12.0, v4.8.6 | A deprecation code has been assigned. |
| v0.1.96 | Documentation-only deprecation. |
Type: End-of-Life
The `fs.read()` legacy `String` interface is deprecated. Use the `Buffer` API as mentioned in the documentation instead.
DEP0015: `fs.readSync` legacy String interface#
History
| Version | Changes |
|---|---|
| v8.0.0 | End-of-Life. |
| v6.0.0 | Runtime deprecation. |
| v6.12.0, v4.8.6 | A deprecation code has been assigned. |
| v0.1.96 | Documentation-only deprecation. |
Type: End-of-Life
The `fs.readSync()` legacy `String` interface is deprecated. Use the `Buffer` API as mentioned in the documentation instead.
DEP0016: `GLOBAL`/`root`#
History
| Version | Changes |
|---|---|
| v14.0.0 | End-of-Life. |
| v6.12.0 | A deprecation code has been assigned. |
| v6.0.0 | Runtime deprecation. |
Type: End-of-Life
The `GLOBAL` and `root` aliases for the `global` property were deprecated in Node.js 6.0.0 and have since been removed.
DEP0017: `Intl.v8BreakIterator`#
History
| Version | Changes |
|---|---|
| v9.0.0 | End-of-Life. |
| v7.0.0 | Runtime deprecation. |
Type: End-of-Life
`Intl.v8BreakIterator` was a non-standard extension and has been removed. See `Intl.Segmenter`.
DEP0018: Unhandled promise rejections#
History
| Version | Changes |
|---|---|
| v15.0.0 | End-of-Life. |
| v7.0.0 | Runtime deprecation. |
Type: End-of-Life
Unhandled promise rejections are deprecated. By default, promise rejections that are not handled terminate the Node.js process with a non-zero exit code. To change the way Node.js treats unhandled rejections, use the `--unhandled-rejections` command-line option.
DEP0019: `require('.')` resolved outside directory#
History
| Version | Changes |
|---|---|
| v12.0.0 | Removed functionality. |
| v6.12.0, v4.8.6 | A deprecation code has been assigned. |
| v1.8.1 | Runtime deprecation. |
Type: End-of-Life
In certain cases, `require('.')` could resolve outside the package directory. This behavior has been removed.
DEP0020: `Server.connections`#
History
| Version | Changes |
|---|---|
| v15.0.0 | Server.connections has been removed. |
| v6.12.0, v4.8.6 | A deprecation code has been assigned. |
| v0.9.7 | Runtime deprecation. |
Type: End-of-Life
The Server.connections property was deprecated in Node.js 0.9.7 and has been removed. Please use the Server.getConnections() method instead.
DEP0021: Server.listenFD#
History
| Version | Changes |
|---|---|
| v12.0.0 | End-of-Life. |
| v6.12.0, v4.8.6 | A deprecation code has been assigned. |
| v0.7.12 | Runtime deprecation. |
Type: End-of-Life
The Server.listenFD() method was deprecated and removed. Please use Server.listen({fd: <number>}) instead.
DEP0022: os.tmpDir()#
History
| Version | Changes |
|---|---|
| v14.0.0 | End-of-Life. |
| v7.0.0 | Runtime deprecation. |
Type: End-of-Life
The os.tmpDir() API was deprecated in Node.js 7.0.0 and has since been removed. Please use os.tmpdir() instead.
An automated migration is available (source):
npx codemod@latest @nodejs/tmpDir-to-tmpdir
DEP0023: os.getNetworkInterfaces()#
History
| Version | Changes |
|---|---|
| v12.0.0 | End-of-Life. |
| v6.12.0, v4.8.6 | A deprecation code has been assigned. |
| v0.6.0 | Runtime deprecation. |
Type: End-of-Life
The os.getNetworkInterfaces() method is deprecated. Please use the os.networkInterfaces() method instead.
DEP0024: REPLServer.prototype.convertToContext()#
History
| Version | Changes |
|---|---|
| v9.0.0 | End-of-Life. |
| v7.0.0 | Runtime deprecation. |
Type: End-of-Life
The REPLServer.prototype.convertToContext() API has been removed.
DEP0025: require('node:sys')#
History
| Version | Changes |
|---|---|
| v6.12.0, v4.8.6 | A deprecation code has been assigned. |
| v1.0.0 | Runtime deprecation. |
Type: Runtime
The node:sys module is deprecated. Please use the util module instead.
DEP0026: util.print()#
History
| Version | Changes |
|---|---|
| v12.0.0 | End-of-Life. |
| v6.12.0, v4.8.6 | A deprecation code has been assigned. |
| v0.11.3 | Runtime deprecation. |
Type: End-of-Life
util.print() has been removed. Please use console.log() instead.
An automated migration is available (source):
npx codemod@latest @nodejs/util-print-to-console-log
DEP0027: util.puts()#
History
| Version | Changes |
|---|---|
| v12.0.0 | End-of-Life. |
| v6.12.0, v4.8.6 | A deprecation code has been assigned. |
| v0.11.3 | Runtime deprecation. |
Type: End-of-Life
util.puts() has been removed. Please use console.log() instead.
An automated migration is available (source):
npx codemod@latest @nodejs/util-print-to-console-log
DEP0028: util.debug()#
History
| Version | Changes |
|---|---|
| v12.0.0 | End-of-Life. |
| v6.12.0, v4.8.6 | A deprecation code has been assigned. |
| v0.11.3 | Runtime deprecation. |
Type: End-of-Life
util.debug() has been removed. Please use console.error() instead.
An automated migration is available (source):
npx codemod@latest @nodejs/util-print-to-console-log
DEP0029: util.error()#
History
| Version | Changes |
|---|---|
| v12.0.0 | End-of-Life. |
| v6.12.0, v4.8.6 | A deprecation code has been assigned. |
| v0.11.3 | Runtime deprecation. |
Type: End-of-Life
util.error() has been removed. Please use console.error() instead.
An automated migration is available (source):
npx codemod@latest @nodejs/util-print-to-console-log
DEP0030: SlowBuffer#
History
| Version | Changes |
|---|---|
| v25.0.0 | End-of-Life. |
| v24.0.0 | Runtime deprecation. |
| v6.12.0 | A deprecation code has been assigned. |
| v6.0.0 | Documentation-only deprecation. |
Type: End-of-Life
The SlowBuffer class has been removed. Please use Buffer.allocUnsafeSlow(size) instead.
An automated migration is available (source).
npx codemod@latest @nodejs/slow-buffer-to-buffer-alloc-unsafe-slow
DEP0031: ecdh.setPublicKey()#
History
| Version | Changes |
|---|---|
| v25.0.0 | Runtime deprecation. |
| v6.12.0 | A deprecation code has been assigned. |
| v5.2.0 | Documentation-only deprecation. |
Type: Runtime
The ecdh.setPublicKey() method is now deprecated as its inclusion in the API is not useful.
DEP0032: node:domain module#
History
| Version | Changes |
|---|---|
| v6.12.0, v4.8.6 | A deprecation code has been assigned. |
| v1.4.2 | Documentation-only deprecation. |
Type: Documentation-only
The domain module is deprecated and should not be used.
DEP0033: EventEmitter.listenerCount()#
History
| Version | Changes |
|---|---|
| v25.4.0 | Deprecation revoked. |
| v6.12.0, v4.8.6 | A deprecation code has been assigned. |
| v3.2.0 | Documentation-only deprecation. |
Type: Revoked
The events.listenerCount(emitter, eventName) API was deprecated, as it provided identical functionality to emitter.listenerCount(eventName). The deprecation was revoked because this function has been repurposed to also accept <EventTarget> arguments.
DEP0034: fs.exists(path, callback)#
History
| Version | Changes |
|---|---|
| v6.12.0, v4.8.6 | A deprecation code has been assigned. |
| v1.0.0 | Documentation-only deprecation. |
Type: Documentation-only
The fs.exists(path, callback) API is deprecated. Please use fs.stat() or fs.access() instead.
DEP0035: fs.lchmod(path, mode, callback)#
History
| Version | Changes |
|---|---|
| v6.12.0, v4.8.6 | A deprecation code has been assigned. |
| v0.4.7 | Documentation-only deprecation. |
Type: Documentation-only
The fs.lchmod(path, mode, callback) API is deprecated.
DEP0036: fs.lchmodSync(path, mode)#
History
| Version | Changes |
|---|---|
| v6.12.0, v4.8.6 | A deprecation code has been assigned. |
| v0.4.7 | Documentation-only deprecation. |
Type: Documentation-only
The fs.lchmodSync(path, mode) API is deprecated.
DEP0037: fs.lchown(path, uid, gid, callback)#
History
| Version | Changes |
|---|---|
| v10.6.0 | Deprecation revoked. |
| v6.12.0, v4.8.6 | A deprecation code has been assigned. |
| v0.4.7 | Documentation-only deprecation. |
Type: Deprecation revoked
The fs.lchown(path, uid, gid, callback) API was deprecated. The deprecation was revoked because the requisite supporting APIs were added in libuv.
DEP0038: fs.lchownSync(path, uid, gid)#
History
| Version | Changes |
|---|---|
| v10.6.0 | Deprecation revoked. |
| v6.12.0, v4.8.6 | A deprecation code has been assigned. |
| v0.4.7 | Documentation-only deprecation. |
Type: Deprecation revoked
The fs.lchownSync(path, uid, gid) API was deprecated. The deprecation was revoked because the requisite supporting APIs were added in libuv.
DEP0039: require.extensions#
History
| Version | Changes |
|---|---|
| v6.12.0, v4.8.6 | A deprecation code has been assigned. |
| v0.10.6 | Documentation-only deprecation. |
Type: Documentation-only
The require.extensions property is deprecated.
DEP0040: node:punycode module#
History
| Version | Changes |
|---|---|
| v21.0.0 | Runtime deprecation. |
| v16.6.0 | Added support for |
| v7.0.0 | Documentation-only deprecation. |
Type: Runtime
The punycode module is deprecated. Please use a userland alternative instead.
DEP0041: NODE_REPL_HISTORY_FILE environment variable#
History
| Version | Changes |
|---|---|
| v10.0.0 | End-of-Life. |
| v6.12.0, v4.8.6 | A deprecation code has been assigned. |
| v3.0.0 | Documentation-only deprecation. |
Type: End-of-Life
The NODE_REPL_HISTORY_FILE environment variable was removed. Please use NODE_REPL_HISTORY instead.
DEP0042: tls.CryptoStream#
History
| Version | Changes |
|---|---|
| v10.0.0 | End-of-Life. |
| v6.12.0, v4.8.6 | A deprecation code has been assigned. |
| v0.11.3 | Documentation-only deprecation. |
Type: End-of-Life
The tls.CryptoStream class was removed. Please use tls.TLSSocket instead.
DEP0043: tls.SecurePair#
History
| Version | Changes |
|---|---|
| v24.0.0 | End-of-Life. |
| v8.0.0 | Runtime deprecation. |
| v6.12.0 | A deprecation code has been assigned. |
| v6.0.0 | Documentation-only deprecation. |
| v0.11.15 | Deprecation revoked. |
| v0.11.3 | Runtime deprecation. |
Type: End-of-Life
The tls.SecurePair class is deprecated. Please use tls.TLSSocket instead.
DEP0044: util.isArray()#
History
| Version | Changes |
|---|---|
| v22.0.0 | Runtime deprecation. |
| v6.12.0, v4.8.6 | A deprecation code has been assigned. |
| v4.0.0, v3.3.1 | Documentation-only deprecation. |
Type: Runtime
The util.isArray() API is deprecated. Please use Array.isArray() instead.
An automated migration is available (source):
npx codemod@latest @nodejs/util-is
DEP0045: util.isBoolean()#
History
| Version | Changes |
|---|---|
| v23.0.0 | End-of-Life deprecation. |
| v22.0.0 | Runtime deprecation. |
| v6.12.0, v4.8.6 | A deprecation code has been assigned. |
| v4.0.0, v3.3.1 | Documentation-only deprecation. |
Type: End-of-Life
The util.isBoolean() API has been removed. Please use typeof arg === 'boolean' instead.
An automated migration is available (source):
npx codemod@latest @nodejs/util-is
DEP0046: util.isBuffer()#
History
| Version | Changes |
|---|---|
| v23.0.0 | End-of-Life deprecation. |
| v22.0.0 | Runtime deprecation. |
| v6.12.0, v4.8.6 | A deprecation code has been assigned. |
| v4.0.0, v3.3.1 | Documentation-only deprecation. |
Type: End-of-Life
The util.isBuffer() API has been removed. Please use Buffer.isBuffer() instead.
An automated migration is available (source):
npx codemod@latest @nodejs/util-is
DEP0047: util.isDate()#
History
| Version | Changes |
|---|---|
| v23.0.0 | End-of-Life deprecation. |
| v22.0.0 | Runtime deprecation. |
| v6.12.0, v4.8.6 | A deprecation code has been assigned. |
| v4.0.0, v3.3.1 | Documentation-only deprecation. |
Type: End-of-Life
The util.isDate() API has been removed. Please use arg instanceof Date instead.
For a stronger check, consider Object.prototype.toString.call(arg) === '[object Date]' && !isNaN(arg), which works across realms and also rejects invalid Date objects.
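The stricter check can be wrapped in a small helper; this sketch uses Object.prototype.toString, whose brand check does not throw on non-Date inputs, with an isNaN guard that rejects invalid Date objects:

```javascript
// Illustrative helper combining the brand check with a validity check.
function isValidDate(arg) {
  return Object.prototype.toString.call(arg) === '[object Date]' && !isNaN(arg);
}

console.log(isValidDate(new Date()));          // true
console.log(isValidDate(new Date('invalid'))); // false: an invalid Date
console.log(isValidDate('2020-01-01'));        // false: not a Date object
```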
An automated migration is available (source):
npx codemod@latest @nodejs/util-is
DEP0048: util.isError()#
History
| Version | Changes |
|---|---|
| v23.0.0 | End-of-Life deprecation. |
| v22.0.0 | Runtime deprecation. |
| v6.12.0, v4.8.6 | A deprecation code has been assigned. |
| v4.0.0, v3.3.1 | Documentation-only deprecation. |
Type: End-of-Life
The util.isError() API has been removed. Please use Error.isError(arg).
An automated migration is available (source):
npx codemod@latest @nodejs/util-is
DEP0049: util.isFunction()#
History
| Version | Changes |
|---|---|
| v23.0.0 | End-of-Life deprecation. |
| v22.0.0 | Runtime deprecation. |
| v6.12.0, v4.8.6 | A deprecation code has been assigned. |
| v4.0.0, v3.3.1 | Documentation-only deprecation. |
Type: End-of-Life
The util.isFunction() API has been removed. Please use typeof arg === 'function' instead.
An automated migration is available (source):
npx codemod@latest @nodejs/util-is
DEP0050: util.isNull()#
History
| Version | Changes |
|---|---|
| v23.0.0 | End-of-Life deprecation. |
| v22.0.0 | Runtime deprecation. |
| v6.12.0, v4.8.6 | A deprecation code has been assigned. |
| v4.0.0, v3.3.1 | Documentation-only deprecation. |
Type: End-of-Life
The util.isNull() API has been removed. Please use arg === null instead.
An automated migration is available (source):
npx codemod@latest @nodejs/util-is
DEP0051: util.isNullOrUndefined()#
History
| Version | Changes |
|---|---|
| v23.0.0 | End-of-Life deprecation. |
| v22.0.0 | Runtime deprecation. |
| v6.12.0, v4.8.6 | A deprecation code has been assigned. |
| v4.0.0, v3.3.1 | Documentation-only deprecation. |
Type: End-of-Life
The util.isNullOrUndefined() API has been removed. Please use arg === null || arg === undefined instead.
An automated migration is available (source):
npx codemod@latest @nodejs/util-is
DEP0052: util.isNumber()#
History
| Version | Changes |
|---|---|
| v23.0.0 | End-of-Life deprecation. |
| v22.0.0 | Runtime deprecation. |
| v6.12.0, v4.8.6 | A deprecation code has been assigned. |
| v4.0.0, v3.3.1 | Documentation-only deprecation. |
Type: End-of-Life
The util.isNumber() API has been removed. Please use typeof arg === 'number' instead.
An automated migration is available (source):
npx codemod@latest @nodejs/util-is
DEP0053: util.isObject()#
History
| Version | Changes |
|---|---|
| v23.0.0 | End-of-Life deprecation. |
| v22.0.0 | Runtime deprecation. |
| v6.12.0, v4.8.6 | A deprecation code has been assigned. |
| v4.0.0, v3.3.1 | Documentation-only deprecation. |
Type: End-of-Life
The util.isObject() API has been removed. Please use arg && typeof arg === 'object' instead.
An automated migration is available (source):
npx codemod@latest @nodejs/util-is
DEP0054: util.isPrimitive()#
History
| Version | Changes |
|---|---|
| v23.0.0 | End-of-Life deprecation. |
| v22.0.0 | Runtime deprecation. |
| v6.12.0, v4.8.6 | A deprecation code has been assigned. |
| v4.0.0, v3.3.1 | Documentation-only deprecation. |
Type: End-of-Life
The util.isPrimitive() API has been removed. Please use Object(arg) !== arg instead.
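The Object(arg) !== arg replacement works because boxing a primitive produces a fresh wrapper object, while Object() applied to an object returns the same reference; a small illustrative sketch:

```javascript
// Sketch of a util.isPrimitive() replacement: boxing changes identity
// only for primitive values.
function isPrimitive(arg) {
  return Object(arg) !== arg;
}

console.log(isPrimitive('text')); // true
console.log(isPrimitive(null));   // true: Object(null) is a new empty object
console.log(isPrimitive({}));     // false
```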
An automated migration is available (source):
npx codemod@latest @nodejs/util-is
DEP0055: util.isRegExp()#
History
| Version | Changes |
|---|---|
| v23.0.0 | End-of-Life deprecation. |
| v22.0.0 | Runtime deprecation. |
| v6.12.0, v4.8.6 | A deprecation code has been assigned. |
| v4.0.0, v3.3.1 | Documentation-only deprecation. |
Type: End-of-Life
The util.isRegExp() API has been removed. Please use arg instanceof RegExp instead.
An automated migration is available (source):
npx codemod@latest @nodejs/util-is
DEP0056: util.isString()#
History
| Version | Changes |
|---|---|
| v23.0.0 | End-of-Life deprecation. |
| v22.0.0 | Runtime deprecation. |
| v6.12.0, v4.8.6 | A deprecation code has been assigned. |
| v4.0.0, v3.3.1 | Documentation-only deprecation. |
Type: End-of-Life
The util.isString() API has been removed. Please use typeof arg === 'string' instead.
An automated migration is available (source):
npx codemod@latest @nodejs/util-is
DEP0057: util.isSymbol()#
History
| Version | Changes |
|---|---|
| v23.0.0 | End-of-Life deprecation. |
| v22.0.0 | Runtime deprecation. |
| v6.12.0, v4.8.6 | A deprecation code has been assigned. |
| v4.0.0, v3.3.1 | Documentation-only deprecation. |
Type: End-of-Life
The util.isSymbol() API has been removed. Please use typeof arg === 'symbol' instead.
An automated migration is available (source):
npx codemod@latest @nodejs/util-is
DEP0058: util.isUndefined()#
History
| Version | Changes |
|---|---|
| v23.0.0 | End-of-Life deprecation. |
| v22.0.0 | Runtime deprecation. |
| v6.12.0, v4.8.6 | A deprecation code has been assigned. |
| v4.0.0, v3.3.1 | Documentation-only deprecation. |
Type: End-of-Life
The util.isUndefined() API has been removed. Please use arg === undefined instead.
An automated migration is available (source):
npx codemod@latest @nodejs/util-is
DEP0059: util.log()#
History
| Version | Changes |
|---|---|
| v23.0.0 | End-of-Life deprecation. |
| v22.0.0 | Runtime deprecation. |
| v6.12.0 | A deprecation code has been assigned. |
| v6.0.0 | Documentation-only deprecation. |
Type: End-of-Life
The util.log() API has been removed because it's an unmaintained legacy API that was exposed to user land by accident. Instead, consider the following alternatives based on your specific needs:
- Third-party logging libraries
- Use console.log(new Date().toLocaleString(), message)
By adopting one of these alternatives, you can transition away from util.log() and choose a logging strategy that aligns with the specific requirements and complexity of your application.
An automated migration is available (source):
npx codemod@latest @nodejs/util-log-to-console-log
DEP0060: util._extend()#
History
| Version | Changes |
|---|---|
| v22.0.0 | Runtime deprecation. |
| v6.12.0 | A deprecation code has been assigned. |
| v6.0.0 | Documentation-only deprecation. |
Type: Runtime
The util._extend() API is deprecated because it's an unmaintained legacy API that was exposed to user land by accident. Please use target = Object.assign(target, source) instead.
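A minimal sketch of the Object.assign() migration (the object names here are illustrative):

```javascript
// Before: target = util._extend(target, source);
// After: Object.assign copies enumerable own properties of source onto
// target and returns target.
const target = { kept: 1, overwritten: 'old' };
const source = { overwritten: 'new', added: 2 };

Object.assign(target, source);
console.log(target); // { kept: 1, overwritten: 'new', added: 2 }
```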
An automated migration is available (source):
npx codemod@latest @nodejs/util-extend-to-object-assign
DEP0061: fs.SyncWriteStream#
History
| Version | Changes |
|---|---|
| v11.0.0 | End-of-Life. |
| v8.0.0 | Runtime deprecation. |
| v7.0.0 | Documentation-only deprecation. |
Type: End-of-Life
The fs.SyncWriteStream class was never intended to be a publicly accessible API and has been removed. No alternative API is available. Please use a userland alternative.
DEP0062: node --debug#
History
| Version | Changes |
|---|---|
| v12.0.0 | End-of-Life. |
| v8.0.0 | Runtime deprecation. |
Type: End-of-Life
--debug activates the legacy V8 debugger interface, which was removed as of V8 5.8. It is replaced by the Inspector, which is activated with --inspect instead.
DEP0063: ServerResponse.prototype.writeHeader()#
History
| Version | Changes |
|---|---|
| v25.0.0 | Runtime deprecation. |
| v8.0.0 | Documentation-only deprecation. |
Type: Runtime
The node:http module ServerResponse.prototype.writeHeader() API is deprecated. Please use ServerResponse.prototype.writeHead() instead.
The ServerResponse.prototype.writeHeader() method was never documented as an officially supported API.
DEP0064: tls.createSecurePair()#
History
| Version | Changes |
|---|---|
| v24.0.0 | End-of-Life. |
| v8.0.0 | Runtime deprecation. |
| v6.12.0 | A deprecation code has been assigned. |
| v6.0.0 | Documentation-only deprecation. |
| v0.11.15 | Deprecation revoked. |
| v0.11.3 | Runtime deprecation. |
Type: End-of-Life
The tls.createSecurePair() API was deprecated in documentation in Node.js 0.11.3. Users should use tls.TLSSocket instead.
DEP0065: repl.REPL_MODE_MAGIC and NODE_REPL_MODE=magic#
History
| Version | Changes |
|---|---|
| v10.0.0 | End-of-Life. |
| v8.0.0 | Documentation-only deprecation. |
Type: End-of-Life
The node:repl module's REPL_MODE_MAGIC constant, used for the replMode option, has been removed. Its behavior has been functionally identical to that of REPL_MODE_SLOPPY since Node.js 6.0.0, when V8 5.0 was imported. Please use REPL_MODE_SLOPPY instead.
The NODE_REPL_MODE environment variable is used to set the underlying replMode of an interactive node session. Its value, magic, is also removed. Please use sloppy instead.
DEP0066: OutgoingMessage.prototype._headers, OutgoingMessage.prototype._headerNames#
History
| Version | Changes |
|---|---|
| v24.0.0 | End-of-Life. |
| v12.0.0 | Runtime deprecation. |
| v8.0.0 | Documentation-only deprecation. |
Type: End-of-Life
The node:http module OutgoingMessage.prototype._headers and OutgoingMessage.prototype._headerNames properties are deprecated. Use one of the public methods (e.g. OutgoingMessage.prototype.getHeader(), OutgoingMessage.prototype.getHeaders(), OutgoingMessage.prototype.getHeaderNames(), OutgoingMessage.prototype.getRawHeaderNames(), OutgoingMessage.prototype.hasHeader(), OutgoingMessage.prototype.removeHeader(), OutgoingMessage.prototype.setHeader()) for working with outgoing headers.
The OutgoingMessage.prototype._headers and OutgoingMessage.prototype._headerNames properties were never documented as officially supported properties.
An automated migration is available (source):
npx codemod@latest @nodejs/http-outgoingmessage-headers
DEP0067: OutgoingMessage.prototype._renderHeaders#
History
| Version | Changes |
|---|---|
| v8.0.0 | Documentation-only deprecation. |
Type: Documentation-only
The node:http module OutgoingMessage.prototype._renderHeaders() API is deprecated.
The OutgoingMessage.prototype._renderHeaders property was never documented as an officially supported API.
DEP0068: node debug#
History
| Version | Changes |
|---|---|
| v15.0.0 | The legacy |
| v8.0.0 | Runtime deprecation. |
Type: End-of-Life
node debug corresponds to the legacy CLI debugger, which has been replaced with a V8-inspector based CLI debugger available through node inspect.
DEP0069: vm.runInDebugContext(string)#
History
| Version | Changes |
|---|---|
| v10.0.0 | End-of-Life. |
| v9.0.0 | Runtime deprecation. |
| v8.0.0 | Documentation-only deprecation. |
Type: End-of-Life
DebugContext has been removed in V8 and is not available in Node.js 10+.
DebugContext was an experimental API.
DEP0070: async_hooks.currentId()#
History
| Version | Changes |
|---|---|
| v9.0.0 | End-of-Life. |
| v8.2.0 | Runtime deprecation. |
Type: End-of-Life
async_hooks.currentId() was renamed to async_hooks.executionAsyncId() for clarity.
This change was made while async_hooks was an experimental API.
DEP0071: async_hooks.triggerId()#
History
| Version | Changes |
|---|---|
| v9.0.0 | End-of-Life. |
| v8.2.0 | Runtime deprecation. |
Type: End-of-Life
async_hooks.triggerId() was renamed to async_hooks.triggerAsyncId() for clarity.
This change was made while async_hooks was an experimental API.
DEP0072: async_hooks.AsyncResource.triggerId()#
History
| Version | Changes |
|---|---|
| v9.0.0 | End-of-Life. |
| v8.2.0 | Runtime deprecation. |
Type: End-of-Life
async_hooks.AsyncResource.triggerId() was renamed to async_hooks.AsyncResource.triggerAsyncId() for clarity.
This change was made while async_hooks was an experimental API.
DEP0073: Several internal properties of net.Server#
History
| Version | Changes |
|---|---|
| v10.0.0 | End-of-Life. |
| v9.0.0 | Runtime deprecation. |
Type: End-of-Life
Accessing several internal, undocumented properties of net.Server instances with inappropriate names is deprecated.
As the original API was undocumented and not generally useful for non-internal code, no replacement API is provided.
DEP0074: REPLServer.bufferedCommand#
History
| Version | Changes |
|---|---|
| v15.0.0 | End-of-Life. |
| v9.0.0 | Runtime deprecation. |
Type: End-of-Life
The REPLServer.bufferedCommand property was deprecated in favor of REPLServer.clearBufferedCommand().
DEP0075: REPLServer.parseREPLKeyword()#
History
| Version | Changes |
|---|---|
| v15.0.0 | End-of-Life. |
| v9.0.0 | Runtime deprecation. |
Type: End-of-Life
REPLServer.parseREPLKeyword() was removed from userland visibility.
DEP0076: tls.parseCertString()#
History
| Version | Changes |
|---|---|
| v18.0.0 | End-of-Life. |
| v9.0.0 | Runtime deprecation. |
| v8.6.0 | Documentation-only deprecation. |
Type: End-of-Life
tls.parseCertString() was a trivial parsing helper that was made public by mistake. While it was supposed to parse certificate subject and issuer strings, it never handled multi-value Relative Distinguished Names correctly.
Earlier versions of this document suggested using querystring.parse() as an alternative to tls.parseCertString(). However, querystring.parse() also does not handle all certificate subjects correctly and should not be used.
DEP0077: Module._debug()#
History
| Version | Changes |
|---|---|
| v25.0.0 | End-of-Life. |
| v9.0.0 | Runtime deprecation. |
Type: End-of-Life
Module._debug() has been removed.
The Module._debug() function was never documented as an officially supported API.
DEP0078: REPLServer.turnOffEditorMode()#
History
| Version | Changes |
|---|---|
| v15.0.0 | End-of-Life. |
| v9.0.0 | Runtime deprecation. |
Type: End-of-Life
REPLServer.turnOffEditorMode() was removed from userland visibility.
DEP0079: Custom inspection function on objects via .inspect()#
History
| Version | Changes |
|---|---|
| v11.0.0 | End-of-Life. |
| v10.0.0 | Runtime deprecation. |
| v8.7.0 | Documentation-only deprecation. |
Type: End-of-Life
Using a property named inspect on an object to specify a custom inspection function for util.inspect() is deprecated. Use util.inspect.custom instead. For backward compatibility with Node.js prior to version 6.4.0, both can be specified.
DEP0080: path._makeLong()#
History
| Version | Changes |
|---|---|
| v9.0.0 | Documentation-only deprecation. |
Type: Documentation-only
The internal path._makeLong() was not intended for public use. However, userland modules have found it useful. The internal API is deprecated and replaced with an identical, public path.toNamespacedPath() method.
DEP0081: fs.truncate() using a file descriptor#
History
| Version | Changes |
|---|---|
| v24.0.0 | End-of-Life. |
| v9.0.0 | Runtime deprecation. |
Type: End-of-Life
fs.truncate() and fs.truncateSync() usage with a file descriptor is deprecated. Please use fs.ftruncate() or fs.ftruncateSync() to work with file descriptors.
An automated migration is available (source):
npx codemod@latest @nodejs/fs-truncate-fd-deprecation
DEP0082: REPLServer.prototype.memory()#
History
| Version | Changes |
|---|---|
| v15.0.0 | End-of-Life. |
| v9.0.0 | Runtime deprecation. |
Type: End-of-Life
REPLServer.prototype.memory() is only necessary for the internal mechanics of the REPLServer itself. Do not use this function.
DEP0083: Disabling ECDH by setting ecdhCurve to false#
History
| Version | Changes |
|---|---|
| v10.0.0 | End-of-Life. |
| v9.2.0 | Runtime deprecation. |
Type: End-of-Life
The ecdhCurve option to tls.createSecureContext() and tls.TLSSocket could be set to false to disable ECDH entirely on the server only. This mode was deprecated in preparation for migrating to OpenSSL 1.1.0 and consistency with the client and is now unsupported. Use the ciphers parameter instead.
DEP0084: requiring bundled internal dependencies#
History
| Version | Changes |
|---|---|
| v12.0.0 | This functionality has been removed. |
| v10.0.0 | Runtime deprecation. |
Type: End-of-Life
Since Node.js versions 4.4.0 and 5.2.0, several modules only intended for internal usage were mistakenly exposed to user code through require(). These modules were:
- v8/tools/codemap
- v8/tools/consarray
- v8/tools/csvparser
- v8/tools/logreader
- v8/tools/profile_view
- v8/tools/profile
- v8/tools/SourceMap
- v8/tools/splaytree
- v8/tools/tickprocessor-driver
- v8/tools/tickprocessor
- node-inspect/lib/_inspect (from 7.6.0)
- node-inspect/lib/internal/inspect_client (from 7.6.0)
- node-inspect/lib/internal/inspect_repl (from 7.6.0)
The v8/* modules do not have any exports, and if not imported in a specific order would in fact throw errors. As such, there are virtually no legitimate use cases for importing them through require().
On the other hand, node-inspect can be installed locally through a package manager, as it is published on the npm registry under the same name. No source code modification is necessary if that is done.
DEP0085: AsyncHooks sensitive API#
History
| Version | Changes |
|---|---|
| v10.0.0 | End-of-Life. |
| v9.4.0, v8.10.0 | Runtime deprecation. |
Type: End-of-Life
The AsyncHooks sensitive API was never documented and had various minor issues. Use the AsyncResource API instead. See https://github.com/nodejs/node/issues/15572.
DEP0086: Remove runInAsyncIdScope#
History
| Version | Changes |
|---|---|
| v10.0.0 | End-of-Life. |
| v9.4.0, v8.10.0 | Runtime deprecation. |
Type: End-of-Life
runInAsyncIdScope doesn't emit the 'before' or 'after' event and can thus cause a lot of issues. See https://github.com/nodejs/node/issues/14328.
DEP0089: require('node:assert')#
History
| Version | Changes |
|---|---|
| v12.8.0 | Deprecation revoked. |
| v9.9.0, v8.13.0 | Documentation-only deprecation. |
Type: Deprecation revoked
Importing assert directly was not recommended as the exposed functions use loose equality checks. The deprecation was revoked because use of the node:assert module is not discouraged, and the deprecation caused developer confusion.
DEP0090: Invalid GCM authentication tag lengths#
History
| Version | Changes |
|---|---|
| v11.0.0 | End-of-Life. |
| v10.0.0 | Runtime deprecation. |
Type: End-of-Life
Node.js used to support all GCM authentication tag lengths which are accepted by OpenSSL when calling decipher.setAuthTag(). Beginning with Node.js v11.0.0, only authentication tag lengths of 128, 120, 112, 104, 96, 64, and 32 bits are allowed. Authentication tags of other lengths are invalid per NIST SP 800-38D.
DEP0091: crypto.DEFAULT_ENCODING#
History
| Version | Changes |
|---|---|
| v20.0.0 | End-of-Life. |
| v10.0.0 | Runtime deprecation. |
Type: End-of-Life
The crypto.DEFAULT_ENCODING property only existed for compatibility with Node.js releases prior to versions 0.9.3 and has been removed.
DEP0092: Top-level this bound to module.exports#
History
| Version | Changes |
|---|---|
| v10.0.0 | Documentation-only deprecation. |
Type: Documentation-only
Assigning properties to the top-level this as an alternative to module.exports is deprecated. Developers should use exports or module.exports instead.
DEP0093: crypto.fips is deprecated and replaced#
History
| Version | Changes |
|---|---|
| v23.0.0 | Runtime deprecation. |
| v10.0.0 | Documentation-only deprecation. |
Type: Runtime
The crypto.fips property is deprecated. Please use crypto.setFips() and crypto.getFips() instead.
An automated migration is available (source).
npx codemod@latest @nodejs/crypto-fips-to-getFips
DEP0094: Using assert.fail() with more than one argument#
History
| Version | Changes |
|---|---|
| v25.0.0 | End-of-Life. |
| v10.0.0 | Runtime deprecation. |
Type: End-of-Life
Using assert.fail() with more than one argument is deprecated. Use assert.fail() with only one argument or use a different node:assert module method.
DEP0095: timers.enroll()#
History
| Version | Changes |
|---|---|
| v24.0.0 | End-of-Life. |
| v10.0.0 | Runtime deprecation. |
Type: End-of-Life
timers.enroll() has been removed. Please use the publicly documented setTimeout() or setInterval() instead.
DEP0096: timers.unenroll()#
History
| Version | Changes |
|---|---|
| v24.0.0 | End-of-Life. |
| v10.0.0 | Runtime deprecation. |
Type: End-of-Life
timers.unenroll() has been removed. Please use the publicly documented clearTimeout() or clearInterval() instead.
DEP0097: MakeCallback with domain property#
History
| Version | Changes |
|---|---|
| v10.0.0 | Runtime deprecation. |
Type: Runtime
Users of MakeCallback that add the domain property to carry context should start using the async_context variant of MakeCallback or CallbackScope, or the high-level AsyncResource class.
DEP0098: AsyncHooks embedder AsyncResource.emitBefore and AsyncResource.emitAfter APIs#
History
| Version | Changes |
|---|---|
| v12.0.0 | End-of-Life. |
| v10.0.0, v9.6.0, v8.12.0 | Runtime deprecation. |
Type: End-of-Life
The embedder API provided by AsyncHooks exposes .emitBefore() and .emitAfter() methods which are very easy to use incorrectly, which can lead to unrecoverable errors.
Use the asyncResource.runInAsyncScope() API instead, which provides a much safer and more convenient alternative. See https://github.com/nodejs/node/pull/18513.
DEP0099: Async context-unaware node::MakeCallback C++ APIs#
History
| Version | Changes |
|---|---|
| v10.0.0 | Compile-time deprecation. |
Type: Compile-time
Certain versions of node::MakeCallback APIs available to native addons are deprecated. Please use the versions of the API that accept an async_context parameter.
DEP0100: process.assert()#
History
| Version | Changes |
|---|---|
| v23.0.0 | End-of-Life. |
| v10.0.0 | Runtime deprecation. |
| v0.3.7 | Documentation-only deprecation. |
Type: End-of-Life
process.assert() is deprecated. Please use the assert module instead.
This was never a documented feature.
An automated migration is available (source).
npx codemod@latest @nodejs/process-assert-to-node-assert
DEP0101: --with-lttng#
History
| Version | Changes |
|---|---|
| v10.0.0 | End-of-Life. |
Type: End-of-Life
The --with-lttng compile-time option has been removed.
DEP0102: Using noAssert in Buffer#(read|write) operations#
History
| Version | Changes |
|---|---|
| v10.0.0 | End-of-Life. |
Type: End-of-Life
Using the noAssert argument has no functionality anymore. All input is verified regardless of the value of noAssert. Skipping the verification could lead to hard-to-find errors and crashes.
DEP0103: process.binding('util').is[...] typechecks#
History
| Version | Changes |
|---|---|
| v10.9.0 | Superseded by DEP0111. |
| v10.0.0 | Documentation-only deprecation. |
Type: Documentation-only (supports--pending-deprecation)
Using process.binding() in general should be avoided. The type checking methods in particular can be replaced by using util.types.
This deprecation has been superseded by the deprecation of the process.binding() API (DEP0111).
DEP0104: process.env string coercion#
History
| Version | Changes |
|---|---|
| v10.0.0 | Documentation-only deprecation with |
Type: Documentation-only (supports--pending-deprecation)
When assigning a non-string property to process.env, the assigned value is implicitly converted to a string. This behavior is deprecated if the assigned value is not a string, boolean, or number. In the future, such assignment might result in a thrown error. Please convert the property to a string before assigning it to process.env.
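A short sketch of the coercion; the environment variable name is arbitrary:

```javascript
// Assigning a number is implicitly coerced to a string:
process.env.EXAMPLE_PORT = 3000;
console.log(typeof process.env.EXAMPLE_PORT); // 'string'

// Preferred: convert explicitly before assigning.
process.env.EXAMPLE_PORT = String(3000);
console.log(process.env.EXAMPLE_PORT); // '3000'
```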
DEP0105: decipher.finaltol#
History
| Version | Changes |
|---|---|
| v11.0.0 | End-of-Life. |
| v10.0.0 | Runtime deprecation. |
Type: End-of-Life
decipher.finaltol() has never been documented and was an alias for decipher.final(). This API has been removed, and it is recommended to use decipher.final() instead.
DEP0106: crypto.createCipher and crypto.createDecipher#
History
| Version | Changes |
|---|---|
| v22.0.0 | End-of-Life. |
| v11.0.0 | Runtime deprecation. |
| v10.0.0 | Documentation-only deprecation. |
Type: End-of-Life
crypto.createCipher() and crypto.createDecipher() have been removed as they use a weak key derivation function (MD5 with no salt) and static initialization vectors. It is recommended to derive a key using crypto.pbkdf2() or crypto.scrypt() with random salts and to use crypto.createCipheriv() and crypto.createDecipheriv() to obtain the Cipheriv and Decipheriv objects respectively.
DEP0107: tls.convertNPNProtocols()#
History
| Version | Changes |
|---|---|
| v11.0.0 | End-of-Life. |
| v10.0.0 | Runtime deprecation. |
Type: End-of-Life
This was an undocumented helper function not intended for use outside Node.js core and obsoleted by the removal of NPN (Next Protocol Negotiation) support.
DEP0108: zlib.bytesRead#
History
| Version | Changes |
|---|---|
| v23.0.0 | End-of-Life. |
| v11.0.0 | Runtime deprecation. |
| v10.0.0 | Documentation-only deprecation. |
Type: End-of-Life
Deprecated alias for zlib.bytesWritten. The original name was chosen because it also made sense to interpret the value as the number of bytes read by the engine, but it is inconsistent with other streams in Node.js that expose values under these names.
An automated migration is available (source):
```bash
npx codemod@latest @nodejs/zlib-bytesread-to-byteswritten
```
DEP0109: http, https, and tls support for invalid URLs#
History
| Version | Changes |
|---|---|
| v16.0.0 | End-of-Life. |
| v11.0.0 | Runtime deprecation. |
Type: End-of-Life
Some previously supported (but strictly invalid) URLs were accepted through the http.request(), http.get(), https.request(), https.get(), and tls.checkServerIdentity() APIs because those were accepted by the legacy url.parse() API. The mentioned APIs now use the WHATWG URL parser that requires strictly valid URLs. Passing an invalid URL is deprecated and support will be removed in the future.
DEP0110: vm.Script cached data#
History
| Version | Changes |
|---|---|
| v10.6.0 | Documentation-only deprecation. |
Type: Documentation-only
The produceCachedData option is deprecated. Use script.createCachedData() instead.
DEP0111: process.binding()#
History
| Version | Changes |
|---|---|
| v11.12.0 | Added support for --pending-deprecation. |
| v10.9.0 | Documentation-only deprecation. |
Type: Documentation-only (supports--pending-deprecation)
process.binding() is for use by Node.js internal code only.
While process.binding() has not reached End-of-Life status in general, it is unavailable when the permission model is enabled.
DEP0112: dgram private APIs#
History
| Version | Changes |
|---|---|
| v25.0.0 | End-of-Life. |
| v11.0.0 | Runtime deprecation. |
Type: End-of-Life
The node:dgram module previously contained several APIs that were never meant to be accessed outside of Node.js core: Socket.prototype._handle, Socket.prototype._receiving, Socket.prototype._bindState, Socket.prototype._queue, Socket.prototype._reuseAddr, Socket.prototype._healthCheck(), Socket.prototype._stopReceiving(), and dgram._createSocketHandle(). These have been removed.
DEP0113: Cipher.setAuthTag(), Decipher.getAuthTag()#
History
| Version | Changes |
|---|---|
| v12.0.0 | End-of-Life. |
| v11.0.0 | Runtime deprecation. |
Type: End-of-Life
Cipher.setAuthTag() and Decipher.getAuthTag() are no longer available. They were never documented and would throw when called.
DEP0114: crypto._toBuf()#
History
| Version | Changes |
|---|---|
| v12.0.0 | End-of-Life. |
| v11.0.0 | Runtime deprecation. |
Type: End-of-Life
The crypto._toBuf() function was not designed to be used by modules outside of Node.js core and was removed.
DEP0115: crypto.prng(), crypto.pseudoRandomBytes(), crypto.rng()#
History
| Version | Changes |
|---|---|
| v11.0.0 | Documentation-only deprecation with --pending-deprecation support. |
Type: Documentation-only (supports--pending-deprecation)
In recent versions of Node.js, there is no difference between crypto.randomBytes() and crypto.pseudoRandomBytes(). The latter is deprecated along with the undocumented aliases crypto.prng() and crypto.rng() in favor of crypto.randomBytes() and might be removed in a future release.
DEP0116: Legacy URL API#
History
| Version | Changes |
|---|---|
| v19.0.0, v18.13.0 | `url.parse()` is deprecated again in DEP0169. |
| v15.13.0, v14.17.0 | Deprecation revoked. Status changed to "Legacy". |
| v11.0.0 | Documentation-only deprecation. |
Type: Deprecation revoked
The legacy URL API is deprecated. This includes url.format(), url.parse(), url.resolve(), and the legacy urlObject. Please use the WHATWG URL API instead.
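For example, a url.parse() call can usually be replaced with the WHATWG URL constructor, which exposes the same components as properties:

```js
// The URL class is a global in modern Node.js; no require is needed.
const parsed = new URL('https://example.org/p?q=1');
console.log(parsed.hostname);              // 'example.org'
console.log(parsed.pathname);              // '/p'
console.log(parsed.searchParams.get('q')); // '1'
```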
An automated migration is available (source).
```bash
npx codemod@latest @nodejs/node-url-to-whatwg-url
```
DEP0117: Native crypto handles#
History
| Version | Changes |
|---|---|
| v12.0.0 | End-of-Life. |
| v11.0.0 | Runtime deprecation. |
Type: End-of-Life
Previous versions of Node.js exposed handles to internal native objects through the _handle property of the Cipher, Decipher, DiffieHellman, DiffieHellmanGroup, ECDH, Hash, Hmac, Sign, and Verify classes. The _handle property has been removed because improper use of the native object can lead to crashing the application.
DEP0118: dns.lookup() support for a falsy host name#
History
| Version | Changes |
|---|---|
| v25.0.0 | End-of-Life. |
| v11.0.0 | Runtime deprecation. |
Type: End-of-Life
Previous versions of Node.js supported dns.lookup() with a falsy host name like dns.lookup(false) due to backward compatibility. This has been removed.
DEP0119: process.binding('uv').errname() private API#
History
| Version | Changes |
|---|---|
| v11.0.0 | Documentation-only deprecation with --pending-deprecation support. |
Type: Documentation-only (supports--pending-deprecation)
process.binding('uv').errname() is deprecated. Please use util.getSystemErrorName() instead.
DEP0120: Windows Performance Counter support#
History
| Version | Changes |
|---|---|
| v12.0.0 | End-of-Life. |
| v11.0.0 | Runtime deprecation. |
Type: End-of-Life
Windows Performance Counter support has been removed from Node.js. The undocumented COUNTER_NET_SERVER_CONNECTION(), COUNTER_NET_SERVER_CONNECTION_CLOSE(), COUNTER_HTTP_SERVER_REQUEST(), COUNTER_HTTP_SERVER_RESPONSE(), COUNTER_HTTP_CLIENT_REQUEST(), and COUNTER_HTTP_CLIENT_RESPONSE() functions have been deprecated.
DEP0121: net._setSimultaneousAccepts()#
History
| Version | Changes |
|---|---|
| v24.0.0 | End-of-Life. |
| v12.0.0 | Runtime deprecation. |
Type: End-of-Life
The undocumented net._setSimultaneousAccepts() function was originally intended for debugging and performance tuning when using the node:child_process and node:cluster modules on Windows. The function is not generally useful and is being removed. See the discussion here: https://github.com/nodejs/node/issues/18391
DEP0122: tlsServer.prototype.setOptions()#
History
| Version | Changes |
|---|---|
| v24.0.0 | End-of-Life. |
| v12.0.0 | Runtime deprecation. |
Type: End-of-Life
Please use Server.prototype.setSecureContext() instead.
DEP0123: setting the TLS ServerName to an IP address#
History
| Version | Changes |
|---|---|
| v25.0.0 | End-of-Life. |
| v12.0.0 | Runtime deprecation. |
Type: End-of-Life
Setting the TLS ServerName to an IP address is not permitted by RFC 6066.
DEP0124: using REPLServer.rli#
History
| Version | Changes |
|---|---|
| v15.0.0 | End-of-Life. |
| v12.0.0 | Runtime deprecation. |
Type: End-of-Life
This property is a reference to the instance itself.
DEP0125: require('node:_stream_wrap')#
History
| Version | Changes |
|---|---|
| v12.0.0 | Runtime deprecation. |
Type: Runtime
The node:_stream_wrap module is deprecated.
DEP0126: timers.active()#
History
| Version | Changes |
|---|---|
| v24.0.0 | End-of-Life. |
| v11.14.0 | Runtime deprecation. |
Type: End-of-Life
The previously undocumented timers.active() has been removed. Please use the publicly documented timeout.refresh() instead. If re-referencing the timeout is necessary, timeout.ref() can be used with no performance impact since Node.js 10.
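A minimal sketch of the documented replacement:

```js
// timeout.refresh() restarts a timer's countdown from now,
// replacing the removed timers.active().
const t = setTimeout(() => console.log('fired'), 1000);
t.refresh();     // restart the countdown
t.unref();       // optional: do not keep the event loop alive for this timer
clearTimeout(t); // cleanup for this example
```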
DEP0127: timers._unrefActive()#
History
| Version | Changes |
|---|---|
| v24.0.0 | End-of-Life. |
| v11.14.0 | Runtime deprecation. |
Type: End-of-Life
The previously undocumented and "private" timers._unrefActive() has been removed. Please use the publicly documented timeout.refresh() instead. If unreferencing the timeout is necessary, timeout.unref() can be used with no performance impact since Node.js 10.
DEP0128: modules with an invalid main entry and an index.js file#
History
| Version | Changes |
|---|---|
| v16.0.0 | Runtime deprecation. |
| v12.0.0 | Documentation-only. |
Type: Runtime
Modules that have an invalid main entry (e.g., ./does-not-exist.js) and also have an index.js file in the top level directory will resolve the index.js file. That is deprecated and is going to throw an error in future Node.js versions.
DEP0129: ChildProcess._channel#
History
| Version | Changes |
|---|---|
| v25.0.0 | End-of-Life. |
| v13.0.0 | Runtime deprecation. |
| v11.14.0 | Documentation-only. |
Type: End-of-Life
The _channel property of child process objects returned by spawn() and similar functions is not intended for public use. Use ChildProcess.channel instead.
DEP0130: Module.createRequireFromPath()#
History
| Version | Changes |
|---|---|
| v16.0.0 | End-of-Life. |
| v13.0.0 | Runtime deprecation. |
| v12.2.0 | Documentation-only. |
Type: End-of-Life
Use module.createRequire() instead.
An automated migration is available (source):
```bash
npx codemod@latest @nodejs/create-require-from-path
```
DEP0131: Legacy HTTP parser#
History
| Version | Changes |
|---|---|
| v13.0.0 | This feature has been removed. |
| v12.22.0 | Runtime deprecation. |
| v12.3.0 | Documentation-only. |
Type: End-of-Life
The legacy HTTP parser, used by default in versions of Node.js prior to 12.0.0, is deprecated and has been removed in v13.0.0. Prior to v13.0.0, the --http-parser=legacy command-line flag could be used to revert to using the legacy parser.
DEP0132: worker.terminate() with callback#
History
| Version | Changes |
|---|---|
| v25.0.0 | End-of-Life. |
| v12.5.0 | Runtime deprecation. |
Type: End-of-Life
Passing a callback to worker.terminate() is deprecated. Use the returned Promise instead, or a listener on the worker's 'exit' event.
DEP0133: http connection#
History
| Version | Changes |
|---|---|
| v12.12.0 | Documentation-only deprecation. |
Type: Documentation-only
Prefer response.socket over response.connection and request.socket over request.connection.
DEP0134: process._tickCallback#
History
| Version | Changes |
|---|---|
| v12.12.0 | Documentation-only deprecation with --pending-deprecation support. |
Type: Documentation-only (supports--pending-deprecation)
The process._tickCallback property was never documented as an officially supported API.
DEP0135: WriteStream.open() and ReadStream.open() are internal#
History
| Version | Changes |
|---|---|
| v25.0.0 | End-of-Life. |
| v13.0.0 | Runtime deprecation. |
Type: End-of-Life
WriteStream.open() and ReadStream.open() are undocumented internal APIs that do not make sense to use in userland. File streams should always be opened through their corresponding factory methods (fs.createWriteStream() and fs.createReadStream()) or by passing a file descriptor in options.
DEP0136: http finished#
History
| Version | Changes |
|---|---|
| v13.4.0, v12.16.0 | Documentation-only deprecation. |
Type: Documentation-only
response.finished indicates whether response.end() has been called, not whether 'finish' has been emitted and the underlying data is flushed.
Use response.writableFinished or response.writableEnded accordingly instead to avoid the ambiguity.
To maintain existing behavior, response.finished should be replaced with response.writableEnded.
DEP0137: Closing fs.FileHandle on garbage collection#
History
| Version | Changes |
|---|---|
| v25.0.0 | End-of-Life. |
| v14.0.0 | Runtime deprecation. |
Type: End-of-Life
Closing a fs.FileHandle object on garbage collection used to be allowed, but now throws an error.
Please ensure that all fs.FileHandle objects are explicitly closed using FileHandle.prototype.close() when the fs.FileHandle is no longer needed:
```js
const fsPromises = require('node:fs').promises;
async function openAndClose() {
  let filehandle;
  try {
    filehandle = await fsPromises.open('thefile.txt', 'r');
  } finally {
    if (filehandle !== undefined)
      await filehandle.close();
  }
}
```
DEP0138: process.mainModule#
History
| Version | Changes |
|---|---|
| v14.0.0 | Documentation-only deprecation. |
Type: Documentation-only
process.mainModule is a CommonJS-only feature, while the process global object is shared with non-CommonJS environments. Its use within ECMAScript modules is unsupported.
It is deprecated in favor of require.main, because it serves the same purpose and is only available in CommonJS environments.
An automated migration is available (source):
```bash
npx codemod@latest @nodejs/process-main-module
```
DEP0139: process.umask() with no arguments#
History
| Version | Changes |
|---|---|
| v14.0.0, v12.19.0 | Documentation-only deprecation. |
Type: Documentation-only
Callingprocess.umask() with no argument causes the process-wide umask to bewritten twice. This introduces a race condition between threads, and is apotential security vulnerability. There is no safe, cross-platform alternativeAPI.
DEP0140: Use request.destroy() instead of request.abort()#
History
| Version | Changes |
|---|---|
| v14.1.0, v13.14.0 | Documentation-only deprecation. |
Type: Documentation-only
Use request.destroy() instead of request.abort().
DEP0141: repl.inputStream and repl.outputStream#
History
| Version | Changes |
|---|---|
| v14.3.0 | Documentation-only deprecation with --pending-deprecation support. |
Type: Documentation-only (supports--pending-deprecation)
The node:repl module exported the input and output stream twice. Use .input instead of .inputStream and .output instead of .outputStream.
DEP0142: repl._builtinLibs#
History
| Version | Changes |
|---|---|
| v14.3.0 | Documentation-only deprecation with --pending-deprecation support. |
Type: Documentation-only (supports--pending-deprecation)
The node:repl module exports a _builtinLibs property that contains an array of built-in modules. It was incomplete; instead, it is better to rely upon require('node:module').builtinModules.
An automated migration is available (source):
```bash
npx codemod@latest @nodejs/repl-builtin-modules
```
DEP0143: Transform._transformState#
History
| Version | Changes |
|---|---|
| v15.0.0 | End-of-Life. |
| v14.5.0 | Runtime deprecation. |
Type: End-of-Life
Transform._transformState will be removed in future versions where it is no longer required due to simplification of the implementation.
DEP0144: module.parent#
History
| Version | Changes |
|---|---|
| v14.6.0, v12.19.0 | Documentation-only deprecation. |
Type: Documentation-only (supports--pending-deprecation)
A CommonJS module can access the first module that required it using module.parent. This feature is deprecated because it does not work consistently in the presence of ECMAScript modules and because it gives an inaccurate representation of the CommonJS module graph.
Some modules use it to check if they are the entry point of the current process. Instead, it is recommended to compare require.main and module:
```js
if (require.main === module) {
  // Code section that will run only if current file is the entry point.
}
```
When looking for the CommonJS modules that have required the current one, require.cache and module.children can be used:
```js
const moduleParents = Object.values(require.cache)
  .filter((m) => m.children.includes(module));
```
DEP0145: socket.bufferSize#
History
| Version | Changes |
|---|---|
| v14.6.0 | Documentation-only deprecation. |
Type: Documentation-only
socket.bufferSize is just an alias for writable.writableLength.
DEP0146: new crypto.Certificate()#
History
| Version | Changes |
|---|---|
| v14.9.0 | Documentation-only deprecation. |
Type: Documentation-only
The crypto.Certificate() constructor is deprecated. Use static methods of crypto.Certificate() instead.
DEP0147: fs.rmdir(path, { recursive: true })#
History
| Version | Changes |
|---|---|
| v25.0.0 | End-of-Life. |
| v16.0.0 | Runtime deprecation. |
| v15.0.0 | Runtime deprecation for permissive behavior. |
| v14.14.0 | Documentation-only deprecation. |
Type: End-of-Life
The fs.rmdir, fs.rmdirSync, and fs.promises.rmdir methods used to support a recursive option. That option has been removed.
Use fs.rm(path, { recursive: true, force: true }), fs.rmSync(path, { recursive: true, force: true }), or fs.promises.rm(path, { recursive: true, force: true }) instead.
An automated migration is available (source):
```bash
npx codemod@latest @nodejs/rmdir
```
DEP0148: Folder mappings in "exports" (trailing "/")#
History
| Version | Changes |
|---|---|
| v17.0.0 | End-of-Life. |
| v16.0.0 | Runtime deprecation. |
| v15.1.0 | Runtime deprecation for self-referencing imports. |
| v14.13.0 | Documentation-only deprecation. |
Type: End-of-Life
Using a trailing "/" to define subpath folder mappings in the subpath exports or subpath imports fields is no longer supported. Use subpath patterns instead.
DEP0149: http.IncomingMessage#connection#
History
| Version | Changes |
|---|---|
| v16.0.0 | Documentation-only deprecation. |
Type: Documentation-only
Prefer message.socket over message.connection.
DEP0150: Changing the value of process.config#
History
| Version | Changes |
|---|---|
| v19.0.0 | End-of-Life. |
| v16.0.0 | Runtime deprecation. |
Type: End-of-Life
Theprocess.config property provides access to Node.js compile-time settings.However, the property is mutable and therefore subject to tampering. The abilityto change the value will be removed in a future version of Node.js.
DEP0151: Main index lookup and extension searching#
History
| Version | Changes |
|---|---|
| v16.0.0 | Runtime deprecation. |
| v15.8.0, v14.18.0 | Documentation-only deprecation with --pending-deprecation support. |
Type: Runtime
Previously, index.js and extension searching lookups would apply to import 'pkg' main entry point resolution, even when resolving ES modules.
With this deprecation, all ES module main entry point resolutions require an explicit "exports" or "main" entry with the exact file extension.
DEP0152: Extension PerformanceEntry properties#
History
| Version | Changes |
|---|---|
| v25.0.0 | End-of-Life. |
| v16.0.0 | Runtime deprecation. |
Type: End-of-Life
The 'gc', 'http2', and 'http' <PerformanceEntry> object types used to have additional properties assigned to them that provide additional information. These properties are now available within the standard detail property of the PerformanceEntry object. The deprecated accessors have been removed.
DEP0153: dns.lookup and dnsPromises.lookup options type coercion#
History
| Version | Changes |
|---|---|
| v18.0.0 | End-of-Life. |
| v17.0.0 | Runtime deprecation. |
| v16.8.0 | Documentation-only deprecation. |
Type: End-of-Life
Using a non-nullish non-integer value for the family option, a non-nullish non-number value for the hints option, a non-nullish non-boolean value for the all option, or a non-nullish non-boolean value for the verbatim option in dns.lookup() and dnsPromises.lookup() throws an ERR_INVALID_ARG_TYPE error.
DEP0154: RSA-PSS generate key pair options#
History
| Version | Changes |
|---|---|
| v25.0.0 | End-of-Life. |
| v20.0.0 | Runtime deprecation. |
| v16.10.0 | Documentation-only deprecation. |
Type: End-of-Life
Use 'hashAlgorithm' instead of 'hash', and 'mgf1HashAlgorithm' instead of 'mgf1Hash'.
An automated migration is available (source):
```bash
npx codemod@latest @nodejs/crypto-rsa-pss-update
```
DEP0155: Trailing slashes in pattern specifier resolutions#
History
| Version | Changes |
|---|---|
| v17.0.0 | Runtime deprecation. |
| v16.10.0 | Documentation-only deprecation with --pending-deprecation support. |
Type: Runtime
The remapping of specifiers ending in "/" like import 'pkg/x/' is deprecated for package "exports" and "imports" pattern resolutions.
DEP0156: .aborted property and 'abort', 'aborted' event in http#
History
| Version | Changes |
|---|---|
| v17.0.0, v16.12.0 | Documentation-only deprecation. |
Type: Documentation-only
Move to the <Stream> API instead, as http.ClientRequest, http.ServerResponse, and http.IncomingMessage are all stream-based. Check stream.destroyed instead of the .aborted property, and listen for 'close' instead of the 'abort' and 'aborted' events.
The .aborted property and 'abort' event are only useful for detecting .abort() calls. For closing a request early, use the Stream .destroy([error]) method; checking the .destroyed property and listening for the 'close' event should have the same effect. The receiving end should also check the readable.readableEnded value on http.IncomingMessage to determine whether it was an aborted or graceful destroy.
DEP0157: Thenable support in streams#
History
| Version | Changes |
|---|---|
| v18.0.0 | End-of-Life. |
| v17.2.0, v16.14.0 | Documentation-only deprecation. |
Type: End-of-Life
An undocumented feature of Node.js streams was to support thenables in implementation methods. This is now deprecated; use callbacks instead, and avoid using async functions for stream implementation methods.
This feature caused users to encounter unexpected problems when implementing a function in callback style while also using, for example, an async method, which would cause an error since mixing promise and callback semantics is not valid.
```js
const w = new Writable({
  async final(callback) {
    await someOp();
    callback();
  },
});
```
DEP0158: buffer.slice(start, end)#
History
| Version | Changes |
|---|---|
| v17.5.0, v16.15.0 | Documentation-only deprecation. |
Type: Documentation-only
This method was deprecated because it is not compatible with Uint8Array.prototype.slice(), which is a superclass of Buffer.
Use buffer.subarray, which does the same thing, instead.
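Like Buffer's slice() did, subarray() returns a view over the same memory rather than a copy:

```js
const buf = Buffer.from('hello world');
const sub = buf.subarray(0, 5); // a view, not a copy
console.log(sub.toString());    // 'hello'

sub[0] = 0x48; // 'H' — writing through the view mutates the parent buffer
console.log(buf.toString());    // 'Hello world'
```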
DEP0159: ERR_INVALID_CALLBACK#
History
| Version | Changes |
|---|---|
| v18.0.0 | End-of-Life. |
Type: End-of-Life
This error code was removed because it added confusion to the errors used for value type validation.
DEP0160: process.on('multipleResolves', handler)#
History
| Version | Changes |
|---|---|
| v25.0.0 | End-of-Life. |
| v18.0.0 | Runtime deprecation. |
| v17.6.0, v16.15.0 | Documentation-only deprecation. |
Type: End-of-Life
This event was deprecated and removed because it did not work with V8 promise combinators, which diminished its usefulness.
DEP0161: process._getActiveRequests() and process._getActiveHandles()#
History
| Version | Changes |
|---|---|
| v17.6.0, v16.15.0 | Documentation-only deprecation. |
Type: Documentation-only
The process._getActiveHandles() and process._getActiveRequests() functions are not intended for public use and can be removed in future releases.
Use process.getActiveResourcesInfo() to get a list of types of active resources and not the actual references.
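A minimal sketch of the replacement; note that it returns resource type names as strings, not handle references:

```js
const t = setTimeout(() => {}, 1000);

// An array of strings such as 'Timeout', not the handle objects themselves.
const resources = process.getActiveResourcesInfo();
console.log(resources.includes('Timeout')); // true while the timer is pending

clearTimeout(t);
```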
DEP0162: fs.write(), fs.writeFileSync() coercion to string#
History
| Version | Changes |
|---|---|
| v19.0.0 | End-of-Life. |
| v18.0.0 | Runtime deprecation. |
| v17.8.0, v16.15.0 | Documentation-only deprecation. |
Type: End-of-Life
Implicit coercion of objects with an own toString property, passed as the second parameter to fs.write(), fs.writeFile(), fs.appendFile(), fs.writeFileSync(), and fs.appendFileSync(), is deprecated. Convert them to primitive strings.
DEP0163: channel.subscribe(onMessage), channel.unsubscribe(onMessage)#
History
| Version | Changes |
|---|---|
| v24.8.0, v22.20.0 | Deprecation revoked. |
| v18.7.0, v16.17.0 | Documentation-only deprecation. |
Type: Deprecation revoked
These methods were deprecated because their use could leave the channel objectvulnerable to being garbage-collected if not strongly referenced by the user.The deprecation was revoked because channel objects are now resistant togarbage collection when the channel has active subscribers.
DEP0164: process.exit(code), process.exitCode coercion to integer#
History
| Version | Changes |
|---|---|
| v20.0.0 | End-of-Life. |
| v19.0.0 | Runtime deprecation. |
| v18.10.0, v16.18.0 | Documentation-only deprecation of process.exitCode integer coercion. |
| v18.7.0, v16.17.0 | Documentation-only deprecation of process.exit(code) integer coercion. |
Type: End-of-Life
Values other than undefined, null, integer numbers, and integer strings (e.g., '1') are deprecated as values for the code parameter in process.exit() and as values to assign to process.exitCode.
DEP0165: --trace-atomics-wait#
History
| Version | Changes |
|---|---|
| v23.0.0 | End-of-Life. |
| v22.0.0 | Runtime deprecation. |
| v18.8.0, v16.18.0 | Documentation-only deprecation. |
Type: End-of-Life
The --trace-atomics-wait flag has been removed because it uses the V8 hook SetAtomicsWaitCallback, which will be removed in a future V8 release.
DEP0166: Double slashes in imports and exports targets#
History
| Version | Changes |
|---|---|
| v19.0.0 | Runtime deprecation. |
| v18.10.0 | Documentation-only deprecation with --pending-deprecation support. |
Type: Runtime
Package imports and exports targets mapping into paths including a double slash (of "/" or "\") are deprecated and will fail with a resolution validation error in a future release. This same deprecation also applies to pattern matches starting or ending in a slash.
DEP0167: Weak DiffieHellmanGroup instances (modp1, modp2, modp5)#
History
| Version | Changes |
|---|---|
| v18.10.0, v16.18.0 | Documentation-only deprecation. |
Type: Documentation-only
The well-known MODP groups modp1, modp2, and modp5 are deprecated because they are not secure against practical attacks. See RFC 8247 Section 2.4 for details.
These groups might be removed in future versions of Node.js. Applications thatrely on these groups should evaluate using stronger MODP groups instead.
DEP0168: Unhandled exception in Node-API callbacks#
History
| Version | Changes |
|---|---|
| v18.3.0, v16.17.0 | Runtime deprecation. |
Type: Runtime
The implicit suppression of uncaught exceptions in Node-API callbacks is nowdeprecated.
Set the flag --force-node-api-uncaught-exceptions-policy to force Node.js to emit an 'uncaughtException' event if the exception is not handled in Node-API callbacks.
DEP0169: Insecure url.parse()#
History
| Version | Changes |
|---|---|
| v24.0.0 | Application deprecation. |
| v19.9.0, v18.17.0 | Added support for --pending-deprecation. |
| v19.0.0, v18.13.0 | Documentation-only deprecation. |
Type: Application (non-node_modules code only)
url.parse() behavior is not standardized and prone to errors that have security implications. Use the WHATWG URL API instead. CVEs are not issued for url.parse() vulnerabilities.
Passing a string argument to url.format() invokes url.parse() internally, and is therefore also covered by this deprecation.
DEP0170: Invalid port when using url.parse()#
History
| Version | Changes |
|---|---|
| v25.0.0 | End-of-Life. |
| v20.0.0 | Runtime deprecation. |
| v19.2.0, v18.13.0 | Documentation-only deprecation. |
Type: End-of-Life
url.parse() used to accept URLs with ports that are not numbers. This behavior might result in host name spoofing with unexpected input. These URLs now throw an error (which the WHATWG URL API also does).
DEP0171: Setters for http.IncomingMessage headers and trailers#
History
| Version | Changes |
|---|---|
| v19.3.0, v18.13.0 | Documentation-only deprecation. |
Type: Documentation-only
In a future version of Node.js, message.headers, message.headersDistinct, message.trailers, and message.trailersDistinct will be read-only.
DEP0172: The asyncResource property of AsyncResource bound functions#
History
| Version | Changes |
|---|---|
| v25.0.0 | End-of-Life. |
| v20.0.0 | Runtime deprecation. |
Type: End-of-Life
Older versions of Node.js would add the asyncResource property when a function was bound to an AsyncResource. This is no longer the case.
DEP0173: the assert.CallTracker class#
History
| Version | Changes |
|---|---|
| v25.0.0 | End-of-Life. |
| v20.1.0 | Runtime deprecation. |
Type: End-of-Life
The assert.CallTracker API has been removed.
DEP0174: calling promisify on a function that returns a Promise#
History
| Version | Changes |
|---|---|
| v21.0.0 | Runtime deprecation. |
| v20.8.0 | Documentation-only deprecation. |
Type: Runtime
Calling util.promisify on a function that returns a Promise will ignore the result of said promise, which can lead to unhandled promise rejections.
DEP0175: util.toUSVString#
History
| Version | Changes |
|---|---|
| v20.8.0 | Documentation-only deprecation. |
Type: Documentation-only
The util.toUSVString() API is deprecated. Please use String.prototype.toWellFormed instead.
DEP0176: fs.F_OK, fs.R_OK, fs.W_OK, fs.X_OK#
History
| Version | Changes |
|---|---|
| v25.0.0 | End-of-Life. |
| v24.0.0 | Runtime deprecation. |
| v20.8.0 | Documentation-only deprecation. |
Type: End-of-Life
The F_OK, R_OK, W_OK, and X_OK getters exposed directly on node:fs were removed. Get them from fs.constants or fs.promises.constants instead.
An automated migration is available (source):
```bash
npx codemod@latest @nodejs/fs-access-mode-constants
```
DEP0177: util.types.isWebAssemblyCompiledModule#
History
| Version | Changes |
|---|---|
| v21.7.0, v20.12.0 | End-of-Life. |
| v21.3.0, v20.11.0 | A deprecation code has been assigned. |
| v14.0.0 | Documentation-only deprecation. |
Type: End-of-Life
The util.types.isWebAssemblyCompiledModule API has been removed. Please use value instanceof WebAssembly.Module instead.
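A minimal sketch of the instanceof check:

```js
// The 8-byte header '\0asm' followed by version 1 is a valid, empty WebAssembly module.
const bytes = new Uint8Array([0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00]);
const mod = new WebAssembly.Module(bytes);
console.log(mod instanceof WebAssembly.Module); // true
```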
DEP0178: dirent.path#
History
| Version | Changes |
|---|---|
| v24.0.0 | End-of-Life. |
| v23.0.0 | Runtime deprecation. |
| v21.5.0, v20.12.0, v18.20.0 | Documentation-only deprecation. |
Type: End-of-Life
The dirent.path property has been removed due to its lack of consistency across release lines. Please use dirent.parentPath instead.
An automated migration is available (source):
```bash
npx codemod@latest @nodejs/dirent-path-to-parent-path
```
DEP0179: Hash constructor#
History
| Version | Changes |
|---|---|
| v22.0.0 | Runtime deprecation. |
| v21.5.0, v20.12.0 | Documentation-only deprecation. |
Type: Runtime
Calling the Hash class directly with Hash() or new Hash() is deprecated because it is an internal API not intended for public use. Please use the crypto.createHash() method to create Hash instances.
DEP0180: fs.Stats constructor#
History
| Version | Changes |
|---|---|
| v22.0.0 | Runtime deprecation. |
| v20.13.0 | Documentation-only deprecation. |
Type: Runtime
Calling the fs.Stats class directly with Stats() or new Stats() is deprecated because it is an internal API not intended for public use.
DEP0181: Hmac constructor#
History
| Version | Changes |
|---|---|
| v22.0.0 | Runtime deprecation. |
| v20.13.0 | Documentation-only deprecation. |
Type: Runtime
Calling the Hmac class directly with Hmac() or new Hmac() is deprecated because it is an internal API not intended for public use. Please use the crypto.createHmac() method to create Hmac instances.
DEP0182: Short GCM authentication tags without explicit authTagLength#
History
| Version | Changes |
|---|---|
| v23.0.0 | Runtime deprecation. |
| v20.13.0 | Documentation-only deprecation. |
Type: Runtime
Applications that intend to use authentication tags that are shorter than the default authentication tag length must set the authTagLength option of the crypto.createDecipheriv() function to the appropriate length.
For ciphers in GCM mode, the decipher.setAuthTag() function accepts authentication tags of any valid length (see DEP0090). This behavior is deprecated to better align with recommendations per NIST SP 800-38D.
DEP0183: OpenSSL engine-based APIs#
History
| Version | Changes |
|---|---|
| v22.4.0, v20.16.0 | Documentation-only deprecation. |
Type: Documentation-only
OpenSSL 3 has deprecated support for custom engines with a recommendation to switch to its new provider model. The clientCertEngine option for https.request(), tls.createSecureContext(), and tls.createServer(); the privateKeyEngine and privateKeyIdentifier for tls.createSecureContext(); and crypto.setEngine() all depend on this functionality from OpenSSL.
DEP0184: Instantiating node:zlib classes without new#
History
| Version | Changes |
|---|---|
| v24.0.0 | Runtime deprecation. |
| v22.9.0, v20.18.0 | Documentation-only deprecation. |
Type: Runtime
Instantiating classes exported by the node:zlib module without the new qualifier is deprecated. It is recommended to use the new qualifier instead. This applies to all Zlib classes, such as Deflate, DeflateRaw, Gunzip, Inflate, InflateRaw, Unzip, and Zlib.
DEP0185: Instantiating node:repl classes without new#
History
| Version | Changes |
|---|---|
| v25.0.0 | End-of-Life. |
| v24.0.0 | Runtime deprecation. |
| v22.9.0, v20.18.0 | Documentation-only deprecation. |
Type: End-of-Life
Instantiating classes exported by the `node:repl` module without the `new` qualifier is no longer supported. The `new` qualifier must be used instead. This applies to all REPL classes, including `REPLServer` and `Recoverable`.
An automated migration is available (source):
```bash
npx codemod@latest @nodejs/repl-classes-with-new
```

DEP0187: Passing invalid argument types to `fs.existsSync`#
History
| Version | Changes |
|---|---|
| v24.0.0 | Runtime deprecation. |
| v23.4.0, v22.13.0, v20.19.3 | Documentation-only. |
Type: Runtime
Passing unsupported argument types is deprecated and, instead of returning `false`, will throw an error in a future version.
DEP0188: `process.features.ipv6` and `process.features.uv`#
History
| Version | Changes |
|---|---|
| v23.4.0, v22.13.0 | Documentation-only deprecation. |
Type: Documentation-only
These properties are unconditionally `true`. Any checks based on these properties are redundant.
DEP0189: `process.features.tls_*`#
History
| Version | Changes |
|---|---|
| v23.4.0, v22.13.0 | Documentation-only deprecation. |
Type: Documentation-only
`process.features.tls_alpn`, `process.features.tls_ocsp`, and `process.features.tls_sni` are deprecated, as their values are guaranteed to be identical to that of `process.features.tls`.
DEP0190: Passing `args` to `node:child_process` `execFile`/`spawn` with `shell` option `true`#
History
| Version | Changes |
|---|---|
| v24.0.0 | Runtime deprecation. |
| v23.11.0, v22.15.0 | Documentation-only deprecation. |
Type: Runtime
When an `args` array is passed to `child_process.execFile` or `child_process.spawn` with the option `{ shell: true }`, the values are not escaped, only space-separated, which can lead to shell injection.
DEP0191: `repl.builtinModules`#
History
| Version | Changes |
|---|---|
| v24.0.0, v22.16.0 | Documentation-only deprecation with support for `--pending-deprecation`. |
Type: Documentation-only (supports--pending-deprecation)
The `node:repl` module exports a `builtinModules` property that contains an array of built-in modules. This was incomplete and matched the already deprecated `repl._builtinLibs` (DEP0142); it is better to rely upon `require('node:module').builtinModules` instead.
An automated migration is available (source):
```bash
npx codemod@latest @nodejs/repl-builtin-modules
```

DEP0192: `require('node:_tls_common')` and `require('node:_tls_wrap')`#
History
| Version | Changes |
|---|---|
| v24.2.0, v22.17.0 | Runtime deprecation. |
Type: Runtime
The `node:_tls_common` and `node:_tls_wrap` modules are deprecated as they should be considered an internal Node.js implementation rather than a public-facing API; use `node:tls` instead.
DEP0193: `require('node:_stream_*')`#
History
| Version | Changes |
|---|---|
| v24.2.0, v22.17.0 | Runtime deprecation. |
Type: Runtime
The `node:_stream_duplex`, `node:_stream_passthrough`, `node:_stream_readable`, `node:_stream_transform`, `node:_stream_wrap`, and `node:_stream_writable` modules are deprecated as they should be considered an internal Node.js implementation rather than a public-facing API; use `node:stream` instead.
DEP0194: HTTP/2 priority signaling#
History
| Version | Changes |
|---|---|
| v24.2.0 | End-of-Life. |
| v24.2.0, v22.17.0 | Documentation-only deprecation. |
Type: End-of-Life
Support for priority signaling has been removed following its deprecation in RFC 9113.
DEP0195: Instantiating `node:http` classes without `new`#
History
| Version | Changes |
|---|---|
| v24.2.0, v22.17.0 | Documentation-only deprecation. |
Type: Documentation-only
Instantiating classes exported by the `node:http` module without the `new` qualifier is deprecated. It is recommended to use the `new` qualifier instead. This applies to all HTTP classes, such as `OutgoingMessage`, `IncomingMessage`, `ServerResponse`, and `ClientRequest`.
An automated migration is available (source):
```bash
npx codemod@latest @nodejs/http-classes-with-new
```

DEP0196: Calling `node:child_process` functions with `options.shell` as an empty string#
History
| Version | Changes |
|---|---|
| v24.2.0, v22.17.0 | Documentation-only deprecation. |
Type: Documentation-only
Calling the process-spawning functions with `{ shell: '' }` is almost certainly unintentional, and can cause aberrant behavior.

To make `child_process.execFile` or `child_process.spawn` invoke the default shell, use `{ shell: true }`. If the intention is not to invoke a shell (the default behavior), either omit the `shell` option, or set it to `false` or a nullish value.

To make `child_process.exec` invoke the default shell, either omit the `shell` option, or set it to a nullish value. If the intention is not to invoke a shell, use `child_process.execFile` instead.
DEP0197: `util.types.isNativeError()`#
History
| Version | Changes |
|---|---|
| v24.2.0 | Documentation-only deprecation. |
Type: Documentation-only
The `util.types.isNativeError` API is deprecated. Please use `Error.isError` instead.
An automated migration is available (source):
```bash
npx codemod@latest @nodejs/types-is-native-error
```

DEP0198: Creating SHAKE-128 and SHAKE-256 digests without an explicit `options.outputLength`#
History
| Version | Changes |
|---|---|
| v25.0.0 | Runtime deprecation. |
| v24.4.0, v22.18.0, v20.19.5 | Documentation-only deprecation with support for `--pending-deprecation`. |
Type: Runtime
Creating SHAKE-128 and SHAKE-256 digests without an explicit `options.outputLength` is deprecated.
DEP0199: `require('node:_http_*')`#
History
| Version | Changes |
|---|---|
| v24.6.0, v22.19.0 | Documentation-only deprecation. |
Type: Documentation-only
The `node:_http_agent`, `node:_http_client`, `node:_http_common`, `node:_http_incoming`, `node:_http_outgoing`, and `node:_http_server` modules are deprecated as they should be considered an internal Node.js implementation rather than a public-facing API; use `node:http` instead.
DEP0200: Closing fs.Dir on garbage collection#
History
| Version | Changes |
|---|---|
| v24.9.0 | Documentation-only deprecation. |
Type: Documentation-only
Allowing a `fs.Dir` object to be closed on garbage collection is deprecated. In the future, doing so might result in a thrown error that will terminate the process.

Please ensure that all `fs.Dir` objects are explicitly closed using `Dir.prototype.close()` or the `using` keyword:

```mjs
import { opendir } from 'node:fs/promises';

{
  await using dir = await opendir('/async/disposable/directory');
} // Closed by dir[Symbol.asyncDispose]()

{
  using dir = await opendir('/sync/disposable/directory');
} // Closed by dir[Symbol.dispose]()

{
  const dir = await opendir('/unconditionally/iterated/directory');
  for await (const entry of dir) {
    // process an entry
  }
} // Closed by iterator

{
  let dir;
  try {
    dir = await opendir('/legacy/closeable/directory');
  } finally {
    await dir?.close();
  }
}
```

Diagnostics Channel#
History
| Version | Changes |
|---|---|
| v19.2.0, v18.13.0 | diagnostics_channel is now Stable. |
| v15.1.0, v14.17.0 | Added in: v15.1.0, v14.17.0 |
Source Code: lib/diagnostics_channel.js
The `node:diagnostics_channel` module provides an API to create named channels to report arbitrary message data for diagnostics purposes.
It can be accessed using:
```mjs
import diagnostics_channel from 'node:diagnostics_channel';
```

```cjs
const diagnostics_channel = require('node:diagnostics_channel');
```
It is intended that a module writer wanting to report diagnostics messages will create one or many top-level channels to report messages through. Channels may also be acquired at runtime, but it is not encouraged due to the additional overhead of doing so. Channels may be exported for convenience, but as long as the name is known it can be acquired anywhere.

If you intend for your module to produce diagnostics data for others to consume, it is recommended that you include documentation of which named channels are used along with the shape of the message data. Channel names should generally include the module name to avoid collisions with data from other modules.
Public API#
Overview#
Following is a simple overview of the public API.
```mjs
import diagnostics_channel from 'node:diagnostics_channel';

// Get a reusable channel object
const channel = diagnostics_channel.channel('my-channel');

function onMessage(message, name) {
  // Received data
}

// Subscribe to the channel
diagnostics_channel.subscribe('my-channel', onMessage);

// Check if the channel has an active subscriber
if (channel.hasSubscribers) {
  // Publish data to the channel
  channel.publish({
    some: 'data',
  });
}

// Unsubscribe from the channel
diagnostics_channel.unsubscribe('my-channel', onMessage);
```

```cjs
const diagnostics_channel = require('node:diagnostics_channel');

// Get a reusable channel object
const channel = diagnostics_channel.channel('my-channel');

function onMessage(message, name) {
  // Received data
}

// Subscribe to the channel
diagnostics_channel.subscribe('my-channel', onMessage);

// Check if the channel has an active subscriber
if (channel.hasSubscribers) {
  // Publish data to the channel
  channel.publish({
    some: 'data',
  });
}

// Unsubscribe from the channel
diagnostics_channel.unsubscribe('my-channel', onMessage);
```
diagnostics_channel.hasSubscribers(name)#
Check if there are active subscribers to the named channel. This is helpful if the message you want to send might be expensive to prepare.

This API is optional but helpful when trying to publish messages from very performance-sensitive code.
```mjs
import diagnostics_channel from 'node:diagnostics_channel';

if (diagnostics_channel.hasSubscribers('my-channel')) {
  // There are subscribers, prepare and publish message
}
```

```cjs
const diagnostics_channel = require('node:diagnostics_channel');

if (diagnostics_channel.hasSubscribers('my-channel')) {
  // There are subscribers, prepare and publish message
}
```
diagnostics_channel.channel(name)#
This is the primary entry-point for anyone wanting to publish to a named channel. It produces a channel object which is optimized to reduce overhead at publish time as much as possible.
```mjs
import diagnostics_channel from 'node:diagnostics_channel';

const channel = diagnostics_channel.channel('my-channel');
```

```cjs
const diagnostics_channel = require('node:diagnostics_channel');

const channel = diagnostics_channel.channel('my-channel');
```
diagnostics_channel.subscribe(name, onMessage)#
- `name` <string> | <symbol> The channel name
- `onMessage` <Function> The handler to receive channel messages
Register a message handler to subscribe to this channel. This message handler will be run synchronously whenever a message is published to the channel. Any errors thrown in the message handler will trigger an `'uncaughtException'`.
```mjs
import diagnostics_channel from 'node:diagnostics_channel';

diagnostics_channel.subscribe('my-channel', (message, name) => {
  // Received data
});
```

```cjs
const diagnostics_channel = require('node:diagnostics_channel');

diagnostics_channel.subscribe('my-channel', (message, name) => {
  // Received data
});
```
diagnostics_channel.unsubscribe(name, onMessage)#
- `name` <string> | <symbol> The channel name
- `onMessage` <Function> The previous subscribed handler to remove
- Returns: <boolean> `true` if the handler was found, `false` otherwise.
Remove a message handler previously registered to this channel with `diagnostics_channel.subscribe(name, onMessage)`.
```mjs
import diagnostics_channel from 'node:diagnostics_channel';

function onMessage(message, name) {
  // Received data
}

diagnostics_channel.subscribe('my-channel', onMessage);

diagnostics_channel.unsubscribe('my-channel', onMessage);
```

```cjs
const diagnostics_channel = require('node:diagnostics_channel');

function onMessage(message, name) {
  // Received data
}

diagnostics_channel.subscribe('my-channel', onMessage);

diagnostics_channel.unsubscribe('my-channel', onMessage);
```
diagnostics_channel.tracingChannel(nameOrChannels)#
- `nameOrChannels` <string> | <TracingChannel> Channel name or object containing all the TracingChannel Channels
- Returns: <TracingChannel> Collection of channels to trace with

Creates a `TracingChannel` wrapper for the given TracingChannel Channels. If a name is given, the corresponding tracing channels will be created in the form of `tracing:${name}:${eventType}` where `eventType` corresponds to the types of TracingChannel Channels.
```mjs
import diagnostics_channel from 'node:diagnostics_channel';

const channelsByName = diagnostics_channel.tracingChannel('my-channel');

// or...

const channelsByCollection = diagnostics_channel.tracingChannel({
  start: diagnostics_channel.channel('tracing:my-channel:start'),
  end: diagnostics_channel.channel('tracing:my-channel:end'),
  asyncStart: diagnostics_channel.channel('tracing:my-channel:asyncStart'),
  asyncEnd: diagnostics_channel.channel('tracing:my-channel:asyncEnd'),
  error: diagnostics_channel.channel('tracing:my-channel:error'),
});
```

```cjs
const diagnostics_channel = require('node:diagnostics_channel');

const channelsByName = diagnostics_channel.tracingChannel('my-channel');

// or...

const channelsByCollection = diagnostics_channel.tracingChannel({
  start: diagnostics_channel.channel('tracing:my-channel:start'),
  end: diagnostics_channel.channel('tracing:my-channel:end'),
  asyncStart: diagnostics_channel.channel('tracing:my-channel:asyncStart'),
  asyncEnd: diagnostics_channel.channel('tracing:my-channel:asyncEnd'),
  error: diagnostics_channel.channel('tracing:my-channel:error'),
});
```
Class: `Channel`#
The class `Channel` represents an individual named channel within the data pipeline. It is used to track subscribers and to publish messages when there are subscribers present. It exists as a separate object to avoid channel lookups at publish time, enabling very fast publish speeds and allowing for heavy use while incurring very minimal cost. Channels are created with `diagnostics_channel.channel(name)`; constructing a channel directly with `new Channel(name)` is not supported.
channel.hasSubscribers#
- Returns: <boolean> If there are active subscribers

Check if there are active subscribers to this channel. This is helpful if the message you want to send might be expensive to prepare.

This API is optional but helpful when trying to publish messages from very performance-sensitive code.
```mjs
import diagnostics_channel from 'node:diagnostics_channel';

const channel = diagnostics_channel.channel('my-channel');

if (channel.hasSubscribers) {
  // There are subscribers, prepare and publish message
}
```

```cjs
const diagnostics_channel = require('node:diagnostics_channel');

const channel = diagnostics_channel.channel('my-channel');

if (channel.hasSubscribers) {
  // There are subscribers, prepare and publish message
}
```
channel.publish(message)#
- `message` <any> The message to send to the channel subscribers

Publish a message to any subscribers to the channel. This will trigger message handlers synchronously so they will execute within the same context.
```mjs
import diagnostics_channel from 'node:diagnostics_channel';

const channel = diagnostics_channel.channel('my-channel');

channel.publish({
  some: 'message',
});
```

```cjs
const diagnostics_channel = require('node:diagnostics_channel');

const channel = diagnostics_channel.channel('my-channel');

channel.publish({
  some: 'message',
});
```
channel.subscribe(onMessage)#
History
| Version | Changes |
|---|---|
| v24.8.0, v22.20.0 | Deprecation revoked. |
| v18.7.0, v16.17.0 | Documentation-only deprecation. |
| v15.1.0, v14.17.0 | Added in: v15.1.0, v14.17.0 |
- `onMessage` <Function> The handler to receive channel messages

Register a message handler to subscribe to this channel. This message handler will be run synchronously whenever a message is published to the channel. Any errors thrown in the message handler will trigger an `'uncaughtException'`.
```mjs
import diagnostics_channel from 'node:diagnostics_channel';

const channel = diagnostics_channel.channel('my-channel');

channel.subscribe((message, name) => {
  // Received data
});
```

```cjs
const diagnostics_channel = require('node:diagnostics_channel');

const channel = diagnostics_channel.channel('my-channel');

channel.subscribe((message, name) => {
  // Received data
});
```
channel.unsubscribe(onMessage)#
History
| Version | Changes |
|---|---|
| v24.8.0, v22.20.0 | Deprecation revoked. |
| v18.7.0, v16.17.0 | Documentation-only deprecation. |
| v17.1.0, v16.14.0, v14.19.0 | Added return value. Added to channels without subscribers. |
| v15.1.0, v14.17.0 | Added in: v15.1.0, v14.17.0 |
- `onMessage` <Function> The previous subscribed handler to remove
- Returns: <boolean> `true` if the handler was found, `false` otherwise.

Remove a message handler previously registered to this channel with `channel.subscribe(onMessage)`.
```mjs
import diagnostics_channel from 'node:diagnostics_channel';

const channel = diagnostics_channel.channel('my-channel');

function onMessage(message, name) {
  // Received data
}

channel.subscribe(onMessage);

channel.unsubscribe(onMessage);
```

```cjs
const diagnostics_channel = require('node:diagnostics_channel');

const channel = diagnostics_channel.channel('my-channel');

function onMessage(message, name) {
  // Received data
}

channel.subscribe(onMessage);

channel.unsubscribe(onMessage);
```
channel.bindStore(store[, transform])#
- `store` <AsyncLocalStorage> The store to which to bind the context data
- `transform` <Function> Transform context data before setting the store context

When `channel.runStores(context, ...)` is called, the given context data will be applied to any store bound to the channel. If the store has already been bound, the previous `transform` function will be replaced with the new one. The `transform` function may be omitted to set the given context data as the context directly.
```mjs
import diagnostics_channel from 'node:diagnostics_channel';
import { AsyncLocalStorage } from 'node:async_hooks';

const store = new AsyncLocalStorage();

const channel = diagnostics_channel.channel('my-channel');

channel.bindStore(store, (data) => {
  return { data };
});
```

```cjs
const diagnostics_channel = require('node:diagnostics_channel');
const { AsyncLocalStorage } = require('node:async_hooks');

const store = new AsyncLocalStorage();

const channel = diagnostics_channel.channel('my-channel');

channel.bindStore(store, (data) => {
  return { data };
});
```
channel.unbindStore(store)#
- `store` <AsyncLocalStorage> The store to unbind from the channel.
- Returns: <boolean> `true` if the store was found, `false` otherwise.

Remove a store previously bound to this channel with `channel.bindStore(store)`.
```mjs
import diagnostics_channel from 'node:diagnostics_channel';
import { AsyncLocalStorage } from 'node:async_hooks';

const store = new AsyncLocalStorage();

const channel = diagnostics_channel.channel('my-channel');

channel.bindStore(store);
channel.unbindStore(store);
```

```cjs
const diagnostics_channel = require('node:diagnostics_channel');
const { AsyncLocalStorage } = require('node:async_hooks');

const store = new AsyncLocalStorage();

const channel = diagnostics_channel.channel('my-channel');

channel.bindStore(store);
channel.unbindStore(store);
```
channel.runStores(context, fn[, thisArg[, ...args]])#
- `context` <any> Message to send to subscribers and bind to stores
- `fn` <Function> Handler to run within the entered storage context
- `thisArg` <any> The receiver to be used for the function call.
- `...args` <any> Optional arguments to pass to the function.

Applies the given data to any AsyncLocalStorage instances bound to the channel for the duration of the given function, then publishes to the channel within the scope of that data being applied to the stores.

If a transform function was given to `channel.bindStore(store)`, it will be applied to transform the message data before it becomes the context value for the store. The prior storage context is accessible from within the transform function in cases where context linking is required.

The context applied to the store should be accessible in any async code which continues from execution which began during the given function; however, there are some situations in which context loss may occur.
```mjs
import diagnostics_channel from 'node:diagnostics_channel';
import { AsyncLocalStorage } from 'node:async_hooks';

const store = new AsyncLocalStorage();

const channel = diagnostics_channel.channel('my-channel');

channel.bindStore(store, (message) => {
  const parent = store.getStore();
  return new Span(message, parent);
});

channel.runStores({ some: 'message' }, () => {
  store.getStore(); // Span({ some: 'message' })
});
```

```cjs
const diagnostics_channel = require('node:diagnostics_channel');
const { AsyncLocalStorage } = require('node:async_hooks');

const store = new AsyncLocalStorage();

const channel = diagnostics_channel.channel('my-channel');

channel.bindStore(store, (message) => {
  const parent = store.getStore();
  return new Span(message, parent);
});

channel.runStores({ some: 'message' }, () => {
  store.getStore(); // Span({ some: 'message' })
});
```
Class: `TracingChannel`#
The class `TracingChannel` is a collection of TracingChannel Channels which together express a single traceable action. It is used to formalize and simplify the process of producing events for tracing application flow. `diagnostics_channel.tracingChannel()` is used to construct a `TracingChannel`. As with `Channel`, it is recommended to create and reuse a single `TracingChannel` at the top-level of the file rather than creating them dynamically.
tracingChannel.subscribe(subscribers)#
- `subscribers` <Object> Set of TracingChannel Channels subscribers
  - `start` <Function> The `start` event subscriber
  - `end` <Function> The `end` event subscriber
  - `asyncStart` <Function> The `asyncStart` event subscriber
  - `asyncEnd` <Function> The `asyncEnd` event subscriber
  - `error` <Function> The `error` event subscriber

Helper to subscribe a collection of functions to the corresponding channels. This is the same as calling `channel.subscribe(onMessage)` on each channel individually.
```mjs
import diagnostics_channel from 'node:diagnostics_channel';

const channels = diagnostics_channel.tracingChannel('my-channel');

channels.subscribe({
  start(message) {
    // Handle start message
  },
  end(message) {
    // Handle end message
  },
  asyncStart(message) {
    // Handle asyncStart message
  },
  asyncEnd(message) {
    // Handle asyncEnd message
  },
  error(message) {
    // Handle error message
  },
});
```

```cjs
const diagnostics_channel = require('node:diagnostics_channel');

const channels = diagnostics_channel.tracingChannel('my-channel');

channels.subscribe({
  start(message) {
    // Handle start message
  },
  end(message) {
    // Handle end message
  },
  asyncStart(message) {
    // Handle asyncStart message
  },
  asyncEnd(message) {
    // Handle asyncEnd message
  },
  error(message) {
    // Handle error message
  },
});
```
tracingChannel.unsubscribe(subscribers)#
- `subscribers` <Object> Set of TracingChannel Channels subscribers
  - `start` <Function> The `start` event subscriber
  - `end` <Function> The `end` event subscriber
  - `asyncStart` <Function> The `asyncStart` event subscriber
  - `asyncEnd` <Function> The `asyncEnd` event subscriber
  - `error` <Function> The `error` event subscriber
- Returns: <boolean> `true` if all handlers were successfully unsubscribed, and `false` otherwise.

Helper to unsubscribe a collection of functions from the corresponding channels. This is the same as calling `channel.unsubscribe(onMessage)` on each channel individually.
```mjs
import diagnostics_channel from 'node:diagnostics_channel';

const channels = diagnostics_channel.tracingChannel('my-channel');

channels.unsubscribe({
  start(message) {
    // Handle start message
  },
  end(message) {
    // Handle end message
  },
  asyncStart(message) {
    // Handle asyncStart message
  },
  asyncEnd(message) {
    // Handle asyncEnd message
  },
  error(message) {
    // Handle error message
  },
});
```

```cjs
const diagnostics_channel = require('node:diagnostics_channel');

const channels = diagnostics_channel.tracingChannel('my-channel');

channels.unsubscribe({
  start(message) {
    // Handle start message
  },
  end(message) {
    // Handle end message
  },
  asyncStart(message) {
    // Handle asyncStart message
  },
  asyncEnd(message) {
    // Handle asyncEnd message
  },
  error(message) {
    // Handle error message
  },
});
```
tracingChannel.traceSync(fn[, context[, thisArg[, ...args]]])#
- `fn` <Function> Function to wrap a trace around
- `context` <Object> Shared object to correlate events through
- `thisArg` <any> The receiver to be used for the function call
- `...args` <any> Optional arguments to pass to the function
- Returns: <any> The return value of the given function

Trace a synchronous function call. This will always produce a `start` event and `end` event around the execution and may produce an `error` event if the given function throws an error. This will run the given function using `channel.runStores(context, ...)` on the `start` channel, which ensures all events should have any bound stores set to match this trace context.

To ensure only correct trace graphs are formed, events will only be published if subscribers are present prior to starting the trace. Subscriptions which are added after the trace begins will not receive future events from that trace; only future traces will be seen.
```mjs
import diagnostics_channel from 'node:diagnostics_channel';

const channels = diagnostics_channel.tracingChannel('my-channel');

channels.traceSync(() => {
  // Do something
}, {
  some: 'thing',
});
```

```cjs
const diagnostics_channel = require('node:diagnostics_channel');

const channels = diagnostics_channel.tracingChannel('my-channel');

channels.traceSync(() => {
  // Do something
}, {
  some: 'thing',
});
```
tracingChannel.tracePromise(fn[, context[, thisArg[, ...args]]])#
- `fn` <Function> Promise-returning function to wrap a trace around
- `context` <Object> Shared object to correlate trace events through
- `thisArg` <any> The receiver to be used for the function call
- `...args` <any> Optional arguments to pass to the function
- Returns: <Promise> Chained from promise returned by the given function

Trace a promise-returning function call. This will always produce a `start` event and `end` event around the synchronous portion of the function execution, and will produce an `asyncStart` event and `asyncEnd` event when a promise continuation is reached. It may also produce an `error` event if the given function throws an error or the returned promise rejects. This will run the given function using `channel.runStores(context, ...)` on the `start` channel, which ensures all events should have any bound stores set to match this trace context.

To ensure only correct trace graphs are formed, events will only be published if subscribers are present prior to starting the trace. Subscriptions which are added after the trace begins will not receive future events from that trace; only future traces will be seen.
```mjs
import diagnostics_channel from 'node:diagnostics_channel';

const channels = diagnostics_channel.tracingChannel('my-channel');

channels.tracePromise(async () => {
  // Do something
}, {
  some: 'thing',
});
```

```cjs
const diagnostics_channel = require('node:diagnostics_channel');

const channels = diagnostics_channel.tracingChannel('my-channel');

channels.tracePromise(async () => {
  // Do something
}, {
  some: 'thing',
});
```
tracingChannel.traceCallback(fn[, position[, context[, thisArg[, ...args]]]])#
- `fn` <Function> Callback-using function to wrap a trace around
- `position` <number> Zero-indexed argument position of expected callback (defaults to last argument if `undefined` is passed)
- `context` <Object> Shared object to correlate trace events through (defaults to `{}` if `undefined` is passed)
- `thisArg` <any> The receiver to be used for the function call
- `...args` <any> Arguments to pass to the function (must include the callback)
- Returns: <any> The return value of the given function

Trace a callback-receiving function call. The callback is expected to follow the error-first convention typically used. This will always produce a `start` event and `end` event around the synchronous portion of the function execution, and will produce an `asyncStart` event and `asyncEnd` event around the callback execution. It may also produce an `error` event if the given function throws or the first argument passed to the callback is set. This will run the given function using `channel.runStores(context, ...)` on the `start` channel, which ensures all events should have any bound stores set to match this trace context.

To ensure only correct trace graphs are formed, events will only be published if subscribers are present prior to starting the trace. Subscriptions which are added after the trace begins will not receive future events from that trace; only future traces will be seen.
```mjs
import diagnostics_channel from 'node:diagnostics_channel';

const channels = diagnostics_channel.tracingChannel('my-channel');

channels.traceCallback((arg1, callback) => {
  // Do something
  callback(null, 'result');
}, 1, {
  some: 'thing',
}, thisArg, arg1, callback);
```

```cjs
const diagnostics_channel = require('node:diagnostics_channel');

const channels = diagnostics_channel.tracingChannel('my-channel');

channels.traceCallback((arg1, callback) => {
  // Do something
  callback(null, 'result');
}, 1, {
  some: 'thing',
}, thisArg, arg1, callback);
```
The callback will also be run with `channel.runStores(context, ...)`, which enables context loss recovery in some cases.
```mjs
import diagnostics_channel from 'node:diagnostics_channel';
import { AsyncLocalStorage } from 'node:async_hooks';

const channels = diagnostics_channel.tracingChannel('my-channel');

const myStore = new AsyncLocalStorage();

// The start channel sets the initial store data to something
// and stores that store data value on the trace context object
channels.start.bindStore(myStore, (data) => {
  const span = new Span(data);
  data.span = span;
  return span;
});

// Then asyncStart can restore from that data it stored previously
channels.asyncStart.bindStore(myStore, (data) => {
  return data.span;
});
```

```cjs
const diagnostics_channel = require('node:diagnostics_channel');
const { AsyncLocalStorage } = require('node:async_hooks');

const channels = diagnostics_channel.tracingChannel('my-channel');

const myStore = new AsyncLocalStorage();

// The start channel sets the initial store data to something
// and stores that store data value on the trace context object
channels.start.bindStore(myStore, (data) => {
  const span = new Span(data);
  data.span = span;
  return span;
});

// Then asyncStart can restore from that data it stored previously
channels.asyncStart.bindStore(myStore, (data) => {
  return data.span;
});
```
tracingChannel.hasSubscribers#
- Returns: <boolean> `true` if any of the individual channels has a subscriber, `false` if not.

This is a helper method available on a `TracingChannel` instance to check if any of the TracingChannel Channels have subscribers. `true` is returned if any of them have at least one subscriber; `false` is returned otherwise.
```mjs
import diagnostics_channel from 'node:diagnostics_channel';

const channels = diagnostics_channel.tracingChannel('my-channel');

if (channels.hasSubscribers) {
  // Do something
}
```

```cjs
const diagnostics_channel = require('node:diagnostics_channel');

const channels = diagnostics_channel.tracingChannel('my-channel');

if (channels.hasSubscribers) {
  // Do something
}
```
TracingChannel Channels#
A TracingChannel is a collection of several diagnostics_channels representing specific points in the execution lifecycle of a single traceable action. The behavior is split into five diagnostics_channels consisting of `start`, `end`, `asyncStart`, `asyncEnd`, and `error`. A single traceable action will share the same event object between all events; this can be helpful for managing correlation through a WeakMap.

These event objects will be extended with `result` or `error` values when the task "completes". In the case of a synchronous task, the `result` will be the return value and the `error` will be anything thrown from the function. With callback-based async functions, the `result` will be the second argument of the callback, while the `error` will either be a thrown error visible in the `end` event or the first callback argument in either of the `asyncStart` or `asyncEnd` events.

To ensure only correct trace graphs are formed, events should only be published if subscribers are present prior to starting the trace. Subscriptions which are added after the trace begins should not receive future events from that trace; only future traces will be seen.
Tracing channels should follow a naming pattern of:

- `tracing:module.class.method:start` or `tracing:module.function:start`
- `tracing:module.class.method:end` or `tracing:module.function:end`
- `tracing:module.class.method:asyncStart` or `tracing:module.function:asyncStart`
- `tracing:module.class.method:asyncEnd` or `tracing:module.function:asyncEnd`
- `tracing:module.class.method:error` or `tracing:module.function:error`
start(event)#
- Name: `tracing:${name}:start`
The `start` event represents the point at which a function is called. At this point the event data may contain function arguments or anything else available at the very start of the execution of the function.
end(event)#
- Name: `tracing:${name}:end`
The `end` event represents the point at which a function call returns a value. In the case of an async function, this is when the promise is returned, not when the function's internal work completes. At this point, if the traced function was synchronous, the `result` field will be set to the return value of the function. Alternatively, the `error` field may be present to represent any thrown errors.

It is recommended to listen specifically to the `error` event to track errors, as it may be possible for a traceable action to produce multiple errors. For example, an async task which fails may be started internally before the sync part of the task then throws an error.
asyncStart(event)#
- Name:
tracing:${name}:asyncStart
The `asyncStart` event represents the callback or continuation of a traceable function being reached. At this point things like callback arguments may be available, or anything else expressing the "result" of the action.

For callback-based functions, the first argument of the callback will be assigned to the `error` field, if not `undefined` or `null`, and the second argument will be assigned to the `result` field.

For promises, the argument to the `resolve` path will be assigned to `result` or the argument to the `reject` path will be assigned to `error`.

It is recommended to listen specifically to the `error` event to track errors as it may be possible for a traceable action to produce multiple errors. For example, an async task which fails may be started internally before the sync part of the task then throws an error.
asyncEnd(event)#
- Name:
tracing:${name}:asyncEnd
The `asyncEnd` event represents the callback of an asynchronous function returning. It's not likely event data will change after the `asyncStart` event, however it may be useful to see the point where the callback completes.
error(event)#
- Name:
tracing:${name}:error
The `error` event represents any error produced by the traceable function either synchronously or asynchronously. If an error is thrown in the synchronous portion of the traced function the error will be assigned to the `error` field of the event and the `error` event will be triggered. If an error is received asynchronously through a callback or promise rejection it will also be assigned to the `error` field of the event and trigger the `error` event.

It is possible for a single traceable function call to produce errors multiple times, so this should be considered when consuming this event. For example, if another async task is triggered internally which fails and then the sync part of the function then throws an error, two `error` events will be emitted: one for the sync error and one for the async error.
Built-in Channels#
Console#
Event: 'console.log'#
- `args` <any[]>

Emitted when `console.log()` is called. Receives an array of the arguments passed to `console.log()`.
Event: 'console.info'#
- `args` <any[]>

Emitted when `console.info()` is called. Receives an array of the arguments passed to `console.info()`.
Event: 'console.debug'#
- `args` <any[]>

Emitted when `console.debug()` is called. Receives an array of the arguments passed to `console.debug()`.
HTTP#
Event: 'http.client.request.created'#
- `request` <http.ClientRequest>

Emitted when client creates a request object. Unlike `http.client.request.start`, this event is emitted before the request has been sent.
Event: 'http.client.request.error'#
- `request` <http.ClientRequest>
- `error` <Error>

Emitted when an error occurs during a client request.
Event: 'http.client.response.finish'#
- `request` <http.ClientRequest>
- `response` <http.IncomingMessage>

Emitted when client receives a response.
Event: 'http.server.request.start'#
- `request` <http.IncomingMessage>
- `response` <http.ServerResponse>
- `socket` <net.Socket>
- `server` <http.Server>

Emitted when server receives a request.
Event: 'http.server.response.created'#
- `request` <http.IncomingMessage>
- `response` <http.ServerResponse>

Emitted when server creates a response. The event is emitted before the response is sent.
Event: 'http.server.response.finish'#
- `request` <http.IncomingMessage>
- `response` <http.ServerResponse>
- `socket` <net.Socket>
- `server` <http.Server>

Emitted when server sends a response.
HTTP/2#
Event: 'http2.client.stream.created'#
- `stream` <ClientHttp2Stream>
- `headers` <HTTP/2 Headers Object>

Emitted when a stream is created on the client.
Event: 'http2.client.stream.start'#
- `stream` <ClientHttp2Stream>
- `headers` <HTTP/2 Headers Object>

Emitted when a stream is started on the client.
Event: 'http2.client.stream.error'#
- `stream` <ClientHttp2Stream>
- `error` <Error>

Emitted when an error occurs during the processing of a stream on the client.
Event: 'http2.client.stream.finish'#
- `stream` <ClientHttp2Stream>
- `headers` <HTTP/2 Headers Object>
- `flags` <number>

Emitted when a stream is received on the client.
Event: 'http2.client.stream.bodyChunkSent'#
- `stream` <ClientHttp2Stream>
- `writev` <boolean>
- `data` <Buffer> | <string> | <Buffer[]> | <Object[]>
- `encoding` <string>

Emitted when a chunk of the client stream body is being sent.
Event: 'http2.client.stream.bodySent'#
- `stream` <ClientHttp2Stream>

Emitted after the client stream body has been fully sent.
Event: 'http2.client.stream.close'#
- `stream` <ClientHttp2Stream>

Emitted when a stream is closed on the client. The HTTP/2 error code used when closing the stream can be retrieved using the `stream.rstCode` property.
Event: 'http2.server.stream.created'#
- `stream` <ServerHttp2Stream>
- `headers` <HTTP/2 Headers Object>

Emitted when a stream is created on the server.
Event: 'http2.server.stream.start'#
- `stream` <ServerHttp2Stream>
- `headers` <HTTP/2 Headers Object>

Emitted when a stream is started on the server.
Event: 'http2.server.stream.error'#
- `stream` <ServerHttp2Stream>
- `error` <Error>

Emitted when an error occurs during the processing of a stream on the server.
Event: 'http2.server.stream.finish'#
- `stream` <ServerHttp2Stream>
- `headers` <HTTP/2 Headers Object>
- `flags` <number>

Emitted when a stream is sent on the server.
Event: 'http2.server.stream.close'#
- `stream` <ServerHttp2Stream>

Emitted when a stream is closed on the server. The HTTP/2 error code used when closing the stream can be retrieved using the `stream.rstCode` property.
Modules#
Event: 'module.require.start'#
- `event` <Object> containing the following properties:
  - `id` Argument passed to `require()`. Module name.
  - `parentFilename` Name of the module that attempted to require(id).

Emitted when `require()` is executed. See `start` event.
Event: 'module.require.end'#
- `event` <Object> containing the following properties:
  - `id` Argument passed to `require()`. Module name.
  - `parentFilename` Name of the module that attempted to require(id).

Emitted when a `require()` call returns. See `end` event.
Event: 'module.require.error'#
- `event` <Object> containing the following properties:
  - `id` Argument passed to `require()`. Module name.
  - `parentFilename` Name of the module that attempted to require(id).
- `error` <Error>

Emitted when a `require()` throws an error. See `error` event.
Event: 'module.import.asyncStart'#
- `event` <Object> containing the following properties:
  - `id` Argument passed to `import()`. Module name.
  - `parentURL` URL object of the module that attempted to import(id).

Emitted when `import()` is invoked. See `asyncStart` event.
Event: 'module.import.asyncEnd'#
- `event` <Object> containing the following properties:
  - `id` Argument passed to `import()`. Module name.
  - `parentURL` URL object of the module that attempted to import(id).

Emitted when `import()` has completed. See `asyncEnd` event.
Event: 'module.import.error'#
- `event` <Object> containing the following properties:
  - `id` Argument passed to `import()`. Module name.
  - `parentURL` URL object of the module that attempted to import(id).
- `error` <Error>

Emitted when an `import()` throws an error. See `error` event.
NET#
Event: 'net.client.socket'#
- `socket` <net.Socket> | <tls.TLSSocket>

Emitted when a new TCP or pipe client socket connection is created.
Event: 'tracing:net.server.listen:asyncStart'#
- `server` <net.Server>
- `options` <Object>

Emitted when `net.Server.listen()` is invoked, before the port or pipe is actually set up.
Event: 'tracing:net.server.listen:asyncEnd'#
- `server` <net.Server>

Emitted when `net.Server.listen()` has completed and thus the server is ready to accept connections.
Event: 'tracing:net.server.listen:error'#
- `server` <net.Server>
- `error` <Error>

Emitted when `net.Server.listen()` returns an error.
UDP#
DNS#
Source Code: lib/dns.js

The `node:dns` module enables name resolution. For example, use it to look up IP addresses of host names.

Although named for the Domain Name System (DNS), it does not always use the DNS protocol for lookups. `dns.lookup()` uses the operating system facilities to perform name resolution. It may not need to perform any network communication. To perform name resolution the way other applications on the same system do, use `dns.lookup()`.
```js
import dns from 'node:dns';
dns.lookup('example.org', (err, address, family) => {
  console.log('address: %j family: IPv%s', address, family);
});
// address: "2606:2800:21f:cb07:6820:80da:af6b:8b2c" family: IPv6
```

```js
const dns = require('node:dns');
dns.lookup('example.org', (err, address, family) => {
  console.log('address: %j family: IPv%s', address, family);
});
// address: "2606:2800:21f:cb07:6820:80da:af6b:8b2c" family: IPv6
```
All other functions in the `node:dns` module connect to an actual DNS server to perform name resolution. They will always use the network to perform DNS queries. These functions do not use the same set of configuration files used by `dns.lookup()` (e.g. `/etc/hosts`). Use these functions to always perform DNS queries, bypassing other name-resolution facilities.
```js
import dns from 'node:dns';
dns.resolve4('archive.org', (err, addresses) => {
  if (err) throw err;

  console.log(`addresses: ${JSON.stringify(addresses)}`);

  addresses.forEach((a) => {
    dns.reverse(a, (err, hostnames) => {
      if (err) {
        throw err;
      }
      console.log(`reverse for ${a}: ${JSON.stringify(hostnames)}`);
    });
  });
});
```

```js
const dns = require('node:dns');
dns.resolve4('archive.org', (err, addresses) => {
  if (err) throw err;

  console.log(`addresses: ${JSON.stringify(addresses)}`);

  addresses.forEach((a) => {
    dns.reverse(a, (err, hostnames) => {
      if (err) {
        throw err;
      }
      console.log(`reverse for ${a}: ${JSON.stringify(hostnames)}`);
    });
  });
});
```
See theImplementation considerations section for more information.
Class: dns.Resolver#
An independent resolver for DNS requests.
Creating a new resolver uses the default server settings. Setting the servers used for a resolver using `resolver.setServers()` does not affect other resolvers:
```js
import { Resolver } from 'node:dns';
const resolver = new Resolver();
resolver.setServers(['4.4.4.4']);

// This request will use the server at 4.4.4.4, independent of global settings.
resolver.resolve4('example.org', (err, addresses) => {
  // ...
});
```

```js
const { Resolver } = require('node:dns');
const resolver = new Resolver();
resolver.setServers(['4.4.4.4']);

// This request will use the server at 4.4.4.4, independent of global settings.
resolver.resolve4('example.org', (err, addresses) => {
  // ...
});
```
The following methods from thenode:dns module are available:
- `resolver.getServers()`
- `resolver.resolve()`
- `resolver.resolve4()`
- `resolver.resolve6()`
- `resolver.resolveAny()`
- `resolver.resolveCaa()`
- `resolver.resolveCname()`
- `resolver.resolveMx()`
- `resolver.resolveNaptr()`
- `resolver.resolveNs()`
- `resolver.resolvePtr()`
- `resolver.resolveSoa()`
- `resolver.resolveSrv()`
- `resolver.resolveTlsa()`
- `resolver.resolveTxt()`
- `resolver.reverse()`
- `resolver.setServers()`
Resolver([options])#
History
| Version | Changes |
|---|---|
| v16.7.0, v14.18.0 | The |
| v12.18.3 | The constructor now accepts an |
| v8.3.0 | Added in: v8.3.0 |
Create a new resolver.
resolver.cancel()#
Cancel all outstanding DNS queries made by this resolver. The corresponding callbacks will be called with an error with code `ECANCELLED`.
resolver.setLocalAddress([ipv4][, ipv6])#
- `ipv4` <string> A string representation of an IPv4 address. Default: `'0.0.0.0'`
- `ipv6` <string> A string representation of an IPv6 address. Default: `'::0'`
The resolver instance will send its requests from the specified IP address.This allows programs to specify outbound interfaces when used on multi-homedsystems.
If a v4 or v6 address is not specified, it is set to the default and theoperating system will choose a local address automatically.
The resolver will use the v4 local address when making requests to IPv4 DNS servers, and the v6 local address when making requests to IPv6 DNS servers. The `rrtype` of resolution requests has no impact on the local address used.
dns.getServers()#
- Returns: <string[]>

Returns an array of IP address strings, formatted according to RFC 5952, that are currently configured for DNS resolution. A string will include a port section if a custom port is used.
```js
[
  '8.8.8.8',
  '2001:4860:4860::8888',
  '8.8.8.8:1053',
  '[2001:4860:4860::8888]:1053',
]
```

dns.lookup(hostname[, options], callback)#
History
| Version | Changes |
|---|---|
| v22.1.0, v20.13.0 | The |
| v18.4.0 | For compatibility with |
| v18.0.0 | Passing an invalid callback to the |
| v17.0.0 | The |
| v8.5.0 | The |
| v1.2.0 | The |
| v0.1.90 | Added in: v0.1.90 |
- `hostname` <string>
- `options` <integer> | <Object>
  - `family` <integer> | <string> The record family. Must be `4`, `6`, or `0`. For backward compatibility reasons, `'IPv4'` and `'IPv6'` are interpreted as `4` and `6` respectively. The value `0` indicates that either an IPv4 or IPv6 address is returned. If the value `0` is used with `{ all: true }` (see below), either one of or both IPv4 and IPv6 addresses are returned, depending on the system's DNS resolver. Default: `0`.
  - `hints` <number> One or more supported `getaddrinfo` flags. Multiple flags may be passed by bitwise `OR`ing their values.
  - `all` <boolean> When `true`, the callback returns all resolved addresses in an array. Otherwise, returns a single address. Default: `false`.
  - `order` <string> When `verbatim`, the resolved addresses are returned unsorted. When `ipv4first`, the resolved addresses are sorted by placing IPv4 addresses before IPv6 addresses. When `ipv6first`, the resolved addresses are sorted by placing IPv6 addresses before IPv4 addresses. Default: `verbatim` (addresses are not reordered). Default value is configurable using `dns.setDefaultResultOrder()` or `--dns-result-order`.
  - `verbatim` <boolean> When `true`, the callback receives IPv4 and IPv6 addresses in the order the DNS resolver returned them. When `false`, IPv4 addresses are placed before IPv6 addresses. This option will be deprecated in favor of `order`. When both are specified, `order` has higher precedence. New code should only use `order`. Default: `true` (addresses are not reordered). Default value is configurable using `dns.setDefaultResultOrder()` or `--dns-result-order`.
- `callback` <Function>
Resolves a host name (e.g. `'nodejs.org'`) into the first found A (IPv4) or AAAA (IPv6) record. All `option` properties are optional. If `options` is an integer, then it must be `4` or `6` – if `options` is not provided, then either IPv4 or IPv6 addresses, or both, are returned if found.

With the `all` option set to `true`, the arguments for `callback` change to `(err, addresses)`, with `addresses` being an array of objects with the properties `address` and `family`.

On error, `err` is an `Error` object, where `err.code` is the error code. Keep in mind that `err.code` will be set to `'ENOTFOUND'` not only when the host name does not exist but also when the lookup fails in other ways such as no available file descriptors.

`dns.lookup()` does not necessarily have anything to do with the DNS protocol. The implementation uses an operating system facility that can associate names with addresses and vice versa. This implementation can have subtle but important consequences on the behavior of any Node.js program. Please take some time to consult the Implementation considerations section before using `dns.lookup()`.
Example usage:
```js
import dns from 'node:dns';
const options = {
  family: 6,
  hints: dns.ADDRCONFIG | dns.V4MAPPED,
};
dns.lookup('example.org', options, (err, address, family) =>
  console.log('address: %j family: IPv%s', address, family));
// address: "2606:2800:21f:cb07:6820:80da:af6b:8b2c" family: IPv6

// When options.all is true, the result will be an Array.
options.all = true;
dns.lookup('example.org', options, (err, addresses) =>
  console.log('addresses: %j', addresses));
// addresses: [{"address":"2606:2800:21f:cb07:6820:80da:af6b:8b2c","family":6}]
```

```js
const dns = require('node:dns');
const options = {
  family: 6,
  hints: dns.ADDRCONFIG | dns.V4MAPPED,
};
dns.lookup('example.org', options, (err, address, family) =>
  console.log('address: %j family: IPv%s', address, family));
// address: "2606:2800:21f:cb07:6820:80da:af6b:8b2c" family: IPv6

// When options.all is true, the result will be an Array.
options.all = true;
dns.lookup('example.org', options, (err, addresses) =>
  console.log('addresses: %j', addresses));
// addresses: [{"address":"2606:2800:21f:cb07:6820:80da:af6b:8b2c","family":6}]
```
If this method is invoked as its `util.promisify()`ed version, and `all` is not set to `true`, it returns a `Promise` for an `Object` with `address` and `family` properties.
Supported getaddrinfo flags#
History
| Version | Changes |
|---|---|
| v13.13.0, v12.17.0 | Added support for the |
The following flags can be passed as hints to `dns.lookup()`.

- `dns.ADDRCONFIG`: Limits returned address types to the types of non-loopback addresses configured on the system. For example, IPv4 addresses are only returned if the current system has at least one IPv4 address configured.
- `dns.V4MAPPED`: If the IPv6 family was specified, but no IPv6 addresses were found, then return IPv4 mapped IPv6 addresses. It is not supported on some operating systems (e.g. FreeBSD 10.1).
- `dns.ALL`: If `dns.V4MAPPED` is specified, return resolved IPv6 addresses as well as IPv4 mapped IPv6 addresses.
dns.lookupService(address, port, callback)#
History
| Version | Changes |
|---|---|
| v18.0.0 | Passing an invalid callback to the |
| v0.11.14 | Added in: v0.11.14 |
- `address` <string>
- `port` <number>
- `callback` <Function>

Resolves the given `address` and `port` into a host name and service using the operating system's underlying `getnameinfo` implementation.

If `address` is not a valid IP address, a `TypeError` will be thrown. The `port` will be coerced to a number. If it is not a legal port, a `TypeError` will be thrown.

On an error, `err` is an `Error` object, where `err.code` is the error code.
```js
import dns from 'node:dns';
dns.lookupService('127.0.0.1', 22, (err, hostname, service) => {
  console.log(hostname, service);
  // Prints: localhost ssh
});
```

```js
const dns = require('node:dns');
dns.lookupService('127.0.0.1', 22, (err, hostname, service) => {
  console.log(hostname, service);
  // Prints: localhost ssh
});
```
If this method is invoked as its `util.promisify()`ed version, it returns a `Promise` for an `Object` with `hostname` and `service` properties.
dns.resolve(hostname[, rrtype], callback)#
History
| Version | Changes |
|---|---|
| v18.0.0 | Passing an invalid callback to the |
| v0.1.27 | Added in: v0.1.27 |
- `hostname` <string> Host name to resolve.
- `rrtype` <string> Resource record type. Default: `'A'`.
- `callback` <Function>
  - `err` <Error>
  - `records` <string[]> | <Object[]> | <Object>

Uses the DNS protocol to resolve a host name (e.g. `'nodejs.org'`) into an array of the resource records. The `callback` function has arguments `(err, records)`. When successful, `records` will be an array of resource records. The type and structure of individual results varies based on `rrtype`:
| `rrtype` | `records` contains | Result type | Shorthand method |
|---|---|---|---|
| `'A'` | IPv4 addresses (default) | <string> | `dns.resolve4()` |
| `'AAAA'` | IPv6 addresses | <string> | `dns.resolve6()` |
| `'ANY'` | any records | <Object> | `dns.resolveAny()` |
| `'CAA'` | CA authorization records | <Object> | `dns.resolveCaa()` |
| `'CNAME'` | canonical name records | <string> | `dns.resolveCname()` |
| `'MX'` | mail exchange records | <Object> | `dns.resolveMx()` |
| `'NAPTR'` | name authority pointer records | <Object> | `dns.resolveNaptr()` |
| `'NS'` | name server records | <string> | `dns.resolveNs()` |
| `'PTR'` | pointer records | <string> | `dns.resolvePtr()` |
| `'SOA'` | start of authority records | <Object> | `dns.resolveSoa()` |
| `'SRV'` | service records | <Object> | `dns.resolveSrv()` |
| `'TLSA'` | certificate associations | <Object> | `dns.resolveTlsa()` |
| `'TXT'` | text records | <string[]> | `dns.resolveTxt()` |
On error, `err` is an `Error` object, where `err.code` is one of the DNS error codes.
dns.resolve4(hostname[, options], callback)#
History
| Version | Changes |
|---|---|
| v18.0.0 | Passing an invalid callback to the |
| v7.2.0 | This method now supports passing |
| v0.1.16 | Added in: v0.1.16 |
- `hostname` <string> Host name to resolve.
- `options` <Object>
  - `ttl` <boolean> Retrieves the Time-To-Live value (TTL) of each record. When `true`, the callback receives an array of `{ address: '1.2.3.4', ttl: 60 }` objects rather than an array of strings, with the TTL expressed in seconds.
- `callback` <Function>
  - `err` <Error>
  - `addresses` <string[]> | <Object[]>

Uses the DNS protocol to resolve IPv4 addresses (A records) for the `hostname`. The `addresses` argument passed to the `callback` function will contain an array of IPv4 addresses (e.g. `['74.125.79.104', '74.125.79.105', '74.125.79.106']`).
dns.resolve6(hostname[, options], callback)#
History
| Version | Changes |
|---|---|
| v18.0.0 | Passing an invalid callback to the |
| v7.2.0 | This method now supports passing |
| v0.1.16 | Added in: v0.1.16 |
- `hostname` <string> Host name to resolve.
- `options` <Object>
  - `ttl` <boolean> Retrieve the Time-To-Live value (TTL) of each record. When `true`, the callback receives an array of `{ address: '0:1:2:3:4:5:6:7', ttl: 60 }` objects rather than an array of strings, with the TTL expressed in seconds.
- `callback` <Function>
  - `err` <Error>
  - `addresses` <string[]> | <Object[]>

Uses the DNS protocol to resolve IPv6 addresses (AAAA records) for the `hostname`. The `addresses` argument passed to the `callback` function will contain an array of IPv6 addresses.
dns.resolveAny(hostname, callback)#
History
| Version | Changes |
|---|---|
| v18.0.0 | Passing an invalid callback to the |
- `hostname` <string>
- `callback` <Function>
  - `err` <Error>
  - `ret` <Object[]>

Uses the DNS protocol to resolve all records (also known as `ANY` or `*` query). The `ret` argument passed to the `callback` function will be an array containing various types of records. Each object has a property `type` that indicates the type of the current record. And depending on the `type`, additional properties will be present on the object:
| Type | Properties |
|---|---|
| `'A'` | `address`/`ttl` |
| `'AAAA'` | `address`/`ttl` |
| `'CAA'` | Refer to `dns.resolveCaa()` |
| `'CNAME'` | `value` |
| `'MX'` | Refer to `dns.resolveMx()` |
| `'NAPTR'` | Refer to `dns.resolveNaptr()` |
| `'NS'` | `value` |
| `'PTR'` | `value` |
| `'SOA'` | Refer to `dns.resolveSoa()` |
| `'SRV'` | Refer to `dns.resolveSrv()` |
| `'TLSA'` | Refer to `dns.resolveTlsa()` |
| `'TXT'` | This type of record contains an array property called `entries` which refers to `dns.resolveTxt()`, e.g. `{ entries: ['...'], type: 'TXT' }` |
Here is an example of the `ret` object passed to the callback:

```js
[
  { type: 'A', address: '127.0.0.1', ttl: 299 },
  { type: 'CNAME', value: 'example.com' },
  { type: 'MX', exchange: 'alt4.aspmx.l.example.com', priority: 50 },
  { type: 'NS', value: 'ns1.example.com' },
  { type: 'TXT', entries: [ 'v=spf1 include:_spf.example.com ~all' ] },
  { type: 'SOA',
    nsname: 'ns1.example.com',
    hostmaster: 'admin.example.com',
    serial: 156696742,
    refresh: 900,
    retry: 900,
    expire: 1800,
    minttl: 60 },
]
```

DNS server operators may choose not to respond to `ANY` queries. It may be better to call individual methods like `dns.resolve4()`, `dns.resolveMx()`, and so on. For more details, see RFC 8482.
dns.resolveCname(hostname, callback)#
History
| Version | Changes |
|---|---|
| v18.0.0 | Passing an invalid callback to the |
| v0.3.2 | Added in: v0.3.2 |
- `hostname` <string>
- `callback` <Function>
  - `err` <Error>
  - `addresses` <string[]>

Uses the DNS protocol to resolve `CNAME` records for the `hostname`. The `addresses` argument passed to the `callback` function will contain an array of canonical name records available for the `hostname` (e.g. `['bar.example.com']`).
dns.resolveCaa(hostname, callback)#
History
| Version | Changes |
|---|---|
| v18.0.0 | Passing an invalid callback to the |
| v15.0.0, v14.17.0 | Added in: v15.0.0, v14.17.0 |
- `hostname` <string>
- `callback` <Function>
  - `err` <Error>
  - `records` <Object[]>

Uses the DNS protocol to resolve `CAA` records for the `hostname`. The `addresses` argument passed to the `callback` function will contain an array of certification authority authorization records available for the `hostname` (e.g. `[{critical: 0, iodef: 'mailto:pki@example.com'}, {critical: 128, issue: 'pki.example.com'}]`).
dns.resolveMx(hostname, callback)#
History
| Version | Changes |
|---|---|
| v18.0.0 | Passing an invalid callback to the |
| v0.1.27 | Added in: v0.1.27 |
- `hostname` <string>
- `callback` <Function>
  - `err` <Error>
  - `addresses` <Object[]>

Uses the DNS protocol to resolve mail exchange records (MX records) for the `hostname`. The `addresses` argument passed to the `callback` function will contain an array of objects containing both a `priority` and `exchange` property (e.g. `[{priority: 10, exchange: 'mx.example.com'}, ...]`).
dns.resolveNaptr(hostname, callback)#
History
| Version | Changes |
|---|---|
| v18.0.0 | Passing an invalid callback to the |
| v0.9.12 | Added in: v0.9.12 |
- `hostname` <string>
- `callback` <Function>
  - `err` <Error>
  - `addresses` <Object[]>

Uses the DNS protocol to resolve regular expression-based records (NAPTR records) for the `hostname`. The `addresses` argument passed to the `callback` function will contain an array of objects with the following properties:

- `flags`
- `service`
- `regexp`
- `replacement`
- `order`
- `preference`
```js
{
  flags: 's',
  service: 'SIP+D2U',
  regexp: '',
  replacement: '_sip._udp.example.com',
  order: 30,
  preference: 100
}
```

dns.resolveNs(hostname, callback)#
History
| Version | Changes |
|---|---|
| v18.0.0 | Passing an invalid callback to the |
| v0.1.90 | Added in: v0.1.90 |
- `hostname` <string>
- `callback` <Function>
  - `err` <Error>
  - `addresses` <string[]>

Uses the DNS protocol to resolve name server records (NS records) for the `hostname`. The `addresses` argument passed to the `callback` function will contain an array of name server records available for `hostname` (e.g. `['ns1.example.com', 'ns2.example.com']`).
dns.resolvePtr(hostname, callback)#
History
| Version | Changes |
|---|---|
| v18.0.0 | Passing an invalid callback to the |
| v6.0.0 | Added in: v6.0.0 |
- `hostname` <string>
- `callback` <Function>
  - `err` <Error>
  - `addresses` <string[]>

Uses the DNS protocol to resolve pointer records (PTR records) for the `hostname`. The `addresses` argument passed to the `callback` function will be an array of strings containing the reply records.
dns.resolveSoa(hostname, callback)#
History
| Version | Changes |
|---|---|
| v18.0.0 | Passing an invalid callback to the |
| v0.11.10 | Added in: v0.11.10 |
- `hostname` <string>
- `callback` <Function>

Uses the DNS protocol to resolve a start of authority record (SOA record) for the `hostname`. The `address` argument passed to the `callback` function will be an object with the following properties:

- `nsname`
- `hostmaster`
- `serial`
- `refresh`
- `retry`
- `expire`
- `minttl`
```js
{
  nsname: 'ns.example.com',
  hostmaster: 'root.example.com',
  serial: 2013101809,
  refresh: 10000,
  retry: 2400,
  expire: 604800,
  minttl: 3600
}
```

dns.resolveSrv(hostname, callback)#
History
| Version | Changes |
|---|---|
| v18.0.0 | Passing an invalid callback to the |
| v0.1.27 | Added in: v0.1.27 |
- `hostname` <string>
- `callback` <Function>
  - `err` <Error>
  - `addresses` <Object[]>

Uses the DNS protocol to resolve service records (SRV records) for the `hostname`. The `addresses` argument passed to the `callback` function will be an array of objects with the following properties:

- `priority`
- `weight`
- `port`
- `name`
```js
{
  priority: 10,
  weight: 5,
  port: 21223,
  name: 'service.example.com'
}
```

dns.resolveTlsa(hostname, callback)#
- `hostname` <string>
- `callback` <Function>
  - `err` <Error>
  - `records` <Object[]>

Uses the DNS protocol to resolve certificate associations (TLSA records) for the `hostname`. The `records` argument passed to the `callback` function is an array of objects with these properties:

- `certUsage`
- `selector`
- `match`
- `data`
```js
{
  certUsage: 3,
  selector: 1,
  match: 1,
  data: [ArrayBuffer]
}
```

dns.resolveTxt(hostname, callback)#
History
| Version | Changes |
|---|---|
| v18.0.0 | Passing an invalid callback to the |
| v0.1.27 | Added in: v0.1.27 |
- `hostname` <string>
- `callback` <Function>
  - `err` <Error>
  - `records` <string[]>

Uses the DNS protocol to resolve text queries (TXT records) for the `hostname`. The `records` argument passed to the `callback` function is a two-dimensional array of the text records available for `hostname` (e.g. `[ ['v=spf1 ip4:0.0.0.0 ', '~all' ] ]`). Each sub-array contains TXT chunks of one record. Depending on the use case, these could be either joined together or treated separately.
dns.reverse(ip, callback)#
- `ip` <string>
- `callback` <Function>
  - `err` <Error>
  - `hostnames` <string[]>

Performs a reverse DNS query that resolves an IPv4 or IPv6 address to an array of host names.

On error, `err` is an `Error` object, where `err.code` is one of the DNS error codes.
dns.setDefaultResultOrder(order)#
History
| Version | Changes |
|---|---|
| v22.1.0, v20.13.0 | The |
| v17.0.0 | Changed default value to |
| v16.4.0, v14.18.0 | Added in: v16.4.0, v14.18.0 |
- `order` <string> must be `'ipv4first'`, `'ipv6first'` or `'verbatim'`.

Set the default value of `order` in `dns.lookup()` and `dnsPromises.lookup()`. The value could be:

- `ipv4first`: sets default `order` to `ipv4first`.
- `ipv6first`: sets default `order` to `ipv6first`.
- `verbatim`: sets default `order` to `verbatim`.

The default is `verbatim` and `dns.setDefaultResultOrder()` has higher priority than `--dns-result-order`. When using worker threads, `dns.setDefaultResultOrder()` from the main thread won't affect the default dns orders in workers.
dns.getDefaultResultOrder()#
History
| Version | Changes |
|---|---|
| v22.1.0, v20.13.0 | The |
| v20.1.0, v18.17.0 | Added in: v20.1.0, v18.17.0 |
Get the default value for `order` in `dns.lookup()` and `dnsPromises.lookup()`. The value could be:

- `ipv4first`: for `order` defaulting to `ipv4first`.
- `ipv6first`: for `order` defaulting to `ipv6first`.
- `verbatim`: for `order` defaulting to `verbatim`.
dns.setServers(servers)#
- `servers` <string[]> array of RFC 5952 formatted addresses

Sets the IP address and port of servers to be used when performing DNS resolution. The `servers` argument is an array of RFC 5952 formatted addresses. If the port is the IANA default DNS port (53) it can be omitted.
```js
dns.setServers([
  '8.8.8.8',
  '[2001:4860:4860::8888]',
  '8.8.8.8:1053',
  '[2001:4860:4860::8888]:1053',
]);
```

An error will be thrown if an invalid address is provided.
The `dns.setServers()` method must not be called while a DNS query is in progress.

The `dns.setServers()` method affects only `dns.resolve()`, `dns.resolve*()` and `dns.reverse()` (and specifically not `dns.lookup()`).

This method works much like resolv.conf. That is, if attempting to resolve with the first server provided results in a `NOTFOUND` error, the `resolve()` method will not attempt to resolve with subsequent servers provided. Fallback DNS servers will only be used if the earlier ones time out or result in some other error.
DNS promises API#
History
| Version | Changes |
|---|---|
| v15.0.0 | Exposed as |
| v11.14.0, v10.17.0 | This API is no longer experimental. |
| v10.6.0 | Added in: v10.6.0 |
The `dns.promises` API provides an alternative set of asynchronous DNS methods that return `Promise` objects rather than using callbacks. The API is accessible via `require('node:dns').promises` or `require('node:dns/promises')`.
Class: dnsPromises.Resolver#
An independent resolver for DNS requests.
Creating a new resolver uses the default server settings. Setting the servers used for a resolver using `resolver.setServers()` does not affect other resolvers:
```js
import { Resolver } from 'node:dns/promises';
const resolver = new Resolver();
resolver.setServers(['4.4.4.4']);

// This request will use the server at 4.4.4.4, independent of global settings.
const addresses = await resolver.resolve4('example.org');
```

```js
const { Resolver } = require('node:dns').promises;
const resolver = new Resolver();
resolver.setServers(['4.4.4.4']);

// This request will use the server at 4.4.4.4, independent of global settings.
resolver.resolve4('example.org').then((addresses) => {
  // ...
});

// Alternatively, the same code can be written using async-await style.
(async function() {
  const addresses = await resolver.resolve4('example.org');
})();
```
The following methods from thednsPromises API are available:
- `resolver.getServers()`
- `resolver.resolve()`
- `resolver.resolve4()`
- `resolver.resolve6()`
- `resolver.resolveAny()`
- `resolver.resolveCaa()`
- `resolver.resolveCname()`
- `resolver.resolveMx()`
- `resolver.resolveNaptr()`
- `resolver.resolveNs()`
- `resolver.resolvePtr()`
- `resolver.resolveSoa()`
- `resolver.resolveSrv()`
- `resolver.resolveTlsa()`
- `resolver.resolveTxt()`
- `resolver.reverse()`
- `resolver.setServers()`
resolver.cancel()#
Cancel all outstanding DNS queries made by this resolver. The corresponding promises will be rejected with an error with the code `ECANCELLED`.
dnsPromises.getServers()#
- Returns: <string[]>

Returns an array of IP address strings, formatted according to RFC 5952, that are currently configured for DNS resolution. A string will include a port section if a custom port is used.

```js
[
  '8.8.8.8',
  '2001:4860:4860::8888',
  '8.8.8.8:1053',
  '[2001:4860:4860::8888]:1053',
]
```

dnsPromises.lookup(hostname[, options])#
History
| Version | Changes |
|---|---|
| v22.1.0, v20.13.0 | The |
| v10.6.0 | Added in: v10.6.0 |
- `hostname` <string>
- `options` <integer> | <Object>
  - `family` <integer> The record family. Must be `4`, `6`, or `0`. The value `0` indicates that either an IPv4 or IPv6 address is returned. If the value `0` is used with `{ all: true }` (see below), either one of or both IPv4 and IPv6 addresses are returned, depending on the system's DNS resolver. Default: `0`.
  - `hints` <number> One or more supported `getaddrinfo` flags. Multiple flags may be passed by bitwise `OR`ing their values.
  - `all` <boolean> When `true`, the `Promise` is resolved with all addresses in an array. Otherwise, returns a single address. Default: `false`.
  - `order` <string> When `verbatim`, the `Promise` is resolved with IPv4 and IPv6 addresses in the order the DNS resolver returned them. When `ipv4first`, IPv4 addresses are placed before IPv6 addresses. When `ipv6first`, IPv6 addresses are placed before IPv4 addresses. Default: `verbatim` (addresses are not reordered). Default value is configurable using `dns.setDefaultResultOrder()` or `--dns-result-order`. New code should use `{ order: 'verbatim' }`.
  - `verbatim` <boolean> When `true`, the `Promise` is resolved with IPv4 and IPv6 addresses in the order the DNS resolver returned them. When `false`, IPv4 addresses are placed before IPv6 addresses. This option will be deprecated in favor of `order`. When both are specified, `order` has higher precedence. New code should only use `order`. Default: currently `false` (addresses are reordered) but this is expected to change in the not too distant future. Default value is configurable using `dns.setDefaultResultOrder()` or `--dns-result-order`.
Resolves a host name (e.g.'nodejs.org') into the first found A (IPv4) orAAAA (IPv6) record. Alloption properties are optional. Ifoptions is aninteger, then it must be4 or6 – ifoptions is not provided, theneither IPv4 or IPv6 addresses, or both, are returned if found.
With theall option set totrue, thePromise is resolved withaddressesbeing an array of objects with the propertiesaddress andfamily.
On error, thePromise is rejected with anError object, whereerr.codeis the error code.Keep in mind thaterr.code will be set to'ENOTFOUND' not only whenthe host name does not exist but also when the lookup fails in other wayssuch as no available file descriptors.
dnsPromises.lookup() does not necessarily have anything to do with the DNSprotocol. The implementation uses an operating system facility that canassociate names with addresses and vice versa. This implementation can havesubtle but important consequences on the behavior of any Node.js program. Pleasetake some time to consult theImplementation considerations section beforeusingdnsPromises.lookup().
Example usage:
```js
import dns from 'node:dns';
const dnsPromises = dns.promises;

const options = {
  family: 6,
  hints: dns.ADDRCONFIG | dns.V4MAPPED,
};

await dnsPromises.lookup('example.org', options).then((result) => {
  console.log('address: %j family: IPv%s', result.address, result.family);
  // address: "2606:2800:21f:cb07:6820:80da:af6b:8b2c" family: IPv6
});

// When options.all is true, the result will be an Array.
options.all = true;
await dnsPromises.lookup('example.org', options).then((result) => {
  console.log('addresses: %j', result);
  // addresses: [{"address":"2606:2800:21f:cb07:6820:80da:af6b:8b2c","family":6}]
});
```

```js
const dns = require('node:dns');
const dnsPromises = dns.promises;

const options = {
  family: 6,
  hints: dns.ADDRCONFIG | dns.V4MAPPED,
};

dnsPromises.lookup('example.org', options).then((result) => {
  console.log('address: %j family: IPv%s', result.address, result.family);
  // address: "2606:2800:21f:cb07:6820:80da:af6b:8b2c" family: IPv6
});

// When options.all is true, the result will be an Array.
options.all = true;
dnsPromises.lookup('example.org', options).then((result) => {
  console.log('addresses: %j', result);
  // addresses: [{"address":"2606:2800:21f:cb07:6820:80da:af6b:8b2c","family":6}]
});
```
dnsPromises.lookupService(address, port)#
Resolves the given address and port into a host name and service using the operating system's underlying getnameinfo implementation.

If address is not a valid IP address, a TypeError will be thrown. The port will be coerced to a number. If it is not a legal port, a TypeError will be thrown.
On error, thePromise is rejected with anError object, whereerr.codeis the error code.
```js
import dnsPromises from 'node:dns/promises';
const result = await dnsPromises.lookupService('127.0.0.1', 22);
console.log(result.hostname, result.service);
// Prints: localhost ssh
```

```js
const dnsPromises = require('node:dns').promises;
dnsPromises.lookupService('127.0.0.1', 22).then((result) => {
  console.log(result.hostname, result.service);
  // Prints: localhost ssh
});
```
dnsPromises.resolve(hostname[, rrtype])#
Uses the DNS protocol to resolve a host name (e.g. 'nodejs.org') into an array of the resource records. When successful, the Promise is resolved with an array of resource records. The type and structure of individual results vary based on rrtype:
| rrtype | records contains | Result type | Shorthand method |
|---|---|---|---|
| 'A' | IPv4 addresses (default) | <string> | dnsPromises.resolve4() |
| 'AAAA' | IPv6 addresses | <string> | dnsPromises.resolve6() |
| 'ANY' | any records | <Object> | dnsPromises.resolveAny() |
| 'CAA' | CA authorization records | <Object> | dnsPromises.resolveCaa() |
| 'CNAME' | canonical name records | <string> | dnsPromises.resolveCname() |
| 'MX' | mail exchange records | <Object> | dnsPromises.resolveMx() |
| 'NAPTR' | name authority pointer records | <Object> | dnsPromises.resolveNaptr() |
| 'NS' | name server records | <string> | dnsPromises.resolveNs() |
| 'PTR' | pointer records | <string> | dnsPromises.resolvePtr() |
| 'SOA' | start of authority records | <Object> | dnsPromises.resolveSoa() |
| 'SRV' | service records | <Object> | dnsPromises.resolveSrv() |
| 'TLSA' | certificate associations | <Object> | dnsPromises.resolveTlsa() |
| 'TXT' | text records | <string[]> | dnsPromises.resolveTxt() |
On error, the Promise is rejected with an Error object, where err.code is one of the DNS error codes.
dnsPromises.resolve4(hostname[, options])#
- hostname <string> Host name to resolve.
- options <Object>
  - ttl <boolean> Retrieve the Time-To-Live value (TTL) of each record. When true, the Promise is resolved with an array of { address: '1.2.3.4', ttl: 60 } objects rather than an array of strings, with the TTL expressed in seconds.

Uses the DNS protocol to resolve IPv4 addresses (A records) for the hostname. On success, the Promise is resolved with an array of IPv4 addresses (e.g. ['74.125.79.104', '74.125.79.105', '74.125.79.106']).
dnsPromises.resolve6(hostname[, options])#
- hostname <string> Host name to resolve.
- options <Object>
  - ttl <boolean> Retrieve the Time-To-Live value (TTL) of each record. When true, the Promise is resolved with an array of { address: '0:1:2:3:4:5:6:7', ttl: 60 } objects rather than an array of strings, with the TTL expressed in seconds.

Uses the DNS protocol to resolve IPv6 addresses (AAAA records) for the hostname. On success, the Promise is resolved with an array of IPv6 addresses.
dnsPromises.resolveAny(hostname)#
- hostname <string>

Uses the DNS protocol to resolve all records (also known as ANY or * query). On success, the Promise is resolved with an array containing various types of records. Each object has a property type that indicates the type of the current record. And depending on the type, additional properties will be present on the object:
| Type | Properties |
|---|---|
| 'A' | address / ttl |
| 'AAAA' | address / ttl |
| 'CAA' | Refer to dnsPromises.resolveCaa() |
| 'CNAME' | value |
| 'MX' | Refer to dnsPromises.resolveMx() |
| 'NAPTR' | Refer to dnsPromises.resolveNaptr() |
| 'NS' | value |
| 'PTR' | value |
| 'SOA' | Refer to dnsPromises.resolveSoa() |
| 'SRV' | Refer to dnsPromises.resolveSrv() |
| 'TLSA' | Refer to dnsPromises.resolveTlsa() |
| 'TXT' | This type of record contains an array property called entries which refers to dnsPromises.resolveTxt(), e.g. { entries: ['...'], type: 'TXT' } |
Here is an example of the result object:
```js
[ { type: 'A', address: '127.0.0.1', ttl: 299 },
  { type: 'CNAME', value: 'example.com' },
  { type: 'MX', exchange: 'alt4.aspmx.l.example.com', priority: 50 },
  { type: 'NS', value: 'ns1.example.com' },
  { type: 'TXT', entries: [ 'v=spf1 include:_spf.example.com ~all' ] },
  { type: 'SOA',
    nsname: 'ns1.example.com',
    hostmaster: 'admin.example.com',
    serial: 156696742,
    refresh: 900,
    retry: 900,
    expire: 1800,
    minttl: 60 } ]
```

dnsPromises.resolveCaa(hostname)#
- hostname <string>

Uses the DNS protocol to resolve CAA records for the hostname. On success, the Promise is resolved with an array of objects containing available certification authority authorization records available for the hostname (e.g. [{critical: 0, iodef: 'mailto:pki@example.com'}, {critical: 128, issue: 'pki.example.com'}]).
dnsPromises.resolveCname(hostname)#
- hostname <string>

Uses the DNS protocol to resolve CNAME records for the hostname. On success, the Promise is resolved with an array of canonical name records available for the hostname (e.g. ['bar.example.com']).
dnsPromises.resolveMx(hostname)#
- hostname <string>

Uses the DNS protocol to resolve mail exchange records (MX records) for the hostname. On success, the Promise is resolved with an array of objects containing both a priority and exchange property (e.g. [{priority: 10, exchange: 'mx.example.com'}, ...]).
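Because each resolved object carries a priority, callers often sort the records before choosing an exchange. A minimal sketch of that, using illustrative sample data rather than a live DNS query (the host names are made up):

```javascript
// Sample records shaped like the objects dnsPromises.resolveMx() resolves with.
// These hosts are made up for illustration, not live DNS results.
const records = [
  { priority: 20, exchange: 'alt1.mx.example.com' },
  { priority: 10, exchange: 'mx.example.com' },
  { priority: 20, exchange: 'alt2.mx.example.com' },
];

// A lower priority value means "try this exchange first".
const byPriority = [...records].sort((a, b) => a.priority - b.priority);
console.log(byPriority[0].exchange); // 'mx.example.com'
```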
dnsPromises.resolveNaptr(hostname)#
- hostname <string>

Uses the DNS protocol to resolve regular expression-based records (NAPTR records) for the hostname. On success, the Promise is resolved with an array of objects with the following properties:

- flags
- service
- regexp
- replacement
- order
- preference
```js
{
  flags: 's',
  service: 'SIP+D2U',
  regexp: '',
  replacement: '_sip._udp.example.com',
  order: 30,
  preference: 100
}
```

dnsPromises.resolveNs(hostname)#
- hostname <string>

Uses the DNS protocol to resolve name server records (NS records) for the hostname. On success, the Promise is resolved with an array of name server records available for hostname (e.g. ['ns1.example.com', 'ns2.example.com']).
dnsPromises.resolvePtr(hostname)#
- hostname <string>

Uses the DNS protocol to resolve pointer records (PTR records) for the hostname. On success, the Promise is resolved with an array of strings containing the reply records.
dnsPromises.resolveSoa(hostname)#
- hostname <string>

Uses the DNS protocol to resolve a start of authority record (SOA record) for the hostname. On success, the Promise is resolved with an object with the following properties:

- nsname
- hostmaster
- serial
- refresh
- retry
- expire
- minttl
```js
{
  nsname: 'ns.example.com',
  hostmaster: 'root.example.com',
  serial: 2013101809,
  refresh: 10000,
  retry: 2400,
  expire: 604800,
  minttl: 3600
}
```

dnsPromises.resolveSrv(hostname)#
- hostname <string>

Uses the DNS protocol to resolve service records (SRV records) for the hostname. On success, the Promise is resolved with an array of objects with the following properties:

- priority
- weight
- port
- name
```js
{
  priority: 10,
  weight: 5,
  port: 21223,
  name: 'service.example.com'
}
```

dnsPromises.resolveTlsa(hostname)#
- hostname <string>

Uses the DNS protocol to resolve certificate associations (TLSA records) for the hostname. On success, the Promise is resolved with an array of objects with these properties:

- certUsage
- selector
- match
- data
```js
{
  certUsage: 3,
  selector: 1,
  match: 1,
  data: [ArrayBuffer]
}
```

dnsPromises.resolveTxt(hostname)#
- hostname <string>

Uses the DNS protocol to resolve text queries (TXT records) for the hostname. On success, the Promise is resolved with a two-dimensional array of the text records available for hostname (e.g. [ ['v=spf1 ip4:0.0.0.0 ', '~all' ] ]). Each sub-array contains TXT chunks of one record. Depending on the use case, these could be either joined together or treated separately.
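Joining the chunks of each record back into a single string is a one-liner. A minimal sketch operating on sample data shaped like a resolveTxt() result (not a live query):

```javascript
// Shape resolved by dnsPromises.resolveTxt(): one sub-array per TXT record,
// each holding that record's chunks. Sample data for illustration.
const records = [
  ['v=spf1 ip4:0.0.0.0 ', '~all'],
  ['some-other-entry'],
];

// Join the chunks of each record into one string per record.
const joined = records.map((chunks) => chunks.join(''));
console.log(joined[0]); // 'v=spf1 ip4:0.0.0.0 ~all'
```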
dnsPromises.reverse(ip)#
- ip <string>

Performs a reverse DNS query that resolves an IPv4 or IPv6 address to an array of host names.

On error, the Promise is rejected with an Error object, where err.code is one of the DNS error codes.
dnsPromises.setDefaultResultOrder(order)#
History
| Version | Changes |
|---|---|
| v22.1.0, v20.13.0 | The ipv6first value is now supported. |
| v17.0.0 | Changed default value to verbatim. |
| v16.4.0, v14.18.0 | Added in: v16.4.0, v14.18.0 |
- order <string> must be 'ipv4first', 'ipv6first' or 'verbatim'.

Set the default value of order in dns.lookup() and dnsPromises.lookup(). The value could be:

- ipv4first: sets default order to ipv4first.
- ipv6first: sets default order to ipv6first.
- verbatim: sets default order to verbatim.

The default is verbatim, and dnsPromises.setDefaultResultOrder() has higher priority than --dns-result-order. When using worker threads, dnsPromises.setDefaultResultOrder() from the main thread won't affect the default DNS orders in workers.
dnsPromises.setServers(servers)#
- servers <string[]> array of RFC 5952 formatted addresses

Sets the IP address and port of servers to be used when performing DNS resolution. The servers argument is an array of RFC 5952 formatted addresses. If the port is the IANA default DNS port (53) it can be omitted.
```js
dnsPromises.setServers([
  '8.8.8.8',
  '[2001:4860:4860::8888]',
  '8.8.8.8:1053',
  '[2001:4860:4860::8888]:1053',
]);
```

An error will be thrown if an invalid address is provided.
ThednsPromises.setServers() method must not be called while a DNS query is inprogress.
This method works much like resolv.conf. That is, if attempting to resolve with the first server provided results in a NOTFOUND error, the resolve() method will not attempt to resolve with subsequent servers provided. Fallback DNS servers will only be used if the earlier ones time out or result in some other error.
Error codes#
Each DNS query can return one of the following error codes:
- dns.NODATA: DNS server returned an answer with no data.
- dns.FORMERR: DNS server claims query was misformatted.
- dns.SERVFAIL: DNS server returned general failure.
- dns.NOTFOUND: Domain name not found.
- dns.NOTIMP: DNS server does not implement the requested operation.
- dns.REFUSED: DNS server refused query.
- dns.BADQUERY: Misformatted DNS query.
- dns.BADNAME: Misformatted host name.
- dns.BADFAMILY: Unsupported address family.
- dns.BADRESP: Misformatted DNS reply.
- dns.CONNREFUSED: Could not contact DNS servers.
- dns.TIMEOUT: Timeout while contacting DNS servers.
- dns.EOF: End of file.
- dns.FILE: Error reading file.
- dns.NOMEM: Out of memory.
- dns.DESTRUCTION: Channel is being destroyed.
- dns.BADSTR: Misformatted string.
- dns.BADFLAGS: Illegal flags specified.
- dns.NONAME: Given host name is not numeric.
- dns.BADHINTS: Illegal hints flags specified.
- dns.NOTINITIALIZED: c-ares library initialization not yet performed.
- dns.LOADIPHLPAPI: Error loading iphlpapi.dll.
- dns.ADDRGETNETWORKPARAMS: Could not find GetNetworkParams function.
- dns.CANCELLED: DNS query cancelled.

The dnsPromises API also exports the above error codes, e.g., dnsPromises.NODATA.
Implementation considerations#
Although dns.lookup() and the various dns.resolve*()/dns.reverse() functions have the same goal of associating a network name with a network address (or vice versa), their behavior is quite different. These differences can have subtle but significant consequences on the behavior of Node.js programs.
dns.lookup()#
Under the hood, dns.lookup() uses the same operating system facilities as most other programs. For instance, dns.lookup() will almost always resolve a given name the same way as the ping command. On most POSIX-like operating systems, the behavior of the dns.lookup() function can be modified by changing settings in nsswitch.conf(5) and/or resolv.conf(5), but changing these files will change the behavior of all other programs running on the same operating system.

Though the call to dns.lookup() will be asynchronous from JavaScript's perspective, it is implemented as a synchronous call to getaddrinfo(3) that runs on libuv's threadpool. This can have surprising negative performance implications for some applications, see the UV_THREADPOOL_SIZE documentation for more information.

Various networking APIs will call dns.lookup() internally to resolve host names. If that is an issue, consider resolving the host name to an address using dns.resolve() and using the address instead of a host name. Also, some networking APIs (such as socket.connect() and dgram.createSocket()) allow the default resolver, dns.lookup(), to be replaced.
dns.resolve(),dns.resolve*(), anddns.reverse()#
These functions are implemented quite differently than dns.lookup(). They do not use getaddrinfo(3) and they always perform a DNS query on the network. This network communication is always done asynchronously and does not use libuv's threadpool.

As a result, these functions cannot have the same negative impact on other processing that happens on libuv's threadpool that dns.lookup() can have.

They do not use the same set of configuration files that dns.lookup() uses. For instance, they do not use the configuration from /etc/hosts.
Domain#
History
| Version | Changes |
|---|---|
| v8.8.0 | Any Promises created in VM contexts no longer have a .domain property. Their handlers are still executed in the proper domain, however, and Promises created in the main context still possess a .domain property. |
| v8.0.0 | Handlers for Promises are now invoked in the domain in which the first promise of a chain was created. |
| v1.4.2 | Deprecated since: v1.4.2 |
Source Code:lib/domain.js
This module is pending deprecation. Once a replacement API has been finalized, this module will be fully deprecated. Most developers should not have cause to use this module. Users who absolutely must have the functionality that domains provide may rely on it for the time being but should expect to have to migrate to a different solution in the future.

Domains provide a way to handle multiple different IO operations as a single group. If any of the event emitters or callbacks registered to a domain emit an 'error' event, or throw an error, then the domain object will be notified, rather than losing the context of the error in the process.on('uncaughtException') handler, or causing the program to exit immediately with an error code.
Warning: Don't ignore errors!#
Domain error handlers are not a substitute for closing down aprocess when an error occurs.
By the very nature of how throw works in JavaScript, there is almost never any way to safely "pick up where it left off", without leaking references, or creating some other sort of undefined brittle state.
The safest way to respond to a thrown error is to shut down theprocess. Of course, in a normal web server, there may be manyopen connections, and it is not reasonable to abruptly shut those downbecause an error was triggered by someone else.
The better approach is to send an error response to the request thattriggered the error, while letting the others finish in their normaltime, and stop listening for new requests in that worker.
In this way,domain usage goes hand-in-hand with the cluster module,since the primary process can fork a new worker when a workerencounters an error. For Node.js programs that scale to multiplemachines, the terminating proxy or service registry can take note ofthe failure, and react accordingly.
For example, this is not a good idea:
```js
// XXX WARNING! BAD IDEA!

const d = require('node:domain').create();
d.on('error', (er) => {
  // The error won't crash the process, but what it does is worse!
  // Though we've prevented abrupt process restarting, we are leaking
  // a lot of resources if this ever happens.
  // This is no better than process.on('uncaughtException')!
  console.log(`error, but oh well ${er.message}`);
});
d.run(() => {
  require('node:http').createServer((req, res) => {
    handleRequest(req, res);
  }).listen(PORT);
});
```

By using the context of a domain, and the resilience of separating our program into multiple worker processes, we can react more appropriately, and handle errors with much greater safety.
```js
// Much better!

const cluster = require('node:cluster');
const PORT = +process.env.PORT || 1337;

if (cluster.isPrimary) {
  // A more realistic scenario would have more than 2 workers,
  // and perhaps not put the primary and worker in the same file.
  //
  // It is also possible to get a bit fancier about logging, and
  // implement whatever custom logic is needed to prevent DoS
  // attacks and other bad behavior.
  //
  // See the options in the cluster documentation.
  //
  // The important thing is that the primary does very little,
  // increasing our resilience to unexpected errors.

  cluster.fork();
  cluster.fork();

  cluster.on('disconnect', (worker) => {
    console.error('disconnect!');
    cluster.fork();
  });

} else {
  // the worker
  //
  // This is where we put our bugs!

  const domain = require('node:domain');

  // See the cluster documentation for more details about using
  // worker processes to serve requests. How it works, caveats, etc.

  const server = require('node:http').createServer((req, res) => {
    const d = domain.create();
    d.on('error', (er) => {
      console.error(`error ${er.stack}`);

      // We're in dangerous territory!
      // By definition, something unexpected occurred,
      // which we probably didn't want.
      // Anything can happen now! Be very careful!

      try {
        // Make sure we close down within 30 seconds
        const killtimer = setTimeout(() => {
          process.exit(1);
        }, 30000);
        // But don't keep the process open just for that!
        killtimer.unref();

        // Stop taking new requests.
        server.close();

        // Let the primary know we're dead. This will trigger a
        // 'disconnect' in the cluster primary, and then it will fork
        // a new worker.
        cluster.worker.disconnect();

        // Try to send an error to the request that triggered the problem
        res.statusCode = 500;
        res.setHeader('content-type', 'text/plain');
        res.end('Oops, there was a problem!\n');
      } catch (er2) {
        // Oh well, not much we can do at this point.
        console.error(`Error sending 500! ${er2.stack}`);
      }
    });

    // Because req and res were created before this domain existed,
    // we need to explicitly add them.
    // See the explanation of implicit vs explicit binding below.
    d.add(req);
    d.add(res);

    // Now run the handler function in the domain.
    d.run(() => {
      handleRequest(req, res);
    });
  });
  server.listen(PORT);
}

// This part is not important. Just an example routing thing.
// Put fancy application logic here.
function handleRequest(req, res) {
  switch (req.url) {
    case '/error':
      // We do some async stuff, and then...
      setTimeout(() => {
        // Whoops!
        flerb.bark();
      }, timeout);
      break;
    default:
      res.end('ok');
  }
}
```

Additions to Error objects#
Any time an Error object is routed through a domain, a few extra fields are added to it.

- error.domain The domain that first handled the error.
- error.domainEmitter The event emitter that emitted an 'error' event with the error object.
- error.domainBound The callback function which was bound to the domain, and passed an error as its first argument.
- error.domainThrown A boolean indicating whether the error was thrown, emitted, or passed to a bound callback function.
Implicit binding#
If domains are in use, then all new EventEmitter objects (including Stream objects, requests, responses, etc.) will be implicitly bound to the active domain at the time of their creation.

Additionally, callbacks passed to low-level event loop requests (such as to fs.open(), or other callback-taking methods) will automatically be bound to the active domain. If they throw, then the domain will catch the error.

In order to prevent excessive memory usage, Domain objects themselves are not implicitly added as children of the active domain. If they were, then it would be too easy to prevent request and response objects from being properly garbage collected.

To nest Domain objects as children of a parent Domain they must be explicitly added.
Implicit binding routes thrown errors and 'error' events to the Domain's 'error' event, but does not register the EventEmitter on the Domain. Implicit binding only takes care of thrown errors and 'error' events.
Explicit binding#
Sometimes, the domain in use is not the one that ought to be used for aspecific event emitter. Or, the event emitter could have been createdin the context of one domain, but ought to instead be bound to someother domain.
For example, there could be one domain in use for an HTTP server, butperhaps we would like to have a separate domain to use for each request.
That is possible via explicit binding.
```js
// Create a top-level domain for the server
const domain = require('node:domain');
const http = require('node:http');
const serverDomain = domain.create();

serverDomain.run(() => {
  // Server is created in the scope of serverDomain
  http.createServer((req, res) => {
    // Req and res are also created in the scope of serverDomain
    // however, we'd prefer to have a separate domain for each request.
    // create it first thing, and add req and res to it.
    const reqd = domain.create();
    reqd.add(req);
    reqd.add(res);
    reqd.on('error', (er) => {
      console.error('Error', er, req.url);
      try {
        res.writeHead(500);
        res.end('Error occurred, sorry.');
      } catch (er2) {
        console.error('Error sending 500', er2, req.url);
      }
    });
  }).listen(1337);
});
```

domain.create()#

- Returns: <Domain>
Class:Domain#
- Extends: <EventEmitter>

The Domain class encapsulates the functionality of routing errors and uncaught exceptions to the active Domain object.
To handle the errors that it catches, listen to its'error' event.
domain.members#
- Type: <Array>
An array of event emitters that have been explicitly added to the domain.
domain.add(emitter)#
History
| Version | Changes |
|---|---|
| v9.3.0 | No longer accepts timer objects. |
- emitter <EventEmitter> emitter to be added to the domain

Explicitly adds an emitter to the domain. If any event handlers called by the emitter throw an error, or if the emitter emits an 'error' event, it will be routed to the domain's 'error' event, just like with implicit binding.

If the EventEmitter was already bound to a domain, it is removed from that one, and bound to this one instead.
domain.bind(callback)#
- callback <Function> The callback function
- Returns: <Function> The bound function

The returned function will be a wrapper around the supplied callback function. When the returned function is called, any errors that are thrown will be routed to the domain's 'error' event.
```js
const d = domain.create();

function readSomeFile(filename, cb) {
  fs.readFile(filename, 'utf8', d.bind((er, data) => {
    // If this throws, it will also be passed to the domain.
    return cb(er, data ? JSON.parse(data) : null);
  }));
}

d.on('error', (er) => {
  // An error occurred somewhere. If we throw it now, it will crash the program
  // with the normal line number and stack message.
});
```

domain.enter()#
The enter() method is plumbing used by the run(), bind(), and intercept() methods to set the active domain. It sets domain.active and process.domain to the domain, and implicitly pushes the domain onto the domain stack managed by the domain module (see domain.exit() for details on the domain stack). The call to enter() delimits the beginning of a chain of asynchronous calls and I/O operations bound to a domain.

Calling enter() changes only the active domain, and does not alter the domain itself. enter() and exit() can be called an arbitrary number of times on a single domain.
domain.exit()#
The exit() method exits the current domain, popping it off the domain stack. Any time execution is going to switch to the context of a different chain of asynchronous calls, it's important to ensure that the current domain is exited. The call to exit() delimits either the end of or an interruption to the chain of asynchronous calls and I/O operations bound to a domain.

If there are multiple, nested domains bound to the current execution context, exit() will exit any domains nested within this domain.

Calling exit() changes only the active domain, and does not alter the domain itself. enter() and exit() can be called an arbitrary number of times on a single domain.
domain.intercept(callback)#
- callback <Function> The callback function
- Returns: <Function> The intercepted function

This method is almost identical to domain.bind(callback). However, in addition to catching thrown errors, it will also intercept Error objects sent as the first argument to the function.

In this way, the common if (err) return callback(err); pattern can be replaced with a single error handler in a single place.
```js
const d = domain.create();

function readSomeFile(filename, cb) {
  fs.readFile(filename, 'utf8', d.intercept((data) => {
    // Note, the first argument is never passed to the
    // callback since it is assumed to be the 'Error' argument
    // and thus intercepted by the domain.

    // If this throws, it will also be passed to the domain
    // so the error-handling logic can be moved to the 'error'
    // event on the domain instead of being repeated throughout
    // the program.
    return cb(null, JSON.parse(data));
  }));
}

d.on('error', (er) => {
  // An error occurred somewhere. If we throw it now, it will crash the program
  // with the normal line number and stack message.
});
```

domain.remove(emitter)#
- emitter <EventEmitter> emitter to be removed from the domain

The opposite of domain.add(emitter). Removes domain handling from the specified emitter.
domain.run(fn[, ...args])#
- fn <Function>
- ...args <any>
Run the supplied function in the context of the domain, implicitlybinding all event emitters, timers, and low-level requests that arecreated in that context. Optionally, arguments can be passed tothe function.
This is the most basic way to use a domain.
```js
const domain = require('node:domain');
const fs = require('node:fs');
const d = domain.create();
d.on('error', (er) => {
  console.error('Caught error!', er);
});
d.run(() => {
  process.nextTick(() => {
    setTimeout(() => { // Simulating some various async stuff
      fs.open('non-existent file', 'r', (er, fd) => {
        if (er) throw er;
        // proceed...
      });
    }, 100);
  });
});
```

In this example, the d.on('error') handler will be triggered, rather than crashing the program.
Domains and promises#
As of Node.js 8.0.0, the handlers of promises are run inside the domain in which the call to .then() or .catch() itself was made:
```js
const d1 = domain.create();
const d2 = domain.create();

let p;
d1.run(() => {
  p = Promise.resolve(42);
});

d2.run(() => {
  p.then((v) => {
    // running in d2
  });
});
```

A callback may be bound to a specific domain using domain.bind(callback):
```js
const d1 = domain.create();
const d2 = domain.create();

let p;
d1.run(() => {
  p = Promise.resolve(42);
});

d2.run(() => {
  p.then(p.domain.bind((v) => {
    // running in d1
  }));
});
```

Domains will not interfere with the error handling mechanisms for promises. In other words, no 'error' event will be emitted for unhandled Promise rejections.
Environment Variables#
Environment variables are variables associated with the environment the Node.js process runs in.
CLI Environment Variables#
There is a set of environment variables that can be defined to customize the behavior of Node.js. For more details, refer to the CLI Environment Variables documentation.
process.env#
The basic API for interacting with environment variables is process.env. It consists of an object with pre-populated user environment variables that can be modified and expanded.
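One property of process.env worth remembering is that it only holds strings: assigned values are coerced on write. A quick sketch (the variable name is made up for illustration):

```javascript
// Assignments to process.env are coerced to strings.
process.env.MY_DEMO_VAR = 42; // MY_DEMO_VAR is a made-up name

console.log(typeof process.env.MY_DEMO_VAR); // 'string'
console.log(process.env.MY_DEMO_VAR);        // '42'

// delete removes the variable entirely.
delete process.env.MY_DEMO_VAR;
console.log(process.env.MY_DEMO_VAR);        // undefined
```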
For more details refer to theprocess.env documentation.
DotEnv#
Set of utilities for dealing with additional environment variables defined in.env files.
.env files#
.env files (also known as dotenv files) are files that define environment variables, which Node.js applications can then interact with (popularized by the dotenv package).
The following is an example of the content of a basic.env file:
```text
MY_VAR_A = "my variable A"
MY_VAR_B = "my variable B"
```

This type of file is used in various programming languages and platforms, but there is no formal specification for it; therefore, Node.js defines its own specification, described below.

A .env file is a file that contains key-value pairs. Each pair is represented by a variable name, followed by the equal sign (=), followed by a variable value.

The name of such files is usually .env or it starts with .env (for example .env.dev, where dev indicates a specific target environment). This is the recommended naming scheme, but it is not mandatory, and dotenv files can have any arbitrary file name.
Variable Names#
A valid variable name must contain only letters (uppercase or lowercase), digits, and underscores (_), and it can't begin with a digit.
More specifically a valid variable name must match the following regular expression:
```text
^[a-zA-Z_]+[a-zA-Z0-9_]*$
```

The recommended convention is to use capital letters with underscores and digits when necessary, but any variable name respecting the above definition will work just fine.

For example, the following are some valid variable names: MY_VAR, MY_VAR_1, my_var, my_var_1, myVar, My_Var123, while these are instead not valid: 1_VAR, 'my-var', "my var", VAR_#1.
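The validity rule above can be checked directly with the stated regular expression; a minimal sketch using the example names from the text:

```javascript
// The variable-name pattern quoted in the text above.
const NAME_PATTERN = /^[a-zA-Z_]+[a-zA-Z0-9_]*$/;

const valid = ['MY_VAR', 'MY_VAR_1', 'my_var', 'my_var_1', 'myVar', 'My_Var123'];
const invalid = ['1_VAR', 'my-var', 'my var', 'VAR_#1'];

console.log(valid.every((name) => NAME_PATTERN.test(name)));  // true
console.log(invalid.some((name) => NAME_PATTERN.test(name))); // false
```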
Variable Values#
Variable values consist of arbitrary text, which can optionally be wrapped inside single (') or double (") quotes.

Quoted variables can span multiple lines, while non-quoted ones are restricted to a single line.

Note that when parsed by Node.js all values are interpreted as text, meaning that any value will result in a JavaScript string inside Node.js. For example, the values 0, true, and { "hello": "world" } will result in the literal strings '0', 'true', and '{ "hello": "world" }' instead of the number zero, the boolean true, and an object with the hello property, respectively.
Examples of valid variables:
```text
MY_SIMPLE_VAR = a simple single line variable
MY_EQUALS_VAR = "this variable contains an = sign!"
MY_HASH_VAR = 'this variable contains a # symbol!'
MY_MULTILINE_VAR = 'this is a multiline variable containing
two separate lines\nSorry, I meant three lines'
```

Spacing#
Leading and trailing whitespace characters around variable keys and values are ignored unless theyare enclosed within quotes.
For example:
```text
  MY_VAR_A =   my variable a
 MY_VAR_B = ' my variable b '
```

will be treated identically to:

```text
MY_VAR_A = my variable a
MY_VAR_B = ' my variable b '
```

Comments#

Hash-tag (#) characters denote the beginning of a comment, meaning that the rest of the line will be completely ignored.

Hash-tags found within quotes are, however, treated as any other standard character.
For example:
```text
# This is a comment
MY_VAR = my variable # This is also a comment
MY_VAR_A = "# this is NOT a comment"
```

export prefixes#

The export keyword can optionally be added in front of variable declarations. This keyword is completely ignored by all processing done on the file.
This is useful so that the file can be sourced, without modifications, in shell terminals.
Example:
```text
export MY_VAR = my variable
```

CLI Options#

.env files can be used to populate the process.env object via one of the following CLI options:
Programmatic APIs#
The following two functions allow you to interact directly with .env files:
- process.loadEnvFile loads an .env file and populates process.env with its variables
- util.parseEnv parses the raw content of an .env file and returns its values in an object
Errors#
Applications running in Node.js will generally experience the followingcategories of errors:
- Standard JavaScript errors such as <EvalError>, <SyntaxError>, <RangeError>, <ReferenceError>, <TypeError>, and <URIError>.
- Standard DOMExceptions.
- System errors triggered by underlying operating system constraints such as attempting to open a file that does not exist or attempting to send data over a closed socket.
- AssertionErrors are a special class of error that can be triggered when Node.js detects an exceptional logic violation that should never occur. These are raised typically by the node:assert module.
- User-specified errors triggered by application code.
All JavaScript and system errors raised by Node.js inherit from, or are instances of, the standard JavaScript <Error> class and are guaranteed to provide at least the properties available on that class.
The error.message property of errors raised by Node.js may be changed in any version. Use error.code to identify an error instead. For a DOMException, use domException.name to identify its type.
Error propagation and interception#
Node.js supports several mechanisms for propagating and handling errors that occur while an application is running. How these errors are reported and handled depends entirely on the type of Error and the style of the API that is called.
All JavaScript errors are handled as exceptions that immediately generate and throw an error using the standard JavaScript throw mechanism. These are handled using the try…catch construct provided by the JavaScript language.

// Throws with a ReferenceError because z is not defined.
try {
  const m = 1;
  const n = m + z;
} catch (err) {
  // Handle the error here.
}

Any use of the JavaScript throw mechanism will raise an exception that must be handled or the Node.js process will exit immediately.
With few exceptions, Synchronous APIs (any blocking method that does not return a <Promise> nor accept a callback function, such as fs.readFileSync) will use throw to report errors.
Errors that occur withinAsynchronous APIs may be reported in multiple ways:
- Some asynchronous methods return a <Promise>; you should always take into account that it might be rejected. See the --unhandled-rejections flag for how the process will react to an unhandled promise rejection.

  const fs = require('node:fs/promises');

  (async () => {
    let data;
    try {
      data = await fs.readFile('a file that does not exist');
    } catch (err) {
      console.error('There was an error reading the file!', err);
      return;
    }
    // Otherwise handle the data
  })();

- Most asynchronous methods that accept a callback function will accept an Error object passed as the first argument to that function. If that first argument is not null and is an instance of Error, then an error occurred that should be handled.

  const fs = require('node:fs');
  fs.readFile('a file that does not exist', (err, data) => {
    if (err) {
      console.error('There was an error reading the file!', err);
      return;
    }
    // Otherwise handle the data
  });

- When an asynchronous method is called on an object that is an EventEmitter, errors can be routed to that object's 'error' event.

  const net = require('node:net');
  const connection = net.connect('localhost');

  // Adding an 'error' event handler to a stream:
  connection.on('error', (err) => {
    // If the connection is reset by the server, or if it can't
    // connect at all, or on any sort of error encountered by
    // the connection, the error will be sent here.
    console.error(err);
  });

  connection.pipe(process.stdout);

- A handful of typically asynchronous methods in the Node.js API may still use the throw mechanism to raise exceptions that must be handled using try…catch. There is no comprehensive list of such methods; please refer to the documentation of each method to determine the appropriate error handling mechanism required.
The use of the 'error' event mechanism is most common for stream-based and event emitter-based APIs, which themselves represent a series of asynchronous operations over time (as opposed to a single operation that may pass or fail).
For all EventEmitter objects, if an 'error' event handler is not provided, the error will be thrown, causing the Node.js process to report an uncaught exception and crash unless either: a handler has been registered for the 'uncaughtException' event, or the deprecated node:domain module is used.

const EventEmitter = require('node:events');
const ee = new EventEmitter();

setImmediate(() => {
  // This will crash the process because no 'error' event
  // handler has been added.
  ee.emit('error', new Error('This will crash'));
});

Errors generated in this way cannot be intercepted using try…catch as they are thrown after the calling code has already exited.
Developers must refer to the documentation for each method to determineexactly how errors raised by those methods are propagated.
Class:Error#
A generic JavaScript <Error> object that does not denote any specific circumstance of why the error occurred. Error objects capture a "stack trace" detailing the point in the code at which the Error was instantiated, and may provide a text description of the error.
All errors generated by Node.js, including all system and JavaScript errors, will either be instances of, or inherit from, the Error class.
new Error(message[, options])#
Creates a new Error object and sets the error.message property to the provided text message. If an object is passed as message, the text message is generated by calling String(message). If the cause option is provided, it is assigned to the error.cause property. The error.stack property will represent the point in the code at which new Error() was called. Stack traces are dependent on V8's stack trace API. Stack traces extend only to either (a) the beginning of synchronous code execution, or (b) the number of frames given by the property Error.stackTraceLimit, whichever is smaller.
Error.captureStackTrace(targetObject[, constructorOpt])#
- targetObject <Object>
- constructorOpt <Function>

Creates a .stack property on targetObject, which when accessed returns a string representing the location in the code at which Error.captureStackTrace() was called.

const myObject = {};
Error.captureStackTrace(myObject);
myObject.stack;  // Similar to `new Error().stack`

The first line of the trace will be prefixed with ${myObject.name}: ${myObject.message}.
The optional constructorOpt argument accepts a function. If given, all frames above constructorOpt, including constructorOpt, will be omitted from the generated stack trace.
The constructorOpt argument is useful for hiding implementation details of error generation from the user. For instance:

function a() {
  b();
}

function b() {
  c();
}

function c() {
  // Create an error without stack trace to avoid calculating the stack trace twice.
  const { stackTraceLimit } = Error;
  Error.stackTraceLimit = 0;
  const error = new Error();
  Error.stackTraceLimit = stackTraceLimit;

  // Capture the stack trace above function b
  Error.captureStackTrace(error, b);
  // Neither function c, nor b is included in the stack trace
  throw error;
}

a();

Error.stackTraceLimit#
- Type: <number>

The Error.stackTraceLimit property specifies the number of stack frames collected by a stack trace (whether generated by new Error().stack or Error.captureStackTrace(obj)).
The default value is 10 but may be set to any valid JavaScript number. Changes will affect any stack trace captured after the value has been changed.
If set to a non-number value, or set to a negative number, stack traces will not capture any frames.
error.cause#
- Type: <any>

If present, the error.cause property is the underlying cause of the Error. It is used when catching an error and throwing a new one with a different message or code in order to still have access to the original error.
The error.cause property is typically set by calling new Error(message, { cause }). It is not set by the constructor if the cause option is not provided.
This property allows errors to be chained. When serializing Error objects, util.inspect() recursively serializes error.cause if it is set.

const cause = new Error('The remote HTTP server responded with a 500 status');
const symptom = new Error('The message failed to send', { cause });

console.log(symptom);
// Prints:
//   Error: The message failed to send
//       at REPL2:1:17
//       at Script.runInThisContext (node:vm:130:12)
//       ... 7 lines matching cause stack trace ...
//       at [_line] [as _line] (node:internal/readline/interface:886:18) {
//     [cause]: Error: The remote HTTP server responded with a 500 status
//         at REPL1:1:15
//         at Script.runInThisContext (node:vm:130:12)
//         at REPLServer.defaultEval (node:repl:574:29)
//         at bound (node:domain:426:15)
//         at REPLServer.runBound [as eval] (node:domain:437:12)
//         at REPLServer.onLine (node:repl:902:10)
//         at REPLServer.emit (node:events:549:35)
//         at REPLServer.emit (node:domain:482:12)
//         at [_onLine] [as _onLine] (node:internal/readline/interface:425:12)
//         at [_line] [as _line] (node:internal/readline/interface:886:18)

error.code#
- Type: <string>

The error.code property is a string label that identifies the kind of error. error.code is the most stable way to identify an error. It will only change between major versions of Node.js. In contrast, error.message strings may change between any versions of Node.js. See Node.js error codes for details about specific codes.
error.message#
- Type: <string>

The error.message property is the string description of the error as set by calling new Error(message). The message passed to the constructor will also appear in the first line of the stack trace of the Error, however changing this property after the Error object is created may not change the first line of the stack trace (for example, when error.stack is read before this property is changed).

const err = new Error('The message');
console.error(err.message);
// Prints: The message

error.stack#
- Type: <string>

The error.stack property is a string describing the point in the code at which the Error was instantiated.

Error: Things keep happening!
   at /home/gbusey/file.js:525:2
   at Frobnicator.refrobulate (/home/gbusey/business-logic.js:424:21)
   at Actor.<anonymous> (/home/gbusey/actors.js:400:8)
   at increaseSynergy (/home/gbusey/actors.js:701:6)

The first line is formatted as <error class name>: <error message>, and is followed by a series of stack frames (each line beginning with "at "). Each frame describes a call site within the code that led to the error being generated. V8 attempts to display a name for each function (by variable name, function name, or object method name), but occasionally it will not be able to find a suitable name. If V8 cannot determine a name for the function, only location information will be displayed for that frame. Otherwise, the determined function name will be displayed with location information appended in parentheses.
Frames are only generated for JavaScript functions. If, for example, execution synchronously passes through a C++ addon function called cheetahify which itself calls a JavaScript function, the frame representing the cheetahify call will not be present in the stack traces:

const cheetahify = require('./native-binding.node');

function makeFaster() {
  // `cheetahify()` *synchronously* calls speedy.
  cheetahify(function speedy() {
    throw new Error('oh no!');
  });
}

makeFaster();
// will throw:
//   /home/gbusey/file.js:6
//       throw new Error('oh no!');
//           ^
//   Error: oh no!
//       at speedy (/home/gbusey/file.js:6:11)
//       at makeFaster (/home/gbusey/file.js:5:3)
//       at Object.<anonymous> (/home/gbusey/file.js:10:1)
//       at Module._compile (module.js:456:26)
//       at Object.Module._extensions..js (module.js:474:10)
//       at Module.load (module.js:356:32)
//       at Function.Module._load (module.js:312:12)
//       at Function.Module.runMain (module.js:497:10)
//       at startup (node.js:119:16)
//       at node.js:906:3

The location information will be one of:

- native, if the frame represents a call internal to V8 (as in [].forEach).
- plain-filename.js:line:column, if the frame represents a call internal to Node.js.
- /absolute/path/to/file.js:line:column, if the frame represents a call in a user program (using CommonJS module system), or its dependencies.
- <transport-protocol>:///url/to/module/file.mjs:line:column, if the frame represents a call in a user program (using ES module system), or its dependencies.
The number of frames captured by the stack trace is bounded by the smaller of Error.stackTraceLimit or the number of available frames on the current event loop tick.
error.stack is a getter/setter for a hidden internal property which is only present on builtin Error objects (those for which Error.isError returns true). If error is not a builtin error object, then the error.stack getter will always return undefined, and the setter will do nothing. This can occur if the accessor is manually invoked with a this value that is not a builtin error object, such as a <Proxy>.
Class:AssertionError#
- Extends: <errors.Error>

Indicates the failure of an assertion. For details, see Class: assert.AssertionError.
Class:RangeError#
- Extends: <errors.Error>

Indicates that a provided argument was not within the set or range of acceptable values for a function, whether that is a numeric range, or outside the set of options for a given function parameter.

require('node:net').connect(-1);
// Throws "RangeError: "port" option should be >= 0 and < 65536: -1"

Node.js will generate and throw RangeError instances immediately as a form of argument validation.
Class:ReferenceError#
- Extends: <errors.Error>

Indicates that an attempt is being made to access a variable that is not defined. Such errors commonly indicate typos in code, or an otherwise broken program.
While client code may generate and propagate these errors, in practice, only V8 will do so.

doesNotExist;
// Throws ReferenceError, doesNotExist is not a variable in this program.

Unless an application is dynamically generating and running code, ReferenceError instances indicate a bug in the code or its dependencies.
Class:SyntaxError#
- Extends: <errors.Error>

Indicates that a program is not valid JavaScript. These errors may only be generated and propagated as a result of code evaluation. Code evaluation may happen as a result of eval, Function, require, or vm. These errors are almost always indicative of a broken program.

try {
  require('node:vm').runInThisContext('binary ! isNotOk');
} catch (err) {
  // 'err' will be a SyntaxError.
}

SyntaxError instances are unrecoverable in the context that created them – they may only be caught by other contexts.
Class:SystemError#
- Extends: <errors.Error>

Node.js generates system errors when exceptions occur within its runtime environment. These usually occur when an application violates an operating system constraint. For example, a system error will occur if an application attempts to read a file that does not exist.

- address <string> If present, the address to which a network connection failed
- code <string> The string error code
- dest <string> If present, the file path destination when reporting a file system error
- errno <number> The system-provided error number
- info <Object> If present, extra details about the error condition
- message <string> A system-provided human-readable description of the error
- path <string> If present, the file path when reporting a file system error
- port <number> If present, the network connection port that is not available
- syscall <string> The name of the system call that triggered the error
error.address#
- Type: <string>

If present, error.address is a string describing the address to which a network connection failed.

error.dest#
- Type: <string>

If present, error.dest is the file path destination when reporting a file system error.
error.errno#
- Type: <number>

The error.errno property is a negative number which corresponds to the error code defined in libuv Error handling.
On Windows the error number provided by the system will be normalized by libuv.
To get the string representation of the error code, use util.getSystemErrorName(error.errno).
error.message#
- Type: <string>
error.message is a system-provided human-readable description of the error.
Common system errors#
This is a list of system errors commonly encountered when writing a Node.js program. For a comprehensive list, see the errno(3) man page.

- EACCES (Permission denied): An attempt was made to access a file in a way forbidden by its file access permissions.
- EADDRINUSE (Address already in use): An attempt to bind a server (net, http, or https) to a local address failed due to another server on the local system already occupying that address.
- ECONNREFUSED (Connection refused): No connection could be made because the target machine actively refused it. This usually results from trying to connect to a service that is inactive on the foreign host.
- ECONNRESET (Connection reset by peer): A connection was forcibly closed by a peer. This normally results from a loss of the connection on the remote socket due to a timeout or reboot. Commonly encountered via the http and net modules.
- EEXIST (File exists): An existing file was the target of an operation that required that the target not exist.
- EISDIR (Is a directory): An operation expected a file, but the given pathname was a directory.
- EMFILE (Too many open files in system): Maximum number of file descriptors allowable on the system has been reached, and requests for another descriptor cannot be fulfilled until at least one has been closed. This is encountered when opening many files at once in parallel, especially on systems (in particular, macOS) where there is a low file descriptor limit for processes. To remedy a low limit, run ulimit -n 2048 in the same shell that will run the Node.js process.
- ENOENT (No such file or directory): Commonly raised by fs operations to indicate that a component of the specified pathname does not exist. No entity (file or directory) could be found by the given path.
- ENOTDIR (Not a directory): A component of the given pathname existed, but was not a directory as expected. Commonly raised by fs.readdir.
- ENOTEMPTY (Directory not empty): A directory with entries was the target of an operation that requires an empty directory, usually fs.unlink.
- ENOTFOUND (DNS lookup failed): Indicates a DNS failure of either EAI_NODATA or EAI_NONAME. This is not a standard POSIX error.
- EPERM (Operation not permitted): An attempt was made to perform an operation that requires elevated privileges.
- EPIPE (Broken pipe): A write on a pipe, socket, or FIFO for which there is no process to read the data. Commonly encountered at the net and http layers, indicative that the remote side of the stream being written to has been closed.
- ETIMEDOUT (Operation timed out): A connect or send request failed because the connected party did not properly respond after a period of time. Usually encountered by http or net. Often a sign that a socket.end() was not properly called.
Class:TypeError#
- Extends: <errors.Error>

Indicates that a provided argument is not an allowable type. For example, passing a function to a parameter which expects a string would be a TypeError.

require('node:url').parse(() => { });
// Throws TypeError, since it expected a string.

Node.js will generate and throw TypeError instances immediately as a form of argument validation.
Exceptions vs. errors#
A JavaScript exception is a value that is thrown as a result of an invalid operation or as the target of a throw statement. While it is not required that these values are instances of Error or classes which inherit from Error, all exceptions thrown by Node.js or the JavaScript runtime will be instances of Error.
Some exceptions are unrecoverable at the JavaScript layer. Such exceptions will always cause the Node.js process to crash. Examples include assert() checks or abort() calls in the C++ layer.
OpenSSL errors#
Errors originating in crypto or tls are of class Error, and in addition to the standard .code and .message properties, may have some additional OpenSSL-specific properties.
error.opensslErrorStack#
An array of errors that can give context to where in the OpenSSL library anerror originates from.
error.function#
The OpenSSL function the error originates in.
error.library#
The OpenSSL library the error originates in.
Node.js error codes#
ABORT_ERR#
Used when an operation has been aborted (typically using an AbortController).
APIs not using AbortSignals typically do not raise an error with this code.
This code does not use the regular ERR_* convention Node.js errors use in order to be compatible with the web platform's AbortError.
ERR_ACCESS_DENIED#
A special type of error that is triggered whenever Node.js tries to get access to a resource restricted by the Permission Model.
ERR_AMBIGUOUS_ARGUMENT#
A function argument is being used in a way that suggests that the function signature may be misunderstood. This is thrown by the node:assert module when the message parameter in assert.throws(block, message) matches the error message thrown by block because that usage suggests that the user believes message is the expected message rather than the message the AssertionError will display if block does not throw.
ERR_ARG_NOT_ITERABLE#
An iterable argument (i.e. a value that works with for...of loops) was required, but not provided to a Node.js API.
ERR_ASSERTION#
A special type of error that can be triggered whenever Node.js detects an exceptional logic violation that should never occur. These are raised typically by the node:assert module.
ERR_ASYNC_CALLBACK#
An attempt was made to register something that is not a function as an AsyncHooks callback.
ERR_ASYNC_LOADER_REQUEST_NEVER_SETTLED#
An operation related to module loading is customized by an asynchronous loaderhook that never settled the promise before the loader thread exits.
ERR_ASYNC_TYPE#
The type of an asynchronous resource was invalid. Users are also able to define their own types if using the public embedder API.
ERR_BROTLI_INVALID_PARAM#
An invalid parameter key was passed during construction of a Brotli stream.
ERR_BUFFER_CONTEXT_NOT_AVAILABLE#
An attempt was made to create a Node.js Buffer instance from addon or embedder code, while in a JS engine Context that is not associated with a Node.js instance. The data passed to the Buffer method will have been released by the time the method returns.
When encountering this error, a possible alternative to creating a Buffer instance is to create a normal Uint8Array, which only differs in the prototype of the resulting object. Uint8Arrays are generally accepted in all Node.js core APIs where Buffers are; they are available in all Contexts.
ERR_BUFFER_TOO_LARGE#
An attempt has been made to create a Buffer larger than the maximum allowed size.
ERR_CHILD_PROCESS_IPC_REQUIRED#
Used when a child process is being forked without specifying an IPC channel.
ERR_CHILD_PROCESS_STDIO_MAXBUFFER#
Used when the main process is trying to read data from the child process's STDERR/STDOUT, and the data's length is longer than the maxBuffer option.
ERR_CLOSED_MESSAGE_PORT#
History
| Version | Changes |
|---|---|
| v16.2.0, v14.17.1 | The error message was reintroduced. |
| v11.12.0 | The error message was removed. |
| v10.5.0 | Added in: v10.5.0 |
There was an attempt to use a MessagePort instance in a closed state, usually after .close() has been called.
ERR_CONSOLE_WRITABLE_STREAM#
Console was instantiated without a stdout stream, or Console has a non-writable stdout or stderr stream.
ERR_CONTEXT_NOT_INITIALIZED#
The vm context passed into the API is not yet initialized. This could happenwhen an error occurs (and is caught) during the creation of thecontext, for example, when the allocation fails or the maximum call stacksize is reached when the context is created.
ERR_CPU_PROFILE_ALREADY_STARTED#
The CPU profile with the given name has already been started.

ERR_CPU_PROFILE_NOT_STARTED#
The CPU profile with the given name has not been started.
ERR_CPU_PROFILE_TOO_MANY#
There are too many CPU profiles being collected.
ERR_CRYPTO_ARGON2_NOT_SUPPORTED#
Argon2 is not supported by the current version of OpenSSL being used.
ERR_CRYPTO_CUSTOM_ENGINE_NOT_SUPPORTED#
An OpenSSL engine was requested (for example, through the clientCertEngine or privateKeyEngine TLS options) that is not supported by the version of OpenSSL being used, likely due to the compile-time flag OPENSSL_NO_ENGINE.
ERR_CRYPTO_ECDH_INVALID_FORMAT#
An invalid value for the format argument was passed to the crypto.ECDH() class getPublicKey() method.

ERR_CRYPTO_ECDH_INVALID_PUBLIC_KEY#
An invalid value for the key argument has been passed to the crypto.ECDH() class computeSecret() method. It means that the public key lies outside of the elliptic curve.
ERR_CRYPTO_ENGINE_UNKNOWN#
An invalid crypto engine identifier was passed to require('node:crypto').setEngine().

ERR_CRYPTO_FIPS_FORCED#
The --force-fips command-line argument was used but there was an attempt to enable or disable FIPS mode in the node:crypto module.
ERR_CRYPTO_FIPS_UNAVAILABLE#
An attempt was made to enable or disable FIPS mode, but FIPS mode was notavailable.
ERR_CRYPTO_HASH_FINALIZED#
hash.digest() was called multiple times. The hash.digest() method must be called no more than one time per instance of a Hash object.
ERR_CRYPTO_HASH_UPDATE_FAILED#
hash.update() failed for any reason. This should rarely, if ever, happen.
ERR_CRYPTO_INCOMPATIBLE_KEY_OPTIONS#
The selected public or private key encoding is incompatible with other options.
ERR_CRYPTO_INVALID_COUNTER#
An invalid counter was provided for a counter-mode cipher.
ERR_CRYPTO_INVALID_KEY_OBJECT_TYPE#
The given crypto key object's type is invalid for the attempted operation.
ERR_CRYPTO_INVALID_SCRYPT_PARAMS#
One or more crypto.scrypt() or crypto.scryptSync() parameters are outside their legal range.
ERR_CRYPTO_INVALID_STATE#
A crypto method was used on an object that was in an invalid state. For instance, calling cipher.getAuthTag() before calling cipher.final().
ERR_CRYPTO_JOB_INIT_FAILED#
Initialization of an asynchronous crypto operation failed.
ERR_CRYPTO_JWK_UNSUPPORTED_CURVE#
Key's Elliptic Curve is not registered for use in the JSON Web Key Elliptic Curve Registry.

ERR_CRYPTO_JWK_UNSUPPORTED_KEY_TYPE#
Key's Asymmetric Key Type is not registered for use in the JSON Web Key Types Registry.
ERR_CRYPTO_KEM_NOT_SUPPORTED#
Attempted to use KEM operations while Node.js was not compiled with OpenSSL with KEM support.
ERR_CRYPTO_OPERATION_FAILED#
A crypto operation failed for an otherwise unspecified reason.
ERR_CRYPTO_PBKDF2_ERROR#
The PBKDF2 algorithm failed for unspecified reasons. OpenSSL does not provide more details and therefore neither does Node.js.

ERR_CRYPTO_SCRYPT_NOT_SUPPORTED#
Node.js was compiled without scrypt support. Not possible with the official release binaries but can happen with custom builds, including distro builds.
ERR_CRYPTO_TIMING_SAFE_EQUAL_LENGTH#
crypto.timingSafeEqual() was called with Buffer, TypedArray, or DataView arguments of different lengths.

ERR_CRYPTO_UNKNOWN_DH_GROUP#
An unknown Diffie-Hellman group name was given. See crypto.getDiffieHellman() for a list of valid group names.
ERR_CRYPTO_UNSUPPORTED_OPERATION#
An attempt to invoke an unsupported crypto operation was made.
ERR_DEBUGGER_STARTUP_ERROR#
The debugger timed out waiting for the required host/port to be free.

ERR_DIR_CONCURRENT_OPERATION#
A synchronous read or close call was attempted on an fs.Dir which has ongoing asynchronous operations.

ERR_DLOPEN_DISABLED#
Loading native addons has been disabled using --no-addons.

ERR_DOMAIN_CALLBACK_NOT_AVAILABLE#
The node:domain module was not usable since it could not establish the required error handling hooks, because process.setUncaughtExceptionCaptureCallback() had been called at an earlier point in time.

ERR_DOMAIN_CANNOT_SET_UNCAUGHT_EXCEPTION_CAPTURE#
process.setUncaughtExceptionCaptureCallback() could not be called because the node:domain module has been loaded at an earlier point in time.
The stack trace is extended to include the point in time at which the node:domain module had been loaded.
ERR_DUPLICATE_STARTUP_SNAPSHOT_MAIN_FUNCTION#
v8.startupSnapshot.setDeserializeMainFunction() could not be called because it had already been called before.
ERR_ENCODING_INVALID_ENCODED_DATA#
Data provided to the TextDecoder() API was invalid according to the encoding provided.

ERR_ENCODING_NOT_SUPPORTED#
Encoding provided to the TextDecoder() API was not one of the WHATWG Supported Encodings.
ERR_EXECUTION_ENVIRONMENT_NOT_AVAILABLE#
The JS execution context is not associated with a Node.js environment.This may occur when Node.js is used as an embedded library and some hooksfor the JS engine are not set up properly.
ERR_FALSY_VALUE_REJECTION#
A Promise that was callbackified via util.callbackify() was rejected with a falsy value.
ERR_FEATURE_UNAVAILABLE_ON_PLATFORM#
Used when a feature that is not available to the platform running Node.js is used.
ERR_FS_CP_DIR_TO_NON_DIR#
An attempt was made to copy a directory to a non-directory (file, symlink, etc.) using fs.cp().

ERR_FS_CP_EEXIST#
An attempt was made to copy over a file that already existed with fs.cp(), with the force and errorOnExist options set to true.

ERR_FS_CP_NON_DIR_TO_DIR#
An attempt was made to copy a non-directory (file, symlink, etc.) to a directory using fs.cp().

ERR_FS_CP_SYMLINK_TO_SUBDIRECTORY#
When using fs.cp(), a symlink in dest pointed to a subdirectory of src.

ERR_FS_FILE_TOO_LARGE#
An attempt was made to read a file larger than the supported 2 GiB limit for fs.readFile(). This is not a limitation of Buffer, but an internal I/O constraint. For handling larger files, consider using fs.createReadStream() to read the file in chunks.

ERR_FS_WATCH_QUEUE_OVERFLOW#
The number of file system events queued without being handled exceeded the size specified in maxQueue in fs.watch().
ERR_HTTP2_CONNECT_AUTHORITY#
For HTTP/2 requests using the CONNECT method, the :authority pseudo-header is required.

ERR_HTTP2_CONNECT_PATH#
For HTTP/2 requests using the CONNECT method, the :path pseudo-header is forbidden.

ERR_HTTP2_CONNECT_SCHEME#
For HTTP/2 requests using the CONNECT method, the :scheme pseudo-header is forbidden.

ERR_HTTP2_GOAWAY_SESSION#
New HTTP/2 Streams may not be opened after the Http2Session has received a GOAWAY frame from the connected peer.

ERR_HTTP2_HEADERS_AFTER_RESPOND#
Additional headers were specified after an HTTP/2 response was initiated.
ERR_HTTP2_HEADER_SINGLE_VALUE#
Multiple values were provided for an HTTP/2 header field that was required tohave only a single value.
ERR_HTTP2_INFO_STATUS_NOT_ALLOWED#
Informational HTTP status codes (1xx) may not be set as the response status code on HTTP/2 responses.

ERR_HTTP2_INVALID_CONNECTION_HEADERS#
HTTP/1 connection-specific headers are forbidden to be used in HTTP/2 requests and responses.
ERR_HTTP2_INVALID_INFO_STATUS#
An invalid HTTP informational status code has been specified. Informational status codes must be an integer between 100 and 199 (inclusive).

ERR_HTTP2_INVALID_PACKED_SETTINGS_LENGTH#
Input Buffer and Uint8Array instances passed to the http2.getUnpackedSettings() API must have a length that is a multiple of six.

ERR_HTTP2_INVALID_PSEUDOHEADER#
Only valid HTTP/2 pseudoheaders (:status, :path, :authority, :scheme, and :method) may be used.

ERR_HTTP2_INVALID_SESSION#
An action was performed on an Http2Session object that had already been destroyed.

ERR_HTTP2_MAX_PENDING_SETTINGS_ACK#
Whenever an HTTP/2 SETTINGS frame is sent to a connected peer, the peer is required to send an acknowledgment that it has received and applied the new SETTINGS. By default, a maximum number of unacknowledged SETTINGS frames may be sent at any given time. This error code is used when that limit has been reached.
ERR_HTTP2_NESTED_PUSH#
An attempt was made to initiate a new push stream from within a push stream.Nested push streams are not permitted.
ERR_HTTP2_NO_SOCKET_MANIPULATION#
An attempt was made to directly manipulate (read, write, pause, resume, etc.) a socket attached to an Http2Session.
ERR_HTTP2_OUT_OF_STREAMS#
The number of streams created on a single HTTP/2 session reached the maximumlimit.
ERR_HTTP2_PAYLOAD_FORBIDDEN#
A message payload was specified for an HTTP response code for which a payload isforbidden.
ERR_HTTP2_PSEUDOHEADER_NOT_ALLOWED#
An HTTP/2 pseudo-header has been used inappropriately. Pseudo-headers are header key names that begin with the : prefix.
ERR_HTTP2_PUSH_DISABLED#
An attempt was made to create a push stream, which had been disabled by theclient.
ERR_HTTP2_SEND_FILE#
An attempt was made to use theHttp2Stream.prototype.responseWithFile() API tosend a directory.
ERR_HTTP2_SEND_FILE_NOSEEK#
An attempt was made to use theHttp2Stream.prototype.responseWithFile() API tosend something other than a regular file, butoffset orlength options wereprovided.
ERR_HTTP2_SOCKET_BOUND#
An attempt was made to connect aHttp2Session object to anet.Socket ortls.TLSSocket that had already been bound to anotherHttp2Session object.
ERR_HTTP2_SOCKET_UNBOUND#
An attempt was made to use thesocket property of anHttp2Session thathas already been closed.
ERR_HTTP2_STATUS_INVALID#
An invalid HTTP status code has been specified. Status codes must be an integerbetween100 and599 (inclusive).
ERR_HTTP2_STREAM_CANCEL#
AnHttp2Stream was destroyed before any data was transmitted to the connectedpeer.
ERR_HTTP2_STREAM_SELF_DEPENDENCY#
When setting the priority for an HTTP/2 stream, the stream may be marked asa dependency for a parent stream. This error code is used when an attempt ismade to mark a stream and dependent of itself.
ERR_HTTP2_TOO_MANY_INVALID_FRAMES#
The limit of acceptable invalid HTTP/2 protocol frames sent by the peer,as specified through themaxSessionInvalidFrames option, has been exceeded.
ERR_HTTP2_TRAILERS_NOT_READY#
Thehttp2stream.sendTrailers() method cannot be called until after the'wantTrailers' event is emitted on anHttp2Stream object. The'wantTrailers' event will only be emitted if thewaitForTrailers optionis set for theHttp2Stream.
ERR_HTTP2_UNSUPPORTED_PROTOCOL#
http2.connect() was passed a URL that uses any protocol other thanhttp: orhttps:.
ERR_HTTP_BODY_NOT_ALLOWED#
An error is thrown when writing to an HTTP response which does not allowcontents.
ERR_HTTP_CONTENT_LENGTH_MISMATCH#
Response body size doesn't match with the specified content-length header value.
ERR_HTTP_HEADERS_SENT#
An attempt was made to add more headers after the headers had already been sent.
ERR_HTTP_TRAILER_INVALID#
TheTrailer header was set even though the transfer encoding does not supportthat.
ERR_IMPORT_ATTRIBUTE_MISSING#
An import attribute is missing, preventing the specified module from being imported.
ERR_IMPORT_ATTRIBUTE_TYPE_INCOMPATIBLE#
An import type attribute was provided, but the specified module is of a different type.
ERR_IMPORT_ATTRIBUTE_UNSUPPORTED#
An import attribute is not supported by this version of Node.js.
ERR_INCOMPATIBLE_OPTION_PAIR#
Two options are incompatible with each other and cannot be used at the same time.
ERR_INPUT_TYPE_NOT_ALLOWED#
The --input-type flag was used to attempt to execute a file. This flag can only be used with input via --eval, --print, or STDIN.
ERR_INSPECTOR_ALREADY_ACTIVATED#
While using the node:inspector module, an attempt was made to activate the inspector when it had already started to listen on a port. Use inspector.close() before activating it on a different address.
ERR_INSPECTOR_ALREADY_CONNECTED#
While using the node:inspector module, an attempt was made to connect when the inspector was already connected.
ERR_INSPECTOR_CLOSED#
While using the node:inspector module, an attempt was made to use the inspector after the session had already closed.
ERR_INSPECTOR_NOT_CONNECTED#
While using the node:inspector module, an attempt was made to use the inspector before it was connected.
ERR_INSPECTOR_NOT_WORKER#
An API was called on the main thread that can only be used from the worker thread.
ERR_INTERNAL_ASSERTION#
There was a bug in Node.js or incorrect usage of Node.js internals. To fix the error, open an issue at https://github.com/nodejs/node/issues.
ERR_INVALID_ASYNC_ID#
An invalid asyncId or triggerAsyncId was passed using AsyncHooks. An id less than -1 should never happen.
ERR_INVALID_BUFFER_SIZE#
A swap was performed on a Buffer but its size was not compatible with the operation.
ERR_INVALID_CURSOR_POS#
A cursor on a given stream cannot be moved to a specified row without a specified column.
ERR_INVALID_FILE_URL_HOST#
A Node.js API that consumes file: URLs (such as certain functions in the fs module) encountered a file URL with an incompatible host. This situation can only occur on Unix-like systems where only localhost or an empty host is supported.
ERR_INVALID_FILE_URL_PATH#
A Node.js API that consumes file: URLs (such as certain functions in the fs module) encountered a file URL with an incompatible path. The exact semantics for determining whether a path can be used are platform-dependent.
The thrown error object includes an input property that contains the URL object of the invalid file: URL.
ERR_INVALID_HANDLE_TYPE#
An attempt was made to send an unsupported "handle" over an IPC communication channel to a child process. See subprocess.send() and process.send() for more information.
ERR_INVALID_MODULE#
An attempt was made to load a module that does not exist or was otherwise not valid.
ERR_INVALID_MODULE_SPECIFIER#
The imported module string is an invalid URL, package name, or package subpath specifier.
ERR_INVALID_OBJECT_DEFINE_PROPERTY#
An error occurred while setting an invalid attribute on the property of an object.
ERR_INVALID_PACKAGE_TARGET#
The package.json "exports" field contains an invalid target mapping value for the attempted module resolution.
ERR_INVALID_REPL_EVAL_CONFIG#
Both breakEvalOnSigint and eval options were set in the REPL config, which is not supported.
ERR_INVALID_REPL_INPUT#
The input may not be used in the REPL. The conditions under which this error is used are described in the REPL documentation.
ERR_INVALID_RETURN_PROPERTY#
Thrown in case a function option does not provide a valid value for one of its returned object properties on execution.
ERR_INVALID_RETURN_PROPERTY_VALUE#
Thrown in case a function option does not provide an expected value type for one of its returned object properties on execution.
ERR_INVALID_RETURN_VALUE#
Thrown in case a function option does not return an expected value type on execution, such as when a function is expected to return a promise.
ERR_INVALID_STATE#
Indicates that an operation cannot be completed due to an invalid state. For instance, an object may have already been destroyed, or may be performing another operation.
ERR_INVALID_SYNC_FORK_INPUT#
A Buffer, TypedArray, DataView, or string was provided as stdio input to an asynchronous fork. See the documentation for the child_process module for more information.
ERR_INVALID_THIS#
A Node.js API function was called with an incompatible this value.
```js
const urlSearchParams = new URLSearchParams('foo=bar&baz=new');
const buf = Buffer.alloc(1);
urlSearchParams.has.call(buf, 'foo');
// Throws a TypeError with code 'ERR_INVALID_THIS'
```
ERR_INVALID_TUPLE#
An element in the iterable provided to the WHATWG URLSearchParams constructor did not represent a [name, value] tuple – that is, if an element is not iterable, or does not consist of exactly two elements.
ERR_INVALID_TYPESCRIPT_SYNTAX#
History
| Version | Changes |
|---|---|
| v23.7.0, v22.14.0 | This error is no longer thrown on valid yet unsupported syntax. |
| v23.0.0, v22.10.0 | Added in: v23.0.0, v22.10.0 |
The provided TypeScript syntax is not valid.
ERR_INVALID_URL#
An invalid URL was passed to the WHATWG URL constructor or the legacy url.parse() to be parsed. The thrown error object typically has an additional property 'input' that contains the URL that failed to parse.
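A minimal sketch showing the error code and the input property:

```js
try {
  new URL('invalid'); // no scheme, so it cannot be parsed
} catch (err) {
  console.log(err.code);  // ERR_INVALID_URL
  console.log(err.input); // invalid
}
```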
ERR_INVALID_URL_PATTERN#
An invalid URLPattern was passed to the WHATWG URLPattern constructor to be parsed.
ERR_INVALID_URL_SCHEME#
An attempt was made to use a URL of an incompatible scheme (protocol) for a specific purpose. It is only used in the WHATWG URL API support in the fs module (which only accepts URLs with 'file' scheme), but may be used in other Node.js APIs as well in the future.
ERR_IPC_CHANNEL_CLOSED#
An attempt was made to use an IPC communication channel that was already closed.
ERR_IPC_DISCONNECTED#
An attempt was made to disconnect an IPC communication channel that was already disconnected. See the documentation for the child_process module for more information.
ERR_IPC_ONE_PIPE#
An attempt was made to create a child Node.js process using more than one IPC communication channel. See the documentation for the child_process module for more information.
ERR_IPC_SYNC_FORK#
An attempt was made to open an IPC communication channel with a synchronously forked Node.js process. See the documentation for the child_process module for more information.
ERR_LOADER_CHAIN_INCOMPLETE#
An ESM loader hook returned without calling next() and without explicitly signaling a short circuit.
ERR_LOAD_SQLITE_EXTENSION#
An error occurred while loading a SQLite extension.
ERR_MEMORY_ALLOCATION_FAILED#
An attempt was made to allocate memory (usually in the C++ layer) but it failed.
ERR_MESSAGE_TARGET_CONTEXT_UNAVAILABLE#
A message posted to a MessagePort could not be deserialized in the target vm Context. Not all Node.js objects can be successfully instantiated in any context at this time, and attempting to transfer them using postMessage() can fail on the receiving side in that case.
ERR_MISSING_ARGS#
A required argument of a Node.js API was not passed. This is only used for strict compliance with the API specification (which in some cases may accept func(undefined) but not func()). In most native Node.js APIs, func(undefined) and func() are treated identically, and the ERR_INVALID_ARG_TYPE error code may be used instead.
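One API that performs this strict arity check is the WHATWG URL constructor, which can serve as a minimal illustration:

```js
// new URL() with no arguments at all fails the required-argument check:
try {
  new URL();
} catch (err) {
  console.log(err.code); // ERR_MISSING_ARGS
}
```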
ERR_MISSING_OPTION#
For APIs that accept options objects, some options might be mandatory. This code is thrown if a required option is missing.
ERR_MISSING_PASSPHRASE#
An attempt was made to read an encrypted key without specifying a passphrase.
ERR_MISSING_PLATFORM_FOR_WORKER#
The V8 platform used by this instance of Node.js does not support creating Workers. This is caused by lack of embedder support for Workers. In particular, this error will not occur with standard builds of Node.js.
ERR_MODULE_LINK_MISMATCH#
A module cannot be linked because the same module requests in it are not resolved to the same module.
ERR_MODULE_NOT_FOUND#
A module file could not be resolved by the ECMAScript modules loader while attempting an import operation or when loading the program entry point.
ERR_MULTIPLE_CALLBACK#
A callback was called more than once.
A callback is almost always meant to be called only once, as a query can either be fulfilled or rejected, but not both at the same time. The latter would be possible by calling the callback more than once.
ERR_NAPI_INVALID_DATAVIEW_ARGS#
While calling napi_create_dataview(), a given offset was outside the bounds of the dataview or offset + length was larger than the length of the given buffer.
ERR_NAPI_INVALID_TYPEDARRAY_ALIGNMENT#
While calling napi_create_typedarray(), the provided offset was not a multiple of the element size.
ERR_NAPI_INVALID_TYPEDARRAY_LENGTH#
While calling napi_create_typedarray(), (length * size_of_element) + byte_offset was larger than the length of the given buffer.
ERR_NAPI_TSFN_CALL_JS#
An error occurred while invoking the JavaScript portion of the thread-safe function.
ERR_NAPI_TSFN_GET_UNDEFINED#
An error occurred while attempting to retrieve the JavaScript undefined value.
ERR_NON_CONTEXT_AWARE_DISABLED#
A non-context-aware native addon was loaded in a process that disallows them.
ERR_NOT_BUILDING_SNAPSHOT#
An attempt was made to use operations that can only be used when building a V8 startup snapshot even though Node.js isn't building one.
ERR_NOT_IN_SINGLE_EXECUTABLE_APPLICATION#
The operation cannot be performed when it's not in a single-executable application.
ERR_NOT_SUPPORTED_IN_SNAPSHOT#
An attempt was made to perform operations that are not supported when building a startup snapshot.
ERR_NO_CRYPTO#
An attempt was made to use crypto features while Node.js was not compiled with OpenSSL crypto support.
ERR_NO_ICU#
An attempt was made to use features that require ICU, but Node.js was not compiled with ICU support.
ERR_NO_TYPESCRIPT#
An attempt was made to use features that require Native TypeScript support, but Node.js was not compiled with TypeScript support.
ERR_OPERATION_FAILED#
An operation failed. This is typically used to signal the general failure of an asynchronous operation.
ERR_OPTIONS_BEFORE_BOOTSTRAPPING#
An attempt was made to get options before the bootstrapping was completed.
ERR_PACKAGE_IMPORT_NOT_DEFINED#
The package.json "imports" field does not define the given internal package specifier mapping.
ERR_PACKAGE_PATH_NOT_EXPORTED#
The package.json "exports" field does not export the requested subpath. Because exports are encapsulated, private internal modules that are not exported cannot be imported through the package resolution, unless using an absolute URL.
ERR_PARSE_ARGS_INVALID_OPTION_VALUE#
When strict is set to true, thrown by util.parseArgs() if a <boolean> value is provided for an option of type <string>, or if a <string> value is provided for an option of type <boolean>.
ERR_PARSE_ARGS_UNEXPECTED_POSITIONAL#
Thrown by util.parseArgs(), when a positional argument is provided and allowPositionals is set to false.
ERR_PARSE_ARGS_UNKNOWN_OPTION#
When strict is set to true, thrown by util.parseArgs() if an argument is not configured in options.
ERR_PERFORMANCE_INVALID_TIMESTAMP#
An invalid timestamp value was provided for a performance mark or measure.
ERR_PROTO_ACCESS#
Accessing Object.prototype.__proto__ has been forbidden using --disable-proto=throw. Object.getPrototypeOf and Object.setPrototypeOf should be used to get and set the prototype of an object.
ERR_PROXY_TUNNEL#
Failed to establish a proxy tunnel when NODE_USE_ENV_PROXY or --use-env-proxy is enabled.
ERR_QUIC_APPLICATION_ERROR#
A QUIC application error occurred.
ERR_QUIC_CONNECTION_FAILED#
Establishing a QUIC connection failed.
ERR_QUIC_ENDPOINT_CLOSED#
A QUIC Endpoint closed with an error.
ERR_QUIC_OPEN_STREAM_FAILED#
Opening a QUIC stream failed.
ERR_QUIC_TRANSPORT_ERROR#
A QUIC transport error occurred.
ERR_QUIC_VERSION_NEGOTIATION_ERROR#
A QUIC session failed because version negotiation is required.
ERR_REQUIRE_ASYNC_MODULE#
When trying to require() an ES Module, the module turns out to be asynchronous. That is, it contains top-level await.
To see where the top-level await is, use --experimental-print-required-tla (this would execute the modules before looking for the top-level awaits).
ERR_REQUIRE_CYCLE_MODULE#
When trying to require() an ES Module, a CommonJS to ESM or ESM to CommonJS edge participates in an immediate cycle. This is not allowed because ES Modules cannot be evaluated while they are already being evaluated.
To avoid the cycle, the require() call involved in a cycle should not happen at the top-level of either an ES Module (via createRequire()) or a CommonJS module, and should be done lazily in an inner function.
ERR_REQUIRE_ESM#
History
| Version | Changes |
|---|---|
| v23.0.0, v22.12.0, v20.19.0 | require() now supports loading synchronous ES modules by default. |
An attempt was made to require() an ES Module.
This error has been deprecated since require() now supports loading synchronous ES modules. When require() encounters an ES module that contains top-level await, it will throw ERR_REQUIRE_ASYNC_MODULE instead.
ERR_SCRIPT_EXECUTION_INTERRUPTED#
Script execution was interrupted by SIGINT (for example, Ctrl+C was pressed).
ERR_SCRIPT_EXECUTION_TIMEOUT#
Script execution timed out, possibly due to bugs in the script being executed.
ERR_SERVER_ALREADY_LISTEN#
The server.listen() method was called while a net.Server was already listening. This applies to all instances of net.Server, including HTTP, HTTPS, and HTTP/2 Server instances.
ERR_SERVER_NOT_RUNNING#
The server.close() method was called when a net.Server was not running. This applies to all instances of net.Server, including HTTP, HTTPS, and HTTP/2 Server instances.
ERR_SINGLE_EXECUTABLE_APPLICATION_ASSET_NOT_FOUND#
A key was passed to single executable application APIs to identify an asset, but no match could be found.
ERR_SOCKET_BAD_BUFFER_SIZE#
An invalid (negative) size was passed for either the recvBufferSize or sendBufferSize options in dgram.createSocket().
ERR_SOCKET_BUFFER_SIZE#
While using dgram.createSocket(), the size of the receive or send Buffer could not be determined.
ERR_SOCKET_CLOSED_BEFORE_CONNECTION#
When calling net.Socket.write() on a connecting socket and the socket was closed before the connection was established.
ERR_SOCKET_CONNECTION_TIMEOUT#
The socket was unable to connect to any address returned by the DNS within the allowed timeout when using the family autoselection algorithm.
ERR_SOCKET_DGRAM_NOT_CONNECTED#
A dgram.disconnect() or dgram.remoteAddress() call was made on a disconnected socket.
ERR_SOURCE_PHASE_NOT_DEFINED#
The provided module import does not provide a source phase imports representation for the source phase import syntax import source x from 'x' or import.source(x).
ERR_SRI_PARSE#
A string was provided for a Subresource Integrity check, but was unable to be parsed. Check the format of integrity attributes by looking at the Subresource Integrity specification.
ERR_STREAM_ALREADY_FINISHED#
A stream method was called that cannot complete because the stream was finished.
ERR_STREAM_DESTROYED#
A stream method was called that cannot complete because the stream was destroyed using stream.destroy().
ERR_STREAM_PREMATURE_CLOSE#
An error returned by stream.finished() and stream.pipeline(), when a stream or a pipeline ends non gracefully with no explicit error.
ERR_STREAM_PUSH_AFTER_EOF#
An attempt was made to call stream.push() after a null (EOF) had been pushed to the stream.
ERR_STREAM_UNABLE_TO_PIPE#
An attempt was made to pipe to a closed or destroyed stream in a pipeline.
ERR_STREAM_UNSHIFT_AFTER_END_EVENT#
An attempt was made to call stream.unshift() after the 'end' event was emitted.
ERR_STREAM_WRAP#
Prevents an abort if a string decoder was set on the Socket or if the decoder is in objectMode.
```js
const Socket = require('node:net').Socket;
const instance = new Socket();
instance.setEncoding('utf8');
```
ERR_STREAM_WRITE_AFTER_END#
An attempt was made to call stream.write() after stream.end() has been called.
ERR_STRING_TOO_LONG#
An attempt has been made to create a string longer than the maximum allowed length.
ERR_SYSTEM_ERROR#
An unspecified or non-specific system error has occurred within the Node.js process. The error object will have an err.info object property with additional details.
ERR_TEST_FAILURE#
This error represents a failed test. Additional information about the failure is available via the cause property. The failureType property specifies what the test was doing when the failure occurred.
ERR_TLS_ALPN_CALLBACK_INVALID_RESULT#
This error is thrown when an ALPNCallback returns a value that is not in the list of ALPN protocols offered by the client.
ERR_TLS_ALPN_CALLBACK_WITH_PROTOCOLS#
This error is thrown when creating a TLSServer if the TLS options include both ALPNProtocols and ALPNCallback. These options are mutually exclusive.
ERR_TLS_CERT_ALTNAME_FORMAT#
This error is thrown by checkServerIdentity if a user-supplied subjectaltname property violates encoding rules. Certificate objects produced by Node.js itself always comply with encoding rules and will never cause this error.
ERR_TLS_CERT_ALTNAME_INVALID#
While using TLS, the host name/IP of the peer did not match any of the subjectAltNames in its certificate.
ERR_TLS_DH_PARAM_SIZE#
While using TLS, the parameter offered for the Diffie-Hellman (DH) key-agreement protocol is too small. By default, the key length must be greater than or equal to 1024 bits to avoid vulnerabilities, even though it is strongly recommended to use 2048 bits or larger for stronger security.
ERR_TLS_HANDSHAKE_TIMEOUT#
A TLS/SSL handshake timed out. In this case, the server must also abort the connection.
ERR_TLS_INVALID_PROTOCOL_METHOD#
The specified secureProtocol method is invalid. It is either unknown, or disabled because it is insecure.
ERR_TLS_INVALID_STATE#
The TLS socket must be connected and securely established. Ensure the 'secure' event is emitted before continuing.
ERR_TLS_PROTOCOL_VERSION_CONFLICT#
Attempting to set a TLS protocol minVersion or maxVersion conflicts with an attempt to set the secureProtocol explicitly. Use one mechanism or the other.
ERR_TLS_RENEGOTIATION_DISABLED#
An attempt was made to renegotiate TLS on a socket instance with renegotiation disabled.
ERR_TLS_REQUIRED_SERVER_NAME#
While using TLS, the server.addContext() method was called without providing a host name in the first parameter.
ERR_TLS_SESSION_ATTACK#
An excessive amount of TLS renegotiations was detected, which is a potential vector for denial-of-service attacks.
ERR_TLS_SNI_FROM_SERVER#
An attempt was made to issue Server Name Indication from a TLS server-side socket, which is only valid from a client.
ERR_TRACE_EVENTS_CATEGORY_REQUIRED#
The trace_events.createTracing() method requires at least one trace event category.
ERR_TRACE_EVENTS_UNAVAILABLE#
The node:trace_events module could not be loaded because Node.js was compiled with the --without-v8-platform flag.
ERR_TRAILING_JUNK_AFTER_STREAM_END#
Trailing junk was found after the end of the compressed stream. This error is thrown when extra, unexpected data is detected after the end of a compressed stream (for example, in zlib or gzip decompression).
ERR_UNAVAILABLE_DURING_EXIT#
A function was called within a process.on('exit') handler that shouldn't be called within a process.on('exit') handler.
ERR_UNCAUGHT_EXCEPTION_CAPTURE_ALREADY_SET#
process.setUncaughtExceptionCaptureCallback() was called twice, without first resetting the callback to null.
This error is designed to prevent accidentally overwriting a callback registered from another module.
ERR_UNHANDLED_ERROR#
An unhandled error occurred (for instance, when an 'error' event is emitted by an EventEmitter but an 'error' handler is not registered).
ERR_UNKNOWN_BUILTIN_MODULE#
Used to identify a specific kind of internal Node.js error that should not typically be triggered by user code. Instances of this error point to an internal bug within the Node.js binary itself.
ERR_UNKNOWN_FILE_EXTENSION#
An attempt was made to load a module with an unknown or unsupported file extension.
ERR_UNKNOWN_MODULE_FORMAT#
An attempt was made to load a module with an unknown or unsupported format.
ERR_UNKNOWN_SIGNAL#
An invalid or unknown process signal was passed to an API expecting a valid signal (such as subprocess.kill()).
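A minimal sketch using process.kill(), which performs the same signal-name validation:

```js
try {
  // 'SIGNOTREAL' is not a recognized signal name.
  process.kill(process.pid, 'SIGNOTREAL');
} catch (err) {
  console.log(err.code); // ERR_UNKNOWN_SIGNAL
}
```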
ERR_UNSUPPORTED_DIR_IMPORT#
import of a directory URL is unsupported. Instead, self-reference a package using its name and define a custom subpath in the "exports" field of the package.json file.
```js
import './'; // unsupported
import './index.js'; // supported
import 'package-name'; // supported
```
ERR_UNSUPPORTED_NODE_MODULES_TYPE_STRIPPING#
Type stripping is not supported for files descendant of a node_modules directory.
ERR_UNSUPPORTED_RESOLVE_REQUEST#
An attempt was made to resolve an invalid module referrer. This can happen when importing or calling import.meta.resolve() with either:
- a bare specifier that is not a builtin module from a module whose URL scheme is not file.
- a relative URL from a module whose URL scheme is not a special scheme.
```js
try {
  // Trying to import the package 'bare-specifier' from a `data:` URL module:
  await import('data:text/javascript,import "bare-specifier"');
} catch (e) {
  console.log(e.code); // ERR_UNSUPPORTED_RESOLVE_REQUEST
}
```
ERR_UNSUPPORTED_TYPESCRIPT_SYNTAX#
The provided TypeScript syntax is unsupported. This could happen when using TypeScript syntax that requires transformation with type-stripping.
ERR_VALID_PERFORMANCE_ENTRY_TYPE#
While using the Performance Timing API (perf_hooks), no valid performance entry types are found.
ERR_VM_DYNAMIC_IMPORT_CALLBACK_MISSING_FLAG#
A dynamic import callback was invoked without--experimental-vm-modules.
ERR_VM_MODULE_ALREADY_LINKED#
The module attempted to be linked is not eligible for linking, because of one of the following reasons:
- It has already been linked (linkingStatus is 'linked')
- It is being linked (linkingStatus is 'linking')
- Linking has failed for this module (linkingStatus is 'errored')
ERR_VM_MODULE_CANNOT_CREATE_CACHED_DATA#
Cached data cannot be created for modules which have already been evaluated.
ERR_VM_MODULE_DIFFERENT_CONTEXT#
The module being returned from the linker function is from a different context than the parent module. Linked modules must share the same context.
ERR_VM_MODULE_STATUS#
The current module's status does not allow for this operation. The specific meaning of the error depends on the specific function.
ERR_WEBASSEMBLY_NOT_SUPPORTED#
A feature requiring WebAssembly was used, but WebAssembly is not supported or has been disabled in the current environment (for example, when running with --jitless).
ERR_WEBASSEMBLY_RESPONSE#
The Response that has been passed to WebAssembly.compileStreaming or to WebAssembly.instantiateStreaming is not a valid WebAssembly response.
ERR_WORKER_INVALID_EXEC_ARGV#
The execArgv option passed to the Worker constructor contains invalid flags.
ERR_WORKER_MESSAGING_ERRORED#
The destination thread threw an error while processing a message sent via postMessageToThread().
ERR_WORKER_MESSAGING_FAILED#
The thread requested in postMessageToThread() is invalid or has no workerMessage listener.
ERR_WORKER_MESSAGING_SAME_THREAD#
The thread id requested in postMessageToThread() is the current thread id.
ERR_WORKER_MESSAGING_TIMEOUT#
Sending a message via postMessageToThread() timed out.
ERR_WORKER_PATH#
The path for the main script of a worker is neither an absolute path nor a relative path starting with ./ or ../.
ERR_WORKER_UNSERIALIZABLE_ERROR#
All attempts at serializing an uncaught exception from a worker thread failed.
HPE_CHUNK_EXTENSIONS_OVERFLOW#
Too much data was received for chunk extensions. In order to protect against malicious or misconfigured clients, if more than 16 KiB of data is received then an Error with this code will be emitted.
HPE_HEADER_OVERFLOW#
History
| Version | Changes |
|---|---|
| v11.4.0, v10.15.0 | Max header size in |
Too much HTTP header data was received. In order to protect against malicious or misconfigured clients, if more than maxHeaderSize of HTTP header data is received then HTTP parsing will abort without a request or response object being created, and an Error with this code will be emitted.
HPE_UNEXPECTED_CONTENT_LENGTH#
The server is sending both a Content-Length header and Transfer-Encoding: chunked.
Transfer-Encoding: chunked allows the server to maintain an HTTP persistent connection for dynamically generated content. In this case, the Content-Length HTTP header cannot be used.
Use Content-Length or Transfer-Encoding: chunked.
Legacy Node.js error codes#
ERR_CANNOT_TRANSFER_OBJECT#
The value passed to postMessage() contained an object that is not supported for transferring.
ERR_CRYPTO_HASH_DIGEST_NO_UTF16#
The UTF-16 encoding was used with hash.digest(). While the hash.digest() method does allow an encoding argument to be passed in, causing the method to return a string rather than a Buffer, the UTF-16 encoding (e.g. ucs or utf16le) is not supported.
ERR_CRYPTO_SCRYPT_INVALID_PARAMETER#
An incompatible combination of options was passed to crypto.scrypt() or crypto.scryptSync(). New versions of Node.js use the error code ERR_INCOMPATIBLE_OPTION_PAIR instead, which is consistent with other APIs.
ERR_FS_INVALID_SYMLINK_TYPE#
An invalid symlink type was passed to the fs.symlink() or fs.symlinkSync() methods.
ERR_HTTP2_FRAME_ERROR#
Used when a failure occurs sending an individual frame on the HTTP/2 session.
ERR_HTTP2_HEADERS_OBJECT#
Used when an HTTP/2 Headers Object is expected.
ERR_HTTP2_HEADER_REQUIRED#
Used when a required header is missing in an HTTP/2 message.
ERR_HTTP2_INFO_HEADERS_AFTER_RESPOND#
HTTP/2 informational headers must only be sent prior to calling the Http2Stream.prototype.respond() method.
ERR_HTTP2_STREAM_CLOSED#
Used when an action has been performed on an HTTP/2 Stream that has already been closed.
ERR_HTTP_INVALID_CHAR#
Used when an invalid character is found in an HTTP response status message (reason phrase).
ERR_IMPORT_ASSERTION_TYPE_FAILED#
An import assertion has failed, preventing the specified module from being imported.
ERR_IMPORT_ASSERTION_TYPE_MISSING#
An import assertion is missing, preventing the specified module from being imported.
ERR_IMPORT_ASSERTION_TYPE_UNSUPPORTED#
An import attribute is not supported by this version of Node.js.
ERR_INDEX_OUT_OF_RANGE#
A given index was out of the accepted range (e.g. negative offsets).
ERR_INVALID_OPT_VALUE#
An invalid or unexpected value was passed in an options object.
ERR_INVALID_OPT_VALUE_ENCODING#
An invalid or unknown file encoding was passed.
ERR_INVALID_PERFORMANCE_MARK#
While using the Performance Timing API (perf_hooks), a performance mark is invalid.
ERR_INVALID_TRANSFER_OBJECT#
History
| Version | Changes |
|---|---|
| v21.0.0 | A |
| v21.0.0 | Removed in: v21.0.0 |
An invalid transfer object was passed to postMessage().
ERR_MANIFEST_ASSERT_INTEGRITY#
An attempt was made to load a resource, but the resource did not match the integrity defined by the policy manifest. See the documentation for policy manifests for more information.
ERR_MANIFEST_DEPENDENCY_MISSING#
An attempt was made to load a resource, but the resource was not listed as a dependency from the location that attempted to load it. See the documentation for policy manifests for more information.
ERR_MANIFEST_INTEGRITY_MISMATCH#
An attempt was made to load a policy manifest, but the manifest had multiple entries for a resource which did not match each other. Update the manifest entries to match in order to resolve this error. See the documentation for policy manifests for more information.
ERR_MANIFEST_INVALID_RESOURCE_FIELD#
A policy manifest resource had an invalid value for one of its fields. Update the manifest entry to match in order to resolve this error. See the documentation for policy manifests for more information.
ERR_MANIFEST_INVALID_SPECIFIER#
A policy manifest resource had an invalid value for one of its dependency mappings. Update the manifest entry to match to resolve this error. See the documentation for policy manifests for more information.
ERR_MANIFEST_PARSE_POLICY#
An attempt was made to load a policy manifest, but the manifest was unable to be parsed. See the documentation for policy manifests for more information.
ERR_MANIFEST_TDZ#
An attempt was made to read from a policy manifest, but the manifest initialization has not yet taken place. This is likely a bug in Node.js.
ERR_MANIFEST_UNKNOWN_ONERROR#
A policy manifest was loaded, but had an unknown value for its "onerror" behavior. See the documentation for policy manifests for more information.
ERR_MISSING_MESSAGE_PORT_IN_TRANSFER_LIST#
This error code was replaced by ERR_MISSING_TRANSFERABLE_IN_TRANSFER_LIST in Node.js 15.0.0, because it is no longer accurate as other types of transferable objects also exist now.
ERR_MISSING_TRANSFERABLE_IN_TRANSFER_LIST#
History
| Version | Changes |
|---|---|
| v21.0.0 | A |
| v21.0.0 | Removed in: v21.0.0 |
| v15.0.0 | Added in: v15.0.0 |
An object that needs to be explicitly listed in the transferList argument is in the object passed to a postMessage() call, but is not provided in the transferList for that call. Usually, this is a MessagePort.
In Node.js versions prior to v15.0.0, the error code being used here was ERR_MISSING_MESSAGE_PORT_IN_TRANSFER_LIST. However, the set of transferable object types has been expanded to cover more types than MessagePort.
ERR_NAPI_CONS_PROTOTYPE_OBJECT#
Used by the Node-API when Constructor.prototype is not an object.
ERR_NAPI_TSFN_START_IDLE_LOOP#
On the main thread, values are removed from the queue associated with the thread-safe function in an idle loop. This error indicates that an error has occurred when attempting to start the loop.
ERR_NAPI_TSFN_STOP_IDLE_LOOP#
Once no more items are left in the queue, the idle loop must be suspended. This error indicates that the idle loop has failed to stop.
ERR_NO_LONGER_SUPPORTED#
A Node.js API was called in an unsupported manner, such as Buffer.write(string, encoding, offset[, length]).
ERR_OUTOFMEMORY#
Used generically to identify that an operation caused an out of memory condition.
ERR_PARSE_HISTORY_DATA#
The node:repl module was unable to parse data from the REPL history file.
ERR_STDERR_CLOSE#
History
| Version | Changes |
|---|---|
| v10.12.0 | Rather than emitting an error, |
| v10.12.0 | Removed in: v10.12.0 |
An attempt was made to close the `process.stderr` stream. By design, Node.js does not allow `stdout` or `stderr` streams to be closed by user code.
ERR_STDOUT_CLOSE#
History
| Version | Changes |
|---|---|
| v10.12.0 | Rather than emitting an error, |
| v10.12.0 | Removed in: v10.12.0 |
An attempt was made to close the `process.stdout` stream. By design, Node.js does not allow `stdout` or `stderr` streams to be closed by user code.
ERR_STREAM_READ_NOT_IMPLEMENTED#
Used when an attempt is made to use a readable stream that has not implemented `readable._read()`.
ERR_TAP_PARSER_ERROR#
An error representing a failing parser state. Additional information about the token causing the error is available via the `cause` property.
ERR_TLS_RENEGOTIATION_FAILED#
Used when a TLS renegotiation request has failed in a non-specific way.
ERR_TRANSFERRING_EXTERNALIZED_SHAREDARRAYBUFFER#
A `SharedArrayBuffer` whose memory is not managed by the JavaScript engine or by Node.js was encountered during serialization. Such a `SharedArrayBuffer` cannot be serialized.
This can only happen when native addons create `SharedArrayBuffer`s in "externalized" mode, or put an existing `SharedArrayBuffer` into externalized mode.
ERR_UNKNOWN_STDIN_TYPE#
An attempt was made to launch a Node.js process with an unknown `stdin` file type. This error is usually an indication of a bug within Node.js itself, although it is possible for user code to trigger it.
ERR_UNKNOWN_STREAM_TYPE#
An attempt was made to launch a Node.js process with an unknown `stdout` or `stderr` file type. This error is usually an indication of a bug within Node.js itself, although it is possible for user code to trigger it.
ERR_VALUE_OUT_OF_RANGE#
Used when a given value is out of the accepted range.
ERR_VM_MODULE_LINKING_ERRORED#
The linker function returned a module for which linking has failed.
ERR_WORKER_UNSUPPORTED_EXTENSION#
The pathname used for the main script of a worker has an unknown file extension.
ERR_ZLIB_BINDING_CLOSED#
Used when an attempt is made to use a `zlib` object after it has already been closed.
OpenSSL Error Codes#
Time Validity Errors#
Trust or Chain Related Errors#
UNABLE_TO_GET_ISSUER_CERT#
The issuer certificate of a looked-up certificate could not be found. This normally means the list of trusted certificates is not complete.
UNABLE_TO_GET_ISSUER_CERT_LOCALLY#
The certificate's issuer is not known. This is the case if the issuer is not included in the trusted certificate list.
DEPTH_ZERO_SELF_SIGNED_CERT#
The passed certificate is self-signed and the same certificate cannot be found in the list of trusted certificates.
SELF_SIGNED_CERT_IN_CHAIN#
The certificate chain could be built up using the untrusted certificates, but the root could not be found locally.
UNABLE_TO_VERIFY_LEAF_SIGNATURE#
No signatures could be verified because the chain contains only one certificate and it is not self-signed.
CERT_UNTRUSTED#
The root certificate authority (CA) is not marked as trusted for the specified purpose.
Basic Extension Errors#
INVALID_CA#
A CA certificate is invalid. Either it is not a CA or its extensions are not consistent with the supplied purpose.
Usage and Policy Errors#
Formatting Errors#
UNABLE_TO_DECRYPT_CERT_SIGNATURE#
The certificate signature could not be decrypted. This means that the actual signature value could not be determined, rather than it not matching the expected value. This is only meaningful for RSA keys.
UNABLE_TO_DECRYPT_CRL_SIGNATURE#
The certificate revocation list (CRL) signature could not be decrypted: this means that the actual signature value could not be determined rather than it not matching the expected value.
UNABLE_TO_DECODE_ISSUER_PUBLIC_KEY#
The public key in the certificate SubjectPublicKeyInfo could not be read.
Events#
Source Code: lib/events.js
Much of the Node.js core API is built around an idiomatic asynchronous event-driven architecture in which certain kinds of objects (called "emitters") emit named events that cause `Function` objects ("listeners") to be called.
For instance: a `net.Server` object emits an event each time a peer connects to it; a `fs.ReadStream` emits an event when the file is opened; a stream emits an event whenever data is available to be read.
All objects that emit events are instances of the `EventEmitter` class. These objects expose an `eventEmitter.on()` function that allows one or more functions to be attached to named events emitted by the object. Typically, event names are camel-cased strings but any valid JavaScript property key can be used.
When the `EventEmitter` object emits an event, all of the functions attached to that specific event are called synchronously. Any values returned by the called listeners are ignored and discarded.
The following example shows a simple `EventEmitter` instance with a single listener. The `eventEmitter.on()` method is used to register listeners, while the `eventEmitter.emit()` method is used to trigger the event.
```mjs
import { EventEmitter } from 'node:events';

class MyEmitter extends EventEmitter {}

const myEmitter = new MyEmitter();
myEmitter.on('event', () => {
  console.log('an event occurred!');
});
myEmitter.emit('event');
```

```cjs
const EventEmitter = require('node:events');

class MyEmitter extends EventEmitter {}

const myEmitter = new MyEmitter();
myEmitter.on('event', () => {
  console.log('an event occurred!');
});
myEmitter.emit('event');
```
Passing arguments and `this` to listeners#
The `eventEmitter.emit()` method allows an arbitrary set of arguments to be passed to the listener functions. Keep in mind that when an ordinary listener function is called, the standard `this` keyword is intentionally set to reference the `EventEmitter` instance to which the listener is attached.
```mjs
import { EventEmitter } from 'node:events';

class MyEmitter extends EventEmitter {}

const myEmitter = new MyEmitter();
myEmitter.on('event', function(a, b) {
  console.log(a, b, this, this === myEmitter);
  // Prints:
  //   a b MyEmitter {
  //     _events: [Object: null prototype] { event: [Function (anonymous)] },
  //     _eventsCount: 1,
  //     _maxListeners: undefined,
  //     Symbol(shapeMode): false,
  //     Symbol(kCapture): false
  //   } true
});
myEmitter.emit('event', 'a', 'b');
```

```cjs
const EventEmitter = require('node:events');

class MyEmitter extends EventEmitter {}

const myEmitter = new MyEmitter();
myEmitter.on('event', function(a, b) {
  console.log(a, b, this, this === myEmitter);
  // Prints:
  //   a b MyEmitter {
  //     _events: [Object: null prototype] { event: [Function (anonymous)] },
  //     _eventsCount: 1,
  //     _maxListeners: undefined,
  //     Symbol(shapeMode): false,
  //     Symbol(kCapture): false
  //   } true
});
myEmitter.emit('event', 'a', 'b');
```
It is possible to use ES6 Arrow Functions as listeners, however, when doing so, the `this` keyword will no longer reference the `EventEmitter` instance:
```mjs
import { EventEmitter } from 'node:events';

class MyEmitter extends EventEmitter {}

const myEmitter = new MyEmitter();
myEmitter.on('event', (a, b) => {
  console.log(a, b, this);
  // Prints: a b undefined
});
myEmitter.emit('event', 'a', 'b');
```

```cjs
const EventEmitter = require('node:events');

class MyEmitter extends EventEmitter {}

const myEmitter = new MyEmitter();
myEmitter.on('event', (a, b) => {
  console.log(a, b, this);
  // Prints: a b {}
});
myEmitter.emit('event', 'a', 'b');
```
Asynchronous vs. synchronous#
The `EventEmitter` calls all listeners synchronously in the order in which they were registered. This ensures the proper sequencing of events and helps avoid race conditions and logic errors. When appropriate, listener functions can switch to an asynchronous mode of operation using the `setImmediate()` or `process.nextTick()` methods:
```mjs
import { EventEmitter } from 'node:events';

class MyEmitter extends EventEmitter {}

const myEmitter = new MyEmitter();
myEmitter.on('event', (a, b) => {
  setImmediate(() => {
    console.log('this happens asynchronously');
  });
});
myEmitter.emit('event', 'a', 'b');
```

```cjs
const EventEmitter = require('node:events');

class MyEmitter extends EventEmitter {}

const myEmitter = new MyEmitter();
myEmitter.on('event', (a, b) => {
  setImmediate(() => {
    console.log('this happens asynchronously');
  });
});
myEmitter.emit('event', 'a', 'b');
```
Handling events only once#
When a listener is registered using the `eventEmitter.on()` method, that listener is invoked every time the named event is emitted.
```mjs
import { EventEmitter } from 'node:events';

class MyEmitter extends EventEmitter {}

const myEmitter = new MyEmitter();
let m = 0;
myEmitter.on('event', () => {
  console.log(++m);
});
myEmitter.emit('event');
// Prints: 1
myEmitter.emit('event');
// Prints: 2
```

```cjs
const EventEmitter = require('node:events');

class MyEmitter extends EventEmitter {}

const myEmitter = new MyEmitter();
let m = 0;
myEmitter.on('event', () => {
  console.log(++m);
});
myEmitter.emit('event');
// Prints: 1
myEmitter.emit('event');
// Prints: 2
```
Using the `eventEmitter.once()` method, it is possible to register a listener that is called at most once for a particular event. Once the event is emitted, the listener is unregistered and then called.
```mjs
import { EventEmitter } from 'node:events';

class MyEmitter extends EventEmitter {}

const myEmitter = new MyEmitter();
let m = 0;
myEmitter.once('event', () => {
  console.log(++m);
});
myEmitter.emit('event');
// Prints: 1
myEmitter.emit('event');
// Ignored
```

```cjs
const EventEmitter = require('node:events');

class MyEmitter extends EventEmitter {}

const myEmitter = new MyEmitter();
let m = 0;
myEmitter.once('event', () => {
  console.log(++m);
});
myEmitter.emit('event');
// Prints: 1
myEmitter.emit('event');
// Ignored
```
Error events#
When an error occurs within an `EventEmitter` instance, the typical action is for an `'error'` event to be emitted. These are treated as special cases within Node.js.
If an `EventEmitter` does not have at least one listener registered for the `'error'` event, and an `'error'` event is emitted, the error is thrown, a stack trace is printed, and the Node.js process exits.
```mjs
import { EventEmitter } from 'node:events';

class MyEmitter extends EventEmitter {}

const myEmitter = new MyEmitter();
myEmitter.emit('error', new Error('whoops!'));
// Throws and crashes Node.js
```

```cjs
const EventEmitter = require('node:events');

class MyEmitter extends EventEmitter {}

const myEmitter = new MyEmitter();
myEmitter.emit('error', new Error('whoops!'));
// Throws and crashes Node.js
```
To guard against crashing the Node.js process the domain module can be used. (Note, however, that the `node:domain` module is deprecated.)
As a best practice, listeners should always be added for the `'error'` events.
```mjs
import { EventEmitter } from 'node:events';

class MyEmitter extends EventEmitter {}

const myEmitter = new MyEmitter();
myEmitter.on('error', (err) => {
  console.error('whoops! there was an error');
});
myEmitter.emit('error', new Error('whoops!'));
// Prints: whoops! there was an error
```

```cjs
const EventEmitter = require('node:events');

class MyEmitter extends EventEmitter {}

const myEmitter = new MyEmitter();
myEmitter.on('error', (err) => {
  console.error('whoops! there was an error');
});
myEmitter.emit('error', new Error('whoops!'));
// Prints: whoops! there was an error
```
It is possible to monitor `'error'` events without consuming the emitted error by installing a listener using the symbol `events.errorMonitor`.
```mjs
import { EventEmitter, errorMonitor } from 'node:events';

const myEmitter = new EventEmitter();
myEmitter.on(errorMonitor, (err) => {
  MyMonitoringTool.log(err);
});
myEmitter.emit('error', new Error('whoops!'));
// Still throws and crashes Node.js
```

```cjs
const { EventEmitter, errorMonitor } = require('node:events');

const myEmitter = new EventEmitter();
myEmitter.on(errorMonitor, (err) => {
  MyMonitoringTool.log(err);
});
myEmitter.emit('error', new Error('whoops!'));
// Still throws and crashes Node.js
```
Capture rejections of promises#
Using `async` functions with event handlers is problematic, because it can lead to an unhandled rejection in case of a thrown exception:
```mjs
import { EventEmitter } from 'node:events';

const ee = new EventEmitter();
ee.on('something', async (value) => {
  throw new Error('kaboom');
});
```

```cjs
const EventEmitter = require('node:events');

const ee = new EventEmitter();
ee.on('something', async (value) => {
  throw new Error('kaboom');
});
```
The `captureRejections` option in the `EventEmitter` constructor or the global setting change this behavior, installing a `.then(undefined, handler)` handler on the `Promise`. This handler routes the exception asynchronously to the `Symbol.for('nodejs.rejection')` method if there is one, or to the `'error'` event handler if there is none.
```mjs
import { EventEmitter } from 'node:events';

const ee1 = new EventEmitter({ captureRejections: true });
ee1.on('something', async (value) => {
  throw new Error('kaboom');
});
ee1.on('error', console.log);

const ee2 = new EventEmitter({ captureRejections: true });
ee2.on('something', async (value) => {
  throw new Error('kaboom');
});
ee2[Symbol.for('nodejs.rejection')] = console.log;
```

```cjs
const EventEmitter = require('node:events');

const ee1 = new EventEmitter({ captureRejections: true });
ee1.on('something', async (value) => {
  throw new Error('kaboom');
});
ee1.on('error', console.log);

const ee2 = new EventEmitter({ captureRejections: true });
ee2.on('something', async (value) => {
  throw new Error('kaboom');
});
ee2[Symbol.for('nodejs.rejection')] = console.log;
```
Setting `events.captureRejections = true` will change the default for all new instances of `EventEmitter`.
```mjs
import { EventEmitter } from 'node:events';

EventEmitter.captureRejections = true;
const ee1 = new EventEmitter();
ee1.on('something', async (value) => {
  throw new Error('kaboom');
});
ee1.on('error', console.log);
```

```cjs
const events = require('node:events');

events.captureRejections = true;
const ee1 = new events.EventEmitter();
ee1.on('something', async (value) => {
  throw new Error('kaboom');
});
ee1.on('error', console.log);
```
The `'error'` events that are generated by the `captureRejections` behavior do not have a catch handler, to avoid infinite error loops: the recommendation is to not use `async` functions as `'error'` event handlers.
Class: EventEmitter#
History
| Version | Changes |
|---|---|
| v13.4.0, v12.16.0 | Added captureRejections option. |
| v0.1.26 | Added in: v0.1.26 |
The `EventEmitter` class is defined and exposed by the `node:events` module:
```mjs
import { EventEmitter } from 'node:events';
```

```cjs
const EventEmitter = require('node:events');
```
All `EventEmitter`s emit the event `'newListener'` when new listeners are added and `'removeListener'` when existing listeners are removed.
It supports the following option:
- `captureRejections` <boolean> It enables automatic capturing of promise rejection. Default: `false`.
Event: 'newListener'#
- `eventName` <string> | <symbol> The name of the event being listened for
- `listener` <Function> The event handler function

The `EventEmitter` instance will emit its own `'newListener'` event before a listener is added to its internal array of listeners.
Listeners registered for the `'newListener'` event are passed the event name and a reference to the listener being added.
The fact that the event is triggered before adding the listener has a subtle but important side effect: any additional listeners registered to the same name within the `'newListener'` callback are inserted before the listener that is in the process of being added.
```mjs
import { EventEmitter } from 'node:events';
class MyEmitter extends EventEmitter {}

const myEmitter = new MyEmitter();
// Only do this once so we don't loop forever
myEmitter.once('newListener', (event, listener) => {
  if (event === 'event') {
    // Insert a new listener in front
    myEmitter.on('event', () => {
      console.log('B');
    });
  }
});
myEmitter.on('event', () => {
  console.log('A');
});
myEmitter.emit('event');
// Prints:
//   B
//   A
```

```cjs
const EventEmitter = require('node:events');
class MyEmitter extends EventEmitter {}

const myEmitter = new MyEmitter();
// Only do this once so we don't loop forever
myEmitter.once('newListener', (event, listener) => {
  if (event === 'event') {
    // Insert a new listener in front
    myEmitter.on('event', () => {
      console.log('B');
    });
  }
});
myEmitter.on('event', () => {
  console.log('A');
});
myEmitter.emit('event');
// Prints:
//   B
//   A
```
Event: 'removeListener'#
History
| Version | Changes |
|---|---|
| v6.1.0, v4.7.0 | For listeners attached using |
| v0.9.3 | Added in: v0.9.3 |
- `eventName` <string> | <symbol> The event name
- `listener` <Function> The event handler function

The `'removeListener'` event is emitted after the `listener` is removed.
emitter.addListener(eventName, listener)#
- `eventName` <string> | <symbol>
- `listener` <Function>

Alias for `emitter.on(eventName, listener)`.
emitter.emit(eventName[, ...args])#
Synchronously calls each of the listeners registered for the event named `eventName`, in the order they were registered, passing the supplied arguments to each.
Returns `true` if the event had listeners, `false` otherwise.
```mjs
import { EventEmitter } from 'node:events';
const myEmitter = new EventEmitter();

// First listener
myEmitter.on('event', function firstListener() {
  console.log('Helloooo! first listener');
});
// Second listener
myEmitter.on('event', function secondListener(arg1, arg2) {
  console.log(`event with parameters ${arg1}, ${arg2} in second listener`);
});
// Third listener
myEmitter.on('event', function thirdListener(...args) {
  const parameters = args.join(', ');
  console.log(`event with parameters ${parameters} in third listener`);
});

console.log(myEmitter.listeners('event'));

myEmitter.emit('event', 1, 2, 3, 4, 5);

// Prints:
// [
//   [Function: firstListener],
//   [Function: secondListener],
//   [Function: thirdListener]
// ]
// Helloooo! first listener
// event with parameters 1, 2 in second listener
// event with parameters 1, 2, 3, 4, 5 in third listener
```

```cjs
const EventEmitter = require('node:events');
const myEmitter = new EventEmitter();

// First listener
myEmitter.on('event', function firstListener() {
  console.log('Helloooo! first listener');
});
// Second listener
myEmitter.on('event', function secondListener(arg1, arg2) {
  console.log(`event with parameters ${arg1}, ${arg2} in second listener`);
});
// Third listener
myEmitter.on('event', function thirdListener(...args) {
  const parameters = args.join(', ');
  console.log(`event with parameters ${parameters} in third listener`);
});

console.log(myEmitter.listeners('event'));

myEmitter.emit('event', 1, 2, 3, 4, 5);

// Prints:
// [
//   [Function: firstListener],
//   [Function: secondListener],
//   [Function: thirdListener]
// ]
// Helloooo! first listener
// event with parameters 1, 2 in second listener
// event with parameters 1, 2, 3, 4, 5 in third listener
```
emitter.eventNames()#
- Returns: <string[]> | <symbol[]>

Returns an array listing the events for which the emitter has registered listeners.
```mjs
import { EventEmitter } from 'node:events';

const myEE = new EventEmitter();
myEE.on('foo', () => {});
myEE.on('bar', () => {});

const sym = Symbol('symbol');
myEE.on(sym, () => {});

console.log(myEE.eventNames());
// Prints: [ 'foo', 'bar', Symbol(symbol) ]
```

```cjs
const EventEmitter = require('node:events');

const myEE = new EventEmitter();
myEE.on('foo', () => {});
myEE.on('bar', () => {});

const sym = Symbol('symbol');
myEE.on(sym, () => {});

console.log(myEE.eventNames());
// Prints: [ 'foo', 'bar', Symbol(symbol) ]
```
emitter.getMaxListeners()#
- Returns: <integer>

Returns the current max listener value for the `EventEmitter` which is either set by `emitter.setMaxListeners(n)` or defaults to `events.defaultMaxListeners`.
emitter.listenerCount(eventName[, listener])#
History
| Version | Changes |
|---|---|
| v19.8.0, v18.16.0 | Added the |
| v3.2.0 | Added in: v3.2.0 |
- `eventName` <string> | <symbol> The name of the event being listened for
- `listener` <Function> The event handler function
- Returns: <integer>

Returns the number of listeners listening for the event named `eventName`. If `listener` is provided, it will return how many times the listener is found in the list of the listeners of the event.
emitter.listeners(eventName)#
History
| Version | Changes |
|---|---|
| v7.0.0 | For listeners attached using |
| v0.1.26 | Added in: v0.1.26 |
- `eventName` <string> | <symbol>
- Returns: <Function[]>

Returns a copy of the array of listeners for the event named `eventName`.
```js
server.on('connection', (stream) => {
  console.log('someone connected!');
});
console.log(util.inspect(server.listeners('connection')));
// Prints: [ [Function] ]
```
emitter.off(eventName, listener)#
- `eventName` <string> | <symbol>
- `listener` <Function>
- Returns: <EventEmitter>

Alias for `emitter.removeListener()`.
emitter.on(eventName, listener)#
- `eventName` <string> | <symbol> The name of the event.
- `listener` <Function> The callback function
- Returns: <EventEmitter>

Adds the `listener` function to the end of the listeners array for the event named `eventName`. No checks are made to see if the `listener` has already been added. Multiple calls passing the same combination of `eventName` and `listener` will result in the `listener` being added, and called, multiple times.
```js
server.on('connection', (stream) => {
  console.log('someone connected!');
});
```
Returns a reference to the `EventEmitter`, so that calls can be chained.
By default, event listeners are invoked in the order they are added. The `emitter.prependListener()` method can be used as an alternative to add the event listener to the beginning of the listeners array.
```mjs
import { EventEmitter } from 'node:events';
const myEE = new EventEmitter();
myEE.on('foo', () => console.log('a'));
myEE.prependListener('foo', () => console.log('b'));
myEE.emit('foo');
// Prints:
//   b
//   a
```

```cjs
const EventEmitter = require('node:events');
const myEE = new EventEmitter();
myEE.on('foo', () => console.log('a'));
myEE.prependListener('foo', () => console.log('b'));
myEE.emit('foo');
// Prints:
//   b
//   a
```
emitter.once(eventName, listener)#
- `eventName` <string> | <symbol> The name of the event.
- `listener` <Function> The callback function
- Returns: <EventEmitter>

Adds a one-time `listener` function for the event named `eventName`. The next time `eventName` is triggered, this listener is removed and then invoked.
```js
server.once('connection', (stream) => {
  console.log('Ah, we have our first user!');
});
```
Returns a reference to the `EventEmitter`, so that calls can be chained.
By default, event listeners are invoked in the order they are added. The `emitter.prependOnceListener()` method can be used as an alternative to add the event listener to the beginning of the listeners array.
```mjs
import { EventEmitter } from 'node:events';
const myEE = new EventEmitter();
myEE.once('foo', () => console.log('a'));
myEE.prependOnceListener('foo', () => console.log('b'));
myEE.emit('foo');
// Prints:
//   b
//   a
```

```cjs
const EventEmitter = require('node:events');
const myEE = new EventEmitter();
myEE.once('foo', () => console.log('a'));
myEE.prependOnceListener('foo', () => console.log('b'));
myEE.emit('foo');
// Prints:
//   b
//   a
```
emitter.prependListener(eventName, listener)#
- `eventName` <string> | <symbol> The name of the event.
- `listener` <Function> The callback function
- Returns: <EventEmitter>

Adds the `listener` function to the beginning of the listeners array for the event named `eventName`. No checks are made to see if the `listener` has already been added. Multiple calls passing the same combination of `eventName` and `listener` will result in the `listener` being added, and called, multiple times.
```js
server.prependListener('connection', (stream) => {
  console.log('someone connected!');
});
```
Returns a reference to the `EventEmitter`, so that calls can be chained.
emitter.prependOnceListener(eventName, listener)#
- `eventName` <string> | <symbol> The name of the event.
- `listener` <Function> The callback function
- Returns: <EventEmitter>

Adds a one-time `listener` function for the event named `eventName` to the beginning of the listeners array. The next time `eventName` is triggered, this listener is removed, and then invoked.
```js
server.prependOnceListener('connection', (stream) => {
  console.log('Ah, we have our first user!');
});
```
Returns a reference to the `EventEmitter`, so that calls can be chained.
emitter.removeAllListeners([eventName])#
- `eventName` <string> | <symbol>
- Returns: <EventEmitter>

Removes all listeners, or those of the specified `eventName`.
It is bad practice to remove listeners added elsewhere in the code, particularly when the `EventEmitter` instance was created by some other component or module (e.g. sockets or file streams).
Returns a reference to the `EventEmitter`, so that calls can be chained.
emitter.removeListener(eventName, listener)#
- `eventName` <string> | <symbol>
- `listener` <Function>
- Returns: <EventEmitter>

Removes the specified `listener` from the listener array for the event named `eventName`.
```js
const callback = (stream) => {
  console.log('someone connected!');
};
server.on('connection', callback);
// ...
server.removeListener('connection', callback);
```
`removeListener()` will remove, at most, one instance of a listener from the listener array. If any single listener has been added multiple times to the listener array for the specified `eventName`, then `removeListener()` must be called multiple times to remove each instance.
Once an event is emitted, all listeners attached to it at the time of emitting are called in order. This implies that any `removeListener()` or `removeAllListeners()` calls after emitting and before the last listener finishes execution will not remove them from `emit()` in progress. Subsequent events behave as expected.
```mjs
import { EventEmitter } from 'node:events';
class MyEmitter extends EventEmitter {}
const myEmitter = new MyEmitter();

const callbackA = () => {
  console.log('A');
  myEmitter.removeListener('event', callbackB);
};

const callbackB = () => {
  console.log('B');
};

myEmitter.on('event', callbackA);
myEmitter.on('event', callbackB);

// callbackA removes listener callbackB but it will still be called.
// Internal listener array at time of emit [callbackA, callbackB]
myEmitter.emit('event');
// Prints:
//   A
//   B

// callbackB is now removed.
// Internal listener array [callbackA]
myEmitter.emit('event');
// Prints:
//   A
```

```cjs
const EventEmitter = require('node:events');
class MyEmitter extends EventEmitter {}
const myEmitter = new MyEmitter();

const callbackA = () => {
  console.log('A');
  myEmitter.removeListener('event', callbackB);
};

const callbackB = () => {
  console.log('B');
};

myEmitter.on('event', callbackA);
myEmitter.on('event', callbackB);

// callbackA removes listener callbackB but it will still be called.
// Internal listener array at time of emit [callbackA, callbackB]
myEmitter.emit('event');
// Prints:
//   A
//   B

// callbackB is now removed.
// Internal listener array [callbackA]
myEmitter.emit('event');
// Prints:
//   A
```
Because listeners are managed using an internal array, calling this will change the position indexes of any listener registered after the listener being removed. This will not impact the order in which listeners are called, but it means that any copies of the listener array as returned by the `emitter.listeners()` method will need to be recreated.
When a single function has been added as a handler multiple times for a single event (as in the example below), `removeListener()` will remove the most recently added instance. In the example the `once('ping')` listener is removed:
```mjs
import { EventEmitter } from 'node:events';
const ee = new EventEmitter();

function pong() {
  console.log('pong');
}

ee.on('ping', pong);
ee.once('ping', pong);
ee.removeListener('ping', pong);

ee.emit('ping');
ee.emit('ping');
```

```cjs
const EventEmitter = require('node:events');
const ee = new EventEmitter();

function pong() {
  console.log('pong');
}

ee.on('ping', pong);
ee.once('ping', pong);
ee.removeListener('ping', pong);

ee.emit('ping');
ee.emit('ping');
```
Returns a reference to the `EventEmitter`, so that calls can be chained.
emitter.setMaxListeners(n)#
- `n` <integer>
- Returns: <EventEmitter>

By default `EventEmitter`s will print a warning if more than `10` listeners are added for a particular event. This is a useful default that helps find memory leaks. The `emitter.setMaxListeners()` method allows the limit to be modified for this specific `EventEmitter` instance. The value can be set to `Infinity` (or `0`) to indicate an unlimited number of listeners.
Returns a reference to the `EventEmitter`, so that calls can be chained.
emitter.rawListeners(eventName)#
- `eventName` <string> | <symbol>
- Returns: <Function[]>

Returns a copy of the array of listeners for the event named `eventName`, including any wrappers (such as those created by `.once()`).
```mjs
import { EventEmitter } from 'node:events';
const emitter = new EventEmitter();
emitter.once('log', () => console.log('log once'));

// Returns a new Array with a function `onceWrapper` which has a property
// `listener` which contains the original listener bound above
const listeners = emitter.rawListeners('log');
const logFnWrapper = listeners[0];

// Logs "log once" to the console and does not unbind the `once` event
logFnWrapper.listener();

// Logs "log once" to the console and removes the listener
logFnWrapper();

emitter.on('log', () => console.log('log persistently'));
// Will return a new Array with a single function bound by `.on()` above
const newListeners = emitter.rawListeners('log');

// Logs "log persistently" twice
newListeners[0]();
emitter.emit('log');
```

```cjs
const EventEmitter = require('node:events');
const emitter = new EventEmitter();
emitter.once('log', () => console.log('log once'));

// Returns a new Array with a function `onceWrapper` which has a property
// `listener` which contains the original listener bound above
const listeners = emitter.rawListeners('log');
const logFnWrapper = listeners[0];

// Logs "log once" to the console and does not unbind the `once` event
logFnWrapper.listener();

// Logs "log once" to the console and removes the listener
logFnWrapper();

emitter.on('log', () => console.log('log persistently'));
// Will return a new Array with a single function bound by `.on()` above
const newListeners = emitter.rawListeners('log');

// Logs "log persistently" twice
newListeners[0]();
emitter.emit('log');
```
emitter[Symbol.for('nodejs.rejection')](err, eventName[, ...args])#
History
| Version | Changes |
|---|---|
| v17.4.0, v16.14.0 | No longer experimental. |
| v13.4.0, v12.16.0 | Added in: v13.4.0, v12.16.0 |
The `Symbol.for('nodejs.rejection')` method is called in case a promise rejection happens when emitting an event and `captureRejections` is enabled on the emitter. It is possible to use `events.captureRejectionSymbol` in place of `Symbol.for('nodejs.rejection')`.
```mjs
import { EventEmitter, captureRejectionSymbol } from 'node:events';

class MyClass extends EventEmitter {
  constructor() {
    super({ captureRejections: true });
  }

  [captureRejectionSymbol](err, event, ...args) {
    console.log('rejection happened for', event, 'with', err, ...args);
    this.destroy(err);
  }

  destroy(err) {
    // Tear the resource down here.
  }
}
```

```cjs
const { EventEmitter, captureRejectionSymbol } = require('node:events');

class MyClass extends EventEmitter {
  constructor() {
    super({ captureRejections: true });
  }

  [captureRejectionSymbol](err, event, ...args) {
    console.log('rejection happened for', event, 'with', err, ...args);
    this.destroy(err);
  }

  destroy(err) {
    // Tear the resource down here.
  }
}
```
events.defaultMaxListeners#
By default, a maximum of `10` listeners can be registered for any single event. This limit can be changed for individual `EventEmitter` instances using the `emitter.setMaxListeners(n)` method. To change the default for all `EventEmitter` instances, the `events.defaultMaxListeners` property can be used. If this value is not a positive number, a `RangeError` is thrown.
Take caution when setting the `events.defaultMaxListeners` because the change affects all `EventEmitter` instances, including those created before the change is made. However, calling `emitter.setMaxListeners(n)` still has precedence over `events.defaultMaxListeners`.
This is not a hard limit. The `EventEmitter` instance will allow more listeners to be added but will output a trace warning to stderr indicating that a "possible EventEmitter memory leak" has been detected. For any single `EventEmitter`, the `emitter.getMaxListeners()` and `emitter.setMaxListeners()` methods can be used to temporarily avoid this warning:
`defaultMaxListeners` has no effect on `AbortSignal` instances. While it is still possible to use `emitter.setMaxListeners(n)` to set a warning limit for individual `AbortSignal` instances, by default `AbortSignal` instances will not warn.
```mjs
import { EventEmitter } from 'node:events';
const emitter = new EventEmitter();
emitter.setMaxListeners(emitter.getMaxListeners() + 1);
emitter.once('event', () => {
  // do stuff
  emitter.setMaxListeners(Math.max(emitter.getMaxListeners() - 1, 0));
});
```

```cjs
const EventEmitter = require('node:events');
const emitter = new EventEmitter();
emitter.setMaxListeners(emitter.getMaxListeners() + 1);
emitter.once('event', () => {
  // do stuff
  emitter.setMaxListeners(Math.max(emitter.getMaxListeners() - 1, 0));
});
```
The `--trace-warnings` command-line flag can be used to display the stack trace for such warnings.
The emitted warning can be inspected with `process.on('warning')` and will have the additional `emitter`, `type`, and `count` properties, referring to the event emitter instance, the event's name and the number of attached listeners, respectively. Its `name` property is set to `'MaxListenersExceededWarning'`.
events.errorMonitor#
This symbol shall be used to install a listener for only monitoring `'error'` events. Listeners installed using this symbol are called before the regular `'error'` listeners are called.

Installing a listener using this symbol does not change the behavior once an `'error'` event is emitted. Therefore, the process will still crash if no regular `'error'` listener is installed.
events.getEventListeners(emitterOrTarget, eventName)#
- `emitterOrTarget` <EventEmitter> | <EventTarget>
- `eventName` <string> | <symbol>
- Returns: <Function[]>

Returns a copy of the array of listeners for the event named `eventName`.

For `EventEmitter`s this behaves exactly the same as calling `.listeners` on the emitter.

For `EventTarget`s this is the only way to get the event listeners for the event target. This is useful for debugging and diagnostic purposes.

```mjs
import { getEventListeners, EventEmitter } from 'node:events';

{
  const ee = new EventEmitter();
  const listener = () => console.log('Events are fun');
  ee.on('foo', listener);
  console.log(getEventListeners(ee, 'foo')); // [ [Function: listener] ]
}
{
  const et = new EventTarget();
  const listener = () => console.log('Events are fun');
  et.addEventListener('foo', listener);
  console.log(getEventListeners(et, 'foo')); // [ [Function: listener] ]
}
```

```cjs
const { getEventListeners, EventEmitter } = require('node:events');

{
  const ee = new EventEmitter();
  const listener = () => console.log('Events are fun');
  ee.on('foo', listener);
  console.log(getEventListeners(ee, 'foo')); // [ [Function: listener] ]
}
{
  const et = new EventTarget();
  const listener = () => console.log('Events are fun');
  et.addEventListener('foo', listener);
  console.log(getEventListeners(et, 'foo')); // [ [Function: listener] ]
}
```
events.getMaxListeners(emitterOrTarget)#
- `emitterOrTarget` <EventEmitter> | <EventTarget>
- Returns: <number>

Returns the currently set maximum number of listeners.

For `EventEmitter`s this behaves exactly the same as calling `.getMaxListeners` on the emitter.

For `EventTarget`s this is the only way to get the max event listeners for the event target. If the number of event handlers on a single `EventTarget` exceeds the max set, the `EventTarget` will print a warning.

```mjs
import { getMaxListeners, setMaxListeners, EventEmitter } from 'node:events';

{
  const ee = new EventEmitter();
  console.log(getMaxListeners(ee)); // 10
  setMaxListeners(11, ee);
  console.log(getMaxListeners(ee)); // 11
}
{
  const et = new EventTarget();
  console.log(getMaxListeners(et)); // 10
  setMaxListeners(11, et);
  console.log(getMaxListeners(et)); // 11
}
```

```cjs
const { getMaxListeners, setMaxListeners, EventEmitter } = require('node:events');

{
  const ee = new EventEmitter();
  console.log(getMaxListeners(ee)); // 10
  setMaxListeners(11, ee);
  console.log(getMaxListeners(ee)); // 11
}
{
  const et = new EventTarget();
  console.log(getMaxListeners(et)); // 10
  setMaxListeners(11, et);
  console.log(getMaxListeners(et)); // 11
}
```
events.once(emitter, name[, options])#
History
| Version | Changes |
|---|---|
| v15.0.0 | The |
| v11.13.0, v10.16.0 | Added in: v11.13.0, v10.16.0 |
- `emitter` <EventEmitter>
- `name` <string> | <symbol>
- `options` <Object>
  - `signal` <AbortSignal> Can be used to cancel waiting for the event.
- Returns: <Promise>

Creates a `Promise` that is fulfilled when the `EventEmitter` emits the given event or that is rejected if the `EventEmitter` emits `'error'` while waiting. The `Promise` will resolve with an array of all the arguments emitted to the given event.

This method is intentionally generic and works with the web platform EventTarget interface, which has no special `'error'` event semantics and does not listen to the `'error'` event.
```mjs
import { once, EventEmitter } from 'node:events';
import process from 'node:process';

const ee = new EventEmitter();

process.nextTick(() => {
  ee.emit('myevent', 42);
});

const [value] = await once(ee, 'myevent');
console.log(value);

const err = new Error('kaboom');
process.nextTick(() => {
  ee.emit('error', err);
});

try {
  await once(ee, 'myevent');
} catch (err) {
  console.error('error happened', err);
}
```

```cjs
const { once, EventEmitter } = require('node:events');

async function run() {
  const ee = new EventEmitter();

  process.nextTick(() => {
    ee.emit('myevent', 42);
  });

  const [value] = await once(ee, 'myevent');
  console.log(value);

  const err = new Error('kaboom');
  process.nextTick(() => {
    ee.emit('error', err);
  });

  try {
    await once(ee, 'myevent');
  } catch (err) {
    console.error('error happened', err);
  }
}

run();
```

The special handling of the `'error'` event is only used when `events.once()` is used to wait for another event. If `events.once()` is used to wait for the `'error'` event itself, then it is treated as any other kind of event without special handling:

```mjs
import { EventEmitter, once } from 'node:events';

const ee = new EventEmitter();

once(ee, 'error')
  .then(([err]) => console.log('ok', err.message))
  .catch((err) => console.error('error', err.message));

ee.emit('error', new Error('boom'));

// Prints: ok boom
```

```cjs
const { EventEmitter, once } = require('node:events');

const ee = new EventEmitter();

once(ee, 'error')
  .then(([err]) => console.log('ok', err.message))
  .catch((err) => console.error('error', err.message));

ee.emit('error', new Error('boom'));

// Prints: ok boom
```
An <AbortSignal> can be used to cancel waiting for the event:

```mjs
import { EventEmitter, once } from 'node:events';

const ee = new EventEmitter();
const ac = new AbortController();

async function foo(emitter, event, signal) {
  try {
    await once(emitter, event, { signal });
    console.log('event emitted!');
  } catch (error) {
    if (error.name === 'AbortError') {
      console.error('Waiting for the event was canceled!');
    } else {
      console.error('There was an error', error.message);
    }
  }
}

foo(ee, 'foo', ac.signal);
ac.abort(); // Prints: Waiting for the event was canceled!
```

```cjs
const { EventEmitter, once } = require('node:events');

const ee = new EventEmitter();
const ac = new AbortController();

async function foo(emitter, event, signal) {
  try {
    await once(emitter, event, { signal });
    console.log('event emitted!');
  } catch (error) {
    if (error.name === 'AbortError') {
      console.error('Waiting for the event was canceled!');
    } else {
      console.error('There was an error', error.message);
    }
  }
}

foo(ee, 'foo', ac.signal);
ac.abort(); // Prints: Waiting for the event was canceled!
```
Awaiting multiple events emitted on `process.nextTick()`#

There is an edge case worth noting when using the `events.once()` function to await multiple events emitted in the same batch of `process.nextTick()` operations, or whenever multiple events are emitted synchronously. Specifically, because the `process.nextTick()` queue is drained before the `Promise` microtask queue, and because `EventEmitter` emits all events synchronously, it is possible for `events.once()` to miss an event.

```mjs
import { EventEmitter, once } from 'node:events';
import process from 'node:process';

const myEE = new EventEmitter();

async function foo() {
  await once(myEE, 'bar');
  console.log('bar');

  // This Promise will never resolve because the 'foo' event will
  // have already been emitted before the Promise is created.
  await once(myEE, 'foo');
  console.log('foo');
}

process.nextTick(() => {
  myEE.emit('bar');
  myEE.emit('foo');
});

foo().then(() => console.log('done'));
```

```cjs
const { EventEmitter, once } = require('node:events');

const myEE = new EventEmitter();

async function foo() {
  await once(myEE, 'bar');
  console.log('bar');

  // This Promise will never resolve because the 'foo' event will
  // have already been emitted before the Promise is created.
  await once(myEE, 'foo');
  console.log('foo');
}

process.nextTick(() => {
  myEE.emit('bar');
  myEE.emit('foo');
});

foo().then(() => console.log('done'));
```

To catch both events, create each of the Promises before awaiting either of them; then it becomes possible to use `Promise.all()`, `Promise.race()`, or `Promise.allSettled()`:

```mjs
import { EventEmitter, once } from 'node:events';
import process from 'node:process';

const myEE = new EventEmitter();

async function foo() {
  await Promise.all([once(myEE, 'bar'), once(myEE, 'foo')]);
  console.log('foo', 'bar');
}

process.nextTick(() => {
  myEE.emit('bar');
  myEE.emit('foo');
});

foo().then(() => console.log('done'));
```

```cjs
const { EventEmitter, once } = require('node:events');

const myEE = new EventEmitter();

async function foo() {
  await Promise.all([once(myEE, 'bar'), once(myEE, 'foo')]);
  console.log('foo', 'bar');
}

process.nextTick(() => {
  myEE.emit('bar');
  myEE.emit('foo');
});

foo().then(() => console.log('done'));
```
events.captureRejections#
History
| Version | Changes |
|---|---|
| v17.4.0, v16.14.0 | No longer experimental. |
| v13.4.0, v12.16.0 | Added in: v13.4.0, v12.16.0 |
- Type: <boolean>

Change the default `captureRejections` option on all new `EventEmitter` objects.
events.captureRejectionSymbol#
History
| Version | Changes |
|---|---|
| v17.4.0, v16.14.0 | No longer experimental. |
| v13.4.0, v12.16.0 | Added in: v13.4.0, v12.16.0 |
- Type: <symbol>

Value: `Symbol.for('nodejs.rejection')`

See how to write a custom rejection handler.
events.listenerCount(emitterOrTarget, eventName)#
History
| Version | Changes |
|---|---|
| v25.4.0 | Now accepts EventTarget arguments. |
| v25.4.0 | Deprecation revoked. |
| v3.2.0 | Documentation-only deprecation. |
| v0.9.12 | Added in: v0.9.12 |
- `emitterOrTarget` <EventEmitter> | <EventTarget>
- `eventName` <string> | <symbol>
- Returns: <integer>

Returns the number of registered listeners for the event named `eventName`.

For `EventEmitter`s this behaves exactly the same as calling `.listenerCount` on the emitter.

For `EventTarget`s this is the only way to obtain the listener count. This can be useful for debugging and diagnostic purposes.

```mjs
import { EventEmitter, listenerCount } from 'node:events';

{
  const ee = new EventEmitter();
  ee.on('event', () => {});
  ee.on('event', () => {});
  console.log(listenerCount(ee, 'event')); // 2
}
{
  const et = new EventTarget();
  et.addEventListener('event', () => {});
  et.addEventListener('event', () => {});
  console.log(listenerCount(et, 'event')); // 2
}
```

```cjs
const { EventEmitter, listenerCount } = require('node:events');

{
  const ee = new EventEmitter();
  ee.on('event', () => {});
  ee.on('event', () => {});
  console.log(listenerCount(ee, 'event')); // 2
}
{
  const et = new EventTarget();
  et.addEventListener('event', () => {});
  et.addEventListener('event', () => {});
  console.log(listenerCount(et, 'event')); // 2
}
```
events.on(emitter, eventName[, options])#
History
| Version | Changes |
|---|---|
| v22.0.0, v20.13.0 | Support |
| v20.0.0 | The |
| v13.6.0, v12.16.0 | Added in: v13.6.0, v12.16.0 |
- `emitter` <EventEmitter>
- `eventName` <string> | <symbol> The name of the event being listened for
- `options` <Object>
  - `signal` <AbortSignal> Can be used to cancel awaiting events.
  - `close` <string[]> Names of events that will end the iteration.
  - `highWaterMark` <integer> **Default:** `Number.MAX_SAFE_INTEGER` The high watermark. The emitter is paused every time the size of events being buffered is higher than it. Supported only on emitters implementing `pause()` and `resume()` methods.
  - `lowWaterMark` <integer> **Default:** `1` The low watermark. The emitter is resumed every time the size of events being buffered is lower than it. Supported only on emitters implementing `pause()` and `resume()` methods.
- Returns: <AsyncIterator> that iterates `eventName` events emitted by the `emitter`

```mjs
import { on, EventEmitter } from 'node:events';
import process from 'node:process';

const ee = new EventEmitter();

// Emit later on
process.nextTick(() => {
  ee.emit('foo', 'bar');
  ee.emit('foo', 42);
});

for await (const event of on(ee, 'foo')) {
  // The execution of this inner block is synchronous and it
  // processes one event at a time (even with await). Do not use
  // if concurrent execution is required.
  console.log(event); // prints ['bar'] [42]
}
// Unreachable here
```

```cjs
const { on, EventEmitter } = require('node:events');

(async () => {
  const ee = new EventEmitter();

  // Emit later on
  process.nextTick(() => {
    ee.emit('foo', 'bar');
    ee.emit('foo', 42);
  });

  for await (const event of on(ee, 'foo')) {
    // The execution of this inner block is synchronous and it
    // processes one event at a time (even with await). Do not use
    // if concurrent execution is required.
    console.log(event); // prints ['bar'] [42]
  }
  // Unreachable here
})();
```

Returns an `AsyncIterator` that iterates `eventName` events. It will throw if the `EventEmitter` emits `'error'`. It removes all listeners when exiting the loop. The `value` returned by each iteration is an array composed of the emitted event arguments.

An <AbortSignal> can be used to cancel waiting on events:

```mjs
import { on, EventEmitter } from 'node:events';
import process from 'node:process';

const ac = new AbortController();

(async () => {
  const ee = new EventEmitter();

  // Emit later on
  process.nextTick(() => {
    ee.emit('foo', 'bar');
    ee.emit('foo', 42);
  });

  for await (const event of on(ee, 'foo', { signal: ac.signal })) {
    // The execution of this inner block is synchronous and it
    // processes one event at a time (even with await). Do not use
    // if concurrent execution is required.
    console.log(event); // prints ['bar'] [42]
  }
  // Unreachable here
})();

process.nextTick(() => ac.abort());
```

```cjs
const { on, EventEmitter } = require('node:events');

const ac = new AbortController();

(async () => {
  const ee = new EventEmitter();

  // Emit later on
  process.nextTick(() => {
    ee.emit('foo', 'bar');
    ee.emit('foo', 42);
  });

  for await (const event of on(ee, 'foo', { signal: ac.signal })) {
    // The execution of this inner block is synchronous and it
    // processes one event at a time (even with await). Do not use
    // if concurrent execution is required.
    console.log(event); // prints ['bar'] [42]
  }
  // Unreachable here
})();

process.nextTick(() => ac.abort());
```
events.setMaxListeners(n[, ...eventTargets])#
- `n` <number> A non-negative number. The maximum number of listeners per `EventTarget` event.
- `...eventTargets` <EventTarget[]> | <EventEmitter[]> Zero or more <EventTarget> or <EventEmitter> instances. If none are specified, `n` is set as the default max for all newly created <EventTarget> and <EventEmitter> objects.

```mjs
import { setMaxListeners, EventEmitter } from 'node:events';

const target = new EventTarget();
const emitter = new EventEmitter();
setMaxListeners(5, target, emitter);
```

```cjs
const {
  setMaxListeners,
  EventEmitter,
} = require('node:events');

const target = new EventTarget();
const emitter = new EventEmitter();
setMaxListeners(5, target, emitter);
```
events.addAbortListener(signal, listener)#
History
| Version | Changes |
|---|---|
| v24.0.0, v22.16.0 | Change stability index for this feature from Experimental to Stable. |
| v20.5.0, v18.18.0 | Added in: v20.5.0, v18.18.0 |
- `signal` <AbortSignal>
- `listener` <Function> | <EventListener>
- Returns: <Disposable> A Disposable that removes the `abort` listener.

Listens once to the `abort` event on the provided `signal`.

Listening to the `abort` event on abort signals is unsafe and may lead to resource leaks since another third party with the signal can call `e.stopImmediatePropagation()`. Unfortunately Node.js cannot change this since it would violate the web standard. Additionally, the original API makes it easy to forget to remove listeners.

This API allows safely using `AbortSignal`s in Node.js APIs by solving these two issues by listening to the event such that `stopImmediatePropagation` does not prevent the listener from running.

Returns a disposable so that it may be unsubscribed from more easily.

```cjs
const { addAbortListener } = require('node:events');

function example(signal) {
  let disposable;
  try {
    signal.addEventListener('abort', (e) => e.stopImmediatePropagation());
    disposable = addAbortListener(signal, (e) => {
      // Do something when signal is aborted.
    });
  } finally {
    disposable?.[Symbol.dispose]();
  }
}
```

```mjs
import { addAbortListener } from 'node:events';

function example(signal) {
  let disposable;
  try {
    signal.addEventListener('abort', (e) => e.stopImmediatePropagation());
    disposable = addAbortListener(signal, (e) => {
      // Do something when signal is aborted.
    });
  } finally {
    disposable?.[Symbol.dispose]();
  }
}
```
Class: `events.EventEmitterAsyncResource` extends `EventEmitter`#

Integrates `EventEmitter` with <AsyncResource> for `EventEmitter`s that require manual async tracking. Specifically, all events emitted by instances of `events.EventEmitterAsyncResource` will run within its async context.

```mjs
import { EventEmitterAsyncResource, EventEmitter } from 'node:events';
import { notStrictEqual, strictEqual } from 'node:assert';
import { executionAsyncId, triggerAsyncId } from 'node:async_hooks';

// Async tracking tooling will identify this as 'Q'.
const ee1 = new EventEmitterAsyncResource({ name: 'Q' });

// 'foo' listeners will run in the EventEmitter's async context.
ee1.on('foo', () => {
  strictEqual(executionAsyncId(), ee1.asyncId);
  strictEqual(triggerAsyncId(), ee1.triggerAsyncId);
});

const ee2 = new EventEmitter();

// 'foo' listeners on ordinary EventEmitters that do not track async
// context, however, run in the same async context as the emit().
ee2.on('foo', () => {
  notStrictEqual(executionAsyncId(), ee2.asyncId);
  notStrictEqual(triggerAsyncId(), ee2.triggerAsyncId);
});

Promise.resolve().then(() => {
  ee1.emit('foo');
  ee2.emit('foo');
});
```

```cjs
const { EventEmitterAsyncResource, EventEmitter } = require('node:events');
const { notStrictEqual, strictEqual } = require('node:assert');
const { executionAsyncId, triggerAsyncId } = require('node:async_hooks');

// Async tracking tooling will identify this as 'Q'.
const ee1 = new EventEmitterAsyncResource({ name: 'Q' });

// 'foo' listeners will run in the EventEmitter's async context.
ee1.on('foo', () => {
  strictEqual(executionAsyncId(), ee1.asyncId);
  strictEqual(triggerAsyncId(), ee1.triggerAsyncId);
});

const ee2 = new EventEmitter();

// 'foo' listeners on ordinary EventEmitters that do not track async
// context, however, run in the same async context as the emit().
ee2.on('foo', () => {
  notStrictEqual(executionAsyncId(), ee2.asyncId);
  notStrictEqual(triggerAsyncId(), ee2.triggerAsyncId);
});

Promise.resolve().then(() => {
  ee1.emit('foo');
  ee2.emit('foo');
});
```

The `EventEmitterAsyncResource` class has the same methods and takes the same options as `EventEmitter` and `AsyncResource` themselves.
new events.EventEmitterAsyncResource([options])#
- `options` <Object>
  - `captureRejections` <boolean> It enables automatic capturing of promise rejection. **Default:** `false`.
  - `name` <string> The type of async event. **Default:** `new.target.name`.
  - `triggerAsyncId` <number> The ID of the execution context that created this async event. **Default:** `executionAsyncId()`.
  - `requireManualDestroy` <boolean> If set to `true`, disables `emitDestroy` when the object is garbage collected. This usually does not need to be set (even if `emitDestroy` is called manually), unless the resource's `asyncId` is retrieved and the sensitive API's `emitDestroy` is called with it. When set to `false`, the `emitDestroy` call on garbage collection will only take place if there is at least one active `destroy` hook. **Default:** `false`.
eventemitterasyncresource.asyncResource#
- Type: <AsyncResource> The underlying <AsyncResource>.

The returned `AsyncResource` object has an additional `eventEmitter` property that provides a reference to this `EventEmitterAsyncResource`.
eventemitterasyncresource.emitDestroy()#
Call all `destroy` hooks. This should only ever be called once. An error will be thrown if it is called more than once. This **must** be manually called. If the resource is left to be collected by the GC then the `destroy` hooks will never be called.
`EventTarget` and `Event` API#
History
| Version | Changes |
|---|---|
| v16.0.0 | changed EventTarget error handling. |
| v15.4.0 | No longer experimental. |
| v15.0.0 | The |
| v14.5.0 | Added in: v14.5.0 |
The `EventTarget` and `Event` objects are a Node.js-specific implementation of the `EventTarget` Web API that are exposed by some Node.js core APIs.

```js
const target = new EventTarget();

target.addEventListener('foo', (event) => {
  console.log('foo event happened!');
});
```

Node.js `EventTarget` vs. DOM `EventTarget`#

There are two key differences between the Node.js `EventTarget` and the `EventTarget` Web API:

- Whereas DOM `EventTarget` instances *may* be hierarchical, there is no concept of hierarchy and event propagation in Node.js. That is, an event dispatched to an `EventTarget` does not propagate through a hierarchy of nested target objects that may each have their own set of handlers for the event.
- In the Node.js `EventTarget`, if an event listener is an async function or returns a `Promise`, and the returned `Promise` rejects, the rejection is automatically captured and handled the same way as a listener that throws synchronously (see `EventTarget` error handling for details).
`NodeEventTarget` vs. `EventEmitter`#

The `NodeEventTarget` object implements a modified subset of the `EventEmitter` API that allows it to closely *emulate* an `EventEmitter` in certain situations. A `NodeEventTarget` is *not* an instance of `EventEmitter` and cannot be used in place of an `EventEmitter` in most cases.

- Unlike `EventEmitter`, any given `listener` can be registered at most once per event `type`. Attempts to register a `listener` multiple times are ignored.
- The `NodeEventTarget` does not emulate the full `EventEmitter` API. Specifically the `prependListener()`, `prependOnceListener()`, `rawListeners()`, and `errorMonitor` APIs are not emulated. The `'newListener'` and `'removeListener'` events will also not be emitted.
- The `NodeEventTarget` does not implement any special default behavior for events with type `'error'`.
- The `NodeEventTarget` supports `EventListener` objects as well as functions as handlers for all event types.
Event listener#
Event listeners registered for an event `type` may either be JavaScript functions or objects with a `handleEvent` property whose value is a function.

In either case, the handler function is invoked with the `event` argument passed to the `eventTarget.dispatchEvent()` function.

Async functions may be used as event listeners. If an async handler function rejects, the rejection is captured and handled as described in `EventTarget` error handling.

An error thrown by one handler function does not prevent the other handlers from being invoked.

The return value of a handler function is ignored.

Handlers are always invoked in the order they were added.

Handler functions may mutate the `event` object.
```js
function handler1(event) {
  console.log(event.type);  // Prints 'foo'
  event.a = 1;
}

async function handler2(event) {
  console.log(event.type);  // Prints 'foo'
  console.log(event.a);  // Prints 1
}

const handler3 = {
  handleEvent(event) {
    console.log(event.type);  // Prints 'foo'
  },
};

const handler4 = {
  async handleEvent(event) {
    console.log(event.type);  // Prints 'foo'
  },
};

const target = new EventTarget();

target.addEventListener('foo', handler1);
target.addEventListener('foo', handler2);
target.addEventListener('foo', handler3);
target.addEventListener('foo', handler4, { once: true });
```

`EventTarget` error handling#

When a registered event listener throws (or returns a Promise that rejects), by default the error is treated as an uncaught exception on `process.nextTick()`. This means uncaught exceptions in `EventTarget`s will terminate the Node.js process by default.

Throwing within an event listener will *not* stop the other registered handlers from being invoked.

The `EventTarget` does not implement any special default handling for `'error'` type events like `EventEmitter`.

Currently errors are first forwarded to the `process.on('error')` event before reaching `process.on('uncaughtException')`. This behavior is deprecated and will change in a future release to align `EventTarget` with other Node.js APIs. Any code relying on the `process.on('error')` event should be aligned with the new behavior.
Class: `Event`#
History
| Version | Changes |
|---|---|
| v15.0.0 | The |
| v14.5.0 | Added in: v14.5.0 |
The `Event` object is an adaptation of the `Event` Web API. Instances are created internally by Node.js.
event.bubbles#
- Type: <boolean> Always returns `false`.

This is not used in Node.js and is provided purely for completeness.
event.cancelBubble#
Use `event.stopPropagation()` instead.

- Type: <boolean>

Alias for `event.stopPropagation()` if set to `true`. This is not used in Node.js and is provided purely for completeness.
event.cancelable#
- Type: <boolean> True if the event was created with the `cancelable` option.
event.composed#
- Type: <boolean> Always returns `false`.

This is not used in Node.js and is provided purely for completeness.
event.composedPath()#
Returns an array containing the current `EventTarget` as the only entry or empty if the event is not being dispatched. This is not used in Node.js and is provided purely for completeness.
event.currentTarget#
- Type: <EventTarget> The `EventTarget` dispatching the event.

Alias for `event.target`.
event.defaultPrevented#
- Type: <boolean>

Is `true` if `cancelable` is `true` and `event.preventDefault()` has been called.
event.eventPhase#
- Type: <number> Returns `0` while an event is not being dispatched, `2` while it is being dispatched.

This is not used in Node.js and is provided purely for completeness.
event.initEvent(type[, bubbles[, cancelable]])#
Redundant with event constructors and incapable of setting `composed`. This is not used in Node.js and is provided purely for completeness.
event.isTrusted#
- Type: <boolean>

The <AbortSignal> `"abort"` event is emitted with `isTrusted` set to `true`. The value is `false` in all other cases.
event.preventDefault()#
Sets the `defaultPrevented` property to `true` if `cancelable` is `true`.
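For example, `preventDefault()` only has an effect when the event was constructed as cancelable (the event name `'demo'` is illustrative):

```javascript
// `Event` is a global in Node.js (v15.0.0+).
const cancelable = new Event('demo', { cancelable: true });
const plain = new Event('demo'); // `cancelable` defaults to false

cancelable.preventDefault();
plain.preventDefault(); // no effect: the event is not cancelable

console.log(cancelable.defaultPrevented); // true
console.log(plain.defaultPrevented); // false
```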
event.returnValue#
Use `event.defaultPrevented` instead.

- Type: <boolean> True if the event has not been canceled.

The value of `event.returnValue` is always the opposite of `event.defaultPrevented`. This is not used in Node.js and is provided purely for completeness.
event.srcElement#
Use `event.target` instead.

- Type: <EventTarget> The `EventTarget` dispatching the event.

Alias for `event.target`.
event.stopImmediatePropagation()#
Stops the invocation of event listeners after the current one completes.
event.stopPropagation()#
This is not used in Node.js and is provided purely for completeness.
Class: `EventTarget`#
History
| Version | Changes |
|---|---|
| v15.0.0 | The |
| v14.5.0 | Added in: v14.5.0 |
eventTarget.addEventListener(type, listener[, options])#
History
| Version | Changes |
|---|---|
| v15.4.0 | add support for |
| v14.5.0 | Added in: v14.5.0 |
- `type` <string>
- `listener` <Function> | <EventListener>
- `options` <Object>
  - `once` <boolean> When `true`, the listener is automatically removed when it is first invoked. **Default:** `false`.
  - `passive` <boolean> When `true`, serves as a hint that the listener will not call the `Event` object's `preventDefault()` method. **Default:** `false`.
  - `capture` <boolean> Not directly used by Node.js. Added for API completeness. **Default:** `false`.
  - `signal` <AbortSignal> The listener will be removed when the given `AbortSignal` object's `abort()` method is called.

Adds a new handler for the `type` event. Any given `listener` is added only once per `type` and per `capture` option value.

If the `once` option is `true`, the `listener` is removed after the next time a `type` event is dispatched.

The `capture` option is not used by Node.js in any functional way other than tracking registered event listeners per the `EventTarget` specification. Specifically, the `capture` option is used as part of the key when registering a `listener`. Any individual `listener` may be added once with `capture = false`, and once with `capture = true`.

```js
function handler(event) {}

const target = new EventTarget();
target.addEventListener('foo', handler, { capture: true });  // first
target.addEventListener('foo', handler, { capture: false }); // second

// Removes the second instance of handler
target.removeEventListener('foo', handler);

// Removes the first instance of handler
target.removeEventListener('foo', handler, { capture: true });
```

eventTarget.dispatchEvent(event)#
- `event` <Event>
- Returns: <boolean> `true` if either event's `cancelable` attribute value is `false` or its `preventDefault()` method was not invoked, otherwise `false`.

Dispatches the `event` to the list of handlers for `event.type`.

The registered event listeners are synchronously invoked in the order they were registered.
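A short sketch of the return value semantics (the event name `'save'` is illustrative):

```javascript
// `EventTarget` and `Event` are globals in Node.js (v15.0.0+).
const target = new EventTarget();

target.addEventListener('save', (event) => {
  event.preventDefault(); // only meaningful for cancelable events
});

// preventDefault() was honored, so dispatchEvent() returns false.
const first = target.dispatchEvent(new Event('save', { cancelable: true }));

// For a non-cancelable event, preventDefault() is a no-op, so it returns true.
const second = target.dispatchEvent(new Event('save'));

console.log(first, second); // false true
```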
eventTarget.removeEventListener(type, listener[, options])#
- `type` <string>
- `listener` <Function> | <EventListener>
- `options` <Object>
  - `capture` <boolean>

Removes the `listener` from the list of handlers for event `type`.
Class: `CustomEvent`#
History
| Version | Changes |
|---|---|
| v23.0.0 | No longer experimental. |
| v22.1.0, v20.13.0 | CustomEvent is now stable. |
| v19.0.0 | No longer behind |
| v18.7.0, v16.17.0 | Added in: v18.7.0, v16.17.0 |
- Extends: <Event>

The `CustomEvent` object is an adaptation of the `CustomEvent` Web API. Instances are created internally by Node.js.
Class: `NodeEventTarget`#

- Extends: <EventTarget>

The `NodeEventTarget` is a Node.js-specific extension to `EventTarget` that emulates a subset of the `EventEmitter` API.
nodeEventTarget.addListener(type, listener)#
- `type` <string>
- `listener` <Function> | <EventListener>
- Returns: <EventTarget> this

Node.js-specific extension to the `EventTarget` class that emulates the equivalent `EventEmitter` API. The only difference between `addListener()` and `addEventListener()` is that `addListener()` will return a reference to the `EventTarget`.
nodeEventTarget.emit(type, arg)#
- `type` <string>
- `arg` <any>
- Returns: <boolean> `true` if event listeners registered for the `type` exist, otherwise `false`.

Node.js-specific extension to the `EventTarget` class that dispatches the `arg` to the list of handlers for `type`.
nodeEventTarget.eventNames()#
- Returns: <string[]>

Node.js-specific extension to the `EventTarget` class that returns an array of event `type` names for which event listeners are registered.
nodeEventTarget.listenerCount(type)#
Node.js-specific extension to the `EventTarget` class that returns the number of event listeners registered for the `type`.
nodeEventTarget.setMaxListeners(n)#
Node.js-specific extension to the `EventTarget` class that sets the number of max event listeners as `n`.
nodeEventTarget.getMaxListeners()#
- Returns: <number>

Node.js-specific extension to the `EventTarget` class that returns the number of max event listeners.
nodeEventTarget.off(type, listener[, options])#
- `type` <string>
- `listener` <Function> | <EventListener>
- `options` <Object>
  - `capture` <boolean>
- Returns: <EventTarget> this

Node.js-specific alias for `eventTarget.removeEventListener()`.
nodeEventTarget.on(type, listener)#
- `type` <string>
- `listener` <Function> | <EventListener>
- Returns: <EventTarget> this

Node.js-specific alias for `eventTarget.addEventListener()`.
nodeEventTarget.once(type, listener)#
- `type` <string>
- `listener` <Function> | <EventListener>
- Returns: <EventTarget> this

Node.js-specific extension to the `EventTarget` class that adds a `once` listener for the given event `type`. This is equivalent to calling `on` with the `once` option set to `true`.
nodeEventTarget.removeAllListeners([type])#
- `type` <string>
- Returns: <EventTarget> this

Node.js-specific extension to the `EventTarget` class. If `type` is specified, removes all registered listeners for `type`, otherwise removes all registered listeners.
nodeEventTarget.removeListener(type, listener[, options])#
- `type` <string>
- `listener` <Function> | <EventListener>
- `options` <Object>
  - `capture` <boolean>
- Returns: <EventTarget> this

Node.js-specific extension to the `EventTarget` class that removes the `listener` for the given `type`. The only difference between `removeListener()` and `removeEventListener()` is that `removeListener()` will return a reference to the `EventTarget`.
File system#
Source Code:lib/fs.js
The `node:fs` module enables interacting with the file system in a way modeled on standard POSIX functions.

To use the promise-based APIs:

```mjs
import * as fs from 'node:fs/promises';
```

```cjs
const fs = require('node:fs/promises');
```

To use the callback and sync APIs:

```mjs
import * as fs from 'node:fs';
```

```cjs
const fs = require('node:fs');
```

All file system operations have synchronous, callback, and promise-based forms, and are accessible using both CommonJS syntax and ES6 Modules (ESM).
Promise example#
Promise-based operations return a promise that is fulfilled when the asynchronous operation is complete.

```mjs
import { unlink } from 'node:fs/promises';

try {
  await unlink('/tmp/hello');
  console.log('successfully deleted /tmp/hello');
} catch (error) {
  console.error('there was an error:', error.message);
}
```

```cjs
const { unlink } = require('node:fs/promises');

(async function(path) {
  try {
    await unlink(path);
    console.log(`successfully deleted ${path}`);
  } catch (error) {
    console.error('there was an error:', error.message);
  }
})('/tmp/hello');
```
Callback example#
The callback form takes a completion callback function as its last argument and invokes the operation asynchronously. The arguments passed to the completion callback depend on the method, but the first argument is always reserved for an exception. If the operation is completed successfully, then the first argument is `null` or `undefined`.

```mjs
import { unlink } from 'node:fs';

unlink('/tmp/hello', (err) => {
  if (err) throw err;
  console.log('successfully deleted /tmp/hello');
});
```

```cjs
const { unlink } = require('node:fs');

unlink('/tmp/hello', (err) => {
  if (err) throw err;
  console.log('successfully deleted /tmp/hello');
});
```

The callback-based versions of the `node:fs` module APIs are preferable over the use of the promise APIs when maximal performance (both in terms of execution time and memory allocation) is required.
Synchronous example#
The synchronous APIs block the Node.js event loop and further JavaScriptexecution until the operation is complete. Exceptions are thrown immediatelyand can be handled usingtry…catch, or can be allowed to bubble up.
```js
// ESM
import { unlinkSync } from 'node:fs';

try {
  unlinkSync('/tmp/hello');
  console.log('successfully deleted /tmp/hello');
} catch (err) {
  // handle the error
}
```

```js
// CommonJS
const { unlinkSync } = require('node:fs');

try {
  unlinkSync('/tmp/hello');
  console.log('successfully deleted /tmp/hello');
} catch (err) {
  // handle the error
}
```
Promises API#
History
| Version | Changes |
|---|---|
| v14.0.0 | Exposed as |
| v11.14.0, v10.17.0 | This API is no longer experimental. |
| v10.1.0 | The API is accessible via |
| v10.0.0 | Added in: v10.0.0 |
The `fs/promises` API provides asynchronous file system methods that return promises.
The promise APIs use the underlying Node.js threadpool to perform file system operations off the event loop thread. These operations are not synchronized or threadsafe. Care must be taken when performing multiple concurrent modifications on the same file or data corruption may occur.
Class: `FileHandle`#
A <FileHandle> object is an object wrapper for a numeric file descriptor.

Instances of the <FileHandle> object are created by the `fsPromises.open()` method.

All <FileHandle> objects are <EventEmitter>s.

If a <FileHandle> is not closed using the `filehandle.close()` method, it will try to automatically close the file descriptor and emit a process warning, helping to prevent memory leaks. Please do not rely on this behavior because it can be unreliable and the file may not be closed. Instead, always explicitly close <FileHandle>s. Node.js may change this behavior in the future.
Event:'close'#
The `'close'` event is emitted when the <FileHandle> has been closed and can no longer be used.
filehandle.appendFile(data[, options])#
History
| Version | Changes |
|---|---|
| v21.1.0, v20.10.0 | The |
| v15.14.0, v14.18.0 | The |
| v14.0.0 | The |
| v10.0.0 | Added in: v10.0.0 |
- `data` <string> | <Buffer> | <TypedArray> | <DataView> | <AsyncIterable> | <Iterable> | <Stream>
- `options` <Object> | <string>
  - `encoding` <string> | <null> Default: `'utf8'`
  - `signal` <AbortSignal> | <undefined> allows aborting an in-progress writeFile. Default: `undefined`
- Returns: <Promise> Fulfills with `undefined` upon success.
Alias of `filehandle.writeFile()`.

When operating on file handles, the mode cannot be changed from what it was set to with `fsPromises.open()`. Therefore, this is equivalent to `filehandle.writeFile()`.
filehandle.chown(uid, gid)#
- `uid` <integer> The file's new owner's user id.
- `gid` <integer> The file's new group's group id.
- Returns: <Promise> Fulfills with `undefined` upon success.
Changes the ownership of the file. A wrapper for chown(2).
filehandle.close()#
- Returns: <Promise> Fulfills with `undefined` upon success.
Closes the file handle after waiting for any pending operation on the handle to complete.
```js
import { open } from 'node:fs/promises';

let filehandle;
try {
  filehandle = await open('thefile.txt', 'r');
} finally {
  await filehandle?.close();
}
```

filehandle.createReadStream([options])#
- `options` <Object>
  - `encoding` <string> Default: `null`
  - `autoClose` <boolean> Default: `true`
  - `emitClose` <boolean> Default: `true`
  - `start` <integer>
  - `end` <integer> Default: `Infinity`
  - `highWaterMark` <integer> Default: `64 * 1024`
  - `signal` <AbortSignal> | <undefined> Default: `undefined`
- Returns: <fs.ReadStream>
`options` can include `start` and `end` values to read a range of bytes from the file instead of the entire file. Both `start` and `end` are inclusive and start counting at 0; allowed values are in the [0, `Number.MAX_SAFE_INTEGER`] range. If `start` is omitted or `undefined`, `filehandle.createReadStream()` reads sequentially from the current file position. The `encoding` can be any one of those accepted by <Buffer>.
If the `FileHandle` points to a character device that only supports blocking reads (such as a keyboard or sound card), read operations do not finish until data is available. This can prevent the process from exiting and the stream from closing naturally.
By default, the stream will emit a `'close'` event after it has been destroyed. Set the `emitClose` option to `false` to change this behavior.
```js
import { open } from 'node:fs/promises';

const fd = await open('/dev/input/event0');
// Create a stream from some character device.
const stream = fd.createReadStream();
setTimeout(() => {
  stream.close(); // This may not close the stream.
  // Artificially marking end-of-stream, as if the underlying resource had
  // indicated end-of-file by itself, allows the stream to close.
  // This does not cancel pending read operations, and if there is such an
  // operation, the process may still not be able to exit successfully
  // until it finishes.
  stream.push(null);
  stream.read(0);
}, 100);
```

If `autoClose` is false, then the file descriptor won't be closed, even if there's an error. It is the application's responsibility to close it and make sure there's no file descriptor leak. If `autoClose` is set to true (default behavior), on `'error'` or `'end'` the file descriptor will be closed automatically.
An example to read the last 10 bytes of a file which is 100 bytes long:
```js
import { open } from 'node:fs/promises';

const fd = await open('sample.txt');
fd.createReadStream({ start: 90, end: 99 });
```

filehandle.createWriteStream([options])#
History
| Version | Changes |
|---|---|
| v21.0.0, v20.10.0 | The |
| v16.11.0 | Added in: v16.11.0 |
- `options` <Object>
- Returns: <fs.WriteStream>
`options` may also include a `start` option to allow writing data at some position past the beginning of the file; allowed values are in the [0, `Number.MAX_SAFE_INTEGER`] range. Modifying a file rather than replacing it may require the `flags` open option to be set to `r+` rather than the default `r`. The `encoding` can be any one of those accepted by <Buffer>.
If `autoClose` is set to true (default behavior), on `'error'` or `'finish'` the file descriptor will be closed automatically. If `autoClose` is false, then the file descriptor won't be closed, even if there's an error. It is the application's responsibility to close it and make sure there's no file descriptor leak.
By default, the stream will emit a `'close'` event after it has been destroyed. Set the `emitClose` option to `false` to change this behavior.
filehandle.datasync()#
- Returns: <Promise> Fulfills with `undefined` upon success.
Forces all currently queued I/O operations associated with the file to the operating system's synchronized I/O completion state. Refer to the POSIX fdatasync(2) documentation for details.
Unlike `filehandle.sync`, this method does not flush modified metadata.
filehandle.fd#
- Type:<number> The numeric file descriptor managed by the<FileHandle> object.
filehandle.read(buffer, offset, length, position)#
History
| Version | Changes |
|---|---|
| v21.0.0 | Accepts bigint values as |
| v10.0.0 | Added in: v10.0.0 |
- `buffer` <Buffer> | <TypedArray> | <DataView> A buffer that will be filled with the file data read.
- `offset` <integer> The location in the buffer at which to start filling. Default: `0`
- `length` <integer> The number of bytes to read. Default: `buffer.byteLength - offset`
- `position` <integer> | <bigint> | <null> The location where to begin reading data from the file. If `null` or `-1`, data will be read from the current file position, and the position will be updated. If `position` is a non-negative integer, the current file position will remain unchanged. Default: `null`
- Returns: <Promise> Fulfills upon success with an object with two properties:
  - `bytesRead` <integer> The number of bytes read
  - `buffer` <Buffer> | <TypedArray> | <DataView> A reference to the passed in `buffer` argument.
Reads data from the file and stores that in the given buffer.
If the file is not modified concurrently, the end-of-file is reached when the number of bytes read is zero.
filehandle.read([options])#
History
| Version | Changes |
|---|---|
| v21.0.0 | Accepts bigint values as |
| v13.11.0, v12.17.0 | Added in: v13.11.0, v12.17.0 |
- `options` <Object>
  - `buffer` <Buffer> | <TypedArray> | <DataView> A buffer that will be filled with the file data read. Default: `Buffer.alloc(16384)`
  - `offset` <integer> The location in the buffer at which to start filling. Default: `0`
  - `length` <integer> The number of bytes to read. Default: `buffer.byteLength - offset`
  - `position` <integer> | <bigint> | <null> The location where to begin reading data from the file. If `null` or `-1`, data will be read from the current file position, and the position will be updated. If `position` is a non-negative integer, the current file position will remain unchanged. Default: `null`
- Returns: <Promise> Fulfills upon success with an object with two properties:
  - `bytesRead` <integer> The number of bytes read
  - `buffer` <Buffer> | <TypedArray> | <DataView> A reference to the passed in `buffer` argument.
Reads data from the file and stores that in the given buffer.
If the file is not modified concurrently, the end-of-file is reached when the number of bytes read is zero.
filehandle.read(buffer[, options])#
History
| Version | Changes |
|---|---|
| v21.0.0 | Accepts bigint values as |
| v18.2.0, v16.17.0 | Added in: v18.2.0, v16.17.0 |
- `buffer` <Buffer> | <TypedArray> | <DataView> A buffer that will be filled with the file data read.
- `options` <Object>
  - `offset` <integer> The location in the buffer at which to start filling. Default: `0`
  - `length` <integer> The number of bytes to read. Default: `buffer.byteLength - offset`
  - `position` <integer> | <bigint> | <null> The location where to begin reading data from the file. If `null` or `-1`, data will be read from the current file position, and the position will be updated. If `position` is a non-negative integer, the current file position will remain unchanged. Default: `null`
- Returns: <Promise> Fulfills upon success with an object with two properties:
  - `bytesRead` <integer> The number of bytes read
  - `buffer` <Buffer> | <TypedArray> | <DataView> A reference to the passed in `buffer` argument.
Reads data from the file and stores that in the given buffer.
If the file is not modified concurrently, the end-of-file is reached when the number of bytes read is zero.
filehandle.readableWebStream([options])#
History
| Version | Changes |
|---|---|
| v24.0.0, v22.17.0 | Marking the API stable. |
| v23.8.0, v22.15.0 | Removed option to create a 'bytes' stream. Streams are now always 'bytes' streams. |
| v20.0.0, v18.17.0 | Added option to create a 'bytes' stream. |
| v17.0.0 | Added in: v17.0.0 |
- `options` <Object>
  - `autoClose` <boolean> When true, causes the <FileHandle> to be closed when the stream is closed. Default: `false`
- Returns: <ReadableStream>
Returns a byte-oriented `ReadableStream` that may be used to read the file's contents.
An error will be thrown if this method is called more than once or is called after the `FileHandle` is closed or closing.
```js
// ESM
import { open } from 'node:fs/promises';

const file = await open('./some/file/to/read');

for await (const chunk of file.readableWebStream())
  console.log(chunk);

await file.close();
```

```js
// CommonJS
const { open } = require('node:fs/promises');

(async () => {
  const file = await open('./some/file/to/read');

  for await (const chunk of file.readableWebStream())
    console.log(chunk);

  await file.close();
})();
```
While the `ReadableStream` will read the file to completion, it will not close the `FileHandle` automatically. User code must still call the `fileHandle.close()` method unless the `autoClose` option is set to `true`.
filehandle.readFile(options)#
- `options` <Object> | <string>
  - `encoding` <string> | <null> Default: `null`
  - `signal` <AbortSignal> allows aborting an in-progress readFile
- Returns: <Promise> Fulfills upon a successful read with the contents of the file. If no encoding is specified (using `options.encoding`), the data is returned as a <Buffer> object. Otherwise, the data will be a string.
Asynchronously reads the entire contents of a file.
If `options` is a string, then it specifies the `encoding`.
The<FileHandle> has to support reading.
If one or more `filehandle.read()` calls are made on a file handle and then a `filehandle.readFile()` call is made, the data will be read from the current position till the end of the file. It doesn't always read from the beginning of the file.
filehandle.readLines([options])#
- `options` <Object>
- Returns: <readline.InterfaceConstructor>
Convenience method to create a `readline` interface and stream over the file. See `filehandle.createReadStream()` for the options.
```js
// ESM
import { open } from 'node:fs/promises';

const file = await open('./some/file/to/read');

for await (const line of file.readLines()) {
  console.log(line);
}
```

```js
// CommonJS
const { open } = require('node:fs/promises');

(async () => {
  const file = await open('./some/file/to/read');

  for await (const line of file.readLines()) {
    console.log(line);
  }
})();
```
filehandle.readv(buffers[, position])#
- `buffers` <Buffer[]> | <TypedArray[]> | <DataView[]>
- `position` <integer> | <null> The offset from the beginning of the file where the data should be read from. If `position` is not a `number`, the data will be read from the current position. Default: `null`
- Returns: <Promise> Fulfills upon success with an object containing two properties:
  - `bytesRead` <integer> the number of bytes read
  - `buffers` <Buffer[]> | <TypedArray[]> | <DataView[]> property containing a reference to the `buffers` input.
Read from a file and write to an array of <ArrayBufferView>s.
filehandle.stat([options])#
History
| Version | Changes |
|---|---|
| v10.5.0 | Accepts an additional |
| v10.0.0 | Added in: v10.0.0 |
- `options` <Object>
  - `bigint` <boolean> Whether the numeric values in the returned <fs.Stats> object should be `bigint`. Default: `false`.
- Returns: <Promise> Fulfills with an <fs.Stats> for the file.
filehandle.sync()#
- Returns: <Promise> Fulfills with `undefined` upon success.
Request that all data for the open file descriptor is flushed to the storage device. The specific implementation is operating system and device specific. Refer to the POSIX fsync(2) documentation for more detail.
filehandle.truncate(len)#
Truncates the file.
If the file was larger than `len` bytes, only the first `len` bytes will be retained in the file.
The following example retains only the first four bytes of the file:
```js
import { open } from 'node:fs/promises';

let filehandle = null;
try {
  filehandle = await open('temp.txt', 'r+');
  await filehandle.truncate(4);
} finally {
  await filehandle?.close();
}
```

If the file previously was shorter than `len` bytes, it is extended, and the extended part is filled with null bytes (`'\0'`):
If `len` is negative then `0` will be used.
filehandle.utimes(atime, mtime)#
Change the file system timestamps of the object referenced by the <FileHandle>, then fulfills the promise with no arguments upon success.
filehandle.write(buffer, offset[, length[, position]])#
History
| Version | Changes |
|---|---|
| v14.0.0 | The |
| v10.0.0 | Added in: v10.0.0 |
- `buffer` <Buffer> | <TypedArray> | <DataView>
- `offset` <integer> The start position from within `buffer` where the data to write begins.
- `length` <integer> The number of bytes from `buffer` to write. Default: `buffer.byteLength - offset`
- `position` <integer> | <null> The offset from the beginning of the file where the data from `buffer` should be written. If `position` is not a `number`, the data will be written at the current position. See the POSIX pwrite(2) documentation for more detail. Default: `null`
- Returns: <Promise>
Write `buffer` to the file.
The promise is fulfilled with an object containing two properties:
- `bytesWritten` <integer> the number of bytes written
- `buffer` <Buffer> | <TypedArray> | <DataView> a reference to the `buffer` written.
It is unsafe to use `filehandle.write()` multiple times on the same file without waiting for the promise to be fulfilled (or rejected). For this scenario, use `filehandle.createWriteStream()`.
On Linux, positional writes do not work when the file is opened in append mode. The kernel ignores the position argument and always appends the data to the end of the file.
filehandle.write(buffer[, options])#
- `buffer` <Buffer> | <TypedArray> | <DataView>
- `options` <Object>
- Returns: <Promise>
Write `buffer` to the file.
Similar to the above `filehandle.write` function, this version takes an optional `options` object. If no `options` object is specified, it will default with the above values.
filehandle.write(string[, position[, encoding]])#
History
| Version | Changes |
|---|---|
| v14.0.0 | The |
| v10.0.0 | Added in: v10.0.0 |
- `string` <string>
- `position` <integer> | <null> The offset from the beginning of the file where the data from `string` should be written. If `position` is not a `number` the data will be written at the current position. See the POSIX pwrite(2) documentation for more detail. Default: `null`
- `encoding` <string> The expected string encoding. Default: `'utf8'`
- Returns: <Promise>
Write `string` to the file. If `string` is not a string, the promise is rejected with an error.
The promise is fulfilled with an object containing two properties:
It is unsafe to use `filehandle.write()` multiple times on the same file without waiting for the promise to be fulfilled (or rejected). For this scenario, use `filehandle.createWriteStream()`.
On Linux, positional writes do not work when the file is opened in append mode. The kernel ignores the position argument and always appends the data to the end of the file.
filehandle.writeFile(data, options)#
History
| Version | Changes |
|---|---|
| v15.14.0, v14.18.0 | The |
| v14.0.0 | The |
| v10.0.0 | Added in: v10.0.0 |
- `data` <string> | <Buffer> | <TypedArray> | <DataView> | <AsyncIterable> | <Iterable> | <Stream>
- `options` <Object> | <string>
  - `encoding` <string> | <null> The expected character encoding when `data` is a string. Default: `'utf8'`
  - `signal` <AbortSignal> | <undefined> allows aborting an in-progress writeFile. Default: `undefined`
- Returns: <Promise>
Asynchronously writes data to a file, replacing the file if it already exists. `data` can be a string, a buffer, an <AsyncIterable>, or an <Iterable> object. The promise is fulfilled with no arguments upon success.
If `options` is a string, then it specifies the `encoding`.
The<FileHandle> has to support writing.
It is unsafe to use `filehandle.writeFile()` multiple times on the same file without waiting for the promise to be fulfilled (or rejected).
If one or more `filehandle.write()` calls are made on a file handle and then a `filehandle.writeFile()` call is made, the data will be written from the current position till the end of the file. It doesn't always write from the beginning of the file.
filehandle.writev(buffers[, position])#
- `buffers` <Buffer[]> | <TypedArray[]> | <DataView[]>
- `position` <integer> | <null> The offset from the beginning of the file where the data from `buffers` should be written. If `position` is not a `number`, the data will be written at the current position. Default: `null`
- Returns: <Promise>
Write an array of<ArrayBufferView>s to the file.
The promise is fulfilled with an object containing two properties:

- `bytesWritten` <integer> the number of bytes written
- `buffers` <Buffer[]> | <TypedArray[]> | <DataView[]> a reference to the `buffers` input.
It is unsafe to call `writev()` multiple times on the same file without waiting for the promise to be fulfilled (or rejected).
On Linux, positional writes don't work when the file is opened in append mode. The kernel ignores the position argument and always appends the data to the end of the file.
filehandle[Symbol.asyncDispose]()#
History
| Version | Changes |
|---|---|
| v24.2.0 | No longer experimental. |
| v20.4.0, v18.18.0 | Added in: v20.4.0, v18.18.0 |
Calls `filehandle.close()` and returns a promise that fulfills when the filehandle is closed.
fsPromises.access(path[, mode])#
- `path` <string> | <Buffer> | <URL>
- `mode` <integer> Default: `fs.constants.F_OK`
- Returns: <Promise> Fulfills with `undefined` upon success.
Tests a user's permissions for the file or directory specified by `path`. The `mode` argument is an optional integer that specifies the accessibility checks to be performed. `mode` should be either the value `fs.constants.F_OK` or a mask consisting of the bitwise OR of any of `fs.constants.R_OK`, `fs.constants.W_OK`, and `fs.constants.X_OK` (e.g. `fs.constants.W_OK | fs.constants.R_OK`). Check File access constants for possible values of `mode`.
If the accessibility check is successful, the promise is fulfilled with no value. If any of the accessibility checks fail, the promise is rejected with an <Error> object. The following example checks if the file `/etc/passwd` can be read and written by the current process.
```js
import { access, constants } from 'node:fs/promises';

try {
  await access('/etc/passwd', constants.R_OK | constants.W_OK);
  console.log('can access');
} catch {
  console.error('cannot access');
}
```

Using `fsPromises.access()` to check for the accessibility of a file before calling `fsPromises.open()` is not recommended. Doing so introduces a race condition, since other processes may change the file's state between the two calls. Instead, user code should open/read/write the file directly and handle the error raised if the file is not accessible.
fsPromises.appendFile(path, data[, options])#
History
| Version | Changes |
|---|---|
| v21.1.0, v20.10.0 | The |
| v10.0.0 | Added in: v10.0.0 |
- `path` <string> | <Buffer> | <URL> | <FileHandle> filename or <FileHandle>
- `data` <string> | <Buffer>
- `options` <Object> | <string>
- Returns: <Promise> Fulfills with `undefined` upon success.
Asynchronously append data to a file, creating the file if it does not yet exist. `data` can be a string or a <Buffer>.
If `options` is a string, then it specifies the `encoding`.
The `mode` option only affects the newly created file. See `fs.open()` for more details.
The `path` may be specified as a <FileHandle> that has been opened for appending (using `fsPromises.open()`).
fsPromises.chmod(path, mode)#
- `path` <string> | <Buffer> | <URL>
- `mode` <string> | <integer>
- Returns: <Promise> Fulfills with `undefined` upon success.
Changes the permissions of a file.
fsPromises.chown(path, uid, gid)#
- `path` <string> | <Buffer> | <URL>
- `uid` <integer>
- `gid` <integer>
- Returns: <Promise> Fulfills with `undefined` upon success.
Changes the ownership of a file.
fsPromises.copyFile(src, dest[, mode])#
History
| Version | Changes |
|---|---|
| v14.0.0 | Changed |
| v10.0.0 | Added in: v10.0.0 |
- `src` <string> | <Buffer> | <URL> source filename to copy
- `dest` <string> | <Buffer> | <URL> destination filename of the copy operation
- `mode` <integer> Optional modifiers that specify the behavior of the copy operation. It is possible to create a mask consisting of the bitwise OR of two or more values (e.g. `fs.constants.COPYFILE_EXCL | fs.constants.COPYFILE_FICLONE`). Default: `0`.
  - `fs.constants.COPYFILE_EXCL`: The copy operation will fail if `dest` already exists.
  - `fs.constants.COPYFILE_FICLONE`: The copy operation will attempt to create a copy-on-write reflink. If the platform does not support copy-on-write, then a fallback copy mechanism is used.
  - `fs.constants.COPYFILE_FICLONE_FORCE`: The copy operation will attempt to create a copy-on-write reflink. If the platform does not support copy-on-write, then the operation will fail.
- Returns: <Promise> Fulfills with `undefined` upon success.
Asynchronously copies `src` to `dest`. By default, `dest` is overwritten if it already exists.
No guarantees are made about the atomicity of the copy operation. If an error occurs after the destination file has been opened for writing, an attempt will be made to remove the destination.
```js
import { copyFile, constants } from 'node:fs/promises';

try {
  await copyFile('source.txt', 'destination.txt');
  console.log('source.txt was copied to destination.txt');
} catch {
  console.error('The file could not be copied');
}

// By using COPYFILE_EXCL, the operation will fail if destination.txt exists.
try {
  await copyFile('source.txt', 'destination.txt', constants.COPYFILE_EXCL);
  console.log('source.txt was copied to destination.txt');
} catch {
  console.error('The file could not be copied');
}
```

fsPromises.cp(src, dest[, options])#
History
| Version | Changes |
|---|---|
| v22.3.0 | This API is no longer experimental. |
| v20.1.0, v18.17.0 | Accept an additional |
| v17.6.0, v16.15.0 | Accepts an additional |
| v16.7.0 | Added in: v16.7.0 |
- `src` <string> | <URL> source path to copy.
- `dest` <string> | <URL> destination path to copy to.
- `options` <Object>
  - `dereference` <boolean> dereference symlinks. Default: `false`.
  - `errorOnExist` <boolean> when `force` is `false`, and the destination exists, throw an error. Default: `false`.
  - `filter` <Function> Function to filter copied files/directories. Return `true` to copy the item, `false` to ignore it. When ignoring a directory, all of its contents will be skipped as well. Can also return a `Promise` that resolves to `true` or `false`. Default: `undefined`.
  - `force` <boolean> overwrite existing file or directory. The copy operation will ignore errors if you set this to false and the destination exists. Use the `errorOnExist` option to change this behavior. Default: `true`.
  - `mode` <integer> modifiers for copy operation. Default: `0`. See `mode` flag of `fsPromises.copyFile()`.
  - `preserveTimestamps` <boolean> When `true` timestamps from `src` will be preserved. Default: `false`.
  - `recursive` <boolean> copy directories recursively. Default: `false`
  - `verbatimSymlinks` <boolean> When `true`, path resolution for symlinks will be skipped. Default: `false`
- Returns: <Promise> Fulfills with `undefined` upon success.
Asynchronously copies the entire directory structure from `src` to `dest`, including subdirectories and files.
When copying a directory to another directory, globs are not supported and behavior is similar to `cp dir1/ dir2/`.
fsPromises.glob(pattern[, options])#
History
| Version | Changes |
|---|---|
| v24.1.0, v22.17.0 | Add support for |
| v24.0.0, v22.17.0 | Marking the API stable. |
| v23.7.0, v22.14.0 | Add support for |
| v22.2.0 | Add support for |
| v22.0.0 | Added in: v22.0.0 |
- `pattern` <string> | <string[]>
- `options` <Object>
  - `cwd` <string> | <URL> current working directory. Default: `process.cwd()`
  - `exclude` <Function> | <string[]> Function to filter out files/directories or a list of glob patterns to be excluded. If a function is provided, return `true` to exclude the item, `false` to include it. Default: `undefined`. If a string array is provided, each string should be a glob pattern that specifies paths to exclude. Note: Negation patterns (e.g. `'!foo.js'`) are not supported.
  - `withFileTypes` <boolean> `true` if the glob should return paths as Dirents, `false` otherwise. Default: `false`.
- Returns: <AsyncIterator> An AsyncIterator that yields the paths of files that match the pattern.
```js
// ESM
import { glob } from 'node:fs/promises';

for await (const entry of glob('**/*.js'))
  console.log(entry);
```

```js
// CommonJS
const { glob } = require('node:fs/promises');

(async () => {
  for await (const entry of glob('**/*.js'))
    console.log(entry);
})();
```
fsPromises.lchmod(path, mode)#
Changes the permissions on a symbolic link.
This method is only implemented on macOS.
fsPromises.lchown(path, uid, gid)#
History
| Version | Changes |
|---|---|
| v10.6.0 | This API is no longer deprecated. |
| v10.0.0 | Added in: v10.0.0 |
- `path` <string> | <Buffer> | <URL>
- `uid` <integer>
- `gid` <integer>
- Returns: <Promise> Fulfills with `undefined` upon success.
Changes the ownership on a symbolic link.
fsPromises.lutimes(path, atime, mtime)#
- `path` <string> | <Buffer> | <URL>
- `atime` <number> | <string> | <Date>
- `mtime` <number> | <string> | <Date>
- Returns: <Promise> Fulfills with `undefined` upon success.
Changes the access and modification times of a file in the same way as `fsPromises.utimes()`, with the difference that if the path refers to a symbolic link, then the link is not dereferenced: instead, the timestamps of the symbolic link itself are changed.
fsPromises.link(existingPath, newPath)#
- `existingPath` <string> | <Buffer> | <URL>
- `newPath` <string> | <Buffer> | <URL>
- Returns: <Promise> Fulfills with `undefined` upon success.
Creates a new link from the `existingPath` to the `newPath`. See the POSIX link(2) documentation for more detail.
fsPromises.lstat(path[, options])#
History
| Version | Changes |
|---|---|
| v10.5.0 | Accepts an additional |
| v10.0.0 | Added in: v10.0.0 |
- `path` <string> | <Buffer> | <URL>
- `options` <Object>
  - `bigint` <boolean> Whether the numeric values in the returned <fs.Stats> object should be `bigint`. Default: `false`.
- Returns: <Promise> Fulfills with the <fs.Stats> object for the given symbolic link `path`.
Equivalent to `fsPromises.stat()` unless `path` refers to a symbolic link, in which case the link itself is stat-ed, not the file that it refers to. Refer to the POSIX lstat(2) document for more detail.
fsPromises.mkdir(path[, options])#
- `path` <string> | <Buffer> | <URL>
- `options` <Object> | <integer>
  - `recursive` <boolean> Default: `false`
  - `mode` <string> | <integer> Not supported on Windows. See File modes for more details. Default: `0o777`.
- Returns: <Promise> Upon success, fulfills with `undefined` if `recursive` is `false`, or the first directory path created if `recursive` is `true`.
Asynchronously creates a directory.
The optional `options` argument can be an integer specifying `mode` (permission and sticky bits), or an object with a `mode` property and a `recursive` property indicating whether parent directories should be created. Calling `fsPromises.mkdir()` when `path` is a directory that exists results in a rejection only when `recursive` is false.
```js
// ESM
import { mkdir } from 'node:fs/promises';

try {
  const projectFolder = new URL('./test/project/', import.meta.url);
  const createDir = await mkdir(projectFolder, { recursive: true });
  console.log(`created ${createDir}`);
} catch (err) {
  console.error(err.message);
}
```

```js
// CommonJS
const { mkdir } = require('node:fs/promises');
const { join } = require('node:path');

async function makeDirectory() {
  const projectFolder = join(__dirname, 'test', 'project');
  const dirCreation = await mkdir(projectFolder, { recursive: true });

  console.log(dirCreation);
  return dirCreation;
}

makeDirectory().catch(console.error);
```
fsPromises.mkdtemp(prefix[, options])#
History
| Version | Changes |
|---|---|
| v20.6.0, v18.19.0 | The |
| v16.5.0, v14.18.0 | The |
| v10.0.0 | Added in: v10.0.0 |
- `prefix` <string> | <Buffer> | <URL>
- `options` <string> | <Object>
  - `encoding` <string> Default: `'utf8'`
- Returns: <Promise> Fulfills with a string containing the file system path of the newly created temporary directory.
Creates a unique temporary directory. A unique directory name is generated by appending six random characters to the end of the provided `prefix`. Due to platform inconsistencies, avoid trailing `X` characters in `prefix`. Some platforms, notably the BSDs, can return more than six random characters, and replace trailing `X` characters in `prefix` with random characters.
The optionaloptions argument can be a string specifying an encoding, or anobject with anencoding property specifying the character encoding to use.
```js
import { mkdtemp } from 'node:fs/promises';
import { join } from 'node:path';
import { tmpdir } from 'node:os';

try {
  await mkdtemp(join(tmpdir(), 'foo-'));
} catch (err) {
  console.error(err);
}
```

The `fsPromises.mkdtemp()` method will append the six randomly selected characters directly to the `prefix` string. For instance, given a directory `/tmp`, if the intention is to create a temporary directory within `/tmp`, the `prefix` must end with a trailing platform-specific path separator (`require('node:path').sep`).
fsPromises.mkdtempDisposable(prefix[, options])#
- `prefix` <string> | <Buffer> | <URL>
- `options` <string> | <Object>
  - `encoding` <string> Default: `'utf8'`
- Returns: <Promise> Fulfills with a Promise for an async-disposable Object:
  - `path` <string> The path of the created directory.
  - `remove` <AsyncFunction> A function which removes the created directory.
  - `[Symbol.asyncDispose]` <AsyncFunction> The same as `remove`.
The resulting Promise holds an async-disposable object whose `path` property holds the created directory path. When the object is disposed, the directory and its contents will be removed asynchronously if it still exists. If the directory cannot be deleted, disposal will throw an error. The object has an async `remove()` method which will perform the same task.
Both this function and the disposal function on the resulting object are async, so it should be used with `await` + `await using` as in `await using dir = await fsPromises.mkdtempDisposable('prefix')`.
For detailed information, see the documentation offsPromises.mkdtemp().
The optionaloptions argument can be a string specifying an encoding, or anobject with anencoding property specifying the character encoding to use.
fsPromises.open(path, flags[, mode])#
History
| Version | Changes |
|---|---|
| v11.1.0 | The |
| v10.0.0 | Added in: v10.0.0 |
- `path` <string> | <Buffer> | <URL>
- `flags` <string> | <number> See support of file system `flags`. Default: `'r'`.
- `mode` <string> | <integer> Sets the file mode (permission and sticky bits) if the file is created. See File modes for more details. Default: `0o666` (readable and writable)
- Returns: <Promise> Fulfills with a <FileHandle> object.
Opens a<FileHandle>.
Refer to the POSIXopen(2) documentation for more detail.
Some characters (`< > : " / \ | ? *`) are reserved under Windows as documented by Naming Files, Paths, and Namespaces. Under NTFS, if the filename contains a colon, Node.js will open a file system stream, as described by this MSDN page.
fsPromises.opendir(path[, options])#
History
| Version | Changes |
|---|---|
| v20.1.0, v18.17.0 | Added |
| v13.1.0, v12.16.0 | The |
| v12.12.0 | Added in: v12.12.0 |
- `path` <string> | <Buffer> | <URL>
- `options` <Object>
  - `encoding` <string> | <null> Default: `'utf8'`
  - `bufferSize` <number> Number of directory entries that are buffered internally when reading from the directory. Higher values lead to better performance but higher memory usage. Default: `32`
  - `recursive` <boolean> Resolved `Dir` will be an <AsyncIterable> containing all sub files and directories. Default: `false`
- Returns: <Promise> Fulfills with an <fs.Dir>.
Asynchronously open a directory for iterative scanning. See the POSIX opendir(3) documentation for more detail.

Creates an <fs.Dir>, which contains all further functions for reading from and cleaning up the directory.

The encoding option sets the encoding for the path while opening the directory and subsequent read operations.
Example using async iteration:
```js
import { opendir } from 'node:fs/promises';

try {
  const dir = await opendir('./');
  for await (const dirent of dir)
    console.log(dirent.name);
} catch (err) {
  console.error(err);
}
```

When using the async iterator, the <fs.Dir> object will be automatically closed after the iterator exits.
fsPromises.readdir(path[, options])#
History
| Version | Changes |
|---|---|
| v20.1.0, v18.17.0 | Added |
| v10.11.0 | New option |
| v10.0.0 | Added in: v10.0.0 |
- path <string> | <Buffer> | <URL>
- options <string> | <Object>
- Returns: <Promise> Fulfills with an array of the names of the files in the directory excluding '.' and '..'.
Reads the contents of a directory.
The optional options argument can be a string specifying an encoding, or an object with an encoding property specifying the character encoding to use for the filenames. If the encoding is set to 'buffer', the filenames returned will be passed as <Buffer> objects.

If options.withFileTypes is set to true, the returned array will contain <fs.Dirent> objects.
```js
import { readdir } from 'node:fs/promises';

try {
  const files = await readdir(path);
  for (const file of files)
    console.log(file);
} catch (err) {
  console.error(err);
}
```

fsPromises.readFile(path[, options])#
History
| Version | Changes |
|---|---|
| v15.2.0, v14.17.0 | The options argument may include an AbortSignal to abort an ongoing readFile request. |
| v10.0.0 | Added in: v10.0.0 |
- path <string> | <Buffer> | <URL> | <FileHandle> filename or FileHandle
- options <Object> | <string>
  - encoding <string> | <null> Default: null
  - flag <string> See support of file system flags. Default: 'r'.
  - signal <AbortSignal> allows aborting an in-progress readFile
- Returns: <Promise> Fulfills with the contents of the file.
Asynchronously reads the entire contents of a file.
If no encoding is specified (using options.encoding), the data is returned as a <Buffer> object. Otherwise, the data will be a string.

If options is a string, then it specifies the encoding.

When the path is a directory, the behavior of fsPromises.readFile() is platform-specific. On macOS, Linux, and Windows, the promise will be rejected with an error. On FreeBSD, a representation of the directory's contents will be returned.

An example of reading a package.json file located in the same directory as the running code:
```js
import { readFile } from 'node:fs/promises';

try {
  const filePath = new URL('./package.json', import.meta.url);
  const contents = await readFile(filePath, { encoding: 'utf8' });
  console.log(contents);
} catch (err) {
  console.error(err.message);
}
```

```js
const { readFile } = require('node:fs/promises');
const { resolve } = require('node:path');

async function logFile() {
  try {
    const filePath = resolve('./package.json');
    const contents = await readFile(filePath, { encoding: 'utf8' });
    console.log(contents);
  } catch (err) {
    console.error(err.message);
  }
}
logFile();
```
It is possible to abort an ongoing readFile using an <AbortSignal>. If a request is aborted, the promise returned is rejected with an AbortError:

```js
import { readFile } from 'node:fs/promises';

try {
  const controller = new AbortController();
  const { signal } = controller;
  const promise = readFile(fileName, { signal });

  // Abort the request before the promise settles.
  controller.abort();

  await promise;
} catch (err) {
  // When a request is aborted - err is an AbortError
  console.error(err);
}
```

Aborting an ongoing request does not abort individual operating system requests but rather the internal buffering fs.readFile performs.
Any specified<FileHandle> has to support reading.
fsPromises.readlink(path[, options])#
- path <string> | <Buffer> | <URL>
- options <string> | <Object>
  - encoding <string> Default: 'utf8'
- Returns: <Promise> Fulfills with the linkString upon success.

Reads the contents of the symbolic link referred to by path. See the POSIX readlink(2) documentation for more detail. The promise is fulfilled with the linkString upon success.

The optional options argument can be a string specifying an encoding, or an object with an encoding property specifying the character encoding to use for the link path returned. If the encoding is set to 'buffer', the link path returned will be passed as a <Buffer> object.
fsPromises.realpath(path[, options])#
- path <string> | <Buffer> | <URL>
- options <string> | <Object>
  - encoding <string> Default: 'utf8'
- Returns: <Promise> Fulfills with the resolved path upon success.

Determines the actual location of path using the same semantics as the fs.realpath.native() function.

Only paths that can be converted to UTF8 strings are supported.

The optional options argument can be a string specifying an encoding, or an object with an encoding property specifying the character encoding to use for the path. If the encoding is set to 'buffer', the path returned will be passed as a <Buffer> object.

On Linux, when Node.js is linked against musl libc, the procfs file system must be mounted on /proc in order for this function to work. Glibc does not have this restriction.
fsPromises.rename(oldPath, newPath)#
- oldPath <string> | <Buffer> | <URL>
- newPath <string> | <Buffer> | <URL>
- Returns: <Promise> Fulfills with undefined upon success.

Renames oldPath to newPath.
fsPromises.rmdir(path[, options])#
History
| Version | Changes |
|---|---|
| v25.0.0 | Remove |
| v16.0.0 | Using |
| v16.0.0 | Using |
| v16.0.0 | The |
| v14.14.0 | The |
| v13.3.0, v12.16.0 | The |
| v12.10.0 | The |
| v10.0.0 | Added in: v10.0.0 |
- path <string> | <Buffer> | <URL>
- options <Object> There are currently no options exposed. There used to be options for recursive, maxBusyTries, and emfileWait but they were deprecated and removed. The options argument is still accepted for backwards compatibility but it is not used.
- Returns: <Promise> Fulfills with undefined upon success.

Removes the directory identified by path.

Using fsPromises.rmdir() on a file (not a directory) results in the promise being rejected with an ENOENT error on Windows and an ENOTDIR error on POSIX.

To get a behavior similar to the rm -rf Unix command, use fsPromises.rm() with options { recursive: true, force: true }.
fsPromises.rm(path[, options])#
- path <string> | <Buffer> | <URL>
- options <Object>
  - force <boolean> When true, exceptions will be ignored if path does not exist. Default: false.
  - maxRetries <integer> If an EBUSY, EMFILE, ENFILE, ENOTEMPTY, or EPERM error is encountered, Node.js will retry the operation with a linear backoff wait of retryDelay milliseconds longer on each try. This option represents the number of retries. This option is ignored if the recursive option is not true. Default: 0.
  - recursive <boolean> If true, perform a recursive directory removal. In recursive mode operations are retried on failure. Default: false.
  - retryDelay <integer> The amount of time in milliseconds to wait between retries. This option is ignored if the recursive option is not true. Default: 100.
- Returns: <Promise> Fulfills with undefined upon success.

Removes files and directories (modeled on the standard POSIX rm utility).
fsPromises.stat(path[, options])#
History
| Version | Changes |
|---|---|
| v10.5.0 | Accepts an additional |
| v10.0.0 | Added in: v10.0.0 |
- path <string> | <Buffer> | <URL>
- options <Object>
  - bigint <boolean> Whether the numeric values in the returned <fs.Stats> object should be bigint. Default: false.
- Returns: <Promise> Fulfills with the <fs.Stats> object for the given path.
fsPromises.statfs(path[, options])#
- path <string> | <Buffer> | <URL>
- options <Object>
  - bigint <boolean> Whether the numeric values in the returned <fs.StatFs> object should be bigint. Default: false.
- Returns: <Promise> Fulfills with the <fs.StatFs> object for the given path.
fsPromises.symlink(target, path[, type])#
History
| Version | Changes |
|---|---|
| v19.0.0 | If the |
| v10.0.0 | Added in: v10.0.0 |
- target <string> | <Buffer> | <URL>
- path <string> | <Buffer> | <URL>
- type <string> | <null> Default: null
- Returns: <Promise> Fulfills with undefined upon success.

Creates a symbolic link.

The type argument is only used on Windows platforms and can be one of 'dir', 'file', or 'junction'. If the type argument is null, Node.js will autodetect the target type and use 'file' or 'dir'. If the target does not exist, 'file' will be used. Windows junction points require the destination path to be absolute. When using 'junction', the target argument will automatically be normalized to an absolute path. Junction points on NTFS volumes can only point to directories.
fsPromises.truncate(path[, len])#
- path <string> | <Buffer> | <URL>
- len <integer> Default: 0
- Returns: <Promise> Fulfills with undefined upon success.

Truncates (shortens or extends the length of) the content at path to len bytes.
fsPromises.unlink(path)#
If path refers to a symbolic link, then the link is removed without affecting the file or directory to which that link refers. If the path refers to a file path that is not a symbolic link, the file is deleted. See the POSIX unlink(2) documentation for more detail.
fsPromises.utimes(path, atime, mtime)#
- path <string> | <Buffer> | <URL>
- atime <number> | <string> | <Date>
- mtime <number> | <string> | <Date>
- Returns: <Promise> Fulfills with undefined upon success.

Change the file system timestamps of the object referenced by path.

The atime and mtime arguments follow these rules:

- Values can be either numbers representing Unix epoch time, Dates, or a numeric string like '123456789.0'.
- If the value can not be converted to a number, or is NaN, Infinity, or -Infinity, an Error will be thrown.
fsPromises.watch(filename[, options])#
- filename <string> | <Buffer> | <URL>
- options <string> | <Object>
  - persistent <boolean> Indicates whether the process should continue to run as long as files are being watched. Default: true.
  - recursive <boolean> Indicates whether all subdirectories should be watched, or only the current directory. This applies when a directory is specified, and only on supported platforms (see caveats). Default: false.
  - encoding <string> Specifies the character encoding to be used for the filename passed to the listener. Default: 'utf8'.
  - signal <AbortSignal> An <AbortSignal> used to signal when the watcher should stop.
  - maxQueue <number> Specifies the number of events to queue between iterations of the <AsyncIterator> returned. Default: 2048.
  - overflow <string> Either 'ignore' or 'throw' when there are more events to be queued than maxQueue allows. 'ignore' means overflow events are dropped and a warning is emitted, while 'throw' means to throw an exception. Default: 'ignore'.
  - ignore <string> | <RegExp> | <Function> | <Array> Pattern(s) to ignore. Strings are glob patterns (using minimatch), RegExp patterns are tested against the filename, and functions receive the filename and return true to ignore. Default: undefined.
- Returns: <AsyncIterator> of objects with the properties:
  - eventType <string> The type of change
  - filename <string> | <Buffer> | <null> The name of the file changed.

Returns an async iterator that watches for changes on filename, where filename is either a file or a directory.
```js
const { watch } = require('node:fs/promises');

const ac = new AbortController();
const { signal } = ac;
setTimeout(() => ac.abort(), 10000);

(async () => {
  try {
    const watcher = watch(__filename, { signal });
    for await (const event of watcher)
      console.log(event);
  } catch (err) {
    if (err.name === 'AbortError')
      return;
    throw err;
  }
})();
```

On most platforms, 'rename' is emitted whenever a filename appears or disappears in the directory.

All the caveats for fs.watch() also apply to fsPromises.watch().
fsPromises.writeFile(file, data[, options])#
History
| Version | Changes |
|---|---|
| v21.0.0, v20.10.0 | The |
| v15.14.0, v14.18.0 | The |
| v15.2.0, v14.17.0 | The options argument may include an AbortSignal to abort an ongoing writeFile request. |
| v14.0.0 | The |
| v10.0.0 | Added in: v10.0.0 |
- file <string> | <Buffer> | <URL> | <FileHandle> filename or FileHandle
- data <string> | <Buffer> | <TypedArray> | <DataView> | <AsyncIterable> | <Iterable> | <Stream>
- options <Object> | <string>
  - encoding <string> | <null> Default: 'utf8'
  - mode <integer> Default: 0o666
  - flag <string> See support of file system flags. Default: 'w'.
  - flush <boolean> If all data is successfully written to the file, and flush is true, filehandle.sync() is used to flush the data. Default: false.
  - signal <AbortSignal> allows aborting an in-progress writeFile
- Returns: <Promise> Fulfills with undefined upon success.
Asynchronously writes data to a file, replacing the file if it already exists. data can be a string, a buffer, an <AsyncIterable>, or an <Iterable> object.

The encoding option is ignored if data is a buffer.

If options is a string, then it specifies the encoding.

The mode option only affects the newly created file. See fs.open() for more details.

Any specified <FileHandle> has to support writing.

It is unsafe to use fsPromises.writeFile() multiple times on the same file without waiting for the promise to be settled.

Similarly to fsPromises.readFile, fsPromises.writeFile is a convenience method that performs multiple write calls internally to write the buffer passed to it. For performance-sensitive code consider using fs.createWriteStream() or filehandle.createWriteStream().

It is possible to use an <AbortSignal> to cancel an fsPromises.writeFile(). Cancelation is "best effort", and some amount of data is likely still to be written.
```js
import { writeFile } from 'node:fs/promises';
import { Buffer } from 'node:buffer';

try {
  const controller = new AbortController();
  const { signal } = controller;
  const data = new Uint8Array(Buffer.from('Hello Node.js'));
  const promise = writeFile('message.txt', data, { signal });

  // Abort the request before the promise settles.
  controller.abort();

  await promise;
} catch (err) {
  // When a request is aborted - err is an AbortError
  console.error(err);
}
```

Aborting an ongoing request does not abort individual operating system requests but rather the internal buffering fs.writeFile performs.
fsPromises.constants#
- Type: <Object>

Returns an object containing commonly used constants for file system operations. The object is the same as fs.constants. See FS constants for more details.
Callback API#
The callback APIs perform all operations asynchronously, without blocking the event loop, then invoke a callback function upon completion or error.

The callback APIs use the underlying Node.js threadpool to perform file system operations off the event loop thread. These operations are not synchronized or threadsafe. Care must be taken when performing multiple concurrent modifications on the same file or data corruption may occur.
fs.access(path[, mode], callback)#
History
| Version | Changes |
|---|---|
| v25.0.0 | The constants |
| v20.8.0 | The constants |
| v18.0.0 | Passing an invalid callback to the |
| v7.6.0 | The |
| v6.3.0 | The constants like |
| v0.11.15 | Added in: v0.11.15 |
Tests a user's permissions for the file or directory specified by path. The mode argument is an optional integer that specifies the accessibility checks to be performed. mode should be either the value fs.constants.F_OK or a mask consisting of the bitwise OR of any of fs.constants.R_OK, fs.constants.W_OK, and fs.constants.X_OK (e.g. fs.constants.W_OK | fs.constants.R_OK). Check File access constants for possible values of mode.

The final argument, callback, is a callback function that is invoked with a possible error argument. If any of the accessibility checks fail, the error argument will be an Error object. The following examples check if package.json exists, and if it is readable or writable.
```js
import { access, constants } from 'node:fs';

const file = 'package.json';

// Check if the file exists in the current directory.
access(file, constants.F_OK, (err) => {
  console.log(`${file} ${err ? 'does not exist' : 'exists'}`);
});

// Check if the file is readable.
access(file, constants.R_OK, (err) => {
  console.log(`${file} ${err ? 'is not readable' : 'is readable'}`);
});

// Check if the file is writable.
access(file, constants.W_OK, (err) => {
  console.log(`${file} ${err ? 'is not writable' : 'is writable'}`);
});

// Check if the file is readable and writable.
access(file, constants.R_OK | constants.W_OK, (err) => {
  console.log(`${file} ${err ? 'is not' : 'is'} readable and writable`);
});
```

Do not use fs.access() to check for the accessibility of a file before calling fs.open(), fs.readFile(), or fs.writeFile(). Doing so introduces a race condition, since other processes may change the file's state between the two calls. Instead, user code should open/read/write the file directly and handle the error raised if the file is not accessible.
write (NOT RECOMMENDED)

```js
import { access, open, close } from 'node:fs';

access('myfile', (err) => {
  if (!err) {
    console.error('myfile already exists');
    return;
  }

  open('myfile', 'wx', (err, fd) => {
    if (err) throw err;

    try {
      writeMyData(fd);
    } finally {
      close(fd, (err) => {
        if (err) throw err;
      });
    }
  });
});
```

write (RECOMMENDED)

```js
import { open, close } from 'node:fs';

open('myfile', 'wx', (err, fd) => {
  if (err) {
    if (err.code === 'EEXIST') {
      console.error('myfile already exists');
      return;
    }

    throw err;
  }

  try {
    writeMyData(fd);
  } finally {
    close(fd, (err) => {
      if (err) throw err;
    });
  }
});
```

read (NOT RECOMMENDED)

```js
import { access, open, close } from 'node:fs';

access('myfile', (err) => {
  if (err) {
    if (err.code === 'ENOENT') {
      console.error('myfile does not exist');
      return;
    }

    throw err;
  }

  open('myfile', 'r', (err, fd) => {
    if (err) throw err;

    try {
      readMyData(fd);
    } finally {
      close(fd, (err) => {
        if (err) throw err;
      });
    }
  });
});
```

read (RECOMMENDED)

```js
import { open, close } from 'node:fs';

open('myfile', 'r', (err, fd) => {
  if (err) {
    if (err.code === 'ENOENT') {
      console.error('myfile does not exist');
      return;
    }

    throw err;
  }

  try {
    readMyData(fd);
  } finally {
    close(fd, (err) => {
      if (err) throw err;
    });
  }
});
```

The "not recommended" examples above check for accessibility and then use the file; the "recommended" examples are better because they use the file directly and handle the error, if any.
In general, check for the accessibility of a file only if the file will not be used directly, for example when its accessibility is a signal from another process.

On Windows, access-control policies (ACLs) on a directory may limit access to a file or directory. The fs.access() function, however, does not check the ACL and therefore may report that a path is accessible even if the ACL restricts the user from reading or writing to it.
fs.appendFile(path, data[, options], callback)#
History
| Version | Changes |
|---|---|
| v21.1.0, v20.10.0 | The |
| v18.0.0 | Passing an invalid callback to the |
| v10.0.0 | The |
| v7.0.0 | The |
| v7.0.0 | The passed |
| v5.0.0 | The |
| v0.6.7 | Added in: v0.6.7 |
- path <string> | <Buffer> | <URL> | <number> filename or file descriptor
- data <string> | <Buffer>
- options <Object> | <string>
- callback <Function>
  - err <Error>

Asynchronously append data to a file, creating the file if it does not yet exist. data can be a string or a <Buffer>.

The mode option only affects the newly created file. See fs.open() for more details.
```js
import { appendFile } from 'node:fs';

appendFile('message.txt', 'data to append', (err) => {
  if (err) throw err;
  console.log('The "data to append" was appended to file!');
});
```

If options is a string, then it specifies the encoding:

```js
import { appendFile } from 'node:fs';

appendFile('message.txt', 'data to append', 'utf8', callback);
```

The path may be specified as a numeric file descriptor that has been opened for appending (using fs.open() or fs.openSync()). The file descriptor will not be closed automatically.

```js
import { open, close, appendFile } from 'node:fs';

function closeFd(fd) {
  close(fd, (err) => {
    if (err) throw err;
  });
}

open('message.txt', 'a', (err, fd) => {
  if (err) throw err;

  try {
    appendFile(fd, 'data to append', 'utf8', (err) => {
      closeFd(fd);
      if (err) throw err;
    });
  } catch (err) {
    closeFd(fd);
    throw err;
  }
});
```

fs.chmod(path, mode, callback)#
History
| Version | Changes |
|---|---|
| v18.0.0 | Passing an invalid callback to the |
| v10.0.0 | The |
| v7.6.0 | The |
| v7.0.0 | The |
| v0.1.30 | Added in: v0.1.30 |
Asynchronously changes the permissions of a file. No arguments other than a possible exception are given to the completion callback.
See the POSIXchmod(2) documentation for more detail.
```js
import { chmod } from 'node:fs';

chmod('my_file.txt', 0o775, (err) => {
  if (err) throw err;
  console.log('The permissions for file "my_file.txt" have been changed!');
});
```

File modes#

The mode argument used in both the fs.chmod() and fs.chmodSync() methods is a numeric bitmask created using a bitwise OR of the following constants:
| Constant | Octal | Description |
|---|---|---|
fs.constants.S_IRUSR | 0o400 | read by owner |
fs.constants.S_IWUSR | 0o200 | write by owner |
fs.constants.S_IXUSR | 0o100 | execute/search by owner |
fs.constants.S_IRGRP | 0o40 | read by group |
fs.constants.S_IWGRP | 0o20 | write by group |
fs.constants.S_IXGRP | 0o10 | execute/search by group |
fs.constants.S_IROTH | 0o4 | read by others |
fs.constants.S_IWOTH | 0o2 | write by others |
fs.constants.S_IXOTH | 0o1 | execute/search by others |
An easier method of constructing the mode is to use a sequence of three octal digits (e.g. 765). The left-most digit (7 in the example) specifies the permissions for the file owner. The middle digit (6 in the example) specifies permissions for the group. The right-most digit (5 in the example) specifies the permissions for others.
| Number | Description |
|---|---|
7 | read, write, and execute |
6 | read and write |
5 | read and execute |
4 | read only |
3 | write and execute |
2 | write only |
1 | execute only |
0 | no permission |
For example, the octal value 0o765 means:
- The owner may read, write, and execute the file.
- The group may read and write the file.
- Others may read and execute the file.
When using raw numbers where file modes are expected, any value larger than 0o777 may result in platform-specific behaviors that are not supported to work consistently. Therefore constants like S_ISVTX, S_ISGID, or S_ISUID are not exposed in fs.constants.

Caveats: on Windows only the write permission can be changed, and the distinction among the permissions of group, owner, or others is not implemented.
fs.chown(path, uid, gid, callback)#
History
| Version | Changes |
|---|---|
| v18.0.0 | Passing an invalid callback to the |
| v10.0.0 | The |
| v7.6.0 | The |
| v7.0.0 | The |
| v0.1.97 | Added in: v0.1.97 |
Asynchronously changes owner and group of a file. No arguments other than a possible exception are given to the completion callback.
See the POSIXchown(2) documentation for more detail.
fs.close(fd[, callback])#
History
| Version | Changes |
|---|---|
| v18.0.0 | Passing an invalid callback to the |
| v15.9.0, v14.17.0 | A default callback is now used if one is not provided. |
| v10.0.0 | The |
| v7.0.0 | The |
| v0.0.2 | Added in: v0.0.2 |
- fd <integer>
- callback <Function>
  - err <Error>

Closes the file descriptor. No arguments other than a possible exception are given to the completion callback.

Calling fs.close() on any file descriptor (fd) that is currently in use through any other fs operation may lead to undefined behavior.

See the POSIX close(2) documentation for more detail.
fs.copyFile(src, dest[, mode], callback)#
History
| Version | Changes |
|---|---|
| v18.0.0 | Passing an invalid callback to the |
| v14.0.0 | Changed |
| v8.5.0 | Added in: v8.5.0 |
- src <string> | <Buffer> | <URL> source filename to copy
- dest <string> | <Buffer> | <URL> destination filename of the copy operation
- mode <integer> modifiers for copy operation. Default: 0.
- callback <Function>
  - err <Error>

Asynchronously copies src to dest. By default, dest is overwritten if it already exists. No arguments other than a possible exception are given to the callback function. Node.js makes no guarantees about the atomicity of the copy operation. If an error occurs after the destination file has been opened for writing, Node.js will attempt to remove the destination.

mode is an optional integer that specifies the behavior of the copy operation. It is possible to create a mask consisting of the bitwise OR of two or more values (e.g. fs.constants.COPYFILE_EXCL | fs.constants.COPYFILE_FICLONE).

- fs.constants.COPYFILE_EXCL: The copy operation will fail if dest already exists.
- fs.constants.COPYFILE_FICLONE: The copy operation will attempt to create a copy-on-write reflink. If the platform does not support copy-on-write, then a fallback copy mechanism is used.
- fs.constants.COPYFILE_FICLONE_FORCE: The copy operation will attempt to create a copy-on-write reflink. If the platform does not support copy-on-write, then the operation will fail.
```js
import { copyFile, constants } from 'node:fs';

function callback(err) {
  if (err) throw err;
  console.log('source.txt was copied to destination.txt');
}

// destination.txt will be created or overwritten by default.
copyFile('source.txt', 'destination.txt', callback);

// By using COPYFILE_EXCL, the operation will fail if destination.txt exists.
copyFile('source.txt', 'destination.txt', constants.COPYFILE_EXCL, callback);
```

fs.cp(src, dest[, options], callback)#
History
| Version | Changes |
|---|---|
| v22.3.0 | This API is no longer experimental. |
| v20.1.0, v18.17.0 | Accept an additional |
| v18.0.0 | Passing an invalid callback to the |
| v17.6.0, v16.15.0 | Accepts an additional |
| v16.7.0 | Added in: v16.7.0 |
- src <string> | <URL> source path to copy.
- dest <string> | <URL> destination path to copy to.
- options <Object>
  - dereference <boolean> dereference symlinks. Default: false.
  - errorOnExist <boolean> when force is false, and the destination exists, throw an error. Default: false.
  - filter <Function> Function to filter copied files/directories. Return true to copy the item, false to ignore it. When ignoring a directory, all of its contents will be skipped as well. Can also return a Promise that resolves to true or false. Default: undefined.
  - force <boolean> overwrite existing file or directory. The copy operation will ignore errors if you set this to false and the destination exists. Use the errorOnExist option to change this behavior. Default: true.
  - mode <integer> modifiers for copy operation. Default: 0. See mode flag of fs.copyFile().
  - preserveTimestamps <boolean> When true timestamps from src will be preserved. Default: false.
  - recursive <boolean> copy directories recursively. Default: false
  - verbatimSymlinks <boolean> When true, path resolution for symlinks will be skipped. Default: false
- callback <Function>
  - err <Error>

Asynchronously copies the entire directory structure from src to dest, including subdirectories and files.

When copying a directory to another directory, globs are not supported and behavior is similar to cp dir1/ dir2/.
fs.createReadStream(path[, options])#
History
| Version | Changes |
|---|---|
| v16.10.0 | The |
| v16.10.0 | The |
| v15.5.0 | Add support for |
| v15.4.0 | The |
| v14.0.0 | Change |
| v13.6.0, v12.17.0 | The |
| v12.10.0 | Enable |
| v11.0.0 | Impose new restrictions on |
| v7.6.0 | The |
| v7.0.0 | The passed |
| v2.3.0 | The passed |
| v0.1.31 | Added in: v0.1.31 |
- path <string> | <Buffer> | <URL>
- options <string> | <Object>
  - flags <string> See support of file system flags. Default: 'r'.
  - encoding <string> Default: null
  - fd <integer> | <FileHandle> Default: null
  - mode <integer> Default: 0o666
  - autoClose <boolean> Default: true
  - emitClose <boolean> Default: true
  - start <integer>
  - end <integer> Default: Infinity
  - highWaterMark <integer> Default: 64 * 1024
  - fs <Object> | <null> Default: null
  - signal <AbortSignal> | <null> Default: null
- Returns: <fs.ReadStream>

options can include start and end values to read a range of bytes from the file instead of the entire file. Both start and end are inclusive and start counting at 0; allowed values are in the [0, Number.MAX_SAFE_INTEGER] range. If fd is specified and start is omitted or undefined, fs.createReadStream() reads sequentially from the current file position. The encoding can be any one of those accepted by <Buffer>.

If fd is specified, ReadStream will ignore the path argument and will use the specified file descriptor. This means that no 'open' event will be emitted. fd should be blocking; non-blocking fds should be passed to <net.Socket>.

If fd points to a character device that only supports blocking reads (such as a keyboard or sound card), read operations do not finish until data is available. This can prevent the process from exiting and the stream from closing naturally.

By default, the stream will emit a 'close' event after it has been destroyed. Set the emitClose option to false to change this behavior.

By providing the fs option, it is possible to override the corresponding fs implementations for open, read, and close. When providing the fs option, an override for read is required. If no fd is provided, an override for open is also required. If autoClose is true, an override for close is also required.
```js
import { createReadStream } from 'node:fs';

// Create a stream from some character device.
const stream = createReadStream('/dev/input/event0');
setTimeout(() => {
  stream.close(); // This may not close the stream.
  // Artificially marking end-of-stream, as if the underlying resource had
  // indicated end-of-file by itself, allows the stream to close.
  // This does not cancel pending read operations, and if there is such an
  // operation, the process may still not be able to exit successfully
  // until it finishes.
  stream.push(null);
  stream.read(0);
}, 100);
```

If autoClose is false, then the file descriptor won't be closed, even if there's an error. It is the application's responsibility to close it and make sure there's no file descriptor leak. If autoClose is set to true (default behavior), on 'error' or 'end' the file descriptor will be closed automatically.

mode sets the file mode (permission and sticky bits), but only if the file was created.

An example to read the last 10 bytes of a file which is 100 bytes long:

```js
import { createReadStream } from 'node:fs';

createReadStream('sample.txt', { start: 90, end: 99 });
```

If options is a string, then it specifies the encoding.
fs.createWriteStream(path[, options])#
History
| Version | Changes |
|---|---|
| v21.0.0, v20.10.0 | The |
| v16.10.0 | The |
| v16.10.0 | The |
| v15.5.0 | Add support for |
| v15.4.0 | The |
| v14.0.0 | Change |
| v13.6.0, v12.17.0 | The |
| v12.10.0 | Enable |
| v7.6.0 | The |
| v7.0.0 | The passed |
| v5.5.0 | The |
| v2.3.0 | The passed |
| v0.1.31 | Added in: v0.1.31 |
- path <string> | <Buffer> | <URL>
- options <string> | <Object>
  - flags <string> See support of file system flags. Default: 'w'.
  - encoding <string> Default: 'utf8'
  - fd <integer> | <FileHandle> Default: null
  - mode <integer> Default: 0o666
  - autoClose <boolean> Default: true
  - emitClose <boolean> Default: true
  - start <integer>
  - fs <Object> | <null> Default: null
  - signal <AbortSignal> | <null> Default: null
  - highWaterMark <number> Default: 16384
  - flush <boolean> If true, the underlying file descriptor is flushed prior to closing it. Default: false.
- Returns: <fs.WriteStream>

options may also include a start option to allow writing data at some position past the beginning of the file; allowed values are in the [0, Number.MAX_SAFE_INTEGER] range. Modifying a file rather than replacing it may require the flags option to be set to r+ rather than the default w. The encoding can be any one of those accepted by <Buffer>.

If autoClose is set to true (default behavior), on 'error' or 'finish' the file descriptor will be closed automatically. If autoClose is false, then the file descriptor won't be closed, even if there's an error. It is the application's responsibility to close it and make sure there's no file descriptor leak.

By default, the stream will emit a 'close' event after it has been destroyed. Set the emitClose option to false to change this behavior.

By providing the fs option it is possible to override the corresponding fs implementations for open, write, writev, and close. Overriding write() without writev() can reduce performance as some optimizations (_writev()) will be disabled. When providing the fs option, overrides for at least one of write and writev are required. If no fd option is supplied, an override for open is also required. If autoClose is true, an override for close is also required.

Like <fs.ReadStream>, if fd is specified, <fs.WriteStream> will ignore the path argument and will use the specified file descriptor. This means that no 'open' event will be emitted. fd should be blocking; non-blocking fds should be passed to <net.Socket>.

If options is a string, then it specifies the encoding.
fs.exists(path, callback)#
History
| Version | Changes |
|---|---|
| v18.0.0 | Passing an invalid callback to the |
| v7.6.0 | The |
| v1.0.0 | Deprecated since: v1.0.0 |
| v0.0.2 | Added in: v0.0.2 |
path<string> |<Buffer> |<URL>callback<Function>exists<boolean>
Test whether or not the element at the given path exists by checking with the file system. Then call the callback argument with either true or false:

```js
import { exists } from 'node:fs';

exists('/etc/passwd', (e) => {
  console.log(e ? 'it exists' : 'no passwd!');
});
```

The parameters for this callback are not consistent with other Node.js callbacks. Normally, the first parameter to a Node.js callback is an err parameter, optionally followed by other parameters. The fs.exists() callback has only one boolean parameter. This is one reason fs.access() is recommended instead of fs.exists().

If path is a symbolic link, it is followed. Thus, if path exists but points to a non-existent element, the callback will receive the value false.

Using fs.exists() to check for the existence of a file before calling fs.open(), fs.readFile(), or fs.writeFile() is not recommended. Doing so introduces a race condition, since other processes may change the file's state between the two calls. Instead, user code should open/read/write the file directly and handle the error raised if the file does not exist.
write (NOT RECOMMENDED)

```js
import { exists, open, close } from 'node:fs';

exists('myfile', (e) => {
  if (e) {
    console.error('myfile already exists');
  } else {
    open('myfile', 'wx', (err, fd) => {
      if (err) throw err;

      try {
        writeMyData(fd);
      } finally {
        close(fd, (err) => {
          if (err) throw err;
        });
      }
    });
  }
});
```

write (RECOMMENDED)

```js
import { open, close } from 'node:fs';

open('myfile', 'wx', (err, fd) => {
  if (err) {
    if (err.code === 'EEXIST') {
      console.error('myfile already exists');
      return;
    }
    throw err;
  }

  try {
    writeMyData(fd);
  } finally {
    close(fd, (err) => {
      if (err) throw err;
    });
  }
});
```

read (NOT RECOMMENDED)

```js
import { open, close, exists } from 'node:fs';

exists('myfile', (e) => {
  if (e) {
    open('myfile', 'r', (err, fd) => {
      if (err) throw err;

      try {
        readMyData(fd);
      } finally {
        close(fd, (err) => {
          if (err) throw err;
        });
      }
    });
  } else {
    console.error('myfile does not exist');
  }
});
```

read (RECOMMENDED)

```js
import { open, close } from 'node:fs';

open('myfile', 'r', (err, fd) => {
  if (err) {
    if (err.code === 'ENOENT') {
      console.error('myfile does not exist');
      return;
    }
    throw err;
  }

  try {
    readMyData(fd);
  } finally {
    close(fd, (err) => {
      if (err) throw err;
    });
  }
});
```

The "not recommended" examples above check for existence and then use the file; the "recommended" examples are better because they use the file directly and handle the error, if any.

In general, check for the existence of a file only if the file won't be used directly, for example when its existence is a signal from another process.
fs.fchmod(fd, mode, callback)#
History
| Version | Changes |
|---|---|
| v18.0.0 | Passing an invalid callback to the |
| v10.0.0 | The |
| v7.0.0 | The |
| v0.4.7 | Added in: v0.4.7 |
- fd <integer>
- mode <string> | <integer>
- callback <Function>
  - err <Error>
Sets the permissions on the file. No arguments other than a possible exception are given to the completion callback.

See the POSIX fchmod(2) documentation for more detail.
fs.fchown(fd, uid, gid, callback)#
History
| Version | Changes |
|---|---|
| v18.0.0 | Passing an invalid callback to the |
| v10.0.0 | The |
| v7.0.0 | The |
| v0.4.7 | Added in: v0.4.7 |
- fd <integer>
- uid <integer>
- gid <integer>
- callback <Function>
  - err <Error>
Sets the owner of the file. No arguments other than a possible exception are given to the completion callback.

See the POSIX fchown(2) documentation for more detail.
fs.fdatasync(fd, callback)#
History
| Version | Changes |
|---|---|
| v18.0.0 | Passing an invalid callback to the |
| v10.0.0 | The |
| v7.0.0 | The |
| v0.1.96 | Added in: v0.1.96 |
- fd <integer>
- callback <Function>
  - err <Error>
Forces all currently queued I/O operations associated with the file to the operating system's synchronized I/O completion state. Refer to the POSIX fdatasync(2) documentation for details. No arguments other than a possible exception are given to the completion callback.
fs.fstat(fd[, options], callback)#
History
| Version | Changes |
|---|---|
| v18.0.0 | Passing an invalid callback to the |
| v10.5.0 | Accepts an additional |
| v10.0.0 | The |
| v7.0.0 | The |
| v0.1.95 | Added in: v0.1.95 |
- fd <integer>
- options <Object>
  - bigint <boolean> Whether the numeric values in the returned <fs.Stats> object should be bigint. Default: false.
- callback <Function>
  - err <Error>
  - stats <fs.Stats>
Invokes the callback with the <fs.Stats> for the file descriptor.

See the POSIX fstat(2) documentation for more detail.
fs.fsync(fd, callback)#
History
| Version | Changes |
|---|---|
| v18.0.0 | Passing an invalid callback to the |
| v10.0.0 | The |
| v7.0.0 | The |
| v0.1.96 | Added in: v0.1.96 |
- fd <integer>
- callback <Function>
  - err <Error>
Request that all data for the open file descriptor is flushed to the storage device. The specific implementation is operating system and device specific. Refer to the POSIX fsync(2) documentation for more detail. No arguments other than a possible exception are given to the completion callback.
fs.ftruncate(fd[, len], callback)#
History
| Version | Changes |
|---|---|
| v18.0.0 | Passing an invalid callback to the |
| v10.0.0 | The |
| v7.0.0 | The |
| v0.8.6 | Added in: v0.8.6 |
- fd <integer>
- len <integer> Default: 0
- callback <Function>
  - err <Error>
Truncates the file descriptor. No arguments other than a possible exception are given to the completion callback.

See the POSIX ftruncate(2) documentation for more detail.

If the file referred to by the file descriptor was larger than len bytes, only the first len bytes will be retained in the file.

For example, the following program retains only the first four bytes of the file:

```js
import { open, close, ftruncate } from 'node:fs';

function closeFd(fd) {
  close(fd, (err) => {
    if (err) throw err;
  });
}

open('temp.txt', 'r+', (err, fd) => {
  if (err) throw err;

  try {
    ftruncate(fd, 4, (err) => {
      closeFd(fd);
      if (err) throw err;
    });
  } catch (err) {
    closeFd(fd);
    if (err) throw err;
  }
});
```

If the file previously was shorter than len bytes, it is extended, and the extended part is filled with null bytes ('\0').

If len is negative then 0 will be used.
fs.futimes(fd, atime, mtime, callback)#
History
| Version | Changes |
|---|---|
| v18.0.0 | Passing an invalid callback to the |
| v10.0.0 | The |
| v7.0.0 | The |
| v4.1.0 | Numeric strings, |
| v0.4.2 | Added in: v0.4.2 |
- fd <integer>
- atime <number> | <string> | <Date>
- mtime <number> | <string> | <Date>
- callback <Function>
  - err <Error>
Change the file system timestamps of the object referenced by the supplied file descriptor. See fs.utimes().
fs.glob(pattern[, options], callback)#
History
| Version | Changes |
|---|---|
| v24.1.0, v22.17.0 | Add support for |
| v24.0.0, v22.17.0 | Marking the API stable. |
| v23.7.0, v22.14.0 | Add support for |
| v22.2.0 | Add support for |
| v22.0.0 | Added in: v22.0.0 |
- pattern <string> | <string[]>
- options <Object>
  - cwd <string> | <URL> Current working directory. Default: process.cwd()
  - exclude <Function> | <string[]> Function to filter out files/directories or a list of glob patterns to be excluded. If a function is provided, return true to exclude the item, false to include it. Default: undefined.
  - withFileTypes <boolean> true if the glob should return paths as Dirents, false otherwise. Default: false.
- callback <Function>
  - err <Error>
Retrieves the files matching the specified pattern.

```js
import { glob } from 'node:fs';

glob('**/*.js', (err, matches) => {
  if (err) throw err;
  console.log(matches);
});
```

```js
const { glob } = require('node:fs');

glob('**/*.js', (err, matches) => {
  if (err) throw err;
  console.log(matches);
});
```
fs.lchmod(path, mode, callback)#
History
| Version | Changes |
|---|---|
| v18.0.0 | Passing an invalid callback to the |
| v16.0.0 | The error returned may be an |
| v10.0.0 | The |
| v7.0.0 | The |
| v0.4.7 | Deprecated since: v0.4.7 |
- path <string> | <Buffer> | <URL>
- mode <integer>
- callback <Function>
Changes the permissions on a symbolic link. No arguments other than a possible exception are given to the completion callback.

This method is only implemented on macOS.

See the POSIX lchmod(2) documentation for more detail.
fs.lchown(path, uid, gid, callback)#
History
| Version | Changes |
|---|---|
| v18.0.0 | Passing an invalid callback to the |
| v10.6.0 | This API is no longer deprecated. |
| v10.0.0 | The |
| v7.0.0 | The |
| v0.4.7 | Documentation-only deprecation. |
Set the owner of the symbolic link. No arguments other than a possible exception are given to the completion callback.

See the POSIX lchown(2) documentation for more detail.
fs.lutimes(path, atime, mtime, callback)#
History
| Version | Changes |
|---|---|
| v18.0.0 | Passing an invalid callback to the |
| v14.5.0, v12.19.0 | Added in: v14.5.0, v12.19.0 |
- path <string> | <Buffer> | <URL>
- atime <number> | <string> | <Date>
- mtime <number> | <string> | <Date>
- callback <Function>
  - err <Error>
Changes the access and modification times of a file in the same way as fs.utimes(), with the difference that if the path refers to a symbolic link, then the link is not dereferenced: instead, the timestamps of the symbolic link itself are changed.

No arguments other than a possible exception are given to the completion callback.
fs.link(existingPath, newPath, callback)#
History
| Version | Changes |
|---|---|
| v18.0.0 | Passing an invalid callback to the |
| v10.0.0 | The |
| v7.6.0 | The |
| v7.0.0 | The |
| v0.1.31 | Added in: v0.1.31 |
Creates a new link from the existingPath to the newPath. See the POSIX link(2) documentation for more detail. No arguments other than a possible exception are given to the completion callback.
fs.lstat(path[, options], callback)#
History
| Version | Changes |
|---|---|
| v18.0.0 | Passing an invalid callback to the |
| v10.5.0 | Accepts an additional |
| v10.0.0 | The |
| v7.6.0 | The |
| v7.0.0 | The |
| v0.1.30 | Added in: v0.1.30 |
- path <string> | <Buffer> | <URL>
- options <Object>
  - bigint <boolean> Whether the numeric values in the returned <fs.Stats> object should be bigint. Default: false.
- callback <Function>
  - err <Error>
  - stats <fs.Stats>
Retrieves the <fs.Stats> for the symbolic link referred to by the path. The callback gets two arguments (err, stats) where stats is a <fs.Stats> object. lstat() is identical to stat(), except that if path is a symbolic link, then the link itself is stat-ed, not the file that it refers to.

See the POSIX lstat(2) documentation for more details.
fs.mkdir(path[, options], callback)#
History
| Version | Changes |
|---|---|
| v18.0.0 | Passing an invalid callback to the |
| v13.11.0, v12.17.0 | In |
| v10.12.0 | The second argument can now be an |
| v10.0.0 | The |
| v7.6.0 | The |
| v7.0.0 | The |
| v0.1.8 | Added in: v0.1.8 |
- path <string> | <Buffer> | <URL>
- options <Object> | <integer>
  - recursive <boolean> Default: false
  - mode <string> | <integer> Not supported on Windows. See File modes for more details. Default: 0o777.
- callback <Function>
  - err <Error>
  - path <string> | <undefined> Present only if a directory is created with recursive set to true.
Asynchronously creates a directory.

The callback is given a possible exception and, if recursive is true, the first directory path created, (err[, path]). path can still be undefined when recursive is true, if no directory was created (for instance, if it was previously created).

The optional options argument can be an integer specifying mode (permission and sticky bits), or an object with a mode property and a recursive property indicating whether parent directories should be created. Calling fs.mkdir() when path is a directory that exists results in an error only when recursive is false. If recursive is false and the directory exists, an EEXIST error occurs.

```js
import { mkdir } from 'node:fs';

// Create ./tmp/a/apple, regardless of whether ./tmp and ./tmp/a exist.
mkdir('./tmp/a/apple', { recursive: true }, (err) => {
  if (err) throw err;
});
```

On Windows, using fs.mkdir() on the root directory even with recursion will result in an error:

```js
import { mkdir } from 'node:fs';

mkdir('/', { recursive: true }, (err) => {
  // => [Error: EPERM: operation not permitted, mkdir 'C:\']
});
```

See the POSIX mkdir(2) documentation for more details.
fs.mkdtemp(prefix[, options], callback)#
History
| Version | Changes |
|---|---|
| v20.6.0, v18.19.0 | The |
| v18.0.0 | Passing an invalid callback to the |
| v16.5.0, v14.18.0 | The |
| v10.0.0 | The |
| v7.0.0 | The |
| v6.2.1 | The |
| v5.10.0 | Added in: v5.10.0 |
- prefix <string> | <Buffer> | <URL>
- options <string> | <Object>
  - encoding <string> Default: 'utf8'
- callback <Function>
Creates a unique temporary directory.

Generates six random characters to be appended behind a required prefix to create a unique temporary directory. Due to platform inconsistencies, avoid trailing X characters in prefix. Some platforms, notably the BSDs, can return more than six random characters, and replace trailing X characters in prefix with random characters.

The created directory path is passed as a string to the callback's second parameter.

The optional options argument can be a string specifying an encoding, or an object with an encoding property specifying the character encoding to use.

```js
import { mkdtemp } from 'node:fs';
import { join } from 'node:path';
import { tmpdir } from 'node:os';

mkdtemp(join(tmpdir(), 'foo-'), (err, directory) => {
  if (err) throw err;
  console.log(directory);
  // Prints: /tmp/foo-itXde2 or C:\Users\...\AppData\Local\Temp\foo-itXde2
});
```

The fs.mkdtemp() method will append the six randomly selected characters directly to the prefix string. For instance, given a directory /tmp, if the intention is to create a temporary directory within /tmp, the prefix must end with a trailing platform-specific path separator (require('node:path').sep).

```js
import { tmpdir } from 'node:os';
import { mkdtemp } from 'node:fs';
import { sep } from 'node:path';

// The parent directory for the new temporary directory
const tmpDir = tmpdir();

// This method is *INCORRECT*:
mkdtemp(tmpDir, (err, directory) => {
  if (err) throw err;
  console.log(directory);
  // Will print something similar to `/tmpabc123`.
  // A new temporary directory is created at the file system root
  // rather than *within* the /tmp directory.
});

// This method is *CORRECT*:
mkdtemp(`${tmpDir}${sep}`, (err, directory) => {
  if (err) throw err;
  console.log(directory);
  // Will print something similar to `/tmp/abc123`.
  // A new temporary directory is created within
  // the /tmp directory.
});
```

fs.open(path[, flags[, mode]], callback)#
History
| Version | Changes |
|---|---|
| v18.0.0 | Passing an invalid callback to the |
| v11.1.0 | The |
| v9.9.0 | The |
| v7.6.0 | The |
| v0.0.2 | Added in: v0.0.2 |
- path <string> | <Buffer> | <URL>
- flags <string> | <number> See support of file system flags. Default: 'r'.
- mode <string> | <integer> Default: 0o666 (readable and writable)
- callback <Function>
Asynchronous file open. See the POSIX open(2) documentation for more details.

mode sets the file mode (permission and sticky bits), but only if the file was created. On Windows, only the write permission can be manipulated; see fs.chmod().

The callback gets two arguments (err, fd).

Some characters (< > : " / \ | ? *) are reserved under Windows as documented by Naming Files, Paths, and Namespaces. Under NTFS, if the filename contains a colon, Node.js will open a file system stream, as described by this MSDN page.

Functions based on fs.open() exhibit this behavior as well: fs.writeFile(), fs.readFile(), etc.
fs.openAsBlob(path[, options])#
History
| Version | Changes |
|---|---|
| v24.0.0, v22.17.0 | Marking the API stable. |
| v19.8.0 | Added in: v19.8.0 |
- path <string> | <Buffer> | <URL>
- options <Object>
  - type <string> An optional mime type for the blob.
- Returns:<Promise> Fulfills with a<Blob> upon success.
Returns a <Blob> whose data is backed by the given file.

The file must not be modified after the <Blob> is created. Any modifications will cause reading the <Blob> data to fail with a DOMException error. Synchronous stat operations are performed on the file when the <Blob> is created, and before each read, in order to detect whether the file data has been modified on disk.

```js
import { openAsBlob } from 'node:fs';

const blob = await openAsBlob('the.file.txt');
const ab = await blob.arrayBuffer();
blob.stream();
```

```js
const { openAsBlob } = require('node:fs');

(async () => {
  const blob = await openAsBlob('the.file.txt');
  const ab = await blob.arrayBuffer();
  blob.stream();
})();
```
fs.opendir(path[, options], callback)#
History
| Version | Changes |
|---|---|
| v20.1.0, v18.17.0 | Added |
| v18.0.0 | Passing an invalid callback to the |
| v13.1.0, v12.16.0 | The |
| v12.12.0 | Added in: v12.12.0 |
- path <string> | <Buffer> | <URL>
- options <Object>
- callback <Function>
Asynchronously open a directory. See the POSIX opendir(3) documentation for more details.

Creates an <fs.Dir>, which contains all further functions for reading from and cleaning up the directory.

The encoding option sets the encoding for the path while opening the directory and subsequent read operations.
fs.read(fd, buffer, offset, length, position, callback)#
History
| Version | Changes |
|---|---|
| v18.0.0 | Passing an invalid callback to the |
| v10.10.0 | The |
| v7.4.0 | The |
| v6.0.0 | The |
| v0.0.2 | Added in: v0.0.2 |
- fd <integer>
- buffer <Buffer> | <TypedArray> | <DataView> The buffer that the data will be written to.
- offset <integer> The position in buffer to write the data to.
- length <integer> The number of bytes to read.
- position <integer> | <bigint> | <null> Specifies where to begin reading from in the file. If position is null or -1, data will be read from the current file position, and the file position will be updated. If position is a non-negative integer, the file position will be unchanged.
- callback <Function>
Read data from the file specified by fd.

The callback is given the three arguments, (err, bytesRead, buffer).

If the file is not modified concurrently, the end-of-file is reached when the number of bytes read is zero.

If this method is invoked as its util.promisify()ed version, it returns a promise for an Object with bytesRead and buffer properties.

The fs.read() method reads data from the file specified by the file descriptor (fd). The length argument indicates the maximum number of bytes that Node.js will attempt to read from the kernel. However, the actual number of bytes read (bytesRead) can be lower than the specified length for various reasons.
For example:

- If the file is shorter than the specified length, bytesRead will be set to the actual number of bytes read.
- If the file encounters EOF (End of File) before the buffer could be filled, Node.js will read all available bytes until EOF is encountered, and the bytesRead parameter in the callback will indicate the actual number of bytes read, which may be less than the specified length.
- If the file is on a slow network filesystem or encounters any other issue during reading, bytesRead can be lower than the specified length.
Therefore, when using fs.read(), it's important to check the bytesRead value to determine how many bytes were actually read from the file. Depending on your application logic, you may need to handle cases where bytesRead is lower than the specified length, such as by wrapping the read call in a loop if you require a minimum amount of bytes.

This behavior is similar to the POSIX preadv2 function.
fs.read(fd[, options], callback)#
History
| Version | Changes |
|---|---|
| v13.11.0, v12.17.0 | Options object can be passed in to make buffer, offset, length, and position optional. |
| v13.11.0, v12.17.0 | Added in: v13.11.0, v12.17.0 |
- fd <integer>
- options <Object>
  - buffer <Buffer> | <TypedArray> | <DataView> Default: Buffer.alloc(16384)
  - offset <integer> Default: 0
  - length <integer> Default: buffer.byteLength - offset
  - position <integer> | <bigint> | <null> Default: null
- callback <Function>
Similar to the fs.read() function, this version takes an optional options object. If no options object is specified, it will default with the above values.
fs.read(fd, buffer[, options], callback)#
- fd <integer>
- buffer <Buffer> | <TypedArray> | <DataView> The buffer that the data will be written to.
- options <Object>
- callback <Function>
Similar to the fs.read() function, this version takes an optional options object. If no options object is specified, it will default with the above values.
fs.readdir(path[, options], callback)#
History
| Version | Changes |
|---|---|
| v20.1.0, v18.17.0 | Added |
| v18.0.0 | Passing an invalid callback to the |
| v10.10.0 | New option |
| v10.0.0 | The |
| v7.6.0 | The |
| v7.0.0 | The |
| v6.0.0 | The |
| v0.1.8 | Added in: v0.1.8 |
- path <string> | <Buffer> | <URL>
- options <string> | <Object>
- callback <Function>
  - err <Error>
  - files <string[]> | <Buffer[]> | <fs.Dirent[]>
Reads the contents of a directory. The callback gets two arguments (err, files) where files is an array of the names of the files in the directory excluding '.' and '..'.

See the POSIX readdir(3) documentation for more details.

The optional options argument can be a string specifying an encoding, or an object with an encoding property specifying the character encoding to use for the filenames passed to the callback. If the encoding is set to 'buffer', the filenames returned will be passed as <Buffer> objects.

If options.withFileTypes is set to true, the files array will contain <fs.Dirent> objects.
fs.readFile(path[, options], callback)#
History
| Version | Changes |
|---|---|
| v18.0.0 | Passing an invalid callback to the |
| v16.0.0 | The error returned may be an |
| v15.2.0, v14.17.0 | The options argument may include an AbortSignal to abort an ongoing readFile request. |
| v10.0.0 | The |
| v7.6.0 | The |
| v7.0.0 | The |
| v5.1.0 | The |
| v5.0.0 | The |
| v0.1.29 | Added in: v0.1.29 |
- path <string> | <Buffer> | <URL> | <integer> filename or file descriptor
- options <Object> | <string>
  - encoding <string> | <null> Default: null
  - flag <string> See support of file system flags. Default: 'r'.
  - signal <AbortSignal> allows aborting an in-progress readFile
- callback <Function>
  - err <Error> | <AggregateError>
  - data <string> | <Buffer>
Asynchronously reads the entire contents of a file.
```js
import { readFile } from 'node:fs';

readFile('/etc/passwd', (err, data) => {
  if (err) throw err;
  console.log(data);
});
```

The callback is passed two arguments (err, data), where data is the contents of the file.

If no encoding is specified, then the raw buffer is returned.

If options is a string, then it specifies the encoding:

```js
import { readFile } from 'node:fs';

readFile('/etc/passwd', 'utf8', callback);
```

When the path is a directory, the behavior of fs.readFile() and fs.readFileSync() is platform-specific. On macOS, Linux, and Windows, an error will be returned. On FreeBSD, a representation of the directory's contents will be returned.

```js
import { readFile } from 'node:fs';

// macOS, Linux, and Windows
readFile('<directory>', (err, data) => {
  // => [Error: EISDIR: illegal operation on a directory, read <directory>]
});

// FreeBSD
readFile('<directory>', (err, data) => {
  // => null, <data>
});
```

It is possible to abort an ongoing request using an AbortSignal. If a request is aborted the callback is called with an AbortError:

```js
import { readFile } from 'node:fs';

const controller = new AbortController();
const signal = controller.signal;

readFile(fileInfo[0].name, { signal }, (err, buf) => {
  // ...
});

// When you want to abort the request
controller.abort();
```

The fs.readFile() function buffers the entire file. To minimize memory costs, when possible prefer streaming via fs.createReadStream().

Aborting an ongoing request does not abort individual operating system requests but rather the internal buffering fs.readFile performs.
File descriptors#
- Any specified file descriptor has to support reading.
- If a file descriptor is specified as the path, it will not be closed automatically.
- The reading will begin at the current position. For example, if the file already had 'Hello World' and six bytes are read with the file descriptor, the call to fs.readFile() with the same file descriptor would give 'World', rather than 'Hello World'.
Performance Considerations#
The fs.readFile() method asynchronously reads the contents of a file into memory one chunk at a time, allowing the event loop to turn between each chunk. This allows the read operation to have less impact on other activity that may be using the underlying libuv thread pool, but means that it will take longer to read a complete file into memory.

The additional read overhead can vary broadly on different systems and depends on the type of file being read. If the file type is not a regular file (a pipe for instance) and Node.js is unable to determine an actual file size, each read operation will load 64 KiB of data. For regular files, each read will process 512 KiB of data.

For applications that require as-fast-as-possible reading of file contents, it is better to use fs.read() directly and for application code to manage reading the full contents of the file itself.

The Node.js GitHub issue #25741 provides more information and a detailed analysis on the performance of fs.readFile() for multiple file sizes in different Node.js versions.
fs.readlink(path[, options], callback)#
History
| Version | Changes |
|---|---|
| v18.0.0 | Passing an invalid callback to the |
| v10.0.0 | The |
| v7.6.0 | The |
| v7.0.0 | The |
| v0.1.31 | Added in: v0.1.31 |
- path <string> | <Buffer> | <URL>
- options <string> | <Object>
  - encoding <string> Default: 'utf8'
- callback <Function>
Reads the contents of the symbolic link referred to by path. The callback gets two arguments (err, linkString).
See the POSIXreadlink(2) documentation for more details.
The optional options argument can be a string specifying an encoding, or an object with an encoding property specifying the character encoding to use for the link path passed to the callback. If the encoding is set to 'buffer', the link path returned will be passed as a <Buffer> object.
fs.readv(fd, buffers[, position], callback)#
History
| Version | Changes |
|---|---|
| v18.0.0 | Passing an invalid callback to the |
| v13.13.0, v12.17.0 | Added in: v13.13.0, v12.17.0 |
- fd <integer>
- buffers <ArrayBufferView[]>
- position <integer> | <null> Default: null
- callback <Function>
  - err <Error>
  - bytesRead <integer>
  - buffers <ArrayBufferView[]>
Read from a file specified by fd and write to an array of ArrayBufferViews using readv().

position is the offset from the beginning of the file from where data should be read. If typeof position !== 'number', the data will be read from the current position.

The callback will be given three arguments: err, bytesRead, and buffers. bytesRead is how many bytes were read from the file.

If this method is invoked as its util.promisify()ed version, it returns a promise for an Object with bytesRead and buffers properties.
fs.realpath(path[, options], callback)#
History
| Version | Changes |
|---|---|
| v18.0.0 | Passing an invalid callback to the |
| v10.0.0 | The |
| v8.0.0 | Pipe/Socket resolve support was added. |
| v7.6.0 | The |
| v7.0.0 | The |
| v6.4.0 | Calling |
| v6.0.0 | The |
| v0.1.31 | Added in: v0.1.31 |
- path <string> | <Buffer> | <URL>
- options <string> | <Object>
  - encoding <string> Default: 'utf8'
- callback <Function>
Asynchronously computes the canonical pathname by resolving ., .., and symbolic links.

A canonical pathname is not necessarily unique. Hard links and bind mounts can expose a file system entity through many pathnames.

This function behaves like realpath(3), with some exceptions:

- No case conversion is performed on case-insensitive file systems.
- The maximum number of symbolic links is platform-independent and generally (much) higher than what the native realpath(3) implementation supports.

The callback gets two arguments (err, resolvedPath). May use process.cwd to resolve relative paths.

Only paths that can be converted to UTF8 strings are supported.

The optional options argument can be a string specifying an encoding, or an object with an encoding property specifying the character encoding to use for the path passed to the callback. If the encoding is set to 'buffer', the path returned will be passed as a <Buffer> object.

If path resolves to a socket or a pipe, the function will return a system dependent name for that object.

A path that does not exist results in an ENOENT error. error.path is the absolute file path.
fs.realpath.native(path[, options], callback)#
History
| Version | Changes |
|---|---|
| v18.0.0 | Passing an invalid callback to the |
| v9.2.0 | Added in: v9.2.0 |
- path <string> | <Buffer> | <URL>
- options <string> | <Object>
  - encoding <string> Default: 'utf8'
- callback <Function>
Asynchronous realpath(3).

The callback gets two arguments (err, resolvedPath).

Only paths that can be converted to UTF8 strings are supported.

The optional options argument can be a string specifying an encoding, or an object with an encoding property specifying the character encoding to use for the path passed to the callback. If the encoding is set to 'buffer', the path returned will be passed as a <Buffer> object.

On Linux, when Node.js is linked against musl libc, the procfs file system must be mounted on /proc in order for this function to work. Glibc does not have this restriction.
fs.rename(oldPath, newPath, callback)#
History
| Version | Changes |
|---|---|
| v18.0.0 | Passing an invalid callback to the |
| v10.0.0 | The |
| v7.6.0 | The |
| v7.0.0 | The |
| v0.0.2 | Added in: v0.0.2 |
Asynchronously rename file at oldPath to the pathname provided as newPath. In the case that newPath already exists, it will be overwritten. If there is a directory at newPath, an error will be raised instead. No arguments other than a possible exception are given to the completion callback.

See also: rename(2).

```js
import { rename } from 'node:fs';

rename('oldFile.txt', 'newFile.txt', (err) => {
  if (err) throw err;
  console.log('Rename complete!');
});
```

fs.rmdir(path[, options], callback)#
History
| Version | Changes |
|---|---|
| v25.0.0 | Remove |
| v18.0.0 | Passing an invalid callback to the |
| v16.0.0 | Using |
| v16.0.0 | Using |
| v16.0.0 | The |
| v14.14.0 | The |
| v13.3.0, v12.16.0 | The |
| v12.10.0 | The |
| v10.0.0 | The |
| v7.6.0 | The |
| v7.0.0 | The |
| v0.0.2 | Added in: v0.0.2 |
- path <string> | <Buffer> | <URL>
- options <Object> There are currently no options exposed. There used to be options for recursive, maxBusyTries, and emfileWait but they were deprecated and removed. The options argument is still accepted for backwards compatibility but it is not used.
- callback <Function>
  - err <Error>
Asynchronous rmdir(2). No arguments other than a possible exception are given to the completion callback.

Using fs.rmdir() on a file (not a directory) results in an ENOENT error on Windows and an ENOTDIR error on POSIX.

To get a behavior similar to the rm -rf Unix command, use fs.rm() with options { recursive: true, force: true }.
fs.rm(path[, options], callback)#
History
| Version | Changes |
|---|---|
| v17.3.0, v16.14.0 | The |
| v14.14.0 | Added in: v14.14.0 |
- path <string> | <Buffer> | <URL>
- options <Object>
  - force <boolean> When true, exceptions will be ignored if path does not exist. Default: false.
  - maxRetries <integer> If an EBUSY, EMFILE, ENFILE, ENOTEMPTY, or EPERM error is encountered, Node.js will retry the operation with a linear backoff wait of retryDelay milliseconds longer on each try. This option represents the number of retries. This option is ignored if the recursive option is not true. Default: 0.
  - recursive <boolean> If true, perform a recursive removal. In recursive mode operations are retried on failure. Default: false.
  - retryDelay <integer> The amount of time in milliseconds to wait between retries. This option is ignored if the recursive option is not true. Default: 100.
- callback <Function>
  - err <Error>
Asynchronously removes files and directories (modeled on the standard POSIX rm utility). No arguments other than a possible exception are given to the completion callback.
fs.stat(path[, options], callback)#
History
| Version | Changes |
|---|---|
| v18.0.0 | Passing an invalid callback to the |
| v10.5.0 | Accepts an additional |
| v10.0.0 | The |
| v7.6.0 | The |
| v7.0.0 | The |
| v0.0.2 | Added in: v0.0.2 |
- path <string> | <Buffer> | <URL>
- options <Object>
  - bigint <boolean> Whether the numeric values in the returned <fs.Stats> object should be bigint. Default: false.
- callback <Function>
  - err <Error>
  - stats <fs.Stats>
Asynchronous stat(2). The callback gets two arguments (err, stats) where stats is an <fs.Stats> object.

In case of an error, the err.code will be one of Common System Errors.

fs.stat() follows symbolic links. Use fs.lstat() to look at the links themselves.

Using fs.stat() to check for the existence of a file before calling fs.open(), fs.readFile(), or fs.writeFile() is not recommended. Instead, user code should open/read/write the file directly and handle the error raised if the file is not available.

To check if a file exists without manipulating it afterwards, fs.access() is recommended.
For example, given the following directory structure:

- txtDir
  - file.txt
- app.js

The next program will check for the stats of the given paths:

```js
import { stat } from 'node:fs';

const pathsToCheck = ['./txtDir', './txtDir/file.txt'];

for (let i = 0; i < pathsToCheck.length; i++) {
  stat(pathsToCheck[i], (err, stats) => {
    console.log(stats.isDirectory());
    console.log(stats);
  });
}
```

The resulting output will resemble:

```
true
Stats {
  dev: 16777220,
  mode: 16877,
  nlink: 3,
  uid: 501,
  gid: 20,
  rdev: 0,
  blksize: 4096,
  ino: 14214262,
  size: 96,
  blocks: 0,
  atimeMs: 1561174653071.963,
  mtimeMs: 1561174614583.3518,
  ctimeMs: 1561174626623.5366,
  birthtimeMs: 1561174126937.2893,
  atime: 2019-06-22T03:37:33.072Z,
  mtime: 2019-06-22T03:36:54.583Z,
  ctime: 2019-06-22T03:37:06.624Z,
  birthtime: 2019-06-22T03:28:46.937Z
}
false
Stats {
  dev: 16777220,
  mode: 33188,
  nlink: 1,
  uid: 501,
  gid: 20,
  rdev: 0,
  blksize: 4096,
  ino: 14214074,
  size: 8,
  blocks: 8,
  atimeMs: 1561174616618.8555,
  mtimeMs: 1561174614584,
  ctimeMs: 1561174614583.8145,
  birthtimeMs: 1561174007710.7478,
  atime: 2019-06-22T03:36:56.619Z,
  mtime: 2019-06-22T03:36:54.584Z,
  ctime: 2019-06-22T03:36:54.584Z,
  birthtime: 2019-06-22T03:26:47.711Z
}
```

fs.statfs(path[, options], callback)#
- path <string> | <Buffer> | <URL>
- options <Object>
  - bigint <boolean> Whether the numeric values in the returned <fs.StatFs> object should be bigint. Default: false.
- callback <Function>
  - err <Error>
  - stats <fs.StatFs>

Asynchronous statfs(2). Returns information about the mounted file system which contains path. The callback gets two arguments (err, stats) where stats is an <fs.StatFs> object.

In case of an error, the err.code will be one of Common System Errors.
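A short sketch querying the file system that contains the root path (field values are platform-dependent):

```js
import { statfs } from 'node:fs';

// Report a few fields of the file system containing '/'.
statfs('/', (err, stats) => {
  if (err) throw err;
  console.log(`block size: ${stats.bsize}`);
  console.log(`free blocks: ${stats.bfree}`);
});
```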
fs.symlink(target, path[, type], callback)#
History
| Version | Changes |
|---|---|
| v18.0.0 | Passing an invalid callback to the |
| v12.0.0 | If the |
| v7.6.0 | The |
| v0.1.31 | Added in: v0.1.31 |
- target <string> | <Buffer> | <URL>
- path <string> | <Buffer> | <URL>
- type <string> | <null> Default: null
- callback <Function>
  - err <Error>

Creates the link called path pointing to target. No arguments other than a possible exception are given to the completion callback.

See the POSIX symlink(2) documentation for more details.

The type argument is only available on Windows and ignored on other platforms. It can be set to 'dir', 'file', or 'junction'. If the type argument is null, Node.js will autodetect the target type and use 'file' or 'dir'. If the target does not exist, 'file' will be used. Windows junction points require the destination path to be absolute. When using 'junction', the target argument will automatically be normalized to an absolute path. Junction points on NTFS volumes can only point to directories.

Relative targets are relative to the link's parent directory.

```js
import { symlink } from 'node:fs';

symlink('./mew', './mewtwo', callback);
```

The above example creates a symbolic link mewtwo which points to mew in the same directory:

```console
$ tree .
.
├── mew
└── mewtwo -> ./mew
```

fs.truncate(path[, len], callback)#
History
| Version | Changes |
|---|---|
| v18.0.0 | Passing an invalid callback to the |
| v16.0.0 | The error returned may be an |
| v10.0.0 | The |
| v7.0.0 | The |
| v0.8.6 | Added in: v0.8.6 |
- path <string> | <Buffer> | <URL>
- len <integer> Default: 0
- callback <Function>

Truncates the file. No arguments other than a possible exception are given to the completion callback. A file descriptor can also be passed as the first argument. In this case, fs.ftruncate() is called.

```js
import { truncate } from 'node:fs';
// Assuming that 'path/file.txt' is a regular file.
truncate('path/file.txt', (err) => {
  if (err) throw err;
  console.log('path/file.txt was truncated');
});
```

```js
const { truncate } = require('node:fs');
// Assuming that 'path/file.txt' is a regular file.
truncate('path/file.txt', (err) => {
  if (err) throw err;
  console.log('path/file.txt was truncated');
});
```

Passing a file descriptor is deprecated and may result in an error being thrown in the future.

See the POSIX truncate(2) documentation for more details.
fs.unlink(path, callback)#
History
| Version | Changes |
|---|---|
| v18.0.0 | Passing an invalid callback to the |
| v10.0.0 | The |
| v7.6.0 | The |
| v7.0.0 | The |
| v0.0.2 | Added in: v0.0.2 |
- path <string> | <Buffer> | <URL>
- callback <Function>
  - err <Error>

Asynchronously removes a file or symbolic link. No arguments other than a possible exception are given to the completion callback.

```js
import { unlink } from 'node:fs';
// Assuming that 'path/file.txt' is a regular file.
unlink('path/file.txt', (err) => {
  if (err) throw err;
  console.log('path/file.txt was deleted');
});
```

fs.unlink() will not work on a directory, empty or otherwise. To remove a directory, use fs.rmdir().

See the POSIX unlink(2) documentation for more details.
fs.unwatchFile(filename[, listener])#
- filename <string> | <Buffer> | <URL>
- listener <Function> Optional, a listener previously attached using fs.watchFile()

Stop watching for changes on filename. If listener is specified, only that particular listener is removed. Otherwise, all listeners are removed, effectively stopping watching of filename.
Calling fs.unwatchFile() with a filename that is not being watched is a no-op, not an error.
Using fs.watch() is more efficient than fs.watchFile() and fs.unwatchFile(). fs.watch() should be used instead of fs.watchFile() and fs.unwatchFile() when possible.
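A sketch of removing a single named listener (the filename is illustrative; fs.watchFile() works even if the file does not exist yet):

```js
import { watchFile, unwatchFile } from 'node:fs';

// Attach a named listener so it can be removed individually later.
function onChange(curr, prev) {
  console.log(`mtime changed: ${prev.mtime} -> ${curr.mtime}`);
}
watchFile('message.text', { interval: 500 }, onChange);

// Remove only this listener; other listeners would keep watching.
unwatchFile('message.text', onChange);

// Unwatching a file that is not being watched is a no-op.
unwatchFile('message.text', onChange);
```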
fs.utimes(path, atime, mtime, callback)#
History
| Version | Changes |
|---|---|
| v18.0.0 | Passing an invalid callback to the |
| v10.0.0 | The |
| v8.0.0 | |
| v7.6.0 | The |
| v7.0.0 | The |
| v4.1.0 | Numeric strings, |
| v0.4.2 | Added in: v0.4.2 |
- path <string> | <Buffer> | <URL>
- atime <number> | <string> | <Date>
- mtime <number> | <string> | <Date>
- callback <Function>
  - err <Error>

Change the file system timestamps of the object referenced by path.

The atime and mtime arguments follow these rules:

- Values can be either numbers representing Unix epoch time in seconds, Dates, or a numeric string like '123456789.0'.
- If the value can not be converted to a number, or is NaN, Infinity, or -Infinity, an Error will be thrown.
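These rules can be exercised with Date values (the file path is illustrative):

```js
import { writeFileSync, statSync, utimes } from 'node:fs';
import { join } from 'node:path';
import { tmpdir } from 'node:os';

const file = join(tmpdir(), 'utimes-example.txt');
writeFileSync(file, 'hello');

// Date objects are accepted; numbers (epoch seconds) and numeric
// strings like '123456789.0' would work as well.
const when = new Date('2020-01-01T00:00:00Z');
utimes(file, when, when, (err) => {
  if (err) throw err;
  console.log(statSync(file).mtime.toISOString());
});
```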
fs.watch(filename[, options][, listener])#
History
| Version | Changes |
|---|---|
| v19.1.0 | Added recursive support for Linux, AIX and IBMi. |
| v15.9.0, v14.17.0 | Added support for closing the watcher with an AbortSignal. |
| v7.6.0 | The |
| v7.0.0 | The passed |
| v0.5.10 | Added in: v0.5.10 |
- filename <string> | <Buffer> | <URL>
- options <string> | <Object>
  - persistent <boolean> Indicates whether the process should continue to run as long as files are being watched. Default: true.
  - recursive <boolean> Indicates whether all subdirectories should be watched, or only the current directory. This applies when a directory is specified, and only on supported platforms (see caveats). Default: false.
  - encoding <string> Specifies the character encoding to be used for the filename passed to the listener. Default: 'utf8'.
  - signal <AbortSignal> Allows closing the watcher with an AbortSignal.
  - ignore <string> | <RegExp> | <Function> | <Array> Pattern(s) to ignore. Strings are glob patterns (using minimatch), RegExp patterns are tested against the filename, and functions receive the filename and return true to ignore. Default: undefined.
- listener <Function> | <undefined> Default: undefined
- Returns: <fs.FSWatcher>

Watch for changes on filename, where filename is either a file or a directory.

The second argument is optional. If options is provided as a string, it specifies the encoding. Otherwise options should be passed as an object.

The listener callback gets two arguments (eventType, filename). eventType is either 'rename' or 'change', and filename is the name of the file which triggered the event.

On most platforms, 'rename' is emitted whenever a filename appears or disappears in the directory.

The listener callback is attached to the 'change' event fired by <fs.FSWatcher>, but it is not the same thing as the 'change' value of eventType.

If a signal is passed, aborting the corresponding AbortController will close the returned <fs.FSWatcher>.
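A sketch of closing a watcher through an AbortSignal rather than calling watcher.close() directly (the watched directory here is just the system temporary directory):

```js
import { watch } from 'node:fs';
import { tmpdir } from 'node:os';

const controller = new AbortController();

// The watcher is closed automatically when the controller aborts.
const watcher = watch(tmpdir(), { signal: controller.signal },
                      (eventType, filename) => {
  console.log(`${eventType}: ${filename}`);
});

// Stop watching after one second.
setTimeout(() => controller.abort(), 1000);
```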
Caveats#
The fs.watch API is not 100% consistent across platforms, and is unavailable in some situations.

On Windows, no events will be emitted if the watched directory is moved or renamed. An EPERM error is reported when the watched directory is deleted.

The fs.watch API does not provide any protection with respect to malicious actions on the file system. For example, on Windows it is implemented by monitoring changes in a directory versus specific files. This allows substitution of a file and fs reporting changes on the new file with the same filename.
Availability#
This feature depends on the underlying operating system providing a way to be notified of file system changes.

- On Linux systems, this uses inotify(7).
- On BSD systems, this uses kqueue(2).
- On macOS, this uses kqueue(2) for files and FSEvents for directories.
- On SunOS systems (including Solaris and SmartOS), this uses event ports.
- On Windows systems, this feature depends on ReadDirectoryChangesW.
- On AIX systems, this feature depends on AHAFS, which must be enabled.
- On IBM i systems, this feature is not supported.

If the underlying functionality is not available for some reason, then fs.watch() will not be able to function and may throw an exception. For example, watching files or directories can be unreliable, and in some cases impossible, on network file systems (NFS, SMB, etc.) or host file systems when using virtualization software such as Vagrant or Docker.

It is still possible to use fs.watchFile(), which uses stat polling, but this method is slower and less reliable.
Inodes#
On Linux and macOS systems, fs.watch() resolves the path to an inode and watches the inode. If the watched path is deleted and recreated, it is assigned a new inode. The watch will emit an event for the delete but will continue watching the original inode. Events for the new inode will not be emitted. This is expected behavior.

AIX files retain the same inode for the lifetime of a file. Saving and closing a watched file on AIX will result in two notifications (one for adding new content, and one for truncation).
Filename argument#
Providing the filename argument in the callback is only supported on Linux, macOS, Windows, and AIX. Even on supported platforms, filename is not always guaranteed to be provided. Therefore, don't assume that the filename argument is always provided in the callback, and have some fallback logic if it is null.

```js
import { watch } from 'node:fs';

watch('somedir', (eventType, filename) => {
  console.log(`event type is: ${eventType}`);
  if (filename) {
    console.log(`filename provided: ${filename}`);
  } else {
    console.log('filename not provided');
  }
});
```

fs.watchFile(filename[, options], listener)#
History
| Version | Changes |
|---|---|
| v10.5.0 | The |
| v7.6.0 | The |
| v0.1.31 | Added in: v0.1.31 |
- filename <string> | <Buffer> | <URL>
- options <Object>
- listener <Function>
  - current <fs.Stats>
  - previous <fs.Stats>
- Returns: <fs.StatWatcher>

Watch for changes on filename. The callback listener will be called each time the file is accessed.

The options argument may be omitted. If provided, it should be an object. The options object may contain a boolean named persistent that indicates whether the process should continue to run as long as files are being watched. The options object may specify an interval property indicating how often the target should be polled in milliseconds.

The listener gets two arguments: the current stat object and the previous stat object:

```js
import { watchFile } from 'node:fs';

watchFile('message.text', (curr, prev) => {
  console.log(`the current mtime is: ${curr.mtime}`);
  console.log(`the previous mtime was: ${prev.mtime}`);
});
```

These stat objects are instances of fs.Stat. If the bigint option is true, the numeric values in these objects are specified as BigInts.
To be notified when the file was modified, not just accessed, it is necessary to compare curr.mtimeMs and prev.mtimeMs.

When an fs.watchFile operation results in an ENOENT error, it will invoke the listener once, with all the fields zeroed (or, for dates, the Unix Epoch). If the file is created later on, the listener will be called again, with the latest stat objects. This is a change in functionality since v0.10.

Using fs.watch() is more efficient than fs.watchFile and fs.unwatchFile. fs.watch should be used instead of fs.watchFile and fs.unwatchFile when possible.

When a file being watched by fs.watchFile() disappears and reappears, then the contents of previous in the second callback event (the file's reappearance) will be the same as the contents of previous in the first callback event (its disappearance).
This happens when:
- the file is deleted, followed by a restore
- the file is renamed and then renamed a second time back to its original name
fs.write(fd, buffer, offset[, length[, position]], callback)#
History
| Version | Changes |
|---|---|
| v18.0.0 | Passing an invalid callback to the |
| v14.0.0 | The |
| v10.10.0 | The |
| v10.0.0 | The |
| v7.4.0 | The |
| v7.2.0 | The |
| v7.0.0 | The |
| v0.0.2 | Added in: v0.0.2 |
- fd <integer>
- buffer <Buffer> | <TypedArray> | <DataView>
- offset <integer> Default: 0
- length <integer> Default: buffer.byteLength - offset
- position <integer> | <null> Default: null
- callback <Function>
  - err <Error>
  - bytesWritten <integer>
  - buffer <Buffer> | <TypedArray> | <DataView>

Write buffer to the file specified by fd.

offset determines the part of the buffer to be written, and length is an integer specifying the number of bytes to write.

position refers to the offset from the beginning of the file where this data should be written. If typeof position !== 'number', the data will be written at the current position. See pwrite(2).

The callback will be given three arguments (err, bytesWritten, buffer) where bytesWritten specifies how many bytes were written from buffer.

If this method is invoked as its util.promisify()ed version, it returns a promise for an Object with bytesWritten and buffer properties.
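A sketch of the promisified form (the scratch file path is illustrative); the resolved object carries bytesWritten and buffer:

```js
import { open, write, closeSync } from 'node:fs';
import { Buffer } from 'node:buffer';
import { promisify } from 'node:util';
import { join } from 'node:path';
import { tmpdir } from 'node:os';

const openAsync = promisify(open);
const writeAsync = promisify(write);

// Open a scratch file and write a buffer through the promisified API.
const fd = await openAsync(join(tmpdir(), 'write-example.txt'), 'w');
const { bytesWritten } = await writeAsync(fd, Buffer.from('hello'));
console.log(bytesWritten); // 5
closeSync(fd);
```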
It is unsafe to use fs.write() multiple times on the same file without waiting for the callback. For this scenario, fs.createWriteStream() is recommended.

On Linux, positional writes don't work when the file is opened in append mode. The kernel ignores the position argument and always appends the data to the end of the file.
fs.write(fd, buffer[, options], callback)#
- fd <integer>
- buffer <Buffer> | <TypedArray> | <DataView>
- options <Object>
- callback <Function>
  - err <Error>
  - bytesWritten <integer>
  - buffer <Buffer> | <TypedArray> | <DataView>

Write buffer to the file specified by fd.

Similar to the above fs.write function, this version takes an optional options object. If no options object is specified, it will default with the above values.
fs.write(fd, string[, position[, encoding]], callback)#
History
| Version | Changes |
|---|---|
| v19.0.0 | Passing to the |
| v17.8.0 | Passing to the |
| v14.12.0 | The |
| v14.0.0 | The |
| v10.0.0 | The |
| v7.2.0 | The |
| v7.0.0 | The |
| v0.11.5 | Added in: v0.11.5 |
- fd <integer>
- string <string>
- position <integer> | <null> Default: null
- encoding <string> Default: 'utf8'
- callback <Function>

Write string to the file specified by fd. If string is not a string, an exception is thrown.

position refers to the offset from the beginning of the file where this data should be written. If typeof position !== 'number' the data will be written at the current position. See pwrite(2).

encoding is the expected string encoding.

The callback will receive the arguments (err, written, string) where written specifies how many bytes the passed string required to be written. Bytes written is not necessarily the same as string characters written. See Buffer.byteLength.
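The distinction between characters and bytes can be seen with a multi-byte character:

```js
import { Buffer } from 'node:buffer';

// '€' is a single character but occupies three bytes in UTF-8, so
// writing 'a€' with fs.write() reports 4 bytes written, not 2.
const s = 'a€';
console.log(s.length);                      // 2 characters
console.log(Buffer.byteLength(s, 'utf8'));  // 4 bytes
```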
It is unsafe to use fs.write() multiple times on the same file without waiting for the callback. For this scenario, fs.createWriteStream() is recommended.

On Linux, positional writes don't work when the file is opened in append mode. The kernel ignores the position argument and always appends the data to the end of the file.

On Windows, if the file descriptor is connected to the console (e.g. fd == 1 or stdout) a string containing non-ASCII characters will not be rendered properly by default, regardless of the encoding used. It is possible to configure the console to render UTF-8 properly by changing the active codepage with the chcp 65001 command. See the chcp docs for more details.
fs.writeFile(file, data[, options], callback)#
History
| Version | Changes |
|---|---|
| v21.0.0, v20.10.0 | The |
| v19.0.0 | Passing to the |
| v18.0.0 | Passing an invalid callback to the |
| v17.8.0 | Passing to the |
| v16.0.0 | The error returned may be an |
| v15.2.0, v14.17.0 | The options argument may include an AbortSignal to abort an ongoing writeFile request. |
| v14.12.0 | The |
| v14.0.0 | The |
| v10.10.0 | The |
| v10.0.0 | The |
| v7.4.0 | The |
| v7.0.0 | The |
| v5.0.0 | The |
| v0.1.29 | Added in: v0.1.29 |
- file <string> | <Buffer> | <URL> | <integer> filename or file descriptor
- data <string> | <Buffer> | <TypedArray> | <DataView>
- options <Object> | <string>
  - encoding <string> | <null> Default: 'utf8'
  - mode <integer> Default: 0o666
  - flag <string> See support of file system flags. Default: 'w'.
  - flush <boolean> If all data is successfully written to the file, and flush is true, fs.fsync() is used to flush the data. Default: false.
  - signal <AbortSignal> allows aborting an in-progress writeFile
- callback <Function>

When file is a filename, asynchronously writes data to the file, replacing the file if it already exists. data can be a string or a buffer.

When file is a file descriptor, the behavior is similar to calling fs.write() directly (which is recommended). See the notes below on using a file descriptor.

The encoding option is ignored if data is a buffer.

The mode option only affects the newly created file. See fs.open() for more details.

```js
import { writeFile } from 'node:fs';
import { Buffer } from 'node:buffer';

const data = new Uint8Array(Buffer.from('Hello Node.js'));
writeFile('message.txt', data, (err) => {
  if (err) throw err;
  console.log('The file has been saved!');
});
```

If options is a string, then it specifies the encoding:

```js
import { writeFile } from 'node:fs';

writeFile('message.txt', 'Hello Node.js', 'utf8', callback);
```

It is unsafe to use fs.writeFile() multiple times on the same file without waiting for the callback. For this scenario, fs.createWriteStream() is recommended.
Similarly to fs.readFile, fs.writeFile is a convenience method that performs multiple write calls internally to write the buffer passed to it. For performance-sensitive code consider using fs.createWriteStream().

It is possible to use an <AbortSignal> to cancel an fs.writeFile(). Cancelation is "best effort", and some amount of data is likely still to be written.

```js
import { writeFile } from 'node:fs';
import { Buffer } from 'node:buffer';

const controller = new AbortController();
const { signal } = controller;
const data = new Uint8Array(Buffer.from('Hello Node.js'));
writeFile('message.txt', data, { signal }, (err) => {
  // When a request is aborted - the callback is called with an AbortError
});
// When the request should be aborted
controller.abort();
```

Aborting an ongoing request does not abort individual operating system requests but rather the internal buffering fs.writeFile performs.
Using fs.writeFile() with file descriptors#

When file is a file descriptor, the behavior is almost identical to directly calling fs.write() like:

```js
import { write } from 'node:fs';
import { Buffer } from 'node:buffer';

write(fd, Buffer.from(data, options.encoding), callback);
```

The difference from directly calling fs.write() is that under some unusual conditions, fs.write() might write only part of the buffer and need to be retried to write the remaining data, whereas fs.writeFile() retries until the data is entirely written (or an error occurs).

The implications of this are a common source of confusion. In the file descriptor case, the file is not replaced! The data is not necessarily written to the beginning of the file, and the file's original data may remain before and/or after the newly written data.

For example, if fs.writeFile() is called twice in a row, first to write the string 'Hello', then to write the string ', World', the file would contain 'Hello, World', and might contain some of the file's original data (depending on the size of the original file, and the position of the file descriptor). If a file name had been used instead of a descriptor, the file would be guaranteed to contain only ', World'.
fs.writev(fd, buffers[, position], callback)#
History
| Version | Changes |
|---|---|
| v18.0.0 | Passing an invalid callback to the |
| v12.9.0 | Added in: v12.9.0 |
- fd <integer>
- buffers <ArrayBufferView[]>
- position <integer> | <null> Default: null
- callback <Function>
  - err <Error>
  - bytesWritten <integer>
  - buffers <ArrayBufferView[]>

Write an array of ArrayBufferViews to the file specified by fd using writev().

position is the offset from the beginning of the file where this data should be written. If typeof position !== 'number', the data will be written at the current position.

The callback will be given three arguments: err, bytesWritten, and buffers. bytesWritten is how many bytes were written from buffers.

If this method is util.promisify()ed, it returns a promise for an Object with bytesWritten and buffers properties.

It is unsafe to use fs.writev() multiple times on the same file without waiting for the callback. For this scenario, use fs.createWriteStream().

On Linux, positional writes don't work when the file is opened in append mode. The kernel ignores the position argument and always appends the data to the end of the file.
Synchronous API#
The synchronous APIs perform all operations synchronously, blocking the event loop until the operation completes or fails.
fs.accessSync(path[, mode])#
History
| Version | Changes |
|---|---|
| v7.6.0 | The |
| v0.11.15 | Added in: v0.11.15 |
Synchronously tests a user's permissions for the file or directory specified by path. The mode argument is an optional integer that specifies the accessibility checks to be performed. mode should be either the value fs.constants.F_OK or a mask consisting of the bitwise OR of any of fs.constants.R_OK, fs.constants.W_OK, and fs.constants.X_OK (e.g. fs.constants.W_OK | fs.constants.R_OK). Check File access constants for possible values of mode.

If any of the accessibility checks fail, an Error will be thrown. Otherwise, the method will return undefined.

```js
import { accessSync, constants } from 'node:fs';

try {
  accessSync('etc/passwd', constants.R_OK | constants.W_OK);
  console.log('can read/write');
} catch (err) {
  console.error('no access!');
}
```

fs.appendFileSync(path, data[, options])#
History
| Version | Changes |
|---|---|
| v21.1.0, v20.10.0 | The |
| v7.0.0 | The passed |
| v5.0.0 | The |
| v0.6.7 | Added in: v0.6.7 |
- path <string> | <Buffer> | <URL> | <number> filename or file descriptor
- data <string> | <Buffer>
- options <Object> | <string>

Synchronously append data to a file, creating the file if it does not yet exist. data can be a string or a <Buffer>.

The mode option only affects the newly created file. See fs.open() for more details.

```js
import { appendFileSync } from 'node:fs';

try {
  appendFileSync('message.txt', 'data to append');
  console.log('The "data to append" was appended to file!');
} catch (err) {
  /* Handle the error */
}
```

If options is a string, then it specifies the encoding:

```js
import { appendFileSync } from 'node:fs';

appendFileSync('message.txt', 'data to append', 'utf8');
```

The path may be specified as a numeric file descriptor that has been opened for appending (using fs.open() or fs.openSync()). The file descriptor will not be closed automatically.

```js
import { openSync, closeSync, appendFileSync } from 'node:fs';

let fd;
try {
  fd = openSync('message.txt', 'a');
  appendFileSync(fd, 'data to append', 'utf8');
} catch (err) {
  /* Handle the error */
} finally {
  if (fd !== undefined) closeSync(fd);
}
```

fs.chmodSync(path, mode)#
History
| Version | Changes |
|---|---|
| v7.6.0 | The |
| v0.6.7 | Added in: v0.6.7 |
For detailed information, see the documentation of the asynchronous version of this API: fs.chmod().

See the POSIX chmod(2) documentation for more detail.
fs.chownSync(path, uid, gid)#
History
| Version | Changes |
|---|---|
| v7.6.0 | The |
| v0.1.97 | Added in: v0.1.97 |
Synchronously changes owner and group of a file. Returns undefined. This is the synchronous version of fs.chown().

See the POSIX chown(2) documentation for more detail.
fs.closeSync(fd)#
Closes the file descriptor. Returns undefined.

Calling fs.closeSync() on any file descriptor (fd) that is currently in use through any other fs operation may lead to undefined behavior.

See the POSIX close(2) documentation for more detail.
fs.copyFileSync(src, dest[, mode])#
History
| Version | Changes |
|---|---|
| v14.0.0 | Changed |
| v8.5.0 | Added in: v8.5.0 |
- src <string> | <Buffer> | <URL> source filename to copy
- dest <string> | <Buffer> | <URL> destination filename of the copy operation
- mode <integer> modifiers for copy operation. Default: 0.

Synchronously copies src to dest. By default, dest is overwritten if it already exists. Returns undefined. Node.js makes no guarantees about the atomicity of the copy operation. If an error occurs after the destination file has been opened for writing, Node.js will attempt to remove the destination.

mode is an optional integer that specifies the behavior of the copy operation. It is possible to create a mask consisting of the bitwise OR of two or more values (e.g. fs.constants.COPYFILE_EXCL | fs.constants.COPYFILE_FICLONE).

- fs.constants.COPYFILE_EXCL: The copy operation will fail if dest already exists.
- fs.constants.COPYFILE_FICLONE: The copy operation will attempt to create a copy-on-write reflink. If the platform does not support copy-on-write, then a fallback copy mechanism is used.
- fs.constants.COPYFILE_FICLONE_FORCE: The copy operation will attempt to create a copy-on-write reflink. If the platform does not support copy-on-write, then the operation will fail.

```js
import { copyFileSync, constants } from 'node:fs';

// destination.txt will be created or overwritten by default.
copyFileSync('source.txt', 'destination.txt');
console.log('source.txt was copied to destination.txt');

// By using COPYFILE_EXCL, the operation will fail if destination.txt exists.
copyFileSync('source.txt', 'destination.txt', constants.COPYFILE_EXCL);
```

fs.cpSync(src, dest[, options])#
History
| Version | Changes |
|---|---|
| v22.3.0 | This API is no longer experimental. |
| v20.1.0, v18.17.0 | Accept an additional |
| v17.6.0, v16.15.0 | Accepts an additional |
| v16.7.0 | Added in: v16.7.0 |
- src <string> | <URL> source path to copy.
- dest <string> | <URL> destination path to copy to.
- options <Object>
  - dereference <boolean> dereference symlinks. Default: false.
  - errorOnExist <boolean> when force is false, and the destination exists, throw an error. Default: false.
  - filter <Function> Function to filter copied files/directories. Return true to copy the item, false to ignore it. When ignoring a directory, all of its contents will be skipped as well. Default: undefined
  - force <boolean> overwrite existing file or directory. The copy operation will ignore errors if you set this to false and the destination exists. Use the errorOnExist option to change this behavior. Default: true.
  - mode <integer> modifiers for copy operation. Default: 0. See mode flag of fs.copyFileSync().
  - preserveTimestamps <boolean> When true timestamps from src will be preserved. Default: false.
  - recursive <boolean> copy directories recursively. Default: false
  - verbatimSymlinks <boolean> When true, path resolution for symlinks will be skipped. Default: false

Synchronously copies the entire directory structure from src to dest, including subdirectories and files.

When copying a directory to another directory, globs are not supported and behavior is similar to cp dir1/ dir2/.
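A sketch of a recursive copy (the tree layout and paths are illustrative):

```js
import { cpSync, mkdirSync, writeFileSync, readFileSync } from 'node:fs';
import { join } from 'node:path';
import { tmpdir } from 'node:os';

// Build a small source tree.
const src = join(tmpdir(), 'cp-src');
const dest = join(tmpdir(), 'cp-dest');
mkdirSync(join(src, 'nested'), { recursive: true });
writeFileSync(join(src, 'nested', 'a.txt'), 'hello');

// Copy the whole tree, like `cp -r src/ dest/`.
cpSync(src, dest, { recursive: true });
console.log(readFileSync(join(dest, 'nested', 'a.txt'), 'utf8')); // 'hello'
```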
fs.existsSync(path)#
History
| Version | Changes |
|---|---|
| v7.6.0 | The |
| v0.1.21 | Added in: v0.1.21 |
Returns true if the path exists, false otherwise.

For detailed information, see the documentation of the asynchronous version of this API: fs.exists().

fs.exists() is deprecated, but fs.existsSync() is not. The callback parameter to fs.exists() accepts parameters that are inconsistent with other Node.js callbacks. fs.existsSync() does not use a callback.

```js
import { existsSync } from 'node:fs';

if (existsSync('/etc/passwd'))
  console.log('The path exists.');
```

fs.fchmodSync(fd, mode)#
Sets the permissions on the file. Returns undefined.

See the POSIX fchmod(2) documentation for more detail.
fs.fchownSync(fd, uid, gid)#
- fd <integer>
- uid <integer> The file's new owner's user id.
- gid <integer> The file's new group's group id.

Sets the owner of the file. Returns undefined.

See the POSIX fchown(2) documentation for more detail.
fs.fdatasyncSync(fd)#
Forces all currently queued I/O operations associated with the file to the operating system's synchronized I/O completion state. Refer to the POSIX fdatasync(2) documentation for details. Returns undefined.
fs.fstatSync(fd[, options])#
History
| Version | Changes |
|---|---|
| v10.5.0 | Accepts an additional |
| v0.1.95 | Added in: v0.1.95 |
- fd <integer>
- options <Object>
  - bigint <boolean> Whether the numeric values in the returned <fs.Stats> object should be bigint. Default: false.
- Returns: <fs.Stats>

Retrieves the <fs.Stats> for the file descriptor.

See the POSIX fstat(2) documentation for more detail.
fs.fsyncSync(fd)#
Request that all data for the open file descriptor is flushed to the storage device. The specific implementation is operating system and device specific. Refer to the POSIX fsync(2) documentation for more detail. Returns undefined.
fs.ftruncateSync(fd[, len])#
Truncates the file descriptor. Returns undefined.

For detailed information, see the documentation of the asynchronous version of this API: fs.ftruncate().
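A sketch using a scratch file (the path is illustrative):

```js
import {
  openSync, closeSync, ftruncateSync, writeFileSync, readFileSync,
} from 'node:fs';
import { join } from 'node:path';
import { tmpdir } from 'node:os';

const file = join(tmpdir(), 'ftruncate-example.txt');
writeFileSync(file, 'Node.js');

// Truncate the open file to its first four bytes.
const fd = openSync(file, 'r+');
ftruncateSync(fd, 4);
closeSync(fd);

console.log(readFileSync(file, 'utf8')); // 'Node'
```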
fs.futimesSync(fd, atime, mtime)#
History
| Version | Changes |
|---|---|
| v4.1.0 | Numeric strings, |
| v0.4.2 | Added in: v0.4.2 |
Synchronous version of fs.futimes(). Returns undefined.
fs.globSync(pattern[, options])#
History
| Version | Changes |
|---|---|
| v24.1.0, v22.17.0 | Add support for |
| v24.0.0, v22.17.0 | Marking the API stable. |
| v23.7.0, v22.14.0 | Add support for |
| v22.2.0 | Add support for |
| v22.0.0 | Added in: v22.0.0 |
- pattern <string> | <string[]>
- options <Object>
  - cwd <string> | <URL> current working directory. Default: process.cwd()
  - exclude <Function> | <string[]> Function to filter out files/directories or a list of glob patterns to be excluded. If a function is provided, return true to exclude the item, false to include it. Default: undefined.
  - withFileTypes <boolean> true if the glob should return paths as Dirents, false otherwise. Default: false.
- Returns: <string[]> paths of files that match the pattern.

```js
import { globSync } from 'node:fs';

console.log(globSync('**/*.js'));
```

```js
const { globSync } = require('node:fs');

console.log(globSync('**/*.js'));
```
fs.lchmodSync(path, mode)#
Changes the permissions on a symbolic link. Returns undefined.

This method is only implemented on macOS.

See the POSIX lchmod(2) documentation for more detail.
fs.lchownSync(path, uid, gid)#
History
| Version | Changes |
|---|---|
| v10.6.0 | This API is no longer deprecated. |
| v0.4.7 | Documentation-only deprecation. |
- path <string> | <Buffer> | <URL>
- uid <integer> The file's new owner's user id.
- gid <integer> The file's new group's group id.

Set the owner for the path. Returns undefined.

See the POSIX lchown(2) documentation for more details.
fs.lutimesSync(path, atime, mtime)#
Change the file system timestamps of the symbolic link referenced by path. Returns undefined, or throws an exception when parameters are incorrect or the operation fails. This is the synchronous version of fs.lutimes().
fs.linkSync(existingPath, newPath)#
History
| Version | Changes |
|---|---|
| v7.6.0 | The |
| v0.1.31 | Added in: v0.1.31 |
Creates a new link from the existingPath to the newPath. See the POSIX link(2) documentation for more detail. Returns undefined.
fs.lstatSync(path[, options])#
History
| Version | Changes |
|---|---|
| v15.3.0, v14.17.0 | Accepts a |
| v10.5.0 | Accepts an additional |
| v7.6.0 | The |
| v0.1.30 | Added in: v0.1.30 |
- path <string> | <Buffer> | <URL>
- options <Object>
  - bigint <boolean> Whether the numeric values in the returned <fs.Stats> object should be bigint. Default: false.
  - throwIfNoEntry <boolean> Whether an exception will be thrown if no file system entry exists, rather than returning undefined. Default: true.
- Returns: <fs.Stats>

Retrieves the <fs.Stats> for the symbolic link referred to by path.

See the POSIX lstat(2) documentation for more details.
fs.mkdirSync(path[, options])#
History
| Version | Changes |
|---|---|
| v13.11.0, v12.17.0 | In |
| v10.12.0 | The second argument can now be an |
| v7.6.0 | The |
| v0.1.21 | Added in: v0.1.21 |
Synchronously creates a directory. Returns undefined, or if recursive is true, the first directory path created. This is the synchronous version of fs.mkdir().

See the POSIX mkdir(2) documentation for more details.
fs.mkdtempSync(prefix[, options])#
History
| Version | Changes |
|---|---|
| v20.6.0, v18.19.0 | The |
| v16.5.0, v14.18.0 | The |
| v5.10.0 | Added in: v5.10.0 |
- prefix <string> | <Buffer> | <URL>
- options <string> | <Object>
  - encoding <string> Default: 'utf8'
- Returns: <string>

Returns the created directory path.

For detailed information, see the documentation of the asynchronous version of this API: fs.mkdtemp().

The optional options argument can be a string specifying an encoding, or an object with an encoding property specifying the character encoding to use.
fs.mkdtempDisposableSync(prefix[, options])#
- prefix <string> | <Buffer> | <URL>
- options <string> | <Object>
  - encoding <string> Default: 'utf8'
- Returns: <Object> A disposable object:
  - path <string> The path of the created directory.
  - remove <Function> A function which removes the created directory.
  - [Symbol.dispose] <Function> The same as remove.

Returns a disposable object whose path property holds the created directory path. When the object is disposed, the directory and its contents will be removed if it still exists. If the directory cannot be deleted, disposal will throw an error. The object has a remove() method which will perform the same task.

For detailed information, see the documentation of fs.mkdtemp().

There is no callback-based version of this API because it is designed for use with the using syntax.

The optional options argument can be a string specifying an encoding, or an object with an encoding property specifying the character encoding to use.
fs.opendirSync(path[, options])#
History
| Version | Changes |
|---|---|
| v20.1.0, v18.17.0 | Added |
| v13.1.0, v12.16.0 | The |
| v12.12.0 | Added in: v12.12.0 |
Synchronously open a directory. See opendir(3).

Creates an <fs.Dir>, which contains all further functions for reading from and cleaning up the directory.

The encoding option sets the encoding for the path while opening the directory and subsequent read operations.
fs.openSync(path[, flags[, mode]])#
History
| Version | Changes |
|---|---|
| v11.1.0 | The |
| v9.9.0 | The |
| v7.6.0 | The |
| v0.1.21 | Added in: v0.1.21 |
- path <string> | <Buffer> | <URL>
- flags <string> | <number> Default: 'r'. See support of file system flags.
- mode <string> | <integer> Default: 0o666
- Returns: <number>

Returns an integer representing the file descriptor.

For detailed information, see the documentation of the asynchronous version of this API: fs.open().
fs.readdirSync(path[, options])#
History
| Version | Changes |
|---|---|
| v20.1.0, v18.17.0 | Added |
| v10.10.0 | New option |
| v7.6.0 | The |
| v0.1.21 | Added in: v0.1.21 |
- path <string> | <Buffer> | <URL>
- options <string> | <Object>
- Returns: <string[]> | <Buffer[]> | <fs.Dirent[]>

Reads the contents of the directory.

See the POSIX readdir(3) documentation for more details.

The optional options argument can be a string specifying an encoding, or an object with an encoding property specifying the character encoding to use for the filenames returned. If the encoding is set to 'buffer', the filenames returned will be passed as <Buffer> objects.

If options.withFileTypes is set to true, the result will contain <fs.Dirent> objects.
fs.readFileSync(path[, options])#
History
| Version | Changes |
|---|---|
| v7.6.0 | The |
| v5.0.0 | The |
| v0.1.8 | Added in: v0.1.8 |
- path <string> | <Buffer> | <URL> | <integer> filename or file descriptor
- options <Object> | <string>
  - encoding <string> | <null> Default: null
  - flag <string> See support of file system flags. Default: 'r'.
- Returns: <string> | <Buffer>

Returns the contents of the path.

For detailed information, see the documentation of the asynchronous version of this API: fs.readFile().

If the encoding option is specified then this function returns a string. Otherwise it returns a buffer.

Similar to fs.readFile(), when the path is a directory, the behavior of fs.readFileSync() is platform-specific.

```js
import { readFileSync } from 'node:fs';

// macOS, Linux, and Windows
readFileSync('<directory>');
// => [Error: EISDIR: illegal operation on a directory, read <directory>]

// FreeBSD
readFileSync('<directory>');
// => <data>
```

fs.readlinkSync(path[, options])#
History
| Version | Changes |
|---|---|
| v7.6.0 | The |
| v0.1.31 | Added in: v0.1.31 |
- path <string> | <Buffer> | <URL>
- options <string> | <Object>
  - encoding <string> Default: 'utf8'
- Returns: <string> | <Buffer>

Returns the symbolic link's string value.

See the POSIX readlink(2) documentation for more details.

The optional options argument can be a string specifying an encoding, or an object with an encoding property specifying the character encoding to use for the link path returned. If the encoding is set to 'buffer', the link path returned will be passed as a <Buffer> object.
fs.readSync(fd, buffer, offset, length[, position])#
History
| Version | Changes |
|---|---|
| v10.10.0 | The |
| v6.0.0 | The |
| v0.1.21 | Added in: v0.1.21 |
- fd <integer>
- buffer <Buffer> | <TypedArray> | <DataView>
- offset <integer>
- length <integer>
- position <integer> | <bigint> | <null> Default: null
- Returns: <number>

Returns the number of bytesRead.

For detailed information, see the documentation of the asynchronous version of this API: fs.read().
fs.readSync(fd, buffer[, options])#
History
| Version | Changes |
|---|---|
| v13.13.0, v12.17.0 | Options object can be passed in to make offset, length, and position optional. |
| v13.13.0, v12.17.0 | Added in: v13.13.0, v12.17.0 |
- fd <integer>
- buffer <Buffer> | <TypedArray> | <DataView>
- options <Object>
- Returns: <number>

Returns the number of bytesRead.

Similar to the above fs.readSync function, this version takes an optional options object. If no options object is specified, it will default to the values above.

For detailed information, see the documentation of the asynchronous version of this API: fs.read().
fs.readvSync(fd, buffers[, position])#
- fd <integer>
- buffers <ArrayBufferView[]>
- position <integer> | <null> Default: null
- Returns: <number> The number of bytes read.

For detailed information, see the documentation of the asynchronous version of this API: fs.readv().
fs.realpathSync(path[, options])#
History
| Version | Changes |
|---|---|
| v8.0.0 | Pipe/Socket resolve support was added. |
| v7.6.0 | The |
| v6.4.0 | Calling |
| v6.0.0 | The |
| v0.1.31 | Added in: v0.1.31 |
- path <string> | <Buffer> | <URL>
- options <string> | <Object>
  - encoding <string> Default: 'utf8'
- Returns: <string> | <Buffer>

Returns the resolved pathname.

For detailed information, see the documentation of the asynchronous version of this API: fs.realpath().
fs.realpathSync.native(path[, options])#
- path <string> | <Buffer> | <URL>
- options <string> | <Object>
  - encoding <string> Default: 'utf8'
- Returns: <string> | <Buffer>

Synchronous realpath(3).

Only paths that can be converted to UTF8 strings are supported.

The optional options argument can be a string specifying an encoding, or an object with an encoding property specifying the character encoding to use for the path returned. If the encoding is set to 'buffer', the path returned will be passed as a <Buffer> object.

On Linux, when Node.js is linked against musl libc, the procfs file system must be mounted on /proc in order for this function to work. Glibc does not have this restriction.
fs.renameSync(oldPath, newPath)#
History
| Version | Changes |
|---|---|
| v7.6.0 | The |
| v0.1.21 | Added in: v0.1.21 |
Renames the file from oldPath to newPath. Returns undefined.

See the POSIX rename(2) documentation for more details.
fs.rmdirSync(path[, options])#
History
| Version | Changes |
|---|---|
| v25.0.0 | Remove |
| v16.0.0 | Using |
| v16.0.0 | Using |
| v16.0.0 | The |
| v14.14.0 | The |
| v13.3.0, v12.16.0 | The |
| v12.10.0 | The |
| v7.6.0 | The |
| v0.1.21 | Added in: v0.1.21 |
- path <string> | <Buffer> | <URL>
- options <Object> There are currently no options exposed. There used to be options for recursive, maxBusyTries, and emfileWait but they were deprecated and removed. The options argument is still accepted for backwards compatibility but it is not used.

Synchronous rmdir(2). Returns undefined.

Using fs.rmdirSync() on a file (not a directory) results in an ENOENT error on Windows and an ENOTDIR error on POSIX.

To get a behavior similar to the rm -rf Unix command, use fs.rmSync() with options { recursive: true, force: true }.
fs.rmSync(path[, options])#
History
| Version | Changes |
|---|---|
| v17.3.0, v16.14.0 | The |
| v14.14.0 | Added in: v14.14.0 |
- path <string> | <Buffer> | <URL>
- options <Object>
  - force <boolean> When true, exceptions will be ignored if path does not exist. Default: false.
  - maxRetries <integer> If an EBUSY, EMFILE, ENFILE, ENOTEMPTY, or EPERM error is encountered, Node.js will retry the operation with a linear backoff wait of retryDelay milliseconds longer on each try. This option represents the number of retries. This option is ignored if the recursive option is not true. Default: 0.
  - recursive <boolean> If true, perform a recursive directory removal. In recursive mode operations are retried on failure. Default: false.
  - retryDelay <integer> The amount of time in milliseconds to wait between retries. This option is ignored if the recursive option is not true. Default: 100.

Synchronously removes files and directories (modeled on the standard POSIX rm utility). Returns undefined.
fs.statSync(path[, options])#
History
| Version | Changes |
|---|---|
| v15.3.0, v14.17.0 | Accepts a |
| v10.5.0 | Accepts an additional |
| v7.6.0 | The |
| v0.1.21 | Added in: v0.1.21 |
- path <string> | <Buffer> | <URL>
- options <Object>
  - bigint <boolean> Whether the numeric values in the returned <fs.Stats> object should be bigint. Default: false.
  - throwIfNoEntry <boolean> Whether an exception will be thrown if no file system entry exists, rather than returning undefined. Default: true.
- Returns: <fs.Stats>

Retrieves the <fs.Stats> for the path.
fs.statfsSync(path[, options])#
- path <string> | <Buffer> | <URL>
- options <Object>
  - bigint <boolean> Whether the numeric values in the returned <fs.StatFs> object should be bigint. Default: false.
- Returns: <fs.StatFs>

Synchronous statfs(2). Returns information about the mounted file system which contains path.

In case of an error, the err.code will be one of Common System Errors.
fs.symlinkSync(target, path[, type])#
History
| Version | Changes |
|---|---|
| v12.0.0 | If the |
| v7.6.0 | The |
| v0.1.31 | Added in: v0.1.31 |
- target <string> | <Buffer> | <URL>
- path <string> | <Buffer> | <URL>
- type <string> | <null> Default: null
- Returns: undefined

For detailed information, see the documentation of the asynchronous version of this API: fs.symlink().
fs.truncateSync(path[, len])#
Truncates the file. Returns undefined. A file descriptor can also be passed as the first argument. In this case, fs.ftruncateSync() is called.
Passing a file descriptor is deprecated and may result in an error being thrownin the future.
fs.unlinkSync(path)#
History
| Version | Changes |
|---|---|
| v7.6.0 | The |
| v0.1.21 | Added in: v0.1.21 |
Synchronous unlink(2). Returns undefined.
fs.utimesSync(path, atime, mtime)#
History
| Version | Changes |
|---|---|
| v8.0.0 | |
| v7.6.0 | The |
| v4.1.0 | Numeric strings, |
| v0.4.2 | Added in: v0.4.2 |
- path <string> | <Buffer> | <URL>
- atime <number> | <string> | <Date>
- mtime <number> | <string> | <Date>
- Returns: undefined

For detailed information, see the documentation of the asynchronous version of this API: fs.utimes().
fs.writeFileSync(file, data[, options])#
History
| Version | Changes |
|---|---|
| v21.0.0, v20.10.0 | The |
| v19.0.0 | Passing to the |
| v17.8.0 | Passing to the |
| v14.12.0 | The |
| v14.0.0 | The |
| v10.10.0 | The |
| v7.4.0 | The |
| v5.0.0 | The |
| v0.1.29 | Added in: v0.1.29 |
- file <string> | <Buffer> | <URL> | <integer> filename or file descriptor
- data <string> | <Buffer> | <TypedArray> | <DataView>
- options <Object> | <string>
- Returns: undefined

The mode option only affects the newly created file. See fs.open() for more details.

For detailed information, see the documentation of the asynchronous version of this API: fs.writeFile().
fs.writeSync(fd, buffer, offset[, length[, position]])#
History
| Version | Changes |
|---|---|
| v14.0.0 | The |
| v10.10.0 | The |
| v7.4.0 | The |
| v7.2.0 | The |
| v0.1.21 | Added in: v0.1.21 |
- fd <integer>
- buffer <Buffer> | <TypedArray> | <DataView>
- offset <integer> Default: 0
- length <integer> Default: buffer.byteLength - offset
- position <integer> | <null> Default: null
- Returns: <number> The number of bytes written.

For detailed information, see the documentation of the asynchronous version of this API: fs.write(fd, buffer...).
fs.writeSync(fd, buffer[, options])#
- fd <integer>
- buffer <Buffer> | <TypedArray> | <DataView>
- options <Object>
- Returns: <number> The number of bytes written.

For detailed information, see the documentation of the asynchronous version of this API: fs.write(fd, buffer...).
fs.writeSync(fd, string[, position[, encoding]])#
History
| Version | Changes |
|---|---|
| v14.0.0 | The |
| v7.2.0 | The |
| v0.11.5 | Added in: v0.11.5 |
- fd <integer>
- string <string>
- position <integer> | <null> Default: null
- encoding <string> Default: 'utf8'
- Returns: <number> The number of bytes written.

For detailed information, see the documentation of the asynchronous version of this API: fs.write(fd, string...).
fs.writevSync(fd, buffers[, position])#
- fd <integer>
- buffers <ArrayBufferView[]>
- position <integer> | <null> Default: null
- Returns: <number> The number of bytes written.

For detailed information, see the documentation of the asynchronous version of this API: fs.writev().
Common Objects#
The common objects are shared by all of the file system API variants (promise, callback, and synchronous).
Class: fs.Dir#

A class representing a directory stream.

Created by fs.opendir(), fs.opendirSync(), or fsPromises.opendir().

```js
import { opendir } from 'node:fs/promises';

try {
  const dir = await opendir('./');
  for await (const dirent of dir)
    console.log(dirent.name);
} catch (err) {
  console.error(err);
}
```

When using the async iterator, the <fs.Dir> object will be automatically closed after the iterator exits.
dir.close()#
- Returns: <Promise>

Asynchronously close the directory's underlying resource handle. Subsequent reads will result in errors.

A promise is returned that will be fulfilled after the resource has been closed.
dir.close(callback)#
History
| Version | Changes |
|---|---|
| v18.0.0 | Passing an invalid callback to the |
| v12.12.0 | Added in: v12.12.0 |
- callback <Function>
  - err <Error>

Asynchronously close the directory's underlying resource handle. Subsequent reads will result in errors.

The callback will be called after the resource handle has been closed.
dir.closeSync()#
Synchronously close the directory's underlying resource handle. Subsequent reads will result in errors.
dir.path#
- Type: <string>

The read-only path of this directory as was provided to fs.opendir(), fs.opendirSync(), or fsPromises.opendir().
dir.read()#
- Returns: <Promise> Fulfills with a <fs.Dirent> | <null>

Asynchronously read the next directory entry via readdir(3) as an <fs.Dirent>.

A promise is returned that will be fulfilled with an <fs.Dirent>, or null if there are no more directory entries to read.

Directory entries returned by this function are in no particular order as provided by the operating system's underlying directory mechanisms. Entries added or removed while iterating over the directory might not be included in the iteration results.
dir.read(callback)#
- callback <Function>
  - err <Error>
  - dirent <fs.Dirent> | <null>

Asynchronously read the next directory entry via readdir(3) as an <fs.Dirent>.

After the read is completed, the callback will be called with an <fs.Dirent>, or null if there are no more directory entries to read.

Directory entries returned by this function are in no particular order as provided by the operating system's underlying directory mechanisms. Entries added or removed while iterating over the directory might not be included in the iteration results.
dir.readSync()#
- Returns: <fs.Dirent> | <null>

Synchronously read the next directory entry as an <fs.Dirent>. See the POSIX readdir(3) documentation for more detail.

If there are no more directory entries to read, null will be returned.

Directory entries returned by this function are in no particular order as provided by the operating system's underlying directory mechanisms. Entries added or removed while iterating over the directory might not be included in the iteration results.
dir[Symbol.asyncIterator]()#
- Returns: <AsyncIterator> An AsyncIterator of <fs.Dirent>

Asynchronously iterates over the directory until all entries have been read. Refer to the POSIX readdir(3) documentation for more detail.

Entries returned by the async iterator are always an <fs.Dirent>. The null case from dir.read() is handled internally.

See <fs.Dir> for an example.

Directory entries returned by this iterator are in no particular order as provided by the operating system's underlying directory mechanisms. Entries added or removed while iterating over the directory might not be included in the iteration results.
dir[Symbol.asyncDispose]()#
History
| Version | Changes |
|---|---|
| v24.2.0 | No longer experimental. |
| v24.1.0, v22.1.0 | Added in: v24.1.0, v22.1.0 |
Calls dir.close() if the directory handle is open, and returns a promise that fulfills when disposal is complete.
dir[Symbol.dispose]()#
History
| Version | Changes |
|---|---|
| v24.2.0 | No longer experimental. |
| v24.1.0, v22.1.0 | Added in: v24.1.0, v22.1.0 |
Calls dir.closeSync() if the directory handle is open, and returns undefined.
Class: fs.Dirent#

A representation of a directory entry, which can be a file or a subdirectory within the directory, as returned by reading from an <fs.Dir>. The directory entry is a combination of the file name and file type pairs.

Additionally, when fs.readdir() or fs.readdirSync() is called with the withFileTypes option set to true, the resulting array is filled with <fs.Dirent> objects, rather than strings or <Buffer>s.
dirent.isBlockDevice()#

- Returns: <boolean>

Returns true if the <fs.Dirent> object describes a block device.

dirent.isCharacterDevice()#

- Returns: <boolean>

Returns true if the <fs.Dirent> object describes a character device.

dirent.isDirectory()#

- Returns: <boolean>

Returns true if the <fs.Dirent> object describes a file system directory.

dirent.isFIFO()#

- Returns: <boolean>

Returns true if the <fs.Dirent> object describes a first-in-first-out (FIFO) pipe.

dirent.isFile()#

- Returns: <boolean>

Returns true if the <fs.Dirent> object describes a regular file.

dirent.isSocket()#

- Returns: <boolean>

Returns true if the <fs.Dirent> object describes a socket.

dirent.isSymbolicLink()#

- Returns: <boolean>

Returns true if the <fs.Dirent> object describes a symbolic link.

dirent.name#

The file name that this <fs.Dirent> object refers to. The type of this value is determined by the options.encoding passed to fs.readdir() or fs.readdirSync().
dirent.parentPath#
History
| Version | Changes |
|---|---|
| v24.0.0, v22.17.0 | Marking the API stable. |
| v21.4.0, v20.12.0, v18.20.0 | Added in: v21.4.0, v20.12.0, v18.20.0 |
- Type: <string>

The path to the parent directory of the file this <fs.Dirent> object refers to.
Class: fs.FSWatcher#

- Extends <EventEmitter>

A successful call to fs.watch() method will return a new <fs.FSWatcher> object.

All <fs.FSWatcher> objects emit a 'change' event whenever a specific watched file is modified.
Event: 'change'#

- eventType <string> The type of change event that has occurred
- filename <string> | <Buffer> The filename that changed (if relevant/available)

Emitted when something changes in a watched directory or file. See more details in fs.watch().

The filename argument may not be provided depending on operating system support. If filename is provided, it will be provided as a <Buffer> if fs.watch() is called with its encoding option set to 'buffer', otherwise filename will be a UTF-8 string.

```js
import { watch } from 'node:fs';

// Example when handled through fs.watch() listener
watch('./tmp', { encoding: 'buffer' }, (eventType, filename) => {
  if (filename) {
    console.log(filename);
    // Prints: <Buffer ...>
  }
});
```

Event: 'close'#
Emitted when the watcher stops watching for changes. The closed <fs.FSWatcher> object is no longer usable in the event handler.
Event: 'error'#

- error <Error>

Emitted when an error occurs while watching the file. The errored <fs.FSWatcher> object is no longer usable in the event handler.
watcher.close()#
Stop watching for changes on the given <fs.FSWatcher>. Once stopped, the <fs.FSWatcher> object is no longer usable.
watcher.ref()#
- Returns: <fs.FSWatcher>

When called, requests that the Node.js event loop not exit so long as the <fs.FSWatcher> is active. Calling watcher.ref() multiple times will have no effect.

By default, all <fs.FSWatcher> objects are "ref'ed", making it normally unnecessary to call watcher.ref() unless watcher.unref() had been called previously.
watcher.unref()#
- Returns: <fs.FSWatcher>

When called, the active <fs.FSWatcher> object will not require the Node.js event loop to remain active. If there is no other activity keeping the event loop running, the process may exit before the <fs.FSWatcher> object's callback is invoked. Calling watcher.unref() multiple times will have no effect.
Class: fs.StatWatcher#

- Extends <EventEmitter>

A successful call to fs.watchFile() method will return a new <fs.StatWatcher> object.
watcher.ref()#
- Returns: <fs.StatWatcher>

When called, requests that the Node.js event loop not exit so long as the <fs.StatWatcher> is active. Calling watcher.ref() multiple times will have no effect.

By default, all <fs.StatWatcher> objects are "ref'ed", making it normally unnecessary to call watcher.ref() unless watcher.unref() had been called previously.
watcher.unref()#
- Returns: <fs.StatWatcher>

When called, the active <fs.StatWatcher> object will not require the Node.js event loop to remain active. If there is no other activity keeping the event loop running, the process may exit before the <fs.StatWatcher> object's callback is invoked. Calling watcher.unref() multiple times will have no effect.
Class: fs.ReadStream#

- Extends: <stream.Readable>

Instances of <fs.ReadStream> are created and returned using the fs.createReadStream() function.
Event: 'close'#

Emitted when the <fs.ReadStream>'s underlying file descriptor has been closed.
Event: 'open'#

- fd <integer> Integer file descriptor used by the <fs.ReadStream>.

Emitted when the <fs.ReadStream>'s file descriptor has been opened.
Event: 'ready'#

Emitted when the <fs.ReadStream> is ready to be used.

Fires immediately after 'open'.
readStream.path#
The path to the file the stream is reading from as specified in the first argument to fs.createReadStream(). If path is passed as a string, then readStream.path will be a string. If path is passed as a <Buffer>, then readStream.path will be a <Buffer>. If fd is specified, then readStream.path will be undefined.
Class: fs.Stats#
History
| Version | Changes |
|---|---|
| v22.0.0, v20.13.0 | Public constructor is deprecated. |
| v8.1.0 | Added times as numbers. |
| v0.1.21 | Added in: v0.1.21 |
A <fs.Stats> object provides information about a file.

Objects returned from fs.stat(), fs.lstat(), fs.fstat(), and their synchronous counterparts are of this type. If bigint in the options passed to those methods is true, the numeric values will be bigint instead of number, and the object will contain additional nanosecond-precision properties suffixed with Ns. Stat objects are not to be created directly using the new keyword.

```console
Stats {
  dev: 2114,
  ino: 48064969,
  mode: 33188,
  nlink: 1,
  uid: 85,
  gid: 100,
  rdev: 0,
  size: 527,
  blksize: 4096,
  blocks: 8,
  atimeMs: 1318289051000.1,
  mtimeMs: 1318289051000.1,
  ctimeMs: 1318289051000.1,
  birthtimeMs: 1318289051000.1,
  atime: Mon, 10 Oct 2011 23:24:11 GMT,
  mtime: Mon, 10 Oct 2011 23:24:11 GMT,
  ctime: Mon, 10 Oct 2011 23:24:11 GMT,
  birthtime: Mon, 10 Oct 2011 23:24:11 GMT }
```

bigint version:

```console
BigIntStats {
  dev: 2114n,
  ino: 48064969n,
  mode: 33188n,
  nlink: 1n,
  uid: 85n,
  gid: 100n,
  rdev: 0n,
  size: 527n,
  blksize: 4096n,
  blocks: 8n,
  atimeMs: 1318289051000n,
  mtimeMs: 1318289051000n,
  ctimeMs: 1318289051000n,
  birthtimeMs: 1318289051000n,
  atimeNs: 1318289051000000000n,
  mtimeNs: 1318289051000000000n,
  ctimeNs: 1318289051000000000n,
  birthtimeNs: 1318289051000000000n,
  atime: Mon, 10 Oct 2011 23:24:11 GMT,
  mtime: Mon, 10 Oct 2011 23:24:11 GMT,
  ctime: Mon, 10 Oct 2011 23:24:11 GMT,
  birthtime: Mon, 10 Oct 2011 23:24:11 GMT }
```

stats.isBlockDevice()#
- Returns: <boolean>

Returns true if the <fs.Stats> object describes a block device.

stats.isCharacterDevice()#

- Returns: <boolean>

Returns true if the <fs.Stats> object describes a character device.

stats.isDirectory()#

- Returns: <boolean>

Returns true if the <fs.Stats> object describes a file system directory.

If the <fs.Stats> object was obtained from calling fs.lstat() on a symbolic link which resolves to a directory, this method will return false. This is because fs.lstat() returns information about a symbolic link itself and not the path it resolves to.

stats.isFIFO()#

- Returns: <boolean>

Returns true if the <fs.Stats> object describes a first-in-first-out (FIFO) pipe.

stats.isFile()#

- Returns: <boolean>

Returns true if the <fs.Stats> object describes a regular file.

stats.isSocket()#

- Returns: <boolean>

Returns true if the <fs.Stats> object describes a socket.

stats.isSymbolicLink()#

- Returns: <boolean>

Returns true if the <fs.Stats> object describes a symbolic link.

This method is only valid when using fs.lstat().
stats.uid#
The numeric user identifier of the user that owns the file (POSIX).
stats.gid#
The numeric group identifier of the group that owns the file (POSIX).
stats.size#
The size of the file in bytes.
If the underlying file system does not support getting the size of the file, this will be 0.
stats.atimeMs#
The timestamp indicating the last time this file was accessed expressed in milliseconds since the POSIX Epoch.

stats.mtimeMs#

The timestamp indicating the last time this file was modified expressed in milliseconds since the POSIX Epoch.

stats.ctimeMs#

The timestamp indicating the last time the file status was changed expressed in milliseconds since the POSIX Epoch.

stats.birthtimeMs#

The timestamp indicating the creation time of this file expressed in milliseconds since the POSIX Epoch.

stats.atimeNs#

- Type: <bigint>

Only present when bigint: true is passed into the method that generates the object. The timestamp indicating the last time this file was accessed expressed in nanoseconds since the POSIX Epoch.

stats.mtimeNs#

- Type: <bigint>

Only present when bigint: true is passed into the method that generates the object. The timestamp indicating the last time this file was modified expressed in nanoseconds since the POSIX Epoch.

stats.ctimeNs#

- Type: <bigint>

Only present when bigint: true is passed into the method that generates the object. The timestamp indicating the last time the file status was changed expressed in nanoseconds since the POSIX Epoch.

stats.birthtimeNs#

- Type: <bigint>

Only present when bigint: true is passed into the method that generates the object. The timestamp indicating the creation time of this file expressed in nanoseconds since the POSIX Epoch.
stats.atime#

- Type: <Date>

The timestamp indicating the last time this file was accessed.

stats.mtime#

- Type: <Date>

The timestamp indicating the last time this file was modified.

stats.ctime#

- Type: <Date>

The timestamp indicating the last time the file status was changed.

stats.birthtime#

- Type: <Date>

The timestamp indicating the creation time of this file.
Stat time values#
The atimeMs, mtimeMs, ctimeMs, birthtimeMs properties are numeric values that hold the corresponding times in milliseconds. Their precision is platform specific. When bigint: true is passed into the method that generates the object, the properties will be bigints, otherwise they will be numbers.

The atimeNs, mtimeNs, ctimeNs, birthtimeNs properties are bigints that hold the corresponding times in nanoseconds. They are only present when bigint: true is passed into the method that generates the object. Their precision is platform specific.

atime, mtime, ctime, and birthtime are Date object alternate representations of the various times. The Date and number values are not connected. Assigning a new number value, or mutating the Date value, will not be reflected in the corresponding alternate representation.

The times in the stat object have the following semantics:

- atime "Access Time": Time when file data last accessed. Changed by the mknod(2), utimes(2), and read(2) system calls.
- mtime "Modified Time": Time when file data last modified. Changed by the mknod(2), utimes(2), and write(2) system calls.
- ctime "Change Time": Time when file status was last changed (inode data modification). Changed by the chmod(2), chown(2), link(2), mknod(2), rename(2), unlink(2), utimes(2), read(2), and write(2) system calls.
- birthtime "Birth Time": Time of file creation. Set once when the file is created. On file systems where birthtime is not available, this field may instead hold either the ctime or 1970-01-01T00:00Z (ie, Unix epoch timestamp 0). This value may be greater than atime or mtime in this case. On Darwin and other FreeBSD variants, also set if the atime is explicitly set to an earlier value than the current birthtime using the utimes(2) system call.

Prior to Node.js 0.12, the ctime held the birthtime on Windows systems. As of 0.12, ctime is not "creation time", and on Unix systems, it never was.
Class: fs.StatFs#

Provides information about a mounted file system.

Objects returned from fs.statfs() and its synchronous counterpart are of this type. If bigint in the options passed to those methods is true, the numeric values will be bigint instead of number.
```console
StatFs {
  type: 1397114950,
  bsize: 4096,
  blocks: 121938943,
  bfree: 61058895,
  bavail: 61058895,
  files: 999,
  ffree: 1000000
}
```

bigint version:

```console
StatFs {
  type: 1397114950n,
  bsize: 4096n,
  blocks: 121938943n,
  bfree: 61058895n,
  bavail: 61058895n,
  files: 999n,
  ffree: 1000000n
}
```

Class: fs.Utf8Stream#
An optimized UTF-8 stream writer that allows for flushing all the internal buffering on demand. It handles EAGAIN errors correctly, allowing for customization, for example, by dropping content if the disk is busy.
Event: 'close'#
The 'close' event is emitted when the stream is fully closed.

Event: 'drain'#

The 'drain' event is emitted when the internal buffer has drained sufficiently to allow continued writing.

Event: 'drop'#

The 'drop' event is emitted when the maximal length is reached and that data will not be written. The data that was dropped is passed as the first argument to the event handler.

Event: 'error'#

The 'error' event is emitted when an error occurs.

Event: 'finish'#

The 'finish' event is emitted when the stream has been ended and all data has been flushed to the underlying file.

Event: 'ready'#

The 'ready' event is emitted when the stream is ready to accept writes.

Event: 'write'#

The 'write' event is emitted when a write operation has completed. The number of bytes written is passed as the first argument to the event handler.
new fs.Utf8Stream([options])#
- options <Object>
  - append: <boolean> Appends writes to dest file instead of truncating it. Default: true.
  - contentMode: <string> Which type of data you can send to the write function, supported values are 'utf8' or 'buffer'. Default: 'utf8'.
  - dest: <string> A path to a file to be written to (mode controlled by the append option).
  - fd: <number> A file descriptor, something that is returned by fs.open() or fs.openSync().
  - fs: <Object> An object that has the same API as the fs module, useful for mocking, testing, or customizing the behavior of the stream.
  - fsync: <boolean> Perform a fs.fsyncSync() every time a write is completed.
  - maxLength: <number> The maximum length of the internal buffer. If a write operation would cause the buffer to exceed maxLength, the data written is dropped and a drop event is emitted with the dropped data.
  - maxWrite: <number> The maximum number of bytes that can be written. Default: 16384.
  - minLength: <number> The minimum length of the internal buffer that is required to be full before flushing.
  - mkdir: <boolean> Ensure directory for dest file exists when true. Default: false.
  - mode: <number> | <string> Specify the creating file mode (see fs.open()).
  - periodicFlush: <number> Calls flush every periodicFlush milliseconds.
  - retryEAGAIN <Function> A function that will be called when write(), writeSync(), or flushSync() encounters an EAGAIN or EBUSY error. If the return value is true the operation will be retried, otherwise it will bubble the error. The err is the error that caused this function to be called, writeBufferLen is the length of the buffer that was written, and remainingBufferLen is the length of the remaining buffer that the stream did not try to write.
  - sync: <boolean> Perform writes synchronously.
utf8Stream.contentMode#
- <string> The type of data that can be written to the stream. Supported values are 'utf8' or 'buffer'. Default: 'utf8'.
utf8Stream.destroy()#
Close the stream immediately, without flushing the internal buffer.
utf8Stream.end()#
Close the stream gracefully, flushing the internal buffer before closing.
utf8Stream.flush(callback)#
callback<Function>
Writes the current buffer to the file if a write was not in progress. Does nothing if minLength is zero or if the stream is already writing.
utf8Stream.flushSync()#
Flushes the buffered data synchronously. This is a costly operation.
utf8Stream.fsync#
- <boolean> Whether the stream is performing an fs.fsyncSync() after every write operation.
utf8Stream.maxLength#
- <number> The maximum length of the internal buffer. If a write operation would cause the buffer to exceed maxLength, the data written is dropped and a drop event is emitted with the dropped data.
utf8Stream.minLength#
- <number> The minimum length of the internal buffer that is required to be full before flushing.
utf8Stream.mkdir#
- <boolean> Whether the stream should ensure that the directory for the dest file exists. If true, it will create the directory if it does not exist. Default: false.
utf8Stream.periodicFlush#
- <number> The number of milliseconds between flushes. If set to 0, no periodic flushes will be performed.
utf8Stream.reopen(file)#
file: <string> | <Buffer> | <URL> A path to a file to be written to (mode controlled by the append option).
Reopen the file in place, useful for log rotation.
utf8Stream.write(data)#
When the options.contentMode is set to 'utf8' when the stream is created, the data argument must be a string. If the contentMode is set to 'buffer', the data argument must be a <Buffer>.
utf8Stream[Symbol.dispose]()#
Calls utf8Stream.destroy().
Class:fs.WriteStream#
- Extends: <stream.Writable>
Instances of <fs.WriteStream> are created and returned using the fs.createWriteStream() function.
Event:'close'#
Emitted when the<fs.WriteStream>'s underlying file descriptor has been closed.
Event:'open'#
fd <integer> Integer file descriptor used by the <fs.WriteStream>.
Emitted when the<fs.WriteStream>'s file is opened.
Event:'ready'#
Emitted when the<fs.WriteStream> is ready to be used.
Fires immediately after 'open'.
writeStream.bytesWritten#
The number of bytes written so far. Does not include data that is still queued for writing.
writeStream.close([callback])#
callback <Function>
  err <Error>
Closes writeStream. Optionally accepts a callback that will be executed once the writeStream is closed.
writeStream.path#
The path to the file the stream is writing to as specified in the first argument to fs.createWriteStream(). If path is passed as a string, then writeStream.path will be a string. If path is passed as a <Buffer>, then writeStream.path will be a <Buffer>.
fs.constants#
- Type: <Object>
Returns an object containing commonly used constants for file system operations.
FS constants#
The following constants are exported by fs.constants and fsPromises.constants.
Not every constant will be available on every operating system; this is especially important for Windows, where many of the POSIX-specific definitions are not available. For portable applications it is recommended to check for their presence before use.
To use more than one constant, use the bitwise OR | operator.
Example:
```js
import { open, constants } from 'node:fs';

const {
  O_RDWR,
  O_CREAT,
  O_EXCL,
} = constants;

open('/path/to/my/file', O_RDWR | O_CREAT | O_EXCL, (err, fd) => {
  // ...
});
```

File access constants#
The following constants are meant for use as the mode parameter passed to fsPromises.access(), fs.access(), and fs.accessSync().
| Constant | Description |
|---|---|
F_OK | Flag indicating that the file is visible to the calling process. This is useful for determining if a file exists, but says nothing about rwx permissions. Default if no mode is specified. |
R_OK | Flag indicating that the file can be read by the calling process. |
W_OK | Flag indicating that the file can be written by the calling process. |
X_OK | Flag indicating that the file can be executed by the calling process. This has no effect on Windows (will behave like fs.constants.F_OK). |
The definitions are also available on Windows.
File copy constants#
The following constants are meant for use with fs.copyFile().
| Constant | Description |
|---|---|
COPYFILE_EXCL | If present, the copy operation will fail with an error if the destination path already exists. |
COPYFILE_FICLONE | If present, the copy operation will attempt to create a copy-on-write reflink. If the underlying platform does not support copy-on-write, then a fallback copy mechanism is used. |
COPYFILE_FICLONE_FORCE | If present, the copy operation will attempt to create a copy-on-write reflink. If the underlying platform does not support copy-on-write, then the operation will fail with an error. |
The definitions are also available on Windows.
File open constants#
The following constants are meant for use with fs.open().
| Constant | Description |
|---|---|
O_RDONLY | Flag indicating to open a file for read-only access. |
O_WRONLY | Flag indicating to open a file for write-only access. |
O_RDWR | Flag indicating to open a file for read-write access. |
O_CREAT | Flag indicating to create the file if it does not already exist. |
O_EXCL | Flag indicating that opening a file should fail if the O_CREAT flag is set and the file already exists. |
O_NOCTTY | Flag indicating that if path identifies a terminal device, opening the path shall not cause that terminal to become the controlling terminal for the process (if the process does not already have one). |
O_TRUNC | Flag indicating that if the file exists and is a regular file, and the file is opened successfully for write access, its length shall be truncated to zero. |
O_APPEND | Flag indicating that data will be appended to the end of the file. |
O_DIRECTORY | Flag indicating that the open should fail if the path is not a directory. |
O_NOATIME | Flag indicating reading accesses to the file system will no longer result in an update to the atime information associated with the file. This flag is available on Linux operating systems only. |
O_NOFOLLOW | Flag indicating that the open should fail if the path is a symbolic link. |
O_SYNC | Flag indicating that the file is opened for synchronized I/O with write operations waiting for file integrity. |
O_DSYNC | Flag indicating that the file is opened for synchronized I/O with write operations waiting for data integrity. |
O_SYMLINK | Flag indicating to open the symbolic link itself rather than the resource it is pointing to. |
O_DIRECT | When set, an attempt will be made to minimize caching effects of file I/O. |
O_NONBLOCK | Flag indicating to open the file in nonblocking mode when possible. |
UV_FS_O_FILEMAP | When set, a memory file mapping is used to access the file. This flag is available on Windows operating systems only. On other operating systems, this flag is ignored. |
On Windows, only O_APPEND, O_CREAT, O_EXCL, O_RDONLY, O_RDWR, O_TRUNC, O_WRONLY, and UV_FS_O_FILEMAP are available.
File type constants#
The following constants are meant for use with the <fs.Stats> object's mode property for determining a file's type.
| Constant | Description |
|---|---|
S_IFMT | Bit mask used to extract the file type code. |
S_IFREG | File type constant for a regular file. |
S_IFDIR | File type constant for a directory. |
S_IFCHR | File type constant for a character-oriented device file. |
S_IFBLK | File type constant for a block-oriented device file. |
S_IFIFO | File type constant for a FIFO/pipe. |
S_IFLNK | File type constant for a symbolic link. |
S_IFSOCK | File type constant for a socket. |
On Windows, only S_IFCHR, S_IFDIR, S_IFLNK, S_IFMT, and S_IFREG are available.
File mode constants#
The following constants are meant for use with the <fs.Stats> object's mode property for determining the access permissions for a file.
| Constant | Description |
|---|---|
S_IRWXU | File mode indicating readable, writable, and executable by owner. |
S_IRUSR | File mode indicating readable by owner. |
S_IWUSR | File mode indicating writable by owner. |
S_IXUSR | File mode indicating executable by owner. |
S_IRWXG | File mode indicating readable, writable, and executable by group. |
S_IRGRP | File mode indicating readable by group. |
S_IWGRP | File mode indicating writable by group. |
S_IXGRP | File mode indicating executable by group. |
S_IRWXO | File mode indicating readable, writable, and executable by others. |
S_IROTH | File mode indicating readable by others. |
S_IWOTH | File mode indicating writable by others. |
S_IXOTH | File mode indicating executable by others. |
On Windows, only S_IRUSR and S_IWUSR are available.
Notes#
Ordering of callback and promise-based operations#
Because they are executed asynchronously by the underlying thread pool, there is no guaranteed ordering when using either the callback or promise-based methods.
For example, the following is prone to error because the fs.stat() operation might complete before the fs.rename() operation:
```js
const fs = require('node:fs');

fs.rename('/tmp/hello', '/tmp/world', (err) => {
  if (err) throw err;
  console.log('renamed complete');
});
fs.stat('/tmp/world', (err, stats) => {
  if (err) throw err;
  console.log(`stats: ${JSON.stringify(stats)}`);
});
```

It is important to correctly order the operations by awaiting the results of one before invoking the other:
```js
import { rename, stat } from 'node:fs/promises';

const oldPath = '/tmp/hello';
const newPath = '/tmp/world';

try {
  await rename(oldPath, newPath);
  const stats = await stat(newPath);
  console.log(`stats: ${JSON.stringify(stats)}`);
} catch (error) {
  console.error('there was an error:', error.message);
}
```

```js
const { rename, stat } = require('node:fs/promises');

(async function(oldPath, newPath) {
  try {
    await rename(oldPath, newPath);
    const stats = await stat(newPath);
    console.log(`stats: ${JSON.stringify(stats)}`);
  } catch (error) {
    console.error('there was an error:', error.message);
  }
})('/tmp/hello', '/tmp/world');
```
Or, when using the callback APIs, move the fs.stat() call into the callback of the fs.rename() operation:
```js
import { rename, stat } from 'node:fs';

rename('/tmp/hello', '/tmp/world', (err) => {
  if (err) throw err;
  stat('/tmp/world', (err, stats) => {
    if (err) throw err;
    console.log(`stats: ${JSON.stringify(stats)}`);
  });
});
```

```js
const { rename, stat } = require('node:fs');

rename('/tmp/hello', '/tmp/world', (err) => {
  if (err) throw err;
  stat('/tmp/world', (err, stats) => {
    if (err) throw err;
    console.log(`stats: ${JSON.stringify(stats)}`);
  });
});
```
File paths#
Most fs operations accept file paths that may be specified in the form of a string, a <Buffer>, or a <URL> object using the file: protocol.
String paths#
String paths are interpreted as UTF-8 character sequences identifying the absolute or relative filename. Relative paths will be resolved relative to the current working directory as determined by calling process.cwd().
Example using an absolute path on POSIX:
```js
import { open } from 'node:fs/promises';

let fd;
try {
  fd = await open('/open/some/file.txt', 'r');
  // Do something with the file
} finally {
  await fd?.close();
}
```

Example using a relative path on POSIX (relative to process.cwd()):
```js
import { open } from 'node:fs/promises';

let fd;
try {
  fd = await open('file.txt', 'r');
  // Do something with the file
} finally {
  await fd?.close();
}
```

File URL paths#
For most node:fs module functions, the path or filename argument may be passed as a <URL> object using the file: protocol.
```js
import { readFileSync } from 'node:fs';

readFileSync(new URL('file:///tmp/hello'));
```

file: URLs are always absolute paths.
Platform-specific considerations#
On Windows, file: <URL>s with a host name convert to UNC paths, while file: <URL>s with drive letters convert to local absolute paths. file: <URL>s with no host name and no drive letter will result in an error:
```js
import { readFileSync } from 'node:fs';

// On Windows:

// - WHATWG file URLs with hostname convert to UNC path
// file://hostname/p/a/t/h/file => \\hostname\p\a\t\h\file
readFileSync(new URL('file://hostname/p/a/t/h/file'));

// - WHATWG file URLs with drive letters convert to absolute path
// file:///C:/tmp/hello => C:\tmp\hello
readFileSync(new URL('file:///C:/tmp/hello'));

// - WHATWG file URLs without hostname must have a drive letter
readFileSync(new URL('file:///notdriveletter/p/a/t/h/file'));
readFileSync(new URL('file:///c/p/a/t/h/file'));
// TypeError [ERR_INVALID_FILE_URL_PATH]: File URL path must be absolute
```

file: <URL>s with drive letters must use : as a separator just after the drive letter. Using another separator will result in an error.
On all other platforms, file: <URL>s with a host name are unsupported and will result in an error:
```js
import { readFileSync } from 'node:fs';

// On other platforms:

// - WHATWG file URLs with hostname are unsupported
// file://hostname/p/a/t/h/file => throw!
readFileSync(new URL('file://hostname/p/a/t/h/file'));
// TypeError [ERR_INVALID_FILE_URL_PATH]: must be absolute

// - WHATWG file URLs convert to absolute path
// file:///tmp/hello => /tmp/hello
readFileSync(new URL('file:///tmp/hello'));
```

A file: <URL> having encoded slash characters will result in an error on all platforms:
```js
import { readFileSync } from 'node:fs';

// On Windows
readFileSync(new URL('file:///C:/p/a/t/h/%2F'));
readFileSync(new URL('file:///C:/p/a/t/h/%2f'));
/* TypeError [ERR_INVALID_FILE_URL_PATH]: File URL path must not include encoded
\ or / characters */

// On POSIX
readFileSync(new URL('file:///p/a/t/h/%2F'));
readFileSync(new URL('file:///p/a/t/h/%2f'));
/* TypeError [ERR_INVALID_FILE_URL_PATH]: File URL path must not include encoded
/ characters */
```

On Windows, file: <URL>s having encoded backslash will result in an error:
```js
import { readFileSync } from 'node:fs';

// On Windows
readFileSync(new URL('file:///C:/path/%5C'));
readFileSync(new URL('file:///C:/path/%5c'));
/* TypeError [ERR_INVALID_FILE_URL_PATH]: File URL path must not include encoded
\ or / characters */
```

Buffer paths#
Paths specified using a <Buffer> are useful primarily on certain POSIX operating systems that treat file paths as opaque byte sequences. On such systems, it is possible for a single file path to contain sub-sequences that use multiple character encodings. As with string paths, <Buffer> paths may be relative or absolute:
Example using an absolute path on POSIX:
```js
import { open } from 'node:fs/promises';
import { Buffer } from 'node:buffer';

let fd;
try {
  fd = await open(Buffer.from('/open/some/file.txt'), 'r');
  // Do something with the file
} finally {
  await fd?.close();
}
```

Per-drive working directories on Windows#
On Windows, Node.js follows the concept of per-drive working directory. This behavior can be observed when using a drive path without a backslash. For example fs.readdirSync('C:\\') can potentially return a different result than fs.readdirSync('C:'). For more information, see this MSDN page.
File descriptors#
On POSIX systems, for every process, the kernel maintains a table of currently open files and resources. Each open file is assigned a simple numeric identifier called a file descriptor. At the system level, all file system operations use these file descriptors to identify and track each specific file. Windows systems use a different but conceptually similar mechanism for tracking resources. To simplify things for users, Node.js abstracts away the differences between operating systems and assigns all open files a numeric file descriptor.
The callback-based fs.open() and synchronous fs.openSync() methods open a file and allocate a new file descriptor. Once allocated, the file descriptor may be used to read data from, write data to, or request information about the file.
Operating systems limit the number of file descriptors that may be open at any given time, so it is critical to close the descriptor when operations are completed. Failure to do so will result in a memory leak that will eventually cause an application to crash.
```js
import { open, close, fstat } from 'node:fs';

function closeFd(fd) {
  close(fd, (err) => {
    if (err) throw err;
  });
}

open('/open/some/file.txt', 'r', (err, fd) => {
  if (err) throw err;
  try {
    fstat(fd, (err, stat) => {
      if (err) {
        closeFd(fd);
        throw err;
      }

      // use stat

      closeFd(fd);
    });
  } catch (err) {
    closeFd(fd);
    throw err;
  }
});
```

The promise-based APIs use a <FileHandle> object in place of the numeric file descriptor. These objects are better managed by the system to ensure that resources are not leaked. However, it is still required that they are closed when operations are completed:
```js
import { open } from 'node:fs/promises';

let file;
try {
  file = await open('/open/some/file.txt', 'r');
  const stat = await file.stat();
  // use stat
} finally {
  await file.close();
}
```

Threadpool usage#
All callback and promise-based file system APIs (with the exception of fs.FSWatcher()) use libuv's threadpool. This can have surprising and negative performance implications for some applications. See the UV_THREADPOOL_SIZE documentation for more information.
File system flags#
The following flags are available wherever the flag option takes a string.
- 'a': Open file for appending. The file is created if it does not exist.
- 'ax': Like 'a' but fails if the path exists.
- 'a+': Open file for reading and appending. The file is created if it does not exist.
- 'ax+': Like 'a+' but fails if the path exists.
- 'as': Open file for appending in synchronous mode. The file is created if it does not exist.
- 'as+': Open file for reading and appending in synchronous mode. The file is created if it does not exist.
- 'r': Open file for reading. An exception occurs if the file does not exist.
- 'rs': Open file for reading in synchronous mode. An exception occurs if the file does not exist.
- 'r+': Open file for reading and writing. An exception occurs if the file does not exist.
- 'rs+': Open file for reading and writing in synchronous mode. Instructs the operating system to bypass the local file system cache. This is primarily useful for opening files on NFS mounts as it allows skipping the potentially stale local cache. It has a very real impact on I/O performance so using this flag is not recommended unless it is needed. This doesn't turn fs.open() or fsPromises.open() into a synchronous blocking call. If synchronous operation is desired, something like fs.openSync() should be used.
- 'w': Open file for writing. The file is created (if it does not exist) or truncated (if it exists).
- 'wx': Like 'w' but fails if the path exists.
- 'w+': Open file for reading and writing. The file is created (if it does not exist) or truncated (if it exists).
- 'wx+': Like 'w+' but fails if the path exists.
flag can also be a number as documented by open(2); commonly used constants are available from fs.constants. On Windows, flags are translated to their equivalent ones where applicable, e.g. O_WRONLY to FILE_GENERIC_WRITE, or O_EXCL|O_CREAT to CREATE_NEW, as accepted by CreateFileW.
The exclusive flag 'x' (O_EXCL flag in open(2)) causes the operation to return an error if the path already exists. On POSIX, if the path is a symbolic link, using O_EXCL returns an error even if the link is to a path that does not exist. The exclusive flag might not work with network file systems.
On Linux, positional writes don't work when the file is opened in append mode. The kernel ignores the position argument and always appends the data to the end of the file.
Modifying a file rather than replacing it may require the flag option to be set to 'r+' rather than the default 'w'.
The behavior of some flags is platform-specific. As such, opening a directory on macOS and Linux with the 'a+' flag, as in the example below, will return an error. In contrast, on Windows and FreeBSD, a file descriptor or a FileHandle will be returned.
```js
// macOS and Linux
fs.open('<directory>', 'a+', (err, fd) => {
  // => [Error: EISDIR: illegal operation on a directory, open <directory>]
});

// Windows and FreeBSD
fs.open('<directory>', 'a+', (err, fd) => {
  // => null, <fd>
});
```

On Windows, opening an existing hidden file using the 'w' flag (either through fs.open(), fs.writeFile(), or fsPromises.open()) will fail with EPERM. Existing hidden files can be opened for writing with the 'r+' flag.
A call to fs.ftruncate() or filehandle.truncate() can be used to reset the file contents.
Global objects#
These objects are available in all modules.
The following variables may appear to be global but are not. They exist only in the scope of CommonJS modules:
The objects listed here are specific to Node.js. There are built-in objects that are part of the JavaScript language itself, which are also globally accessible.
Class:AbortController#
History
| Version | Changes |
|---|---|
| v15.4.0 | No longer experimental. |
| v15.0.0, v14.17.0 | Added in: v15.0.0, v14.17.0 |
A utility class used to signal cancelation in selected Promise-based APIs. The API is based on the Web API <AbortController>.
```js
const ac = new AbortController();

ac.signal.addEventListener('abort', () => console.log('Aborted!'),
                           { once: true });

ac.abort();

console.log(ac.signal.aborted);  // Prints true
```

abortController.abort([reason])#
History
| Version | Changes |
|---|---|
| v17.2.0, v16.14.0 | Added the new optional reason argument. |
| v15.0.0, v14.17.0 | Added in: v15.0.0, v14.17.0 |
reason <any> An optional reason, retrievable on the AbortSignal's reason property.
Triggers the abort signal, causing the abortController.signal to emit the 'abort' event.
Class:AbortSignal#
- Extends: <EventTarget>
The AbortSignal is used to notify observers when the abortController.abort() method is called.
Static method:AbortSignal.abort([reason])#
History
| Version | Changes |
|---|---|
| v17.2.0, v16.14.0 | Added the new optional reason argument. |
| v15.12.0, v14.17.0 | Added in: v15.12.0, v14.17.0 |
reason <any>
- Returns: <AbortSignal>
Returns a new already aborted AbortSignal.
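A minimal sketch of the already-aborted signal:

```javascript
// The signal is aborted from the moment it is created,
// and its reason is whatever value was passed in.
const signal = AbortSignal.abort(new Error('boom'));

console.log(signal.aborted);        // true
console.log(signal.reason.message); // 'boom'
```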
Static method:AbortSignal.timeout(delay)#
delay <number> The number of milliseconds to wait before triggering the AbortSignal.
Returns a new AbortSignal which will be aborted in delay milliseconds.
Static method:AbortSignal.any(signals)#
signals <AbortSignal[]> The AbortSignals of which to compose a new AbortSignal.
Returns a new AbortSignal which will be aborted if any of the provided signals are aborted. Its abortSignal.reason will be set to whichever one of the signals caused it to be aborted.
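As a sketch, a composite signal aborts as soon as any of its source signals does, and adopts that signal's reason:

```javascript
const ac1 = new AbortController();
const ac2 = new AbortController();

// Compose a signal that follows both controllers.
const combined = AbortSignal.any([ac1.signal, ac2.signal]);
console.log(combined.aborted); // false

ac2.abort('second controller aborted');
console.log(combined.aborted); // true
console.log(combined.reason);  // 'second controller aborted'
```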
Event:'abort'#
The 'abort' event is emitted when the abortController.abort() method is called. The callback is invoked with a single object argument with a single type property set to 'abort':

```js
const ac = new AbortController();

// Use either the onabort property...
ac.signal.onabort = () => console.log('aborted!');

// Or the EventTarget API...
ac.signal.addEventListener('abort', (event) => {
  console.log(event.type);  // Prints 'abort'
}, { once: true });

ac.abort();
```

The AbortController with which the AbortSignal is associated will only ever trigger the 'abort' event once. We recommend that code check that the abortSignal.aborted attribute is false before adding an 'abort' event listener.
Any event listeners attached to the AbortSignal should use the { once: true } option (or, if using the EventEmitter APIs to attach a listener, use the once() method) to ensure that the event listener is removed as soon as the 'abort' event is handled. Failure to do so may result in memory leaks.
abortSignal.aborted#
- Type: <boolean> True after the AbortController has been aborted.
abortSignal.onabort#
- Type: <Function>
An optional callback function that may be set by user code to be notified when the abortController.abort() function has been called.
abortSignal.reason#
- Type: <any>
An optional reason specified when the AbortSignal was triggered.

```js
const ac = new AbortController();
ac.abort(new Error('boom!'));
console.log(ac.signal.reason);  // Error: boom!
```

abortSignal.throwIfAborted()#
If abortSignal.aborted is true, throws abortSignal.reason.
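A minimal sketch; the doWork() helper below is hypothetical:

```javascript
const ac = new AbortController();

// Hypothetical helper that bails out if cancelation was already requested.
function doWork(signal) {
  signal.throwIfAborted();
  return 'done';
}

const before = doWork(ac.signal); // 'done': the signal is not yet aborted

ac.abort(new Error('canceled'));
let threw = false;
try {
  doWork(ac.signal);
} catch (err) {
  threw = true; // err === ac.signal.reason
}
```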
Class:Blob#
See <Blob>.
Class:Buffer#
- Type: <Function>
Used to handle binary data. See the buffer section.
Class:ByteLengthQueuingStrategy#
History
| Version | Changes |
|---|---|
| v23.11.0, v22.15.0 | Marking the API stable. |
| v18.0.0 | Added in: v18.0.0 |
A browser-compatible implementation of ByteLengthQueuingStrategy.
__dirname#
This variable may appear to be global but is not. See __dirname.
__filename#
This variable may appear to be global but is not. See __filename.
atob(data)#
Use Buffer.from(data, 'base64') instead.
Global alias for buffer.atob().
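A sketch of the recommended Buffer-based replacement for atob() and btoa():

```javascript
// Buffer handles arbitrary binary data, unlike atob()/btoa(),
// which are limited to latin-1 strings.
const encoded = Buffer.from('hello').toString('base64');   // 'aGVsbG8='
const decoded = Buffer.from(encoded, 'base64').toString(); // 'hello'
```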
An automated migration is available (source):
```bash
npx codemod@latest @nodejs/buffer-atob-btoa
```

Class:BroadcastChannel#
btoa(data)#
Use buf.toString('base64') instead.
Global alias for buffer.btoa().
An automated migration is available (source):
```bash
npx codemod@latest @nodejs/buffer-atob-btoa
```

clearImmediate(immediateObject)#
clearImmediate is described in the timers section.
clearInterval(intervalObject)#
clearInterval is described in the timers section.
clearTimeout(timeoutObject)#
clearTimeout is described in the timers section.
Class:CloseEvent#
A browser-compatible implementation of <CloseEvent>. Disable this API with the --no-experimental-websocket CLI flag.
Class:CompressionStream#
History
| Version | Changes |
|---|---|
| v24.7.0, v22.20.0 | format now accepts |
| v23.11.0, v22.15.0 | Marking the API stable. |
| v18.0.0 | Added in: v18.0.0 |
A browser-compatible implementation of CompressionStream.
console#
- Type: <Object>
Used to print to stdout and stderr. See the console section.
Class:CountQueuingStrategy#
History
| Version | Changes |
|---|---|
| v23.11.0, v22.15.0 | Marking the API stable. |
| v18.0.0 | Added in: v18.0.0 |
A browser-compatible implementation of CountQueuingStrategy.
Class:Crypto#
History
| Version | Changes |
|---|---|
| v23.0.0 | No longer experimental. |
| v19.0.0 | No longer behind |
| v17.6.0, v16.15.0 | Added in: v17.6.0, v16.15.0 |
A browser-compatible implementation of <Crypto>. This global is available only if the Node.js binary was compiled with support for the node:crypto module.
crypto#
History
| Version | Changes |
|---|---|
| v23.0.0 | No longer experimental. |
| v19.0.0 | No longer behind |
| v17.6.0, v16.15.0 | Added in: v17.6.0, v16.15.0 |
A browser-compatible implementation of the Web Crypto API.
Class:CryptoKey#
History
| Version | Changes |
|---|---|
| v23.0.0 | No longer experimental. |
| v19.0.0 | No longer behind |
| v17.6.0, v16.15.0 | Added in: v17.6.0, v16.15.0 |
A browser-compatible implementation of <CryptoKey>. This global is available only if the Node.js binary was compiled with support for the node:crypto module.
Class:CustomEvent#
History
| Version | Changes |
|---|---|
| v23.0.0 | No longer experimental. |
| v22.1.0, v20.13.0 | CustomEvent is now stable. |
| v19.0.0 | No longer behind |
| v18.7.0, v16.17.0 | Added in: v18.7.0, v16.17.0 |
A browser-compatible implementation of <CustomEvent>.
Class:DecompressionStream#
History
| Version | Changes |
|---|---|
| v24.7.0, v22.20.0 | format now accepts |
| v23.11.0, v22.15.0 | Marking the API stable. |
| v18.0.0 | Added in: v18.0.0 |
A browser-compatible implementation of DecompressionStream.
ErrorEvent#
A browser-compatible implementation of <ErrorEvent>.
Class:Event#
History
| Version | Changes |
|---|---|
| v15.4.0 | No longer experimental. |
| v15.0.0 | Added in: v15.0.0 |
A browser-compatible implementation of the Event class. See EventTarget and Event API for more details.
Class:EventSource#
Enable this API with the --experimental-eventsource CLI flag.
A browser-compatible implementation of <EventSource>.
Class:EventTarget#
History
| Version | Changes |
|---|---|
| v15.4.0 | No longer experimental. |
| v15.0.0 | Added in: v15.0.0 |
A browser-compatible implementation of the EventTarget class. See EventTarget and Event API for more details.
exports#
This variable may appear to be global but is not. See exports.
fetch#
History
| Version | Changes |
|---|---|
| v21.0.0 | No longer experimental. |
| v18.0.0 | No longer behind |
| v17.5.0, v16.15.0 | Added in: v17.5.0, v16.15.0 |
A browser-compatible implementation of the fetch() function.
```js
const res = await fetch('https://nodejs.org/api/documentation.json');
if (res.ok) {
  const data = await res.json();
  console.log(data);
}
```

The implementation is based upon undici, an HTTP/1.1 client written from scratch for Node.js. You can figure out which version of undici is bundled in your Node.js process by reading the process.versions.undici property.
Custom dispatcher#
You can use a custom dispatcher to dispatch requests by passing it in fetch's options object. The dispatcher must be compatible with undici's Dispatcher class.
```js
fetch(url, { dispatcher: new MyAgent() });
```

It is possible to change the global dispatcher in Node.js by installing undici and using the setGlobalDispatcher() method. Calling this method will affect both undici and Node.js.
```js
import { setGlobalDispatcher } from 'undici';
setGlobalDispatcher(new MyAgent());
```

Class:File#
See <File>.
Class:FormData#
History
| Version | Changes |
|---|---|
| v21.0.0 | No longer experimental. |
| v18.0.0 | No longer behind |
| v17.6.0, v16.15.0 | Added in: v17.6.0, v16.15.0 |
A browser-compatible implementation of <FormData>.
global#
Use globalThis instead.
- Type: <Object> The global namespace object.
In browsers, the top-level scope has traditionally been the global scope. This means that var something will define a new global variable, except within ECMAScript modules. In Node.js, this is different. The top-level scope is not the global scope; var something inside a Node.js module will be local to that module, regardless of whether it is a CommonJS module or an ECMAScript module.
Class:Headers#
History
| Version | Changes |
|---|---|
| v21.0.0 | No longer experimental. |
| v18.0.0 | No longer behind |
| v17.5.0, v16.15.0 | Added in: v17.5.0, v16.15.0 |
A browser-compatible implementation of <Headers>.
localStorage#
History
| Version | Changes |
|---|---|
| v25.0.0 | When webstorage is enabled and |
| v25.0.0 | This API is no longer behind |
| v22.4.0 | Added in: v22.4.0 |
Disable this API with the --no-experimental-webstorage CLI flag.
A browser-compatible implementation of localStorage. Data is stored unencrypted in the file specified by the --localstorage-file CLI flag. The maximum amount of data that can be stored is 10 MB. Any modification of this data outside of the Web Storage API is not supported. localStorage data is not stored per user or per request when used in the context of a server; it is shared across all users and requests.
Class:MessageChannel#
The MessageChannel class. See MessageChannel for more details.
Class:MessageEvent#
A browser-compatible implementation of <MessageEvent>.
Class:MessagePort#
The MessagePort class. See MessagePort for more details.
module#
This variable may appear to be global but is not. See module.
Class:Navigator#
Disable this API with the --no-experimental-global-navigator CLI flag.
A partial implementation of the Navigator API.
navigator#
Disable this API with the --no-experimental-global-navigator CLI flag.
A partial implementation of window.navigator.
navigator.hardwareConcurrency#
- Type: <number>
The navigator.hardwareConcurrency read-only property returns the number of logical processors available to the current Node.js instance.

```js
console.log(`This process is running on ${navigator.hardwareConcurrency} logical processors`);
```

navigator.language#
- Type: <string>
The navigator.language read-only property returns a string representing the preferred language of the Node.js instance. The language will be determined by the ICU library used by Node.js at runtime based on the default language of the operating system.
The value represents the language version as defined in RFC 5646.
The fallback value on builds without ICU is 'en-US'.

```js
console.log(`The preferred language of the Node.js instance has the tag '${navigator.language}'`);
```

navigator.languages#
- Type: <string[]>
The navigator.languages read-only property returns an array of strings representing the preferred languages of the Node.js instance. By default navigator.languages contains only the value of navigator.language, which will be determined by the ICU library used by Node.js at runtime based on the default language of the operating system.
The fallback value on builds without ICU is ['en-US'].

```js
console.log(`The preferred languages are '${navigator.languages}'`);
```

navigator.platform#
- Type: <string>
The navigator.platform read-only property returns a string identifying the platform on which the Node.js instance is running.

```js
console.log(`This process is running on ${navigator.platform}`);
```

navigator.userAgent#
- Type: <string>
The navigator.userAgent read-only property returns the user agent, consisting of the runtime name and major version number.

```js
console.log(`The user-agent is ${navigator.userAgent}`);  // Prints "Node.js/21"
```

navigator.locks#
The navigator.locks read-only property returns a LockManager instance that can be used to coordinate access to resources that may be shared across multiple threads within the same process. This global implementation matches the semantics of the browser LockManager API.
```js
// Request an exclusive lock
await navigator.locks.request('my_resource', async (lock) => {
  // The lock has been acquired.
  console.log(`Lock acquired: ${lock.name}`);
  // Lock is automatically released when the function returns
});

// Request a shared lock
await navigator.locks.request('shared_resource', { mode: 'shared' }, async (lock) => {
  // Multiple shared locks can be held simultaneously
  console.log(`Shared lock acquired: ${lock.name}`);
});
```

```js
// Request an exclusive lock
navigator.locks.request('my_resource', async (lock) => {
  // The lock has been acquired.
  console.log(`Lock acquired: ${lock.name}`);
  // Lock is automatically released when the function returns
}).then(() => {
  console.log('Lock released');
});

// Request a shared lock
navigator.locks.request('shared_resource', { mode: 'shared' }, async (lock) => {
  // Multiple shared locks can be held simultaneously
  console.log(`Shared lock acquired: ${lock.name}`);
}).then(() => {
  console.log('Shared lock released');
});
```
See worker_threads.locks for detailed API documentation.
Class: PerformanceEntry#
The PerformanceEntry class. See PerformanceEntry for more details.
Class: PerformanceMark#
The PerformanceMark class. See PerformanceMark for more details.
Class: PerformanceMeasure#
The PerformanceMeasure class. See PerformanceMeasure for more details.
Class: PerformanceObserver#
The PerformanceObserver class. See PerformanceObserver for more details.
Class: PerformanceObserverEntryList#
The PerformanceObserverEntryList class. See PerformanceObserverEntryList for more details.
Class: PerformanceResourceTiming#
The PerformanceResourceTiming class. See PerformanceResourceTiming for more details.
performance#
The perf_hooks.performance object.
process#
- Type: <Object>
The process object. See the process object section.
queueMicrotask(callback)#
- callback <Function> Function to be queued.
The queueMicrotask() method queues a microtask to invoke callback. If callback throws an exception, the process object's 'uncaughtException' event will be emitted.
The microtask queue is managed by V8 and may be used in a similar manner to the process.nextTick() queue, which is managed by Node.js. The process.nextTick() queue is always processed before the microtask queue within each turn of the Node.js event loop.
```js
// Here, `queueMicrotask()` is used to ensure the 'load' event is always
// emitted asynchronously, and therefore consistently. Using
// `process.nextTick()` here would result in the 'load' event always emitting
// before any other promise jobs.
DataHandler.prototype.load = async function load(key) {
  const hit = this._cache.get(key);
  if (hit !== undefined) {
    queueMicrotask(() => {
      this.emit('load', hit);
    });
    return;
  }

  const data = await fetchData(key);
  this._cache.set(key, data);
  this.emit('load', data);
};
```

Class: ReadableByteStreamController#
History
| Version | Changes |
|---|---|
| v23.11.0, v22.15.0 | Marking the API stable. |
| v18.0.0 | Added in: v18.0.0 |
A browser-compatible implementation of ReadableByteStreamController.
Class: ReadableStream#
History
| Version | Changes |
|---|---|
| v23.11.0, v22.15.0 | Marking the API stable. |
| v18.0.0 | Added in: v18.0.0 |
A browser-compatible implementation of ReadableStream.
Class: ReadableStreamBYOBReader#
History
| Version | Changes |
|---|---|
| v23.11.0, v22.15.0 | Marking the API stable. |
| v18.0.0 | Added in: v18.0.0 |
A browser-compatible implementation of ReadableStreamBYOBReader.
Class: ReadableStreamBYOBRequest#
History
| Version | Changes |
|---|---|
| v23.11.0, v22.15.0 | Marking the API stable. |
| v18.0.0 | Added in: v18.0.0 |
A browser-compatible implementation of ReadableStreamBYOBRequest.
Class: ReadableStreamDefaultController#
History
| Version | Changes |
|---|---|
| v23.11.0, v22.15.0 | Marking the API stable. |
| v18.0.0 | Added in: v18.0.0 |
A browser-compatible implementation of ReadableStreamDefaultController.
Class: ReadableStreamDefaultReader#
History
| Version | Changes |
|---|---|
| v23.11.0, v22.15.0 | Marking the API stable. |
| v18.0.0 | Added in: v18.0.0 |
A browser-compatible implementation of ReadableStreamDefaultReader.
require()#
This variable may appear to be global but is not. See require().
Class: Response#
History
| Version | Changes |
|---|---|
| v21.0.0 | No longer experimental. |
| v18.0.0 | No longer behind |
| v17.5.0, v16.15.0 | Added in: v17.5.0, v16.15.0 |
A browser-compatible implementation of <Response>.
Class: Request#
History
| Version | Changes |
|---|---|
| v21.0.0 | No longer experimental. |
| v18.0.0 | No longer behind |
| v17.5.0, v16.15.0 | Added in: v17.5.0, v16.15.0 |
A browser-compatible implementation of <Request>.
sessionStorage#
History
| Version | Changes |
|---|---|
| v25.0.0 | This API is no longer behind |
| v22.4.0 | Added in: v22.4.0 |
Disable this API with the --no-experimental-webstorage CLI flag. A browser-compatible implementation of sessionStorage. Data is stored in memory, with a storage quota of 10 MB. sessionStorage data persists only within the currently running process, and is not shared between workers.
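A sketch of the Web Storage surface (guarded, since older releases expose the global only behind the --experimental-webstorage flag):

```js
// sessionStorage follows the Web Storage API: string keys and string values.
// Available by default in recent Node.js; older releases require the
// --experimental-webstorage CLI flag, so the global may be absent.
if (typeof sessionStorage !== 'undefined') {
  sessionStorage.setItem('theme', 'dark');
  console.log(sessionStorage.getItem('theme'));  // 'dark'
  sessionStorage.removeItem('theme');
  console.log(sessionStorage.getItem('theme'));  // null
}
```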
setImmediate(callback[, ...args])#
setImmediate is described in the timers section.
setInterval(callback, delay[, ...args])#
setInterval is described in the timers section.
setTimeout(callback, delay[, ...args])#
setTimeout is described in the timers section.
Class: Storage#
Disable this API with the --no-experimental-webstorage CLI flag. A browser-compatible implementation of <Storage>.
structuredClone(value[, options])#
The WHATWG structuredClone method.
Class:SubtleCrypto#
History
| Version | Changes |
|---|---|
| v19.0.0 | No longer behind |
| v17.6.0, v16.15.0 | Added in: v17.6.0, v16.15.0 |
A browser-compatible implementation of <SubtleCrypto>. This global is available only if the Node.js binary was compiled with support for the node:crypto module.
Class: DOMException#
The WHATWG <DOMException> class.
Class: TextDecoder#
The WHATWG TextDecoder class. See the TextDecoder section.
Class:TextDecoderStream#
History
| Version | Changes |
|---|---|
| v23.11.0, v22.15.0 | Marking the API stable. |
| v18.0.0 | Added in: v18.0.0 |
A browser-compatible implementation of TextDecoderStream.
Class: TextEncoder#
The WHATWG TextEncoder class. See the TextEncoder section.
Class: TextEncoderStream#
History
| Version | Changes |
|---|---|
| v23.11.0, v22.15.0 | Marking the API stable. |
| v18.0.0 | Added in: v18.0.0 |
A browser-compatible implementation of TextEncoderStream.
Class: TransformStream#
History
| Version | Changes |
|---|---|
| v23.11.0, v22.15.0 | Marking the API stable. |
| v18.0.0 | Added in: v18.0.0 |
A browser-compatible implementation of TransformStream.
Class: TransformStreamDefaultController#
History
| Version | Changes |
|---|---|
| v23.11.0, v22.15.0 | Marking the API stable. |
| v18.0.0 | Added in: v18.0.0 |
A browser-compatible implementation of TransformStreamDefaultController.
Class: URL#
The WHATWG URL class. See the URL section.
Class: URLPattern#
The WHATWG URLPattern class. See the URLPattern section.
Class: URLSearchParams#
The WHATWG URLSearchParams class. See the URLSearchParams section.
Class: WebAssembly#
- Type: <Object>
The object that acts as the namespace for all W3C WebAssembly related functionality. See the Mozilla Developer Network for usage and compatibility.
Class: WebSocket#
History
| Version | Changes |
|---|---|
| v22.4.0 | No longer experimental. |
| v22.0.0 | No longer behind |
| v21.0.0, v20.10.0 | Added in: v21.0.0, v20.10.0 |
A browser-compatible implementation of <WebSocket>. Disable this API with the --no-experimental-websocket CLI flag.
Class: WritableStream#
History
| Version | Changes |
|---|---|
| v23.11.0, v22.15.0 | Marking the API stable. |
| v18.0.0 | Added in: v18.0.0 |
A browser-compatible implementation of WritableStream.
Class: WritableStreamDefaultController#
History
| Version | Changes |
|---|---|
| v23.11.0, v22.15.0 | Marking the API stable. |
| v18.0.0 | Added in: v18.0.0 |
A browser-compatible implementation of WritableStreamDefaultController.
Class: WritableStreamDefaultWriter#
History
| Version | Changes |
|---|---|
| v23.11.0, v22.15.0 | Marking the API stable. |
| v18.0.0 | Added in: v18.0.0 |
A browser-compatible implementation of WritableStreamDefaultWriter.
HTTP#
Source Code: lib/http.js
This module, containing both a client and server, can be imported via require('node:http') (CommonJS) or import * as http from 'node:http' (ES module).
The HTTP interfaces in Node.js are designed to support many features of the protocol which have traditionally been difficult to use; in particular, large, possibly chunk-encoded, messages. The interface is careful never to buffer entire requests or responses, so the user is able to stream data.
HTTP message headers are represented by an object like this:

```js
{ "content-length": "123",
  "content-type": "text/plain",
  "connection": "keep-alive",
  "host": "example.com",
  "accept": "*/*" }
```

Keys are lowercased. Values are not modified.
In order to support the full spectrum of possible HTTP applications, the Node.js HTTP API is very low-level. It deals with stream handling and message parsing only. It parses a message into headers and body but it does not parse the actual headers or the body.
See message.headers for details on how duplicate headers are handled.
The raw headers, as they were received, are retained in the rawHeaders property, which is an array of [key, value, key2, value2, ...]. For example, the previous message header object might have a rawHeaders list like the following:
```js
[ 'ConTent-Length', '123456',
  'content-LENGTH', '123',
  'content-type', 'text/plain',
  'CONNECTION', 'keep-alive',
  'Host', 'example.com',
  'accepT', '*/*' ]
```

Class: http.Agent#
An Agent is responsible for managing connection persistence and reuse for HTTP clients. It maintains a queue of pending requests for a given host and port, reusing a single socket connection for each until the queue is empty, at which time the socket is either destroyed or put into a pool where it is kept to be used again for requests to the same host and port. Whether it is destroyed or pooled depends on the keepAlive option.
Pooled connections have TCP Keep-Alive enabled for them, but servers may still close idle connections, in which case they will be removed from the pool and a new connection will be made when a new HTTP request is made for that host and port. Servers may also refuse to allow multiple requests over the same connection, in which case the connection will have to be remade for every request and cannot be pooled. The Agent will still make the requests to that server, but each one will occur over a new connection.
When a connection is closed by the client or the server, it is removed from the pool. Any unused sockets in the pool will be unrefed so as not to keep the Node.js process running when there are no outstanding requests. (See socket.unref().)
It is good practice to destroy() an Agent instance when it is no longer in use, because unused sockets consume OS resources.
Sockets are removed from an agent when the socket emits either a 'close' event or an 'agentRemove' event. When intending to keep one HTTP request open for a long time without keeping it in the agent, something like the following may be done:
```js
http.get(options, (res) => {
  // Do stuff
}).on('socket', (socket) => {
  socket.emit('agentRemove');
});
```

An agent may also be used for an individual request. By providing {agent: false} as an option to the http.get() or http.request() functions, a one-time use Agent with default options will be used for the client connection.
agent: false:

```js
http.get({
  hostname: 'localhost',
  port: 80,
  path: '/',
  agent: false,  // Create a new agent just for this one request
}, (res) => {
  // Do stuff with response
});
```

new Agent([options])#
History
| Version | Changes |
|---|---|
| v24.5.0 | Add support for |
| v24.5.0 | Add support for |
| v24.7.0, v22.20.0 | Add support for |
| v15.6.0, v14.17.0 | Change the default scheduling from 'fifo' to 'lifo'. |
| v14.5.0, v12.20.0 | Add |
| v14.5.0, v12.19.0 | Add |
| v0.3.4 | Added in: v0.3.4 |
options <Object> Set of configurable options to set on the agent. Can have the following fields:
- keepAlive <boolean> Keep sockets around even when there are no outstanding requests, so they can be used for future requests without having to reestablish a TCP connection. Not to be confused with the keep-alive value of the Connection header. The Connection: keep-alive header is always sent when using an agent except when the Connection header is explicitly specified or when the keepAlive and maxSockets options are respectively set to false and Infinity, in which case Connection: close will be used. Default: false.
- keepAliveMsecs <number> When using the keepAlive option, specifies the initial delay for TCP Keep-Alive packets. Ignored when the keepAlive option is false or undefined. Default: 1000.
- agentKeepAliveTimeoutBuffer <number> Milliseconds to subtract from the server-provided keep-alive: timeout=... hint when determining socket expiration time. This buffer helps ensure the agent closes the socket slightly before the server does, reducing the chance of sending a request on a socket that's about to be closed by the server. Default: 1000.
- maxSockets <number> Maximum number of sockets to allow per host. If the same host opens multiple concurrent connections, each request will use a new socket until the maxSockets value is reached. If the host attempts to open more connections than maxSockets, the additional requests will enter into a pending request queue, and will enter active connection state when an existing connection terminates. This makes sure there are at most maxSockets active connections at any point in time, from a given host. Default: Infinity.
- maxTotalSockets <number> Maximum number of sockets allowed for all hosts in total. Each request will use a new socket until the maximum is reached. Default: Infinity.
- maxFreeSockets <number> Maximum number of sockets per host to leave open in a free state. Only relevant if keepAlive is set to true. Default: 256.
- scheduling <string> Scheduling strategy to apply when picking the next free socket to use. It can be 'fifo' or 'lifo'. The main difference between the two scheduling strategies is that 'lifo' selects the most recently used socket, while 'fifo' selects the least recently used socket. In case of a low rate of request per second, the 'lifo' scheduling will lower the risk of picking a socket that might have been closed by the server due to inactivity. In case of a high rate of request per second, the 'fifo' scheduling will maximize the number of open sockets, while the 'lifo' scheduling will keep it as low as possible. Default: 'lifo'.
- timeout <number> Socket timeout in milliseconds. This will set the timeout when the socket is created.
- proxyEnv <Object> | <undefined> Environment variables for proxy configuration. See Built-in Proxy Support for details. Default: undefined.
  - HTTP_PROXY <string> | <undefined> URL for the proxy server that HTTP requests should use. If undefined, no proxy is used for HTTP requests.
  - HTTPS_PROXY <string> | <undefined> URL for the proxy server that HTTPS requests should use. If undefined, no proxy is used for HTTPS requests.
  - NO_PROXY <string> | <undefined> Patterns specifying the endpoints that should not be routed through a proxy.
  - http_proxy <string> | <undefined> Same as HTTP_PROXY. If both are set, http_proxy takes precedence.
  - https_proxy <string> | <undefined> Same as HTTPS_PROXY. If both are set, https_proxy takes precedence.
  - no_proxy <string> | <undefined> Same as NO_PROXY. If both are set, no_proxy takes precedence.
- defaultPort <number> Default port to use when the port is not specified in requests. Default: 80.
- protocol <string> The protocol to use for the agent. Default: 'http:'.
options in socket.connect() are also supported.
To configure any of them, a custom http.Agent instance must be created.
```js
import { Agent, request } from 'node:http';
const keepAliveAgent = new Agent({ keepAlive: true });
options.agent = keepAliveAgent;
request(options, onResponseCallback);
```

```js
const http = require('node:http');
const keepAliveAgent = new http.Agent({ keepAlive: true });
options.agent = keepAliveAgent;
http.request(options, onResponseCallback);
```
agent.createConnection(options[, callback])#
- options <Object> Options containing connection details. Check net.createConnection() for the format of the options. For custom agents, this object is passed to the custom createConnection function.
- callback <Function> (Optional, primarily for custom agents) A function to be called by a custom createConnection implementation when the socket is created, especially for asynchronous operations.
  - err <Error> | <null> An error object if socket creation failed.
  - socket <stream.Duplex> The created socket.
- Returns: <stream.Duplex> The created socket. This is returned by the default implementation or by a custom synchronous createConnection implementation. If a custom createConnection uses the callback for asynchronous operation, this return value might not be the primary way to obtain the socket.
Produces a socket/stream to be used for HTTP requests.
By default, this function behaves identically to net.createConnection(), synchronously returning the created socket. The optional callback parameter in the signature is not used by this default implementation.
However, custom agents may override this method to provide greater flexibility, for example, to create sockets asynchronously. When overriding createConnection:
- Synchronous socket creation: The overriding method can return the socket/stream directly.
- Asynchronous socket creation: The overriding method can accept the callback and pass the created socket/stream to it (e.g., callback(null, newSocket)). If an error occurs during socket creation, it should be passed as the first argument to the callback (e.g., callback(err)).
The agent will call the provided createConnection function with options and this internal callback. The callback provided by the agent has a signature of (err, stream).
agent.keepSocketAlive(socket)#
- socket <stream.Duplex>
Called when socket is detached from a request and could be persisted by the Agent. Default behavior is to:

```js
socket.setKeepAlive(true, this.keepAliveMsecs);
socket.unref();
return true;
```

This method can be overridden by a particular Agent subclass. If this method returns a falsy value, the socket will be destroyed instead of persisting it for use with the next request.
The socket argument can be an instance of <net.Socket>, a subclass of <stream.Duplex>.
agent.reuseSocket(socket, request)#
- socket <stream.Duplex>
- request <http.ClientRequest>
Called when socket is attached to request after being persisted because of the keep-alive options. Default behavior is to:

```js
socket.ref();
```

This method can be overridden by a particular Agent subclass.
The socket argument can be an instance of <net.Socket>, a subclass of <stream.Duplex>.
agent.destroy()#
Destroy any sockets that are currently in use by the agent.
It is usually not necessary to do this. However, if using an agent with keepAlive enabled, then it is best to explicitly shut down the agent when it is no longer needed. Otherwise, sockets might stay open for quite a long time before the server terminates them.
agent.freeSockets#
History
| Version | Changes |
|---|---|
| v16.0.0 | The property now has a |
| v0.11.4 | Added in: v0.11.4 |
- Type: <Object>
An object which contains arrays of sockets currently awaiting use by the agent when keepAlive is enabled. Do not modify.
Sockets in the freeSockets list will be automatically destroyed and removed from the array on 'timeout'.
agent.getName([options])#
History
| Version | Changes |
|---|---|
| v17.7.0, v16.15.0 | The |
| v0.11.4 | Added in: v0.11.4 |
Get a unique name for a set of request options, to determine whether a connection can be reused. For an HTTP agent, this returns host:port:localAddress or host:port:localAddress:family. For an HTTPS agent, the name includes the CA, cert, ciphers, and other HTTPS/TLS-specific options that determine socket reusability.
agent.maxFreeSockets#
- Type: <number>
By default set to 256. For agents with keepAlive enabled, this sets the maximum number of sockets that will be left open in the free state.
agent.maxSockets#
- Type: <number>
By default set to Infinity. Determines how many concurrent sockets the agent can have open per origin. Origin is the returned value of agent.getName().
agent.maxTotalSockets#
- Type: <number>
By default set to Infinity. Determines how many concurrent sockets the agent can have open. Unlike maxSockets, this parameter applies across all origins.
Class: http.ClientRequest#
- Extends: <http.OutgoingMessage>
This object is created internally and returned from http.request(). It represents an in-progress request whose header has already been queued. The header is still mutable using the setHeader(name, value), getHeader(name), removeHeader(name) API. The actual header will be sent along with the first data chunk or when calling request.end().
To get the response, add a listener for 'response' to the request object. 'response' will be emitted from the request object when the response headers have been received. The 'response' event is executed with one argument which is an instance of http.IncomingMessage.
During the 'response' event, one can add listeners to the response object; particularly to listen for the 'data' event.
If no 'response' handler is added, then the response will be entirely discarded. However, if a 'response' event handler is added, then the data from the response object must be consumed, either by calling response.read() whenever there is a 'readable' event, or by adding a 'data' handler, or by calling the .resume() method. Until the data is consumed, the 'end' event will not fire. Also, until the data is read it will consume memory that can eventually lead to a 'process out of memory' error.
For backward compatibility, res will only emit 'error' if there is an 'error' listener registered.
Set the Content-Length header to limit the response body size. If response.strictContentLength is set to true, mismatching the Content-Length header value will result in an Error being thrown, identified by code: 'ERR_HTTP_CONTENT_LENGTH_MISMATCH'.
The Content-Length value should be in bytes, not characters. Use Buffer.byteLength() to determine the length of the body in bytes.
Event: 'abort'#
Deprecated: listen for the 'close' event instead. Emitted when the request has been aborted by the client. This event is only emitted on the first call to abort().
Event: 'close'#
Indicates that the request is completed, or its underlying connection was terminated prematurely (before the response completion).
Event: 'connect'#
- response <http.IncomingMessage>
- socket <stream.Duplex>
- head <Buffer>
Emitted each time a server responds to a request with a CONNECT method. If this event is not being listened for, clients receiving a CONNECT method will have their connections closed.
This event is guaranteed to be passed an instance of the <net.Socket> class, a subclass of <stream.Duplex>, unless the user specifies a socket type other than <net.Socket>.
A client and server pair demonstrating how to listen for the 'connect' event:
```js
import { createServer, request } from 'node:http';
import { connect } from 'node:net';
import { URL } from 'node:url';

// Create an HTTP tunneling proxy
const proxy = createServer((req, res) => {
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('okay');
});
proxy.on('connect', (req, clientSocket, head) => {
  // Connect to an origin server
  const { port, hostname } = new URL(`http://${req.url}`);
  const serverSocket = connect(port || 80, hostname, () => {
    clientSocket.write('HTTP/1.1 200 Connection Established\r\n' +
                       'Proxy-agent: Node.js-Proxy\r\n' +
                       '\r\n');
    serverSocket.write(head);
    serverSocket.pipe(clientSocket);
    clientSocket.pipe(serverSocket);
  });
});

// Now that proxy is running
proxy.listen(1337, '127.0.0.1', () => {

  // Make a request to a tunneling proxy
  const options = {
    port: 1337,
    host: '127.0.0.1',
    method: 'CONNECT',
    path: 'www.google.com:80',
  };

  const req = request(options);
  req.end();

  req.on('connect', (res, socket, head) => {
    console.log('got connected!');

    // Make a request over an HTTP tunnel
    socket.write('GET / HTTP/1.1\r\n' +
                 'Host: www.google.com:80\r\n' +
                 'Connection: close\r\n' +
                 '\r\n');
    socket.on('data', (chunk) => {
      console.log(chunk.toString());
    });
    socket.on('end', () => {
      proxy.close();
    });
  });
});
```

```js
const http = require('node:http');
const net = require('node:net');
const { URL } = require('node:url');

// Create an HTTP tunneling proxy
const proxy = http.createServer((req, res) => {
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('okay');
});
proxy.on('connect', (req, clientSocket, head) => {
  // Connect to an origin server
  const { port, hostname } = new URL(`http://${req.url}`);
  const serverSocket = net.connect(port || 80, hostname, () => {
    clientSocket.write('HTTP/1.1 200 Connection Established\r\n' +
                       'Proxy-agent: Node.js-Proxy\r\n' +
                       '\r\n');
    serverSocket.write(head);
    serverSocket.pipe(clientSocket);
    clientSocket.pipe(serverSocket);
  });
});

// Now that proxy is running
proxy.listen(1337, '127.0.0.1', () => {

  // Make a request to a tunneling proxy
  const options = {
    port: 1337,
    host: '127.0.0.1',
    method: 'CONNECT',
    path: 'www.google.com:80',
  };

  const req = http.request(options);
  req.end();

  req.on('connect', (res, socket, head) => {
    console.log('got connected!');

    // Make a request over an HTTP tunnel
    socket.write('GET / HTTP/1.1\r\n' +
                 'Host: www.google.com:80\r\n' +
                 'Connection: close\r\n' +
                 '\r\n');
    socket.on('data', (chunk) => {
      console.log(chunk.toString());
    });
    socket.on('end', () => {
      proxy.close();
    });
  });
});
```
Event: 'continue'#
Emitted when the server sends a '100 Continue' HTTP response, usually because the request contained 'Expect: 100-continue'. This is an instruction that the client should send the request body.
Event: 'finish'#
Emitted when the request has been sent. More specifically, this event is emitted when the last segment of the request headers and body have been handed off to the operating system for transmission over the network. It does not imply that the server has received anything yet.
Event: 'information'#
- info <Object>
Emitted when the server sends a 1xx intermediate response (excluding 101 Upgrade). The listeners of this event will receive an object containing the HTTP version, status code, status message, key-value headers object, and an array with the raw header names followed by their respective values.
```js
import { request } from 'node:http';

const options = {
  host: '127.0.0.1',
  port: 8080,
  path: '/length_request',
};

// Make a request
const req = request(options);
req.end();

req.on('information', (info) => {
  console.log(`Got information prior to main response: ${info.statusCode}`);
});
```

```js
const http = require('node:http');

const options = {
  host: '127.0.0.1',
  port: 8080,
  path: '/length_request',
};

// Make a request
const req = http.request(options);
req.end();

req.on('information', (info) => {
  console.log(`Got information prior to main response: ${info.statusCode}`);
});
```
101 Upgrade statuses do not fire this event due to their break from the traditional HTTP request/response chain, such as web sockets, in-place TLS upgrades, or HTTP 2.0. To be notified of 101 Upgrade notices, listen for the 'upgrade' event instead.
Event: 'response'#
- response <http.IncomingMessage>
Emitted when a response is received to this request. This event is emitted only once.
Event: 'socket'#
- socket <stream.Duplex>
This event is guaranteed to be passed an instance of the <net.Socket> class, a subclass of <stream.Duplex>, unless the user specifies a socket type other than <net.Socket>.
Event: 'timeout'#
Emitted when the underlying socket times out from inactivity. This only notifies that the socket has been idle. The request must be destroyed manually.
See also: request.setTimeout().
Event: 'upgrade'#
- response <http.IncomingMessage>
- socket <stream.Duplex>
- head <Buffer>
Emitted each time a server responds to a request with an upgrade. If this event is not being listened for and the response status code is 101 Switching Protocols, clients receiving an upgrade header will have their connections closed.
This event is guaranteed to be passed an instance of the <net.Socket> class, a subclass of <stream.Duplex>, unless the user specifies a socket type other than <net.Socket>.
A client and server pair demonstrating how to listen for the 'upgrade' event:
```js
import http from 'node:http';
import process from 'node:process';

// Create an HTTP server
const server = http.createServer((req, res) => {
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('okay');
});
server.on('upgrade', (req, socket, head) => {
  socket.write('HTTP/1.1 101 Web Socket Protocol Handshake\r\n' +
               'Upgrade: WebSocket\r\n' +
               'Connection: Upgrade\r\n' +
               '\r\n');

  socket.pipe(socket); // echo back
});

// Now that server is running
server.listen(1337, '127.0.0.1', () => {

  // make a request
  const options = {
    port: 1337,
    host: '127.0.0.1',
    headers: {
      'Connection': 'Upgrade',
      'Upgrade': 'websocket',
    },
  };

  const req = http.request(options);
  req.end();

  req.on('upgrade', (res, socket, upgradeHead) => {
    console.log('got upgraded!');
    socket.end();
    process.exit(0);
  });
});
```

```js
const http = require('node:http');

// Create an HTTP server
const server = http.createServer((req, res) => {
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('okay');
});
server.on('upgrade', (req, socket, head) => {
  socket.write('HTTP/1.1 101 Web Socket Protocol Handshake\r\n' +
               'Upgrade: WebSocket\r\n' +
               'Connection: Upgrade\r\n' +
               '\r\n');

  socket.pipe(socket); // echo back
});

// Now that server is running
server.listen(1337, '127.0.0.1', () => {

  // make a request
  const options = {
    port: 1337,
    host: '127.0.0.1',
    headers: {
      'Connection': 'Upgrade',
      'Upgrade': 'websocket',
    },
  };

  const req = http.request(options);
  req.end();

  req.on('upgrade', (res, socket, upgradeHead) => {
    console.log('got upgraded!');
    socket.end();
    process.exit(0);
  });
});
```
request.abort()#
Deprecated: use request.destroy() instead. Marks the request as aborting. Calling this will cause remaining data in the response to be dropped and the socket to be destroyed.
request.aborted#
History
| Version | Changes |
|---|---|
| v17.0.0, v16.12.0 | Deprecated since: v17.0.0, v16.12.0 |
| v11.0.0 | The |
| v0.11.14 | Added in: v0.11.14 |
Deprecated: check request.destroyed instead.
- Type: <boolean>
The request.aborted property will be true if the request has been aborted.
request.connection#
Deprecated: use request.socket.
- Type: <stream.Duplex>
See request.socket.
request.end([data[, encoding]][, callback])#
History
| Version | Changes |
|---|---|
| v15.0.0 | The |
| v10.0.0 | This method now returns a reference to |
| v0.1.90 | Added in: v0.1.90 |
- data <string> | <Buffer> | <Uint8Array>
- encoding <string>
- callback <Function>
- Returns: <this>
Finishes sending the request. If any parts of the body are unsent, it will flush them to the stream. If the request is chunked, this will send the terminating '0\r\n\r\n'.
If data is specified, it is equivalent to calling request.write(data, encoding) followed by request.end(callback).
If callback is specified, it will be called when the request stream is finished.
request.destroy([error])#
History
| Version | Changes |
|---|---|
| v14.5.0 | The function returns |
| v0.3.0 | Added in: v0.3.0 |
Destroy the request. Optionally emit an 'error' event, and emit a 'close' event. Calling this will cause remaining data in the response to be dropped and the socket to be destroyed.
See writable.destroy() for further details.
request.destroyed#
- Type: <boolean>
Is true after request.destroy() has been called.
See writable.destroyed for further details.
request.finished#
Deprecated: check request.writableEnded.
- Type: <boolean>
The request.finished property will be true if request.end() has been called. request.end() will automatically be called if the request was initiated via http.get().
request.flushHeaders()#
Flushes the request headers.
For efficiency reasons, Node.js normally buffers the request headers until request.end() is called or the first chunk of request data is written. It then tries to pack the request headers and data into a single TCP packet.
That's usually desired (it saves a TCP round-trip), but not when the first data is not sent until possibly much later. request.flushHeaders() bypasses the optimization and kickstarts the request.
request.getHeader(name)#
Reads out a header on the request. The name is case-insensitive.The type of the return value depends on the arguments provided torequest.setHeader().
```js
request.setHeader('content-type', 'text/html');
request.setHeader('Content-Length', Buffer.byteLength(body));
request.setHeader('Cookie', ['type=ninja', 'language=javascript']);
const contentType = request.getHeader('Content-Type');
// 'contentType' is 'text/html'
const contentLength = request.getHeader('Content-Length');
// 'contentLength' is of type number
const cookie = request.getHeader('Cookie');
// 'cookie' is of type string[]
```
request.getHeaderNames()#
- Returns: <string[]>
Returns an array containing the unique names of the current outgoing headers. All header names are lowercase.
```js
request.setHeader('Foo', 'bar');
request.setHeader('Cookie', ['foo=bar', 'bar=baz']);
const headerNames = request.getHeaderNames();
// headerNames === ['foo', 'cookie']
```
request.getHeaders()#
- Returns: <Object>
Returns a shallow copy of the current outgoing headers. Since a shallow copy is used, array values may be mutated without additional calls to various header-related http module methods. The keys of the returned object are the header names and the values are the respective header values. All header names are lowercase.

The object returned by the request.getHeaders() method does not prototypically inherit from the JavaScript Object. This means that typical Object methods such as obj.toString(), obj.hasOwnProperty(), and others are not defined and will not work.
```js
request.setHeader('Foo', 'bar');
request.setHeader('Cookie', ['foo=bar', 'bar=baz']);
const headers = request.getHeaders();
// headers === { foo: 'bar', 'cookie': ['foo=bar', 'bar=baz'] }
```
request.getRawHeaderNames()#
- Returns: <string[]>
Returns an array containing the unique names of the current outgoing raw headers. Header names are returned with their exact casing being set.
```js
request.setHeader('Foo', 'bar');
request.setHeader('Set-Cookie', ['foo=bar', 'bar=baz']);
const headerNames = request.getRawHeaderNames();
// headerNames === ['Foo', 'Set-Cookie']
```
request.hasHeader(name)#
Returns true if the header identified by name is currently set in the outgoing headers. The header name matching is case-insensitive.
```js
const hasContentType = request.hasHeader('content-type');
```
request.maxHeadersCount#
- Type: <number> Default: 2000
Limits maximum response headers count. If set to 0, no limit will be applied.
request.removeHeader(name)#
- name <string>
Removes a header that's already defined in the headers object.
```js
request.removeHeader('Content-Type');
```
request.reusedSocket#
- Type: <boolean> Whether the request is sent through a reused socket.
When sending a request through a keep-alive enabled agent, the underlying socket might be reused. But if the server closes the connection at an unfortunate time, the client may run into an 'ECONNRESET' error.
```js
import http from 'node:http';
const agent = new http.Agent({ keepAlive: true });
// Server has a 5 seconds keep-alive timeout by default
http
  .createServer((req, res) => {
    res.write('hello\n');
    res.end();
  })
  .listen(3000);

setInterval(() => {
  // Adapting a keep-alive agent
  http.get('http://localhost:3000', { agent }, (res) => {
    res.on('data', (data) => {
      // Do nothing
    });
  });
}, 5000); // Sending request on 5s interval so it's easy to hit idle timeout
```
By marking a request as having reused a socket or not, we can do automatic error retries based on it.
```js
import http from 'node:http';
const agent = new http.Agent({ keepAlive: true });

function retriableRequest() {
  const req = http
    .get('http://localhost:3000', { agent }, (res) => {
      // ...
    })
    .on('error', (err) => {
      // Check if retry is needed
      if (req.reusedSocket && err.code === 'ECONNRESET') {
        retriableRequest();
      }
    });
}

retriableRequest();
```
request.setHeader(name, value)#
Sets a single header value for the headers object. If this header already exists in the to-be-sent headers, its value will be replaced. Use an array of strings here to send multiple headers with the same name. Non-string values will be stored without modification. Therefore, request.getHeader() may return non-string values. However, the non-string values will be converted to strings for network transmission.
```js
request.setHeader('Content-Type', 'application/json');
```
or
```js
request.setHeader('Cookie', ['type=ninja', 'language=javascript']);
```
When the value is a string an exception will be thrown if it contains characters outside the latin1 encoding.
If you need to pass UTF-8 characters in the value please encode the value using the RFC 8187 standard.
```js
const filename = 'Rock 🎵.txt';
request.setHeader(
  'Content-Disposition',
  `attachment; filename*=utf-8''${encodeURIComponent(filename)}`,
);
```
request.setNoDelay([noDelay])#
- noDelay <boolean>
Once a socket is assigned to this request and is connected, socket.setNoDelay() will be called.
request.setSocketKeepAlive([enable][, initialDelay])#
Once a socket is assigned to this request and is connected, socket.setKeepAlive() will be called.
request.setTimeout(timeout[, callback])#
History
| Version | Changes |
|---|---|
| v9.0.0 | Consistently set socket timeout only when the socket connects. |
| v0.5.9 | Added in: v0.5.9 |
- timeout <number> Milliseconds before a request times out.
- callback <Function> Optional function to be called when a timeout occurs. Same as binding to the 'timeout' event.
- Returns: <http.ClientRequest>
Once a socket is assigned to this request and is connected, socket.setTimeout() will be called.
request.socket#
- Type: <stream.Duplex>
Reference to the underlying socket. Usually users will not want to access this property. In particular, the socket will not emit 'readable' events because of how the protocol parser attaches to the socket.
```js
import http from 'node:http';
const options = {
  host: 'www.google.com',
};
const req = http.get(options);
req.end();
req.once('response', (res) => {
  const ip = req.socket.localAddress;
  const port = req.socket.localPort;
  console.log(`Your IP address is ${ip} and your source port is ${port}.`);
  // Consume response object
});
```
This property is guaranteed to be an instance of the <net.Socket> class, a subclass of <stream.Duplex>, unless the user specified a socket type other than <net.Socket>.
request.writableEnded#
- Type: <boolean>
Is true after request.end() has been called. This property does not indicate whether the data has been flushed; for this, use request.writableFinished instead.
request.writableFinished#
- Type: <boolean>
Is true if all data has been flushed to the underlying system, immediately before the 'finish' event is emitted.
request.write(chunk[, encoding][, callback])#
History
| Version | Changes |
|---|---|
| v15.0.0 | The chunk parameter can now be a Uint8Array. |
| v0.1.29 | Added in: v0.1.29 |
- chunk <string> | <Buffer> | <Uint8Array>
- encoding <string>
- callback <Function>
- Returns: <boolean>
Sends a chunk of the body. This method can be called multiple times. If no Content-Length is set, data will automatically be encoded in HTTP Chunked transfer encoding, so that the server knows when the data ends. The Transfer-Encoding: chunked header is added. Calling request.end() is necessary to finish sending the request.

The encoding argument is optional and only applies when chunk is a string. Defaults to 'utf8'.

The callback argument is optional and will be called when this chunk of data is flushed, but only if the chunk is non-empty.

Returns true if the entire data was flushed successfully to the kernel buffer. Returns false if all or part of the data was queued in user memory. 'drain' will be emitted when the buffer is free again.

When the write function is called with an empty string or buffer, it does nothing and waits for more input.
Class: http.Server#
- Extends: <net.Server>
Event:'checkContinue'#
- request <http.IncomingMessage>
- response <http.ServerResponse>
Emitted each time a request with an HTTP Expect: 100-continue is received. If this event is not listened for, the server will automatically respond with a 100 Continue as appropriate.

Handling this event involves calling response.writeContinue() if the client should continue to send the request body, or generating an appropriate HTTP response (e.g. 400 Bad Request) if the client should not continue to send the request body.

When this event is emitted and handled, the 'request' event will not be emitted.
Event:'checkExpectation'#
- request <http.IncomingMessage>
- response <http.ServerResponse>

Emitted each time a request with an HTTP Expect header is received, where the value is not 100-continue. If this event is not listened for, the server will automatically respond with a 417 Expectation Failed as appropriate.

When this event is emitted and handled, the 'request' event will not be emitted.
Event:'clientError'#
History
| Version | Changes |
|---|---|
| v12.0.0 | The default behavior will return a 431 Request Header Fields Too Large if a HPE_HEADER_OVERFLOW error occurs. |
| v9.4.0 | The rawPacket property is the current buffer that was just parsed; adding this buffer to the error object makes it possible for developers to log the broken packet. |
| v6.0.0 | The default action of calling .destroy() on the socket will no longer take place if there are listeners attached for 'clientError'. |
| v0.1.94 | Added in: v0.1.94 |
- exception <Error>
- socket <stream.Duplex>

If a client connection emits an 'error' event, it will be forwarded here. The listener of this event is responsible for closing/destroying the underlying socket. For example, one may wish to more gracefully close the socket with a custom HTTP response instead of abruptly severing the connection. The socket must be closed or destroyed before the listener ends.
This event is guaranteed to be passed an instance of the<net.Socket> class,a subclass of<stream.Duplex>, unless the user specifies a sockettype other than<net.Socket>.
Default behavior is to try to close the socket with an HTTP '400 Bad Request', or an HTTP '431 Request Header Fields Too Large' in the case of a HPE_HEADER_OVERFLOW error. If the socket is not writable or the headers of the currently attached http.ServerResponse have been sent, it is immediately destroyed.

socket is the net.Socket object that the error originated from.
```js
import http from 'node:http';

const server = http.createServer((req, res) => {
  res.end();
});
server.on('clientError', (err, socket) => {
  socket.end('HTTP/1.1 400 Bad Request\r\n\r\n');
});
server.listen(8000);
```
When the 'clientError' event occurs, there is no request or response object, so any HTTP response sent, including response headers and payload, must be written directly to the socket object. Care must be taken to ensure the response is a properly formatted HTTP response message.
err is an instance of Error with two extra properties:

- bytesParsed: the byte count of the request packet that Node.js may have parsed correctly;
- rawPacket: the raw packet of the current request.

In some cases, the client has already received the response and/or the socket has already been destroyed, like in the case of ECONNRESET errors. Before trying to send data to the socket, it is better to check that it is still writable.
```js
server.on('clientError', (err, socket) => {
  if (err.code === 'ECONNRESET' || !socket.writable) {
    return;
  }

  socket.end('HTTP/1.1 400 Bad Request\r\n\r\n');
});
```
Event: 'connect'#
- request <http.IncomingMessage> Arguments for the HTTP request, as it is in the 'request' event
- socket <stream.Duplex> Network socket between the server and client
- head <Buffer> The first packet of the tunneling stream (may be empty)

Emitted each time a client requests an HTTP CONNECT method. If this event is not listened for, then clients requesting a CONNECT method will have their connections closed.

This event is guaranteed to be passed an instance of the <net.Socket> class, a subclass of <stream.Duplex>, unless the user specifies a socket type other than <net.Socket>.

After this event is emitted, the request's socket will not have a 'data' event listener, meaning it will need to be bound in order to handle data sent to the server on that socket.
Event:'connection'#
- socket <stream.Duplex>

This event is emitted when a new TCP stream is established. socket is typically an object of type net.Socket. Usually users will not want to access this event. In particular, the socket will not emit 'readable' events because of how the protocol parser attaches to the socket. The socket can also be accessed at request.socket.

This event can also be explicitly emitted by users to inject connections into the HTTP server. In that case, any Duplex stream can be passed.

If socket.setTimeout() is called here, the timeout will be replaced with server.keepAliveTimeout when the socket has served a request (if server.keepAliveTimeout is non-zero).
This event is guaranteed to be passed an instance of the<net.Socket> class,a subclass of<stream.Duplex>, unless the user specifies a sockettype other than<net.Socket>.
Event:'dropRequest'#
- request <http.IncomingMessage> Arguments for the HTTP request, as it is in the 'request' event
- socket <stream.Duplex> Network socket between the server and client

When the number of requests on a socket reaches the threshold of server.maxRequestsPerSocket, the server will drop new requests and emit the 'dropRequest' event instead, then send 503 to the client.
Event:'request'#
request<http.IncomingMessage>response<http.ServerResponse>
Emitted each time there is a request. There may be multiple requestsper connection (in the case of HTTP Keep-Alive connections).
Event:'upgrade'#
History
| Version | Changes |
|---|---|
| v24.9.0 | Whether this event is fired can now be controlled by the shouldUpgradeCallback option. |
| v10.0.0 | Not listening to this event no longer causes the socket to be destroyed if a client sends an Upgrade header. |
| v0.1.94 | Added in: v0.1.94 |
- request <http.IncomingMessage> Arguments for the HTTP request, as it is in the 'request' event
- socket <stream.Duplex> Network socket between the server and client
- head <Buffer> The first packet of the upgraded stream (may be empty)

Emitted each time a client's HTTP upgrade request is accepted. By default all HTTP upgrade requests are ignored (i.e. only regular 'request' events are emitted, sticking with the normal HTTP request/response flow) unless you listen to this event, in which case they are all accepted (i.e. the 'upgrade' event is emitted instead, and future communication must be handled directly through the raw socket). You can control this more precisely by using the server's shouldUpgradeCallback option.

Listening to this event is optional and clients cannot insist on a protocol change.

After this event is emitted, the request's socket will not have a 'data' event listener, meaning it will need to be bound in order to handle data sent to the server on that socket.

If an upgrade is accepted by shouldUpgradeCallback but no event handler is registered, the socket is destroyed, resulting in an immediate connection closure for the client.

This event is guaranteed to be passed an instance of the <net.Socket> class, a subclass of <stream.Duplex>, unless the user specifies a socket type other than <net.Socket>.
server.close([callback])#
History
| Version | Changes |
|---|---|
| v19.0.0 | The method closes idle connections before returning. |
| v0.1.90 | Added in: v0.1.90 |
- callback <Function>
Stops the server from accepting new connections and closes all connections connected to this server which are not sending a request or waiting for a response. See net.Server.close().
```js
const http = require('node:http');

const server = http.createServer({ keepAliveTimeout: 60000 }, (req, res) => {
  res.writeHead(200, { 'Content-Type': 'application/json' });
  res.end(JSON.stringify({
    data: 'Hello World!',
  }));
});

server.listen(8000);
// Close the server after 10 seconds
setTimeout(() => {
  server.close(() => {
    console.log('server on port 8000 closed successfully');
  });
}, 10000);
```
server.closeAllConnections()#
Closes all established HTTP(S) connections connected to this server, including active connections connected to this server which are sending a request or waiting for a response. This does not destroy sockets upgraded to a different protocol, such as WebSocket or HTTP/2.
This is a forceful way of closing all connections and should be used with caution. Whenever using this in conjunction with server.close, calling this after server.close is recommended so as to avoid race conditions where new connections are created between a call to this and a call to server.close.
```js
const http = require('node:http');

const server = http.createServer({ keepAliveTimeout: 60000 }, (req, res) => {
  res.writeHead(200, { 'Content-Type': 'application/json' });
  res.end(JSON.stringify({
    data: 'Hello World!',
  }));
});

server.listen(8000);
// Close the server after 10 seconds
setTimeout(() => {
  server.close(() => {
    console.log('server on port 8000 closed successfully');
  });
  // Closes all connections, ensuring the server closes successfully
  server.closeAllConnections();
}, 10000);
```
server.closeIdleConnections()#
Closes all connections connected to this server which are not sending a request or waiting for a response.
Starting with Node.js 19.0.0, there's no need for calling this method in conjunction with server.close to reap keep-alive connections. Using it won't cause any harm though, and it can be useful to ensure backwards compatibility for libraries and applications that need to support versions older than 19.0.0. Whenever using this in conjunction with server.close, calling this after server.close is recommended so as to avoid race conditions where new connections are created between a call to this and a call to server.close.
```js
const http = require('node:http');

const server = http.createServer({ keepAliveTimeout: 60000 }, (req, res) => {
  res.writeHead(200, { 'Content-Type': 'application/json' });
  res.end(JSON.stringify({
    data: 'Hello World!',
  }));
});

server.listen(8000);
// Close the server after 10 seconds
setTimeout(() => {
  server.close(() => {
    console.log('server on port 8000 closed successfully');
  });
  // Closes idle connections, such as keep-alive connections. Server will close
  // once remaining active connections are terminated
  server.closeIdleConnections();
}, 10000);
```
server.headersTimeout#
History
| Version | Changes |
|---|---|
| v19.4.0, v18.14.0 | The default is now set to the minimum between 60000 (60 seconds) or requestTimeout. |
| v11.3.0, v10.14.0 | Added in: v11.3.0, v10.14.0 |
- Type: <number> Default: The minimum between server.requestTimeout or 60000.
Limit the amount of time the parser will wait to receive the complete HTTP headers.

If the timeout expires, the server responds with status 408 without forwarding the request to the request listener and then closes the connection.

It must be set to a non-zero value (e.g. 120 seconds) to protect against potential Denial-of-Service attacks in case the server is deployed without a reverse proxy in front.
server.listen()#
Starts the HTTP server listening for connections. This method is identical to server.listen() from net.Server.
server.listening#
- Type: <boolean> Indicates whether or not the server is listening for connections.
server.maxHeadersCount#
- Type: <number> Default: 2000
Limits maximum incoming headers count. If set to 0, no limit will be applied.
server.requestTimeout#
History
| Version | Changes |
|---|---|
| v18.0.0 | The default request timeout changed from no timeout to 300s (5 minutes). |
| v14.11.0 | Added in: v14.11.0 |
- Type: <number> Default: 300000
Sets the timeout value in milliseconds for receiving the entire request from the client.

If the timeout expires, the server responds with status 408 without forwarding the request to the request listener and then closes the connection.

It must be set to a non-zero value (e.g. 120 seconds) to protect against potential Denial-of-Service attacks in case the server is deployed without a reverse proxy in front.
server.setTimeout([msecs][, callback])#
History
| Version | Changes |
|---|---|
| v13.0.0 | The default timeout changed from 120s to 0 (no timeout). |
| v0.9.12 | Added in: v0.9.12 |
- msecs <number> Default: 0 (no timeout)
- callback <Function>
- Returns: <http.Server>
Sets the timeout value for sockets, and emits a 'timeout' event on the Server object, passing the socket as an argument, if a timeout occurs.

If there is a 'timeout' event listener on the Server object, then it will be called with the timed-out socket as an argument.

By default, the Server does not time out sockets. However, if a callback is assigned to the Server's 'timeout' event, timeouts must be handled explicitly.
server.maxRequestsPerSocket#
- Type: <number> Requests per socket. Default: 0 (no limit)

The maximum number of requests a socket can handle before closing the keep-alive connection.

A value of 0 will disable the limit.

When the limit is reached, the server will set the Connection header value to close, but will not actually close the connection. Subsequent requests sent after the limit is reached will get 503 Service Unavailable as a response.
server.timeout#
History
| Version | Changes |
|---|---|
| v13.0.0 | The default timeout changed from 120s to 0 (no timeout). |
| v0.9.12 | Added in: v0.9.12 |
- Type: <number> Timeout in milliseconds. Default: 0 (no timeout)

The number of milliseconds of inactivity before a socket is presumed to have timed out.

A value of 0 will disable the timeout behavior on incoming connections.

The socket timeout logic is set up on connection, so changing this value only affects new connections to the server, not any existing connections.
server.keepAliveTimeout#
- Type: <number> Timeout in milliseconds. Default: 5000 (5 seconds).
The number of milliseconds of inactivity a server needs to wait for additional incoming data, after it has finished writing the last response, before a socket will be destroyed.

This timeout value is combined with the server.keepAliveTimeoutBuffer option to determine the actual socket timeout, calculated as: socketTimeout = keepAliveTimeout + keepAliveTimeoutBuffer. If the server receives new data before the keep-alive timeout has fired, it will reset the regular inactivity timeout, i.e., server.timeout.

A value of 0 will disable the keep-alive timeout behavior on incoming connections. A value of 0 makes the HTTP server behave similarly to Node.js versions prior to 8.0.0, which did not have a keep-alive timeout.
The socket timeout logic is set up on connection, so changing this value onlyaffects new connections to the server, not any existing connections.
server.keepAliveTimeoutBuffer#
- Type: <number> Timeout in milliseconds. Default: 1000 (1 second).

An additional buffer time added to server.keepAliveTimeout to extend the internal socket timeout.

This buffer helps reduce connection reset (ECONNRESET) errors by increasing the socket timeout slightly beyond the advertised keep-alive timeout.
This option applies only to new incoming connections.
server[Symbol.asyncDispose]()#
History
| Version | Changes |
|---|---|
| v24.2.0 | No longer experimental. |
| v20.4.0 | Added in: v20.4.0 |
Calls server.close() and returns a promise that fulfills when the server has closed.
Class: http.ServerResponse#
- Extends: <http.OutgoingMessage>
This object is created internally by an HTTP server, not by the user. It is passed as the second parameter to the 'request' event.
Event:'close'#
Indicates that the response is completed, or its underlying connection was terminated prematurely (before the response completion).
Event:'finish'#
Emitted when the response has been sent. More specifically, this event is emitted when the last segment of the response headers and body have been handed off to the operating system for transmission over the network. It does not imply that the client has received anything yet.
response.addTrailers(headers)#
- headers <Object>
This method adds HTTP trailing headers (a header but at the end of the message) to the response.

Trailers will only be emitted if chunked encoding is used for the response; if it is not (e.g. if the request was HTTP/1.0), they will be silently discarded.

HTTP requires the Trailer header to be sent in order to emit trailers, with a list of the header fields in its value. E.g.,
```js
response.writeHead(200, { 'Content-Type': 'text/plain',
                          'Trailer': 'Content-MD5' });
response.write(fileData);
response.addTrailers({ 'Content-MD5': '7895bf4b8828b55ceaf47747b4bca667' });
response.end();
```
Attempting to set a header field name or value that contains invalid characters will result in a TypeError being thrown.
response.connection#
- Deprecated. Use response.socket instead.
- Type: <stream.Duplex>

See response.socket.
response.end([data[, encoding]][, callback])#
History
| Version | Changes |
|---|---|
| v15.0.0 | The data parameter can now be a Uint8Array. |
| v10.0.0 | This method now returns a reference to ServerResponse. |
| v0.1.90 | Added in: v0.1.90 |
- data <string> | <Buffer> | <Uint8Array>
- encoding <string>
- callback <Function>
- Returns: <this>
This method signals to the server that all of the response headers and body have been sent; that server should consider this message complete. The method, response.end(), MUST be called on each response.

If data is specified, it is similar in effect to calling response.write(data, encoding) followed by response.end(callback).

If callback is specified, it will be called when the response stream is finished.
response.finished#
- Deprecated. Use response.writableEnded instead.
- Type: <boolean>

The response.finished property will be true if response.end() has been called.
response.flushHeaders()#
Flushes the response headers. See also: request.flushHeaders().
response.getHeader(name)#
- name <string>
- Returns: <number> | <string> | <string[]> | <undefined>

Reads out a header that's already been queued but not sent to the client. The name is case-insensitive. The type of the return value depends on the arguments provided to response.setHeader().
```js
response.setHeader('Content-Type', 'text/html');
response.setHeader('Content-Length', Buffer.byteLength(body));
response.setHeader('Set-Cookie', ['type=ninja', 'language=javascript']);
const contentType = response.getHeader('content-type');
// contentType is 'text/html'
const contentLength = response.getHeader('Content-Length');
// contentLength is of type number
const setCookie = response.getHeader('set-cookie');
// setCookie is of type string[]
```
response.getHeaderNames()#
- Returns: <string[]>
Returns an array containing the unique names of the current outgoing headers. All header names are lowercase.
```js
response.setHeader('Foo', 'bar');
response.setHeader('Set-Cookie', ['foo=bar', 'bar=baz']);
const headerNames = response.getHeaderNames();
// headerNames === ['foo', 'set-cookie']
```
response.getHeaders()#
- Returns: <Object>
Returns a shallow copy of the current outgoing headers. Since a shallow copy is used, array values may be mutated without additional calls to various header-related http module methods. The keys of the returned object are the header names and the values are the respective header values. All header names are lowercase.

The object returned by the response.getHeaders() method does not prototypically inherit from the JavaScript Object. This means that typical Object methods such as obj.toString(), obj.hasOwnProperty(), and others are not defined and will not work.
```js
response.setHeader('Foo', 'bar');
response.setHeader('Set-Cookie', ['foo=bar', 'bar=baz']);
const headers = response.getHeaders();
// headers === { foo: 'bar', 'set-cookie': ['foo=bar', 'bar=baz'] }
```
response.hasHeader(name)#
Returns true if the header identified by name is currently set in the outgoing headers. The header name matching is case-insensitive.
```js
const hasContentType = response.hasHeader('content-type');
```
response.headersSent#
- Type: <boolean>
Boolean (read-only). True if headers were sent, false otherwise.
response.removeHeader(name)#
- name <string>
Removes a header that's queued for implicit sending.
```js
response.removeHeader('Content-Encoding');
```
response.sendDate#
- Type: <boolean>
When true, the Date header will be automatically generated and sent in the response if it is not already present in the headers. Defaults to true.

This should only be disabled for testing; HTTP requires the Date header in responses.
response.setHeader(name, value)#
- name <string>
- value <number> | <string> | <string[]>
- Returns: <http.ServerResponse>
Returns the response object.
Sets a single header value for implicit headers. If this header already exists in the to-be-sent headers, its value will be replaced. Use an array of strings here to send multiple headers with the same name. Non-string values will be stored without modification. Therefore, response.getHeader() may return non-string values. However, the non-string values will be converted to strings for network transmission. The same response object is returned to the caller, to enable call chaining.
```js
response.setHeader('Content-Type', 'text/html');
```
or
```js
response.setHeader('Set-Cookie', ['type=ninja', 'language=javascript']);
```
Attempting to set a header field name or value that contains invalid characters will result in a TypeError being thrown.
When headers have been set with response.setHeader(), they will be merged with any headers passed to response.writeHead(), with the headers passed to response.writeHead() given precedence.
```js
// Returns content-type = text/plain
const server = http.createServer((req, res) => {
  res.setHeader('Content-Type', 'text/html');
  res.setHeader('X-Foo', 'bar');
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('ok');
});
```
If the response.writeHead() method is called and this method has not been called, it will directly write the supplied header values onto the network channel without caching internally, and response.getHeader() on the header will not yield the expected result. If progressive population of headers is desired with potential future retrieval and modification, use response.setHeader() instead of response.writeHead().
response.setTimeout(msecs[, callback])#
- msecs <number>
- callback <Function>
- Returns: <http.ServerResponse>
Sets the Socket's timeout value to msecs. If a callback is provided, then it is added as a listener on the 'timeout' event on the response object.

If no 'timeout' listener is added to the request, the response, or the server, then sockets are destroyed when they time out. If a handler is assigned to the request, the response, or the server's 'timeout' events, timed out sockets must be handled explicitly.
response.socket#
- Type: <stream.Duplex>
Reference to the underlying socket. Usually users will not want to access this property. In particular, the socket will not emit 'readable' events because of how the protocol parser attaches to the socket. After response.end(), the property is nulled.
```js
import http from 'node:http';

const server = http
  .createServer((req, res) => {
    const ip = res.socket.remoteAddress;
    const port = res.socket.remotePort;
    res.end(`Your IP address is ${ip} and your source port is ${port}.`);
  })
  .listen(3000);
```
This property is guaranteed to be an instance of the <net.Socket> class, a subclass of <stream.Duplex>, unless the user specified a socket type other than <net.Socket>.
response.statusCode#
- Type: <number> Default: 200
When using implicit headers (not calling response.writeHead() explicitly), this property controls the status code that will be sent to the client when the headers get flushed.

```js
response.statusCode = 404;
```
After the response header was sent to the client, this property indicates the status code which was sent out.
response.statusMessage#
- Type: <string>
When using implicit headers (not calling response.writeHead() explicitly), this property controls the status message that will be sent to the client when the headers get flushed. If this is left as undefined then the standard message for the status code will be used.

```js
response.statusMessage = 'Not found';
```
After the response header was sent to the client, this property indicates the status message which was sent out.
response.strictContentLength#
- Type: <boolean> Default: false

If set to true, Node.js will check whether the Content-Length header value and the size of the body, in bytes, are equal. Mismatching the Content-Length header value will result in an Error being thrown, identified by code: 'ERR_HTTP_CONTENT_LENGTH_MISMATCH'.
response.writableEnded#
- Type: <boolean>

Is true after response.end() has been called. This property does not indicate whether the data has been flushed; for this, use response.writableFinished instead.
response.writableFinished#
- Type: <boolean>

Is true if all data has been flushed to the underlying system, immediately before the 'finish' event is emitted.
response.write(chunk[, encoding][, callback])#
History
| Version | Changes |
|---|---|
| v15.0.0 | The |
| v0.1.29 | Added in: v0.1.29 |
- chunk <string> | <Buffer> | <Uint8Array>
- encoding <string> Default: 'utf8'
- callback <Function>
- Returns: <boolean>
If this method is called and response.writeHead() has not been called, it will switch to implicit header mode and flush the implicit headers.

This sends a chunk of the response body. This method may be called multiple times to provide successive parts of the body.

If rejectNonStandardBodyWrites is set to true in createServer, then writing to the body is not allowed when the request method or response status do not support content. If an attempt is made to write to the body for a HEAD request or as part of a 204 or 304 response, a synchronous Error with the code ERR_HTTP_BODY_NOT_ALLOWED is thrown.

chunk can be a string or a buffer. If chunk is a string, the second parameter specifies how to encode it into a byte stream. callback will be called when this chunk of data is flushed.

This is the raw HTTP body and has nothing to do with higher-level multi-part body encodings that may be used.

The first time response.write() is called, it will send the buffered header information and the first chunk of the body to the client. The second time response.write() is called, Node.js assumes data will be streamed, and sends the new data separately. That is, the response is buffered up to the first chunk of the body.

Returns true if the entire data was flushed successfully to the kernel buffer. Returns false if all or part of the data was queued in user memory. 'drain' will be emitted when the buffer is free again.
response.writeContinue()#
Sends an HTTP/1.1 100 Continue message to the client, indicating that the request body should be sent. See the 'checkContinue' event on Server.
response.writeEarlyHints(hints[, callback])#
History
| Version | Changes |
|---|---|
| v18.11.0 | Allow passing hints as an object. |
| v18.11.0 | Added in: v18.11.0 |
- hints <Object>
- callback <Function>

Sends an HTTP/1.1 103 Early Hints message to the client with a Link header, indicating that the user agent can preload/preconnect the linked resources. The hints argument is an object containing the values of headers to be sent with the early hints message. The optional callback argument will be called when the response message has been written.
Example
```js
const earlyHintsLink = '</styles.css>; rel=preload; as=style';
response.writeEarlyHints({
  'link': earlyHintsLink,
});

const earlyHintsLinks = [
  '</styles.css>; rel=preload; as=style',
  '</scripts.js>; rel=preload; as=script',
];
response.writeEarlyHints({
  'link': earlyHintsLinks,
  'x-trace-id': 'id for diagnostics',
});

const earlyHintsCallback = () => console.log('early hints message sent');
response.writeEarlyHints({
  'link': earlyHintsLinks,
}, earlyHintsCallback);
```

response.writeHead(statusCode[, statusMessage][, headers])#
History
| Version | Changes |
|---|---|
| v14.14.0 | Allow passing headers as an array. |
| v11.10.0, v10.17.0 | Return |
| v5.11.0, v4.4.5 | A |
| v0.1.30 | Added in: v0.1.30 |
- statusCode <number>
- statusMessage <string>
- headers <Object> | <Array>
- Returns: <http.ServerResponse>

Sends a response header to the request. The status code is a 3-digit HTTP status code, like 404. The last argument, headers, are the response headers. Optionally one can give a human-readable statusMessage as the second argument.

headers may be an Array where the keys and values are in the same list. It is not a list of tuples. So, the even-numbered offsets are key values, and the odd-numbered offsets are the associated values. The array is in the same format as request.rawHeaders.

Returns a reference to the ServerResponse, so that calls can be chained.
```js
const body = 'hello world';
response
  .writeHead(200, {
    'Content-Length': Buffer.byteLength(body),
    'Content-Type': 'text/plain',
  })
  .end(body);
```

This method must only be called once on a message and it must be called before response.end() is called.

If response.write() or response.end() are called before calling this, the implicit/mutable headers will be calculated and this function will be called automatically.

When headers have been set with response.setHeader(), they will be merged with any headers passed to response.writeHead(), with the headers passed to response.writeHead() given precedence.

If this method is called and response.setHeader() has not been called, it will directly write the supplied header values onto the network channel without caching internally, and response.getHeader() on the header will not yield the expected result. If progressive population of headers is desired with potential future retrieval and modification, use response.setHeader() instead.

```js
// Returns content-type = text/plain
const server = http.createServer((req, res) => {
  res.setHeader('Content-Type', 'text/html');
  res.setHeader('X-Foo', 'bar');
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('ok');
});
```

Content-Length is read in bytes, not characters. Use Buffer.byteLength() to determine the length of the body in bytes. Node.js will check whether Content-Length and the length of the body which has been transmitted are equal or not.
Attempting to set a header field name or value that contains invalid characterswill result in aTypeError being thrown.
response.writeProcessing()#
Sends an HTTP/1.1 102 Processing message to the client, indicating that the request body should be sent.
Class:http.IncomingMessage#
History
| Version | Changes |
|---|---|
| v15.5.0 | The |
| v13.1.0, v12.16.0 | The |
| v0.1.17 | Added in: v0.1.17 |
- Extends: <stream.Readable>

An IncomingMessage object is created by http.Server or http.ClientRequest and passed as the first argument to the 'request' and 'response' events, respectively. It may be used to access response status, headers, and data.

Different from its socket value, which is a subclass of <stream.Duplex>, the IncomingMessage itself extends <stream.Readable> and is created separately to parse and emit the incoming HTTP headers and payload, as the underlying socket may be reused multiple times in the case of keep-alive.
Event:'aborted'#
Deprecated: listen for the 'close' event instead.

Emitted when the request has been aborted.
Event:'close'#
History
| Version | Changes |
|---|---|
| v16.0.0 | The close event is now emitted when the request has been completed and not when the underlying socket is closed. |
| v0.4.2 | Added in: v0.4.2 |
Emitted when the request has been completed.
message.aborted#
- Type: <boolean>

The message.aborted property will be true if the request has been aborted.
message.complete#
- Type: <boolean>

The message.complete property will be true if a complete HTTP message has been received and successfully parsed.

This property is particularly useful as a means of determining whether a client or server fully transmitted a message before a connection was terminated:
```js
const req = http.request({
  host: '127.0.0.1',
  port: 8080,
  method: 'POST',
}, (res) => {
  res.resume();
  res.on('end', () => {
    if (!res.complete)
      console.error(
        'The connection was terminated while the message was still being sent');
  });
});
```

message.connection#
Deprecated: use message.socket.

Alias for message.socket.
message.destroy([error])#
History
| Version | Changes |
|---|---|
| v14.5.0, v12.19.0 | The function returns |
| v0.3.0 | Added in: v0.3.0 |
Calls destroy() on the socket that received the IncomingMessage. If error is provided, an 'error' event is emitted on the socket and error is passed as an argument to any listeners on the event.
message.headers#
History
| Version | Changes |
|---|---|
| v19.5.0, v18.14.0 | The |
| v15.1.0 | |
| v0.1.5 | Added in: v0.1.5 |
- Type:<Object>
The request/response headers object.
Key-value pairs of header names and values. Header names are lower-cased.
```js
// Prints something like:
//
// { 'user-agent': 'curl/7.22.0',
//   host: '127.0.0.1:8000',
//   accept: '*/*' }
console.log(request.headers);
```

Duplicates in raw headers are handled in the following ways, depending on the header name:

- Duplicates of age, authorization, content-length, content-type, etag, expires, from, host, if-modified-since, if-unmodified-since, last-modified, location, max-forwards, proxy-authorization, referer, retry-after, server, or user-agent are discarded. To allow duplicate values of the headers listed above to be joined, use the option joinDuplicateHeaders in http.request() and http.createServer(). See RFC 9110 Section 5.3 for more information.
- set-cookie is always an array. Duplicates are added to the array.
- For duplicate cookie headers, the values are joined together with '; '.
- For all other headers, the values are joined together with ', '.
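The rules above can be sketched in plain JavaScript. This is an illustrative approximation applied to a rawHeaders-style flat list, not Node.js's internal parser logic:

```js
// Sketch of the duplicate-handling rules described above.
const DISCARD = new Set([
  'age', 'authorization', 'content-length', 'content-type', 'etag',
  'expires', 'from', 'host', 'if-modified-since', 'if-unmodified-since',
  'last-modified', 'location', 'max-forwards', 'proxy-authorization',
  'referer', 'retry-after', 'server', 'user-agent',
]);

function mergeHeaders(rawHeaders) {
  const headers = {};
  for (let i = 0; i < rawHeaders.length; i += 2) {
    const name = rawHeaders[i].toLowerCase();
    const value = rawHeaders[i + 1];
    if (!(name in headers)) {
      headers[name] = name === 'set-cookie' ? [value] : value;
    } else if (name === 'set-cookie') {
      headers[name].push(value);          // always an array
    } else if (name === 'cookie') {
      headers[name] += `; ${value}`;      // joined with '; '
    } else if (!DISCARD.has(name)) {
      headers[name] += `, ${value}`;      // joined with ', '
    }                                     // else: duplicate discarded
  }
  return headers;
}

const merged = mergeHeaders([
  'Host', 'a.example', 'Host', 'b.example',
  'Set-Cookie', 'a=1', 'Set-Cookie', 'b=2',
  'Accept', 'text/html', 'Accept', 'application/json',
]);
// merged.host === 'a.example' (duplicate discarded)
// merged['set-cookie'] is ['a=1', 'b=2']
// merged.accept === 'text/html, application/json'
```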
message.headersDistinct#
- Type:<Object>
Similar to message.headers, but there is no join logic and the values are always arrays of strings, even for headers received just once.

```js
// Prints something like:
//
// { 'user-agent': ['curl/7.22.0'],
//   host: ['127.0.0.1:8000'],
//   accept: ['*/*'] }
console.log(request.headersDistinct);
```

message.httpVersion#
- Type:<string>
In the case of a server request, the HTTP version sent by the client. In the case of a client response, the HTTP version of the connected-to server. Probably either '1.1' or '1.0'.

Also, message.httpVersionMajor is the first integer and message.httpVersionMinor is the second.
message.method#
- Type:<string>
Only valid for requests obtained from http.Server.

The request method as a string. Read only. Examples: 'GET', 'DELETE'.
message.rawHeaders#
- Type:<string[]>
The raw request/response headers list exactly as they were received.
The keys and values are in the same list. It is not a list of tuples. So, the even-numbered offsets are key values, and the odd-numbered offsets are the associated values.

Header names are not lowercased, and duplicates are not merged.

```js
// Prints something like:
//
// [ 'user-agent',
//   'this is invalid because there can be only one',
//   'User-Agent',
//   'curl/7.22.0',
//   'Host',
//   '127.0.0.1:8000',
//   'ACCEPT',
//   '*/*' ]
console.log(request.rawHeaders);
```

message.rawTrailers#
- Type:<string[]>
The raw request/response trailer keys and values exactly as they were received. Only populated at the 'end' event.
message.setTimeout(msecs[, callback])#
- msecs <number>
- callback <Function>
- Returns: <http.IncomingMessage>

Calls message.socket.setTimeout(msecs, callback).
message.socket#
- Type:<stream.Duplex>
The net.Socket object associated with the connection.

With HTTPS support, use request.socket.getPeerCertificate() to obtain the client's authentication details.

This property is guaranteed to be an instance of the <net.Socket> class, a subclass of <stream.Duplex>, unless the user specified a socket type other than <net.Socket> or it was internally nulled.
message.statusCode#
- Type:<number>
Only valid for responses obtained from http.ClientRequest.

The 3-digit HTTP response status code, e.g. 404.
message.statusMessage#
- Type:<string>
Only valid for responses obtained from http.ClientRequest.

The HTTP response status message (reason phrase), e.g. OK or Internal Server Error.
message.trailers#
- Type:<Object>
The request/response trailers object. Only populated at the'end' event.
message.trailersDistinct#
- Type:<Object>
Similar to message.trailers, but there is no join logic and the values are always arrays of strings, even for headers received just once. Only populated at the 'end' event.
message.url#
- Type:<string>
Only valid for requests obtained from http.Server.

Request URL string. This contains only the URL that is present in the actual HTTP request. Take the following request:

```
GET /status?name=ryan HTTP/1.1
Accept: text/plain
```

To parse the URL into its parts:

```js
new URL(`http://${process.env.HOST ?? 'localhost'}${request.url}`);
```

When request.url is '/status?name=ryan' and process.env.HOST is undefined:

```console
$ node
> new URL(`http://${process.env.HOST ?? 'localhost'}${request.url}`);
URL {
  href: 'http://localhost/status?name=ryan',
  origin: 'http://localhost',
  protocol: 'http:',
  username: '',
  password: '',
  host: 'localhost',
  hostname: 'localhost',
  port: '',
  pathname: '/status',
  search: '?name=ryan',
  searchParams: URLSearchParams { 'name' => 'ryan' },
  hash: ''
}
```

Ensure that you set process.env.HOST to the server's host name, or consider replacing this part entirely. If using req.headers.host, ensure proper validation is used, as clients may specify a custom Host header.
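A variant of the parsing shown above uses the base-URL form of the new URL() constructor; the request.url value here is hypothetical:

```js
// Hypothetical request.url value, parsed against a fixed base.
const requestUrl = '/status?name=ryan';
const url = new URL(requestUrl, 'http://localhost');

const pathname = url.pathname;             // '/status'
const name = url.searchParams.get('name'); // 'ryan'
```

The base argument only supplies the origin; the path and query come entirely from the request URL string.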
Class:http.OutgoingMessage#
- Extends:<Stream>
This class serves as the parent class of http.ClientRequest and http.ServerResponse. It is an abstract outgoing message from the perspective of the participants of an HTTP transaction.
Event:'prefinish'#
Emitted after outgoingMessage.end() is called. When the event is emitted, all data has been processed but not necessarily completely flushed.
outgoingMessage.addTrailers(headers)#
- headers <Object>

Adds HTTP trailers (headers but at the end of the message) to the message.

Trailers will only be emitted if chunked encoding is used for the message; if not, the trailers are silently discarded.

HTTP requires the Trailer header to be sent to emit trailers, with a list of header field names in its value, e.g.

```js
message.writeHead(200, { 'Content-Type': 'text/plain',
                         'Trailer': 'Content-MD5' });
message.write(fileData);
message.addTrailers({ 'Content-MD5': '7895bf4b8828b55ceaf47747b4bca667' });
message.end();
```

Attempting to set a header field name or value that contains invalid characters will result in a TypeError being thrown.
outgoingMessage.appendHeader(name, value)#
- name <string> Header name
- value <string> | <string[]> Header value
- Returns: <this>
Append a single header value to the header object.
If the value is an array, this is equivalent to calling this method multipletimes.
If there were no previous values for the header, this is equivalent to callingoutgoingMessage.setHeader(name, value).
Depending on the value of options.uniqueHeaders when the client request or the server were created, this will end up in the header being sent multiple times or a single time with values joined using ;.
outgoingMessage.connection#
Deprecated: use outgoingMessage.socket instead.

Alias of outgoingMessage.socket.
outgoingMessage.destroy([error])#
Destroys the message. Once a socket is associated with the messageand is connected, that socket will be destroyed as well.
outgoingMessage.end(chunk[, encoding][, callback])#
History
| Version | Changes |
|---|---|
| v15.0.0 | The |
| v0.11.6 | add |
| v0.1.90 | Added in: v0.1.90 |
- chunk <string> | <Buffer> | <Uint8Array>
- encoding <string> Optional. Default: 'utf8'
- callback <Function> Optional
- Returns: <this>

Finishes the outgoing message. If any parts of the body are unsent, it will flush them to the underlying system. If the message is chunked, it will send the terminating chunk 0\r\n\r\n, and send the trailers (if any).

If chunk is specified, it is equivalent to calling outgoingMessage.write(chunk, encoding), followed by outgoingMessage.end(callback).

If callback is provided, it will be called when the message is finished (equivalent to a listener of the 'finish' event).
outgoingMessage.flushHeaders()#
Flushes the message headers.
For efficiency reasons, Node.js normally buffers the message headers until outgoingMessage.end() is called or the first chunk of message data is written. It then tries to pack the headers and data into a single TCP packet.

This is usually desirable (it saves a TCP round-trip), but not when the first data is not sent until possibly much later. outgoingMessage.flushHeaders() bypasses the optimization and kickstarts the message.
outgoingMessage.getHeader(name)#
- name <string> Name of header
- Returns: <number> | <string> | <string[]> | <undefined>

Gets the value of the HTTP header with the given name. If that header is not set, the returned value will be undefined.
outgoingMessage.getHeaderNames()#
- Returns:<string[]>
Returns an array containing the unique names of the current outgoing headers. All names are lowercase.
outgoingMessage.getHeaders()#
- Returns:<Object>
Returns a shallow copy of the current outgoing headers. Since a shallow copy is used, array values may be mutated without additional calls to various header-related HTTP module methods. The keys of the returned object are the header names and the values are the respective header values. All header names are lowercase.
The object returned by the outgoingMessage.getHeaders() method does not prototypically inherit from the JavaScript Object. This means that typical Object methods such as obj.toString(), obj.hasOwnProperty(), and others are not defined and will not work.
```js
outgoingMessage.setHeader('Foo', 'bar');
outgoingMessage.setHeader('Set-Cookie', ['foo=bar', 'bar=baz']);

const headers = outgoingMessage.getHeaders();
// headers === { foo: 'bar', 'set-cookie': ['foo=bar', 'bar=baz'] }
```

outgoingMessage.hasHeader(name)#
Returns true if the header identified by name is currently set in the outgoing headers. The header name is case-insensitive.

```js
const hasContentType = outgoingMessage.hasHeader('content-type');
```

outgoingMessage.headersSent#
- Type:<boolean>
Read-only. true if the headers were sent, otherwise false.
outgoingMessage.pipe()#
Overrides the stream.pipe() method inherited from the legacy Stream class, which is the parent class of http.OutgoingMessage.

Calling this method will throw an Error because outgoingMessage is a write-only stream.
outgoingMessage.removeHeader(name)#
- name <string> Header name

Removes a header that is queued for implicit sending.

```js
outgoingMessage.removeHeader('Content-Encoding');
```

outgoingMessage.setHeader(name, value)#
- name <string> Header name
- value <number> | <string> | <string[]> Header value
- Returns: <this>

Sets a single header value. If the header already exists in the to-be-sent headers, its value will be replaced. Use an array of strings to send multiple headers with the same name.
outgoingMessage.setHeaders(headers)#
Sets multiple header values for implicit headers. headers must be an instance of Headers or Map. If a header already exists in the to-be-sent headers, its value will be replaced.

```js
const headers = new Headers({ foo: 'bar' });
outgoingMessage.setHeaders(headers);
```

or

```js
const headers = new Map([['foo', 'bar']]);
outgoingMessage.setHeaders(headers);
```

When headers have been set with outgoingMessage.setHeaders(), they will be merged with any headers passed to response.writeHead(), with the headers passed to response.writeHead() given precedence.

```js
// Returns content-type = text/plain
const server = http.createServer((req, res) => {
  const headers = new Headers({ 'Content-Type': 'text/html' });
  res.setHeaders(headers);
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('ok');
});
```

outgoingMessage.setTimeout(msecs[, callback])#
- msecs <number>
- callback <Function> Optional function to be called when a timeout occurs. Same as binding to the timeout event.
- Returns: <this>
Once a socket is associated with the message and is connected, socket.setTimeout() will be called with msecs as the first parameter.
outgoingMessage.socket#
- Type:<stream.Duplex>
Reference to the underlying socket. Usually, users will not want to access this property.

After calling outgoingMessage.end(), this property will be nulled.
outgoingMessage.writableCorked#
- Type:<number>
The number of times outgoingMessage.cork() has been called.
outgoingMessage.writableEnded#
- Type:<boolean>
Is true if outgoingMessage.end() has been called. This property does not indicate whether the data has been flushed. For that purpose, use message.writableFinished instead.
outgoingMessage.writableFinished#
- Type:<boolean>
Is true if all data has been flushed to the underlying system.
outgoingMessage.writableHighWaterMark#
- Type:<number>
The highWaterMark of the underlying socket if assigned. Otherwise, the default buffer level when writable.write() starts returning false (16384).
outgoingMessage.write(chunk[, encoding][, callback])#
History
| Version | Changes |
|---|---|
| v15.0.0 | The |
| v0.11.6 | The |
| v0.1.29 | Added in: v0.1.29 |
- chunk <string> | <Buffer> | <Uint8Array>
- encoding <string> Default: 'utf8'
- callback <Function>
- Returns: <boolean>
Sends a chunk of the body. This method can be called multiple times.
The encoding argument is only relevant when chunk is a string. Defaults to 'utf8'.

The callback argument is optional and will be called when this chunk of data is flushed.

Returns true if the entire data was flushed successfully to the kernel buffer. Returns false if all or part of the data was queued in user memory. The 'drain' event will be emitted when the buffer is free again.
http.METHODS#
- Type:<string[]>
A list of the HTTP methods that are supported by the parser.
http.STATUS_CODES#
- Type:<Object>
A collection of all the standard HTTP response status codes, and the short description of each. For example, http.STATUS_CODES[404] === 'Not Found'.
http.createServer([options][, requestListener])#
History
| Version | Changes |
|---|---|
| v25.1.0 | Add optimizeEmptyRequests option. |
| v24.9.0 | The |
| v20.1.0, v18.17.0 | The |
| v18.0.0 | The |
| v18.0.0 | The |
| v17.7.0, v16.15.0 | The |
| v13.3.0 | The |
| v13.8.0, v12.15.0, v10.19.0 | The |
| v9.6.0, v8.12.0 | The |
| v0.1.13 | Added in: v0.1.13 |
- options <Object>
  - connectionsCheckingInterval: Sets the interval value in milliseconds to check for request and headers timeout in incomplete requests. Default: 30000.
  - headersTimeout: Sets the timeout value in milliseconds for receiving the complete HTTP headers from the client. See server.headersTimeout for more information. Default: 60000.
  - highWaterMark <number> Optionally overrides all sockets' readableHighWaterMark and writableHighWaterMark. This affects the highWaterMark property of both IncomingMessage and ServerResponse. Default: See stream.getDefaultHighWaterMark().
  - insecureHTTPParser <boolean> If set to true, it will use an HTTP parser with leniency flags enabled. Using the insecure parser should be avoided. See --insecure-http-parser for more information. Default: false.
  - IncomingMessage <http.IncomingMessage> Specifies the IncomingMessage class to be used. Useful for extending the original IncomingMessage. Default: IncomingMessage.
  - joinDuplicateHeaders <boolean> If set to true, this option allows joining the field line values of multiple headers in a request with a comma (,) instead of discarding the duplicates. For more information, refer to message.headers. Default: false.
  - keepAlive <boolean> If set to true, it enables keep-alive functionality on the socket immediately after a new incoming connection is received, similarly to what is done in socket.setKeepAlive([enable][, initialDelay]). Default: false.
  - keepAliveInitialDelay <number> If set to a positive number, it sets the initial delay before the first keepalive probe is sent on an idle socket. Default: 0.
  - keepAliveTimeout: The number of milliseconds of inactivity a server needs to wait for additional incoming data, after it has finished writing the last response, before a socket will be destroyed. See server.keepAliveTimeout for more information. Default: 5000.
  - maxHeaderSize <number> Optionally overrides the value of --max-http-header-size for requests received by this server, i.e. the maximum length of request headers in bytes. Default: 16384 (16 KiB).
  - noDelay <boolean> If set to true, it disables the use of Nagle's algorithm immediately after a new incoming connection is received. Default: true.
  - requestTimeout: Sets the timeout value in milliseconds for receiving the entire request from the client. See server.requestTimeout for more information. Default: 300000.
  - requireHostHeader <boolean> If set to true, it forces the server to respond with a 400 (Bad Request) status code to any HTTP/1.1 request message that lacks a Host header (as mandated by the specification). Default: true.
  - ServerResponse <http.ServerResponse> Specifies the ServerResponse class to be used. Useful for extending the original ServerResponse. Default: ServerResponse.
  - shouldUpgradeCallback(request) <Function> A callback which receives an incoming request and returns a boolean, to control which upgrade attempts should be accepted. Accepted upgrades will fire an 'upgrade' event (or their sockets will be destroyed, if no listener is registered) while rejected upgrades will fire a 'request' event like any non-upgrade request. This option defaults to () => server.listenerCount('upgrade') > 0.
  - uniqueHeaders <Array> A list of response headers that should be sent only once. If the header's value is an array, the items will be joined using ;.
  - rejectNonStandardBodyWrites <boolean> If set to true, an error is thrown when writing to an HTTP response which does not have a body. Default: false.
  - optimizeEmptyRequests <boolean> If set to true, requests without Content-Length or Transfer-Encoding headers (indicating no body) will be initialized with an already-ended body stream, so they will never emit any stream events (like 'data' or 'end'). You can use req.readableEnded to detect this case. Default: false.
- requestListener <Function>
- Returns: <http.Server>
Returns a new instance of http.Server.

The requestListener is a function which is automatically added to the 'request' event.
```js
import http from 'node:http';

// Create a local server to receive data from
const server = http.createServer((req, res) => {
  res.writeHead(200, { 'Content-Type': 'application/json' });
  res.end(JSON.stringify({
    data: 'Hello World!',
  }));
});

server.listen(8000);
```

```js
const http = require('node:http');

// Create a local server to receive data from
const server = http.createServer((req, res) => {
  res.writeHead(200, { 'Content-Type': 'application/json' });
  res.end(JSON.stringify({
    data: 'Hello World!',
  }));
});

server.listen(8000);
```
```js
import http from 'node:http';

// Create a local server to receive data from
const server = http.createServer();

// Listen to the request event
server.on('request', (request, res) => {
  res.writeHead(200, { 'Content-Type': 'application/json' });
  res.end(JSON.stringify({
    data: 'Hello World!',
  }));
});

server.listen(8000);
```

```js
const http = require('node:http');

// Create a local server to receive data from
const server = http.createServer();

// Listen to the request event
server.on('request', (request, res) => {
  res.writeHead(200, { 'Content-Type': 'application/json' });
  res.end(JSON.stringify({
    data: 'Hello World!',
  }));
});

server.listen(8000);
```
http.get(options[, callback])#
http.get(url[, options][, callback])#
History
| Version | Changes |
|---|---|
| v10.9.0 | The |
| v7.5.0 | The |
| v0.3.6 | Added in: v0.3.6 |
- url <string> | <URL>
- options <Object> Accepts the same options as http.request(), with the method set to GET by default.
- callback <Function>
- Returns: <http.ClientRequest>

Since most requests are GET requests without bodies, Node.js provides this convenience method. The only difference between this method and http.request() is that it sets the method to GET by default and calls req.end() automatically. The callback must take care to consume the response data for the reasons stated in the http.ClientRequest section.
The callback is invoked with a single argument that is an instance of http.IncomingMessage.
JSON fetching example:
```js
http.get('http://localhost:8000/', (res) => {
  const { statusCode } = res;
  const contentType = res.headers['content-type'];

  let error;
  // Any 2xx status code signals a successful response but
  // here we're only checking for 200.
  if (statusCode !== 200) {
    error = new Error('Request Failed.\n' +
                      `Status Code: ${statusCode}`);
  } else if (!/^application\/json/.test(contentType)) {
    error = new Error('Invalid content-type.\n' +
                      `Expected application/json but received ${contentType}`);
  }
  if (error) {
    console.error(error.message);
    // Consume response data to free up memory
    res.resume();
    return;
  }

  res.setEncoding('utf8');
  let rawData = '';
  res.on('data', (chunk) => { rawData += chunk; });
  res.on('end', () => {
    try {
      const parsedData = JSON.parse(rawData);
      console.log(parsedData);
    } catch (e) {
      console.error(e.message);
    }
  });
}).on('error', (e) => {
  console.error(`Got error: ${e.message}`);
});

// Create a local server to receive data from
const server = http.createServer((req, res) => {
  res.writeHead(200, { 'Content-Type': 'application/json' });
  res.end(JSON.stringify({
    data: 'Hello World!',
  }));
});

server.listen(8000);
```

http.globalAgent#
History
| Version | Changes |
|---|---|
| v19.0.0 | The agent now uses HTTP Keep-Alive and a 5 second timeout by default. |
| v0.5.9 | Added in: v0.5.9 |
- Type:<http.Agent>
Global instance of Agent which is used as the default for all HTTP client requests. Diverges from a default Agent configuration by having keepAlive enabled and a timeout of 5 seconds.
http.maxHeaderSize#
- Type:<number>
Read-only property specifying the maximum allowed size of HTTP headers in bytes. Defaults to 16 KiB. Configurable using the --max-http-header-size CLI option.

This can be overridden for servers and client requests by passing the maxHeaderSize option.
http.request(options[, callback])#
http.request(url[, options][, callback])#
History
| Version | Changes |
|---|---|
| v16.7.0, v14.18.0 | When using a |
| v15.3.0, v14.17.0 | It is possible to abort a request with an AbortSignal. |
| v13.3.0 | The |
| v13.8.0, v12.15.0, v10.19.0 | The |
| v10.9.0 | The |
| v7.5.0 | The |
| v0.3.6 | Added in: v0.3.6 |
url<string> |<URL>options<Object>agent<http.Agent> |<boolean> ControlsAgentbehavior. Possiblevalues:undefined(default): usehttp.globalAgentfor this host and port.Agentobject: explicitly use the passed inAgent.false: causes a newAgentwith default values to be used.
auth<string> Basic authentication ('user:password') to compute anAuthorization header.createConnection<Function> A function that produces a socket/stream touse for the request when theagentoption is not used. This can be used toavoid creating a customAgentclass just to override the defaultcreateConnectionfunction. Seeagent.createConnection()for moredetails. AnyDuplexstream is a valid return value.defaultPort<number> Default port for the protocol.Default:agent.defaultPortif anAgentis used, elseundefined.family<number> IP address family to use when resolvinghostorhostname. Valid values are4or6. When unspecified, both IP v4 andv6 will be used.headers<Object> |<Array> An object or an array of strings containing requestheaders. The array is in the same format asmessage.rawHeaders.hints<number> Optionaldns.lookup()hints.host<string> A domain name or IP address of the server to issue therequest to.Default:'localhost'.hostname<string> Alias forhost. To supporturl.parse(),hostnamewill be used if bothhostandhostnameare specified.insecureHTTPParser<boolean> If set totrue, it will use a HTTP parserwith leniency flags enabled. Using the insecure parser should be avoided.See--insecure-http-parserfor more information.Default:falsejoinDuplicateHeaders<boolean> It joins the field line values ofmultiple headers in a request with,instead of discardingthe duplicates. Seemessage.headersfor more information.Default:false.localAddress<string> Local interface to bind for network connections.localPort<number> Local port to connect from.lookup<Function> Custom lookup function.Default:dns.lookup().maxHeaderSize<number> Optionally overrides the value of--max-http-header-size(the maximum length of response headers inbytes) for responses received from the server.Default: 16384 (16 KiB).method<string> A string specifying the HTTP request method.Default:'GET'.path<string> Request path. Should include query string if any.E.G.'/index.html?page=12'. 
An exception is thrown when the request pathcontains illegal characters. Currently, only spaces are rejected but thatmay change in the future.Default:'/'.port<number> Port of remote server.Default:defaultPortif set,else80.protocol<string> Protocol to use.Default:'http:'.setDefaultHeaders<boolean>: Specifies whether or not to automatically adddefault headers such asConnection,Content-Length,Transfer-Encoding,andHost. If set tofalsethen all necessary headers must be addedmanually. Defaults totrue.setHost<boolean>: Specifies whether or not to automatically add theHostheader. If provided, this overridessetDefaultHeaders. Defaults totrue.signal<AbortSignal>: An AbortSignal that may be used to abort an ongoingrequest.socketPath<string> Unix domain socket. Cannot be used if one ofhostorportis specified, as those specify a TCP Socket.timeout<number>: A number specifying the socket timeout in milliseconds.This will set the timeout before the socket is connected.uniqueHeaders<Array> A list of request headers that should be sentonly once. If the header's value is an array, the items will be joinedusing;.
- `callback` <Function>
- Returns: <http.ClientRequest>

`options` in `socket.connect()` are also supported.

Node.js maintains several connections per server to make HTTP requests. This function allows one to transparently issue requests.

`url` can be a string or a `URL` object. If `url` is a string, it is automatically parsed with `new URL()`. If it is a `URL` object, it will be automatically converted to an ordinary `options` object.

If both `url` and `options` are specified, the objects are merged, with the `options` properties taking precedence.

The optional `callback` parameter will be added as a one-time listener for the `'response'` event.

`http.request()` returns an instance of the `http.ClientRequest` class. The `ClientRequest` instance is a writable stream. If one needs to upload a file with a POST request, then write to the `ClientRequest` object.
```js
import http from 'node:http';
import { Buffer } from 'node:buffer';

const postData = JSON.stringify({
  'msg': 'Hello World!',
});

const options = {
  hostname: 'www.google.com',
  port: 80,
  path: '/upload',
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'Content-Length': Buffer.byteLength(postData),
  },
};

const req = http.request(options, (res) => {
  console.log(`STATUS: ${res.statusCode}`);
  console.log(`HEADERS: ${JSON.stringify(res.headers)}`);
  res.setEncoding('utf8');
  res.on('data', (chunk) => {
    console.log(`BODY: ${chunk}`);
  });
  res.on('end', () => {
    console.log('No more data in response.');
  });
});

req.on('error', (e) => {
  console.error(`problem with request: ${e.message}`);
});

// Write data to request body
req.write(postData);
req.end();
```
In the example `req.end()` was called. With `http.request()` one must always call `req.end()` to signify the end of the request, even if there is no data being written to the request body.

If any error is encountered during the request (be that with DNS resolution, TCP level errors, or actual HTTP parse errors) an `'error'` event is emitted on the returned request object. As with all `'error'` events, if no listeners are registered the error will be thrown.
There are a few special headers that should be noted.
Sending a `'Connection: keep-alive'` will notify Node.js that the connection to the server should be persisted until the next request.
Sending a 'Content-Length' header will disable the default chunked encoding.
Sending an `'Expect'` header will immediately send the request headers. Usually, when sending `'Expect: 100-continue'`, both a timeout and a listener for the `'continue'` event should be set. See RFC 2616 Section 8.2.3 for more information.

Sending an `Authorization` header will override using the `auth` option to compute basic authentication.
Example using a `URL` as `options`:

```js
const options = new URL('http://abc:xyz@example.com');

const req = http.request(options, (res) => {
  // ...
});
```

In a successful request, the following events will be emitted in the following order:

- `'socket'`
- `'response'`
- `'data'` any number of times, on the `res` object (`'data'` will not be emitted at all if the response body is empty, for instance, in most redirects)
- `'end'` on the `res` object
- `'close'`
In the case of a connection error, the following events will be emitted:
- `'socket'`
- `'error'`
- `'close'`
In the case of a premature connection close before the response is received, the following events will be emitted in the following order:

- `'socket'`
- `'error'` with an error with message `'Error: socket hang up'` and code `'ECONNRESET'`
- `'close'`
In the case of a premature connection close after the response is received, the following events will be emitted in the following order:

- `'socket'`
- `'response'`
- `'data'` any number of times, on the `res` object
- (connection closed here)
- `'aborted'` on the `res` object
- `'close'`
- `'error'` on the `res` object with an error with message `'Error: aborted'` and code `'ECONNRESET'`
- `'close'` on the `res` object
If `req.destroy()` is called before a socket is assigned, the following events will be emitted in the following order:

- (`req.destroy()` called here)
- `'error'` with an error with message `'Error: socket hang up'` and code `'ECONNRESET'`, or the error with which `req.destroy()` was called
- `'close'`
If `req.destroy()` is called before the connection succeeds, the following events will be emitted in the following order:

- `'socket'`
- (`req.destroy()` called here)
- `'error'` with an error with message `'Error: socket hang up'` and code `'ECONNRESET'`, or the error with which `req.destroy()` was called
- `'close'`
If `req.destroy()` is called after the response is received, the following events will be emitted in the following order:

- `'socket'`
- `'response'`
- `'data'` any number of times, on the `res` object
- (`req.destroy()` called here)
- `'aborted'` on the `res` object
- `'close'`
- `'error'` on the `res` object with an error with message `'Error: aborted'` and code `'ECONNRESET'`, or the error with which `req.destroy()` was called
- `'close'` on the `res` object
If `req.abort()` is called before a socket is assigned, the following events will be emitted in the following order:

- (`req.abort()` called here)
- `'abort'`
- `'close'`
If `req.abort()` is called before the connection succeeds, the following events will be emitted in the following order:

- `'socket'`
- (`req.abort()` called here)
- `'abort'`
- `'error'` with an error with message `'Error: socket hang up'` and code `'ECONNRESET'`
- `'close'`
If `req.abort()` is called after the response is received, the following events will be emitted in the following order:

- `'socket'`
- `'response'`
- `'data'` any number of times, on the `res` object
- (`req.abort()` called here)
- `'abort'`
- `'aborted'` on the `res` object
- `'error'` on the `res` object with an error with message `'Error: aborted'` and code `'ECONNRESET'`
- `'close'`
- `'close'` on the `res` object
Setting the `timeout` option or using the `setTimeout()` function will not abort the request or do anything besides add a `'timeout'` event.
Passing an `AbortSignal` and then calling `abort()` on the corresponding `AbortController` will behave the same way as calling `.destroy()` on the request. Specifically, the `'error'` event will be emitted with an error with the message `'AbortError: The operation was aborted'`, the code `'ABORT_ERR'` and the `cause`, if one was provided.
http.validateHeaderName(name[, label])#
History
| Version | Changes |
|---|---|
| v19.5.0, v18.14.0 | The |
| v14.3.0 | Added in: v14.3.0 |
Performs the low-level validations on the provided `name` that are done when `res.setHeader(name, value)` is called.

Passing illegal value as `name` will result in a `TypeError` being thrown, identified by `code: 'ERR_INVALID_HTTP_TOKEN'`.

It is not necessary to use this method before passing headers to an HTTP request or response. The HTTP module will automatically validate such headers.
Example:
```js
import { validateHeaderName } from 'node:http';

try {
  validateHeaderName('');
} catch (err) {
  console.error(err instanceof TypeError); // --> true
  console.error(err.code); // --> 'ERR_INVALID_HTTP_TOKEN'
  console.error(err.message); // --> 'Header name must be a valid HTTP token [""]'
}
```
http.validateHeaderValue(name, value)#
Performs the low-level validations on the provided `value` that are done when `res.setHeader(name, value)` is called.

Passing illegal value as `value` will result in a `TypeError` being thrown.

- Undefined value error is identified by `code: 'ERR_HTTP_INVALID_HEADER_VALUE'`.
- Invalid value character error is identified by `code: 'ERR_INVALID_CHAR'`.

It is not necessary to use this method before passing headers to an HTTP request or response. The HTTP module will automatically validate such headers.
Examples:
```js
import { validateHeaderValue } from 'node:http';

try {
  validateHeaderValue('x-my-header', undefined);
} catch (err) {
  console.error(err instanceof TypeError); // --> true
  console.error(err.code === 'ERR_HTTP_INVALID_HEADER_VALUE'); // --> true
  console.error(err.message); // --> 'Invalid value "undefined" for header "x-my-header"'
}

try {
  validateHeaderValue('x-my-header', 'oʊmɪɡə');
} catch (err) {
  console.error(err instanceof TypeError); // --> true
  console.error(err.code === 'ERR_INVALID_CHAR'); // --> true
  console.error(err.message); // --> 'Invalid character in header content ["x-my-header"]'
}
```
http.setMaxIdleHTTPParsers(max)#
- `max` <number> **Default:** `1000`.
Set the maximum number of idle HTTP parsers.
http.setGlobalProxyFromEnv([proxyEnv])#
- `proxyEnv` <Object> An object containing proxy configuration. This accepts the same options as the `proxyEnv` option accepted by `Agent`. **Default:** `process.env`.
- Returns: <Function> A function that restores the agent and dispatcher settings to their state before this `http.setGlobalProxyFromEnv()` call.

Dynamically resets the global configurations to enable built-in proxy support for `fetch()` and `http.request()`/`https.request()` at runtime, as an alternative to using the `--use-env-proxy` flag or the `NODE_USE_ENV_PROXY` environment variable. It can also be used to override settings configured from the environment variables.

As this function resets the global configurations, any previously configured `http.globalAgent`, `https.globalAgent` or undici global dispatcher would be overridden after this function is invoked. It's recommended to invoke it before any requests are made and to avoid invoking it in the middle of any requests.

See Built-in Proxy Support for details on proxy URL formats and `NO_PROXY` syntax.
Class: WebSocket#

A browser-compatible implementation of <WebSocket>.
Built-in Proxy Support#
When Node.js creates the global agent, if the `NODE_USE_ENV_PROXY` environment variable is set to `1` or `--use-env-proxy` is enabled, the global agent will be constructed with `proxyEnv: process.env`, enabling proxy support based on the environment variables.

To enable proxy support dynamically and globally, use `http.setGlobalProxyFromEnv()`.

Custom agents can also be created with proxy support by passing a `proxyEnv` option when constructing the agent. The value can be `process.env` to simply inherit the configuration from the environment variables, or an object with specific settings overriding the environment.

The following properties of `proxyEnv` are checked to configure proxy support.
- `HTTP_PROXY` or `http_proxy`: Proxy server URL for HTTP requests. If both are set, `http_proxy` takes precedence.
- `HTTPS_PROXY` or `https_proxy`: Proxy server URL for HTTPS requests. If both are set, `https_proxy` takes precedence.
- `NO_PROXY` or `no_proxy`: Comma-separated list of hosts to bypass the proxy. If both are set, `no_proxy` takes precedence.
If the request is made to a Unix domain socket, the proxy settings will be ignored.
Proxy URL Format#
Proxy URLs can use either HTTP or HTTPS protocols:
- HTTP proxy: `http://proxy.example.com:8080`
- HTTPS proxy: `https://proxy.example.com:8080`
- Proxy with authentication: `http://username:password@proxy.example.com:8080`
NO_PROXY Format#
The `NO_PROXY` environment variable supports several formats:

- `*` - Bypass proxy for all hosts
- `example.com` - Exact host name match
- `.example.com` - Domain suffix match (matches `sub.example.com`)
- `*.example.com` - Wildcard domain match
- `192.168.1.100` - Exact IP address match
- `192.168.1.1-192.168.1.100` - IP address range
- `example.com:8080` - Hostname with specific port
Multiple entries should be separated by commas.
Example#
To start a Node.js process with proxy support enabled for all requests sent through the default global agent, either use the `NODE_USE_ENV_PROXY` environment variable:

```console
NODE_USE_ENV_PROXY=1 HTTP_PROXY=http://proxy.example.com:8080 NO_PROXY=localhost,127.0.0.1 node client.js
```

Or the `--use-env-proxy` flag:

```console
HTTP_PROXY=http://proxy.example.com:8080 NO_PROXY=localhost,127.0.0.1 node --use-env-proxy client.js
```

To enable proxy support dynamically and globally with `process.env` (the default option of `http.setGlobalProxyFromEnv()`):
```js
const http = require('node:http');

// Reads proxy-related environment variables from process.env
const restore = http.setGlobalProxyFromEnv();

// Subsequent requests will use the configured proxies from environment variables
http.get('http://www.example.com', (res) => {
  // This request will be proxied if HTTP_PROXY or http_proxy is set
});

fetch('https://www.example.com').then((res) => {
  // This request will be proxied if HTTPS_PROXY or https_proxy is set
});

// To restore the original global agent and dispatcher settings, call the returned function.
// restore();
```
To enable proxy support dynamically and globally with custom settings:
```js
const http = require('node:http');

const restore = http.setGlobalProxyFromEnv({
  http_proxy: 'http://proxy.example.com:8080',
  https_proxy: 'https://proxy.example.com:8443',
  no_proxy: 'localhost,127.0.0.1,.internal.example.com',
});

// Subsequent requests will use the configured proxies
http.get('http://www.example.com', (res) => {
  // This request will be proxied through proxy.example.com:8080
});

fetch('https://www.example.com').then((res) => {
  // This request will be proxied through proxy.example.com:8443
});
```
To create a custom agent with built-in proxy support:
```js
const http = require('node:http');

// Creating a custom agent with custom proxy support.
const agent = new http.Agent({ proxyEnv: { HTTP_PROXY: 'http://proxy.example.com:8080' } });

http.request({
  hostname: 'www.example.com',
  port: 80,
  path: '/',
  agent,
}, (res) => {
  // This request will be proxied through proxy.example.com:8080 using the HTTP protocol.
  console.log(`STATUS: ${res.statusCode}`);
});
```

Alternatively, the following also works:

```js
const http = require('node:http');

// Use lower-cased option name.
const agent1 = new http.Agent({ proxyEnv: { http_proxy: 'http://proxy.example.com:8080' } });

// Use values inherited from the environment variables. If the process is started with
// HTTP_PROXY=http://proxy.example.com:8080 this will use the proxy server specified
// in process.env.HTTP_PROXY.
const agent2 = new http.Agent({ proxyEnv: process.env });
```

HTTP/2#
History
| Version | Changes |
|---|---|
| v15.0.0 | Requests with the |
| v15.3.0, v14.17.0 | It is possible to abort a request with an AbortSignal. |
| v10.10.0 | HTTP/2 is now Stable. Previously, it had been Experimental. |
| v8.4.0 | Added in: v8.4.0 |
Source Code: lib/http2.js

The `node:http2` module provides an implementation of the HTTP/2 protocol. It can be accessed using:

```js
const http2 = require('node:http2');
```

Determining if crypto support is unavailable#

It is possible for Node.js to be built without including support for the `node:crypto` module. In such cases, attempting to `import` from `node:http2` or calling `require('node:http2')` will result in an error being thrown.
When using CommonJS, the error thrown can be caught using try/catch:
```js
let http2;
try {
  http2 = require('node:http2');
} catch (err) {
  console.error('http2 support is disabled!');
}
```

When using the lexical ESM `import` keyword, the error can only be caught if a handler for `process.on('uncaughtException')` is registered before any attempt to load the module is made (using, for instance, a preload module).

When using ESM, if there is a chance that the code may be run on a build of Node.js where crypto support is not enabled, consider using the `import()` function instead of the lexical `import` keyword:

```js
let http2;
try {
  http2 = await import('node:http2');
} catch (err) {
  console.error('http2 support is disabled!');
}
```

Core API#
The Core API provides a low-level interface designed specifically around support for HTTP/2 protocol features. It is specifically *not* designed for compatibility with the existing HTTP/1 module API. However, the Compatibility API is.

The `http2` Core API is much more symmetric between client and server than the `http` API. For instance, most events, like `'error'`, `'connect'` and `'stream'`, can be emitted either by client-side code or server-side code.
Server-side example#
The following illustrates a simple HTTP/2 server using the Core API. Since there are no browsers known that support unencrypted HTTP/2, the use of `http2.createSecureServer()` is necessary when communicating with browser clients.

```js
import { createSecureServer } from 'node:http2';
import { readFileSync } from 'node:fs';

const server = createSecureServer({
  key: readFileSync('localhost-privkey.pem'),
  cert: readFileSync('localhost-cert.pem'),
});

server.on('error', (err) => console.error(err));

server.on('stream', (stream, headers) => {
  // stream is a Duplex
  stream.respond({
    'content-type': 'text/html; charset=utf-8',
    ':status': 200,
  });
  stream.end('<h1>Hello World</h1>');
});

server.listen(8443);
```
To generate the certificate and key for this example, run:
```console
openssl req -x509 -newkey rsa:2048 -nodes -sha256 -subj '/CN=localhost' \
  -keyout localhost-privkey.pem -out localhost-cert.pem
```

Client-side example#
The following illustrates an HTTP/2 client:
```js
import { connect } from 'node:http2';
import { readFileSync } from 'node:fs';

const client = connect('https://localhost:8443', {
  ca: readFileSync('localhost-cert.pem'),
});
client.on('error', (err) => console.error(err));

const req = client.request({ ':path': '/' });

req.on('response', (headers, flags) => {
  for (const name in headers) {
    console.log(`${name}: ${headers[name]}`);
  }
});

req.setEncoding('utf8');
let data = '';
req.on('data', (chunk) => { data += chunk; });
req.on('end', () => {
  console.log(`\n${data}`);
  client.close();
});
req.end();
```
Class: Http2Session#

- Extends: <EventEmitter>

Instances of the `http2.Http2Session` class represent an active communications session between an HTTP/2 client and server. Instances of this class are *not* intended to be constructed directly by user code.

Each `Http2Session` instance will exhibit slightly different behaviors depending on whether it is operating as a server or a client. The `http2session.type` property can be used to determine the mode in which an `Http2Session` is operating. On the server side, user code should rarely have occasion to work with the `Http2Session` object directly, with most actions typically taken through interactions with either the `Http2Server` or `Http2Stream` objects.

User code will not create `Http2Session` instances directly. Server-side `Http2Session` instances are created by the `Http2Server` instance when a new HTTP/2 connection is received. Client-side `Http2Session` instances are created using the `http2.connect()` method.
Http2Session and sockets#
Every `Http2Session` instance is associated with exactly one `net.Socket` or `tls.TLSSocket` when it is created. When either the `Socket` or the `Http2Session` is destroyed, both will be destroyed.

Because of the specific serialization and processing requirements imposed by the HTTP/2 protocol, it is not recommended for user code to read data from or write data to a `Socket` instance bound to an `Http2Session`. Doing so can put the HTTP/2 session into an indeterminate state, causing the session and the socket to become unusable.

Once a `Socket` has been bound to an `Http2Session`, user code should rely solely on the API of the `Http2Session`.
Event:'close'#
The `'close'` event is emitted once the `Http2Session` has been destroyed. Its listener does not expect any arguments.
Event:'connect'#
- `session` <Http2Session>
- `socket` <net.Socket>

The `'connect'` event is emitted once the `Http2Session` has been successfully connected to the remote peer and communication may begin.
User code will typically not listen for this event directly.
Event:'error'#
- `error` <Error>

The `'error'` event is emitted when an error occurs during the processing of an `Http2Session`.
Event:'frameError'#
- `type` <integer> The frame type.
- `code` <integer> The error code.
- `id` <integer> The stream id (or `0` if the frame isn't associated with a stream).

The `'frameError'` event is emitted when an error occurs while attempting to send a frame on the session. If the frame that could not be sent is associated with a specific `Http2Stream`, an attempt to emit a `'frameError'` event on the `Http2Stream` is made.

If the `'frameError'` event is associated with a stream, the stream will be closed and destroyed immediately following the `'frameError'` event. If the event is not associated with a stream, the `Http2Session` will be shut down immediately following the `'frameError'` event.
Event:'goaway'#
- `errorCode` <number> The HTTP/2 error code specified in the `GOAWAY` frame.
- `lastStreamID` <number> The ID of the last stream the remote peer successfully processed (or `0` if no ID is specified).
- `opaqueData` <Buffer> If additional opaque data was included in the `GOAWAY` frame, a `Buffer` instance will be passed containing that data.

The `'goaway'` event is emitted when a `GOAWAY` frame is received.

The `Http2Session` instance will be shut down automatically when the `'goaway'` event is emitted.
Event:'localSettings'#
- `settings` <HTTP/2 Settings Object> A copy of the `SETTINGS` frame received.

The `'localSettings'` event is emitted when an acknowledgment `SETTINGS` frame has been received.

When using `http2session.settings()` to submit new settings, the modified settings do not take effect until the `'localSettings'` event is emitted.

```js
session.settings({ enablePush: false });

session.on('localSettings', (settings) => {
  /* Use the new settings */
});
```

Event: 'ping'#
- `payload` <Buffer> The `PING` frame 8-byte payload

The `'ping'` event is emitted whenever a `PING` frame is received from the connected peer.
Event:'remoteSettings'#
- `settings` <HTTP/2 Settings Object> A copy of the `SETTINGS` frame received.

The `'remoteSettings'` event is emitted when a new `SETTINGS` frame is received from the connected peer.

```js
session.on('remoteSettings', (settings) => {
  /* Use the new settings */
});
```

Event: 'stream'#

- `stream` <Http2Stream> A reference to the stream
- `headers` <HTTP/2 Headers Object> An object describing the headers
- `flags` <number> The associated numeric flags
- `rawHeaders` <HTTP/2 Raw Headers> An array containing the raw headers

The `'stream'` event is emitted when a new `Http2Stream` is created.

```js
session.on('stream', (stream, headers, flags) => {
  const method = headers[':method'];
  const path = headers[':path'];
  // ...
  stream.respond({
    ':status': 200,
    'content-type': 'text/plain; charset=utf-8',
  });
  stream.write('hello ');
  stream.end('world');
});
```

On the server side, user code will typically not listen for this event directly, and would instead register a handler for the `'stream'` event emitted by the `net.Server` or `tls.Server` instances returned by `http2.createServer()` and `http2.createSecureServer()`, respectively, as in the example below:

```js
import { createServer } from 'node:http2';

// Create an unencrypted HTTP/2 server
const server = createServer();

server.on('stream', (stream, headers) => {
  stream.respond({
    'content-type': 'text/html; charset=utf-8',
    ':status': 200,
  });
  stream.on('error', (error) => console.error(error));
  stream.end('<h1>Hello World</h1>');
});

server.listen(8000);
```

Even though HTTP/2 streams and network sockets are not in a 1:1 correspondence, a network error will destroy each individual stream and must be handled on the stream level, as shown above.
Event:'timeout'#
After the `http2session.setTimeout()` method is used to set the timeout period for this `Http2Session`, the `'timeout'` event is emitted if there is no activity on the `Http2Session` after the configured number of milliseconds. Its listener does not expect any arguments.

```js
session.setTimeout(2000);
session.on('timeout', () => { /* .. */ });
```

http2session.alpnProtocol#

- Type: <string> | <undefined>

Value will be `undefined` if the `Http2Session` is not yet connected to a socket, `'h2c'` if the `Http2Session` is not connected to a `TLSSocket`, or will return the value of the connected `TLSSocket`'s own `alpnProtocol` property.
http2session.close([callback])#
- `callback` <Function>

Gracefully closes the `Http2Session`, allowing any existing streams to complete on their own and preventing new `Http2Stream` instances from being created. Once closed, `http2session.destroy()` *might* be called if there are no open `Http2Stream` instances.

If specified, the `callback` function is registered as a handler for the `'close'` event.
http2session.closed#
- Type: <boolean>

Will be `true` if this `Http2Session` instance has been closed, otherwise `false`.
http2session.connecting#
- Type: <boolean>

Will be `true` if this `Http2Session` instance is still connecting, will be set to `false` before emitting the `'connect'` event and/or calling the `http2.connect` callback.
http2session.destroy([error][, code])#
- `error` <Error> An `Error` object if the `Http2Session` is being destroyed due to an error.
- `code` <number> The HTTP/2 error code to send in the final `GOAWAY` frame. If unspecified, and `error` is not undefined, the default is `INTERNAL_ERROR`, otherwise defaults to `NO_ERROR`.

Immediately terminates the `Http2Session` and the associated `net.Socket` or `tls.TLSSocket`.

Once destroyed, the `Http2Session` will emit the `'close'` event. If `error` is not undefined, an `'error'` event will be emitted immediately before the `'close'` event.

If there are any remaining open `Http2Stream`s associated with the `Http2Session`, those will also be destroyed.
http2session.destroyed#
- Type: <boolean>

Will be `true` if this `Http2Session` instance has been destroyed and must no longer be used, otherwise `false`.
http2session.encrypted#
- Type: <boolean> | <undefined>

Value is `undefined` if the `Http2Session` session socket has not yet been connected, `true` if the `Http2Session` is connected with a `TLSSocket`, and `false` if the `Http2Session` is connected to any other kind of socket or stream.
http2session.goaway([code[, lastStreamID[, opaqueData]]])#
- `code` <number> An HTTP/2 error code
- `lastStreamID` <number> The numeric ID of the last processed `Http2Stream`
- `opaqueData` <Buffer> | <TypedArray> | <DataView> A `TypedArray` or `DataView` instance containing additional data to be carried within the `GOAWAY` frame.

Transmits a `GOAWAY` frame to the connected peer *without* shutting down the `Http2Session`.
http2session.localSettings#
A prototype-less object describing the current local settings of this `Http2Session`. The local settings are local to *this* `Http2Session` instance.
http2session.originSet#
- Type: <string[]> | <undefined>

If the `Http2Session` is connected to a `TLSSocket`, the `originSet` property will return an `Array` of origins for which the `Http2Session` may be considered authoritative.

The `originSet` property is only available when using a secure TLS connection.
http2session.pendingSettingsAck#
- Type: <boolean>

Indicates whether the `Http2Session` is currently waiting for acknowledgment of a sent `SETTINGS` frame. Will be `true` after calling the `http2session.settings()` method. Will be `false` once all sent `SETTINGS` frames have been acknowledged.
http2session.ping([payload, ]callback)#
History
| Version | Changes |
|---|---|
| v18.0.0 | Passing an invalid callback to the |
| v8.9.3 | Added in: v8.9.3 |
- `payload` <Buffer> | <TypedArray> | <DataView> Optional ping payload.
- `callback` <Function>
- Returns: <boolean>

Sends a `PING` frame to the connected HTTP/2 peer. A `callback` function must be provided. The method will return `true` if the `PING` was sent, `false` otherwise.

The maximum number of outstanding (unacknowledged) pings is determined by the `maxOutstandingPings` configuration option. The default maximum is 10.

If provided, the `payload` must be a `Buffer`, `TypedArray`, or `DataView` containing 8 bytes of data that will be transmitted with the `PING` and returned with the ping acknowledgment.

The callback will be invoked with three arguments: an error argument that will be `null` if the `PING` was successfully acknowledged, a `duration` argument that reports the number of milliseconds elapsed since the ping was sent and the acknowledgment was received, and a `Buffer` containing the 8-byte `PING` payload.

```js
session.ping(Buffer.from('abcdefgh'), (err, duration, payload) => {
  if (!err) {
    console.log(`Ping acknowledged in ${duration} milliseconds`);
    console.log(`With payload '${payload.toString()}'`);
  }
});
```

If the `payload` argument is not specified, the default payload will be the 64-bit timestamp (little endian) marking the start of the `PING` duration.
http2session.remoteSettings#
A prototype-less object describing the current remote settings of this `Http2Session`. The remote settings are set by the connected HTTP/2 peer.
http2session.setLocalWindowSize(windowSize)#
- `windowSize` <number>

Sets the local endpoint's window size. The `windowSize` is the total window size to set, not the delta.

```js
import { createServer } from 'node:http2';

const server = createServer();
const expectedWindowSize = 2 ** 20;
server.on('session', (session) => {
  // Set local window size to be 2 ** 20
  session.setLocalWindowSize(expectedWindowSize);
});
```

For http2 clients the proper event is either `'connect'` or `'remoteSettings'`.
http2session.setTimeout(msecs, callback)#
History
| Version | Changes |
|---|---|
| v18.0.0 | Passing an invalid callback to the |
| v8.4.0 | Added in: v8.4.0 |
- `msecs` <number>
- `callback` <Function>

Used to set a callback function that is called when there is no activity on the `Http2Session` after `msecs` milliseconds. The given `callback` is registered as a listener on the `'timeout'` event.
http2session.socket#
- Type: <net.Socket> | <tls.TLSSocket>

Returns a `Proxy` object that acts as a `net.Socket` (or `tls.TLSSocket`) but limits available methods to ones safe to use with HTTP/2.

`destroy`, `emit`, `end`, `pause`, `read`, `resume`, and `write` will throw an error with code `ERR_HTTP2_NO_SOCKET_MANIPULATION`. See `Http2Session` and sockets for more information.

`setTimeout` method will be called on this `Http2Session`.
All other interactions will be routed directly to the socket.
http2session.state#
Provides miscellaneous information about the current state of the `Http2Session`.

- Type: <Object>
  - `effectiveLocalWindowSize` <number> The current local (receive) flow control window size for the `Http2Session`.
  - `effectiveRecvDataLength` <number> The current number of bytes that have been received since the last flow control `WINDOW_UPDATE`.
  - `nextStreamID` <number> The numeric identifier to be used the next time a new `Http2Stream` is created by this `Http2Session`.
  - `localWindowSize` <number> The number of bytes that the remote peer can send without receiving a `WINDOW_UPDATE`.
  - `lastProcStreamID` <number> The numeric id of the `Http2Stream` for which a `HEADERS` or `DATA` frame was most recently received.
  - `remoteWindowSize` <number> The number of bytes that this `Http2Session` may send without receiving a `WINDOW_UPDATE`.
  - `outboundQueueSize` <number> The number of frames currently within the outbound queue for this `Http2Session`.
  - `deflateDynamicTableSize` <number> The current size in bytes of the outbound header compression state table.
  - `inflateDynamicTableSize` <number> The current size in bytes of the inbound header compression state table.

An object describing the current status of this `Http2Session`.
http2session.settings([settings][, callback])#
History
| Version | Changes |
|---|---|
| v18.0.0 | Passing an invalid callback to the `callback` argument now throws `ERR_INVALID_ARG_TYPE` instead of `ERR_INVALID_CALLBACK`. |
| v8.4.0 | Added in: v8.4.0 |
- `settings` <HTTP/2 Settings Object>
- `callback` <Function> Callback that is called once the session is connected or right away if the session is already connected.
  - `err` <Error> | <null>
  - `settings` <HTTP/2 Settings Object> The updated `settings` object.
  - `duration` <integer>
Updates the current local settings for this `Http2Session` and sends a new `SETTINGS` frame to the connected HTTP/2 peer.

Once called, the `http2session.pendingSettingsAck` property will be `true` while the session is waiting for the remote peer to acknowledge the new settings.

The new settings will not become effective until the `SETTINGS` acknowledgment is received and the `'localSettings'` event is emitted. It is possible to send multiple `SETTINGS` frames while acknowledgment is still pending.
http2session.type#
- Type: <number>

The `http2session.type` will be equal to `http2.constants.NGHTTP2_SESSION_SERVER` if this `Http2Session` instance is a server, and `http2.constants.NGHTTP2_SESSION_CLIENT` if the instance is a client.
http2session.unref()#
Calls `unref()` on this `Http2Session` instance's underlying `net.Socket`.

Class: ServerHttp2Session#

- Extends: <Http2Session>
serverhttp2session.altsvc(alt, originOrStream)#
- `alt` <string> A description of the alternative service configuration as defined by RFC 7838.
- `originOrStream` <number> | <string> | <URL> | <Object> Either a URL string specifying the origin (or an `Object` with an `origin` property) or the numeric identifier of an active `Http2Stream` as given by the `http2stream.id` property.

Submits an `ALTSVC` frame (as defined by RFC 7838) to the connected client.
```js
import { createServer } from 'node:http2';

const server = createServer();
server.on('session', (session) => {
  // Set altsvc for origin https://example.org:80
  session.altsvc('h2=":8000"', 'https://example.org:80');
});
server.on('stream', (stream) => {
  // Set altsvc for a specific stream
  stream.session.altsvc('h2=":8000"', stream.id);
});
```

```js
const http2 = require('node:http2');

const server = http2.createServer();
server.on('session', (session) => {
  // Set altsvc for origin https://example.org:80
  session.altsvc('h2=":8000"', 'https://example.org:80');
});
server.on('stream', (stream) => {
  // Set altsvc for a specific stream
  stream.session.altsvc('h2=":8000"', stream.id);
});
```
Sending an `ALTSVC` frame with a specific stream ID indicates that the alternate service is associated with the origin of the given `Http2Stream`.

The `alt` and origin string must contain only ASCII bytes and are strictly interpreted as a sequence of ASCII bytes. The special value `'clear'` may be passed to clear any previously set alternative service for a given domain.

When a string is passed for the `originOrStream` argument, it will be parsed as a URL and the origin will be derived. For instance, the origin for the HTTP URL `'https://example.org/foo/bar'` is the ASCII string `'https://example.org'`. An error will be thrown if either the given string cannot be parsed as a URL or if a valid origin cannot be derived.

A `URL` object, or any object with an `origin` property, may be passed as `originOrStream`, in which case the value of the `origin` property will be used. The value of the `origin` property must be a properly serialized ASCII origin.
Specifying alternative services#
The format of the `alt` parameter is strictly defined by RFC 7838 as an ASCII string containing a comma-delimited list of "alternative" protocols associated with a specific host and port.

For example, the value `'h2="example.org:81"'` indicates that the HTTP/2 protocol is available on the host `'example.org'` on TCP/IP port 81. The host and port must be contained within the quote (`"`) characters.

Multiple alternatives may be specified, for instance: `'h2="example.org:81", h2=":82"'`.

The protocol identifier (`'h2'` in the examples) may be any valid ALPN Protocol ID.

The syntax of these values is not validated by the Node.js implementation; they are passed through as provided by the user or received from the peer.
serverhttp2session.origin(...origins)#
Submits an `ORIGIN` frame (as defined by RFC 8336) to the connected client to advertise the set of origins for which the server is capable of providing authoritative responses.
```js
import { createSecureServer } from 'node:http2';

const options = getSecureOptionsSomehow();
const server = createSecureServer(options);
server.on('stream', (stream) => {
  stream.respond();
  stream.end('ok');
});
server.on('session', (session) => {
  session.origin('https://example.com', 'https://example.org');
});
```

```js
const http2 = require('node:http2');

const options = getSecureOptionsSomehow();
const server = http2.createSecureServer(options);
server.on('stream', (stream) => {
  stream.respond();
  stream.end('ok');
});
server.on('session', (session) => {
  session.origin('https://example.com', 'https://example.org');
});
```
When a string is passed as an `origin`, it will be parsed as a URL and the origin will be derived. For instance, the origin for the HTTP URL `'https://example.org/foo/bar'` is the ASCII string `'https://example.org'`. An error will be thrown if either the given string cannot be parsed as a URL or if a valid origin cannot be derived.

A `URL` object, or any object with an `origin` property, may be passed as an `origin`, in which case the value of the `origin` property will be used. The value of the `origin` property must be a properly serialized ASCII origin.

Alternatively, the `origins` option may be used when creating a new HTTP/2 server using the `http2.createSecureServer()` method:
```js
import { createSecureServer } from 'node:http2';

const options = getSecureOptionsSomehow();
options.origins = ['https://example.com', 'https://example.org'];
const server = createSecureServer(options);
server.on('stream', (stream) => {
  stream.respond();
  stream.end('ok');
});
```

```js
const http2 = require('node:http2');

const options = getSecureOptionsSomehow();
options.origins = ['https://example.com', 'https://example.org'];
const server = http2.createSecureServer(options);
server.on('stream', (stream) => {
  stream.respond();
  stream.end('ok');
});
```
Class: ClientHttp2Session#

- Extends: <Http2Session>
Event:'altsvc'#
The `'altsvc'` event is emitted whenever an `ALTSVC` frame is received by the client. The event is emitted with the `ALTSVC` value, origin, and stream ID. If no origin is provided in the `ALTSVC` frame, `origin` will be an empty string.
```js
import { connect } from 'node:http2';

const client = connect('https://example.org');

client.on('altsvc', (alt, origin, streamId) => {
  console.log(alt);
  console.log(origin);
  console.log(streamId);
});
```

```js
const http2 = require('node:http2');

const client = http2.connect('https://example.org');

client.on('altsvc', (alt, origin, streamId) => {
  console.log(alt);
  console.log(origin);
  console.log(streamId);
});
```
Event:'origin'#
- `origins` <string[]>

The `'origin'` event is emitted whenever an `ORIGIN` frame is received by the client. The event is emitted with an array of `origin` strings. The `http2session.originSet` will be updated to include the received origins.
```js
import { connect } from 'node:http2';

const client = connect('https://example.org');

client.on('origin', (origins) => {
  for (let n = 0; n < origins.length; n++)
    console.log(origins[n]);
});
```

```js
const http2 = require('node:http2');

const client = http2.connect('https://example.org');

client.on('origin', (origins) => {
  for (let n = 0; n < origins.length; n++)
    console.log(origins[n]);
});
```
The'origin' event is only emitted when using a secure TLS connection.
clienthttp2session.request(headers[, options])#
History
| Version | Changes |
|---|---|
| v24.2.0 | The |
| v24.2.0, v22.17.0 | Following the deprecation of priority signaling as of RFC 9113, |
| v24.0.0, v22.17.0 | Allow passing headers in raw array format. |
| v8.4.0 | Added in: v8.4.0 |
- `headers` <HTTP/2 Headers Object>
- `options` <Object>
  - `endStream` <boolean> `true` if the `Http2Stream` writable side should be closed initially, such as when sending a `GET` request that should not expect a payload body.
  - `exclusive` <boolean> When `true` and `parent` identifies a parent Stream, the created stream is made the sole direct dependency of the parent, with all other existing dependents made a dependent of the newly created stream. Default: `false`.
  - `parent` <number> Specifies the numeric identifier of a stream the newly created stream is dependent on.
  - `waitForTrailers` <boolean> When `true`, the `Http2Stream` will emit the `'wantTrailers'` event after the final `DATA` frame has been sent.
  - `signal` <AbortSignal> An AbortSignal that may be used to abort an ongoing request.
- Returns: <ClientHttp2Stream>

For HTTP/2 Client `Http2Session` instances only, `http2session.request()` creates and returns an `Http2Stream` instance that can be used to send an HTTP/2 request to the connected server.

When a `ClientHttp2Session` is first created, the socket may not yet be connected. If `clienthttp2session.request()` is called during this time, the actual request will be deferred until the socket is ready. If the session is closed before the actual request is executed, an `ERR_HTTP2_GOAWAY_SESSION` error is thrown.

This method is only available if `http2session.type` is equal to `http2.constants.NGHTTP2_SESSION_CLIENT`.
```js
import { connect, constants } from 'node:http2';

const clientSession = connect('https://localhost:1234');
const {
  HTTP2_HEADER_PATH,
  HTTP2_HEADER_STATUS,
} = constants;

const req = clientSession.request({ [HTTP2_HEADER_PATH]: '/' });
req.on('response', (headers) => {
  console.log(headers[HTTP2_HEADER_STATUS]);
  req.on('data', (chunk) => { /* .. */ });
  req.on('end', () => { /* .. */ });
});
```

```js
const http2 = require('node:http2');

const clientSession = http2.connect('https://localhost:1234');
const {
  HTTP2_HEADER_PATH,
  HTTP2_HEADER_STATUS,
} = http2.constants;

const req = clientSession.request({ [HTTP2_HEADER_PATH]: '/' });
req.on('response', (headers) => {
  console.log(headers[HTTP2_HEADER_STATUS]);
  req.on('data', (chunk) => { /* .. */ });
  req.on('end', () => { /* .. */ });
});
```
When the `options.waitForTrailers` option is set, the `'wantTrailers'` event is emitted immediately after queuing the last chunk of payload data to be sent. The `http2stream.sendTrailers()` method can then be called to send trailing headers to the peer.

When `options.waitForTrailers` is set, the `Http2Stream` will not automatically close when the final `DATA` frame is transmitted. User code must call either `http2stream.sendTrailers()` or `http2stream.close()` to close the `Http2Stream`.

When `options.signal` is set with an `AbortSignal` and then `abort` on the corresponding `AbortController` is called, the request will emit an `'error'` event with an `AbortError` error.

If the `:method` and `:path` pseudo-headers are not specified within `headers`, they respectively default to:

- `:method` = `'GET'`
- `:path` = `/`
Class: Http2Stream#

- Extends: <stream.Duplex>

Each instance of the `Http2Stream` class represents a bidirectional HTTP/2 communications stream over an `Http2Session` instance. Any single `Http2Session` may have up to 2^31 - 1 `Http2Stream` instances over its lifetime.

User code will not construct `Http2Stream` instances directly. Rather, these are created, managed, and provided to user code through the `Http2Session` instance. On the server, `Http2Stream` instances are created either in response to an incoming HTTP request (and handed off to user code via the `'stream'` event), or in response to a call to the `http2stream.pushStream()` method. On the client, `Http2Stream` instances are created and returned when either the `http2session.request()` method is called, or in response to an incoming `'push'` event.

The `Http2Stream` class is a base for the `ServerHttp2Stream` and `ClientHttp2Stream` classes, each of which is used specifically by either the Server or Client side, respectively.

All `Http2Stream` instances are `Duplex` streams. The `Writable` side of the `Duplex` is used to send data to the connected peer, while the `Readable` side is used to receive data sent by the connected peer.

The default text character encoding for an `Http2Stream` is UTF-8. When using an `Http2Stream` to send text, use the `'content-type'` header to set the character encoding.
```js
stream.respond({
  'content-type': 'text/html; charset=utf-8',
  ':status': 200,
});
```

Http2Stream Lifecycle#
Creation#
On the server side, instances of `ServerHttp2Stream` are created either when:

- A new HTTP/2 `HEADERS` frame with a previously unused stream ID is received;
- The `http2stream.pushStream()` method is called.
On the client side, instances of `ClientHttp2Stream` are created when the `http2session.request()` method is called.

On the client, the `Http2Stream` instance returned by `http2session.request()` may not be immediately ready for use if the parent `Http2Session` has not yet been fully established. In such cases, operations called on the `Http2Stream` will be buffered until the `'ready'` event is emitted. User code should rarely, if ever, need to handle the `'ready'` event directly. The ready status of an `Http2Stream` can be determined by checking the value of `http2stream.id`. If the value is `undefined`, the stream is not yet ready for use.
Destruction#
All `Http2Stream` instances are destroyed either when:

- An `RST_STREAM` frame for the stream is received by the connected peer, and (for client streams only) pending data has been read.
- The `http2stream.close()` method is called, and (for client streams only) pending data has been read.
- The `http2stream.destroy()` or `http2session.destroy()` methods are called.
When an `Http2Stream` instance is destroyed, an attempt will be made to send an `RST_STREAM` frame to the connected peer.

When the `Http2Stream` instance is destroyed, the `'close'` event will be emitted. Because `Http2Stream` is an instance of `stream.Duplex`, the `'end'` event will also be emitted if the stream data is currently flowing. The `'error'` event may also be emitted if `http2stream.destroy()` was called with an `Error` passed as the first argument.

After the `Http2Stream` has been destroyed, the `http2stream.destroyed` property will be `true` and the `http2stream.rstCode` property will specify the `RST_STREAM` error code. The `Http2Stream` instance is no longer usable once destroyed.
Event:'aborted'#
The `'aborted'` event is emitted whenever an `Http2Stream` instance is abnormally aborted in mid-communication. Its listener does not expect any arguments.

The `'aborted'` event will only be emitted if the `Http2Stream` writable side has not been ended.
Event:'close'#
The `'close'` event is emitted when the `Http2Stream` is destroyed. Once this event is emitted, the `Http2Stream` instance is no longer usable.

The HTTP/2 error code used when closing the stream can be retrieved using the `http2stream.rstCode` property. If the code is any value other than `NGHTTP2_NO_ERROR` (`0`), an `'error'` event will have also been emitted.
Event:'error'#
- `error` <Error>

The `'error'` event is emitted when an error occurs during the processing of an `Http2Stream`.
Event:'frameError'#
- `type` <integer> The frame type.
- `code` <integer> The error code.
- `id` <integer> The stream id (or `0` if the frame isn't associated with a stream).

The `'frameError'` event is emitted when an error occurs while attempting to send a frame. When invoked, the handler function will receive an integer argument identifying the frame type, and an integer argument identifying the error code. The `Http2Stream` instance will be destroyed immediately after the `'frameError'` event is emitted.
Event:'ready'#
The `'ready'` event is emitted when the `Http2Stream` has been opened, has been assigned an `id`, and can be used. The listener does not expect any arguments.
Event:'timeout'#
The `'timeout'` event is emitted after no activity is received for this `Http2Stream` within the number of milliseconds set using `http2stream.setTimeout()`. Its listener does not expect any arguments.
Event:'trailers'#
- `headers` <HTTP/2 Headers Object> An object describing the headers
- `flags` <number> The associated numeric flags

The `'trailers'` event is emitted when a block of headers associated with trailing header fields is received. The listener callback is passed the HTTP/2 Headers Object and flags associated with the headers.

This event might not be emitted if `http2stream.end()` is called before trailers are received and the incoming data is not being read or listened for.
```js
stream.on('trailers', (headers, flags) => {
  console.log(headers);
});
```

Event: 'wantTrailers'#
The `'wantTrailers'` event is emitted when the `Http2Stream` has queued the final `DATA` frame to be sent and the `Http2Stream` is ready to send trailing headers. When initiating a request or response, the `waitForTrailers` option must be set for this event to be emitted.
http2stream.aborted#
- Type: <boolean>

Set to `true` if the `Http2Stream` instance was aborted abnormally. When set, the `'aborted'` event will have been emitted.
http2stream.bufferSize#
- Type: <number>

This property shows the number of characters currently buffered to be written. See `net.Socket.bufferSize` for details.
http2stream.close(code[, callback])#
History
| Version | Changes |
|---|---|
| v18.0.0 | Passing an invalid callback to the `callback` argument now throws `ERR_INVALID_ARG_TYPE` instead of `ERR_INVALID_CALLBACK`. |
| v8.4.0 | Added in: v8.4.0 |
- `code` <number> Unsigned 32-bit integer identifying the error code. Default: `http2.constants.NGHTTP2_NO_ERROR` (`0x00`).
- `callback` <Function> An optional function registered to listen for the `'close'` event.

Closes the `Http2Stream` instance by sending an `RST_STREAM` frame to the connected HTTP/2 peer.
http2stream.closed#
- Type: <boolean>

Set to `true` if the `Http2Stream` instance has been closed.
http2stream.destroyed#
- Type: <boolean>

Set to `true` if the `Http2Stream` instance has been destroyed and is no longer usable.
http2stream.endAfterHeaders#
- Type: <boolean>

Set to `true` if the `END_STREAM` flag was set in the request or response `HEADERS` frame received, indicating that no additional data should be received and the readable side of the `Http2Stream` will be closed.
http2stream.id#
- Type: <number> | <undefined>

The numeric stream identifier of this `Http2Stream` instance. Set to `undefined` if the stream identifier has not yet been assigned.
http2stream.pending#
- Type: <boolean>

Set to `true` if the `Http2Stream` instance has not yet been assigned a numeric stream identifier.
http2stream.priority(options)#
History
| Version | Changes |
|---|---|
| v24.2.0 | This method no longer sets the priority of the stream. Using it now triggers a runtime warning. |
| v24.2.0, v22.17.0 | Deprecated since: v24.2.0, v22.17.0 |
| v8.4.0 | Added in: v8.4.0 |
An empty method, retained only to maintain some backward compatibility.
http2stream.rstCode#
- Type: <number>

Set to the `RST_STREAM` error code reported when the `Http2Stream` is destroyed after either receiving an `RST_STREAM` frame from the connected peer, calling `http2stream.close()`, or calling `http2stream.destroy()`. Will be `undefined` if the `Http2Stream` has not been closed.
http2stream.sentHeaders#
An object containing the outbound headers sent for this `Http2Stream`.
http2stream.sentInfoHeaders#
An array of objects containing the outbound informational (additional) headers sent for this `Http2Stream`.
http2stream.sentTrailers#
An object containing the outbound trailers sent for this `Http2Stream`.
http2stream.session#
- Type: <Http2Session>

A reference to the `Http2Session` instance that owns this `Http2Stream`. The value will be `undefined` after the `Http2Stream` instance is destroyed.
http2stream.setTimeout(msecs, callback)#
History
| Version | Changes |
|---|---|
| v18.0.0 | Passing an invalid callback to the `callback` argument now throws `ERR_INVALID_ARG_TYPE` instead of `ERR_INVALID_CALLBACK`. |
| v8.4.0 | Added in: v8.4.0 |
- `msecs` <number>
- `callback` <Function>
```js
import { connect, constants } from 'node:http2';

const client = connect('http://example.org:8000');
const { NGHTTP2_CANCEL } = constants;
const req = client.request({ ':path': '/' });

// Cancel the stream if there's no activity after 5 seconds
req.setTimeout(5000, () => req.close(NGHTTP2_CANCEL));
```

```js
const http2 = require('node:http2');

const client = http2.connect('http://example.org:8000');
const { NGHTTP2_CANCEL } = http2.constants;
const req = client.request({ ':path': '/' });

// Cancel the stream if there's no activity after 5 seconds
req.setTimeout(5000, () => req.close(NGHTTP2_CANCEL));
```
http2stream.state#
History
| Version | Changes |
|---|---|
| v24.2.0 | The |
| v24.2.0, v22.17.0 | Following the deprecation of priority signaling as of RFC 9113, |
| v8.4.0 | Added in: v8.4.0 |
Provides miscellaneous information about the current state of the `Http2Stream`.

- Type: <Object>
  - `localWindowSize` <number> The number of bytes the connected peer may send for this `Http2Stream` without receiving a `WINDOW_UPDATE`.
  - `state` <number> A flag indicating the low-level current state of the `Http2Stream` as determined by `nghttp2`.
  - `localClose` <number> `1` if this `Http2Stream` has been closed locally.
  - `remoteClose` <number> `1` if this `Http2Stream` has been closed remotely.
  - `sumDependencyWeight` <number> Legacy property, always set to `0`.
  - `weight` <number> Legacy property, always set to `16`.

A current state of this `Http2Stream`.
http2stream.sendTrailers(headers)#
- `headers` <HTTP/2 Headers Object>

Sends a trailing `HEADERS` frame to the connected HTTP/2 peer. This method will cause the `Http2Stream` to be immediately closed and must only be called after the `'wantTrailers'` event has been emitted. When sending a request or sending a response, the `options.waitForTrailers` option must be set in order to keep the `Http2Stream` open after the final `DATA` frame so that trailers can be sent.
```js
import { createServer } from 'node:http2';

const server = createServer();
server.on('stream', (stream) => {
  stream.respond(undefined, { waitForTrailers: true });
  stream.on('wantTrailers', () => {
    stream.sendTrailers({ xyz: 'abc' });
  });
  stream.end('Hello World');
});
```

```js
const http2 = require('node:http2');

const server = http2.createServer();
server.on('stream', (stream) => {
  stream.respond(undefined, { waitForTrailers: true });
  stream.on('wantTrailers', () => {
    stream.sendTrailers({ xyz: 'abc' });
  });
  stream.end('Hello World');
});
```
The HTTP/1 specification forbids trailers from containing HTTP/2 pseudo-header fields (e.g. `':method'`, `':path'`, etc).
Class: ClientHttp2Stream#

- Extends: <Http2Stream>

The `ClientHttp2Stream` class is an extension of `Http2Stream` that is used exclusively on HTTP/2 Clients. `Http2Stream` instances on the client provide events such as `'response'` and `'push'` that are only relevant on the client.
Event:'continue'#
Emitted when the server sends a `100 Continue` status, usually because the request contained `Expect: 100-continue`. This is an instruction that the client should send the request body.
Event:'headers'#
- `headers` <HTTP/2 Headers Object>
- `flags` <number>
- `rawHeaders` <HTTP/2 Raw Headers>

The `'headers'` event is emitted when an additional block of headers is received for a stream, such as when a block of `1xx` informational headers is received. The listener callback is passed the HTTP/2 Headers Object, flags associated with the headers, and the headers in raw format (see HTTP/2 Raw Headers).
```js
stream.on('headers', (headers, flags) => {
  console.log(headers);
});
```

Event: 'push'#
- `headers` <HTTP/2 Headers Object>
- `flags` <number>

The `'push'` event is emitted when response headers for a Server Push stream are received. The listener callback is passed the HTTP/2 Headers Object and flags associated with the headers.
```js
stream.on('push', (headers, flags) => {
  console.log(headers);
});
```

Event: 'response'#
- `headers` <HTTP/2 Headers Object>
- `flags` <number>
- `rawHeaders` <HTTP/2 Raw Headers>

The `'response'` event is emitted when a response `HEADERS` frame has been received for this stream from the connected HTTP/2 server. The listener is invoked with three arguments: an `Object` containing the received HTTP/2 Headers Object, flags associated with the headers, and the headers in raw format (see HTTP/2 Raw Headers).
```js
import { connect } from 'node:http2';

const client = connect('https://localhost');
const req = client.request({ ':path': '/' });
req.on('response', (headers, flags) => {
  console.log(headers[':status']);
});
```

```js
const http2 = require('node:http2');

const client = http2.connect('https://localhost');
const req = client.request({ ':path': '/' });
req.on('response', (headers, flags) => {
  console.log(headers[':status']);
});
```
Class: ServerHttp2Stream#

- Extends: <Http2Stream>

The `ServerHttp2Stream` class is an extension of `Http2Stream` that is used exclusively on HTTP/2 Servers. `Http2Stream` instances on the server provide additional methods such as `http2stream.pushStream()` and `http2stream.respond()` that are only relevant on the server.
http2stream.additionalHeaders(headers)#
- `headers` <HTTP/2 Headers Object>

Sends an additional informational `HEADERS` frame to the connected HTTP/2 peer.
http2stream.headersSent#
- Type: <boolean>
True if headers were sent, false otherwise (read-only).
http2stream.pushAllowed#
- Type: <boolean>

Read-only property mapped to the `SETTINGS_ENABLE_PUSH` flag of the remote client's most recent `SETTINGS` frame. Will be `true` if the remote peer accepts push streams, `false` otherwise. Settings are the same for every `Http2Stream` in the same `Http2Session`.
http2stream.pushStream(headers[, options], callback)#
History
| Version | Changes |
|---|---|
| v18.0.0 | Passing an invalid callback to the `callback` argument now throws `ERR_INVALID_ARG_TYPE` instead of `ERR_INVALID_CALLBACK`. |
| v8.4.0 | Added in: v8.4.0 |
- `headers` <HTTP/2 Headers Object>
- `options` <Object>
  - `exclusive` <boolean> When `true` and `parent` identifies a parent Stream, the created stream is made the sole direct dependency of the parent, with all other existing dependents made a dependent of the newly created stream. Default: `false`.
  - `parent` <number> Specifies the numeric identifier of a stream the newly created stream is dependent on.
- `callback` <Function> Callback that is called once the push stream has been initiated.
  - `err` <Error>
  - `pushStream` <ServerHttp2Stream> The returned `pushStream` object.
  - `headers` <HTTP/2 Headers Object> Headers object the `pushStream` was initiated with.

Initiates a push stream. The callback is invoked with the new `Http2Stream` instance created for the push stream passed as the second argument, or an `Error` passed as the first argument.
```js
import { createServer } from 'node:http2';

const server = createServer();
server.on('stream', (stream) => {
  stream.respond({ ':status': 200 });
  stream.pushStream({ ':path': '/' }, (err, pushStream, headers) => {
    if (err) throw err;
    pushStream.respond({ ':status': 200 });
    pushStream.end('some pushed data');
  });
  stream.end('some data');
});
```

```js
const http2 = require('node:http2');

const server = http2.createServer();
server.on('stream', (stream) => {
  stream.respond({ ':status': 200 });
  stream.pushStream({ ':path': '/' }, (err, pushStream, headers) => {
    if (err) throw err;
    pushStream.respond({ ':status': 200 });
    pushStream.end('some pushed data');
  });
  stream.end('some data');
});
```
Setting the weight of a push stream is not allowed in the `HEADERS` frame. Pass a `weight` value to `http2stream.priority` with the `silent` option set to `true` to enable server-side bandwidth balancing between concurrent streams.

Calling `http2stream.pushStream()` from within a pushed stream is not permitted and will throw an error.
http2stream.respond([headers[, options]])#
History
| Version | Changes |
|---|---|
| v24.7.0, v22.20.0 | Allow passing headers in raw array format. |
| v14.5.0, v12.19.0 | Allow explicitly setting date headers. |
| v8.4.0 | Added in: v8.4.0 |
- `headers` <HTTP/2 Headers Object> | <HTTP/2 Raw Headers>
- `options` <Object>
```js
import { createServer } from 'node:http2';

const server = createServer();
server.on('stream', (stream) => {
  stream.respond({ ':status': 200 });
  stream.end('some data');
});
```

```js
const http2 = require('node:http2');

const server = http2.createServer();
server.on('stream', (stream) => {
  stream.respond({ ':status': 200 });
  stream.end('some data');
});
```
Initiates a response. When the `options.waitForTrailers` option is set, the `'wantTrailers'` event will be emitted immediately after queuing the last chunk of payload data to be sent. The `http2stream.sendTrailers()` method can then be used to send trailing header fields to the peer.

When `options.waitForTrailers` is set, the `Http2Stream` will not automatically close when the final `DATA` frame is transmitted. User code must call either `http2stream.sendTrailers()` or `http2stream.close()` to close the `Http2Stream`.
```js
import { createServer } from 'node:http2';

const server = createServer();
server.on('stream', (stream) => {
  stream.respond({ ':status': 200 }, { waitForTrailers: true });
  stream.on('wantTrailers', () => {
    stream.sendTrailers({ ABC: 'some value to send' });
  });
  stream.end('some data');
});
```

```js
const http2 = require('node:http2');

const server = http2.createServer();
server.on('stream', (stream) => {
  stream.respond({ ':status': 200 }, { waitForTrailers: true });
  stream.on('wantTrailers', () => {
    stream.sendTrailers({ ABC: 'some value to send' });
  });
  stream.end('some data');
});
```
http2stream.respondWithFD(fd[, headers[, options]])#
History
| Version | Changes |
|---|---|
| v14.5.0, v12.19.0 | Allow explicitly setting date headers. |
| v12.12.0 | The `fd` option may now be a `FileHandle`. |
| v10.0.0 | Any readable file descriptor, not necessarily for a regular file, is supported now. |
| v8.4.0 | Added in: v8.4.0 |
- `fd` <number> | <FileHandle> A readable file descriptor.
- `headers` <HTTP/2 Headers Object>
- `options` <Object>
  - `statCheck` <Function>
  - `waitForTrailers` <boolean> When `true`, the `Http2Stream` will emit the `'wantTrailers'` event after the final `DATA` frame has been sent.
  - `offset` <number> The offset position at which to begin reading.
  - `length` <number> The amount of data from the fd to send.

Initiates a response whose data is read from the given file descriptor. No validation is performed on the given file descriptor. If an error occurs while attempting to read data using the file descriptor, the `Http2Stream` will be closed using an `RST_STREAM` frame using the standard `INTERNAL_ERROR` code.

When used, the `Http2Stream` object's `Duplex` interface will be closed automatically.
```js
import { createServer } from 'node:http2';
import { openSync, fstatSync, closeSync } from 'node:fs';

const server = createServer();
server.on('stream', (stream) => {
  const fd = openSync('/some/file', 'r');

  const stat = fstatSync(fd);
  const headers = {
    'content-length': stat.size,
    'last-modified': stat.mtime.toUTCString(),
    'content-type': 'text/plain; charset=utf-8',
  };
  stream.respondWithFD(fd, headers);
  stream.on('close', () => closeSync(fd));
});
```

```js
const http2 = require('node:http2');
const fs = require('node:fs');

const server = http2.createServer();
server.on('stream', (stream) => {
  const fd = fs.openSync('/some/file', 'r');

  const stat = fs.fstatSync(fd);
  const headers = {
    'content-length': stat.size,
    'last-modified': stat.mtime.toUTCString(),
    'content-type': 'text/plain; charset=utf-8',
  };
  stream.respondWithFD(fd, headers);
  stream.on('close', () => fs.closeSync(fd));
});
```
The optional `options.statCheck` function may be specified to give user code an opportunity to set additional content headers based on the `fs.Stat` details of the given fd. If the `statCheck` function is provided, the `http2stream.respondWithFD()` method will perform an `fs.fstat()` call to collect details on the provided file descriptor.

The `offset` and `length` options may be used to limit the response to a specific range subset. This can be used, for instance, to support HTTP Range requests.
The file descriptor or `FileHandle` is not closed when the stream is closed, so it will need to be closed manually once it is no longer needed. Using the same file descriptor concurrently for multiple streams is not supported and may result in data loss. Re-using a file descriptor after a stream has finished is supported.

When the `options.waitForTrailers` option is set, the `'wantTrailers'` event will be emitted immediately after queuing the last chunk of payload data to be sent. The `http2stream.sendTrailers()` method can then be used to send trailing header fields to the peer.

When `options.waitForTrailers` is set, the `Http2Stream` will not automatically close when the final `DATA` frame is transmitted. User code must call either `http2stream.sendTrailers()` or `http2stream.close()` to close the `Http2Stream`.
```js
import { createServer } from 'node:http2';
import { openSync, fstatSync, closeSync } from 'node:fs';

const server = createServer();
server.on('stream', (stream) => {
  const fd = openSync('/some/file', 'r');
  const stat = fstatSync(fd);
  const headers = {
    'content-length': stat.size,
    'last-modified': stat.mtime.toUTCString(),
    'content-type': 'text/plain; charset=utf-8',
  };
  stream.respondWithFD(fd, headers, { waitForTrailers: true });
  stream.on('wantTrailers', () => {
    stream.sendTrailers({ ABC: 'some value to send' });
  });
  stream.on('close', () => closeSync(fd));
});
```

```js
const http2 = require('node:http2');
const fs = require('node:fs');

const server = http2.createServer();
server.on('stream', (stream) => {
  const fd = fs.openSync('/some/file', 'r');
  const stat = fs.fstatSync(fd);
  const headers = {
    'content-length': stat.size,
    'last-modified': stat.mtime.toUTCString(),
    'content-type': 'text/plain; charset=utf-8',
  };
  stream.respondWithFD(fd, headers, { waitForTrailers: true });
  stream.on('wantTrailers', () => {
    stream.sendTrailers({ ABC: 'some value to send' });
  });
  stream.on('close', () => fs.closeSync(fd));
});
```
http2stream.respondWithFile(path[, headers[, options]])#
History
| Version | Changes |
|---|---|
| v14.5.0, v12.19.0 | Allow explicitly setting date headers. |
| v10.0.0 | Any readable file, not necessarily a regular file, is supported now. |
| v8.4.0 | Added in: v8.4.0 |
- `path` <string> | <Buffer> | <URL>
- `headers` <HTTP/2 Headers Object>
- `options` <Object>
  - `statCheck` <Function>
  - `onError` <Function> Callback function invoked in the case of an error before send.
  - `waitForTrailers` <boolean> When `true`, the `Http2Stream` will emit the `'wantTrailers'` event after the final `DATA` frame has been sent.
  - `offset` <number> The offset position at which to begin reading.
  - `length` <number> The amount of data from the fd to send.
Sends a regular file as the response. The `path` must specify a regular file or an `'error'` event will be emitted on the `Http2Stream` object.

When used, the `Http2Stream` object's `Duplex` interface will be closed automatically.

The optional `options.statCheck` function may be specified to give user code an opportunity to set additional content headers based on the `fs.Stat` details of the given file.

If an error occurs while attempting to read the file data, the `Http2Stream` will be closed using an `RST_STREAM` frame with the standard `INTERNAL_ERROR` code. If the `onError` callback is defined, then it will be called. Otherwise the stream will be destroyed.
Example using a file path:
```js
import { createServer } from 'node:http2';

const server = createServer();
server.on('stream', (stream) => {
  function statCheck(stat, headers) {
    headers['last-modified'] = stat.mtime.toUTCString();
  }

  function onError(err) {
    // stream.respond() can throw if the stream has been destroyed by
    // the other side.
    try {
      if (err.code === 'ENOENT') {
        stream.respond({ ':status': 404 });
      } else {
        stream.respond({ ':status': 500 });
      }
    } catch (err) {
      // Perform actual error handling.
      console.error(err);
    }
    stream.end();
  }

  stream.respondWithFile('/some/file',
                         { 'content-type': 'text/plain; charset=utf-8' },
                         { statCheck, onError });
});
```

```js
const http2 = require('node:http2');

const server = http2.createServer();
server.on('stream', (stream) => {
  function statCheck(stat, headers) {
    headers['last-modified'] = stat.mtime.toUTCString();
  }

  function onError(err) {
    // stream.respond() can throw if the stream has been destroyed by
    // the other side.
    try {
      if (err.code === 'ENOENT') {
        stream.respond({ ':status': 404 });
      } else {
        stream.respond({ ':status': 500 });
      }
    } catch (err) {
      // Perform actual error handling.
      console.error(err);
    }
    stream.end();
  }

  stream.respondWithFile('/some/file',
                         { 'content-type': 'text/plain; charset=utf-8' },
                         { statCheck, onError });
});
```
The `options.statCheck` function may also be used to cancel the send operation by returning `false`. For instance, a conditional request may check the stat results to determine if the file has been modified and return an appropriate `304` response:
```js
import { createServer } from 'node:http2';

const server = createServer();
server.on('stream', (stream) => {
  function statCheck(stat, headers) {
    // Check the stat here...
    stream.respond({ ':status': 304 });
    return false; // Cancel the send operation
  }
  stream.respondWithFile('/some/file',
                         { 'content-type': 'text/plain; charset=utf-8' },
                         { statCheck });
});
```

```js
const http2 = require('node:http2');

const server = http2.createServer();
server.on('stream', (stream) => {
  function statCheck(stat, headers) {
    // Check the stat here...
    stream.respond({ ':status': 304 });
    return false; // Cancel the send operation
  }
  stream.respondWithFile('/some/file',
                         { 'content-type': 'text/plain; charset=utf-8' },
                         { statCheck });
});
```
The `content-length` header field will be automatically set.

The `offset` and `length` options may be used to limit the response to a specific range subset. This can be used, for instance, to support HTTP Range requests.

The `options.onError` function may also be used to handle all the errors that could happen before the delivery of the file is initiated. The default behavior is to destroy the stream.

When the `options.waitForTrailers` option is set, the `'wantTrailers'` event will be emitted immediately after queuing the last chunk of payload data to be sent. The `http2stream.sendTrailers()` method can then be used to send trailing header fields to the peer.

When `options.waitForTrailers` is set, the `Http2Stream` will not automatically close when the final `DATA` frame is transmitted. User code must call either `http2stream.sendTrailers()` or `http2stream.close()` to close the `Http2Stream`.
```js
import { createServer } from 'node:http2';

const server = createServer();
server.on('stream', (stream) => {
  stream.respondWithFile('/some/file',
                         { 'content-type': 'text/plain; charset=utf-8' },
                         { waitForTrailers: true });
  stream.on('wantTrailers', () => {
    stream.sendTrailers({ ABC: 'some value to send' });
  });
});
```

```js
const http2 = require('node:http2');

const server = http2.createServer();
server.on('stream', (stream) => {
  stream.respondWithFile('/some/file',
                         { 'content-type': 'text/plain; charset=utf-8' },
                         { waitForTrailers: true });
  stream.on('wantTrailers', () => {
    stream.sendTrailers({ ABC: 'some value to send' });
  });
});
```
Class: Http2Server#

- Extends: <net.Server>

Instances of `Http2Server` are created using the `http2.createServer()` function. The `Http2Server` class is not exported directly by the `node:http2` module.
Event: 'checkContinue'#

- `request` <http2.Http2ServerRequest>
- `response` <http2.Http2ServerResponse>

If a `'request'` listener is registered or `http2.createServer()` is supplied a callback function, the `'checkContinue'` event is emitted each time a request with an HTTP `Expect: 100-continue` is received. If this event is not listened for, the server will automatically respond with a status `100 Continue` as appropriate.

Handling this event involves calling `response.writeContinue()` if the client should continue to send the request body, or generating an appropriate HTTP response (e.g. 400 Bad Request) if the client should not continue to send the request body.

When this event is emitted and handled, the `'request'` event will not be emitted.
Event: 'connection'#

- `socket` <stream.Duplex>

This event is emitted when a new TCP stream is established. `socket` is typically an object of type `net.Socket`. Usually users will not want to access this event.

This event can also be explicitly emitted by users to inject connections into the HTTP server. In that case, any `Duplex` stream can be passed.
Event: 'request'#

- `request` <http2.Http2ServerRequest>
- `response` <http2.Http2ServerResponse>

Emitted each time there is a request. There may be multiple requests per session. See the Compatibility API.
Event: 'session'#

- `session` <ServerHttp2Session>

The `'session'` event is emitted when a new `Http2Session` is created by the `Http2Server`.
Event: 'sessionError'#

- `error` <Error>
- `session` <ServerHttp2Session>

The `'sessionError'` event is emitted when an `'error'` event is emitted by an `Http2Session` object associated with the `Http2Server`.
Event: 'stream'#

- `stream` <Http2Stream> A reference to the stream
- `headers` <HTTP/2 Headers Object> An object describing the headers
- `flags` <number> The associated numeric flags
- `rawHeaders` <HTTP/2 Raw Headers> An array containing the raw headers

The `'stream'` event is emitted when a `'stream'` event has been emitted by an `Http2Session` associated with the server.

See also `Http2Session`'s `'stream'` event.
```js
import { createServer, constants } from 'node:http2';
const {
  HTTP2_HEADER_METHOD,
  HTTP2_HEADER_PATH,
  HTTP2_HEADER_STATUS,
  HTTP2_HEADER_CONTENT_TYPE,
} = constants;

const server = createServer();
server.on('stream', (stream, headers, flags) => {
  const method = headers[HTTP2_HEADER_METHOD];
  const path = headers[HTTP2_HEADER_PATH];
  // ...
  stream.respond({
    [HTTP2_HEADER_STATUS]: 200,
    [HTTP2_HEADER_CONTENT_TYPE]: 'text/plain; charset=utf-8',
  });
  stream.write('hello ');
  stream.end('world');
});
```

```js
const http2 = require('node:http2');
const {
  HTTP2_HEADER_METHOD,
  HTTP2_HEADER_PATH,
  HTTP2_HEADER_STATUS,
  HTTP2_HEADER_CONTENT_TYPE,
} = http2.constants;

const server = http2.createServer();
server.on('stream', (stream, headers, flags) => {
  const method = headers[HTTP2_HEADER_METHOD];
  const path = headers[HTTP2_HEADER_PATH];
  // ...
  stream.respond({
    [HTTP2_HEADER_STATUS]: 200,
    [HTTP2_HEADER_CONTENT_TYPE]: 'text/plain; charset=utf-8',
  });
  stream.write('hello ');
  stream.end('world');
});
```
Event: 'timeout'#
History
| Version | Changes |
|---|---|
| v13.0.0 | The default timeout changed from 120s to 0 (no timeout). |
| v8.4.0 | Added in: v8.4.0 |
The `'timeout'` event is emitted when there is no activity on the Server for a given number of milliseconds set using `http2server.setTimeout()`. Default: 0 (no timeout)
server.close([callback])#
- `callback` <Function>

Stops the server from establishing new sessions. This does not prevent new request streams from being created due to the persistent nature of HTTP/2 sessions. To gracefully shut down the server, call `http2session.close()` on all active sessions.

If `callback` is provided, it is not invoked until all active sessions have been closed, although the server has already stopped allowing new sessions. See `net.Server.close()` for more details.
server[Symbol.asyncDispose]()#
History
| Version | Changes |
|---|---|
| v24.2.0 | No longer experimental. |
| v20.4.0 | Added in: v20.4.0 |
Calls `server.close()` and returns a promise that fulfills when the server has closed.
server.setTimeout([msecs][, callback])#
History
| Version | Changes |
|---|---|
| v18.0.0 | Passing an invalid callback to the |
| v13.0.0 | The default timeout changed from 120s to 0 (no timeout). |
| v8.4.0 | Added in: v8.4.0 |
- `msecs` <number> Default: 0 (no timeout)
- `callback` <Function>
- Returns: <Http2Server>

Used to set the timeout value for http2 server requests, and sets a callback function that is called when there is no activity on the `Http2Server` after `msecs` milliseconds.
The given callback is registered as a listener on the `'timeout'` event.

If `callback` is not a function, a new `ERR_INVALID_ARG_TYPE` error will be thrown.
server.timeout#
History
| Version | Changes |
|---|---|
| v13.0.0 | The default timeout changed from 120s to 0 (no timeout). |
| v8.4.0 | Added in: v8.4.0 |
- Type: <number> Timeout in milliseconds. Default: 0 (no timeout)

The number of milliseconds of inactivity before a socket is presumed to have timed out.
A value of0 will disable the timeout behavior on incoming connections.
The socket timeout logic is set up on connection, so changing thisvalue only affects new connections to the server, not any existing connections.
server.updateSettings([settings])#
- `settings` <HTTP/2 Settings Object>

Used to update the server with the provided settings.

Throws `ERR_HTTP2_INVALID_SETTING_VALUE` for invalid `settings` values.

Throws `ERR_INVALID_ARG_TYPE` for an invalid `settings` argument.
Class: Http2SecureServer#

- Extends: <tls.Server>

Instances of `Http2SecureServer` are created using the `http2.createSecureServer()` function. The `Http2SecureServer` class is not exported directly by the `node:http2` module.
Event: 'checkContinue'#

- `request` <http2.Http2ServerRequest>
- `response` <http2.Http2ServerResponse>

If a `'request'` listener is registered or `http2.createSecureServer()` is supplied a callback function, the `'checkContinue'` event is emitted each time a request with an HTTP `Expect: 100-continue` is received. If this event is not listened for, the server will automatically respond with a status `100 Continue` as appropriate.

Handling this event involves calling `response.writeContinue()` if the client should continue to send the request body, or generating an appropriate HTTP response (e.g. 400 Bad Request) if the client should not continue to send the request body.

When this event is emitted and handled, the `'request'` event will not be emitted.
Event: 'connection'#

- `socket` <stream.Duplex>

This event is emitted when a new TCP stream is established, before the TLS handshake begins. `socket` is typically an object of type `net.Socket`. Usually users will not want to access this event.

This event can also be explicitly emitted by users to inject connections into the HTTP server. In that case, any `Duplex` stream can be passed.
Event: 'request'#

- `request` <http2.Http2ServerRequest>
- `response` <http2.Http2ServerResponse>

Emitted each time there is a request. There may be multiple requests per session. See the Compatibility API.
Event: 'session'#

- `session` <ServerHttp2Session>

The `'session'` event is emitted when a new `Http2Session` is created by the `Http2SecureServer`.
Event: 'sessionError'#

- `error` <Error>
- `session` <ServerHttp2Session>

The `'sessionError'` event is emitted when an `'error'` event is emitted by an `Http2Session` object associated with the `Http2SecureServer`.
Event: 'stream'#

- `stream` <Http2Stream> A reference to the stream
- `headers` <HTTP/2 Headers Object> An object describing the headers
- `flags` <number> The associated numeric flags
- `rawHeaders` <HTTP/2 Raw Headers> An array containing the raw headers

The `'stream'` event is emitted when a `'stream'` event has been emitted by an `Http2Session` associated with the server.

See also `Http2Session`'s `'stream'` event.
```js
import { createSecureServer, constants } from 'node:http2';
const {
  HTTP2_HEADER_METHOD,
  HTTP2_HEADER_PATH,
  HTTP2_HEADER_STATUS,
  HTTP2_HEADER_CONTENT_TYPE,
} = constants;

const options = getOptionsSomehow();
const server = createSecureServer(options);
server.on('stream', (stream, headers, flags) => {
  const method = headers[HTTP2_HEADER_METHOD];
  const path = headers[HTTP2_HEADER_PATH];
  // ...
  stream.respond({
    [HTTP2_HEADER_STATUS]: 200,
    [HTTP2_HEADER_CONTENT_TYPE]: 'text/plain; charset=utf-8',
  });
  stream.write('hello ');
  stream.end('world');
});
```

```js
const http2 = require('node:http2');
const {
  HTTP2_HEADER_METHOD,
  HTTP2_HEADER_PATH,
  HTTP2_HEADER_STATUS,
  HTTP2_HEADER_CONTENT_TYPE,
} = http2.constants;

const options = getOptionsSomehow();
const server = http2.createSecureServer(options);
server.on('stream', (stream, headers, flags) => {
  const method = headers[HTTP2_HEADER_METHOD];
  const path = headers[HTTP2_HEADER_PATH];
  // ...
  stream.respond({
    [HTTP2_HEADER_STATUS]: 200,
    [HTTP2_HEADER_CONTENT_TYPE]: 'text/plain; charset=utf-8',
  });
  stream.write('hello ');
  stream.end('world');
});
```
Event: 'timeout'#

The `'timeout'` event is emitted when there is no activity on the Server for a given number of milliseconds set using `http2secureServer.setTimeout()`. Default: 2 minutes.
Event: 'unknownProtocol'#
History
| Version | Changes |
|---|---|
| v19.0.0 | This event will only be emitted if the client did not transmit an ALPN extension during the TLS handshake. |
| v8.4.0 | Added in: v8.4.0 |
- `socket` <stream.Duplex>

The `'unknownProtocol'` event is emitted when a connecting client fails to negotiate an allowed protocol (i.e. HTTP/2 or HTTP/1.1). The event handler receives the socket for handling. If no listener is registered for this event, the connection is terminated. A timeout may be specified using the `'unknownProtocolTimeout'` option passed to `http2.createSecureServer()`.

In earlier versions of Node.js, this event would be emitted if `allowHTTP1` is `false` and, during the TLS handshake, the client either does not send an ALPN extension or sends an ALPN extension that does not include HTTP/2 (`h2`). Newer versions of Node.js only emit this event if `allowHTTP1` is `false` and the client does not send an ALPN extension. If the client sends an ALPN extension that does not include HTTP/2 (or HTTP/1.1 if `allowHTTP1` is `true`), the TLS handshake will fail and no secure connection will be established.
See the Compatibility API.
server.close([callback])#
- `callback` <Function>

Stops the server from establishing new sessions. This does not prevent new request streams from being created due to the persistent nature of HTTP/2 sessions. To gracefully shut down the server, call `http2session.close()` on all active sessions.

If `callback` is provided, it is not invoked until all active sessions have been closed, although the server has already stopped allowing new sessions. See `tls.Server.close()` for more details.
server.setTimeout([msecs][, callback])#
History
| Version | Changes |
|---|---|
| v18.0.0 | Passing an invalid callback to the |
| v8.4.0 | Added in: v8.4.0 |
- `msecs` <number> Default: `120000` (2 minutes)
- `callback` <Function>
- Returns: <Http2SecureServer>

Used to set the timeout value for http2 secure server requests, and sets a callback function that is called when there is no activity on the `Http2SecureServer` after `msecs` milliseconds.

The given callback is registered as a listener on the `'timeout'` event.

If `callback` is not a function, a new `ERR_INVALID_ARG_TYPE` error will be thrown.
server.timeout#
History
| Version | Changes |
|---|---|
| v13.0.0 | The default timeout changed from 120s to 0 (no timeout). |
| v8.4.0 | Added in: v8.4.0 |
- Type: <number> Timeout in milliseconds. Default: 0 (no timeout)

The number of milliseconds of inactivity before a socket is presumed to have timed out.
A value of0 will disable the timeout behavior on incoming connections.
The socket timeout logic is set up on connection, so changing thisvalue only affects new connections to the server, not any existing connections.
server.updateSettings([settings])#
- `settings` <HTTP/2 Settings Object>

Used to update the server with the provided settings.

Throws `ERR_HTTP2_INVALID_SETTING_VALUE` for invalid `settings` values.

Throws `ERR_INVALID_ARG_TYPE` for an invalid `settings` argument.
http2.createServer([options][, onRequestHandler])#
History
| Version | Changes |
|---|---|
| v23.0.0, v22.10.0 | Added |
| v13.0.0 | The |
| v13.3.0, v12.16.0 | Added |
| v13.3.0, v12.16.0 | Added |
| v12.4.0 | The |
| v15.10.0, v14.16.0, v12.21.0, v10.24.0 | Added |
| v14.4.0, v12.18.0, v10.21.0 | Added |
| v9.6.0 | Added the |
| v8.9.3 | Added the |
| v8.9.3 | Added the |
| v8.4.0 | Added in: v8.4.0 |
- `options` <Object>
  - `maxDeflateDynamicTableSize` <number> Sets the maximum dynamic table size for deflating header fields. Default: 4 KiB.
  - `maxSettings` <number> Sets the maximum number of settings entries per `SETTINGS` frame. The minimum value allowed is `1`. Default: `32`.
  - `maxSessionMemory` <number> Sets the maximum memory that the `Http2Session` is permitted to use. The value is expressed in terms of number of megabytes, e.g. `1` equals 1 megabyte. The minimum value allowed is `1`. This is a credit-based limit; existing `Http2Stream`s may cause this limit to be exceeded, but new `Http2Stream` instances will be rejected while this limit is exceeded. The current number of `Http2Stream` sessions, the current memory use of the header compression tables, current data queued to be sent, and unacknowledged `PING` and `SETTINGS` frames are all counted towards the current limit. Default: `10`.
  - `maxHeaderListPairs` <number> Sets the maximum number of header entries. This is similar to `server.maxHeadersCount` or `request.maxHeadersCount` in the `node:http` module. The minimum value is `4`. Default: `128`.
  - `maxOutstandingPings` <number> Sets the maximum number of outstanding, unacknowledged pings. Default: `10`.
  - `maxSendHeaderBlockLength` <number> Sets the maximum allowed size for a serialized, compressed block of headers. Attempts to send headers that exceed this limit will result in a `'frameError'` event being emitted and the stream being closed and destroyed. While this sets the maximum allowed size for the entire block of headers, `nghttp2` (the internal http2 library) has a limit of `65536` for each decompressed key/value pair.
  - `paddingStrategy` <number> The strategy used for determining the amount of padding to use for `HEADERS` and `DATA` frames. Default: `http2.constants.PADDING_STRATEGY_NONE`. Value may be one of:
    - `http2.constants.PADDING_STRATEGY_NONE`: No padding is applied.
    - `http2.constants.PADDING_STRATEGY_MAX`: The maximum amount of padding, determined by the internal implementation, is applied.
    - `http2.constants.PADDING_STRATEGY_ALIGNED`: Attempts to apply enough padding to ensure that the total frame length, including the 9-byte header, is a multiple of 8. For each frame, there is a maximum allowed number of padding bytes that is determined by current flow control state and settings. If this maximum is less than the calculated amount needed to ensure alignment, the maximum is used and the total frame length is not necessarily aligned at 8 bytes.
  - `peerMaxConcurrentStreams` <number> Sets the maximum number of concurrent streams for the remote peer as if a `SETTINGS` frame had been received. Will be overridden if the remote peer sets its own value for `maxConcurrentStreams`. Default: `100`.
  - `maxSessionInvalidFrames` <integer> Sets the maximum number of invalid frames that will be tolerated before the session is closed. Default: `1000`.
  - `maxSessionRejectedStreams` <integer> Sets the maximum number of streams rejected upon creation that will be tolerated before the session is closed. Each rejection is associated with an `NGHTTP2_ENHANCE_YOUR_CALM` error that should tell the peer to not open any more streams; continuing to open streams is therefore regarded as a sign of a misbehaving peer. Default: `100`.
  - `settings` <HTTP/2 Settings Object> The initial settings to send to the remote peer upon connection.
  - `streamResetBurst` <number> and `streamResetRate` <number> Sets the rate limit for the incoming stream reset (`RST_STREAM` frame). Both settings must be set to have any effect, and default to 1000 and 33 respectively.
  - `remoteCustomSettings` <Array> The array of integer values determines the settings types, which are included in the `customSettings` property of the received remoteSettings. Please see the `customSettings` property of the `Http2Settings` object for more information on the allowed setting types.
  - `Http1IncomingMessage` <http.IncomingMessage> Specifies the `IncomingMessage` class to be used for HTTP/1 fallback. Useful for extending the original `http.IncomingMessage`. Default: `http.IncomingMessage`.
  - `Http1ServerResponse` <http.ServerResponse> Specifies the `ServerResponse` class to be used for HTTP/1 fallback. Useful for extending the original `http.ServerResponse`. Default: `http.ServerResponse`.
  - `Http2ServerRequest` <http2.Http2ServerRequest> Specifies the `Http2ServerRequest` class to use. Useful for extending the original `Http2ServerRequest`. Default: `Http2ServerRequest`.
  - `Http2ServerResponse` <http2.Http2ServerResponse> Specifies the `Http2ServerResponse` class to use. Useful for extending the original `Http2ServerResponse`. Default: `Http2ServerResponse`.
  - `unknownProtocolTimeout` <number> Specifies a timeout in milliseconds that a server should wait when an `'unknownProtocol'` event is emitted. If the socket has not been destroyed by that time the server will destroy it. Default: `10000`.
  - `strictFieldWhitespaceValidation` <boolean> If `true`, it turns on strict leading and trailing whitespace validation for HTTP/2 header field names and values as per RFC 9113. Default: `true`.
  - `...options` <Object> Any `net.createServer()` option can be provided.
- `onRequestHandler` <Function> See Compatibility API
- Returns: <Http2Server>
Returns a `net.Server` instance that creates and manages `Http2Session` instances.

Since there are no browsers known that support unencrypted HTTP/2, the use of `http2.createSecureServer()` is necessary when communicating with browser clients.
```js
import { createServer } from 'node:http2';

// Create an unencrypted HTTP/2 server.
// Since there are no browsers known that support
// unencrypted HTTP/2, the use of `createSecureServer()`
// is necessary when communicating with browser clients.
const server = createServer();

server.on('stream', (stream, headers) => {
  stream.respond({
    'content-type': 'text/html; charset=utf-8',
    ':status': 200,
  });
  stream.end('<h1>Hello World</h1>');
});

server.listen(8000);
```

```js
const http2 = require('node:http2');

// Create an unencrypted HTTP/2 server.
// Since there are no browsers known that support
// unencrypted HTTP/2, the use of `http2.createSecureServer()`
// is necessary when communicating with browser clients.
const server = http2.createServer();

server.on('stream', (stream, headers) => {
  stream.respond({
    'content-type': 'text/html; charset=utf-8',
    ':status': 200,
  });
  stream.end('<h1>Hello World</h1>');
});

server.listen(8000);
```
http2.createSecureServer(options[, onRequestHandler])#
History
| Version | Changes |
|---|---|
| v13.0.0 | The |
| v13.3.0, v12.16.0 | Added |
| v13.3.0, v12.16.0 | Added |
| v15.10.0, v14.16.0, v12.21.0, v10.24.0 | Added |
| v14.4.0, v12.18.0, v10.21.0 | Added |
| v10.12.0 | Added the |
| v8.9.3 | Added the |
| v8.9.3 | Added the |
| v8.4.0 | Added in: v8.4.0 |
- `options` <Object>
  - `allowHTTP1` <boolean> Incoming client connections that do not support HTTP/2 will be downgraded to HTTP/1.x when set to `true`. See the `'unknownProtocol'` event. See ALPN negotiation. Default: `false`.
  - `maxDeflateDynamicTableSize` <number> Sets the maximum dynamic table size for deflating header fields. Default: 4 KiB.
  - `maxSettings` <number> Sets the maximum number of settings entries per `SETTINGS` frame. The minimum value allowed is `1`. Default: `32`.
  - `maxSessionMemory` <number> Sets the maximum memory that the `Http2Session` is permitted to use. The value is expressed in terms of number of megabytes, e.g. `1` equals 1 megabyte. The minimum value allowed is `1`. This is a credit-based limit; existing `Http2Stream`s may cause this limit to be exceeded, but new `Http2Stream` instances will be rejected while this limit is exceeded. The current number of `Http2Stream` sessions, the current memory use of the header compression tables, current data queued to be sent, and unacknowledged `PING` and `SETTINGS` frames are all counted towards the current limit. Default: `10`.
  - `maxHeaderListPairs` <number> Sets the maximum number of header entries. This is similar to `server.maxHeadersCount` or `request.maxHeadersCount` in the `node:http` module. The minimum value is `4`. Default: `128`.
  - `maxOutstandingPings` <number> Sets the maximum number of outstanding, unacknowledged pings. Default: `10`.
  - `maxSendHeaderBlockLength` <number> Sets the maximum allowed size for a serialized, compressed block of headers. Attempts to send headers that exceed this limit will result in a `'frameError'` event being emitted and the stream being closed and destroyed.
  - `paddingStrategy` <number> Strategy used for determining the amount of padding to use for `HEADERS` and `DATA` frames. Default: `http2.constants.PADDING_STRATEGY_NONE`. Value may be one of:
    - `http2.constants.PADDING_STRATEGY_NONE`: No padding is applied.
    - `http2.constants.PADDING_STRATEGY_MAX`: The maximum amount of padding, determined by the internal implementation, is applied.
    - `http2.constants.PADDING_STRATEGY_ALIGNED`: Attempts to apply enough padding to ensure that the total frame length, including the 9-byte header, is a multiple of 8. For each frame, there is a maximum allowed number of padding bytes that is determined by current flow control state and settings. If this maximum is less than the calculated amount needed to ensure alignment, the maximum is used and the total frame length is not necessarily aligned at 8 bytes.
  - `peerMaxConcurrentStreams` <number> Sets the maximum number of concurrent streams for the remote peer as if a `SETTINGS` frame had been received. Will be overridden if the remote peer sets its own value for `maxConcurrentStreams`. Default: `100`.
  - `maxSessionInvalidFrames` <integer> Sets the maximum number of invalid frames that will be tolerated before the session is closed. Default: `1000`.
  - `maxSessionRejectedStreams` <integer> Sets the maximum number of streams rejected upon creation that will be tolerated before the session is closed. Each rejection is associated with an `NGHTTP2_ENHANCE_YOUR_CALM` error that should tell the peer to not open any more streams; continuing to open streams is therefore regarded as a sign of a misbehaving peer. Default: `100`.
  - `settings` <HTTP/2 Settings Object> The initial settings to send to the remote peer upon connection.
  - `streamResetBurst` <number> and `streamResetRate` <number> Sets the rate limit for the incoming stream reset (`RST_STREAM` frame). Both settings must be set to have any effect, and default to 1000 and 33 respectively.
  - `remoteCustomSettings` <Array> The array of integer values determines the settings types, which are included in the `customSettings` property of the received remoteSettings. Please see the `customSettings` property of the `Http2Settings` object for more information on the allowed setting types.
  - `...options` <Object> Any `tls.createServer()` options can be provided. For servers, the identity options (`pfx` or `key`/`cert`) are usually required.
  - `origins` <string[]> An array of origin strings to send within an `ORIGIN` frame immediately following creation of a new server `Http2Session`.
  - `unknownProtocolTimeout` <number> Specifies a timeout in milliseconds that a server should wait when an `'unknownProtocol'` event is emitted. If the socket has not been destroyed by that time the server will destroy it. Default: `10000`.
  - `strictFieldWhitespaceValidation` <boolean> If `true`, it turns on strict leading and trailing whitespace validation for HTTP/2 header field names and values as per RFC 9113. Default: `true`.
- `onRequestHandler` <Function> See Compatibility API
- Returns: <Http2SecureServer>
Returns a `tls.Server` instance that creates and manages `Http2Session` instances.
```js
import { createSecureServer } from 'node:http2';
import { readFileSync } from 'node:fs';

const options = {
  key: readFileSync('server-key.pem'),
  cert: readFileSync('server-cert.pem'),
};

// Create a secure HTTP/2 server
const server = createSecureServer(options);

server.on('stream', (stream, headers) => {
  stream.respond({
    'content-type': 'text/html; charset=utf-8',
    ':status': 200,
  });
  stream.end('<h1>Hello World</h1>');
});

server.listen(8443);
```

```js
const http2 = require('node:http2');
const fs = require('node:fs');

const options = {
  key: fs.readFileSync('server-key.pem'),
  cert: fs.readFileSync('server-cert.pem'),
};

// Create a secure HTTP/2 server
const server = http2.createSecureServer(options);

server.on('stream', (stream, headers) => {
  stream.respond({
    'content-type': 'text/html; charset=utf-8',
    ':status': 200,
  });
  stream.end('<h1>Hello World</h1>');
});

server.listen(8443);
```
http2.connect(authority[, options][, listener])#
History
| Version | Changes |
|---|---|
| v13.0.0 | The |
| v15.10.0, v14.16.0, v12.21.0, v10.24.0 | Added |
| v14.4.0, v12.18.0, v10.21.0 | Added |
| v8.9.3 | Added the |
| v8.9.3 | Added the |
| v8.4.0 | Added in: v8.4.0 |
- authority <string> | <URL> The remote HTTP/2 server to connect to. This must be in the form of a minimal, valid URL with the http:// or https:// prefix, host name, and IP port (if a non-default port is used). Userinfo (user ID and password), path, querystring, and fragment details in the URL will be ignored.
- options <Object>
  - maxDeflateDynamicTableSize <number> Sets the maximum dynamic table size for deflating header fields. Default: 4 KiB.
  - maxSettings <number> Sets the maximum number of settings entries per SETTINGS frame. The minimum value allowed is 1. Default: 32.
  - maxSessionMemory <number> Sets the maximum memory that the Http2Session is permitted to use. The value is expressed in terms of number of megabytes, e.g. 1 equals 1 megabyte. The minimum value allowed is 1. This is a credit based limit; existing Http2Streams may cause this limit to be exceeded, but new Http2Stream instances will be rejected while this limit is exceeded. The current number of Http2Stream sessions, the current memory use of the header compression tables, current data queued to be sent, and unacknowledged PING and SETTINGS frames are all counted towards the current limit. Default: 10.
  - maxHeaderListPairs <number> Sets the maximum number of header entries. This is similar to server.maxHeadersCount or request.maxHeadersCount in the node:http module. The minimum value is 1. Default: 128.
  - maxOutstandingPings <number> Sets the maximum number of outstanding, unacknowledged pings. Default: 10.
  - maxReservedRemoteStreams <number> Sets the maximum number of reserved push streams the client will accept at any given time. Once the current number of currently reserved push streams reaches this limit, new push streams sent by the server will be automatically rejected. The minimum allowed value is 0. The maximum allowed value is 2^32-1. A negative value sets this option to the maximum allowed value. Default: 200.
  - maxSendHeaderBlockLength <number> Sets the maximum allowed size for a serialized, compressed block of headers. Attempts to send headers that exceed this limit will result in a 'frameError' event being emitted and the stream being closed and destroyed.
  - paddingStrategy <number> Strategy used for determining the amount of padding to use for HEADERS and DATA frames. Default: http2.constants.PADDING_STRATEGY_NONE. Value may be one of:
    - http2.constants.PADDING_STRATEGY_NONE: No padding is applied.
    - http2.constants.PADDING_STRATEGY_MAX: The maximum amount of padding, determined by the internal implementation, is applied.
    - http2.constants.PADDING_STRATEGY_ALIGNED: Attempts to apply enough padding to ensure that the total frame length, including the 9-byte header, is a multiple of 8. For each frame, there is a maximum allowed number of padding bytes that is determined by current flow control state and settings. If this maximum is less than the calculated amount needed to ensure alignment, the maximum is used and the total frame length is not necessarily aligned at 8 bytes.
  - peerMaxConcurrentStreams <number> Sets the maximum number of concurrent streams for the remote peer as if a SETTINGS frame had been received. Will be overridden if the remote peer sets its own value for maxConcurrentStreams. Default: 100.
  - protocol <string> The protocol to connect with, if not set in the authority. Value may be either 'http:' or 'https:'. Default: 'https:'
  - settings <HTTP/2 Settings Object> The initial settings to send to the remote peer upon connection.
  - remoteCustomSettings <Array> The array of integer values determines the settings types, which are included in the CustomSettings property of the received remoteSettings. Please see the CustomSettings property of the Http2Settings object for more information on the allowed setting types.
  - createConnection <Function> An optional callback that receives the URL instance passed to connect and the options object, and returns any Duplex stream that is to be used as the connection for this session.
  - ...options <Object> Any net.connect() or tls.connect() options can be provided.
  - unknownProtocolTimeout <number> Specifies a timeout in milliseconds that a server should wait when an 'unknownProtocol' event is emitted. If the socket has not been destroyed by that time the server will destroy it. Default: 10000.
  - strictFieldWhitespaceValidation <boolean> If true, it turns on strict leading and trailing whitespace validation for HTTP/2 header field names and values as per RFC 9113. Default: true.
- listener <Function> Will be registered as a one-time listener of the 'connect' event.
- Returns: <ClientHttp2Session>
Returns a ClientHttp2Session instance.

```js
// ESM
import { connect } from 'node:http2';

const client = connect('https://localhost:1234');

/* Use the client */

client.close();
```

```js
// CommonJS
const http2 = require('node:http2');

const client = http2.connect('https://localhost:1234');

/* Use the client */

client.close();
```
http2.constants#
Error codes for RST_STREAM and GOAWAY#
| Value | Name | Constant |
|---|---|---|
0x00 | No Error | http2.constants.NGHTTP2_NO_ERROR |
0x01 | Protocol Error | http2.constants.NGHTTP2_PROTOCOL_ERROR |
0x02 | Internal Error | http2.constants.NGHTTP2_INTERNAL_ERROR |
0x03 | Flow Control Error | http2.constants.NGHTTP2_FLOW_CONTROL_ERROR |
0x04 | Settings Timeout | http2.constants.NGHTTP2_SETTINGS_TIMEOUT |
0x05 | Stream Closed | http2.constants.NGHTTP2_STREAM_CLOSED |
0x06 | Frame Size Error | http2.constants.NGHTTP2_FRAME_SIZE_ERROR |
0x07 | Refused Stream | http2.constants.NGHTTP2_REFUSED_STREAM |
0x08 | Cancel | http2.constants.NGHTTP2_CANCEL |
0x09 | Compression Error | http2.constants.NGHTTP2_COMPRESSION_ERROR |
0x0a | Connect Error | http2.constants.NGHTTP2_CONNECT_ERROR |
0x0b | Enhance Your Calm | http2.constants.NGHTTP2_ENHANCE_YOUR_CALM |
0x0c | Inadequate Security | http2.constants.NGHTTP2_INADEQUATE_SECURITY |
0x0d | HTTP/1.1 Required | http2.constants.NGHTTP2_HTTP_1_1_REQUIRED |
The 'timeout' event is emitted when there is no activity on the Server for a given number of milliseconds set using http2server.setTimeout().
http2.getDefaultSettings()#
- Returns:<HTTP/2 Settings Object>
Returns an object containing the default settings for an Http2Session instance. This method returns a new object instance every time it is called so instances returned may be safely modified for use.
http2.getPackedSettings([settings])#
- settings <HTTP/2 Settings Object>
- Returns: <Buffer>

Returns a Buffer instance containing serialized representation of the given HTTP/2 settings as specified in the HTTP/2 specification. This is intended for use with the HTTP2-Settings header field.
```js
// ESM
import { getPackedSettings } from 'node:http2';

const packed = getPackedSettings({ enablePush: false });

console.log(packed.toString('base64'));
// Prints: AAIAAAAA
```

```js
// CommonJS
const http2 = require('node:http2');

const packed = http2.getPackedSettings({ enablePush: false });

console.log(packed.toString('base64'));
// Prints: AAIAAAAA
```
http2.getUnpackedSettings(buf)#
- buf <Buffer> | <TypedArray> The packed settings.
- Returns: <HTTP/2 Settings Object>

Returns an HTTP/2 Settings Object containing the deserialized settings from the given Buffer as generated by http2.getPackedSettings().
http2.performServerHandshake(socket[, options])#
- socket <stream.Duplex>
- options <Object> Any http2.createServer() option can be provided.
- Returns: <ServerHttp2Session>
Create an HTTP/2 server session from an existing socket.
http2.sensitiveHeaders#
- Type:<symbol>
This symbol can be set as a property on the HTTP/2 headers object with an array value in order to provide a list of headers considered sensitive. See Sensitive headers for more details.
Headers object#
Headers are represented as own-properties on JavaScript objects. The property keys will be serialized to lower-case. Property values should be strings (if they are not they will be coerced to strings) or an Array of strings (in order to send more than one value per header field).
```js
const headers = {
  ':status': '200',
  'content-type': 'text/plain',
  'ABC': ['has', 'more', 'than', 'one', 'value'],
};
stream.respond(headers);
```

Header objects passed to callback functions will have a null prototype. This means that normal JavaScript object methods such as Object.prototype.toString() and Object.prototype.hasOwnProperty() will not work.
For incoming headers:

- The :status header is converted to number.
- Duplicates of :status, :method, :authority, :scheme, :path, :protocol, age, authorization, access-control-allow-credentials, access-control-max-age, access-control-request-method, content-encoding, content-language, content-length, content-location, content-md5, content-range, content-type, date, dnt, etag, expires, from, host, if-match, if-modified-since, if-none-match, if-range, if-unmodified-since, last-modified, location, max-forwards, proxy-authorization, range, referer, retry-after, tk, upgrade-insecure-requests, user-agent or x-content-type-options are discarded.
- set-cookie is always an array. Duplicates are added to the array.
- For duplicate cookie headers, the values are joined together with '; '.
- For all other headers, the values are joined together with ', '.
```js
// ESM
import { createServer } from 'node:http2';

const server = createServer();
server.on('stream', (stream, headers) => {
  console.log(headers[':path']);
  console.log(headers.ABC);
});
```

```js
// CommonJS
const http2 = require('node:http2');

const server = http2.createServer();
server.on('stream', (stream, headers) => {
  console.log(headers[':path']);
  console.log(headers.ABC);
});
```
Raw headers#
In some APIs, in addition to object format, headers can also be passed oraccessed as a raw flat array, preserving details of ordering andduplicate keys to match the raw transmission format.
In this format the keys and values are in the same list. It is not a list of tuples. So, the even-numbered offsets are key values, and the odd-numbered offsets are the associated values. Duplicate headers are not merged and so each key-value pair will appear separately.
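Under that layout, a flat raw array can be regrouped into name/value pairs with a simple loop. A hypothetical helper (toPairs is not part of the node:http2 API):

```js
// Hypothetical helper: regroup a raw flat headers array into [name, value] pairs.
function toPairs(rawHeaders) {
  const pairs = [];
  for (let i = 0; i < rawHeaders.length; i += 2) {
    // Even offsets are header names, odd offsets are the associated values.
    pairs.push([rawHeaders[i], rawHeaders[i + 1]]);
  }
  return pairs;
}

console.log(toPairs([':status', '404', 'content-type', 'text/plain']));
// [ [ ':status', '404' ], [ 'content-type', 'text/plain' ] ]
```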
This can be useful for cases such as proxies, where existing headersshould be exactly forwarded as received, or as a performanceoptimization when the headers are already available in raw format.
```js
const rawHeaders = [
  ':status', '404',
  'content-type', 'text/plain',
];
stream.respond(rawHeaders);
```

Sensitive headers#
HTTP2 headers can be marked as sensitive, which means that the HTTP/2 header compression algorithm will never index them. This can make sense for header values with low entropy and that may be considered valuable to an attacker, for example Cookie or Authorization. To achieve this, add the header name to the [http2.sensitiveHeaders] property as an array:
```js
const headers = {
  ':status': '200',
  'content-type': 'text/plain',
  'cookie': 'some-cookie',
  'other-sensitive-header': 'very secret data',
  [http2.sensitiveHeaders]: ['cookie', 'other-sensitive-header'],
};
stream.respond(headers);
```

For some headers, such as Authorization and short Cookie headers, this flag is set automatically.
This property is also set for received headers. It will contain the names ofall headers marked as sensitive, including ones marked that way automatically.
For raw headers, this should still be set as a property on the array, like rawHeadersArray[http2.sensitiveHeaders] = ['cookie'], not as a separate key and value pair within the array itself.
Settings object#
History
| Version | Changes |
|---|---|
| v12.12.0 | The |
| v8.9.3 | The |
| v8.4.0 | Added in: v8.4.0 |
The http2.getDefaultSettings(), http2.getPackedSettings(), http2.createServer(), http2.createSecureServer(), http2session.settings(), http2session.localSettings, and http2session.remoteSettings APIs either return or receive as input an object that defines configuration settings for an Http2Session object. These objects are ordinary JavaScript objects containing the following properties.
- headerTableSize <number> Specifies the maximum number of bytes used for header compression. The minimum allowed value is 0. The maximum allowed value is 2^32-1. Default: 4096.
- enablePush <boolean> Specifies true if HTTP/2 Push Streams are to be permitted on the Http2Session instances. Default: true.
- initialWindowSize <number> Specifies the sender's initial window size in bytes for stream-level flow control. The minimum allowed value is 0. The maximum allowed value is 2^32-1. Default: 65535.
- maxFrameSize <number> Specifies the size in bytes of the largest frame payload. The minimum allowed value is 16,384. The maximum allowed value is 2^24-1. Default: 16384.
- maxConcurrentStreams <number> Specifies the maximum number of concurrent streams permitted on an Http2Session. There is no default value which implies, at least theoretically, 2^32-1 streams may be open concurrently at any given time in an Http2Session. The minimum value is 0. The maximum allowed value is 2^32-1. Default: 4294967295.
- maxHeaderListSize <number> Specifies the maximum size (uncompressed octets) of header list that will be accepted. The minimum allowed value is 0. The maximum allowed value is 2^32-1. Default: 65535.
- maxHeaderSize <number> Alias for maxHeaderListSize.
- enableConnectProtocol <boolean> Specifies true if the "Extended Connect Protocol" defined by RFC 8441 is to be enabled. This setting is only meaningful if sent by the server. Once the enableConnectProtocol setting has been enabled for a given Http2Session, it cannot be disabled. Default: false.
- customSettings <Object> Specifies additional settings, not yet implemented in Node.js and the underlying libraries. The key of the object defines the numeric value of the settings type (as defined in the "HTTP/2 SETTINGS" registry established by RFC 7540) and the values the actual numeric value of the settings. The settings type has to be an integer in the range from 1 to 2^16-1. It should not be a settings type already handled by Node.js, i.e. currently it should be greater than 6, although it is not an error. The values need to be unsigned integers in the range from 0 to 2^32-1. Currently, a maximum of 10 custom settings is supported. It is only supported for sending SETTINGS, or for receiving settings values specified in the remoteCustomSettings options of the server or client object. Do not mix the customSettings mechanism for a settings id with interfaces for the natively handled settings, in case a setting becomes natively supported in a future Node.js version.
All additional properties on the settings object are ignored.
Error handling#
There are several types of error conditions that may arise when using the node:http2 module:
Validation errors occur when an incorrect argument, option, or setting value is passed in. These will always be reported by a synchronous throw.

State errors occur when an action is attempted at an incorrect time (for instance, attempting to send data on a stream after it has closed). These will be reported using either a synchronous throw or via an 'error' event on the Http2Stream, Http2Session or HTTP/2 Server objects, depending on where and when the error occurs.

Internal errors occur when an HTTP/2 session fails unexpectedly. These will be reported via an 'error' event on the Http2Session or HTTP/2 Server objects.

Protocol errors occur when various HTTP/2 protocol constraints are violated. These will be reported using either a synchronous throw or via an 'error' event on the Http2Stream, Http2Session or HTTP/2 Server objects, depending on where and when the error occurs.
Invalid character handling in header names and values#
The HTTP/2 implementation applies stricter handling of invalid characters inHTTP header names and values than the HTTP/1 implementation.
Header field names are case-insensitive and are transmitted over the wire strictly as lower-case strings. The API provided by Node.js allows header names to be set as mixed-case strings (e.g. Content-Type) but will convert those to lower-case (e.g. content-type) upon transmission.

Header field-names must only contain one or more of the following ASCII characters: a-z, A-Z, 0-9, !, #, $, %, &, ', *, +, -, ., ^, _, ` (backtick), |, and ~.
Using invalid characters within an HTTP header field name will cause thestream to be closed with a protocol error being reported.
Header field values are handled with more leniency but should not contain new-line or carriage return characters and should be limited to US-ASCII characters, per the requirements of the HTTP specification.
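The allowed-character rule for field names can be checked ahead of time with a regular expression; this validator is a hypothetical sketch, not part of the node:http2 API:

```js
// Hypothetical pre-check mirroring the allowed token characters listed above.
const TOKEN = /^[a-zA-Z0-9!#$%&'*+\-.^_`|~]+$/;

function isValidFieldName(name) {
  return TOKEN.test(name);
}

console.log(isValidFieldName('content-type')); // true
console.log(isValidFieldName('bad header'));   // false (space is not allowed)
```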
Push streams on the client#
To receive pushed streams on the client, set a listener for the 'stream' event on the ClientHttp2Session:
```js
// ESM
import { connect } from 'node:http2';

const client = connect('http://localhost');

client.on('stream', (pushedStream, requestHeaders) => {
  pushedStream.on('push', (responseHeaders) => {
    // Process response headers
  });
  pushedStream.on('data', (chunk) => { /* handle pushed data */ });
});

const req = client.request({ ':path': '/' });
```

```js
// CommonJS
const http2 = require('node:http2');

const client = http2.connect('http://localhost');

client.on('stream', (pushedStream, requestHeaders) => {
  pushedStream.on('push', (responseHeaders) => {
    // Process response headers
  });
  pushedStream.on('data', (chunk) => { /* handle pushed data */ });
});

const req = client.request({ ':path': '/' });
```
Supporting the CONNECT method#

The CONNECT method is used to allow an HTTP/2 server to be used as a proxy for TCP/IP connections.
A simple TCP Server:
```js
// ESM
import { createServer } from 'node:net';

const server = createServer((socket) => {
  let name = '';
  socket.setEncoding('utf8');
  socket.on('data', (chunk) => name += chunk);
  socket.on('end', () => socket.end(`hello ${name}`));
});
server.listen(8000);
```

```js
// CommonJS
const net = require('node:net');

const server = net.createServer((socket) => {
  let name = '';
  socket.setEncoding('utf8');
  socket.on('data', (chunk) => name += chunk);
  socket.on('end', () => socket.end(`hello ${name}`));
});
server.listen(8000);
```
An HTTP/2 CONNECT proxy:
```js
// ESM
import { createServer, constants } from 'node:http2';
const { NGHTTP2_REFUSED_STREAM, NGHTTP2_CONNECT_ERROR } = constants;
import { connect } from 'node:net';

const proxy = createServer();
proxy.on('stream', (stream, headers) => {
  if (headers[':method'] !== 'CONNECT') {
    // Only accept CONNECT requests
    stream.close(NGHTTP2_REFUSED_STREAM);
    return;
  }
  const auth = new URL(`tcp://${headers[':authority']}`);
  // It's a very good idea to verify that hostname and port are
  // things this proxy should be connecting to.
  const socket = connect(auth.port, auth.hostname, () => {
    stream.respond();
    socket.pipe(stream);
    stream.pipe(socket);
  });
  socket.on('error', (error) => {
    stream.close(NGHTTP2_CONNECT_ERROR);
  });
});
proxy.listen(8001);
```

```js
// CommonJS
const http2 = require('node:http2');
const { NGHTTP2_REFUSED_STREAM } = http2.constants;
const net = require('node:net');

const proxy = http2.createServer();
proxy.on('stream', (stream, headers) => {
  if (headers[':method'] !== 'CONNECT') {
    // Only accept CONNECT requests
    stream.close(NGHTTP2_REFUSED_STREAM);
    return;
  }
  const auth = new URL(`tcp://${headers[':authority']}`);
  // It's a very good idea to verify that hostname and port are
  // things this proxy should be connecting to.
  const socket = net.connect(auth.port, auth.hostname, () => {
    stream.respond();
    socket.pipe(stream);
    stream.pipe(socket);
  });
  socket.on('error', (error) => {
    stream.close(http2.constants.NGHTTP2_CONNECT_ERROR);
  });
});
proxy.listen(8001);
```
An HTTP/2 CONNECT client:
```js
// ESM
import { connect, constants } from 'node:http2';

const client = connect('http://localhost:8001');

// Must not specify the ':path' and ':scheme' headers
// for CONNECT requests or an error will be thrown.
const req = client.request({
  ':method': 'CONNECT',
  ':authority': 'localhost:8000',
});

req.on('response', (headers) => {
  console.log(headers[constants.HTTP2_HEADER_STATUS]);
});
let data = '';
req.setEncoding('utf8');
req.on('data', (chunk) => data += chunk);
req.on('end', () => {
  console.log(`The server says: ${data}`);
  client.close();
});
req.end('Jane');
```

```js
// CommonJS
const http2 = require('node:http2');

const client = http2.connect('http://localhost:8001');

// Must not specify the ':path' and ':scheme' headers
// for CONNECT requests or an error will be thrown.
const req = client.request({
  ':method': 'CONNECT',
  ':authority': 'localhost:8000',
});

req.on('response', (headers) => {
  console.log(headers[http2.constants.HTTP2_HEADER_STATUS]);
});
let data = '';
req.setEncoding('utf8');
req.on('data', (chunk) => data += chunk);
req.on('end', () => {
  console.log(`The server says: ${data}`);
  client.close();
});
req.end('Jane');
```
The extended CONNECT protocol#

RFC 8441 defines an "Extended CONNECT Protocol" extension to HTTP/2 that may be used to bootstrap the use of an Http2Stream using the CONNECT method as a tunnel for other communication protocols (such as WebSockets).

The use of the Extended CONNECT Protocol is enabled by HTTP/2 servers by using the enableConnectProtocol setting:
```js
// ESM
import { createServer } from 'node:http2';

const settings = { enableConnectProtocol: true };
const server = createServer({ settings });
```

```js
// CommonJS
const http2 = require('node:http2');

const settings = { enableConnectProtocol: true };
const server = http2.createServer({ settings });
```
Once the client receives the SETTINGS frame from the server indicating that the extended CONNECT may be used, it may send CONNECT requests that use the ':protocol' HTTP/2 pseudo-header:
```js
// ESM
import { connect } from 'node:http2';

const client = connect('http://localhost:8080');
client.on('remoteSettings', (settings) => {
  if (settings.enableConnectProtocol) {
    const req = client.request({ ':method': 'CONNECT', ':protocol': 'foo' });
    // ...
  }
});
```

```js
// CommonJS
const http2 = require('node:http2');

const client = http2.connect('http://localhost:8080');
client.on('remoteSettings', (settings) => {
  if (settings.enableConnectProtocol) {
    const req = client.request({ ':method': 'CONNECT', ':protocol': 'foo' });
    // ...
  }
});
```
Compatibility API#
The Compatibility API has the goal of providing a similar developer experience to HTTP/1 when using HTTP/2, making it possible to develop applications that support both HTTP/1 and HTTP/2. This API targets only the public API of HTTP/1. However, many modules use internal methods or state, and those are not supported as it is a completely different implementation.

The following example creates an HTTP/2 server using the compatibility API:
```js
// ESM
import { createServer } from 'node:http2';

const server = createServer((req, res) => {
  res.setHeader('Content-Type', 'text/html');
  res.setHeader('X-Foo', 'bar');
  res.writeHead(200, { 'Content-Type': 'text/plain; charset=utf-8' });
  res.end('ok');
});
```

```js
// CommonJS
const http2 = require('node:http2');

const server = http2.createServer((req, res) => {
  res.setHeader('Content-Type', 'text/html');
  res.setHeader('X-Foo', 'bar');
  res.writeHead(200, { 'Content-Type': 'text/plain; charset=utf-8' });
  res.end('ok');
});
```
In order to create a mixed HTTPS and HTTP/2 server, refer to the ALPN negotiation section. Upgrading from non-TLS HTTP/1 servers is not supported.

The HTTP/2 compatibility API is composed of Http2ServerRequest and Http2ServerResponse. They aim at API compatibility with HTTP/1, but they do not hide the differences between the protocols. As an example, the status message for HTTP codes is ignored.
ALPN negotiation#
ALPN negotiation allows supporting both HTTPS and HTTP/2 over the same socket. The req and res objects can be either HTTP/1 or HTTP/2, and an application must restrict itself to the public API of HTTP/1, and detect if it is possible to use the more advanced features of HTTP/2.
The following example creates a server that supports both protocols:
```js
// ESM
import { createSecureServer } from 'node:http2';
import { readFileSync } from 'node:fs';

const cert = readFileSync('./cert.pem');
const key = readFileSync('./key.pem');
const server = createSecureServer(
  { cert, key, allowHTTP1: true },
  onRequest,
).listen(8000);

function onRequest(req, res) {
  // Detects if it is a HTTPS request or HTTP/2
  const { socket: { alpnProtocol } } = req.httpVersion === '2.0' ?
    req.stream.session : req;
  res.writeHead(200, { 'content-type': 'application/json' });
  res.end(JSON.stringify({
    alpnProtocol,
    httpVersion: req.httpVersion,
  }));
}
```

```js
// CommonJS
const { createSecureServer } = require('node:http2');
const { readFileSync } = require('node:fs');

const cert = readFileSync('./cert.pem');
const key = readFileSync('./key.pem');
const server = createSecureServer(
  { cert, key, allowHTTP1: true },
  onRequest,
).listen(4443);

function onRequest(req, res) {
  // Detects if it is a HTTPS request or HTTP/2
  const { socket: { alpnProtocol } } = req.httpVersion === '2.0' ?
    req.stream.session : req;
  res.writeHead(200, { 'content-type': 'application/json' });
  res.end(JSON.stringify({
    alpnProtocol,
    httpVersion: req.httpVersion,
  }));
}
```
The 'request' event works identically on both HTTPS and HTTP/2.

Class: http2.Http2ServerRequest#
- Extends:<stream.Readable>
An Http2ServerRequest object is created by http2.Server or http2.SecureServer and passed as the first argument to the 'request' event. It may be used to access a request status, headers, and data.

Event: 'aborted'#

The 'aborted' event is emitted whenever an Http2ServerRequest instance is abnormally aborted in mid-communication.

The 'aborted' event will only be emitted if the Http2ServerRequest writable side has not been ended.

Event: 'close'#

Indicates that the underlying Http2Stream was closed. Just like 'end', this event occurs only once per response.
request.aborted#
- Type:<boolean>
The request.aborted property will be true if the request has been aborted.
request.authority#
- Type:<string>
The request authority pseudo header field. Because HTTP/2 allows requests to set either :authority or host, this value is derived from req.headers[':authority'] if present. Otherwise, it is derived from req.headers['host'].
request.complete#
- Type:<boolean>
The request.complete property will be true if the request has been completed, aborted, or destroyed.
request.connection#
Use request.socket.
- Type: <net.Socket> | <tls.TLSSocket>

See request.socket.
request.destroy([error])#
- error <Error>

Calls destroy() on the Http2Stream that received the Http2ServerRequest. If error is provided, an 'error' event is emitted and error is passed as an argument to any listeners on the event.
It does nothing if the stream was already destroyed.
request.headers#
- Type:<Object>
The request/response headers object.
Key-value pairs of header names and values. Header names are lower-cased.
```js
// Prints something like:
//
// { 'user-agent': 'curl/7.22.0',
//   host: '127.0.0.1:8000',
//   accept: '*/*' }
console.log(request.headers);
```

In HTTP/2, the request path, host name, protocol, and method are represented as special headers prefixed with the : character (e.g. ':path'). These special headers will be included in the request.headers object. Care must be taken not to inadvertently modify these special headers or errors may occur. For instance, removing all headers from the request will cause errors to occur:
```js
removeAllHeaders(request.headers);
assert(request.url);   // Fails because the :path header has been removed
```

request.httpVersion#
- Type:<string>
In case of server request, the HTTP version sent by the client. In the case of client response, the HTTP version of the connected-to server. Returns '2.0'.

Also message.httpVersionMajor is the first integer and message.httpVersionMinor is the second.
request.method#
- Type:<string>
The request method as a string. Read-only. Examples:'GET','DELETE'.
request.rawHeaders#
- Type:<HTTP/2 Raw Headers>
The raw request/response headers list exactly as they were received.
```js
// Prints something like:
//
// [ 'user-agent',
//   'this is invalid because there can be only one',
//   'User-Agent',
//   'curl/7.22.0',
//   'Host',
//   '127.0.0.1:8000',
//   'ACCEPT',
//   '*/*' ]
console.log(request.rawHeaders);
```

request.rawTrailers#
- Type:<string[]>
The raw request/response trailer keys and values exactly as they were received. Only populated at the 'end' event.
request.scheme#
- Type:<string>
The request scheme pseudo header field indicating the scheme portion of the target URL.
request.setTimeout(msecs, callback)#
- msecs <number>
- callback <Function>
- Returns: <http2.Http2ServerRequest>

Sets the Http2Stream's timeout value to msecs. If a callback is provided, then it is added as a listener on the 'timeout' event on the response object.

If no 'timeout' listener is added to the request, the response, or the server, then Http2Streams are destroyed when they time out. If a handler is assigned to the request, the response, or the server's 'timeout' events, timed out sockets must be handled explicitly.
request.socket#
- Type:<net.Socket> |<tls.TLSSocket>
Returns a Proxy object that acts as a net.Socket (or tls.TLSSocket) but applies getters, setters, and methods based on HTTP/2 logic.

destroyed, readable, and writable properties will be retrieved from and set on request.stream.

destroy, emit, end, on and once methods will be called on request.stream.

setTimeout method will be called on request.stream.session.

pause, read, resume, and write will throw an error with code ERR_HTTP2_NO_SOCKET_MANIPULATION. See Http2Session and Sockets for more information.

All other interactions will be routed directly to the socket. With TLS support, use request.socket.getPeerCertificate() to obtain the client's authentication details.
request.trailers#
- Type:<Object>
The request/response trailers object. Only populated at the 'end' event.
request.url#
- Type:<string>
Request URL string. This contains only the URL that is present in the actualHTTP request. If the request is:
```
GET /status?name=ryan HTTP/1.1
Accept: text/plain
```

Then request.url will be:

```js
'/status?name=ryan'
```

To parse the url into its parts, new URL() can be used:

```console
$ node
> new URL('/status?name=ryan', 'http://example.com')
URL {
  href: 'http://example.com/status?name=ryan',
  origin: 'http://example.com',
  protocol: 'http:',
  username: '',
  password: '',
  host: 'example.com',
  hostname: 'example.com',
  port: '',
  pathname: '/status',
  search: '?name=ryan',
  searchParams: URLSearchParams { 'name' => 'ryan' },
  hash: ''
}
```

Class: http2.Http2ServerResponse#
- Extends:<Stream>
This object is created internally by an HTTP server, not by the user. It ispassed as the second parameter to the'request' event.
Event: 'close'#

Indicates that the underlying Http2Stream was terminated before response.end() was called or able to flush.

Event: 'finish'#

Emitted when the response has been sent. More specifically, this event is emitted when the last segment of the response headers and body have been handed off to the HTTP/2 multiplexing for transmission over the network. It does not imply that the client has received anything yet.
After this event, no more events will be emitted on the response object.
response.addTrailers(headers)#
- headers <Object>
This method adds HTTP trailing headers (a header but at the end of themessage) to the response.
Attempting to set a header field name or value that contains invalid characters will result in a TypeError being thrown.
response.appendHeader(name, value)#
- name <string>
- value <string> | <string[]>
Append a single header value to the header object.
If the value is an array, this is equivalent to calling this method multipletimes.
If there were no previous values for the header, this is equivalent to calling response.setHeader().

Attempting to set a header field name or value that contains invalid characters will result in a TypeError being thrown.
```js
// Returns headers including "set-cookie: a" and "set-cookie: b"
const server = http2.createServer((req, res) => {
  res.setHeader('set-cookie', 'a');
  res.appendHeader('set-cookie', 'b');
  res.writeHead(200);
  res.end('ok');
});
```

response.connection#
Use response.socket.
- Type: <net.Socket> | <tls.TLSSocket>

See response.socket.
response.createPushResponse(headers, callback)#
History
| Version | Changes |
|---|---|
| v18.0.0 | Passing an invalid callback to the |
| v8.4.0 | Added in: v8.4.0 |
- headers <HTTP/2 Headers Object> An object describing the headers
- callback <Function> Called once http2stream.pushStream() is finished, or either when the attempt to create the pushed Http2Stream has failed or has been rejected, or the state of Http2ServerRequest is closed prior to calling the http2stream.pushStream() method
  - err <Error>
  - res <http2.Http2ServerResponse> The newly-created Http2ServerResponse object

Call http2stream.pushStream() with the given headers, and wrap the given Http2Stream on a newly created Http2ServerResponse as the callback parameter if successful. When Http2ServerRequest is closed, the callback is called with an error ERR_HTTP2_INVALID_STREAM.
response.end([data[, encoding]][, callback])#
History
| Version | Changes |
|---|---|
| v10.0.0 | This method now returns a reference to |
| v8.4.0 | Added in: v8.4.0 |
- data <string> | <Buffer> | <Uint8Array>
- encoding <string>
- callback <Function>
- Returns: <this>
This method signals to the server that all of the response headers and body have been sent; the server should consider this message complete. The method, response.end(), MUST be called on each response.

If data is specified, it is equivalent to calling response.write(data, encoding) followed by response.end(callback).

If callback is specified, it will be called when the response stream is finished.
response.finished#
Use response.writableEnded.
- Type: <boolean>

Boolean value that indicates whether the response has completed. Starts as false. After response.end() executes, the value will be true.
response.getHeader(name)#
Reads out a header that has already been queued but not sent to the client.The name is case-insensitive.
```js
const contentType = response.getHeader('content-type');
```

response.getHeaderNames()#
- Returns:<string[]>
Returns an array containing the unique names of the current outgoing headers.All header names are lowercase.
```js
response.setHeader('Foo', 'bar');
response.setHeader('Set-Cookie', ['foo=bar', 'bar=baz']);

const headerNames = response.getHeaderNames();
// headerNames === ['foo', 'set-cookie']
```

response.getHeaders()#
- Returns:<Object>
Returns a shallow copy of the current outgoing headers. Since a shallow copyis used, array values may be mutated without additional calls to variousheader-related http module methods. The keys of the returned object are theheader names and the values are the respective header values. All header namesare lowercase.
The object returned by the response.getHeaders() method does not prototypically inherit from the JavaScript Object. This means that typical Object methods such as obj.toString(), obj.hasOwnProperty(), and others are not defined and will not work.

```js
response.setHeader('Foo', 'bar');
response.setHeader('Set-Cookie', ['foo=bar', 'bar=baz']);

const headers = response.getHeaders();
// headers === { foo: 'bar', 'set-cookie': ['foo=bar', 'bar=baz'] }
```

response.hasHeader(name)#
Returns true if the header identified by name is currently set in the outgoing headers. The header name matching is case-insensitive.

```js
const hasContentType = response.hasHeader('content-type');
```

response.headersSent#
- Type:<boolean>
True if headers were sent, false otherwise (read-only).
response.removeHeader(name)#
- name <string>

Removes a header that has been queued for implicit sending.

response.removeHeader('Content-Encoding');

response.sendDate#

- Type: <boolean>

When true, the Date header will be automatically generated and sent in the response if it is not already present in the headers. Defaults to true.
This should only be disabled for testing; HTTP requires the Date headerin responses.
response.setHeader(name, value)#
- name <string>
- value <string> | <string[]>

Sets a single header value for implicit headers. If this header already exists in the to-be-sent headers, its value will be replaced. Use an array of strings here to send multiple headers with the same name.

response.setHeader('Content-Type', 'text/html; charset=utf-8');

or

response.setHeader('Set-Cookie', ['type=ninja', 'language=javascript']);

Attempting to set a header field name or value that contains invalid characters will result in a TypeError being thrown.

When headers have been set with response.setHeader(), they will be merged with any headers passed to response.writeHead(), with the headers passed to response.writeHead() given precedence.
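The precedence rule can be modeled with plain objects (the real merge happens inside the module; this is only an illustrative sketch of the outcome):

```javascript
// Headers accumulated via response.setHeader() calls (modeled as an object).
const setViaSetHeader = {
  'content-type': 'text/html; charset=utf-8',
  'x-foo': 'bar',
};

// Headers passed directly to response.writeHead().
const passedToWriteHead = {
  'content-type': 'text/plain; charset=utf-8',
};

// Later spread sources override earlier ones, so writeHead() wins on conflict
// while setHeader()-only headers survive.
const merged = { ...setViaSetHeader, ...passedToWriteHead };
console.log(merged['content-type']); // 'text/plain; charset=utf-8'
console.log(merged['x-foo']);        // 'bar'
```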
// Returns content-type = text/plain
const server = http2.createServer((req, res) => {
  res.setHeader('Content-Type', 'text/html; charset=utf-8');
  res.setHeader('X-Foo', 'bar');
  res.writeHead(200, { 'Content-Type': 'text/plain; charset=utf-8' });
  res.end('ok');
});

response.setTimeout(msecs[, callback])#
- msecs <number>
- callback <Function>
- Returns: <http2.Http2ServerResponse>

Sets the Http2Stream's timeout value to msecs. If a callback is provided, then it is added as a listener on the 'timeout' event on the response object.

If no 'timeout' listener is added to the request, the response, or the server, then Http2Streams are destroyed when they time out. If a handler is assigned to the request, the response, or the server's 'timeout' events, timed-out sockets must be handled explicitly.
response.socket#
- Type:<net.Socket> |<tls.TLSSocket>
Returns a Proxy object that acts as a net.Socket (or tls.TLSSocket) but applies getters, setters, and methods based on HTTP/2 logic.

The destroyed, readable, and writable properties will be retrieved from and set on response.stream.

The destroy, emit, end, on, and once methods will be called on response.stream.

The setTimeout method will be called on response.stream.session.

pause, read, resume, and write will throw an error with code ERR_HTTP2_NO_SOCKET_MANIPULATION. See Http2Session and Sockets for more information.

All other interactions will be routed directly to the socket.
import { createServer } from 'node:http2';

const server = createServer((req, res) => {
  const ip = req.socket.remoteAddress;
  const port = req.socket.remotePort;
  res.end(`Your IP address is ${ip} and your source port is ${port}.`);
}).listen(3000);

const http2 = require('node:http2');

const server = http2.createServer((req, res) => {
  const ip = req.socket.remoteAddress;
  const port = req.socket.remotePort;
  res.end(`Your IP address is ${ip} and your source port is ${port}.`);
}).listen(3000);
response.statusCode#
- Type:<number>
When using implicit headers (not calling response.writeHead() explicitly), this property controls the status code that will be sent to the client when the headers get flushed.

response.statusCode = 404;

After the response header has been sent to the client, this property indicates the status code which was sent out.
response.statusMessage#
- Type:<string>
Status message is not supported by HTTP/2 (RFC 7540 8.1.2.4). It returns an empty string.
response.writableEnded#
- Type:<boolean>
Is true after response.end() has been called. This property does not indicate whether the data has been flushed; for this, use writable.writableFinished instead.
response.write(chunk[, encoding][, callback])#
- chunk <string> | <Buffer> | <Uint8Array>
- encoding <string>
- callback <Function>
- Returns: <boolean>

If this method is called and response.writeHead() has not been called, it will switch to implicit header mode and flush the implicit headers.

This sends a chunk of the response body. This method may be called multiple times to provide successive parts of the body.

In the node:http module, the response body is omitted when the request is a HEAD request. Similarly, the 204 and 304 responses must not include a message body.

chunk can be a string or a buffer. If chunk is a string, the second parameter specifies how to encode it into a byte stream. By default the encoding is 'utf8'. callback will be called when this chunk of data is flushed.

This is the raw HTTP body and has nothing to do with higher-level multi-part body encodings that may be used.

The first time response.write() is called, it will send the buffered header information and the first chunk of the body to the client. The second time response.write() is called, Node.js assumes data will be streamed, and sends the new data separately. That is, the response is buffered up to the first chunk of the body.

Returns true if the entire data was flushed successfully to the kernel buffer. Returns false if all or part of the data was queued in user memory. 'drain' will be emitted when the buffer is free again.
response.writeContinue()#
Sends a status 100 Continue to the client, indicating that the request body should be sent. See the 'checkContinue' event on Http2Server and Http2SecureServer.
response.writeEarlyHints(hints)#
hints<Object>
Sends a status 103 Early Hints to the client with a Link header, indicating that the user agent can preload/preconnect the linked resources. The hints argument is an object containing the values of headers to be sent with the early hints message.

Example

const earlyHintsLink = '</styles.css>; rel=preload; as=style';
response.writeEarlyHints({
  'link': earlyHintsLink,
});

const earlyHintsLinks = [
  '</styles.css>; rel=preload; as=style',
  '</scripts.js>; rel=preload; as=script',
];
response.writeEarlyHints({
  'link': earlyHintsLinks,
});

response.writeHead(statusCode[, statusMessage][, headers])#
History
| Version | Changes |
|---|---|
| v11.10.0, v10.17.0 | Return |
| v8.4.0 | Added in: v8.4.0 |
- statusCode <number>
- statusMessage <string>
- headers <HTTP/2 Headers Object> | <HTTP/2 Raw Headers>
- Returns: <http2.Http2ServerResponse>

Sends a response header to the request. The status code is a 3-digit HTTP status code, like 404. The last argument, headers, are the response headers.

Returns a reference to the Http2ServerResponse, so that calls can be chained.

For compatibility with HTTP/1, a human-readable statusMessage may be passed as the second argument. However, because the statusMessage has no meaning within HTTP/2, the argument will have no effect and a process warning will be emitted.

const body = 'hello world';
response.writeHead(200, {
  'Content-Length': Buffer.byteLength(body),
  'Content-Type': 'text/plain; charset=utf-8',
});

Content-Length is given in bytes not characters. The Buffer.byteLength() API may be used to determine the number of bytes in a given encoding. On outbound messages, Node.js does not check if Content-Length and the length of the body being transmitted are equal or not. However, when receiving messages, Node.js will automatically reject messages when the Content-Length does not match the actual payload size.
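Byte length and character count diverge as soon as the body contains multi-byte UTF-8 characters, which is why Buffer.byteLength() (rather than String#length) belongs in a Content-Length:

```javascript
const text = 'héllo wörld'; // contains two 2-byte UTF-8 characters

console.log(text.length);                     // 11 (characters)
console.log(Buffer.byteLength(text, 'utf8')); // 13 (bytes)
```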
This method may be called at most one time on a message beforeresponse.end() is called.
If response.write() or response.end() are called before calling this, the implicit/mutable headers will be calculated and this function will be called automatically.

When headers have been set with response.setHeader(), they will be merged with any headers passed to response.writeHead(), with the headers passed to response.writeHead() given precedence.

// Returns content-type = text/plain
const server = http2.createServer((req, res) => {
  res.setHeader('Content-Type', 'text/html; charset=utf-8');
  res.setHeader('X-Foo', 'bar');
  res.writeHead(200, { 'Content-Type': 'text/plain; charset=utf-8' });
  res.end('ok');
});

Attempting to set a header field name or value that contains invalid characters will result in a TypeError being thrown.
Collecting HTTP/2 performance metrics#
ThePerformance Observer API can be used to collect basic performancemetrics for eachHttp2Session andHttp2Stream instance.
import { PerformanceObserver } from 'node:perf_hooks';

const obs = new PerformanceObserver((items) => {
  const entry = items.getEntries()[0];
  console.log(entry.entryType);  // prints 'http2'
  if (entry.name === 'Http2Session') {
    // Entry contains statistics about the Http2Session
  } else if (entry.name === 'Http2Stream') {
    // Entry contains statistics about the Http2Stream
  }
});
obs.observe({ entryTypes: ['http2'] });

const { PerformanceObserver } = require('node:perf_hooks');

const obs = new PerformanceObserver((items) => {
  const entry = items.getEntries()[0];
  console.log(entry.entryType);  // prints 'http2'
  if (entry.name === 'Http2Session') {
    // Entry contains statistics about the Http2Session
  } else if (entry.name === 'Http2Stream') {
    // Entry contains statistics about the Http2Stream
  }
});
obs.observe({ entryTypes: ['http2'] });
The entryType property of the PerformanceEntry will be equal to 'http2'.

The name property of the PerformanceEntry will be equal to either 'Http2Stream' or 'Http2Session'.
If name is equal to Http2Stream, the PerformanceEntry will contain the following additional properties:

- bytesRead <number> The number of DATA frame bytes received for this Http2Stream.
- bytesWritten <number> The number of DATA frame bytes sent for this Http2Stream.
- id <number> The identifier of the associated Http2Stream
- timeToFirstByte <number> The number of milliseconds elapsed between the PerformanceEntry startTime and the reception of the first DATA frame.
- timeToFirstByteSent <number> The number of milliseconds elapsed between the PerformanceEntry startTime and sending of the first DATA frame.
- timeToFirstHeader <number> The number of milliseconds elapsed between the PerformanceEntry startTime and the reception of the first header.
If name is equal to Http2Session, the PerformanceEntry will contain the following additional properties:

- bytesRead <number> The number of bytes received for this Http2Session.
- bytesWritten <number> The number of bytes sent for this Http2Session.
- framesReceived <number> The number of HTTP/2 frames received by the Http2Session.
- framesSent <number> The number of HTTP/2 frames sent by the Http2Session.
- maxConcurrentStreams <number> The maximum number of streams concurrently open during the lifetime of the Http2Session.
- pingRTT <number> The number of milliseconds elapsed since the transmission of a PING frame and the reception of its acknowledgment. Only present if a PING frame has been sent on the Http2Session.
- streamAverageDuration <number> The average duration (in milliseconds) for all Http2Stream instances.
- streamCount <number> The number of Http2Stream instances processed by the Http2Session.
- type <string> Either 'server' or 'client' to identify the type of Http2Session.
Note on:authority andhost#
HTTP/2 requires requests to have either the :authority pseudo-header or the host header. Prefer :authority when constructing an HTTP/2 request directly, and host when converting from HTTP/1 (in proxies, for instance).

The compatibility API falls back to host if :authority is not present. See request.authority for more information. However, if you don't use the compatibility API (or use req.headers directly), you need to implement any fall-back behavior yourself.
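When reading req.headers directly, that fall-back can be as simple as this hypothetical helper (authorityOf is not a Node.js API; it only illustrates the lookup order):

```javascript
// Hypothetical helper: prefer the :authority pseudo-header, fall back to host.
function authorityOf(headers) {
  return headers[':authority'] ?? headers.host;
}

console.log(authorityOf({ ':authority': 'example.com' })); // 'example.com'
console.log(authorityOf({ host: 'fallback.example' }));    // 'fallback.example'
```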
HTTPS#
Source Code:lib/https.js
HTTPS is the HTTP protocol over TLS/SSL. In Node.js this is implemented as aseparate module.
Determining if crypto support is unavailable#
It is possible for Node.js to be built without including support for the node:crypto module. In such cases, attempting to import from https or calling require('node:https') will result in an error being thrown.
When using CommonJS, the error thrown can be caught using try/catch:
let https;
try {
  https = require('node:https');
} catch (err) {
  console.error('https support is disabled!');
}

When using the lexical ESM import keyword, the error can only be caught if a handler for process.on('uncaughtException') is registered before any attempt to load the module is made (using, for instance, a preload module).
When using ESM, if there is a chance that the code may be run on a buildof Node.js where crypto support is not enabled, consider using theimport() function instead of the lexicalimport keyword:
let https;
try {
  https = await import('node:https');
} catch (err) {
  console.error('https support is disabled!');
}

Class: https.Agent#
History
| Version | Changes |
|---|---|
| v5.3.0 | support |
| v2.5.0 | parameter |
| v0.4.5 | Added in: v0.4.5 |
An Agent object for HTTPS similar to http.Agent. See https.request() for more information.

Like http.Agent, the createConnection(options[, callback]) method can be overridden to customize how TLS connections are established.

See agent.createConnection() for details on overriding this method, including asynchronous socket creation with a callback.
new Agent([options])#
History
| Version | Changes |
|---|---|
| v24.5.0 | Add support for |
| v24.5.0 | Add support for |
| v12.5.0 | do not automatically set servername if the target host was specified using an IP address. |
- options <Object> Set of configurable options to set on the agent. Can have the same fields as for http.Agent(options), and
  - maxCachedSessions <number> maximum number of TLS cached sessions. Use 0 to disable TLS session caching. Default: 100.
  - servername <string> the value of Server Name Indication extension to be sent to the server. Use empty string '' to disable sending the extension. Default: host name of the target server, unless the target server is specified using an IP address, in which case the default is '' (no extension).

See Session Resumption for information about TLS session reuse.
Event:'keylog'#
- line <Buffer> Line of ASCII text, in NSS SSLKEYLOGFILE format.
- tlsSocket <tls.TLSSocket> The tls.TLSSocket instance on which it was generated.

The keylog event is emitted when key material is generated or received by a connection managed by this agent (typically before the handshake has completed, but not necessarily). This keying material can be stored for debugging, as it allows captured TLS traffic to be decrypted. It may be emitted multiple times for each socket.

A typical use case is to append received lines to a common text file, which is later used by software (such as Wireshark) to decrypt the traffic:

// ...
https.globalAgent.on('keylog', (line, tlsSocket) => {
  fs.appendFileSync('/tmp/ssl-keys.log', line, { mode: 0o600 });
});

Class: https.Server#
- Extends:<tls.Server>
Seehttp.Server for more information.
server.close([callback])#
callback<Function>- Returns:<https.Server>
Seeserver.close() in thenode:http module.
server[Symbol.asyncDispose]()#
History
| Version | Changes |
|---|---|
| v24.2.0 | No longer experimental. |
| v20.4.0 | Added in: v20.4.0 |
Calls server.close() and returns a promise that fulfills when the server has closed.
server.closeAllConnections()#
Seeserver.closeAllConnections() in thenode:http module.
server.closeIdleConnections()#
Seeserver.closeIdleConnections() in thenode:http module.
server.headersTimeout#
- Type: <number> Default: 60000
Seeserver.headersTimeout in thenode:http module.
server.listen()#
Starts the HTTPS server listening for encrypted connections.This method is identical toserver.listen() fromnet.Server.
server.requestTimeout#
History
| Version | Changes |
|---|---|
| v18.0.0 | The default request timeout changed from no timeout to 300s (5 minutes). |
| v14.11.0 | Added in: v14.11.0 |
- Type: <number> Default: 300000
Seeserver.requestTimeout in thenode:http module.
server.setTimeout([msecs][, callback])#
- msecs <number> Default: 120000 (2 minutes)
- callback <Function>
- Returns: <https.Server>
Seeserver.setTimeout() in thenode:http module.
server.timeout#
History
| Version | Changes |
|---|---|
| v13.0.0 | The default timeout changed from 120s to 0 (no timeout). |
| v0.11.2 | Added in: v0.11.2 |
- Type: <number> Default: 0 (no timeout)
Seeserver.timeout in thenode:http module.
server.keepAliveTimeout#
- Type: <number> Default: 5000 (5 seconds)
Seeserver.keepAliveTimeout in thenode:http module.
https.createServer([options][, requestListener])#
- options <Object> Accepts options from tls.createServer(), tls.createSecureContext() and http.createServer().
- requestListener <Function> A listener to be added to the 'request' event.
- Returns: <https.Server>
// curl -k https://localhost:8000/
import { createServer } from 'node:https';
import { readFileSync } from 'node:fs';

const options = {
  key: readFileSync('private-key.pem'),
  cert: readFileSync('certificate.pem'),
};

createServer(options, (req, res) => {
  res.writeHead(200);
  res.end('hello world\n');
}).listen(8000);

// curl -k https://localhost:8000/
const https = require('node:https');
const fs = require('node:fs');

const options = {
  key: fs.readFileSync('private-key.pem'),
  cert: fs.readFileSync('certificate.pem'),
};

https.createServer(options, (req, res) => {
  res.writeHead(200);
  res.end('hello world\n');
}).listen(8000);
Or
import { createServer } from 'node:https';
import { readFileSync } from 'node:fs';

const options = {
  pfx: readFileSync('test_cert.pfx'),
  passphrase: 'sample',
};

createServer(options, (req, res) => {
  res.writeHead(200);
  res.end('hello world\n');
}).listen(8000);

const https = require('node:https');
const fs = require('node:fs');

const options = {
  pfx: fs.readFileSync('test_cert.pfx'),
  passphrase: 'sample',
};

https.createServer(options, (req, res) => {
  res.writeHead(200);
  res.end('hello world\n');
}).listen(8000);
To generate the certificate and key for this example, run:
openssl req -x509 -newkey rsa:2048 -nodes -sha256 -subj '/CN=localhost' \
  -keyout private-key.pem -out certificate.pem

Then, to generate the pfx certificate for this example, run:

openssl pkcs12 -certpbe AES-256-CBC -export -out test_cert.pfx \
  -inkey private-key.pem -in certificate.pem -passout pass:sample

https.get(options[, callback])#
https.get(url[, options][, callback])#
History
| Version | Changes |
|---|---|
| v10.9.0 | The |
| v7.5.0 | The |
| v0.3.6 | Added in: v0.3.6 |
- url <string> | <URL>
- options <Object> | <string> | <URL> Accepts the same options as https.request(), with the method set to GET by default.
- callback <Function>
- Returns: <http.ClientRequest>

Like http.get() but for HTTPS.

options can be an object, a string, or a URL object. If options is a string, it is automatically parsed with new URL(). If it is a URL object, it will be automatically converted to an ordinary options object.
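For reference, this is roughly what new URL() extracts from a string argument before the module converts it to an options object (illustrative URL only):

```javascript
const url = new URL('https://user:pass@example.com:8443/path?q=1');

console.log(url.protocol); // 'https:'
console.log(url.hostname); // 'example.com'
console.log(url.port);     // '8443'
console.log(url.pathname); // '/path'
console.log(url.username); // 'user'
```

Note that url.port is a string and is empty when the URL uses the protocol's default port (443 for https:).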
import { get } from 'node:https';
import process from 'node:process';

get('https://encrypted.google.com/', (res) => {
  console.log('statusCode:', res.statusCode);
  console.log('headers:', res.headers);

  res.on('data', (d) => {
    process.stdout.write(d);
  });
}).on('error', (e) => {
  console.error(e);
});

const https = require('node:https');

https.get('https://encrypted.google.com/', (res) => {
  console.log('statusCode:', res.statusCode);
  console.log('headers:', res.headers);

  res.on('data', (d) => {
    process.stdout.write(d);
  });
}).on('error', (e) => {
  console.error(e);
});
https.globalAgent#
History
| Version | Changes |
|---|---|
| v19.0.0 | The agent now uses HTTP Keep-Alive and a 5 second timeout by default. |
| v0.5.9 | Added in: v0.5.9 |
Global instance of https.Agent for all HTTPS client requests. Diverges from a default https.Agent configuration by having keepAlive enabled and a timeout of 5 seconds.
https.request(options[, callback])#
https.request(url[, options][, callback])#
History
| Version | Changes |
|---|---|
| v22.4.0, v20.16.0 | The |
| v16.7.0, v14.18.0 | When using a |
| v14.1.0, v13.14.0 | The |
| v10.9.0 | The |
| v9.3.0 | The |
| v7.5.0 | The |
| v0.3.6 | Added in: v0.3.6 |
- url <string> | <URL>
- options <Object> | <string> | <URL> Accepts all options from http.request(), with some differences in default values:
  - protocol Default: 'https:'
  - port Default: 443
  - agent Default: https.globalAgent
- callback <Function>
- Returns: <http.ClientRequest>
Makes a request to a secure web server.
The following additional options from tls.connect() are also accepted: ca, cert, ciphers, clientCertEngine (deprecated), crl, dhparam, ecdhCurve, honorCipherOrder, key, passphrase, pfx, rejectUnauthorized, secureOptions, secureProtocol, servername, sessionIdContext, highWaterMark.

options can be an object, a string, or a URL object. If options is a string, it is automatically parsed with new URL(). If it is a URL object, it will be automatically converted to an ordinary options object.

https.request() returns an instance of the http.ClientRequest class. The ClientRequest instance is a writable stream. If one needs to upload a file with a POST request, then write to the ClientRequest object.
import { request } from 'node:https';
import process from 'node:process';

const options = {
  hostname: 'encrypted.google.com',
  port: 443,
  path: '/',
  method: 'GET',
};

const req = request(options, (res) => {
  console.log('statusCode:', res.statusCode);
  console.log('headers:', res.headers);

  res.on('data', (d) => {
    process.stdout.write(d);
  });
});

req.on('error', (e) => {
  console.error(e);
});
req.end();

const https = require('node:https');

const options = {
  hostname: 'encrypted.google.com',
  port: 443,
  path: '/',
  method: 'GET',
};

const req = https.request(options, (res) => {
  console.log('statusCode:', res.statusCode);
  console.log('headers:', res.headers);

  res.on('data', (d) => {
    process.stdout.write(d);
  });
});

req.on('error', (e) => {
  console.error(e);
});
req.end();
Example using options fromtls.connect():
const options = {
  hostname: 'encrypted.google.com',
  port: 443,
  path: '/',
  method: 'GET',
  key: fs.readFileSync('private-key.pem'),
  cert: fs.readFileSync('certificate.pem'),
};
options.agent = new https.Agent(options);

const req = https.request(options, (res) => {
  // ...
});

Alternatively, opt out of connection pooling by not using an Agent.

const options = {
  hostname: 'encrypted.google.com',
  port: 443,
  path: '/',
  method: 'GET',
  key: fs.readFileSync('private-key.pem'),
  cert: fs.readFileSync('certificate.pem'),
  agent: false,
};

const req = https.request(options, (res) => {
  // ...
});

Example using a URL as options:

const options = new URL('https://abc:xyz@example.com');

const req = https.request(options, (res) => {
  // ...
});

Example pinning on certificate fingerprint, or the public key (similar to pin-sha256):
import { checkServerIdentity } from 'node:tls';
import { Agent, request } from 'node:https';
import { createHash } from 'node:crypto';

function sha256(s) {
  return createHash('sha256').update(s).digest('base64');
}
const options = {
  hostname: 'github.com',
  port: 443,
  path: '/',
  method: 'GET',
  checkServerIdentity: function(host, cert) {
    // Make sure the certificate is issued to the host we are connected to
    const err = checkServerIdentity(host, cert);
    if (err) {
      return err;
    }

    // Pin the public key, similar to HPKP pin-sha256 pinning
    const pubkey256 = 'SIXvRyDmBJSgatgTQRGbInBaAK+hZOQ18UmrSwnDlK8=';
    if (sha256(cert.pubkey) !== pubkey256) {
      const msg = 'Certificate verification error: ' +
        `The public key of '${cert.subject.CN}' ` +
        'does not match our pinned fingerprint';
      return new Error(msg);
    }

    // Pin the exact certificate, rather than the pub key
    const cert256 = 'FD:6E:9B:0E:F3:98:BC:D9:04:C3:B2:EC:16:7A:7B:' +
      '0F:DA:72:01:C9:03:C5:3A:6A:6A:E5:D0:41:43:63:EF:65';
    if (cert.fingerprint256 !== cert256) {
      const msg = 'Certificate verification error: ' +
        `The certificate of '${cert.subject.CN}' ` +
        'does not match our pinned fingerprint';
      return new Error(msg);
    }

    // This loop is informational only.
    // Print the certificate and public key fingerprints of all certs in the
    // chain. It's common to pin the public key of the issuer on the public
    // internet, while pinning the public key of the service in sensitive
    // environments.
    let lastprint256;
    do {
      console.log('Subject Common Name:', cert.subject.CN);
      console.log('  Certificate SHA256 fingerprint:', cert.fingerprint256);
      console.log('  Public key ping-sha256:', sha256(cert.pubkey));

      lastprint256 = cert.fingerprint256;
      cert = cert.issuerCertificate;
    } while (cert.fingerprint256 !== lastprint256);
  },
};

options.agent = new Agent(options);
const req = request(options, (res) => {
  console.log('All OK. Server matched our pinned cert or public key');
  console.log('statusCode:', res.statusCode);

  res.on('data', (d) => {});
});

req.on('error', (e) => {
  console.error(e.message);
});
req.end();

const tls = require('node:tls');
const https = require('node:https');
const crypto = require('node:crypto');

function sha256(s) {
  return crypto.createHash('sha256').update(s).digest('base64');
}
const options = {
  hostname: 'github.com',
  port: 443,
  path: '/',
  method: 'GET',
  checkServerIdentity: function(host, cert) {
    // Make sure the certificate is issued to the host we are connected to
    const err = tls.checkServerIdentity(host, cert);
    if (err) {
      return err;
    }

    // Pin the public key, similar to HPKP pin-sha256 pinning
    const pubkey256 = 'SIXvRyDmBJSgatgTQRGbInBaAK+hZOQ18UmrSwnDlK8=';
    if (sha256(cert.pubkey) !== pubkey256) {
      const msg = 'Certificate verification error: ' +
        `The public key of '${cert.subject.CN}' ` +
        'does not match our pinned fingerprint';
      return new Error(msg);
    }

    // Pin the exact certificate, rather than the pub key
    const cert256 = 'FD:6E:9B:0E:F3:98:BC:D9:04:C3:B2:EC:16:7A:7B:' +
      '0F:DA:72:01:C9:03:C5:3A:6A:6A:E5:D0:41:43:63:EF:65';
    if (cert.fingerprint256 !== cert256) {
      const msg = 'Certificate verification error: ' +
        `The certificate of '${cert.subject.CN}' ` +
        'does not match our pinned fingerprint';
      return new Error(msg);
    }

    // This loop is informational only.
    // Print the certificate and public key fingerprints of all certs in the
    // chain. It's common to pin the public key of the issuer on the public
    // internet, while pinning the public key of the service in sensitive
    // environments.
    let lastprint256;
    do {
      console.log('Subject Common Name:', cert.subject.CN);
      console.log('  Certificate SHA256 fingerprint:', cert.fingerprint256);
      console.log('  Public key ping-sha256:', sha256(cert.pubkey));

      lastprint256 = cert.fingerprint256;
      cert = cert.issuerCertificate;
    } while (cert.fingerprint256 !== lastprint256);
  },
};

options.agent = new https.Agent(options);
const req = https.request(options, (res) => {
  console.log('All OK. Server matched our pinned cert or public key');
  console.log('statusCode:', res.statusCode);

  res.on('data', (d) => {});
});

req.on('error', (e) => {
  console.error(e.message);
});
req.end();
Outputs for example:
Subject Common Name: github.com
  Certificate SHA256 fingerprint: FD:6E:9B:0E:F3:98:BC:D9:04:C3:B2:EC:16:7A:7B:0F:DA:72:01:C9:03:C5:3A:6A:6A:E5:D0:41:43:63:EF:65
  Public key ping-sha256: SIXvRyDmBJSgatgTQRGbInBaAK+hZOQ18UmrSwnDlK8=
Subject Common Name: Sectigo ECC Domain Validation Secure Server CA
  Certificate SHA256 fingerprint: 61:E9:73:75:E9:F6:DA:98:2F:F5:C1:9E:2F:94:E6:6C:4E:35:B6:83:7C:E3:B9:14:D2:24:5C:7F:5F:65:82:5F
  Public key ping-sha256: Eep0p/AsSa9lFUH6KT2UY+9s1Z8v7voAPkQ4fGknZ2g=
Subject Common Name: USERTrust ECC Certification Authority
  Certificate SHA256 fingerprint: A6:CF:64:DB:B4:C8:D5:FD:19:CE:48:89:60:68:DB:03:B5:33:A8:D1:33:6C:62:56:A8:7D:00:CB:B3:DE:F3:EA
  Public key ping-sha256: UJM2FOhG9aTNY0Pg4hgqjNzZ/lQBiMGRxPD5Y2/e0bw=
Subject Common Name: AAA Certificate Services
  Certificate SHA256 fingerprint: D7:A7:A0:FB:5D:7E:27:31:D7:71:E9:48:4E:BC:DE:F7:1D:5F:0C:3E:0A:29:48:78:2B:C8:3E:E0:EA:69:9E:F4
  Public key ping-sha256: vRU+17BDT2iGsXvOi76E7TQMcTLXAqj0+jGPdW7L1vM=
All OK. Server matched our pinned cert or public key
statusCode: 200

Inspector#
Source Code:lib/inspector.js
The node:inspector module provides an API for interacting with the V8 inspector.
It can be accessed using:
import * as inspector from 'node:inspector/promises';

const inspector = require('node:inspector/promises');
or
import * as inspector from 'node:inspector';

const inspector = require('node:inspector');
Promises API#
Class:inspector.Session#
- Extends:<EventEmitter>
Theinspector.Session is used for dispatching messages to the V8 inspectorback-end and receiving message responses and notifications.
new inspector.Session()#
Create a new instance of the inspector.Session class. The inspector session needs to be connected through session.connect() before the messages can be dispatched to the inspector backend.

When using Session, objects output by the console API will not be released unless the Runtime.discardConsoleEntries command is performed manually.
Event:'inspectorNotification'#
- Type:<Object> The notification message object
Emitted when any notification from the V8 Inspector is received.
session.on('inspectorNotification', (message) => console.log(message.method));
// Debugger.paused
// Debugger.resumed

Caveat Breakpoints with same-thread session is not recommended, see support of breakpoints.

It is also possible to subscribe only to notifications with a specific method:
Event:<inspector-protocol-method>#
- Type:<Object> The notification message object
Emitted when an inspector notification is received that has its method field set to the <inspector-protocol-method> value.

The following snippet installs a listener on the 'Debugger.paused' event, and prints the reason for program suspension whenever program execution is suspended (through breakpoints, for example):

session.on('Debugger.paused', ({ params }) => {
  console.log(params.hitBreakpoints);
});
// [ '/the/file/that/has/the/breakpoint.js:11:0' ]

Caveat Breakpoints with same-thread session is not recommended, see support of breakpoints.
session.connectToMainThread()#
Connects a session to the main thread inspector back-end. An exception will be thrown if this API was not called on a Worker thread.
session.disconnect()#
Immediately close the session. All pending message callbacks will be called with an error. session.connect() will need to be called to be able to send messages again. A reconnected session will lose all inspector state, such as enabled agents or configured breakpoints.
session.post(method[, params])#
Posts a message to the inspector back-end.
import { Session } from 'node:inspector/promises';
try {
  const session = new Session();
  session.connect();
  const result = await session.post('Runtime.evaluate', { expression: '2 + 2' });
  console.log(result);
} catch (error) {
  console.error(error);
}
// Output: { result: { type: 'number', value: 4, description: '4' } }

The latest version of the V8 inspector protocol is published on the Chrome DevTools Protocol Viewer.

Node.js inspector supports all the Chrome DevTools Protocol domains declared by V8. A Chrome DevTools Protocol domain provides an interface for interacting with one of the runtime agents used to inspect the application state and listen to the run-time events.
Example usage#
Apart from the debugger, various V8 Profilers are available through the DevToolsprotocol.
CPU profiler#
Here's an example showing how to use theCPU Profiler:
import { Session } from 'node:inspector/promises';
import fs from 'node:fs';
const session = new Session();
session.connect();

await session.post('Profiler.enable');
await session.post('Profiler.start');
// Invoke business logic under measurement here...

// some time later...
const { profile } = await session.post('Profiler.stop');

// Write profile to disk, upload, etc.
fs.writeFileSync('./profile.cpuprofile', JSON.stringify(profile));

Heap profiler#
Here's an example showing how to use theHeap Profiler:
import { Session } from 'node:inspector/promises';
import fs from 'node:fs';
const session = new Session();

const fd = fs.openSync('profile.heapsnapshot', 'w');
session.connect();

session.on('HeapProfiler.addHeapSnapshotChunk', (m) => {
  fs.writeSync(fd, m.params.chunk);
});

const result = await session.post('HeapProfiler.takeHeapSnapshot', null);
console.log('HeapProfiler.takeHeapSnapshot done:', result);
session.disconnect();
fs.closeSync(fd);

Callback API#
Class:inspector.Session#
- Extends:<EventEmitter>
Theinspector.Session is used for dispatching messages to the V8 inspectorback-end and receiving message responses and notifications.
new inspector.Session()#
Create a new instance of the inspector.Session class. The inspector session needs to be connected through session.connect() before the messages can be dispatched to the inspector backend.

When using Session, objects output by the console API will not be released unless the Runtime.discardConsoleEntries command is performed manually.
Event:'inspectorNotification'#
- Type:<Object> The notification message object
Emitted when any notification from the V8 Inspector is received.
session.on('inspectorNotification', (message) => console.log(message.method));
// Debugger.paused
// Debugger.resumed

Caveat Breakpoints with same-thread session is not recommended, see support of breakpoints.

It is also possible to subscribe only to notifications with a specific method:

Event: <inspector-protocol-method>#
- Type:<Object> The notification message object
Emitted when an inspector notification is received that has its method field setto the<inspector-protocol-method> value.
The following snippet installs a listener on the'Debugger.paused'event, and prints the reason for program suspension whenever programexecution is suspended (through breakpoints, for example):
session.on('Debugger.paused',({ params }) => {console.log(params.hitBreakpoints);});// [ '/the/file/that/has/the/breakpoint.js:11:0' ]Caveat Breakpoints with same-thread session is not recommended, seesupport of breakpoints.
session.connectToMainThread()#
Connects a session to the main thread inspector back-end. An exception will be thrown if this API was not called on a Worker thread.
session.disconnect()#
Immediately close the session. All pending message callbacks will be called with an error. session.connect() will need to be called to be able to send messages again. A reconnected session will lose all inspector state, such as enabled agents or configured breakpoints.
session.post(method[, params][, callback])#
History
| Version | Changes |
|---|---|
| v18.0.0 | Passing an invalid callback to the |
| v8.0.0 | Added in: v8.0.0 |
- method <string>
- params <Object>
- callback <Function>

Posts a message to the inspector back-end. callback will be notified when a response is received. callback is a function that accepts two optional arguments: error and message-specific result.

```js
session.post('Runtime.evaluate', { expression: '2 + 2' },
             (error, { result }) => console.log(result));
// Output: { type: 'number', value: 4, description: '4' }
```

The latest version of the V8 inspector protocol is published on the Chrome DevTools Protocol Viewer.

The Node.js inspector supports all the Chrome DevTools Protocol domains declared by V8. Each Chrome DevTools Protocol domain provides an interface for interacting with one of the runtime agents used to inspect the application state and listen to run-time events.

You cannot set reportProgress to true when sending a HeapProfiler.takeHeapSnapshot or HeapProfiler.stopTrackingHeapObjects command to V8.
Example usage#
Apart from the debugger, various V8 Profilers are available through the DevTools protocol.
CPU profiler#
Here's an example showing how to use the CPU Profiler:

```js
const inspector = require('node:inspector');
const fs = require('node:fs');
const session = new inspector.Session();
session.connect();

session.post('Profiler.enable', () => {
  session.post('Profiler.start', () => {
    // Invoke business logic under measurement here...

    // some time later...
    session.post('Profiler.stop', (err, { profile }) => {
      // Write profile to disk, upload, etc.
      if (!err) {
        fs.writeFileSync('./profile.cpuprofile', JSON.stringify(profile));
      }
    });
  });
});
```

Heap profiler#

Here's an example showing how to use the Heap Profiler:

```js
const inspector = require('node:inspector');
const fs = require('node:fs');
const session = new inspector.Session();

const fd = fs.openSync('profile.heapsnapshot', 'w');
session.connect();
session.on('HeapProfiler.addHeapSnapshotChunk', (m) => {
  fs.writeSync(fd, m.params.chunk);
});
session.post('HeapProfiler.takeHeapSnapshot', null, (err, r) => {
  console.log('HeapProfiler.takeHeapSnapshot done:', err, r);
  session.disconnect();
  fs.closeSync(fd);
});
```

Common Objects#
inspector.close()#
History
| Version | Changes |
|---|---|
| v18.10.0 | The API is exposed in the worker threads. |
| v9.0.0 | Added in: v9.0.0 |
Attempts to close all remaining connections, blocking the event loop until all are closed. Once all connections are closed, deactivates the inspector.
inspector.console#
- Type: <Object> An object to send messages to the remote inspector console.

```js
require('node:inspector').console.log('a message');
```

The inspector console does not have API parity with the Node.js console.
inspector.open([port[, host[, wait]]])#
History
| Version | Changes |
|---|---|
| v20.6.0 | inspector.open() now returns a |
- port <number> Port to listen on for inspector connections. Optional. Default: what was specified on the CLI.
- host <string> Host to listen on for inspector connections. Optional. Default: what was specified on the CLI.
- wait <boolean> Block until a client has connected. Optional. Default: false.
- Returns: <Disposable> A Disposable that calls inspector.close().

Activate the inspector on host and port. Equivalent to node --inspect=[[host:]port], but can be done programmatically after node has started.

If wait is true, this will block until a client has connected to the inspect port and flow control has been passed to the debugger client.

See the security warning regarding the host parameter usage.
inspector.url()#
- Returns: <string> | <undefined>
Return the URL of the active inspector, orundefined if there is none.
```console
$ node --inspect -p 'inspector.url()'
Debugger listening on ws://127.0.0.1:9229/166e272e-7a30-4d09-97ce-f1c012b43c34
For help, see: https://nodejs.org/en/docs/inspector
ws://127.0.0.1:9229/166e272e-7a30-4d09-97ce-f1c012b43c34

$ node --inspect=localhost:3000 -p 'inspector.url()'
Debugger listening on ws://localhost:3000/51cf8d0e-3c36-4c59-8efd-54519839e56a
For help, see: https://nodejs.org/en/docs/inspector
ws://localhost:3000/51cf8d0e-3c36-4c59-8efd-54519839e56a

$ node -p 'inspector.url()'
undefined
```

inspector.waitForDebugger()#

Blocks until a client (existing or connected later) has sent the Runtime.runIfWaitingForDebugger command.
An exception will be thrown if there is no active inspector.
Integration with DevTools#
The node:inspector module provides an API for integrating with devtools that support the Chrome DevTools Protocol. DevTools frontends connected to a running Node.js instance can capture protocol events emitted from the instance and display them accordingly to facilitate debugging. The following methods broadcast a protocol event to all connected frontends. The params passed to the methods can be optional, depending on the protocol.

```js
// The `Network.requestWillBeSent` event will be fired.
inspector.Network.requestWillBeSent({
  requestId: 'request-id-1',
  timestamp: Date.now() / 1000,
  wallTime: Date.now(),
  request: {
    url: 'https://nodejs.org/en',
    method: 'GET',
  },
});
```

inspector.Network.dataReceived([params])#

- params <Object>

This feature is only available with the --experimental-network-inspection flag enabled.

Broadcasts the Network.dataReceived event to connected frontends, or buffers the data if the Network.streamResourceContent command was not invoked for the given request yet.

Also enables the Network.getResponseBody command to retrieve the response data.
inspector.Network.dataSent([params])#
- params <Object>

This feature is only available with the --experimental-network-inspection flag enabled.

Enables the Network.getRequestPostData command to retrieve the request data.
inspector.Network.requestWillBeSent([params])#
- params <Object>

This feature is only available with the --experimental-network-inspection flag enabled.

Broadcasts the Network.requestWillBeSent event to connected frontends. This event indicates that the application is about to send an HTTP request.
inspector.Network.responseReceived([params])#
- params <Object>

This feature is only available with the --experimental-network-inspection flag enabled.

Broadcasts the Network.responseReceived event to connected frontends. This event indicates that an HTTP response is available.
inspector.Network.loadingFinished([params])#
- params <Object>

This feature is only available with the --experimental-network-inspection flag enabled.

Broadcasts the Network.loadingFinished event to connected frontends. This event indicates that an HTTP request has finished loading.
inspector.Network.loadingFailed([params])#
- params <Object>

This feature is only available with the --experimental-network-inspection flag enabled.

Broadcasts the Network.loadingFailed event to connected frontends. This event indicates that an HTTP request has failed to load.
inspector.Network.webSocketCreated([params])#
- params <Object>

This feature is only available with the --experimental-network-inspection flag enabled.

Broadcasts the Network.webSocketCreated event to connected frontends. This event indicates that a WebSocket connection has been initiated.
inspector.Network.webSocketHandshakeResponseReceived([params])#
- params <Object>

This feature is only available with the --experimental-network-inspection flag enabled.

Broadcasts the Network.webSocketHandshakeResponseReceived event to connected frontends. This event indicates that the WebSocket handshake response has been received.
inspector.Network.webSocketClosed([params])#
- params <Object>

This feature is only available with the --experimental-network-inspection flag enabled.

Broadcasts the Network.webSocketClosed event to connected frontends. This event indicates that a WebSocket connection has been closed.
inspector.NetworkResources.put#
This feature is only available with the --experimental-inspector-network-resource flag enabled.

The inspector.NetworkResources.put method is used to provide a response for a loadNetworkResource request issued via the Chrome DevTools Protocol (CDP). This is typically triggered when a source map is specified by URL, and a DevTools frontend, such as Chrome, requests the resource to retrieve the source map.
This method allows developers to predefine the resource content to be served in response to such CDP requests.
```js
const inspector = require('node:inspector');

// By preemptively calling put to register the resource, a source map can be
// resolved when a loadNetworkResource request is made from the frontend.
async function setNetworkResources() {
  const mapUrl = 'http://localhost:3000/dist/app.js.map';
  const tsUrl = 'http://localhost:3000/src/app.ts';
  const distAppJsMap = await fetch(mapUrl).then((res) => res.text());
  const srcAppTs = await fetch(tsUrl).then((res) => res.text());
  inspector.NetworkResources.put(mapUrl, distAppJsMap);
  inspector.NetworkResources.put(tsUrl, srcAppTs);
}
setNetworkResources().then(() => {
  require('./dist/app');
});
```

For more details, see the official CDP documentation: Network.loadNetworkResource
inspector.DOMStorage.domStorageItemAdded#
- params <Object>

This feature is only available with the --experimental-storage-inspection flag enabled.

Broadcasts the DOMStorage.domStorageItemAdded event to connected frontends. This event indicates that a new item has been added to the storage.
inspector.DOMStorage.domStorageItemRemoved#
- params <Object>

This feature is only available with the --experimental-storage-inspection flag enabled.

Broadcasts the DOMStorage.domStorageItemRemoved event to connected frontends. This event indicates that an item has been removed from the storage.
inspector.DOMStorage.domStorageItemUpdated#
- params <Object>

This feature is only available with the --experimental-storage-inspection flag enabled.

Broadcasts the DOMStorage.domStorageItemUpdated event to connected frontends. This event indicates that a storage item has been updated.
inspector.DOMStorage.domStorageItemsCleared#
This feature is only available with the --experimental-storage-inspection flag enabled.

Broadcasts the DOMStorage.domStorageItemsCleared event to connected frontends. This event indicates that all items have been cleared from the storage.
Support of breakpoints#
The Chrome DevTools Protocol Debugger domain allows an inspector.Session to attach to a program and set breakpoints to step through the code.

However, setting breakpoints with a same-thread inspector.Session, which is connected by session.connect(), should be avoided, as the program being attached and paused is exactly the debugger itself. Instead, try connecting to the main thread by session.connectToMainThread() and setting breakpoints in a worker thread, or connecting with a Debugger program over a WebSocket connection.
Internationalization support#
Node.js has many features that make it easier to write internationalized programs. Some of them are:

- Locale-sensitive or Unicode-aware functions in the ECMAScript Language Specification:
  - String.prototype.normalize()
  - String.prototype.toLowerCase()
  - String.prototype.toUpperCase()
- All functionality described in the ECMAScript Internationalization API Specification (aka ECMA-402):
  - Intl object
  - Locale-sensitive methods like String.prototype.localeCompare() and Date.prototype.toLocaleString()
- The WHATWG URL parser's internationalized domain names (IDNs) support
- require('node:buffer').transcode()
- More accurate REPL line editing
- require('node:util').TextDecoder
- RegExp Unicode Property Escapes

Node.js and the underlying V8 engine use International Components for Unicode (ICU) to implement these features in native C/C++ code. The full ICU data set is provided by Node.js by default. However, due to the size of the ICU data file, several options are provided for customizing the ICU data set either when building or running Node.js.
Options for building Node.js#
To control how ICU is used in Node.js, four configure options are available during compilation. Additional details on how to compile Node.js are documented in BUILDING.md.

- --with-intl=none / --without-intl
- --with-intl=system-icu
- --with-intl=small-icu
- --with-intl=full-icu (default)

An overview of available Node.js and JavaScript features for each configure option:
| Feature | none | system-icu | small-icu | full-icu |
|---|---|---|---|---|
| String.prototype.normalize() | none (function is no-op) | full | full | full |
| String.prototype.to*Case() | full | full | full | full |
| Intl | none (object does not exist) | partial/full (depends on OS) | partial (English-only) | full |
| String.prototype.localeCompare() | partial (not locale-aware) | full | full | full |
| String.prototype.toLocale*Case() | partial (not locale-aware) | full | full | full |
| Number.prototype.toLocaleString() | partial (not locale-aware) | partial/full (depends on OS) | partial (English-only) | full |
| Date.prototype.toLocale*String() | partial (not locale-aware) | partial/full (depends on OS) | partial (English-only) | full |
| Legacy URL Parser | partial (no IDN support) | full | full | full |
| WHATWG URL Parser | partial (no IDN support) | full | full | full |
| require('node:buffer').transcode() | none (function does not exist) | full | full | full |
| REPL | partial (inaccurate line editing) | full | full | full |
| require('node:util').TextDecoder | partial (basic encodings support) | partial/full (depends on OS) | partial (Unicode-only) | full |
| RegExp Unicode Property Escapes | none (invalid RegExp error) | full | full | full |

The "(not locale-aware)" designation denotes that the function carries out its operation just like the non-Locale version of the function, if one exists. For example, under none mode, Date.prototype.toLocaleString()'s operation is identical to that of Date.prototype.toString().
Disable all internationalization features (none)#
If this option is chosen, ICU is disabled and most internationalization features mentioned above will be unavailable in the resulting node binary.
Build with a pre-installed ICU (system-icu)#
Node.js can link against an ICU build already installed on the system. In fact, most Linux distributions already come with ICU installed, and this option would make it possible to reuse the same set of data used by other components in the OS.

Functionalities that only require the ICU library itself, such as String.prototype.normalize() and the WHATWG URL parser, are fully supported under system-icu. Features that require ICU locale data in addition, such as Intl.DateTimeFormat, may be fully or partially supported, depending on the completeness of the ICU data installed on the system.
Embed a limited set of ICU data (small-icu)#
This option makes the resulting binary link against the ICU library statically, and includes a subset of ICU data (typically only the English locale) within the node executable.

Functionalities that only require the ICU library itself, such as String.prototype.normalize() and the WHATWG URL parser, are fully supported under small-icu. Features that require ICU locale data in addition, such as Intl.DateTimeFormat, generally only work with the English locale:

```js
const january = new Date(9e8);
const english = new Intl.DateTimeFormat('en', { month: 'long' });
const spanish = new Intl.DateTimeFormat('es', { month: 'long' });

console.log(english.format(january));
// Prints "January"
console.log(spanish.format(january));
// Prints either "M01" or "January" on small-icu, depending on the user's default locale
// Should print "enero"
```

This mode provides a balance between features and binary size.
Providing ICU data at runtime#
If the small-icu option is used, one can still provide additional locale data at runtime so that the JS methods would work for all ICU locales. Assuming the data file is stored at /runtime/directory/with/dat/file, it can be made available to ICU through either:
- The --with-icu-default-data-dir configure option:

  ```bash
  ./configure --with-icu-default-data-dir=/runtime/directory/with/dat/file --with-intl=small-icu
  ```

  This only embeds the default data directory path into the binary. The actual data file is going to be loaded at runtime from this directory path.

- The NODE_ICU_DATA environment variable:

  ```bash
  env NODE_ICU_DATA=/runtime/directory/with/dat/file node
  ```

- The --icu-data-dir CLI parameter:

  ```bash
  node --icu-data-dir=/runtime/directory/with/dat/file
  ```
When more than one of them is specified, the --icu-data-dir CLI parameter has the highest precedence, then the NODE_ICU_DATA environment variable, then the --with-icu-default-data-dir configure option.
ICU is able to automatically find and load a variety of data formats, but the data must be appropriate for the ICU version, and the file correctly named. The most common name for the data file is icudtX[bl].dat, where X denotes the intended ICU version, and b or l indicates the system's endianness. Node.js would fail to load if the expected data file cannot be read from the specified directory. The name of the data file corresponding to the current Node.js version can be computed with:

```js
`icudt${process.versions.icu.split('.')[0]}${os.endianness()[0].toLowerCase()}.dat`;
```

Check the "ICU Data" article in the ICU User Guide for other supported formats and more details on ICU data in general.
The full-icu npm module can greatly simplify ICU data installation by detecting the ICU version of the running node executable and downloading the appropriate data file. After installing the module through npm i full-icu, the data file will be available at ./node_modules/full-icu. This path can then be passed either to NODE_ICU_DATA or --icu-data-dir as shown above to enable full Intl support.
Embed the entire ICU (full-icu)#
This option makes the resulting binary link against ICU statically and include a full set of ICU data. A binary created this way has no further external dependencies and supports all locales, but might be rather large. This is the default behavior if no --with-intl flag is passed. The official binaries are also built in this mode.
Detecting internationalization support#
To verify that ICU is enabled at all (system-icu, small-icu, or full-icu), simply checking the existence of Intl should suffice:

```js
const hasICU = typeof Intl === 'object';
```

Alternatively, checking for process.versions.icu, a property defined only when ICU is enabled, works too:

```js
const hasICU = typeof process.versions.icu === 'string';
```

To check for support for a non-English locale (i.e. full-icu or system-icu), Intl.DateTimeFormat can be a good distinguishing factor:

```js
const hasFullICU = (() => {
  try {
    const january = new Date(9e8);
    const spanish = new Intl.DateTimeFormat('es', { month: 'long' });
    return spanish.format(january) === 'enero';
  } catch (err) {
    return false;
  }
})();
```

For more verbose tests for Intl support, the following resources may be helpful:
Modules: CommonJS modules#
CommonJS modules are the original way to package JavaScript code for Node.js. Node.js also supports the ECMAScript modules standard used by browsers and other JavaScript runtimes.

In Node.js, each file is treated as a separate module. For example, consider a file named foo.js:

```js
const circle = require('./circle.js');
console.log(`The area of a circle of radius 4 is ${circle.area(4)}`);
```

On the first line, foo.js loads the module circle.js that is in the same directory as foo.js.
Here are the contents of circle.js:

```js
const { PI } = Math;

exports.area = (r) => PI * r ** 2;

exports.circumference = (r) => 2 * PI * r;
```

The module circle.js has exported the functions area() and circumference(). Functions and objects are added to the root of a module by specifying additional properties on the special exports object.

Variables local to the module will be private, because the module is wrapped in a function by Node.js (see module wrapper). In this example, the variable PI is private to circle.js.

The module.exports property can be assigned a new value (such as a function or object).

In the following code, bar.js makes use of the square module, which exports a Square class:

```js
const Square = require('./square.js');
const mySquare = new Square(2);
console.log(`The area of mySquare is ${mySquare.area()}`);
```

The square module is defined in square.js:

```js
// Assigning to exports will not modify module, must use module.exports
module.exports = class Square {
  constructor(width) {
    this.width = width;
  }

  area() {
    return this.width ** 2;
  }
};
```

The CommonJS module system is implemented in the module core module.
Enabling#
Node.js has two module systems: CommonJS modules and ECMAScript modules.
By default, Node.js will treat the following as CommonJS modules:
- Files with a .cjs extension;
- Files with a .js extension when the nearest parent package.json file contains a top-level field "type" with a value of "commonjs";
- Files with a .js extension or without an extension, when the nearest parent package.json file doesn't contain a top-level field "type" or there is no package.json in any parent folder, unless the file contains syntax that errors unless it is evaluated as an ES module. Package authors should include the "type" field, even in packages where all sources are CommonJS. Being explicit about the type of the package will make things easier for build tools and loaders to determine how the files in the package should be interpreted;
- Files with an extension that is not .mjs, .cjs, .json, .node, or .js (when the nearest parent package.json file contains a top-level field "type" with a value of "module", those files will be recognized as CommonJS modules only if they are being included via require(), not when used as the command-line entry point of the program).

See Determining module system for more details.

Calling require() always uses the CommonJS module loader. Calling import() always uses the ECMAScript module loader.
Accessing the main module#
When a file is run directly from Node.js, require.main is set to its module. That means that it is possible to determine whether a file has been run directly by testing require.main === module.

For a file foo.js, this will be true if run via node foo.js, but false if run by require('./foo').

When the entry point is not a CommonJS module, require.main is undefined, and the main module is out of reach.
Package manager tips#
The semantics of the Node.js require() function were designed to be general enough to support reasonable directory structures. Package manager programs such as dpkg, rpm, and npm will hopefully find it possible to build native packages from Node.js modules without modification.

In the following, we give a suggested directory structure that could work:

Let's say that we wanted to have the folder at /usr/lib/node/<some-package>/<some-version> hold the contents of a specific version of a package.

Packages can depend on one another. In order to install package foo, it may be necessary to install a specific version of package bar. The bar package may itself have dependencies, and in some cases, these may even collide or form cyclic dependencies.

Because Node.js looks up the realpath of any modules it loads (that is, it resolves symlinks) and then looks for their dependencies in node_modules folders, this situation can be resolved with the following architecture:

- /usr/lib/node/foo/1.2.3/: Contents of the foo package, version 1.2.3.
- /usr/lib/node/bar/4.3.2/: Contents of the bar package that foo depends on.
- /usr/lib/node/foo/1.2.3/node_modules/bar: Symbolic link to /usr/lib/node/bar/4.3.2/.
- /usr/lib/node/bar/4.3.2/node_modules/*: Symbolic links to the packages that bar depends on.

Thus, even if a cycle is encountered, or if there are dependency conflicts, every module will be able to get a version of its dependency that it can use.

When the code in the foo package does require('bar'), it will get the version that is symlinked into /usr/lib/node/foo/1.2.3/node_modules/bar. Then, when the code in the bar package calls require('quux'), it'll get the version that is symlinked into /usr/lib/node/bar/4.3.2/node_modules/quux.

Furthermore, to make the module lookup process even more optimal, rather than putting packages directly in /usr/lib/node, we could put them in /usr/lib/node_modules/<name>/<version>. Then Node.js will not bother looking for missing dependencies in /usr/node_modules or /node_modules.

In order to make modules available to the Node.js REPL, it might be useful to also add the /usr/lib/node_modules folder to the $NODE_PATH environment variable. Since the module lookups using node_modules folders are all relative, and based on the real path of the files making the calls to require(), the packages themselves can be anywhere.
Loading ECMAScript modules using require()#
History
| Version | Changes |
|---|---|
| v25.4.0 | This feature is no longer experimental. |
| v23.0.0, v22.12.0 | Support |
| v23.5.0, v22.13.0, v20.19.0 | This feature no longer emits an experimental warning by default, though the warning can still be emitted by --trace-require-module. |
| v23.0.0, v22.12.0, v20.19.0 | This feature is no longer behind the |
| v22.0.0, v20.17.0 | Added in: v22.0.0, v20.17.0 |
The .mjs extension is reserved for ECMAScript Modules. See the Determining module system section for more info regarding which files are parsed as ECMAScript modules.

require() only supports loading ECMAScript modules that meet the following requirements:

- The module is fully synchronous (contains no top-level await); and
- One of these conditions is met:
  - The file has a .mjs extension.
  - The file has a .js extension, and the closest package.json contains "type": "module".
  - The file has a .js extension, the closest package.json does not contain "type": "commonjs", and the module contains ES module syntax.

If the ES Module being loaded meets the requirements, require() can load it and return the module namespace object. In this case it is similar to dynamic import() but is run synchronously and returns the namespace object directly.
With the following ES Modules:
```js
// distance.mjs
export function distance(a, b) {
  return Math.sqrt((b.x - a.x) ** 2 + (b.y - a.y) ** 2);
}
```

```js
// point.mjs
export default class Point {
  constructor(x, y) {
    this.x = x;
    this.y = y;
  }
}
```

A CommonJS module can load them with require():

```js
const distance = require('./distance.mjs');
console.log(distance);
// [Module: null prototype] {
//   distance: [Function: distance]
// }

const point = require('./point.mjs');
console.log(point);
// [Module: null prototype] {
//   default: [class Point],
//   __esModule: true,
// }
```

For interoperability with existing tools that convert ES Modules into CommonJS, which could then load real ES Modules through require(), the returned namespace would contain a __esModule: true property if it has a default export so that consuming code generated by tools can recognize the default exports in real ES Modules. If the namespace already defines __esModule, this would not be added. This property is experimental and can change in the future. It should only be used by tools converting ES modules into CommonJS modules, following existing ecosystem conventions. Code authored directly in CommonJS should avoid depending on it.

The result returned by require() is the module namespace object, which places the default export in the .default property, similar to the results returned by import(). To customize what should be returned by require(esm) directly, the ES Module can export the desired value using the string name "module.exports".

```js
// point.mjs
export default class Point {
  constructor(x, y) {
    this.x = x;
    this.y = y;
  }
}

// `distance` is lost to CommonJS consumers of this module, unless it's
// added to `Point` as a static property.
export function distance(a, b) {
  return Math.sqrt((b.x - a.x) ** 2 + (b.y - a.y) ** 2);
}

export { Point as 'module.exports' }
```

```js
const Point = require('./point.mjs');
console.log(Point); // [class Point]

// Named exports are lost when 'module.exports' is used
const { distance } = require('./point.mjs');
console.log(distance); // undefined
```

Notice in the example above, when the module.exports export name is used, named exports will be lost to CommonJS consumers. To allow CommonJS consumers to continue accessing named exports, the module can make sure that the default export is an object with the named exports attached to it as properties. For example with the example above, distance can be attached to the default export, the Point class, as a static method.

```js
export function distance(a, b) {
  return Math.sqrt((b.x - a.x) ** 2 + (b.y - a.y) ** 2);
}

export default class Point {
  constructor(x, y) {
    this.x = x;
    this.y = y;
  }
  static distance = distance;
}

export { Point as 'module.exports' }
```

```js
const Point = require('./point.mjs');
console.log(Point); // [class Point]

const { distance } = require('./point.mjs');
console.log(distance); // [Function: distance]
```

If the module being require()'d contains top-level await, or the module graph it imports contains top-level await, ERR_REQUIRE_ASYNC_MODULE will be thrown. In this case, users should load the asynchronous module using import().

If --experimental-print-required-tla is enabled, instead of throwing ERR_REQUIRE_ASYNC_MODULE before evaluation, Node.js will evaluate the module, try to locate the top-level awaits, and print their location to help users fix them.

If support for loading ES modules using require() results in unexpected breakage, it can be disabled using --no-require-module. To print where this feature is used, use --trace-require-module.

This feature can be detected by checking if process.features.require_module is true.
All together#
To get the exact filename that will be loaded when require() is called, use the require.resolve() function.
Putting together all of the above, here is the high-level algorithmin pseudocode of whatrequire() does:
```text
require(X) from module at path Y
1. If X is a core module,
   a. return the core module
   b. STOP
2. If X begins with '/'
   a. set Y to the file system root
3. If X is equal to '.', or X begins with './', '/' or '../'
   a. LOAD_AS_FILE(Y + X)
   b. LOAD_AS_DIRECTORY(Y + X)
   c. THROW "not found"
4. If X begins with '#'
   a. LOAD_PACKAGE_IMPORTS(X, dirname(Y))
5. LOAD_PACKAGE_SELF(X, dirname(Y))
6. LOAD_NODE_MODULES(X, dirname(Y))
7. THROW "not found"

MAYBE_DETECT_AND_LOAD(X)
1. If X parses as a CommonJS module, load X as a CommonJS module. STOP.
2. Else, if the source code of X can be parsed as ECMAScript module using
   DETECT_MODULE_SYNTAX defined in the ESM resolver,
   a. Load X as an ECMAScript module. STOP.
3. THROW the SyntaxError from attempting to parse X as CommonJS in 1. STOP.

LOAD_AS_FILE(X)
1. If X is a file, load X as its file extension format. STOP
2. If X.js is a file,
   a. Find the closest package scope SCOPE to X.
   b. If no scope was found
      1. MAYBE_DETECT_AND_LOAD(X.js)
   c. If the SCOPE/package.json contains "type" field,
      1. If the "type" field is "module", load X.js as an ECMAScript module. STOP.
      2. If the "type" field is "commonjs", load X.js as a CommonJS module. STOP.
   d. MAYBE_DETECT_AND_LOAD(X.js)
3. If X.json is a file, load X.json to a JavaScript Object. STOP
4. If X.node is a file, load X.node as binary addon. STOP

LOAD_INDEX(X)
1. If X/index.js is a file
   a. Find the closest package scope SCOPE to X.
   b. If no scope was found, load X/index.js as a CommonJS module. STOP.
   c. If the SCOPE/package.json contains "type" field,
      1. If the "type" field is "module", load X/index.js as an ECMAScript module. STOP.
      2. Else, load X/index.js as a CommonJS module. STOP.
2. If X/index.json is a file, parse X/index.json to a JavaScript object. STOP
3. If X/index.node is a file, load X/index.node as binary addon. STOP

LOAD_AS_DIRECTORY(X)
1. If X/package.json is a file,
   a. Parse X/package.json, and look for "main" field.
   b. If "main" is a falsy value, GOTO 2.
   c. let M = X + (json main field)
   d. LOAD_AS_FILE(M)
   e. LOAD_INDEX(M)
   f. LOAD_INDEX(X) DEPRECATED
   g. THROW "not found"
2. LOAD_INDEX(X)

LOAD_NODE_MODULES(X, START)
1. let DIRS = NODE_MODULES_PATHS(START)
2. for each DIR in DIRS:
   a. LOAD_PACKAGE_EXPORTS(X, DIR)
   b. LOAD_AS_FILE(DIR/X)
   c. LOAD_AS_DIRECTORY(DIR/X)

NODE_MODULES_PATHS(START)
1. let PARTS = path split(START)
2. let I = count of PARTS - 1
3. let DIRS = []
4. while I >= 0,
   a. if PARTS[I] = "node_modules", GOTO d.
   b. DIR = path join(PARTS[0 .. I] + "node_modules")
   c. DIRS = DIRS + DIR
   d. let I = I - 1
5. return DIRS + GLOBAL_FOLDERS

LOAD_PACKAGE_IMPORTS(X, DIR)
1. Find the closest package scope SCOPE to DIR.
2. If no scope was found, return.
3. If the SCOPE/package.json "imports" is null or undefined, return.
4. If `--no-require-module` is not enabled
   a. let CONDITIONS = ["node", "require", "module-sync"]
   b. Else, let CONDITIONS = ["node", "require"]
5. let MATCH = PACKAGE_IMPORTS_RESOLVE(X, pathToFileURL(SCOPE), CONDITIONS)
   defined in the ESM resolver.
6. RESOLVE_ESM_MATCH(MATCH).

LOAD_PACKAGE_EXPORTS(X, DIR)
1. Try to interpret X as a combination of NAME and SUBPATH where the name
   may have a @scope/ prefix and the subpath begins with a slash (`/`).
2. If X does not match this pattern or DIR/NAME/package.json is not a file,
   return.
3. Parse DIR/NAME/package.json, and look for "exports" field.
4. If "exports" is null or undefined, return.
5. If `--no-require-module` is not enabled
   a. let CONDITIONS = ["node", "require", "module-sync"]
   b. Else, let CONDITIONS = ["node", "require"]
6. let MATCH = PACKAGE_EXPORTS_RESOLVE(pathToFileURL(DIR/NAME), "." + SUBPATH,
   `package.json` "exports", CONDITIONS) defined in the ESM resolver.
7. RESOLVE_ESM_MATCH(MATCH)

LOAD_PACKAGE_SELF(X, DIR)
1. Find the closest package scope SCOPE to DIR.
2. If no scope was found, return.
3. If the SCOPE/package.json "exports" is null or undefined, return.
4. If the SCOPE/package.json "name" is not the first segment of X, return.
5. let MATCH = PACKAGE_EXPORTS_RESOLVE(pathToFileURL(SCOPE),
   "." + X.slice("name".length), `package.json` "exports", ["node", "require"])
   defined in the ESM resolver.
6. RESOLVE_ESM_MATCH(MATCH)

RESOLVE_ESM_MATCH(MATCH)
1. let RESOLVED_PATH = fileURLToPath(MATCH)
2. If the file at RESOLVED_PATH exists, load RESOLVED_PATH as its extension
   format. STOP
3. THROW "not found"
```

Caching#
Modules are cached after the first time they are loaded. This means (among other things) that every call to require('foo') will get exactly the same object returned, if it would resolve to the same file.

Provided require.cache is not modified, multiple calls to require('foo') will not cause the module code to be executed multiple times. This is an important feature. With it, "partially done" objects can be returned, thus allowing transitive dependencies to be loaded even when they would cause cycles.

To have a module execute code multiple times, export a function, and call that function.
Module caching caveats#
Modules are cached based on their resolved filename. Since modules may resolve to a different filename based on the location of the calling module (loading from node_modules folders), it is not a guarantee that require('foo') will always return the exact same object, if it would resolve to different files.

Additionally, on case-insensitive file systems or operating systems, different resolved filenames can point to the same file, but the cache will still treat them as different modules and will reload the file multiple times. For example, require('./foo') and require('./FOO') return two different objects, irrespective of whether or not ./foo and ./FOO are the same file.
Built-in modules#
History
| Version | Changes |
|---|---|
| v16.0.0, v14.18.0 | Added |
Node.js has several modules compiled into the binary. These modules are described in greater detail elsewhere in this documentation.

The built-in modules are defined within the Node.js source and are located in the lib/ folder.

Built-in modules can be identified using the node: prefix, in which case they bypass the require cache. For instance, require('node:http') will always return the built-in HTTP module, even if there is a require.cache entry by that name.

Some built-in modules are always preferentially loaded if their identifier is passed to require(). For instance, require('http') will always return the built-in HTTP module, even if there is a file by that name.

The list of all the built-in modules can be retrieved from module.builtinModules. They are all listed without the node: prefix, except those that mandate such a prefix (as explained in the next section).
Built-in modules with mandatory node: prefix#

When being loaded by require(), some built-in modules must be requested with the node: prefix. This requirement exists to prevent newly introduced built-in modules from having a conflict with userland packages that have already taken the name. Currently the built-in modules that require the node: prefix are:

The list of these modules is exposed in module.builtinModules, including the prefix.
Cycles#
When there are circular require() calls, a module might not have finished executing when it is returned.
Consider this situation:
a.js:
```js
console.log('a starting');
exports.done = false;
const b = require('./b.js');
console.log('in a, b.done = %j', b.done);
exports.done = true;
console.log('a done');
```

b.js:

```js
console.log('b starting');
exports.done = false;
const a = require('./a.js');
console.log('in b, a.done = %j', a.done);
exports.done = true;
console.log('b done');
```

main.js:

```js
console.log('main starting');
const a = require('./a.js');
const b = require('./b.js');
console.log('in main, a.done = %j, b.done = %j', a.done, b.done);
```

When main.js loads a.js, then a.js in turn loads b.js. At that point, b.js tries to load a.js. In order to prevent an infinite loop, an unfinished copy of the a.js exports object is returned to the b.js module. b.js then finishes loading, and its exports object is provided to the a.js module.

By the time main.js has loaded both modules, they're both finished. The output of this program would thus be:

```console
$ node main.js
main starting
a starting
b starting
in b, a.done = false
b done
in a, b.done = true
a done
in main, a.done = true, b.done = true
```

Careful planning is required to allow cyclic module dependencies to work correctly within an application.
File modules#
If the exact filename is not found, then Node.js will attempt to load the required filename with the added extensions: .js, .json, and finally .node. When loading a file that has a different extension (e.g. .cjs), its full name must be passed to require(), including its file extension (e.g. require('./file.cjs')).

.json files are parsed as JSON text files, .node files are interpreted as compiled addon modules loaded with process.dlopen(). Files using any other extension (or no extension at all) are parsed as JavaScript text files. Refer to the Determining module system section to understand what parse goal will be used.

A required module prefixed with '/' is an absolute path to the file. For example, require('/home/marco/foo.js') will load the file at /home/marco/foo.js.

A required module prefixed with './' is relative to the file calling require(). That is, circle.js must be in the same directory as foo.js for require('./circle') to find it.

Without a leading '/', './', or '../' to indicate a file, the module must either be a core module or be loaded from a node_modules folder.

If the given path does not exist, require() will throw a MODULE_NOT_FOUND error.
Folders as modules#
There are three ways in which a folder may be passed to require() as an argument.

The first is to create a package.json file in the root of the folder, which specifies a main module. An example package.json file might look like this:

```json
{
  "name": "some-library",
  "main": "./lib/some-library.js"
}
```

If this was in a folder at ./some-library, then require('./some-library') would attempt to load ./some-library/lib/some-library.js.

If there is no package.json file present in the directory, or if the "main" entry is missing or cannot be resolved, then Node.js will attempt to load an index.js or index.node file out of that directory. For example, if there was no package.json file in the previous example, then require('./some-library') would attempt to load:

- ./some-library/index.js
- ./some-library/index.node

If these attempts fail, then Node.js will report the entire module as missing with the default error:

```console
Error: Cannot find module 'some-library'
```

In all three above cases, an import('./some-library') call would result in an ERR_UNSUPPORTED_DIR_IMPORT error. Using package subpath exports or subpath imports can provide the same containment organization benefits as folders as modules, and work for both require and import.
Loading from node_modules folders#

If the module identifier passed to require() is not a built-in module, and does not begin with '/', '../', or './', then Node.js starts at the directory of the current module, adds /node_modules, and attempts to load the module from that location. Node.js will not append node_modules to a path already ending in node_modules.

If it is not found there, then it moves to the parent directory, and so on, until the root of the file system is reached.

For example, if the file at '/home/ry/projects/foo.js' called require('bar.js'), then Node.js would look in the following locations, in this order:

- /home/ry/projects/node_modules/bar.js
- /home/ry/node_modules/bar.js
- /home/node_modules/bar.js
- /node_modules/bar.js

This allows programs to localize their dependencies, so that they do not clash.

It is possible to require specific files or sub modules distributed with a module by including a path suffix after the module name. For instance require('example-module/path/to/file') would resolve path/to/file relative to where example-module is located. The suffixed path follows the same module resolution semantics.
Loading from the global folders#
If the NODE_PATH environment variable is set to a colon-delimited list of absolute paths, then Node.js will search those paths for modules if they are not found elsewhere.

On Windows, NODE_PATH is delimited by semicolons (;) instead of colons.

NODE_PATH was originally created to support loading modules from varying paths before the current module resolution algorithm was defined.

NODE_PATH is still supported, but is less necessary now that the Node.js ecosystem has settled on a convention for locating dependent modules. Sometimes deployments that rely on NODE_PATH show surprising behavior when people are unaware that NODE_PATH must be set. Sometimes a module's dependencies change, causing a different version (or even a different module) to be loaded as the NODE_PATH is searched.
Additionally, Node.js will search in the following list of GLOBAL_FOLDERS:
- 1: $HOME/.node_modules
- 2: $HOME/.node_libraries
- 3: $PREFIX/lib/node

Where $HOME is the user's home directory, and $PREFIX is the Node.js configured node_prefix.

These are mostly for historic reasons.

It is strongly encouraged to place dependencies in the local node_modules folder. These will be loaded faster, and more reliably.
The module wrapper#
Before a module's code is executed, Node.js will wrap it with a function wrapper that looks like the following:

```js
(function(exports, require, module, __filename, __dirname) {
// Module code actually lives in here
});
```

By doing this, Node.js achieves a few things:

- It keeps top-level variables (defined with var, const, or let) scoped to the module rather than the global object.
- It helps to provide some global-looking variables that are actually specific to the module, such as:
  - The module and exports objects that the implementor can use to export values from the module.
  - The convenience variables __filename and __dirname, containing the module's absolute filename and directory path.
The module scope#
__dirname#
- Type:<string>
The directory name of the current module. This is the same as the path.dirname() of the __filename.

Example: running node example.js from /Users/mjr

```js
console.log(__dirname);
// Prints: /Users/mjr
console.log(path.dirname(__filename));
// Prints: /Users/mjr
```

__filename#
- Type:<string>
The file name of the current module. This is the current module file's absolute path with symlinks resolved.

For a main program this is not necessarily the same as the file name used in the command line.

See __dirname for the directory name of the current module.
Examples:
Running node example.js from /Users/mjr

```js
console.log(__filename);
// Prints: /Users/mjr/example.js
console.log(__dirname);
// Prints: /Users/mjr
```

Given two modules: a and b, where b is a dependency of a and there is a directory structure of:

- /Users/mjr/app/a.js
- /Users/mjr/app/node_modules/b/b.js

References to __filename within b.js will return /Users/mjr/app/node_modules/b/b.js while references to __filename within a.js will return /Users/mjr/app/a.js.
exports#
- Type:<Object>
A reference to the module.exports that is shorter to type. See the section about the exports shortcut for details on when to use exports and when to use module.exports.
module#
- Type:<module>
A reference to the current module, see the section about the module object. In particular, module.exports is used for defining what a module exports and makes available through require().
require(id)#
Used to import modules, JSON, and local files. Modules can be imported from node_modules. Local modules and JSON files can be imported using a relative path (e.g. ./, ./foo, ./bar/baz, ../foo) that will be resolved against the directory named by __dirname (if defined) or the current working directory. The relative paths of POSIX style are resolved in an OS independent fashion, meaning that the examples above will work on Windows in the same way they would on Unix systems.

```js
// Importing a local module with a path relative to the `__dirname` or current
// working directory. (On Windows, this would resolve to .\path\myLocalModule.)
const myLocalModule = require('./path/myLocalModule');

// Importing a JSON file:
const jsonData = require('./path/filename.json');

// Importing a module from node_modules or Node.js built-in module:
const crypto = require('node:crypto');
```

require.cache#
- Type:<Object>
Modules are cached in this object when they are required. By deleting a key value from this object, the next require will reload the module. This does not apply to native addons, for which reloading will result in an error.
Adding or replacing entries is also possible. This cache is checked before built-in modules and if a name matching a built-in module is added to the cache, only node:-prefixed require calls are going to receive the built-in module. Use with care!

```js
const assert = require('node:assert');
const realFs = require('node:fs');

const fakeFs = {};
require.cache.fs = { exports: fakeFs };

assert.strictEqual(require('fs'), fakeFs);
assert.strictEqual(require('node:fs'), realFs);
```

require.extensions#
- Type:<Object>
Instruct require on how to handle certain file extensions.

Process files with the extension .sjs as .js:

```js
require.extensions['.sjs'] = require.extensions['.js'];
```

Deprecated. In the past, this list has been used to load non-JavaScript modules into Node.js by compiling them on-demand. However, in practice, there are much better ways to do this, such as loading modules via some other Node.js program, or compiling them to JavaScript ahead of time.

Avoid using require.extensions. Its use could cause subtle bugs, and resolving the extensions gets slower with each registered extension.
require.main#
- Type:<module> |<undefined>
The Module object representing the entry script loaded when the Node.js process launched, or undefined if the entry point of the program is not a CommonJS module. See "Accessing the main module".
In entry.js script:

```js
console.log(require.main);
```

```console
node entry.js
```

```console
Module {
  id: '.',
  path: '/absolute/path/to',
  exports: {},
  filename: '/absolute/path/to/entry.js',
  loaded: false,
  children: [],
  paths: [
    '/absolute/path/to/node_modules',
    '/absolute/path/node_modules',
    '/absolute/node_modules',
    '/node_modules'
  ]
}
```

require.resolve(request[, options])#
History
| Version | Changes |
|---|---|
| v8.9.0 | The |
| v0.3.0 | Added in: v0.3.0 |
- request <string> The module path to resolve.
- options <Object>
  - paths <string[]> Paths to resolve module location from. If present, these paths are used instead of the default resolution paths, with the exception of GLOBAL_FOLDERS like $HOME/.node_modules, which are always included. Each of these paths is used as a starting point for the module resolution algorithm, meaning that the node_modules hierarchy is checked from this location.
- Returns:<string>
Use the internal require() machinery to look up the location of a module, but rather than loading the module, just return the resolved filename.

If the module cannot be found, a MODULE_NOT_FOUND error is thrown.
require.resolve.paths(request)#
- request <string> The module path whose lookup paths are being retrieved.
- Returns: <string[]> | <null>

Returns an array containing the paths searched during resolution of request, or null if the request string references a core module, for example http or fs.
The module object#

- Type:<Object>

In each module, the module free variable is a reference to the object representing the current module. For convenience, module.exports is also accessible via the exports module-global. module is not actually a global but rather local to each module.
module.children#
- Type:<module[]>
The module objects required for the first time by this one.
module.exports#
- Type:<Object>
The module.exports object is created by the Module system. Sometimes this is not acceptable; many want their module to be an instance of some class. To do this, assign the desired export object to module.exports. Assigning the desired object to exports will simply rebind the local exports variable, which is probably not what is desired.

For example, suppose we were making a module called a.js:

```js
const EventEmitter = require('node:events');

module.exports = new EventEmitter();

// Do some work, and after some time emit
// the 'ready' event from the module itself.
setTimeout(() => {
  module.exports.emit('ready');
}, 1000);
```

Then in another file we could do:

```js
const a = require('./a');
a.on('ready', () => {
  console.log('module "a" is ready');
});
```

Assignment to module.exports must be done immediately. It cannot be done in any callbacks. This does not work:
x.js:
```js
setTimeout(() => {
  module.exports = { a: 'hello' };
}, 0);
```

y.js:

```js
const x = require('./x');
console.log(x.a);
```

exports shortcut#
The exports variable is available within a module's file-level scope, and is assigned the value of module.exports before the module is evaluated.

It allows a shortcut, so that module.exports.f = ... can be written more succinctly as exports.f = .... However, be aware that like any variable, if a new value is assigned to exports, it is no longer bound to module.exports:

```js
module.exports.hello = true; // Exported from require of module
exports = { hello: false };  // Not exported, only available in the module
```

When the module.exports property is being completely replaced by a new object, it is common to also reassign exports:

```js
module.exports = exports = function Constructor() {
  // ... etc.
};
```

To illustrate the behavior, imagine this hypothetical implementation of require(), which is quite similar to what is actually done by require():

```js
function require(/* ... */) {
  const module = { exports: {} };
  ((module, exports) => {
    // Module code here. In this example, define a function.
    function someFunc() {}
    exports = someFunc;
    // At this point, exports is no longer a shortcut to module.exports, and
    // this module will still export an empty default object.
    module.exports = someFunc;
    // At this point, the module will now export someFunc, instead of the
    // default object.
  })(module, module.exports);
  return module.exports;
}
```

module.id#
- Type:<string>
The identifier for the module. Typically this is the fully resolved filename.
module.isPreloading#
- Type:<boolean>
true if the module is running during the Node.js preload phase.
module.loaded#
- Type:<boolean>
Whether or not the module is done loading, or is in the process of loading.
module.parent#
- Type:<module> |<null> |<undefined>
The module that first required this one, or null if the current module is the entry point of the current process, or undefined if the module was loaded by something that is not a CommonJS module (e.g. REPL or import).
module.path#
- Type:<string>
The directory name of the module. This is usually the same as the path.dirname() of the module.id.
module.require(id)#
The module.require() method provides a way to load a module as if require() was called from the original module.

In order to do this, it is necessary to get a reference to the module object. Since require() returns the module.exports, and the module is typically only available within a specific module's code, it must be explicitly exported in order to be used.
The Module object#

This section was moved to Modules: module core module.

Source map v3 support#

This section was moved to Modules: module core module.
Modules: ECMAScript modules#
History
| Version | Changes |
|---|---|
| v22.0.0 | Drop support for import assertions. |
| v23.1.0, v22.12.0, v20.18.3, v18.20.5 | Import attributes are no longer experimental. |
| v21.0.0, v20.10.0, v18.20.0 | Add experimental support for import attributes. |
| v20.0.0, v18.19.0 | Module customization hooks are executed off the main thread. |
| v18.6.0, v16.17.0 | Add support for chaining module customization hooks. |
| v17.1.0, v16.14.0 | Add experimental support for import assertions. |
| v17.0.0, v16.12.0 | Consolidate customization hooks, removed |
| v14.8.0 | Unflag Top-Level Await. |
| v15.3.0, v14.17.0, v12.22.0 | Stabilize modules implementation. |
| v14.13.0, v12.20.0 | Support for detection of CommonJS named exports. |
| v14.0.0, v13.14.0, v12.20.0 | Remove experimental modules warning. |
| v13.2.0, v12.17.0 | Loading ECMAScript modules no longer requires a command-line flag. |
| v12.0.0 | Add support for ES modules using |
| v8.5.0 | Added in: v8.5.0 |
Introduction#
ECMAScript modules are the official standard format to package JavaScript code for reuse. Modules are defined using a variety of import and export statements.
The following example of an ES module exports a function:
```js
// addTwo.mjs
function addTwo(num) {
  return num + 2;
}

export { addTwo };
```

The following example of an ES module imports the function from addTwo.mjs:

```js
// app.mjs
import { addTwo } from './addTwo.mjs';

// Prints: 6
console.log(addTwo(4));
```

Node.js fully supports ECMAScript modules as they are currently specified and provides interoperability between them and its original module format, CommonJS.
Enabling#
Node.js has two module systems: CommonJS modules and ECMAScript modules.

Authors can tell Node.js to interpret JavaScript as an ES module via the .mjs file extension, the package.json "type" field with a value "module", or the --input-type flag with a value of "module". These are explicit markers of code being intended to run as an ES module.

Inversely, authors can explicitly tell Node.js to interpret JavaScript as CommonJS via the .cjs file extension, the package.json "type" field with a value "commonjs", or the --input-type flag with a value of "commonjs".

When code lacks explicit markers for either module system, Node.js will inspect the source code of a module to look for ES module syntax. If such syntax is found, Node.js will run the code as an ES module; otherwise it will run the module as CommonJS. See Determining module system for more details.
Packages#
This section was moved toModules: Packages.
import Specifiers#
Terminology#
The specifier of an import statement is the string after the from keyword, e.g. 'node:path' in import { sep } from 'node:path'. Specifiers are also used in export from statements, and as the argument to an import() expression.
There are three types of specifiers:
- Relative specifiers like './startup.js' or '../config.mjs'. They refer to a path relative to the location of the importing file. The file extension is always necessary for these.
- Bare specifiers like 'some-package' or 'some-package/shuffle'. They can refer to the main entry point of a package by the package name, or a specific feature module within a package prefixed by the package name as per the examples respectively. Including the file extension is only necessary for packages without an "exports" field.
- Absolute specifiers like 'file:///opt/nodejs/config.js'. They refer directly and explicitly to a full path.

Bare specifier resolutions are handled by the Node.js module resolution and loading algorithm. All other specifier resolutions are always only resolved with the standard relative URL resolution semantics.

Like in CommonJS, module files within packages can be accessed by appending a path to the package name unless the package's package.json contains an "exports" field, in which case files within packages can only be accessed via the paths defined in "exports".

For details on these package resolution rules that apply to bare specifiers in the Node.js module resolution, see the packages documentation.
Mandatory file extensions#
A file extension must be provided when using the import keyword to resolve relative or absolute specifiers. Directory indexes (e.g. './startup/index.js') must also be fully specified.

This behavior matches how import behaves in browser environments, assuming a typically configured server.
URLs#
ES modules are resolved and cached as URLs. This means that special characters must be percent-encoded, such as # with %23 and ? with %3F.

file:, node:, and data: URL schemes are supported. A specifier like 'https://example.com/app.js' is not supported natively in Node.js unless using a custom HTTPS loader.
file: URLs#
Modules are loaded multiple times if the import specifier used to resolve them has a different query or fragment.

```js
import './foo.mjs?query=1'; // loads ./foo.mjs with query of "?query=1"
import './foo.mjs?query=2'; // loads ./foo.mjs with query of "?query=2"
```

The volume root may be referenced via /, //, or file:///. Given the differences between URL and path resolution (such as percent encoding details), it is recommended to use url.pathToFileURL when importing a path.
data: imports#
data: URLs are supported for importing with the following MIME types:
- text/javascript for ES modules
- application/json for JSON
- application/wasm for Wasm

```js
import 'data:text/javascript,console.log("hello!");';
import _ from 'data:application/json,"world!"' with { type: 'json' };
```

data: URLs only resolve bare specifiers for builtin modules and absolute specifiers. Resolving relative specifiers does not work because data: is not a special scheme. For example, attempting to load ./foo from data:text/javascript,import "./foo"; fails to resolve because there is no concept of relative resolution for data: URLs.
node: imports#
History
| Version | Changes |
|---|---|
| v16.0.0, v14.18.0 | Added |
| v14.13.1, v12.20.0 | Added in: v14.13.1, v12.20.0 |
node: URLs are supported as an alternative means to load Node.js builtin modules. This URL scheme allows for builtin modules to be referenced by valid absolute URL strings.

```js
import fs from 'node:fs/promises';
```

Import attributes#
History
| Version | Changes |
|---|---|
| v21.0.0, v20.10.0, v18.20.0 | Switch from Import Assertions to Import Attributes. |
| v17.1.0, v16.14.0 | Added in: v17.1.0, v16.14.0 |
Import attributes are an inline syntax for module import statements to pass on more information alongside the module specifier.

```js
import fooData from './foo.json' with { type: 'json' };

const { default: barData } = await import('./bar.json', { with: { type: 'json' } });
```

Node.js only supports the type attribute, for which it supports the following values:
| Attribute type | Needed for |
|---|---|
| 'json' | JSON modules |

The type: 'json' attribute is mandatory when importing JSON modules.
Built-in modules#
Built-in modules provide named exports of their public API. A default export is also provided which is the value of the CommonJS exports. The default export can be used for, among other things, modifying the named exports. Named exports of built-in modules are updated only by calling module.syncBuiltinESMExports().

```js
import EventEmitter from 'node:events';
const e = new EventEmitter();

import { readFile } from 'node:fs';
readFile('./foo.txt', (err, source) => {
  if (err) {
    console.error(err);
  } else {
    console.log(source);
  }
});

import fs, { readFileSync } from 'node:fs';
import { syncBuiltinESMExports } from 'node:module';
import { Buffer } from 'node:buffer';

fs.readFileSync = () => Buffer.from('Hello, ESM');
syncBuiltinESMExports();

fs.readFileSync === readFileSync;
```

When importing built-in modules, all the named exports (i.e. properties of the module exports object) are populated even if they are not individually accessed. This can make initial imports of built-in modules slightly slower compared to loading them with require() or process.getBuiltinModule(), where the module exports object is evaluated immediately, but some of its properties may only be initialized when first accessed individually.
import() expressions#
Dynamic import() provides an asynchronous way to import modules. It is supported in both CommonJS and ES modules, and can be used to load both CommonJS and ES modules.
import.meta#
- Type:<Object>
The import.meta meta property is an Object that contains the following properties. It is only supported in ES modules.
import.meta.dirname#
History
| Version | Changes |
|---|---|
| v24.0.0, v22.16.0 | This property is no longer experimental. |
| v21.2.0, v20.11.0 | Added in: v21.2.0, v20.11.0 |
- Type:<string> The directory name of the current module.
This is the same as the path.dirname() of the import.meta.filename.

Caveat: only present on file: modules.
import.meta.filename#
History
| Version | Changes |
|---|---|
| v24.0.0, v22.16.0 | This property is no longer experimental. |
| v21.2.0, v20.11.0 | Added in: v21.2.0, v20.11.0 |
- Type:<string> The full absolute path and filename of the current module, withsymlinks resolved.
This is the same as the url.fileURLToPath() of the import.meta.url.

Caveat: only local modules support this property. Modules not using the file: protocol will not provide it.
import.meta.url#
- Type:<string> The absolute file: URL of the module.

This is defined exactly the same as it is in browsers, providing the URL of the current module file.
This enables useful patterns such as relative file loading:
```js
import { readFileSync } from 'node:fs';

const buffer = readFileSync(new URL('./data.proto', import.meta.url));
```

import.meta.main#
- Type:<boolean>
true when the current module is the entry point of the current process; false otherwise.

Equivalent to require.main === module in CommonJS.
Analogous to Python's__name__ == "__main__".
```js
export function foo() {
  return 'Hello, world';
}

function main() {
  const message = foo();
  console.log(message);
}

if (import.meta.main) main();
// `foo` can be imported from another module without possible side-effects from `main`
```

import.meta.resolve(specifier)#
History
| Version | Changes |
|---|---|
| v20.6.0, v18.19.0 | No longer behind |
| v20.6.0, v18.19.0 | This API no longer throws when targeting |
| v20.0.0, v18.19.0 | This API now returns a string synchronously instead of a Promise. |
| v16.2.0, v14.18.0 | Add support for WHATWG |
| v13.9.0, v12.16.2 | Added in: v13.9.0, v12.16.2 |
- specifier <string> The module specifier to resolve relative to the current module.
- Returns: <string> The absolute URL string that the specifier would resolve to.

import.meta.resolve is a module-relative resolution function scoped to each module, returning the URL string.

```js
const dependencyAsset = import.meta.resolve('component-lib/asset.css');
// file:///app/node_modules/component-lib/asset.css
import.meta.resolve('./dep.js');
// file:///app/dep.js
```

All features of the Node.js module resolution are supported. Dependency resolutions are subject to the permitted exports resolutions within the package.
Caveats:
- This can result in synchronous file-system operations, which can impact performance similarly to require.resolve.
- This feature is not available within custom loaders (it would create a deadlock).
Non-standard API:
When using the--experimental-import-meta-resolve flag, that function acceptsa second argument:
Interoperability with CommonJS#
import statements#
An import statement can reference an ES module or a CommonJS module. import statements are permitted only in ES modules, but dynamic import() expressions are supported in CommonJS for loading ES modules.

When importing CommonJS modules, the module.exports object is provided as the default export. Named exports may be available, provided by static analysis as a convenience for better ecosystem compatibility.
require#
The CommonJS module require currently only supports loading synchronous ES modules (that is, ES modules that do not use top-level await).

See Loading ECMAScript modules using require() for details.
CommonJS Namespaces#
History
| Version | Changes |
|---|---|
| v23.0.0 | Added |
| v14.13.0 | Added in: v14.13.0 |
CommonJS modules consist of a module.exports object which can be of any type.

To support this, when importing CommonJS from an ECMAScript module, a namespace wrapper for the CommonJS module is constructed, which always provides a default export key pointing to the CommonJS module.exports value.

In addition, a heuristic static analysis is performed against the source text of the CommonJS module to get a best-effort static list of exports to provide on the namespace from values on module.exports. This is necessary since these namespaces must be constructed prior to the evaluation of the CJS module.

These CommonJS namespace objects also provide the default export as a 'module.exports' named export, in order to unambiguously indicate that their representation in CommonJS uses this value, and not the namespace value. This mirrors the semantics of the handling of the 'module.exports' export name in require(esm) interop support.

When importing a CommonJS module, it can be reliably imported using the ES module default import or its corresponding sugar syntax:
```js
import { default as cjs } from 'cjs';
// Identical to the above
import cjsSugar from 'cjs';

console.log(cjs);
console.log(cjs === cjsSugar);
// Prints:
//   <module.exports>
//   true
```

This Module Namespace Exotic Object can be directly observed either when using import * as m from 'cjs' or a dynamic import:

```js
import * as m from 'cjs';
console.log(m);
console.log(m === await import('cjs'));
// Prints:
//   [Module] { default: <module.exports>, 'module.exports': <module.exports> }
//   true
```

For better compatibility with existing usage in the JS ecosystem, Node.js in addition attempts to determine the CommonJS named exports of every imported CommonJS module to provide them as separate ES module exports using a static analysis process.
For example, consider a CommonJS module written:
```js
// cjs.cjs
exports.name = 'exported';
```

The preceding module supports named imports in ES modules:

```js
import { name } from './cjs.cjs';
console.log(name);
// Prints: 'exported'

import cjs from './cjs.cjs';
console.log(cjs);
// Prints: { name: 'exported' }

import * as m from './cjs.cjs';
console.log(m);
// Prints:
//   [Module] {
//     default: { name: 'exported' },
//     'module.exports': { name: 'exported' },
//     name: 'exported'
//   }
```

As can be seen from the last example of the Module Namespace Exotic Object being logged, the `name` export is copied off of the `module.exports` object and set directly on the ES module namespace when the module is imported.
Live binding updates or new exports added to `module.exports` are not detected for these named exports.
The detection of named exports is based on common syntax patterns but does not always correctly detect named exports. In these cases, using the default import form described above can be a better option.
Named exports detection covers many common export patterns, reexport patterns, and build tool and transpiler outputs. See cjs-module-lexer for the exact semantics implemented.
Differences between ES modules and CommonJS#
No `require`, `exports`, or `module.exports`#
In most cases, the ES module `import` can be used to load CommonJS modules.
If needed, a `require` function can be constructed within an ES module using `module.createRequire()`.
No `__filename` or `__dirname`#
These CommonJS variables are not available in ES modules.
`__filename` and `__dirname` use cases can be replicated via `import.meta.filename` and `import.meta.dirname`.
No Addon Loading#
Addons are not currently supported with ES module imports.
They can instead be loaded with `module.createRequire()` or `process.dlopen`.
No `require.main`#
To replace `require.main === module`, there is the `import.meta.main` API.
No `require.resolve`#
Relative resolution can be handled via `new URL('./local', import.meta.url)`.
For a complete `require.resolve` replacement, there is the `import.meta.resolve` API.
Alternatively, `module.createRequire()` can be used.
No `NODE_PATH`#
`NODE_PATH` is not part of resolving `import` specifiers. Please use symlinks if this behavior is desired.
No `require.extensions`#
`require.extensions` is not used by `import`. Module customization hooks can provide a replacement.
No `require.cache`#
`require.cache` is not used by `import` as the ES module loader has its own separate cache.
JSON modules#
History
| Version | Changes |
|---|---|
| v23.1.0, v22.12.0, v20.18.3, v18.20.5 | JSON modules are no longer experimental. |
JSON files can be referenced by `import`:

```js
import packageConfig from './package.json' with { type: 'json' };
```

The `with { type: 'json' }` syntax is mandatory; see Import Attributes.

The imported JSON only exposes a `default` export. There is no support for named exports. A cache entry is created in the CommonJS cache to avoid duplication. The same object is returned in CommonJS if the JSON module has already been imported from the same path.
Wasm modules#
History
| Version | Changes |
|---|---|
| v24.5.0, v22.19.0 | Wasm modules no longer require the `--experimental-wasm-modules` flag. |
Importing both WebAssembly module instances and WebAssembly source phase imports is supported.
Both of these integrations are in line with the ES Module Integration Proposal for WebAssembly.
Wasm Source Phase Imports#
The Source Phase Imports proposal allows the `import source` keyword combination to import a `WebAssembly.Module` object directly, instead of getting a module instance already instantiated with its dependencies.
This is useful when needing custom instantiations for Wasm, while still resolving and loading it through the ES module integration.
For example, to create multiple instances of a module, or to pass custom imports into a new instance of `library.wasm`:
```js
import source libraryModule from './library.wasm';

const instance1 = await WebAssembly.instantiate(libraryModule, importObject1);
const instance2 = await WebAssembly.instantiate(libraryModule, importObject2);
```

In addition to the static source phase, there is also a dynamic variant of the source phase via the `import.source` dynamic phase import syntax:

```js
const dynamicLibrary = await import.source('./library.wasm');

const instance = await WebAssembly.instantiate(dynamicLibrary, importObject);
```

JavaScript String Builtins#
When importing WebAssembly modules, the WebAssembly JS String Builtins Proposal is automatically enabled through the ESM Integration. This allows WebAssembly modules to directly use efficient compile-time string builtins from the `wasm:js-string` namespace.
For example, the following Wasm module exports a string `getLength` function using the `wasm:js-string` `length` builtin:
```wat
(module
  ;; Compile-time import of the string length builtin.
  (import "wasm:js-string" "length"
    (func $string_length (param externref) (result i32)))
  ;; Define getLength, taking a JS value parameter assumed to be a string,
  ;; calling string length on it and returning the result.
  (func $getLength (param $str externref) (result i32)
    local.get $str
    call $string_length
  )
  ;; Export the getLength function.
  (export "getLength" (func $getLength)))
```

```js
import { getLength } from './string-len.wasm';

getLength('foo'); // Returns 3.
```

Wasm builtins are compile-time imports that are linked during module compilation rather than during instantiation. They do not behave like normal module graph imports and they cannot be inspected via `WebAssembly.Module.imports(mod)` or virtualized unless recompiling the module using the direct `WebAssembly.compile` API with string builtins disabled.
Importing a module in the source phase before it has been instantiated will also use the compile-time builtins automatically:

```js
import source mod from './string-len.wasm';

const { exports: { getLength } } = await WebAssembly.instantiate(mod, {});

getLength('foo'); // Also returns 3.
```

Wasm Instance Phase Imports#
Instance imports allow any `.wasm` files to be imported as normal modules, supporting their module imports in turn.

For example, an `index.mjs` containing:

```js
import * as M from './library.wasm';
console.log(M);
```

executed under:

```bash
node index.mjs
```

would provide the exports interface for the instantiation of `library.wasm`.
Reserved Wasm Namespaces#
When importing WebAssembly module instances, they cannot use import module names or import/export names that start with reserved prefixes:

- `wasm-js:` - reserved in all module import names, module names, and export names.
- `wasm:` - reserved in module import names and export names (imported module names are allowed in order to support future builtin polyfills).

Importing a module using the above reserved names will throw a `WebAssembly.LinkError`.
Top-level `await`#
The `await` keyword may be used in the top level body of an ECMAScript module.

Assuming an `a.mjs` with

```js
export const five = await Promise.resolve(5);
```

And a `b.mjs` with

```js
import { five } from './a.mjs';

console.log(five); // Logs `5`
```

```bash
node b.mjs # works
```

If a top level `await` expression never resolves, the `node` process will exit with a `13` status code.

```js
import { spawn } from 'node:child_process';
import { execPath } from 'node:process';

spawn(execPath, [
  '--input-type=module',
  '--eval',
  // Never-resolving Promise:
  'await new Promise(() => {})',
]).once('exit', (code) => {
  console.log(code); // Logs `13`
});
```

Loaders#
The former Loaders documentation is now at Modules: Customization hooks.
Resolution and loading algorithm#
Features#
The default resolver has the following properties:
- File URL-based resolution as is used by ES modules
- Relative and absolute URL resolution
- No default extensions
- No folder mains
- Bare specifier package resolution lookup through node_modules
- Does not fail on unknown extensions or protocols
- Can optionally provide a hint of the format to the loading phase
The default loader has the following properties:

- Support for builtin module loading via `node:` URLs
- Support for "inline" module loading via `data:` URLs
- Support for `file:` module loading
- Fails on any other URL protocol
- Fails on unknown extensions for `file:` loading (supports only `.cjs`, `.js`, and `.mjs`)
Resolution algorithm#
The algorithm to load an ES module specifier is given through the ESM_RESOLVE method below. It returns the resolved URL for a module specifier relative to a parentURL.
The resolution algorithm determines the full resolved URL for a module load, along with its suggested module format. The resolution algorithm does not determine whether the resolved URL protocol can be loaded, or whether the file extensions are permitted; instead, these validations are applied by Node.js during the load phase (for example, if it was asked to load a URL that has a protocol that is not `file:`, `data:`, or `node:`).
The algorithm also tries to determine the format of the file based on the extension (see the ESM_FILE_FORMAT algorithm below). If it does not recognize the file extension (e.g. if it is not `.mjs`, `.cjs`, or `.json`), then a format of undefined is returned, which will throw during the load phase.
The algorithm to determine the module format of a resolved URL is provided by ESM_FILE_FORMAT, which returns the unique module format for any file. The "module" format is returned for an ECMAScript Module, while the "commonjs" format is used to indicate loading through the legacy CommonJS loader. Additional formats such as "addon" can be extended in future updates.
In the following algorithms, all subroutine errors are propagated as errors of these top-level routines unless stated otherwise.
defaultConditions is the conditional environment name array, ["node", "import"].
The resolver can throw the following errors:
- Invalid Module Specifier: Module specifier is an invalid URL, package name or package subpath specifier.
- Invalid Package Configuration: package.json configuration is invalid or contains an invalid configuration.
- Invalid Package Target: Package exports or imports define a target module for the package that is an invalid type or string target.
- Package Path Not Exported: Package exports do not define or permit a target subpath in the package for the given module.
- Package Import Not Defined: Package imports do not define the specifier.
- Module Not Found: The package or module requested does not exist.
- Unsupported Directory Import: The resolved path corresponds to a directory, which is not a supported target for module imports.
Resolution Algorithm Specification#
ESM_RESOLVE(specifier, parentURL)

- Let resolved be undefined.
- If specifier is a valid URL, then
- Set resolved to the result of parsing and reserializing specifier as a URL.
- Otherwise, if specifier starts with "/", "./", or "../", then
- Set resolved to the URL resolution of specifier relative to parentURL.
- Otherwise, if specifier starts with "#", then
- Set resolved to the result of PACKAGE_IMPORTS_RESOLVE(specifier, parentURL, defaultConditions).
- Otherwise,
- Note: specifier is now a bare specifier.
- Set resolved to the result of PACKAGE_RESOLVE(specifier, parentURL).
- Let format be undefined.
- If resolved is a "file:" URL, then
- If resolved contains any percent encodings of "/" or "\" ("%2F" and "%5C" respectively), then
- Throw an Invalid Module Specifier error.
- If the file at resolved is a directory, then
- Throw an Unsupported Directory Import error.
- If the file at resolved does not exist, then
- Throw a Module Not Found error.
- Set resolved to the real path of resolved, maintaining the same URL querystring and fragment components.
- Set format to the result of ESM_FILE_FORMAT(resolved).
- Otherwise,
- Set format to the module format of the content type associated with the URL resolved.
- Return format and resolved to the loading phase.
PACKAGE_RESOLVE(packageSpecifier, parentURL)

- Let packageName be undefined.
- If packageSpecifier is an empty string, then
- Throw an Invalid Module Specifier error.
- If packageSpecifier is a Node.js builtin module name, then
- Return the string "node:" concatenated with packageSpecifier.
- If packageSpecifier does not start with "@", then
- Set packageName to the substring of packageSpecifier until the first "/" separator or the end of the string.
- Otherwise,
- If packageSpecifier does not contain a "/" separator, then
- Throw an Invalid Module Specifier error.
- Set packageName to the substring of packageSpecifier until the second "/" separator or the end of the string.
- If packageName starts with "." or contains "\" or "%", then
- Throw an Invalid Module Specifier error.
- Let packageSubpath be "." concatenated with the substring of packageSpecifier from the position at the length of packageName.
- Let selfUrl be the result of PACKAGE_SELF_RESOLVE(packageName, packageSubpath, parentURL).
- If selfUrl is not undefined, return selfUrl.
- While parentURL is not the file system root,
- Let packageURL be the URL resolution of "node_modules/" concatenated with packageName, relative to parentURL.
- Set parentURL to the parent folder URL of parentURL.
- If the folder at packageURL does not exist, then
- Continue the next loop iteration.
- Let pjson be the result of READ_PACKAGE_JSON(packageURL).
- If pjson is not null and pjson.exports is not null or undefined, then
- Return the result of PACKAGE_EXPORTS_RESOLVE(packageURL, packageSubpath, pjson.exports, defaultConditions).
- Otherwise, if packageSubpath is equal to ".", then
- If pjson.main is a string, then
- Return the URL resolution of main in packageURL.
- Otherwise,
- Return the URL resolution of packageSubpath in packageURL.
- Throw a Module Not Found error.
PACKAGE_SELF_RESOLVE(packageName, packageSubpath, parentURL)

- Let packageURL be the result of LOOKUP_PACKAGE_SCOPE(parentURL).
- If packageURL is null, then
- Return undefined.
- Let pjson be the result of READ_PACKAGE_JSON(packageURL).
- If pjson is null or if pjson.exports is null or undefined, then
- Return undefined.
- If pjson.name is equal to packageName, then
- Return the result of PACKAGE_EXPORTS_RESOLVE(packageURL, packageSubpath, pjson.exports, defaultConditions).
- Otherwise, return undefined.
PACKAGE_EXPORTS_RESOLVE(packageURL, subpath, exports, conditions)

Note: This function is directly invoked by the CommonJS resolution algorithm.

- If exports is an Object with both a key starting with "." and a key not starting with ".", throw an Invalid Package Configuration error.
- If subpath is equal to ".", then
- Let mainExport be undefined.
- If exports is a String or Array, or an Object containing no keys starting with ".", then
- Set mainExport to exports.
- Otherwise if exports is an Object containing a "." property, then
- Set mainExport to exports["."].
- If mainExport is not undefined, then
- Let resolved be the result of PACKAGE_TARGET_RESOLVE(packageURL, mainExport, null, false, conditions).
- If resolved is not null or undefined, return resolved.
- Otherwise, if exports is an Object and all keys of exports start with ".", then
- Assert: subpath begins with "./".
- Let resolved be the result of PACKAGE_IMPORTS_EXPORTS_RESOLVE(subpath, exports, packageURL, false, conditions).
- If resolved is not null or undefined, return resolved.
- Throw a Package Path Not Exported error.
PACKAGE_IMPORTS_RESOLVE(specifier, parentURL, conditions)

Note: This function is directly invoked by the CommonJS resolution algorithm.

- Assert: specifier begins with "#".
- If specifier is exactly equal to "#", then
- Throw an Invalid Module Specifier error.
- Let packageURL be the result of LOOKUP_PACKAGE_SCOPE(parentURL).
- If packageURL is not null, then
- Let pjson be the result of READ_PACKAGE_JSON(packageURL).
- If pjson.imports is a non-null Object, then
- Let resolved be the result of PACKAGE_IMPORTS_EXPORTS_RESOLVE(specifier, pjson.imports, packageURL, true, conditions).
- If resolved is not null or undefined, return resolved.
- Throw a Package Import Not Defined error.
PACKAGE_IMPORTS_EXPORTS_RESOLVE(matchKey, matchObj, packageURL, isImports, conditions)

- If matchKey ends in "/", then
- Throw an Invalid Module Specifier error.
- If matchKey is a key of matchObj and does not contain "*", then
- Let target be the value of matchObj[matchKey].
- Return the result of PACKAGE_TARGET_RESOLVE(packageURL, target, null, isImports, conditions).
- Let expansionKeys be the list of keys of matchObj containing only a single "*", sorted by the sorting function PATTERN_KEY_COMPARE which orders in descending order of specificity.
- For each key expansionKey in expansionKeys, do
- Let patternBase be the substring of expansionKey up to but excluding the first "*" character.
- If matchKey starts with but is not equal to patternBase, then
- Let patternTrailer be the substring of expansionKey from the index after the first "*" character.
- If patternTrailer has zero length, or if matchKey ends with patternTrailer and the length of matchKey is greater than or equal to the length of expansionKey, then
- Let target be the value of matchObj[expansionKey].
- Let patternMatch be the substring of matchKey starting at the index of the length of patternBase up to the length of matchKey minus the length of patternTrailer.
- Return the result of PACKAGE_TARGET_RESOLVE(packageURL, target, patternMatch, isImports, conditions).
- Return null.
PATTERN_KEY_COMPARE(keyA, keyB)

- Assert: keyA contains only a single "*".
- Assert: keyB contains only a single "*".
- Let baseLengthA be the index of "*" in keyA.
- Let baseLengthB be the index of "*" in keyB.
- If baseLengthA is greater than baseLengthB, return -1.
- If baseLengthB is greater than baseLengthA, return 1.
- If the length of keyA is greater than the length of keyB, return -1.
- If the length of keyB is greater than the length of keyA, return 1.
- Return 0.
PACKAGE_TARGET_RESOLVE(packageURL, target, patternMatch, isImports, conditions)

- If target is a String, then
- If target does not start with "./", then
- If isImports is false, or if target starts with "../" or "/", or if target is a valid URL, then
- Throw an Invalid Package Target error.
- If patternMatch is a String, then
- Return PACKAGE_RESOLVE(target with every instance of "*" replaced by patternMatch, packageURL + "/").
- Return PACKAGE_RESOLVE(target, packageURL + "/").
- If target split on "/" or "\" contains any "", ".", "..", or "node_modules" segments after the first "." segment, case insensitive and including percent encoded variants, throw an Invalid Package Target error.
- Let resolvedTarget be the URL resolution of the concatenation of packageURL and target.
- Assert: packageURL is contained in resolvedTarget.
- If patternMatch is null, then
- Return resolvedTarget.
- If patternMatch split on "/" or "\" contains any "", ".", "..", or "node_modules" segments, case insensitive and including percent encoded variants, throw an Invalid Module Specifier error.
- Return the URL resolution of resolvedTarget with every instance of "*" replaced with patternMatch.
- Otherwise, if target is a non-null Object, then
- If target contains any index property keys, as defined in ECMA-262 6.1.7 Array Index, throw an Invalid Package Configuration error.
- For each property p of target, in object insertion order,
- If p equals "default" or conditions contains an entry for p, then
- Let targetValue be the value of the p property in target.
- Let resolved be the result of PACKAGE_TARGET_RESOLVE(packageURL, targetValue, patternMatch, isImports, conditions).
- If resolved is equal to undefined, continue the loop.
- Return resolved.
- Return undefined.
- Otherwise, if target is an Array, then
- If target.length is zero, return null.
- For each item targetValue in target, do
- Let resolved be the result of PACKAGE_TARGET_RESOLVE(packageURL, targetValue, patternMatch, isImports, conditions), continuing the loop on any Invalid Package Target error.
- If resolved is undefined, continue the loop.
- Return resolved.
- Return or throw the last fallback resolution null return or error.
- Otherwise, if target is null, return null.
- Otherwise throw an Invalid Package Target error.
ESM_FILE_FORMAT(url)

- Assert: url corresponds to an existing file.
- If url ends in ".mjs", then
- Return "module".
- If url ends in ".cjs", then
- Return "commonjs".
- If url ends in ".json", then
- Return "json".
- If url ends in ".wasm", then
- Return "wasm".
- If --experimental-addon-modules is enabled and url ends in ".node", then
- Return "addon".
- Let packageURL be the result of LOOKUP_PACKAGE_SCOPE(url).
- Let pjson be the result of READ_PACKAGE_JSON(packageURL).
- Let packageType be null.
- If pjson?.type is "module" or "commonjs", then
- Set packageType to pjson.type.
- If url ends in ".js", then
- If packageType is not null, then
- Return packageType.
- If the result of DETECT_MODULE_SYNTAX(source) is true, then
- Return "module".
- Return "commonjs".
- If url does not have any extension, then
- If packageType is "module" and the file at url contains the "application/wasm" content type header for a WebAssembly module, then
- Return "wasm".
- If packageType is not null, then
- Return packageType.
- If the result of DETECT_MODULE_SYNTAX(source) is true, then
- Return "module".
- Return "commonjs".
- Return undefined (will throw during the load phase).
LOOKUP_PACKAGE_SCOPE(url)

- Let scopeURL be url.
- While scopeURL is not the file system root,
- Set scopeURL to the parent URL of scopeURL.
- If scopeURL ends in a "node_modules" path segment, return null.
- Let pjsonURL be the resolution of "package.json" within scopeURL.
- If the file at pjsonURL exists, then
- Return scopeURL.
- Return null.
READ_PACKAGE_JSON(packageURL)

- Let pjsonURL be the resolution of "package.json" within packageURL.
- If the file at pjsonURL does not exist, then
- Return null.
- If the file at pjsonURL does not parse as valid JSON, then
- Throw an Invalid Package Configuration error.
- Return the parsed JSON source of the file at pjsonURL.
DETECT_MODULE_SYNTAX(source)

- Parse source as an ECMAScript module.
- If the parse is successful, then
- If source contains top-level await, static import or export statements, or import.meta, return true.
- If source contains a top-level lexical declaration (const, let, or class) of any of the CommonJS wrapper variables (require, exports, module, __filename, or __dirname), then return true.
- Else return false.
Customizing ESM specifier resolution algorithm#
Module customization hooks provide a mechanism for customizing the ESM specifier resolution algorithm. An example that provides CommonJS-style resolution for ESM specifiers is commonjs-extension-resolution-loader.
Modules: node:module API#
The Module object#
- Type: <Object>

Provides general utility methods when interacting with instances of Module, the `module` variable often seen in CommonJS modules. Accessed via `import 'node:module'` or `require('node:module')`.
module.builtinModules#
History
| Version | Changes |
|---|---|
| v23.5.0 | The list now also contains prefix-only modules. |
| v9.3.0, v8.10.0, v6.13.0 | Added in: v9.3.0, v8.10.0, v6.13.0 |
- Type: <string[]>

A list of the names of all modules provided by Node.js. Can be used to verify if a module is maintained by a third party or not.

`module` in this context isn't the same object that's provided by the module wrapper. To access it, require the `Module` module:

```js
// module.mjs
// In an ECMAScript module
import { builtinModules as builtin } from 'node:module';
```

```js
// module.cjs
// In a CommonJS module
const builtin = require('node:module').builtinModules;
```
module.createRequire(filename)#
- filename <string> | <URL> Filename to be used to construct the require function. Must be a file URL object, file URL string, or absolute path string.
- Returns: <require> Require function

```js
import { createRequire } from 'node:module';
const require = createRequire(import.meta.url);

// sibling-module.js is a CommonJS module.
const siblingModule = require('./sibling-module');
```

module.findPackageJSON(specifier[, base])#
- specifier <string> | <URL> The specifier for the module whose package.json to retrieve. When passing a bare specifier, the package.json at the root of the package is returned. When passing a relative specifier or an absolute specifier, the closest parent package.json is returned.
- base <string> | <URL> The absolute location (file: URL string or FS path) of the containing module. For CJS, use __filename (not __dirname!); for ESM, use import.meta.url. You do not need to pass it if specifier is an absolute specifier.
- Returns: <string> | <undefined> A path if the package.json is found. When specifier is a package, the package's root package.json; when a relative or unresolved specifier, the closest package.json to the specifier.
Caveat: Do not use this to try to determine module format. There are many things affecting that determination; the type field of package.json is the least definitive (e.g. a file extension supersedes it, and a loader hook supersedes that).

Caveat: This currently leverages only the built-in default resolver; if resolve customization hooks are registered, they will not affect the resolution. This may change in the future.
```text
/path/to/project
  ├ packages/
    ├ bar/
      ├ bar.js
      └ package.json // name = '@foo/bar'
    └ qux/
      ├ node_modules/
        └ some-package/
          └ package.json // name = 'some-package'
      ├ qux.js
      └ package.json // name = '@foo/qux'
  ├ main.js
  └ package.json // name = '@foo'
```

```js
// /path/to/project/packages/bar/bar.js
import { findPackageJSON } from 'node:module';

findPackageJSON('..', import.meta.url);
// '/path/to/project/package.json'
// Same result when passing an absolute specifier instead:
findPackageJSON(new URL('../', import.meta.url));
findPackageJSON(import.meta.resolve('../'));

findPackageJSON('some-package', import.meta.url);
// '/path/to/project/packages/bar/node_modules/some-package/package.json'
// When passing an absolute specifier, you might get a different result if the
// resolved module is inside a subfolder that has nested `package.json`.
findPackageJSON(import.meta.resolve('some-package'));
// '/path/to/project/packages/bar/node_modules/some-package/some-subfolder/package.json'

findPackageJSON('@foo/qux', import.meta.url);
// '/path/to/project/packages/qux/package.json'
```

```js
// /path/to/project/packages/bar/bar.js
const { findPackageJSON } = require('node:module');
const { pathToFileURL } = require('node:url');
const path = require('node:path');

findPackageJSON('..', __filename);
// '/path/to/project/package.json'
// Same result when passing an absolute specifier instead:
findPackageJSON(pathToFileURL(path.join(__dirname, '..')));

findPackageJSON('some-package', __filename);
// '/path/to/project/packages/bar/node_modules/some-package/package.json'
// When passing an absolute specifier, you might get a different result if the
// resolved module is inside a subfolder that has nested `package.json`.
findPackageJSON(pathToFileURL(require.resolve('some-package')));
// '/path/to/project/packages/bar/node_modules/some-package/some-subfolder/package.json'

findPackageJSON('@foo/qux', __filename);
// '/path/to/project/packages/qux/package.json'
```
module.isBuiltin(moduleName)#
- moduleName <string> name of the module
- Returns: <boolean> returns true if the module is builtin else returns false

```js
import { isBuiltin } from 'node:module';

isBuiltin('node:fs'); // true
isBuiltin('fs'); // true
isBuiltin('wss'); // false
```

module.register(specifier[, parentURL][, options])#
History
| Version | Changes |
|---|---|
| v23.6.1, v22.13.1, v20.18.2 | Using this feature with the permission model enabled requires passing `--allow-worker`. |
| v20.8.0, v18.19.0 | Add support for WHATWG URL instances. |
| v20.6.0, v18.19.0 | Added in: v20.6.0, v18.19.0 |
- specifier <string> | <URL> Customization hooks to be registered; this should be the same string that would be passed to import(), except that if it is relative, it is resolved relative to parentURL.
- parentURL <string> | <URL> If you want to resolve specifier relative to a base URL, such as import.meta.url, you can pass that URL here. Default: 'data:'
- options <Object>
  - parentURL <string> | <URL> If you want to resolve specifier relative to a base URL, such as import.meta.url, you can pass that URL here. This property is ignored if the parentURL is supplied as the second argument. Default: 'data:'
  - data <any> Any arbitrary, cloneable JavaScript value to pass into the initialize hook.
  - transferList <Object[]> transferable objects to be passed into the initialize hook.

Register a module that exports hooks that customize Node.js module resolution and loading behavior. See Customization hooks.

This feature requires `--allow-worker` if used with the Permission Model.
module.registerHooks(options)#
History
| Version | Changes |
|---|---|
| v25.4.0 | Synchronous and in-thread hooks are now a release candidate. |
| v23.5.0, v22.15.0 | Added in: v23.5.0, v22.15.0 |
- options <Object>
  - load <Function> | <undefined> See load hook. Default: undefined.
  - resolve <Function> | <undefined> See resolve hook. Default: undefined.

Register hooks that customize Node.js module resolution and loading behavior. See Customization hooks.
module.stripTypeScriptTypes(code[, options])#
- code <string> The code to strip type annotations from.
- options <Object>
  - mode <string> Default: 'strip'. Possible values are:
    - 'strip' Only strip type annotations without performing the transformation of TypeScript features.
    - 'transform' Strip type annotations and transform TypeScript features to JavaScript.
  - sourceMap <boolean> Default: false. Only when mode is 'transform'; if true, a source map will be generated for the transformed code.
  - sourceUrl <string> Specifies the source url used in the source map.
- Returns: <string> The code with type annotations stripped.

`module.stripTypeScriptTypes()` removes type annotations from TypeScript code. It can be used to strip type annotations from TypeScript code before running it with `vm.runInContext()` or `vm.compileFunction()`. By default, it will throw an error if the code contains TypeScript features that require transformation such as Enums; see type-stripping for more information. When mode is 'transform', it also transforms TypeScript features to JavaScript; see transform TypeScript features for more information. When mode is 'strip', source maps are not generated, because locations are preserved. If sourceMap is provided when mode is 'strip', an error will be thrown.
WARNING: The output of this function should not be considered stable across Node.js versions,due to changes in the TypeScript parser.
```js
import { stripTypeScriptTypes } from 'node:module';
const code = 'const a: number = 1;';
const strippedCode = stripTypeScriptTypes(code);
console.log(strippedCode);
// Prints: const a = 1;
```

```js
const { stripTypeScriptTypes } = require('node:module');
const code = 'const a: number = 1;';
const strippedCode = stripTypeScriptTypes(code);
console.log(strippedCode);
// Prints: const a = 1;
```

If sourceUrl is provided, it will be appended as a comment at the end of the output:

```js
import { stripTypeScriptTypes } from 'node:module';
const code = 'const a: number = 1;';
const strippedCode = stripTypeScriptTypes(code, { mode: 'strip', sourceUrl: 'source.ts' });
console.log(strippedCode);
// Prints: const a = 1\n\n//# sourceURL=source.ts;
```

```js
const { stripTypeScriptTypes } = require('node:module');
const code = 'const a: number = 1;';
const strippedCode = stripTypeScriptTypes(code, { mode: 'strip', sourceUrl: 'source.ts' });
console.log(strippedCode);
// Prints: const a = 1\n\n//# sourceURL=source.ts;
```

When mode is 'transform', the code is transformed to JavaScript:

```js
import { stripTypeScriptTypes } from 'node:module';
const code = `
  namespace MathUtil {
    export const add = (a: number, b: number) => a + b;
  }`;
const strippedCode = stripTypeScriptTypes(code, { mode: 'transform', sourceMap: true });
console.log(strippedCode);
// Prints:
// var MathUtil;
// (function(MathUtil) {
//     MathUtil.add = (a, b)=>a + b;
// })(MathUtil || (MathUtil = {}));
// # sourceMappingURL=data:application/json;base64, ...
```

```js
const { stripTypeScriptTypes } = require('node:module');
const code = `
  namespace MathUtil {
    export const add = (a: number, b: number) => a + b;
  }`;
const strippedCode = stripTypeScriptTypes(code, { mode: 'transform', sourceMap: true });
console.log(strippedCode);
// Prints:
// var MathUtil;
// (function(MathUtil) {
//     MathUtil.add = (a, b)=>a + b;
// })(MathUtil || (MathUtil = {}));
// # sourceMappingURL=data:application/json;base64, ...
```
module.syncBuiltinESMExports()#
The `module.syncBuiltinESMExports()` method updates all the live bindings for builtin ES Modules to match the properties of the CommonJS exports. It does not add or remove exported names from the ES Modules.

```js
const fs = require('node:fs');
const assert = require('node:assert');
const { syncBuiltinESMExports } = require('node:module');

fs.readFile = newAPI;

delete fs.readFileSync;

function newAPI() {
  // ...
}

fs.newAPI = newAPI;

syncBuiltinESMExports();

import('node:fs').then((esmFS) => {
  // It syncs the existing readFile property with the new value
  assert.strictEqual(esmFS.readFile, newAPI);
  // readFileSync has been deleted from the required fs
  assert.strictEqual('readFileSync' in fs, false);
  // syncBuiltinESMExports() does not remove readFileSync from esmFS
  assert.strictEqual('readFileSync' in esmFS, true);
  // syncBuiltinESMExports() does not add names
  assert.strictEqual(esmFS.newAPI, undefined);
});
```

Module compile cache#
History
| Version | Changes |
|---|---|
| v22.8.0 | Add initial JavaScript APIs for runtime access. |
| v22.1.0 | Added in: v22.1.0 |
The module compile cache can be enabled either using the `module.enableCompileCache()` method or the `NODE_COMPILE_CACHE=dir` environment variable. After it is enabled, whenever Node.js compiles a CommonJS module, an ECMAScript module, or a TypeScript module, it will use the on-disk V8 code cache persisted in the specified directory to speed up the compilation. This may slow down the first load of a module graph, but subsequent loads of the same module graph may get a significant speedup if the contents of the modules do not change.
To clean up the generated compile cache on disk, simply remove the cache directory. The cache directory will be recreated the next time the same directory is used for compile cache storage. To avoid filling up the disk with stale cache, it is recommended to use a directory under `os.tmpdir()`. If the compile cache is enabled by a call to `module.enableCompileCache()` without specifying the directory, Node.js will use the `NODE_COMPILE_CACHE=dir` environment variable if it's set, or defaults to `path.join(os.tmpdir(), 'node-compile-cache')` otherwise. To locate the compile cache directory used by a running Node.js instance, use `module.getCompileCacheDir()`.
The enabled module compile cache can be disabled by the `NODE_DISABLE_COMPILE_CACHE=1` environment variable. This can be useful when the compile cache leads to unexpected or undesired behaviors (e.g. less precise test coverage).
At the moment, when the compile cache is enabled and a module is loaded afresh, the code cache is generated from the compiled code immediately, but will only be written to disk when the Node.js instance is about to exit. This is subject to change. The `module.flushCompileCache()` method can be used to ensure the accumulated code cache is flushed to disk in case the application wants to spawn other Node.js instances and let them share the cache long before the parent exits.
The compile cache layout on disk is an implementation detail and should not be relied upon. The compile cache generated is typically only reusable in the same version of Node.js, and should not be assumed to be compatible across different versions of Node.js.
Portability of the compile cache#
By default, caches are invalidated when the absolute paths of the modules being cached are changed. To keep the cache working after moving the project directory, enable the portable compile cache. This allows previously compiled modules to be reused across different directory locations as long as the layout relative to the cache directory remains the same. This is done on a best-effort basis. If Node.js cannot compute the location of a module relative to the cache directory, the module will not be cached.
There are two ways to enable the portable mode:
1. Using the `portable` option in `module.enableCompileCache()`:

   ```js
   // Non-portable cache (default): cache breaks if project is moved
   module.enableCompileCache({ directory: '/path/to/cache/storage/dir' });

   // Portable cache: cache works after the project is moved
   module.enableCompileCache({ directory: '/path/to/cache/storage/dir', portable: true });
   ```

2. Setting the environment variable `NODE_COMPILE_CACHE_PORTABLE=1`.
Limitations of the compile cache#
Currently when using the compile cache withV8 JavaScript code coverage, thecoverage being collected by V8 may be less precise in functions that aredeserialized from the code cache. It's recommended to turn this off whenrunning tests to generate precise coverage.
Compilation cache generated by one version of Node.js cannot be reused by a different version of Node.js. Cache generated by different versions of Node.js will be stored separately if the same base directory is used to persist the cache, so they can co-exist.
module.constants.compileCacheStatus#
History
| Version | Changes |
|---|---|
| v25.4.0 | This feature is no longer experimental. |
| v22.8.0 | Added in: v22.8.0 |
The following constants are returned as the `status` field in the object returned by `module.enableCompileCache()` to indicate the result of the attempt to enable the module compile cache.
| Constant | Description |
|---|---|
| `ENABLED` | Node.js has enabled the compile cache successfully. The directory used to store the compile cache will be returned in the `directory` field in the returned object. |
| `ALREADY_ENABLED` | The compile cache has already been enabled before, either by a previous call to `module.enableCompileCache()`, or by the `NODE_COMPILE_CACHE=dir` environment variable. The directory used to store the compile cache will be returned in the `directory` field in the returned object. |
| `FAILED` | Node.js fails to enable the compile cache. This can be caused by the lack of permission to use the specified directory, or various kinds of file system errors. The detail of the failure will be returned in the `message` field in the returned object. |
| `DISABLED` | Node.js cannot enable the compile cache because the environment variable `NODE_DISABLE_COMPILE_CACHE=1` has been set. |
module.enableCompileCache([options])#
History
| Version | Changes |
|---|---|
| v25.4.0 | This feature is no longer experimental. |
| v25.0.0 | Add |
| v25.0.0 | Rename the unreleased |
| v22.8.0 | Added in: v22.8.0 |
- `options` <string> | <Object> Optional. If a string is passed, it is considered to be `options.directory`.
  - `directory` <string> Optional. Directory to store the compile cache. If not specified, the directory specified by the `NODE_COMPILE_CACHE=dir` environment variable will be used if it's set, or `path.join(os.tmpdir(), 'node-compile-cache')` otherwise.
  - `portable` <boolean> Optional. If `true`, enables the portable compile cache so that the cache can be reused even if the project directory is moved. This is a best-effort feature. If not specified, it will depend on whether the environment variable `NODE_COMPILE_CACHE_PORTABLE=1` is set.
- Returns: <Object>
  - `status` <integer> One of the `module.constants.compileCacheStatus` values
  - `message` <string> | <undefined> If Node.js cannot enable the compile cache, this contains the error message. Only set if `status` is `module.constants.compileCacheStatus.FAILED`.
  - `directory` <string> | <undefined> If the compile cache is enabled, this contains the directory where the compile cache is stored. Only set if `status` is `module.constants.compileCacheStatus.ENABLED` or `module.constants.compileCacheStatus.ALREADY_ENABLED`.
Enable the module compile cache in the current Node.js instance.
For general use cases, it's recommended to call `module.enableCompileCache()` without specifying `options.directory`, so that the directory can be overridden by the `NODE_COMPILE_CACHE` environment variable when necessary.
Since the compile cache is supposed to be an optimization that is not mission critical, this method is designed to not throw any exception when the compile cache cannot be enabled. Instead, it will return an object containing an error message in the `message` field to aid debugging. If the compile cache is enabled successfully, the `directory` field in the returned object contains the path to the directory where the compile cache is stored. The `status` field in the returned object would be one of the `module.constants.compileCacheStatus` values to indicate the result of the attempt to enable the module compile cache.
This method only affects the current Node.js instance. To enable it in child worker threads, either call this method in child worker threads too, or set the `process.env.NODE_COMPILE_CACHE` value to the compile cache directory so the behavior can be inherited into the child workers. The directory can be obtained either from the `directory` field returned by this method, or with `module.getCompileCacheDir()`.
module.flushCompileCache()#
History
| Version | Changes |
|---|---|
| v25.4.0 | This feature is no longer experimental. |
| v23.0.0, v22.10.0 | Added in: v23.0.0, v22.10.0 |
Flush the module compile cache accumulated from modules already loaded in the current Node.js instance to disk. This returns after all the flushing file system operations come to an end, whether or not they succeed. If there are any errors, this will fail silently, since compile cache misses should not interfere with the actual operation of the application.
module.getCompileCacheDir()#
History
| Version | Changes |
|---|---|
| v25.4.0 | This feature is no longer experimental. |
| v22.8.0 | Added in: v22.8.0 |
- Returns: <string> | <undefined> Path to the module compile cache directory if it is enabled, or `undefined` otherwise.
Customization Hooks#
History
| Version | Changes |
|---|---|
| v25.4.0 | Synchronous and in-thread hooks are now release candidate. |
| v23.5.0, v22.15.0 | Add support for synchronous and in-thread hooks. |
| v20.6.0, v18.19.0 | Added |
| v18.6.0, v16.17.0 | Add support for chaining loaders. |
| v16.12.0 | Removed |
| v8.8.0 | Added in: v8.8.0 |
Node.js currently supports two types of module customization hooks:
- `module.registerHooks(options)`: takes synchronous hook functions that are run directly on the thread where the modules are loaded.
- `module.register(specifier[, parentURL][, options])`: takes a specifier of a module that exports asynchronous hook functions. The functions are run on a separate loader thread.
The asynchronous hooks incur extra overhead from inter-thread communication, and have several caveats, especially when customizing CommonJS modules in the module graph. In most cases, it's recommended to use synchronous hooks via `module.registerHooks()` for simplicity.
Synchronous customization hooks#
Registration of synchronous customization hooks#
To register synchronous customization hooks, use `module.registerHooks()`, which takes synchronous hook functions directly in-line.
```mjs
// register-hooks.js
import { registerHooks } from 'node:module';
registerHooks({
  resolve(specifier, context, nextResolve) { /* implementation */ },
  load(url, context, nextLoad) { /* implementation */ },
});
```

```cjs
// register-hooks.js
const { registerHooks } = require('node:module');
registerHooks({
  resolve(specifier, context, nextResolve) { /* implementation */ },
  load(url, context, nextLoad) { /* implementation */ },
});
```
Registering hooks before application code runs with flags#
The hooks can be registered before the application code is run by using the `--import` or `--require` flag:

```bash
node --import ./register-hooks.js ./my-app.js
node --require ./register-hooks.js ./my-app.js
```

The specifier passed to `--import` or `--require` can also come from a package:

```bash
node --import some-package/register ./my-app.js
node --require some-package/register ./my-app.js
```

Where `some-package` has an `"exports"` field defining the `/register` export to map to a file that calls `registerHooks()`, like the `register-hooks.js` examples above.
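A minimal sketch of what such a package's `package.json` could look like (the package name and file name here are illustrative, matching the example above):

```json
{
  "name": "some-package",
  "exports": {
    "./register": "./register-hooks.js"
  }
}
```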
Using `--import` or `--require` ensures that the hooks are registered before any application code is loaded, including the entry point of the application, and by default for any worker threads as well.
Registering hooks before application code runs programmatically#
Alternatively, `registerHooks()` can be called from the entry point.
If the entry point needs to load other modules and the loading process needs to be customized, load them using either `require()` or dynamic `import()` after the hooks are registered. Do not use static `import` statements to load modules that need to be customized in the same module that registers the hooks, because static `import` statements are evaluated before any code in the importer module is run, including the call to `registerHooks()`, regardless of where the static `import` statements appear in the importer module.
```mjs
import { registerHooks } from 'node:module';

registerHooks({ /* implementation of synchronous hooks */ });

// If loaded using static import, the hooks would not be applied when loading
// my-app.mjs, because statically imported modules are all executed before its
// importer regardless of where the static import appears.
// import './my-app.mjs';

// my-app.mjs must be loaded dynamically to ensure the hooks are applied.
await import('./my-app.mjs');
```

```cjs
const { registerHooks } = require('node:module');

registerHooks({ /* implementation of synchronous hooks */ });

import('./my-app.mjs');
// Or, if my-app.mjs does not have top-level await or it's a CommonJS module,
// require() can also be used:
// require('./my-app.mjs');
```
Registering hooks before application code runs with a data: URL#
Alternatively, inline JavaScript code can be embedded in `data:` URLs to register the hooks before the application code runs. For example:

```bash
node --import 'data:text/javascript,import {registerHooks} from "node:module"; registerHooks(/* hooks code */);' ./my-app.js
```

Convention of hooks and chaining#
Hooks are part of a chain, even if that chain consists of only onecustom (user-provided) hook and the default hook, which is always present.
Hook functions nest: each one must always return a plain object, and chaining happens as a result of each function calling `next<hookName>()`, which is a reference to the subsequent loader's hook (in LIFO order).
It's possible to callregisterHooks() more than once:
```mjs
// entrypoint.mjs
import { registerHooks } from 'node:module';

const hook1 = { /* implementation of hooks */ };
const hook2 = { /* implementation of hooks */ };

// hook2 runs before hook1.
registerHooks(hook1);
registerHooks(hook2);
```

```cjs
// entrypoint.cjs
const { registerHooks } = require('node:module');

const hook1 = { /* implementation of hooks */ };
const hook2 = { /* implementation of hooks */ };

// hook2 runs before hook1.
registerHooks(hook1);
registerHooks(hook2);
```
In this example, the registered hooks will form chains. These chains run last-in, first-out (LIFO). If both `hook1` and `hook2` define a `resolve` hook, they will be called like so (note the right-to-left order, starting with `hook2.resolve`, then `hook1.resolve`, then the Node.js default):

Node.js default `resolve` ← `hook1.resolve` ← `hook2.resolve`
The same applies to all the other hooks.
A hook that returns a value lacking a required property triggers an exception. A hook that returns without calling `next<hookName>()` and without returning `shortCircuit: true` also triggers an exception. These errors are to help prevent unintentional breaks in the chain. Return `shortCircuit: true` from a hook to signal that the chain is intentionally ending at your hook.
If a hook should be applied when loading other hook modules, the other hookmodules should be loaded after the hook is registered.
Hook functions accepted by `module.registerHooks()`#
The `module.registerHooks()` method accepts the following synchronous hook functions.
```js
function resolve(specifier, context, nextResolve) {
  // Take an `import` or `require` specifier and resolve it to a URL.
}

function load(url, context, nextLoad) {
  // Take a resolved URL and return the source code to be evaluated.
}
```

Synchronous hooks are run in the same thread and the same realm where the modules are loaded, so the code in the hook functions can pass values to the modules being referenced directly via global variables or other shared state.
Unlike the asynchronous hooks, the synchronous hooks are not inherited into child worker threads by default, though if the hooks are registered using a file preloaded by `--import` or `--require`, child worker threads can inherit the preloaded scripts via `process.execArgv` inheritance. See the documentation of `Worker` for details.
Synchronous resolve(specifier, context, nextResolve)#
History
| Version | Changes |
|---|---|
| v23.5.0, v22.15.0 | Add support for synchronous and in-thread hooks. |
- `specifier` <string>
- `context` <Object>
  - `conditions` <string[]> Export conditions of the relevant `package.json`
  - `importAttributes` <Object> An object whose key-value pairs represent the attributes for the module to import
  - `parentURL` <string> | <undefined> The module importing this one, or undefined if this is the Node.js entry point
- `nextResolve` <Function> The subsequent `resolve` hook in the chain, or the Node.js default `resolve` hook after the last user-supplied `resolve` hook
  - `specifier` <string>
  - `context` <Object> | <undefined> When omitted, the defaults are provided. When provided, defaults are merged in with preference to the provided properties.
- Returns: <Object>
  - `format` <string> | <null> | <undefined> A hint to the `load` hook (it might be ignored). It can be a module format (such as `'commonjs'` or `'module'`) or an arbitrary value like `'css'` or `'yaml'`.
  - `importAttributes` <Object> | <undefined> The import attributes to use when caching the module (optional; if excluded the input will be used)
  - `shortCircuit` <undefined> | <boolean> A signal that this hook intends to terminate the chain of `resolve` hooks. Default: `false`
  - `url` <string> The absolute URL to which this input resolves
The `resolve` hook chain is responsible for telling Node.js where to find and how to cache a given `import` statement or expression, or `require` call. It can optionally return a format (such as `'module'`) as a hint to the `load` hook. If a format is specified, the `load` hook is ultimately responsible for providing the final `format` value (and it is free to ignore the hint provided by `resolve`); if `resolve` provides a `format`, a custom `load` hook is required even if only to pass the value to the Node.js default `load` hook.
Import type attributes are part of the cache key for saving loaded modules into the internal module cache. The `resolve` hook is responsible for returning an `importAttributes` object if the module should be cached with different attributes than were present in the source code.
The `conditions` property in `context` is an array of conditions that will be used to match package exports conditions for this resolution request. They can be used for looking up conditional mappings elsewhere or to modify the list when calling the default resolution logic.
The current package exports conditions are always in the `context.conditions` array passed into the hook. To guarantee default Node.js module specifier resolution behavior when calling `defaultResolve`, the `context.conditions` array passed to it must include all elements of the `context.conditions` array originally passed into the `resolve` hook.
```js
import { registerHooks } from 'node:module';

function resolve(specifier, context, nextResolve) {
  // When calling `defaultResolve`, the arguments can be modified. For example,
  // to change the specifier or to add applicable export conditions.
  if (specifier.includes('foo')) {
    specifier = specifier.replace('foo', 'bar');
    return nextResolve(specifier, {
      ...context,
      conditions: [...context.conditions, 'another-condition'],
    });
  }
  // The hook can also skip default resolution and provide a custom URL.
  if (specifier === 'special-module') {
    return {
      url: 'file:///path/to/special-module.mjs',
      format: 'module',
      shortCircuit: true,  // This is mandatory if nextResolve() is not called.
    };
  }
  // If no customization is needed, defer to the next hook in the chain, which would
  // be the Node.js default resolve if this is the last user-specified loader.
  return nextResolve(specifier);
}

registerHooks({ resolve });
```

Synchronous load(url, context, nextLoad)#
History
| Version | Changes |
|---|---|
| v23.5.0, v22.15.0 | Add support for synchronous and in-thread version. |
- `url` <string> The URL returned by the `resolve` chain
- `context` <Object>
  - `conditions` <string[]> Export conditions of the relevant `package.json`
  - `format` <string> | <null> | <undefined> The format optionally supplied by the `resolve` hook chain. This can be any string value as an input; input values do not need to conform to the list of acceptable return values described below.
  - `importAttributes` <Object>
- `nextLoad` <Function> The subsequent `load` hook in the chain, or the Node.js default `load` hook after the last user-supplied `load` hook
  - `url` <string>
  - `context` <Object> | <undefined> When omitted, defaults are provided. When provided, defaults are merged in with preference to the provided properties. In the default `nextLoad`, if the module pointed to by `url` does not have explicit module type information, `context.format` is mandatory.
- Returns: <Object>
  - `format` <string> One of the acceptable module formats listed below.
  - `shortCircuit` <undefined> | <boolean> A signal that this hook intends to terminate the chain of `load` hooks. Default: `false`
  - `source` <string> | <ArrayBuffer> | <TypedArray> The source for Node.js to evaluate
The `load` hook provides a way to define a custom method for retrieving the source code of a resolved URL. This would allow a loader to potentially avoid reading files from disk. It could also be used to map an unrecognized format to a supported one, for example `yaml` to `module`.
```js
import { registerHooks } from 'node:module';
import { Buffer } from 'node:buffer';

function load(url, context, nextLoad) {
  // The hook can skip default loading and provide a custom source code.
  if (url === 'special-module') {
    return {
      source: 'export const special = 42;',
      format: 'module',
      shortCircuit: true,  // This is mandatory if nextLoad() is not called.
    };
  }
  // It's possible to modify the source code loaded by the next - possibly default -
  // step, for example, replacing 'foo' with 'bar' in the source code of the module.
  const result = nextLoad(url, context);
  const source = typeof result.source === 'string'
    ? result.source
    : Buffer.from(result.source).toString('utf8');
  return {
    ...result,
    source: source.replace(/foo/g, 'bar'),
  };
}

registerHooks({ load });
```

In a more advanced scenario, this can also be used to transform an unsupported source to a supported one (see Examples below).
Accepted final formats returned by load#
The final value offormat must be one of the following:
| `format` | Description | Acceptable types for `source` returned by `load` |
|---|---|---|
| `'addon'` | Load a Node.js addon | <null> |
| `'builtin'` | Load a Node.js builtin module | <null> |
| `'commonjs-typescript'` | Load a Node.js CommonJS module with TypeScript syntax | <string> \| <ArrayBuffer> \| <TypedArray> \| <null> \| <undefined> |
| `'commonjs'` | Load a Node.js CommonJS module | <string> \| <ArrayBuffer> \| <TypedArray> \| <null> \| <undefined> |
| `'json'` | Load a JSON file | <string> \| <ArrayBuffer> \| <TypedArray> |
| `'module-typescript'` | Load an ES module with TypeScript syntax | <string> \| <ArrayBuffer> \| <TypedArray> |
| `'module'` | Load an ES module | <string> \| <ArrayBuffer> \| <TypedArray> |
| `'wasm'` | Load a WebAssembly module | <ArrayBuffer> \| <TypedArray> |
The value of `source` is ignored for format `'builtin'` because currently it is not possible to replace the value of a Node.js builtin (core) module.
These types all correspond to classes defined in ECMAScript.
- The specific <ArrayBuffer> object is a <SharedArrayBuffer>.
- The specific <TypedArray> object is a <Uint8Array>.
If the source value of a text-based format (i.e., `'json'`, `'module'`) is not a string, it is converted to a string using `util.TextDecoder`.
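As an illustration of that conversion (plain UTF-8 decoding, shown here with the standard `TextEncoder`/`TextDecoder` globals rather than the hook machinery itself):

```js
// A load hook may return a Uint8Array as the source for a text-based format
// such as 'module'; Node.js then decodes it to a string, equivalent to:
const bytes = new TextEncoder().encode('export const answer = 42;');
const text = new TextDecoder('utf-8').decode(bytes);
console.log(text);  // 'export const answer = 42;'
```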
Asynchronous customization hooks#
Caveats of asynchronous customization hooks#
The asynchronous customization hooks have many caveats, and it is uncertain if their issues can be resolved. Users are encouraged to use the synchronous customization hooks via `module.registerHooks()` instead to avoid these caveats.
- Asynchronous hooks run on a separate thread, so the hook functions cannot directly mutate the global state of the modules being customized. It's typical to use message channels and atomics to pass data between the two or to affect control flows. See Communication with asynchronous module customization hooks.
- Asynchronous hooks do not affect all `require()` calls in the module graph.
  - Custom `require` functions created using `module.createRequire()` are not affected.
  - If the asynchronous `load` hook does not override the `source` for CommonJS modules that go through it, the child modules loaded by those CommonJS modules via built-in `require()` would not be affected by the asynchronous hooks either.
- There are several caveats that the asynchronous hooks need to handle when customizing CommonJS modules. See asynchronous `resolve` hook and asynchronous `load` hook for details.
- When `require()` calls inside CommonJS modules are customized by asynchronous hooks, Node.js may need to load the source code of the CommonJS module multiple times to maintain compatibility with existing CommonJS monkey-patching. If the module code changes between loads, this may lead to unexpected behaviors.
  - As a side effect, if both asynchronous hooks and synchronous hooks are registered and the asynchronous hooks choose to customize the CommonJS module, the synchronous hooks may be invoked multiple times for the `require()` calls in that CommonJS module.
Registration of asynchronous customization hooks#
Asynchronous customization hooks are registered using `module.register()`, which takes a path or URL to another module that exports the asynchronous hook functions.

Similar to `registerHooks()`, `register()` can be called in a module preloaded by `--import` or `--require`, or called directly within the entry point.
```mjs
// Use module.register() to register asynchronous hooks in a dedicated thread.
import { register } from 'node:module';
register('./hooks.mjs', import.meta.url);

// If my-app.mjs is loaded statically here as `import './my-app.mjs'`, since ESM
// dependencies are evaluated before the module that imports them,
// it's loaded _before_ the hooks are registered above and won't be affected.
// To ensure the hooks are applied, dynamic import() must be used to load ESM
// after the hooks are registered.
import('./my-app.mjs');
```

```cjs
const { register } = require('node:module');
const { pathToFileURL } = require('node:url');
// Use module.register() to register asynchronous hooks in a dedicated thread.
register('./hooks.mjs', pathToFileURL(__filename));
import('./my-app.mjs');
```
In `hooks.mjs`:
```js
// hooks.mjs
export async function resolve(specifier, context, nextResolve) {
  /* implementation */
}

export async function load(url, context, nextLoad) {
  /* implementation */
}
```

Unlike synchronous hooks, the asynchronous hooks do not run for modules loaded in the file that calls `register()`:
```mjs
// register-hooks.js
import { register, createRequire } from 'node:module';
register('./hooks.mjs', import.meta.url);

// Asynchronous hooks do not affect modules loaded via custom require()
// functions created by module.createRequire().
const userRequire = createRequire(import.meta.url);
userRequire('./my-app-2.cjs');  // Hooks won't affect this
```

```cjs
// register-hooks.js
const { register, createRequire } = require('node:module');
const { pathToFileURL } = require('node:url');
register('./hooks.mjs', pathToFileURL(__filename));

// Asynchronous hooks do not affect modules loaded via built-in require()
// in the module calling `register()`...
require('./my-app-2.cjs');  // Hooks won't affect this
// ...or custom require() functions created by module.createRequire().
const userRequire = createRequire(__filename);
userRequire('./my-app-3.cjs');  // Hooks won't affect this
```

Asynchronous hooks can also be registered using a `data:` URL with the `--import` flag:
```bash
node --import 'data:text/javascript,import { register } from "node:module"; import { pathToFileURL } from "node:url"; register("my-instrumentation", pathToFileURL("./"));' ./my-app.js
```

Chaining of asynchronous customization hooks#
Chaining of `register()` works similarly to `registerHooks()`. If synchronous and asynchronous hooks are mixed, the synchronous hooks are always run first before the asynchronous hooks start running; that is, in the last synchronous hook to run, its next hook includes the invocation of the asynchronous hooks.
```mjs
// entrypoint.mjs
import { register } from 'node:module';

register('./foo.mjs', import.meta.url);
register('./bar.mjs', import.meta.url);
await import('./my-app.mjs');
```

```cjs
// entrypoint.cjs
const { register } = require('node:module');
const { pathToFileURL } = require('node:url');

const parentURL = pathToFileURL(__filename);
register('./foo.mjs', parentURL);
register('./bar.mjs', parentURL);
import('./my-app.mjs');
```
If `foo.mjs` and `bar.mjs` define a `resolve` hook, they will be called like so (note the right-to-left order, starting with `./bar.mjs`, then `./foo.mjs`, then the Node.js default):

Node.js default ← `./foo.mjs` ← `./bar.mjs`
When using the asynchronous hooks, the registered hooks also affect subsequent `register` calls, which take care of loading hook modules. In the example above, `bar.mjs` will be resolved and loaded via the hooks registered by `foo.mjs` (because `foo`'s hooks will have already been added to the chain). This allows for things like writing hooks in non-JavaScript languages, so long as earlier registered hooks transpile into JavaScript.
The `register()` method cannot be called from the thread running the hook module that exports the asynchronous hooks, or its dependencies.
Communication with asynchronous module customization hooks#
Asynchronous hooks run on a dedicated thread, separate from the mainthread that runs application code. This means mutating global variables won'taffect the other thread(s), and message channels must be used to communicatebetween the threads.
The `register` method can be used to pass data to an `initialize` hook. The data passed to the hook may include transferable objects like ports.
```mjs
import { register } from 'node:module';
import { MessageChannel } from 'node:worker_threads';

// This example demonstrates how a message channel can be used to
// communicate with the hooks, by sending `port2` to the hooks.
const { port1, port2 } = new MessageChannel();

port1.on('message', (msg) => {
  console.log(msg);
});
port1.unref();

register('./my-hooks.mjs', {
  parentURL: import.meta.url,
  data: { number: 1, port: port2 },
  transferList: [port2],
});
```

```cjs
const { register } = require('node:module');
const { pathToFileURL } = require('node:url');
const { MessageChannel } = require('node:worker_threads');

// This example showcases how a message channel can be used to
// communicate with the hooks, by sending `port2` to the hooks.
const { port1, port2 } = new MessageChannel();

port1.on('message', (msg) => {
  console.log(msg);
});
port1.unref();

register('./my-hooks.mjs', {
  parentURL: pathToFileURL(__filename),
  data: { number: 1, port: port2 },
  transferList: [port2],
});
```
Asynchronous hooks accepted by `module.register()`#
History
| Version | Changes |
|---|---|
| v20.6.0, v18.19.0 | Added |
| v18.6.0, v16.17.0 | Add support for chaining loaders. |
| v16.12.0 | Removed |
| v8.8.0 | Added in: v8.8.0 |
The `register` method can be used to register a module that exports a set of hooks. The hooks are functions that are called by Node.js to customize the module resolution and loading process. The exported functions must have specific names and signatures, and they must be exported as named exports.
```js
export async function initialize({ number, port }) {
  // Receives data from `register`.
}

export async function resolve(specifier, context, nextResolve) {
  // Take an `import` or `require` specifier and resolve it to a URL.
}

export async function load(url, context, nextLoad) {
  // Take a resolved URL and return the source code to be evaluated.
}
```

Asynchronous hooks are run in a separate thread, isolated from the main thread where application code runs. That means it is a different realm. The hooks thread may be terminated by the main thread at any time, so do not depend on asynchronous operations (like `console.log`) to complete. They are inherited into child workers by default.
initialize()#
- `data` <any> The data from `register(loader, import.meta.url, { data })`.
The `initialize` hook is only accepted by `register`. `registerHooks()` does not support nor need it, since initialization done for synchronous hooks can be run directly before the call to `registerHooks()`.
The `initialize` hook provides a way to define a custom function that runs in the hooks thread when the hooks module is initialized. Initialization happens when the hooks module is registered via `register`.
This hook can receive data from a `register` invocation, including ports and other transferable objects. The return value of `initialize` can be a <Promise>, in which case it will be awaited before the main application thread execution resumes.
Module customization code:
```js
// path-to-my-hooks.js
export async function initialize({ number, port }) {
  port.postMessage(`increment: ${number + 1}`);
}
```

Caller code:
```mjs
import assert from 'node:assert';
import { register } from 'node:module';
import { MessageChannel } from 'node:worker_threads';

// This example showcases how a message channel can be used to communicate
// between the main (application) thread and the hooks running on the hooks
// thread, by sending `port2` to the `initialize` hook.
const { port1, port2 } = new MessageChannel();

port1.on('message', (msg) => {
  assert.strictEqual(msg, 'increment: 2');
});
port1.unref();

register('./path-to-my-hooks.js', {
  parentURL: import.meta.url,
  data: { number: 1, port: port2 },
  transferList: [port2],
});
```

```cjs
const assert = require('node:assert');
const { register } = require('node:module');
const { pathToFileURL } = require('node:url');
const { MessageChannel } = require('node:worker_threads');

// This example showcases how a message channel can be used to communicate
// between the main (application) thread and the hooks running on the hooks
// thread, by sending `port2` to the `initialize` hook.
const { port1, port2 } = new MessageChannel();

port1.on('message', (msg) => {
  assert.strictEqual(msg, 'increment: 2');
});
port1.unref();

register('./path-to-my-hooks.js', {
  parentURL: pathToFileURL(__filename),
  data: { number: 1, port: port2 },
  transferList: [port2],
});
```
Asynchronous resolve(specifier, context, nextResolve)#
History
| Version | Changes |
|---|---|
| v21.0.0, v20.10.0, v18.19.0 | The property |
| v18.6.0, v16.17.0 | Add support for chaining resolve hooks. Each hook must either call |
| v17.1.0, v16.14.0 | Add support for import assertions. |
- `specifier` <string>
- `context` <Object>
  - `conditions` <string[]> Export conditions of the relevant `package.json`
  - `importAttributes` <Object> An object whose key-value pairs represent the attributes for the module to import
  - `parentURL` <string> | <undefined> The module importing this one, or undefined if this is the Node.js entry point
- `nextResolve` <Function> The subsequent `resolve` hook in the chain, or the Node.js default `resolve` hook after the last user-supplied `resolve` hook
  - `specifier` <string>
  - `context` <Object> | <undefined> When omitted, the defaults are provided. When provided, defaults are merged in with preference to the provided properties.
- Returns: <Object> | <Promise> The asynchronous version takes either an object containing the following properties, or a `Promise` that will resolve to such an object.
  - `format` <string> | <null> | <undefined> A hint to the `load` hook (it might be ignored). It can be a module format (such as `'commonjs'` or `'module'`) or an arbitrary value like `'css'` or `'yaml'`.
  - `importAttributes` <Object> | <undefined> The import attributes to use when caching the module (optional; if excluded the input will be used)
  - `shortCircuit` <undefined> | <boolean> A signal that this hook intends to terminate the chain of `resolve` hooks. Default: `false`
  - `url` <string> The absolute URL to which this input resolves
The asynchronous version works similarly to the synchronous version, only that the `nextResolve` function returns a `Promise`, and the `resolve` hook itself can return a `Promise`.
Warning: In the case of the asynchronous version, despite support for returning promises and async functions, calls to `resolve` may still block the main thread, which can impact performance.
Warning: The `resolve` hook invoked for `require()` calls inside CommonJS modules customized by asynchronous hooks does not receive the original specifier passed to `require()`. Instead, it receives a URL already fully resolved using the default CommonJS resolution.
Warning: In CommonJS modules that are customized by the asynchronous customization hooks, `require.resolve()` and `require()` will use the `"import"` export condition instead of `"require"`, which may cause unexpected behaviors when loading dual packages.
```js
export async function resolve(specifier, context, nextResolve) {
  // When calling `defaultResolve`, the arguments can be modified. For example,
  // to change the specifier or add conditions.
  if (specifier.includes('foo')) {
    specifier = specifier.replace('foo', 'bar');
    return nextResolve(specifier, {
      ...context,
      conditions: [...context.conditions, 'another-condition'],
    });
  }

  // The hook can also skip default resolution and provide a custom URL.
  if (specifier === 'special-module') {
    return {
      url: 'file:///path/to/special-module.mjs',
      format: 'module',
      shortCircuit: true, // This is mandatory if not calling nextResolve().
    };
  }

  // If no customization is needed, defer to the next hook in the chain, which
  // would be the Node.js default resolve if this is the last user-specified loader.
  return nextResolve(specifier);
}
```

Asynchronous load(url, context, nextLoad)#
History
| Version | Changes |
|---|---|
| v22.6.0 | Add support for |
| v20.6.0 | Add support for |
| v18.6.0, v16.17.0 | Add support for chaining load hooks. Each hook must either call |
- `url` <string> The URL returned by the `resolve` chain
- `context` <Object>
  - `conditions` <string[]> Export conditions of the relevant `package.json`
  - `format` <string> | <null> | <undefined> The format optionally supplied by the `resolve` hook chain. This can be any string value as an input; input values do not need to conform to the list of acceptable return values described below.
  - `importAttributes` <Object>
- `nextLoad` <Function> The subsequent `load` hook in the chain, or the Node.js default `load` hook after the last user-supplied `load` hook
  - `url` <string>
  - `context` <Object> | <undefined> When omitted, defaults are provided. When provided, defaults are merged in with preference to the provided properties. In the default `nextLoad`, if the module pointed to by `url` does not have explicit module type information, `context.format` is mandatory.
- Returns: <Object> | <Promise> The asynchronous version takes either an object containing the following properties, or a `Promise` that will resolve to such an object.
  - `format` <string>
  - `shortCircuit` <undefined> | <boolean> A signal that this hook intends to terminate the chain of `load` hooks. Default: `false`
  - `source` <string> | <ArrayBuffer> | <TypedArray> The source for Node.js to evaluate
Warning: The asynchronous `load` hook and namespaced exports from CommonJS modules are incompatible. Attempting to use them together will result in an empty object from the import. This may be addressed in the future. This does not apply to the synchronous `load` hook, in which case exports can be used as usual.
The asynchronous version works similarly to the synchronous version, though when using the asynchronous `load` hook, omitting vs providing a `source` for `'commonjs'` has very different effects:

- When a `source` is provided, all `require` calls from this module will be processed by the ESM loader with registered `resolve` and `load` hooks; all `require.resolve` calls from this module will be processed by the ESM loader with registered `resolve` hooks; only a subset of the CommonJS API will be available (e.g. no `require.extensions`, no `require.cache`, no `require.resolve.paths`) and monkey-patching on the CommonJS module loader will not apply.
- If `source` is undefined or `null`, it will be handled by the CommonJS module loader and `require`/`require.resolve` calls will not go through the registered hooks. This behavior for nullish `source` is temporary; in the future, nullish `source` will not be supported.

These caveats do not apply to the synchronous `load` hook, in which case the complete set of CommonJS APIs is available to the customized CommonJS modules, and `require`/`require.resolve` always go through the registered hooks.
The Node.js internal asynchronous `load` implementation, which is the value of `next` for the last hook in the `load` chain, returns `null` for `source` when `format` is `'commonjs'` for backward compatibility. Here is an example hook that would opt in to using the non-default behavior:
```js
// Asynchronous version accepted by module.register(). This fix is not needed
// for the synchronous version accepted by module.registerHooks().
import { readFile } from 'node:fs/promises';

export async function load(url, context, nextLoad) {
  const result = await nextLoad(url, context);
  if (result.format === 'commonjs') {
    result.source ??= await readFile(new URL(result.responseURL ?? url));
  }
  return result;
}
```

This doesn't apply to the synchronous `load` hook either, in which case the `source` returned contains source code loaded by the next hook, regardless of module format.
Examples#
The various module customization hooks can be used together to accomplish wide-ranging customizations of the Node.js code loading and evaluation behaviors.
Import from HTTPS#
The module below registers hooks that enable rudimentary support for `https://` specifiers. While this may seem like a significant improvement to Node.js core functionality, there are substantial downsides to actually using these hooks: performance is much slower than loading files from disk, there is no caching, and there is no security.
```js
// https-hooks.mjs
import { get } from 'node:https';

export function load(url, context, nextLoad) {
  // For JavaScript to be loaded over the network, we need to fetch and
  // return it.
  if (url.startsWith('https://')) {
    return new Promise((resolve, reject) => {
      get(url, (res) => {
        let data = '';
        res.setEncoding('utf8');
        res.on('data', (chunk) => data += chunk);
        res.on('end', () => resolve({
          // This example assumes all network-provided JavaScript is ES module
          // code.
          format: 'module',
          shortCircuit: true,
          source: data,
        }));
      }).on('error', (err) => reject(err));
    });
  }

  // Let Node.js handle all other URLs.
  return nextLoad(url);
}
```

```js
// main.mjs
import { VERSION } from 'https://coffeescript.org/browser-compiler-modern/coffeescript.js';

console.log(VERSION);
```

With the preceding hooks module, running:

```bash
node --import 'data:text/javascript,import { register } from "node:module"; import { pathToFileURL } from "node:url"; register(pathToFileURL("./https-hooks.mjs"));' ./main.mjs
```

prints the current version of CoffeeScript per the module at the URL in `main.mjs`.
Transpilation#
Sources that are in formats Node.js doesn't understand can be converted into JavaScript using the `load` hook.
This is less performant than transpiling source files before running Node.js;transpiler hooks should only be used for development and testing purposes.
Asynchronous version#
```js
// coffeescript-hooks.mjs
import { readFile } from 'node:fs/promises';
import { findPackageJSON } from 'node:module';
import coffeescript from 'coffeescript';

const extensionsRegex = /\.(coffee|litcoffee|coffee\.md)$/;

export async function load(url, context, nextLoad) {
  if (extensionsRegex.test(url)) {
    // CoffeeScript files can be either CommonJS or ES modules. Use a custom
    // format to tell Node.js not to detect its module type.
    const { source: rawSource } = await nextLoad(url, { ...context, format: 'coffee' });
    // This hook converts CoffeeScript source code into JavaScript source code
    // for all imported CoffeeScript files.
    const transformedSource = coffeescript.compile(rawSource.toString(), url);

    // To determine how Node.js would interpret the transpilation result,
    // search up the file system for the nearest parent package.json file
    // and read its "type" field.
    return {
      format: await getPackageType(url),
      shortCircuit: true,
      source: transformedSource,
    };
  }

  // Let Node.js handle all other URLs.
  return nextLoad(url, context);
}

async function getPackageType(url) {
  // Find the nearest parent package.json and read its "type" field.
  const pJson = findPackageJSON(url);

  return readFile(pJson, 'utf8')
    .then(JSON.parse)
    .then((json) => json?.type)
    .catch(() => undefined);
}
```

Synchronous version#
```js
// coffeescript-sync-hooks.mjs
import { readFileSync } from 'node:fs';
import { registerHooks, findPackageJSON } from 'node:module';
import coffeescript from 'coffeescript';

const extensionsRegex = /\.(coffee|litcoffee|coffee\.md)$/;

function load(url, context, nextLoad) {
  if (extensionsRegex.test(url)) {
    const { source: rawSource } = nextLoad(url, { ...context, format: 'coffee' });
    const transformedSource = coffeescript.compile(rawSource.toString(), url);
    return {
      format: getPackageType(url),
      shortCircuit: true,
      source: transformedSource,
    };
  }
  return nextLoad(url, context);
}

function getPackageType(url) {
  const pJson = findPackageJSON(url);
  if (!pJson) {
    return undefined;
  }
  try {
    const file = readFileSync(pJson, 'utf-8');
    return JSON.parse(file)?.type;
  } catch {
    return undefined;
  }
}

registerHooks({ load });
```

Running hooks#
```coffee
# main.coffee
import { scream } from './scream.coffee'
console.log scream 'hello, world'

import { version } from 'node:process'
console.log "Brought to you by Node.js version #{version}"
```

```coffee
# scream.coffee
export scream = (str) -> str.toUpperCase()
```

For the sake of running the example, add a `package.json` file containing the module type of the CoffeeScript files.
```json
{
  "type": "module"
}
```

This is only for running the example. In real-world loaders, `getPackageType()` must be able to return a format known to Node.js even in the absence of an explicit type in a `package.json`, or otherwise the `nextLoad` call would throw `ERR_UNKNOWN_FILE_EXTENSION` (if undefined) or `ERR_UNKNOWN_MODULE_FORMAT` (if it's not a known format listed in the `load` hook documentation).
With the preceding hooks modules, running:

```bash
node --import 'data:text/javascript,import { register } from "node:module"; import { pathToFileURL } from "node:url"; register(pathToFileURL("./coffeescript-hooks.mjs"));' ./main.coffee
```

or:

```bash
node --import ./coffeescript-sync-hooks.mjs ./main.coffee
```

causes `main.coffee` to be turned into JavaScript after its source code is loaded from disk but before Node.js executes it; and so on for any `.coffee`, `.litcoffee` or `.coffee.md` files referenced via `import` statements of any loaded file.
Import maps#
The previous two examples defined `load` hooks. This is an example of a `resolve` hook. This hooks module reads an `import-map.json` file that defines which specifiers to override to other URLs (this is a very simplistic implementation of a small subset of the "import maps" specification).
Asynchronous version#
```js
// import-map-hooks.js
import fs from 'node:fs/promises';

const { imports } = JSON.parse(await fs.readFile('import-map.json'));

export async function resolve(specifier, context, nextResolve) {
  if (Object.hasOwn(imports, specifier)) {
    return nextResolve(imports[specifier], context);
  }

  return nextResolve(specifier, context);
}
```

Synchronous version#
```js
// import-map-sync-hooks.js
import fs from 'node:fs';
import module from 'node:module';

const { imports } = JSON.parse(fs.readFileSync('import-map.json', 'utf-8'));

function resolve(specifier, context, nextResolve) {
  if (Object.hasOwn(imports, specifier)) {
    return nextResolve(imports[specifier], context);
  }

  return nextResolve(specifier, context);
}

module.registerHooks({ resolve });
```

Using the hooks#
With these files:
```js
// main.js
import 'a-module';
```

```json
// import-map.json
{
  "imports": {
    "a-module": "./some-module.js"
  }
}
```

```js
// some-module.js
console.log('some module!');
```

Running `node --import 'data:text/javascript,import { register } from "node:module"; import { pathToFileURL } from "node:url"; register(pathToFileURL("./import-map-hooks.js"));' main.js` or `node --import ./import-map-sync-hooks.js main.js` should print `some module!`.
Source Map Support#
Node.js supports the TC39 ECMA-426 Source Map format (previously called the Source Map revision 3 format).
The APIs in this section are helpers for interacting with the source map cache. This cache is populated when source map parsing is enabled and source map include directives are found in a module's footer.
To enable source map parsing, Node.js must be run with the flag `--enable-source-maps`, or with code coverage enabled by setting `NODE_V8_COVERAGE=dir`, or be enabled programmatically via `module.setSourceMapsSupport()`.
```js
// module.mjs
// In an ECMAScript module
import { findSourceMap, SourceMap } from 'node:module';
```

```js
// module.cjs
// In a CommonJS module
const { findSourceMap, SourceMap } = require('node:module');
```
module.getSourceMapsSupport()#
- Returns:<Object>
This method returns whether the Source Map v3 support for stack traces is enabled.
module.findSourceMap(path)#
- `path` <string>
- Returns: <module.SourceMap> | <undefined> Returns `module.SourceMap` if a source map is found, `undefined` otherwise.
`path` is the resolved path for the file for which a corresponding source map should be fetched.
module.setSourceMapsSupport(enabled[, options])#
This function enables or disables the Source Map v3 support for stack traces.

It provides the same features as launching the Node.js process with the command-line option `--enable-source-maps`, with additional options to alter the support for files in `node_modules` or generated code.

Only source maps in JavaScript files that are loaded after source maps have been enabled will be parsed and loaded. Preferably, use the command-line option `--enable-source-maps` to avoid losing track of source maps of modules loaded before this API call.
Class:module.SourceMap#
new SourceMap(payload[, { lineLengths }])#
History
| Version | Changes |
|---|---|
| v20.5.0 | Add support for |
- `payload` <Object>
- `lineLengths` <number[]>
Creates a new `sourceMap` instance.
`payload` is an object with keys matching the Source map format:

- `file` <string>
- `version` <number>
- `sources` <string[]>
- `sourcesContent` <string[]>
- `names` <string[]>
- `mappings` <string>
- `sourceRoot` <string>
`lineLengths` is an optional array of the length of each line in the generated code.
sourceMap.findEntry(lineOffset, columnOffset)#
- `lineOffset` <number> The zero-indexed line number offset in the generated source
- `columnOffset` <number> The zero-indexed column number offset in the generated source
- Returns: <Object>
Given a line offset and column offset in the generated sourcefile, returns an object representing the SourceMap range in theoriginal file if found, or an empty object if not.
The object returned contains the following keys:
- `generatedLine` <number> The line offset of the start of the range in the generated source
- `generatedColumn` <number> The column offset of the start of the range in the generated source
- `originalSource` <string> The file name of the original source, as reported in the SourceMap
- `originalLine` <number> The line offset of the start of the range in the original source
- `originalColumn` <number> The column offset of the start of the range in the original source
- `name` <string>
The returned value represents the raw range as it appears in the SourceMap, based on zero-indexed offsets, not the 1-indexed line and column numbers as they appear in Error messages and CallSite objects.
To get the corresponding 1-indexed line and column numbers from a lineNumber and columnNumber as they are reported by Error stacks and CallSite objects, use `sourceMap.findOrigin(lineNumber, columnNumber)`.
sourceMap.findOrigin(lineNumber, columnNumber)#
- `lineNumber` <number> The 1-indexed line number of the call site in the generated source
- `columnNumber` <number> The 1-indexed column number of the call site in the generated source
- Returns: <Object>
Given a 1-indexed `lineNumber` and `columnNumber` from a call site in the generated source, find the corresponding call site location in the original source.

If the `lineNumber` and `columnNumber` provided are not found in any source map, then an empty object is returned. Otherwise, the returned object contains the following keys:
- `name` <string> | <undefined> The name of the range in the source map, if one was provided
- `fileName` <string> The file name of the original source, as reported in the SourceMap
- `lineNumber` <number> The 1-indexed lineNumber of the corresponding call site in the original source
- `columnNumber` <number> The 1-indexed columnNumber of the corresponding call site in the original source
Modules: Packages#
History
| Version | Changes |
|---|---|
| v14.13.0, v12.20.0 | Add support for |
| v14.6.0, v12.19.0 | Add package |
| v13.7.0, v12.17.0 | Unflag conditional exports. |
| v13.7.0, v12.16.0 | Remove the |
| v13.6.0, v12.16.0 | Unflag self-referencing a package using its name. |
| v12.7.0 | Introduce |
| v12.0.0 | Add support for ES modules using |
Introduction#
A package is a folder tree described by a `package.json` file. The package consists of the folder containing the `package.json` file and all subfolders until the next folder containing another `package.json` file, or a folder named `node_modules`.

This page provides guidance for package authors writing `package.json` files along with a reference for the `package.json` fields defined by Node.js.
Determining module system#
Introduction#
Node.js will treat the following as ES modules when passed to `node` as the initial input, or when referenced by `import` statements or `import()` expressions:

- Files with an `.mjs` extension.
- Files with a `.js` extension when the nearest parent `package.json` file contains a top-level `"type"` field with a value of `"module"`.
- Strings passed in as an argument to `--eval`, or piped to `node` via `STDIN`, with the flag `--input-type=module`.
- Code containing syntax only successfully parsed as ES modules, such as `import` or `export` statements or `import.meta`, with no explicit marker of how it should be interpreted. Explicit markers are `.mjs` or `.cjs` extensions, `package.json` `"type"` fields with either `"module"` or `"commonjs"` values, or the `--input-type` flag. Dynamic `import()` expressions are supported in either CommonJS or ES modules and would not force a file to be treated as an ES module. See Syntax detection.
Node.js will treat the following as CommonJS when passed to `node` as the initial input, or when referenced by `import` statements or `import()` expressions:

- Files with a `.cjs` extension.
- Files with a `.js` extension when the nearest parent `package.json` file contains a top-level field `"type"` with a value of `"commonjs"`.
- Strings passed in as an argument to `--eval` or `--print`, or piped to `node` via `STDIN`, with the flag `--input-type=commonjs`.
- Files with a `.js` extension with no parent `package.json` file or where the nearest parent `package.json` file lacks a `type` field, and where the code can evaluate successfully as CommonJS. In other words, Node.js tries to run such "ambiguous" files as CommonJS first, and will retry evaluating them as ES modules if the evaluation as CommonJS fails because the parser found ES module syntax.
Writing ES module syntax in "ambiguous" files incurs a performance cost, and therefore it is encouraged that authors be explicit wherever possible. In particular, package authors should always include the `"type"` field in their `package.json` files, even in packages where all sources are CommonJS. Being explicit about the `type` of the package will future-proof the package in case the default type of Node.js ever changes, and it will also make things easier for build tools and loaders to determine how the files in the package should be interpreted.
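For example, even a hypothetical all-CommonJS package benefits from stating its type explicitly in `package.json`:

```json
{
  "name": "my-cjs-package",
  "type": "commonjs",
  "main": "./index.js"
}
```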
Syntax detection#
History
| Version | Changes |
|---|---|
| v22.7.0, v20.19.0 | Syntax detection is enabled by default. |
| v21.1.0, v20.10.0 | Added in: v21.1.0, v20.10.0 |
Node.js will inspect the source code of ambiguous input to determine whether it contains ES module syntax; if such syntax is detected, the input will be treated as an ES module.
Ambiguous input is defined as:

- Files with a `.js` extension or no extension; and either no controlling `package.json` file or one that lacks a `type` field.
- String input (`--eval` or `STDIN`) when `--input-type` is not specified.
ES module syntax is defined as syntax that would throw when evaluated as CommonJS. This includes the following:

- `import` statements (but not `import()` expressions, which are valid in CommonJS).
- `export` statements.
- `import.meta` references.
- `await` at the top level of a module.
- Lexical redeclarations of the CommonJS wrapper variables (`require`, `module`, `exports`, `__dirname`, `__filename`).
Module resolution and loading#
Node.js has two types of module resolution and loading, chosen based on how the module is requested.
When a module is requested via `require()` (which is available by default in CommonJS modules, and can be created with `createRequire()` in both CommonJS and ES modules):
- Resolution:
  - The resolution initiated by `require()` supports folders as modules.
  - When resolving a specifier, if no exact match is found, `require()` will try to add extensions (`.js`, `.json`, and finally `.node`) and then attempt to resolve folders as modules.
  - It does not support URLs as specifiers by default.
- Loading:
  - `.json` files are treated as JSON text files.
  - `.node` files are interpreted as compiled addon modules loaded with `process.dlopen()`.
  - `.ts`, `.mts` and `.cts` files are treated as TypeScript text files.
  - Files with any other extension, or without extensions, are treated as JavaScript text files.
`require()` can only be used to load ECMAScript modules from CommonJS modules if the ECMAScript module and its dependencies are synchronous (i.e. they do not contain top-level `await`).
When a module is requested via static `import` statements (only available in ES modules) or `import()` expressions (available in both CommonJS and ES modules):

- Resolution:
  - The resolution of `import`/`import()` does not support folders as modules; directory indexes (e.g. `'./startup/index.js'`) must be fully specified.
  - It does not perform extension searching. A file extension must be provided when the specifier is a relative or absolute file URL.
  - It supports `file://` and `data:` URLs as specifiers by default.
- Loading:
  - `.json` files are treated as JSON text files. When importing JSON modules, an import type attribute is required (e.g. `import json from './data.json' with { type: 'json' }`).
  - `.node` files are interpreted as compiled addon modules loaded with `process.dlopen()`, if `--experimental-addon-modules` is enabled.
  - `.ts`, `.mts` and `.cts` files are treated as TypeScript text files.
  - It accepts only `.js`, `.mjs`, and `.cjs` extensions for JavaScript text files.
  - `.wasm` files are treated as WebAssembly modules.
  - Any other file extension will result in an `ERR_UNKNOWN_FILE_EXTENSION` error. Additional file extensions can be facilitated via customization hooks.
  - `import`/`import()` can be used to load JavaScript CommonJS modules. Such modules are passed through `cjs-module-lexer` to try to identify named exports, which are available if they can be determined through static analysis.
Regardless of how a module is requested, the resolution and loading process can be customized using customization hooks.
package.json and file extensions#
Within a package, the `package.json` `"type"` field defines how Node.js should interpret `.js` files. If a `package.json` file does not have a `"type"` field, `.js` files are treated as CommonJS.

A `package.json` `"type"` value of `"module"` tells Node.js to interpret `.js` files within that package as using ES module syntax.

The `"type"` field applies not only to initial entry points (`node my-app.js`) but also to files referenced by `import` statements and `import()` expressions.
```js
// my-app.js, treated as an ES module because there is a package.json
// file in the same folder with "type": "module".

import './startup/init.js';
// Loaded as ES module since ./startup contains no package.json file,
// and therefore inherits the "type" value from one level up.

import 'commonjs-package';
// Loaded as CommonJS since ./node_modules/commonjs-package/package.json
// lacks a "type" field or contains "type": "commonjs".

import './node_modules/commonjs-package/index.js';
// Loaded as CommonJS since ./node_modules/commonjs-package/package.json
// lacks a "type" field or contains "type": "commonjs".
```

Files ending with `.mjs` are always loaded as ES modules regardless of the nearest parent `package.json`.

Files ending with `.cjs` are always loaded as CommonJS regardless of the nearest parent `package.json`.
```js
import './legacy-file.cjs';
// Loaded as CommonJS since .cjs is always loaded as CommonJS.

import 'commonjs-package/src/index.mjs';
// Loaded as ES module since .mjs is always loaded as ES module.
```

The `.mjs` and `.cjs` extensions can be used to mix types within the same package:

- Within a `"type": "module"` package, Node.js can be instructed to interpret a particular file as CommonJS by naming it with a `.cjs` extension (since both `.js` and `.mjs` files are treated as ES modules within a `"module"` package).
- Within a `"type": "commonjs"` package, Node.js can be instructed to interpret a particular file as an ES module by naming it with an `.mjs` extension (since both `.js` and `.cjs` files are treated as CommonJS within a `"commonjs"` package).
--input-type flag#
Strings passed in as an argument to `--eval` (or `-e`), or piped to `node` via `STDIN`, are treated as ES modules when the `--input-type=module` flag is set.

```bash
node --input-type=module --eval "import { sep } from 'node:path'; console.log(sep);"

echo "import { sep } from 'node:path'; console.log(sep);" | node --input-type=module
```

For completeness there is also `--input-type=commonjs`, for explicitly running string input as CommonJS. This is the default behavior if `--input-type` is unspecified.
Package entry points#
In a package's `package.json` file, two fields can define entry points for a package: `"main"` and `"exports"`. Both fields apply to both ES module and CommonJS module entry points.

The `"main"` field is supported in all versions of Node.js, but its capabilities are limited: it only defines the main entry point of the package.

The `"exports"` field provides a modern alternative to `"main"`, allowing multiple entry points to be defined, supporting conditional entry resolution between environments, and preventing any other entry points besides those defined in `"exports"`. This encapsulation allows module authors to clearly define the public interface for their package.

For new packages targeting the currently supported versions of Node.js, the `"exports"` field is recommended. For packages supporting Node.js 10 and below, the `"main"` field is required. If both `"exports"` and `"main"` are defined, the `"exports"` field takes precedence over `"main"` in supported versions of Node.js.

Conditional exports can be used within `"exports"` to define different package entry points per environment, including whether the package is referenced via `require` or via `import`. For more information about supporting both CommonJS and ES modules in a single package please consult the dual CommonJS/ES module packages section.
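A minimal sketch of conditional exports (the file paths are hypothetical):

```json
{
  "name": "my-package",
  "exports": {
    ".": {
      "import": "./esm/index.js",
      "require": "./cjs/index.cjs"
    }
  }
}
```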
Existing packages introducing the `"exports"` field will prevent consumers of the package from using any entry points that are not defined, including the `package.json` (e.g. `require('your-package/package.json')`). This will likely be a breaking change.

To make the introduction of `"exports"` non-breaking, ensure that every previously supported entry point is exported. It is best to explicitly specify entry points so that the package's public API is well-defined. For example, a project that previously exported `main`, `lib`, `feature`, and the `package.json` could use the following `package.exports`:
```json
{
  "name": "my-package",
  "exports": {
    ".": "./lib/index.js",
    "./lib": "./lib/index.js",
    "./lib/index": "./lib/index.js",
    "./lib/index.js": "./lib/index.js",
    "./feature": "./feature/index.js",
    "./feature/index": "./feature/index.js",
    "./feature/index.js": "./feature/index.js",
    "./package.json": "./package.json"
  }
}
```

Alternatively a project could choose to export entire folders both with and without extensioned subpaths using export patterns:

```json
{
  "name": "my-package",
  "exports": {
    ".": "./lib/index.js",
    "./lib": "./lib/index.js",
    "./lib/*": "./lib/*.js",
    "./lib/*.js": "./lib/*.js",
    "./feature": "./feature/index.js",
    "./feature/*": "./feature/*.js",
    "./feature/*.js": "./feature/*.js",
    "./package.json": "./package.json"
  }
}
```

With the above providing backwards-compatibility for any minor package versions, a future major change for the package can then properly restrict the exports to only the specific feature exports exposed:

```json
{
  "name": "my-package",
  "exports": {
    ".": "./lib/index.js",
    "./feature/*.js": "./feature/*.js",
    "./feature/internal/*": null
  }
}
```

Main entry point export#
When writing a new package, it is recommended to use the `"exports"` field:

```json
{
  "exports": "./index.js"
}
```

When the `"exports"` field is defined, all subpaths of the package are encapsulated and no longer available to importers. For example, `require('pkg/subpath.js')` throws an `ERR_PACKAGE_PATH_NOT_EXPORTED` error.

This encapsulation of exports provides more reliable guarantees about package interfaces for tools and when handling semver upgrades for a package. It is not a strong encapsulation since a direct require of any absolute subpath of the package such as `require('/path/to/node_modules/pkg/subpath.js')` will still load `subpath.js`.

All currently supported versions of Node.js and modern build tools support the `"exports"` field. For projects using an older version of Node.js or a related build tool, compatibility can be achieved by including the `"main"` field alongside `"exports"` pointing to the same module:

```json
{
  "main": "./index.js",
  "exports": "./index.js"
}
```

Subpath exports#
When using the `"exports"` field, custom subpaths can be defined along with the main entry point by treating the main entry point as the `"."` subpath:

```json
{
  "exports": {
    ".": "./index.js",
    "./submodule.js": "./src/submodule.js"
  }
}
```

Now only the defined subpath in `"exports"` can be imported by a consumer:

```js
import submodule from 'es-module-package/submodule.js';
// Loads ./node_modules/es-module-package/src/submodule.js
```

While other subpaths will error:

```js
import submodule from 'es-module-package/private-module.js';
// Throws ERR_PACKAGE_PATH_NOT_EXPORTED
```

Extensions in subpaths#
Package authors should provide either extensioned (`import 'pkg/subpath.js'`) or extensionless (`import 'pkg/subpath'`) subpaths in their exports. This ensures that there is only one subpath for each exported module so that all dependents import the same consistent specifier, keeping the package contract clear for consumers and simplifying package subpath completions.

Traditionally, packages tended to use the extensionless style, which has the benefits of readability and of masking the true path of the file within the package.

With import maps now providing a standard for package resolution in browsers and other JavaScript runtimes, using the extensionless style can result in bloated import map definitions. Explicit file extensions can avoid this issue by enabling the import map to utilize a packages folder mapping to map multiple subpaths where possible instead of a separate map entry per package subpath export. This also mirrors the requirement of using the full specifier path in relative and absolute import specifiers.
Path Rules and Validation for Export Targets#
When defining paths as targets in the `"exports"` field, Node.js enforces several rules to ensure security, predictability, and proper encapsulation. Understanding these rules is crucial for authors publishing packages.

Targets must be relative URLs#

All target paths in the `"exports"` map (the values associated with export keys) must be relative URL strings starting with `./`.

```json
// package.json
{
  "name": "my-package",
  "exports": {
    ".": "./dist/main.js",            // Correct
    "./feature": "./lib/feature.js"   // Correct
    // "./origin-relative": "/dist/main.js", // Incorrect: Must start with ./
    // "./absolute": "file:///dev/null",     // Incorrect: Must start with ./
    // "./outside": "../common/util.js"      // Incorrect: Must start with ./
  }
}
```

Reasons for this behavior include:

- Security: Prevents exporting arbitrary files from outside the package's own directory.
- Encapsulation: Ensures all exported paths are resolved relative to the package root, making the package self-contained.

No path traversal or invalid segments#

Export targets must not resolve to a location outside the package's root directory. Additionally, path segments like `.` (single dot), `..` (double dot), or `node_modules` (and their URL-encoded equivalents) are generally disallowed within the target string after the initial `./` and in any subpath part substituted into a target pattern.

```json
// package.json
{
  "name": "my-package",
  "exports": {
    // ".": "./dist/../../elsewhere/file.js", // Invalid: path traversal
    // ".": "././dist/main.js",               // Invalid: contains "." segment
    // ".": "./dist/../dist/main.js",         // Invalid: contains ".." segment
    // "./utils/./helper.js": "./utils/helper.js" // Key has invalid segment
  }
}
```

Exports sugar#

If the `"."` export is the only export, the `"exports"` field provides sugar for this case by allowing the target to be written as the direct value of the `"exports"` field.

```json
{
  "exports": {
    ".": "./index.js"
  }
}
```

can be written:

```json
{
  "exports": "./index.js"
}
```

Subpath imports#
History
| Version | Changes |
|---|---|
| v25.4.0 | Allow subpath imports that start with |
| v14.6.0, v12.19.0 | Added in: v14.6.0, v12.19.0 |
In addition to the `"exports"` field, there is a package `"imports"` field to create private mappings that only apply to import specifiers from within the package itself.

Entries in the `"imports"` field must always start with `#` to ensure they are disambiguated from external package specifiers.

For example, the imports field can be used to gain the benefits of conditional exports for internal modules:

```json
// package.json
{
  "imports": {
    "#dep": {
      "node": "dep-node-native",
      "default": "./dep-polyfill.js"
    }
  },
  "dependencies": {
    "dep-node-native": "^1.0.0"
  }
}
```

where `import '#dep'` does not get the resolution of the external package `dep-node-native` (including its exports in turn), and instead gets the local file `./dep-polyfill.js` relative to the package in other environments.

Unlike the `"exports"` field, the `"imports"` field permits mapping to external packages.

The resolution rules for the imports field are otherwise analogous to the exports field.
Subpath patterns#
History
| Version | Changes |
|---|---|
| v16.10.0, v14.19.0 | Support pattern trailers in "imports" field. |
| v16.9.0, v14.19.0 | Support pattern trailers. |
| v14.13.0, v12.20.0 | Added in: v14.13.0, v12.20.0 |
For packages with a small number of exports or imports, we recommend explicitly listing each exports subpath entry. But for packages that have large numbers of subpaths, this might cause `package.json` bloat and maintenance issues.

For these use cases, subpath export patterns can be used instead:

```json
// ./node_modules/es-module-package/package.json
{
  "exports": {
    "./features/*.js": "./src/features/*.js"
  },
  "imports": {
    "#internal/*.js": "./src/internal/*.js"
  }
}
```

`*` maps expose nested subpaths as it is a string replacement syntax only.

All instances of `*` on the right hand side will then be replaced with this value, including if it contains any `/` separators.

```js
import featureX from 'es-module-package/features/x.js';
// Loads ./node_modules/es-module-package/src/features/x.js

import featureY from 'es-module-package/features/y/y.js';
// Loads ./node_modules/es-module-package/src/features/y/y.js

import internalZ from '#internal/z.js';
// Loads ./src/internal/z.js
```

This is a direct static matching and replacement without any special handling for file extensions. Including the `"*.js"` on both sides of the mapping restricts the exposed package exports to only JS files.
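The replacement described above can be sketched as plain string substitution. This is an illustrative helper only (not Node.js's internal resolver): the portion of the specifier matched by the `*` in the pattern key is substituted for every `*` in the target, even when it contains `/` separators.

```javascript
// Illustrative sketch of subpath-pattern target expansion.
// Not a Node.js API; Node's real resolver also validates segments.
function expandPatternTarget(target, match) {
  // Every `*` in the target is replaced with the matched portion.
  return target.split('*').join(match);
}

console.log(expandPatternTarget('./src/features/*.js', 'x'));
// Prints: ./src/features/x.js
console.log(expandPatternTarget('./src/features/*.js', 'y/y'));
// Prints: ./src/features/y/y.js
```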
The property of exports being statically enumerable is maintained with exports patterns since the individual exports for a package can be determined by treating the right hand side target pattern as a `**` glob against the list of files within the package. Because `node_modules` paths are forbidden in exports targets, this expansion is dependent on only the files of the package itself.

To exclude private subfolders from patterns, `null` targets can be used:

```json
// ./node_modules/es-module-package/package.json
{
  "exports": {
    "./features/*.js": "./src/features/*.js",
    "./features/private-internal/*": null
  }
}
```

```js
import featureInternal from 'es-module-package/features/private-internal/m.js';
// Throws: ERR_PACKAGE_PATH_NOT_EXPORTED

import featureX from 'es-module-package/features/x.js';
// Loads ./node_modules/es-module-package/src/features/x.js
```

Conditional exports#
History
| Version | Changes |
|---|---|
| v13.7.0, v12.16.0 | Unflag conditional exports. |
| v13.2.0, v12.16.0 | Added in: v13.2.0, v12.16.0 |
Conditional exports provide a way to map to different paths depending on certain conditions. They are supported for both CommonJS and ES module imports.

For example, a package that wants to provide different ES module exports for `require()` and `import` can be written:

```json
// package.json
{
  "exports": {
    "import": "./index-module.js",
    "require": "./index-require.cjs"
  },
  "type": "module"
}
```

Node.js implements the following conditions, listed in order from most specific to least specific as conditions should be defined:

- `"node-addons"` - similar to `"node"` and matches for any Node.js environment. This condition can be used to provide an entry point which uses native C++ addons as opposed to an entry point which is more universal and doesn't rely on native addons. This condition can be disabled via the `--no-addons` flag.
- `"node"` - matches for any Node.js environment. Can be a CommonJS or ES module file. In most cases explicitly calling out the Node.js platform is not necessary.
- `"import"` - matches when the package is loaded via `import` or `import()`, or via any top-level import or resolve operation by the ECMAScript module loader. Applies regardless of the module format of the target file. Always mutually exclusive with `"require"`.
- `"require"` - matches when the package is loaded via `require()`. The referenced file should be loadable with `require()` although the condition matches regardless of the module format of the target file. Expected formats include CommonJS, JSON, native addons, and ES modules. Always mutually exclusive with `"import"`.
- `"module-sync"` - matches regardless of whether the package is loaded via `import`, `import()` or `require()`. The format is expected to be ES modules that do not contain top-level await in their module graph - if they do, `ERR_REQUIRE_ASYNC_MODULE` will be thrown when the module is `require()`-ed.
- `"default"` - the generic fallback that always matches. Can be a CommonJS or ES module file. This condition should always come last.

Within the `"exports"` object, key order is significant. During condition matching, earlier entries have higher priority and take precedence over later entries. The general rule is that conditions should be from most specific to least specific in object order.

Using the `"import"` and `"require"` conditions can lead to some hazards, which are further explained in the dual CommonJS/ES module packages section.

The `"node-addons"` condition can be used to provide an entry point which uses native C++ addons. However, this condition can be disabled via the `--no-addons` flag. When using `"node-addons"`, it's recommended to treat `"default"` as an enhancement that provides a more universal entry point, e.g. using WebAssembly instead of a native addon.

Conditional exports can also be extended to exports subpaths, for example:

```json
{
  "exports": {
    ".": "./index.js",
    "./feature.js": {
      "node": "./feature-node.js",
      "default": "./feature.js"
    }
  }
}
```

Defines a package where `require('pkg/feature.js')` and `import 'pkg/feature.js'` could provide different implementations between Node.js and other JS environments.

When using environment branches, always include a `"default"` condition where possible. Providing a `"default"` condition ensures that any unknown JS environments are able to use this universal implementation, which avoids these JS environments having to pretend to be existing environments in order to support packages with conditional exports. For this reason, using `"node"` and `"default"` condition branches is usually preferable to using `"node"` and `"browser"` condition branches.
Nested conditions#
In addition to direct mappings, Node.js also supports nested condition objects.

For example, to define a package that only has dual mode entry points for use in Node.js but not the browser:

```json
{
  "exports": {
    "node": {
      "import": "./feature-node.mjs",
      "require": "./feature-node.cjs"
    },
    "default": "./feature.mjs"
  }
}
```

Conditions continue to be matched in order as with flat conditions. If a nested condition does not have any mapping it will continue checking the remaining conditions of the parent condition. In this way nested conditions behave analogously to nested JavaScript `if` statements.
Resolving user conditions#
When running Node.js, custom user conditions can be added with the `--conditions` flag:

```bash
node --conditions=development index.js
```

which would then resolve the `"development"` condition in package imports and exports, while resolving the existing `"node"`, `"node-addons"`, `"default"`, `"import"`, and `"require"` conditions as appropriate.

Any number of custom conditions can be set with repeat flags.
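A package could pair such a custom condition with a `"default"` fallback so that it still resolves when the flag is absent. A minimal sketch (the file names here are illustrative, not from the text above):

```json
{
  "exports": {
    ".": {
      "development": "./src/index-dev.js",
      "default": "./src/index.js"
    }
  }
}
```

With `node --conditions=development`, `"development"` matches first because it appears earlier in the object; otherwise resolution falls through to `"default"`.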
Typical conditions should only contain alphanumeric characters, using ":", "-", or "=" as separators if necessary. Anything else may run into compatibility issues outside of Node.js.

In Node.js, conditions have very few restrictions, but specifically these include:

- They must contain at least one character.
- They cannot start with "." since they may appear in places that also allow relative paths.
- They cannot contain "," since they may be parsed as a comma-separated list by some CLI tools.
- They cannot be integer property keys like "10" since that can have unexpected effects on property key ordering for JS objects.
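The restrictions above can be illustrated with a small validation sketch. This is a hypothetical helper, not a Node.js API:

```javascript
// Hypothetical check mirroring the restrictions listed above.
function isPlausibleConditionName(name) {
  if (name.length === 0) return false;      // must contain at least one character
  if (name.startsWith('.')) return false;   // could be mistaken for a relative path
  if (name.includes(',')) return false;     // breaks comma-separated CLI lists
  // Integer-like keys ("10") reorder JS object properties.
  if (/^(0|[1-9][0-9]*)$/.test(name)) return false;
  return true;
}

console.log(isPlausibleConditionName('development')); // Prints: true
console.log(isPlausibleConditionName('10'));          // Prints: false
```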
Community Conditions Definitions#
Condition strings other than the `"import"`, `"require"`, `"node"`, `"module-sync"`, `"node-addons"` and `"default"` conditions implemented in Node.js core are ignored by default.

Other platforms may implement other conditions and user conditions can be enabled in Node.js via the `--conditions` / `-C` flag.

Since custom package conditions require clear definitions to ensure correct usage, a list of common known package conditions and their strict definitions is provided below to assist with ecosystem coordination.

- `"types"` - can be used by typing systems to resolve the typing file for the given export. This condition should always be included first.
- `"browser"` - any web browser environment.
- `"development"` - can be used to define a development-only environment entry point, for example to provide additional debugging context such as better error messages when running in a development mode. Must always be mutually exclusive with `"production"`.
- `"production"` - can be used to define a production environment entry point. Must always be mutually exclusive with `"development"`.

For other runtimes, platform-specific condition key definitions are maintained by the WinterCG in the Runtime Keys proposal specification.
New conditions definitions may be added to this list by creating a pull request to the Node.js documentation for this section. The requirements for listing a new condition definition here are that:

- The definition should be clear and unambiguous for all implementers.
- The use case for why the condition is needed should be clearly justified.
- There should exist sufficient existing implementation usage.
- The condition name should not conflict with another condition definition or condition in wide usage.
- The listing of the condition definition should provide a coordination benefit to the ecosystem that wouldn't otherwise be possible. For example, this would not necessarily be the case for company-specific or application-specific conditions.
- The condition should be such that a Node.js user would expect it to be in Node.js core documentation. The `"types"` condition is a good example: It doesn't really belong in the Runtime Keys proposal but is a good fit here in the Node.js docs.

The above definitions may be moved to a dedicated conditions registry in due course.
Self-referencing a package using its name#
History
| Version | Changes |
|---|---|
| v13.6.0, v12.16.0 | Unflag self-referencing a package using its name. |
| v13.1.0, v12.16.0 | Added in: v13.1.0, v12.16.0 |
Within a package, the values defined in the package's `package.json` `"exports"` field can be referenced via the package's name. For example, assuming the `package.json` is:

```json
// package.json
{
  "name": "a-package",
  "exports": {
    ".": "./index.mjs",
    "./foo.js": "./foo.js"
  }
}
```

Then any module in that package can reference an export in the package itself:

```js
// ./a-module.mjs
import { something } from 'a-package'; // Imports "something" from ./index.mjs.
```

Self-referencing is available only if `package.json` has `"exports"`, and will allow importing only what that `"exports"` (in the `package.json`) allows. So the code below, given the previous package, will generate a runtime error:

```js
// ./another-module.mjs
// Imports "another" from ./m.mjs. Fails because
// the "package.json" "exports" field
// does not provide an export named "./m.mjs".
import { another } from 'a-package/m.mjs';
```

Self-referencing is also available when using `require`, both in an ES module, and in a CommonJS one. For example, this code will also work:

```js
// ./a-module.js
const { something } = require('a-package/foo.js'); // Loads from ./foo.js.
```

Finally, self-referencing also works with scoped packages. For example, this code will also work:

```json
// package.json
{
  "name": "@my/package",
  "exports": "./index.js"
}
```

```js
// ./index.js
module.exports = 42;
```

```js
// ./other.js
console.log(require('@my/package'));
```

```console
$ node other.js
42
```

Dual CommonJS/ES module packages#

See the package examples repository for details.
Node.js `package.json` field definitions#

This section describes the fields used by the Node.js runtime. Other tools (such as npm) use additional fields which are ignored by Node.js and not documented here.

The following fields in `package.json` files are used in Node.js:

- `"name"` - Relevant when using named imports within a package. Also used by package managers as the name of the package.
- `"main"` - The default module when loading the package, if exports is not specified, and in versions of Node.js prior to the introduction of exports.
- `"type"` - The package type determining whether to load `.js` files as CommonJS or ES modules.
- `"exports"` - Package exports and conditional exports. When present, limits which submodules can be loaded from within the package.
- `"imports"` - Package imports, for use by modules within the package itself.
"name"#
History
| Version | Changes |
|---|---|
| v13.6.0, v12.16.0 | Remove the |
| v13.1.0, v12.16.0 | Added in: v13.1.0, v12.16.0 |
- Type: <string>

```json
{
  "name": "package-name"
}
```

The `"name"` field defines your package's name. Publishing to the npm registry requires a name that satisfies certain requirements.

The `"name"` field can be used in addition to the `"exports"` field to self-reference a package using its name.
"main"#
- Type: <string>

```json
{
  "main": "./index.js"
}
```

The `"main"` field defines the entry point of a package when imported by name via a `node_modules` lookup. Its value is a path.

When a package has an `"exports"` field, this will take precedence over the `"main"` field when importing the package by name.

It also defines the script that is used when the package directory is loaded via `require()`.

```js
// This resolves to ./path/to/directory/index.js.
require('./path/to/directory');
```

"type"#
History
| Version | Changes |
|---|---|
| v13.2.0, v12.17.0 | Unflag |
| v12.0.0 | Added in: v12.0.0 |
- Type: <string>

The `"type"` field defines the module format that Node.js uses for all `.js` files that have that `package.json` file as their nearest parent.

Files ending with `.js` are loaded as ES modules when the nearest parent `package.json` file contains a top-level field `"type"` with a value of `"module"`.

The nearest parent `package.json` is defined as the first `package.json` found when searching in the current folder, that folder's parent, and so on up until a `node_modules` folder or the volume root is reached.

```json
// package.json
{
  "type": "module"
}
```

```bash
# In same folder as preceding package.json
node my-app.js # Runs as ES module
```

If the nearest parent `package.json` lacks a `"type"` field, or contains `"type": "commonjs"`, `.js` files are treated as CommonJS. If the volume root is reached and no `package.json` is found, `.js` files are treated as CommonJS.
`import` statements of `.js` files are treated as ES modules if the nearest parent `package.json` contains `"type": "module"`.

```js
// my-app.js, part of the same example as above
import './startup.js'; // Loaded as ES module because of package.json
```

Regardless of the value of the `"type"` field, `.mjs` files are always treated as ES modules and `.cjs` files are always treated as CommonJS.
"exports"#
History
| Version | Changes |
|---|---|
| v14.13.0, v12.20.0 | Add support for |
| v13.7.0, v12.17.0 | Unflag conditional exports. |
| v13.7.0, v12.16.0 | Implement logical conditional exports ordering. |
| v13.7.0, v12.16.0 | Remove the |
| v13.2.0, v12.16.0 | Implement conditional exports. |
| v12.7.0 | Added in: v12.7.0 |
- Type: <Object> | <string> | <string[]>

```json
{
  "exports": "./index.js"
}
```

The `"exports"` field allows defining the entry points of a package when imported by name loaded either via a `node_modules` lookup or a self-reference to its own name. It is supported in Node.js 12+ as an alternative to the `"main"` that can support defining subpath exports and conditional exports while encapsulating internal unexported modules.

Conditional Exports can also be used within `"exports"` to define different package entry points per environment, including whether the package is referenced via `require` or via `import`.

All paths defined in the `"exports"` must be relative file URLs starting with `./`.
"imports"#
- Type: <Object>

```json
// package.json
{
  "imports": {
    "#dep": {
      "node": "dep-node-native",
      "default": "./dep-polyfill.js"
    }
  },
  "dependencies": {
    "dep-node-native": "^1.0.0"
  }
}
```

Entries in the imports field must be strings starting with `#`.

Package imports permit mapping to external packages.

This field defines subpath imports for the current package.
Modules: TypeScript#
History
| Version | Changes |
|---|---|
| v25.2.0 | Type stripping is now stable. |
| v24.3.0, v22.18.0 | Type stripping no longer emits an experimental warning. |
| v23.6.0, v22.18.0 | Type stripping is enabled by default. |
| v22.7.0 | Added |
Enabling#
There are two ways to enable runtime TypeScript support in Node.js:

For full support of all of TypeScript's syntax and features, including using any version of TypeScript, use a third-party package.

For lightweight support, you can use the built-in support for type stripping.

Full TypeScript support#

To use TypeScript with full support for all TypeScript features, including `tsconfig.json`, you can use a third-party package. These instructions use tsx as an example but there are many other similar libraries available.

Install the package as a development dependency using whatever package manager you're using for your project. For example, with npm:

```bash
npm install --save-dev tsx
```

Then you can run your TypeScript code via:

```bash
npx tsx your-file.ts
```

Or alternatively, you can run with `node` via:

```bash
node --import=tsx your-file.ts
```
Type stripping#
History
| Version | Changes |
|---|---|
| v25.2.0 | Type stripping is now stable. |
| v22.6.0 | Added in: v22.6.0 |
By default Node.js will execute TypeScript files that contain only erasable TypeScript syntax. Node.js will replace TypeScript syntax with whitespace, and no type checking is performed. To enable the transformation of non-erasable TypeScript syntax, which requires JavaScript code generation, such as `enum` declarations and parameter properties, use the flag `--experimental-transform-types`. To disable this feature, use the flag `--no-strip-types`.

Node.js ignores `tsconfig.json` files and therefore features that depend on settings within `tsconfig.json`, such as paths or converting newer JavaScript syntax to older standards, are intentionally unsupported. To get full TypeScript support, see Full TypeScript support.

The type stripping feature is designed to be lightweight. By intentionally not supporting syntaxes that require JavaScript code generation, and by replacing inline types with whitespace, Node.js can run TypeScript code without the need for source maps.

Type stripping is compatible with most versions of TypeScript but we recommend version 5.8 or newer with the following `tsconfig.json` settings:

```json
{
  "compilerOptions": {
    "noEmit": true, // Optional - see note below
    "target": "esnext",
    "module": "nodenext",
    "rewriteRelativeImportExtensions": true,
    "erasableSyntaxOnly": true,
    "verbatimModuleSyntax": true
  }
}
```

Use the `noEmit` option if you intend to only execute `*.ts` files, for example a build script. You won't need this flag if you intend to distribute `*.js` files.
Determining module system#
Node.js supports both CommonJS and ES Modules syntax in TypeScript files. Node.js will not convert from one module system to another; if you want your code to run as an ES module, you must use `import` and `export` syntax, and if you want your code to run as CommonJS you must use `require` and `module.exports`.

- `.ts` files will have their module system determined the same way as `.js` files. To use `import` and `export` syntax, add `"type": "module"` to the nearest parent `package.json`.
- `.mts` files will always be run as ES modules, similar to `.mjs` files.
- `.cts` files will always be run as CommonJS modules, similar to `.cjs` files.
- `.tsx` files are unsupported.

As in JavaScript files, file extensions are mandatory in `import` statements and `import()` expressions: `import './file.ts'`, not `import './file'`. Because of backward compatibility, file extensions are also mandatory in `require()` calls: `require('./file.ts')`, not `require('./file')`, similar to how the `.cjs` extension is mandatory in `require` calls in CommonJS files.

The `tsconfig.json` option `allowImportingTsExtensions` will allow the TypeScript compiler `tsc` to type-check files with `import` specifiers that include the `.ts` extension.
TypeScript features#
Since Node.js is only removing inline types, any TypeScript features that involve replacing TypeScript syntax with new JavaScript syntax will error, unless the flag `--experimental-transform-types` is passed.

The most prominent features that require transformation are:

- `Enum` declarations
- `namespace` with runtime code
- legacy `module` with runtime code
- parameter properties
- import aliases

`namespace` and `module` declarations that do not contain runtime code are supported. This example will work correctly:

```ts
// This namespace is exporting a type
namespace TypeOnly {
  export type A = string;
}
```

This will result in `ERR_UNSUPPORTED_TYPESCRIPT_SYNTAX` error:

```ts
// This namespace is exporting a value
namespace A {
  export let x = 1;
}
```

Since Decorators are currently a TC39 Stage 3 proposal, they are not transformed and will result in a parser error. Node.js does not provide polyfills and thus will not support decorators until they are supported natively in JavaScript.

In addition, Node.js does not read `tsconfig.json` files and does not support features that depend on settings within `tsconfig.json`, such as paths or converting newer JavaScript syntax into older standards.
Importing types without the `type` keyword#

Due to the nature of type stripping, the `type` keyword is necessary to correctly strip type imports. Without the `type` keyword, Node.js will treat the import as a value import, which will result in a runtime error. The tsconfig option `verbatimModuleSyntax` can be used to match this behavior.

This example will work correctly:

```ts
import type { Type1, Type2 } from './module.ts';
import { fn, type FnParams } from './fn.ts';
```

This will result in a runtime error:

```ts
import { Type1, Type2 } from './module.ts';
import { fn, FnParams } from './fn.ts';
```

Non-file forms of input#
Type stripping can be enabled for `--eval` and STDIN. The module system will be determined by `--input-type`, as it is for JavaScript.

TypeScript syntax is unsupported in the REPL, `--check`, and `inspect`.
Source maps#
Since inline types are replaced by whitespace, source maps are unnecessary for correct line numbers in stack traces, and Node.js does not generate them. When `--experimental-transform-types` is enabled, source maps are enabled by default.
Type stripping in dependencies#
To discourage package authors from publishing packages written in TypeScript, Node.js refuses to handle TypeScript files inside folders under a `node_modules` path.
Paths aliases#
tsconfig `"paths"` won't be transformed and will therefore produce an error. The closest feature available is subpath imports, with the limitation that they need to start with `#`.
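For example, a tsconfig alias like `"@utils/*": ["./src/utils/*"]` could instead be expressed as a subpath import in `package.json` (the names and paths below are illustrative):

```json
{
  "imports": {
    "#utils/*.js": "./src/utils/*.js"
  }
}
```

Code would then use `import helper from '#utils/helper.js'` rather than the tsconfig alias, and the mapping works at runtime in Node.js without any transformation.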
Net#
Source Code: lib/net.js

The `node:net` module provides an asynchronous network API for creating stream-based TCP or IPC servers (`net.createServer()`) and clients (`net.createConnection()`).

It can be accessed using:

```js
import net from 'node:net';
```

```js
const net = require('node:net');
```
IPC support#
History
| Version | Changes |
|---|---|
| v20.8.0 | Support binding to abstract Unix domain socket path like |
The `node:net` module supports IPC with named pipes on Windows, and Unix domain sockets on other operating systems.
Identifying paths for IPC connections#
`net.connect()`, `net.createConnection()`, `server.listen()`, and `socket.connect()` take a `path` parameter to identify IPC endpoints.

On Unix, the local domain is also known as the Unix domain. The path is a file system pathname. It will throw an error when the length of pathname is greater than the length of `sizeof(sockaddr_un.sun_path)`. Typical values are 107 bytes on Linux and 103 bytes on macOS. If a Node.js API abstraction creates the Unix domain socket, it will unlink the Unix domain socket as well. For example, `net.createServer()` may create a Unix domain socket and `server.close()` will unlink it. But if a user creates the Unix domain socket outside of these abstractions, the user will need to remove it. The same applies when a Node.js API creates a Unix domain socket but the program then crashes. In short, a Unix domain socket will be visible in the file system and will persist until unlinked. On Linux, you can use an abstract Unix socket by adding `\0` to the beginning of the path, such as `\0abstract`. The path to the abstract Unix socket is not visible in the file system and it will disappear automatically when all open references to the socket are closed.
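As an illustrative guard (not a Node.js API), a program could check a candidate socket path's byte length before binding. The limits used here are the typical values quoted above; the exact limit is platform-dependent:

```javascript
// Typical sun_path limits: 107 bytes on Linux, 103 on macOS (see above).
const SUN_PATH_MAX = process.platform === 'darwin' ? 103 : 107;

// Hypothetical helper: would this path fit in sockaddr_un.sun_path?
function fitsInSunPath(socketPath) {
  return Buffer.byteLength(socketPath, 'utf8') <= SUN_PATH_MAX;
}

console.log(fitsInSunPath('/tmp/app.sock'));           // Prints: true
console.log(fitsInSunPath('/tmp/' + 'x'.repeat(200))); // Prints: false
```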
On Windows, the local domain is implemented using a named pipe. The path must refer to an entry in `\\?\pipe\` or `\\.\pipe\`. Any characters are permitted, but the latter may do some processing of pipe names, such as resolving `..` sequences. Despite how it might look, the pipe namespace is flat. Pipes will not persist. They are removed when the last reference to them is closed. Unlike Unix domain sockets, Windows will close and remove the pipe when the owning process exits.

JavaScript string escaping requires paths to be specified with extra backslash escaping such as:

```js
net.createServer().listen(
  path.join('\\\\?\\pipe', process.cwd(), 'myctl'));
```

Class: net.BlockList#
The `BlockList` object can be used with some network APIs to specify rules for disabling inbound or outbound access to specific IP addresses, IP ranges, or IP subnets.
blockList.addAddress(address[, type])#
- `address` <string> | <net.SocketAddress> An IPv4 or IPv6 address.
- `type` <string> Either `'ipv4'` or `'ipv6'`. Default: `'ipv4'`.
Adds a rule to block the given IP address.
blockList.addRange(start, end[, type])#
- `start` <string> | <net.SocketAddress> The starting IPv4 or IPv6 address in the range.
- `end` <string> | <net.SocketAddress> The ending IPv4 or IPv6 address in the range.
- `type` <string> Either `'ipv4'` or `'ipv6'`. Default: `'ipv4'`.

Adds a rule to block a range of IP addresses from `start` (inclusive) to `end` (inclusive).
blockList.addSubnet(net, prefix[, type])#
- `net` <string> | <net.SocketAddress> The network IPv4 or IPv6 address.
- `prefix` <number> The number of CIDR prefix bits. For IPv4, this must be a value between `0` and `32`. For IPv6, this must be between `0` and `128`.
- `type` <string> Either `'ipv4'` or `'ipv6'`. Default: `'ipv4'`.
Adds a rule to block a range of IP addresses specified as a subnet mask.
blockList.check(address[, type])#
- `address` <string> | <net.SocketAddress> The IP address to check
- `type` <string> Either `'ipv4'` or `'ipv6'`. Default: `'ipv4'`.
- Returns: <boolean>

Returns `true` if the given IP address matches any of the rules added to the `BlockList`.

```js
const blockList = new net.BlockList();
blockList.addAddress('123.123.123.123');
blockList.addRange('10.0.0.1', '10.0.0.10');
blockList.addSubnet('8592:757c:efae:4e45::', 64, 'ipv6');

console.log(blockList.check('123.123.123.123'));  // Prints: true
console.log(blockList.check('10.0.0.3'));  // Prints: true
console.log(blockList.check('222.111.111.222'));  // Prints: false

// IPv6 notation for IPv4 addresses works:
console.log(blockList.check('::ffff:7b7b:7b7b', 'ipv6')); // Prints: true
console.log(blockList.check('::ffff:123.123.123.123', 'ipv6')); // Prints: true
```

BlockList.isBlockList(value)#

- `value` <any> Any JS value
- Returns `true` if the `value` is a `net.BlockList`.
blockList.fromJSON(value)#
```js
const blockList = new net.BlockList();
const data = [
  'Subnet: IPv4 192.168.1.0/24',
  'Address: IPv4 10.0.0.5',
  'Range: IPv4 192.168.2.1-192.168.2.10',
  'Range: IPv4 10.0.0.1-10.0.0.10',
];
blockList.fromJSON(data);
blockList.fromJSON(JSON.stringify(data));
```
Class:net.SocketAddress#
new net.SocketAddress([options])#
SocketAddress.parse(input)#
- `input` <string> An input string containing an IP address and optional port, e.g. `123.1.2.3:1234` or `[1::1]:1234`.
- Returns: <net.SocketAddress> Returns a `SocketAddress` if parsing was successful. Otherwise returns `undefined`.
Class:net.Server#
- Extends: <EventEmitter>
This class is used to create a TCP orIPC server.
new net.Server([options][, connectionListener])#
- `options` <Object> See `net.createServer([options][, connectionListener])`.
- `connectionListener` <Function> Automatically set as a listener for the `'connection'` event.
- Returns: <net.Server>

`net.Server` is an `EventEmitter` with the following events:
Event:'close'#
Emitted when the server closes. If connections exist, this event is not emitted until all connections are ended.
Event:'connection'#
- Type: <net.Socket> The connection object

Emitted when a new connection is made. `socket` is an instance of `net.Socket`.
Event:'error'#
- Type: <Error>

Emitted when an error occurs. Unlike `net.Socket`, the `'close'` event will not be emitted directly following this event unless `server.close()` is manually called. See the example in discussion of `server.listen()`.
Event:'listening'#
Emitted when the server has been bound after calling `server.listen()`.
Event:'drop'#
When the number of connections reaches the threshold of `server.maxConnections`, the server will drop new connections and emit a `'drop'` event instead. If it is a TCP server, the argument is as follows, otherwise the argument is `undefined`.
server.address()#
History
| Version | Changes |
|---|---|
| v18.4.0 | The |
| v18.0.0 | The |
| v0.1.90 | Added in: v0.1.90 |
Returns the bound `address`, the address `family` name, and `port` of the server as reported by the operating system if listening on an IP socket (useful to find which port was assigned when getting an OS-assigned address): `{ port: 12346, family: 'IPv4', address: '127.0.0.1' }`.

For a server listening on a pipe or Unix domain socket, the name is returned as a string.

```js
const server = net.createServer((socket) => {
  socket.end('goodbye\n');
}).on('error', (err) => {
  // Handle errors here.
  throw err;
});

// Grab an arbitrary unused port.
server.listen(() => {
  console.log('opened server on', server.address());
});
```

`server.address()` returns `null` before the `'listening'` event has been emitted or after calling `server.close()`.
server.close([callback])#
- `callback` <Function> Called when the server is closed.
- Returns: <net.Server>

Stops the server from accepting new connections and keeps existing connections. This function is asynchronous; the server is finally closed when all connections are ended and the server emits a `'close'` event. The optional `callback` will be called once the `'close'` event occurs. Unlike that event, it will be called with an `Error` as its only argument if the server was not open when it was closed.
server[Symbol.asyncDispose]()#
History
| Version | Changes |
|---|---|
| v24.2.0 | No longer experimental. |
| v20.5.0, v18.18.0 | Added in: v20.5.0, v18.18.0 |
Calls server.close() and returns a promise that fulfills when the server has closed.
server.getConnections(callback)#
- callback <Function>
- Returns: <net.Server>
Asynchronously get the number of concurrent connections on the server. Works when sockets were sent to forks.
The callback should take two arguments, err and count.
server.listen()#
Start a server listening for connections. A net.Server can be a TCP or an IPC server depending on what it listens to.
Possible signatures:
- server.listen(handle[, backlog][, callback])
- server.listen(options[, callback])
- server.listen(path[, backlog][, callback]) for IPC servers
- server.listen([port[, host[, backlog]]][, callback]) for TCP servers
This function is asynchronous. When the server starts listening, the 'listening' event will be emitted. The last parameter callback will be added as a listener for the 'listening' event.
All listen() methods can take a backlog parameter to specify the maximum length of the queue of pending connections. The actual length will be determined by the OS through sysctl settings such as tcp_max_syn_backlog and somaxconn on Linux. The default value of this parameter is 511 (not 512).
All net.Socket instances are set to SO_REUSEADDR (see socket(7) for details).
The server.listen() method can be called again if and only if there was an error during the first server.listen() call or server.close() has been called. Otherwise, an ERR_SERVER_ALREADY_LISTEN error will be thrown.
One of the most common errors raised when listening is EADDRINUSE. This happens when another server is already listening on the requested port/path/handle. One way to handle this would be to retry after a certain amount of time:
```js
server.on('error', (e) => {
  if (e.code === 'EADDRINUSE') {
    console.error('Address in use, retrying...');
    setTimeout(() => {
      server.close();
      server.listen(PORT, HOST);
    }, 1000);
  }
});
```

server.listen(handle[, backlog][, callback])#
- handle <Object>
- backlog <number> Common parameter of server.listen() functions
- callback <Function>
- Returns: <net.Server>
Start a server listening for connections on a given handle that has already been bound to a port, a Unix domain socket, or a Windows named pipe.
The handle object can be either a server, a socket (anything with an underlying _handle member), or an object with an fd member that is a valid file descriptor.
Listening on a file descriptor is not supported on Windows.
server.listen(options[, callback])#
History
| Version | Changes |
|---|---|
| v23.1.0, v22.12.0 | The |
| v15.6.0 | AbortSignal support was added. |
| v11.4.0 | The |
| v0.11.14 | Added in: v0.11.14 |
- options <Object> Required. Supports the following properties:
  - backlog <number> Common parameter of server.listen() functions.
  - exclusive <boolean> Default: false
  - host <string>
  - ipv6Only <boolean> For TCP servers, setting ipv6Only to true will disable dual-stack support, i.e., binding to host :: won't make 0.0.0.0 be bound. Default: false.
  - reusePort <boolean> For TCP servers, setting reusePort to true allows multiple sockets on the same host to bind to the same port. Incoming connections are distributed by the operating system to listening sockets. This option is available only on some platforms, such as Linux 3.9+, DragonFlyBSD 3.6+, FreeBSD 12.0+, Solaris 11.4, and AIX 7.2.5+. On unsupported platforms, this option raises an error. Default: false.
  - path <string> Will be ignored if port is specified. See Identifying paths for IPC connections.
  - port <number>
  - readableAll <boolean> For IPC servers, makes the pipe readable for all users. Default: false.
  - signal <AbortSignal> An AbortSignal that may be used to close a listening server.
  - writableAll <boolean> For IPC servers, makes the pipe writable for all users. Default: false.
- callback <Function> Common parameter of server.listen() functions.
- Returns: <net.Server>
If port is specified, it behaves the same as server.listen([port[, host[, backlog]]][, callback]). Otherwise, if path is specified, it behaves the same as server.listen(path[, backlog][, callback]). If none of them is specified, an error will be thrown.
If exclusive is false (default), then cluster workers will use the same underlying handle, allowing connection handling duties to be shared. When exclusive is true, the handle is not shared, and attempted port sharing results in an error. An example which listens on an exclusive port is shown below.
```js
server.listen({
  host: 'localhost',
  port: 80,
  exclusive: true,
});
```

When exclusive is true and the underlying handle is shared, it is possible that several workers query a handle with different backlogs. In this case, the first backlog passed to the master process will be used.
Starting an IPC server as root may cause the server path to be inaccessible for unprivileged users. Using readableAll and writableAll will make the server accessible for all users.
If the signal option is enabled, calling .abort() on the corresponding AbortController is similar to calling .close() on the server:
```js
const controller = new AbortController();
server.listen({
  host: 'localhost',
  port: 80,
  signal: controller.signal,
});
// Later, when you want to close the server.
controller.abort();
```

server.listen(path[, backlog][, callback])#
- path <string> Path the server should listen to. See Identifying paths for IPC connections.
- backlog <number> Common parameter of server.listen() functions.
- callback <Function>
- Returns: <net.Server>
Start an IPC server listening for connections on the given path.
server.listen([port[, host[, backlog]]][, callback])#
- port <number>
- host <string>
- backlog <number> Common parameter of server.listen() functions.
- callback <Function>
- Returns: <net.Server>
Start a TCP server listening for connections on the given port and host.
If port is omitted or is 0, the operating system will assign an arbitrary unused port, which can be retrieved by using server.address().port after the 'listening' event has been emitted.
If host is omitted, the server will accept connections on the unspecified IPv6 address (::) when IPv6 is available, or the unspecified IPv4 address (0.0.0.0) otherwise.
In most operating systems, listening to the unspecified IPv6 address (::) may cause the net.Server to also listen on the unspecified IPv4 address (0.0.0.0).
server.listening#
- Type: <boolean> Indicates whether or not the server is listening for connections.
server.maxConnections#
History
| Version | Changes |
|---|---|
| v21.0.0 | Setting |
| v0.2.0 | Added in: v0.2.0 |
- Type: <integer>
When the number of connections reaches the server.maxConnections threshold:
- If the process is not running in cluster mode, Node.js will close the connection.
- If the process is running in cluster mode, Node.js will, by default, route the connection to another worker process. To close the connection instead, set server.dropMaxConnection to true.
It is not recommended to use this option once a socket has been sent to a child with child_process.fork().
server.dropMaxConnection#
- Type: <boolean>
Set this property to true to begin closing connections once the number of connections reaches the server.maxConnections threshold. This setting is only effective in cluster mode.
server.ref()#
- Returns: <net.Server>
Opposite of unref(); calling ref() on a previously unrefed server will not let the program exit if it's the only server left (the default behavior). If the server is refed, calling ref() again will have no effect.
server.unref()#
- Returns: <net.Server>
Calling unref() on a server will allow the program to exit if this is the only active server in the event system. If the server is already unrefed, calling unref() again will have no effect.
Class: net.Socket#
- Extends: <stream.Duplex>
This class is an abstraction of a TCP socket or a streaming IPC endpoint (uses named pipes on Windows, and Unix domain sockets otherwise). It is also an EventEmitter.
A net.Socket can be created by the user and used directly to interact with a server. For example, it is returned by net.createConnection(), so the user can use it to talk to the server.
It can also be created by Node.js and passed to the user when a connection is received. For example, it is passed to the listeners of a 'connection' event emitted on a net.Server, so the user can use it to interact with the client.
new net.Socket([options])#
History
| Version | Changes |
|---|---|
| v25.6.0 | Added |
| v15.14.0 | AbortSignal support was added. |
| v12.10.0 | Added |
| v0.3.4 | Added in: v0.3.4 |
- options <Object> Available options are:
  - allowHalfOpen <boolean> If set to false, then the socket will automatically end the writable side when the readable side ends. See net.createServer() and the 'end' event for details. Default: false.
  - blockList <net.BlockList> blockList can be used for disabling outbound access to specific IP addresses, IP ranges, or IP subnets.
  - fd <number> If specified, wrap around an existing socket with the given file descriptor, otherwise a new socket will be created.
  - keepAlive <boolean> If set to true, it enables keep-alive functionality on the socket immediately after the connection is established, similarly on what is done in socket.setKeepAlive(). Default: false.
  - keepAliveInitialDelay <number> If set to a positive number, it sets the initial delay before the first keepalive probe is sent on an idle socket. Default: 0.
  - noDelay <boolean> If set to true, it disables the use of Nagle's algorithm immediately after the socket is established. Default: false.
  - onread <Object> If specified, incoming data is stored in a single buffer and passed to the supplied callback when data arrives on the socket. This will cause the streaming functionality to not provide any data. The socket will emit events like 'error', 'end', and 'close' as usual. Methods like pause() and resume() will also behave as expected.
    - buffer <Buffer> | <Uint8Array> | <Function> Either a reusable chunk of memory to use for storing incoming data or a function that returns such.
    - callback <Function> This function is called for every chunk of incoming data. Two arguments are passed to it: the number of bytes written to buffer and a reference to buffer. Return false from this function to implicitly pause() the socket. This function will be executed in the global context.
  - readable <boolean> Allow reads on the socket when an fd is passed, otherwise ignored. Default: false.
  - signal <AbortSignal> An Abort signal that may be used to destroy the socket.
  - typeOfService <number> The initial Type of Service (TOS) value.
  - writable <boolean> Allow writes on the socket when an fd is passed, otherwise ignored. Default: false.
- Returns: <net.Socket>
Creates a new socket object.
The newly created socket can be either a TCP socket or a streaming IPC endpoint, depending on what it connects to with connect().
Event: 'close'#
- hadError <boolean> true if the socket had a transmission error.
Emitted once the socket is fully closed. The argument hadError is a boolean which says if the socket was closed due to a transmission error.
Event: 'connect'#
Emitted when a socket connection is successfully established. See net.createConnection().
Event: 'connectionAttempt'#
- ip <string> The IP which the socket is attempting to connect to.
- port <number> The port which the socket is attempting to connect to.
- family <number> The family of the IP. It can be 6 for IPv6 or 4 for IPv4.
Emitted when a new connection attempt is started. This may be emitted multiple times if the family autoselection algorithm is enabled in socket.connect(options).
Event: 'connectionAttemptFailed'#
- ip <string> The IP which the socket attempted to connect to.
- port <number> The port which the socket attempted to connect to.
- family <number> The family of the IP. It can be 6 for IPv6 or 4 for IPv4.
- error <Error> The error associated with the failure.
Emitted when a connection attempt failed. This may be emitted multiple times if the family autoselection algorithm is enabled in socket.connect(options).
Event: 'connectionAttemptTimeout'#
- ip <string> The IP which the socket attempted to connect to.
- port <number> The port which the socket attempted to connect to.
- family <number> The family of the IP. It can be 6 for IPv6 or 4 for IPv4.
Emitted when a connection attempt timed out. This is only emitted (and may be emitted multiple times) if the family autoselection algorithm is enabled in socket.connect(options).
Event: 'data'#
Emitted when data is received. The argument data will be a Buffer or String. Encoding of data is set by socket.setEncoding().
The data will be lost if there is no listener when a Socket emits a 'data' event.
Event: 'drain'#
Emitted when the write buffer becomes empty. Can be used to throttle uploads.
See also: the return values of socket.write().
Event: 'end'#
Emitted when the other end of the socket signals the end of transmission, thus ending the readable side of the socket.
By default (allowHalfOpen is false) the socket will send an end of transmission packet back and destroy its file descriptor once it has written out its pending write queue. However, if allowHalfOpen is set to true, the socket will not automatically end() its writable side, allowing the user to write arbitrary amounts of data. The user must call end() explicitly to close the connection (i.e. sending a FIN packet back).
Event: 'error'#
- Type: <Error>
Emitted when an error occurs. The 'close' event will be called directly following this event.
Event: 'lookup'#
History
| Version | Changes |
|---|---|
| v5.10.0 | The |
| v0.11.3 | Added in: v0.11.3 |
Emitted after resolving the host name but before connecting. Not applicable to Unix sockets.
- err <Error> | <null> The error object. See dns.lookup().
- address <string> The IP address.
- family <number> | <null> The address type. See dns.lookup().
- host <string> The host name.
Event: 'ready'#
Emitted when a socket is ready to be used.
Triggered immediately after 'connect'.
Event: 'timeout'#
Emitted if the socket times out from inactivity. This is only to notify that the socket has been idle. The user must manually close the connection.
See also: socket.setTimeout().
socket.address()#
History
| Version | Changes |
|---|---|
| v18.4.0 | The |
| v18.0.0 | The |
| v0.1.90 | Added in: v0.1.90 |
- Returns: <Object>
Returns the bound address, the address family name and port of the socket as reported by the operating system: { port: 12346, family: 'IPv4', address: '127.0.0.1' }
socket.autoSelectFamilyAttemptedAddresses#
- Type: <string[]>
This property is only present if the family autoselection algorithm is enabled in socket.connect(options) and it is an array of the addresses that have been attempted.
Each address is a string in the form of $IP:$PORT. If the connection was successful, then the last address is the one that the socket is currently connected to.
socket.bufferSize#
Deprecated: use writable.writableLength instead.
- Type: <integer>
This property shows the number of characters buffered for writing. The buffer may contain strings whose length after encoding is not yet known. So this number is only an approximation of the number of bytes in the buffer.
net.Socket has the property that socket.write() always works. This is to help users get up and running quickly. The computer cannot always keep up with the amount of data that is written to a socket. The network connection simply might be too slow. Node.js will internally queue up the data written to a socket and send it out over the wire when it is possible.
The consequence of this internal buffering is that memory may grow. Users who experience large or growing bufferSize should attempt to "throttle" the data flows in their program with socket.pause() and socket.resume().
socket.connect()#
Initiate a connection on a given socket.
Possible signatures:
- socket.connect(options[, connectListener])
- socket.connect(path[, connectListener]) for IPC connections.
- socket.connect(port[, host][, connectListener]) for TCP connections.
- Returns: <net.Socket> The socket itself.
This function is asynchronous. When the connection is established, the 'connect' event will be emitted. If there is a problem connecting, instead of a 'connect' event, an 'error' event will be emitted with the error passed to the 'error' listener. The last parameter connectListener, if supplied, will be added as a listener for the 'connect' event once.
This function should only be used for reconnecting a socket after 'close' has been emitted or otherwise it may lead to undefined behavior.
socket.connect(options[, connectListener])#
History
| Version | Changes |
|---|---|
| v19.4.0 | The default value for autoSelectFamily option can be changed at runtime using |
| v20.0.0, v18.18.0 | The default value for the autoSelectFamily option is now true. The |
| v19.3.0, v18.13.0 | Added the |
| v17.7.0, v16.15.0 | The |
| v6.0.0 | The |
| v5.11.0 | The |
| v0.1.90 | Added in: v0.1.90 |
- options <Object>
- connectListener <Function> Common parameter of socket.connect() methods. Will be added as a listener for the 'connect' event once.
- Returns: <net.Socket> The socket itself.
Initiate a connection on a given socket. Normally this method is not needed; the socket should be created and opened with net.createConnection(). Use this only when implementing a custom Socket.
For TCP connections, available options are:
- autoSelectFamily <boolean>: If set to true, it enables a family autodetection algorithm that loosely implements section 5 of RFC 8305. The all option passed to lookup is set to true and the socket attempts to connect to all obtained IPv6 and IPv4 addresses, in sequence, until a connection is established. The first returned AAAA address is tried first, then the first returned A address, then the second returned AAAA address and so on. Each connection attempt (but the last one) is given the amount of time specified by the autoSelectFamilyAttemptTimeout option before timing out and trying the next address. Ignored if the family option is not 0 or if localAddress is set. Connection errors are not emitted if at least one connection succeeds. If all connection attempts fail, a single AggregateError with all failed attempts is emitted. Default: net.getDefaultAutoSelectFamily().
- autoSelectFamilyAttemptTimeout <number>: The amount of time in milliseconds to wait for a connection attempt to finish before trying the next address when using the autoSelectFamily option. If set to a positive integer less than 10, then the value 10 will be used instead. Default: net.getDefaultAutoSelectFamilyAttemptTimeout().
- family <number>: Version of IP stack. Must be 4, 6, or 0. The value 0 indicates that both IPv4 and IPv6 addresses are allowed. Default: 0.
- hints <number> Optional dns.lookup() hints.
- host <string> Host the socket should connect to. Default: 'localhost'.
- localAddress <string> Local address the socket should connect from.
- localPort <number> Local port the socket should connect from.
- lookup <Function> Custom lookup function. Default: dns.lookup().
- port <number> Required. Port the socket should connect to.
For IPC connections, available options are:
- path <string> Required. Path the client should connect to. See Identifying paths for IPC connections. If provided, the TCP-specific options above are ignored.
socket.connect(path[, connectListener])#
- path <string> Path the client should connect to. See Identifying paths for IPC connections.
- connectListener <Function> Common parameter of socket.connect() methods. Will be added as a listener for the 'connect' event once.
- Returns: <net.Socket> The socket itself.
Initiate an IPC connection on the given socket.
Alias to socket.connect(options[, connectListener]) called with { path: path } as options.
socket.connect(port[, host][, connectListener])#
- port <number> Port the client should connect to.
- host <string> Host the client should connect to.
- connectListener <Function> Common parameter of socket.connect() methods. Will be added as a listener for the 'connect' event once.
- Returns: <net.Socket> The socket itself.
Initiate a TCP connection on the given socket.
Alias to socket.connect(options[, connectListener]) called with { port: port, host: host } as options.
socket.connecting#
- Type: <boolean>
If true, socket.connect(options[, connectListener]) was called and has not yet finished. It will stay true until the socket becomes connected, then it is set to false and the 'connect' event is emitted. Note that the socket.connect(options[, connectListener]) callback is a listener for the 'connect' event.
socket.destroy([error])#
- error <Object>
- Returns: <net.Socket>
Ensures that no more I/O activity happens on this socket. Destroys the stream and closes the connection.
See writable.destroy() for further details.
socket.destroyed#
- Type: <boolean> Indicates if the connection is destroyed or not. Once a connection is destroyed no further data can be transferred using it.
See writable.destroyed for further details.
socket.destroySoon()#
Destroys the socket after all data is written. If the 'finish' event was already emitted the socket is destroyed immediately. If the socket is still writable it implicitly calls socket.end().
socket.end([data[, encoding]][, callback])#
- data <string> | <Buffer> | <Uint8Array>
- encoding <string> Only used when data is string. Default: 'utf8'.
- callback <Function> Optional callback for when the socket is finished.
- Returns: <net.Socket> The socket itself.
Half-closes the socket, i.e., it sends a FIN packet. It is possible the server will still send some data.
See writable.end() for further details.
socket.localAddress#
- Type: <string>
The string representation of the local IP address the remote client is connecting on. For example, in a server listening on '0.0.0.0', if a client connects on '192.168.1.1', the value of socket.localAddress would be '192.168.1.1'.
socket.localPort#
- Type: <integer>
The numeric representation of the local port. For example, 80 or 21.
socket.localFamily#
- Type: <string>
The string representation of the local IP family. 'IPv4' or 'IPv6'.
socket.pause()#
- Returns: <net.Socket> The socket itself.
Pauses the reading of data. That is, 'data' events will not be emitted. Useful to throttle back an upload.
socket.pending#
- Type: <boolean>
This is true if the socket is not connected yet, either because .connect() has not yet been called or because it is still in the process of connecting (see socket.connecting).
socket.ref()#
- Returns: <net.Socket> The socket itself.
Opposite of unref(); calling ref() on a previously unrefed socket will not let the program exit if it's the only socket left (the default behavior). If the socket is refed, calling ref() again will have no effect.
socket.remoteAddress#
- Type: <string>
The string representation of the remote IP address. For example, '74.125.127.100' or '2001:4860:a005::68'. Value may be undefined if the socket is destroyed (for example, if the client disconnected).
socket.remoteFamily#
- Type: <string>
The string representation of the remote IP family. 'IPv4' or 'IPv6'. Value may be undefined if the socket is destroyed (for example, if the client disconnected).
socket.remotePort#
- Type: <integer>
The numeric representation of the remote port. For example, 80 or 21. Value may be undefined if the socket is destroyed (for example, if the client disconnected).
socket.resetAndDestroy()#
- Returns: <net.Socket>
Close the TCP connection by sending an RST packet and destroy the stream. If this TCP socket is in connecting status, it will send an RST packet and destroy this TCP socket once it is connected. Otherwise, it will call socket.destroy with an ERR_SOCKET_CLOSED error. If this is not a TCP socket (for example, a pipe), calling this method will immediately throw an ERR_INVALID_HANDLE_TYPE error.
socket.resume()#
- Returns: <net.Socket> The socket itself.
Resumes reading after a call to socket.pause().
socket.setEncoding([encoding])#
- encoding <string>
- Returns: <net.Socket> The socket itself.
Set the encoding for the socket as a Readable Stream. See readable.setEncoding() for more information.
socket.setKeepAlive([enable][, initialDelay])#
History
| Version | Changes |
|---|---|
| v13.12.0, v12.17.0 | New defaults for |
| v0.1.92 | Added in: v0.1.92 |
- enable <boolean> Default: false
- initialDelay <number> Default: 0
- Returns: <net.Socket> The socket itself.
Enable/disable keep-alive functionality, and optionally set the initial delay before the first keepalive probe is sent on an idle socket.
Set initialDelay (in milliseconds) to set the delay between the last data packet received and the first keepalive probe. Setting 0 for initialDelay will leave the value unchanged from the default (or previous) setting.
Enabling the keep-alive functionality will set the following socket options:
- SO_KEEPALIVE=1
- TCP_KEEPIDLE=initialDelay
- TCP_KEEPCNT=10
- TCP_KEEPINTVL=1
socket.setNoDelay([noDelay])#
- noDelay <boolean> Default: true
- Returns: <net.Socket> The socket itself.
Enable/disable the use of Nagle's algorithm.
When a TCP connection is created, it will have Nagle's algorithm enabled.
Nagle's algorithm delays data before it is sent via the network. It attempts to optimize throughput at the expense of latency.
Passing true for noDelay or not passing an argument will disable Nagle's algorithm for the socket. Passing false for noDelay will enable Nagle's algorithm.
socket.setTimeout(timeout[, callback])#
History
| Version | Changes |
|---|---|
| v18.0.0 | Passing an invalid callback to the |
| v0.1.90 | Added in: v0.1.90 |
- timeout <number>
- callback <Function>
- Returns: <net.Socket> The socket itself.
Sets the socket to timeout after timeout milliseconds of inactivity on the socket. By default net.Socket instances do not have a timeout.
When an idle timeout is triggered the socket will receive a 'timeout' event but the connection will not be severed. The user must manually call socket.end() or socket.destroy() to end the connection.
```js
socket.setTimeout(3000);
socket.on('timeout', () => {
  console.log('socket timeout');
  socket.end();
});
```

If timeout is 0, then the existing idle timeout is disabled.
The optional callback parameter will be added as a one-time listener for the 'timeout' event.
socket.getTypeOfService()#
- Returns: <integer> The current TOS value.
Returns the current Type of Service (TOS) field for IPv4 packets or Traffic Class for IPv6 packets for this socket.
setTypeOfService() may be called before the socket is connected; the valuewill be cached and applied when the socket establishes a connection.getTypeOfService() will return the currently set value even before connection.
On some platforms (e.g., Linux), certain TOS/ECN bits may be masked or ignored,and behavior can differ between IPv4 and IPv6 or dual-stack sockets. Callersshould verify platform-specific semantics.
socket.setTypeOfService(tos)#
- tos <integer> The TOS value to set (0-255).
- Returns: <net.Socket> The socket itself.
Sets the Type of Service (TOS) field for IPv4 packets or Traffic Class for IPv6 packets sent from this socket. This can be used to prioritize network traffic.
setTypeOfService() may be called before the socket is connected; the valuewill be cached and applied when the socket establishes a connection.getTypeOfService() will return the currently set value even before connection.
On some platforms (e.g., Linux), certain TOS/ECN bits may be masked or ignored,and behavior can differ between IPv4 and IPv6 or dual-stack sockets. Callersshould verify platform-specific semantics.
socket.timeout#
- Type: <number> | <undefined>
The socket timeout in milliseconds as set by socket.setTimeout(). It is undefined if a timeout has not been set.
socket.unref()#
- Returns: <net.Socket> The socket itself.
Calling unref() on a socket will allow the program to exit if this is the only active socket in the event system. If the socket is already unrefed, calling unref() again will have no effect.
socket.write(data[, encoding][, callback])#
- data <string> | <Buffer> | <Uint8Array>
- encoding <string> Only used when data is string. Default: utf8.
- callback <Function>
- Returns: <boolean>
Sends data on the socket. The second parameter specifies the encoding in the case of a string. It defaults to UTF8 encoding.
Returns true if the entire data was flushed successfully to the kernel buffer. Returns false if all or part of the data was queued in user memory. 'drain' will be emitted when the buffer is again free.
The optional callback parameter will be executed when the data is finally written out, which may not be immediately.
See Writable stream write() method for more information.
socket.readyState#
- Type: <string>
This property represents the state of the connection as a string.
- If the stream is connecting, socket.readyState is opening.
- If the stream is readable and writable, it is open.
- If the stream is readable and not writable, it is readOnly.
- If the stream is not readable and writable, it is writeOnly.
net.connect()#
Aliases to net.createConnection().
Possible signatures:
- net.connect(options[, connectListener])
- net.connect(path[, connectListener]) for IPC connections.
- net.connect(port[, host][, connectListener]) for TCP connections.
net.connect(options[, connectListener])#
- options <Object>
- connectListener <Function>
- Returns: <net.Socket>
Alias to net.createConnection(options[, connectListener]).
net.connect(path[, connectListener])#
- path <string>
- connectListener <Function>
- Returns: <net.Socket>
Alias to net.createConnection(path[, connectListener]).
net.connect(port[, host][, connectListener])#
- port <number>
- host <string>
- connectListener <Function>
- Returns: <net.Socket>
Alias to net.createConnection(port[, host][, connectListener]).
net.createConnection()#
A factory function, which creates a new net.Socket, immediately initiates connection with socket.connect(), then returns the net.Socket that starts the connection.
When the connection is established, a 'connect' event will be emitted on the returned socket. The last parameter connectListener, if supplied, will be added as a listener for the 'connect' event once.
Possible signatures:
- net.createConnection(options[, connectListener])
- net.createConnection(path[, connectListener]) for IPC connections.
- net.createConnection(port[, host][, connectListener]) for TCP connections.
The net.connect() function is an alias to this function.
net.createConnection(options[, connectListener])#
- options <Object> Required. Will be passed to both the new net.Socket([options]) call and the socket.connect(options[, connectListener]) method.
- connectListener <Function> Common parameter of the net.createConnection() functions. If supplied, will be added as a listener for the 'connect' event on the returned socket once.
- Returns: <net.Socket> The newly created socket used to start the connection.
For available options, see new net.Socket([options]) and socket.connect(options[, connectListener]).
Additional options:
- timeout <number> If set, will be used to call socket.setTimeout(timeout) after the socket is created, but before it starts the connection.
Following is an example of a client of the echo server described in the net.createServer() section:
```js
import net from 'node:net';
const client = net.createConnection({ port: 8124 }, () => {
  // 'connect' listener.
  console.log('connected to server!');
  client.write('world!\r\n');
});
client.on('data', (data) => {
  console.log(data.toString());
  client.end();
});
client.on('end', () => {
  console.log('disconnected from server');
});
```

```js
const net = require('node:net');
const client = net.createConnection({ port: 8124 }, () => {
  // 'connect' listener.
  console.log('connected to server!');
  client.write('world!\r\n');
});
client.on('data', (data) => {
  console.log(data.toString());
  client.end();
});
client.on('end', () => {
  console.log('disconnected from server');
});
```
To connect on the socket /tmp/echo.sock:

```js
const client = net.createConnection({ path: '/tmp/echo.sock' });
```

Following is an example of a client using the port and onread option. In this case, the onread option will be only used to call new net.Socket([options]) and the port option will be used to call socket.connect(options[, connectListener]):
```js
import net from 'node:net';
import { Buffer } from 'node:buffer';

net.createConnection({
  port: 8124,
  onread: {
    // Reuses a 4KiB Buffer for every read from the socket.
    buffer: Buffer.alloc(4 * 1024),
    callback: function(nread, buf) {
      // Received data is available in `buf` from 0 to `nread`.
      console.log(buf.toString('utf8', 0, nread));
    },
  },
});
```

```js
const net = require('node:net');

net.createConnection({
  port: 8124,
  onread: {
    // Reuses a 4KiB Buffer for every read from the socket.
    buffer: Buffer.alloc(4 * 1024),
    callback: function(nread, buf) {
      // Received data is available in `buf` from 0 to `nread`.
      console.log(buf.toString('utf8', 0, nread));
    },
  },
});
```
net.createConnection(path[, connectListener])#
path<string> Path the socket should connect to. Will be passed tosocket.connect(path[, connectListener]).SeeIdentifying paths for IPC connections.connectListener<Function> Common parameter of thenet.createConnection()functions, an "once" listener for the'connect'event on the initiating socket. Will be passed tosocket.connect(path[, connectListener]).- Returns:<net.Socket> The newly created socket used to start the connection.
Initiates an IPC connection.

This function creates a new `net.Socket` with all options set to default, immediately initiates connection with `socket.connect(path[, connectListener])`, then returns the `net.Socket` that starts the connection.
net.createConnection(port[, host][, connectListener])#
- `port` <number> Port the socket should connect to. Will be passed to `socket.connect(port[, host][, connectListener])`.
- `host` <string> Host the socket should connect to. Will be passed to `socket.connect(port[, host][, connectListener])`. Default: `'localhost'`.
- `connectListener` <Function> Common parameter of the `net.createConnection()` functions, a "once" listener for the `'connect'` event on the initiating socket. Will be passed to `socket.connect(port[, host][, connectListener])`.
- Returns: <net.Socket> The newly created socket used to start the connection.
Initiates a TCP connection.
This function creates a new `net.Socket` with all options set to default, immediately initiates connection with `socket.connect(port[, host][, connectListener])`, then returns the `net.Socket` that starts the connection.
net.createServer([options][, connectionListener])#
History
| Version | Changes |
|---|---|
| v20.1.0, v18.17.0 | The |
| v17.7.0, v16.15.0 | The |
| v0.5.0 | Added in: v0.5.0 |
- `options` <Object>
  - `allowHalfOpen` <boolean> If set to `false`, then the socket will automatically end the writable side when the readable side ends. Default: `false`.
  - `highWaterMark` <number> Optionally overrides all `net.Socket`s' `readableHighWaterMark` and `writableHighWaterMark`. Default: See `stream.getDefaultHighWaterMark()`.
  - `keepAlive` <boolean> If set to `true`, it enables keep-alive functionality on the socket immediately after a new incoming connection is received, similar to what is done in `socket.setKeepAlive()`. Default: `false`.
  - `keepAliveInitialDelay` <number> If set to a positive number, it sets the initial delay before the first keepalive probe is sent on an idle socket. Default: `0`.
  - `noDelay` <boolean> If set to `true`, it disables the use of Nagle's algorithm immediately after a new incoming connection is received. Default: `false`.
  - `pauseOnConnect` <boolean> Indicates whether the socket should be paused on incoming connections. Default: `false`.
  - `blockList` <net.BlockList> `blockList` can be used for disabling inbound access to specific IP addresses, IP ranges, or IP subnets. This does not work if the server is behind a reverse proxy, NAT, etc. because the address checked against the block list is the address of the proxy, or the one specified by the NAT.
- `connectionListener` <Function> Automatically set as a listener for the `'connection'` event.
- Returns: <net.Server>
Creates a new TCP or IPC server.
If `allowHalfOpen` is set to `true`, when the other end of the socket signals the end of transmission, the server will only send back the end of transmission when `socket.end()` is explicitly called. For example, in the context of TCP, when a FIN packet is received, a FIN packet is sent back only when `socket.end()` is explicitly called. Until then the connection is half-closed (non-readable but still writable). See the `'end'` event and RFC 1122 (section 4.2.2.13) for more information.
If `pauseOnConnect` is set to `true`, then the socket associated with each incoming connection will be paused, and no data will be read from its handle. This allows connections to be passed between processes without any data being read by the original process. To begin reading data from a paused socket, call `socket.resume()`.
The server can be a TCP server or an IPC server, depending on what it listens to.
Here is an example of a TCP echo server which listens for connectionson port 8124:
```mjs
import net from 'node:net';
const server = net.createServer((c) => {
  // 'connection' listener.
  console.log('client connected');
  c.on('end', () => {
    console.log('client disconnected');
  });
  c.write('hello\r\n');
  c.pipe(c);
});
server.on('error', (err) => {
  throw err;
});
server.listen(8124, () => {
  console.log('server bound');
});
```

```cjs
const net = require('node:net');
const server = net.createServer((c) => {
  // 'connection' listener.
  console.log('client connected');
  c.on('end', () => {
    console.log('client disconnected');
  });
  c.write('hello\r\n');
  c.pipe(c);
});
server.on('error', (err) => {
  throw err;
});
server.listen(8124, () => {
  console.log('server bound');
});
```
Test this by using `telnet`:
```console
telnet localhost 8124
```

To listen on the socket `/tmp/echo.sock`:
```js
server.listen('/tmp/echo.sock', () => {
  console.log('server bound');
});
```

Use `nc` to connect to a Unix domain socket server:
```console
nc -U /tmp/echo.sock
```

net.getDefaultAutoSelectFamily()#
Gets the current default value of the `autoSelectFamily` option of `socket.connect(options)`. The initial default value is `true`, unless the command line option `--no-network-family-autoselection` is provided.
- Returns: <boolean> The current default value of the `autoSelectFamily` option.
net.setDefaultAutoSelectFamily(value)#
Sets the default value of the `autoSelectFamily` option of `socket.connect(options)`.

- `value` <boolean> The new default value. The initial default value is `true`, unless the command line option `--no-network-family-autoselection` is provided.
net.getDefaultAutoSelectFamilyAttemptTimeout()#
Gets the current default value of the `autoSelectFamilyAttemptTimeout` option of `socket.connect(options)`. The initial default value is `500` or the value specified via the command line option `--network-family-autoselection-attempt-timeout`.
- Returns: <number> The current default value of the `autoSelectFamilyAttemptTimeout` option.
net.setDefaultAutoSelectFamilyAttemptTimeout(value)#
Sets the default value of the `autoSelectFamilyAttemptTimeout` option of `socket.connect(options)`.

- `value` <number> The new default value, which must be a positive number. If the number is less than `10`, the value `10` is used instead. The initial default value is `250` or the value specified via the command line option `--network-family-autoselection-attempt-timeout`.
net.isIP(input)#
Returns `6` if `input` is an IPv6 address. Returns `4` if `input` is an IPv4 address in dot-decimal notation with no leading zeroes. Otherwise, returns `0`.
```js
net.isIP('::1'); // returns 6
net.isIP('127.0.0.1'); // returns 4
net.isIP('127.000.000.001'); // returns 0
net.isIP('127.0.0.1/24'); // returns 0
net.isIP('fhqwhgads'); // returns 0
```

net.isIPv4(input)#
Returns `true` if `input` is an IPv4 address in dot-decimal notation with no leading zeroes. Otherwise, returns `false`.
```js
net.isIPv4('127.0.0.1'); // returns true
net.isIPv4('127.000.000.001'); // returns false
net.isIPv4('127.0.0.1/24'); // returns false
net.isIPv4('fhqwhgads'); // returns false
```

net.isIPv6(input)#
Returns `true` if `input` is an IPv6 address. Otherwise, returns `false`.
```js
net.isIPv6('::1'); // returns true
net.isIPv6('fhqwhgads'); // returns false
```

OS#
Source Code: lib/os.js

The `node:os` module provides operating system-related utility methods and properties. It can be accessed using:
```mjs
import os from 'node:os';
```

```cjs
const os = require('node:os');
```
os.EOL#
- Type:<string>
The operating system-specific end-of-line marker.
- `\n` on POSIX
- `\r\n` on Windows
os.availableParallelism()#
- Returns:<integer>
Returns an estimate of the default amount of parallelism a program should use. Always returns a value greater than zero.

This function is a small wrapper around libuv's `uv_available_parallelism()`.
os.arch()#
- Returns:<string>
Returns the operating system CPU architecture for which the Node.js binary was compiled. Possible values are `'arm'`, `'arm64'`, `'ia32'`, `'loong64'`, `'mips'`, `'mipsel'`, `'ppc64'`, `'riscv64'`, `'s390x'`, and `'x64'`.

The return value is equivalent to `process.arch`.
os.constants#
- Type:<Object>
Contains commonly used operating system-specific constants for error codes, process signals, and so on. The specific constants defined are described in OS constants.
os.cpus()#
- Returns:<Object[]>
Returns an array of objects containing information about each logical CPU core. The array will be empty if no CPU information is available, such as if the `/proc` file system is unavailable.
The properties included on each object include:
- `model` <string>
- `speed` <number> (in MHz)
- `times` <Object>
  - `user` <number> The number of milliseconds the CPU has spent in user mode.
  - `nice` <number> The number of milliseconds the CPU has spent in nice mode.
  - `sys` <number> The number of milliseconds the CPU has spent in sys mode.
  - `idle` <number> The number of milliseconds the CPU has spent in idle mode.
  - `irq` <number> The number of milliseconds the CPU has spent in irq mode.
```js
[
  {
    model: 'Intel(R) Core(TM) i7 CPU 860 @ 2.80GHz',
    speed: 2926,
    times: { user: 252020, nice: 0, sys: 30340, idle: 1070356870, irq: 0 },
  },
  {
    model: 'Intel(R) Core(TM) i7 CPU 860 @ 2.80GHz',
    speed: 2926,
    times: { user: 306960, nice: 0, sys: 26980, idle: 1071569080, irq: 0 },
  },
  {
    model: 'Intel(R) Core(TM) i7 CPU 860 @ 2.80GHz',
    speed: 2926,
    times: { user: 248450, nice: 0, sys: 21750, idle: 1070919370, irq: 0 },
  },
  {
    model: 'Intel(R) Core(TM) i7 CPU 860 @ 2.80GHz',
    speed: 2926,
    times: { user: 256880, nice: 0, sys: 19430, idle: 1070905480, irq: 20 },
  },
]
```

`nice` values are POSIX-only. On Windows, the `nice` values of all processors are always 0.
`os.cpus().length` should not be used to calculate the amount of parallelism available to an application. Use `os.availableParallelism()` for this purpose.
os.devNull#
- Type:<string>
The platform-specific file path of the null device.
- `\\.\nul` on Windows
- `/dev/null` on POSIX
os.endianness()#
- Returns:<string>
Returns a string identifying the endianness of the CPU for which the Node.js binary was compiled.
Possible values are'BE' for big endian and'LE' for little endian.
os.freemem()#
- Returns:<integer>
Returns the amount of free system memory in bytes as an integer.
os.getPriority([pid])#
Returns the scheduling priority for the process specified by `pid`. If `pid` is not provided or is `0`, the priority of the current process is returned.
os.homedir()#
- Returns:<string>
Returns the string path of the current user's home directory.
On POSIX, it uses the `$HOME` environment variable if defined. Otherwise it uses the effective UID to look up the user's home directory.

On Windows, it uses the `USERPROFILE` environment variable if defined. Otherwise it uses the path to the profile directory of the current user.
os.hostname()#
- Returns:<string>
Returns the host name of the operating system as a string.
os.loadavg()#
- Returns:<number[]>
Returns an array containing the 1, 5, and 15 minute load averages.
The load average is a measure of system activity calculated by the operatingsystem and expressed as a fractional number.
The load average is a Unix-specific concept. On Windows, the return value isalways[0, 0, 0].
os.machine()#
- Returns:<string>
Returns the machine type as a string, such as `arm`, `arm64`, `aarch64`, `mips`, `mips64`, `ppc64`, `ppc64le`, `s390x`, `i386`, `i686`, `x86_64`.

On POSIX systems, the machine type is determined by calling `uname(3)`. On Windows, `RtlGetVersion()` is used, and if it is not available, `GetVersionExW()` will be used. See https://en.wikipedia.org/wiki/Uname#Examples for more information.
os.networkInterfaces()#
History
| Version | Changes |
|---|---|
| v18.4.0 | The |
| v18.0.0 | The |
| v0.6.0 | Added in: v0.6.0 |
- Returns:<Object>
Returns an object containing network interfaces that have been assigned a network address.

Each key on the returned object identifies a network interface. The associated value is an array of objects that each describe an assigned network address.
The properties available on the assigned network address object include:
- `address` <string> The assigned IPv4 or IPv6 address
- `netmask` <string> The IPv4 or IPv6 network mask
- `family` <string> Either `IPv4` or `IPv6`
- `mac` <string> The MAC address of the network interface
- `internal` <boolean> `true` if the network interface is a loopback or similar interface that is not remotely accessible; otherwise `false`
- `scopeid` <number> The numeric IPv6 scope ID (only specified when `family` is `IPv6`)
- `cidr` <string> The assigned IPv4 or IPv6 address with the routing prefix in CIDR notation. If the `netmask` is invalid, this property is set to `null`.
```js
{
  lo: [
    {
      address: '127.0.0.1',
      netmask: '255.0.0.0',
      family: 'IPv4',
      mac: '00:00:00:00:00:00',
      internal: true,
      cidr: '127.0.0.1/8'
    },
    {
      address: '::1',
      netmask: 'ffff:ffff:ffff:ffff:ffff:ffff:ffff:ffff',
      family: 'IPv6',
      mac: '00:00:00:00:00:00',
      scopeid: 0,
      internal: true,
      cidr: '::1/128'
    }
  ],
  eth0: [
    {
      address: '192.168.1.108',
      netmask: '255.255.255.0',
      family: 'IPv4',
      mac: '01:02:03:0a:0b:0c',
      internal: false,
      cidr: '192.168.1.108/24'
    },
    {
      address: 'fe80::a00:27ff:fe4e:66a1',
      netmask: 'ffff:ffff:ffff:ffff::',
      family: 'IPv6',
      mac: '01:02:03:0a:0b:0c',
      scopeid: 1,
      internal: false,
      cidr: 'fe80::a00:27ff:fe4e:66a1/64'
    }
  ]
}
```

os.platform()#
- Returns:<string>
Returns a string identifying the operating system platform for which the Node.js binary was compiled. The value is set at compile time. Possible values are `'aix'`, `'darwin'`, `'freebsd'`, `'linux'`, `'openbsd'`, `'sunos'`, and `'win32'`.

The return value is equivalent to `process.platform`.

The value `'android'` may also be returned if Node.js is built on the Android operating system. Android support is experimental.
os.release()#
- Returns:<string>
Returns the operating system as a string.
On POSIX systems, the operating system release is determined by calling `uname(3)`. On Windows, `GetVersionExW()` is used. See https://en.wikipedia.org/wiki/Uname#Examples for more information.
os.setPriority([pid, ]priority)#
- `pid` <integer> The process ID to set scheduling priority for. Default: `0`.
- `priority` <integer> The scheduling priority to assign to the process.

Attempts to set the scheduling priority for the process specified by `pid`. If `pid` is not provided or is `0`, the process ID of the current process is used.

The `priority` input must be an integer between `-20` (high priority) and `19` (low priority). Due to differences between Unix priority levels and Windows priority classes, `priority` is mapped to one of six priority constants in `os.constants.priority`. When retrieving a process priority level, this range mapping may cause the return value to be slightly different on Windows. To avoid confusion, set `priority` to one of the priority constants.

On Windows, setting priority to `PRIORITY_HIGHEST` requires elevated user privileges. Otherwise the set priority will be silently reduced to `PRIORITY_HIGH`.
os.tmpdir()#
History
| Version | Changes |
|---|---|
| v2.0.0 | This function is now cross-platform consistent and no longer returns a path with a trailing slash on any platform. |
| v0.9.9 | Added in: v0.9.9 |
- Returns:<string>
Returns the operating system's default directory for temporary files as a string.

On Windows, the result can be overridden by the `TEMP` and `TMP` environment variables, and `TEMP` takes precedence over `TMP`. If neither is set, it defaults to `%SystemRoot%\temp` or `%windir%\temp`.

On non-Windows platforms, the `TMPDIR`, `TMP` and `TEMP` environment variables will be checked to override the result of this method, in the described order. If none of them is set, it defaults to `/tmp`.

Some operating system distributions configure `TMPDIR` (non-Windows) or `TEMP` and `TMP` (Windows) by default, without additional configuration by the system administrators. The result of `os.tmpdir()` typically reflects the system preference unless it is explicitly overridden by the users.
os.totalmem()#
- Returns:<integer>
Returns the total amount of system memory in bytes as an integer.
os.type()#
- Returns:<string>
Returns the operating system name as returned by `uname(3)`. For example, it returns `'Linux'` on Linux, `'Darwin'` on macOS, and `'Windows_NT'` on Windows.

See https://en.wikipedia.org/wiki/Uname#Examples for additional information about the output of running `uname(3)` on various operating systems.
os.uptime()#
History
| Version | Changes |
|---|---|
| v10.0.0 | The result of this function no longer contains a fraction component on Windows. |
| v0.3.3 | Added in: v0.3.3 |
- Returns:<integer>
Returns the system uptime in number of seconds.
os.userInfo([options])#
- `options` <Object>
  - `encoding` <string> Character encoding used to interpret resulting strings. If `encoding` is set to `'buffer'`, the `username`, `shell`, and `homedir` values will be `Buffer` instances. Default: `'utf8'`.
- Returns:<Object>
Returns information about the currently effective user. On POSIX platforms, this is typically a subset of the password file. The returned object includes the `username`, `uid`, `gid`, `shell`, and `homedir`. On Windows, the `uid` and `gid` fields are `-1`, and `shell` is `null`.

The value of `homedir` returned by `os.userInfo()` is provided by the operating system. This differs from the result of `os.homedir()`, which queries environment variables for the home directory before falling back to the operating system response.

Throws a `SystemError` if a user has no `username` or `homedir`.
os.version()#
- Returns:<string>
Returns a string identifying the kernel version.
On POSIX systems, the operating system release is determined by calling `uname(3)`. On Windows, `RtlGetVersion()` is used, and if it is not available, `GetVersionExW()` will be used. See https://en.wikipedia.org/wiki/Uname#Examples for more information.
OS constants#
The following constants are exported by `os.constants`.
Not all constants will be available on every operating system.
Signal constants#
History
| Version | Changes |
|---|---|
| v5.11.0 | Added support for |
The following signal constants are exported by `os.constants.signals`.
| Constant | Description |
|---|---|
SIGHUP | Sent to indicate when a controlling terminal is closed or a parent process exits. |
SIGINT | Sent to indicate when a user wishes to interrupt a process (Ctrl+C). |
SIGQUIT | Sent to indicate when a user wishes to terminate a process and perform a core dump. |
SIGILL | Sent to a process to notify that it has attempted to perform an illegal, malformed, unknown, or privileged instruction. |
SIGTRAP | Sent to a process when an exception has occurred. |
SIGABRT | Sent to a process to request that it abort. |
SIGIOT | Synonym for SIGABRT |
SIGBUS | Sent to a process to notify that it has caused a bus error. |
SIGFPE | Sent to a process to notify that it has performed an illegal arithmetic operation. |
SIGKILL | Sent to a process to terminate it immediately. |
SIGUSR1, SIGUSR2 | Sent to a process to identify user-defined conditions. |
SIGSEGV | Sent to a process to notify of a segmentation fault. |
SIGPIPE | Sent to a process when it has attempted to write to a disconnected pipe. |
SIGALRM | Sent to a process when a system timer elapses. |
SIGTERM | Sent to a process to request termination. |
SIGCHLD | Sent to a process when a child process terminates. |
SIGSTKFLT | Sent to a process to indicate a stack fault on a coprocessor. |
SIGCONT | Sent to instruct the operating system to continue a paused process. |
SIGSTOP | Sent to instruct the operating system to halt a process. |
SIGTSTP | Sent to a process to request it to stop. |
SIGBREAK | Sent to indicate when a user wishes to interrupt a process. |
SIGTTIN | Sent to a process when it reads from the TTY while in the background. |
SIGTTOU | Sent to a process when it writes to the TTY while in the background. |
SIGURG | Sent to a process when a socket has urgent data to read. |
SIGXCPU | Sent to a process when it has exceeded its limit on CPU usage. |
SIGXFSZ | Sent to a process when it grows a file larger than the maximum allowed. |
SIGVTALRM | Sent to a process when a virtual timer has elapsed. |
SIGPROF | Sent to a process when a system timer has elapsed. |
SIGWINCH | Sent to a process when the controlling terminal has changed its size. |
SIGIO | Sent to a process when I/O is available. |
SIGPOLL | Synonym for SIGIO |
SIGLOST | Sent to a process when a file lock has been lost. |
SIGPWR | Sent to a process to notify of a power failure. |
SIGINFO | Synonym for SIGPWR |
SIGSYS | Sent to a process to notify of a bad argument. |
SIGUNUSED | Synonym for SIGSYS |
Error constants#
The following error constants are exported byos.constants.errno.
POSIX error constants#
| Constant | Description |
|---|---|
E2BIG | Indicates that the list of arguments is longer than expected. |
EACCES | Indicates that the operation did not have sufficient permissions. |
EADDRINUSE | Indicates that the network address is already in use. |
EADDRNOTAVAIL | Indicates that the network address is currently unavailable for use. |
EAFNOSUPPORT | Indicates that the network address family is not supported. |
EAGAIN | Indicates that there is no data available and to try the operation again later. |
EALREADY | Indicates that the socket already has a pending connection in progress. |
EBADF | Indicates that a file descriptor is not valid. |
EBADMSG | Indicates an invalid data message. |
EBUSY | Indicates that a device or resource is busy. |
ECANCELED | Indicates that an operation was canceled. |
ECHILD | Indicates that there are no child processes. |
ECONNABORTED | Indicates that the network connection has been aborted. |
ECONNREFUSED | Indicates that the network connection has been refused. |
ECONNRESET | Indicates that the network connection has been reset. |
EDEADLK | Indicates that a resource deadlock has been avoided. |
EDESTADDRREQ | Indicates that a destination address is required. |
EDOM | Indicates that an argument is out of the domain of the function. |
EDQUOT | Indicates that the disk quota has been exceeded. |
EEXIST | Indicates that the file already exists. |
EFAULT | Indicates an invalid pointer address. |
EFBIG | Indicates that the file is too large. |
EHOSTUNREACH | Indicates that the host is unreachable. |
EIDRM | Indicates that the identifier has been removed. |
EILSEQ | Indicates an illegal byte sequence. |
EINPROGRESS | Indicates that an operation is already in progress. |
EINTR | Indicates that a function call was interrupted. |
EINVAL | Indicates that an invalid argument was provided. |
EIO | Indicates an otherwise unspecified I/O error. |
EISCONN | Indicates that the socket is connected. |
EISDIR | Indicates that the path is a directory. |
ELOOP | Indicates too many levels of symbolic links in a path. |
EMFILE | Indicates that there are too many open files. |
EMLINK | Indicates that there are too many hard links to a file. |
EMSGSIZE | Indicates that the provided message is too long. |
EMULTIHOP | Indicates that a multihop was attempted. |
ENAMETOOLONG | Indicates that the filename is too long. |
ENETDOWN | Indicates that the network is down. |
ENETRESET | Indicates that the connection has been aborted by the network. |
ENETUNREACH | Indicates that the network is unreachable. |
ENFILE | Indicates too many open files in the system. |
ENOBUFS | Indicates that no buffer space is available. |
ENODATA | Indicates that no message is available on the stream head read queue. |
ENODEV | Indicates that there is no such device. |
ENOENT | Indicates that there is no such file or directory. |
ENOEXEC | Indicates an exec format error. |
ENOLCK | Indicates that there are no locks available. |
ENOLINK | Indicates that a link has been severed. |
ENOMEM | Indicates that there is not enough space. |
ENOMSG | Indicates that there is no message of the desired type. |
ENOPROTOOPT | Indicates that a given protocol is not available. |
ENOSPC | Indicates that there is no space available on the device. |
ENOSR | Indicates that there are no stream resources available. |
ENOSTR | Indicates that a given resource is not a stream. |
ENOSYS | Indicates that a function has not been implemented. |
ENOTCONN | Indicates that the socket is not connected. |
ENOTDIR | Indicates that the path is not a directory. |
ENOTEMPTY | Indicates that the directory is not empty. |
ENOTSOCK | Indicates that the given item is not a socket. |
ENOTSUP | Indicates that a given operation is not supported. |
ENOTTY | Indicates an inappropriate I/O control operation. |
ENXIO | Indicates no such device or address. |
EOPNOTSUPP | Indicates that an operation is not supported on the socket. (Although ENOTSUP and EOPNOTSUPP have the same value on Linux, according to POSIX.1 these error values should be distinct.) |
EOVERFLOW | Indicates that a value is too large to be stored in a given data type. |
EPERM | Indicates that the operation is not permitted. |
EPIPE | Indicates a broken pipe. |
EPROTO | Indicates a protocol error. |
EPROTONOSUPPORT | Indicates that a protocol is not supported. |
EPROTOTYPE | Indicates the wrong type of protocol for a socket. |
ERANGE | Indicates that the results are too large. |
EROFS | Indicates that the file system is read only. |
ESPIPE | Indicates an invalid seek operation. |
ESRCH | Indicates that there is no such process. |
ESTALE | Indicates that the file handle is stale. |
ETIME | Indicates an expired timer. |
ETIMEDOUT | Indicates that the connection timed out. |
ETXTBSY | Indicates that a text file is busy. |
EWOULDBLOCK | Indicates that the operation would block. |
EXDEV | Indicates an improper link. |
Windows-specific error constants#
The following error codes are specific to the Windows operating system.
| Constant | Description |
|---|---|
WSAEINTR | Indicates an interrupted function call. |
WSAEBADF | Indicates an invalid file handle. |
WSAEACCES | Indicates insufficient permissions to complete the operation. |
WSAEFAULT | Indicates an invalid pointer address. |
WSAEINVAL | Indicates that an invalid argument was passed. |
WSAEMFILE | Indicates that there are too many open files. |
WSAEWOULDBLOCK | Indicates that a resource is temporarily unavailable. |
WSAEINPROGRESS | Indicates that an operation is currently in progress. |
WSAEALREADY | Indicates that an operation is already in progress. |
WSAENOTSOCK | Indicates that the resource is not a socket. |
WSAEDESTADDRREQ | Indicates that a destination address is required. |
WSAEMSGSIZE | Indicates that the message size is too long. |
WSAEPROTOTYPE | Indicates the wrong protocol type for the socket. |
WSAENOPROTOOPT | Indicates a bad protocol option. |
WSAEPROTONOSUPPORT | Indicates that the protocol is not supported. |
WSAESOCKTNOSUPPORT | Indicates that the socket type is not supported. |
WSAEOPNOTSUPP | Indicates that the operation is not supported. |
WSAEPFNOSUPPORT | Indicates that the protocol family is not supported. |
WSAEAFNOSUPPORT | Indicates that the address family is not supported. |
WSAEADDRINUSE | Indicates that the network address is already in use. |
WSAEADDRNOTAVAIL | Indicates that the network address is not available. |
WSAENETDOWN | Indicates that the network is down. |
WSAENETUNREACH | Indicates that the network is unreachable. |
WSAENETRESET | Indicates that the network connection has been reset. |
WSAECONNABORTED | Indicates that the connection has been aborted. |
WSAECONNRESET | Indicates that the connection has been reset by the peer. |
WSAENOBUFS | Indicates that there is no buffer space available. |
WSAEISCONN | Indicates that the socket is already connected. |
WSAENOTCONN | Indicates that the socket is not connected. |
WSAESHUTDOWN | Indicates that data cannot be sent after the socket has been shut down. |
WSAETOOMANYREFS | Indicates that there are too many references. |
WSAETIMEDOUT | Indicates that the connection has timed out. |
WSAECONNREFUSED | Indicates that the connection has been refused. |
WSAELOOP | Indicates that a name cannot be translated. |
WSAENAMETOOLONG | Indicates that a name was too long. |
WSAEHOSTDOWN | Indicates that a network host is down. |
WSAEHOSTUNREACH | Indicates that there is no route to a network host. |
WSAENOTEMPTY | Indicates that the directory is not empty. |
WSAEPROCLIM | Indicates that there are too many processes. |
WSAEUSERS | Indicates that the user quota has been exceeded. |
WSAEDQUOT | Indicates that the disk quota has been exceeded. |
WSAESTALE | Indicates a stale file handle reference. |
WSAEREMOTE | Indicates that the item is remote. |
WSASYSNOTREADY | Indicates that the network subsystem is not ready. |
WSAVERNOTSUPPORTED | Indicates that the winsock.dll version is out of range. |
WSANOTINITIALISED | Indicates that successful WSAStartup has not yet been performed. |
WSAEDISCON | Indicates that a graceful shutdown is in progress. |
WSAENOMORE | Indicates that there are no more results. |
WSAECANCELLED | Indicates that an operation has been canceled. |
WSAEINVALIDPROCTABLE | Indicates that the procedure call table is invalid. |
WSAEINVALIDPROVIDER | Indicates an invalid service provider. |
WSAEPROVIDERFAILEDINIT | Indicates that the service provider failed to initialize. |
WSASYSCALLFAILURE | Indicates a system call failure. |
WSASERVICE_NOT_FOUND | Indicates that a service was not found. |
WSATYPE_NOT_FOUND | Indicates that a class type was not found. |
WSA_E_NO_MORE | Indicates that there are no more results. |
WSA_E_CANCELLED | Indicates that the call was canceled. |
WSAEREFUSED | Indicates that a database query was refused. |
dlopen constants#
If available on the operating system, the following constants are exported in `os.constants.dlopen`. See `dlopen(3)` for detailed information.
| Constant | Description |
|---|---|
RTLD_LAZY | Perform lazy binding. Node.js sets this flag by default. |
RTLD_NOW | Resolve all undefined symbols in the library before dlopen(3) returns. |
RTLD_GLOBAL | Symbols defined by the library will be made available for symbol resolution of subsequently loaded libraries. |
RTLD_LOCAL | The converse of RTLD_GLOBAL. This is the default behavior if neither flag is specified. |
RTLD_DEEPBIND | Make a self-contained library use its own symbols in preference to symbols from previously loaded libraries. |
Priority constants#
The following process scheduling constants are exported by `os.constants.priority`.
| Constant | Description |
|---|---|
PRIORITY_LOW | The lowest process scheduling priority. This corresponds to IDLE_PRIORITY_CLASS on Windows, and a nice value of 19 on all other platforms. |
PRIORITY_BELOW_NORMAL | The process scheduling priority above PRIORITY_LOW and below PRIORITY_NORMAL. This corresponds to BELOW_NORMAL_PRIORITY_CLASS on Windows, and a nice value of 10 on all other platforms. |
PRIORITY_NORMAL | The default process scheduling priority. This corresponds to NORMAL_PRIORITY_CLASS on Windows, and a nice value of 0 on all other platforms. |
PRIORITY_ABOVE_NORMAL | The process scheduling priority above PRIORITY_NORMAL and below PRIORITY_HIGH. This corresponds to ABOVE_NORMAL_PRIORITY_CLASS on Windows, and a nice value of -7 on all other platforms. |
PRIORITY_HIGH | The process scheduling priority above PRIORITY_ABOVE_NORMAL and below PRIORITY_HIGHEST. This corresponds to HIGH_PRIORITY_CLASS on Windows, and a nice value of -14 on all other platforms. |
PRIORITY_HIGHEST | The highest process scheduling priority. This corresponds to REALTIME_PRIORITY_CLASS on Windows, and a nice value of -20 on all other platforms. |
libuv constants#
| Constant | Description |
|---|---|
UV_UDP_REUSEADDR |
Path#
Source Code: lib/path.js

The `node:path` module provides utilities for working with file and directory paths. It can be accessed using:
```cjs
const path = require('node:path');
```

```mjs
import path from 'node:path';
```
Windows vs. POSIX#
The default operation of the `node:path` module varies based on the operating system on which a Node.js application is running. Specifically, when running on a Windows operating system, the `node:path` module will assume that Windows-style paths are being used.

So using `path.basename()` might yield different results on POSIX and Windows:
On POSIX:
```js
path.basename('C:\\temp\\myfile.html');
// Returns: 'C:\\temp\\myfile.html'
```

On Windows:
```js
path.basename('C:\\temp\\myfile.html');
// Returns: 'myfile.html'
```

To achieve consistent results when working with Windows file paths on any operating system, use `path.win32`:
On POSIX and Windows:
```js
path.win32.basename('C:\\temp\\myfile.html');
// Returns: 'myfile.html'
```

To achieve consistent results when working with POSIX file paths on any operating system, use `path.posix`:
On POSIX and Windows:
```js
path.posix.basename('/tmp/myfile.html');
// Returns: 'myfile.html'
```

On Windows Node.js follows the concept of per-drive working directory. This behavior can be observed when using a drive path without a backslash. For example, `path.resolve('C:\\')` can potentially return a different result than `path.resolve('C:')`. For more information, see this MSDN page.
path.basename(path[, suffix])#
History
| Version | Changes |
|---|---|
| v6.0.0 | Passing a non-string as the |
| v0.1.25 | Added in: v0.1.25 |
The `path.basename()` method returns the last portion of a `path`, similar to the Unix `basename` command. Trailing directory separators are ignored.
```js
path.basename('/foo/bar/baz/asdf/quux.html');
// Returns: 'quux.html'

path.basename('/foo/bar/baz/asdf/quux.html', '.html');
// Returns: 'quux'
```

Although Windows usually treats file names, including file extensions, in a case-insensitive manner, this function does not. For example, `C:\\foo.html` and `C:\\foo.HTML` refer to the same file, but `basename` treats the extension as a case-sensitive string:
path.win32.basename('C:\\foo.html','.html');// Returns: 'foo'path.win32.basename('C:\\foo.HTML','.html');// Returns: 'foo.HTML'ATypeError is thrown ifpath is not a string or ifsuffix is givenand is not a string.
path.delimiter#
- Type:<string>
Provides the platform-specific path delimiter:
- `;` for Windows
- `:` for POSIX
For example, on POSIX:
```js
console.log(process.env.PATH);
// Prints: '/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin'

process.env.PATH.split(path.delimiter);
// Returns: ['/usr/bin', '/bin', '/usr/sbin', '/sbin', '/usr/local/bin']
```
On Windows:
```js
console.log(process.env.PATH);
// Prints: 'C:\Windows\system32;C:\Windows;C:\Program Files\node\'

process.env.PATH.split(path.delimiter);
// Returns ['C:\\Windows\\system32', 'C:\\Windows', 'C:\\Program Files\\node\\']
```
path.dirname(path)#
History
| Version | Changes |
|---|---|
| v6.0.0 | Passing a non-string as the |
| v0.1.16 | Added in: v0.1.16 |
The `path.dirname()` method returns the directory name of a `path`, similar to the Unix `dirname` command. Trailing directory separators are ignored, see `path.sep`.
```js
path.dirname('/foo/bar/baz/asdf/quux');
// Returns: '/foo/bar/baz/asdf'
```
A `TypeError` is thrown if `path` is not a string.
path.extname(path)#
History
| Version | Changes |
|---|---|
| v6.0.0 | Passing a non-string as the |
| v0.1.25 | Added in: v0.1.25 |
The `path.extname()` method returns the extension of the `path`, from the last occurrence of the `.` (period) character to the end of the string in the last portion of the `path`. If there is no `.` in the last portion of the `path`, or if there are no `.` characters other than the first character of the basename of `path` (see `path.basename()`), an empty string is returned.
```js
path.extname('index.html');
// Returns: '.html'

path.extname('index.coffee.md');
// Returns: '.md'

path.extname('index.');
// Returns: '.'

path.extname('index');
// Returns: ''

path.extname('.index');
// Returns: ''

path.extname('.index.md');
// Returns: '.md'
```
A `TypeError` is thrown if `path` is not a string.
path.format(pathObject)#
History
| Version | Changes |
|---|---|
| v19.0.0 | The dot will be added if it is not specified in `ext`. |
| v0.11.15 | Added in: v0.11.15 |
The `path.format()` method returns a path string from an object. This is the opposite of `path.parse()`.
When providing properties to the `pathObject` remember that there are combinations where one property has priority over another:
- `pathObject.root` is ignored if `pathObject.dir` is provided
- `pathObject.ext` and `pathObject.name` are ignored if `pathObject.base` exists
For example, on POSIX:
```js
// If `dir`, `root` and `base` are provided,
// `${dir}${path.sep}${base}`
// will be returned. `root` is ignored.
path.format({
  root: '/ignored',
  dir: '/home/user/dir',
  base: 'file.txt',
});
// Returns: '/home/user/dir/file.txt'

// `root` will be used if `dir` is not specified.
// If only `root` is provided or `dir` is equal to `root` then the
// platform separator will not be included. `ext` will be ignored.
path.format({
  root: '/',
  base: 'file.txt',
  ext: 'ignored',
});
// Returns: '/file.txt'

// `name` + `ext` will be used if `base` is not specified.
path.format({
  root: '/',
  name: 'file',
  ext: '.txt',
});
// Returns: '/file.txt'

// The dot will be added if it is not specified in `ext`.
path.format({
  root: '/',
  name: 'file',
  ext: 'txt',
});
// Returns: '/file.txt'
```
On Windows:
```js
path.format({
  dir: 'C:\\path\\dir',
  base: 'file.txt',
});
// Returns: 'C:\\path\\dir\\file.txt'
```
path.matchesGlob(path, pattern)#
History
| Version | Changes |
|---|---|
| v24.8.0, v22.20.0 | Marking the API stable. |
| v22.5.0, v20.17.0 | Added in: v22.5.0, v20.17.0 |
- `path` <string> The path to glob-match against.
- `pattern` <string> The glob to check the path against.
- Returns: <boolean> Whether or not the `path` matched the `pattern`.
The `path.matchesGlob()` method determines if `path` matches the `pattern`.
For example:
```js
path.matchesGlob('/foo/bar', '/foo/*'); // true
path.matchesGlob('/foo/bar*', 'foo/bird'); // false
```
A `TypeError` is thrown if `path` or `pattern` are not strings.
path.isAbsolute(path)#
The `path.isAbsolute()` method determines if the literal `path` is absolute. Therefore, it is not safe for mitigating path traversals.
If the given `path` is a zero-length string, `false` will be returned.
For example, on POSIX:
```js
path.isAbsolute('/foo/bar');   // true
path.isAbsolute('/baz/..');    // true
path.isAbsolute('/baz/../..'); // true
path.isAbsolute('qux/');       // false
path.isAbsolute('.');          // false
```
On Windows:
```js
path.isAbsolute('//server');    // true
path.isAbsolute('\\\\server');  // true
path.isAbsolute('C:/foo/..');   // true
path.isAbsolute('C:\\foo\\..'); // true
path.isAbsolute('bar\\baz');    // false
path.isAbsolute('bar/baz');     // false
path.isAbsolute('.');           // false
```
A `TypeError` is thrown if `path` is not a string.
path.join([...paths])#
The `path.join()` method joins all given `path` segments together using the platform-specific separator as a delimiter, then normalizes the resulting path.
Zero-length `path` segments are ignored. If the joined path string is a zero-length string then `'.'` will be returned, representing the current working directory.
```js
path.join('/foo', 'bar', 'baz/asdf', 'quux', '..');
// Returns: '/foo/bar/baz/asdf'

path.join('foo', {}, 'bar');
// Throws 'TypeError: Path must be a string. Received {}'
```
A `TypeError` is thrown if any of the path segments is not a string.
path.normalize(path)#
The `path.normalize()` method normalizes the given `path`, resolving `'..'` and `'.'` segments.
When multiple, sequential path segment separation characters are found (e.g. `/` on POSIX and either `\` or `/` on Windows), they are replaced by a single instance of the platform-specific path segment separator (`/` on POSIX and `\` on Windows). Trailing separators are preserved.
If the `path` is a zero-length string, `'.'` is returned, representing the current working directory.
On POSIX, the types of normalization applied by this function do not strictly adhere to the POSIX specification. For example, this function will replace two leading forward slashes with a single slash as if it was a regular absolute path, whereas a few POSIX systems assign special meaning to paths beginning with exactly two forward slashes. Similarly, other substitutions performed by this function, such as removing `..` segments, may change how the underlying system resolves the path.
For example, on POSIX:
```js
path.normalize('/foo/bar//baz/asdf/quux/..');
// Returns: '/foo/bar/baz/asdf'
```
On Windows:
```js
path.normalize('C:\\temp\\\\foo\\bar\\..\\');
// Returns: 'C:\\temp\\foo\\'
```
Since Windows recognizes multiple path separators, both separators will be replaced by instances of the Windows preferred separator (`\`):
```js
path.win32.normalize('C:////temp\\\\/\\/\\/foo/bar');
// Returns: 'C:\\temp\\foo\\bar'
```
A `TypeError` is thrown if `path` is not a string.
path.parse(path)#
The `path.parse()` method returns an object whose properties represent significant elements of the `path`. Trailing directory separators are ignored, see `path.sep`.
The returned object will have the following properties:
For example, on POSIX:
```js
path.parse('/home/user/dir/file.txt');
// Returns:
// { root: '/',
//   dir: '/home/user/dir',
//   base: 'file.txt',
//   ext: '.txt',
//   name: 'file' }
```
```
┌─────────────────────┬────────────┐
│          dir        │    base    │
├──────┬              ├──────┬─────┤
│ root │              │ name │ ext │
" /    home/user/dir / file  .txt "
└──────┴──────────────┴──────┴─────┘
```
(All spaces in the "" line should be ignored. They are purely for formatting.)
On Windows:
```js
path.parse('C:\\path\\dir\\file.txt');
// Returns:
// { root: 'C:\\',
//   dir: 'C:\\path\\dir',
//   base: 'file.txt',
//   ext: '.txt',
//   name: 'file' }
```
```
┌─────────────────────┬────────────┐
│          dir        │    base    │
├──────┬              ├──────┬─────┤
│ root │              │ name │ ext │
" C:\      path\dir   \ file  .txt "
└──────┴──────────────┴──────┴─────┘
```
(All spaces in the "" line should be ignored. They are purely for formatting.)
A `TypeError` is thrown if `path` is not a string.
path.posix#
History
| Version | Changes |
|---|---|
| v15.3.0 | Exposed as |
| v0.11.15 | Added in: v0.11.15 |
- Type:<Object>
The `path.posix` property provides access to POSIX specific implementations of the `path` methods.
The API is accessible via `require('node:path').posix` or `require('node:path/posix')`.
path.relative(from, to)#
History
| Version | Changes |
|---|---|
| v6.8.0 | On Windows, the leading slashes for UNC paths are now included in the return value. |
| v0.5.0 | Added in: v0.5.0 |
The `path.relative()` method returns the relative path from `from` to `to` based on the current working directory. If `from` and `to` each resolve to the same path (after calling `path.resolve()` on each), a zero-length string is returned.
If a zero-length string is passed as `from` or `to`, the current working directory will be used instead of the zero-length strings.
For example, on POSIX:
```js
path.relative('/data/orandea/test/aaa', '/data/orandea/impl/bbb');
// Returns: '../../impl/bbb'
```
On Windows:
```js
path.relative('C:\\orandea\\test\\aaa', 'C:\\orandea\\impl\\bbb');
// Returns: '..\\..\\impl\\bbb'
```
A `TypeError` is thrown if either `from` or `to` is not a string.
path.resolve([...paths])#
The `path.resolve()` method resolves a sequence of paths or path segments into an absolute path.
The given sequence of paths is processed from right to left, with each subsequent `path` prepended until an absolute path is constructed. For instance, given the sequence of path segments: `/foo`, `/bar`, `baz`, calling `path.resolve('/foo', '/bar', 'baz')` would return `/bar/baz` because `'baz'` is not an absolute path but `'/bar' + '/' + 'baz'` is.
If, after processing all given `path` segments, an absolute path has not yet been generated, the current working directory is used.
The resulting path is normalized and trailing slashes are removed unless the path is resolved to the root directory.
Zero-length `path` segments are ignored.
If no `path` segments are passed, `path.resolve()` will return the absolute path of the current working directory.
```js
path.resolve('/foo/bar', './baz');
// Returns: '/foo/bar/baz'

path.resolve('/foo/bar', '/tmp/file/');
// Returns: '/tmp/file'

path.resolve('wwwroot', 'static_files/png/', '../gif/image.gif');
// If the current working directory is /home/myself/node,
// this returns '/home/myself/node/wwwroot/static_files/gif/image.gif'
```
A `TypeError` is thrown if any of the arguments is not a string.
path.sep#
- Type:<string>
Provides the platform-specific path segment separator:
- `\` on Windows
- `/` on POSIX
For example, on POSIX:
```js
'foo/bar/baz'.split(path.sep);
// Returns: ['foo', 'bar', 'baz']
```
On Windows:
```js
'foo\\bar\\baz'.split(path.sep);
// Returns: ['foo', 'bar', 'baz']
```
On Windows, both the forward slash (`/`) and backward slash (`\`) are accepted as path segment separators; however, the `path` methods only add backward slashes (`\`).
path.toNamespacedPath(path)#
On Windows systems only, returns an equivalent namespace-prefixed path for the given `path`. If `path` is not a string, `path` will be returned without modifications.
This method is meaningful only on Windows systems. On POSIX systems, the method is non-operational and always returns `path` without modifications.
path.win32#
History
| Version | Changes |
|---|---|
| v15.3.0 | Exposed as |
| v0.11.15 | Added in: v0.11.15 |
- Type:<Object>
The `path.win32` property provides access to Windows-specific implementations of the `path` methods.
The API is accessible via `require('node:path').win32` or `require('node:path/win32')`.
Performance measurement APIs#
Source Code:lib/perf_hooks.js
This module provides an implementation of a subset of the W3C Web Performance APIs as well as additional APIs for Node.js-specific performance measurements.
Node.js supports the following Web Performance APIs:
```js
import { performance, PerformanceObserver } from 'node:perf_hooks';

const obs = new PerformanceObserver((items) => {
  console.log(items.getEntries()[0].duration);
  performance.clearMarks();
});
obs.observe({ type: 'measure' });
performance.measure('Start to Now');

performance.mark('A');
doSomeLongRunningProcess(() => {
  performance.measure('A to Now', 'A');

  performance.mark('B');
  performance.measure('A to B', 'A', 'B');
});
```

```js
const { PerformanceObserver, performance } = require('node:perf_hooks');

const obs = new PerformanceObserver((items) => {
  console.log(items.getEntries()[0].duration);
});
obs.observe({ type: 'measure' });
performance.measure('Start to Now');

performance.mark('A');
(async function doSomeLongRunningProcess() {
  await new Promise((r) => setTimeout(r, 5000));
  performance.measure('A to Now', 'A');

  performance.mark('B');
  performance.measure('A to B', 'A', 'B');
})();
```
perf_hooks.performance#
An object that can be used to collect performance metrics from the current Node.js instance. It is similar to `window.performance` in browsers.
performance.clearMarks([name])#
History
| Version | Changes |
|---|---|
| v19.0.0 | This method must be called with the |
| v8.5.0 | Added in: v8.5.0 |
name<string>
If `name` is not provided, removes all `PerformanceMark` objects from the Performance Timeline. If `name` is provided, removes only the named mark.
performance.clearMeasures([name])#
History
| Version | Changes |
|---|---|
| v19.0.0 | This method must be called with the |
| v16.7.0 | Added in: v16.7.0 |
name<string>
If `name` is not provided, removes all `PerformanceMeasure` objects from the Performance Timeline. If `name` is provided, removes only the named measure.
performance.clearResourceTimings([name])#
History
| Version | Changes |
|---|---|
| v19.0.0 | This method must be called with the |
| v18.2.0, v16.17.0 | Added in: v18.2.0, v16.17.0 |
name<string>
If `name` is not provided, removes all `PerformanceResourceTiming` objects from the Resource Timeline. If `name` is provided, removes only the named resource.
performance.eventLoopUtilization([utilization1[, utilization2]])#
History
| Version | Changes |
|---|---|
| v25.2.0 | Added |
| v14.10.0, v12.19.0 | Added in: v14.10.0, v12.19.0 |
- `utilization1` <Object> The result of a previous call to `eventLoopUtilization()`.
- `utilization2` <Object> The result of a previous call to `eventLoopUtilization()` prior to `utilization1`.
- Returns: <Object>
This is an alias of `perf_hooks.eventLoopUtilization()`.
This property is an extension by Node.js. It is not available in Web browsers.
performance.getEntries()#
History
| Version | Changes |
|---|---|
| v19.0.0 | This method must be called with the |
| v16.7.0 | Added in: v16.7.0 |
- Returns:<PerformanceEntry[]>
Returns a list of `PerformanceEntry` objects in chronological order with respect to `performanceEntry.startTime`. If you are only interested in performance entries of certain types or that have certain names, see `performance.getEntriesByType()` and `performance.getEntriesByName()`.
performance.getEntriesByName(name[, type])#
History
| Version | Changes |
|---|---|
| v19.0.0 | This method must be called with the |
| v16.7.0 | Added in: v16.7.0 |
- `name` <string>
- `type` <string>
- Returns: <PerformanceEntry[]>
Returns a list of `PerformanceEntry` objects in chronological order with respect to `performanceEntry.startTime` whose `performanceEntry.name` is equal to `name`, and optionally, whose `performanceEntry.entryType` is equal to `type`.
performance.getEntriesByType(type)#
History
| Version | Changes |
|---|---|
| v19.0.0 | This method must be called with the |
| v16.7.0 | Added in: v16.7.0 |
- `type` <string>
- Returns: <PerformanceEntry[]>
Returns a list of `PerformanceEntry` objects in chronological order with respect to `performanceEntry.startTime` whose `performanceEntry.entryType` is equal to `type`.
performance.mark(name[, options])#
History
| Version | Changes |
|---|---|
| v19.0.0 | This method must be called with the |
| v16.0.0 | Updated to conform to the User Timing Level 3 specification. |
| v8.5.0 | Added in: v8.5.0 |
Creates a new `PerformanceMark` entry in the Performance Timeline. A `PerformanceMark` is a subclass of `PerformanceEntry` whose `performanceEntry.entryType` is always `'mark'`, and whose `performanceEntry.duration` is always `0`. Performance marks are used to mark specific significant moments in the Performance Timeline.
The created `PerformanceMark` entry is put in the global Performance Timeline and can be queried with `performance.getEntries`, `performance.getEntriesByName`, and `performance.getEntriesByType`. When the observation is performed, the entries should be cleared from the global Performance Timeline manually with `performance.clearMarks`.
performance.markResourceTiming(timingInfo, requestedUrl, initiatorType, global, cacheMode, bodyInfo, responseStatus[, deliveryType])#
History
| Version | Changes |
|---|---|
| v22.2.0 | Added bodyInfo, responseStatus, and deliveryType arguments. |
| v18.2.0, v16.17.0 | Added in: v18.2.0, v16.17.0 |
- `timingInfo` <Object> Fetch Timing Info
- `requestedUrl` <string> The resource url
- `initiatorType` <string> The initiator name, e.g: 'fetch'
- `global` <Object>
- `cacheMode` <string> The cache mode must be an empty string ('') or 'local'
- `bodyInfo` <Object> Fetch Response Body Info
- `responseStatus` <number> The response's status code
- `deliveryType` <string> The delivery type. Default: `''`.
This property is an extension by Node.js. It is not available in Web browsers.
Creates a new `PerformanceResourceTiming` entry in the Resource Timeline. A `PerformanceResourceTiming` is a subclass of `PerformanceEntry` whose `performanceEntry.entryType` is always `'resource'`. Performance resources are used to mark moments in the Resource Timeline.
The created `PerformanceResourceTiming` entry is put in the global Resource Timeline and can be queried with `performance.getEntries`, `performance.getEntriesByName`, and `performance.getEntriesByType`. When the observation is performed, the entries should be cleared from the global Performance Timeline manually with `performance.clearResourceTimings`.
performance.measure(name[, startMarkOrOptions[, endMark]])#
History
| Version | Changes |
|---|---|
| v19.0.0 | This method must be called with the |
| v16.0.0 | Updated to conform to the User Timing Level 3 specification. |
| v13.13.0, v12.16.3 | Make |
| v8.5.0 | Added in: v8.5.0 |
- `name` <string>
- `startMarkOrOptions` <string> | <Object> Optional.
  - `detail` <any> Additional optional detail to include with the measure.
  - `duration` <number> Duration between start and end times.
  - `end` <number> | <string> Timestamp to be used as the end time, or a string identifying a previously recorded mark.
  - `start` <number> | <string> Timestamp to be used as the start time, or a string identifying a previously recorded mark.
- `endMark` <string> Optional. Must be omitted if `startMarkOrOptions` is an <Object>.
Creates a new `PerformanceMeasure` entry in the Performance Timeline. A `PerformanceMeasure` is a subclass of `PerformanceEntry` whose `performanceEntry.entryType` is always `'measure'`, and whose `performanceEntry.duration` measures the number of milliseconds elapsed between `startMark` and `endMark`.
The `startMark` argument may identify any existing `PerformanceMark` in the Performance Timeline, or may identify any of the timestamp properties provided by the `PerformanceNodeTiming` class. If the named `startMark` does not exist, an error is thrown.
The optional `endMark` argument must identify any existing `PerformanceMark` in the Performance Timeline or any of the timestamp properties provided by the `PerformanceNodeTiming` class. `endMark` will be `performance.now()` if no parameter is passed, otherwise if the named `endMark` does not exist, an error will be thrown.
The created `PerformanceMeasure` entry is put in the global Performance Timeline and can be queried with `performance.getEntries`, `performance.getEntriesByName`, and `performance.getEntriesByType`. When the observation is performed, the entries should be cleared from the global Performance Timeline manually with `performance.clearMeasures`.
performance.nodeTiming#
This property is an extension by Node.js. It is not available in Web browsers.
An instance of the `PerformanceNodeTiming` class that provides performance metrics for specific Node.js operational milestones.
performance.now()#
History
| Version | Changes |
|---|---|
| v19.0.0 | This method must be called with the |
| v8.5.0 | Added in: v8.5.0 |
- Returns:<number>
Returns the current high resolution millisecond timestamp, where 0 represents the start of the current `node` process.
performance.setResourceTimingBufferSize(maxSize)#
History
| Version | Changes |
|---|---|
| v19.0.0 | This method must be called with the |
| v18.8.0 | Added in: v18.8.0 |
Sets the global performance resource timing buffer size to the specified numberof "resource" type performance entry objects.
By default the max buffer size is set to 250.
performance.timeOrigin#
- Type:<number>
The `timeOrigin` specifies the high resolution millisecond timestamp at which the current `node` process began, measured in Unix time.
performance.timerify(fn[, options])#
History
| Version | Changes |
|---|---|
| v25.2.0 | Added |
| v16.0.0 | Added the histogram option. |
| v16.0.0 | Re-implemented to use pure-JavaScript and the ability to time async functions. |
| v8.5.0 | Added in: v8.5.0 |
- `fn` <Function>
- `options` <Object>
  - `histogram` <RecordableHistogram> A histogram object created using `perf_hooks.createHistogram()` that will record runtime durations in nanoseconds.
This is an alias of `perf_hooks.timerify()`.
This property is an extension by Node.js. It is not available in Web browsers.
performance.toJSON()#
History
| Version | Changes |
|---|---|
| v19.0.0 | This method must be called with the |
| v16.1.0 | Added in: v16.1.0 |
An object which is a JSON representation of the `performance` object. It is similar to `window.performance.toJSON` in browsers.
Event:'resourcetimingbufferfull'#
The `'resourcetimingbufferfull'` event is fired when the global performance resource timing buffer is full. Adjust the resource timing buffer size with `performance.setResourceTimingBufferSize()` or clear the buffer with `performance.clearResourceTimings()` in the event listener to allow more entries to be added to the performance timeline buffer.
Class:PerformanceEntry#
The constructor of this class is not exposed to users directly.
performanceEntry.duration#
History
| Version | Changes |
|---|---|
| v19.0.0 | This property getter must be called with the |
| v8.5.0 | Added in: v8.5.0 |
- Type:<number>
The total number of milliseconds elapsed for this entry. This value will not be meaningful for all Performance Entry types.
performanceEntry.entryType#
History
| Version | Changes |
|---|---|
| v19.0.0 | This property getter must be called with the |
| v8.5.0 | Added in: v8.5.0 |
- Type:<string>
The type of the performance entry. It may be one of:
- `'dns'` (Node.js only)
- `'function'` (Node.js only)
- `'gc'` (Node.js only)
- `'http2'` (Node.js only)
- `'http'` (Node.js only)
- `'mark'` (available on the Web)
- `'measure'` (available on the Web)
- `'net'` (Node.js only)
- `'node'` (Node.js only)
- `'resource'` (available on the Web)
Class:PerformanceMark#
- Extends:<PerformanceEntry>
Exposes marks created via the `Performance.mark()` method.
Class:PerformanceMeasure#
- Extends:<PerformanceEntry>
Exposes measures created via the `Performance.measure()` method.
The constructor of this class is not exposed to users directly.
Class:PerformanceNodeEntry#
- Extends:<PerformanceEntry>
This class is an extension by Node.js. It is not available in Web browsers.
Provides detailed Node.js timing data.
The constructor of this class is not exposed to users directly.
performanceNodeEntry.detail#
History
| Version | Changes |
|---|---|
| v19.0.0 | This property getter must be called with the |
| v16.0.0 | Added in: v16.0.0 |
- Type:<any>
Additional detail specific to the `entryType`.
performanceNodeEntry.flags#
History
| Version | Changes |
|---|---|
| v16.0.0 | Runtime deprecated. Now moved to the detail property when entryType is 'gc'. |
| v13.9.0, v12.17.0 | Added in: v13.9.0, v12.17.0 |
Deprecated: Use `performanceNodeEntry.detail` instead.
- Type: <number>
When `performanceEntry.entryType` is equal to `'gc'`, the `performance.flags` property contains additional information about the garbage collection operation. The value may be one of:
- `perf_hooks.constants.NODE_PERFORMANCE_GC_FLAGS_NO`
- `perf_hooks.constants.NODE_PERFORMANCE_GC_FLAGS_CONSTRUCT_RETAINED`
- `perf_hooks.constants.NODE_PERFORMANCE_GC_FLAGS_FORCED`
- `perf_hooks.constants.NODE_PERFORMANCE_GC_FLAGS_SYNCHRONOUS_PHANTOM_PROCESSING`
- `perf_hooks.constants.NODE_PERFORMANCE_GC_FLAGS_ALL_AVAILABLE_GARBAGE`
- `perf_hooks.constants.NODE_PERFORMANCE_GC_FLAGS_ALL_EXTERNAL_MEMORY`
- `perf_hooks.constants.NODE_PERFORMANCE_GC_FLAGS_SCHEDULE_IDLE`
performanceNodeEntry.kind#
History
| Version | Changes |
|---|---|
| v16.0.0 | Runtime deprecated. Now moved to the detail property when entryType is 'gc'. |
| v8.5.0 | Added in: v8.5.0 |
Deprecated: Use `performanceNodeEntry.detail` instead.
- Type: <number>
When `performanceEntry.entryType` is equal to `'gc'`, the `performance.kind` property identifies the type of garbage collection operation that occurred. The value may be one of:
- `perf_hooks.constants.NODE_PERFORMANCE_GC_MAJOR`
- `perf_hooks.constants.NODE_PERFORMANCE_GC_MINOR`
- `perf_hooks.constants.NODE_PERFORMANCE_GC_INCREMENTAL`
- `perf_hooks.constants.NODE_PERFORMANCE_GC_WEAKCB`
Garbage Collection ('gc') Details#
When `performanceEntry.type` is equal to `'gc'`, the `performanceNodeEntry.detail` property will be an <Object> with two properties:
- `kind` <number> One of:
  - `perf_hooks.constants.NODE_PERFORMANCE_GC_MAJOR`
  - `perf_hooks.constants.NODE_PERFORMANCE_GC_MINOR`
  - `perf_hooks.constants.NODE_PERFORMANCE_GC_INCREMENTAL`
  - `perf_hooks.constants.NODE_PERFORMANCE_GC_WEAKCB`
- `flags` <number> One of:
  - `perf_hooks.constants.NODE_PERFORMANCE_GC_FLAGS_NO`
  - `perf_hooks.constants.NODE_PERFORMANCE_GC_FLAGS_CONSTRUCT_RETAINED`
  - `perf_hooks.constants.NODE_PERFORMANCE_GC_FLAGS_FORCED`
  - `perf_hooks.constants.NODE_PERFORMANCE_GC_FLAGS_SYNCHRONOUS_PHANTOM_PROCESSING`
  - `perf_hooks.constants.NODE_PERFORMANCE_GC_FLAGS_ALL_AVAILABLE_GARBAGE`
  - `perf_hooks.constants.NODE_PERFORMANCE_GC_FLAGS_ALL_EXTERNAL_MEMORY`
  - `perf_hooks.constants.NODE_PERFORMANCE_GC_FLAGS_SCHEDULE_IDLE`
HTTP ('http') Details#
When `performanceEntry.type` is equal to `'http'`, the `performanceNodeEntry.detail` property will be an <Object> containing additional information.
If `performanceEntry.name` is equal to `HttpClient`, the `detail` will contain the following properties: `req`, `res`. The `req` property will be an <Object> containing `method`, `url`, `headers`; the `res` property will be an <Object> containing `statusCode`, `statusMessage`, `headers`.
If `performanceEntry.name` is equal to `HttpRequest`, the `detail` will contain the following properties: `req`, `res`. The `req` property will be an <Object> containing `method`, `url`, `headers`; the `res` property will be an <Object> containing `statusCode`, `statusMessage`, `headers`.
This could add additional memory overhead and should only be used for diagnostic purposes, not left turned on in production by default.
HTTP/2 ('http2') Details#
When `performanceEntry.type` is equal to `'http2'`, the `performanceNodeEntry.detail` property will be an <Object> containing additional performance information.
If `performanceEntry.name` is equal to `Http2Stream`, the `detail` will contain the following properties:
- `bytesRead` <number> The number of `DATA` frame bytes received for this `Http2Stream`.
- `bytesWritten` <number> The number of `DATA` frame bytes sent for this `Http2Stream`.
- `id` <number> The identifier of the associated `Http2Stream`
- `timeToFirstByte` <number> The number of milliseconds elapsed between the `PerformanceEntry` `startTime` and the reception of the first `DATA` frame.
- `timeToFirstByteSent` <number> The number of milliseconds elapsed between the `PerformanceEntry` `startTime` and sending of the first `DATA` frame.
- `timeToFirstHeader` <number> The number of milliseconds elapsed between the `PerformanceEntry` `startTime` and the reception of the first header.
If `performanceEntry.name` is equal to `Http2Session`, the `detail` will contain the following properties:
- `bytesRead` <number> The number of bytes received for this `Http2Session`.
- `bytesWritten` <number> The number of bytes sent for this `Http2Session`.
- `framesReceived` <number> The number of HTTP/2 frames received by the `Http2Session`.
- `framesSent` <number> The number of HTTP/2 frames sent by the `Http2Session`.
- `maxConcurrentStreams` <number> The maximum number of streams concurrently open during the lifetime of the `Http2Session`.
- `pingRTT` <number> The number of milliseconds elapsed since the transmission of a `PING` frame and the reception of its acknowledgment. Only present if a `PING` frame has been sent on the `Http2Session`.
- `streamAverageDuration` <number> The average duration (in milliseconds) for all `Http2Stream` instances.
- `streamCount` <number> The number of `Http2Stream` instances processed by the `Http2Session`.
- `type` <string> Either `'server'` or `'client'` to identify the type of `Http2Session`.
Timerify ('function') Details#
When `performanceEntry.type` is equal to `'function'`, the `performanceNodeEntry.detail` property will be an <Array> listing the input arguments to the timed function.
Net ('net') Details#
When `performanceEntry.type` is equal to `'net'`, the `performanceNodeEntry.detail` property will be an <Object> containing additional information.
If `performanceEntry.name` is equal to `connect`, the `detail` will contain the following properties: `host`, `port`.
DNS ('dns') Details#
When `performanceEntry.type` is equal to `'dns'`, the `performanceNodeEntry.detail` property will be an <Object> containing additional information.
If `performanceEntry.name` is equal to `lookup`, the `detail` will contain the following properties: `hostname`, `family`, `hints`, `verbatim`, `addresses`.
If `performanceEntry.name` is equal to `lookupService`, the `detail` will contain the following properties: `host`, `port`, `hostname`, `service`.
If `performanceEntry.name` is equal to `queryxxx` or `getHostByAddr`, the `detail` will contain the following properties: `host`, `ttl`, `result`. The value of `result` is the same as the result of `queryxxx` or `getHostByAddr`.
Class:PerformanceNodeTiming#
- Extends:<PerformanceEntry>
This property is an extension by Node.js. It is not available in Web browsers.
Provides timing details for Node.js itself. The constructor of this classis not exposed to users.
performanceNodeTiming.bootstrapComplete#
- Type:<number>
The high resolution millisecond timestamp at which the Node.js process completed bootstrapping. If bootstrapping has not yet finished, the property has the value of -1.
performanceNodeTiming.environment#
- Type:<number>
The high resolution millisecond timestamp at which the Node.js environment was initialized.
performanceNodeTiming.idleTime#
- Type:<number>
The high resolution millisecond timestamp of the amount of time the event loop has been idle within the event loop's event provider (e.g. `epoll_wait`). This does not take CPU usage into consideration. If the event loop has not yet started (e.g., in the first tick of the main script), the property has the value of 0.
performanceNodeTiming.loopExit#
- Type:<number>
The high resolution millisecond timestamp at which the Node.js event loop exited. If the event loop has not yet exited, the property has the value of -1. It can only have a value of not -1 in a handler of the `'exit'` event.
performanceNodeTiming.loopStart#
- Type:<number>
The high resolution millisecond timestamp at which the Node.js event loop started. If the event loop has not yet started (e.g., in the first tick of the main script), the property has the value of -1.
performanceNodeTiming.nodeStart#
- Type:<number>
The high resolution millisecond timestamp at which the Node.js process was initialized.
performanceNodeTiming.uvMetricsInfo#
- Returns:<Object>
This is a wrapper to the `uv_metrics_info` function. It returns the current set of event loop metrics.
It is recommended to use this property inside a function whose execution was scheduled using `setImmediate` to avoid collecting metrics before finishing all operations scheduled during the current loop iteration.
```js
const { performance } = require('node:perf_hooks');

setImmediate(() => {
  console.log(performance.nodeTiming.uvMetricsInfo);
});
```

```js
import { performance } from 'node:perf_hooks';

setImmediate(() => {
  console.log(performance.nodeTiming.uvMetricsInfo);
});
```
Class:PerformanceResourceTiming#
- Extends:<PerformanceEntry>
Provides detailed network timing data regarding the loading of an application's resources.
The constructor of this class is not exposed to users directly.
performanceResourceTiming.workerStart#
History
| Version | Changes |
|---|---|
| v19.0.0 | This property getter must be called with the |
| v18.2.0, v16.17.0 | Added in: v18.2.0, v16.17.0 |
- Type:<number>
The high resolution millisecond timestamp immediately before dispatching the `fetch` request. If the resource is not intercepted by a worker the property will always return 0.
performanceResourceTiming.redirectStart#
History
| Version | Changes |
|---|---|
| v19.0.0 | This property getter must be called with the |
| v18.2.0, v16.17.0 | Added in: v18.2.0, v16.17.0 |
- Type:<number>
The high resolution millisecond timestamp that represents the start time of the fetch which initiates the redirect.
performanceResourceTiming.redirectEnd#
History
| Version | Changes |
|---|---|
| v19.0.0 | This property getter must be called with the |
| v18.2.0, v16.17.0 | Added in: v18.2.0, v16.17.0 |
- Type:<number>
The high resolution millisecond timestamp that will be created immediately after receiving the last byte of the response of the last redirect.
performanceResourceTiming.fetchStart#
History
| Version | Changes |
|---|---|
| v19.0.0 | This property getter must be called with the `PerformanceResourceTiming` object as the receiver. |
| v18.2.0, v16.17.0 | Added in: v18.2.0, v16.17.0 |
- Type:<number>
The high resolution millisecond timestamp immediately before Node.js starts to fetch the resource.
performanceResourceTiming.domainLookupStart#
History
| Version | Changes |
|---|---|
| v19.0.0 | This property getter must be called with the `PerformanceResourceTiming` object as the receiver. |
| v18.2.0, v16.17.0 | Added in: v18.2.0, v16.17.0 |
- Type:<number>
The high resolution millisecond timestamp immediately before Node.js starts the domain name lookup for the resource.
performanceResourceTiming.domainLookupEnd#
History
| Version | Changes |
|---|---|
| v19.0.0 | This property getter must be called with the `PerformanceResourceTiming` object as the receiver. |
| v18.2.0, v16.17.0 | Added in: v18.2.0, v16.17.0 |
- Type:<number>
The high resolution millisecond timestamp representing the time immediately after Node.js finished the domain name lookup for the resource.
performanceResourceTiming.connectStart#
History
| Version | Changes |
|---|---|
| v19.0.0 | This property getter must be called with the `PerformanceResourceTiming` object as the receiver. |
| v18.2.0, v16.17.0 | Added in: v18.2.0, v16.17.0 |
- Type:<number>
The high resolution millisecond timestamp representing the time immediately before Node.js starts to establish the connection to the server to retrieve the resource.
performanceResourceTiming.connectEnd#
History
| Version | Changes |
|---|---|
| v19.0.0 | This property getter must be called with the `PerformanceResourceTiming` object as the receiver. |
| v18.2.0, v16.17.0 | Added in: v18.2.0, v16.17.0 |
- Type:<number>
The high resolution millisecond timestamp representing the time immediately after Node.js finishes establishing the connection to the server to retrieve the resource.
performanceResourceTiming.secureConnectionStart#
History
| Version | Changes |
|---|---|
| v19.0.0 | This property getter must be called with the `PerformanceResourceTiming` object as the receiver. |
| v18.2.0, v16.17.0 | Added in: v18.2.0, v16.17.0 |
- Type:<number>
The high resolution millisecond timestamp representing the time immediately before Node.js starts the handshake process to secure the current connection.
performanceResourceTiming.requestStart#
History
| Version | Changes |
|---|---|
| v19.0.0 | This property getter must be called with the `PerformanceResourceTiming` object as the receiver. |
| v18.2.0, v16.17.0 | Added in: v18.2.0, v16.17.0 |
- Type:<number>
The high resolution millisecond timestamp representing the time immediately before Node.js receives the first byte of the response from the server.
performanceResourceTiming.responseEnd#
History
| Version | Changes |
|---|---|
| v19.0.0 | This property getter must be called with the `PerformanceResourceTiming` object as the receiver. |
| v18.2.0, v16.17.0 | Added in: v18.2.0, v16.17.0 |
- Type:<number>
The high resolution millisecond timestamp representing the time immediately after Node.js receives the last byte of the resource or immediately before the transport connection is closed, whichever comes first.
performanceResourceTiming.transferSize#
History
| Version | Changes |
|---|---|
| v19.0.0 | This property getter must be called with the `PerformanceResourceTiming` object as the receiver. |
| v18.2.0, v16.17.0 | Added in: v18.2.0, v16.17.0 |
- Type:<number>
A number representing the size (in octets) of the fetched resource. The size includes the response header fields plus the response payload body.
performanceResourceTiming.encodedBodySize#
History
| Version | Changes |
|---|---|
| v19.0.0 | This property getter must be called with the `PerformanceResourceTiming` object as the receiver. |
| v18.2.0, v16.17.0 | Added in: v18.2.0, v16.17.0 |
- Type:<number>
A number representing the size (in octets) received from the fetch (HTTP or cache), of the payload body, before removing any applied content-codings.
performanceResourceTiming.decodedBodySize#
History
| Version | Changes |
|---|---|
| v19.0.0 | This property getter must be called with the `PerformanceResourceTiming` object as the receiver. |
| v18.2.0, v16.17.0 | Added in: v18.2.0, v16.17.0 |
- Type:<number>
A number representing the size (in octets) received from the fetch (HTTP or cache), of the message body, after removing any applied content-codings.
performanceResourceTiming.toJSON()#
History
| Version | Changes |
|---|---|
| v19.0.0 | This method must be called with the `PerformanceResourceTiming` object as the receiver. |
| v18.2.0, v16.17.0 | Added in: v18.2.0, v16.17.0 |
Returns an object that is the JSON representation of the `PerformanceResourceTiming` object.
Class:PerformanceObserver#
new PerformanceObserver(callback)#
History
| Version | Changes |
|---|---|
| v18.0.0 | Passing an invalid callback to the `callback` argument now throws `ERR_INVALID_ARG_TYPE` instead of `ERR_INVALID_CALLBACK`. |
| v8.5.0 | Added in: v8.5.0 |
- `callback` <Function>
`PerformanceObserver` objects provide notifications when new `PerformanceEntry` instances have been added to the Performance Timeline.
```js
import { performance, PerformanceObserver } from 'node:perf_hooks';

const obs = new PerformanceObserver((list, observer) => {
  console.log(list.getEntries());

  performance.clearMarks();
  performance.clearMeasures();
  observer.disconnect();
});
obs.observe({ entryTypes: ['mark'], buffered: true });

performance.mark('test');
```
Because `PerformanceObserver` instances introduce their own additional performance overhead, instances should not be left subscribed to notifications indefinitely. Users should disconnect observers as soon as they are no longer needed.
The `callback` is invoked when a `PerformanceObserver` is notified about new `PerformanceEntry` instances. The callback receives a `PerformanceObserverEntryList` instance and a reference to the `PerformanceObserver`.
performanceObserver.disconnect()#
Disconnects thePerformanceObserver instance from all notifications.
performanceObserver.observe(options)#
History
| Version | Changes |
|---|---|
| v16.7.0 | Updated to conform to Performance Timeline Level 2. The buffered option has been added back. |
| v16.0.0 | Updated to conform to User Timing Level 3. The buffered option has been removed. |
| v8.5.0 | Added in: v8.5.0 |
- `options` <Object>
  - `type` <string> A single <PerformanceEntry> type. Must not be given if `entryTypes` is already specified.
  - `entryTypes` <string[]> An array of strings identifying the types of <PerformanceEntry> instances the observer is interested in. If not provided an error will be thrown.
  - `buffered` <boolean> If true, the observer callback is called with a list of global buffered `PerformanceEntry` entries. If false, only `PerformanceEntry`s created after the time point are sent to the observer callback. **Default:** `false`.
Subscribes the <PerformanceObserver> instance to notifications of new <PerformanceEntry> instances identified either by `options.entryTypes` or `options.type`:
```js
import { performance, PerformanceObserver } from 'node:perf_hooks';

const obs = new PerformanceObserver((list, observer) => {
  // Called once asynchronously. `list` contains three items.
});
obs.observe({ type: 'mark' });

for (let n = 0; n < 3; n++)
  performance.mark(`test${n}`);
```
performanceObserver.takeRecords()#
- Returns:<PerformanceEntry[]> Current list of entries stored in the performance observer, emptying it out.
Class:PerformanceObserverEntryList#
The `PerformanceObserverEntryList` class is used to provide access to the `PerformanceEntry` instances passed to a `PerformanceObserver`. The constructor of this class is not exposed to users.
performanceObserverEntryList.getEntries()#
- Returns:<PerformanceEntry[]>
Returns a list of `PerformanceEntry` objects in chronological order with respect to `performanceEntry.startTime`.
```js
import { performance, PerformanceObserver } from 'node:perf_hooks';

const obs = new PerformanceObserver((perfObserverList, observer) => {
  console.log(perfObserverList.getEntries());
  /**
   * [
   *   PerformanceEntry {
   *     name: 'test',
   *     entryType: 'mark',
   *     startTime: 81.465639,
   *     duration: 0,
   *     detail: null
   *   },
   *   PerformanceEntry {
   *     name: 'meow',
   *     entryType: 'mark',
   *     startTime: 81.860064,
   *     duration: 0,
   *     detail: null
   *   }
   * ]
   */

  performance.clearMarks();
  performance.clearMeasures();
  observer.disconnect();
});
obs.observe({ type: 'mark' });

performance.mark('test');
performance.mark('meow');
```
performanceObserverEntryList.getEntriesByName(name[, type])#
- `name` <string>
- `type` <string>
- Returns: <PerformanceEntry[]>
Returns a list of `PerformanceEntry` objects in chronological order with respect to `performanceEntry.startTime` whose `performanceEntry.name` is equal to `name`, and optionally, whose `performanceEntry.entryType` is equal to `type`.
```js
import { performance, PerformanceObserver } from 'node:perf_hooks';

const obs = new PerformanceObserver((perfObserverList, observer) => {
  console.log(perfObserverList.getEntriesByName('meow'));
  /**
   * [
   *   PerformanceEntry {
   *     name: 'meow',
   *     entryType: 'mark',
   *     startTime: 98.545991,
   *     duration: 0,
   *     detail: null
   *   }
   * ]
   */
  console.log(perfObserverList.getEntriesByName('nope')); // []

  console.log(perfObserverList.getEntriesByName('test', 'mark'));
  /**
   * [
   *   PerformanceEntry {
   *     name: 'test',
   *     entryType: 'mark',
   *     startTime: 63.518931,
   *     duration: 0,
   *     detail: null
   *   }
   * ]
   */
  console.log(perfObserverList.getEntriesByName('test', 'measure')); // []

  performance.clearMarks();
  performance.clearMeasures();
  observer.disconnect();
});
obs.observe({ entryTypes: ['mark', 'measure'] });

performance.mark('test');
performance.mark('meow');
```
performanceObserverEntryList.getEntriesByType(type)#
- `type` <string>
- Returns: <PerformanceEntry[]>
Returns a list of `PerformanceEntry` objects in chronological order with respect to `performanceEntry.startTime` whose `performanceEntry.entryType` is equal to `type`.
```js
import { performance, PerformanceObserver } from 'node:perf_hooks';

const obs = new PerformanceObserver((perfObserverList, observer) => {
  console.log(perfObserverList.getEntriesByType('mark'));
  /**
   * [
   *   PerformanceEntry {
   *     name: 'test',
   *     entryType: 'mark',
   *     startTime: 55.897834,
   *     duration: 0,
   *     detail: null
   *   },
   *   PerformanceEntry {
   *     name: 'meow',
   *     entryType: 'mark',
   *     startTime: 56.350146,
   *     duration: 0,
   *     detail: null
   *   }
   * ]
   */

  performance.clearMarks();
  performance.clearMeasures();
  observer.disconnect();
});
obs.observe({ type: 'mark' });

performance.mark('test');
performance.mark('meow');
```
perf_hooks.createHistogram([options])#
- `options` <Object>
  - `lowest` <number> | <bigint> The lowest discernible value. Must be an integer value greater than 0. **Default:** `1`.
  - `highest` <number> | <bigint> The highest recordable value. Must be an integer value that is equal to or greater than two times `lowest`. **Default:** `Number.MAX_SAFE_INTEGER`.
  - `figures` <number> The number of accuracy digits. Must be a number between `1` and `5`. **Default:** `3`.
- Returns:<RecordableHistogram>
Returns a <RecordableHistogram>.
perf_hooks.eventLoopUtilization([utilization1[, utilization2]])#
- `utilization1` <Object> The result of a previous call to `eventLoopUtilization()`.
- `utilization2` <Object> The result of a previous call to `eventLoopUtilization()` prior to `utilization1`.
- Returns: <Object>
The `eventLoopUtilization()` function returns an object that contains the cumulative duration of time the event loop has been both idle and active as a high resolution milliseconds timer. The `utilization` value is the calculated Event Loop Utilization (ELU).
If bootstrapping has not yet finished on the main thread the properties have the value of `0`. The ELU is immediately available on `Worker` threads since bootstrap happens within the event loop.
Both `utilization1` and `utilization2` are optional parameters.
If `utilization1` is passed, then the delta between the current call's `active` and `idle` times, as well as the corresponding `utilization` value are calculated and returned (similar to `process.hrtime()`).
If `utilization1` and `utilization2` are both passed, then the delta is calculated between the two arguments. This is a convenience option because, unlike `process.hrtime()`, calculating the ELU is more complex than a single subtraction.
ELU is similar to CPU utilization, except that it only measures event loop statistics and not CPU usage. It represents the percentage of time the event loop has spent outside the event loop's event provider (e.g. `epoll_wait`). No other CPU idle time is taken into consideration. The following is an example of how a mostly idle process will have a high ELU.
```js
'use strict';
const { eventLoopUtilization } = require('node:perf_hooks').performance;
const { spawnSync } = require('node:child_process');

setImmediate(() => {
  const elu = eventLoopUtilization();
  spawnSync('sleep', ['5']);
  console.log(eventLoopUtilization(elu).utilization);
});
```
Although the CPU is mostly idle while running this script, the value of `utilization` is `1`. This is because the call to `child_process.spawnSync()` blocks the event loop from proceeding.
Passing in a user-defined object instead of the result of a previous call to `eventLoopUtilization()` will lead to undefined behavior. The return values are not guaranteed to reflect any correct state of the event loop.
perf_hooks.monitorEventLoopDelay([options])#
- `options` <Object>
  - `resolution` <number> The sampling rate in milliseconds. Must be greater than zero. **Default:** `10`.
- Returns:<IntervalHistogram>
This property is an extension by Node.js. It is not available in Web browsers.
Creates an `IntervalHistogram` object that samples and reports the event loop delay over time. The delays will be reported in nanoseconds.
Using a timer to detect approximate event loop delay works because the execution of timers is tied specifically to the lifecycle of the libuv event loop. That is, a delay in the loop will cause a delay in the execution of the timer, and those delays are specifically what this API is intended to detect.
```js
import { monitorEventLoopDelay } from 'node:perf_hooks';

const h = monitorEventLoopDelay({ resolution: 20 });
h.enable();
// Do something.
h.disable();
console.log(h.min);
console.log(h.max);
console.log(h.mean);
console.log(h.stddev);
console.log(h.percentiles);
console.log(h.percentile(50));
console.log(h.percentile(99));
```
perf_hooks.timerify(fn[, options])#
- `fn` <Function>
- `options` <Object>
  - `histogram` <RecordableHistogram> A histogram object created using `perf_hooks.createHistogram()` that will record runtime durations in nanoseconds.
This property is an extension by Node.js. It is not available in Web browsers.
Wraps a function within a new function that measures the running time of the wrapped function. A `PerformanceObserver` must be subscribed to the `'function'` event type in order for the timing details to be accessed.
```js
import { performance, PerformanceObserver } from 'node:perf_hooks';

function someFunction() {
  console.log('hello world');
}

const wrapped = performance.timerify(someFunction);

const obs = new PerformanceObserver((list) => {
  console.log(list.getEntries()[0].duration);

  performance.clearMarks();
  performance.clearMeasures();
  obs.disconnect();
});
obs.observe({ entryTypes: ['function'] });

// A performance timeline entry will be created
wrapped();
```
If the wrapped function returns a promise, a finally handler will be attached to the promise and the duration will be reported once the finally handler is invoked.
Class:Histogram#
histogram.count#
- Type:<number>
The number of samples recorded by the histogram.
histogram.countBigInt#
- Type:<bigint>
The number of samples recorded by the histogram.
histogram.exceeds#
- Type:<number>
The number of times the event loop delay exceeded the maximum 1 hour event loop delay threshold.
histogram.exceedsBigInt#
- Type:<bigint>
The number of times the event loop delay exceeded the maximum 1 hour event loop delay threshold.
histogram.percentileBigInt(percentile)#
- `percentile` <number> A percentile value in the range (0, 100].
- Returns: <bigint>

Returns the value at the given percentile.
histogram.percentiles#
- Type:<Map>
Returns a `Map` object detailing the accumulated percentile distribution.
Class:IntervalHistogram extends Histogram#
A `Histogram` that is periodically updated on a given interval.
histogram.disable()#
- Returns:<boolean>
Disables the update interval timer. Returns `true` if the timer was stopped, `false` if it was already stopped.
histogram.enable()#
- Returns:<boolean>
Enables the update interval timer. Returns `true` if the timer was started, `false` if it was already started.
histogram[Symbol.dispose]()#
Disables the update interval timer when the histogram is disposed.
```js
const { monitorEventLoopDelay } = require('node:perf_hooks');

{
  using hist = monitorEventLoopDelay({ resolution: 20 });
  hist.enable();
  // The histogram will be disabled when the block is exited.
}
```
Cloning an `IntervalHistogram`#
<IntervalHistogram> instances can be cloned via <MessagePort>. On the receiving end, the histogram is cloned as a plain <Histogram> object that does not implement the `enable()` and `disable()` methods.
Class:RecordableHistogram extends Histogram#
histogram.record(val)#
- `val` <number> | <bigint> The amount to record in the histogram.
histogram.recordDelta()#
Calculates the amount of time (in nanoseconds) that has passed since the previous call to `recordDelta()` and records that amount in the histogram.
Examples#
Measuring the duration of async operations#
The following example uses the Async Hooks and Performance APIs to measure the actual duration of a Timeout operation (including the amount of time it took to execute the callback).
```js
import { createHook } from 'node:async_hooks';
import { performance, PerformanceObserver } from 'node:perf_hooks';

const set = new Set();
const hook = createHook({
  init(id, type) {
    if (type === 'Timeout') {
      performance.mark(`Timeout-${id}-Init`);
      set.add(id);
    }
  },
  destroy(id) {
    if (set.has(id)) {
      set.delete(id);
      performance.mark(`Timeout-${id}-Destroy`);
      performance.measure(`Timeout-${id}`,
                          `Timeout-${id}-Init`,
                          `Timeout-${id}-Destroy`);
    }
  },
});
hook.enable();

const obs = new PerformanceObserver((list, observer) => {
  console.log(list.getEntries()[0]);
  performance.clearMarks();
  performance.clearMeasures();
  observer.disconnect();
});
obs.observe({ entryTypes: ['measure'], buffered: true });

setTimeout(() => {}, 1000);
```
Measuring how long it takes to load dependencies#
The following example measures the duration of `require()` operations to load dependencies:
```js
import { performance, PerformanceObserver } from 'node:perf_hooks';

// Activate the observer
const obs = new PerformanceObserver((list) => {
  const entries = list.getEntries();
  entries.forEach((entry) => {
    console.log(`import('${entry[0]}')`, entry.duration);
  });
  performance.clearMarks();
  performance.clearMeasures();
  obs.disconnect();
});
obs.observe({ entryTypes: ['function'], buffered: true });

const timedImport = performance.timerify(async (module) => {
  return await import(module);
});

await timedImport('some-module');
```
In CommonJS, `require()` itself can be timerified by monkey patching:
```js
'use strict';
const { performance, PerformanceObserver } = require('node:perf_hooks');
const mod = require('node:module');

// Monkey patch the require function
mod.Module.prototype.require =
  performance.timerify(mod.Module.prototype.require);
require = performance.timerify(require);

// Activate the observer
const obs = new PerformanceObserver((list) => {
  const entries = list.getEntries();
  entries.forEach((entry) => {
    console.log(`require('${entry[0]}')`, entry.duration);
  });
  performance.clearMarks();
  performance.clearMeasures();
  obs.disconnect();
});
obs.observe({ entryTypes: ['function'] });

require('some-module');
```
Measuring how long one HTTP round-trip takes#
The following example is used to trace the time spent by HTTP client (`OutgoingMessage`) and HTTP request (`IncomingMessage`). For the HTTP client, it means the time interval between starting the request and receiving the response; for the HTTP request, it means the time interval between receiving the request and sending the response:
```js
import { PerformanceObserver } from 'node:perf_hooks';
import { createServer, get } from 'node:http';

const obs = new PerformanceObserver((items) => {
  items.getEntries().forEach((item) => {
    console.log(item);
  });
});

obs.observe({ entryTypes: ['http'] });

const PORT = 8080;

createServer((req, res) => {
  res.end('ok');
}).listen(PORT, () => {
  get(`http://127.0.0.1:${PORT}`);
});
```
Measuring how long `net.connect` (only for TCP) takes when the connection is successful#
```js
import { PerformanceObserver } from 'node:perf_hooks';
import { connect, createServer } from 'node:net';

const obs = new PerformanceObserver((items) => {
  items.getEntries().forEach((item) => {
    console.log(item);
  });
});

obs.observe({ entryTypes: ['net'] });

const PORT = 8080;

createServer((socket) => {
  socket.destroy();
}).listen(PORT, () => {
  connect(PORT);
});
```
Measuring how long the DNS takes when the request is successful#
```js
import { PerformanceObserver } from 'node:perf_hooks';
import { lookup, promises } from 'node:dns';

const obs = new PerformanceObserver((items) => {
  items.getEntries().forEach((item) => {
    console.log(item);
  });
});

obs.observe({ entryTypes: ['dns'] });

lookup('localhost', () => {});
promises.resolve('localhost');
```
Permissions#
Permissions can be used to control what system resources the Node.js process has access to or what actions the process can take with those resources.
- Process-based permissions control the Node.js process's access to resources. The resource can be entirely allowed or denied, or actions related to it can be controlled. For example, file system reads can be allowed while denying writes.

This feature does not protect against malicious code. According to the Node.js Security Policy, Node.js trusts any code it is asked to run.
The permission model implements a "seat belt" approach, which prevents trusted code from unintentionally changing files or using resources that access has not explicitly been granted to. It does not provide security guarantees in the presence of malicious code. Malicious code can bypass the permission model and execute arbitrary code without the restrictions imposed by the permission model.
If you find a potential security vulnerability, please refer to our Security Policy.
Process-based permissions#
Permission Model#
History
| Version | Changes |
|---|---|
| v23.5.0, v22.13.0 | This feature is no longer experimental. |
| v20.0.0 | Added in: v20.0.0 |
The Node.js Permission Model is a mechanism for restricting access to specific resources during execution. The API exists behind a flag `--permission` which, when enabled, will restrict access to all available permissions.
The available permissions are documented by the `--permission` flag.
When starting Node.js with `--permission`, the ability to access the file system through the `fs` module, access the network, spawn processes, use `node:worker_threads`, use native addons, use WASI, and enable the runtime inspector will be restricted (the listener for SIGUSR1 won't be created).
```console
$ node --permission index.js
Error: Access to this API has been restricted
    at node:internal/main/run_main_module:23:47 {
  code: 'ERR_ACCESS_DENIED',
  permission: 'FileSystemRead',
  resource: '/home/user/index.js'
}
```
Allowing access to spawning a process and creating worker threads can be done using the `--allow-child-process` and `--allow-worker` flags, respectively.
To allow network access, use `--allow-net`, and to allow native addons when using the permission model, use the `--allow-addons` flag. For WASI, use the `--allow-wasi` flag.
Runtime API#
When enabling the Permission Model through the `--permission` flag, a new property `permission` is added to the `process` object. This property contains one function:
permission.has(scope[, reference])#
API call to check permissions at runtime (permission.has())
```js
process.permission.has('fs.write'); // true
process.permission.has('fs.write', '/home/rafaelgss/protected-folder'); // true

process.permission.has('fs.read'); // true
process.permission.has('fs.read', '/home/rafaelgss/protected-folder'); // false
```
File System Permissions#
The Permission Model, by default, restricts access to the file system through the `node:fs` module. It does not guarantee that users will not be able to access the file system through other means, such as through the `node:sqlite` module.
To allow access to the file system, use the `--allow-fs-read` and `--allow-fs-write` flags:
```console
$ node --permission --allow-fs-read=* --allow-fs-write=* index.js
Hello world!
```
By default the entrypoints of your application are included in the allowed file system read list. For example:
```console
$ node --permission index.js
```
- `index.js` will be included in the allowed file system read list
```console
$ node -r /path/to/custom-require.js --permission index.js
```
- `/path/to/custom-require.js` will be included in the allowed file system read list.
- `index.js` will be included in the allowed file system read list.
The valid arguments for both flags are:
- `*` - To allow all `FileSystemRead` or `FileSystemWrite` operations, respectively.
- Relative paths to the current working directory.
- Absolute paths.
Example:
- `--allow-fs-read=*` - It will allow all `FileSystemRead` operations.
- `--allow-fs-write=*` - It will allow all `FileSystemWrite` operations.
- `--allow-fs-write=/tmp/` - It will allow `FileSystemWrite` access to the `/tmp/` folder.
- `--allow-fs-read=/tmp/ --allow-fs-read=/home/.gitignore` - It allows `FileSystemRead` access to the `/tmp/` folder and the `/home/.gitignore` path.
Wildcards are supported too:
- `--allow-fs-read=/home/test*` will allow read access to everything that matches the wildcard, e.g.: `/home/test/file1` or `/home/test2`
After passing a wildcard character (`*`) all subsequent characters will be ignored. For example: `/home/*.js` will work similarly to `/home/*`.
When the permission model is initialized, it will automatically add a wildcard (`*`) if the specified directory exists. For example, if `/home/test/files` exists, it will be treated as `/home/test/files/*`. However, if the directory does not exist, the wildcard will not be added, and access will be limited to `/home/test/files`. If you want to allow access to a folder that does not exist yet, make sure to explicitly include the wildcard: `/my-path/folder-do-not-exist/*`.
Configuration file support#
In addition to passing permission flags on the command line, they can also be declared in a Node.js configuration file when using the experimental `--experimental-config-file` flag. Permission options must be placed inside the `permission` top-level object.
Example `node.config.json`:
```json
{
  "permission": {
    "allow-fs-read": ["./foo"],
    "allow-fs-write": ["./bar"],
    "allow-child-process": true,
    "allow-worker": true,
    "allow-net": true,
    "allow-addons": false
  }
}
```
When the `permission` namespace is present in the configuration file, Node.js automatically enables the `--permission` flag. Run with:
```console
$ node --experimental-default-config-file app.js
```
Using the Permission Model with `npx`#
If you're using `npx` to execute a Node.js script, you can enable the Permission Model by passing the `--node-options` flag. For example:
```console
npx --node-options="--permission" package-name
```
This sets the `NODE_OPTIONS` environment variable for all Node.js processes spawned by `npx`, without affecting the `npx` process itself.
FileSystemRead Error with `npx`
The above command will likely throw a `FileSystemRead` invalid access error because Node.js requires file system read access to locate and execute the package. To avoid this:
1. Using a Globally Installed Package

   Grant read access to the global `node_modules` directory by running:

   ```console
   npx --node-options="--permission --allow-fs-read=$(npm prefix -g)" package-name
   ```

2. Using the `npx` Cache

   If you are installing the package temporarily or relying on the `npx` cache, grant read access to the npm cache directory:

   ```console
   npx --node-options="--permission --allow-fs-read=$(npm config get cache)" package-name
   ```
Any arguments you would normally pass to `node` (e.g., `--allow-*` flags) can also be passed through the `--node-options` flag. This flexibility makes it easy to configure permissions as needed when using `npx`.
Permission Model constraints#
There are constraints you need to know before using this system:
- The model is not inherited by worker threads.
- When using the Permission Model the following features will be restricted:
- Native modules
- Network
- Child process
- Worker Threads
- Inspector protocol
- File system access
- WASI
- The Permission Model is initialized after the Node.js environment is set up. However, certain flags such as `--env-file` or `--openssl-config` are designed to read files before environment initialization. As a result, such flags are not subject to the rules of the Permission Model. The same applies for V8 flags that can be set via runtime through `v8.setFlagsFromString`.
- OpenSSL engines cannot be requested at runtime when the Permission Model is enabled, affecting the built-in crypto, https, and tls modules.
- Run-Time Loadable Extensions cannot be loaded when the Permission Model is enabled, affecting the sqlite module.
- Using existing file descriptors via the `node:fs` module bypasses the Permission Model.
Limitations and Known Issues#
- Symbolic links will be followed even to locations outside of the set of paths that access has been granted to. Relative symbolic links may allow access to arbitrary files and directories. When starting applications with the permission model enabled, you must ensure that no paths to which access has been granted contain relative symbolic links.
Process#
Source Code: lib/process.js

The `process` object provides information about, and control over, the current Node.js process.

```mjs
import process from 'node:process';
```

```cjs
const process = require('node:process');
```
Process events#
The `process` object is an instance of `EventEmitter`.
Event: 'beforeExit'#
The `'beforeExit'` event is emitted when Node.js empties its event loop and has no additional work to schedule. Normally, the Node.js process will exit when there is no work scheduled, but a listener registered on the `'beforeExit'` event can make asynchronous calls, and thereby cause the Node.js process to continue.

The listener callback function is invoked with the value of `process.exitCode` passed as the only argument.

The `'beforeExit'` event is not emitted for conditions causing explicit termination, such as calling `process.exit()` or uncaught exceptions.

The `'beforeExit'` event should not be used as an alternative to the `'exit'` event unless the intention is to schedule additional work.
```mjs
import process from 'node:process';

process.on('beforeExit', (code) => {
  console.log('Process beforeExit event with code: ', code);
});

process.on('exit', (code) => {
  console.log('Process exit event with code: ', code);
});

console.log('This message is displayed first.');

// Prints:
// This message is displayed first.
// Process beforeExit event with code: 0
// Process exit event with code: 0
```

```cjs
const process = require('node:process');

process.on('beforeExit', (code) => {
  console.log('Process beforeExit event with code: ', code);
});

process.on('exit', (code) => {
  console.log('Process exit event with code: ', code);
});

console.log('This message is displayed first.');

// Prints:
// This message is displayed first.
// Process beforeExit event with code: 0
// Process exit event with code: 0
```
Event: 'disconnect'#
If the Node.js process is spawned with an IPC channel (see the Child Process and Cluster documentation), the `'disconnect'` event will be emitted when the IPC channel is closed.
Event: 'exit'#

- `code` <integer>
The `'exit'` event is emitted when the Node.js process is about to exit as a result of either:

- The `process.exit()` method being called explicitly;
- The Node.js event loop no longer having any additional work to perform.
There is no way to prevent the exiting of the event loop at this point, and once all `'exit'` listeners have finished running the Node.js process will terminate.

The listener callback function is invoked with the exit code specified either by the `process.exitCode` property, or the `exitCode` argument passed to the `process.exit()` method.
```mjs
import process from 'node:process';

process.on('exit', (code) => {
  console.log(`About to exit with code: ${code}`);
});
```

```cjs
const process = require('node:process');

process.on('exit', (code) => {
  console.log(`About to exit with code: ${code}`);
});
```
Listener functions must only perform synchronous operations. The Node.js process will exit immediately after calling the `'exit'` event listeners causing any additional work still queued in the event loop to be abandoned. In the following example, for instance, the timeout will never occur:
```mjs
import process from 'node:process';

process.on('exit', (code) => {
  setTimeout(() => {
    console.log('This will not run');
  }, 0);
});
```

```cjs
const process = require('node:process');

process.on('exit', (code) => {
  setTimeout(() => {
    console.log('This will not run');
  }, 0);
});
```
Event: 'message'#

- `message` <Object> | <boolean> | <number> | <string> | <null> a parsed JSON object or a serializable primitive value.
- `sendHandle` <net.Server> | <net.Socket> a `net.Server` or `net.Socket` object, or undefined.
If the Node.js process is spawned with an IPC channel (see the Child Process and Cluster documentation), the `'message'` event is emitted whenever a message sent by a parent process using `childprocess.send()` is received by the child process.

The message goes through serialization and parsing. The resulting message might not be the same as what is originally sent.

If the `serialization` option was set to `advanced` when spawning the process, the `message` argument can contain data that JSON is not able to represent. See Advanced serialization for `child_process` for more details.
Event: 'rejectionHandled'#

- `promise` <Promise> The late handled promise.
The `'rejectionHandled'` event is emitted whenever a `Promise` has been rejected and an error handler was attached to it (using `promise.catch()`, for example) later than one turn of the Node.js event loop.

The `Promise` object would have previously been emitted in an `'unhandledRejection'` event, but during the course of processing gained a rejection handler.

There is no notion of a top level for a `Promise` chain at which rejections can always be handled. Being inherently asynchronous in nature, a `Promise` rejection can be handled at a future point in time, possibly much later than the event loop turn it takes for the `'unhandledRejection'` event to be emitted.

Another way of stating this is that, unlike in synchronous code where there is an ever-growing list of unhandled exceptions, with Promises there can be a growing-and-shrinking list of unhandled rejections.

In synchronous code, the `'uncaughtException'` event is emitted when the list of unhandled exceptions grows.

In asynchronous code, the `'unhandledRejection'` event is emitted when the list of unhandled rejections grows, and the `'rejectionHandled'` event is emitted when the list of unhandled rejections shrinks.
```mjs
import process from 'node:process';

const unhandledRejections = new Map();
process.on('unhandledRejection', (reason, promise) => {
  unhandledRejections.set(promise, reason);
});
process.on('rejectionHandled', (promise) => {
  unhandledRejections.delete(promise);
});
```

```cjs
const process = require('node:process');

const unhandledRejections = new Map();
process.on('unhandledRejection', (reason, promise) => {
  unhandledRejections.set(promise, reason);
});
process.on('rejectionHandled', (promise) => {
  unhandledRejections.delete(promise);
});
```
In this example, the `unhandledRejections` `Map` will grow and shrink over time, reflecting rejections that start unhandled and then become handled. It is possible to record such errors in an error log, either periodically (which is likely best for long-running applications) or upon process exit (which is likely most convenient for scripts).
Event: 'workerMessage'#

- `value` <any> A value transmitted using `postMessageToThread()`.
- `source` <number> The transmitting worker thread ID or `0` for the main thread.

The `'workerMessage'` event is emitted for any incoming message sent by the other party using `postMessageToThread()`.
Event: 'uncaughtException'#
History
| Version | Changes |
|---|---|
| v12.0.0, v10.17.0 | Added the |
| v0.1.18 | Added in: v0.1.18 |
- `err` <Error> The uncaught exception.
- `origin` <string> Indicates if the exception originates from an unhandled rejection or from a synchronous error. Can either be `'uncaughtException'` or `'unhandledRejection'`. The latter is used when an exception happens in a `Promise` based async context (or if a `Promise` is rejected) and the `--unhandled-rejections` flag is set to `strict` or `throw` (which is the default) and the rejection is not handled, or when a rejection happens during the command line entry point's ES module static loading phase.
The `'uncaughtException'` event is emitted when an uncaught JavaScript exception bubbles all the way back to the event loop. By default, Node.js handles such exceptions by printing the stack trace to `stderr` and exiting with code 1, overriding any previously set `process.exitCode`. Adding a handler for the `'uncaughtException'` event overrides this default behavior. Alternatively, change the `process.exitCode` in the `'uncaughtException'` handler, which will result in the process exiting with the provided exit code. Otherwise, in the presence of such a handler the process will exit with 0.
```mjs
import process from 'node:process';
import fs from 'node:fs';

process.on('uncaughtException', (err, origin) => {
  fs.writeSync(
    process.stderr.fd,
    `Caught exception: ${err}\n` +
    `Exception origin: ${origin}\n`,
  );
});

setTimeout(() => {
  console.log('This will still run.');
}, 500);

// Intentionally cause an exception, but don't catch it.
nonexistentFunc();
console.log('This will not run.');
```

```cjs
const process = require('node:process');
const fs = require('node:fs');

process.on('uncaughtException', (err, origin) => {
  fs.writeSync(
    process.stderr.fd,
    `Caught exception: ${err}\n` +
    `Exception origin: ${origin}\n`,
  );
});

setTimeout(() => {
  console.log('This will still run.');
}, 500);

// Intentionally cause an exception, but don't catch it.
nonexistentFunc();
console.log('This will not run.');
```
It is possible to monitor `'uncaughtException'` events without overriding the default behavior to exit the process by installing an `'uncaughtExceptionMonitor'` listener.

Warning: Using 'uncaughtException' correctly#
`'uncaughtException'` is a crude mechanism for exception handling intended to be used only as a last resort. The event should not be used as an equivalent to On Error Resume Next. Unhandled exceptions inherently mean that an application is in an undefined state. Attempting to resume application code without properly recovering from the exception can cause additional unforeseen and unpredictable issues.

Exceptions thrown from within the event handler will not be caught. Instead the process will exit with a non-zero exit code and the stack trace will be printed. This is to avoid infinite recursion.

Attempting to resume normally after an uncaught exception can be similar to pulling out the power cord when upgrading a computer. Nine out of ten times, nothing happens. But the tenth time, the system becomes corrupted.

The correct use of `'uncaughtException'` is to perform synchronous cleanup of allocated resources (e.g. file descriptors, handles, etc.) before shutting down the process. It is not safe to resume normal operation after `'uncaughtException'`.
To restart a crashed application in a more reliable way, whether `'uncaughtException'` is emitted or not, an external monitor should be employed in a separate process to detect application failures and recover or restart as needed.
Event: 'uncaughtExceptionMonitor'#

- `err` <Error> The uncaught exception.
- `origin` <string> Indicates if the exception originates from an unhandled rejection or from synchronous errors. Can either be `'uncaughtException'` or `'unhandledRejection'`. The latter is used when an exception happens in a `Promise` based async context (or if a `Promise` is rejected) and the `--unhandled-rejections` flag is set to `strict` or `throw` (which is the default) and the rejection is not handled, or when a rejection happens during the command line entry point's ES module static loading phase.
The `'uncaughtExceptionMonitor'` event is emitted before an `'uncaughtException'` event is emitted or a hook installed via `process.setUncaughtExceptionCaptureCallback()` is called.

Installing an `'uncaughtExceptionMonitor'` listener does not change the behavior once an `'uncaughtException'` event is emitted. The process will still crash if no `'uncaughtException'` listener is installed.
```mjs
import process from 'node:process';

process.on('uncaughtExceptionMonitor', (err, origin) => {
  MyMonitoringTool.logSync(err, origin);
});

// Intentionally cause an exception, but don't catch it.
nonexistentFunc();
// Still crashes Node.js
```

```cjs
const process = require('node:process');

process.on('uncaughtExceptionMonitor', (err, origin) => {
  MyMonitoringTool.logSync(err, origin);
});

// Intentionally cause an exception, but don't catch it.
nonexistentFunc();
// Still crashes Node.js
```
Event: 'unhandledRejection'#
History
| Version | Changes |
|---|---|
| v7.0.0 | Not handling |
| v6.6.0 | Unhandled |
| v1.4.1 | Added in: v1.4.1 |
- `reason` <Error> | <any> The object with which the promise was rejected (typically an `Error` object).
- `promise` <Promise> The rejected promise.
The `'unhandledRejection'` event is emitted whenever a `Promise` is rejected and no error handler is attached to the promise within a turn of the event loop. When programming with Promises, exceptions are encapsulated as "rejected promises". Rejections can be caught and handled using `promise.catch()` and are propagated through a `Promise` chain. The `'unhandledRejection'` event is useful for detecting and keeping track of promises that were rejected whose rejections have not yet been handled.
```mjs
import process from 'node:process';

process.on('unhandledRejection', (reason, promise) => {
  console.log('Unhandled Rejection at:', promise, 'reason:', reason);
  // Application specific logging, throwing an error, or other logic here
});

somePromise.then((res) => {
  return reportToUser(JSON.pasre(res)); // Note the typo (`pasre`)
}); // No `.catch()` or `.then()`
```

```cjs
const process = require('node:process');

process.on('unhandledRejection', (reason, promise) => {
  console.log('Unhandled Rejection at:', promise, 'reason:', reason);
  // Application specific logging, throwing an error, or other logic here
});

somePromise.then((res) => {
  return reportToUser(JSON.pasre(res)); // Note the typo (`pasre`)
}); // No `.catch()` or `.then()`
```
The following will also trigger the `'unhandledRejection'` event to be emitted:
```mjs
import process from 'node:process';

function SomeResource() {
  // Initially set the loaded status to a rejected promise
  this.loaded = Promise.reject(new Error('Resource not yet loaded!'));
}

const resource = new SomeResource();
// no .catch or .then on resource.loaded for at least a turn
```

```cjs
const process = require('node:process');

function SomeResource() {
  // Initially set the loaded status to a rejected promise
  this.loaded = Promise.reject(new Error('Resource not yet loaded!'));
}

const resource = new SomeResource();
// no .catch or .then on resource.loaded for at least a turn
```
In this example case, it is possible to track the rejection as a developer error as would typically be the case for other `'unhandledRejection'` events. To address such failures, a non-operational `.catch(() => { })` handler may be attached to `resource.loaded`, which would prevent the `'unhandledRejection'` event from being emitted.

If an `'unhandledRejection'` event is emitted but not handled it will be raised as an uncaught exception. This alongside other behaviors of `'unhandledRejection'` events can be changed via the `--unhandled-rejections` flag.
Event: 'warning'#

- `warning` <Error> Key properties of the warning are:
The `'warning'` event is emitted whenever Node.js emits a process warning.

A process warning is similar to an error in that it describes exceptional conditions that are being brought to the user's attention. However, warnings are not part of the normal Node.js and JavaScript error handling flow. Node.js can emit warnings whenever it detects bad coding practices that could lead to sub-optimal application performance, bugs, or security vulnerabilities.
```mjs
import process from 'node:process';

process.on('warning', (warning) => {
  console.warn(warning.name);    // Print the warning name
  console.warn(warning.message); // Print the warning message
  console.warn(warning.stack);   // Print the stack trace
});
```

```cjs
const process = require('node:process');

process.on('warning', (warning) => {
  console.warn(warning.name);    // Print the warning name
  console.warn(warning.message); // Print the warning message
  console.warn(warning.stack);   // Print the stack trace
});
```
By default, Node.js will print process warnings to `stderr`. The `--no-warnings` command-line option can be used to suppress the default console output but the `'warning'` event will still be emitted by the `process` object. Currently, it is not possible to suppress specific warning types other than deprecation warnings. To suppress deprecation warnings, check out the `--no-deprecation` flag.

The following example illustrates the warning that is printed to `stderr` when too many listeners have been added to an event:
```console
$ node
> events.defaultMaxListeners = 1;
> process.on('foo', () => {});
> process.on('foo', () => {});
> (node:38638) MaxListenersExceededWarning: Possible EventEmitter memory leak
detected. 2 foo listeners added. Use emitter.setMaxListeners() to increase limit
```

In contrast, the following example turns off the default warning output and adds a custom handler to the `'warning'` event:

```console
$ node --no-warnings
> const p = process.on('warning', (warning) => console.warn('Do not do that!'));
> events.defaultMaxListeners = 1;
> process.on('foo', () => {});
> process.on('foo', () => {});
> Do not do that!
```

The `--trace-warnings` command-line option can be used to have the default console output for warnings include the full stack trace of the warning.
Launching Node.js using the `--throw-deprecation` command-line flag will cause custom deprecation warnings to be thrown as exceptions.

Using the `--trace-deprecation` command-line flag will cause the custom deprecation to be printed to `stderr` along with the stack trace.

Using the `--no-deprecation` command-line flag will suppress all reporting of the custom deprecation.

The `*-deprecation` command-line flags only affect warnings that use the name `'DeprecationWarning'`.
Emitting custom warnings#
See the `process.emitWarning()` method for issuing custom or application-specific warnings.
Node.js warning names#
There are no strict guidelines for warning types (as identified by the `name` property) emitted by Node.js. New types of warnings can be added at any time. A few of the warning types that are most common include:
- `'DeprecationWarning'` - Indicates use of a deprecated Node.js API or feature. Such warnings must include a `'code'` property identifying the deprecation code.
- `'ExperimentalWarning'` - Indicates use of an experimental Node.js API or feature. Such features must be used with caution as they may change at any time and are not subject to the same strict semantic-versioning and long-term support policies as supported features.
- `'MaxListenersExceededWarning'` - Indicates that too many listeners for a given event have been registered on either an `EventEmitter` or `EventTarget`. This is often an indication of a memory leak.
- `'TimeoutOverflowWarning'` - Indicates that a numeric value that cannot fit within a 32-bit signed integer has been provided to either the `setTimeout()` or `setInterval()` functions.
- `'TimeoutNegativeWarning'` - Indicates that a negative number has been provided to either the `setTimeout()` or `setInterval()` functions.
- `'TimeoutNaNWarning'` - Indicates that a value which is not a number has been provided to either the `setTimeout()` or `setInterval()` functions.
- `'UnsupportedWarning'` - Indicates use of an unsupported option or feature that will be ignored rather than treated as an error. One example is use of the HTTP response status message when using the HTTP/2 compatibility API.
Event: 'worker'#

The `'worker'` event is emitted after a new <Worker> thread has been created.
Signal events#
Signal events will be emitted when the Node.js process receives a signal. Please refer to signal(7) for a listing of standard POSIX signal names such as `'SIGINT'`, `'SIGHUP'`, etc.

Signals are not available on `Worker` threads.

The signal handler will receive the signal's name (`'SIGINT'`, `'SIGTERM'`, etc.) as the first argument.

The name of each event will be the uppercase common name for the signal (e.g. `'SIGINT'` for `SIGINT` signals).
```mjs
import process from 'node:process';

// Begin reading from stdin so the process does not exit.
process.stdin.resume();

process.on('SIGINT', () => {
  console.log('Received SIGINT. Press Control-D to exit.');
});

// Using a single function to handle multiple signals
function handle(signal) {
  console.log(`Received ${signal}`);
}

process.on('SIGINT', handle);
process.on('SIGTERM', handle);
```

```cjs
const process = require('node:process');

// Begin reading from stdin so the process does not exit.
process.stdin.resume();

process.on('SIGINT', () => {
  console.log('Received SIGINT. Press Control-D to exit.');
});

// Using a single function to handle multiple signals
function handle(signal) {
  console.log(`Received ${signal}`);
}

process.on('SIGINT', handle);
process.on('SIGTERM', handle);
```
- `'SIGUSR1'` is reserved by Node.js to start the debugger. It's possible to install a listener but doing so might interfere with the debugger.
- `'SIGTERM'` and `'SIGINT'` have default handlers on non-Windows platforms that reset the terminal mode before exiting with code `128 + signal number`. If one of these signals has a listener installed, its default behavior will be removed (Node.js will no longer exit).
- `'SIGPIPE'` is ignored by default. It can have a listener installed.
- `'SIGHUP'` is generated on Windows when the console window is closed, and on other platforms under various similar conditions. See signal(7). It can have a listener installed, however Node.js will be unconditionally terminated by Windows about 10 seconds later. On non-Windows platforms, the default behavior of `SIGHUP` is to terminate Node.js, but once a listener has been installed its default behavior will be removed.
- `'SIGTERM'` is not supported on Windows, it can be listened on.
- `'SIGINT'` from the terminal is supported on all platforms, and can usually be generated with Ctrl+C (though this may be configurable). It is not generated when terminal raw mode is enabled and Ctrl+C is used.
- `'SIGBREAK'` is delivered on Windows when Ctrl+Break is pressed. On non-Windows platforms, it can be listened on, but there is no way to send or generate it.
- `'SIGWINCH'` is delivered when the console has been resized. On Windows, this will only happen on write to the console when the cursor is being moved, or when a readable tty is used in raw mode.
- `'SIGKILL'` cannot have a listener installed, it will unconditionally terminate Node.js on all platforms.
- `'SIGSTOP'` cannot have a listener installed.
- `'SIGBUS'`, `'SIGFPE'`, `'SIGSEGV'`, and `'SIGILL'`, when not raised artificially using kill(2), inherently leave the process in a state from which it is not safe to call JS listeners. Doing so might cause the process to stop responding.
- `0` can be sent to test for the existence of a process, it has no effect if the process exists, but will throw an error if the process does not exist.
Windows does not support signals so has no equivalent to termination by signal, but Node.js offers some emulation with `process.kill()`, and `subprocess.kill()`:

- Sending `SIGINT`, `SIGTERM`, and `SIGKILL` will cause the unconditional termination of the target process, and afterwards, subprocess will report that the process was terminated by signal.
- Sending signal `0` can be used as a platform independent way to test for the existence of a process.
process.abort()#
The `process.abort()` method causes the Node.js process to exit immediately and generate a core file.
This feature is not available inWorker threads.
process.allowedNodeEnvironmentFlags#
- Type:<Set>
The `process.allowedNodeEnvironmentFlags` property is a special, read-only `Set` of flags allowable within the `NODE_OPTIONS` environment variable.

`process.allowedNodeEnvironmentFlags` extends `Set`, but overrides `Set.prototype.has` to recognize several different possible flag representations. `process.allowedNodeEnvironmentFlags.has()` will return `true` in the following cases:
- Flags may omit leading single (`-`) or double (`--`) dashes; e.g., `inspect-brk` for `--inspect-brk`, or `r` for `-r`.
- Flags passed through to V8 (as listed in `--v8-options`) may replace one or more non-leading dashes for an underscore, or vice-versa; e.g., `--perf_basic_prof`, `--perf-basic-prof`, `--perf_basic-prof`, etc.
- Flags may contain one or more equals (`=`) characters; all characters after and including the first equals will be ignored; e.g., `--stack-trace-limit=100`.
- Flags must be allowable within `NODE_OPTIONS`.
When iterating over `process.allowedNodeEnvironmentFlags`, flags will appear only once; each will begin with one or more dashes. Flags passed through to V8 will contain underscores instead of non-leading dashes:
```mjs
import { allowedNodeEnvironmentFlags } from 'node:process';

allowedNodeEnvironmentFlags.forEach((flag) => {
  // -r
  // --inspect-brk
  // --abort_on_uncaught_exception
  // ...
});
```

```cjs
const { allowedNodeEnvironmentFlags } = require('node:process');

allowedNodeEnvironmentFlags.forEach((flag) => {
  // -r
  // --inspect-brk
  // --abort_on_uncaught_exception
  // ...
});
```
The methods `add()`, `clear()`, and `delete()` of `process.allowedNodeEnvironmentFlags` do nothing, and will fail silently.

If Node.js was compiled without `NODE_OPTIONS` support (shown in `process.config`), `process.allowedNodeEnvironmentFlags` will contain what would have been allowable.
process.arch#
- Type:<string>
The operating system CPU architecture for which the Node.js binary was compiled. Possible values are: `'arm'`, `'arm64'`, `'ia32'`, `'loong64'`, `'mips'`, `'mipsel'`, `'ppc64'`, `'riscv64'`, `'s390'`, `'s390x'`, and `'x64'`.

```mjs
import { arch } from 'node:process';

console.log(`This processor architecture is ${arch}`);
```

```cjs
const { arch } = require('node:process');

console.log(`This processor architecture is ${arch}`);
```
process.argv#
- Type:<string[]>
The `process.argv` property returns an array containing the command-line arguments passed when the Node.js process was launched. The first element will be `process.execPath`. See `process.argv0` if access to the original value of `argv[0]` is needed. If a program entry point was provided, the second element will be the absolute path to it. The remaining elements are additional command-line arguments.

For example, assuming the following script for `process-args.js`:

```mjs
import { argv } from 'node:process';

// print process.argv
argv.forEach((val, index) => {
  console.log(`${index}: ${val}`);
});
```

```cjs
const { argv } = require('node:process');

// print process.argv
argv.forEach((val, index) => {
  console.log(`${index}: ${val}`);
});
```
Launching the Node.js process as:

```console
$ node process-args.js one two=three four
```

Would generate the output:

```text
0: /usr/local/bin/node
1: /Users/mjr/work/node/process-args.js
2: one
3: two=three
4: four
```

process.argv0#
- Type:<string>
The `process.argv0` property stores a read-only copy of the original value of `argv[0]` passed when Node.js starts.

```console
$ bash -c 'exec -a customArgv0 ./node'
> process.argv[0]
'/Volumes/code/external/node/out/Release/node'
> process.argv0
'customArgv0'
```

process.availableMemory()#
History
| Version | Changes |
|---|---|
| v24.0.0, v22.16.0 | Change stability index for this feature from Experimental to Stable. |
| v22.0.0, v20.13.0 | Added in: v22.0.0, v20.13.0 |
- Type:<number>
Gets the amount of free memory that is still available to the process (in bytes).

See `uv_get_available_memory` for more information.
process.channel#
History
| Version | Changes |
|---|---|
| v14.0.0 | The object no longer accidentally exposes native C++ bindings. |
| v7.1.0 | Added in: v7.1.0 |
- Type:<Object>
If the Node.js process was spawned with an IPC channel (see the Child Process documentation), the `process.channel` property is a reference to the IPC channel. If no IPC channel exists, this property is `undefined`.
process.channel.ref()#
This method makes the IPC channel keep the event loop of the process running if `.unref()` has been called before.

Typically, this is managed through the number of `'disconnect'` and `'message'` listeners on the `process` object. However, this method can be used to explicitly request a specific behavior.
process.channel.unref()#
This method makes the IPC channel not keep the event loop of the process running, and lets it finish even while the channel is open.

Typically, this is managed through the number of `'disconnect'` and `'message'` listeners on the `process` object. However, this method can be used to explicitly request a specific behavior.
process.chdir(directory)#
directory<string>
The `process.chdir()` method changes the current working directory of the Node.js process or throws an exception if doing so fails (for instance, if the specified `directory` does not exist).

```mjs
import { chdir, cwd } from 'node:process';

console.log(`Starting directory: ${cwd()}`);
try {
  chdir('/tmp');
  console.log(`New directory: ${cwd()}`);
} catch (err) {
  console.error(`chdir: ${err}`);
}
```

```cjs
const { chdir, cwd } = require('node:process');

console.log(`Starting directory: ${cwd()}`);
try {
  chdir('/tmp');
  console.log(`New directory: ${cwd()}`);
} catch (err) {
  console.error(`chdir: ${err}`);
}
```
This feature is not available inWorker threads.
process.config#
History
| Version | Changes |
|---|---|
| v19.0.0 | The |
| v16.0.0 | Modifying process.config has been deprecated. |
| v0.7.7 | Added in: v0.7.7 |
- Type:<Object>
The `process.config` property returns a frozen `Object` containing the JavaScript representation of the configure options used to compile the current Node.js executable. This is the same as the `config.gypi` file that was produced when running the `./configure` script.
An example of the possible output looks like:
```js
{
  target_defaults: {
    cflags: [],
    default_configuration: 'Release',
    defines: [],
    include_dirs: [],
    libraries: []
  },
  variables: {
    host_arch: 'x64',
    napi_build_version: 5,
    node_install_npm: 'true',
    node_prefix: '',
    node_shared_cares: 'false',
    node_shared_http_parser: 'false',
    node_shared_libuv: 'false',
    node_shared_zlib: 'false',
    node_use_openssl: 'true',
    node_shared_openssl: 'false',
    target_arch: 'x64',
    v8_use_snapshot: 1
  }
}
```

process.connected#
- Type:<boolean>
If the Node.js process is spawned with an IPC channel (see the Child Process and Cluster documentation), the `process.connected` property will return `true` so long as the IPC channel is connected and will return `false` after `process.disconnect()` is called.

Once `process.connected` is `false`, it is no longer possible to send messages over the IPC channel using `process.send()`.
process.constrainedMemory()#
History
| Version | Changes |
|---|---|
| v24.0.0, v22.16.0 | Change stability index for this feature from Experimental to Stable. |
| v22.0.0, v20.13.0 | Aligned return value with |
| v19.6.0, v18.15.0 | Added in: v19.6.0, v18.15.0 |
- Type:<number>
Gets the amount of memory available to the process (in bytes) based on limits imposed by the OS. If there is no such constraint, or the constraint is unknown, `0` is returned.

See `uv_get_constrained_memory` for more information.
process.cpuUsage([previousValue])#
The `process.cpuUsage()` method returns the user and system CPU time usage of the current process, in an object with properties `user` and `system`, whose values are microsecond values (millionths of a second). These values measure time spent in user and system code respectively, and may end up being greater than actual elapsed time if multiple CPU cores are performing work for this process.

The result of a previous call to `process.cpuUsage()` can be passed as the argument to the function, to get a diff reading.
```mjs
import { cpuUsage } from 'node:process';

const startUsage = cpuUsage();
// { user: 38579, system: 6986 }

// spin the CPU for 500 milliseconds
const now = Date.now();
while (Date.now() - now < 500);

console.log(cpuUsage(startUsage));
// { user: 514883, system: 11226 }
```

```cjs
const { cpuUsage } = require('node:process');

const startUsage = cpuUsage();
// { user: 38579, system: 6986 }

// spin the CPU for 500 milliseconds
const now = Date.now();
while (Date.now() - now < 500);

console.log(cpuUsage(startUsage));
// { user: 514883, system: 11226 }
```
process.cwd()#
- Returns:<string>
The `process.cwd()` method returns the current working directory of the Node.js process.

```mjs
import { cwd } from 'node:process';

console.log(`Current directory: ${cwd()}`);
```

```cjs
const { cwd } = require('node:process');

console.log(`Current directory: ${cwd()}`);
```
process.debugPort#
- Type:<number>
The port used by the Node.js debugger when enabled.
```mjs
import process from 'node:process';

process.debugPort = 5858;
```

```cjs
const process = require('node:process');

process.debugPort = 5858;
```
process.disconnect()#
If the Node.js process is spawned with an IPC channel (see the Child Process and Cluster documentation), the `process.disconnect()` method will close the IPC channel to the parent process, allowing the child process to exit gracefully once there are no other connections keeping it alive.

The effect of calling `process.disconnect()` is the same as calling `ChildProcess.disconnect()` from the parent process.

If the Node.js process was not spawned with an IPC channel, `process.disconnect()` will be `undefined`.
process.dlopen(module, filename[, flags])#
History
| Version | Changes |
|---|---|
| v9.0.0 | Added support for the |
| v0.1.16 | Added in: v0.1.16 |
- module <Object>
- filename <string>
- flags <os.constants.dlopen> Default: os.constants.dlopen.RTLD_LAZY
The process.dlopen() method allows dynamically loading shared objects. It is primarily used by require() to load C++ Addons, and should not be used directly, except in special cases. In other words, require() should be preferred over process.dlopen() unless there are specific reasons such as custom dlopen flags or loading from ES modules.
The flags argument is an integer that allows specifying dlopen behavior. See the os.constants.dlopen documentation for details.
An important requirement when calling process.dlopen() is that the module instance must be passed. Functions exported by the C++ Addon are then accessible via module.exports.
The example below shows how to load a C++ Addon, named local.node, that exports a foo function. All the symbols are loaded before the call returns, by passing the RTLD_NOW constant. In this example the constant is assumed to be available.

```mjs
import { dlopen } from 'node:process';
import { constants } from 'node:os';
import { fileURLToPath } from 'node:url';

const module = { exports: {} };
dlopen(module, fileURLToPath(new URL('local.node', import.meta.url)),
       constants.dlopen.RTLD_NOW);
module.exports.foo();
```

```cjs
const { dlopen } = require('node:process');
const { constants } = require('node:os');
const { join } = require('node:path');

const module = { exports: {} };
dlopen(module, join(__dirname, 'local.node'), constants.dlopen.RTLD_NOW);
module.exports.foo();
```
process.emitWarning(warning[, options])#
- warning <string> | <Error> The warning to emit.
- options <Object>
  - type <string> When warning is a String, type is the name to use for the type of warning being emitted. Default: 'Warning'.
  - code <string> A unique identifier for the warning instance being emitted.
  - ctor <Function> When warning is a String, ctor is an optional function used to limit the generated stack trace. Default: process.emitWarning.
  - detail <string> Additional text to include with the error.
The process.emitWarning() method can be used to emit custom or application-specific process warnings. These can be listened for by adding a handler to the 'warning' event.

```mjs
import { emitWarning } from 'node:process';

// Emit a warning with a code and additional detail.
emitWarning('Something happened!', {
  code: 'MY_WARNING',
  detail: 'This is some additional information',
});
// Emits:
// (node:56338) [MY_WARNING] Warning: Something happened!
// This is some additional information
```

```cjs
const { emitWarning } = require('node:process');

// Emit a warning with a code and additional detail.
emitWarning('Something happened!', {
  code: 'MY_WARNING',
  detail: 'This is some additional information',
});
// Emits:
// (node:56338) [MY_WARNING] Warning: Something happened!
// This is some additional information
```

In this example, an Error object is generated internally by process.emitWarning() and passed through to the 'warning' handler.

```mjs
import process from 'node:process';

process.on('warning', (warning) => {
  console.warn(warning.name);    // 'Warning'
  console.warn(warning.message); // 'Something happened!'
  console.warn(warning.code);    // 'MY_WARNING'
  console.warn(warning.stack);   // Stack trace
  console.warn(warning.detail);  // 'This is some additional information'
});
```

```cjs
const process = require('node:process');

process.on('warning', (warning) => {
  console.warn(warning.name);    // 'Warning'
  console.warn(warning.message); // 'Something happened!'
  console.warn(warning.code);    // 'MY_WARNING'
  console.warn(warning.stack);   // Stack trace
  console.warn(warning.detail);  // 'This is some additional information'
});
```

If warning is passed as an Error object, the options argument is ignored.
process.emitWarning(warning[, type[, code]][, ctor])#
- warning <string> | <Error> The warning to emit.
- type <string> When warning is a String, type is the name to use for the type of warning being emitted. Default: 'Warning'.
- code <string> A unique identifier for the warning instance being emitted.
- ctor <Function> When warning is a String, ctor is an optional function used to limit the generated stack trace. Default: process.emitWarning.
The process.emitWarning() method can be used to emit custom or application-specific process warnings. These can be listened for by adding a handler to the 'warning' event.

```mjs
import { emitWarning } from 'node:process';

// Emit a warning using a string.
emitWarning('Something happened!');
// Emits: (node: 56338) Warning: Something happened!
```

```cjs
const { emitWarning } = require('node:process');

// Emit a warning using a string.
emitWarning('Something happened!');
// Emits: (node: 56338) Warning: Something happened!
```

```mjs
import { emitWarning } from 'node:process';

// Emit a warning using a string and a type.
emitWarning('Something Happened!', 'CustomWarning');
// Emits: (node:56338) CustomWarning: Something Happened!
```

```cjs
const { emitWarning } = require('node:process');

// Emit a warning using a string and a type.
emitWarning('Something Happened!', 'CustomWarning');
// Emits: (node:56338) CustomWarning: Something Happened!
```

```mjs
import { emitWarning } from 'node:process';

emitWarning('Something happened!', 'CustomWarning', 'WARN001');
// Emits: (node:56338) [WARN001] CustomWarning: Something happened!
```

```cjs
const { emitWarning } = require('node:process');

emitWarning('Something happened!', 'CustomWarning', 'WARN001');
// Emits: (node:56338) [WARN001] CustomWarning: Something happened!
```

In each of the previous examples, an Error object is generated internally by process.emitWarning() and passed through to the 'warning' handler.

```mjs
import process from 'node:process';

process.on('warning', (warning) => {
  console.warn(warning.name);
  console.warn(warning.message);
  console.warn(warning.code);
  console.warn(warning.stack);
});
```

```cjs
const process = require('node:process');

process.on('warning', (warning) => {
  console.warn(warning.name);
  console.warn(warning.message);
  console.warn(warning.code);
  console.warn(warning.stack);
});
```

If warning is passed as an Error object, it will be passed through to the 'warning' event handler unmodified (and the optional type, code and ctor arguments will be ignored):

```mjs
import { emitWarning } from 'node:process';

// Emit a warning using an Error object.
const myWarning = new Error('Something happened!');
// Use the Error name property to specify the type name
myWarning.name = 'CustomWarning';
myWarning.code = 'WARN001';

emitWarning(myWarning);
// Emits: (node:56338) [WARN001] CustomWarning: Something happened!
```

```cjs
const { emitWarning } = require('node:process');

// Emit a warning using an Error object.
const myWarning = new Error('Something happened!');
// Use the Error name property to specify the type name
myWarning.name = 'CustomWarning';
myWarning.code = 'WARN001';

emitWarning(myWarning);
// Emits: (node:56338) [WARN001] CustomWarning: Something happened!
```
A TypeError is thrown if warning is anything other than a string or Error object.
While process warnings use Error objects, the process warning mechanism is not a replacement for normal error handling mechanisms.
The following additional handling is implemented if the warning type is 'DeprecationWarning':
- If the --throw-deprecation command-line flag is used, the deprecation warning is thrown as an exception rather than being emitted as an event.
- If the --no-deprecation command-line flag is used, the deprecation warning is suppressed.
- If the --trace-deprecation command-line flag is used, the deprecation warning is printed to stderr along with the full stack trace.
Avoiding duplicate warnings#
As a best practice, warnings should be emitted only once per process. To do so, place the emitWarning() call behind a boolean.

```mjs
import { emitWarning } from 'node:process';

function emitMyWarning() {
  if (!emitMyWarning.warned) {
    emitMyWarning.warned = true;
    emitWarning('Only warn once!');
  }
}
emitMyWarning();
// Emits: (node: 56339) Warning: Only warn once!
emitMyWarning();
// Emits nothing
```

```cjs
const { emitWarning } = require('node:process');

function emitMyWarning() {
  if (!emitMyWarning.warned) {
    emitMyWarning.warned = true;
    emitWarning('Only warn once!');
  }
}
emitMyWarning();
// Emits: (node: 56339) Warning: Only warn once!
emitMyWarning();
// Emits nothing
```
process.env#
History
| Version | Changes |
|---|---|
| v11.14.0 | Worker threads will now use a copy of the parent thread's |
| v10.0.0 | Implicit conversion of variable value to string is deprecated. |
| v0.1.27 | Added in: v0.1.27 |
- Type:<Object>
The process.env property returns an object containing the user environment. See environ(7).
An example of this object looks like:
```js
{
  TERM: 'xterm-256color',
  SHELL: '/usr/local/bin/bash',
  USER: 'maciej',
  PATH: '~/.bin/:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin',
  PWD: '/Users/maciej',
  EDITOR: 'vim',
  SHLVL: '1',
  HOME: '/Users/maciej',
  LOGNAME: 'maciej',
  _: '/usr/local/bin/node'
}
```

It is possible to modify this object, but such modifications will not be reflected outside the Node.js process, or (unless explicitly requested) to other Worker threads. In other words, the following example would not work:

```console
$ node -e 'process.env.foo = "bar"' && echo $foo
```

While the following will:

```mjs
import { env } from 'node:process';

env.foo = 'bar';
console.log(env.foo);
```

```cjs
const { env } = require('node:process');

env.foo = 'bar';
console.log(env.foo);
```

Assigning a property on process.env will implicitly convert the value to a string. This behavior is deprecated. Future versions of Node.js may throw an error when the value is not a string, number, or boolean.

```mjs
import { env } from 'node:process';

env.test = null;
console.log(env.test);
// => 'null'
env.test = undefined;
console.log(env.test);
// => 'undefined'
```

```cjs
const { env } = require('node:process');

env.test = null;
console.log(env.test);
// => 'null'
env.test = undefined;
console.log(env.test);
// => 'undefined'
```

Use delete to delete a property from process.env.

```mjs
import { env } from 'node:process';

env.TEST = 1;
delete env.TEST;
console.log(env.TEST);
// => undefined
```

```cjs
const { env } = require('node:process');

env.TEST = 1;
delete env.TEST;
console.log(env.TEST);
// => undefined
```

On Windows operating systems, environment variables are case-insensitive.

```mjs
import { env } from 'node:process';

env.TEST = 1;
console.log(env.test);
// => 1
```

```cjs
const { env } = require('node:process');

env.TEST = 1;
console.log(env.test);
// => 1
```

Unless explicitly specified when creating a Worker instance, each Worker thread has its own copy of process.env, based on its parent thread's process.env, or whatever was specified as the env option to the Worker constructor. Changes to process.env will not be visible across Worker threads, and only the main thread can make changes that are visible to the operating system or to native add-ons. On Windows, a copy of process.env on a Worker instance operates in a case-sensitive manner unlike the main thread.
process.execArgv#
- Type:<string[]>
The process.execArgv property returns the set of Node.js-specific command-line options passed when the Node.js process was launched. These options do not appear in the array returned by the process.argv property, and do not include the Node.js executable, the name of the script, or any options following the script name. These options are useful in order to spawn child processes with the same execution environment as the parent.

```console
$ node --icu-data-dir=./foo --require ./bar.js script.js --version
```

Results in process.execArgv:

```js
["--icu-data-dir=./foo", "--require", "./bar.js"]
```

And process.argv:

```js
['/usr/local/bin/node', 'script.js', '--version']
```

Refer to Worker constructor for the detailed behavior of worker threads with this property.
process.execPath#
- Type:<string>
The process.execPath property returns the absolute pathname of the executable that started the Node.js process. Symbolic links, if any, are resolved.

```js
'/usr/local/bin/node'
```

process.execve(file[, args[, env]])#
- file <string> The name or path of the executable file to run.
- args <string[]> List of string arguments. No argument can contain a null byte (\u0000).
- env <Object> Environment key-value pairs. No key or value can contain a null byte (\u0000). Default: process.env.
Replaces the current process with a new process.
This is achieved by using the execve POSIX function and therefore no memory or other resources from the current process are preserved, except for the standard input, standard output, and standard error file descriptors.
All other resources are discarded by the system when the processes are swapped, without triggering any exit or close events and without running any cleanup handlers.
This function never returns, unless an error occurs.
This function is not available on Windows or IBM i.
process.exit([code])#
History
| Version | Changes |
|---|---|
| v20.0.0 | Only accepts a code of type number, or of type string if it represents an integer. |
| v0.1.13 | Added in: v0.1.13 |
- code <integer> | <string> | <null> | <undefined> The exit code. For string type, only integer strings (e.g., '1') are allowed. Default: 0.
The process.exit() method instructs Node.js to terminate the process synchronously with an exit status of code. If code is omitted, exit uses either the 'success' code 0 or the value of process.exitCode if it has been set. Node.js will not terminate until all the 'exit' event listeners are called.
To exit with a 'failure' code:
```mjs
import { exit } from 'node:process';

exit(1);
```

```cjs
const { exit } = require('node:process');

exit(1);
```

The shell that executed Node.js should see the exit code as 1.
Calling process.exit() will force the process to exit as quickly as possible even if there are still asynchronous operations pending that have not yet completed fully, including I/O operations to process.stdout and process.stderr.
In most situations, it is not actually necessary to call process.exit() explicitly. The Node.js process will exit on its own if there is no additional work pending in the event loop. The process.exitCode property can be set to tell the process which exit code to use when the process exits gracefully.
For instance, the following example illustrates a misuse of the process.exit() method that could lead to data printed to stdout being truncated and lost:

```mjs
import { exit } from 'node:process';

// This is an example of what *not* to do:
if (someConditionNotMet()) {
  printUsageToStdout();
  exit(1);
}
```

```cjs
const { exit } = require('node:process');

// This is an example of what *not* to do:
if (someConditionNotMet()) {
  printUsageToStdout();
  exit(1);
}
```

The reason this is problematic is because writes to process.stdout in Node.js are sometimes asynchronous and may occur over multiple ticks of the Node.js event loop. Calling process.exit(), however, forces the process to exit before those additional writes to stdout can be performed.
Rather than calling process.exit() directly, the code should set process.exitCode and allow the process to exit naturally by avoiding scheduling any additional work for the event loop:

```mjs
import process from 'node:process';

// How to properly set the exit code while letting
// the process exit gracefully.
if (someConditionNotMet()) {
  printUsageToStdout();
  process.exitCode = 1;
}
```

```cjs
const process = require('node:process');

// How to properly set the exit code while letting
// the process exit gracefully.
if (someConditionNotMet()) {
  printUsageToStdout();
  process.exitCode = 1;
}
```

If it is necessary to terminate the Node.js process due to an error condition, throwing an uncaught error and allowing the process to terminate accordingly is safer than calling process.exit().
In Worker threads, this function stops the current thread rather than the current process.
process.exitCode#
History
| Version | Changes |
|---|---|
| v20.0.0 | Only accepts a code of type number, or of type string if it represents an integer. |
| v0.11.8 | Added in: v0.11.8 |
- Type: <integer> | <string> | <null> | <undefined> The exit code. For string type, only integer strings (e.g., '1') are allowed. Default: undefined.
A number which will be the process exit code, when the process either exits gracefully, or is exited via process.exit() without specifying a code.
The value of process.exitCode can be updated by either assigning a value to process.exitCode or by passing an argument to process.exit():

```console
$ node -e 'process.exitCode = 9'; echo $?
9
$ node -e 'process.exit(42)'; echo $?
42
$ node -e 'process.exitCode = 9; process.exit(42)'; echo $?
42
```

The value can also be set implicitly by Node.js when unrecoverable errors occur (such as encountering an unsettled top-level await). However, explicit manipulations of the exit code always take precedence over implicit ones:

```console
$ node --input-type=module -e 'await new Promise(() => {})'; echo $?
13
$ node --input-type=module -e 'process.exitCode = 9; await new Promise(() => {})'; echo $?
9
```

process.features.cached_builtins#
- Type:<boolean>
A boolean value that is true if the current Node.js build is caching builtin modules.
process.features.debug#
- Type:<boolean>
A boolean value that is true if the current Node.js build is a debug build.
process.features.inspector#
- Type:<boolean>
A boolean value that is true if the current Node.js build includes the inspector.
process.features.ipv6#
- Type:<boolean>
A boolean value that is true if the current Node.js build includes support for IPv6.
Since all Node.js builds have IPv6 support, this value is always true.
process.features.require_module#
- Type:<boolean>
A boolean value that is true if the current Node.js build supports loading ECMAScript modules using require().
process.features.tls#
- Type:<boolean>
A boolean value that is true if the current Node.js build includes support for TLS.
process.features.tls_alpn#
Deprecated: use process.features.tls instead.
- Type: <boolean>
A boolean value that is true if the current Node.js build includes support for ALPN in TLS.
In Node.js 11.0.0 and later versions, the OpenSSL dependencies feature unconditional ALPN support. This value is therefore identical to that of process.features.tls.
process.features.tls_ocsp#
Deprecated: use process.features.tls instead.
- Type: <boolean>
A boolean value that is true if the current Node.js build includes support for OCSP in TLS.
In Node.js 11.0.0 and later versions, the OpenSSL dependencies feature unconditional OCSP support. This value is therefore identical to that of process.features.tls.
process.features.tls_sni#
Deprecated: use process.features.tls instead.
- Type: <boolean>
A boolean value that is true if the current Node.js build includes support for SNI in TLS.
In Node.js 11.0.0 and later versions, the OpenSSL dependencies feature unconditional SNI support. This value is therefore identical to that of process.features.tls.
process.features.typescript#
History
| Version | Changes |
|---|---|
| v25.2.0 | Type stripping is now stable. |
| v23.0.0, v22.10.0 | Added in: v23.0.0, v22.10.0 |
A value that is "strip" by default, "transform" if Node.js is run with --experimental-transform-types, and false if Node.js is run with --no-strip-types.
process.features.uv#
- Type:<boolean>
A boolean value that is true if the current Node.js build includes support for libuv.
Since it's not possible to build Node.js without libuv, this value is always true.
process.finalization.register(ref, callback)#
- ref <Object> | <Function> The reference to the resource that is being tracked.
- callback <Function> The callback function to be called when the resource is finalized.
  - ref <Object> | <Function> The reference to the resource that is being tracked.
  - event <string> The event that triggered the finalization. Defaults to 'exit'.
This function registers a callback to be called when the process emits the exit event if the ref object was not garbage collected. If the object ref was garbage collected before the exit event is emitted, the callback will be removed from the finalization registry, and it will not be called on process exit.
Inside the callback you can release the resources allocated by the ref object. Be aware that all limitations applied to the beforeExit event are also applied to the callback function; this means that there is a possibility that the callback will not be called under special circumstances.
The idea of this function is to help you free up resources when the process starts exiting, while also letting the object be garbage collected if it is no longer being used.
For example, you can register an object that contains a buffer and ensure that the buffer is released when the process exits. If the object is garbage collected before the process exits, the buffer no longer needs to be released, so in this case the callback is simply removed from the finalization registry.

```cjs
const { finalization } = require('node:process');

// Please make sure that the function passed to finalization.register()
// does not create a closure around unnecessary objects.
function onFinalize(obj, event) {
  // You can do whatever you want with the object
  obj.dispose();
}

function setup() {
  // This object can be safely garbage collected,
  // and the resulting shutdown function will not be called.
  // There are no leaks.
  const myDisposableObject = {
    dispose() {
      // Free your resources synchronously
    },
  };

  finalization.register(myDisposableObject, onFinalize);
}

setup();
```

```mjs
import { finalization } from 'node:process';

// Please make sure that the function passed to finalization.register()
// does not create a closure around unnecessary objects.
function onFinalize(obj, event) {
  // You can do whatever you want with the object
  obj.dispose();
}

function setup() {
  // This object can be safely garbage collected,
  // and the resulting shutdown function will not be called.
  // There are no leaks.
  const myDisposableObject = {
    dispose() {
      // Free your resources synchronously
    },
  };

  finalization.register(myDisposableObject, onFinalize);
}

setup();
```
The code above relies on the following assumptions:
- arrow functions are avoided
- regular functions are recommended to be within the global context (root)
Regular functions could reference the context where the obj lives, making the obj not garbage collectible.
Arrow functions will hold the previous context. Consider, for example:

```js
class Test {
  constructor() {
    finalization.register(this, (ref) => ref.dispose());

    // Even something like this is highly discouraged
    // finalization.register(this, () => this.dispose());
  }
  dispose() {}
}
```

It is very unlikely (though not impossible) that this object will be garbage collected, but if it is not, dispose will be called when process.exit is called.
Be careful and avoid relying on this feature for the disposal of critical resources,as it is not guaranteed that the callback will be called under all circumstances.
process.finalization.registerBeforeExit(ref, callback)#
- ref <Object> | <Function> The reference to the resource that is being tracked.
- callback <Function> The callback function to be called when the resource is finalized.
  - ref <Object> | <Function> The reference to the resource that is being tracked.
  - event <string> The event that triggered the finalization. Defaults to 'beforeExit'.
This function behaves exactly like register(), except that the callback will be called when the process emits the beforeExit event if the ref object was not garbage collected.
Be aware that all limitations applied to the beforeExit event are also applied to the callback function; this means that there is a possibility that the callback will not be called under special circumstances.
process.finalization.unregister(ref)#
- ref <Object> | <Function> The reference to the resource that was registered previously.
This function removes the registration of the object from the finalization registry, so the callback will no longer be called.

```cjs
const { finalization } = require('node:process');

// Please make sure that the function passed to finalization.register()
// does not create a closure around unnecessary objects.
function onFinalize(obj, event) {
  // You can do whatever you want with the object
  obj.dispose();
}

function setup() {
  // This object can be safely garbage collected,
  // and the resulting shutdown function will not be called.
  // There are no leaks.
  const myDisposableObject = {
    dispose() {
      // Free your resources synchronously
    },
  };

  finalization.register(myDisposableObject, onFinalize);

  // Do something

  myDisposableObject.dispose();
  finalization.unregister(myDisposableObject);
}

setup();
```

```mjs
import { finalization } from 'node:process';

// Please make sure that the function passed to finalization.register()
// does not create a closure around unnecessary objects.
function onFinalize(obj, event) {
  // You can do whatever you want with the object
  obj.dispose();
}

function setup() {
  // This object can be safely garbage collected,
  // and the resulting shutdown function will not be called.
  // There are no leaks.
  const myDisposableObject = {
    dispose() {
      // Free your resources synchronously
    },
  };

  finalization.register(myDisposableObject, onFinalize);

  // Do something

  myDisposableObject.dispose();
  finalization.unregister(myDisposableObject);
}

setup();
```
process.getActiveResourcesInfo()#
History
| Version | Changes |
|---|---|
| v24.0.0, v22.16.0 | Change stability index for this feature from Experimental to Stable. |
| v17.3.0, v16.14.0 | Added in: v17.3.0, v16.14.0 |
- Returns:<string[]>
Theprocess.getActiveResourcesInfo() method returns an array of stringscontaining the types of the active resources that are currently keeping theevent loop alive.
```mjs
import { getActiveResourcesInfo } from 'node:process';
import { setTimeout } from 'node:timers';

console.log('Before:', getActiveResourcesInfo());
setTimeout(() => {}, 1000);
console.log('After:', getActiveResourcesInfo());
// Prints:
//   Before: [ 'CloseReq', 'TTYWrap', 'TTYWrap', 'TTYWrap' ]
//   After: [ 'CloseReq', 'TTYWrap', 'TTYWrap', 'TTYWrap', 'Timeout' ]
```

```cjs
const { getActiveResourcesInfo } = require('node:process');
const { setTimeout } = require('node:timers');

console.log('Before:', getActiveResourcesInfo());
setTimeout(() => {}, 1000);
console.log('After:', getActiveResourcesInfo());
// Prints:
//   Before: [ 'TTYWrap', 'TTYWrap', 'TTYWrap' ]
//   After: [ 'TTYWrap', 'TTYWrap', 'TTYWrap', 'Timeout' ]
```
process.getBuiltinModule(id)#
id<string> ID of the built-in module being requested.- Returns:<Object> |<undefined>
process.getBuiltinModule(id) provides a way to load built-in modules in a globally available function. ES Modules that need to support other environments can use it to conditionally load a Node.js built-in when run in Node.js, without having to deal with the resolution error that can be thrown by import in a non-Node.js environment or having to use dynamic import(), which either turns the module into an asynchronous module or turns a synchronous API into an asynchronous one.

```mjs
if (globalThis.process?.getBuiltinModule) {
  // Run in Node.js, use the Node.js fs module.
  const fs = globalThis.process.getBuiltinModule('fs');
  // If `require()` is needed to load user-modules, use createRequire()
  const module = globalThis.process.getBuiltinModule('module');
  const require = module.createRequire(import.meta.url);
  const foo = require('foo');
}
```

If id specifies a built-in module available in the current Node.js process, the process.getBuiltinModule(id) method returns the corresponding built-in module. If id does not correspond to any built-in module, undefined is returned.
process.getBuiltinModule(id) accepts built-in module IDs that are recognized by module.isBuiltin(id). Some built-in modules must be loaded with the node: prefix; see built-in modules with mandatory node: prefix. The references returned by process.getBuiltinModule(id) always point to the built-in module corresponding to id even if users modify require.cache so that require(id) returns something else.
process.getegid()#
The process.getegid() method returns the numerical effective group identity of the Node.js process. (See getegid(2).)

```mjs
import process from 'node:process';

if (process.getegid) {
  console.log(`Current gid: ${process.getegid()}`);
}
```

```cjs
const process = require('node:process');

if (process.getegid) {
  console.log(`Current gid: ${process.getegid()}`);
}
```
This function is only available on POSIX platforms (i.e. not Windows orAndroid).
process.geteuid()#
- Returns:<Object>
The process.geteuid() method returns the numerical effective user identity of the process. (See geteuid(2).)

```mjs
import process from 'node:process';

if (process.geteuid) {
  console.log(`Current uid: ${process.geteuid()}`);
}
```

```cjs
const process = require('node:process');

if (process.geteuid) {
  console.log(`Current uid: ${process.geteuid()}`);
}
```
This function is only available on POSIX platforms (i.e. not Windows orAndroid).
process.getgid()#
- Returns:<Object>
The process.getgid() method returns the numerical group identity of the process. (See getgid(2).)

```mjs
import process from 'node:process';

if (process.getgid) {
  console.log(`Current gid: ${process.getgid()}`);
}
```

```cjs
const process = require('node:process');

if (process.getgid) {
  console.log(`Current gid: ${process.getgid()}`);
}
```
This function is only available on POSIX platforms (i.e. not Windows orAndroid).
process.getgroups()#
- Returns:<integer[]>
The process.getgroups() method returns an array with the supplementary group IDs. POSIX leaves it unspecified if the effective group ID is included but Node.js ensures it always is.

```mjs
import process from 'node:process';

if (process.getgroups) {
  console.log(process.getgroups()); // [ 16, 21, 297 ]
}
```

```cjs
const process = require('node:process');

if (process.getgroups) {
  console.log(process.getgroups()); // [ 16, 21, 297 ]
}
```
This function is only available on POSIX platforms (i.e. not Windows orAndroid).
process.getuid()#
- Returns:<integer>
The process.getuid() method returns the numeric user identity of the process. (See getuid(2).)

```mjs
import process from 'node:process';

if (process.getuid) {
  console.log(`Current uid: ${process.getuid()}`);
}
```

```cjs
const process = require('node:process');

if (process.getuid) {
  console.log(`Current uid: ${process.getuid()}`);
}
```

This function is not available on Windows.
process.hasUncaughtExceptionCaptureCallback()#
- Returns:<boolean>
Indicates whether a callback has been set using process.setUncaughtExceptionCaptureCallback().
process.hrtime([time])#
Deprecated: use process.hrtime.bigint() instead.
- time <integer[]> The result of a previous call to process.hrtime()
- Returns: <integer[]>
This is the legacy version of process.hrtime.bigint() before bigint was introduced in JavaScript.
The process.hrtime() method returns the current high-resolution real time in a [seconds, nanoseconds] tuple Array, where nanoseconds is the remaining part of the real time that can't be represented in second precision.
time is an optional parameter that must be the result of a previous process.hrtime() call to diff with the current time. If the parameter passed in is not a tuple Array, a TypeError will be thrown. Passing in a user-defined array instead of the result of a previous call to process.hrtime() will lead to undefined behavior.
These times are relative to an arbitrary time in the past, and not related to the time of day and therefore not subject to clock drift. The primary use is for measuring performance between intervals:

```mjs
import { hrtime } from 'node:process';

const NS_PER_SEC = 1e9;
const time = hrtime();
// [ 1800216, 25 ]

setTimeout(() => {
  const diff = hrtime(time);
  // [ 1, 552 ]

  console.log(`Benchmark took ${diff[0] * NS_PER_SEC + diff[1]} nanoseconds`);
  // Benchmark took 1000000552 nanoseconds
}, 1000);
```

```cjs
const { hrtime } = require('node:process');

const NS_PER_SEC = 1e9;
const time = hrtime();
// [ 1800216, 25 ]

setTimeout(() => {
  const diff = hrtime(time);
  // [ 1, 552 ]

  console.log(`Benchmark took ${diff[0] * NS_PER_SEC + diff[1]} nanoseconds`);
  // Benchmark took 1000000552 nanoseconds
}, 1000);
```
process.hrtime.bigint()#
- Returns:<bigint>
The bigint version of the process.hrtime() method returning the current high-resolution real time in nanoseconds as a bigint.
Unlike process.hrtime(), it does not support an additional time argument since the difference can just be computed directly by subtraction of the two bigints.

```mjs
import { hrtime } from 'node:process';

const start = hrtime.bigint();
// 191051479007711n

setTimeout(() => {
  const end = hrtime.bigint();
  // 191052633396993n

  console.log(`Benchmark took ${end - start} nanoseconds`);
  // Benchmark took 1154389282 nanoseconds
}, 1000);
```

```cjs
const { hrtime } = require('node:process');

const start = hrtime.bigint();
// 191051479007711n

setTimeout(() => {
  const end = hrtime.bigint();
  // 191052633396993n

  console.log(`Benchmark took ${end - start} nanoseconds`);
  // Benchmark took 1154389282 nanoseconds
}, 1000);
```
process.initgroups(user, extraGroup)#
- `user` <string> | <number> The user name or numeric identifier.
- `extraGroup` <string> | <number> A group name or numeric identifier.

The `process.initgroups()` method reads the `/etc/group` file and initializes the group access list, using all groups of which the user is a member. This is a privileged operation that requires that the Node.js process either have `root` access or the `CAP_SETGID` capability.
Use care when dropping privileges:
```mjs
import { getgroups, initgroups, setgid } from 'node:process';

console.log(getgroups());     // [ 0 ]
initgroups('nodeuser', 1000); // switch user
console.log(getgroups());     // [ 27, 30, 46, 1000, 0 ]
setgid(1000);                 // drop root gid
console.log(getgroups());     // [ 27, 30, 46, 1000 ]
```

```cjs
const { getgroups, initgroups, setgid } = require('node:process');

console.log(getgroups());     // [ 0 ]
initgroups('nodeuser', 1000); // switch user
console.log(getgroups());     // [ 27, 30, 46, 1000, 0 ]
setgid(1000);                 // drop root gid
console.log(getgroups());     // [ 27, 30, 46, 1000 ]
```
This function is only available on POSIX platforms (i.e. not Windows or Android). This feature is not available in Worker threads.
process.kill(pid[, signal])#
- `pid` <number> A process ID
- `signal` <string> | <number> The signal to send, either as a string or number. **Default:** `'SIGTERM'`.

The `process.kill()` method sends the `signal` to the process identified by `pid`.

Signal names are strings such as `'SIGINT'` or `'SIGHUP'`. See Signal Events and kill(2) for more information.

This method will throw an error if the target `pid` does not exist. As a special case, a signal of `0` can be used to test for the existence of a process. Windows platforms will throw an error if the `pid` is used to kill a process group.

Even though the name of this function is `process.kill()`, it is really just a signal sender, like the `kill` system call. The signal sent may do something other than kill the target process.
```mjs
import process, { kill } from 'node:process';

process.on('SIGHUP', () => {
  console.log('Got SIGHUP signal.');
});

setTimeout(() => {
  console.log('Exiting.');
  process.exit(0);
}, 100);

kill(process.pid, 'SIGHUP');
```

```cjs
const process = require('node:process');

process.on('SIGHUP', () => {
  console.log('Got SIGHUP signal.');
});

setTimeout(() => {
  console.log('Exiting.');
  process.exit(0);
}, 100);

process.kill(process.pid, 'SIGHUP');
```
When `SIGUSR1` is received by a Node.js process, Node.js will start the debugger. See Signal Events.
process.loadEnvFile(path)#
History
| Version | Changes |
|---|---|
| v24.10.0 | This API is no longer experimental. |
| v21.7.0, v20.12.0 | Added in: v21.7.0, v20.12.0 |
- `path` <string> | <URL> | <Buffer> | <undefined> **Default:** `'./.env'`

Loads the `.env` file into `process.env`. Usage of `NODE_OPTIONS` in the `.env` file will not have any effect on Node.js.
```cjs
const { loadEnvFile } = require('node:process');
loadEnvFile();
```

```mjs
import { loadEnvFile } from 'node:process';
loadEnvFile();
```
process.mainModule#
Deprecated: Use `require.main` instead.

- Type: <Object>

The `process.mainModule` property provides an alternative way of retrieving `require.main`. The difference is that if the main module changes at runtime, `require.main` may still refer to the original main module in modules that were required before the change occurred. Generally, it's safe to assume that the two refer to the same module.

As with `require.main`, `process.mainModule` will be `undefined` if there is no entry script.
process.memoryUsage()#
History
| Version | Changes |
|---|---|
| v13.9.0, v12.17.0 | Added |
| v7.2.0 | Added |
| v0.1.16 | Added in: v0.1.16 |
- Returns: <Object>
Returns an object describing the memory usage of the Node.js process measured in bytes.
```mjs
import { memoryUsage } from 'node:process';

console.log(memoryUsage());
// Prints:
// {
//   rss: 4935680,
//   heapTotal: 1826816,
//   heapUsed: 650472,
//   external: 49879,
//   arrayBuffers: 9386
// }
```

```cjs
const { memoryUsage } = require('node:process');

console.log(memoryUsage());
// Prints:
// {
//   rss: 4935680,
//   heapTotal: 1826816,
//   heapUsed: 650472,
//   external: 49879,
//   arrayBuffers: 9386
// }
```
- `heapTotal` and `heapUsed` refer to V8's memory usage.
- `external` refers to the memory usage of C++ objects bound to JavaScript objects managed by V8.
- `rss`, Resident Set Size, is the amount of space occupied in the main memory device (that is a subset of the total allocated memory) for the process, including all C++ and JavaScript objects and code.
- `arrayBuffers` refers to memory allocated for `ArrayBuffer`s and `SharedArrayBuffer`s, including all Node.js `Buffer`s. This is also included in the `external` value. When Node.js is used as an embedded library, this value may be `0` because allocations for `ArrayBuffer`s may not be tracked in that case.

When using Worker threads, `rss` will be a value that is valid for the entire process, while the other fields will only refer to the current thread.

The `process.memoryUsage()` method iterates over each page to gather information about memory usage, which might be slow depending on the program's memory allocations.
A note on process memoryUsage#
On Linux or other systems where glibc is commonly used, an application may have sustained `rss` growth despite stable `heapTotal` due to fragmentation caused by the glibc `malloc` implementation. See nodejs/node#21973 on how to switch to an alternative `malloc` implementation to address the performance issue.
process.memoryUsage.rss()#
- Returns: <integer>
The `process.memoryUsage.rss()` method returns an integer representing the Resident Set Size (RSS) in bytes.

The Resident Set Size is the amount of space occupied in the main memory device (that is a subset of the total allocated memory) for the process, including all C++ and JavaScript objects and code.

This is the same value as the `rss` property provided by `process.memoryUsage()` but `process.memoryUsage.rss()` is faster.
```mjs
import { memoryUsage } from 'node:process';

console.log(memoryUsage.rss());
// 35655680
```

```cjs
const { memoryUsage } = require('node:process');

console.log(memoryUsage.rss());
// 35655680
```
process.nextTick(callback[, ...args])#
History
| Version | Changes |
|---|---|
| v22.7.0, v20.18.0 | Changed stability to Legacy. |
| v18.0.0 | Passing an invalid callback to the |
| v1.8.1 | Additional arguments after |
| v0.1.26 | Added in: v0.1.26 |
Legacy: Use `queueMicrotask()` instead.

- `callback` <Function>
- `...args` <any> Additional arguments to pass when invoking the `callback`

`process.nextTick()` adds `callback` to the "next tick queue". This queue is fully drained after the current operation on the JavaScript stack runs to completion and before the event loop is allowed to continue. It's possible to create an infinite loop if one were to recursively call `process.nextTick()`. See the Event Loop guide for more background.
```mjs
import { nextTick } from 'node:process';

console.log('start');
nextTick(() => {
  console.log('nextTick callback');
});
console.log('scheduled');
// Output:
// start
// scheduled
// nextTick callback
```

```cjs
const { nextTick } = require('node:process');

console.log('start');
nextTick(() => {
  console.log('nextTick callback');
});
console.log('scheduled');
// Output:
// start
// scheduled
// nextTick callback
```
This is important when developing APIs in order to give users the opportunity to assign event handlers after an object has been constructed but before any I/O has occurred:
```mjs
import { nextTick } from 'node:process';

function MyThing(options) {
  this.setupOptions(options);

  nextTick(() => {
    this.startDoingStuff();
  });
}

const thing = new MyThing();
thing.getReadyForStuff();

// thing.startDoingStuff() gets called now, not before.
```

```cjs
const { nextTick } = require('node:process');

function MyThing(options) {
  this.setupOptions(options);

  nextTick(() => {
    this.startDoingStuff();
  });
}

const thing = new MyThing();
thing.getReadyForStuff();

// thing.startDoingStuff() gets called now, not before.
```
It is very important for APIs to be either 100% synchronous or 100% asynchronous. Consider this example:

```js
// WARNING! DO NOT USE! BAD UNSAFE HAZARD!
function maybeSync(arg, cb) {
  if (arg) {
    cb();
    return;
  }

  fs.stat('file', cb);
}
```

This API is hazardous because in the following case:

```js
const maybeTrue = Math.random() > 0.5;

maybeSync(maybeTrue, () => {
  foo();
});

bar();
```

It is not clear whether `foo()` or `bar()` will be called first.
The following approach is much better:
```mjs
import fs from 'node:fs';
import { nextTick } from 'node:process';

function definitelyAsync(arg, cb) {
  if (arg) {
    nextTick(cb);
    return;
  }

  fs.stat('file', cb);
}
```

```cjs
const fs = require('node:fs');
const { nextTick } = require('node:process');

function definitelyAsync(arg, cb) {
  if (arg) {
    nextTick(cb);
    return;
  }

  fs.stat('file', cb);
}
```
When to use `queueMicrotask()` vs. `process.nextTick()`#

The `queueMicrotask()` API is an alternative to `process.nextTick()` that instead of using the "next tick queue" defers execution of a function using the same microtask queue used to execute the then, catch, and finally handlers of resolved promises.

Within Node.js, every time the "next tick queue" is drained, the microtask queue is drained immediately after.

So in CJS modules `process.nextTick()` callbacks are always run before `queueMicrotask()` ones. However, since ESM modules are already processed as part of the microtask queue, there `queueMicrotask()` callbacks are always executed before `process.nextTick()` ones, since Node.js is already in the process of draining the microtask queue.
```mjs
import { nextTick } from 'node:process';

Promise.resolve().then(() => console.log('resolve'));
queueMicrotask(() => console.log('microtask'));
nextTick(() => console.log('nextTick'));
// Output:
// resolve
// microtask
// nextTick
```

```cjs
const { nextTick } = require('node:process');

Promise.resolve().then(() => console.log('resolve'));
queueMicrotask(() => console.log('microtask'));
nextTick(() => console.log('nextTick'));
// Output:
// nextTick
// resolve
// microtask
```
For most userland use cases, the `queueMicrotask()` API provides a portable and reliable mechanism for deferring execution that works across multiple JavaScript platform environments and should be favored over `process.nextTick()`. In simple scenarios, `queueMicrotask()` can be a drop-in replacement for `process.nextTick()`.
```js
console.log('start');
queueMicrotask(() => {
  console.log('microtask callback');
});
console.log('scheduled');
// Output:
// start
// scheduled
// microtask callback
```

One noteworthy difference between the two APIs is that `process.nextTick()` allows specifying additional values that will be passed as arguments to the deferred function when it is called. Achieving the same result with `queueMicrotask()` requires using either a closure or a bound function:

```js
function deferred(a, b) {
  console.log('microtask', a + b);
}

console.log('start');
queueMicrotask(deferred.bind(undefined, 1, 2));
console.log('scheduled');
// Output:
// start
// scheduled
// microtask 3
```

There are minor differences in the way errors raised from within the next tick queue and microtask queue are handled. Errors thrown within a queued microtask callback should be handled within the queued callback when possible. If they are not, the `process.on('uncaughtException')` event handler can be used to capture and handle the errors.
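As a small illustrative sketch (not part of the original documentation), the following shows that an error thrown inside a queued microtask escapes the surrounding `try`/`catch` and must instead be handled in the callback or via `process.on('uncaughtException')`:

```js
// An error thrown from a queueMicrotask() callback cannot be caught by the
// try/catch that surrounds the queueMicrotask() call, because the callback
// runs later, after the current synchronous code has finished.
const seen = [];

process.on('uncaughtException', (err) => {
  seen.push(err.message);
});

try {
  queueMicrotask(() => { throw new Error('from microtask'); });
} catch {
  seen.push('sync catch'); // never reached
}

process.on('exit', () => {
  console.log(seen); // [ 'from microtask' ]
});
```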
When in doubt, unless the specific capabilities of `process.nextTick()` are needed, use `queueMicrotask()`.
process.noDeprecation#
- Type: <boolean>
The `process.noDeprecation` property indicates whether the `--no-deprecation` flag is set on the current Node.js process. See the documentation for the `'warning'` event and the `emitWarning()` method for more information about this flag's behavior.
process.permission#
- Type: <Object>
This API is available through the `--permission` flag.

`process.permission` is an object whose methods are used to manage permissions for the current process. Additional documentation is available in the Permission Model.
process.permission.has(scope[, reference])#
Verifies that the process is able to access the given scope and reference. If no reference is provided, a global scope is assumed, for instance, `process.permission.has('fs.read')` will check if the process has ALL file system read permissions.

The reference has a meaning based on the provided scope. For example, the reference when the scope is File System means files and folders.
The available scopes are:
- `fs` - All File System
- `fs.read` - File System read operations
- `fs.write` - File System write operations
- `child` - Child process spawning operations
- `worker` - Worker thread spawning operation
```js
// Check if the process has permission to read the README file
process.permission.has('fs.read', './README.md');
// Check if the process has read permission operations
process.permission.has('fs.read');
```

process.pid#
- Type: <integer>
The `process.pid` property returns the PID of the process.
```mjs
import { pid } from 'node:process';

console.log(`This process is pid ${pid}`);
```

```cjs
const { pid } = require('node:process');

console.log(`This process is pid ${pid}`);
```
process.platform#
- Type: <string>
The `process.platform` property returns a string identifying the operating system platform for which the Node.js binary was compiled.
Currently possible values are:
- `'aix'`
- `'darwin'`
- `'freebsd'`
- `'linux'`
- `'openbsd'`
- `'sunos'`
- `'win32'`
```mjs
import { platform } from 'node:process';

console.log(`This platform is ${platform}`);
```

```cjs
const { platform } = require('node:process');

console.log(`This platform is ${platform}`);
```
The value `'android'` may also be returned if Node.js is built on the Android operating system. However, Android support in Node.js is experimental.
process.ppid#
- Type: <integer>
The `process.ppid` property returns the PID of the parent of the current process.
```mjs
import { ppid } from 'node:process';

console.log(`The parent process is pid ${ppid}`);
```

```cjs
const { ppid } = require('node:process');

console.log(`The parent process is pid ${ppid}`);
```
process.ref(maybeRefable)#
- `maybeRefable` <any> An object that may be "refable".

An object is "refable" if it implements the Node.js "Refable protocol". Specifically, this means that the object implements the `Symbol.for('nodejs.ref')` and `Symbol.for('nodejs.unref')` methods. "Ref'd" objects will keep the Node.js event loop alive, while "unref'd" objects will not. Historically, this was implemented by using `ref()` and `unref()` methods directly on the objects. This pattern, however, is being deprecated in favor of the "Refable protocol" in order to better support Web Platform API types whose APIs cannot be modified to add `ref()` and `unref()` methods but still need to support that behavior.
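As an illustrative sketch only (the `KeepAlive` class and its timer-based implementation are invented for this example, not a Node.js API), an object implementing the protocol might look like:

```js
// A hypothetical "refable" object: it implements the two well-known symbol
// methods, using an interval timer to keep the event loop alive while ref'd.
class KeepAlive {
  #timer = null;

  [Symbol.for('nodejs.ref')]() {
    this.#timer ??= setInterval(() => {}, 1000);
  }

  [Symbol.for('nodejs.unref')]() {
    clearInterval(this.#timer);
    this.#timer = null;
  }
}

const resource = new KeepAlive();

// process.ref()/process.unref() are recent additions; guard for older Node.js.
if (typeof process.ref === 'function') {
  process.ref(resource);   // calls resource[Symbol.for('nodejs.ref')]()
  process.unref(resource); // calls resource[Symbol.for('nodejs.unref')]()
}
```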
process.release#
History
| Version | Changes |
|---|---|
| v4.2.0 | The |
| v3.0.0 | Added in: v3.0.0 |
- Type: <Object>
The `process.release` property returns an `Object` containing metadata related to the current release, including URLs for the source tarball and headers-only tarball.

`process.release` contains the following properties:

- `name` <string> A value that will always be `'node'`.
- `sourceUrl` <string> an absolute URL pointing to a `.tar.gz` file containing the source code of the current release.
- `headersUrl` <string> an absolute URL pointing to a `.tar.gz` file containing only the source header files for the current release. This file is significantly smaller than the full source file and can be used for compiling Node.js native add-ons.
- `libUrl` <string> | <undefined> an absolute URL pointing to a `node.lib` file matching the architecture and version of the current release. This file is used for compiling Node.js native add-ons. This property is only present on Windows builds of Node.js and will be missing on all other platforms.
- `lts` <string> | <undefined> a string label identifying the LTS label for this release. This property only exists for LTS releases and is `undefined` for all other release types, including Current releases. Valid values include the LTS Release code names (including those that are no longer supported).
  - `'Fermium'` for the 14.x LTS line beginning with 14.15.0.
  - `'Gallium'` for the 16.x LTS line beginning with 16.13.0.
  - `'Hydrogen'` for the 18.x LTS line beginning with 18.12.0.

  For other LTS Release code names, see the Node.js Changelog Archive.
```js
{
  name: 'node',
  lts: 'Hydrogen',
  sourceUrl: 'https://nodejs.org/download/release/v18.12.0/node-v18.12.0.tar.gz',
  headersUrl: 'https://nodejs.org/download/release/v18.12.0/node-v18.12.0-headers.tar.gz',
  libUrl: 'https://nodejs.org/download/release/v18.12.0/win-x64/node.lib'
}
```

In custom builds from non-release versions of the source tree, only the `name` property may be present. The additional properties should not be relied upon to exist.
process.report#
History
| Version | Changes |
|---|---|
| v13.12.0, v12.17.0 | This API is no longer experimental. |
| v11.8.0 | Added in: v11.8.0 |
- Type: <Object>
`process.report` is an object whose methods are used to generate diagnostic reports for the current process. Additional documentation is available in the report documentation.
process.report.compact#
- Type: <boolean>
Write reports in a compact format, single-line JSON, more easily consumableby log processing systems than the default multi-line format designed forhuman consumption.
```mjs
import { report } from 'node:process';

console.log(`Reports are compact? ${report.compact}`);
```

```cjs
const { report } = require('node:process');

console.log(`Reports are compact? ${report.compact}`);
```
process.report.directory#
History
| Version | Changes |
|---|---|
| v13.12.0, v12.17.0 | This API is no longer experimental. |
| v11.12.0 | Added in: v11.12.0 |
- Type: <string>
Directory where the report is written. The default value is the empty string, indicating that reports are written to the current working directory of the Node.js process.
```mjs
import { report } from 'node:process';

console.log(`Report directory is ${report.directory}`);
```

```cjs
const { report } = require('node:process');

console.log(`Report directory is ${report.directory}`);
```
process.report.filename#
History
| Version | Changes |
|---|---|
| v13.12.0, v12.17.0 | This API is no longer experimental. |
| v11.12.0 | Added in: v11.12.0 |
- Type: <string>
Filename where the report is written. If set to the empty string, the output filename will be composed of a timestamp, PID, and sequence number. The default value is the empty string.

If the value of `process.report.filename` is set to `'stdout'` or `'stderr'`, the report is written to the stdout or stderr of the process, respectively.
```mjs
import { report } from 'node:process';

console.log(`Report filename is ${report.filename}`);
```

```cjs
const { report } = require('node:process');

console.log(`Report filename is ${report.filename}`);
```
process.report.getReport([err])#
History
| Version | Changes |
|---|---|
| v13.12.0, v12.17.0 | This API is no longer experimental. |
| v11.8.0 | Added in: v11.8.0 |
Returns a JavaScript Object representation of a diagnostic report for the running process. The report's JavaScript stack trace is taken from `err`, if present.
```mjs
import { report } from 'node:process';
import util from 'node:util';

const data = report.getReport();
console.log(data.header.nodejsVersion);

// Similar to process.report.writeReport()
import fs from 'node:fs';
fs.writeFileSync('my-report.log', util.inspect(data), 'utf8');
```

```cjs
const { report } = require('node:process');
const util = require('node:util');

const data = report.getReport();
console.log(data.header.nodejsVersion);

// Similar to process.report.writeReport()
const fs = require('node:fs');
fs.writeFileSync('my-report.log', util.inspect(data), 'utf8');
```
Additional documentation is available in thereport documentation.
process.report.reportOnFatalError#
History
| Version | Changes |
|---|---|
| v15.0.0, v14.17.0 | This API is no longer experimental. |
| v11.12.0 | Added in: v11.12.0 |
- Type: <boolean>
If `true`, a diagnostic report is generated on fatal errors, such as out of memory errors or failed C++ assertions.
```mjs
import { report } from 'node:process';

console.log(`Report on fatal error: ${report.reportOnFatalError}`);
```

```cjs
const { report } = require('node:process');

console.log(`Report on fatal error: ${report.reportOnFatalError}`);
```
process.report.reportOnSignal#
History
| Version | Changes |
|---|---|
| v13.12.0, v12.17.0 | This API is no longer experimental. |
| v11.12.0 | Added in: v11.12.0 |
- Type: <boolean>
If `true`, a diagnostic report is generated when the process receives the signal specified by `process.report.signal`.
```mjs
import { report } from 'node:process';

console.log(`Report on signal: ${report.reportOnSignal}`);
```

```cjs
const { report } = require('node:process');

console.log(`Report on signal: ${report.reportOnSignal}`);
```
process.report.reportOnUncaughtException#
History
| Version | Changes |
|---|---|
| v13.12.0, v12.17.0 | This API is no longer experimental. |
| v11.12.0 | Added in: v11.12.0 |
- Type: <boolean>
If `true`, a diagnostic report is generated on uncaught exceptions.
```mjs
import { report } from 'node:process';

console.log(`Report on exception: ${report.reportOnUncaughtException}`);
```

```cjs
const { report } = require('node:process');

console.log(`Report on exception: ${report.reportOnUncaughtException}`);
```
process.report.excludeEnv#
- Type: <boolean>
Iftrue, a diagnostic report is generated without the environment variables.
process.report.signal#
History
| Version | Changes |
|---|---|
| v13.12.0, v12.17.0 | This API is no longer experimental. |
| v11.12.0 | Added in: v11.12.0 |
- Type: <string>
The signal used to trigger the creation of a diagnostic report. Defaults to `'SIGUSR2'`.
```mjs
import { report } from 'node:process';

console.log(`Report signal: ${report.signal}`);
```

```cjs
const { report } = require('node:process');

console.log(`Report signal: ${report.signal}`);
```
process.report.writeReport([filename][, err])#
History
| Version | Changes |
|---|---|
| v13.12.0, v12.17.0 | This API is no longer experimental. |
| v11.8.0 | Added in: v11.8.0 |
- `filename` <string> Name of the file where the report is written. This should be a relative path, that will be appended to the directory specified in `process.report.directory`, or the current working directory of the Node.js process, if unspecified.
- `err` <Error> A custom error used for reporting the JavaScript stack.
- Returns: <string> Returns the filename of the generated report.

Writes a diagnostic report to a file. If `filename` is not provided, the default filename includes the date, time, PID, and a sequence number. The report's JavaScript stack trace is taken from `err`, if present.

If the value of `filename` is set to `'stdout'` or `'stderr'`, the report is written to the stdout or stderr of the process, respectively.
```mjs
import { report } from 'node:process';

report.writeReport();
```

```cjs
const { report } = require('node:process');

report.writeReport();
```
Additional documentation is available in thereport documentation.
process.resourceUsage()#
- Returns: <Object> the resource usage for the current process. All of these values come from the `uv_getrusage` call which returns a `uv_rusage_t` struct.
  - `userCPUTime` <integer> maps to `ru_utime` computed in microseconds. It is the same value as `process.cpuUsage().user`.
  - `systemCPUTime` <integer> maps to `ru_stime` computed in microseconds. It is the same value as `process.cpuUsage().system`.
  - `maxRSS` <integer> maps to `ru_maxrss` which is the maximum resident set size used in kibibytes (1024 bytes).
  - `sharedMemorySize` <integer> maps to `ru_ixrss` but is not supported by any platform.
  - `unsharedDataSize` <integer> maps to `ru_idrss` but is not supported by any platform.
  - `unsharedStackSize` <integer> maps to `ru_isrss` but is not supported by any platform.
  - `minorPageFault` <integer> maps to `ru_minflt` which is the number of minor page faults for the process, see this article for more details.
  - `majorPageFault` <integer> maps to `ru_majflt` which is the number of major page faults for the process, see this article for more details. This field is not supported on Windows.
  - `swappedOut` <integer> maps to `ru_nswap` but is not supported by any platform.
  - `fsRead` <integer> maps to `ru_inblock` which is the number of times the file system had to perform input.
  - `fsWrite` <integer> maps to `ru_oublock` which is the number of times the file system had to perform output.
  - `ipcSent` <integer> maps to `ru_msgsnd` but is not supported by any platform.
  - `ipcReceived` <integer> maps to `ru_msgrcv` but is not supported by any platform.
  - `signalsCount` <integer> maps to `ru_nsignals` but is not supported by any platform.
  - `voluntaryContextSwitches` <integer> maps to `ru_nvcsw` which is the number of times a CPU context switch resulted due to a process voluntarily giving up the processor before its time slice was completed (usually to await availability of a resource). This field is not supported on Windows.
  - `involuntaryContextSwitches` <integer> maps to `ru_nivcsw` which is the number of times a CPU context switch resulted due to a higher priority process becoming runnable or because the current process exceeded its time slice. This field is not supported on Windows.
```mjs
import { resourceUsage } from 'node:process';

console.log(resourceUsage());
/*
  Will output:
  {
    userCPUTime: 82872,
    systemCPUTime: 4143,
    maxRSS: 33164,
    sharedMemorySize: 0,
    unsharedDataSize: 0,
    unsharedStackSize: 0,
    minorPageFault: 2469,
    majorPageFault: 0,
    swappedOut: 0,
    fsRead: 0,
    fsWrite: 8,
    ipcSent: 0,
    ipcReceived: 0,
    signalsCount: 0,
    voluntaryContextSwitches: 79,
    involuntaryContextSwitches: 1
  }
*/
```

```cjs
const { resourceUsage } = require('node:process');

console.log(resourceUsage());
/*
  Will output:
  {
    userCPUTime: 82872,
    systemCPUTime: 4143,
    maxRSS: 33164,
    sharedMemorySize: 0,
    unsharedDataSize: 0,
    unsharedStackSize: 0,
    minorPageFault: 2469,
    majorPageFault: 0,
    swappedOut: 0,
    fsRead: 0,
    fsWrite: 8,
    ipcSent: 0,
    ipcReceived: 0,
    signalsCount: 0,
    voluntaryContextSwitches: 79,
    involuntaryContextSwitches: 1
  }
*/
```
process.send(message[, sendHandle[, options]][, callback])#
- `message` <Object>
- `sendHandle` <net.Server> | <net.Socket>
- `options` <Object> used to parameterize the sending of certain types of handles. `options` supports the following properties:
  - `keepOpen` <boolean> A value that can be used when passing instances of `net.Socket`. When `true`, the socket is kept open in the sending process. **Default:** `false`.
- `callback` <Function>
- Returns: <boolean>

If Node.js is spawned with an IPC channel, the `process.send()` method can be used to send messages to the parent process. Messages will be received as a `'message'` event on the parent's `ChildProcess` object.

If Node.js was not spawned with an IPC channel, `process.send` will be `undefined`.
The message goes through serialization and parsing. The resulting message mightnot be the same as what is originally sent.
process.setegid(id)#
The `process.setegid()` method sets the effective group identity of the process. (See setegid(2).) The `id` can be passed as either a numeric ID or a group name string. If a group name is specified, this method blocks while resolving the associated numeric ID.
```mjs
import process from 'node:process';

if (process.getegid && process.setegid) {
  console.log(`Current gid: ${process.getegid()}`);
  try {
    process.setegid(501);
    console.log(`New gid: ${process.getegid()}`);
  } catch (err) {
    console.error(`Failed to set gid: ${err}`);
  }
}
```

```cjs
const process = require('node:process');

if (process.getegid && process.setegid) {
  console.log(`Current gid: ${process.getegid()}`);
  try {
    process.setegid(501);
    console.log(`New gid: ${process.getegid()}`);
  } catch (err) {
    console.error(`Failed to set gid: ${err}`);
  }
}
```
This function is only available on POSIX platforms (i.e. not Windows or Android). This feature is not available in Worker threads.
process.seteuid(id)#
The `process.seteuid()` method sets the effective user identity of the process. (See seteuid(2).) The `id` can be passed as either a numeric ID or a username string. If a username is specified, the method blocks while resolving the associated numeric ID.
```mjs
import process from 'node:process';

if (process.geteuid && process.seteuid) {
  console.log(`Current uid: ${process.geteuid()}`);
  try {
    process.seteuid(501);
    console.log(`New uid: ${process.geteuid()}`);
  } catch (err) {
    console.error(`Failed to set uid: ${err}`);
  }
}
```

```cjs
const process = require('node:process');

if (process.geteuid && process.seteuid) {
  console.log(`Current uid: ${process.geteuid()}`);
  try {
    process.seteuid(501);
    console.log(`New uid: ${process.geteuid()}`);
  } catch (err) {
    console.error(`Failed to set uid: ${err}`);
  }
}
```
This function is only available on POSIX platforms (i.e. not Windows or Android). This feature is not available in Worker threads.
process.setgid(id)#
The `process.setgid()` method sets the group identity of the process. (See setgid(2).) The `id` can be passed as either a numeric ID or a group name string. If a group name is specified, this method blocks while resolving the associated numeric ID.
```mjs
import process from 'node:process';

if (process.getgid && process.setgid) {
  console.log(`Current gid: ${process.getgid()}`);
  try {
    process.setgid(501);
    console.log(`New gid: ${process.getgid()}`);
  } catch (err) {
    console.error(`Failed to set gid: ${err}`);
  }
}
```

```cjs
const process = require('node:process');

if (process.getgid && process.setgid) {
  console.log(`Current gid: ${process.getgid()}`);
  try {
    process.setgid(501);
    console.log(`New gid: ${process.getgid()}`);
  } catch (err) {
    console.error(`Failed to set gid: ${err}`);
  }
}
```
This function is only available on POSIX platforms (i.e. not Windows or Android). This feature is not available in Worker threads.
process.setgroups(groups)#
- `groups` <integer[]>

The `process.setgroups()` method sets the supplementary group IDs for the Node.js process. This is a privileged operation that requires the Node.js process to have `root` or the `CAP_SETGID` capability.

The `groups` array can contain numeric group IDs, group names, or both.
```mjs
import process from 'node:process';

if (process.getgroups && process.setgroups) {
  try {
    process.setgroups([501]);
    console.log(process.getgroups()); // new groups
  } catch (err) {
    console.error(`Failed to set groups: ${err}`);
  }
}
```

```cjs
const process = require('node:process');

if (process.getgroups && process.setgroups) {
  try {
    process.setgroups([501]);
    console.log(process.getgroups()); // new groups
  } catch (err) {
    console.error(`Failed to set groups: ${err}`);
  }
}
```
This function is only available on POSIX platforms (i.e. not Windows or Android). This feature is not available in Worker threads.
process.setuid(id)#
The `process.setuid(id)` method sets the user identity of the process. (See setuid(2).) The `id` can be passed as either a numeric ID or a username string. If a username is specified, the method blocks while resolving the associated numeric ID.
```mjs
import process from 'node:process';

if (process.getuid && process.setuid) {
  console.log(`Current uid: ${process.getuid()}`);
  try {
    process.setuid(501);
    console.log(`New uid: ${process.getuid()}`);
  } catch (err) {
    console.error(`Failed to set uid: ${err}`);
  }
}
```

```cjs
const process = require('node:process');

if (process.getuid && process.setuid) {
  console.log(`Current uid: ${process.getuid()}`);
  try {
    process.setuid(501);
    console.log(`New uid: ${process.getuid()}`);
  } catch (err) {
    console.error(`Failed to set uid: ${err}`);
  }
}
```
This function is only available on POSIX platforms (i.e. not Windows or Android). This feature is not available in Worker threads.
process.setSourceMapsEnabled(val)#
Deprecated: Use `module.setSourceMapsSupport()` instead.

- `val` <boolean>

This function enables or disables the Source Map support for stack traces.

It provides the same features as launching the Node.js process with the command-line option `--enable-source-maps`.

Only source maps in JavaScript files that are loaded after source map support has been enabled will be parsed and loaded.

This implies calling `module.setSourceMapsSupport()` with the option `{ nodeModules: true, generatedCode: true }`.
process.setUncaughtExceptionCaptureCallback(fn)#
- `fn` <Function> | <null>
The `process.setUncaughtExceptionCaptureCallback()` function sets a function that will be invoked when an uncaught exception occurs, which will receive the exception value itself as its first argument.

If such a function is set, the `'uncaughtException'` event will not be emitted. If `--abort-on-uncaught-exception` was passed from the command line or set through `v8.setFlagsFromString()`, the process will not abort. Actions configured to take place on exceptions, such as report generation, will be affected too.

To unset the capture function, `process.setUncaughtExceptionCaptureCallback(null)` may be used. Calling this method with a non-`null` argument while another capture function is set will throw an error.

Using this function is mutually exclusive with using the deprecated `domain` built-in module.
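A minimal sketch of setting, inspecting (via `process.hasUncaughtExceptionCaptureCallback()`), and unsetting a capture callback:

```js
// No capture callback is set initially.
console.log(process.hasUncaughtExceptionCaptureCallback()); // false

process.setUncaughtExceptionCaptureCallback((err) => {
  console.error(`Captured: ${err.message}`);
});
console.log(process.hasUncaughtExceptionCaptureCallback()); // true

// Passing null unsets the callback; 'uncaughtException' is emitted again.
process.setUncaughtExceptionCaptureCallback(null);
console.log(process.hasUncaughtExceptionCaptureCallback()); // false
```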
process.sourceMapsEnabled#
Deprecated: Use `module.getSourceMapsSupport()` instead.

- Type: <boolean>

The `process.sourceMapsEnabled` property returns whether the Source Map support for stack traces is enabled.
process.stderr#
- Type: <Stream>
The `process.stderr` property returns a stream connected to stderr (fd `2`). It is a `net.Socket` (which is a Duplex stream) unless fd `2` refers to a file, in which case it is a Writable stream.

`process.stderr` differs from other Node.js streams in important ways. See note on process I/O for more information.
process.stdin#
- Type: <Stream>
The `process.stdin` property returns a stream connected to stdin (fd `0`). It is a `net.Socket` (which is a Duplex stream) unless fd `0` refers to a file, in which case it is a Readable stream.

For details of how to read from stdin see `readable.read()`.

As a Duplex stream, `process.stdin` can also be used in "old" mode that is compatible with scripts written for Node.js prior to v0.10. For more information see Stream compatibility.

In "old" streams mode the stdin stream is paused by default, so one must call `process.stdin.resume()` to read from it. Note also that calling `process.stdin.resume()` itself would switch the stream to "old" mode.
process.stdout#
- Type: <Stream>

The process.stdout property returns a stream connected to stdout (fd 1). It is a net.Socket (which is a Duplex stream) unless fd 1 refers to a file, in which case it is a Writable stream.

For example, to copy process.stdin to process.stdout:

```mjs
import { stdin, stdout } from 'node:process';

stdin.pipe(stdout);
```

```cjs
const { stdin, stdout } = require('node:process');

stdin.pipe(stdout);
```
process.stdout differs from other Node.js streams in important ways. See note on process I/O for more information.
process.stdout.fd#
- Type: <number>

This property refers to the value of the underlying file descriptor of process.stdout. The value is fixed at 1. In Worker threads, this field does not exist.
A note on process I/O#
process.stdout and process.stderr differ from other Node.js streams in important ways:

- They are used internally by console.log() and console.error(), respectively.
- Writes may be synchronous depending on what the stream is connected to and whether the system is Windows or POSIX:
  - Files: synchronous on Windows and POSIX
  - TTYs (Terminals): asynchronous on Windows, synchronous on POSIX
  - Pipes (and sockets): synchronous on Windows, asynchronous on POSIX
These behaviors are partly for historical reasons, as changing them would create backward incompatibility, but they are also expected by some users.

Synchronous writes avoid problems such as output written with console.log() or console.error() being unexpectedly interleaved, or not written at all if process.exit() is called before an asynchronous write completes. See process.exit() for more information.

Warning: Synchronous writes block the event loop until the write has completed. This can be near instantaneous in the case of output to a file, but under high system load, with pipes that are not being read at the receiving end, or with slow terminals or file systems, it's possible for the event loop to be blocked often enough and long enough to have severe negative performance impacts. This may not be a problem when writing to an interactive terminal session, but be particularly careful when doing production logging to the process output streams.

To check if a stream is connected to a TTY context, check the isTTY property.
For instance:

```console
$ node -p "Boolean(process.stdin.isTTY)"
true
$ echo "foo" | node -p "Boolean(process.stdin.isTTY)"
false
$ node -p "Boolean(process.stdout.isTTY)"
true
$ node -p "Boolean(process.stdout.isTTY)" | cat
false
```

See the TTY documentation for more information.
process.throwDeprecation#
- Type: <boolean>

The initial value of process.throwDeprecation indicates whether the --throw-deprecation flag is set on the current Node.js process. process.throwDeprecation is mutable, so whether or not deprecation warnings result in errors may be altered at runtime. See the documentation for the 'warning' event and the emitWarning() method for more information.

```console
$ node --throw-deprecation -p "process.throwDeprecation"
true
$ node -p "process.throwDeprecation"
undefined
$ node
> process.emitWarning('test', 'DeprecationWarning');
undefined
> (node:26598) DeprecationWarning: test
> process.throwDeprecation = true;
true
> process.emitWarning('test', 'DeprecationWarning');
Thrown:
[DeprecationWarning: test] { name: 'DeprecationWarning' }
```

process.threadCpuUsage([previousValue])#
The process.threadCpuUsage() method returns the user and system CPU time usage of the current worker thread, in an object with properties user and system, whose values are microsecond values (millionths of a second).

The result of a previous call to process.threadCpuUsage() can be passed as the argument to the function, to get a diff reading.
process.title#
- Type: <string>

The process.title property returns the current process title (i.e. returns the current value of ps). Assigning a new value to process.title modifies the current value of ps.

When a new value is assigned, different platforms will impose different maximum length restrictions on the title. Usually such restrictions are quite limited. For instance, on Linux and macOS, process.title is limited to the size of the binary name plus the length of the command-line arguments because setting the process.title overwrites the argv memory of the process. Node.js 0.8 allowed for longer process title strings by also overwriting the environ memory, but that was potentially insecure and confusing in some (rather obscure) cases.

Assigning a value to process.title might not result in an accurate label within process manager applications such as macOS Activity Monitor or Windows Services Manager.
process.traceDeprecation#
- Type: <boolean>

The process.traceDeprecation property indicates whether the --trace-deprecation flag is set on the current Node.js process. See the documentation for the 'warning' event and the emitWarning() method for more information about this flag's behavior.
process.traceProcessWarnings#
The process.traceProcessWarnings property indicates whether the --trace-warnings flag is set on the current Node.js process. This property allows programmatic control over the tracing of warnings, enabling or disabling stack traces for warnings at runtime.

```js
// Enable trace warnings
process.traceProcessWarnings = true;

// Emit a warning with a stack trace
process.emitWarning('Warning with stack trace');

// Disable trace warnings
process.traceProcessWarnings = false;
```

process.umask()#
History
| Version | Changes |
|---|---|
| v14.0.0, v12.19.0 | Calling |
| v0.1.19 | Added in: v0.1.19 |
Calling process.umask() with no argument causes the process-wide umask to be written twice. This introduces a race condition between threads, and is a potential security vulnerability. There is no safe, cross-platform alternative API.

process.umask() returns the Node.js process's file mode creation mask. Child processes inherit the mask from the parent process.
process.umask(mask)#
process.umask(mask) sets the Node.js process's file mode creation mask. Childprocesses inherit the mask from the parent process. Returns the previous mask.
```mjs
import { umask } from 'node:process';

const newmask = 0o022;
const oldmask = umask(newmask);
console.log(
  `Changed umask from ${oldmask.toString(8)} to ${newmask.toString(8)}`,
);
```

```cjs
const { umask } = require('node:process');

const newmask = 0o022;
const oldmask = umask(newmask);
console.log(
  `Changed umask from ${oldmask.toString(8)} to ${newmask.toString(8)}`,
);
```
In Worker threads, process.umask(mask) will throw an exception.
process.unref(maybeRefable)#
maybeRefable <any> An object that may be "unref'd".

An object is "unrefable" if it implements the Node.js "Refable protocol". Specifically, this means that the object implements the Symbol.for('nodejs.ref') and Symbol.for('nodejs.unref') methods. "Ref'd" objects will keep the Node.js event loop alive, while "unref'd" objects will not. Historically, this was implemented by using ref() and unref() methods directly on the objects. This pattern, however, is being deprecated in favor of the "Refable protocol" in order to better support Web Platform API types whose APIs cannot be modified to add ref() and unref() methods but still need to support that behavior.
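As an illustration only (the TickSource class below is hypothetical, not a Node.js API), an object opts in to the protocol by implementing the two well-known symbols:

```javascript
// Hypothetical resource that participates in the "Refable protocol".
class TickSource {
  #timer;
  constructor() {
    // A timer keeps the event loop alive while ref'd.
    this.#timer = setInterval(() => {}, 1000);
  }
  [Symbol.for('nodejs.ref')]() {
    this.#timer.ref();     // keep the event loop alive
  }
  [Symbol.for('nodejs.unref')]() {
    this.#timer.unref();   // allow the process to exit
  }
  close() {
    clearInterval(this.#timer);
  }
}

const source = new TickSource();
// On Node.js versions that ship the Refable protocol,
// process.unref(source) would invoke the method below.
source[Symbol.for('nodejs.unref')]();
source.close();
```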
process.uptime()#
- Returns: <number>

The process.uptime() method returns the number of seconds the current Node.js process has been running.

The return value includes fractions of a second. Use Math.floor() to get whole seconds.
process.version#
- Type: <string>

The process.version property contains the Node.js version string.

```mjs
import { version } from 'node:process';

console.log(`Version: ${version}`);
// Version: v14.8.0
```

```cjs
const { version } = require('node:process');

console.log(`Version: ${version}`);
// Version: v14.8.0
```

To get the version string without the prepended v, use process.versions.node.
process.versions#
History
| Version | Changes |
|---|---|
| v9.0.0 | The |
| v4.2.0 | The |
| v0.2.0 | Added in: v0.2.0 |
- Type: <Object>

The process.versions property returns an object listing the version strings of Node.js and its dependencies. process.versions.modules indicates the current ABI version, which is increased whenever a C++ API changes. Node.js will refuse to load modules that were compiled against a different module ABI version.

```mjs
import { versions } from 'node:process';

console.log(versions);
```

```cjs
const { versions } = require('node:process');

console.log(versions);
```

Will generate an object similar to:

```console
{
  node: '26.0.0-pre',
  acorn: '8.15.0',
  ada: '3.4.1',
  amaro: '1.1.5',
  ares: '1.34.6',
  brotli: '1.2.0',
  merve: '1.0.0',
  cldr: '48.0',
  icu: '78.2',
  llhttp: '9.3.0',
  modules: '144',
  napi: '10',
  nbytes: '0.1.1',
  ncrypto: '0.0.1',
  nghttp2: '1.68.0',
  nghttp3: '',
  ngtcp2: '',
  openssl: '3.5.4',
  simdjson: '4.2.4',
  simdutf: '7.3.3',
  sqlite: '3.51.2',
  tz: '2025c',
  undici: '7.18.2',
  unicode: '17.0',
  uv: '1.51.0',
  uvwasi: '0.0.23',
  v8: '14.3.127.18-node.10',
  zlib: '1.3.1-e00f703',
  zstd: '1.5.7'
}
```

Exit codes#
Node.js will normally exit with a 0 status code when no more async operations are pending. The following status codes are used in other cases:

- 1 Uncaught Fatal Exception: There was an uncaught exception, and it was not handled by a domain or an 'uncaughtException' event handler.
- 2: Unused (reserved by Bash for builtin misuse).
- 3 Internal JavaScript Parse Error: The JavaScript source code internal in the Node.js bootstrapping process caused a parse error. This is extremely rare, and generally can only happen during development of Node.js itself.
- 4 Internal JavaScript Evaluation Failure: The JavaScript source code internal in the Node.js bootstrapping process failed to return a function value when evaluated. This is extremely rare, and generally can only happen during development of Node.js itself.
- 5 Fatal Error: There was a fatal unrecoverable error in V8. Typically a message will be printed to stderr with the prefix FATAL ERROR.
- 6 Non-function Internal Exception Handler: There was an uncaught exception, but the internal fatal exception handler function was somehow set to a non-function, and could not be called.
- 7 Internal Exception Handler Run-Time Failure: There was an uncaught exception, and the internal fatal exception handler function itself threw an error while attempting to handle it. This can happen, for example, if an 'uncaughtException' or domain.on('error') handler throws an error.
- 8: Unused. In previous versions of Node.js, exit code 8 sometimes indicated an uncaught exception.
- 9 Invalid Argument: Either an unknown option was specified, or an option requiring a value was provided without a value.
- 10 Internal JavaScript Run-Time Failure: The JavaScript source code internal in the Node.js bootstrapping process threw an error when the bootstrapping function was called. This is extremely rare, and generally can only happen during development of Node.js itself.
- 12 Invalid Debug Argument: The --inspect and/or --inspect-brk options were set, but the port number chosen was invalid or unavailable.
- 13 Unsettled Top-Level Await: await was used outside of a function in the top-level code, but the passed Promise never settled.
- 14 Snapshot Failure: Node.js was started to build a V8 startup snapshot and it failed because certain requirements of the state of the application were not met.
- >128 Signal Exits: If Node.js receives a fatal signal such as SIGKILL or SIGHUP, then its exit code will be 128 plus the value of the signal code. This is a standard POSIX practice, since exit codes are defined to be 7-bit integers, and signal exits set the high-order bit, and then contain the value of the signal code. For example, signal SIGABRT has value 6, so the expected exit code will be 128 + 6, or 134.
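A couple of these codes can be observed directly from a shell (assuming a standard `node` binary on the PATH):

```shell
# An uncaught exception exits with code 1.
node -e 'throw new Error("boom")' 2>/dev/null
echo "uncaught exception: $?"    # uncaught exception: 1

# An unknown command-line option exits with code 9 (Invalid Argument).
node --definitely-not-a-real-flag 2>/dev/null
echo "invalid argument: $?"      # invalid argument: 9
```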
Punycode#
Source Code: lib/punycode.js
The version of the punycode module bundled in Node.js is being deprecated. In a future major version of Node.js this module will be removed. Users currently depending on the punycode module should switch to using the userland-provided Punycode.js module instead. For punycode-based URL encoding, see url.domainToASCII or, more generally, the WHATWG URL API.

The punycode module is a bundled version of the Punycode.js module. It can be accessed using:

```js
const punycode = require('node:punycode');
```

Punycode is a character encoding scheme defined by RFC 3492 that is primarily intended for use in Internationalized Domain Names. Because host names in URLs are limited to ASCII characters only, Domain Names that contain non-ASCII characters must be converted into ASCII using the Punycode scheme. For instance, the Japanese character that translates into the English word 'example' is '例'. The Internationalized Domain Name '例.com' (equivalent to 'example.com') is represented by Punycode as the ASCII string 'xn--fsq.com'.

The punycode module provides a simple implementation of the Punycode standard.

The punycode module is a third-party dependency used by Node.js and made available to developers as a convenience. Fixes or other modifications to the module must be directed to the Punycode.js project.
punycode.decode(string)#
string <string>

The punycode.decode() method converts a Punycode string of ASCII-only characters to the equivalent string of Unicode codepoints.

```js
punycode.decode('maana-pta'); // 'mañana'
punycode.decode('--dqo34k'); // '☃-⌘'
```

punycode.encode(string)#
string <string>

The punycode.encode() method converts a string of Unicode codepoints to a Punycode string of ASCII-only characters.

```js
punycode.encode('mañana'); // 'maana-pta'
punycode.encode('☃-⌘'); // '--dqo34k'
```

punycode.toASCII(domain)#
domain <string>

The punycode.toASCII() method converts a Unicode string representing an Internationalized Domain Name to Punycode. Only the non-ASCII parts of the domain name will be converted. Calling punycode.toASCII() on a string that already only contains ASCII characters will have no effect.

```js
// encode domain names
punycode.toASCII('mañana.com'); // 'xn--maana-pta.com'
punycode.toASCII('☃-⌘.com'); // 'xn----dqo34k.com'
punycode.toASCII('example.com'); // 'example.com'
```

punycode.toUnicode(domain)#
domain <string>

The punycode.toUnicode() method converts a string representing a domain name containing Punycode encoded characters into Unicode. Only the Punycode encoded parts of the domain name are converted.

```js
// decode domain names
punycode.toUnicode('xn--maana-pta.com'); // 'mañana.com'
punycode.toUnicode('xn----dqo34k.com'); // '☃-⌘.com'
punycode.toUnicode('example.com'); // 'example.com'
```

punycode.ucs2#
punycode.ucs2.decode(string)#
string <string>

The punycode.ucs2.decode() method returns an array containing the numeric codepoint values of each Unicode symbol in the string.

```js
punycode.ucs2.decode('abc'); // [0x61, 0x62, 0x63]
// surrogate pair for U+1D306 tetragram for centre:
punycode.ucs2.decode('\uD834\uDF06'); // [0x1D306]
```

punycode.ucs2.encode(codePoints)#
codePoints <integer[]>

The punycode.ucs2.encode() method returns a string based on an array of numeric code point values.

```js
punycode.ucs2.encode([0x61, 0x62, 0x63]); // 'abc'
punycode.ucs2.encode([0x1D306]); // '\uD834\uDF06'
```

punycode.version#
- Type: <string>

Returns a string identifying the current Punycode.js version number.
Query string#
Source Code: lib/querystring.js
The node:querystring module provides utilities for parsing and formatting URL query strings. It can be accessed using:

```js
const querystring = require('node:querystring');
```

querystring is more performant than <URLSearchParams> but is not a standardized API. Use <URLSearchParams> when performance is not critical or when compatibility with browser code is desirable.
querystring.decode()#
The querystring.decode() function is an alias for querystring.parse().
querystring.encode()#
The querystring.encode() function is an alias for querystring.stringify().
querystring.escape(str)#
str <string>

The querystring.escape() method performs URL percent-encoding on the given str in a manner that is optimized for the specific requirements of URL query strings.

The querystring.escape() method is used by querystring.stringify() and is generally not expected to be used directly. It is exported primarily to allow application code to provide a replacement percent-encoding implementation if necessary by assigning querystring.escape to an alternative function.
querystring.parse(str[, sep[, eq[, options]]])#
History
| Version | Changes |
|---|---|
| v8.0.0 | Multiple empty entries are now parsed correctly (e.g. |
| v6.0.0 | The returned object no longer inherits from |
| v6.0.0, v4.2.4 | The |
| v0.1.25 | Added in: v0.1.25 |
- str <string> The URL query string to parse
- sep <string> The substring used to delimit key and value pairs in the query string. Default: '&'.
- eq <string> The substring used to delimit keys and values in the query string. Default: '='.
- options <Object>
  - decodeURIComponent <Function> The function to use when decoding percent-encoded characters in the query string. Default: querystring.unescape().
  - maxKeys <number> Specifies the maximum number of keys to parse. Specify 0 to remove key counting limitations. Default: 1000.
The querystring.parse() method parses a URL query string (str) into a collection of key and value pairs.

For example, the query string 'foo=bar&abc=xyz&abc=123' is parsed into:

```json
{
  "foo": "bar",
  "abc": ["xyz", "123"]
}
```

The object returned by the querystring.parse() method does not prototypically inherit from the JavaScript Object. This means that typical Object methods such as obj.toString(), obj.hasOwnProperty(), and others are not defined and will not work.

By default, percent-encoded characters within the query string will be assumed to use UTF-8 encoding. If an alternative character encoding is used, then an alternative decodeURIComponent option will need to be specified:

```js
// Assuming gbkDecodeURIComponent function already exists...

querystring.parse('w=%D6%D0%CE%C4&foo=bar', null, null,
                  { decodeURIComponent: gbkDecodeURIComponent });
```

querystring.stringify(obj[, sep[, eq[, options]]])#
- obj <Object> The object to serialize into a URL query string
- sep <string> The substring used to delimit key and value pairs in the query string. Default: '&'.
- eq <string> The substring used to delimit keys and values in the query string. Default: '='.
- options <Object>
  - encodeURIComponent <Function> The function to use when converting URL-unsafe characters to percent-encoding in the query string. Default: querystring.escape().

The querystring.stringify() method produces a URL query string from a given obj by iterating through the object's "own properties".

It serializes the following types of values passed in obj: <string> | <number> | <bigint> | <boolean> | <string[]> | <number[]> | <bigint[]> | <boolean[]>. The numeric values must be finite. Any other input values will be coerced to empty strings.

```js
querystring.stringify({ foo: 'bar', baz: ['qux', 'quux'], corge: '' });
// Returns 'foo=bar&baz=qux&baz=quux&corge='

querystring.stringify({ foo: 'bar', baz: 'qux' }, ';', ':');
// Returns 'foo:bar;baz:qux'
```

By default, characters requiring percent-encoding within the query string will be encoded as UTF-8. If an alternative encoding is required, then an alternative encodeURIComponent option will need to be specified:

```js
// Assuming gbkEncodeURIComponent function already exists,

querystring.stringify({ w: '中文', foo: 'bar' }, null, null,
                      { encodeURIComponent: gbkEncodeURIComponent });
```

querystring.unescape(str)#
str <string>

The querystring.unescape() method performs decoding of URL percent-encoded characters on the given str.

The querystring.unescape() method is used by querystring.parse() and is generally not expected to be used directly. It is exported primarily to allow application code to provide a replacement decoding implementation if necessary by assigning querystring.unescape to an alternative function.

By default, the querystring.unescape() method will attempt to use the JavaScript built-in decodeURIComponent() method to decode. If that fails, a safer equivalent that does not throw on malformed URLs will be used.
Readline#
Source Code: lib/readline.js
The node:readline module provides an interface for reading data from a Readable stream (such as process.stdin) one line at a time.

To use the promise-based APIs:

```mjs
import * as readline from 'node:readline/promises';
```

```cjs
const readline = require('node:readline/promises');
```

To use the callback and sync APIs:

```mjs
import * as readline from 'node:readline';
```

```cjs
const readline = require('node:readline');
```

The following simple example illustrates the basic use of the node:readline module.

```mjs
import * as readline from 'node:readline/promises';
import { stdin as input, stdout as output } from 'node:process';

const rl = readline.createInterface({ input, output });

const answer = await rl.question('What do you think of Node.js? ');

console.log(`Thank you for your valuable feedback: ${answer}`);

rl.close();
```

```cjs
const readline = require('node:readline');
const { stdin: input, stdout: output } = require('node:process');

const rl = readline.createInterface({ input, output });

rl.question('What do you think of Node.js? ', (answer) => {
  // TODO: Log the answer in a database
  console.log(`Thank you for your valuable feedback: ${answer}`);

  rl.close();
});
```
Once this code is invoked, the Node.js application will not terminate until the readline.Interface is closed because the interface waits for data to be received on the input stream.
Class: InterfaceConstructor#
- Extends: <EventEmitter>

Instances of the InterfaceConstructor class are constructed using the readlinePromises.createInterface() or readline.createInterface() method. Every instance is associated with a single input Readable stream and a single output Writable stream. The output stream is used to print prompts for user input that arrives on, and is read from, the input stream.
Event: 'close'#

The 'close' event is emitted when one of the following occurs:

- The rl.close() method is called and the InterfaceConstructor instance has relinquished control over the input and output streams;
- The input stream receives its 'end' event;
- The input stream receives Ctrl+D to signal end-of-transmission (EOT);
- The input stream receives Ctrl+C to signal SIGINT and there is no 'SIGINT' event listener registered on the InterfaceConstructor instance.
The listener function is called without passing any arguments.
The InterfaceConstructor instance is finished once the 'close' event is emitted.
Event: 'error'#

The 'error' event is emitted when an error occurs on the input stream associated with the node:readline Interface.

The listener function is called with an Error object passed as the single argument.
Event: 'line'#

The 'line' event is emitted whenever the input stream receives an end-of-line input (\n, \r, or \r\n). This usually occurs when the user presses Enter or Return.

The 'line' event is also emitted if new data has been read from a stream and that stream ends without a final end-of-line marker.

The listener function is called with a string containing the single line of received input.

```js
rl.on('line', (input) => {
  console.log(`Received: ${input}`);
});
```

Event: 'history'#
The 'history' event is emitted whenever the history array has changed.

The listener function is called with an array containing the history array. It will reflect all changes, added lines and removed lines due to historySize and removeHistoryDuplicates.

The primary purpose is to allow a listener to persist the history. It is also possible for the listener to change the history object. This could be useful to prevent certain lines, such as a password, from being added to the history.

```js
rl.on('history', (history) => {
  console.log(`Received: ${history}`);
});
```

Event: 'pause'#
The 'pause' event is emitted when one of the following occurs:

- The input stream is paused.
- The input stream is not paused and receives the 'SIGCONT' event. (See events 'SIGTSTP' and 'SIGCONT'.)

The listener function is called without passing any arguments.

```js
rl.on('pause', () => {
  console.log('Readline paused.');
});
```

Event: 'resume'#
The 'resume' event is emitted whenever the input stream is resumed.

The listener function is called without passing any arguments.

```js
rl.on('resume', () => {
  console.log('Readline resumed.');
});
```

Event: 'SIGCONT'#
The 'SIGCONT' event is emitted when a Node.js process previously moved into the background using Ctrl+Z (i.e. SIGTSTP) is then brought back to the foreground using fg(1p).

If the input stream was paused before the SIGTSTP request, this event will not be emitted.

The listener function is invoked without passing any arguments.

```js
rl.on('SIGCONT', () => {
  // `prompt` will automatically resume the stream
  rl.prompt();
});
```

The 'SIGCONT' event is not supported on Windows.
Event: 'SIGINT'#

The 'SIGINT' event is emitted whenever the input stream receives a Ctrl+C input, known typically as SIGINT. If there are no 'SIGINT' event listeners registered when the input stream receives a SIGINT, the 'pause' event will be emitted.

The listener function is invoked without passing any arguments.

```js
rl.on('SIGINT', () => {
  rl.question('Are you sure you want to exit? ', (answer) => {
    if (answer.match(/^y(es)?$/i)) rl.pause();
  });
});
```

Event: 'SIGTSTP'#
The 'SIGTSTP' event is emitted when the input stream receives a Ctrl+Z input, typically known as SIGTSTP. If there are no 'SIGTSTP' event listeners registered when the input stream receives a SIGTSTP, the Node.js process will be sent to the background.

When the program is resumed using fg(1p), the 'pause' and 'SIGCONT' events will be emitted. These can be used to resume the input stream.

The 'pause' and 'SIGCONT' events will not be emitted if the input was paused before the process was sent to the background.

The listener function is invoked without passing any arguments.

```js
rl.on('SIGTSTP', () => {
  // This will override SIGTSTP and prevent the program from going to the
  // background.
  console.log('Caught SIGTSTP.');
});
```

The 'SIGTSTP' event is not supported on Windows.
rl.close()#
The rl.close() method closes the InterfaceConstructor instance and relinquishes control over the input and output streams. When called, the 'close' event will be emitted.

Calling rl.close() does not immediately stop other events (including 'line') from being emitted by the InterfaceConstructor instance.
rl.pause()#
The rl.pause() method pauses the input stream, allowing it to be resumed later if necessary.

Calling rl.pause() does not immediately pause other events (including 'line') from being emitted by the InterfaceConstructor instance.
rl.prompt([preserveCursor])#
preserveCursor <boolean> If true, prevents the cursor placement from being reset to 0.

The rl.prompt() method writes the InterfaceConstructor instance's configured prompt to a new line in output in order to provide a user with a new location at which to provide input.

When called, rl.prompt() will resume the input stream if it has been paused.

If the InterfaceConstructor was created with output set to null or undefined the prompt is not written.
rl.setPrompt(prompt)#
prompt <string>

The rl.setPrompt() method sets the prompt that will be written to output whenever rl.prompt() is called.
rl.getPrompt()#
- Returns: <string> the current prompt string

The rl.getPrompt() method returns the current prompt used by rl.prompt().
rl.write(data[, key])#
The rl.write() method will write either data or a key sequence identified by key to the output. The key argument is supported only if output is a TTY text terminal. See TTY keybindings for a list of key combinations.

If key is specified, data is ignored.

When called, rl.write() will resume the input stream if it has been paused.

If the InterfaceConstructor was created with output set to null or undefined the data and key are not written.

```js
rl.write('Delete this!');
// Simulate Ctrl+U to delete the line written previously
rl.write(null, { ctrl: true, name: 'u' });
```

The rl.write() method will write the data to the readline Interface's input as if it were provided by the user.
rl[Symbol.asyncIterator]()#
History
| Version | Changes |
|---|---|
| v11.14.0, v10.17.0 | Symbol.asyncIterator support is no longer experimental. |
| v11.4.0, v10.16.0 | Added in: v11.4.0, v10.16.0 |
- Returns: <AsyncIterator>

Create an AsyncIterator object that iterates through each line in the input stream as a string. This method allows asynchronous iteration of InterfaceConstructor objects through for await...of loops.

Errors in the input stream are not forwarded.

If the loop is terminated with break, throw, or return, rl.close() will be called. In other words, iterating over an InterfaceConstructor will always consume the input stream fully.

Performance is not on par with the traditional 'line' event API. Use 'line' instead for performance-sensitive applications.

```js
async function processLineByLine() {
  const rl = readline.createInterface({
    // ...
  });

  for await (const line of rl) {
    // Each line in the readline input will be successively available here as
    // `line`.
  }
}
```

readline.createInterface() will start to consume the input stream once invoked. Having asynchronous operations between interface creation and asynchronous iteration may result in missed lines.
rl.line#
History
| Version | Changes |
|---|---|
| v15.8.0, v14.18.0 | Value will always be a string, never undefined. |
| v0.1.98 | Added in: v0.1.98 |
- Type:<string>
The current input data being processed by node.
This can be used when collecting input from a TTY stream to retrieve the current value that has been processed thus far, prior to the line event being emitted. Once the line event has been emitted, this property will be an empty string.

Be aware that modifying the value during the instance runtime may have unintended consequences if rl.cursor is not also controlled.

If not using a TTY stream for input, use the 'line' event.

One possible use case would be as follows:

```js
const values = ['lorem ipsum', 'dolor sit amet'];
const rl = readline.createInterface(process.stdin);
const showResults = debounce(() => {
  console.log(
    '\n',
    values.filter((val) => val.startsWith(rl.line)).join(' '),
  );
}, 300);
process.stdin.on('keypress', (c, k) => {
  showResults();
});
```

rl.cursor#
- Type: <number> | <undefined>

The cursor position relative to rl.line.

This will track where the current cursor lands in the input string when reading input from a TTY stream. The position of the cursor determines the portion of the input string that will be modified as input is processed, as well as the column where the terminal caret will be rendered.
Promises API#
History
| Version | Changes |
|---|---|
| v24.0.0, v22.17.0 | Marking the API stable. |
| v17.0.0 | Added in: v17.0.0 |
Class: readlinePromises.Interface#
- Extends: <readline.InterfaceConstructor>

Instances of the readlinePromises.Interface class are constructed using the readlinePromises.createInterface() method. Every instance is associated with a single input Readable stream and a single output Writable stream. The output stream is used to print prompts for user input that arrives on, and is read from, the input stream.
rl.question(query[, options])#
- query <string> A statement or query to write to output, prepended to the prompt.
- options <Object>
  - signal <AbortSignal> Optionally allows the question() to be canceled using an AbortSignal.
- Returns: <Promise> A promise that is fulfilled with the user's input in response to the query.

The rl.question() method displays the query by writing it to the output, waits for user input to be provided on input, then fulfills the returned promise with the provided input.
When called, rl.question() will resume the input stream if it has been paused.

If the readlinePromises.Interface was created with output set to null or undefined the query is not written.

If the question is called after rl.close(), it returns a rejected promise.
Example usage:
```js
const answer = await rl.question('What is your favorite food? ');
console.log(`Oh, so your favorite food is ${answer}`);
```

Using an AbortSignal to cancel a question.

```js
const signal = AbortSignal.timeout(10_000);

signal.addEventListener('abort', () => {
  console.log('The food question timed out');
}, { once: true });

const answer = await rl.question('What is your favorite food? ', { signal });
console.log(`Oh, so your favorite food is ${answer}`);
```

Class: readlinePromises.Readline#
new readlinePromises.Readline(stream[, options])#
- stream <stream.Writable> A TTY stream.
- options <Object>
  - autoCommit <boolean> If true, no need to call rl.commit().
rl.clearLine(dir)#
- dir <integer>
  - -1: to the left from cursor
  - 1: to the right from cursor
  - 0: the entire line
- Returns: this

The rl.clearLine() method adds to the internal list of pending actions an action that clears the current line of the associated stream in a specified direction identified by dir. Call rl.commit() to see the effect of this method, unless autoCommit: true was passed to the constructor.
rl.clearScreenDown()#
- Returns: this
The `rl.clearScreenDown()` method adds to the internal list of pending actions an action that clears the associated stream from the current position of the cursor down.

Call `rl.commit()` to see the effect of this method, unless `autoCommit: true` was passed to the constructor.
rl.commit()#
- Returns:<Promise>
The `rl.commit()` method sends all the pending actions to the associated `stream` and clears the internal list of pending actions.
rl.cursorTo(x[, y])#
The `rl.cursorTo()` method adds to the internal list of pending actions an action that moves the cursor to the specified position in the associated `stream`.

Call `rl.commit()` to see the effect of this method, unless `autoCommit: true` was passed to the constructor.
rl.moveCursor(dx, dy)#
The `rl.moveCursor()` method adds to the internal list of pending actions an action that moves the cursor *relative* to its current position in the associated `stream`.

Call `rl.commit()` to see the effect of this method, unless `autoCommit: true` was passed to the constructor.
rl.rollback()#
- Returns: this
The `rl.rollback()` method clears the internal list of pending actions without sending it to the associated `stream`.
readlinePromises.createInterface(options)#
- `options` <Object>
  - `input` <stream.Readable> The Readable stream to listen to. This option is required.
  - `output` <stream.Writable> The Writable stream to write readline data to.
  - `completer` <Function> An optional function used for Tab autocompletion.
  - `terminal` <boolean> `true` if the `input` and `output` streams should be treated like a TTY, and have ANSI/VT100 escape codes written to it. Default: checking `isTTY` on the `output` stream upon instantiation.
  - `history` <string[]> Initial list of history lines. This option makes sense only if `terminal` is set to `true` by the user or by an internal `output` check, otherwise the history caching mechanism is not initialized at all. Default: `[]`.
  - `historySize` <number> Maximum number of history lines retained. To disable the history set this value to `0`. This option makes sense only if `terminal` is set to `true` by the user or by an internal `output` check, otherwise the history caching mechanism is not initialized at all. Default: `30`.
  - `removeHistoryDuplicates` <boolean> If `true`, when a new input line added to the history list duplicates an older one, this removes the older line from the list. Default: `false`.
  - `prompt` <string> The prompt string to use. Default: `'> '`.
  - `crlfDelay` <number> If the delay between `\r` and `\n` exceeds `crlfDelay` milliseconds, both `\r` and `\n` will be treated as separate end-of-line input. `crlfDelay` will be coerced to a number no less than `100`. It can be set to `Infinity`, in which case `\r` followed by `\n` will always be considered a single newline (which may be reasonable for reading files with the `\r\n` line delimiter). Default: `100`.
  - `escapeCodeTimeout` <number> The duration (in milliseconds) `readlinePromises` will wait for a character when reading an ambiguous key sequence (one that can both form a complete key sequence using the input read so far and can take additional input to complete a longer key sequence). Default: `500`.
  - `tabSize` <integer> The number of spaces a tab is equal to (minimum 1). Default: `8`.
  - `signal` <AbortSignal> Allows closing the interface using an AbortSignal.
- Returns:<readlinePromises.Interface>
ThereadlinePromises.createInterface() method creates a newreadlinePromises.Interfaceinstance.
```js
import { createInterface } from 'node:readline/promises';
import { stdin, stdout } from 'node:process';
const rl = createInterface({
  input: stdin,
  output: stdout,
});
```

```js
const { createInterface } = require('node:readline/promises');
const rl = createInterface({
  input: process.stdin,
  output: process.stdout,
});
```
Once the `readlinePromises.Interface` instance is created, the most common case is to listen for the `'line'` event:
```js
rl.on('line', (line) => {
  console.log(`Received: ${line}`);
});
```

If `terminal` is `true` for this instance then the `output` stream will get the best compatibility if it defines an `output.columns` property and emits a `'resize'` event on the `output` if or when the columns ever change (`process.stdout` does this automatically when it is a TTY).
Use of the completer function#

The `completer` function takes the current line entered by the user as an argument, and returns an `Array` with 2 entries:

- An `Array` with matching entries for the completion.
- The substring that was used for the matching.

For instance: `[[substr1, substr2, ...], originalsubstring]`.
```js
function completer(line) {
  const completions = '.help .error .exit .quit .q'.split(' ');
  const hits = completions.filter((c) => c.startsWith(line));
  // Show all completions if none found
  return [hits.length ? hits : completions, line];
}
```

The `completer` function can also return a <Promise>, or be asynchronous:

```js
async function completer(linePartial) {
  await someAsyncWork();
  return [['123'], linePartial];
}
```

Callback API#
Class: readline.Interface#
History
| Version | Changes |
|---|---|
| v17.0.0 | The class |
| v0.1.104 | Added in: v0.1.104 |
- Extends:<readline.InterfaceConstructor>
Instances of the `readline.Interface` class are constructed using the `readline.createInterface()` method. Every instance is associated with a single `input` Readable stream and a single `output` Writable stream. The `output` stream is used to print prompts for user input that arrives on, and is read from, the `input` stream.
rl.question(query[, options], callback)#
- `query` <string> A statement or query to write to `output`, prepended to the prompt.
- `options` <Object>
  - `signal` <AbortSignal> Optionally allows the `question()` to be canceled using an `AbortController`.
- `callback` <Function> A callback function that is invoked with the user's input in response to the `query`.
The `rl.question()` method displays the `query` by writing it to the `output`, waits for user input to be provided on `input`, then invokes the `callback` function passing the provided input as the first argument.

When called, `rl.question()` will resume the `input` stream if it has been paused.

If the `readline.Interface` was created with `output` set to `null` or `undefined` the `query` is not written.

The `callback` function passed to `rl.question()` does not follow the typical pattern of accepting an `Error` object or `null` as the first argument. The `callback` is called with the provided answer as the only argument.

An error will be thrown if calling `rl.question()` after `rl.close()`.
Example usage:
```js
rl.question('What is your favorite food? ', (answer) => {
  console.log(`Oh, so your favorite food is ${answer}`);
});
```

Using an `AbortController` to cancel a question.

```js
const ac = new AbortController();
const signal = ac.signal;

rl.question('What is your favorite food? ', { signal }, (answer) => {
  console.log(`Oh, so your favorite food is ${answer}`);
});

signal.addEventListener('abort', () => {
  console.log('The food question timed out');
}, { once: true });

setTimeout(() => ac.abort(), 10000);
```

readline.clearLine(stream, dir[, callback])#
History
| Version | Changes |
|---|---|
| v18.0.0 | Passing an invalid callback to the |
| v12.7.0 | The stream's write() callback and return value are exposed. |
| v0.7.7 | Added in: v0.7.7 |
- `stream` <stream.Writable>
- `dir` <number>
  - `-1`: to the left from cursor
  - `1`: to the right from cursor
  - `0`: the entire line
- `callback` <Function> Invoked once the operation completes.
- Returns: <boolean> `false` if `stream` wishes for the calling code to wait for the `'drain'` event to be emitted before continuing to write additional data; otherwise `true`.
The `readline.clearLine()` method clears the current line of the given TTY stream in a specified direction identified by `dir`.
readline.clearScreenDown(stream[, callback])#
History
| Version | Changes |
|---|---|
| v18.0.0 | Passing an invalid callback to the |
| v12.7.0 | The stream's write() callback and return value are exposed. |
| v0.7.7 | Added in: v0.7.7 |
- `stream` <stream.Writable>
- `callback` <Function> Invoked once the operation completes.
- Returns: <boolean> `false` if `stream` wishes for the calling code to wait for the `'drain'` event to be emitted before continuing to write additional data; otherwise `true`.

The `readline.clearScreenDown()` method clears the given TTY stream from the current position of the cursor down.
readline.createInterface(options)#
History
| Version | Changes |
|---|---|
| v15.14.0, v14.18.0 | The |
| v15.8.0, v14.18.0 | The |
| v13.9.0 | The |
| v8.3.0, v6.11.4 | Remove max limit of |
| v6.6.0 | The |
| v6.3.0 | The |
| v6.0.0 | The |
| v0.1.98 | Added in: v0.1.98 |
- `options` <Object>
  - `input` <stream.Readable> The Readable stream to listen to. This option is required.
  - `output` <stream.Writable> The Writable stream to write readline data to.
  - `completer` <Function> An optional function used for Tab autocompletion.
  - `terminal` <boolean> `true` if the `input` and `output` streams should be treated like a TTY, and have ANSI/VT100 escape codes written to it. Default: checking `isTTY` on the `output` stream upon instantiation.
  - `history` <string[]> Initial list of history lines. This option makes sense only if `terminal` is set to `true` by the user or by an internal `output` check, otherwise the history caching mechanism is not initialized at all. Default: `[]`.
  - `historySize` <number> Maximum number of history lines retained. To disable the history set this value to `0`. This option makes sense only if `terminal` is set to `true` by the user or by an internal `output` check, otherwise the history caching mechanism is not initialized at all. Default: `30`.
  - `removeHistoryDuplicates` <boolean> If `true`, when a new input line added to the history list duplicates an older one, this removes the older line from the list. Default: `false`.
  - `prompt` <string> The prompt string to use. Default: `'> '`.
  - `crlfDelay` <number> If the delay between `\r` and `\n` exceeds `crlfDelay` milliseconds, both `\r` and `\n` will be treated as separate end-of-line input. `crlfDelay` will be coerced to a number no less than `100`. It can be set to `Infinity`, in which case `\r` followed by `\n` will always be considered a single newline (which may be reasonable for reading files with the `\r\n` line delimiter). Default: `100`.
  - `escapeCodeTimeout` <number> The duration (in milliseconds) `readline` will wait for a character when reading an ambiguous key sequence (one that can both form a complete key sequence using the input read so far and can take additional input to complete a longer key sequence). Default: `500`.
  - `tabSize` <integer> The number of spaces a tab is equal to (minimum 1). Default: `8`.
  - `signal` <AbortSignal> Allows closing the interface using an AbortSignal. Aborting the signal will internally call `close` on the interface.
- Returns:<readline.Interface>
Thereadline.createInterface() method creates a newreadline.Interfaceinstance.
```js
import { createInterface } from 'node:readline';
import { stdin, stdout } from 'node:process';
const rl = createInterface({
  input: stdin,
  output: stdout,
});
```

```js
const { createInterface } = require('node:readline');
const rl = createInterface({
  input: process.stdin,
  output: process.stdout,
});
```
Once the `readline.Interface` instance is created, the most common case is to listen for the `'line'` event:
```js
rl.on('line', (line) => {
  console.log(`Received: ${line}`);
});
```

If `terminal` is `true` for this instance then the `output` stream will get the best compatibility if it defines an `output.columns` property and emits a `'resize'` event on the `output` if or when the columns ever change (`process.stdout` does this automatically when it is a TTY).
When creating a `readline.Interface` using `stdin` as input, the program will not terminate until it receives an EOF character. To exit without waiting for user input, call `process.stdin.unref()`.
Use of the completer function#

The `completer` function takes the current line entered by the user as an argument, and returns an `Array` with 2 entries:

- An `Array` with matching entries for the completion.
- The substring that was used for the matching.

For instance: `[[substr1, substr2, ...], originalsubstring]`.
```js
function completer(line) {
  const completions = '.help .error .exit .quit .q'.split(' ');
  const hits = completions.filter((c) => c.startsWith(line));
  // Show all completions if none found
  return [hits.length ? hits : completions, line];
}
```

The `completer` function can be called asynchronously if it accepts two arguments:

```js
function completer(linePartial, callback) {
  callback(null, [['123'], linePartial]);
}
```

readline.cursorTo(stream, x[, y][, callback])#
History
| Version | Changes |
|---|---|
| v18.0.0 | Passing an invalid callback to the |
| v12.7.0 | The stream's write() callback and return value are exposed. |
| v0.7.7 | Added in: v0.7.7 |
- `stream` <stream.Writable>
- `x` <number>
- `y` <number>
- `callback` <Function> Invoked once the operation completes.
- Returns: <boolean> `false` if `stream` wishes for the calling code to wait for the `'drain'` event to be emitted before continuing to write additional data; otherwise `true`.

The `readline.cursorTo()` method moves the cursor to the specified position in a given TTY `stream`.
readline.moveCursor(stream, dx, dy[, callback])#
History
| Version | Changes |
|---|---|
| v18.0.0 | Passing an invalid callback to the |
| v12.7.0 | The stream's write() callback and return value are exposed. |
| v0.7.7 | Added in: v0.7.7 |
- `stream` <stream.Writable>
- `dx` <number>
- `dy` <number>
- `callback` <Function> Invoked once the operation completes.
- Returns: <boolean> `false` if `stream` wishes for the calling code to wait for the `'drain'` event to be emitted before continuing to write additional data; otherwise `true`.

The `readline.moveCursor()` method moves the cursor *relative* to its current position in a given TTY `stream`.
readline.emitKeypressEvents(stream[, interface])#
- `stream` <stream.Readable>
- `interface` <readline.InterfaceConstructor>

The `readline.emitKeypressEvents()` method causes the given Readable stream to begin emitting `'keypress'` events corresponding to received input.

Optionally, `interface` specifies a `readline.Interface` instance for which autocompletion is disabled when copy-pasted input is detected.

If the `stream` is a TTY, then it must be in raw mode.

This is automatically called by any readline instance on its `input` if the input is a terminal. Closing the readline instance does not stop the `input` from emitting `'keypress'` events.
```js
readline.emitKeypressEvents(process.stdin);
if (process.stdin.isTTY)
  process.stdin.setRawMode(true);
```

Example: Tiny CLI#
The following example illustrates the use of the `readline.Interface` class to implement a small command-line interface:
```js
import { createInterface } from 'node:readline';
import { exit, stdin, stdout } from 'node:process';
const rl = createInterface({
  input: stdin,
  output: stdout,
  prompt: 'OHAI> ',
});

rl.prompt();

rl.on('line', (line) => {
  switch (line.trim()) {
    case 'hello':
      console.log('world!');
      break;
    default:
      console.log(`Say what? I might have heard '${line.trim()}'`);
      break;
  }
  rl.prompt();
}).on('close', () => {
  console.log('Have a great day!');
  exit(0);
});
```

```js
const { createInterface } = require('node:readline');
const rl = createInterface({
  input: process.stdin,
  output: process.stdout,
  prompt: 'OHAI> ',
});

rl.prompt();

rl.on('line', (line) => {
  switch (line.trim()) {
    case 'hello':
      console.log('world!');
      break;
    default:
      console.log(`Say what? I might have heard '${line.trim()}'`);
      break;
  }
  rl.prompt();
}).on('close', () => {
  console.log('Have a great day!');
  process.exit(0);
});
```
Example: Read file stream line-by-line#
A common use case for `readline` is to consume an input file one line at a time. The easiest way to do so is leveraging the `fs.ReadStream` API as well as a `for await...of` loop:
```js
import { createReadStream } from 'node:fs';
import { createInterface } from 'node:readline';

async function processLineByLine() {
  const fileStream = createReadStream('input.txt');

  const rl = createInterface({
    input: fileStream,
    crlfDelay: Infinity,
  });
  // Note: we use the crlfDelay option to recognize all instances of CR LF
  // ('\r\n') in input.txt as a single line break.

  for await (const line of rl) {
    // Each line in input.txt will be successively available here as `line`.
    console.log(`Line from file: ${line}`);
  }
}

processLineByLine();
```

```js
const { createReadStream } = require('node:fs');
const { createInterface } = require('node:readline');

async function processLineByLine() {
  const fileStream = createReadStream('input.txt');

  const rl = createInterface({
    input: fileStream,
    crlfDelay: Infinity,
  });
  // Note: we use the crlfDelay option to recognize all instances of CR LF
  // ('\r\n') in input.txt as a single line break.

  for await (const line of rl) {
    // Each line in input.txt will be successively available here as `line`.
    console.log(`Line from file: ${line}`);
  }
}

processLineByLine();
```
Alternatively, one could use the `'line'` event:
```js
import { createReadStream } from 'node:fs';
import { createInterface } from 'node:readline';

const rl = createInterface({
  input: createReadStream('sample.txt'),
  crlfDelay: Infinity,
});

rl.on('line', (line) => {
  console.log(`Line from file: ${line}`);
});
```

```js
const { createReadStream } = require('node:fs');
const { createInterface } = require('node:readline');

const rl = createInterface({
  input: createReadStream('sample.txt'),
  crlfDelay: Infinity,
});

rl.on('line', (line) => {
  console.log(`Line from file: ${line}`);
});
```
Currently, the `for await...of` loop can be a bit slower. If `async`/`await` flow and speed are both essential, a mixed approach can be applied:
```js
import { once } from 'node:events';
import { createReadStream } from 'node:fs';
import { createInterface } from 'node:readline';

(async function processLineByLine() {
  try {
    const rl = createInterface({
      input: createReadStream('big-file.txt'),
      crlfDelay: Infinity,
    });

    rl.on('line', (line) => {
      // Process the line.
    });

    await once(rl, 'close');

    console.log('File processed.');
  } catch (err) {
    console.error(err);
  }
})();
```

```js
const { once } = require('node:events');
const { createReadStream } = require('node:fs');
const { createInterface } = require('node:readline');

(async function processLineByLine() {
  try {
    const rl = createInterface({
      input: createReadStream('big-file.txt'),
      crlfDelay: Infinity,
    });

    rl.on('line', (line) => {
      // Process the line.
    });

    await once(rl, 'close');

    console.log('File processed.');
  } catch (err) {
    console.error(err);
  }
})();
```
TTY keybindings#
| Keybindings | Description | Notes |
|---|---|---|
| Ctrl+Shift+Backspace | Delete line left | Doesn't work on Linux, Mac and Windows |
| Ctrl+Shift+Delete | Delete line right | Doesn't work on Mac |
| Ctrl+C | Emit SIGINT or close the readline instance | |
| Ctrl+H | Delete left | |
| Ctrl+D | Delete right or close the readline instance in case the current line is empty / EOF | Doesn't work on Windows |
| Ctrl+U | Delete from the current position to the line start | |
| Ctrl+K | Delete from the current position to the end of line | |
| Ctrl+Y | Yank (Recall) the previously deleted text | Only works with text deleted by Ctrl+U or Ctrl+K |
| Meta+Y | Cycle among previously deleted texts | Only available when the last keystroke is Ctrl+Y or Meta+Y |
| Ctrl+A | Go to start of line | |
| Ctrl+E | Go to end of line | |
| Ctrl+B | Back one character | |
| Ctrl+F | Forward one character | |
| Ctrl+L | Clear screen | |
| Ctrl+N | Next history item | |
| Ctrl+P | Previous history item | |
| Ctrl+- | Undo previous change | Any keystroke that emits key code 0x1F will do this action. In many terminals, for example xterm, this is bound to Ctrl+-. |
| Ctrl+6 | Redo previous change | Many terminals don't have a default redo keystroke. We choose key code 0x1E to perform redo. In xterm, it is bound to Ctrl+6 by default. |
| Ctrl+Z | Moves running process into background. Typefg and pressEnter to return. | Doesn't work on Windows |
| Ctrl+W or Ctrl+Backspace | Delete backward to a word boundary | Ctrl+Backspace doesn't work on Linux, Mac and Windows |
| Ctrl+Delete | Delete forward to a word boundary | Doesn't work on Mac |
| Ctrl+Left arrow or Meta+B | Word left | Ctrl+Left arrow doesn't work on Mac |
| Ctrl+Right arrow or Meta+F | Word right | Ctrl+Right arrow doesn't work on Mac |
| Meta+D or Meta+Delete | Delete word right | Meta+Delete doesn't work on Windows |
| Meta+Backspace | Delete word left | Doesn't work on Mac |
REPL#
Source Code: lib/repl.js
The `node:repl` module provides a Read-Eval-Print-Loop (REPL) implementation that is available both as a standalone program and for inclusion in other applications. It can be accessed using:
```js
import repl from 'node:repl';
```

```js
const repl = require('node:repl');
```
Design and features#
The `node:repl` module exports the `repl.REPLServer` class. While running, instances of `repl.REPLServer` will accept individual lines of user input, evaluate those according to a user-defined evaluation function, then output the result. Input and output may be from `stdin` and `stdout`, respectively, or may be connected to any Node.js stream.

Instances of `repl.REPLServer` support automatic completion of inputs, completion preview, simplistic Emacs-style line editing, multi-line inputs, ZSH-like reverse-i-search, ZSH-like substring-based history search, ANSI-styled output, saving and restoring current REPL session state, error recovery, and customizable evaluation functions. Terminals that do not support ANSI styles and Emacs-style line editing automatically fall back to a limited feature set.
Commands and special keys#
The following special commands are supported by all REPL instances:
- `.break`: When in the process of inputting a multi-line expression, enter the `.break` command (or press Ctrl+C) to abort further input or processing of that expression.
- `.clear`: Resets the REPL `context` to an empty object and clears any multi-line expression being input.
- `.exit`: Close the I/O stream, causing the REPL to exit.
- `.help`: Show this list of special commands.
- `.save`: Save the current REPL session to a file: `> .save ./file/to/save.js`
- `.load`: Load a file into the current REPL session. `> .load ./file/to/load.js`
- `.editor`: Enter editor mode (Ctrl+D to finish, Ctrl+C to cancel).
```console
> .editor
// Entering editor mode (^D to finish, ^C to cancel)
function welcome(name) {
  return `Hello ${name}!`;
}

welcome('Node.js User');
// ^D
'Hello Node.js User!'
>
```

The following key combinations in the REPL have these special effects:
- Ctrl+C: When pressed once, has the same effect as the `.break` command. When pressed twice on a blank line, has the same effect as the `.exit` command.
- Ctrl+D: Has the same effect as the `.exit` command.
- Tab: When pressed on a blank line, displays global and local (scope) variables. When pressed while entering other input, displays relevant autocompletion options.
For key bindings related to the reverse-i-search, see reverse-i-search. For all other key bindings, see TTY keybindings.
Default evaluation#
By default, all instances of `repl.REPLServer` use an evaluation function that evaluates JavaScript expressions and provides access to Node.js built-in modules. This default behavior can be overridden by passing in an alternative evaluation function when the `repl.REPLServer` instance is created.
JavaScript expressions#
The default evaluator supports direct evaluation of JavaScript expressions:
```console
> 1 + 1
2
> const m = 2
undefined
> m + 1
3
```

Unless otherwise scoped within blocks or functions, variables declared either implicitly or using the `const`, `let`, or `var` keywords are declared at the global scope.
Global and local scope#
The default evaluator provides access to any variables that exist in the global scope. It is possible to expose a variable to the REPL explicitly by assigning it to the `context` object associated with each `REPLServer`:
```js
import repl from 'node:repl';
const msg = 'message';

repl.start('> ').context.m = msg;
```

```js
const repl = require('node:repl');
const msg = 'message';

repl.start('> ').context.m = msg;
```
Properties in the `context` object appear as local within the REPL:
```console
$ node repl_test.js
> m
'message'
```

Context properties are not read-only by default. To specify read-only globals, context properties must be defined using `Object.defineProperty()`:
```js
import repl from 'node:repl';
const msg = 'message';

const r = repl.start('> ');
Object.defineProperty(r.context, 'm', {
  configurable: false,
  enumerable: true,
  value: msg,
});
```

```js
const repl = require('node:repl');
const msg = 'message';

const r = repl.start('> ');
Object.defineProperty(r.context, 'm', {
  configurable: false,
  enumerable: true,
  value: msg,
});
```
Accessing core Node.js modules#
The default evaluator will automatically load Node.js core modules into the REPL environment when used. For instance, unless otherwise declared as a global or scoped variable, the input `fs` will be evaluated on-demand as `global.fs = require('node:fs')`.
```console
> fs.createReadStream('./some/file');
```

Global uncaught exceptions#
History
| Version | Changes |
|---|---|
| v12.3.0 | The |
The REPL uses the `domain` module to catch all uncaught exceptions for that REPL session.

This use of the `domain` module in the REPL has these side effects:
- Uncaught exceptions only emit the `'uncaughtException'` event in the standalone REPL. Adding a listener for this event in a REPL within another Node.js program results in `ERR_INVALID_REPL_INPUT`.

  ```js
  const r = repl.start();

  r.write('process.on("uncaughtException", () => console.log("Foobar"));\n');
  // Output stream includes:
  //   TypeError [ERR_INVALID_REPL_INPUT]: Listeners for `uncaughtException`
  //   cannot be used in the REPL

  r.close();
  ```

- Trying to use `process.setUncaughtExceptionCaptureCallback()` throws an `ERR_DOMAIN_CANNOT_SET_UNCAUGHT_EXCEPTION_CAPTURE` error.
Assignment of the _ (underscore) variable#
History
| Version | Changes |
|---|---|
| v9.8.0 | Added |
The default evaluator will, by default, assign the result of the most recently evaluated expression to the special variable `_` (underscore). Explicitly setting `_` to a value will disable this behavior.
```console
> [ 'a', 'b', 'c' ]
[ 'a', 'b', 'c' ]
> _.length
3
> _ += 1
Expression assignment to _ now disabled.
4
> 1 + 1
2
> _
4
```

Similarly, `_error` will refer to the last seen error, if there was any. Explicitly setting `_error` to a value will disable this behavior.

```console
> throw new Error('foo');
Uncaught Error: foo
> _error.message
'foo'
```

await keyword#
Support for the `await` keyword is enabled at the top level.
```console
> await Promise.resolve(123)
123
> await Promise.reject(new Error('REPL await'))
Uncaught Error: REPL await
    at REPL2:1:54
> const timeout = util.promisify(setTimeout);
undefined
> const old = Date.now(); await timeout(1000); console.log(Date.now() - old);
1002
undefined
```

One known limitation of using the `await` keyword in the REPL is that it will invalidate the lexical scoping of the `const` and `let` keywords.
For example:
```console
> const m = await Promise.resolve(123)
undefined
> m
123
> m = await Promise.resolve(234)
234
// redeclaring the constant does error
> const m = await Promise.resolve(345)
Uncaught SyntaxError: Identifier 'm' has already been declared
```

The `--no-experimental-repl-await` flag disables top-level await in the REPL.
Reverse-i-search#
The REPL supports bi-directional reverse-i-search similar to ZSH. It is triggered with Ctrl+R to search backward and Ctrl+S to search forwards.
Duplicated history entries will be skipped.
Entries are accepted as soon as any key is pressed that doesn't correspond with the reverse search. Cancelling is possible by pressing Esc or Ctrl+C.
Changing the direction immediately searches for the next entry in the expecteddirection from the current position on.
Custom evaluation functions#
When a new `repl.REPLServer` is created, a custom evaluation function may be provided. This can be used, for instance, to implement fully customized REPL applications.
An evaluation function accepts the following four arguments:
- `code` <string> The code to be executed (e.g. `1 + 1`).
- `context` <Object> The context in which the code is executed. This can either be the JavaScript `global` context or a context specific to the REPL instance, depending on the `useGlobal` option.
- `replResourceName` <string> An identifier for the REPL resource associated with the current code evaluation. This can be useful for debugging purposes.
- `callback` <Function> A function to invoke once the code evaluation is complete. The callback takes two parameters:
  - An error object to provide if an error occurred during evaluation, or `null`/`undefined` if no error occurred.
  - The result of the code evaluation (this is not relevant if an error is provided).
The following illustrates an example of a REPL that squares a given number; an error is printed instead if the provided input is not actually a number:
```js
import repl from 'node:repl';

function byThePowerOfTwo(number) {
  return number * number;
}

function myEval(code, context, replResourceName, callback) {
  if (isNaN(code)) {
    callback(new Error(`${code.trim()} is not a number`));
  } else {
    callback(null, byThePowerOfTwo(code));
  }
}

repl.start({ prompt: 'Enter a number: ', eval: myEval });
```

```js
const repl = require('node:repl');

function byThePowerOfTwo(number) {
  return number * number;
}

function myEval(code, context, replResourceName, callback) {
  if (isNaN(code)) {
    callback(new Error(`${code.trim()} is not a number`));
  } else {
    callback(null, byThePowerOfTwo(code));
  }
}

repl.start({ prompt: 'Enter a number: ', eval: myEval });
```
Recoverable errors#
At the REPL prompt, pressing Enter sends the current line of input to the `eval` function. In order to support multi-line input, the `eval` function can return an instance of `repl.Recoverable` to the provided callback function:
```js
const repl = require('node:repl');
const vm = require('node:vm');

function myEval(cmd, context, filename, callback) {
  let result;
  try {
    result = vm.runInThisContext(cmd);
  } catch (e) {
    if (isRecoverableError(e)) {
      return callback(new repl.Recoverable(e));
    }
  }
  callback(null, result);
}

function isRecoverableError(error) {
  if (error.name === 'SyntaxError') {
    return /^(Unexpected end of input|Unexpected token)/.test(error.message);
  }
  return false;
}
```

Customizing REPL output#
By default, `repl.REPLServer` instances format output using the `util.inspect()` method before writing the output to the provided Writable stream (`process.stdout` by default). The `showProxy` inspection option is set to true by default and the `colors` option is set to true depending on the REPL's `useColors` option.
The `useColors` boolean option can be specified at construction to instruct the default writer to use ANSI style codes to colorize the output from the `util.inspect()` method.
If the REPL is run as a standalone program, it is also possible to change the REPL's inspection defaults from inside the REPL by using the `inspect.replDefaults` property which mirrors the `defaultOptions` from `util.inspect()`.
```console
> util.inspect.replDefaults.compact = false;
false
> [1]
[
  1
]
>
```

To fully customize the output of a `repl.REPLServer` instance pass in a new function for the `writer` option on construction. The following example, for instance, simply converts any input text to upper case:
```js
import repl from 'node:repl';

const r = repl.start({ prompt: '> ', eval: myEval, writer: myWriter });

function myEval(cmd, context, filename, callback) {
  callback(null, cmd);
}

function myWriter(output) {
  return output.toUpperCase();
}
```

```js
const repl = require('node:repl');

const r = repl.start({ prompt: '> ', eval: myEval, writer: myWriter });

function myEval(cmd, context, filename, callback) {
  callback(null, cmd);
}

function myWriter(output) {
  return output.toUpperCase();
}
```
Class: REPLServer#
- `options` <Object> | <string> See `repl.start()`
- Extends: <readline.Interface>
Instances of `repl.REPLServer` are created using the `repl.start()` method or directly using the JavaScript `new` keyword.
```js
import repl from 'node:repl';

const options = { useColors: true };

const firstInstance = repl.start(options);
const secondInstance = new repl.REPLServer(options);
```

```js
const repl = require('node:repl');

const options = { useColors: true };

const firstInstance = repl.start(options);
const secondInstance = new repl.REPLServer(options);
```
Event: 'exit'#
The `'exit'` event is emitted when the REPL is exited either by receiving the `.exit` command as input, the user pressing Ctrl+C twice to signal SIGINT, or by pressing Ctrl+D to signal `'end'` on the input stream. The listener callback is invoked without any arguments.
```js
replServer.on('exit', () => {
  console.log('Received "exit" event from repl!');
  process.exit();
});
```

Event: 'reset'#
The `'reset'` event is emitted when the REPL's context is reset. This occurs whenever the `.clear` command is received as input unless the REPL is using the default evaluator and the `repl.REPLServer` instance was created with the `useGlobal` option set to `true`. The listener callback will be called with a reference to the `context` object as the only argument.

This can be used primarily to re-initialize REPL context to some pre-defined state:
```js
import repl from 'node:repl';

function initializeContext(context) {
  context.m = 'test';
}

const r = repl.start({ prompt: '> ' });
initializeContext(r.context);

r.on('reset', initializeContext);
```

```js
const repl = require('node:repl');

function initializeContext(context) {
  context.m = 'test';
}

const r = repl.start({ prompt: '> ' });
initializeContext(r.context);

r.on('reset', initializeContext);
```
When this code is executed, the global `'m'` variable can be modified but then reset to its initial value using the `.clear` command:

```console
$ ./node example.js
> m
'test'
> m = 1
1
> m
1
> .clear
Clearing context...
> m
'test'
>
```

replServer.defineCommand(keyword, cmd)#
- keyword <string> The command keyword (without a leading . character).
- cmd <Object> | <Function> The function to invoke when the command is processed.
The replServer.defineCommand() method is used to add new .-prefixed commands to the REPL instance. Such commands are invoked by typing a . followed by the keyword. The cmd is either a Function or an Object with the following properties:
- help <string> Help text to be displayed when .help is entered (Optional).
- action <Function> The function to execute, optionally accepting a single string argument.
The following example shows two new commands added to the REPL instance:
```js
import repl from 'node:repl';

const replServer = repl.start({ prompt: '> ' });
replServer.defineCommand('sayhello', {
  help: 'Say hello',
  action(name) {
    this.clearBufferedCommand();
    console.log(`Hello, ${name}!`);
    this.displayPrompt();
  },
});
replServer.defineCommand('saybye', function saybye() {
  console.log('Goodbye!');
  this.close();
});
```
The new commands can then be used from within the REPL instance:
```console
> .sayhello Node.js User
Hello, Node.js User!
> .saybye
Goodbye!
```

replServer.displayPrompt([preserveCursor])#
- preserveCursor <boolean>
The replServer.displayPrompt() method readies the REPL instance for input from the user, printing the configured prompt to a new line in the output and resuming the input to accept new input.

When multi-line input is being entered, a pipe '|' is printed rather than the 'prompt'.

When preserveCursor is true, the cursor placement will not be reset to 0.

The replServer.displayPrompt method is primarily intended to be called from within the action function for commands registered using the replServer.defineCommand() method.
replServer.clearBufferedCommand()#
The replServer.clearBufferedCommand() method clears any command that has been buffered but not yet executed. This method is primarily intended to be called from within the action function for commands registered using the replServer.defineCommand() method.
replServer.setupHistory(historyConfig, callback)#
History
| Version | Changes |
|---|---|
| v24.2.0 | Updated the |
| v11.10.0 | Added in: v11.10.0 |
- historyConfig <Object> | <string> the path to the history file
  - If it is a string, it is the path to the history file.
  - If it is an object, it can have the following properties:
    - filePath <string> the path to the history file
    - size <number> Maximum number of history lines retained. To disable the history set this value to 0. This option makes sense only if terminal is set to true by the user or by an internal output check, otherwise the history caching mechanism is not initialized at all. Default: 30.
    - removeHistoryDuplicates <boolean> If true, when a new input line added to the history list duplicates an older one, this removes the older line from the list. Default: false.
    - onHistoryFileLoaded <Function> called when history writes are ready or upon error
      - err <Error>
      - repl <repl.REPLServer>
- callback <Function> called when history writes are ready or upon error (Optional if provided as onHistoryFileLoaded in historyConfig)
  - err <Error>
  - repl <repl.REPLServer>
Initializes a history log file for the REPL instance. When executing the Node.js binary and using the command-line REPL, a history file is initialized by default. However, this is not the case when creating a REPL programmatically. Use this method to initialize a history log file when working with REPL instances programmatically.
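For instance, a programmatically created REPL might initialize history like this; a minimal sketch where the PassThrough streams and the history file name are illustrative, not part of the API:

```js
import repl from 'node:repl';
import os from 'node:os';
import path from 'node:path';
import { PassThrough } from 'node:stream';

// PassThrough streams stand in for a real terminal so the sketch is
// self-contained; a real application would usually use process.stdin/stdout.
const input = new PassThrough();
const output = new PassThrough();
const r = repl.start({ prompt: '> ', input, output, terminal: true });

// The history file name below is illustrative, not a Node.js convention.
r.setupHistory(path.join(os.tmpdir(), '.example_repl_history'), (err, server) => {
  if (err) throw err;
  // `server` is the same REPLServer instance, now recording history.
  server.close();
});
```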
repl.builtinModules#
Stability: 0 - Deprecated. Use module.builtinModules instead.

- Type: <string[]>
A list of the names of some Node.js modules, e.g., 'http'.
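Since this property is deprecated, new code should read the list from node:module instead; for example:

```js
import { builtinModules } from 'node:module';

// module.builtinModules is the supported replacement for
// repl.builtinModules and contains names such as 'http'.
console.log(builtinModules.includes('http')); // true
```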
An automated migration is available (source):
```bash
npx codemod@latest @nodejs/repl-builtin-modules
```

repl.start([options])#
History
| Version | Changes |
|---|---|
| v24.1.0 | Added the possibility to add/edit/remove multilines while adding a multiline command. |
| v24.0.0 | The multi-line indicator is now "|" instead of "...". Added support for multi-line history. It is now possible to "fix" multi-line commands with syntax errors by visiting the history and editing the command. When visiting the multiline history from an old node version, the multiline structure is not preserved. |
| v13.4.0, v12.17.0 | The |
| v12.0.0 | The |
| v10.0.0 | The |
| v6.3.0 | The |
| v5.8.0 | The |
| v0.1.91 | Added in: v0.1.91 |
- options <Object> | <string>
  - prompt <string> The input prompt to display. Default: '> ' (with a trailing space).
  - input <stream.Readable> The Readable stream from which REPL input will be read. Default: process.stdin.
  - output <stream.Writable> The Writable stream to which REPL output will be written. Default: process.stdout.
  - terminal <boolean> If true, specifies that the output should be treated as a TTY terminal. Default: checking the value of the isTTY property on the output stream upon instantiation.
  - eval <Function> The function to be used when evaluating each given line of input. Default: an async wrapper for the JavaScript eval() function. An eval function can error with repl.Recoverable to indicate the input was incomplete and prompt for additional lines. See the custom evaluation functions section for more details.
  - useColors <boolean> If true, specifies that the default writer function should include ANSI color styling to REPL output. If a custom writer function is provided then this has no effect. Default: checking color support on the output stream if the REPL instance's terminal value is true.
  - useGlobal <boolean> If true, specifies that the default evaluation function will use the JavaScript global as the context as opposed to creating a new separate context for the REPL instance. The node CLI REPL sets this value to true. Default: false.
  - ignoreUndefined <boolean> If true, specifies that the default writer will not output the return value of a command if it evaluates to undefined. Default: false.
  - writer <Function> The function to invoke to format the output of each command before writing to output. Default: util.inspect().
  - completer <Function> An optional function used for custom Tab auto completion. See readline.InterfaceCompleter for an example.
  - replMode <symbol> A flag that specifies whether the default evaluator executes all JavaScript commands in strict mode or default (sloppy) mode. Acceptable values are:
    - repl.REPL_MODE_SLOPPY to evaluate expressions in sloppy mode.
    - repl.REPL_MODE_STRICT to evaluate expressions in strict mode. This is equivalent to prefacing every repl statement with 'use strict'.
  - breakEvalOnSigint <boolean> Stop evaluating the current piece of code when SIGINT is received, such as when Ctrl+C is pressed. This cannot be used together with a custom eval function. Default: false.
  - preview <boolean> Defines if the repl prints autocomplete and output previews or not. Default: true with the default eval function and false in case a custom eval function is used. If terminal is falsy, then there are no previews and the value of preview has no effect.
- Returns: <repl.REPLServer>
The repl.start() method creates and starts a repl.REPLServer instance.

If options is a string, then it specifies the input prompt:
```js
import repl from 'node:repl';

// a Unix style prompt
repl.start('$ ');
```
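As a sketch of the eval option and repl.Recoverable behavior described above, the following hypothetical evaluator treats a trailing backslash as a request for another line of input (the PassThrough streams just make the example self-contained):

```js
import repl from 'node:repl';
import { PassThrough } from 'node:stream';

// Hypothetical evaluator: a trailing backslash means "input continues".
function lineEval(cmd, context, filename, callback) {
  if (cmd.trimEnd().endsWith('\\')) {
    // Recoverable tells the REPL the input was incomplete,
    // so it prompts for additional lines instead of reporting an error.
    callback(new repl.Recoverable(new Error('incomplete input')));
    return;
  }
  callback(null, cmd.trim());
}

// Streams stand in for a terminal so the sketch is self-contained.
const input = new PassThrough();
const output = new PassThrough();
const r = repl.start({ prompt: 'custom> ', eval: lineEval, input, output });
r.close();
```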
The Node.js REPL#
Node.js itself uses the node:repl module to provide its own interactive interface for executing JavaScript. This can be used by executing the Node.js binary without passing any arguments (or by passing the -i argument):
```console
$ node
> const a = [1, 2, 3];
undefined
> a
[ 1, 2, 3 ]
> a.forEach((v) => {
...   console.log(v);
... });
1
2
3
```

Environment variable options#
Various behaviors of the Node.js REPL can be customized using the following environment variables:
- NODE_REPL_HISTORY: When a valid path is given, persistent REPL history will be saved to the specified file rather than .node_repl_history in the user's home directory. Setting this value to '' (an empty string) will disable persistent REPL history. Whitespace will be trimmed from the value. On Windows platforms environment variables with empty values are invalid so set this variable to one or more spaces to disable persistent REPL history.
- NODE_REPL_HISTORY_SIZE: Controls how many lines of history will be persisted if history is available. Must be a positive number. Default: 1000.
- NODE_REPL_MODE: May be either 'sloppy' or 'strict'. Default: 'sloppy', which will allow non-strict mode code to be run.
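Assuming a POSIX shell, the variables can be combined in a single invocation; the history path and size below are illustrative, and a one-line session is piped in so the example terminates on its own:

```shell
# Strict mode, custom history location, and a larger history buffer.
echo '1 + 1' | NODE_REPL_HISTORY=/tmp/example_repl_history \
  NODE_REPL_HISTORY_SIZE=2000 \
  NODE_REPL_MODE=strict \
  node -i
```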
Persistent history#
By default, the Node.js REPL will persist history between node REPL sessions by saving inputs to a .node_repl_history file located in the user's home directory. This can be disabled by setting the environment variable NODE_REPL_HISTORY=''.
Using the Node.js REPL with advanced line-editors#
For advanced line-editors, start Node.js with the environment variable NODE_NO_READLINE=1. This will start the main and debugger REPL in canonical terminal settings, which will allow use with rlwrap.
For example, the following can be added to a.bashrc file:
```bash
alias node="env NODE_NO_READLINE=1 rlwrap node"
```

Starting multiple REPL instances in the same process#
It is possible to create and run multiple REPL instances against a single running instance of Node.js that share a single global object (by setting the useGlobal option to true) but have separate I/O interfaces.

The following example, for instance, provides separate REPLs on stdin, a Unix socket, and a TCP socket, all sharing the same global object:
```js
import net from 'node:net';
import repl from 'node:repl';
import process from 'node:process';
import fs from 'node:fs';

let connections = 0;

repl.start({
  prompt: 'Node.js via stdin> ',
  useGlobal: true,
  input: process.stdin,
  output: process.stdout,
});

const unixSocketPath = '/tmp/node-repl-sock';
// If the socket file already exists let's remove it
fs.rmSync(unixSocketPath, { force: true });
net.createServer((socket) => {
  connections += 1;
  repl.start({
    prompt: 'Node.js via Unix socket> ',
    useGlobal: true,
    input: socket,
    output: socket,
  }).on('exit', () => {
    socket.end();
  });
}).listen(unixSocketPath);

net.createServer((socket) => {
  connections += 1;
  repl.start({
    prompt: 'Node.js via TCP socket> ',
    useGlobal: true,
    input: socket,
    output: socket,
  }).on('exit', () => {
    socket.end();
  });
}).listen(5001);
```
Running this application from the command line will start a REPL on stdin. Other REPL clients may connect through the Unix socket or TCP socket. telnet, for instance, is useful for connecting to TCP sockets, while socat can be used to connect to both Unix and TCP sockets.

By starting a REPL from a Unix socket-based server instead of stdin, it is possible to connect to a long-running Node.js process without restarting it.
Examples#
Full-featured "terminal" REPL over net.Server and net.Socket#
This is an example of how to run a "full-featured" (terminal) REPL using net.Server and net.Socket.

The following script starts a TCP server on port 1337 that allows clients to establish socket connections to its REPL instance.
```js
// repl-server.js
import repl from 'node:repl';
import net from 'node:net';

net
  .createServer((socket) => {
    const r = repl.start({
      prompt: `socket ${socket.remoteAddress}:${socket.remotePort}> `,
      input: socket,
      output: socket,
      terminal: true,
      useGlobal: false,
    });
    r.on('exit', () => {
      socket.end();
    });
    r.context.socket = socket;
  })
  .listen(1337);
```
While the following implements a client that can create a socket connection with the above defined server over port 1337.
```js
// repl-client.js
import net from 'node:net';
import process from 'node:process';

const sock = net.connect(1337);

process.stdin.pipe(sock);
sock.pipe(process.stdout);

sock.on('connect', () => {
  process.stdin.resume();
  process.stdin.setRawMode(true);
});

sock.on('close', function done() {
  process.stdin.setRawMode(false);
  process.stdin.pause();
  sock.removeListener('close', done);
});

process.stdin.on('end', () => {
  sock.destroy();
  console.log();
});

process.stdin.on('data', (b) => {
  if (b.length === 1 && b[0] === 4) {
    process.stdin.emit('end');
  }
});
```
To run the example open two different terminals on your machine, start the server with node repl-server.js in one terminal and node repl-client.js in the other.
Original code fromhttps://gist.github.com/TooTallNate/2209310.
REPL over curl#

This is an example of how to run a REPL instance over curl(1).

The following script starts an HTTP server on port 8000 that can accept a connection established via curl(1).
```js
import http from 'node:http';
import repl from 'node:repl';

const server = http.createServer((req, res) => {
  res.setHeader('content-type', 'multipart/octet-stream');

  repl.start({
    prompt: 'curl repl> ',
    input: req,
    output: res,
    terminal: false,
    useColors: true,
    useGlobal: false,
  });
});

server.listen(8000);
```
When the above script is running you can then use curl(1) to connect to the server and connect to its REPL instance by running curl --no-progress-meter -sSNT. localhost:8000.
Warning: This example is intended purely for educational purposes to demonstrate how Node.js REPLs can be started using different I/O streams. It should not be used in production environments or any context where security is a concern without additional protective measures. If you need to implement REPLs in a real-world application, consider alternative approaches that mitigate these risks, such as using secure input mechanisms and avoiding open network interfaces.
Original code fromhttps://gist.github.com/TooTallNate/2053342.
Diagnostic report#
History
| Version | Changes |
|---|---|
| v23.3.0, v22.13.0 | Added |
| v22.0.0, v20.13.0 | Added |
Delivers a JSON-formatted diagnostic summary, written to a file.
The report is intended for development, test, and production use, to capture and preserve information for problem determination. It includes JavaScript and native stack traces, heap statistics, platform information, resource usage etc. With the report option enabled, diagnostic reports can be triggered on unhandled exceptions, fatal errors and user signals, in addition to triggering programmatically through API calls.
A complete example report that was generated on an uncaught exception is provided below for reference.
```json
{"header":{"reportVersion":5,"event":"exception","trigger":"Exception","filename":"report.20181221.005011.8974.0.001.json","dumpEventTime":"2018-12-21T00:50:11Z","dumpEventTimeStamp":"1545371411331","processId":8974,"cwd":"/home/nodeuser/project/node","commandLine":["/home/nodeuser/project/node/out/Release/node","--report-uncaught-exception","/home/nodeuser/project/node/test/report/test-exception.js","child"],"nodejsVersion":"v12.0.0-pre","glibcVersionRuntime":"2.17","glibcVersionCompiler":"2.17","wordSize":"64 bit","arch":"x64","platform":"linux","componentVersions":{"node":"12.0.0-pre","v8":"7.1.302.28-node.5","uv":"1.24.1","zlib":"1.2.11","ares":"1.15.0","modules":"68","nghttp2":"1.34.0","napi":"3","llhttp":"1.0.1","openssl":"1.1.0j"},"release":{"name":"node"},"osName":"Linux","osRelease":"3.10.0-862.el7.x86_64","osVersion":"#1 SMP Wed Mar 21 18:14:51 EDT 2018","osMachine":"x86_64","cpus":[{"model":"Intel(R) Core(TM) i7-6820HQ CPU @ 2.70GHz","speed":2700,"user":88902660,"nice":0,"sys":50902570,"idle":241732220,"irq":0},{"model":"Intel(R) Core(TM) i7-6820HQ CPU @ 2.70GHz","speed":2700,"user":88902660,"nice":0,"sys":50902570,"idle":241732220,"irq":0}],"networkInterfaces":[{"name":"en0","internal":false,"mac":"13:10:de:ad:be:ef","address":"10.0.0.37","netmask":"255.255.255.0","family":"IPv4"}],"host":"test_machine"},
"javascriptStack":{"message":"Error: *** test-exception.js: throwing uncaught Error","stack":["at myException (/home/nodeuser/project/node/test/report/test-exception.js:9:11)","at Object.<anonymous> (/home/nodeuser/project/node/test/report/test-exception.js:12:3)","at Module._compile (internal/modules/cjs/loader.js:718:30)","at Object.Module._extensions..js (internal/modules/cjs/loader.js:729:10)","at Module.load (internal/modules/cjs/loader.js:617:32)","at tryModuleLoad (internal/modules/cjs/loader.js:560:12)","at Function.Module._load (internal/modules/cjs/loader.js:552:3)","at Function.Module.runMain (internal/modules/cjs/loader.js:771:12)","at executeUserCode (internal/bootstrap/node.js:332:15)"]},
"nativeStack":[{"pc":"0x000055b57f07a9ef","symbol":"report::GetNodeReport(v8::Isolate*, node::Environment*, char const*, char const*, v8::Local<v8::String>, std::ostream&) [./node]"},{"pc":"0x000055b57f07cf03","symbol":"report::GetReport(v8::FunctionCallbackInfo<v8::Value> const&) [./node]"},{"pc":"0x000055b57f1bccfd","symbol":" [./node]"},{"pc":"0x000055b57f1be048","symbol":"v8::internal::Builtin_HandleApiCall(int, v8::internal::Object**, v8::internal::Isolate*) [./node]"},{"pc":"0x000055b57feeda0e","symbol":" [./node]"}],
"javascriptHeap":{"totalMemory":5660672,"executableMemory":524288,"totalCommittedMemory":5488640,"availableMemory":4341379928,"totalGlobalHandlesMemory":8192,"usedGlobalHandlesMemory":3136,"usedMemory":4816432,"memoryLimit":4345298944,"mallocedMemory":254128,"externalMemory":315644,"peakMallocedMemory":98752,"nativeContextCount":1,"detachedContextCount":0,"doesZapGarbage":0,"heapSpaces":{"read_only_space":{"memorySize":524288,"committedMemory":39208,"capacity":515584,"used":30504,"available":485080},"new_space":{"memorySize":2097152,"committedMemory":2019312,"capacity":1031168,"used":985496,"available":45672},"old_space":{"memorySize":2273280,"committedMemory":1769008,"capacity":1974640,"used":1725488,"available":249152},"code_space":{"memorySize":696320,"committedMemory":184896,"capacity":152128,"used":152128,"available":0},"map_space":{"memorySize":536576,"committedMemory":344928,"capacity":327520,"used":327520,"available":0},"large_object_space":{"memorySize":0,"committedMemory":0,"capacity":1520590336,"used":0,"available":1520590336},"new_large_object_space":{"memorySize":0,"committedMemory":0,"capacity":0,"used":0,"available":0}}},
"resourceUsage":{"rss":"35766272","free_memory":"1598337024","total_memory":"17179869184","available_memory":"1598337024","maxRss":"36624662528","constrained_memory":"36624662528","userCpuSeconds":0.040072,"kernelCpuSeconds":0.016029,"cpuConsumptionPercent":5.6101,"userCpuConsumptionPercent":4.0072,"kernelCpuConsumptionPercent":1.6029,"pageFaults":{"IORequired":0,"IONotRequired":4610},"fsActivity":{"reads":0,"writes":0}},
"uvthreadResourceUsage":{"userCpuSeconds":0.039843,"kernelCpuSeconds":0.015937,"cpuConsumptionPercent":5.578,"userCpuConsumptionPercent":3.9843,"kernelCpuConsumptionPercent":1.5937,"fsActivity":{"reads":0,"writes":0}},
"libuv":[{"type":"async","is_active":true,"is_referenced":false,"address":"0x0000000102910900","details":""},{"type":"timer","is_active":false,"is_referenced":false,"address":"0x00007fff5fbfeab0","repeat":0,"firesInMsFromNow":94403548320796,"expired":true},{"type":"check","is_active":true,"is_referenced":false,"address":"0x00007fff5fbfeb48"},{"type":"idle","is_active":false,"is_referenced":true,"address":"0x00007fff5fbfebc0"},{"type":"prepare","is_active":false,"is_referenced":false,"address":"0x00007fff5fbfec38"},{"type":"check","is_active":false,"is_referenced":false,"address":"0x00007fff5fbfecb0"},{"type":"async","is_active":true,"is_referenced":false,"address":"0x000000010188f2e0"},{"type":"tty","is_active":false,"is_referenced":true,"address":"0x000055b581db0e18","width":204,"height":55,"fd":17,"writeQueueSize":0,"readable":true,"writable":true},{"type":"signal","is_active":true,"is_referenced":false,"address":"0x000055b581d80010","signum":28,"signal":"SIGWINCH"},{"type":"tty","is_active":true,"is_referenced":true,"address":"0x000055b581df59f8","width":204,"height":55,"fd":19,"writeQueueSize":0,"readable":true,"writable":true},{"type":"loop","is_active":true,"address":"0x000055fc7b2cb180","loopIdleTimeSeconds":22644.8},{"type":"tcp","is_active":true,"is_referenced":true,"address":"0x000055e70fcb85d8","localEndpoint":{"host":"localhost","ip4":"127.0.0.1","port":48986},"remoteEndpoint":{"host":"localhost","ip4":"127.0.0.1","port":38573},"sendBufferSize":2626560,"recvBufferSize":131072,"fd":24,"writeQueueSize":0,"readable":true,"writable":true}],
"workers":[],
"environmentVariables":{"REMOTEHOST":"REMOVED","MANPATH":"/opt/rh/devtoolset-3/root/usr/share/man:","XDG_SESSION_ID":"66126","HOSTNAME":"test_machine","HOST":"test_machine","TERM":"xterm-256color","SHELL":"/bin/csh","SSH_CLIENT":"REMOVED","PERL5LIB":"/opt/rh/devtoolset-3/root//usr/lib64/perl5/vendor_perl:/opt/rh/devtoolset-3/root/usr/lib/perl5:/opt/rh/devtoolset-3/root//usr/share/perl5/vendor_perl","OLDPWD":"/home/nodeuser/project/node/src","JAVACONFDIRS":"/opt/rh/devtoolset-3/root/etc/java:/etc/java","SSH_TTY":"/dev/pts/0","PCP_DIR":"/opt/rh/devtoolset-3/root","GROUP":"normaluser","USER":"nodeuser","LD_LIBRARY_PATH":"/opt/rh/devtoolset-3/root/usr/lib64:/opt/rh/devtoolset-3/root/usr/lib","HOSTTYPE":"x86_64-linux","XDG_CONFIG_DIRS":"/opt/rh/devtoolset-3/root/etc/xdg:/etc/xdg","MAIL":"/var/spool/mail/nodeuser","PATH":"/home/nodeuser/project/node:/opt/rh/devtoolset-3/root/usr/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin","PWD":"/home/nodeuser/project/node","LANG":"en_US.UTF-8","PS1":"\\u@\\h : \\[\\e[31m\\]\\w\\[\\e[m\\] > ","SHLVL":"2","HOME":"/home/nodeuser","OSTYPE":"linux","VENDOR":"unknown","PYTHONPATH":"/opt/rh/devtoolset-3/root/usr/lib64/python2.7/site-packages:/opt/rh/devtoolset-3/root/usr/lib/python2.7/site-packages","MACHTYPE":"x86_64","LOGNAME":"nodeuser","XDG_DATA_DIRS":"/opt/rh/devtoolset-3/root/usr/share:/usr/local/share:/usr/share","LESSOPEN":"||/usr/bin/lesspipe.sh %s","INFOPATH":"/opt/rh/devtoolset-3/root/usr/share/info","XDG_RUNTIME_DIR":"/run/user/50141","_":"./node"},
"userLimits":{"core_file_size_blocks":{"soft":"","hard":"unlimited"},"data_seg_size_bytes":{"soft":"unlimited","hard":"unlimited"},"file_size_blocks":{"soft":"unlimited","hard":"unlimited"},"max_locked_memory_bytes":{"soft":"unlimited","hard":65536},"max_memory_size_bytes":{"soft":"unlimited","hard":"unlimited"},"open_files":{"soft":"unlimited","hard":4096},"stack_size_bytes":{"soft":"unlimited","hard":"unlimited"},"cpu_time_seconds":{"soft":"unlimited","hard":"unlimited"},"max_user_processes":{"soft":"unlimited","hard":4127290},"virtual_memory_bytes":{"soft":"unlimited","hard":"unlimited"}},
"sharedObjects":["/lib64/libdl.so.2","/lib64/librt.so.1","/lib64/libstdc++.so.6","/lib64/libm.so.6","/lib64/libgcc_s.so.1","/lib64/libpthread.so.0","/lib64/libc.so.6","/lib64/ld-linux-x86-64.so.2"]}
```

Usage#
```bash
node --report-uncaught-exception --report-on-signal \
  --report-on-fatalerror app.js
```

- --report-uncaught-exception Enables report to be generated on uncaught exceptions. Useful when inspecting the JavaScript stack in conjunction with the native stack and other runtime environment data.
- --report-on-signal Enables report to be generated upon receiving the specified (or predefined) signal to the running Node.js process. (See below on how to modify the signal that triggers the report.) The default signal is SIGUSR2. Useful when a report needs to be triggered from another program. Application monitors may leverage this feature to collect reports at regular intervals and plot a rich set of internal runtime data to their views.
Signal based report generation is not supported in Windows.
Under normal circumstances, there is no need to modify the report triggering signal. However, if SIGUSR2 is already used for other purposes, then this flag helps to change the signal for report generation and preserve the original meaning of SIGUSR2 for the said purposes.
- --report-on-fatalerror Enables the report to be triggered on fatal errors (internal errors within the Node.js runtime, such as out of memory) that lead to termination of the application. Useful to inspect various diagnostic data elements such as heap, stack, event loop state, resource consumption etc. to reason about the fatal error.
- --report-compact Write reports in a compact format, single-line JSON, more easily consumable by log processing systems than the default multi-line format designed for human consumption.
- --report-directory Location at which the report will be generated.
- --report-filename Name of the file to which the report will be written.
- --report-signal Sets or resets the signal for report generation (not supported on Windows). The default signal is SIGUSR2.
- --report-exclude-network Excludes header.networkInterfaces and disables the reverse DNS queries in libuv.*.(remote|local)Endpoint.host from the diagnostic report. By default this is not set and the network interfaces are included.
- --report-exclude-env Excludes environmentVariables from the diagnostic report. By default this is not set and the environment variables are included.
A report can also be triggered via an API call from a JavaScript application:
```js
process.report.writeReport();
```

This function takes an optional additional argument filename, which is the name of a file into which the report is written.
```js
process.report.writeReport('./foo.json');
```

This function takes an optional additional argument err which is an Error object that will be used as the context for the JavaScript stack printed in the report. When using report to handle errors in a callback or an exception handler, this allows the report to include the location of the original error as well as where it was handled.
```js
try {
  process.chdir('/non-existent-path');
} catch (err) {
  process.report.writeReport(err);
}
// Any other code
```

If both filename and error object are passed to writeReport() the error object must be the second parameter.
```js
try {
  process.chdir('/non-existent-path');
} catch (err) {
  process.report.writeReport(filename, err);
}
// Any other code
```

The content of the diagnostic report can be returned as a JavaScript Object via an API call from a JavaScript application:
```js
const report = process.report.getReport();
console.log(typeof report === 'object'); // true

// Similar to process.report.writeReport() output
console.log(JSON.stringify(report, null, 2));
```

This function takes an optional additional argument err, which is an Error object that will be used as the context for the JavaScript stack printed in the report.
```js
const report = process.report.getReport(new Error('custom error'));
console.log(typeof report === 'object'); // true
```

The API versions are useful when inspecting the runtime state from within the application, in expectation of self-adjusting the resource consumption, load balancing, monitoring etc.
The content of the report consists of a header section containing the event type, date, time, PID, and Node.js version, sections containing JavaScript and native stack traces, a section containing V8 heap information, a section containing libuv handle information, and an OS platform information section showing CPU and memory usage and system limits. An example report can be triggered using the Node.js REPL:
```console
$ node
> process.report.writeReport();
Writing Node.js report to file: report.20181126.091102.8480.0.001.json
Node.js report completed
>
```

When a report is written, start and end messages are issued to stderr and the filename of the report is returned to the caller. The default filename includes the date, time, PID, and a sequence number. The sequence number helps in associating the report dump with the runtime state if generated multiple times for the same Node.js process.
Report Version#
The diagnostic report has an associated single-digit version number (report.header.reportVersion), uniquely representing the report format. The version number is bumped when a new key is added or removed, or the data type of a value is changed. Report version definitions are consistent across LTS releases.
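The report version produced by the running binary can be inspected programmatically; for example:

```js
// header.reportVersion identifies the report format, independent of
// the Node.js version that produced the report.
const { header } = process.report.getReport();
console.log(typeof header.reportVersion); // 'number'
```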
Version history#
Version 5#
History
| Version | Changes |
|---|---|
| v23.5.0, v22.13.0 | Fix typos in the memory limit units. |
Replace the keys data_seg_size_kbytes, max_memory_size_kbytes, and virtual_memory_kbytes with data_seg_size_bytes, max_memory_size_bytes, and virtual_memory_bytes respectively in the userLimits section, as these values are given in bytes.
```json
{
  "userLimits": {
    // Skip some keys ...
    "data_seg_size_bytes": { // replacing data_seg_size_kbytes
      "soft": "unlimited",
      "hard": "unlimited"
    },
    // ...
    "max_memory_size_bytes": { // replacing max_memory_size_kbytes
      "soft": "unlimited",
      "hard": "unlimited"
    },
    // ...
    "virtual_memory_bytes": { // replacing virtual_memory_kbytes
      "soft": "unlimited",
      "hard": "unlimited"
    }
  }
}
```

Version 4#
History
| Version | Changes |
|---|---|
| v23.3.0, v22.13.0 | Added |
New fields ip4 and ip6 are added to tcp and udp libuv handles endpoints. Examples:
```json
{
  "libuv": [
    {
      "type": "tcp",
      "is_active": true,
      "is_referenced": true,
      "address": "0x000055e70fcb85d8",
      "localEndpoint": {
        "host": "localhost",
        "ip4": "127.0.0.1", // new key
        "port": 48986
      },
      "remoteEndpoint": {
        "host": "localhost",
        "ip4": "127.0.0.1", // new key
        "port": 38573
      },
      "sendBufferSize": 2626560,
      "recvBufferSize": 131072,
      "fd": 24,
      "writeQueueSize": 0,
      "readable": true,
      "writable": true
    },
    {
      "type": "tcp",
      "is_active": true,
      "is_referenced": true,
      "address": "0x000055e70fcd68c8",
      "localEndpoint": {
        "host": "ip6-localhost",
        "ip6": "::1", // new key
        "port": 52266
      },
      "remoteEndpoint": {
        "host": "ip6-localhost",
        "ip6": "::1", // new key
        "port": 38573
      },
      "sendBufferSize": 2626560,
      "recvBufferSize": 131072,
      "fd": 25,
      "writeQueueSize": 0,
      "readable": false,
      "writable": false
    }
  ]
}
```

Version 3#
History
| Version | Changes |
|---|---|
| v19.1.0, v18.13.0 | Add more memory info. |
The following memory usage keys are added to the resourceUsage section.
```json
{
  "resourceUsage": {
    "rss": "35766272",
    "free_memory": "1598337024",
    "total_memory": "17179869184",
    "available_memory": "1598337024",
    "constrained_memory": "36624662528"
  }
}
```

Version 2#
History
| Version | Changes |
|---|---|
| v13.9.0, v12.16.2 | Workers are now included in the report. |
Added Worker support. Refer to the Interaction with workers section for more details.
Version 1#
This is the first version of the diagnostic report.
Configuration#
Additional runtime configuration of report generation is available via the following properties of process.report:
reportOnFatalError triggers diagnostic reporting on fatal errors whentrue.Defaults tofalse.
reportOnSignal triggers diagnostic reporting on signal whentrue. This isnot supported on Windows. Defaults tofalse.
reportOnUncaughtException triggers diagnostic reporting on uncaught exceptionwhentrue. Defaults tofalse.
signal specifies the POSIX signal identifier that will be usedto intercept external triggers for report generation. Defaults to'SIGUSR2'.
filename specifies the name of the output file in the file system.Special meaning is attached tostdout andstderr. Usage of thesewill result in report being written to the associated standard streams.In cases where standard streams are used, the value indirectory is ignored.URLs are not supported. Defaults to a composite filename that containstimestamp, PID, and sequence number.
- `directory` specifies the file system directory where the report will be written. URLs are not supported. Defaults to the current working directory of the Node.js process.
- `excludeNetwork` excludes `header.networkInterfaces` from the diagnostic report.
```js
// Trigger report only on uncaught exceptions.
process.report.reportOnFatalError = false;
process.report.reportOnSignal = false;
process.report.reportOnUncaughtException = true;

// Trigger report for both internal errors as well as external signal.
process.report.reportOnFatalError = true;
process.report.reportOnSignal = true;
process.report.reportOnUncaughtException = false;

// Change the default signal to 'SIGQUIT' and enable it.
process.report.reportOnFatalError = false;
process.report.reportOnUncaughtException = false;
process.report.reportOnSignal = true;
process.report.signal = 'SIGQUIT';

// Disable network interfaces reporting
process.report.excludeNetwork = true;
```

Configuration on module initialization is also available via environment variables:
NODE_OPTIONS="--report-uncaught-exception \ --report-on-fatalerror --report-on-signal \ --report-signal=SIGUSR2 --report-filename=./report.json \ --report-directory=/home/nodeuser"Specific API documentation can be found underprocess API documentation section.
Interaction with workers#
History
| Version | Changes |
|---|---|
| v13.9.0, v12.16.2 | Workers are now included in the report. |
Worker threads can create reports in the same way that the main thread does.
Reports will include information on any Workers that are children of the current thread as part of the `workers` section, with each Worker generating a report in the standard report format.
The thread which is generating the report will wait for the reports from Worker threads to finish. However, the latency for this will usually be low, as both running JavaScript and the event loop are interrupted to generate the report.
Single executable applications#
History
| Version | Changes |
|---|---|
| v25.5.0 | Added built-in single executable application generation via the `--build-sea` CLI flag. |
| v20.6.0 | Added support for "useSnapshot". |
| v20.6.0 | Added support for "useCodeCache". |
| v19.7.0, v18.16.0 | Added in: v19.7.0, v18.16.0 |
Source Code: src/node_sea.cc
This feature allows the distribution of a Node.js application conveniently to a system that does not have Node.js installed.
Node.js supports the creation of single executable applications by allowing the injection of a blob prepared by Node.js, which can contain a bundled script, into the `node` binary. During start up, the program checks if anything has been injected. If the blob is found, it executes the script in the blob. Otherwise Node.js operates as it normally does.
The single executable application feature currently only supports running a single embedded script using the CommonJS module system.
Users can create a single executable application from their bundled script with the `node` binary itself and any tool which can inject resources into the binary.
Create a JavaScript file:
```bash
echo 'console.log(`Hello, ${process.argv[2]}!`);' > hello.js
```

Create a configuration file building a blob that can be injected into the single executable application (see Generating single executable preparation blobs for details):
- On systems other than Windows:
echo'{ "main": "hello.js", "output": "sea" }' > sea-config.json- On Windows:
echo'{ "main": "hello.js", "output": "sea.exe" }' > sea-config.jsonThe
.exeextension is necessary.Generate the target executable:
```bash
node --build-sea sea-config.json
```

Sign the binary (macOS and Windows only):
- On macOS:
```bash
codesign --sign - hello
```

- On Windows (optional):
A certificate needs to be present for this to work. However, the unsigned binary would still be runnable.
```powershell
signtool sign /fd SHA256 hello.exe
```

Run the binary:
- On systems other than Windows
```console
$ ./hello world
Hello, world!
```

- On Windows
```console
$ .\hello.exe world
Hello, world!
```
Generating single executable applications with `--build-sea`#
To generate a single executable application directly, the `--build-sea` flag can be used. It takes a path to a configuration file in JSON format. If the path passed to it isn't absolute, Node.js will use the path relative to the current working directory.
The configuration currently reads the following top-level fields:
{"main":"/path/to/bundled/script.js","executable":"/path/to/node/binary",// Optional, if not specified, uses the current Node.js binary"output":"/path/to/write/the/generated/executable","disableExperimentalSEAWarning":true,// Default: false"useSnapshot":false,// Default: false"useCodeCache":true,// Default: false"execArgv":["--no-warnings","--max-old-space-size=4096"],// Optional"execArgvExtension":"env",// Default: "env", options: "none", "env", "cli""assets":{// Optional"a.dat":"/path/to/a.dat","b.txt":"/path/to/b.txt"}}If the paths are not absolute, Node.js will use the path relative to thecurrent working directory. The version of the Node.js binary used to producethe blob must be the same as the one to which the blob will be injected.
Note: When generating cross-platform SEAs (e.g., generating a SEA for linux-x64 on darwin-arm64), `useCodeCache` and `useSnapshot` must be set to false to avoid generating incompatible executables. Since code cache and snapshots can only be loaded on the same platform where they are compiled, the generated executable might crash on startup when trying to load code cache or snapshots built on a different platform.
Assets#
Users can include assets by adding a key-path dictionary to the configuration as the `assets` field. At build time, Node.js would read the assets from the specified paths and bundle them into the preparation blob. In the generated executable, users can retrieve the assets using the `sea.getAsset()` and `sea.getAssetAsBlob()` APIs.
{"main":"/path/to/bundled/script.js","output":"/path/to/write/the/generated/executable","assets":{"a.jpg":"/path/to/a.jpg","b.txt":"/path/to/b.txt"}}The single-executable application can access the assets as follows:
```js
const { getAsset, getAssetAsBlob, getRawAsset, getAssetKeys } = require('node:sea');
// Get all asset keys.
const keys = getAssetKeys();
console.log(keys); // ['a.jpg', 'b.txt']
// Returns a copy of the data in an ArrayBuffer.
const image = getAsset('a.jpg');
// Returns a string decoded from the asset as UTF8.
const text = getAsset('b.txt', 'utf8');
// Returns a Blob containing the asset.
const blob = getAssetAsBlob('a.jpg');
// Returns an ArrayBuffer containing the raw asset without copying.
const raw = getRawAsset('a.jpg');
```

See documentation of the `sea.getAsset()`, `sea.getAssetAsBlob()`, `sea.getRawAsset()` and `sea.getAssetKeys()` APIs for more information.
Startup snapshot support#
The `useSnapshot` field can be used to enable startup snapshot support. In this case, the `main` script would not be executed when the final executable is launched. Instead, it would be run when the single executable application preparation blob is generated on the building machine. The generated preparation blob would then include a snapshot capturing the states initialized by the `main` script. The final executable, with the preparation blob injected, would deserialize the snapshot at run time.
When `useSnapshot` is true, the main script must invoke the `v8.startupSnapshot.setDeserializeMainFunction()` API to configure code that needs to be run when the final executable is launched by the users.
The typical pattern for an application to use snapshot in a single executableapplication is:
- At build time, on the building machine, the main script is run to initialize the heap to a state that's ready to take user input. The script should also configure a main function with `v8.startupSnapshot.setDeserializeMainFunction()`. This function will be compiled and serialized into the snapshot, but not invoked at build time.
- At run time, the main function will be run on top of the deserialized heap on the user machine to process user input and generate output.
The general constraints of the startup snapshot scripts also apply to the main script when it's used to build the snapshot for the single executable application, and the main script can use the `v8.startupSnapshot` API to adapt to these constraints. See documentation about startup snapshot support in Node.js.
V8 code cache support#
When `useCodeCache` is set to `true` in the configuration, during the generation of the single executable preparation blob, Node.js will compile the `main` script to generate the V8 code cache. The generated code cache would be part of the preparation blob and get injected into the final executable. When the single executable application is launched, instead of compiling the `main` script from scratch, Node.js would use the code cache to speed up the compilation, then execute the script, which would improve the startup performance.
Note: `import()` does not work when `useCodeCache` is `true`.
Execution arguments#
The `execArgv` field can be used to specify Node.js-specific arguments that will be automatically applied when the single executable application starts. This allows application developers to configure Node.js runtime options without requiring end users to be aware of these flags.
For example, the following configuration:
{"main":"/path/to/bundled/script.js","output":"/path/to/write/the/generated/executable","execArgv":["--no-warnings","--max-old-space-size=2048"]}will instruct the SEA to be launched with the--no-warnings and--max-old-space-size=2048 flags. In the scripts embedded in the executable, these flagscan be accessed using theprocess.execArgv property:
```js
// If the executable is launched with `sea user-arg1 user-arg2`
console.log(process.execArgv);
// Prints: ['--no-warnings', '--max-old-space-size=2048']
console.log(process.argv);
// Prints: ['/path/to/sea', '/path/to/sea', 'user-arg1', 'user-arg2']
```

The user-provided arguments are in the `process.argv` array starting from index 2, similar to what would happen if the application is started with:
```bash
node --no-warnings --max-old-space-size=2048 /path/to/bundled/script.js user-arg1 user-arg2
```

Execution argument extension#
The `execArgvExtension` field controls how additional execution arguments can be provided beyond those specified in the `execArgv` field. It accepts one of three string values:
"none": No extension is allowed. Only the arguments specified inexecArgvwill be used,and theNODE_OPTIONSenvironment variable will be ignored."env":(Default) TheNODE_OPTIONSenvironment variable can extend the execution arguments.This is the default behavior to maintain backward compatibility."cli": The executable can be launched with--node-options="--flag1 --flag2", and those flagswill be parsed as execution arguments for Node.js instead of being passed to the user script.This allows using arguments that are not supported by theNODE_OPTIONSenvironment variable.
For example, with"execArgvExtension": "cli":
{"main":"/path/to/bundled/script.js","output":"/path/to/write/the/generated/executable","execArgv":["--no-warnings"],"execArgvExtension":"cli"}The executable can be launched as:
```bash
./my-sea --node-options="--trace-exit" user-arg1 user-arg2
```

This would be equivalent to running:
```bash
node --no-warnings --trace-exit /path/to/bundled/script.js user-arg1 user-arg2
```

In the injected main script#
Single-executable application API#
The `node:sea` builtin allows interaction with the single-executable application from the JavaScript main script embedded into the executable.
sea.getAsset(key[, encoding])#
This method can be used to retrieve the assets configured to be bundled into the single-executable application at build time. An error is thrown when no matching asset can be found.
- `key` <string> the key for the asset in the dictionary specified by the `assets` field in the single-executable application configuration.
- `encoding` <string> If specified, the asset will be decoded as a string. Any encoding supported by the `TextDecoder` is accepted. If unspecified, an `ArrayBuffer` containing a copy of the asset would be returned instead.
- Returns: <string> | <ArrayBuffer>
sea.getAssetAsBlob(key[, options])#
Similar to `sea.getAsset()`, but returns the result in a <Blob>. An error is thrown when no matching asset can be found.
sea.getRawAsset(key)#
This method can be used to retrieve the assets configured to be bundled into the single-executable application at build time. An error is thrown when no matching asset can be found.
Unlike `sea.getAsset()` or `sea.getAssetAsBlob()`, this method does not return a copy. Instead, it returns the raw asset bundled inside the executable.
For now, users should avoid writing to the returned array buffer. If the injected section is not marked as writable or not aligned properly, writes to the returned array buffer are likely to result in a crash.
- `key` <string> the key for the asset in the dictionary specified by the `assets` field in the single-executable application configuration.
- Returns: <ArrayBuffer>
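Because writing to the raw buffer is unsafe, a defensive pattern is to copy it before mutating. The sketch below is illustrative only: it uses a plain `ArrayBuffer` as a stand-in for what `sea.getRawAsset()` would return inside a single-executable application.

```javascript
// Sketch: take a copy of a raw ArrayBuffer before mutating it, since writing
// to the buffer returned by getRawAsset() can crash the process. `raw` here is
// a stand-in for the value sea.getRawAsset() would return inside a SEA.
function toWritableCopy(raw) {
  // ArrayBuffer.prototype.slice(0) allocates a new buffer and copies the bytes.
  return raw.slice(0);
}

const raw = new Uint8Array([1, 2, 3, 4]).buffer;
const copy = toWritableCopy(raw);
new Uint8Array(copy)[0] = 42; // safe: only the copy is mutated
console.log(new Uint8Array(raw)[0]);  // 1
console.log(new Uint8Array(copy)[0]); // 42
```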
sea.getAssetKeys()#
- Returns: <string[]> An array containing all the keys of the assets embedded in the executable. If no assets are embedded, returns an empty array.
This method can be used to retrieve an array of all the keys of assets embedded into the single-executable application. An error is thrown when not running inside a single-executable application.
require(id) in the injected main script is not file based#
`require()` in the injected main script is not the same as the `require()` available to modules that are not injected. It also does not have any of the properties that non-injected `require()` has except `require.main`. It can only be used to load built-in modules. Attempting to load a module that can only be found in the file system will throw an error.
Instead of relying on a file-based `require()`, users can bundle their application into a standalone JavaScript file to inject into the executable. This also ensures a more deterministic dependency graph.
However, if a file-based `require()` is still needed, that can also be achieved:
```js
const { createRequire } = require('node:module');
require = createRequire(__filename);
```

`__filename` and `module.filename` in the injected main script#
The values of `__filename` and `module.filename` in the injected main script are equal to `process.execPath`.
__dirname in the injected main script#
The value of `__dirname` in the injected main script is equal to the directory name of `process.execPath`.
Using native addons in the injected main script#
Native addons can be bundled as assets into the single-executable application by specifying them in the `assets` field of the configuration file used to generate the single-executable application preparation blob. The addon can then be loaded in the injected main script by writing the asset to a temporary file and loading it with `process.dlopen()`.
{"main":"/path/to/bundled/script.js","output":"/path/to/write/the/generated/executable","assets":{"myaddon.node":"/path/to/myaddon/build/Release/myaddon.node"}}// script.jsconst fs =require('node:fs');const os =require('node:os');const path =require('node:path');const { getRawAsset } =require('node:sea');const addonPath = path.join(os.tmpdir(),'myaddon.node');fs.writeFileSync(addonPath,newUint8Array(getRawAsset('myaddon.node')));const myaddon = {exports: {} };process.dlopen(myaddon, addonPath);console.log(myaddon.exports);fs.rmSync(addonPath);Known caveat: if the single-executable application is produced by postject running on a Linux arm64 docker container,the produced ELF binary does not have the correct hash table to load the addons andwill crash onprocess.dlopen(). Build the single-executable application on other platforms, or at least ona non-container Linux arm64 environment to work around this issue.
Notes#
Single executable application creation process#
The process documented here is subject to change.
1. Generating single executable preparation blobs#
To build a single executable application, Node.js would first generate a blob that contains all the necessary information to run the bundled script. When using `--build-sea`, this step is done internally along with the injection.
Dumping the preparation blob to disk#
Before `--build-sea` was introduced, an older workflow wrote the preparation blob to disk for injection by external tools. This can still be used for verification purposes.
To dump the preparation blob to disk for verification, use `--experimental-sea-config`. This writes a file that can be injected into a Node.js binary using tools like postject.
The configuration is similar to that of `--build-sea`, except that the `output` field specifies the path to write the generated blob file instead of the final executable.
{"main":"/path/to/bundled/script.js",// Instead of the final executable, this is the path to write the blob."output":"/path/to/write/the/generated/blob.blob"}2. Injecting the preparation blob into thenode binary#
To complete the creation of a single executable application, the generated blob needs to be injected into a copy of the `node` binary, as documented below.
When using `--build-sea`, this step is done internally along with the blob generation.
- If the `node` binary is a PE file, the blob should be injected as a resource named `NODE_SEA_BLOB`.
- If the `node` binary is a Mach-O file, the blob should be injected as a section named `NODE_SEA_BLOB` in the `NODE_SEA` segment.
- If the `node` binary is an ELF file, the blob should be injected as a note named `NODE_SEA_BLOB`.
Then, the SEA building process searches the binary for the `NODE_SEA_FUSE_fce680ab2cc467b6e072b8b5df1996b2:0` fuse string and flips the last character to `1` to indicate that a resource has been injected.
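The fuse-flipping step can be illustrated with a small sketch. This is not the actual code used by Node.js or postject, just a demonstration of the idea: scan the binary for the sentinel string and flip the trailing `0` to `1`.

```javascript
// Illustrative sketch of the fuse mechanism (not the real implementation).
const FUSE = 'NODE_SEA_FUSE_fce680ab2cc467b6e072b8b5df1996b2:0';

function flipFuse(binary) {
  // Find the sentinel fuse string inside the binary.
  const idx = binary.indexOf(FUSE);
  if (idx === -1) throw new Error('fuse not found');
  // Flip the trailing '0' to '1' to mark the blob as injected.
  binary[idx + FUSE.length - 1] = '1'.charCodeAt(0);
  return binary;
}

// A fake "binary" containing the fuse somewhere in the middle.
const fake = Buffer.concat([Buffer.from('...'), Buffer.from(FUSE), Buffer.from('...')]);
flipFuse(fake);
console.log(fake.includes('NODE_SEA_FUSE_fce680ab2cc467b6e072b8b5df1996b2:1')); // true
```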
Injecting the preparation blob manually#
Before `--build-sea` was introduced, an older workflow allowed external tools to inject the generated blob into a copy of the `node` binary.
For example, with postject:
Create a copy of the `node` executable and name it according to your needs:

- On systems other than Windows:

```bash
cp $(command -v node) hello
```

- On Windows:

```powershell
node -e "require('fs').copyFileSync(process.execPath, 'hello.exe')"
```

The `.exe` extension is necessary.

Remove the signature of the binary (macOS and Windows only):
- On macOS:
```bash
codesign --remove-signature hello
```

- On Windows (optional):
signtool can be used from the installed Windows SDK. If this step is skipped, ignore any signature-related warning from postject.
```powershell
signtool remove /s hello.exe
```

Inject the blob into the copied binary by running `postject` with the following options:

- `hello` / `hello.exe` - The name of the copy of the `node` executable created earlier.
- `NODE_SEA_BLOB` - The name of the resource / note / section in the binary where the contents of the blob will be stored.
- `sea-prep.blob` - The name of the blob created in step 1.
- `--sentinel-fuse NODE_SEA_FUSE_fce680ab2cc467b6e072b8b5df1996b2` - The fuse used by the Node.js project to detect if a file has been injected.
- `--macho-segment-name NODE_SEA` (only needed on macOS) - The name of the segment in the binary where the contents of the blob will be stored.
To summarize, here is the required command for each platform:
On Linux:

```bash
npx postject hello NODE_SEA_BLOB sea-prep.blob \
  --sentinel-fuse NODE_SEA_FUSE_fce680ab2cc467b6e072b8b5df1996b2
```

On Windows - PowerShell:

```powershell
npx postject hello.exe NODE_SEA_BLOB sea-prep.blob `
  --sentinel-fuse NODE_SEA_FUSE_fce680ab2cc467b6e072b8b5df1996b2
```

On Windows - Command Prompt:

```cmd
npx postject hello.exe NODE_SEA_BLOB sea-prep.blob ^
  --sentinel-fuse NODE_SEA_FUSE_fce680ab2cc467b6e072b8b5df1996b2
```

On macOS:

```bash
npx postject hello NODE_SEA_BLOB sea-prep.blob \
  --sentinel-fuse NODE_SEA_FUSE_fce680ab2cc467b6e072b8b5df1996b2 \
  --macho-segment-name NODE_SEA
```
Platform support#
Single-executable support is tested regularly on CI only on the following platforms:
- Windows
- macOS
- Linux (all distributions supported by Node.js except Alpine, and all architectures supported by Node.js except s390x)
This is due to a lack of better tools to generate single-executables that can be used to test this feature on other platforms.
Suggestions for other resource injection tools/workflows are welcome. Please start a discussion at https://github.com/nodejs/single-executable/discussions to help us document them.
SQLite#
History
| Version | Changes |
|---|---|
| v23.4.0, v22.13.0 | SQLite is no longer behind the `--experimental-sqlite` flag. |
| v22.5.0 | Added in: v22.5.0 |
Source Code: lib/sqlite.js
The `node:sqlite` module facilitates working with SQLite databases. To access it:
```mjs
import sqlite from 'node:sqlite';
```

```cjs
const sqlite = require('node:sqlite');
```
This module is only available under the `node:` scheme.
The following example shows the basic usage of the `node:sqlite` module to open an in-memory database, write data to the database, and then read the data back.
```mjs
import { DatabaseSync } from 'node:sqlite';

const database = new DatabaseSync(':memory:');

// Execute SQL statements from strings.
database.exec(`
  CREATE TABLE data(
    key INTEGER PRIMARY KEY,
    value TEXT
  ) STRICT
`);
// Create a prepared statement to insert data into the database.
const insert = database.prepare('INSERT INTO data (key, value) VALUES (?, ?)');
// Execute the prepared statement with bound values.
insert.run(1, 'hello');
insert.run(2, 'world');
// Create a prepared statement to read data from the database.
const query = database.prepare('SELECT * FROM data ORDER BY key');
// Execute the prepared statement and log the result set.
console.log(query.all());
// Prints: [ { key: 1, value: 'hello' }, { key: 2, value: 'world' } ]
```

```cjs
'use strict';
const { DatabaseSync } = require('node:sqlite');

const database = new DatabaseSync(':memory:');

// Execute SQL statements from strings.
database.exec(`
  CREATE TABLE data(
    key INTEGER PRIMARY KEY,
    value TEXT
  ) STRICT
`);
// Create a prepared statement to insert data into the database.
const insert = database.prepare('INSERT INTO data (key, value) VALUES (?, ?)');
// Execute the prepared statement with bound values.
insert.run(1, 'hello');
insert.run(2, 'world');
// Create a prepared statement to read data from the database.
const query = database.prepare('SELECT * FROM data ORDER BY key');
// Execute the prepared statement and log the result set.
console.log(query.all());
// Prints: [ { key: 1, value: 'hello' }, { key: 2, value: 'world' } ]
```
Class:DatabaseSync#
History
| Version | Changes |
|---|---|
| v24.0.0, v22.16.0 | Add |
| v23.10.0, v22.15.0 | The `path` argument now supports `Buffer` and `URL` objects. |
| v22.5.0 | Added in: v22.5.0 |
This class represents a single connection to a SQLite database. All APIs exposed by this class execute synchronously.
new DatabaseSync(path[, options])#
History
| Version | Changes |
|---|---|
| v25.5.0 | Enable |
| v25.1.0 | Add |
| v24.4.0, v22.18.0 | Add new SQLite database options. |
| v22.5.0 | Added in: v22.5.0 |
- `path` <string> | <Buffer> | <URL> The path of the database. A SQLite database can be stored in a file or completely in memory. To use a file-backed database, the path should be a file path. To use an in-memory database, the path should be the special name `':memory:'`.
- `options` <Object> Configuration options for the database connection. The following options are supported:
  - `open` <boolean> If `true`, the database is opened by the constructor. When this value is `false`, the database must be opened via the `open()` method. **Default:** `true`.
  - `readOnly` <boolean> If `true`, the database is opened in read-only mode. If the database does not exist, opening it will fail. **Default:** `false`.
  - `enableForeignKeyConstraints` <boolean> If `true`, foreign key constraints are enabled. This is recommended but can be disabled for compatibility with legacy database schemas. The enforcement of foreign key constraints can be enabled and disabled after opening the database using `PRAGMA foreign_keys`. **Default:** `true`.
  - `enableDoubleQuotedStringLiterals` <boolean> If `true`, SQLite will accept double-quoted string literals. This is not recommended but can be enabled for compatibility with legacy database schemas. **Default:** `false`.
  - `allowExtension` <boolean> If `true`, the `loadExtension` SQL function and the `loadExtension()` method are enabled. You can call `enableLoadExtension(false)` later to disable this feature. **Default:** `false`.
  - `timeout` <number> The busy timeout in milliseconds. This is the maximum amount of time that SQLite will wait for a database lock to be released before returning an error. **Default:** `0`.
  - `readBigInts` <boolean> If `true`, integer fields are read as JavaScript `BigInt` values. If `false`, integer fields are read as JavaScript numbers. **Default:** `false`.
  - `returnArrays` <boolean> If `true`, query results are returned as arrays instead of objects. **Default:** `false`.
  - `allowBareNamedParameters` <boolean> If `true`, allows binding named parameters without the prefix character (e.g., `foo` instead of `:foo`). **Default:** `true`.
  - `allowUnknownNamedParameters` <boolean> If `true`, unknown named parameters are ignored when binding. If `false`, an exception is thrown for unknown named parameters. **Default:** `false`.
  - `defensive` <boolean> If `true`, enables the defensive flag. When the defensive flag is enabled, language features that allow ordinary SQL to deliberately corrupt the database file are disabled. The defensive flag can also be set using `enableDefensive()`. **Default:** `true`.
Constructs a newDatabaseSync instance.
database.aggregate(name, options)#
Registers a new aggregate function with the SQLite database. This method is a wrapper around `sqlite3_create_window_function()`.
- `name` <string> The name of the SQLite function to create.
- `options` <Object> Function configuration settings.
  - `deterministic` <boolean> If `true`, the `SQLITE_DETERMINISTIC` flag is set on the created function. **Default:** `false`.
  - `directOnly` <boolean> If `true`, the `SQLITE_DIRECTONLY` flag is set on the created function. **Default:** `false`.
  - `useBigIntArguments` <boolean> If `true`, integer arguments to `options.step` and `options.inverse` are converted to `BigInt`s. If `false`, integer arguments are passed as JavaScript numbers. **Default:** `false`.
  - `varargs` <boolean> If `true`, `options.step` and `options.inverse` may be invoked with any number of arguments (between zero and `SQLITE_MAX_FUNCTION_ARG`). If `false`, `inverse` and `step` must be invoked with exactly `length` arguments. **Default:** `false`.
  - `start` <number> | <string> | <null> | <Array> | <Object> | <Function> The identity value for the aggregation function. This value is used when the aggregation function is initialized. When a <Function> is passed the identity will be its return value.
  - `step` <Function> The function to call for each row in the aggregation. The function receives the current state and the row value. The return value of this function should be the new state.
  - `result` <Function> The function to call to get the result of the aggregation. The function receives the final state and should return the result of the aggregation.
  - `inverse` <Function> When this function is provided, the `aggregate` method will work as a window function. The function receives the current state and the dropped row value. The return value of this function should be the new state.
When used as a window function, the `result` function will be called multiple times.
```cjs
const { DatabaseSync } = require('node:sqlite');
const db = new DatabaseSync(':memory:');

db.exec(`
  CREATE TABLE t3(x, y);
  INSERT INTO t3 VALUES ('a', 4), ('b', 5), ('c', 3), ('d', 8), ('e', 1);
`);
db.aggregate('sumint', {
  start: 0,
  step: (acc, value) => acc + value,
});
db.prepare('SELECT sumint(y) as total FROM t3').get(); // { total: 21 }
```

```mjs
import { DatabaseSync } from 'node:sqlite';
const db = new DatabaseSync(':memory:');

db.exec(`
  CREATE TABLE t3(x, y);
  INSERT INTO t3 VALUES ('a', 4), ('b', 5), ('c', 3), ('d', 8), ('e', 1);
`);
db.aggregate('sumint', {
  start: 0,
  step: (acc, value) => acc + value,
});
db.prepare('SELECT sumint(y) as total FROM t3').get(); // { total: 21 }
```
database.close()#
Closes the database connection. An exception is thrown if the database is not open. This method is a wrapper around `sqlite3_close_v2()`.
database.loadExtension(path)#
path<string> The path to the shared library to load.
Loads a shared library into the database connection. This method is a wrapper around `sqlite3_load_extension()`. It is required to enable the `allowExtension` option when constructing the `DatabaseSync` instance.
database.enableLoadExtension(allow)#
allow<boolean> Whether to allow loading extensions.
Enables or disables the `loadExtension` SQL function and the `loadExtension()` method. When `allowExtension` is `false` at construction time, loading extensions cannot be enabled for security reasons.
database.enableDefensive(active)#
active<boolean> Whether to set the defensive flag.
Enables or disables the defensive flag. When the defensive flag is active, language features that allow ordinary SQL to deliberately corrupt the database file are disabled. See `SQLITE_DBCONFIG_DEFENSIVE` in the SQLite documentation for details.
database.location([dbName])#
- `dbName` <string> Name of the database. This can be `'main'` (the default primary database) or any other database that has been added with `ATTACH DATABASE`. **Default:** `'main'`.
- Returns: <string> | <null> The location of the database file. When using an in-memory database, this method returns `null`.
This method is a wrapper around `sqlite3_db_filename()`.
database.exec(sql)#
sql<string> A SQL string to execute.
This method allows one or more SQL statements to be executed without returning any results. This method is useful when executing SQL statements read from a file. This method is a wrapper around `sqlite3_exec()`.
database.function(name[, options], function)#
- `name` <string> The name of the SQLite function to create.
- `options` <Object> Optional configuration settings for the function. The following properties are supported:
  - `deterministic` <boolean> If `true`, the `SQLITE_DETERMINISTIC` flag is set on the created function. **Default:** `false`.
  - `directOnly` <boolean> If `true`, the `SQLITE_DIRECTONLY` flag is set on the created function. **Default:** `false`.
  - `useBigIntArguments` <boolean> If `true`, integer arguments to `function` are converted to `BigInt`s. If `false`, integer arguments are passed as JavaScript numbers. **Default:** `false`.
  - `varargs` <boolean> If `true`, `function` may be invoked with any number of arguments (between zero and `SQLITE_MAX_FUNCTION_ARG`). If `false`, `function` must be invoked with exactly `function.length` arguments. **Default:** `false`.
- `function` <Function> The JavaScript function to call when the SQLite function is invoked. The return value of this function should be a valid SQLite data type: see Type conversion between JavaScript and SQLite. The result defaults to `NULL` if the return value is `undefined`.
This method is used to create SQLite user-defined functions. This method is a wrapper around `sqlite3_create_function_v2()`.
database.setAuthorizer(callback)#
- `callback` <Function> | <null> The authorizer function to set, or `null` to clear the current authorizer.
Sets an authorizer callback that SQLite will invoke whenever it attempts to access data or modify the database schema through prepared statements. This can be used to implement security policies, audit access, or restrict certain operations. This method is a wrapper around `sqlite3_set_authorizer()`.
When invoked, the callback receives five arguments:
- `actionCode` <number> The type of operation being performed (e.g., `SQLITE_INSERT`, `SQLITE_UPDATE`, `SQLITE_SELECT`).
- `arg1` <string> | <null> The first argument (context-dependent, often a table name).
- `arg2` <string> | <null> The second argument (context-dependent, often a column name).
- `dbName` <string> | <null> The name of the database.
- `triggerOrView` <string> | <null> The name of the trigger or view causing the access.
The callback must return one of the following constants:
- `SQLITE_OK` - Allow the operation.
- `SQLITE_DENY` - Deny the operation (causes an error).
- `SQLITE_IGNORE` - Ignore the operation (silently skip).
```cjs
const { DatabaseSync, constants } = require('node:sqlite');
const db = new DatabaseSync(':memory:');

// Set up an authorizer that denies all table creation
db.setAuthorizer((actionCode) => {
  if (actionCode === constants.SQLITE_CREATE_TABLE) {
    return constants.SQLITE_DENY;
  }
  return constants.SQLITE_OK;
});

// This will work
db.prepare('SELECT 1').get();

// This will throw an error due to authorization denial
try {
  db.exec('CREATE TABLE blocked (id INTEGER)');
} catch (err) {
  console.log('Operation blocked:', err.message);
}
```

```mjs
import { DatabaseSync, constants } from 'node:sqlite';
const db = new DatabaseSync(':memory:');

// Set up an authorizer that denies all table creation
db.setAuthorizer((actionCode) => {
  if (actionCode === constants.SQLITE_CREATE_TABLE) {
    return constants.SQLITE_DENY;
  }
  return constants.SQLITE_OK;
});

// This will work
db.prepare('SELECT 1').get();

// This will throw an error due to authorization denial
try {
  db.exec('CREATE TABLE blocked (id INTEGER)');
} catch (err) {
  console.log('Operation blocked:', err.message);
}
```
database.isOpen#
- Type:<boolean> Whether the database is currently open or not.
database.isTransaction#
- Type: <boolean> Whether the database is currently within a transaction. This method is a wrapper around sqlite3_get_autocommit().
database.open()#
Opens the database specified in the path argument of the DatabaseSync constructor. This method should only be used when the database is not opened via the constructor. An exception is thrown if the database is already open.
database.prepare(sql[, options])#
- sql <string> A SQL string to compile to a prepared statement.
- options <Object> Optional configuration for the prepared statement.
  - readBigInts <boolean> If true, integer fields are read as BigInts. Default: inherited from database options or false.
  - returnArrays <boolean> If true, results are returned as arrays. Default: inherited from database options or false.
  - allowBareNamedParameters <boolean> If true, allows binding named parameters without the prefix character. Default: inherited from database options or true.
  - allowUnknownNamedParameters <boolean> If true, unknown named parameters are ignored. Default: inherited from database options or false.
- Returns: <StatementSync> The prepared statement.
Compiles a SQL statement into a prepared statement. This method is a wrapper around sqlite3_prepare_v2().
database.createTagStore([maxSize])#
- maxSize <integer> The maximum number of prepared statements to cache. Default: 1000.
- Returns: <SQLTagStore> A new SQL tag store for caching prepared statements.
Creates a new SQLTagStore, which is a Least Recently Used (LRU) cache for storing prepared statements. This allows for the efficient reuse of prepared statements by tagging them with a unique identifier.

When a tagged SQL literal is executed, the SQLTagStore checks if a prepared statement for the corresponding SQL query string already exists in the cache. If it does, the cached statement is used. If not, a new prepared statement is created, executed, and then stored in the cache for future use. This mechanism helps to avoid the overhead of repeatedly parsing and preparing the same SQL statements.

Tagged statements bind the placeholder values from the template literal as parameters to the underlying prepared statement. For example:

```js
sqlTagStore.get`SELECT ${value}`;
```

is equivalent to:

```js
db.prepare('SELECT ?').get(value);
```

However, in the first example, the tag store will cache the underlying prepared statement for future use.
Note: The ${value} syntax in tagged statements binds a parameter to the prepared statement. This differs from its behavior in untagged template literals, where it performs string interpolation.

```js
// This is a safe example of binding a parameter to a tagged statement.
sqlTagStore.run`INSERT INTO t1 (id) VALUES (${id})`;

// This is an *unsafe* example of an untagged template string.
// `id` is interpolated into the query text as a string.
// This can lead to SQL injection and data corruption.
db.run(`INSERT INTO t1 (id) VALUES (${id})`);
```
The tag store will match a statement from the cache if the query strings (including the positions of any bound placeholders) are identical.

```js
// The following statements will match in the cache:
sqlTagStore.get`SELECT * FROM t1 WHERE id = ${id} AND active = 1`;
sqlTagStore.get`SELECT * FROM t1 WHERE id = ${12345} AND active = 1`;

// The following statements will not match, as the query strings
// and bound placeholders differ:
sqlTagStore.get`SELECT * FROM t1 WHERE id = ${id} AND active = 1`;
sqlTagStore.get`SELECT * FROM t1 WHERE id = 12345 AND active = 1`;

// The following statements will not match, as matches are case-sensitive:
sqlTagStore.get`SELECT * FROM t1 WHERE id = ${id} AND active = 1`;
sqlTagStore.get`select * from t1 where id = ${id} and active = 1`;
```

The only way of binding parameters in tagged statements is with the ${value} syntax. Do not add parameter binding placeholders (? etc.) to the SQL query string itself.
```js
import { DatabaseSync } from 'node:sqlite';

const db = new DatabaseSync(':memory:');
const sql = db.createTagStore();

db.exec('CREATE TABLE users (id INT, name TEXT)');

// Using the 'run' method to insert data.
// The tagged literal is used to identify the prepared statement.
sql.run`INSERT INTO users VALUES (1, 'Alice')`;
sql.run`INSERT INTO users VALUES (2, 'Bob')`;

// Using the 'get' method to retrieve a single row.
const name = 'Alice';
const user = sql.get`SELECT * FROM users WHERE name = ${name}`;
console.log(user);
// { id: 1, name: 'Alice' }

// Using the 'all' method to retrieve all rows.
const allUsers = sql.all`SELECT * FROM users ORDER BY id`;
console.log(allUsers);
// [
//   { id: 1, name: 'Alice' },
//   { id: 2, name: 'Bob' }
// ]
```
database.createSession([options])#
- options <Object> The configuration options for the session.
  - table <string> A specific table to track changes for. By default, changes to all tables are tracked.
  - db <string> Name of the database to track. This is useful when multiple databases have been added using ATTACH DATABASE. Default: 'main'.
- Returns: <Session> A session handle.

Creates and attaches a session to the database. This method is a wrapper around sqlite3session_create() and sqlite3session_attach().
database.applyChangeset(changeset[, options])#
- changeset <Uint8Array> A binary changeset or patchset.
- options <Object> The configuration options for how the changes will be applied.
  - filter <Function> Skip changes that, when the targeted table name is supplied to this function, return a truthy value. By default, all changes are attempted.
  - onConflict <Function> A function that determines how to handle conflicts. The function receives one argument, which can be one of the following values:
    - SQLITE_CHANGESET_DATA: A DELETE or UPDATE change does not contain the expected "before" values.
    - SQLITE_CHANGESET_NOTFOUND: A row matching the primary key of the DELETE or UPDATE change does not exist.
    - SQLITE_CHANGESET_CONFLICT: An INSERT change results in a duplicate primary key.
    - SQLITE_CHANGESET_FOREIGN_KEY: Applying a change would result in a foreign key violation.
    - SQLITE_CHANGESET_CONSTRAINT: Applying a change results in a UNIQUE, CHECK, or NOT NULL constraint violation.

    The function should return one of the following values:

    - SQLITE_CHANGESET_OMIT: Omit conflicting changes.
    - SQLITE_CHANGESET_REPLACE: Replace existing values with conflicting changes (only valid with SQLITE_CHANGESET_DATA or SQLITE_CHANGESET_CONFLICT conflicts).
    - SQLITE_CHANGESET_ABORT: Abort on conflict and roll back the database.

    When an error is thrown in the conflict handler or when any other value is returned from the handler, applying the changeset is aborted and the database is rolled back.

    Default: A function that returns SQLITE_CHANGESET_ABORT.
- Returns: <boolean> Whether the changeset was applied successfully without being aborted.

An exception is thrown if the database is not open. This method is a wrapper around sqlite3changeset_apply().
```js
import { DatabaseSync } from 'node:sqlite';

const sourceDb = new DatabaseSync(':memory:');
const targetDb = new DatabaseSync(':memory:');

sourceDb.exec('CREATE TABLE data(key INTEGER PRIMARY KEY, value TEXT)');
targetDb.exec('CREATE TABLE data(key INTEGER PRIMARY KEY, value TEXT)');

const session = sourceDb.createSession();

const insert = sourceDb.prepare('INSERT INTO data (key, value) VALUES (?, ?)');
insert.run(1, 'hello');
insert.run(2, 'world');

const changeset = session.changeset();
targetDb.applyChangeset(changeset);
// Now that the changeset has been applied, targetDb contains the same data as sourceDb.
```
database[Symbol.dispose]()#
History
| Version | Changes |
|---|---|
| v24.2.0 | No longer experimental. |
| v23.11.0, v22.15.0 | Added in: v23.11.0, v22.15.0 |
Closes the database connection. If the database connection is already closed then this is a no-op.
Class:Session#
session.changeset()#
- Returns: <Uint8Array> Binary changeset that can be applied to other databases.

Retrieves a changeset containing all changes since the changeset was created. Can be called multiple times. An exception is thrown if the database or the session is not open. This method is a wrapper around sqlite3session_changeset().
session.patchset()#
- Returns: <Uint8Array> Binary patchset that can be applied to other databases.

Similar to the method above, but generates a more compact patchset. See Changesets and Patchsets in the documentation of SQLite. An exception is thrown if the database or the session is not open. This method is a wrapper around sqlite3session_patchset().
session.close()#
Closes the session. An exception is thrown if the database or the session is not open. This method is a wrapper around sqlite3session_delete().
session[Symbol.dispose]()#
Closes the session. If the session is already closed, does nothing.
Class:StatementSync#
This class represents a single prepared statement. This class cannot be instantiated via its constructor. Instead, instances are created via the database.prepare() method. All APIs exposed by this class execute synchronously.

A prepared statement is an efficient binary representation of the SQL used to create it. Prepared statements are parameterizable, and can be invoked multiple times with different bound values. Parameters also offer protection against SQL injection attacks. For these reasons, prepared statements are preferred over hand-crafted SQL strings when handling user input.
statement.all([namedParameters][, ...anonymousParameters])#
History
| Version | Changes |
|---|---|
| v23.7.0, v22.14.0 | Add support for |
| v22.5.0 | Added in: v22.5.0 |
- namedParameters <Object> An optional object used to bind named parameters. The keys of this object are used to configure the mapping.
- ...anonymousParameters <null> | <number> | <bigint> | <string> | <Buffer> | <TypedArray> | <DataView> Zero or more values to bind to anonymous parameters.
- Returns: <Array> An array of objects. Each object corresponds to a row returned by executing the prepared statement. The keys and values of each object correspond to the column names and values of the row.

This method executes a prepared statement and returns all results as an array of objects. If the prepared statement does not return any results, this method returns an empty array. The prepared statement parameters are bound using the values in namedParameters and anonymousParameters.
statement.columns()#
- Returns: <Array> An array of objects. Each object corresponds to a column in the prepared statement, and contains the following properties:
  - column <string> | <null> The unaliased name of the column in the origin table, or null if the column is the result of an expression or subquery. This property is the result of sqlite3_column_origin_name().
  - database <string> | <null> The unaliased name of the origin database, or null if the column is the result of an expression or subquery. This property is the result of sqlite3_column_database_name().
  - name <string> The name assigned to the column in the result set of a SELECT statement. This property is the result of sqlite3_column_name().
  - table <string> | <null> The unaliased name of the origin table, or null if the column is the result of an expression or subquery. This property is the result of sqlite3_column_table_name().
  - type <string> | <null> The declared data type of the column, or null if the column is the result of an expression or subquery. This property is the result of sqlite3_column_decltype().

This method is used to retrieve information about the columns returned by the prepared statement.
statement.expandedSQL#
- Type: <string> The source SQL expanded to include parameter values.

The source SQL text of the prepared statement with parameter placeholders replaced by the values that were used during the most recent execution of this prepared statement. This property is a wrapper around sqlite3_expanded_sql().
statement.get([namedParameters][, ...anonymousParameters])#
History
| Version | Changes |
|---|---|
| v23.7.0, v22.14.0 | Add support for |
| v22.5.0 | Added in: v22.5.0 |
- namedParameters <Object> An optional object used to bind named parameters. The keys of this object are used to configure the mapping.
- ...anonymousParameters <null> | <number> | <bigint> | <string> | <Buffer> | <TypedArray> | <DataView> Zero or more values to bind to anonymous parameters.
- Returns: <Object> | <undefined> An object corresponding to the first row returned by executing the prepared statement. The keys and values of the object correspond to the column names and values of the row. If no rows were returned from the database then this method returns undefined.

This method executes a prepared statement and returns the first result as an object. If the prepared statement does not return any results, this method returns undefined. The prepared statement parameters are bound using the values in namedParameters and anonymousParameters.
statement.iterate([namedParameters][, ...anonymousParameters])#
History
| Version | Changes |
|---|---|
| v23.7.0, v22.14.0 | Add support for |
| v23.4.0, v22.13.0 | Added in: v23.4.0, v22.13.0 |
- namedParameters <Object> An optional object used to bind named parameters. The keys of this object are used to configure the mapping.
- ...anonymousParameters <null> | <number> | <bigint> | <string> | <Buffer> | <TypedArray> | <DataView> Zero or more values to bind to anonymous parameters.
- Returns: <Iterator> An iterable iterator of objects. Each object corresponds to a row returned by executing the prepared statement. The keys and values of each object correspond to the column names and values of the row.

This method executes a prepared statement and returns an iterator of objects. If the prepared statement does not return any results, this method returns an empty iterator. The prepared statement parameters are bound using the values in namedParameters and anonymousParameters.
statement.run([namedParameters][, ...anonymousParameters])#
History
| Version | Changes |
|---|---|
| v23.7.0, v22.14.0 | Add support for |
| v22.5.0 | Added in: v22.5.0 |
- namedParameters <Object> An optional object used to bind named parameters. The keys of this object are used to configure the mapping.
- ...anonymousParameters <null> | <number> | <bigint> | <string> | <Buffer> | <TypedArray> | <DataView> Zero or more values to bind to anonymous parameters.
- Returns: <Object>
  - changes <number> | <bigint> The number of rows modified, inserted, or deleted by the most recently completed INSERT, UPDATE, or DELETE statement. This field is either a number or a BigInt depending on the prepared statement's configuration. This property is the result of sqlite3_changes64().
  - lastInsertRowid <number> | <bigint> The most recently inserted rowid. This field is either a number or a BigInt depending on the prepared statement's configuration. This property is the result of sqlite3_last_insert_rowid().

This method executes a prepared statement and returns an object summarizing the resulting changes. The prepared statement parameters are bound using the values in namedParameters and anonymousParameters.
statement.setAllowBareNamedParameters(enabled)#
- enabled <boolean> Enables or disables support for binding named parameters without the prefix character.
The names of SQLite parameters begin with a prefix character. By default, node:sqlite requires that this prefix character is present when binding parameters. However, with the exception of the dollar sign character, these prefix characters also require extra quoting when used in object keys.

To improve ergonomics, this method can be used to also allow bare named parameters, which do not require the prefix character in JavaScript code. There are several caveats to be aware of when enabling bare named parameters:
- The prefix character is still required in SQL.
- The prefix character is still allowed in JavaScript. In fact, prefixed names will have slightly better binding performance.
- Using ambiguous named parameters, such as $k and @k, in the same prepared statement will result in an exception as it cannot be determined how to bind a bare name.
statement.setAllowUnknownNamedParameters(enabled)#
- enabled <boolean> Enables or disables support for unknown named parameters.

By default, if an unknown name is encountered while binding parameters, an exception is thrown. This method allows unknown named parameters to be ignored.
statement.setReturnArrays(enabled)#
- enabled <boolean> Enables or disables the return of query results as arrays.

When enabled, query results returned by the all(), get(), and iterate() methods will be returned as arrays instead of objects.
statement.setReadBigInts(enabled)#
- enabled <boolean> Enables or disables the use of BigInts when reading INTEGER fields from the database.

When reading from the database, SQLite INTEGERs are mapped to JavaScript numbers by default. However, SQLite INTEGERs can store values larger than JavaScript numbers are capable of representing. In such cases, this method can be used to read INTEGER data using JavaScript BigInts. This method has no impact on database write operations where numbers and BigInts are both supported at all times.
statement.sourceSQL#
- Type: <string> The source SQL used to create this prepared statement.

The source SQL text of the prepared statement. This property is a wrapper around sqlite3_sql().
Class:SQLTagStore#
This class represents a single LRU (Least Recently Used) cache for storing prepared statements.

Instances of this class are created via the database.createTagStore() method, not by using a constructor. The store caches prepared statements based on the provided SQL query string. When the same query is seen again, the store retrieves the cached statement and safely applies the new values through parameter binding, thereby preventing attacks like SQL injection.

The cache has a maxSize that defaults to 1000 statements, but a custom size can be provided (e.g., database.createTagStore(100)). All APIs exposed by this class execute synchronously.
sqlTagStore.all(stringElements[, ...boundParameters])#
- stringElements <string[]> Template literal elements containing the SQL query.
- ...boundParameters <null> | <number> | <bigint> | <string> | <Buffer> | <TypedArray> | <DataView> Parameter values to be bound to placeholders in the template string.
- Returns: <Array> An array of objects representing the rows returned by the query.

Executes the given SQL query and returns all resulting rows as an array of objects.

This function is intended to be used as a template literal tag, not to be called directly.
sqlTagStore.get(stringElements[, ...boundParameters])#
- stringElements <string[]> Template literal elements containing the SQL query.
- ...boundParameters <null> | <number> | <bigint> | <string> | <Buffer> | <TypedArray> | <DataView> Parameter values to be bound to placeholders in the template string.
- Returns: <Object> | <undefined> An object representing the first row returned by the query, or undefined if no rows are returned.

Executes the given SQL query and returns the first resulting row as an object.

This function is intended to be used as a template literal tag, not to be called directly.
sqlTagStore.iterate(stringElements[, ...boundParameters])#
- stringElements <string[]> Template literal elements containing the SQL query.
- ...boundParameters <null> | <number> | <bigint> | <string> | <Buffer> | <TypedArray> | <DataView> Parameter values to be bound to placeholders in the template string.
- Returns: <Iterator> An iterator that yields objects representing the rows returned by the query.

Executes the given SQL query and returns an iterator over the resulting rows.

This function is intended to be used as a template literal tag, not to be called directly.
sqlTagStore.run(stringElements[, ...boundParameters])#
- stringElements <string[]> Template literal elements containing the SQL query.
- ...boundParameters <null> | <number> | <bigint> | <string> | <Buffer> | <TypedArray> | <DataView> Parameter values to be bound to placeholders in the template string.
- Returns: <Object> An object containing information about the execution, including changes and lastInsertRowid.

Executes the given SQL query, which is expected to not return any rows (e.g., INSERT, UPDATE, DELETE).

This function is intended to be used as a template literal tag, not to be called directly.
sqlTagStore.size#
History
| Version | Changes |
|---|---|
| v25.4.0 | Changed from a method to a getter. |
| v24.9.0 | Added in: v24.9.0 |
- Type: <integer>
A read-only property that returns the number of prepared statements currently in the cache.
sqlTagStore.capacity#
- Type: <integer>
A read-only property that returns the maximum number of prepared statements the cache can hold.
sqlTagStore.db#
- Type: <DatabaseSync>

A read-only property that returns the DatabaseSync object associated with this SQLTagStore.
Type conversion between JavaScript and SQLite#
When Node.js writes to or reads from SQLite it is necessary to convert between JavaScript data types and SQLite's data types. Because JavaScript supports more data types than SQLite, only a subset of JavaScript types are supported. Attempting to write an unsupported data type to SQLite will result in an exception.
| SQLite | JavaScript |
|---|---|
| NULL | <null> |
| INTEGER | <number> or <bigint> |
| REAL | <number> |
| TEXT | <string> |
| BLOB | <TypedArray> or <DataView> |
sqlite.backup(sourceDb, path[, options])#
History
| Version | Changes |
|---|---|
| v23.10.0 | The |
| v23.8.0, v22.16.0 | Added in: v23.8.0, v22.16.0 |
- sourceDb <DatabaseSync> The database to backup. The source database must be open.
- path <string> | <Buffer> | <URL> The path where the backup will be created. If the file already exists, the contents will be overwritten.
- options <Object> Optional configuration for the backup. The following properties are supported:
  - source <string> Name of the source database. This can be 'main' (the default primary database) or any other database that has been added with ATTACH DATABASE. Default: 'main'.
  - target <string> Name of the target database. This can be 'main' (the default primary database) or any other database that has been added with ATTACH DATABASE. Default: 'main'.
  - rate <number> Number of pages to be transmitted in each batch of the backup. Default: 100.
  - progress <Function> An optional callback function that will be called after each backup step. The argument passed to this callback is an <Object> with remainingPages and totalPages properties, describing the current progress of the backup operation.
- Returns: <Promise> A promise that fulfills with the total number of backed-up pages upon completion, or rejects if an error occurs.

This method makes a database backup. This method abstracts the sqlite3_backup_init(), sqlite3_backup_step(), and sqlite3_backup_finish() functions.

The backed-up database can be used normally during the backup process. Mutations coming from the same connection (the same <DatabaseSync> object) will be reflected in the backup right away. However, mutations from other connections will cause the backup process to restart.
```js
const { backup, DatabaseSync } = require('node:sqlite');

(async () => {
  const sourceDb = new DatabaseSync('source.db');

  const totalPagesTransferred = await backup(sourceDb, 'backup.db', {
    rate: 1, // Copy one page at a time.
    progress: ({ totalPages, remainingPages }) => {
      console.log('Backup in progress', { totalPages, remainingPages });
    },
  });

  console.log('Backup completed', totalPagesTransferred);
})();
```
sqlite.constants#
- Type: <Object>
An object containing commonly used constants for SQLite operations.
SQLite constants#
The following constants are exported by the sqlite.constants object.
Conflict resolution constants#
One of the following constants is available as an argument to the onConflict conflict resolution handler passed to database.applyChangeset(). See also Constants Passed To The Conflict Handler in the SQLite documentation.
| Constant | Description |
|---|---|
| SQLITE_CHANGESET_DATA | The conflict handler is invoked with this constant when processing a DELETE or UPDATE change if a row with the required PRIMARY KEY fields is present in the database, but one or more other (non primary-key) fields modified by the update do not contain the expected "before" values. |
| SQLITE_CHANGESET_NOTFOUND | The conflict handler is invoked with this constant when processing a DELETE or UPDATE change if a row with the required PRIMARY KEY fields is not present in the database. |
| SQLITE_CHANGESET_CONFLICT | This constant is passed to the conflict handler while processing an INSERT change if the operation would result in duplicate primary key values. |
| SQLITE_CHANGESET_CONSTRAINT | If any other constraint violation occurs while applying a change (i.e. a UNIQUE, CHECK, or NOT NULL constraint), the conflict handler is invoked with this constant. |
| SQLITE_CHANGESET_FOREIGN_KEY | If foreign key handling is enabled, and applying a changeset leaves the database in a state containing foreign key violations, the conflict handler is invoked with this constant exactly once before the changeset is committed. If the conflict handler returns SQLITE_CHANGESET_OMIT, the changes, including those that caused the foreign key constraint violation, are committed. Or, if it returns SQLITE_CHANGESET_ABORT, the changeset is rolled back. |
One of the following constants must be returned from the onConflict conflict resolution handler passed to database.applyChangeset(). See also Constants Returned From The Conflict Handler in the SQLite documentation.
| Constant | Description |
|---|---|
| SQLITE_CHANGESET_OMIT | Conflicting changes are omitted. |
| SQLITE_CHANGESET_REPLACE | Conflicting changes replace existing values. Note that this value can only be returned when the type of conflict is either SQLITE_CHANGESET_DATA or SQLITE_CHANGESET_CONFLICT. |
| SQLITE_CHANGESET_ABORT | Abort when a change encounters a conflict and roll back the database. |
Authorization constants#
The following constants are used with the database.setAuthorizer() method.
Authorization result codes#
One of the following constants must be returned from the authorizer callback function passed to database.setAuthorizer().
| Constant | Description |
|---|---|
| SQLITE_OK | Allow the operation to proceed normally. |
| SQLITE_DENY | Deny the operation and cause an error to be returned. |
| SQLITE_IGNORE | Ignore the operation and continue as if it had never been requested. |
Authorization action codes#
The following constants are passed as the first argument to the authorizercallback function to indicate what type of operation is being authorized.
| Constant | Description |
|---|---|
| SQLITE_CREATE_INDEX | Create an index |
| SQLITE_CREATE_TABLE | Create a table |
| SQLITE_CREATE_TEMP_INDEX | Create a temporary index |
| SQLITE_CREATE_TEMP_TABLE | Create a temporary table |
| SQLITE_CREATE_TEMP_TRIGGER | Create a temporary trigger |
| SQLITE_CREATE_TEMP_VIEW | Create a temporary view |
| SQLITE_CREATE_TRIGGER | Create a trigger |
| SQLITE_CREATE_VIEW | Create a view |
| SQLITE_DELETE | Delete from a table |
| SQLITE_DROP_INDEX | Drop an index |
| SQLITE_DROP_TABLE | Drop a table |
| SQLITE_DROP_TEMP_INDEX | Drop a temporary index |
| SQLITE_DROP_TEMP_TABLE | Drop a temporary table |
| SQLITE_DROP_TEMP_TRIGGER | Drop a temporary trigger |
| SQLITE_DROP_TEMP_VIEW | Drop a temporary view |
| SQLITE_DROP_TRIGGER | Drop a trigger |
| SQLITE_DROP_VIEW | Drop a view |
| SQLITE_INSERT | Insert into a table |
| SQLITE_PRAGMA | Execute a PRAGMA statement |
| SQLITE_READ | Read from a table |
| SQLITE_SELECT | Execute a SELECT statement |
| SQLITE_TRANSACTION | Begin, commit, or rollback a transaction |
| SQLITE_UPDATE | Update a table |
| SQLITE_ATTACH | Attach a database |
| SQLITE_DETACH | Detach a database |
| SQLITE_ALTER_TABLE | Alter a table |
| SQLITE_REINDEX | Reindex |
| SQLITE_ANALYZE | Analyze the database |
| SQLITE_CREATE_VTABLE | Create a virtual table |
| SQLITE_DROP_VTABLE | Drop a virtual table |
| SQLITE_FUNCTION | Use a function |
| SQLITE_SAVEPOINT | Create, release, or rollback a savepoint |
| SQLITE_COPY | Copy data (legacy) |
| SQLITE_RECURSIVE | Recursive query |
Stream#
Source Code: lib/stream.js

A stream is an abstract interface for working with streaming data in Node.js. The node:stream module provides an API for implementing the stream interface.

There are many stream objects provided by Node.js. For instance, a request to an HTTP server and process.stdout are both stream instances.

Streams can be readable, writable, or both. All streams are instances of EventEmitter.
To access thenode:stream module:
```js
const stream = require('node:stream');
```

The node:stream module is useful for creating new types of stream instances. It is usually not necessary to use the node:stream module to consume streams.
Organization of this document#
This document contains two primary sections and a third section for notes. Thefirst section explains how to use existing streams within an application. Thesecond section explains how to create new types of streams.
Types of streams#
There are four fundamental stream types within Node.js:
- Writable: streams to which data can be written (for example, fs.createWriteStream()).
- Readable: streams from which data can be read (for example, fs.createReadStream()).
- Duplex: streams that are both Readable and Writable (for example, net.Socket).
- Transform: Duplex streams that can modify or transform the data as it is written and read (for example, zlib.createDeflate()).

Additionally, this module includes the utility functions stream.duplexPair(), stream.pipeline(), stream.finished(), stream.Readable.from(), and stream.addAbortSignal().
Streams Promises API#
The stream/promises API provides an alternative set of asynchronous utility functions for streams that return Promise objects rather than using callbacks. The API is accessible via require('node:stream/promises') or require('node:stream').promises.
stream.pipeline(streams[, options])#
stream.pipeline(source[, ...transforms], destination[, options])#
History
| Version | Changes |
|---|---|
| v18.0.0, v17.2.0, v16.14.0 | Add the |
| v15.0.0 | Added in: v15.0.0 |
- streams <Stream[]> | <Iterable[]> | <AsyncIterable[]> | <Function[]>
- source <Stream> | <Iterable> | <AsyncIterable> | <Function>
  - Returns: <Promise> | <AsyncIterable>
- ...transforms <Stream> | <Function>
  - source <AsyncIterable>
  - Returns: <Promise> | <AsyncIterable>
- destination <Stream> | <Function>
  - source <AsyncIterable>
  - Returns: <Promise> | <AsyncIterable>
- options <Object> Pipeline options
  - signal <AbortSignal>
  - end <boolean> End the destination stream when the source stream ends. Transform streams are always ended, even if this value is false. Default: true.
- Returns: <Promise> Fulfills when the pipeline is complete.
```js
const { pipeline } = require('node:stream/promises');
const fs = require('node:fs');
const zlib = require('node:zlib');

async function run() {
  await pipeline(
    fs.createReadStream('archive.tar'),
    zlib.createGzip(),
    fs.createWriteStream('archive.tar.gz'),
  );
  console.log('Pipeline succeeded.');
}

run().catch(console.error);
```
To use an AbortSignal, pass it inside an options object, as the last argument. When the signal is aborted, destroy will be called on the underlying pipeline, with an AbortError.
```js
const { pipeline } = require('node:stream/promises');
const fs = require('node:fs');
const zlib = require('node:zlib');

async function run() {
  const ac = new AbortController();
  const signal = ac.signal;

  setImmediate(() => ac.abort());
  await pipeline(
    fs.createReadStream('archive.tar'),
    zlib.createGzip(),
    fs.createWriteStream('archive.tar.gz'),
    { signal },
  );
}

run().catch(console.error); // AbortError
```

```js
import { pipeline } from 'node:stream/promises';
import { createReadStream, createWriteStream } from 'node:fs';
import { createGzip } from 'node:zlib';

const ac = new AbortController();
const { signal } = ac;
setImmediate(() => ac.abort());
try {
  await pipeline(
    createReadStream('archive.tar'),
    createGzip(),
    createWriteStream('archive.tar.gz'),
    { signal },
  );
} catch (err) {
  console.error(err); // AbortError
}
```
The pipeline API also supports async generators:
```js
const { pipeline } = require('node:stream/promises');
const fs = require('node:fs');

async function run() {
  await pipeline(
    fs.createReadStream('lowercase.txt'),
    async function* (source, { signal }) {
      source.setEncoding('utf8'); // Work with strings rather than `Buffer`s.
      for await (const chunk of source) {
        yield await processChunk(chunk, { signal });
      }
    },
    fs.createWriteStream('uppercase.txt'),
  );
  console.log('Pipeline succeeded.');
}

run().catch(console.error);
```

```js
import { pipeline } from 'node:stream/promises';
import { createReadStream, createWriteStream } from 'node:fs';

await pipeline(
  createReadStream('lowercase.txt'),
  async function* (source, { signal }) {
    source.setEncoding('utf8'); // Work with strings rather than `Buffer`s.
    for await (const chunk of source) {
      yield await processChunk(chunk, { signal });
    }
  },
  createWriteStream('uppercase.txt'),
);
console.log('Pipeline succeeded.');
```
Remember to handle the signal argument passed into the async generator, especially when the async generator is the source of the pipeline (i.e. the first argument), or the pipeline will never complete.
```js
const { pipeline } = require('node:stream/promises');
const fs = require('node:fs');

async function run() {
  await pipeline(
    async function* ({ signal }) {
      await someLongRunningfn({ signal });
      yield 'asd';
    },
    fs.createWriteStream('uppercase.txt'),
  );
  console.log('Pipeline succeeded.');
}

run().catch(console.error);
```

```js
import { pipeline } from 'node:stream/promises';
import fs from 'node:fs';

await pipeline(
  async function* ({ signal }) {
    await someLongRunningfn({ signal });
    yield 'asd';
  },
  fs.createWriteStream('uppercase.txt'),
);
console.log('Pipeline succeeded.');
```
The pipeline API also provides a callback version, stream.pipeline().
stream.finished(stream[, options])#
History
| Version | Changes |
|---|---|
| v19.5.0, v18.14.0 | Added support for |
| v19.1.0, v18.13.0 | The |
| v15.0.0 | Added in: v15.0.0 |
- stream <Stream> | <ReadableStream> | <WritableStream> A readable and/or writable stream/webstream.
- options <Object>
  - error <boolean> | <undefined>
  - readable <boolean> | <undefined>
  - writable <boolean> | <undefined>
  - signal <AbortSignal> | <undefined>
  - cleanup <boolean> | <undefined> If true, removes the listeners registered by this function before the promise is fulfilled. Default: false.
- Returns: <Promise> Fulfills when the stream is no longer readable or writable.
```js
const { finished } = require('node:stream/promises');
const fs = require('node:fs');

const rs = fs.createReadStream('archive.tar');

async function run() {
  await finished(rs);
  console.log('Stream is done reading.');
}

run().catch(console.error);
rs.resume(); // Drain the stream.
```

```js
import { finished } from 'node:stream/promises';
import { createReadStream } from 'node:fs';

const rs = createReadStream('archive.tar');

async function run() {
  await finished(rs);
  console.log('Stream is done reading.');
}

run().catch(console.error);
rs.resume(); // Drain the stream.
```
The finished API also provides a callback version.
stream.finished() leaves dangling event listeners (in particular 'error', 'end', 'finish' and 'close') after the returned promise is resolved or rejected. The reason for this is so that unexpected 'error' events (due to incorrect stream implementations) do not cause unexpected crashes. If this is unwanted behavior then options.cleanup should be set to true:
```js
await finished(rs, { cleanup: true });
```

Object mode#
All streams created by Node.js APIs operate exclusively on strings, <Buffer>, <TypedArray> and <DataView> objects:

- Strings and Buffers are the most common types used with streams.
- TypedArray and DataView let you handle binary data with types like Int32Array or Uint8Array. When you write a TypedArray or DataView to a stream, Node.js processes the raw bytes.
It is possible, however, for stream implementations to work with other types of JavaScript values (with the exception of null, which serves a special purpose within streams). Such streams are considered to operate in "object mode".

Stream instances are switched into object mode using the objectMode option when the stream is created. Attempting to switch an existing stream into object mode is not safe.
Buffering#
Both Writable and Readable streams will store data in an internal buffer.
The amount of data potentially buffered depends on the highWaterMark option passed into the stream's constructor. For normal streams, the highWaterMark option specifies a total number of bytes. For streams operating in object mode, the highWaterMark specifies a total number of objects. For streams operating on (but not decoding) strings, the highWaterMark specifies a total number of UTF-16 code units.
Data is buffered in Readable streams when the implementation calls stream.push(chunk). If the consumer of the stream does not call stream.read(), the data will sit in the internal queue until it is consumed.
Once the total size of the internal read buffer reaches the threshold specified by highWaterMark, the stream will temporarily stop reading data from the underlying resource until the data currently buffered can be consumed (that is, the stream will stop calling the internal readable._read() method that is used to fill the read buffer).

Data is buffered in Writable streams when the writable.write(chunk) method is called repeatedly. While the total size of the internal write buffer is below the threshold set by highWaterMark, calls to writable.write() will return true. Once the size of the internal buffer reaches or exceeds the highWaterMark, false will be returned.
A key goal of the stream API, particularly the stream.pipe() method, is to limit the buffering of data to acceptable levels such that sources and destinations of differing speeds will not overwhelm the available memory.

The highWaterMark option is a threshold, not a limit: it dictates the amount of data that a stream buffers before it stops asking for more data. It does not enforce a strict memory limitation in general. Specific stream implementations may choose to enforce stricter limits, but doing so is optional.

Because Duplex and Transform streams are both Readable and Writable, each maintains two separate internal buffers used for reading and writing, allowing each side to operate independently of the other while maintaining an appropriate and efficient flow of data. For example, net.Socket instances are Duplex streams whose Readable side allows consumption of data received from the socket and whose Writable side allows writing data to the socket. Because data may be written to the socket at a faster or slower rate than data is received, each side should operate (and buffer) independently of the other.

The mechanics of the internal buffering are an internal implementation detail and may be changed at any time. However, for certain advanced implementations, the internal buffers can be retrieved using writable.writableBuffer or readable.readableBuffer. Use of these undocumented properties is discouraged.
API for stream consumers#
Almost all Node.js applications, no matter how simple, use streams in some manner. The following is an example of using streams in a Node.js application that implements an HTTP server:
```js
const http = require('node:http');

const server = http.createServer((req, res) => {
  // `req` is an http.IncomingMessage, which is a readable stream.
  // `res` is an http.ServerResponse, which is a writable stream.

  let body = '';
  // Get the data as utf8 strings.
  // If an encoding is not set, Buffer objects will be received.
  req.setEncoding('utf8');

  // Readable streams emit 'data' events once a listener is added.
  req.on('data', (chunk) => {
    body += chunk;
  });

  // The 'end' event indicates that the entire body has been received.
  req.on('end', () => {
    try {
      const data = JSON.parse(body);
      // Write back something interesting to the user:
      res.write(typeof data);
      res.end();
    } catch (er) {
      // uh oh! bad json!
      res.statusCode = 400;
      return res.end(`error: ${er.message}`);
    }
  });
});

server.listen(1337);

// $ curl localhost:1337 -d "{}"
// object
// $ curl localhost:1337 -d "\"foo\""
// string
// $ curl localhost:1337 -d "not json"
// error: Unexpected token 'o', "not json" is not valid JSON
```

Writable streams (such as res in the example) expose methods such as write() and end() that are used to write data onto the stream.
Readable streams use the EventEmitter API for notifying application code when data is available to be read off the stream. That available data can be read from the stream in multiple ways.

Both Writable and Readable streams use the EventEmitter API in various ways to communicate the current state of the stream.

Duplex and Transform streams are both Writable and Readable.

Applications that are either writing data to or consuming data from a stream are not required to implement the stream interfaces directly and will generally have no reason to call require('node:stream').

Developers wishing to implement new types of streams should refer to the section API for stream implementers.
Writable streams#
Writable streams are an abstraction for a destination to which data is written.

Examples of Writable streams include:
- HTTP requests, on the client
- HTTP responses, on the server
- fs write streams
- zlib streams
- crypto streams
- TCP sockets
- child process stdin
- process.stdout, process.stderr
Some of these examples are actually Duplex streams that implement the Writable interface.

All Writable streams implement the interface defined by the stream.Writable class.

While specific instances of Writable streams may differ in various ways, all Writable streams follow the same fundamental usage pattern as illustrated in the example below:
```js
const myStream = getWritableStreamSomehow();
myStream.write('some data');
myStream.write('some more data');
myStream.end('done writing data');
```

Class: stream.Writable#
Event:'close'#
History
| Version | Changes |
|---|---|
| v10.0.0 | Add |
| v0.9.4 | Added in: v0.9.4 |
The 'close' event is emitted when the stream and any of its underlying resources (a file descriptor, for example) have been closed. The event indicates that no more events will be emitted, and no further computation will occur.

A Writable stream will always emit the 'close' event if it is created with the emitClose option.
Event:'drain'#
If a call to stream.write(chunk) returns false, the 'drain' event will be emitted when it is appropriate to resume writing data to the stream.
```js
// Write the data to the supplied writable stream one million times.
// Be attentive to back-pressure.
function writeOneMillionTimes(writer, data, encoding, callback) {
  let i = 1000000;
  write();
  function write() {
    let ok = true;
    do {
      i--;
      if (i === 0) {
        // Last time!
        writer.write(data, encoding, callback);
      } else {
        // See if we should continue, or wait.
        // Don't pass the callback, because we're not done yet.
        ok = writer.write(data, encoding);
      }
    } while (i > 0 && ok);
    if (i > 0) {
      // Had to stop early!
      // Write some more once it drains.
      writer.once('drain', write);
    }
  }
}
```

Event: 'error'#
- Type:<Error>
The 'error' event is emitted if an error occurred while writing or piping data. The listener callback is passed a single Error argument when called.

The stream is closed when the 'error' event is emitted unless the autoDestroy option was set to false when creating the stream.

After 'error', no further events other than 'close' should be emitted (including 'error' events).
Event:'finish'#
The 'finish' event is emitted after the stream.end() method has been called, and all data has been flushed to the underlying system.
```js
const writer = getWritableStreamSomehow();
for (let i = 0; i < 100; i++) {
  writer.write(`hello, #${i}!\n`);
}
writer.on('finish', () => {
  console.log('All writes are now complete.');
});
writer.end('This is the end\n');
```

Event: 'pipe'#
- src <stream.Readable> Source stream that is piping to this writable

The 'pipe' event is emitted when the stream.pipe() method is called on a readable stream, adding this writable to its set of destinations.
```js
const assert = require('node:assert');

const writer = getWritableStreamSomehow();
const reader = getReadableStreamSomehow();
writer.on('pipe', (src) => {
  console.log('Something is piping into the writer.');
  assert.equal(src, reader);
});
reader.pipe(writer);
```

Event: 'unpipe'#
- src <stream.Readable> The source stream that unpiped this writable

The 'unpipe' event is emitted when the stream.unpipe() method is called on a Readable stream, removing this Writable from its set of destinations.

This is also emitted in case this Writable stream emits an error when a Readable stream pipes into it.
```js
const assert = require('node:assert');

const writer = getWritableStreamSomehow();
const reader = getReadableStreamSomehow();
writer.on('unpipe', (src) => {
  console.log('Something has stopped piping into the writer.');
  assert.equal(src, reader);
});
reader.pipe(writer);
reader.unpipe(writer);
```

writable.cork()#
The writable.cork() method forces all written data to be buffered in memory. The buffered data will be flushed when either the stream.uncork() or stream.end() methods are called.

The primary intent of writable.cork() is to accommodate a situation in which several small chunks are written to the stream in rapid succession. Instead of immediately forwarding them to the underlying destination, writable.cork() buffers all the chunks until writable.uncork() is called, which will pass them all to writable._writev(), if present. This prevents a head-of-line blocking situation where data is being buffered while waiting for the first small chunk to be processed. However, use of writable.cork() without implementing writable._writev() may have an adverse effect on throughput.

See also: writable.uncork(), writable._writev().
writable.destroy([error])#
History
| Version | Changes |
|---|---|
| v14.0.0 | Work as a no-op on a stream that has already been destroyed. |
| v8.0.0 | Added in: v8.0.0 |
Destroy the stream. Optionally emit an 'error' event, and emit a 'close' event (unless emitClose is set to false). After this call, the writable stream has ended and subsequent calls to write() or end() will result in an ERR_STREAM_DESTROYED error.

This is a destructive and immediate way to destroy a stream. Previous calls to write() may not have drained, and may trigger an ERR_STREAM_DESTROYED error. Use end() instead of destroy() if data should flush before close, or wait for the 'drain' event before destroying the stream.
```js
const { Writable } = require('node:stream');

const myStream = new Writable();

const fooErr = new Error('foo error');
myStream.destroy(fooErr);
myStream.on('error', (fooErr) => console.error(fooErr.message)); // foo error
```

```js
const { Writable } = require('node:stream');

const myStream = new Writable();

myStream.destroy();
myStream.on('error', function wontHappen() {});
```

```js
const { Writable } = require('node:stream');

const myStream = new Writable();
myStream.destroy();

myStream.write('foo', (error) => console.error(error.code));
// ERR_STREAM_DESTROYED
```

Once destroy() has been called any further calls will be a no-op and no further errors except from _destroy() may be emitted as 'error'.
Implementors should not override this method, but instead implement writable._destroy().
writable.destroyed#
- Type:<boolean>
Is true after writable.destroy() has been called.
```js
const { Writable } = require('node:stream');

const myStream = new Writable();

console.log(myStream.destroyed); // false
myStream.destroy();
console.log(myStream.destroyed); // true
```

writable.end([chunk[, encoding]][, callback])#
History
| Version | Changes |
|---|---|
| v22.0.0, v20.13.0 | The |
| v15.0.0 | The |
| v14.0.0 | The |
| v10.0.0 | This method now returns a reference to |
| v8.0.0 | The |
| v0.9.4 | Added in: v0.9.4 |
- chunk <string> | <Buffer> | <TypedArray> | <DataView> | <any> Optional data to write. For streams not operating in object mode, chunk must be a <string>, <Buffer>, <TypedArray> or <DataView>. For object mode streams, chunk may be any JavaScript value other than null.
- encoding <string> The encoding if chunk is a string
- callback <Function> Callback for when the stream is finished.
- Returns: <this>
Calling the writable.end() method signals that no more data will be written to the Writable. The optional chunk and encoding arguments allow one final additional chunk of data to be written immediately before closing the stream.

Calling the stream.write() method after calling stream.end() will raise an error.
```js
// Write 'hello, ' and then end with 'world!'.
const fs = require('node:fs');
const file = fs.createWriteStream('example.txt');
file.write('hello, ');
file.end('world!');
// Writing more now is not allowed!
```

writable.setDefaultEncoding(encoding)#
History
| Version | Changes |
|---|---|
| v6.1.0 | This method now returns a reference to |
| v0.11.15 | Added in: v0.11.15 |
The writable.setDefaultEncoding() method sets the default encoding for a Writable stream.
writable.uncork()#
The writable.uncork() method flushes all data buffered since stream.cork() was called.

When using writable.cork() and writable.uncork() to manage the buffering of writes to a stream, defer calls to writable.uncork() using process.nextTick(). Doing so allows batching of all writable.write() calls that occur within a given Node.js event loop phase.
```js
stream.cork();
stream.write('some ');
stream.write('data ');
process.nextTick(() => stream.uncork());
```

If the writable.cork() method is called multiple times on a stream, the same number of calls to writable.uncork() must be called to flush the buffered data.
```js
stream.cork();
stream.write('some ');
stream.cork();
stream.write('data ');
process.nextTick(() => {
  stream.uncork();
  // The data will not be flushed until uncork() is called a second time.
  stream.uncork();
});
```

See also: writable.cork().
writable.writable#
- Type:<boolean>
Is true if it is safe to call writable.write(), which means the stream has not been destroyed, errored, or ended.
writable.writableAborted#
History
| Version | Changes |
|---|---|
| v24.0.0, v22.17.0 | Marking the API stable. |
| v18.0.0, v16.17.0 | Added in: v18.0.0, v16.17.0 |
- Type:<boolean>
Returns whether the stream was destroyed or errored before emitting'finish'.
writable.writableEnded#
- Type:<boolean>
Is true after writable.end() has been called. This property does not indicate whether the data has been flushed; for this, use writable.writableFinished instead.
writable.writableCorked#
- Type:<integer>
Number of times writable.uncork() needs to be called in order to fully uncork the stream.
writable.errored#
- Type:<Error>
Returns error if the stream has been destroyed with an error.
writable.writableFinished#
- Type:<boolean>
Is set totrue immediately before the'finish' event is emitted.
writable.writableHighWaterMark#
- Type:<number>
Return the value of highWaterMark passed when creating this Writable.
writable.writableLength#
- Type:<number>
This property contains the number of bytes (or objects) in the queue ready to be written. The value provides introspection data regarding the status of the highWaterMark.
writable.writableNeedDrain#
- Type:<boolean>
Is true if the stream's buffer has been full and the stream will emit 'drain'.
writable.writableObjectMode#
- Type:<boolean>
Getter for the property objectMode of a given Writable stream.
writable[Symbol.asyncDispose]()#
History
| Version | Changes |
|---|---|
| v24.2.0 | No longer experimental. |
| v22.4.0, v20.16.0 | Added in: v22.4.0, v20.16.0 |
Calls writable.destroy() with an AbortError and returns a promise that fulfills when the stream is finished.
writable.write(chunk[, encoding][, callback])#
History
| Version | Changes |
|---|---|
| v22.0.0, v20.13.0 | The |
| v8.0.0 | The |
| v6.0.0 | Passing |
| v0.9.4 | Added in: v0.9.4 |
- chunk <string> | <Buffer> | <TypedArray> | <DataView> | <any> Optional data to write. For streams not operating in object mode, chunk must be a <string>, <Buffer>, <TypedArray> or <DataView>. For object mode streams, chunk may be any JavaScript value other than null.
- encoding <string> | <null> The encoding, if chunk is a string. Default: 'utf8'
- callback <Function> Callback for when this chunk of data is flushed.
- Returns: <boolean> false if the stream wishes for the calling code to wait for the 'drain' event to be emitted before continuing to write additional data; otherwise true.
The writable.write() method writes some data to the stream, and calls the supplied callback once the data has been fully handled. If an error occurs, the callback will be called with the error as its first argument. The callback is called asynchronously and before 'error' is emitted.

The return value is true if the internal buffer is less than the highWaterMark configured when the stream was created after admitting chunk. If false is returned, further attempts to write data to the stream should stop until the 'drain' event is emitted.

While a stream is not draining, calls to write() will buffer chunk, and return false. Once all currently buffered chunks are drained (accepted for delivery by the operating system), the 'drain' event will be emitted. Once write() returns false, do not write more chunks until the 'drain' event is emitted. While calling write() on a stream that is not draining is allowed, Node.js will buffer all written chunks until maximum memory usage occurs, at which point it will abort unconditionally. Even before it aborts, high memory usage will cause poor garbage collector performance and high RSS (which is not typically released back to the system, even after the memory is no longer required). Since TCP sockets may never drain if the remote peer does not read the data, writing to a socket that is not draining may lead to a remotely exploitable vulnerability.

Writing data while the stream is not draining is particularly problematic for a Transform, because Transform streams are paused by default until they are piped or a 'data' or 'readable' event handler is added.

If the data to be written can be generated or fetched on demand, it is recommended to encapsulate the logic into a Readable and use stream.pipe(). However, if calling write() is preferred, it is possible to respect backpressure and avoid memory issues using the 'drain' event:
```js
function write(data, cb) {
  if (!stream.write(data)) {
    stream.once('drain', cb);
  } else {
    process.nextTick(cb);
  }
}

// Wait for cb to be called before doing any other write.
write('hello', () => {
  console.log('Write completed, do more writes now.');
});
```

A Writable stream in object mode will always ignore the encoding argument.
Readable streams#
Readable streams are an abstraction for a source from which data is consumed.

Examples of Readable streams include:
- HTTP responses, on the client
- HTTP requests, on the server
- fs read streams
- zlib streams
- crypto streams
- TCP sockets
- child process stdout and stderr
- process.stdin
All Readable streams implement the interface defined by the stream.Readable class.
Two reading modes#
Readable streams effectively operate in one of two modes: flowing and paused. These modes are separate from object mode. A Readable stream can be in object mode or not, regardless of whether it is in flowing mode or paused mode.

- In flowing mode, data is read from the underlying system automatically and provided to an application as quickly as possible using events via the EventEmitter interface.
- In paused mode, the stream.read() method must be called explicitly to read chunks of data from the stream.
All Readable streams begin in paused mode but can be switched to flowing mode in one of the following ways:

- Adding a 'data' event handler.
- Calling the stream.resume() method.
- Calling the stream.pipe() method to send the data to a Writable.
The Readable can switch back to paused mode using one of the following:

- If there are no pipe destinations, by calling the stream.pause() method.
- If there are pipe destinations, by removing all pipe destinations. Multiple pipe destinations may be removed by calling the stream.unpipe() method.
The important concept to remember is that a Readable will not generate data until a mechanism for either consuming or ignoring that data is provided. If the consuming mechanism is disabled or taken away, the Readable will attempt to stop generating the data.

For backward compatibility reasons, removing 'data' event handlers will not automatically pause the stream. Also, if there are piped destinations, then calling stream.pause() will not guarantee that the stream will remain paused once those destinations drain and ask for more data.

If a Readable is switched into flowing mode and there are no consumers available to handle the data, that data will be lost. This can occur, for instance, when the readable.resume() method is called without a listener attached to the 'data' event, or when a 'data' event handler is removed from the stream.
Adding a 'readable' event handler automatically makes the stream stop flowing, and the data has to be consumed via readable.read(). If the 'readable' event handler is removed, then the stream will start flowing again if there is a 'data' event handler.
Three states#
The "two modes" of operation for a Readable stream are a simplified abstraction for the more complicated internal state management that is happening within the Readable stream implementation.

Specifically, at any given point in time, every Readable is in one of three possible states:

- readable.readableFlowing === null
- readable.readableFlowing === false
- readable.readableFlowing === true
When readable.readableFlowing is null, no mechanism for consuming the stream's data is provided. Therefore, the stream will not generate data. While in this state, attaching a listener for the 'data' event, calling the readable.pipe() method, or calling the readable.resume() method will switch readable.readableFlowing to true, causing the Readable to begin actively emitting events as data is generated.

Calling readable.pause(), readable.unpipe(), or receiving backpressure will cause readable.readableFlowing to be set to false, temporarily halting the flowing of events but not halting the generation of data. While in this state, attaching a listener for the 'data' event will not switch readable.readableFlowing to true.
```js
const { PassThrough, Writable } = require('node:stream');
const pass = new PassThrough();
const writable = new Writable();

pass.pipe(writable);
pass.unpipe(writable);
// readableFlowing is now false.

pass.on('data', (chunk) => { console.log(chunk.toString()); });
// readableFlowing is still false.
pass.write('ok'); // Will not emit 'data'.
pass.resume();    // Must be called to make stream emit 'data'.
// readableFlowing is now true.
```

While readable.readableFlowing is false, data may be accumulating within the stream's internal buffer.
Choose one API style#
The Readable stream API evolved across multiple Node.js versions and provides multiple methods of consuming stream data. In general, developers should choose one of the methods of consuming data and should never use multiple methods to consume data from a single stream. Specifically, using a combination of on('data'), on('readable'), pipe(), or async iterators could lead to unintuitive behavior.
Class:stream.Readable#
Event:'close'#
History
| Version | Changes |
|---|---|
| v10.0.0 | Add |
| v0.9.4 | Added in: v0.9.4 |
The 'close' event is emitted when the stream and any of its underlying resources (a file descriptor, for example) have been closed. The event indicates that no more events will be emitted, and no further computation will occur.

A Readable stream will always emit the 'close' event if it is created with the emitClose option.
Event:'data'#
- chunk <Buffer> | <string> | <any> The chunk of data. For streams that are not operating in object mode, the chunk will be either a string or Buffer. For streams that are in object mode, the chunk can be any JavaScript value other than null.

The 'data' event is emitted whenever the stream is relinquishing ownership of a chunk of data to a consumer. This may occur whenever the stream is switched into flowing mode by calling readable.pipe(), readable.resume(), or by attaching a listener callback to the 'data' event. The 'data' event will also be emitted whenever the readable.read() method is called and a chunk of data is available to be returned.

Attaching a 'data' event listener to a stream that has not been explicitly paused will switch the stream into flowing mode. Data will then be passed as soon as it is available.

The listener callback will be passed the chunk of data as a string if a default encoding has been specified for the stream using the readable.setEncoding() method; otherwise the data will be passed as a Buffer.
```js
const readable = getReadableStreamSomehow();
readable.on('data', (chunk) => {
  console.log(`Received ${chunk.length} bytes of data.`);
});
```

Event: 'end'#
The 'end' event is emitted when there is no more data to be consumed from the stream.

The 'end' event will not be emitted unless the data is completely consumed. This can be accomplished by switching the stream into flowing mode, or by calling stream.read() repeatedly until all data has been consumed.
```js
const readable = getReadableStreamSomehow();
readable.on('data', (chunk) => {
  console.log(`Received ${chunk.length} bytes of data.`);
});
readable.on('end', () => {
  console.log('There will be no more data.');
});
```

Event: 'error'#
- Type:<Error>
The 'error' event may be emitted by a Readable implementation at any time. Typically, this may occur if the underlying stream is unable to generate data due to an underlying internal failure, or when a stream implementation attempts to push an invalid chunk of data.

The listener callback will be passed a single Error object.
Event:'pause'#
The 'pause' event is emitted when stream.pause() is called and readableFlowing is not false.
Event:'readable'#
History
| Version | Changes |
|---|---|
| v10.0.0 | The |
| v10.0.0 | Using |
| v0.9.4 | Added in: v0.9.4 |
The 'readable' event is emitted when there is data available to be read from the stream, up to the configured high water mark (state.highWaterMark). Effectively, it indicates that the stream has new information within the buffer. If data is available within this buffer, stream.read() can be called to retrieve that data. Additionally, the 'readable' event may also be emitted when the end of the stream has been reached.
```js
const readable = getReadableStreamSomehow();
readable.on('readable', function() {
  // There is some data to read now.
  let data;

  while ((data = this.read()) !== null) {
    console.log(data);
  }
});
```

If the end of the stream has been reached, calling stream.read() will return null and trigger the 'end' event. This is also true if there never was any data to be read. For instance, in the following example, foo.txt is an empty file:
```js
const fs = require('node:fs');
const rr = fs.createReadStream('foo.txt');
rr.on('readable', () => {
  console.log(`readable: ${rr.read()}`);
});
rr.on('end', () => {
  console.log('end');
});
```

The output of running this script is:
```console
$ node test.js
readable: null
end
```

In some cases, attaching a listener for the 'readable' event will cause some amount of data to be read into an internal buffer.
In general, thereadable.pipe() and'data' event mechanisms are easier tounderstand than the'readable' event. However, handling'readable' mightresult in increased throughput.
If both 'readable' and 'data' are used at the same time, 'readable' takes precedence in controlling the flow, i.e. 'data' will be emitted only when stream.read() is called, and the readableFlowing property becomes false. If there are 'data' listeners when 'readable' is removed, the stream will start flowing, i.e. 'data' events will be emitted without calling .resume().
Event:'resume'#
The 'resume' event is emitted when stream.resume() is called and readableFlowing is not true.
readable.destroy([error])#
History
| Version | Changes |
|---|---|
| v14.0.0 | Work as a no-op on a stream that has already been destroyed. |
| v8.0.0 | Added in: v8.0.0 |
Destroy the stream. Optionally emit an'error' event, and emit a'close'event (unlessemitClose is set tofalse). After this call, the readablestream will release any internal resources and subsequent calls topush()will be ignored.
Once destroy() has been called, any further calls will be a no-op and no further errors except from _destroy() may be emitted as 'error'.
Implementors should not override this method, but instead implement readable._destroy().
readable.isPaused()#
- Returns:<boolean>
The readable.isPaused() method returns the current operating state of the Readable. This is used primarily by the mechanism that underlies the readable.pipe() method. In most typical cases, there will be no reason to use this method directly.
```js
const readable = new stream.Readable();

readable.isPaused(); // === false
readable.pause();
readable.isPaused(); // === true
readable.resume();
readable.isPaused(); // === false
```

readable.pause()#
- Returns:<this>
The readable.pause() method will cause a stream in flowing mode to stop emitting 'data' events, switching out of flowing mode. Any data that becomes available will remain in the internal buffer.
```js
const readable = getReadableStreamSomehow();
readable.on('data', (chunk) => {
  console.log(`Received ${chunk.length} bytes of data.`);
  readable.pause();
  console.log('There will be no additional data for 1 second.');
  setTimeout(() => {
    console.log('Now data will start flowing again.');
    readable.resume();
  }, 1000);
});
```

The readable.pause() method has no effect if there is a 'readable' event listener.
readable.pipe(destination[, options])#
- destination <stream.Writable> The destination for writing data
- options <Object> Pipe options
  - end <boolean> End the writer when the reader ends. Default: true.
- Returns: <stream.Writable> The destination, allowing for a chain of pipes if it is a Duplex or a Transform stream
The readable.pipe() method attaches a Writable stream to the readable, causing it to switch automatically into flowing mode and push all of its data to the attached Writable. The flow of data will be automatically managed so that the destination Writable stream is not overwhelmed by a faster Readable stream.
The following example pipes all of the data from the readable into a file named file.txt:
```js
const fs = require('node:fs');
const readable = getReadableStreamSomehow();
const writable = fs.createWriteStream('file.txt');
// All the data from readable goes into 'file.txt'.
readable.pipe(writable);
```

It is possible to attach multiple Writable streams to a single Readable stream.
The readable.pipe() method returns a reference to the destination stream making it possible to set up chains of piped streams:
```js
const fs = require('node:fs');
const zlib = require('node:zlib');
const r = fs.createReadStream('file.txt');
const z = zlib.createGzip();
const w = fs.createWriteStream('file.txt.gz');
r.pipe(z).pipe(w);
```

By default, stream.end() is called on the destination Writable stream when the source Readable stream emits 'end', so that the destination is no longer writable. To disable this default behavior, the end option can be passed as false, causing the destination stream to remain open:
```js
reader.pipe(writer, { end: false });
reader.on('end', () => {
  writer.end('Goodbye\n');
});
```

One important caveat is that if the Readable stream emits an error during processing, the Writable destination is not closed automatically. If an error occurs, it will be necessary to manually close each stream in order to prevent memory leaks.
The process.stderr and process.stdout Writable streams are never closed until the Node.js process exits, regardless of the specified options.
readable.read([size])#
- size <number> Optional argument to specify how much data to read.
- Returns: <string> | <Buffer> | <null> | <any>
The readable.read() method reads data out of the internal buffer and returns it. If no data is available to be read, null is returned. By default, the data is returned as a Buffer object unless an encoding has been specified using the readable.setEncoding() method or the stream is operating in object mode.
The optional size argument specifies a specific number of bytes to read. If size bytes are not available to be read, null will be returned unless the stream has ended, in which case all of the data remaining in the internal buffer will be returned.
If the size argument is not specified, all of the data contained in the internal buffer will be returned.
The size argument must be less than or equal to 1 GiB.
The readable.read() method should only be called on Readable streams operating in paused mode. In flowing mode, readable.read() is called automatically until the internal buffer is fully drained.
```js
const readable = getReadableStreamSomehow();

// 'readable' may be triggered multiple times as data is buffered in
readable.on('readable', () => {
  let chunk;
  console.log('Stream is readable (new data received in buffer)');
  // Use a loop to make sure we read all currently available data
  while (null !== (chunk = readable.read())) {
    console.log(`Read ${chunk.length} bytes of data...`);
  }
});

// 'end' will be triggered once when there is no more data available
readable.on('end', () => {
  console.log('Reached end of stream.');
});
```

Each call to readable.read() returns a chunk of data or null, signifying that there's no more data to read at that moment. These chunks aren't automatically concatenated. Because a single read() call does not return all the data, using a while loop may be necessary to continuously read chunks until all data is retrieved. When reading a large file, .read() might return null temporarily, indicating that it has consumed all buffered content but there may be more data yet to be buffered. In such cases, a new 'readable' event is emitted once there's more data in the buffer, and the 'end' event signifies the end of data transmission.
Therefore, to read a file's whole contents from a readable, it is necessary to collect chunks across multiple 'readable' events:
```js
const chunks = [];

readable.on('readable', () => {
  let chunk;
  while (null !== (chunk = readable.read())) {
    chunks.push(chunk);
  }
});

readable.on('end', () => {
  const content = chunks.join('');
});
```

A Readable stream in object mode will always return a single item from a call to readable.read(size), regardless of the value of the size argument.
If the readable.read() method returns a chunk of data, a 'data' event will also be emitted.
Calling stream.read([size]) after the 'end' event has been emitted will return null. No runtime error will be raised.
readable.readable#
- Type:<boolean>
Is true if it is safe to call readable.read(), which means the stream has not been destroyed or emitted 'error' or 'end'.
readable.readableAborted#
History
| Version | Changes |
|---|---|
| v24.0.0, v22.17.0 | Marking the API stable. |
| v16.8.0 | Added in: v16.8.0 |
- Type:<boolean>
Returns whether the stream was destroyed or errored before emitting 'end'.
readable.readableDidRead#
History
| Version | Changes |
|---|---|
| v24.0.0, v22.17.0 | Marking the API stable. |
| v16.7.0, v14.18.0 | Added in: v16.7.0, v14.18.0 |
- Type:<boolean>
Returns whether 'data' has been emitted.
readable.readableEncoding#
Getter for the property encoding of a given Readable stream. The encoding property can be set using the readable.setEncoding() method.
readable.errored#
- Type:<Error>
Returns error if the stream has been destroyed with an error.
readable.readableFlowing#
- Type:<boolean>
This property reflects the current state of a Readable stream as described in the Three states section.
readable.readableHighWaterMark#
- Type:<number>
Returns the value of highWaterMark passed when creating this Readable.
readable.readableLength#
- Type:<number>
This property contains the number of bytes (or objects) in the queue ready to be read. The value provides introspection data regarding the status of the highWaterMark.
readable.readableObjectMode#
- Type:<boolean>
Getter for the property objectMode of a given Readable stream.
readable.resume()#
History
| Version | Changes |
|---|---|
| v10.0.0 | The |
| v0.9.4 | Added in: v0.9.4 |
- Returns:<this>
The readable.resume() method causes an explicitly paused Readable stream to resume emitting 'data' events, switching the stream into flowing mode.
The readable.resume() method can be used to fully consume the data from a stream without actually processing any of that data:
```js
getReadableStreamSomehow()
  .resume()
  .on('end', () => {
    console.log('Reached the end, but did not read anything.');
  });
```

The readable.resume() method has no effect if there is a 'readable' event listener.
readable.setEncoding(encoding)#
The readable.setEncoding() method sets the character encoding for data read from the Readable stream.
By default, no encoding is assigned and stream data will be returned as Buffer objects. Setting an encoding causes the stream data to be returned as strings of the specified encoding rather than as Buffer objects. For instance, calling readable.setEncoding('utf8') will cause the output data to be interpreted as UTF-8 data, and passed as strings. Calling readable.setEncoding('hex') will cause the data to be encoded in hexadecimal string format.
The Readable stream will properly handle multi-byte characters delivered through the stream that would otherwise become improperly decoded if simply pulled from the stream as Buffer objects.
```js
const readable = getReadableStreamSomehow();
readable.setEncoding('utf8');
readable.on('data', (chunk) => {
  assert.equal(typeof chunk, 'string');
  console.log('Got %d characters of string data:', chunk.length);
});
```

readable.unpipe([destination])#
- destination <stream.Writable> Optional specific stream to unpipe
- Returns: <this>
The readable.unpipe() method detaches a Writable stream previously attached using the stream.pipe() method.
If the destination is not specified, then all pipes are detached.
If the destination is specified, but no pipe is set up for it, then the method does nothing.
```js
const fs = require('node:fs');
const readable = getReadableStreamSomehow();
const writable = fs.createWriteStream('file.txt');
// All the data from readable goes into 'file.txt',
// but only for the first second.
readable.pipe(writable);
setTimeout(() => {
  console.log('Stop writing to file.txt.');
  readable.unpipe(writable);
  console.log('Manually close the file stream.');
  writable.end();
}, 1000);
```

readable.unshift(chunk[, encoding])#
History
| Version | Changes |
|---|---|
| v22.0.0, v20.13.0 | The |
| v8.0.0 | The |
| v0.9.11 | Added in: v0.9.11 |
- chunk <Buffer> | <TypedArray> | <DataView> | <string> | <null> | <any> Chunk of data to unshift onto the read queue. For streams not operating in object mode, chunk must be a <string>, <Buffer>, <TypedArray>, <DataView> or null. For object mode streams, chunk may be any JavaScript value.
- encoding <string> Encoding of string chunks. Must be a valid Buffer encoding, such as 'utf8' or 'ascii'.
Passing chunk as null signals the end of the stream (EOF) and behaves the same as readable.push(null), after which no more data can be written. The EOF signal is put at the end of the buffer and any buffered data will still be flushed.
The readable.unshift() method pushes a chunk of data back into the internal buffer. This is useful in certain situations where a stream is being consumed by code that needs to "un-consume" some amount of data that it has optimistically pulled out of the source, so that the data can be passed on to some other party.
The stream.unshift(chunk) method cannot be called after the 'end' event has been emitted or a runtime error will be thrown.
Developers using stream.unshift() often should consider switching to use of a Transform stream instead. See the API for stream implementers section for more information.
```js
// Pull off a header delimited by \n\n.
// Use unshift() if we get too much.
// Call the callback with (error, header, stream).
const { StringDecoder } = require('node:string_decoder');
function parseHeader(stream, callback) {
  stream.on('error', callback);
  stream.on('readable', onReadable);
  const decoder = new StringDecoder('utf8');
  let header = '';
  function onReadable() {
    let chunk;
    while (null !== (chunk = stream.read())) {
      const str = decoder.write(chunk);
      if (str.includes('\n\n')) {
        // Found the header boundary.
        const split = str.split(/\n\n/);
        header += split.shift();
        const remaining = split.join('\n\n');
        const buf = Buffer.from(remaining, 'utf8');
        stream.removeListener('error', callback);
        // Remove the 'readable' listener before unshifting.
        stream.removeListener('readable', onReadable);
        if (buf.length)
          stream.unshift(buf);
        // Now the body of the message can be read from the stream.
        callback(null, header, stream);
        return;
      }
      // Still reading the header.
      header += str;
    }
  }
}
```

Unlike stream.push(chunk), stream.unshift(chunk) will not end the reading process by resetting the internal reading state of the stream. This can cause unexpected results if readable.unshift() is called during a read (i.e. from within a stream._read() implementation on a custom stream). Following the call to readable.unshift() with an immediate stream.push('') will reset the reading state appropriately, however it is best to simply avoid calling readable.unshift() while in the process of performing a read.
readable.wrap(stream)#
Prior to Node.js 0.10, streams did not implement the entire node:stream module API as it is currently defined. (See Compatibility for more information.)
When using an older Node.js library that emits 'data' events and has a stream.pause() method that is advisory only, the readable.wrap() method can be used to create a Readable stream that uses the old stream as its data source.
It will rarely be necessary to use readable.wrap(), but the method has been provided as a convenience for interacting with older Node.js applications and libraries.
```js
const { OldReader } = require('./old-api-module.js');
const { Readable } = require('node:stream');
const oreader = new OldReader();
const myReader = new Readable().wrap(oreader);

myReader.on('readable', () => {
  myReader.read(); // etc.
});
```

readable[Symbol.asyncIterator]()#
History
| Version | Changes |
|---|---|
| v11.14.0 | Symbol.asyncIterator support is no longer experimental. |
| v10.0.0 | Added in: v10.0.0 |
- Returns:<AsyncIterator> to fully consume the stream.
```js
const fs = require('node:fs');

async function print(readable) {
  readable.setEncoding('utf8');
  let data = '';
  for await (const chunk of readable) {
    data += chunk;
  }
  console.log(data);
}

print(fs.createReadStream('file')).catch(console.error);
```

If the loop terminates with a break, return, or a throw, the stream will be destroyed. In other terms, iterating over a stream will consume the stream fully. The stream will be read in chunks of size equal to the highWaterMark option. In the code example above, data will be in a single chunk if the file has less than 64 KiB of data because no highWaterMark option is provided to fs.createReadStream().
readable[Symbol.asyncDispose]()#
History
| Version | Changes |
|---|---|
| v24.2.0 | No longer experimental. |
| v20.4.0, v18.18.0 | Added in: v20.4.0, v18.18.0 |
Calls readable.destroy() with an AbortError and returns a promise that fulfills when the stream is finished.
readable.compose(stream[, options])#
History
| Version | Changes |
|---|---|
| v24.0.0, v22.17.0 | Marking the API stable. |
| v19.1.0, v18.13.0 | Added in: v19.1.0, v18.13.0 |
- stream <Writable> | <Duplex> | <WritableStream> | <TransformStream> | <Function>
- options <Object>
  - signal <AbortSignal> allows destroying the stream if the signal is aborted.
- Returns: <Duplex> a stream composed with the given stream.
```js
import { Readable } from 'node:stream';

async function* splitToWords(source) {
  for await (const chunk of source) {
    const words = String(chunk).split(' ');

    for (const word of words) {
      yield word;
    }
  }
}

const wordsStream = Readable.from(['text passed through', 'composed stream']).compose(splitToWords);
const words = await wordsStream.toArray();

console.log(words); // prints ['text', 'passed', 'through', 'composed', 'stream']
```

readable.compose(s) is equivalent to stream.compose(readable, s).
This method also allows for an <AbortSignal> to be provided, which will destroy the composed stream when aborted.
See stream.compose(...streams) for more information.
readable.iterator([options])#
History
| Version | Changes |
|---|---|
| v24.0.0, v22.17.0 | Marking the API stable. |
| v16.3.0 | Added in: v16.3.0 |
- options <Object>
  - destroyOnReturn <boolean> When set to false, calling return on the async iterator, or exiting a for await...of iteration using a break, return, or throw will not destroy the stream. Default: true.
- Returns:<AsyncIterator> to consume the stream.
The iterator created by this method gives users the option to cancel the destruction of the stream if the for await...of loop is exited by return, break, or throw, or if the iterator should destroy the stream if the stream emitted an error during iteration.
```js
const { Readable } = require('node:stream');

async function printIterator(readable) {
  for await (const chunk of readable.iterator({ destroyOnReturn: false })) {
    console.log(chunk); // 1
    break;
  }

  console.log(readable.destroyed); // false

  for await (const chunk of readable.iterator({ destroyOnReturn: false })) {
    console.log(chunk); // Will print 2 and then 3
  }

  console.log(readable.destroyed); // True, stream was totally consumed
}

async function printSymbolAsyncIterator(readable) {
  for await (const chunk of readable) {
    console.log(chunk); // 1
    break;
  }

  console.log(readable.destroyed); // true
}

async function showBoth() {
  await printIterator(Readable.from([1, 2, 3]));
  await printSymbolAsyncIterator(Readable.from([1, 2, 3]));
}

showBoth();
```

readable.map(fn[, options])#
History
| Version | Changes |
|---|---|
| v20.7.0, v18.19.0 | added |
| v17.4.0, v16.14.0 | Added in: v17.4.0, v16.14.0 |
- fn <Function> | <AsyncFunction> a function to map over every chunk in the stream.
  - data <any> a chunk of data from the stream.
  - options <Object>
    - signal <AbortSignal> aborted if the stream is destroyed allowing to abort the fn call early.
- options <Object>
  - concurrency <number> the maximum concurrent invocation of fn to call on the stream at once. Default: 1.
  - highWaterMark <number> how many items to buffer while waiting for user consumption of the mapped items. Default: concurrency * 2 - 1.
  - signal <AbortSignal> allows destroying the stream if the signal is aborted.
- Returns: <Readable> a stream mapped with the function fn.
This method allows mapping over the stream. The fn function will be called for every chunk in the stream. If the fn function returns a promise, that promise will be awaited before being passed to the result stream.
```js
import { Readable } from 'node:stream';
import { Resolver } from 'node:dns/promises';

// With a synchronous mapper.
for await (const chunk of Readable.from([1, 2, 3, 4]).map((x) => x * 2)) {
  console.log(chunk); // 2, 4, 6, 8
}
// With an asynchronous mapper, making at most 2 queries at a time.
const resolver = new Resolver();
const dnsResults = Readable.from([
  'nodejs.org',
  'openjsf.org',
  'www.linuxfoundation.org',
]).map((domain) => resolver.resolve4(domain), { concurrency: 2 });
for await (const result of dnsResults) {
  console.log(result); // Logs the DNS result of resolver.resolve4.
}
```

readable.filter(fn[, options])#
History
| Version | Changes |
|---|---|
| v20.7.0, v18.19.0 | added |
| v17.4.0, v16.14.0 | Added in: v17.4.0, v16.14.0 |
- fn <Function> | <AsyncFunction> a function to filter chunks from the stream.
  - data <any> a chunk of data from the stream.
  - options <Object>
    - signal <AbortSignal> aborted if the stream is destroyed allowing to abort the fn call early.
- options <Object>
  - concurrency <number> the maximum concurrent invocation of fn to call on the stream at once. Default: 1.
  - highWaterMark <number> how many items to buffer while waiting for user consumption of the filtered items. Default: concurrency * 2 - 1.
  - signal <AbortSignal> allows destroying the stream if the signal is aborted.
- Returns: <Readable> a stream filtered with the predicate fn.
This method allows filtering the stream. For each chunk in the stream the fn function will be called and if it returns a truthy value, the chunk will be passed to the result stream. If the fn function returns a promise, that promise will be awaited.
```js
import { Readable } from 'node:stream';
import { Resolver } from 'node:dns/promises';

// With a synchronous predicate.
for await (const chunk of Readable.from([1, 2, 3, 4]).filter((x) => x > 2)) {
  console.log(chunk); // 3, 4
}
// With an asynchronous predicate, making at most 2 queries at a time.
const resolver = new Resolver();
const dnsResults = Readable.from([
  'nodejs.org',
  'openjsf.org',
  'www.linuxfoundation.org',
]).filter(async (domain) => {
  const { address } = await resolver.resolve4(domain, { ttl: true });
  return address.ttl > 60;
}, { concurrency: 2 });
for await (const result of dnsResults) {
  // Logs domains with more than 60 seconds on the resolved dns record.
  console.log(result);
}
```

readable.forEach(fn[, options])#
- fn <Function> | <AsyncFunction> a function to call on each chunk of the stream.
  - data <any> a chunk of data from the stream.
  - options <Object>
    - signal <AbortSignal> aborted if the stream is destroyed allowing to abort the fn call early.
- options <Object>
  - concurrency <number> the maximum concurrent invocation of fn to call on the stream at once. Default: 1.
  - signal <AbortSignal> allows destroying the stream if the signal is aborted.
- Returns:<Promise> a promise for when the stream has finished.
This method allows iterating a stream. For each chunk in the stream the fn function will be called. If the fn function returns a promise, that promise will be awaited.
This method is different from for await...of loops in that it can optionally process chunks concurrently. In addition, a forEach iteration can only be stopped by having passed a signal option and aborting the related AbortController while for await...of can be stopped with break or return. In either case the stream will be destroyed.
This method is different from listening to the 'data' event in that it uses the 'readable' event in the underlying machinery and can limit the number of concurrent fn calls.
```js
import { Readable } from 'node:stream';
import { Resolver } from 'node:dns/promises';

// With a synchronous predicate.
for await (const chunk of Readable.from([1, 2, 3, 4]).filter((x) => x > 2)) {
  console.log(chunk); // 3, 4
}
// With an asynchronous predicate, making at most 2 queries at a time.
const resolver = new Resolver();
const dnsResults = Readable.from([
  'nodejs.org',
  'openjsf.org',
  'www.linuxfoundation.org',
]).map(async (domain) => {
  const { address } = await resolver.resolve4(domain, { ttl: true });
  return address;
}, { concurrency: 2 });
await dnsResults.forEach((result) => {
  // Logs result, similar to `for await (const result of dnsResults)`
  console.log(result);
});
console.log('done'); // Stream has finished
```

readable.toArray([options])#
- options <Object>
  - signal <AbortSignal> allows cancelling the toArray operation if the signal is aborted.
- Returns:<Promise> a promise containing an array with the contents of thestream.
This method allows easily obtaining the contents of a stream.
As this method reads the entire stream into memory, it negates the benefits of streams. It's intended for interoperability and convenience, not as the primary way to consume streams.
```js
import { Readable } from 'node:stream';
import { Resolver } from 'node:dns/promises';

await Readable.from([1, 2, 3, 4]).toArray(); // [1, 2, 3, 4]

// Make dns queries concurrently using .map and collect
// the results into an array using toArray
const resolver = new Resolver();
const dnsResults = await Readable.from([
  'nodejs.org',
  'openjsf.org',
  'www.linuxfoundation.org',
]).map(async (domain) => {
  const { address } = await resolver.resolve4(domain, { ttl: true });
  return address;
}, { concurrency: 2 }).toArray();
```

readable.some(fn[, options])#
- fn <Function> | <AsyncFunction> a function to call on each chunk of the stream.
  - data <any> a chunk of data from the stream.
  - options <Object>
    - signal <AbortSignal> aborted if the stream is destroyed allowing to abort the fn call early.
- options <Object>
  - concurrency <number> the maximum concurrent invocation of fn to call on the stream at once. Default: 1.
  - signal <AbortSignal> allows destroying the stream if the signal is aborted.
- Returns: <Promise> a promise evaluating to true if fn returned a truthy value for at least one of the chunks.
This method is similar to Array.prototype.some and calls fn on each chunk in the stream until the awaited return value is true (or any truthy value). Once an fn call's awaited return value on a chunk is truthy, the stream is destroyed and the promise is fulfilled with true. If none of the fn calls on the chunks return a truthy value, the promise is fulfilled with false.
```js
import { Readable } from 'node:stream';
import { stat } from 'node:fs/promises';

// With a synchronous predicate.
await Readable.from([1, 2, 3, 4]).some((x) => x > 2); // true
await Readable.from([1, 2, 3, 4]).some((x) => x < 0); // false

// With an asynchronous predicate, making at most 2 file checks at a time.
const anyBigFile = await Readable.from([
  'file1',
  'file2',
  'file3',
]).some(async (fileName) => {
  const stats = await stat(fileName);
  return stats.size > 1024 * 1024;
}, { concurrency: 2 });
console.log(anyBigFile); // `true` if any file in the list is bigger than 1MB
console.log('done'); // Stream has finished
```

readable.find(fn[, options])#
- fn <Function> | <AsyncFunction> a function to call on each chunk of the stream.
  - data <any> a chunk of data from the stream.
  - options <Object>
    - signal <AbortSignal> aborted if the stream is destroyed allowing to abort the fn call early.
- options <Object>
  - concurrency <number> the maximum concurrent invocation of fn to call on the stream at once. Default: 1.
  - signal <AbortSignal> allows destroying the stream if the signal is aborted.
- Returns: <Promise> a promise evaluating to the first chunk for which fn evaluated with a truthy value, or undefined if no element was found.
This method is similar to Array.prototype.find and calls fn on each chunk in the stream to find a chunk with a truthy value for fn. Once an fn call's awaited return value is truthy, the stream is destroyed and the promise is fulfilled with the value for which fn returned a truthy value. If all of the fn calls on the chunks return a falsy value, the promise is fulfilled with undefined.
```js
import { Readable } from 'node:stream';
import { stat } from 'node:fs/promises';

// With a synchronous predicate.
await Readable.from([1, 2, 3, 4]).find((x) => x > 2); // 3
await Readable.from([1, 2, 3, 4]).find((x) => x > 0); // 1
await Readable.from([1, 2, 3, 4]).find((x) => x > 10); // undefined

// With an asynchronous predicate, making at most 2 file checks at a time.
const foundBigFile = await Readable.from([
  'file1',
  'file2',
  'file3',
]).find(async (fileName) => {
  const stats = await stat(fileName);
  return stats.size > 1024 * 1024;
}, { concurrency: 2 });
console.log(foundBigFile); // File name of large file, if any file in the list is bigger than 1MB
console.log('done'); // Stream has finished
```

readable.every(fn[, options])#
- fn <Function> | <AsyncFunction> a function to call on each chunk of the stream.
  - data <any> a chunk of data from the stream.
  - options <Object>
    - signal <AbortSignal> aborted if the stream is destroyed allowing to abort the fn call early.
- options <Object>
  - concurrency <number> the maximum concurrent invocation of fn to call on the stream at once. Default: 1.
  - signal <AbortSignal> allows destroying the stream if the signal is aborted.
- Returns: <Promise> a promise evaluating to true if fn returned a truthy value for all of the chunks.
This method is similar to Array.prototype.every and calls fn on each chunk in the stream to check if all awaited return values are truthy for fn. Once an fn call's awaited return value on a chunk is falsy, the stream is destroyed and the promise is fulfilled with false. If all of the fn calls on the chunks return a truthy value, the promise is fulfilled with true.
```js
import { Readable } from 'node:stream';
import { stat } from 'node:fs/promises';

// With a synchronous predicate.
await Readable.from([1, 2, 3, 4]).every((x) => x > 2); // false
await Readable.from([1, 2, 3, 4]).every((x) => x > 0); // true

// With an asynchronous predicate, making at most 2 file checks at a time.
const allBigFiles = await Readable.from([
  'file1',
  'file2',
  'file3',
]).every(async (fileName) => {
  const stats = await stat(fileName);
  return stats.size > 1024 * 1024;
}, { concurrency: 2 });
// `true` if all files in the list are bigger than 1MiB
console.log(allBigFiles);
console.log('done'); // Stream has finished
```

readable.flatMap(fn[, options])#
- fn <Function> | <AsyncGeneratorFunction> | <AsyncFunction> a function to map over every chunk in the stream.
  - data <any> a chunk of data from the stream.
  - options <Object>
    - signal <AbortSignal> aborted if the stream is destroyed allowing to abort the fn call early.
- options <Object>
  - concurrency <number> the maximum concurrent invocation of fn to call on the stream at once. Default: 1.
  - signal <AbortSignal> allows destroying the stream if the signal is aborted.
- Returns: <Readable> a stream flat-mapped with the function fn.
This method returns a new stream by applying the given callback to each chunk of the stream and then flattening the result.
It is possible to return a stream or another iterable or async iterable from fn and the result streams will be merged (flattened) into the returned stream.
```js
import { Readable } from 'node:stream';
import { createReadStream } from 'node:fs';

// With a synchronous mapper.
for await (const chunk of Readable.from([1, 2, 3, 4]).flatMap((x) => [x, x])) {
  console.log(chunk); // 1, 1, 2, 2, 3, 3, 4, 4
}
// With an asynchronous mapper, combine the contents of 4 files
const concatResult = Readable.from([
  './1.mjs',
  './2.mjs',
  './3.mjs',
  './4.mjs',
]).flatMap((fileName) => createReadStream(fileName));
for await (const result of concatResult) {
  // This will contain the contents (all chunks) of all 4 files
  console.log(result);
}
```

readable.drop(limit[, options])#
- limit <number> the number of chunks to drop from the readable.
- options <Object>
  - signal <AbortSignal> allows destroying the stream if the signal is aborted.
- Returns: <Readable> a stream with limit chunks dropped.
This method returns a new stream with the first limit chunks dropped.
```js
import { Readable } from 'node:stream';

await Readable.from([1, 2, 3, 4]).drop(2).toArray(); // [3, 4]
```

readable.take(limit[, options])#
- limit <number> the number of chunks to take from the readable.
- options <Object>
  - signal <AbortSignal> allows destroying the stream if the signal is aborted.
- Returns: <Readable> a stream with limit chunks taken.
This method returns a new stream with the first limit chunks.
```js
import { Readable } from 'node:stream';

await Readable.from([1, 2, 3, 4]).take(2).toArray(); // [1, 2]
```

readable.reduce(fn[, initial[, options]])#
- fn <Function> | <AsyncFunction> a reducer function to call over every chunk in the stream.
  - previous <any> the value obtained from the last call to fn, or the initial value if specified, or the first chunk of the stream otherwise.
  - data <any> a chunk of data from the stream.
  - options <Object>
    - signal <AbortSignal> aborted if the stream is destroyed allowing to abort the fn call early.
- initial <any> the initial value to use in the reduction.
- options <Object>
  - signal <AbortSignal> allows destroying the stream if the signal is aborted.
- Returns:<Promise> a promise for the final value of the reduction.
This method calls fn on each chunk of the stream in order, passing it the result from the calculation on the previous element. It returns a promise for the final value of the reduction.
If no initial value is supplied, the first chunk of the stream is used as the initial value. If the stream is empty, the promise is rejected with a TypeError with the ERR_INVALID_ARGS code property.
```js
import { Readable } from 'node:stream';
import { readdir, stat } from 'node:fs/promises';
import { join } from 'node:path';

const directoryPath = './src';
const filesInDir = await readdir(directoryPath);

const folderSize = await Readable.from(filesInDir)
  .reduce(async (totalSize, file) => {
    const { size } = await stat(join(directoryPath, file));
    return totalSize + size;
  }, 0);

console.log(folderSize);
```

The reducer function iterates the stream element-by-element which means that there is no concurrency parameter or parallelism. To perform a reduce concurrently, you can extract the async function to the readable.map method.
```js
import { Readable } from 'node:stream';
import { readdir, stat } from 'node:fs/promises';
import { join } from 'node:path';

const directoryPath = './src';
const filesInDir = await readdir(directoryPath);

const folderSize = await Readable.from(filesInDir)
  .map((file) => stat(join(directoryPath, file)), { concurrency: 2 })
  .reduce((totalSize, { size }) => totalSize + size, 0);

console.log(folderSize);
```

Duplex and transform streams#
Class:stream.Duplex#
History
| Version | Changes |
|---|---|
| v6.8.0 | Instances of |
| v0.9.4 | Added in: v0.9.4 |
Duplex streams are streams that implement both the Readable and Writable interfaces.
Examples of Duplex streams include:
duplex.allowHalfOpen#
- Type:<boolean>
If false then the stream will automatically end the writable side when the readable side ends. Set initially by the allowHalfOpen constructor option, which defaults to true.
This can be changed manually to change the half-open behavior of an existingDuplex stream instance, but must be changed before the'end' event isemitted.
Class: stream.Transform#

Transform streams are Duplex streams where the output is in some way related to the input. Like all Duplex streams, Transform streams implement both the Readable and Writable interfaces.

Examples of Transform streams include:
transform.destroy([error])#
History
| Version | Changes |
|---|---|
| v14.0.0 | Work as a no-op on a stream that has already been destroyed. |
| v8.0.0 | Added in: v8.0.0 |
Destroy the stream, and optionally emit an 'error' event. After this call, the transform stream will release any internal resources. Implementors should not override this method, but instead implement readable._destroy(). The default implementation of _destroy() for Transform also emits 'close' unless emitClose is set to false.

Once destroy() has been called, any further calls will be a no-op and no further errors except from _destroy() may be emitted as 'error'.
stream.duplexPair([options])#
- options <Object> A value to pass to both Duplex constructors, to set options such as buffering.
- Returns: <Array> of two Duplex instances.

The utility function duplexPair returns an Array with two items, each being a Duplex stream connected to the other side:
```js
const [sideA, sideB] = duplexPair();
```

Whatever is written to one stream is made readable on the other. It provides behavior analogous to a network connection, where the data written by the client becomes readable by the server, and vice-versa.
The Duplex streams are symmetrical; one or the other may be used without anydifference in behavior.
stream.finished(stream[, options], callback)#
History
| Version | Changes |
|---|---|
| v19.5.0 | Added support for |
| v15.11.0 | The |
| v14.0.0 | The |
| v14.0.0 | Emitting |
| v14.0.0 | Callback will be invoked on streams which have already finished before the call to |
| v10.0.0 | Added in: v10.0.0 |
- stream <Stream> | <ReadableStream> | <WritableStream> A readable and/or writable stream/webstream.
- options <Object>
  - error <boolean> If set to false, then a call to emit('error', err) is not treated as finished. Default: true.
  - readable <boolean> When set to false, the callback will be called when the stream ends even though the stream might still be readable. Default: true.
  - writable <boolean> When set to false, the callback will be called when the stream ends even though the stream might still be writable. Default: true.
  - signal <AbortSignal> allows aborting the wait for the stream finish. The underlying stream will not be aborted if the signal is aborted. The callback will get called with an AbortError. All registered listeners added by this function will also be removed.
- callback <Function> A callback function that takes an optional error argument.
- Returns: <Function> A cleanup function which removes all registered listeners.

A function to get notified when a stream is no longer readable or writable, or has experienced an error or a premature close event.
```js
const { finished } = require('node:stream');
const fs = require('node:fs');

const rs = fs.createReadStream('archive.tar');

finished(rs, (err) => {
  if (err) {
    console.error('Stream failed.', err);
  } else {
    console.log('Stream is done reading.');
  }
});

rs.resume(); // Drain the stream.
```

Especially useful in error handling scenarios where a stream is destroyed prematurely (like an aborted HTTP request), and will not emit 'end' or 'finish'.
The finished API provides a promise version.

stream.finished() leaves dangling event listeners (in particular 'error', 'end', 'finish' and 'close') after callback has been invoked. The reason for this is so that unexpected 'error' events (due to incorrect stream implementations) do not cause unexpected crashes. If this is unwanted behavior then the returned cleanup function needs to be invoked in the callback:
```js
const cleanup = finished(rs, (err) => {
  cleanup();
  // ...
});
```

stream.pipeline(source[, ...transforms], destination, callback)#
stream.pipeline(streams, callback)#
History
| Version | Changes |
|---|---|
| v19.7.0, v18.16.0 | Added support for webstreams. |
| v18.0.0 | Passing an invalid callback to the |
| v14.0.0 | The |
| v13.10.0 | Add support for async generators. |
| v10.0.0 | Added in: v10.0.0 |
- streams <Stream[]> | <Iterable[]> | <AsyncIterable[]> | <Function[]> | <ReadableStream[]> | <WritableStream[]> | <TransformStream[]>
- source <Stream> | <Iterable> | <AsyncIterable> | <Function> | <ReadableStream>
  - Returns: <Iterable> | <AsyncIterable>
- ...transforms <Stream> | <Function> | <TransformStream>
  - source <AsyncIterable>
  - Returns: <AsyncIterable>
- destination <Stream> | <Function> | <WritableStream>
  - source <AsyncIterable>
  - Returns: <AsyncIterable> | <Promise>
- callback <Function> Called when the pipeline is fully done.
  - err <Error>
  - val Resolved value of Promise returned by destination.
- Returns: <Stream>
A module method to pipe between streams and generators, forwarding errors, properly cleaning up, and providing a callback when the pipeline is complete.
```js
const { pipeline } = require('node:stream');
const fs = require('node:fs');
const zlib = require('node:zlib');

// Use the pipeline API to easily pipe a series of streams
// together and get notified when the pipeline is fully done.

// A pipeline to gzip a potentially huge tar file efficiently:

pipeline(
  fs.createReadStream('archive.tar'),
  zlib.createGzip(),
  fs.createWriteStream('archive.tar.gz'),
  (err) => {
    if (err) {
      console.error('Pipeline failed.', err);
    } else {
      console.log('Pipeline succeeded.');
    }
  },
);
```

The pipeline API provides a promise version.
stream.pipeline() will call stream.destroy(err) on all streams except:

- Readable streams which have emitted 'end' or 'close'.
- Writable streams which have emitted 'finish' or 'close'.

stream.pipeline() leaves dangling event listeners on the streams after the callback has been invoked. In the case of reuse of streams after failure, this can cause event listener leaks and swallowed errors. If the last stream is readable, dangling event listeners will be removed so that the last stream can be consumed later.

stream.pipeline() closes all the streams when an error is raised. Using pipeline with IncomingRequest could lead to unexpected behavior, since it would destroy the socket without sending the expected response. See the example below:
```js
const fs = require('node:fs');
const http = require('node:http');
const { pipeline } = require('node:stream');

const server = http.createServer((req, res) => {
  const fileStream = fs.createReadStream('./fileNotExist.txt');
  pipeline(fileStream, res, (err) => {
    if (err) {
      console.log(err); // No such file
      // this message can't be sent once `pipeline` already destroyed the socket
      return res.end('error!!!');
    }
  });
});
```

stream.compose(...streams)#
History
| Version | Changes |
|---|---|
| v21.1.0, v20.10.0 | Added support for stream class. |
| v19.8.0, v18.16.0 | Added support for webstreams. |
| v16.9.0 | Added in: v16.9.0 |
stream.compose is experimental.

- streams <Stream[]> | <Iterable[]> | <AsyncIterable[]> | <Function[]> | <ReadableStream[]> | <WritableStream[]> | <TransformStream[]> | <Duplex[]> | <Function>
- Returns: <stream.Duplex>
Combines two or more streams into a Duplex stream that writes to the first stream and reads from the last. Each provided stream is piped into the next, using stream.pipeline. If any of the streams error then all are destroyed, including the outer Duplex stream.

Because stream.compose returns a new stream that in turn can (and should) be piped into other streams, it enables composition. In contrast, when passing streams to stream.pipeline, typically the first stream is a readable stream and the last a writable stream, forming a closed circuit.

If passed a Function it must be a factory method taking a source Iterable.
```js
import { compose, Transform } from 'node:stream';

const removeSpaces = new Transform({
  transform(chunk, encoding, callback) {
    callback(null, String(chunk).replace(' ', ''));
  },
});

async function* toUpper(source) {
  for await (const chunk of source) {
    yield String(chunk).toUpperCase();
  }
}

let res = '';
for await (const buf of compose(removeSpaces, toUpper).end('hello world')) {
  res += buf;
}

console.log(res); // prints 'HELLOWORLD'
```

stream.compose can be used to convert async iterables, generators and functions into streams.
- AsyncIterable converts into a readable Duplex. Cannot yield null.
- AsyncGeneratorFunction converts into a readable/writable transform Duplex. Must take a source AsyncIterable as first parameter. Cannot yield null.
- AsyncFunction converts into a writable Duplex. Must return either null or undefined.
```js
import { compose } from 'node:stream';
import { finished } from 'node:stream/promises';

// Convert AsyncIterable into readable Duplex.
const s1 = compose(async function*() {
  yield 'Hello';
  yield 'World';
}());

// Convert AsyncGenerator into transform Duplex.
const s2 = compose(async function*(source) {
  for await (const chunk of source) {
    yield String(chunk).toUpperCase();
  }
});

let res = '';

// Convert AsyncFunction into writable Duplex.
const s3 = compose(async function(source) {
  for await (const chunk of source) {
    res += chunk;
  }
});

await finished(compose(s1, s2, s3));

console.log(res); // prints 'HELLOWORLD'
```

For convenience, the readable.compose(stream) method is available on <Readable> and <Duplex> streams as a wrapper for this function.
stream.isErrored(stream)#
History
| Version | Changes |
|---|---|
| v24.0.0, v22.17.0 | Marking the API stable. |
| v17.3.0, v16.14.0 | Added in: v17.3.0, v16.14.0 |
- stream <Readable> | <Writable> | <Duplex> | <WritableStream> | <ReadableStream>
- Returns: <boolean>
Returns whether the stream has encountered an error.
stream.isReadable(stream)#
History
| Version | Changes |
|---|---|
| v24.0.0, v22.17.0 | Marking the API stable. |
| v17.4.0, v16.14.0 | Added in: v17.4.0, v16.14.0 |
- stream <Readable> | <Duplex> | <ReadableStream>
- Returns: <boolean> | <null> Only returns null if stream is not a valid Readable, Duplex or ReadableStream.
Returns whether the stream is readable.
stream.isWritable(stream)#
- stream <Writable> | <Duplex> | <WritableStream>
- Returns: <boolean> | <null> Only returns null if stream is not a valid Writable, Duplex or WritableStream.
Returns whether the stream is writable.
stream.Readable.from(iterable[, options])#
- iterable <Iterable> Object implementing the Symbol.asyncIterator or Symbol.iterator iterable protocol. Emits an 'error' event if a null value is passed.
- options <Object> Options provided to new stream.Readable([options]). By default, Readable.from() will set options.objectMode to true, unless this is explicitly opted out by setting options.objectMode to false.
- Returns: <stream.Readable>
A utility method for creating readable streams out of iterators.
```js
const { Readable } = require('node:stream');

async function* generate() {
  yield 'hello';
  yield 'streams';
}

const readable = Readable.from(generate());

readable.on('data', (chunk) => {
  console.log(chunk);
});
```

Calling Readable.from(string) or Readable.from(buffer) will not have the strings or buffers be iterated to match the other streams semantics for performance reasons.
If an Iterable object containing promises is passed as an argument, it might result in unhandled rejection.
```js
const { Readable } = require('node:stream');

Readable.from([
  new Promise((resolve) => setTimeout(resolve('1'), 1500)),
  new Promise((_, reject) => setTimeout(reject(new Error('2')), 1000)), // Unhandled rejection
]);
```

stream.Readable.fromWeb(readableStream[, options])#
History
| Version | Changes |
|---|---|
| v24.0.0, v22.17.0 | Marking the API stable. |
| v17.0.0 | Added in: v17.0.0 |
- readableStream <ReadableStream>
- options <Object>
  - encoding <string>
  - highWaterMark <number>
  - objectMode <boolean>
  - signal <AbortSignal>
- Returns: <stream.Readable>
stream.Readable.isDisturbed(stream)#
History
| Version | Changes |
|---|---|
| v24.0.0, v22.17.0 | Marking the API stable. |
| v16.8.0 | Added in: v16.8.0 |
- stream <stream.Readable> | <ReadableStream>
- Returns: <boolean>
Returns whether the stream has been read from or cancelled.
stream.Readable.toWeb(streamReadable[, options])#
History
| Version | Changes |
|---|---|
| v25.4.0 | Add 'type' option to specify 'bytes'. |
| v24.0.0, v22.17.0 | Marking the API stable. |
| v18.7.0 | include strategy options on Readable. |
| v17.0.0 | Added in: v17.0.0 |
- streamReadable <stream.Readable>
- options <Object>
  - strategy <Object>
    - highWaterMark <number> The maximum internal queue size (of the created ReadableStream) before backpressure is applied in reading from the given stream.Readable. If no value is provided, it will be taken from the given stream.Readable.
    - size <Function> A function that returns the size of the given chunk of data. If no value is provided, the size will be 1 for all the chunks.
  - type <string> Must be 'bytes' or undefined.
- Returns: <ReadableStream>
stream.Writable.fromWeb(writableStream[, options])#
History
| Version | Changes |
|---|---|
| v24.0.0, v22.17.0 | Marking the API stable. |
| v17.0.0 | Added in: v17.0.0 |
- writableStream <WritableStream>
- options <Object>
  - decodeStrings <boolean>
  - highWaterMark <number>
  - objectMode <boolean>
  - signal <AbortSignal>
- Returns: <stream.Writable>
stream.Writable.toWeb(streamWritable)#
History
| Version | Changes |
|---|---|
| v24.0.0, v22.17.0 | Marking the API stable. |
| v17.0.0 | Added in: v17.0.0 |
- streamWritable <stream.Writable>
- Returns: <WritableStream>
stream.Duplex.from(src)#
History
| Version | Changes |
|---|---|
| v19.5.0, v18.17.0 | The |
| v16.8.0 | Added in: v16.8.0 |
- src <Stream> | <Blob> | <ArrayBuffer> | <string> | <Iterable> | <AsyncIterable> | <AsyncGeneratorFunction> | <AsyncFunction> | <Promise> | <Object> | <ReadableStream> | <WritableStream>
A utility method for creating duplex streams.
- Stream converts writable stream into writable Duplex and readable stream to Duplex.
- Blob converts into readable Duplex.
- string converts into readable Duplex.
- ArrayBuffer converts into readable Duplex.
- AsyncIterable converts into a readable Duplex. Cannot yield null.
- AsyncGeneratorFunction converts into a readable/writable transform Duplex. Must take a source AsyncIterable as first parameter. Cannot yield null.
- AsyncFunction converts into a writable Duplex. Must return either null or undefined.
- Object ({ writable, readable }) converts readable and writable into Stream and then combines them into Duplex where the Duplex will write to the writable and read from the readable.
- Promise converts into readable Duplex. Value null is ignored.
- ReadableStream converts into readable Duplex.
- WritableStream converts into writable Duplex.
- Returns: <stream.Duplex>

If an Iterable object containing promises is passed as an argument, it might result in unhandled rejection.
```js
const { Duplex } = require('node:stream');

Duplex.from([
  new Promise((resolve) => setTimeout(resolve('1'), 1500)),
  new Promise((_, reject) => setTimeout(reject(new Error('2')), 1000)), // Unhandled rejection
]);
```

stream.Duplex.fromWeb(pair[, options])#
History
| Version | Changes |
|---|---|
| v24.0.0, v22.17.0 | Marking the API stable. |
| v17.0.0 | Added in: v17.0.0 |
- pair <Object>
  - readable <ReadableStream>
  - writable <WritableStream>
- options <Object>
- Returns: <stream.Duplex>
```js
import { Duplex } from 'node:stream';
import {
  ReadableStream,
  WritableStream,
} from 'node:stream/web';

const readable = new ReadableStream({
  start(controller) {
    controller.enqueue('world');
  },
});

const writable = new WritableStream({
  write(chunk) {
    console.log('writable', chunk);
  },
});

const pair = {
  readable,
  writable,
};
const duplex = Duplex.fromWeb(pair, { encoding: 'utf8', objectMode: true });

duplex.write('hello');

for await (const chunk of duplex) {
  console.log('readable', chunk);
}
```

```js
const { Duplex } = require('node:stream');
const {
  ReadableStream,
  WritableStream,
} = require('node:stream/web');

const readable = new ReadableStream({
  start(controller) {
    controller.enqueue('world');
  },
});

const writable = new WritableStream({
  write(chunk) {
    console.log('writable', chunk);
  },
});

const pair = {
  readable,
  writable,
};
const duplex = Duplex.fromWeb(pair, { encoding: 'utf8', objectMode: true });

duplex.write('hello');
duplex.once('readable', () => console.log('readable', duplex.read()));
```
stream.Duplex.toWeb(streamDuplex[, options])#
History
| Version | Changes |
|---|---|
| v25.4.0 | Add 'type' option to specify 'bytes'. |
| v24.0.0, v22.17.0 | Marking the API stable. |
| v17.0.0 | Added in: v17.0.0 |
- streamDuplex <stream.Duplex>
- options <Object>
  - type <string> Must be 'bytes' or undefined.
- Returns: <Object>
  - readable <ReadableStream>
  - writable <WritableStream>
```js
import { Duplex } from 'node:stream';

const duplex = Duplex({
  objectMode: true,
  read() {
    this.push('world');
    this.push(null);
  },
  write(chunk, encoding, callback) {
    console.log('writable', chunk);
    callback();
  },
});

const { readable, writable } = Duplex.toWeb(duplex);
writable.getWriter().write('hello');

const { value } = await readable.getReader().read();
console.log('readable', value);
```

```js
const { Duplex } = require('node:stream');

const duplex = Duplex({
  objectMode: true,
  read() {
    this.push('world');
    this.push(null);
  },
  write(chunk, encoding, callback) {
    console.log('writable', chunk);
    callback();
  },
});

const { readable, writable } = Duplex.toWeb(duplex);
writable.getWriter().write('hello');

readable.getReader().read().then((result) => {
  console.log('readable', result.value);
});
```
stream.addAbortSignal(signal, stream)#
History
| Version | Changes |
|---|---|
| v19.7.0, v18.16.0 | Added support for |
| v15.4.0 | Added in: v15.4.0 |
- signal <AbortSignal> A signal representing possible cancellation
- stream <Stream> | <ReadableStream> | <WritableStream> A stream to attach a signal to.

Attaches an AbortSignal to a readable or writable stream. This lets code control stream destruction using an AbortController.

Calling abort on the AbortController corresponding to the passed AbortSignal will behave the same way as calling .destroy(new AbortError()) on the stream, and controller.error(new AbortError()) for webstreams.
```js
const fs = require('node:fs');

const controller = new AbortController();
const read = addAbortSignal(
  controller.signal,
  fs.createReadStream(('object.json')),
);
// Later, abort the operation closing the stream
controller.abort();
```

Or using an AbortSignal with a readable stream as an async iterable:
```js
const controller = new AbortController();
setTimeout(() => controller.abort(), 10_000); // set a timeout
const stream = addAbortSignal(
  controller.signal,
  fs.createReadStream(('object.json')),
);
(async () => {
  try {
    for await (const chunk of stream) {
      await process(chunk);
    }
  } catch (e) {
    if (e.name === 'AbortError') {
      // The operation was cancelled
    } else {
      throw e;
    }
  }
})();
```

Or using an AbortSignal with a ReadableStream:
```js
const controller = new AbortController();
const rs = new ReadableStream({
  start(controller) {
    controller.enqueue('hello');
    controller.enqueue('world');
    controller.close();
  },
});

addAbortSignal(controller.signal, rs);

finished(rs, (err) => {
  if (err) {
    if (err.name === 'AbortError') {
      // The operation was cancelled
    }
  }
});

const reader = rs.getReader();

reader.read().then(({ value, done }) => {
  console.log(value); // hello
  console.log(done); // false
  controller.abort();
});
```

stream.getDefaultHighWaterMark(objectMode)#
Returns the default highWaterMark used by streams. Defaults to 65536 (64 KiB), or 16 for objectMode.
API for stream implementers#
The node:stream module API has been designed to make it possible to easily implement streams using JavaScript's prototypal inheritance model.

First, a stream developer would declare a new JavaScript class that extends one of the four basic stream classes (stream.Writable, stream.Readable, stream.Duplex, or stream.Transform), making sure they call the appropriate parent class constructor:
```js
const { Writable } = require('node:stream');

class MyWritable extends Writable {
  constructor({ highWaterMark, ...options }) {
    super({ highWaterMark });
    // ...
  }
}
```

When extending streams, keep in mind what options the user can and should provide before forwarding these to the base constructor. For example, if the implementation makes assumptions in regard to the autoDestroy and emitClose options, do not allow the user to override these. Be explicit about what options are forwarded instead of implicitly forwarding all options.
The new stream class must then implement one or more specific methods, dependingon the type of stream being created, as detailed in the chart below:
| Use-case | Class | Method(s) to implement |
|---|---|---|
| Reading only | Readable | _read() |
| Writing only | Writable | _write(),_writev(),_final() |
| Reading and writing | Duplex | _read(),_write(),_writev(),_final() |
| Operate on written data, then read the result | Transform | _transform(),_flush(),_final() |
The implementation code for a stream should never call the "public" methods of a stream that are intended for use by consumers (as described in the API for stream consumers section). Doing so may lead to adverse side effects in application code consuming the stream.

Avoid overriding public methods such as write(), end(), cork(), uncork(), read() and destroy(), or emitting internal events such as 'error', 'data', 'end', 'finish' and 'close' through .emit(). Doing so can break current and future stream invariants, leading to behavior and/or compatibility issues with other streams, stream utilities, and user expectations.
Simplified construction#
For many simple cases, it is possible to create a stream without relying on inheritance. This can be accomplished by directly creating instances of the stream.Writable, stream.Readable, stream.Duplex, or stream.Transform objects and passing appropriate methods as constructor options.
```js
const { Writable } = require('node:stream');

const myWritable = new Writable({
  construct(callback) {
    // Initialize state and load resources...
  },
  write(chunk, encoding, callback) {
    // ...
  },
  destroy() {
    // Free resources...
  },
});
```

Implementing a writable stream#
The stream.Writable class is extended to implement a Writable stream.

Custom Writable streams must call the new stream.Writable([options]) constructor and implement the writable._write() and/or writable._writev() method.
new stream.Writable([options])#
History
| Version | Changes |
|---|---|
| v22.0.0 | bump default highWaterMark. |
| v15.5.0 | support passing in an AbortSignal. |
| v14.0.0 | Change |
| v11.2.0, v10.16.0 | Add |
| v10.0.0 | Add |
- options <Object>
  - highWaterMark <number> Buffer level when stream.write() starts returning false. Default: 65536 (64 KiB), or 16 for objectMode streams.
  - decodeStrings <boolean> Whether to encode strings passed to stream.write() to Buffers (with the encoding specified in the stream.write() call) before passing them to stream._write(). Other types of data are not converted (i.e. Buffers are not decoded into strings). Setting to false will prevent strings from being converted. Default: true.
  - defaultEncoding <string> The default encoding that is used when no encoding is specified as an argument to stream.write(). Default: 'utf8'.
  - objectMode <boolean> Whether or not stream.write(anyObj) is a valid operation. When set, it becomes possible to write JavaScript values other than string, <Buffer>, <TypedArray> or <DataView> if supported by the stream implementation. Default: false.
  - emitClose <boolean> Whether or not the stream should emit 'close' after it has been destroyed. Default: true.
  - write <Function> Implementation for the stream._write() method.
  - writev <Function> Implementation for the stream._writev() method.
  - destroy <Function> Implementation for the stream._destroy() method.
  - final <Function> Implementation for the stream._final() method.
  - construct <Function> Implementation for the stream._construct() method.
  - autoDestroy <boolean> Whether this stream should automatically call .destroy() on itself after ending. Default: true.
  - signal <AbortSignal> A signal representing possible cancellation.
```js
const { Writable } = require('node:stream');

class MyWritable extends Writable {
  constructor(options) {
    // Calls the stream.Writable() constructor.
    super(options);
    // ...
  }
}
```

Or, when using pre-ES6 style constructors:
```js
const { Writable } = require('node:stream');
const util = require('node:util');

function MyWritable(options) {
  if (!(this instanceof MyWritable))
    return new MyWritable(options);
  Writable.call(this, options);
}
util.inherits(MyWritable, Writable);
```

Or, using the simplified constructor approach:
```js
const { Writable } = require('node:stream');

const myWritable = new Writable({
  write(chunk, encoding, callback) {
    // ...
  },
  writev(chunks, callback) {
    // ...
  },
});
```

Calling abort on the AbortController corresponding to the passed AbortSignal will behave the same way as calling .destroy(new AbortError()) on the writable stream.
```js
const { Writable } = require('node:stream');

const controller = new AbortController();
const myWritable = new Writable({
  write(chunk, encoding, callback) {
    // ...
  },
  writev(chunks, callback) {
    // ...
  },
  signal: controller.signal,
});
// Later, abort the operation closing the stream
controller.abort();
```

writable._construct(callback)#
- callback <Function> Call this function (optionally with an error argument) when the stream has finished initializing.

The _construct() method MUST NOT be called directly. It may be implemented by child classes, and if so, will be called by the internal Writable class methods only.

This optional function will be called in a tick after the stream constructor has returned, delaying any _write(), _final() and _destroy() calls until callback is called. This is useful to initialize state or asynchronously initialize resources before the stream can be used.
```js
const { Writable } = require('node:stream');
const fs = require('node:fs');

class WriteStream extends Writable {
  constructor(filename) {
    super();
    this.filename = filename;
    this.fd = null;
  }
  _construct(callback) {
    fs.open(this.filename, 'w', (err, fd) => {
      if (err) {
        callback(err);
      } else {
        this.fd = fd;
        callback();
      }
    });
  }
  _write(chunk, encoding, callback) {
    fs.write(this.fd, chunk, callback);
  }
  _destroy(err, callback) {
    if (this.fd) {
      fs.close(this.fd, (er) => callback(er || err));
    } else {
      callback(err);
    }
  }
}
```

writable._write(chunk, encoding, callback)#
History
| Version | Changes |
|---|---|
| v12.11.0 | _write() is optional when providing _writev(). |
- chunk <Buffer> | <string> | <any> The Buffer to be written, converted from the string passed to stream.write(). If the stream's decodeStrings option is false or the stream is operating in object mode, the chunk will not be converted and will be whatever was passed to stream.write().
- encoding <string> If the chunk is a string, then encoding is the character encoding of that string. If chunk is a Buffer, or if the stream is operating in object mode, encoding may be ignored.
- callback <Function> Call this function (optionally with an error argument) when processing is complete for the supplied chunk.
All Writable stream implementations must provide a writable._write() and/or writable._writev() method to send data to the underlying resource.

Transform streams provide their own implementation of the writable._write().

This function MUST NOT be called by application code directly. It should be implemented by child classes, and called by the internal Writable class methods only.

The callback function must be called synchronously inside of writable._write() or asynchronously (i.e. on a different tick) to signal either that the write completed successfully or failed with an error. The first argument passed to the callback must be the Error object if the call failed or null if the write succeeded.

All calls to writable.write() that occur between the time writable._write() is called and the callback is called will cause the written data to be buffered. When the callback is invoked, the stream might emit a 'drain' event. If a stream implementation is capable of processing multiple chunks of data at once, the writable._writev() method should be implemented.

If the decodeStrings property is explicitly set to false in the constructor options, then chunk will remain the same object that is passed to .write(), and may be a string rather than a Buffer. This is to support implementations that have optimized handling for certain string data encodings. In that case, the encoding argument will indicate the character encoding of the string. Otherwise, the encoding argument can be safely ignored.

The writable._write() method is prefixed with an underscore because it is internal to the class that defines it, and should never be called directly by user programs.
writable._writev(chunks, callback)#
- chunks <Object[]> The data to be written. The value is an array of <Object> that each represent a discrete chunk of data to write. The properties of these objects are:
  - chunk <Buffer> | <string> A buffer instance or string containing the data to be written. The chunk will be a string if the Writable was created with the decodeStrings option set to false and a string was passed to write().
  - encoding <string> The character encoding of the chunk. If chunk is a Buffer, the encoding will be 'buffer'.
- callback <Function> A callback function (optionally with an error argument) to be invoked when processing is complete for the supplied chunks.
This function MUST NOT be called by application code directly. It should beimplemented by child classes, and called by the internalWritable classmethods only.
The writable._writev() method may be implemented in addition or alternatively to writable._write() in stream implementations that are capable of processing multiple chunks of data at once. If implemented, and if there is buffered data from previous writes, _writev() will be called instead of _write().

The writable._writev() method is prefixed with an underscore because it is internal to the class that defines it, and should never be called directly by user programs.
writable._destroy(err, callback)#
- err <Error> A possible error.
- callback <Function> A callback function that takes an optional error argument.

The _destroy() method is called by writable.destroy(). It can be overridden by child classes but it must not be called directly.
writable._final(callback)#
- callback <Function> Call this function (optionally with an error argument) when finished writing any remaining data.

The _final() method must not be called directly. It may be implemented by child classes, and if so, will be called by the internal Writable class methods only.

This optional function will be called before the stream closes, delaying the 'finish' event until callback is called. This is useful to close resources or write buffered data before a stream ends.
Errors while writing#
Errors occurring during the processing of the writable._write(), writable._writev() and writable._final() methods must be propagated by invoking the callback and passing the error as the first argument. Throwing an Error from within these methods or manually emitting an 'error' event results in undefined behavior.

If a Readable stream pipes into a Writable stream when Writable emits an error, the Readable stream will be unpiped.
```js
const { Writable } = require('node:stream');

const myWritable = new Writable({
  write(chunk, encoding, callback) {
    if (chunk.toString().indexOf('a') >= 0) {
      callback(new Error('chunk is invalid'));
    } else {
      callback();
    }
  },
});
```

An example writable stream#
The following illustrates a rather simplistic (and somewhat pointless) custom Writable stream implementation. While this specific Writable stream instance is not of any real particular usefulness, the example illustrates each of the required elements of a custom Writable stream instance:
```js
const { Writable } = require('node:stream');

class MyWritable extends Writable {
  _write(chunk, encoding, callback) {
    if (chunk.toString().indexOf('a') >= 0) {
      callback(new Error('chunk is invalid'));
    } else {
      callback();
    }
  }
}
```

Decoding buffers in a writable stream#
Decoding buffers is a common task, for instance, when using transformers whose input is a string. This is not a trivial process when using multi-byte character encodings, such as UTF-8. The following example shows how to decode multi-byte strings using StringDecoder and Writable.
```js
const { Writable } = require('node:stream');
const { StringDecoder } = require('node:string_decoder');

class StringWritable extends Writable {
  constructor(options) {
    super(options);
    this._decoder = new StringDecoder(options?.defaultEncoding);
    this.data = '';
  }
  _write(chunk, encoding, callback) {
    if (encoding === 'buffer') {
      chunk = this._decoder.write(chunk);
    }
    this.data += chunk;
    callback();
  }
  _final(callback) {
    this.data += this._decoder.end();
    callback();
  }
}

const euro = [[0xE2, 0x82], [0xAC]].map(Buffer.from);
const w = new StringWritable();

w.write('currency: ');
w.write(euro[0]);
w.end(euro[1]);

console.log(w.data); // currency: €
```

Implementing a readable stream#
The stream.Readable class is extended to implement a Readable stream.

Custom Readable streams must call the new stream.Readable([options]) constructor and implement the readable._read() method.
new stream.Readable([options])#
History
| Version | Changes |
|---|---|
| v22.0.0 | Bump default `highWaterMark`. |
| v15.5.0 | Support passing in an `AbortSignal`. |
| v14.0.0 | Change |
| v11.2.0, v10.16.0 | Add |
- `options` <Object>
  - `highWaterMark` <number> The maximum number of bytes to store in the internal buffer before ceasing to read from the underlying resource. **Default:** `65536` (64 KiB), or `16` for `objectMode` streams.
  - `encoding` <string> If specified, then buffers will be decoded to strings using the specified encoding. **Default:** `null`.
  - `objectMode` <boolean> Whether this stream should behave as a stream of objects. Meaning that `stream.read(n)` returns a single value instead of a `Buffer` of size `n`. **Default:** `false`.
  - `emitClose` <boolean> Whether or not the stream should emit `'close'` after it has been destroyed. **Default:** `true`.
  - `read` <Function> Implementation for the `stream._read()` method.
  - `destroy` <Function> Implementation for the `stream._destroy()` method.
  - `construct` <Function> Implementation for the `stream._construct()` method.
  - `autoDestroy` <boolean> Whether this stream should automatically call `.destroy()` on itself after ending. **Default:** `true`.
  - `signal` <AbortSignal> A signal representing possible cancellation.
```js
const { Readable } = require('node:stream');

class MyReadable extends Readable {
  constructor(options) {
    // Calls the stream.Readable(options) constructor.
    super(options);
    // ...
  }
}
```
Or, when using pre-ES6 style constructors:
```js
const { Readable } = require('node:stream');
const util = require('node:util');

function MyReadable(options) {
  if (!(this instanceof MyReadable))
    return new MyReadable(options);
  Readable.call(this, options);
}
util.inherits(MyReadable, Readable);
```
Or, using the simplified constructor approach:
```js
const { Readable } = require('node:stream');

const myReadable = new Readable({
  read(size) {
    // ...
  },
});
```
Calling `abort` on the `AbortController` corresponding to the passed `AbortSignal` will behave the same way as calling `.destroy(new AbortError())` on the readable created.
```js
const { Readable } = require('node:stream');
const controller = new AbortController();
const read = new Readable({
  read(size) {
    // ...
  },
  signal: controller.signal,
});
// Later, abort the operation closing the stream
controller.abort();
```
readable._construct(callback)#
- `callback` <Function> Call this function (optionally with an error argument) when the stream has finished initializing.

The `_construct()` method MUST NOT be called directly. It may be implemented by child classes, and if so, will be called by the internal `Readable` class methods only.
This optional function will be scheduled in the next tick by the stream constructor, delaying any `_read()` and `_destroy()` calls until `callback` is called. This is useful to initialize state or asynchronously initialize resources before the stream can be used.
```js
const { Readable } = require('node:stream');
const fs = require('node:fs');

class ReadStream extends Readable {
  constructor(filename) {
    super();
    this.filename = filename;
    this.fd = null;
  }
  _construct(callback) {
    fs.open(this.filename, (err, fd) => {
      if (err) {
        callback(err);
      } else {
        this.fd = fd;
        callback();
      }
    });
  }
  _read(n) {
    const buf = Buffer.alloc(n);
    fs.read(this.fd, buf, 0, n, null, (err, bytesRead) => {
      if (err) {
        this.destroy(err);
      } else {
        this.push(bytesRead > 0 ? buf.slice(0, bytesRead) : null);
      }
    });
  }
  _destroy(err, callback) {
    if (this.fd) {
      fs.close(this.fd, (er) => callback(er || err));
    } else {
      callback(err);
    }
  }
}
```
readable._read(size)#
- `size` <number> Number of bytes to read asynchronously.

This function MUST NOT be called by application code directly. It should be implemented by child classes, and called by the internal `Readable` class methods only.
All `Readable` stream implementations must provide an implementation of the `readable._read()` method to fetch data from the underlying resource.
When `readable._read()` is called, if data is available from the resource, the implementation should begin pushing that data into the read queue using the `this.push(dataChunk)` method. `_read()` will be called again after each call to `this.push(dataChunk)` once the stream is ready to accept more data. `_read()` may continue reading from the resource and pushing data until `readable.push()` returns `false`. Only when `_read()` is called again after it has stopped should it resume pushing additional data into the queue.
Once the `readable._read()` method has been called, it will not be called again until more data is pushed through the `readable.push()` method. Empty data such as empty buffers and strings will not cause `readable._read()` to be called.
The `size` argument is advisory. Implementations where a "read" is a single operation that returns data can use the `size` argument to determine how much data to fetch. Other implementations may ignore this argument and simply provide data whenever it becomes available. There is no need to "wait" until `size` bytes are available before calling `stream.push(chunk)`.
The `readable._read()` method is prefixed with an underscore because it is internal to the class that defines it, and should never be called directly by user programs.
readable._destroy(err, callback)#
- `err` <Error> A possible error.
- `callback` <Function> A callback function that takes an optional error argument.

The `_destroy()` method is called by `readable.destroy()`. It can be overridden by child classes but it **must not** be called directly.
readable.push(chunk[, encoding])#
History
| Version | Changes |
|---|---|
| v22.0.0, v20.13.0 | The |
| v8.0.0 | The |
- `chunk` <Buffer> | <TypedArray> | <DataView> | <string> | <null> | <any> Chunk of data to push into the read queue. For streams not operating in object mode, `chunk` must be a <string>, <Buffer>, <TypedArray> or <DataView>. For object mode streams, `chunk` may be any JavaScript value.
- `encoding` <string> Encoding of string chunks. Must be a valid `Buffer` encoding, such as `'utf8'` or `'ascii'`.
- Returns: <boolean> `true` if additional chunks of data may continue to be pushed; `false` otherwise.
When `chunk` is a <Buffer>, <TypedArray>, <DataView> or <string>, the `chunk` of data will be added to the internal queue for users of the stream to consume. Passing `chunk` as `null` signals the end of the stream (EOF), after which no more data can be written.
When the `Readable` is operating in paused mode, the data added with `readable.push()` can be read out by calling the `readable.read()` method when the `'readable'` event is emitted.
When the `Readable` is operating in flowing mode, the data added with `readable.push()` will be delivered by emitting a `'data'` event.
The `readable.push()` method is designed to be as flexible as possible. For example, when wrapping a lower-level source that provides some form of pause/resume mechanism, and a data callback, the low-level source can be wrapped by the custom `Readable` instance:
```js
const { Readable } = require('node:stream');

// `_source` is an object with readStop() and readStart() methods,
// and an `ondata` member that gets called when it has data, and
// an `onend` member that gets called when the data is over.

class SourceWrapper extends Readable {
  constructor(options) {
    super(options);

    this._source = getLowLevelSourceObject();

    // Every time there's data, push it into the internal buffer.
    this._source.ondata = (chunk) => {
      // If push() returns false, then stop reading from source.
      if (!this.push(chunk))
        this._source.readStop();
    };

    // When the source ends, push the EOF-signaling `null` chunk.
    this._source.onend = () => {
      this.push(null);
    };
  }
  // _read() will be called when the stream wants to pull more data in.
  // The advisory size argument is ignored in this case.
  _read(size) {
    this._source.readStart();
  }
}
```
The `readable.push()` method is used to push the content into the internal buffer. It can be driven by the `readable._read()` method.
For streams not operating in object mode, if the `chunk` parameter of `readable.push()` is `undefined`, it will be treated as an empty string or buffer. See `readable.push('')` for more information.
Errors while reading#
Errors occurring during processing of the `readable._read()` must be propagated through the `readable.destroy(err)` method. Throwing an `Error` from within `readable._read()` or manually emitting an `'error'` event results in undefined behavior.
```js
const { Readable } = require('node:stream');

const myReadable = new Readable({
  read(size) {
    const err = checkSomeErrorCondition();
    if (err) {
      this.destroy(err);
    } else {
      // Do some work.
    }
  },
});
```
An example counting stream#
The following is a basic example of a `Readable` stream that emits the numerals from 1 to 1,000,000 in ascending order, and then ends.
```js
const { Readable } = require('node:stream');

class Counter extends Readable {
  constructor(opt) {
    super(opt);
    this._max = 1000000;
    this._index = 1;
  }
  _read() {
    const i = this._index++;
    if (i > this._max)
      this.push(null);
    else {
      const str = String(i);
      const buf = Buffer.from(str, 'ascii');
      this.push(buf);
    }
  }
}
```
Implementing a duplex stream#
A `Duplex` stream is one that implements both `Readable` and `Writable`, such as a TCP socket connection.
Because JavaScript does not have support for multiple inheritance, the `stream.Duplex` class is extended to implement a `Duplex` stream (as opposed to extending the `stream.Readable` *and* `stream.Writable` classes).
The `stream.Duplex` class prototypically inherits from `stream.Readable` and parasitically from `stream.Writable`, but `instanceof` will work properly for both base classes due to overriding `Symbol.hasInstance` on `stream.Writable`.
Custom `Duplex` streams *must* call the `new stream.Duplex([options])` constructor and implement *both* the `readable._read()` and `writable._write()` methods.
new stream.Duplex(options)#
History
| Version | Changes |
|---|---|
| v8.4.0 | The |
- `options` <Object> Passed to both `Writable` and `Readable` constructors. Also has the following fields:
  - `allowHalfOpen` <boolean> If set to `false`, then the stream will automatically end the writable side when the readable side ends. **Default:** `true`.
  - `readable` <boolean> Sets whether the `Duplex` should be readable. **Default:** `true`.
  - `writable` <boolean> Sets whether the `Duplex` should be writable. **Default:** `true`.
  - `readableObjectMode` <boolean> Sets `objectMode` for the readable side of the stream. Has no effect if `objectMode` is `true`. **Default:** `false`.
  - `writableObjectMode` <boolean> Sets `objectMode` for the writable side of the stream. Has no effect if `objectMode` is `true`. **Default:** `false`.
  - `readableHighWaterMark` <number> Sets `highWaterMark` for the readable side of the stream. Has no effect if `highWaterMark` is provided.
  - `writableHighWaterMark` <number> Sets `highWaterMark` for the writable side of the stream. Has no effect if `highWaterMark` is provided.
```js
const { Duplex } = require('node:stream');

class MyDuplex extends Duplex {
  constructor(options) {
    super(options);
    // ...
  }
}
```
Or, when using pre-ES6 style constructors:
```js
const { Duplex } = require('node:stream');
const util = require('node:util');

function MyDuplex(options) {
  if (!(this instanceof MyDuplex))
    return new MyDuplex(options);
  Duplex.call(this, options);
}
util.inherits(MyDuplex, Duplex);
```
Or, using the simplified constructor approach:
```js
const { Duplex } = require('node:stream');

const myDuplex = new Duplex({
  read(size) {
    // ...
  },
  write(chunk, encoding, callback) {
    // ...
  },
});
```
When using pipeline:
```js
const { Transform, pipeline } = require('node:stream');
const fs = require('node:fs');

pipeline(
  fs.createReadStream('object.json')
    .setEncoding('utf8'),
  new Transform({
    decodeStrings: false, // Accept string input rather than Buffers
    construct(callback) {
      this.data = '';
      callback();
    },
    transform(chunk, encoding, callback) {
      this.data += chunk;
      callback();
    },
    flush(callback) {
      try {
        // Make sure is valid json.
        JSON.parse(this.data);
        this.push(this.data);
        callback();
      } catch (err) {
        callback(err);
      }
    },
  }),
  fs.createWriteStream('valid-object.json'),
  (err) => {
    if (err) {
      console.error('failed', err);
    } else {
      console.log('completed');
    }
  },
);
```
An example duplex stream#
The following illustrates a simple example of a `Duplex` stream that wraps a hypothetical lower-level source object to which data can be written, and from which data can be read, albeit using an API that is not compatible with Node.js streams.
```js
const { Duplex } = require('node:stream');

const kSource = Symbol('source');

class MyDuplex extends Duplex {
  constructor(source, options) {
    super(options);
    this[kSource] = source;
  }

  _write(chunk, encoding, callback) {
    // The underlying source only deals with strings.
    if (Buffer.isBuffer(chunk))
      chunk = chunk.toString();
    this[kSource].writeSomeData(chunk);
    callback();
  }

  _read(size) {
    this[kSource].fetchSomeData(size, (data, encoding) => {
      this.push(Buffer.from(data, encoding));
    });
  }
}
```
The most important aspect of a `Duplex` stream is that the `Readable` and `Writable` sides operate independently of one another despite co-existing within a single object instance.
Object mode duplex streams#
For `Duplex` streams, `objectMode` can be set exclusively for either the `Readable` or `Writable` side using the `readableObjectMode` and `writableObjectMode` options respectively.
In the following example, for instance, a new `Transform` stream (which is a type of `Duplex` stream) is created that has an object mode `Writable` side that accepts JavaScript numbers that are converted to hexadecimal strings on the `Readable` side.
```js
const { Transform } = require('node:stream');

// All Transform streams are also Duplex Streams.
const myTransform = new Transform({
  writableObjectMode: true,

  transform(chunk, encoding, callback) {
    // Coerce the chunk to a number if necessary.
    chunk |= 0;

    // Transform the chunk into something else.
    const data = chunk.toString(16);

    // Push the data onto the readable queue.
    callback(null, '0'.repeat(data.length % 2) + data);
  },
});

myTransform.setEncoding('ascii');
myTransform.on('data', (chunk) => console.log(chunk));

myTransform.write(1);
// Prints: 01
myTransform.write(10);
// Prints: 0a
myTransform.write(100);
// Prints: 64
```
Implementing a transform stream#
A `Transform` stream is a `Duplex` stream where the output is computed in some way from the input. Examples include `zlib` streams or `crypto` streams that compress, encrypt, or decrypt data.
There is no requirement that the output be the same size as the input, the same number of chunks, or arrive at the same time. For example, a `Hash` stream will only ever have a single chunk of output which is provided when the input is ended. A `zlib` stream will produce output that is either much smaller or much larger than its input.
The `stream.Transform` class is extended to implement a `Transform` stream.
The `stream.Transform` class prototypically inherits from `stream.Duplex` and implements its own versions of the `writable._write()` and `readable._read()` methods. Custom `Transform` implementations *must* implement the `transform._transform()` method and *may* also implement the `transform._flush()` method.
Care must be taken when using `Transform` streams in that data written to the stream can cause the `Writable` side of the stream to become paused if the output on the `Readable` side is not consumed.
new stream.Transform([options])#
- `options` <Object> Passed to both `Writable` and `Readable` constructors. Also has the following fields:
  - `transform` <Function> Implementation for the `stream._transform()` method.
  - `flush` <Function> Implementation for the `stream._flush()` method.
```js
const { Transform } = require('node:stream');

class MyTransform extends Transform {
  constructor(options) {
    super(options);
    // ...
  }
}
```
Or, when using pre-ES6 style constructors:
```js
const { Transform } = require('node:stream');
const util = require('node:util');

function MyTransform(options) {
  if (!(this instanceof MyTransform))
    return new MyTransform(options);
  Transform.call(this, options);
}
util.inherits(MyTransform, Transform);
```
Or, using the simplified constructor approach:
```js
const { Transform } = require('node:stream');

const myTransform = new Transform({
  transform(chunk, encoding, callback) {
    // ...
  },
});
```
Event: 'end'#
The `'end'` event is from the `stream.Readable` class. The `'end'` event is emitted after all data has been output, which occurs after the callback in `transform._flush()` has been called. In the case of an error, `'end'` should not be emitted.
Event: 'finish'#
The `'finish'` event is from the `stream.Writable` class. The `'finish'` event is emitted after `stream.end()` is called and all chunks have been processed by `stream._transform()`. In the case of an error, `'finish'` should not be emitted.
transform._flush(callback)#
- `callback` <Function> A callback function (optionally with an error argument and data) to be called when remaining data has been flushed.

This function MUST NOT be called by application code directly. It should be implemented by child classes, and called by the internal `Readable` class methods only.
In some cases, a transform operation may need to emit an additional bit of data at the end of the stream. For example, a `zlib` compression stream will store an amount of internal state used to optimally compress the output. When the stream ends, however, that additional data needs to be flushed so that the compressed data will be complete.
Custom `Transform` implementations *may* implement the `transform._flush()` method. This will be called when there is no more written data to be consumed, but before the `'end'` event is emitted signaling the end of the `Readable` stream.
Within the `transform._flush()` implementation, the `transform.push()` method may be called zero or more times, as appropriate. The `callback` function must be called when the flush operation is complete.
The `transform._flush()` method is prefixed with an underscore because it is internal to the class that defines it, and should never be called directly by user programs.
transform._transform(chunk, encoding, callback)#
- `chunk` <Buffer> | <string> | <any> The `Buffer` to be transformed, converted from the `string` passed to `stream.write()`. If the stream's `decodeStrings` option is `false` or the stream is operating in object mode, the chunk will not be converted & will be whatever was passed to `stream.write()`.
- `encoding` <string> If the chunk is a string, then this is the encoding type. If chunk is a buffer, then this is the special value `'buffer'`. Ignore it in that case.
- `callback` <Function> A callback function (optionally with an error argument and data) to be called after the supplied `chunk` has been processed.
This function MUST NOT be called by application code directly. It should be implemented by child classes, and called by the internal `Readable` class methods only.
All `Transform` stream implementations must provide a `_transform()` method to accept input and produce output. The `transform._transform()` implementation handles the bytes being written, computes an output, then passes that output off to the readable portion using the `transform.push()` method.
The `transform.push()` method may be called zero or more times to generate output from a single input chunk, depending on how much is to be output as a result of the chunk.
It is possible that no output is generated from any given chunk of input data.
The `callback` function must be called only when the current chunk is completely consumed. The first argument passed to the `callback` must be an `Error` object if an error occurred while processing the input or `null` otherwise. If a second argument is passed to the `callback`, it will be forwarded on to the `transform.push()` method, but only if the first argument is falsy. In other words, the following are equivalent:
```js
transform.prototype._transform = function(data, encoding, callback) {
  this.push(data);
  callback();
};

transform.prototype._transform = function(data, encoding, callback) {
  callback(null, data);
};
```
The `transform._transform()` method is prefixed with an underscore because it is internal to the class that defines it, and should never be called directly by user programs.
`transform._transform()` is never called in parallel; streams implement a queue mechanism, and to receive the next chunk, `callback` must be called, either synchronously or asynchronously.
Class: stream.PassThrough#
The `stream.PassThrough` class is a trivial implementation of a `Transform` stream that simply passes the input bytes across to the output. Its purpose is primarily for examples and testing, but there are some use cases where `stream.PassThrough` is useful as a building block for novel sorts of streams.
Additional notes#
Streams compatibility with async generators and async iterators#
With the support of async generators and iterators in JavaScript, async generators are effectively a first-class language-level stream construct at this point.
Some common interop cases of using Node.js streams with async generators and async iterators are provided below.
Consuming readable streams with async iterators#
```js
(async function() {
  for await (const chunk of readable) {
    console.log(chunk);
  }
})();
```
Async iterators register a permanent error handler on the stream to prevent any unhandled post-destroy errors.
Creating readable streams with async generators#
A Node.js readable stream can be created from an asynchronous generator using the `Readable.from()` utility method:
```js
const { Readable } = require('node:stream');

const ac = new AbortController();
const signal = ac.signal;

async function * generate() {
  yield 'a';
  await someLongRunningFn({ signal });
  yield 'b';
  yield 'c';
}

const readable = Readable.from(generate());
readable.on('close', () => {
  ac.abort();
});

readable.on('data', (chunk) => {
  console.log(chunk);
});
```
Piping to writable streams from async iterators#
When writing to a writable stream from an async iterator, ensure correct handling of backpressure and errors. `stream.pipeline()` abstracts away the handling of backpressure and backpressure-related errors:
```js
const fs = require('node:fs');
const { pipeline } = require('node:stream');
const { pipeline: pipelinePromise } = require('node:stream/promises');

const writable = fs.createWriteStream('./file');

const ac = new AbortController();
const signal = ac.signal;

const iterator = createIterator({ signal });

// Callback Pattern
pipeline(iterator, writable, (err, value) => {
  if (err) {
    console.error(err);
  } else {
    console.log(value, 'value returned');
  }
}).on('close', () => {
  ac.abort();
});

// Promise Pattern
pipelinePromise(iterator, writable)
  .then((value) => {
    console.log(value, 'value returned');
  })
  .catch((err) => {
    console.error(err);
    ac.abort();
  });
```
Compatibility with older Node.js versions#
Prior to Node.js 0.10, the `Readable` stream interface was simpler, but also less powerful and less useful.
- Rather than waiting for calls to the `stream.read()` method, `'data'` events would begin emitting immediately. Applications that would need to perform some amount of work to decide how to handle data were required to store read data into buffers so the data would not be lost.
- The `stream.pause()` method was advisory, rather than guaranteed. This meant that it was still necessary to be prepared to receive `'data'` events *even when the stream was in a paused state*.
In Node.js 0.10, the `Readable` class was added. For backward compatibility with older Node.js programs, `Readable` streams switch into "flowing mode" when a `'data'` event handler is added, or when the `stream.resume()` method is called. The effect is that, even when not using the new `stream.read()` method and `'readable'` event, it is no longer necessary to worry about losing `'data'` chunks.
While most applications will continue to function normally, this introduces an edge case in the following conditions:
- No `'data'` event listener is added.
- The `stream.resume()` method is never called.
- The stream is not piped to any writable destination.
For example, consider the following code:
```js
const net = require('node:net');

// WARNING!  BROKEN!
net.createServer((socket) => {

  // We add an 'end' listener, but never consume the data.
  socket.on('end', () => {
    // It will never get here.
    socket.end('The message was received but was not processed.\n');
  });

}).listen(1337);
```
Prior to Node.js 0.10, the incoming message data would be simply discarded. However, in Node.js 0.10 and beyond, the socket remains paused forever.
The workaround in this situation is to call the `stream.resume()` method to begin the flow of data:
```js
const net = require('node:net');

// Workaround.
net.createServer((socket) => {
  socket.on('end', () => {
    socket.end('The message was received but was not processed.\n');
  });

  // Start the flow of data, discarding it.
  socket.resume();
}).listen(1337);
```
In addition to new `Readable` streams switching into flowing mode, pre-0.10 style streams can be wrapped in a `Readable` class using the `readable.wrap()` method.
readable.read(0)#
There are some cases where it is necessary to trigger a refresh of the underlying readable stream mechanisms, without actually consuming any data. In such cases, it is possible to call `readable.read(0)`, which will always return `null`.
If the internal read buffer is below the `highWaterMark`, and the stream is not currently reading, then calling `stream.read(0)` will trigger a low-level `stream._read()` call.
While most applications will almost never need to do this, there are situations within Node.js where this is done, particularly in the `Readable` stream class internals.
readable.push('')#
Use of `readable.push('')` is not recommended.
Pushing a zero-byte <string>, <Buffer>, <TypedArray> or <DataView> to a stream that is not in object mode has an interesting side effect. Because it is a call to `readable.push()`, the call will end the reading process. However, because the argument is an empty string, no data is added to the readable buffer so there is nothing for a user to consume.
`highWaterMark` discrepancy after calling `readable.setEncoding()`#
The use of `readable.setEncoding()` will change the behavior of how the `highWaterMark` operates in non-object mode.
Typically, the size of the current buffer is measured against the `highWaterMark` in *bytes*. However, after `setEncoding()` is called, the comparison function will begin to measure the buffer's size in *characters*.
This is not a problem in common cases with `latin1` or `ascii`. But it is advised to be mindful about this behavior when working with strings that could contain multi-byte characters.
String decoder#
Source Code: lib/string_decoder.js
The `node:string_decoder` module provides an API for decoding `Buffer` objects into strings in a manner that preserves encoded multi-byte UTF-8 and UTF-16 characters. It can be accessed using:
```js
import { StringDecoder } from 'node:string_decoder';
```
```js
const { StringDecoder } = require('node:string_decoder');
```
The following example shows the basic use of theStringDecoder class.
```js
import { StringDecoder } from 'node:string_decoder';
import { Buffer } from 'node:buffer';

const decoder = new StringDecoder('utf8');

const cent = Buffer.from([0xC2, 0xA2]);
console.log(decoder.write(cent)); // Prints: ¢

const euro = Buffer.from([0xE2, 0x82, 0xAC]);
console.log(decoder.write(euro)); // Prints: €
```
```js
const { StringDecoder } = require('node:string_decoder');

const decoder = new StringDecoder('utf8');

const cent = Buffer.from([0xC2, 0xA2]);
console.log(decoder.write(cent)); // Prints: ¢

const euro = Buffer.from([0xE2, 0x82, 0xAC]);
console.log(decoder.write(euro)); // Prints: €
```
When a `Buffer` instance is written to the `StringDecoder` instance, an internal buffer is used to ensure that the decoded string does not contain any incomplete multibyte characters. These are held in the buffer until the next call to `stringDecoder.write()` or until `stringDecoder.end()` is called.
In the following example, the three UTF-8 encoded bytes of the European Euro symbol (€) are written over three separate operations:
```js
import { StringDecoder } from 'node:string_decoder';
import { Buffer } from 'node:buffer';

const decoder = new StringDecoder('utf8');

decoder.write(Buffer.from([0xE2]));
decoder.write(Buffer.from([0x82]));
console.log(decoder.end(Buffer.from([0xAC]))); // Prints: €
```
```js
const { StringDecoder } = require('node:string_decoder');

const decoder = new StringDecoder('utf8');

decoder.write(Buffer.from([0xE2]));
decoder.write(Buffer.from([0x82]));
console.log(decoder.end(Buffer.from([0xAC]))); // Prints: €
```
Class: StringDecoder#
stringDecoder.end([buffer])#
- `buffer` <string> | <Buffer> | <TypedArray> | <DataView> The bytes to decode.
- Returns: <string>

Returns any remaining input stored in the internal buffer as a string. Bytes representing incomplete UTF-8 and UTF-16 characters will be replaced with substitution characters appropriate for the character encoding.
If the `buffer` argument is provided, one final call to `stringDecoder.write()` is performed before returning the remaining input. After `end()` is called, the `stringDecoder` object can be reused for new input.
stringDecoder.write(buffer)#
History
| Version | Changes |
|---|---|
| v8.0.0 | Each invalid character is now replaced by a single replacement character instead of one for each individual byte. |
| v0.1.99 | Added in: v0.1.99 |
- `buffer` <string> | <Buffer> | <TypedArray> | <DataView> The bytes to decode.
- Returns: <string>

Returns a decoded string, ensuring that any incomplete multibyte characters at the end of the `Buffer`, `TypedArray`, or `DataView` are omitted from the returned string and stored in an internal buffer for the next call to `stringDecoder.write()` or `stringDecoder.end()`.
Test runner#
History
| Version | Changes |
|---|---|
| v20.0.0 | The test runner is now stable. |
| v18.0.0, v16.17.0 | Added in: v18.0.0, v16.17.0 |
Source Code: lib/test.js
The `node:test` module facilitates the creation of JavaScript tests. To access it:
```js
import test from 'node:test';
```
```js
const test = require('node:test');
```
This module is only available under the `node:` scheme.
Tests created via the `test` module consist of a single function that is processed in one of three ways:

- A synchronous function that is considered failing if it throws an exception, and is considered passing otherwise.
- A function that returns a `Promise` that is considered failing if the `Promise` rejects, and is considered passing if the `Promise` fulfills.
- A function that receives a callback function. If the callback receives any truthy value as its first argument, the test is considered failing. If a falsy value is passed as the first argument to the callback, the test is considered passing. If the test function receives a callback function and also returns a `Promise`, the test will fail.
The following example illustrates how tests are written using the `test` module.
```js
const test = require('node:test');
const assert = require('node:assert');

test('synchronous passing test', (t) => {
  // This test passes because it does not throw an exception.
  assert.strictEqual(1, 1);
});

test('synchronous failing test', (t) => {
  // This test fails because it throws an exception.
  assert.strictEqual(1, 2);
});

test('asynchronous passing test', async (t) => {
  // This test passes because the Promise returned by the async
  // function is settled and not rejected.
  assert.strictEqual(1, 1);
});

test('asynchronous failing test', async (t) => {
  // This test fails because the Promise returned by the async
  // function is rejected.
  assert.strictEqual(1, 2);
});

test('failing test using Promises', (t) => {
  // Promises can be used directly as well.
  return new Promise((resolve, reject) => {
    setImmediate(() => {
      reject(new Error('this will cause the test to fail'));
    });
  });
});

test('callback passing test', (t, done) => {
  // done() is the callback function. When the setImmediate() runs, it invokes
  // done() with no arguments.
  setImmediate(done);
});

test('callback failing test', (t, done) => {
  // When the setImmediate() runs, done() is invoked with an Error object and
  // the test fails.
  setImmediate(() => {
    done(new Error('callback failure'));
  });
});
```
If any tests fail, the process exit code is set to `1`.
Subtests#
The test context's `test()` method allows subtests to be created. It allows you to structure your tests in a hierarchical manner, where you can create nested tests within a larger test. This method behaves identically to the top level `test()` function. The following example demonstrates the creation of a top level test with two subtests.
```js
test('top level test', async (t) => {
  await t.test('subtest 1', (t) => {
    assert.strictEqual(1, 1);
  });

  await t.test('subtest 2', (t) => {
    assert.strictEqual(2, 2);
  });
});
```
Note: `beforeEach` and `afterEach` hooks are triggered between each subtest execution.
In this example, `await` is used to ensure that both subtests have completed. This is necessary because tests do not wait for their subtests to complete, unlike tests created within suites. Any subtests that are still outstanding when their parent finishes are cancelled and treated as failures. Any subtest failures cause the parent test to fail.
Skipping tests#
Individual tests can be skipped by passing the `skip` option to the test, or by calling the test context's `skip()` method as shown in the following example.
```js
// The skip option is used, but no message is provided.
test('skip option', { skip: true }, (t) => {
  // This code is never executed.
});

// The skip option is used, and a message is provided.
test('skip option with message', { skip: 'this is skipped' }, (t) => {
  // This code is never executed.
});

test('skip() method', (t) => {
  // Make sure to return here as well if the test contains additional logic.
  t.skip();
});

test('skip() method with message', (t) => {
  // Make sure to return here as well if the test contains additional logic.
  t.skip('this is skipped');
});
```
Rerunning failed tests#
The test runner supports persisting the state of the run to a file, allowing it to rerun failed tests without having to re-run the entire test suite. Use the `--test-rerun-failures` command-line option to specify a file path where the state of the run is stored. If the state file does not exist, the test runner will create it.

The state file is a JSON file that contains an array of run attempts. Each run attempt is an object mapping successful tests to the attempt in which they passed. The key identifying a test in this map is the test file path, with the line and column where the test is defined. When a test defined at a specific location is run multiple times, for example within a function or a loop, a counter is appended to the key to disambiguate the test runs. Note that changing the order of test execution or the location of a test can lead the test runner to consider tests as passed on a previous attempt, meaning `--test-rerun-failures` should only be used when tests run in a deterministic order.
Example of a state file:
```json
[
  {
    "test.js:10:5": { "passed_on_attempt": 0, "name": "test 1" }
  },
  {
    "test.js:10:5": { "passed_on_attempt": 0, "name": "test 1" },
    "test.js:20:5": { "passed_on_attempt": 1, "name": "test 2" }
  }
]
```

In this example, there are two run attempts, with two tests defined in `test.js`. The first test succeeded on the first attempt, and the second test succeeded on the second attempt.
When the `--test-rerun-failures` option is used, the test runner will only run tests that have not yet passed.
```bash
node --test-rerun-failures /path/to/state/file
```

TODO tests#
Individual tests can be marked as flaky or incomplete by passing the `todo` option to the test, or by calling the test context's `todo()` method, as shown in the following example. These tests represent a pending implementation or bug that needs to be fixed. TODO tests are executed, but are not treated as test failures, and therefore do not affect the process exit code. If a test is marked as both TODO and skipped, the TODO option is ignored.
```js
// The todo option is used, but no message is provided.
test('todo option', { todo: true }, (t) => {
  // This code is executed, but not treated as a failure.
  throw new Error('this does not fail the test');
});

// The todo option is used, and a message is provided.
test('todo option with message', { todo: 'this is a todo test' }, (t) => {
  // This code is executed.
});

test('todo() method', (t) => {
  t.todo();
});

test('todo() method with message', (t) => {
  t.todo('this is a todo test and is not treated as a failure');
  throw new Error('this does not fail the test');
});
```

Expecting tests to fail#
This flips the pass/fail reporting for a specific test or suite: a flagged test or test case must throw in order to "pass"; a test or test case that does not throw fails.
In the following, `doTheThing()` currently returns `false` (`false` does not equal `true`, causing `strictEqual` to throw, so the test case passes).
```js
it.expectFailure('should do the thing', () => {
  assert.strictEqual(doTheThing(), true);
});

it('should do the thing', { expectFailure: true }, () => {
  assert.strictEqual(doTheThing(), true);
});
```

`skip` and/or `todo` are mutually exclusive with `expectFailure`, and `skip` or `todo` will "win" when both are applied (`skip` wins against both, and `todo` wins against `expectFailure`).
These tests will be skipped (and not run):
```js
it.expectFailure('should do the thing', { skip: true }, () => {
  assert.strictEqual(doTheThing(), true);
});

it.skip('should do the thing', { expectFailure: true }, () => {
  assert.strictEqual(doTheThing(), true);
});
```

These tests will be marked "todo" (silencing errors):
```js
it.expectFailure('should do the thing', { todo: true }, () => {
  assert.strictEqual(doTheThing(), true);
});

it.todo('should do the thing', { expectFailure: true }, () => {
  assert.strictEqual(doTheThing(), true);
});
```

`describe()` and `it()` aliases#
Suites and tests can also be written using the `describe()` and `it()` functions. `describe()` is an alias for `suite()`, and `it()` is an alias for `test()`.
```js
describe('A thing', () => {
  it('should work', () => {
    assert.strictEqual(1, 1);
  });

  it('should be ok', () => {
    assert.strictEqual(2, 2);
  });

  describe('a nested thing', () => {
    it('should work', () => {
      assert.strictEqual(3, 3);
    });
  });
});
```

`describe()` and `it()` are imported from the `node:test` module.
```mjs
import { describe, it } from 'node:test';
```

```cjs
const { describe, it } = require('node:test');
```
only tests#
If Node.js is started with the `--test-only` command-line option, or test isolation is disabled, it is possible to skip all tests except for a selected subset by passing the `only` option to the tests that should run. When a test has the `only` option set, all subtests are also run. If a suite has the `only` option set, all tests within the suite are run, unless it has descendants with the `only` option set, in which case only those tests are run.
When using subtests within a `test()`/`it()`, it is required to mark all ancestor tests with the `only` option in order to run only a selected subset of tests.
The test context's `runOnly()` method can be used to implement the same behavior at the subtest level. Tests that are not executed are omitted from the test runner output.
```js
// Assume Node.js is run with the --test-only command-line option.
// The suite's 'only' option is set, so these tests are run.
test('this test is run', { only: true }, async (t) => {
  // Within this test, all subtests are run by default.
  await t.test('running subtest');

  // The test context can be updated to run subtests with the 'only' option.
  t.runOnly(true);
  await t.test('this subtest is now skipped');
  await t.test('this subtest is run', { only: true });

  // Switch the context back to execute all tests.
  t.runOnly(false);
  await t.test('this subtest is now run');

  // Explicitly do not run these tests.
  await t.test('skipped subtest 3', { only: false });
  await t.test('skipped subtest 4', { skip: true });
});

// The 'only' option is not set, so this test is skipped.
test('this test is not run', () => {
  // This code is not run.
  throw new Error('fail');
});

describe('a suite', () => {
  // The 'only' option is set, so this test is run.
  it('this test is run', { only: true }, () => {
    // This code is run.
  });

  it('this test is not run', () => {
    // This code is not run.
    throw new Error('fail');
  });
});

describe.only('a suite', () => {
  // The 'only' option is set, so this test is run.
  it('this test is run', () => {
    // This code is run.
  });

  it('this test is run', () => {
    // This code is run.
  });
});
```

Filtering tests by name#
The `--test-name-pattern` command-line option can be used to only run tests whose name matches the provided pattern, and the `--test-skip-pattern` option can be used to skip tests whose name matches the provided pattern. Test name patterns are interpreted as JavaScript regular expressions. The `--test-name-pattern` and `--test-skip-pattern` options can be specified multiple times in order to run nested tests. For each test that is executed, any corresponding test hooks, such as `beforeEach()`, are also run. Tests that are not executed are omitted from the test runner output.
Given the following test file, starting Node.js with the `--test-name-pattern="test [1-3]"` option would cause the test runner to execute `test 1`, `test 2`, and `test 3`. If `test 1` did not match the test name pattern, then its subtests would not execute, despite matching the pattern. The same set of tests could also be executed by passing `--test-name-pattern` multiple times (e.g. `--test-name-pattern="test 1"`, `--test-name-pattern="test 2"`, etc.).
```js
test('test 1', async (t) => {
  await t.test('test 2');
  await t.test('test 3');
});

test('Test 4', async (t) => {
  await t.test('Test 5');
  await t.test('test 6');
});
```

Test name patterns can also be specified using regular expression literals. This allows regular expression flags to be used. In the previous example, starting Node.js with `--test-name-pattern="/test [4-5]/i"` (or `--test-skip-pattern="/test [4-5]/i"`) would match `Test 4` and `Test 5` because the pattern is case-insensitive.
To match a single test with a pattern, you can prefix it with all its ancestor test names separated by spaces, to ensure it is unique. For example, given the following test file:
```js
describe('test 1', (t) => {
  it('some test');
});

describe('test 2', (t) => {
  it('some test');
});
```

Starting Node.js with `--test-name-pattern="test 1 some test"` would match only `some test` in `test 1`.
Test name patterns do not change the set of files that the test runner executes.
If both `--test-name-pattern` and `--test-skip-pattern` are supplied, tests must satisfy *both* requirements in order to be executed.
Extraneous asynchronous activity#
Once a test function finishes executing, the results are reported as quicklyas possible while maintaining the order of the tests. However, it is possiblefor the test function to generate asynchronous activity that outlives the testitself. The test runner handles this type of activity, but does not delay thereporting of test results in order to accommodate it.
In the following example, a test completes with two `setImmediate()` operations still outstanding. The first `setImmediate()` attempts to create a new subtest. Because the parent test has already finished and output its results, the new subtest is immediately marked as failed, and reported later to the `<TestsStream>`.
The second `setImmediate()` creates an `uncaughtException` event. `uncaughtException` and `unhandledRejection` events originating from a completed test are marked as failed by the `test` module and reported as diagnostic warnings at the top level by the `<TestsStream>`.
```js
test('a test that creates asynchronous activity', (t) => {
  setImmediate(() => {
    t.test('subtest that is created too late', (t) => {
      throw new Error('error1');
    });
  });

  setImmediate(() => {
    throw new Error('error2');
  });

  // The test finishes after this line.
});
```

Watch mode#
The Node.js test runner supports running in watch mode by passing the `--watch` flag:
```bash
node --test --watch
```

In watch mode, the test runner will watch for changes to test files and their dependencies. When a change is detected, the test runner will rerun the tests affected by the change. The test runner will continue to run until the process is terminated.
Global setup and teardown#
The test runner supports specifying a module that will be evaluated before all tests are executed and can be used to set up global state or fixtures for tests. This is useful for preparing resources or setting up shared state that is required by multiple tests.
This module can export any of the following:
- A `globalSetup` function which runs once before all tests start
- A `globalTeardown` function which runs once after all tests complete
The module is specified using the `--test-global-setup` flag when running tests from the command line.
```cjs
// setup-module.js
async function globalSetup() {
  // Setup shared resources, state, or environment
  console.log('Global setup executed');
  // Run servers, create files, prepare databases, etc.
}

async function globalTeardown() {
  // Clean up resources, state, or environment
  console.log('Global teardown executed');
  // Close servers, remove files, disconnect from databases, etc.
}

module.exports = { globalSetup, globalTeardown };
```

```mjs
// setup-module.mjs
export async function globalSetup() {
  // Setup shared resources, state, or environment
  console.log('Global setup executed');
  // Run servers, create files, prepare databases, etc.
}

export async function globalTeardown() {
  // Clean up resources, state, or environment
  console.log('Global teardown executed');
  // Close servers, remove files, disconnect from databases, etc.
}
```
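Assuming the setup module above is saved as `setup-module.js` (the file name is illustrative), it can be passed to the runner like so:

```bash
node --test --test-global-setup=./setup-module.js
```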
If the global setup function throws an error, no tests will be run and the process will exit with a non-zero exit code. The global teardown function will not be called in this case.
Running tests from the command line#
The Node.js test runner can be invoked from the command line by passing the `--test` flag:
```bash
node --test
```

By default, Node.js will run all files matching these patterns:
- `**/*.test.{cjs,mjs,js}`
- `**/*-test.{cjs,mjs,js}`
- `**/*_test.{cjs,mjs,js}`
- `**/test-*.{cjs,mjs,js}`
- `**/test.{cjs,mjs,js}`
- `**/test/**/*.{cjs,mjs,js}`
Unless `--no-strip-types` is supplied, the following additional patterns are also matched:
- `**/*.test.{cts,mts,ts}`
- `**/*-test.{cts,mts,ts}`
- `**/*_test.{cts,mts,ts}`
- `**/test-*.{cts,mts,ts}`
- `**/test.{cts,mts,ts}`
- `**/test/**/*.{cts,mts,ts}`
Alternatively, one or more glob patterns can be provided as the final argument(s) to the Node.js command, as shown below. Glob patterns follow the behavior of `glob(7)`. The glob patterns should be enclosed in double quotes on the command line to prevent shell expansion, which can reduce portability across systems.
```bash
node --test "**/*.test.js" "**/*.spec.js"
```

Matching files are executed as test files. More information on test file execution can be found in the test runner execution model section.
Test runner execution model#
When process-level test isolation is enabled, each matching test file is executed in a separate child process. The maximum number of child processes running at any time is controlled by the `--test-concurrency` flag. If the child process finishes with an exit code of 0, the test is considered passing. Otherwise, the test is considered to be a failure. Test files must be executable by Node.js, but are not required to use the `node:test` module internally.
Each test file is executed as if it were a regular script. That is, if the test file itself uses `node:test` to define tests, all of those tests will be executed within a single application thread, regardless of the value of the `concurrency` option of `test()`.
When process-level test isolation is disabled, each matching test file is imported into the test runner process. Once all test files have been loaded, the top-level tests are executed with a concurrency of one. Because the test files are all run within the same context, it is possible for tests to interact with each other in ways that are not possible when isolation is enabled. For example, if a test relies on global state, it is possible for that state to be modified by a test originating from another file.
Child process option inheritance#
When running tests in process isolation mode (the default), spawned child processes inherit Node.js options from the parent process, including those specified in configuration files. However, certain flags are filtered out to enable proper test runner functionality:
- `--test`: Prevented to avoid recursive test execution
- `--experimental-test-coverage`: Managed by the test runner
- `--watch`: Watch mode is handled at the parent level
- `--experimental-default-config-file`: Config file loading is handled by the parent
- `--test-reporter`: Reporting is managed by the parent process
- `--test-reporter-destination`: Output destinations are controlled by the parent
- `--experimental-config-file`: Config file paths are managed by the parent
All other Node.js options from command line arguments, environment variables,and configuration files are inherited by the child processes.
Collecting code coverage#
When Node.js is started with the `--experimental-test-coverage` command-line flag, code coverage is collected and statistics are reported once all tests have completed. If the `NODE_V8_COVERAGE` environment variable is used to specify a code coverage directory, the generated V8 coverage files are written to that directory. Node.js core modules and files within `node_modules/` directories are, by default, not included in the coverage report. However, they can be explicitly included via the `--test-coverage-include` flag. By default, all matching test files are excluded from the coverage report. Exclusions can be overridden by using the `--test-coverage-exclude` flag. If coverage is enabled, the coverage report is sent to any test reporters via the `'test:coverage'` event.
Coverage can be disabled on a series of lines using the followingcomment syntax:
```js
/* node:coverage disable */
if (anAlwaysFalseCondition) {
  // Code in this branch will never be executed, but the lines are ignored for
  // coverage purposes. All lines following the 'disable' comment are ignored
  // until a corresponding 'enable' comment is encountered.
  console.log('this is never executed');
}
/* node:coverage enable */
```

Coverage can also be disabled for a specified number of lines. After the specified number of lines, coverage will be automatically reenabled. If the number of lines is not explicitly provided, a single line is ignored.
```js
/* node:coverage ignore next */
if (anAlwaysFalseCondition) { console.log('this is never executed'); }

/* node:coverage ignore next 3 */
if (anAlwaysFalseCondition) {
  console.log('this is never executed');
}
```

Coverage reporters#
The `tap` and `spec` reporters will print a summary of the coverage statistics. There is also an `lcov` reporter that will generate an lcov file, which can be used as an in-depth coverage report.
```bash
node --test --experimental-test-coverage --test-reporter=lcov --test-reporter-destination=lcov.info
```

- No test results are reported by this reporter.
- This reporter should ideally be used alongside another reporter.
Mocking#
The `node:test` module supports mocking during testing via a top-level `mock` object. The following example creates a spy on a function that adds two numbers together. The spy is then used to assert that the function was called as expected.
```mjs
import assert from 'node:assert';
import { mock, test } from 'node:test';

test('spies on a function', () => {
  const sum = mock.fn((a, b) => {
    return a + b;
  });

  assert.strictEqual(sum.mock.callCount(), 0);
  assert.strictEqual(sum(3, 4), 7);
  assert.strictEqual(sum.mock.callCount(), 1);

  const call = sum.mock.calls[0];
  assert.deepStrictEqual(call.arguments, [3, 4]);
  assert.strictEqual(call.result, 7);
  assert.strictEqual(call.error, undefined);

  // Reset the globally tracked mocks.
  mock.reset();
});
```

```cjs
'use strict';
const assert = require('node:assert');
const { mock, test } = require('node:test');

test('spies on a function', () => {
  const sum = mock.fn((a, b) => {
    return a + b;
  });

  assert.strictEqual(sum.mock.callCount(), 0);
  assert.strictEqual(sum(3, 4), 7);
  assert.strictEqual(sum.mock.callCount(), 1);

  const call = sum.mock.calls[0];
  assert.deepStrictEqual(call.arguments, [3, 4]);
  assert.strictEqual(call.result, 7);
  assert.strictEqual(call.error, undefined);

  // Reset the globally tracked mocks.
  mock.reset();
});
```
The same mocking functionality is also exposed on the `TestContext` object of each test. The following example creates a spy on an object method using the API exposed on the `TestContext`. The benefit of mocking via the test context is that the test runner will automatically restore all mocked functionality once the test finishes.
```js
test('spies on an object method', (t) => {
  const number = {
    value: 5,
    add(a) {
      return this.value + a;
    },
  };

  t.mock.method(number, 'add');
  assert.strictEqual(number.add.mock.callCount(), 0);
  assert.strictEqual(number.add(3), 8);
  assert.strictEqual(number.add.mock.callCount(), 1);

  const call = number.add.mock.calls[0];
  assert.deepStrictEqual(call.arguments, [3]);
  assert.strictEqual(call.result, 8);
  assert.strictEqual(call.target, undefined);
  assert.strictEqual(call.this, number);
});
```

Timers#
Mocking timers is a technique commonly used in software testing to simulate and control the behavior of timers, such as `setInterval` and `setTimeout`, without actually waiting for the specified time intervals.

Refer to the `MockTimers` class for a full list of methods and features.

This allows developers to write more reliable and predictable tests for time-dependent functionality.

The example below shows how to mock `setTimeout`. Using `.enable({ apis: ['setTimeout'] });` will mock the `setTimeout` functions in the `node:timers` and `node:timers/promises` modules, as well as from the Node.js global context.

Note: Destructuring functions such as `import { setTimeout } from 'node:timers'` is currently not supported by this API.
```mjs
import assert from 'node:assert';
import { mock, test } from 'node:test';

test('mocks setTimeout to be executed synchronously without having to actually wait for it', () => {
  const fn = mock.fn();

  // Optionally choose what to mock
  mock.timers.enable({ apis: ['setTimeout'] });
  setTimeout(fn, 9999);
  assert.strictEqual(fn.mock.callCount(), 0);

  // Advance in time
  mock.timers.tick(9999);
  assert.strictEqual(fn.mock.callCount(), 1);

  // Reset the globally tracked mocks.
  mock.timers.reset();

  // If you call reset mock instance, it will also reset timers instance
  mock.reset();
});
```

```cjs
const assert = require('node:assert');
const { mock, test } = require('node:test');

test('mocks setTimeout to be executed synchronously without having to actually wait for it', () => {
  const fn = mock.fn();

  // Optionally choose what to mock
  mock.timers.enable({ apis: ['setTimeout'] });
  setTimeout(fn, 9999);
  assert.strictEqual(fn.mock.callCount(), 0);

  // Advance in time
  mock.timers.tick(9999);
  assert.strictEqual(fn.mock.callCount(), 1);

  // Reset the globally tracked mocks.
  mock.timers.reset();

  // If you call reset mock instance, it will also reset timers instance
  mock.reset();
});
```
The same mocking functionality is also exposed in the `mock` property on the `TestContext` object of each test. The benefit of mocking via the test context is that the test runner will automatically restore all mocked timers functionality once the test finishes.
```mjs
import assert from 'node:assert';
import { test } from 'node:test';

test('mocks setTimeout to be executed synchronously without having to actually wait for it', (context) => {
  const fn = context.mock.fn();

  // Optionally choose what to mock
  context.mock.timers.enable({ apis: ['setTimeout'] });
  setTimeout(fn, 9999);
  assert.strictEqual(fn.mock.callCount(), 0);

  // Advance in time
  context.mock.timers.tick(9999);
  assert.strictEqual(fn.mock.callCount(), 1);
});
```

```cjs
const assert = require('node:assert');
const { test } = require('node:test');

test('mocks setTimeout to be executed synchronously without having to actually wait for it', (context) => {
  const fn = context.mock.fn();

  // Optionally choose what to mock
  context.mock.timers.enable({ apis: ['setTimeout'] });
  setTimeout(fn, 9999);
  assert.strictEqual(fn.mock.callCount(), 0);

  // Advance in time
  context.mock.timers.tick(9999);
  assert.strictEqual(fn.mock.callCount(), 1);
});
```
Dates#
The mock timers API also allows the mocking of the `Date` object. This is a useful feature for testing time-dependent functionality, or to simulate internal calendar functions such as `Date.now()`.

The dates implementation is also part of the `MockTimers` class. Refer to it for a full list of methods and features.

Note: Dates and timers are dependent when mocked together. This means that if you have both the `Date` and `setTimeout` mocked, advancing the time will also advance the mocked date as they simulate a single internal clock.

The example below shows how to mock the `Date` object and obtain the current `Date.now()` value.
```mjs
import assert from 'node:assert';
import { test } from 'node:test';

test('mocks the Date object', (context) => {
  // Optionally choose what to mock
  context.mock.timers.enable({ apis: ['Date'] });

  // If not specified, the initial date will be based on 0 in the UNIX epoch
  assert.strictEqual(Date.now(), 0);

  // Advance in time will also advance the date
  context.mock.timers.tick(9999);
  assert.strictEqual(Date.now(), 9999);
});
```

```cjs
const assert = require('node:assert');
const { test } = require('node:test');

test('mocks the Date object', (context) => {
  // Optionally choose what to mock
  context.mock.timers.enable({ apis: ['Date'] });

  // If not specified, the initial date will be based on 0 in the UNIX epoch
  assert.strictEqual(Date.now(), 0);

  // Advance in time will also advance the date
  context.mock.timers.tick(9999);
  assert.strictEqual(Date.now(), 9999);
});
```
If there is no initial epoch set, the initial date will be based on 0 in the Unix epoch. This is January 1st, 1970, 00:00:00 UTC. You can set an initial date by passing a `now` property to the `.enable()` method. This value will be used as the initial date for the mocked `Date` object. It can either be a positive integer, or another Date object.
```mjs
import assert from 'node:assert';
import { test } from 'node:test';

test('mocks the Date object with initial time', (context) => {
  // Optionally choose what to mock
  context.mock.timers.enable({ apis: ['Date'], now: 100 });
  assert.strictEqual(Date.now(), 100);

  // Advance in time will also advance the date
  context.mock.timers.tick(200);
  assert.strictEqual(Date.now(), 300);
});
```

```cjs
const assert = require('node:assert');
const { test } = require('node:test');

test('mocks the Date object with initial time', (context) => {
  // Optionally choose what to mock
  context.mock.timers.enable({ apis: ['Date'], now: 100 });
  assert.strictEqual(Date.now(), 100);

  // Advance in time will also advance the date
  context.mock.timers.tick(200);
  assert.strictEqual(Date.now(), 300);
});
```
You can use the `.setTime()` method to manually move the mocked date to another time. This method only accepts a positive integer.

Note: This method will *not* execute any mocked timers that are in the past from the new time.

In the example below, we set a new time for the mocked date.
```mjs
import assert from 'node:assert';
import { test } from 'node:test';

test('sets the time of a date object', (context) => {
  // Optionally choose what to mock
  context.mock.timers.enable({ apis: ['Date'], now: 100 });
  assert.strictEqual(Date.now(), 100);

  // Advance in time will also advance the date
  context.mock.timers.setTime(1000);
  context.mock.timers.tick(200);
  assert.strictEqual(Date.now(), 1200);
});
```

```cjs
const assert = require('node:assert');
const { test } = require('node:test');

test('sets the time of a date object', (context) => {
  // Optionally choose what to mock
  context.mock.timers.enable({ apis: ['Date'], now: 100 });
  assert.strictEqual(Date.now(), 100);

  // Advance in time will also advance the date
  context.mock.timers.setTime(1000);
  context.mock.timers.tick(200);
  assert.strictEqual(Date.now(), 1200);
});
```
Timers scheduled in the past will *not* run when you call `setTime()`. To execute those timers, you can use the `.tick()` method to move forward from the new time.
```mjs
import assert from 'node:assert';
import { test } from 'node:test';

test('setTime does not execute timers', (context) => {
  // Optionally choose what to mock
  context.mock.timers.enable({ apis: ['setTimeout', 'Date'] });
  const fn = context.mock.fn();
  setTimeout(fn, 1000);
  context.mock.timers.setTime(800);

  // Timer is not executed as the time is not yet reached
  assert.strictEqual(fn.mock.callCount(), 0);
  assert.strictEqual(Date.now(), 800);

  context.mock.timers.setTime(1200);
  // Timer is still not executed
  assert.strictEqual(fn.mock.callCount(), 0);

  // Advance in time to execute the timer
  context.mock.timers.tick(0);
  assert.strictEqual(fn.mock.callCount(), 1);
  assert.strictEqual(Date.now(), 1200);
});
```

```cjs
const assert = require('node:assert');
const { test } = require('node:test');

test('setTime does not execute timers', (context) => {
  // Optionally choose what to mock
  context.mock.timers.enable({ apis: ['setTimeout', 'Date'] });
  const fn = context.mock.fn();
  setTimeout(fn, 1000);
  context.mock.timers.setTime(800);

  // Timer is not executed as the time is not yet reached
  assert.strictEqual(fn.mock.callCount(), 0);
  assert.strictEqual(Date.now(), 800);

  context.mock.timers.setTime(1200);
  // Timer is still not executed
  assert.strictEqual(fn.mock.callCount(), 0);

  // Advance in time to execute the timer
  context.mock.timers.tick(0);
  assert.strictEqual(fn.mock.callCount(), 1);
  assert.strictEqual(Date.now(), 1200);
});
```
Using `.runAll()` will execute all timers that are currently in the queue. This will also advance the mocked date to the time of the last timer that was executed, as if that time has passed.
```mjs
import assert from 'node:assert';
import { test } from 'node:test';

test('runs all timers in the queue', (context) => {
  // Optionally choose what to mock
  context.mock.timers.enable({ apis: ['setTimeout', 'Date'] });
  const fn = context.mock.fn();

  setTimeout(fn, 1000);
  setTimeout(fn, 2000);
  setTimeout(fn, 3000);

  context.mock.timers.runAll();

  // All timers are executed as the time is now reached
  assert.strictEqual(fn.mock.callCount(), 3);
  assert.strictEqual(Date.now(), 3000);
});
```

```cjs
const assert = require('node:assert');
const { test } = require('node:test');

test('runs all timers in the queue', (context) => {
  // Optionally choose what to mock
  context.mock.timers.enable({ apis: ['setTimeout', 'Date'] });
  const fn = context.mock.fn();

  setTimeout(fn, 1000);
  setTimeout(fn, 2000);
  setTimeout(fn, 3000);

  context.mock.timers.runAll();

  // All timers are executed as the time is now reached
  assert.strictEqual(fn.mock.callCount(), 3);
  assert.strictEqual(Date.now(), 3000);
});
```
Snapshot testing#
History
| Version | Changes |
|---|---|
| v23.4.0 | Snapshot testing is no longer experimental. |
| v22.3.0 | Added in: v22.3.0 |
Snapshot tests allow arbitrary values to be serialized into string values and compared against a set of known good values. The known good values are known as snapshots, and are stored in a snapshot file. Snapshot files are managed by the test runner, but are designed to be human readable to aid in debugging. Best practice is for snapshot files to be checked into source control along with your test files.
Snapshot files are generated by starting Node.js with the `--test-update-snapshots` command-line flag. A separate snapshot file is generated for each test file. By default, the snapshot file has the same name as the test file with a `.snapshot` file extension. This behavior can be configured using the `snapshot.setResolveSnapshotPath()` function. Each snapshot assertion corresponds to an export in the snapshot file.
An example snapshot test is shown below. The first time this test is executed,it will fail because the corresponding snapshot file does not exist.
```js
// test.js
suite('suite of snapshot tests', () => {
  test('snapshot test', (t) => {
    t.assert.snapshot({ value1: 1, value2: 2 });
    t.assert.snapshot(5);
  });
});
```

Generate the snapshot file by running the test file with `--test-update-snapshots`. The test should pass, and a file named `test.js.snapshot` is created in the same directory as the test file. The contents of the snapshot file are shown below. Each snapshot is identified by the full name of the test and a counter to differentiate between snapshots in the same test.
```js
exports[`suite of snapshot tests > snapshot test 1`] = `
{
  "value1": 1,
  "value2": 2
}
`;

exports[`suite of snapshot tests > snapshot test 2`] = `
5
`;
```

Once the snapshot file is created, run the tests again without the `--test-update-snapshots` flag. The tests should now pass.
Test reporters#
History
| Version | Changes |
|---|---|
| v23.0.0 | The default reporter on non-TTY stdout is changed from `tap` to `spec`. |
| v19.9.0, v18.17.0 | Reporters are now exposed at `node:test/reporters`. |
| v19.6.0, v18.15.0 | Added in: v19.6.0, v18.15.0 |
The `node:test` module supports passing `--test-reporter` flags for the test runner to use a specific reporter.
The following built-in reporters are supported:
- `spec`: The `spec` reporter outputs the test results in a human-readable format. This is the default reporter.
- `tap`: The `tap` reporter outputs the test results in the TAP format.
- `dot`: The `dot` reporter outputs the test results in a compact format, where each passing test is represented by a `.`, and each failing test is represented by an `X`.
- `junit`: The `junit` reporter outputs test results in a jUnit XML format.
- `lcov`: The `lcov` reporter outputs test coverage when used with the `--experimental-test-coverage` flag.
The exact output of these reporters is subject to change between versions ofNode.js, and should not be relied on programmatically. If programmatic accessto the test runner's output is required, use the events emitted by the<TestsStream>.
The reporters are available via the `node:test/reporters` module:
```mjs
import { tap, spec, dot, junit, lcov } from 'node:test/reporters';
```

```cjs
const { tap, spec, dot, junit, lcov } = require('node:test/reporters');
```
Custom reporters#
`--test-reporter` can be used to specify a path to a custom reporter. A custom reporter is a module that exports a value accepted by `stream.compose`. Reporters should transform events emitted by a `<TestsStream>`.
Example of a custom reporter using `<stream.Transform>`:
```mjs
import { Transform } from 'node:stream';

const customReporter = new Transform({
  writableObjectMode: true,
  transform(event, encoding, callback) {
    switch (event.type) {
      case 'test:dequeue':
        callback(null, `test ${event.data.name} dequeued`);
        break;
      case 'test:enqueue':
        callback(null, `test ${event.data.name} enqueued`);
        break;
      case 'test:watch:drained':
        callback(null, 'test watch queue drained');
        break;
      case 'test:watch:restarted':
        callback(null, 'test watch restarted due to file change');
        break;
      case 'test:start':
        callback(null, `test ${event.data.name} started`);
        break;
      case 'test:pass':
        callback(null, `test ${event.data.name} passed`);
        break;
      case 'test:fail':
        callback(null, `test ${event.data.name} failed`);
        break;
      case 'test:plan':
        callback(null, 'test plan');
        break;
      case 'test:diagnostic':
      case 'test:stderr':
      case 'test:stdout':
        callback(null, event.data.message);
        break;
      case 'test:coverage': {
        const { totalLineCount } = event.data.summary.totals;
        callback(null, `total line count: ${totalLineCount}\n`);
        break;
      }
    }
  },
});

export default customReporter;
```

```cjs
const { Transform } = require('node:stream');

const customReporter = new Transform({
  writableObjectMode: true,
  transform(event, encoding, callback) {
    switch (event.type) {
      case 'test:dequeue':
        callback(null, `test ${event.data.name} dequeued`);
        break;
      case 'test:enqueue':
        callback(null, `test ${event.data.name} enqueued`);
        break;
      case 'test:watch:drained':
        callback(null, 'test watch queue drained');
        break;
      case 'test:watch:restarted':
        callback(null, 'test watch restarted due to file change');
        break;
      case 'test:start':
        callback(null, `test ${event.data.name} started`);
        break;
      case 'test:pass':
        callback(null, `test ${event.data.name} passed`);
        break;
      case 'test:fail':
        callback(null, `test ${event.data.name} failed`);
        break;
      case 'test:plan':
        callback(null, 'test plan');
        break;
      case 'test:diagnostic':
      case 'test:stderr':
      case 'test:stdout':
        callback(null, event.data.message);
        break;
      case 'test:coverage': {
        const { totalLineCount } = event.data.summary.totals;
        callback(null, `total line count: ${totalLineCount}\n`);
        break;
      }
    }
  },
});

module.exports = customReporter;
```
Example of a custom reporter using a generator function:
```mjs
export default async function * customReporter(source) {
  for await (const event of source) {
    switch (event.type) {
      case 'test:dequeue':
        yield `test ${event.data.name} dequeued\n`;
        break;
      case 'test:enqueue':
        yield `test ${event.data.name} enqueued\n`;
        break;
      case 'test:watch:drained':
        yield 'test watch queue drained\n';
        break;
      case 'test:watch:restarted':
        yield 'test watch restarted due to file change\n';
        break;
      case 'test:start':
        yield `test ${event.data.name} started\n`;
        break;
      case 'test:pass':
        yield `test ${event.data.name} passed\n`;
        break;
      case 'test:fail':
        yield `test ${event.data.name} failed\n`;
        break;
      case 'test:plan':
        yield 'test plan\n';
        break;
      case 'test:diagnostic':
      case 'test:stderr':
      case 'test:stdout':
        yield `${event.data.message}\n`;
        break;
      case 'test:coverage': {
        const { totalLineCount } = event.data.summary.totals;
        yield `total line count: ${totalLineCount}\n`;
        break;
      }
    }
  }
}
```

```cjs
module.exports = async function * customReporter(source) {
  for await (const event of source) {
    switch (event.type) {
      case 'test:dequeue':
        yield `test ${event.data.name} dequeued\n`;
        break;
      case 'test:enqueue':
        yield `test ${event.data.name} enqueued\n`;
        break;
      case 'test:watch:drained':
        yield 'test watch queue drained\n';
        break;
      case 'test:watch:restarted':
        yield 'test watch restarted due to file change\n';
        break;
      case 'test:start':
        yield `test ${event.data.name} started\n`;
        break;
      case 'test:pass':
        yield `test ${event.data.name} passed\n`;
        break;
      case 'test:fail':
        yield `test ${event.data.name} failed\n`;
        break;
      case 'test:plan':
        yield 'test plan\n';
        break;
      case 'test:diagnostic':
      case 'test:stderr':
      case 'test:stdout':
        yield `${event.data.message}\n`;
        break;
      case 'test:coverage': {
        const { totalLineCount } = event.data.summary.totals;
        yield `total line count: ${totalLineCount}\n`;
        break;
      }
    }
  }
};
```
The value provided to `--test-reporter` should be a string like one used in an `import()` in JavaScript code, or a value provided for `--import`.
Multiple reporters#
The `--test-reporter` flag can be specified multiple times to report test results in several formats. In this situation it is required to specify a destination for each reporter using `--test-reporter-destination`. Destination can be `stdout`, `stderr`, or a file path. Reporters and destinations are paired according to the order they were specified.
In the following example, the `spec` reporter will output to `stdout`, and the `dot` reporter will output to `file.txt`:
```bash
node --test-reporter=spec --test-reporter=dot --test-reporter-destination=stdout --test-reporter-destination=file.txt
```

When a single reporter is specified, the destination will default to `stdout`, unless a destination is explicitly provided.
run([options])#
History
| Version | Changes |
|---|---|
| v25.6.0 | Add the |
| v24.7.0 | Added a rerunFailuresFilePath option. |
| v23.0.0 | Added the |
| v23.0.0, v22.10.0 | Added coverage options. |
| v22.8.0 | Added the |
| v22.6.0 | Added the |
| v22.0.0, v20.14.0 | Added the |
| v20.1.0, v18.17.0 | Add a testNamePatterns option. |
| v18.9.0, v16.19.0 | Added in: v18.9.0, v16.19.0 |
- `options` <Object> Configuration options for running tests. The following properties are supported:
  - `concurrency` <number> | <boolean> If a number is provided, then that many test processes would run in parallel, where each process corresponds to one test file. If `true`, it would run `os.availableParallelism() - 1` test files in parallel. If `false`, it would only run one test file at a time. **Default:** `false`.
  - `cwd` <string> Specifies the current working directory to be used by the test runner. Serves as the base path for resolving files as if running tests from the command line from that directory. **Default:** `process.cwd()`.
  - `files` <Array> An array containing the list of files to run. **Default:** Same as running tests from the command line.
  - `forceExit` <boolean> Configures the test runner to exit the process once all known tests have finished executing even if the event loop would otherwise remain active. **Default:** `false`.
  - `globPatterns` <Array> An array containing the list of glob patterns to match test files. This option cannot be used together with `files`. **Default:** Same as running tests from the command line.
  - `inspectPort` <number> | <Function> Sets inspector port of test child process. This can be a number, or a function that takes no arguments and returns a number. If a nullish value is provided, each process gets its own port, incremented from the primary's `process.debugPort`. This option is ignored if the `isolation` option is set to `'none'` as no child processes are spawned. **Default:** `undefined`.
  - `isolation` <string> Configures the type of test isolation. If set to `'process'`, each test file is run in a separate child process. If set to `'none'`, all test files run in the current process. **Default:** `'process'`.
  - `only` <boolean> If truthy, the test context will only run tests that have the `only` option set.
  - `setup` <Function> A function that accepts the `TestsStream` instance and can be used to set up listeners before any tests are run. **Default:** `undefined`.
  - `execArgv` <Array> An array of CLI flags to pass to the `node` executable when spawning the subprocesses. This option has no effect when `isolation` is `'none'`. **Default:** `[]`.
  - `argv` <Array> An array of CLI flags to pass to each test file when spawning the subprocesses. This option has no effect when `isolation` is `'none'`. **Default:** `[]`.
  - `signal` <AbortSignal> Allows aborting an in-progress test execution.
  - `testNamePatterns` <string> | <RegExp> | <Array> A String, RegExp, or RegExp Array that can be used to only run tests whose name matches the provided pattern. Test name patterns are interpreted as JavaScript regular expressions. For each test that is executed, any corresponding test hooks, such as `beforeEach()`, are also run. **Default:** `undefined`.
  - `testSkipPatterns` <string> | <RegExp> | <Array> A String, RegExp, or RegExp Array that can be used to exclude running tests whose name matches the provided pattern. Test name patterns are interpreted as JavaScript regular expressions. For each test that is executed, any corresponding test hooks, such as `beforeEach()`, are also run. **Default:** `undefined`.
  - `timeout` <number> A number of milliseconds the test execution will fail after. If unspecified, subtests inherit this value from their parent. **Default:** `Infinity`.
  - `watch` <boolean> Whether to run in watch mode or not. **Default:** `false`.
  - `shard` <Object> Running tests in a specific shard. **Default:** `undefined`.
  - `rerunFailuresFilePath` <string> A file path where the test runner will store the state of the tests to allow rerunning only the failed tests on a next run. See [Rerunning failed tests][] for more information. **Default:** `undefined`.
  - `coverage` <boolean> Enable code coverage collection. **Default:** `false`.
  - `coverageExcludeGlobs` <string> | <Array> Excludes specific files from code coverage using a glob pattern, which can match both absolute and relative file paths. This property is only applicable when `coverage` was set to `true`. If both `coverageExcludeGlobs` and `coverageIncludeGlobs` are provided, files must meet both criteria to be included in the coverage report. **Default:** `undefined`.
  - `coverageIncludeGlobs` <string> | <Array> Includes specific files in code coverage using a glob pattern, which can match both absolute and relative file paths. This property is only applicable when `coverage` was set to `true`. If both `coverageExcludeGlobs` and `coverageIncludeGlobs` are provided, files must meet both criteria to be included in the coverage report. **Default:** `undefined`.
  - `lineCoverage` <number> Require a minimum percent of covered lines. If code coverage does not reach the threshold specified, the process will exit with code `1`. **Default:** `0`.
  - `branchCoverage` <number> Require a minimum percent of covered branches. If code coverage does not reach the threshold specified, the process will exit with code `1`. **Default:** `0`.
  - `functionCoverage` <number> Require a minimum percent of covered functions. If code coverage does not reach the threshold specified, the process will exit with code `1`. **Default:** `0`.
  - `env` <Object> Specify environment variables to be passed along to the test process. This option is not compatible with `isolation: 'none'`. These variables will override those from the main process, and are not merged with `process.env`. **Default:** `process.env`.
- Returns:<TestsStream>
Note: `shard` is used to horizontally parallelize test running across machines or processes, ideal for large-scale executions across varied environments. It is incompatible with `watch` mode, which is tailored for rapid code iteration by automatically rerunning tests on file changes.
```mjs
import { tap } from 'node:test/reporters';
import { run } from 'node:test';
import process from 'node:process';
import path from 'node:path';

run({ files: [path.resolve('./tests/test.js')] })
  .on('test:fail', () => {
    process.exitCode = 1;
  })
  .compose(tap)
  .pipe(process.stdout);
```

```cjs
const { tap } = require('node:test/reporters');
const { run } = require('node:test');
const path = require('node:path');

run({ files: [path.resolve('./tests/test.js')] })
  .on('test:fail', () => {
    process.exitCode = 1;
  })
  .compose(tap)
  .pipe(process.stdout);
```
suite([name][, options][, fn])#
- `name` <string> The name of the suite, which is displayed when reporting test results. **Default:** The `name` property of `fn`, or `'<anonymous>'` if `fn` does not have a name.
- `options` <Object> Optional configuration options for the suite. This supports the same options as `test([name][, options][, fn])`.
- `fn` <Function> | <AsyncFunction> The suite function declaring nested tests and suites. The first argument to this function is a `SuiteContext` object. **Default:** A no-op function.
- Returns: <Promise> Immediately fulfilled with `undefined`.
The `suite()` function is imported from the `node:test` module.
suite.skip([name][, options][, fn])#
Shorthand for skipping a suite. This is the same as `suite([name], { skip: true }[, fn])`.
suite.todo([name][, options][, fn])#
Shorthand for marking a suite as `TODO`. This is the same as `suite([name], { todo: true }[, fn])`.
suite.only([name][, options][, fn])#
Shorthand for marking a suite as `only`. This is the same as `suite([name], { only: true }[, fn])`.
test([name][, options][, fn])#
History
| Version | Changes |
|---|---|
| v20.2.0, v18.17.0 | Added the |
| v18.8.0, v16.18.0 | Add a |
| v18.7.0, v16.17.0 | Add a |
| v18.0.0, v16.17.0 | Added in: v18.0.0, v16.17.0 |
- `name` <string> The name of the test, which is displayed when reporting test results. **Default:** The `name` property of `fn`, or `'<anonymous>'` if `fn` does not have a name.
- `options` <Object> Configuration options for the test. The following properties are supported:
  - `concurrency` <number> | <boolean> If a number is provided, then that many tests would run asynchronously (they are still managed by the single-threaded event loop). If `true`, all scheduled asynchronous tests run concurrently within the thread. If `false`, only one test runs at a time. If unspecified, subtests inherit this value from their parent. **Default:** `false`.
  - `only` <boolean> If truthy, and the test context is configured to run `only` tests, then this test will be run. Otherwise, the test is skipped. **Default:** `false`.
  - `signal` <AbortSignal> Allows aborting an in-progress test.
  - `skip` <boolean> | <string> If truthy, the test is skipped. If a string is provided, that string is displayed in the test results as the reason for skipping the test. **Default:** `false`.
  - `todo` <boolean> | <string> If truthy, the test is marked as `TODO`. If a string is provided, that string is displayed in the test results as the reason why the test is `TODO`. **Default:** `false`.
  - `timeout` <number> A number of milliseconds the test will fail after. If unspecified, subtests inherit this value from their parent. **Default:** `Infinity`.
  - `plan` <number> The number of assertions and subtests expected to be run in the test. If the number of assertions run in the test does not match the number specified in the plan, the test will fail. **Default:** `undefined`.
- `fn` <Function> | <AsyncFunction> The function under test. The first argument to this function is a `TestContext` object. If the test uses callbacks, the callback function is passed as the second argument. **Default:** A no-op function.
- Returns: <Promise> Fulfilled with `undefined` once the test completes, or immediately if the test runs within a suite.
The `test()` function is the value imported from the `test` module. Each invocation of this function results in reporting the test to the <TestsStream>.
The `TestContext` object passed to the `fn` argument can be used to perform actions related to the current test. Examples include skipping the test, adding additional diagnostic information, or creating subtests.
`test()` returns a `Promise` that fulfills once the test completes. If `test()` is called within a suite, it fulfills immediately. The return value can usually be discarded for top level tests. However, the return value from subtests should be used to prevent the parent test from finishing first and cancelling the subtest, as shown in the following example.
```js
test('top level test', async (t) => {
  // The setTimeout() in the following subtest would cause it to outlive its
  // parent test if 'await' is removed on the next line. Once the parent test
  // completes, it will cancel any outstanding subtests.
  await t.test('longer running subtest', async (t) => {
    return new Promise((resolve, reject) => {
      setTimeout(resolve, 1000);
    });
  });
});
```

The `timeout` option can be used to fail the test if it takes longer than `timeout` milliseconds to complete. However, it is not a reliable mechanism for canceling tests because a running test might block the application thread and thus prevent the scheduled cancellation.
test.skip([name][, options][, fn])#
Shorthand for skipping a test, same as `test([name], { skip: true }[, fn])`.
test.todo([name][, options][, fn])#
Shorthand for marking a test as `TODO`, same as `test([name], { todo: true }[, fn])`.
test.only([name][, options][, fn])#
Shorthand for marking a test as `only`, same as `test([name], { only: true }[, fn])`.
describe([name][, options][, fn])#
Alias for `suite()`.
The `describe()` function is imported from the `node:test` module.
describe.skip([name][, options][, fn])#
Shorthand for skipping a suite. This is the same as `describe([name], { skip: true }[, fn])`.
describe.todo([name][, options][, fn])#
Shorthand for marking a suite as `TODO`. This is the same as `describe([name], { todo: true }[, fn])`.
describe.only([name][, options][, fn])#
Shorthand for marking a suite as `only`. This is the same as `describe([name], { only: true }[, fn])`.
it([name][, options][, fn])#
History
| Version | Changes |
|---|---|
| v19.8.0, v18.16.0 | Calling |
| v18.6.0, v16.17.0 | Added in: v18.6.0, v16.17.0 |
Alias for `test()`.
The `it()` function is imported from the `node:test` module.
it.skip([name][, options][, fn])#
Shorthand for skipping a test, same as `it([name], { skip: true }[, fn])`.
it.todo([name][, options][, fn])#
Shorthand for marking a test as `TODO`, same as `it([name], { todo: true }[, fn])`.
it.only([name][, options][, fn])#
Shorthand for marking a test as `only`, same as `it([name], { only: true }[, fn])`.
before([fn][, options])#
- `fn` <Function> | <AsyncFunction> The hook function. If the hook uses callbacks, the callback function is passed as the second argument. **Default:** A no-op function.
- `options` <Object> Configuration options for the hook. The following properties are supported:
  - `signal` <AbortSignal> Allows aborting an in-progress hook.
  - `timeout` <number> A number of milliseconds the hook will fail after. If unspecified, subtests inherit this value from their parent. **Default:** `Infinity`.
This function creates a hook that runs before executing a suite.
```js
describe('tests', async () => {
  before(() => console.log('about to run some test'));
  it('is a subtest', () => {
    // Some relevant assertions here
  });
});
```

after([fn][, options])#
- `fn` <Function> | <AsyncFunction> The hook function. If the hook uses callbacks, the callback function is passed as the second argument. **Default:** A no-op function.
- `options` <Object> Configuration options for the hook. The following properties are supported:
  - `signal` <AbortSignal> Allows aborting an in-progress hook.
  - `timeout` <number> A number of milliseconds the hook will fail after. If unspecified, subtests inherit this value from their parent. **Default:** `Infinity`.
This function creates a hook that runs after executing a suite.
```js
describe('tests', async () => {
  after(() => console.log('finished running tests'));
  it('is a subtest', () => {
    // Some relevant assertion here
  });
});
```

Note: The `after` hook is guaranteed to run, even if tests within the suite fail.
beforeEach([fn][, options])#
- `fn` <Function> | <AsyncFunction> The hook function. If the hook uses callbacks, the callback function is passed as the second argument. **Default:** A no-op function.
- `options` <Object> Configuration options for the hook. The following properties are supported:
  - `signal` <AbortSignal> Allows aborting an in-progress hook.
  - `timeout` <number> A number of milliseconds the hook will fail after. If unspecified, subtests inherit this value from their parent. **Default:** `Infinity`.
This function creates a hook that runs before each test in the current suite.
```js
describe('tests', async () => {
  beforeEach(() => console.log('about to run a test'));
  it('is a subtest', () => {
    // Some relevant assertion here
  });
});
```

afterEach([fn][, options])#
- `fn` <Function> | <AsyncFunction> The hook function. If the hook uses callbacks, the callback function is passed as the second argument. **Default:** A no-op function.
- `options` <Object> Configuration options for the hook. The following properties are supported:
  - `signal` <AbortSignal> Allows aborting an in-progress hook.
  - `timeout` <number> A number of milliseconds the hook will fail after. If unspecified, subtests inherit this value from their parent. **Default:** `Infinity`.
This function creates a hook that runs after each test in the current suite. The `afterEach()` hook is run even if the test fails.
```js
describe('tests', async () => {
  afterEach(() => console.log('finished running a test'));
  it('is a subtest', () => {
    // Some relevant assertion here
  });
});
```

assert#
An object whose methods are used to configure available assertions on the `TestContext` objects in the current process. The methods from `node:assert` and snapshot testing functions are available by default.
It is possible to apply the same configuration to all files by placing common configuration code in a module preloaded with `--require` or `--import`.
assert.register(name, fn)#
Defines a new assertion function with the provided name and function. If anassertion already exists with the same name, it is overwritten.
snapshot#
An object whose methods are used to configure default snapshot settings in the current process. It is possible to apply the same configuration to all files by placing common configuration code in a module preloaded with `--require` or `--import`.
snapshot.setDefaultSnapshotSerializers(serializers)#
- `serializers` <Array> An array of synchronous functions used as the default serializers for snapshot tests.
This function is used to customize the default serialization mechanism used by the test runner. By default, the test runner performs serialization by calling `JSON.stringify(value, null, 2)` on the provided value. `JSON.stringify()` does have limitations regarding circular structures and supported data types. If a more robust serialization mechanism is required, this function should be used.
snapshot.setResolveSnapshotPath(fn)#
- `fn` <Function> A function used to compute the location of the snapshot file. The function receives the path of the test file as its only argument. If the test is not associated with a file (for example in the REPL), the input is `undefined`. `fn()` must return a string specifying the location of the snapshot file.
This function is used to customize the location of the snapshot file used for snapshot testing. By default, the snapshot filename is the same as the entry point filename with a `.snapshot` file extension.
Class:MockFunctionContext#
The `MockFunctionContext` class is used to inspect or manipulate the behavior of mocks created via the `MockTracker` APIs.
ctx.calls#
- Type:<Array>
A getter that returns a copy of the internal array used to track calls to themock. Each entry in the array is an object with the following properties.
- `arguments` <Array> An array of the arguments passed to the mock function.
- `error` <any> If the mocked function threw then this property contains the thrown value. **Default:** `undefined`.
- `result` <any> The value returned by the mocked function.
- `stack` <Error> An `Error` object whose stack can be used to determine the callsite of the mocked function invocation.
- `target` <Function> | <undefined> If the mocked function is a constructor, this field contains the class being constructed. Otherwise this will be `undefined`.
- `this` <any> The mocked function's `this` value.
ctx.callCount()#
- Returns:<integer> The number of times that this mock has been invoked.
This function returns the number of times that this mock has been invoked. This function is more efficient than checking `ctx.calls.length` because `ctx.calls` is a getter that creates a copy of the internal call tracking array.
ctx.mockImplementation(implementation)#
- `implementation` <Function> | <AsyncFunction> The function to be used as the mock's new implementation.
This function is used to change the behavior of an existing mock.
The following example creates a mock function using `t.mock.fn()`, calls the mock function, and then changes the mock implementation to a different function.
```js
test('changes a mock behavior', (t) => {
  let cnt = 0;

  function addOne() {
    cnt++;
    return cnt;
  }

  function addTwo() {
    cnt += 2;
    return cnt;
  }

  const fn = t.mock.fn(addOne);

  assert.strictEqual(fn(), 1);
  fn.mock.mockImplementation(addTwo);
  assert.strictEqual(fn(), 3);
  assert.strictEqual(fn(), 5);
});
```

ctx.mockImplementationOnce(implementation[, onCall])#
- `implementation` <Function> | <AsyncFunction> The function to be used as the mock's implementation for the invocation number specified by `onCall`.
- `onCall` <integer> The invocation number that will use `implementation`. If the specified invocation has already occurred then an exception is thrown. **Default:** The number of the next invocation.
This function is used to change the behavior of an existing mock for a single invocation. Once invocation `onCall` has occurred, the mock will revert to whatever behavior it would have used had `mockImplementationOnce()` not been called.
The following example creates a mock function using `t.mock.fn()`, calls the mock function, changes the mock implementation to a different function for the next invocation, and then resumes its previous behavior.
```js
test('changes a mock behavior once', (t) => {
  let cnt = 0;

  function addOne() {
    cnt++;
    return cnt;
  }

  function addTwo() {
    cnt += 2;
    return cnt;
  }

  const fn = t.mock.fn(addOne);

  assert.strictEqual(fn(), 1);
  fn.mock.mockImplementationOnce(addTwo);
  assert.strictEqual(fn(), 3);
  assert.strictEqual(fn(), 4);
});
```

ctx.restore()#
Resets the implementation of the mock function to its original behavior. Themock can still be used after calling this function.
Class:MockModuleContext#
The `MockModuleContext` class is used to manipulate the behavior of module mocks created via the `MockTracker` APIs.
Class:MockPropertyContext#
The `MockPropertyContext` class is used to inspect or manipulate the behavior of property mocks created via the `MockTracker` APIs.
ctx.accesses#
- Type:<Array>
A getter that returns a copy of the internal array used to track accesses (get/set) tothe mocked property. Each entry in the array is an object with the following properties:
ctx.accessCount()#
- Returns:<integer> The number of times that the property was accessed (read or written).
This function returns the number of times that the property was accessed. This function is more efficient than checking `ctx.accesses.length` because `ctx.accesses` is a getter that creates a copy of the internal access tracking array.
ctx.mockImplementation(value)#
- `value` <any> The new value to be set as the mocked property value.
This function is used to change the value returned by the mocked property getter.
ctx.mockImplementationOnce(value[, onAccess])#
- `value` <any> The value to be used as the mock's implementation for the invocation number specified by `onAccess`.
- `onAccess` <integer> The invocation number that will use `value`. If the specified invocation has already occurred then an exception is thrown. **Default:** The number of the next invocation.
This function is used to change the behavior of an existing mock for a single invocation. Once invocation `onAccess` has occurred, the mock will revert to whatever behavior it would have used had `mockImplementationOnce()` not been called.
The following example creates a mock property using `t.mock.property()`, accesses the mocked property, changes the mock implementation to a different value for the next access, and then resumes its previous behavior.
```js
test('changes a mock behavior once', (t) => {
  const obj = { foo: 1 };
  const prop = t.mock.property(obj, 'foo', 5);

  assert.strictEqual(obj.foo, 5);
  prop.mock.mockImplementationOnce(25);
  assert.strictEqual(obj.foo, 25);
  assert.strictEqual(obj.foo, 5);
});
```

Caveat#
For consistency with the rest of the mocking API, this function treats both property gets and setsas accesses. If a property set occurs at the same access index, the "once" value will be consumedby the set operation, and the mocked property value will be changed to the "once" value. This maylead to unexpected behavior if you intend the "once" value to only be used for a get operation.
ctx.resetAccesses()#
Resets the access history of the mocked property.
ctx.restore()#
Resets the implementation of the mock property to its original behavior. Themock can still be used after calling this function.
Class:MockTracker#
The `MockTracker` class is used to manage mocking functionality. The test runner module provides a top level `mock` export which is a `MockTracker` instance. Each test also provides its own `MockTracker` instance via the test context's `mock` property.
mock.fn([original[, implementation]][, options])#
- `original` <Function> | <AsyncFunction> An optional function to create a mock on. **Default:** A no-op function.
- `implementation` <Function> | <AsyncFunction> An optional function used as the mock implementation for `original`. This is useful for creating mocks that exhibit one behavior for a specified number of calls and then restore the behavior of `original`. **Default:** The function specified by `original`.
- `options` <Object> Optional configuration options for the mock function. The following properties are supported:
  - `times` <integer> The number of times that the mock will use the behavior of `implementation`. Once the mock function has been called `times` times, it will automatically restore the behavior of `original`. This value must be an integer greater than zero. **Default:** `Infinity`.
- Returns: <Proxy> The mocked function. The mocked function contains a special `mock` property, which is an instance of `MockFunctionContext`, and can be used for inspecting and changing the behavior of the mocked function.
This function is used to create a mock function.
The following example creates a mock function that increments a counter by one on each invocation. The `times` option is used to modify the mock behavior such that the first two invocations add two to the counter instead of one.
```js
test('mocks a counting function', (t) => {
  let cnt = 0;

  function addOne() {
    cnt++;
    return cnt;
  }

  function addTwo() {
    cnt += 2;
    return cnt;
  }

  const fn = t.mock.fn(addOne, addTwo, { times: 2 });

  assert.strictEqual(fn(), 2);
  assert.strictEqual(fn(), 4);
  assert.strictEqual(fn(), 5);
  assert.strictEqual(fn(), 6);
});
```

mock.getter(object, methodName[, implementation][, options])#
This function is syntax sugar for `MockTracker.method` with `options.getter` set to `true`.
mock.method(object, methodName[, implementation][, options])#
- `object` <Object> The object whose method is being mocked.
- `methodName` <string> | <symbol> The identifier of the method on `object` to mock. If `object[methodName]` is not a function, an error is thrown.
- `implementation` <Function> | <AsyncFunction> An optional function used as the mock implementation for `object[methodName]`. **Default:** The original method specified by `object[methodName]`.
- `options` <Object> Optional configuration options for the mock method. The following properties are supported:
  - `getter` <boolean> If `true`, `object[methodName]` is treated as a getter. This option cannot be used with the `setter` option. **Default:** `false`.
  - `setter` <boolean> If `true`, `object[methodName]` is treated as a setter. This option cannot be used with the `getter` option. **Default:** `false`.
  - `times` <integer> The number of times that the mock will use the behavior of `implementation`. Once the mocked method has been called `times` times, it will automatically restore the original behavior. This value must be an integer greater than zero. **Default:** `Infinity`.
- Returns: <Proxy> The mocked method. The mocked method contains a special `mock` property, which is an instance of `MockFunctionContext`, and can be used for inspecting and changing the behavior of the mocked method.
This function is used to create a mock on an existing object method. Thefollowing example demonstrates how a mock is created on an existing objectmethod.
```js
test('spies on an object method', (t) => {
  const number = {
    value: 5,
    subtract(a) {
      return this.value - a;
    },
  };

  t.mock.method(number, 'subtract');
  assert.strictEqual(number.subtract.mock.callCount(), 0);
  assert.strictEqual(number.subtract(3), 2);
  assert.strictEqual(number.subtract.mock.callCount(), 1);

  const call = number.subtract.mock.calls[0];

  assert.deepStrictEqual(call.arguments, [3]);
  assert.strictEqual(call.result, 2);
  assert.strictEqual(call.error, undefined);
  assert.strictEqual(call.target, undefined);
  assert.strictEqual(call.this, number);
});
```

mock.module(specifier[, options])#
History
| Version | Changes |
|---|---|
| v24.0.0, v22.17.0 | Support JSON modules. |
| v22.3.0, v20.18.0 | Added in: v22.3.0, v20.18.0 |
- `specifier` <string> | <URL> A string identifying the module to mock.
- `options` <Object> Optional configuration options for the mock module. The following properties are supported:
  - `cache` <boolean> If `false`, each call to `require()` or `import()` generates a new mock module. If `true`, subsequent calls will return the same module mock, and the mock module is inserted into the CommonJS cache. **Default:** `false`.
  - `defaultExport` <any> An optional value used as the mocked module's default export. If this value is not provided, ESM mocks do not include a default export. If the mock is a CommonJS or builtin module, this setting is used as the value of `module.exports`. If this value is not provided, CJS and builtin mocks use an empty object as the value of `module.exports`.
  - `namedExports` <Object> An optional object whose keys and values are used to create the named exports of the mock module. If the mock is a CommonJS or builtin module, these values are copied onto `module.exports`. Therefore, if a mock is created with both named exports and a non-object default export, the mock will throw an exception when used as a CJS or builtin module.
- Returns:<MockModuleContext> An object that can be used to manipulate the mock.
This function is used to mock the exports of ECMAScript modules, CommonJS modules, JSON modules, and Node.js builtin modules. Any references to the original module prior to mocking are not impacted. In order to enable module mocking, Node.js must be started with the `--experimental-test-module-mocks` command-line flag.
The following example demonstrates how a mock is created for a module.
```js
test('mocks a builtin module in both module systems', async (t) => {
  // Create a mock of 'node:readline' with a named export named 'fn', which
  // does not exist in the original 'node:readline' module.
  const mock = t.mock.module('node:readline', {
    namedExports: { fn() { return 42; } },
  });

  let esmImpl = await import('node:readline');
  let cjsImpl = require('node:readline');

  // cursorTo() is an export of the original 'node:readline' module.
  assert.strictEqual(esmImpl.cursorTo, undefined);
  assert.strictEqual(cjsImpl.cursorTo, undefined);
  assert.strictEqual(esmImpl.fn(), 42);
  assert.strictEqual(cjsImpl.fn(), 42);

  mock.restore();

  // The mock is restored, so the original builtin module is returned.
  esmImpl = await import('node:readline');
  cjsImpl = require('node:readline');

  assert.strictEqual(typeof esmImpl.cursorTo, 'function');
  assert.strictEqual(typeof cjsImpl.cursorTo, 'function');
  assert.strictEqual(esmImpl.fn, undefined);
  assert.strictEqual(cjsImpl.fn, undefined);
});
```

mock.property(object, propertyName[, value])#
- `object` <Object> The object whose value is being mocked.
- `propertyName` <string> | <symbol> The identifier of the property on `object` to mock.
- `value` <any> An optional value used as the mock value for `object[propertyName]`. **Default:** The original property value.
- Returns: <Proxy> A proxy to the mocked object. The mocked object contains a special `mock` property, which is an instance of `MockPropertyContext`, and can be used for inspecting and changing the behavior of the mocked property.
Creates a mock for a property value on an object. This allows you to track and control access to a specific property, including how many times it is read (getter) or written (setter), and to restore the original value after mocking.
```js
test('mocks a property value', (t) => {
  const obj = { foo: 42 };
  const prop = t.mock.property(obj, 'foo', 100);

  assert.strictEqual(obj.foo, 100);
  assert.strictEqual(prop.mock.accessCount(), 1);
  assert.strictEqual(prop.mock.accesses[0].type, 'get');
  assert.strictEqual(prop.mock.accesses[0].value, 100);

  obj.foo = 200;
  assert.strictEqual(prop.mock.accessCount(), 2);
  assert.strictEqual(prop.mock.accesses[1].type, 'set');
  assert.strictEqual(prop.mock.accesses[1].value, 200);

  prop.mock.restore();
  assert.strictEqual(obj.foo, 42);
});
```

mock.reset()#
This function restores the default behavior of all mocks that were previously created by this `MockTracker` and disassociates the mocks from the `MockTracker` instance. Once disassociated, the mocks can still be used, but the `MockTracker` instance can no longer be used to reset their behavior or otherwise interact with them.
After each test completes, this function is called on the test context's `MockTracker`. If the global `MockTracker` is used extensively, calling this function manually is recommended.
mock.restoreAll()#
This function restores the default behavior of all mocks that were previously created by this `MockTracker`. Unlike `mock.reset()`, `mock.restoreAll()` does not disassociate the mocks from the `MockTracker` instance.
mock.setter(object, methodName[, implementation][, options])#
This function is syntax sugar for `MockTracker.method` with `options.setter` set to `true`.
Class: MockTimers#
History
| Version | Changes |
|---|---|
| v23.1.0 | Mock Timers is now stable. |
| v20.4.0, v18.19.0 | Added in: v20.4.0, v18.19.0 |
Mocking timers is a technique commonly used in software testing to simulate and control the behavior of timers, such as `setInterval` and `setTimeout`, without actually waiting for the specified time intervals.
`MockTimers` is also able to mock the `Date` object.

The `MockTracker` provides a top-level `timers` export which is a `MockTimers` instance.
timers.enable([enableOptions])#
History
| Version | Changes |
|---|---|
| v21.2.0, v20.11.0 | Updated parameters to be an option object with available APIs and the default initial epoch. |
| v20.4.0, v18.19.0 | Added in: v20.4.0, v18.19.0 |
Enables timer mocking for the specified timers.
- `enableOptions` <Object> Optional configuration options for enabling timer mocking. The following properties are supported:
  - `apis` <Array> An optional array containing the timers to mock. The currently supported timer values are `'setInterval'`, `'setTimeout'`, `'setImmediate'`, and `'Date'`. **Default:** `['setInterval', 'setTimeout', 'setImmediate', 'Date']`. If no array is provided, all time related APIs (`'setInterval'`, `'clearInterval'`, `'setTimeout'`, `'clearTimeout'`, `'setImmediate'`, `'clearImmediate'`, and `'Date'`) will be mocked by default.
  - `now` <number> | <Date> An optional number or `Date` object representing the initial time (in milliseconds) to use as the value for `Date.now()`. **Default:** `0`.
Note: When you enable mocking for a specific timer, its associated clear function will also be implicitly mocked.

Note: Mocking `Date` will affect the behavior of the mocked timers as they use the same internal clock.
Example usage without setting initial time:
```mjs
import { mock } from 'node:test';
mock.timers.enable({ apis: ['setInterval'] });
```

```cjs
const { mock } = require('node:test');
mock.timers.enable({ apis: ['setInterval'] });
```
The above example enables mocking for the `setInterval` timer and implicitly mocks the `clearInterval` function. Only the `setInterval` and `clearInterval` functions from `node:timers`, `node:timers/promises`, and `globalThis` will be mocked.

Example usage with an initial time set:
```mjs
import { mock } from 'node:test';
mock.timers.enable({ apis: ['Date'], now: 1000 });
```

```cjs
const { mock } = require('node:test');
mock.timers.enable({ apis: ['Date'], now: 1000 });
```

Example usage with an initial `Date` object as the time:
```mjs
import { mock } from 'node:test';
mock.timers.enable({ apis: ['Date'], now: new Date() });
```

```cjs
const { mock } = require('node:test');
mock.timers.enable({ apis: ['Date'], now: new Date() });
```
Alternatively, if you call `mock.timers.enable()` without any parameters:

All timers (`'setInterval'`, `'clearInterval'`, `'setTimeout'`, `'clearTimeout'`, `'setImmediate'`, and `'clearImmediate'`) will be mocked. The `setInterval`, `clearInterval`, `setTimeout`, `clearTimeout`, `setImmediate`, and `clearImmediate` functions from `node:timers`, `node:timers/promises`, and `globalThis` will be mocked, as well as the global `Date` object.
timers.reset()#
This function restores the default behavior of all mocks that were previously created by this `MockTimers` instance and disassociates the mocks from the `MockTracker` instance.

Note: After each test completes, this function is called on the test context's `MockTracker`.
```mjs
import { mock } from 'node:test';
mock.timers.reset();
```

```cjs
const { mock } = require('node:test');
mock.timers.reset();
```
timers[Symbol.dispose]()#
Calls `timers.reset()`.
timers.tick([milliseconds])#
Advances time for all mocked timers.
- `milliseconds` <number> The amount of time, in milliseconds, to advance the timers. **Default:** `1`.

Note: This diverges from how `setTimeout` in Node.js behaves; `tick` accepts only positive numbers. In Node.js, `setTimeout` with negative numbers is only supported for web compatibility reasons.
The following example mocks a `setTimeout` function and, by using `.tick`, advances in time, triggering all pending timers.
```mjs
import assert from 'node:assert';
import { test } from 'node:test';

test('mocks setTimeout to be executed synchronously without having to actually wait for it', (context) => {
  const fn = context.mock.fn();

  context.mock.timers.enable({ apis: ['setTimeout'] });

  setTimeout(fn, 9999);

  assert.strictEqual(fn.mock.callCount(), 0);

  // Advance in time
  context.mock.timers.tick(9999);

  assert.strictEqual(fn.mock.callCount(), 1);
});
```

```cjs
const assert = require('node:assert');
const { test } = require('node:test');

test('mocks setTimeout to be executed synchronously without having to actually wait for it', (context) => {
  const fn = context.mock.fn();

  context.mock.timers.enable({ apis: ['setTimeout'] });

  setTimeout(fn, 9999);

  assert.strictEqual(fn.mock.callCount(), 0);

  // Advance in time
  context.mock.timers.tick(9999);

  assert.strictEqual(fn.mock.callCount(), 1);
});
```
Alternatively, the `.tick` function can be called many times:
```mjs
import assert from 'node:assert';
import { test } from 'node:test';

test('mocks setTimeout to be executed synchronously without having to actually wait for it', (context) => {
  const fn = context.mock.fn();
  context.mock.timers.enable({ apis: ['setTimeout'] });
  const nineSecs = 9000;
  setTimeout(fn, nineSecs);

  const threeSeconds = 3000;
  context.mock.timers.tick(threeSeconds);
  context.mock.timers.tick(threeSeconds);
  context.mock.timers.tick(threeSeconds);

  assert.strictEqual(fn.mock.callCount(), 1);
});
```

```cjs
const assert = require('node:assert');
const { test } = require('node:test');

test('mocks setTimeout to be executed synchronously without having to actually wait for it', (context) => {
  const fn = context.mock.fn();
  context.mock.timers.enable({ apis: ['setTimeout'] });
  const nineSecs = 9000;
  setTimeout(fn, nineSecs);

  const threeSeconds = 3000;
  context.mock.timers.tick(threeSeconds);
  context.mock.timers.tick(threeSeconds);
  context.mock.timers.tick(threeSeconds);

  assert.strictEqual(fn.mock.callCount(), 1);
});
```
Advancing time using `.tick` will also advance the time for any `Date` object created after the mock was enabled (if `Date` was also set to be mocked).
```mjs
import assert from 'node:assert';
import { test } from 'node:test';

test('mocks setTimeout to be executed synchronously without having to actually wait for it', (context) => {
  const fn = context.mock.fn();

  context.mock.timers.enable({ apis: ['setTimeout', 'Date'] });
  setTimeout(fn, 9999);

  assert.strictEqual(fn.mock.callCount(), 0);
  assert.strictEqual(Date.now(), 0);

  // Advance in time
  context.mock.timers.tick(9999);
  assert.strictEqual(fn.mock.callCount(), 1);
  assert.strictEqual(Date.now(), 9999);
});
```

```cjs
const assert = require('node:assert');
const { test } = require('node:test');

test('mocks setTimeout to be executed synchronously without having to actually wait for it', (context) => {
  const fn = context.mock.fn();

  context.mock.timers.enable({ apis: ['setTimeout', 'Date'] });
  setTimeout(fn, 9999);

  assert.strictEqual(fn.mock.callCount(), 0);
  assert.strictEqual(Date.now(), 0);

  // Advance in time
  context.mock.timers.tick(9999);
  assert.strictEqual(fn.mock.callCount(), 1);
  assert.strictEqual(Date.now(), 9999);
});
```
Using clear functions#
As mentioned, all clear functions from timers (`clearTimeout`, `clearInterval`, and `clearImmediate`) are implicitly mocked. Take a look at this example using `setTimeout`:
```mjs
import assert from 'node:assert';
import { test } from 'node:test';

test('mocks setTimeout to be executed synchronously without having to actually wait for it', (context) => {
  const fn = context.mock.fn();

  // Optionally choose what to mock
  context.mock.timers.enable({ apis: ['setTimeout'] });
  const id = setTimeout(fn, 9999);

  // Implicitly mocked as well
  clearTimeout(id);
  context.mock.timers.tick(9999);

  // As that setTimeout was cleared the mock function will never be called
  assert.strictEqual(fn.mock.callCount(), 0);
});
```

```cjs
const assert = require('node:assert');
const { test } = require('node:test');

test('mocks setTimeout to be executed synchronously without having to actually wait for it', (context) => {
  const fn = context.mock.fn();

  // Optionally choose what to mock
  context.mock.timers.enable({ apis: ['setTimeout'] });
  const id = setTimeout(fn, 9999);

  // Implicitly mocked as well
  clearTimeout(id);
  context.mock.timers.tick(9999);

  // As that setTimeout was cleared the mock function will never be called
  assert.strictEqual(fn.mock.callCount(), 0);
});
```
Working with Node.js timers modules#
Once you enable mocking timers, the `node:timers` and `node:timers/promises` modules, and timers from the Node.js global context, are mocked:

Note: Destructuring functions such as `import { setTimeout } from 'node:timers'` is currently not supported by this API.
```mjs
import assert from 'node:assert';
import { test } from 'node:test';
import nodeTimers from 'node:timers';
import nodeTimersPromises from 'node:timers/promises';

test('mocks setTimeout to be executed synchronously without having to actually wait for it', async (context) => {
  const globalTimeoutObjectSpy = context.mock.fn();
  const nodeTimerSpy = context.mock.fn();
  const nodeTimerPromiseSpy = context.mock.fn();

  // Optionally choose what to mock
  context.mock.timers.enable({ apis: ['setTimeout'] });
  setTimeout(globalTimeoutObjectSpy, 9999);
  nodeTimers.setTimeout(nodeTimerSpy, 9999);

  const promise = nodeTimersPromises.setTimeout(9999).then(nodeTimerPromiseSpy);

  // Advance in time
  context.mock.timers.tick(9999);
  assert.strictEqual(globalTimeoutObjectSpy.mock.callCount(), 1);
  assert.strictEqual(nodeTimerSpy.mock.callCount(), 1);
  await promise;
  assert.strictEqual(nodeTimerPromiseSpy.mock.callCount(), 1);
});
```

```cjs
const assert = require('node:assert');
const { test } = require('node:test');
const nodeTimers = require('node:timers');
const nodeTimersPromises = require('node:timers/promises');

test('mocks setTimeout to be executed synchronously without having to actually wait for it', async (context) => {
  const globalTimeoutObjectSpy = context.mock.fn();
  const nodeTimerSpy = context.mock.fn();
  const nodeTimerPromiseSpy = context.mock.fn();

  // Optionally choose what to mock
  context.mock.timers.enable({ apis: ['setTimeout'] });
  setTimeout(globalTimeoutObjectSpy, 9999);
  nodeTimers.setTimeout(nodeTimerSpy, 9999);

  const promise = nodeTimersPromises.setTimeout(9999).then(nodeTimerPromiseSpy);

  // Advance in time
  context.mock.timers.tick(9999);
  assert.strictEqual(globalTimeoutObjectSpy.mock.callCount(), 1);
  assert.strictEqual(nodeTimerSpy.mock.callCount(), 1);
  await promise;
  assert.strictEqual(nodeTimerPromiseSpy.mock.callCount(), 1);
});
```
In Node.js, `setInterval` from `node:timers/promises` is an `AsyncGenerator` and is also supported by this API:
```mjs
import assert from 'node:assert';
import { test } from 'node:test';
import nodeTimersPromises from 'node:timers/promises';

test('should tick three times testing a real use case', async (context) => {
  context.mock.timers.enable({ apis: ['setInterval'] });

  const expectedIterations = 3;
  const interval = 1000;
  const startedAt = Date.now();
  async function run() {
    const times = [];
    for await (const time of nodeTimersPromises.setInterval(interval, startedAt)) {
      times.push(time);
      if (times.length === expectedIterations) break;
    }
    return times;
  }

  const r = run();
  context.mock.timers.tick(interval);
  context.mock.timers.tick(interval);
  context.mock.timers.tick(interval);

  const timeResults = await r;
  assert.strictEqual(timeResults.length, expectedIterations);
  for (let it = 1; it < expectedIterations; it++) {
    assert.strictEqual(timeResults[it - 1], startedAt + (interval * it));
  }
});
```

```cjs
const assert = require('node:assert');
const { test } = require('node:test');
const nodeTimersPromises = require('node:timers/promises');

test('should tick three times testing a real use case', async (context) => {
  context.mock.timers.enable({ apis: ['setInterval'] });

  const expectedIterations = 3;
  const interval = 1000;
  const startedAt = Date.now();
  async function run() {
    const times = [];
    for await (const time of nodeTimersPromises.setInterval(interval, startedAt)) {
      times.push(time);
      if (times.length === expectedIterations) break;
    }
    return times;
  }

  const r = run();
  context.mock.timers.tick(interval);
  context.mock.timers.tick(interval);
  context.mock.timers.tick(interval);

  const timeResults = await r;
  assert.strictEqual(timeResults.length, expectedIterations);
  for (let it = 1; it < expectedIterations; it++) {
    assert.strictEqual(timeResults[it - 1], startedAt + (interval * it));
  }
});
```
timers.runAll()#
Triggers all pending mocked timers immediately. If the `Date` object is also mocked, it will also advance the `Date` object to the furthest timer's time.
The example below triggers all pending timers immediately,causing them to execute without any delay.
```mjs
import assert from 'node:assert';
import { test } from 'node:test';

test('runAll functions following the given order', (context) => {
  context.mock.timers.enable({ apis: ['setTimeout', 'Date'] });
  const results = [];
  setTimeout(() => results.push(1), 9999);

  // Notice that if both timers have the same timeout,
  // the order of execution is guaranteed
  setTimeout(() => results.push(3), 8888);
  setTimeout(() => results.push(2), 8888);

  assert.deepStrictEqual(results, []);

  context.mock.timers.runAll();
  assert.deepStrictEqual(results, [3, 2, 1]);
  // The Date object is also advanced to the furthest timer's time
  assert.strictEqual(Date.now(), 9999);
});
```

```cjs
const assert = require('node:assert');
const { test } = require('node:test');

test('runAll functions following the given order', (context) => {
  context.mock.timers.enable({ apis: ['setTimeout', 'Date'] });
  const results = [];
  setTimeout(() => results.push(1), 9999);

  // Notice that if both timers have the same timeout,
  // the order of execution is guaranteed
  setTimeout(() => results.push(3), 8888);
  setTimeout(() => results.push(2), 8888);

  assert.deepStrictEqual(results, []);

  context.mock.timers.runAll();
  assert.deepStrictEqual(results, [3, 2, 1]);
  // The Date object is also advanced to the furthest timer's time
  assert.strictEqual(Date.now(), 9999);
});
```
Note: The `runAll()` function is specifically designed for triggering timers in the context of timer mocking. It does not have any effect on real-time system clocks or actual timers outside of the mocking environment.
timers.setTime(milliseconds)#
Sets the current Unix timestamp that will be used as reference for any mocked `Date` objects.
```mjs
import assert from 'node:assert';
import { test } from 'node:test';

test('setTime replaces current time', (context) => {
  const now = Date.now();
  const setTime = 1000;
  // Date.now is not mocked
  assert.deepStrictEqual(Date.now(), now);

  context.mock.timers.enable({ apis: ['Date'] });
  context.mock.timers.setTime(setTime);
  // Date.now is now 1000
  assert.strictEqual(Date.now(), setTime);
});
```

```cjs
const assert = require('node:assert');
const { test } = require('node:test');

test('setTime replaces current time', (context) => {
  const now = Date.now();
  const setTime = 1000;
  // Date.now is not mocked
  assert.deepStrictEqual(Date.now(), now);

  context.mock.timers.enable({ apis: ['Date'] });
  context.mock.timers.setTime(setTime);
  // Date.now is now 1000
  assert.strictEqual(Date.now(), setTime);
});
```
Dates and Timers working together#
Dates and timer objects are dependent on each other. If you use `setTime()` to pass the current time to the mocked `Date` object, the set timers with `setTimeout` and `setInterval` will not be affected.

However, the `tick` method will advance the mocked `Date` object.
```mjs
import assert from 'node:assert';
import { test } from 'node:test';

test('setTime advances the date but does not tick timers', (context) => {
  context.mock.timers.enable({ apis: ['setTimeout', 'Date'] });
  const results = [];
  setTimeout(() => results.push(1), 9999);

  assert.deepStrictEqual(results, []);
  context.mock.timers.setTime(12000);
  assert.deepStrictEqual(results, []);
  // The date is advanced but the timers don't tick
  assert.strictEqual(Date.now(), 12000);
});
```

```cjs
const assert = require('node:assert');
const { test } = require('node:test');

test('setTime advances the date but does not tick timers', (context) => {
  context.mock.timers.enable({ apis: ['setTimeout', 'Date'] });
  const results = [];
  setTimeout(() => results.push(1), 9999);

  assert.deepStrictEqual(results, []);
  context.mock.timers.setTime(12000);
  assert.deepStrictEqual(results, []);
  // The date is advanced but the timers don't tick
  assert.strictEqual(Date.now(), 12000);
});
```
Class: TestsStream#
History
| Version | Changes |
|---|---|
| v20.0.0, v19.9.0, v18.17.0 | Added type to test:pass and test:fail events for when the test is a suite. |
| v18.9.0, v16.19.0 | Added in: v18.9.0, v16.19.0 |
- Extends: <Readable>

A successful call to the `run()` method will return a new <TestsStream> object, streaming a series of events representing the execution of the tests. Some of the events are guaranteed to be emitted in the same order as the tests are defined, while others are emitted in the order that the tests execute.
Event: 'test:coverage'#

- `data` <Object>
  - `summary` <Object> An object containing the coverage report.
    - `files` <Array> An array of coverage reports for individual files. Each report is an object with the following schema:
      - `path` <string> The absolute path of the file.
      - `totalLineCount` <number> The total number of lines.
      - `totalBranchCount` <number> The total number of branches.
      - `totalFunctionCount` <number> The total number of functions.
      - `coveredLineCount` <number> The number of covered lines.
      - `coveredBranchCount` <number> The number of covered branches.
      - `coveredFunctionCount` <number> The number of covered functions.
      - `coveredLinePercent` <number> The percentage of lines covered.
      - `coveredBranchPercent` <number> The percentage of branches covered.
      - `coveredFunctionPercent` <number> The percentage of functions covered.
      - `functions` <Array> An array of functions representing function coverage.
      - `branches` <Array> An array of branches representing branch coverage.
      - `lines` <Array> An array of lines representing line numbers and the number of times they were covered.
    - `thresholds` <Object> An object indicating whether the coverage threshold for each coverage type was met.
    - `totals` <Object> An object containing a summary of coverage for all files.
      - `totalLineCount` <number> The total number of lines.
      - `totalBranchCount` <number> The total number of branches.
      - `totalFunctionCount` <number> The total number of functions.
      - `coveredLineCount` <number> The number of covered lines.
      - `coveredBranchCount` <number> The number of covered branches.
      - `coveredFunctionCount` <number> The number of covered functions.
      - `coveredLinePercent` <number> The percentage of lines covered.
      - `coveredBranchPercent` <number> The percentage of branches covered.
      - `coveredFunctionPercent` <number> The percentage of functions covered.
    - `workingDirectory` <string> The working directory when code coverage began. This is useful for displaying relative path names in case the tests changed the working directory of the Node.js process.
  - `nesting` <number> The nesting level of the test.
Emitted when code coverage is enabled and all tests have completed.
Event: 'test:complete'#

- `data` <Object>
  - `column` <number> | <undefined> The column number where the test is defined, or `undefined` if the test was run through the REPL.
  - `details` <Object> Additional execution metadata.
    - `passed` <boolean> Whether the test passed or not.
    - `duration_ms` <number> The duration of the test in milliseconds.
    - `error` <Error> | <undefined> An error wrapping the error thrown by the test if it did not pass.
      - `cause` <Error> The actual error thrown by the test.
    - `type` <string> | <undefined> The type of the test, used to denote whether this is a suite.
  - `file` <string> | <undefined> The path of the test file, `undefined` if the test was run through the REPL.
  - `line` <number> | <undefined> The line number where the test is defined, or `undefined` if the test was run through the REPL.
  - `name` <string> The test name.
  - `nesting` <number> The nesting level of the test.
  - `testNumber` <number> The ordinal number of the test.
  - `todo` <string> | <boolean> | <undefined> Present if `context.todo` is called.
  - `skip` <string> | <boolean> | <undefined> Present if `context.skip` is called.
Emitted when a test completes its execution. This event is not emitted in the same order as the tests are defined. The corresponding declaration ordered events are `'test:pass'` and `'test:fail'`.
Event: 'test:dequeue'#

- `data` <Object>
  - `column` <number> | <undefined> The column number where the test is defined, or `undefined` if the test was run through the REPL.
  - `file` <string> | <undefined> The path of the test file, `undefined` if the test was run through the REPL.
  - `line` <number> | <undefined> The line number where the test is defined, or `undefined` if the test was run through the REPL.
  - `name` <string> The test name.
  - `nesting` <number> The nesting level of the test.
  - `type` <string> The test type. Either `'suite'` or `'test'`.

Emitted when a test is dequeued, right before it is executed. This event is not guaranteed to be emitted in the same order as the tests are defined. The corresponding declaration ordered event is `'test:start'`.
Event: 'test:diagnostic'#

- `data` <Object>
  - `column` <number> | <undefined> The column number where the test is defined, or `undefined` if the test was run through the REPL.
  - `file` <string> | <undefined> The path of the test file, `undefined` if the test was run through the REPL.
  - `line` <number> | <undefined> The line number where the test is defined, or `undefined` if the test was run through the REPL.
  - `message` <string> The diagnostic message.
  - `nesting` <number> The nesting level of the test.
  - `level` <string> The severity level of the diagnostic message. Possible values are:
    - `'info'`: Informational messages.
    - `'warn'`: Warnings.
    - `'error'`: Errors.

Emitted when `context.diagnostic` is called. This event is guaranteed to be emitted in the same order as the tests are defined.
Event: 'test:enqueue'#

- `data` <Object>
  - `column` <number> | <undefined> The column number where the test is defined, or `undefined` if the test was run through the REPL.
  - `file` <string> | <undefined> The path of the test file, `undefined` if the test was run through the REPL.
  - `line` <number> | <undefined> The line number where the test is defined, or `undefined` if the test was run through the REPL.
  - `name` <string> The test name.
  - `nesting` <number> The nesting level of the test.
  - `type` <string> The test type. Either `'suite'` or `'test'`.
Emitted when a test is enqueued for execution.
Event: 'test:fail'#

- `data` <Object>
  - `column` <number> | <undefined> The column number where the test is defined, or `undefined` if the test was run through the REPL.
  - `details` <Object> Additional execution metadata.
    - `duration_ms` <number> The duration of the test in milliseconds.
    - `error` <Error> An error wrapping the error thrown by the test.
      - `cause` <Error> The actual error thrown by the test.
    - `type` <string> | <undefined> The type of the test, used to denote whether this is a suite.
    - `attempt` <number> | <undefined> The attempt number of the test run, present only when using the `--test-rerun-failures` flag.
  - `file` <string> | <undefined> The path of the test file, `undefined` if the test was run through the REPL.
  - `line` <number> | <undefined> The line number where the test is defined, or `undefined` if the test was run through the REPL.
  - `name` <string> The test name.
  - `nesting` <number> The nesting level of the test.
  - `testNumber` <number> The ordinal number of the test.
  - `todo` <string> | <boolean> | <undefined> Present if `context.todo` is called.
  - `skip` <string> | <boolean> | <undefined> Present if `context.skip` is called.
Emitted when a test fails. This event is guaranteed to be emitted in the same order as the tests are defined. The corresponding execution ordered event is `'test:complete'`.
Event: 'test:pass'#

- `data` <Object>
  - `column` <number> | <undefined> The column number where the test is defined, or `undefined` if the test was run through the REPL.
  - `details` <Object> Additional execution metadata.
    - `duration_ms` <number> The duration of the test in milliseconds.
    - `type` <string> | <undefined> The type of the test, used to denote whether this is a suite.
    - `attempt` <number> | <undefined> The attempt number of the test run, present only when using the `--test-rerun-failures` flag.
    - `passed_on_attempt` <number> | <undefined> The attempt number the test passed on, present only when using the `--test-rerun-failures` flag.
  - `file` <string> | <undefined> The path of the test file, `undefined` if the test was run through the REPL.
  - `line` <number> | <undefined> The line number where the test is defined, or `undefined` if the test was run through the REPL.
  - `name` <string> The test name.
  - `nesting` <number> The nesting level of the test.
  - `testNumber` <number> The ordinal number of the test.
  - `todo` <string> | <boolean> | <undefined> Present if `context.todo` is called.
  - `skip` <string> | <boolean> | <undefined> Present if `context.skip` is called.
Emitted when a test passes. This event is guaranteed to be emitted in the same order as the tests are defined. The corresponding execution ordered event is `'test:complete'`.
Event: 'test:plan'#

- `data` <Object>
  - `column` <number> | <undefined> The column number where the test is defined, or `undefined` if the test was run through the REPL.
  - `file` <string> | <undefined> The path of the test file, `undefined` if the test was run through the REPL.
  - `line` <number> | <undefined> The line number where the test is defined, or `undefined` if the test was run through the REPL.
  - `nesting` <number> The nesting level of the test.
  - `count` <number> The number of subtests that have run.

Emitted when all subtests have completed for a given test. This event is guaranteed to be emitted in the same order as the tests are defined.
Event: 'test:start'#

- `data` <Object>
  - `column` <number> | <undefined> The column number where the test is defined, or `undefined` if the test was run through the REPL.
  - `file` <string> | <undefined> The path of the test file, `undefined` if the test was run through the REPL.
  - `line` <number> | <undefined> The line number where the test is defined, or `undefined` if the test was run through the REPL.
  - `name` <string> The test name.
  - `nesting` <number> The nesting level of the test.

Emitted when a test starts reporting its own and its subtests' status. This event is guaranteed to be emitted in the same order as the tests are defined. The corresponding execution ordered event is `'test:dequeue'`.
Event: 'test:stderr'#

Emitted when a running test writes to `stderr`. This event is only emitted if the `--test` flag is passed. This event is not guaranteed to be emitted in the same order as the tests are defined.
Event: 'test:stdout'#

Emitted when a running test writes to `stdout`. This event is only emitted if the `--test` flag is passed. This event is not guaranteed to be emitted in the same order as the tests are defined.
Event: 'test:summary'#

- `data` <Object>
  - `counts` <Object> An object containing the counts of various test results.
    - `cancelled` <number> The total number of cancelled tests.
    - `failed` <number> The total number of failed tests.
    - `passed` <number> The total number of passed tests.
    - `skipped` <number> The total number of skipped tests.
    - `suites` <number> The total number of suites run.
    - `tests` <number> The total number of tests run, excluding suites.
    - `todo` <number> The total number of TODO tests.
    - `topLevel` <number> The total number of top level tests and suites.
  - `duration_ms` <number> The duration of the test run in milliseconds.
  - `file` <string> | <undefined> The path of the test file that generated the summary. If the summary corresponds to multiple files, this value is `undefined`.
  - `success` <boolean> Indicates whether or not the test run is considered successful. If any error condition occurs, such as a failing test or an unmet coverage threshold, this value will be set to `false`.
Emitted when a test run completes. This event contains metrics pertaining to the completed test run, and is useful for determining if a test run passed or failed. If process-level test isolation is used, a `'test:summary'` event is generated for each test file in addition to a final cumulative summary.
Event: 'test:watch:drained'#
Emitted when no more tests are queued for execution in watch mode.
Event: 'test:watch:restarted'#
Emitted when one or more tests are restarted due to a file change in watch mode.
Class: TestContext#
History
| Version | Changes |
|---|---|
| v20.1.0, v18.17.0 | The `before` function was added to TestContext. |
| v18.0.0, v16.17.0 | Added in: v18.0.0, v16.17.0 |
An instance of `TestContext` is passed to each test function in order to interact with the test runner. However, the `TestContext` constructor is not exposed as part of the API.
context.before([fn][, options])#
- `fn` <Function> | <AsyncFunction> The hook function. The first argument to this function is a `TestContext` object. If the hook uses callbacks, the callback function is passed as the second argument. **Default:** A no-op function.
- `options` <Object> Configuration options for the hook. The following properties are supported:
  - `signal` <AbortSignal> Allows aborting an in-progress hook.
  - `timeout` <number> A number of milliseconds the hook will fail after. If unspecified, subtests inherit this value from their parent. **Default:** `Infinity`.

This function is used to create a hook that runs before the subtests of the current test.
context.beforeEach([fn][, options])#
- `fn` <Function> | <AsyncFunction> The hook function. The first argument to this function is a `TestContext` object. If the hook uses callbacks, the callback function is passed as the second argument. **Default:** A no-op function.
- `options` <Object> Configuration options for the hook. The following properties are supported:
  - `signal` <AbortSignal> Allows aborting an in-progress hook.
  - `timeout` <number> A number of milliseconds the hook will fail after. If unspecified, subtests inherit this value from their parent. **Default:** `Infinity`.

This function is used to create a hook running before each subtest of the current test.
```js
test('top level test', async (t) => {
  t.beforeEach((t) => t.diagnostic(`about to run ${t.name}`));
  await t.test(
    'This is a subtest',
    (t) => {
      // Some relevant assertion here
    },
  );
});
```

context.after([fn][, options])#
- `fn` <Function> | <AsyncFunction> The hook function. The first argument to this function is a `TestContext` object. If the hook uses callbacks, the callback function is passed as the second argument. **Default:** A no-op function.
- `options` <Object> Configuration options for the hook. The following properties are supported:
  - `signal` <AbortSignal> Allows aborting an in-progress hook.
  - `timeout` <number> A number of milliseconds the hook will fail after. If unspecified, subtests inherit this value from their parent. **Default:** `Infinity`.

This function is used to create a hook that runs after the current test finishes.
```js
test('top level test', async (t) => {
  t.after((t) => t.diagnostic(`finished running ${t.name}`));
  // Some relevant assertion here
});
```

context.afterEach([fn][, options])#
- `fn` <Function> | <AsyncFunction> The hook function. The first argument to this function is a `TestContext` object. If the hook uses callbacks, the callback function is passed as the second argument. **Default:** A no-op function.
- `options` <Object> Configuration options for the hook. The following properties are supported:
  - `signal` <AbortSignal> Allows aborting an in-progress hook.
  - `timeout` <number> A number of milliseconds the hook will fail after. If unspecified, subtests inherit this value from their parent. **Default:** `Infinity`.

This function is used to create a hook running after each subtest of the current test.
```js
test('top level test', async (t) => {
  t.afterEach((t) => t.diagnostic(`finished running ${t.name}`));
  await t.test(
    'This is a subtest',
    (t) => {
      // Some relevant assertion here
    },
  );
});
```

context.assert#
An object containing assertion methods bound to `context`. The top-level functions from the `node:assert` module are exposed here for the purpose of creating test plans.
```js
test('test', (t) => {
  t.plan(1);
  t.assert.strictEqual(true, true);
});
```

context.assert.fileSnapshot(value, path[, options])#
- `value` <any> A value to serialize to a string. If Node.js was started with the `--test-update-snapshots` flag, the serialized value is written to `path`. Otherwise, the serialized value is compared to the contents of the existing snapshot file.
- `path` <string> The file where the serialized `value` is written.
- `options` <Object> Optional configuration options. The following properties are supported:
  - `serializers` <Array> An array of synchronous functions used to serialize `value` into a string. `value` is passed as the only argument to the first serializer function. The return value of each serializer is passed as input to the next serializer. Once all serializers have run, the resulting value is coerced to a string. **Default:** If no serializers are provided, the test runner's default serializers are used.
This function serializes `value` and writes it to the file specified by `path`.
```js
test('snapshot test with default serialization', (t) => {
  t.assert.fileSnapshot({ value1: 1, value2: 2 }, './snapshots/snapshot.json');
});
```

This function differs from `context.assert.snapshot()` in the following ways:
- The snapshot file path is explicitly provided by the user.
- Each snapshot file is limited to a single snapshot value.
- No additional escaping is performed by the test runner.
These differences allow snapshot files to better support features such as syntax highlighting.
context.assert.snapshot(value[, options])#
- `value` <any> A value to serialize to a string. If Node.js was started with the `--test-update-snapshots` flag, the serialized value is written to the snapshot file. Otherwise, the serialized value is compared to the corresponding value in the existing snapshot file.
- `options` <Object> Optional configuration options. The following properties are supported:
  - `serializers` <Array> An array of synchronous functions used to serialize `value` into a string. `value` is passed as the only argument to the first serializer function. The return value of each serializer is passed as input to the next serializer. Once all serializers have run, the resulting value is coerced to a string. **Default:** If no serializers are provided, the test runner's default serializers are used.
This function implements assertions for snapshot testing.
```js
test('snapshot test with default serialization', (t) => {
  t.assert.snapshot({ value1: 1, value2: 2 });
});

test('snapshot test with custom serialization', (t) => {
  t.assert.snapshot({ value3: 3, value4: 4 }, {
    serializers: [(value) => JSON.stringify(value)],
  });
});
```

context.diagnostic(message)#
- `message` <string> Message to be reported.
This function is used to write diagnostics to the output. Any diagnostic information is included at the end of the test's results. This function does not return a value.
```js
test('top level test', (t) => {
  t.diagnostic('A diagnostic message');
});
```

context.filePath#
The absolute path of the test file that created the current test. If a test file imports additional modules that generate tests, the imported tests will return the path of the root test file.
context.fullName#
The name of the test and each of its ancestors, separated by `>`.
context.passed#
- Type: <boolean>
- `false` before the test is executed, e.g. in a `beforeEach` hook.

Indicates whether the test succeeded.
context.error#
The failure reason for the test case; it is wrapped and available via `context.error.cause`.
context.plan(count[, options])#
History
| Version | Changes |
|---|---|
| v23.9.0, v22.15.0 | Add the `wait` option. |
| v23.4.0, v22.13.0 | This function is no longer experimental. |
| v22.2.0, v20.15.0 | Added in: v22.2.0, v20.15.0 |
- `count` <number> The number of assertions and subtests that are expected to run.
- `options` <Object> Additional options for the plan.
  - `wait` <boolean> | <number> The wait time for the plan:
    - If `true`, the plan waits indefinitely for all assertions and subtests to run.
    - If `false`, the plan performs an immediate check after the test function completes, without waiting for any pending assertions or subtests. Any assertions or subtests that complete after this check will not be counted towards the plan.
    - If a number, it specifies the maximum wait time in milliseconds before timing out while waiting for expected assertions and subtests to be matched. If the timeout is reached, the test will fail. **Default:** `false`.
This function is used to set the number of assertions and subtests that are expected to run within the test. If the number of assertions and subtests that run does not match the expected count, the test will fail.
Note: To make sure assertions are tracked, `t.assert` must be used instead of `assert` directly.
```js
test('top level test', (t) => {
  t.plan(2);
  t.assert.ok('some relevant assertion here');
  t.test('subtest', () => {});
});
```

When working with asynchronous code, the `plan` function can be used to ensure that the correct number of assertions are run:
```js
test('planning with streams', (t, done) => {
  function* generate() {
    yield 'a';
    yield 'b';
    yield 'c';
  }
  const expected = ['a', 'b', 'c'];
  t.plan(expected.length);
  const stream = Readable.from(generate());
  stream.on('data', (chunk) => {
    t.assert.strictEqual(chunk, expected.shift());
  });
  stream.on('end', () => {
    done();
  });
});
```

When using the `wait` option, you can control how long the test will wait for the expected assertions. For example, setting a maximum wait time ensures that the test will wait for asynchronous assertions to complete within the specified timeframe:
```js
test('plan with wait: 2000 waits for async assertions', (t) => {
  t.plan(1, { wait: 2000 }); // Waits for up to 2 seconds for the assertion to complete.

  const asyncActivity = () => {
    setTimeout(() => {
      t.assert.ok(true, 'Async assertion completed within the wait time');
    }, 1000); // Completes after 1 second, within the 2-second wait time.
  };

  asyncActivity(); // The test will pass because the assertion is completed in time.
});
```

Note: If a `wait` timeout is specified, it begins counting down only after the test function finishes executing.
context.runOnly(shouldRunOnlyTests)#
- `shouldRunOnlyTests` <boolean> Whether or not to run `only` tests.
If `shouldRunOnlyTests` is truthy, the test context will only run tests that have the `only` option set. Otherwise, all tests are run. If Node.js was not started with the `--test-only` command-line option, this function is a no-op.
```js
test('top level test', (t) => {
  // The test context can be set to run subtests with the 'only' option.
  t.runOnly(true);
  return Promise.all([
    t.test('this subtest is now skipped'),
    t.test('this subtest is run', { only: true }),
  ]);
});
```

context.signal#
- Type: <AbortSignal>
Can be used to abort test subtasks when the test has been aborted.
```js
test('top level test', async (t) => {
  await fetch('some/uri', { signal: t.signal });
});
```

context.skip([message])#
- `message` <string> Optional skip message.
This function causes the test's output to indicate the test as skipped. If `message` is provided, it is included in the output. Calling `skip()` does not terminate execution of the test function. This function does not return a value.
```js
test('top level test', (t) => {
  // Make sure to return here as well if the test contains additional logic.
  t.skip('this is skipped');
});
```

context.todo([message])#
- `message` <string> Optional `TODO` message.
This function adds a `TODO` directive to the test's output. If `message` is provided, it is included in the output. Calling `todo()` does not terminate execution of the test function. This function does not return a value.
```js
test('top level test', (t) => {
  // This test is marked as `TODO`
  t.todo('this is a todo');
});
```

context.test([name][, options][, fn])#
History
| Version | Changes |
|---|---|
| v18.8.0, v16.18.0 | Add a |
| v18.7.0, v16.17.0 | Add a |
| v18.0.0, v16.17.0 | Added in: v18.0.0, v16.17.0 |
- `name` <string> The name of the subtest, which is displayed when reporting test results. **Default:** The `name` property of `fn`, or `'<anonymous>'` if `fn` does not have a name.
- `options` <Object> Configuration options for the subtest. The following properties are supported:
  - `concurrency` <number> | <boolean> | <null> If a number is provided, then that many tests would run asynchronously (they are still managed by the single-threaded event loop). If `true`, it would run all subtests in parallel. If `false`, it would only run one test at a time. If unspecified, subtests inherit this value from their parent. **Default:** `null`.
  - `only` <boolean> If truthy, and the test context is configured to run `only` tests, then this test will be run. Otherwise, the test is skipped. **Default:** `false`.
  - `signal` <AbortSignal> Allows aborting an in-progress test.
  - `skip` <boolean> | <string> If truthy, the test is skipped. If a string is provided, that string is displayed in the test results as the reason for skipping the test. **Default:** `false`.
  - `todo` <boolean> | <string> If truthy, the test is marked as `TODO`. If a string is provided, that string is displayed in the test results as the reason why the test is `TODO`. **Default:** `false`.
  - `timeout` <number> A number of milliseconds the test will fail after. If unspecified, subtests inherit this value from their parent. **Default:** `Infinity`.
  - `plan` <number> The number of assertions and subtests expected to be run in the test. If the number of assertions run in the test does not match the number specified in the plan, the test will fail. **Default:** `undefined`.
- `fn` <Function> | <AsyncFunction> The function under test. The first argument to this function is a `TestContext` object. If the test uses callbacks, the callback function is passed as the second argument. **Default:** A no-op function.
- Returns: <Promise> Fulfilled with `undefined` once the test completes.
This function is used to create subtests under the current test. This function behaves in the same fashion as the top-level `test()` function.
```js
test('top level test', async (t) => {
  await t.test(
    'This is a subtest',
    { only: false, skip: false, concurrency: 1, todo: false, plan: 1 },
    (t) => {
      t.assert.ok('some relevant assertion here');
    },
  );
});
```

context.waitFor(condition[, options])#
- `condition` <Function> | <AsyncFunction> An assertion function that is invoked periodically until it completes successfully or the defined polling timeout elapses. Successful completion is defined as not throwing or rejecting. This function does not accept any arguments, and is allowed to return any value.
- `options` <Object> An optional configuration object for the polling operation.
- Returns: <Promise> Fulfilled with the value returned by `condition`.
This method polls a `condition` function until that function either returns successfully or the operation times out.
Class: SuiteContext#
An instance of `SuiteContext` is passed to each suite function in order to interact with the test runner. However, the `SuiteContext` constructor is not exposed as part of the API.
context.filePath#
The absolute path of the test file that created the current suite. If a test file imports additional modules that generate suites, the imported suites will return the path of the root test file.
context.fullName#
The name of the suite and each of its ancestors, separated by `>`.
context.signal#
- Type: <AbortSignal>
Can be used to abort test subtasks when the test has been aborted.
Timers#
Source Code: lib/timers.js
The `timer` module exposes a global API for scheduling functions to be called at some future period of time. Because the timer functions are globals, there is no need to call `require('node:timers')` to use the API.
The timer functions within Node.js implement a similar API as the timers API provided by Web Browsers but use a different internal implementation that is built around the Node.js Event Loop.
Class: Immediate#
This object is created internally and is returned from `setImmediate()`. It can be passed to `clearImmediate()` in order to cancel the scheduled actions.
By default, when an immediate is scheduled, the Node.js event loop will continue running as long as the immediate is active. The `Immediate` object returned by `setImmediate()` exports both `immediate.ref()` and `immediate.unref()` functions that can be used to control this default behavior.
immediate.hasRef()#
- Returns:<boolean>
If true, the `Immediate` object will keep the Node.js event loop active.
immediate.ref()#
- Returns: <Immediate> a reference to `immediate`
When called, requests that the Node.js event loop not exit so long as the `Immediate` is active. Calling `immediate.ref()` multiple times will have no effect.

By default, all `Immediate` objects are "ref'ed", making it normally unnecessary to call `immediate.ref()` unless `immediate.unref()` had been called previously.
immediate.unref()#
- Returns: <Immediate> a reference to `immediate`
When called, the active `Immediate` object will not require the Node.js event loop to remain active. If there is no other activity keeping the event loop running, the process may exit before the `Immediate` object's callback is invoked. Calling `immediate.unref()` multiple times will have no effect.
immediate[Symbol.dispose]()#
History
| Version | Changes |
|---|---|
| v24.2.0 | No longer experimental. |
| v20.5.0, v18.18.0 | Added in: v20.5.0, v18.18.0 |
Cancels the immediate. This is similar to calling `clearImmediate()`.
Class: Timeout#
This object is created internally and is returned from `setTimeout()` and `setInterval()`. It can be passed to either `clearTimeout()` or `clearInterval()` in order to cancel the scheduled actions.

By default, when a timer is scheduled using either `setTimeout()` or `setInterval()`, the Node.js event loop will continue running as long as the timer is active. Each of the `Timeout` objects returned by these functions exports both `timeout.ref()` and `timeout.unref()` functions that can be used to control this default behavior.
timeout.close()#
> Stability: 3 - Legacy: Use `clearTimeout()` instead.

- Returns: <Timeout> a reference to `timeout`
Cancels the timeout.
timeout.hasRef()#
- Returns:<boolean>
If true, the `Timeout` object will keep the Node.js event loop active.
timeout.ref()#
- Returns: <Timeout> a reference to `timeout`
When called, requests that the Node.js event loop not exit so long as the `Timeout` is active. Calling `timeout.ref()` multiple times will have no effect.

By default, all `Timeout` objects are "ref'ed", making it normally unnecessary to call `timeout.ref()` unless `timeout.unref()` had been called previously.
timeout.refresh()#
- Returns: <Timeout> a reference to `timeout`
Sets the timer's start time to the current time, and reschedules the timer to call its callback at the previously specified duration adjusted to the current time. This is useful for refreshing a timer without allocating a new JavaScript object.

Using this on a timer that has already called its callback will reactivate the timer.
timeout.unref()#
- Returns: <Timeout> a reference to `timeout`
When called, the active `Timeout` object will not require the Node.js event loop to remain active. If there is no other activity keeping the event loop running, the process may exit before the `Timeout` object's callback is invoked. Calling `timeout.unref()` multiple times will have no effect.
timeout[Symbol.toPrimitive]()#
- Returns: <integer> a number that can be used to reference this `timeout`
Coerce a `Timeout` to a primitive. The primitive can be used to clear the `Timeout`. The primitive can only be used in the same thread where the timeout was created. Therefore, to use it across `worker_threads` it must first be passed to the correct thread. This allows enhanced compatibility with browser `setTimeout()` and `setInterval()` implementations.
timeout[Symbol.dispose]()#
History
| Version | Changes |
|---|---|
| v24.2.0 | No longer experimental. |
| v20.5.0, v18.18.0 | Added in: v20.5.0, v18.18.0 |
Cancels the timeout.
Scheduling timers#
A timer in Node.js is an internal construct that calls a given function after a certain period of time. When a timer's function is called varies depending on which method was used to create the timer and what other work the Node.js event loop is doing.
setImmediate(callback[, ...args])#
History
| Version | Changes |
|---|---|
| v18.0.0 | Passing an invalid callback to the `callback` argument now throws `ERR_INVALID_ARG_TYPE` instead of `ERR_INVALID_CALLBACK`. |
| v0.9.1 | Added in: v0.9.1 |
- `callback` <Function> The function to call at the end of this turn of the Node.js Event Loop.
- `...args` <any> Optional arguments to pass when the `callback` is called.
- Returns: <Immediate> for use with `clearImmediate()`
Schedules the "immediate" execution of the `callback` after I/O events' callbacks.

When multiple calls to `setImmediate()` are made, the `callback` functions are queued for execution in the order in which they are created. The entire callback queue is processed every event loop iteration. If an immediate timer is queued from inside an executing callback, that timer will not be triggered until the next event loop iteration.

If `callback` is not a function, a `TypeError` will be thrown.

This method has a custom variant for promises that is available using `timersPromises.setImmediate()`.
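The queueing behavior described above can be observed directly; a minimal sketch:

```js
const order = [];

setImmediate(() => order.push('first'));
setImmediate(() => {
  order.push('second');
  // Queued from inside an executing callback: deferred to the
  // next event loop iteration, so it runs after 'third'.
  setImmediate(() => order.push('fourth'));
});
setImmediate(() => order.push('third'));

process.on('exit', () => {
  console.log(order.join(','));  // first,second,third,fourth
});
```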
setInterval(callback[, delay[, ...args]])#
History
| Version | Changes |
|---|---|
| v18.0.0 | Passing an invalid callback to the `callback` argument now throws `ERR_INVALID_ARG_TYPE` instead of `ERR_INVALID_CALLBACK`. |
| v0.0.1 | Added in: v0.0.1 |
- `callback` <Function> The function to call when the timer elapses.
- `delay` <number> The number of milliseconds to wait before calling the `callback`. **Default:** `1`.
- `...args` <any> Optional arguments to pass when the `callback` is called.
- Returns: <Timeout> for use with `clearInterval()`
Schedules repeated execution of `callback` every `delay` milliseconds.

When `delay` is larger than `2147483647` or less than `1` or `NaN`, the `delay` will be set to `1`. Non-integer delays are truncated to an integer.

If `callback` is not a function, a `TypeError` will be thrown.

This method has a custom variant for promises that is available using `timersPromises.setInterval()`.
setTimeout(callback[, delay[, ...args]])#
History
| Version | Changes |
|---|---|
| v18.0.0 | Passing an invalid callback to the `callback` argument now throws `ERR_INVALID_ARG_TYPE` instead of `ERR_INVALID_CALLBACK`. |
| v0.0.1 | Added in: v0.0.1 |
- `callback` <Function> The function to call when the timer elapses.
- `delay` <number> The number of milliseconds to wait before calling the `callback`. **Default:** `1`.
- `...args` <any> Optional arguments to pass when the `callback` is called.
- Returns: <Timeout> for use with `clearTimeout()`
Schedules execution of a one-time `callback` after `delay` milliseconds.

The `callback` will likely not be invoked in precisely `delay` milliseconds. Node.js makes no guarantees about the exact timing of when callbacks will fire, nor of their ordering. The callback will be called as close as possible to the time specified.

When `delay` is larger than `2147483647` or less than `1` or `NaN`, the `delay` will be set to `1`. Non-integer delays are truncated to an integer.

If `callback` is not a function, a `TypeError` will be thrown.

This method has a custom variant for promises that is available using `timersPromises.setTimeout()`.
Cancelling timers#
The `setImmediate()`, `setInterval()`, and `setTimeout()` methods each return objects that represent the scheduled timers. These can be used to cancel the timer and prevent it from triggering.

For the promisified variants of `setImmediate()` and `setTimeout()`, an `AbortController` may be used to cancel the timer. When canceled, the returned Promises will be rejected with an `'AbortError'`.
ForsetImmediate():
```js
import { setImmediate as setImmediatePromise } from 'node:timers/promises';

const ac = new AbortController();
const signal = ac.signal;

// We do not `await` the promise so `ac.abort()` is called concurrently.
setImmediatePromise('foobar', { signal })
  .then(console.log)
  .catch((err) => {
    if (err.name === 'AbortError')
      console.error('The immediate was aborted');
  });

ac.abort();
```

```js
const { setImmediate: setImmediatePromise } = require('node:timers/promises');

const ac = new AbortController();
const signal = ac.signal;

setImmediatePromise('foobar', { signal })
  .then(console.log)
  .catch((err) => {
    if (err.name === 'AbortError')
      console.error('The immediate was aborted');
  });

ac.abort();
```
ForsetTimeout():
```js
import { setTimeout as setTimeoutPromise } from 'node:timers/promises';

const ac = new AbortController();
const signal = ac.signal;

// We do not `await` the promise so `ac.abort()` is called concurrently.
setTimeoutPromise(1000, 'foobar', { signal })
  .then(console.log)
  .catch((err) => {
    if (err.name === 'AbortError')
      console.error('The timeout was aborted');
  });

ac.abort();
```

```js
const { setTimeout: setTimeoutPromise } = require('node:timers/promises');

const ac = new AbortController();
const signal = ac.signal;

setTimeoutPromise(1000, 'foobar', { signal })
  .then(console.log)
  .catch((err) => {
    if (err.name === 'AbortError')
      console.error('The timeout was aborted');
  });

ac.abort();
```
clearImmediate(immediate)#
- `immediate` <Immediate> An `Immediate` object as returned by `setImmediate()`.

Cancels an `Immediate` object created by `setImmediate()`.
clearInterval(timeout)#
- `timeout` <Timeout> | <string> | <number> A `Timeout` object as returned by `setInterval()` or the primitive of the `Timeout` object as a string or a number.

Cancels a `Timeout` object created by `setInterval()`.
clearTimeout(timeout)#
- `timeout` <Timeout> | <string> | <number> A `Timeout` object as returned by `setTimeout()` or the primitive of the `Timeout` object as a string or a number.

Cancels a `Timeout` object created by `setTimeout()`.
Timers Promises API#
History
| Version | Changes |
|---|---|
| v16.0.0 | Graduated from experimental. |
| v15.0.0 | Added in: v15.0.0 |
The `timers/promises` API provides an alternative set of timer functions that return `Promise` objects. The API is accessible via `require('node:timers/promises')`.
```js
import {
  setTimeout,
  setImmediate,
  setInterval,
} from 'node:timers/promises';
```

```js
const {
  setTimeout,
  setImmediate,
  setInterval,
} = require('node:timers/promises');
```
timersPromises.setTimeout([delay[, value[, options]]])#
- `delay` <number> The number of milliseconds to wait before fulfilling the promise. **Default:** `1`.
- `value` <any> A value with which the promise is fulfilled.
- `options` <Object>
  - `ref` <boolean> Set to `false` to indicate that the scheduled `Timeout` should not require the Node.js event loop to remain active. **Default:** `true`.
  - `signal` <AbortSignal> An optional `AbortSignal` that can be used to cancel the scheduled `Timeout`.
```js
import { setTimeout } from 'node:timers/promises';

const res = await setTimeout(100, 'result');

console.log(res);  // Prints 'result'
```

```js
const { setTimeout } = require('node:timers/promises');

setTimeout(100, 'result').then((res) => {
  console.log(res);  // Prints 'result'
});
```
timersPromises.setImmediate([value[, options]])#
- `value` <any> A value with which the promise is fulfilled.
- `options` <Object>
  - `ref` <boolean> Set to `false` to indicate that the scheduled `Immediate` should not require the Node.js event loop to remain active. **Default:** `true`.
  - `signal` <AbortSignal> An optional `AbortSignal` that can be used to cancel the scheduled `Immediate`.
```js
import { setImmediate } from 'node:timers/promises';

const res = await setImmediate('result');

console.log(res);  // Prints 'result'
```

```js
const { setImmediate } = require('node:timers/promises');

setImmediate('result').then((res) => {
  console.log(res);  // Prints 'result'
});
```
timersPromises.setInterval([delay[, value[, options]]])#
Returns an async iterator that generates values in an interval of `delay` ms. If `ref` is `true`, you need to call `next()` of the async iterator explicitly or implicitly to keep the event loop alive.
- `delay` <number> The number of milliseconds to wait between iterations. **Default:** `1`.
- `value` <any> A value with which the iterator returns.
- `options` <Object>
  - `ref` <boolean> Set to `false` to indicate that the scheduled `Timeout` between iterations should not require the Node.js event loop to remain active. **Default:** `true`.
  - `signal` <AbortSignal> An optional `AbortSignal` that can be used to cancel the scheduled `Timeout` between operations.
```js
import { setInterval } from 'node:timers/promises';

const interval = 100;
for await (const startTime of setInterval(interval, Date.now())) {
  const now = Date.now();
  console.log(now);
  if ((now - startTime) > 1000)
    break;
}
console.log(Date.now());
```

```js
const { setInterval } = require('node:timers/promises');

const interval = 100;

(async function() {
  for await (const startTime of setInterval(interval, Date.now())) {
    const now = Date.now();
    console.log(now);
    if ((now - startTime) > 1000)
      break;
  }
  console.log(Date.now());
})();
```
timersPromises.scheduler.wait(delay[, options])#
- `delay` <number> The number of milliseconds to wait before resolving the promise.
- `options` <Object>
  - `ref` <boolean> Set to `false` to indicate that the scheduled `Timeout` should not require the Node.js event loop to remain active. **Default:** `true`.
  - `signal` <AbortSignal> An optional `AbortSignal` that can be used to cancel waiting.
- Returns: <Promise>
An experimental API defined by the Scheduling APIs draft specification being developed as a standard Web Platform API.
Calling `timersPromises.scheduler.wait(delay, options)` is equivalent to calling `timersPromises.setTimeout(delay, undefined, options)`.
```js
import { scheduler } from 'node:timers/promises';

await scheduler.wait(1000); // Wait one second before continuing
```

timersPromises.scheduler.yield()#
- Returns:<Promise>
An experimental API defined by the Scheduling APIs draft specification being developed as a standard Web Platform API.
Calling `timersPromises.scheduler.yield()` is equivalent to calling `timersPromises.setImmediate()` with no arguments.
TLS (SSL)#
Source Code:lib/tls.js
The `node:tls` module provides an implementation of the Transport Layer Security (TLS) and Secure Socket Layer (SSL) protocols that is built on top of OpenSSL. The module can be accessed using:
```js
import tls from 'node:tls';
```

```js
const tls = require('node:tls');
```
Determining if crypto support is unavailable#
It is possible for Node.js to be built without including support for the `node:crypto` module. In such cases, attempting to `import` from `tls` or calling `require('node:tls')` will result in an error being thrown.
When using CommonJS, the error thrown can be caught using try/catch:
```js
let tls;
try {
  tls = require('node:tls');
} catch (err) {
  console.error('tls support is disabled!');
}
```

When using the lexical ESM `import` keyword, the error can only be caught if a handler for `process.on('uncaughtException')` is registered before any attempt to load the module is made (using, for instance, a preload module).
When using ESM, if there is a chance that the code may be run on a build of Node.js where crypto support is not enabled, consider using the `import()` function instead of the lexical `import` keyword:
```js
let tls;
try {
  tls = await import('node:tls');
} catch (err) {
  console.error('tls support is disabled!');
}
```

TLS/SSL concepts#
TLS/SSL is a set of protocols that rely on a public key infrastructure (PKI) to enable secure communication between a client and a server. For most common cases, each server must have a private key.

Private keys can be generated in multiple ways. The example below illustrates use of the OpenSSL command-line interface to generate a 2048-bit RSA private key:
```bash
openssl genrsa -out ryans-key.pem 2048
```

With TLS/SSL, all servers (and some clients) must have a certificate. Certificates are public keys that correspond to a private key, and that are digitally signed either by a Certificate Authority or by the owner of the private key (such certificates are referred to as "self-signed"). The first step to obtaining a certificate is to create a Certificate Signing Request (CSR) file.
The OpenSSL command-line interface can be used to generate a CSR for a privatekey:
```bash
openssl req -new -sha256 -key ryans-key.pem -out ryans-csr.pem
```

Once the CSR file is generated, it can either be sent to a Certificate Authority for signing or used to generate a self-signed certificate.
Creating a self-signed certificate using the OpenSSL command-line interfaceis illustrated in the example below:
```bash
openssl x509 -req -in ryans-csr.pem -signkey ryans-key.pem -out ryans-cert.pem
```

Once the certificate is generated, it can be used to generate a `.pfx` or `.p12` file:
```bash
openssl pkcs12 -export -in ryans-cert.pem -inkey ryans-key.pem \
      -certfile ca-cert.pem -out ryans.pfx
```

Where:
- `in`: is the signed certificate
- `inkey`: is the associated private key
- `certfile`: is a concatenation of all Certificate Authority (CA) certs into a single file, e.g. `cat ca1-cert.pem ca2-cert.pem > ca-cert.pem`
Perfect forward secrecy#
The term forward secrecy or perfect forward secrecy describes a feature of key-agreement (i.e., key-exchange) methods. That is, the server and client keys are used to negotiate new temporary keys that are used specifically and only for the current communication session. Practically, this means that even if the server's private key is compromised, communication can only be decrypted by eavesdroppers if the attacker manages to obtain the key-pair specifically generated for the session.

Perfect forward secrecy is achieved by randomly generating a key pair for key-agreement on every TLS/SSL handshake (in contrast to using the same key for all sessions). Methods implementing this technique are called "ephemeral".

Currently two methods are commonly used to achieve perfect forward secrecy (note the character "E" appended to the traditional abbreviations):
- ECDHE: An ephemeral version of the Elliptic Curve Diffie-Hellmankey-agreement protocol.
- DHE: An ephemeral version of the Diffie-Hellman key-agreement protocol.
Perfect forward secrecy using ECDHE is enabled by default. The `ecdhCurve` option can be used when creating a TLS server to customize the list of supported ECDH curves to use. See `tls.createServer()` for more info.

DHE is disabled by default but can be enabled alongside ECDHE by setting the `dhparam` option to `'auto'`. Custom DHE parameters are also supported but discouraged in favor of automatically selected, well-known parameters.

Perfect forward secrecy was optional up to TLSv1.2. As of TLSv1.3, (EC)DHE is always used (with the exception of PSK-only connections).
ALPN and SNI#
ALPN (Application-Layer Protocol Negotiation Extension) and SNI (Server Name Indication) are TLS handshake extensions:

- ALPN: Allows the use of one TLS server for multiple protocols (HTTP, HTTP/2).
- SNI: Allows the use of one TLS server for multiple hostnames with different certificates.
Pre-shared keys#
TLS-PSK support is available as an alternative to normal certificate-based authentication. It uses a pre-shared key instead of certificates to authenticate a TLS connection, providing mutual authentication. TLS-PSK and public key infrastructure are not mutually exclusive. Clients and servers can accommodate both, choosing either of them during the normal cipher negotiation step.

TLS-PSK is only a good choice where means exist to securely share a key with every connecting machine, so it does not replace the public key infrastructure (PKI) for the majority of TLS uses. The TLS-PSK implementation in OpenSSL has seen many security flaws in recent years, mostly because it is used only by a minority of applications. Please consider all alternative solutions before switching to PSK ciphers. Upon generating PSK it is of critical importance to use sufficient entropy as discussed in RFC 4086. Deriving a shared secret from a password or other low-entropy sources is not secure.

PSK ciphers are disabled by default, and using TLS-PSK thus requires explicitly specifying a cipher suite with the `ciphers` option. The list of available ciphers can be retrieved via `openssl ciphers -v 'PSK'`. All TLS 1.3 ciphers are eligible for PSK and can be retrieved via `openssl ciphers -v -s -tls1_3 -psk`. On the client connection, a custom `checkServerIdentity` should be passed because the default one will fail in the absence of a certificate.

According to RFC 4279, PSK identities up to 128 bytes in length and PSKs up to 64 bytes in length must be supported. As of OpenSSL 1.1.0, the maximum identity size is 128 bytes, and the maximum PSK length is 256 bytes.

The current implementation doesn't support asynchronous PSK callbacks due to the limitations of the underlying OpenSSL API.

To use TLS-PSK, client and server must specify the `pskCallback` option, a function that returns the PSK to use (which must be compatible with the selected cipher's digest).
It will be called first on the client:
- `hint` <string> optional message sent from the server to help the client decide which identity to use during negotiation. Always `null` if TLS 1.3 is used.
- Returns: <Object> in the form `{ psk: <Buffer|TypedArray|DataView>, identity: <string> }` or `null`.
Then on the server:
- `socket` <tls.TLSSocket> the server socket instance, equivalent to `this`.
- `identity` <string> identity parameter sent from the client.
- Returns: <Buffer> | <TypedArray> | <DataView> the PSK (or `null`).
A return value of `null` stops the negotiation process and sends an `unknown_psk_identity` alert message to the other party. If the server wishes to hide the fact that the PSK identity was not known, the callback must provide some random data as `psk` to make the connection fail with `decrypt_error` before negotiation is finished.
Client-initiated renegotiation attack mitigation#
The TLS protocol allows clients to renegotiate certain aspects of the TLS session. Unfortunately, session renegotiation requires a disproportionate amount of server-side resources, making it a potential vector for denial-of-service attacks.

To mitigate the risk, renegotiation is limited to three times every ten minutes. An `'error'` event is emitted on the `tls.TLSSocket` instance when this threshold is exceeded. The limits are configurable:

- `tls.CLIENT_RENEG_LIMIT` <number> Specifies the number of renegotiation requests. **Default:** `3`.
- `tls.CLIENT_RENEG_WINDOW` <number> Specifies the time renegotiation window in seconds. **Default:** `600` (10 minutes).
The default renegotiation limits should not be modified without a fullunderstanding of the implications and risks.
TLSv1.3 does not support renegotiation.
Session resumption#
Establishing a TLS session can be relatively slow. The process can be sped up by saving and later reusing the session state. There are several mechanisms to do so, discussed here from oldest to newest (and preferred).

Session identifiers#

Servers generate a unique ID for new connections and send it to the client. Clients and servers save the session state. When reconnecting, clients send the ID of their saved session state and if the server also has the state for that ID, it can agree to use it. Otherwise, the server will create a new session. See RFC 2246 for more information, pages 23 and 30.

Resumption using session identifiers is supported by most web browsers when making HTTPS requests.

For Node.js, clients wait for the `'session'` event to get the session data, and provide the data to the `session` option of a subsequent `tls.connect()` to reuse the session. Servers must implement handlers for the `'newSession'` and `'resumeSession'` events to save and restore the session data using the session ID as the lookup key to reuse sessions. To reuse sessions across load balancers or cluster workers, servers must use a shared session cache (such as Redis) in their session handlers.
Session tickets#
The servers encrypt the entire session state and send it to the client as a "ticket". When reconnecting, the state is sent to the server in the initial connection. This mechanism avoids the need for a server-side session cache. If the server doesn't use the ticket, for any reason (failure to decrypt it, it's too old, etc.), it will create a new session and send a new ticket. See RFC 5077 for more information.

Resumption using session tickets is becoming commonly supported by many web browsers when making HTTPS requests.

For Node.js, clients use the same APIs for resumption with session identifiers as for resumption with session tickets. For debugging, if `tls.TLSSocket.getTLSTicket()` returns a value, the session data contains a ticket, otherwise it contains client-side session state.

With TLSv1.3, be aware that multiple tickets may be sent by the server, resulting in multiple `'session'` events; see `'session'` for more information.

Single process servers need no specific implementation to use session tickets. To use session tickets across server restarts or load balancers, servers must all have the same ticket keys. There are three 16-byte keys internally, but the `tls` API exposes them as a single 48-byte buffer for convenience.

It's possible to get the ticket keys by calling `server.getTicketKeys()` on one server instance and then distribute them, but it is more reasonable to securely generate 48 bytes of secure random data and set them with the `ticketKeys` option of `tls.createServer()`. The keys should be regularly regenerated and the server's keys can be reset with `server.setTicketKeys()`.
Session ticket keys are cryptographic keys, and they **must be stored securely**. With TLS 1.2 and below, if they are compromised all sessions that used tickets encrypted with them can be decrypted. They should not be stored on disk, and they should be regenerated regularly.

If clients advertise support for tickets, the server will send them. The server can disable tickets by supplying `require('node:constants').SSL_OP_NO_TICKET` in `secureOptions`.

Both session identifiers and session tickets time out, causing the server to create new sessions. The timeout can be configured with the `sessionTimeout` option of `tls.createServer()`.

For all the mechanisms, when resumption fails, servers will create new sessions. Since failing to resume the session does not cause TLS/HTTPS connection failures, it is easy to not notice unnecessarily poor TLS performance. The OpenSSL CLI can be used to verify that servers are resuming sessions. Use the `-reconnect` option to `openssl s_client`, for example:
```bash
openssl s_client -connect localhost:443 -reconnect
```

Read through the debug output. The first connection should say "New", for example:

```text
New, TLSv1.2, Cipher is ECDHE-RSA-AES128-GCM-SHA256
```

Subsequent connections should say "Reused", for example:

```text
Reused, TLSv1.2, Cipher is ECDHE-RSA-AES128-GCM-SHA256
```

Modifying the default TLS cipher suite#
Node.js is built with a default suite of enabled and disabled TLS ciphers. Thisdefault cipher list can be configured when building Node.js to allowdistributions to provide their own default list.
The following command can be used to show the default cipher suite:
```bash
node -p crypto.constants.defaultCoreCipherList | tr ':' '\n'
```

```text
TLS_AES_256_GCM_SHA384
TLS_CHACHA20_POLY1305_SHA256
TLS_AES_128_GCM_SHA256
ECDHE-RSA-AES128-GCM-SHA256
ECDHE-ECDSA-AES128-GCM-SHA256
ECDHE-RSA-AES256-GCM-SHA384
ECDHE-ECDSA-AES256-GCM-SHA384
DHE-RSA-AES128-GCM-SHA256
ECDHE-RSA-AES128-SHA256
DHE-RSA-AES128-SHA256
ECDHE-RSA-AES256-SHA384
DHE-RSA-AES256-SHA384
ECDHE-RSA-AES256-SHA256
DHE-RSA-AES256-SHA256
HIGH
!aNULL
!eNULL
!EXPORT
!DES
!RC4
!MD5
!PSK
!SRP
!CAMELLIA
```

This default can be replaced entirely using the `--tls-cipher-list` command-line switch (directly, or via the `NODE_OPTIONS` environment variable). For instance, the following makes `ECDHE-RSA-AES128-GCM-SHA256:!RC4` the default TLS cipher suite:

```bash
node --tls-cipher-list='ECDHE-RSA-AES128-GCM-SHA256:!RC4' server.js

export NODE_OPTIONS=--tls-cipher-list='ECDHE-RSA-AES128-GCM-SHA256:!RC4'
node server.js
```

To verify, use the following command to show the set cipher list; note the difference between `defaultCoreCipherList` and `defaultCipherList`:

```bash
node --tls-cipher-list='ECDHE-RSA-AES128-GCM-SHA256:!RC4' -p crypto.constants.defaultCipherList | tr ':' '\n'
```

```text
ECDHE-RSA-AES128-GCM-SHA256
!RC4
```

I.e. the `defaultCoreCipherList` list is set at compilation time and the `defaultCipherList` is set at runtime.
To modify the default cipher suites from within the runtime, modify the `tls.DEFAULT_CIPHERS` variable. This must be performed before listening on any sockets; it will not affect sockets already opened. For example:

```js
// Remove obsolete CBC ciphers and RSA key-exchange-based ciphers, as they don't provide forward secrecy.
tls.DEFAULT_CIPHERS +=
  ':!ECDHE-RSA-AES128-SHA:!ECDHE-RSA-AES128-SHA256:!ECDHE-RSA-AES256-SHA:!ECDHE-RSA-AES256-SHA384' +
  ':!ECDHE-ECDSA-AES128-SHA:!ECDHE-ECDSA-AES128-SHA256:!ECDHE-ECDSA-AES256-SHA:!ECDHE-ECDSA-AES256-SHA384' +
  ':!kRSA';
```

The default can also be replaced on a per-client or per-server basis using the `ciphers` option from `tls.createSecureContext()`, which is also available in `tls.createServer()`, `tls.connect()`, and when creating new `tls.TLSSocket`s.

The ciphers list can contain a mixture of TLSv1.3 cipher suite names, the ones that start with `'TLS_'`, and specifications for TLSv1.2 and below cipher suites. The TLSv1.2 ciphers support a legacy specification format; consult the OpenSSL cipher list format documentation for details, but those specifications do not apply to TLSv1.3 ciphers. The TLSv1.3 suites can only be enabled by including their full name in the cipher list. They cannot, for example, be enabled or disabled by using the legacy TLSv1.2 `'EECDH'` or `'!EECDH'` specification.
Despite the relative order of TLSv1.3 and TLSv1.2 cipher suites, the TLSv1.3protocol is significantly more secure than TLSv1.2, and will always be chosenover TLSv1.2 if the handshake indicates it is supported, and if any TLSv1.3cipher suites are enabled.
The default cipher suite included within Node.js has been carefully selected to reflect current security best practices and risk mitigation. Changing the default cipher suite can have a significant impact on the security of an application. The `--tls-cipher-list` switch and `ciphers` option should be used only if absolutely necessary.

The default cipher suite prefers GCM ciphers for Chrome's 'modern cryptography' setting and also prefers ECDHE and DHE ciphers for perfect forward secrecy, while offering some backward compatibility.

Old clients that rely on insecure and deprecated RC4 or DES-based ciphers (like Internet Explorer 6) cannot complete the handshaking process with the default configuration. If these clients must be supported, the TLS recommendations may offer a compatible cipher suite. For more details on the format, see the OpenSSL cipher list format documentation.
There are only five TLSv1.3 cipher suites:
- `'TLS_AES_256_GCM_SHA384'`
- `'TLS_CHACHA20_POLY1305_SHA256'`
- `'TLS_AES_128_GCM_SHA256'`
- `'TLS_AES_128_CCM_SHA256'`
- `'TLS_AES_128_CCM_8_SHA256'`

The first three are enabled by default. The two CCM-based suites are supported by TLSv1.3 because they may be more performant on constrained systems, but they are not enabled by default since they offer less security.
OpenSSL security level#
The OpenSSL library enforces security levels to control the minimum acceptable level of security for cryptographic operations. OpenSSL's security levels range from 0 to 5, with each level imposing stricter security requirements. The default security level is 2, which is generally suitable for most modern applications. However, some legacy features and protocols, such as TLSv1, require a lower security level (`SECLEVEL=0`) to function properly. For more detailed information, please refer to the OpenSSL documentation on security levels.
Setting security levels#
To adjust the security level in your Node.js application, you can include `@SECLEVEL=X` within a cipher string, where `X` is the desired security level. For example, to set the security level to 0 while using the default OpenSSL cipher list, you could use:

```mjs
import { createServer, connect } from 'node:tls';
const port = 443;

createServer({ ciphers: 'DEFAULT@SECLEVEL=0', minVersion: 'TLSv1' }, function(socket) {
  console.log('Client connected with protocol:', socket.getProtocol());
  socket.end();
  this.close();
}).listen(port, () => {
  connect(port, { ciphers: 'DEFAULT@SECLEVEL=0', maxVersion: 'TLSv1' });
});
```

```cjs
const { createServer, connect } = require('node:tls');
const port = 443;

createServer({ ciphers: 'DEFAULT@SECLEVEL=0', minVersion: 'TLSv1' }, function(socket) {
  console.log('Client connected with protocol:', socket.getProtocol());
  socket.end();
  this.close();
}).listen(port, () => {
  connect(port, { ciphers: 'DEFAULT@SECLEVEL=0', maxVersion: 'TLSv1' });
});
```
This approach sets the security level to 0, allowing the use of legacy features while stillleveraging the default OpenSSL ciphers.
Using `--tls-cipher-list`#

You can also set the security level and ciphers from the command line using `--tls-cipher-list=DEFAULT@SECLEVEL=X`, as described in Modifying the default TLS cipher suite. However, it is generally discouraged to use the command line option for setting ciphers; it is preferable to configure the ciphers for individual contexts within your application code, as this approach provides finer control and reduces the risk of globally downgrading the security level.
X509 certificate error codes#
Multiple functions can fail due to certificate errors that are reported by OpenSSL. In such a case, the function provides an <Error> via its callback that has the property `code`, which can take one of the following values:

- `'UNABLE_TO_GET_ISSUER_CERT'`: Unable to get issuer certificate.
- `'UNABLE_TO_GET_CRL'`: Unable to get certificate CRL.
- `'UNABLE_TO_DECRYPT_CERT_SIGNATURE'`: Unable to decrypt certificate's signature.
- `'UNABLE_TO_DECRYPT_CRL_SIGNATURE'`: Unable to decrypt CRL's signature.
- `'UNABLE_TO_DECODE_ISSUER_PUBLIC_KEY'`: Unable to decode issuer public key.
- `'CERT_SIGNATURE_FAILURE'`: Certificate signature failure.
- `'CRL_SIGNATURE_FAILURE'`: CRL signature failure.
- `'CERT_NOT_YET_VALID'`: Certificate is not yet valid.
- `'CERT_HAS_EXPIRED'`: Certificate has expired.
- `'CRL_NOT_YET_VALID'`: CRL is not yet valid.
- `'CRL_HAS_EXPIRED'`: CRL has expired.
- `'ERROR_IN_CERT_NOT_BEFORE_FIELD'`: Format error in certificate's notBefore field.
- `'ERROR_IN_CERT_NOT_AFTER_FIELD'`: Format error in certificate's notAfter field.
- `'ERROR_IN_CRL_LAST_UPDATE_FIELD'`: Format error in CRL's lastUpdate field.
- `'ERROR_IN_CRL_NEXT_UPDATE_FIELD'`: Format error in CRL's nextUpdate field.
- `'OUT_OF_MEM'`: Out of memory.
- `'DEPTH_ZERO_SELF_SIGNED_CERT'`: Self signed certificate.
- `'SELF_SIGNED_CERT_IN_CHAIN'`: Self signed certificate in certificate chain.
- `'UNABLE_TO_GET_ISSUER_CERT_LOCALLY'`: Unable to get local issuer certificate.
- `'UNABLE_TO_VERIFY_LEAF_SIGNATURE'`: Unable to verify the first certificate.
- `'CERT_CHAIN_TOO_LONG'`: Certificate chain too long.
- `'CERT_REVOKED'`: Certificate revoked.
- `'INVALID_CA'`: Invalid CA certificate.
- `'PATH_LENGTH_EXCEEDED'`: Path length constraint exceeded.
- `'INVALID_PURPOSE'`: Unsupported certificate purpose.
- `'CERT_UNTRUSTED'`: Certificate not trusted.
- `'CERT_REJECTED'`: Certificate rejected.
- `'HOSTNAME_MISMATCH'`: Hostname mismatch.

When certificate errors like `UNABLE_TO_VERIFY_LEAF_SIGNATURE`, `DEPTH_ZERO_SELF_SIGNED_CERT`, or `UNABLE_TO_GET_ISSUER_CERT` occur, Node.js appends a hint suggesting that, if the root CA is installed locally, the user try running with the `--use-system-ca` flag, to direct developers towards a secure solution and prevent unsafe workarounds.
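The hint behavior described above can be mirrored in application-level error reporting. `explainCertError` is a hypothetical helper, not a `tls` API; it only maps the documented `code` values to a message.

```js
// Error codes (from the list above) for which a locally installed
// root CA plus --use-system-ca may be the correct fix.
const LOCAL_CA_HINT_CODES = new Set([
  'UNABLE_TO_VERIFY_LEAF_SIGNATURE',
  'DEPTH_ZERO_SELF_SIGNED_CERT',
  'UNABLE_TO_GET_ISSUER_CERT',
]);

function explainCertError(err) {
  if (LOCAL_CA_HINT_CODES.has(err.code)) {
    return `${err.code}: if the root CA is installed locally, retry with --use-system-ca`;
  }
  return err.code;
}
```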
Class: `tls.Server`#

- Extends: <net.Server>
Accepts encrypted connections using TLS or SSL.
Event: 'connection'#

- `socket` <stream.Duplex>

This event is emitted when a new TCP stream is established, before the TLS handshake begins. `socket` is typically an object of type `net.Socket` but will not receive events, unlike the socket created from the `net.Server` `'connection'` event. Usually users will not want to access this event.
This event can also be explicitly emitted by users to inject connectionsinto the TLS server. In that case, anyDuplex stream can be passed.
Event: 'keylog'#

- `line` <Buffer> Line of ASCII text, in NSS `SSLKEYLOGFILE` format.
- `tlsSocket` <tls.TLSSocket> The `tls.TLSSocket` instance on which it was generated.

The `keylog` event is emitted when key material is generated or received by a connection to this server (typically before the handshake has completed, but not necessarily). This keying material can be stored for debugging, as it allows captured TLS traffic to be decrypted. It may be emitted multiple times for each socket.
A typical use case is to append received lines to a common text file, whichis later used by software (such as Wireshark) to decrypt the traffic:
```js
const logFile = fs.createWriteStream('/tmp/ssl-keys.log', { flags: 'a' });
// ...
server.on('keylog', (line, tlsSocket) => {
  if (tlsSocket.remoteAddress !== '...')
    return; // Only log keys for a particular IP
  logFile.write(line);
});
```

Event: 'newSession'#
History
| Version | Changes |
|---|---|
| v0.11.12 | The |
| v0.9.2 | Added in: v0.9.2 |
The `'newSession'` event is emitted upon creation of a new TLS session. This may be used to store sessions in external storage. The data should be provided to the `'resumeSession'` callback.

The listener callback is passed three arguments when called:

- `sessionId` <Buffer> The TLS session identifier
- `sessionData` <Buffer> The TLS session data
- `callback` <Function> A callback function taking no arguments that must be invoked in order for data to be sent or received over the secure connection.
Listening for this event will have an effect only on connections establishedafter the addition of the event listener.
Event: 'OCSPRequest'#

The `'OCSPRequest'` event is emitted when the client sends a certificate status request. The listener callback is passed three arguments when called:

- `certificate` <Buffer> The server certificate
- `issuer` <Buffer> The issuer's certificate
- `callback` <Function> A callback function that must be invoked to provide the results of the OCSP request.

The server's current certificate can be parsed to obtain the OCSP URL and certificate ID; after obtaining an OCSP response, `callback(null, resp)` is then invoked, where `resp` is a `Buffer` instance containing the OCSP response. Both `certificate` and `issuer` are `Buffer` DER-representations of the primary and issuer's certificates. These can be used to obtain the OCSP certificate ID and OCSP endpoint URL.

Alternatively, `callback(null, null)` may be called, indicating that there was no OCSP response.

Calling `callback(err)` will result in a `socket.destroy(err)` call.
The typical flow of an OCSP request is as follows:
1. Client connects to the server and sends an `'OCSPRequest'` (via the status info extension in ClientHello).
2. Server receives the request and emits the `'OCSPRequest'` event, calling the listener if registered.
3. Server extracts the OCSP URL from either the `certificate` or `issuer` and performs an OCSP request to the CA.
4. Server receives `'OCSPResponse'` from the CA and sends it back to the client via the `callback` argument.
5. Client validates the response and either destroys the socket or performs a handshake.

The `issuer` can be `null` if the certificate is either self-signed or the issuer is not in the root certificates list. (An issuer may be provided via the `ca` option when establishing the TLS connection.)
Listening for this event will have an effect only on connections establishedafter the addition of the event listener.
An npm module like `asn1.js` may be used to parse the certificates.
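The flow above can be sketched as a listener. This is a hypothetical skeleton: no OCSP responder is wired up, so it always reports "no response" via `callback(null, null)` rather than fetching a real one.

```js
// Hypothetical 'OCSPRequest' listener skeleton.
function onOCSPRequest(certificate, issuer, callback) {
  if (!issuer) {
    // Self-signed or unknown issuer: nothing to ask a responder for.
    return callback(null, null);
  }
  // A real implementation would derive the OCSP URL and certificate ID
  // from the DER buffers (e.g. with asn1.js), fetch the response, and
  // pass it as a Buffer: callback(null, resp).
  callback(null, null);
}
```

It would be registered with `server.on('OCSPRequest', onOCSPRequest)`.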
Event: 'resumeSession'#

The `'resumeSession'` event is emitted when the client requests to resume a previous TLS session. The listener callback is passed two arguments when called:

- `sessionId` <Buffer> The TLS session identifier
- `callback` <Function> A callback function to be called when the prior session has been recovered: `callback([err[, sessionData]])`

The event listener should perform a lookup in external storage for the `sessionData` saved by the `'newSession'` event handler using the given `sessionId`. If found, call `callback(null, sessionData)` to resume the session. If not found, the session cannot be resumed. `callback()` must be called without `sessionData` so that the handshake can continue and a new session can be created. It is possible to call `callback(err)` to terminate the incoming connection and destroy the socket.
Listening for this event will have an effect only on connections establishedafter the addition of the event listener.
The following illustrates resuming a TLS session:
```js
const tlsSessionStore = {};
server.on('newSession', (id, data, cb) => {
  tlsSessionStore[id.toString('hex')] = data;
  cb();
});
server.on('resumeSession', (id, cb) => {
  cb(null, tlsSessionStore[id.toString('hex')] || null);
});
```

Event: 'secureConnection'#

The `'secureConnection'` event is emitted after the handshaking process for a new connection has successfully completed. The listener callback is passed a single argument when called:

- `tlsSocket` <tls.TLSSocket> The established TLS socket.

The `tlsSocket.authorized` property is a `boolean` indicating whether the client has been verified by one of the supplied Certificate Authorities for the server. If `tlsSocket.authorized` is `false`, then `socket.authorizationError` is set to describe how authorization failed. Depending on the settings of the TLS server, unauthorized connections may still be accepted.

The `tlsSocket.alpnProtocol` property is a string that contains the selected ALPN protocol. When ALPN has no selected protocol because the client or the server did not send an ALPN extension, `tlsSocket.alpnProtocol` equals `false`.
The `tlsSocket.servername` property is a string containing the server name requested via SNI.
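The properties above can be summarized in a listener. `describeConnection` is a hypothetical helper for illustration, exercised here against plain objects shaped like a `tls.TLSSocket`.

```js
// Hypothetical 'secureConnection' listener: summarize the socket's
// authorization, SNI, and ALPN state described above.
function describeConnection(tlsSocket) {
  if (!tlsSocket.authorized) {
    return `unauthorized: ${tlsSocket.authorizationError}`;
  }
  // alpnProtocol is false when no ALPN protocol was selected.
  const proto = tlsSocket.alpnProtocol || 'none';
  return `authorized (SNI: ${tlsSocket.servername}, ALPN: ${proto})`;
}
```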
Event: 'tlsClientError'#

The `'tlsClientError'` event is emitted when an error occurs before a secure connection is established. The listener callback is passed two arguments when called:

- `exception` <Error> The `Error` object describing the error
- `tlsSocket` <tls.TLSSocket> The `tls.TLSSocket` instance from which the error originated.
server.addContext(hostname, context)#
- `hostname` <string> A SNI host name or wildcard (e.g. `'*'`)
- `context` <Object> | <tls.SecureContext> An object containing any of the possible properties from the `tls.createSecureContext()` `options` arguments (e.g. `key`, `cert`, `ca`, etc), or a TLS context object created with `tls.createSecureContext()` itself.

The `server.addContext()` method adds a secure context that will be used if the client request's SNI name matches the supplied `hostname` (or wildcard).

When there are multiple matching contexts, the most recently added one is used.
server.address()#
- Returns: <Object>

Returns the bound address, the address family name, and port of the server as reported by the operating system. See `net.Server.address()` for more information.
server.close([callback])#
- `callback` <Function> A listener callback that will be registered to listen for the server instance's `'close'` event.
- Returns: <tls.Server>

The `server.close()` method stops the server from accepting new connections.

This function operates asynchronously. The `'close'` event will be emitted when the server has no more open connections.
server.getTicketKeys()#
- Returns: <Buffer> A 48-byte buffer containing the session ticket keys.
Returns the session ticket keys.
See Session Resumption for more information.
server.listen()#
Starts the server listening for encrypted connections. This method is identical to `server.listen()` from `net.Server`.
server.setSecureContext(options)#
- `options` <Object> An object containing any of the possible properties from the `tls.createSecureContext()` `options` arguments (e.g. `key`, `cert`, `ca`, etc).

The `server.setSecureContext()` method replaces the secure context of an existing server. Existing connections to the server are not interrupted.
server.setTicketKeys(keys)#
- `keys` <Buffer> | <TypedArray> | <DataView> A 48-byte buffer containing the session ticket keys.
Sets the session ticket keys.
Changes to the ticket keys are effective only for future server connections. Existing or currently pending server connections will use the previous keys.

See Session Resumption for more information.

Class: `tls.TLSSocket`#

- Extends: <net.Socket>
Performs transparent encryption of written data and all required TLSnegotiation.
Instances of `tls.TLSSocket` implement the duplex `Stream` interface.

Methods that return TLS connection metadata (e.g. `tls.TLSSocket.getPeerCertificate()`) will only return data while the connection is open.
new tls.TLSSocket(socket[, options])#
History
| Version | Changes |
|---|---|
| v12.2.0 | The |
| v5.0.0 | ALPN options are supported now. |
| v0.11.4 | Added in: v0.11.4 |
- `socket` <net.Socket> | <stream.Duplex> On the server side, any `Duplex` stream. On the client side, any instance of `net.Socket` (for generic `Duplex` stream support on the client side, `tls.connect()` must be used).
- `options` <Object>
  - `enableTrace`: See `tls.createServer()`
  - `isServer`: The SSL/TLS protocol is asymmetrical, TLSSockets must know if they are to behave as a server or a client. If `true` the TLS socket will be instantiated as a server. **Default:** `false`.
  - `server` <net.Server> A `net.Server` instance.
  - `requestCert`: Whether to authenticate the remote peer by requesting a certificate. Clients always request a server certificate. Servers (`isServer` is true) may set `requestCert` to true to request a client certificate.
  - `rejectUnauthorized`: See `tls.createServer()`
  - `ALPNProtocols`: See `tls.createServer()`
  - `SNICallback`: See `tls.createServer()`
  - `ALPNCallback`: See `tls.createServer()`
  - `session` <Buffer> A `Buffer` instance containing a TLS session.
  - `requestOCSP` <boolean> If `true`, specifies that the OCSP status request extension will be added to the client hello and an `'OCSPResponse'` event will be emitted on the socket before establishing a secure communication
  - `secureContext`: TLS context object created with `tls.createSecureContext()`. If a `secureContext` is not provided, one will be created by passing the entire `options` object to `tls.createSecureContext()`.
  - ...: `tls.createSecureContext()` options that are used if the `secureContext` option is missing. Otherwise, they are ignored.
Construct a new `tls.TLSSocket` object from an existing TCP socket.
Event: 'keylog'#

- `line` <Buffer> Line of ASCII text, in NSS `SSLKEYLOGFILE` format.

The `keylog` event is emitted on a `tls.TLSSocket` when key material is generated or received by the socket. This keying material can be stored for debugging, as it allows captured TLS traffic to be decrypted. It may be emitted multiple times, before or after the handshake completes.
A typical use case is to append received lines to a common text file, whichis later used by software (such as Wireshark) to decrypt the traffic:
```js
const logFile = fs.createWriteStream('/tmp/ssl-keys.log', { flags: 'a' });
// ...
tlsSocket.on('keylog', (line) => logFile.write(line));
```

Event: 'OCSPResponse'#

The `'OCSPResponse'` event is emitted if the `requestOCSP` option was set when the `tls.TLSSocket` was created and an OCSP response has been received. The listener callback is passed a single argument when called:

- `response` <Buffer> The server's OCSP response

Typically, the `response` is a digitally signed object from the server's CA that contains information about server's certificate revocation status.
Event: 'secure'#

The `'secure'` event is emitted after the TLS handshake has successfully completed and a secure connection has been established.

This event is emitted on both client and server <tls.TLSSocket> instances, including sockets created using the `new tls.TLSSocket()` constructor.

Event: 'secureConnect'#

The `'secureConnect'` event is emitted after the handshaking process for a new connection has successfully completed. The listener callback will be called regardless of whether or not the server's certificate has been authorized. It is the client's responsibility to check the `tlsSocket.authorized` property to determine if the server certificate was signed by one of the specified CAs. If `tlsSocket.authorized === false`, then the error can be found by examining the `tlsSocket.authorizationError` property. If ALPN was used, the `tlsSocket.alpnProtocol` property can be checked to determine the negotiated protocol.

The `'secureConnect'` event is not emitted when a <tls.TLSSocket> is created using the `new tls.TLSSocket()` constructor.
Event: 'session'#

- `session` <Buffer>

The `'session'` event is emitted on a client `tls.TLSSocket` when a new session or TLS ticket is available. This may or may not be before the handshake is complete, depending on the TLS protocol version that was negotiated. The event is not emitted on the server, or if a new session was not created, for example, when the connection was resumed. For some TLS protocol versions the event may be emitted multiple times, in which case all the sessions can be used for resumption.

On the client, the `session` can be provided to the `session` option of `tls.connect()` to resume the connection.

See Session Resumption for more information.

For TLSv1.2 and below, `tls.TLSSocket.getSession()` can be called once the handshake is complete. For TLSv1.3, only ticket-based resumption is allowed by the protocol, multiple tickets are sent, and the tickets aren't sent until after the handshake completes. So it is necessary to wait for the `'session'` event to get a resumable session. Applications should use the `'session'` event instead of `getSession()` to ensure they will work for all TLS versions. Applications that only expect to get or use one session should listen for this event only once:
```js
tlsSocket.once('session', (session) => {
  // The session can be used immediately or later.
  tls.connect({
    session: session,
    // Other connect options...
  });
});
```

tlsSocket.address()#
History
| Version | Changes |
|---|---|
| v18.4.0 | The |
| v18.0.0 | The |
| v0.11.4 | Added in: v0.11.4 |
- Returns: <Object>

Returns the bound `address`, the address `family` name, and `port` of the underlying socket as reported by the operating system: `{ port: 12346, family: 'IPv4', address: '127.0.0.1' }`.
tlsSocket.authorizationError#
Returns the reason why the peer's certificate was not verified. This property is set only when `tlsSocket.authorized === false`.
tlsSocket.authorized#
- Type: <boolean>

This property is `true` if the peer certificate was signed by one of the CAs specified when creating the `tls.TLSSocket` instance, otherwise `false`.
tlsSocket.disableRenegotiation()#
Disables TLS renegotiation for this `TLSSocket` instance. Once called, attempts to renegotiate will trigger an `'error'` event on the `TLSSocket`.
tlsSocket.enableTrace()#
When enabled, TLS packet trace information is written to `stderr`. This can be used to debug TLS connection problems.

The format of the output is identical to the output of `openssl s_client -trace` or `openssl s_server -trace`. While it is produced by OpenSSL's `SSL_trace()` function, the format is undocumented, can change without notice, and should not be relied on.
tlsSocket.encrypted#
Always returns `true`. This may be used to distinguish TLS sockets from regular `net.Socket` instances.
tlsSocket.exportKeyingMaterial(length, label[, context])#
- `length` <number> number of bytes to retrieve from keying material
- `label` <string> an application specific label, typically this will be a value from the IANA Exporter Label Registry.
- `context` <Buffer> Optionally provide a context.
- Returns: <Buffer> requested bytes of the keying material

Keying material is used for validations to prevent different kinds of attacks in network protocols, for example in the specifications of IEEE 802.1X.
Example
```js
const keyingMaterial = tlsSocket.exportKeyingMaterial(
  128,
  'client finished');

/*
 Example return value of keyingMaterial:
 <Buffer 76 26 af 99 c5 56 8e 42 09 91 ef 9f 93 cb ad 6c 7b 65 f8 53 f1
    d8 d9 12 5a 33 b8 b5 25 df 7b 37 9f e0 e2 4f b8 67 83 a3 2f cd 5d 41
    42 4c 91 74 ef 2c ... 78 more bytes>
*/
```

See the OpenSSL `SSL_export_keying_material` documentation for more information.
tlsSocket.getCertificate()#
- Returns: <Object>

Returns an object representing the local certificate. The returned object has some properties corresponding to the fields of the certificate.

See `tls.TLSSocket.getPeerCertificate()` for an example of the certificate structure.

If there is no local certificate, an empty object will be returned. If the socket has been destroyed, `null` will be returned.
tlsSocket.getCipher()#
History
| Version | Changes |
|---|---|
| v13.4.0, v12.16.0 | Return the IETF cipher name as |
| v12.0.0 | Return the minimum cipher version, instead of a fixed string ( |
| v0.11.4 | Added in: v0.11.4 |
- Returns: <Object>
  - `name` <string> OpenSSL name for the cipher suite.
  - `standardName` <string> IETF name for the cipher suite.
  - `version` <string> The minimum TLS protocol version supported by this cipher suite. For the actual negotiated protocol, see `tls.TLSSocket.getProtocol()`.
Returns an object containing information on the negotiated cipher suite.
For example, a TLSv1.2 protocol with AES256-SHA cipher:
```json
{
  "name": "AES256-SHA",
  "standardName": "TLS_RSA_WITH_AES_256_CBC_SHA",
  "version": "SSLv3"
}
```

See `SSL_CIPHER_get_name` for more information.
tlsSocket.getEphemeralKeyInfo()#
- Returns: <Object>

Returns an object representing the type, name, and size of parameter of an ephemeral key exchange in perfect forward secrecy on a client connection. It returns an empty object when the key exchange is not ephemeral. As this is only supported on a client socket, `null` is returned if called on a server socket. The supported types are `'DH'` and `'ECDH'`. The `name` property is available only when type is `'ECDH'`.

For example: `{ type: 'ECDH', name: 'prime256v1', size: 256 }`.
tlsSocket.getFinished()#
- Returns: <Buffer> | <undefined> The latest `Finished` message that has been sent to the socket as part of a SSL/TLS handshake, or `undefined` if no `Finished` message has been sent yet.

As the `Finished` messages are message digests of the complete handshake (with a total of 192 bits for TLS 1.0 and more for SSL 3.0), they can be used for external authentication procedures when the authentication provided by SSL/TLS is not desired or is not enough.

Corresponds to the `SSL_get_finished` routine in OpenSSL and may be used to implement the `tls-unique` channel binding from RFC 5929.
tlsSocket.getPeerCertificate([detailed])#
- `detailed` <boolean> Include the full certificate chain if `true`, otherwise include just the peer's certificate.
- Returns: <Object> A certificate object.

Returns an object representing the peer's certificate. If the peer does not provide a certificate, an empty object will be returned. If the socket has been destroyed, `null` will be returned.

If the full certificate chain was requested, each certificate will include an `issuerCertificate` property containing an object representing its issuer's certificate.
Certificate object#
History
| Version | Changes |
|---|---|
| v19.1.0, v18.13.0 | Add "ca" property. |
| v17.2.0, v16.14.0 | Add fingerprint512. |
| v11.4.0 | Support Elliptic Curve public key info. |
A certificate object has properties corresponding to the fields of the certificate.
- ca <boolean> true if a Certificate Authority (CA), false otherwise.
- raw <Buffer> The DER encoded X.509 certificate data.
- subject <Object> The certificate subject, described in terms of Country (C), StateOrProvince (ST), Locality (L), Organization (O), OrganizationalUnit (OU), and CommonName (CN). The CommonName is typically a DNS name with TLS certificates. Example: {C: 'UK', ST: 'BC', L: 'Metro', O: 'Node Fans', OU: 'Docs', CN: 'example.com'}.
- issuer <Object> The certificate issuer, described in the same terms as the subject.
- valid_from <string> The date-time the certificate is valid from.
- valid_to <string> The date-time the certificate is valid to.
- serialNumber <string> The certificate serial number, as a hex string. Example: 'B9B0D332A1AA5635'.
- fingerprint <string> The SHA-1 digest of the DER encoded certificate. It is returned as a :-separated hexadecimal string. Example: '2A:7A:C2:DD:...'.
- fingerprint256 <string> The SHA-256 digest of the DER encoded certificate. It is returned as a :-separated hexadecimal string. Example: '2A:7A:C2:DD:...'.
- fingerprint512 <string> The SHA-512 digest of the DER encoded certificate. It is returned as a :-separated hexadecimal string. Example: '2A:7A:C2:DD:...'.
- ext_key_usage <Array> (Optional) The extended key usage, a set of OIDs.
- subjectaltname <string> (Optional) A string containing concatenated names for the subject, an alternative to the subject names.
- infoAccess <Array> (Optional) An array describing the AuthorityInfoAccess, used with OCSP.
- issuerCertificate <Object> (Optional) The issuer certificate object. For self-signed certificates, this may be a circular reference.
The certificate may contain information about the public key, depending on the key type.
For RSA keys, the following properties may be defined:
- bits <number> The RSA bit size. Example: 1024.
- exponent <string> The RSA exponent, as a string in hexadecimal number notation. Example: '0x010001'.
- modulus <string> The RSA modulus, as a hexadecimal string. Example: 'B56CE45CB7...'.
- pubkey <Buffer> The public key.
For EC keys, the following properties may be defined:
- pubkey <Buffer> The public key.
- bits <number> The key size in bits. Example: 256.
- asn1Curve <string> (Optional) The ASN.1 name of the OID of the elliptic curve. Well-known curves are identified by an OID. While it is unusual, it is possible that the curve is identified by its mathematical properties, in which case it will not have an OID. Example: 'prime256v1'.
- nistCurve <string> (Optional) The NIST name for the elliptic curve, if it has one (not all well-known curves have been assigned names by NIST). Example: 'P-256'.
Example certificate:
```js
{ subject:
   { OU: [ 'Domain Control Validated', 'PositiveSSL Wildcard' ],
     CN: '*.nodejs.org' },
  issuer:
   { C: 'GB',
     ST: 'Greater Manchester',
     L: 'Salford',
     O: 'COMODO CA Limited',
     CN: 'COMODO RSA Domain Validation Secure Server CA' },
  subjectaltname: 'DNS:*.nodejs.org, DNS:nodejs.org',
  infoAccess:
   { 'CA Issuers - URI':
      [ 'http://crt.comodoca.com/COMODORSADomainValidationSecureServerCA.crt' ],
     'OCSP - URI': [ 'http://ocsp.comodoca.com' ] },
  modulus: 'B56CE45CB740B09A13F64AC543B712FF9EE8E4C284B542A1708A27E82A8D151CA178153E12E6DDA15BF70FFD96CB8A88618641BDFCCA03527E665B70D779C8A349A6F88FD4EF6557180BD4C98192872BCFE3AF56E863C09DDD8BC1EC58DF9D94F914F0369102B2870BECFA1348A0838C9C49BD1C20124B442477572347047506B1FCD658A80D0C44BCC16BC5C5496CFE6E4A8428EF654CD3D8972BF6E5BFAD59C93006830B5EB1056BBB38B53D1464FA6E02BFDF2FF66CD949486F0775EC43034EC2602AEFBF1703AD221DAA2A88353C3B6A688EFE8387811F645CEED7B3FE46E1F8B9F59FAD028F349B9BC14211D5830994D055EEA3D547911E07A0ADDEB8A82B9188E58720D95CD478EEC9AF1F17BE8141BE80906F1A339445A7EB5B285F68039B0F294598A7D1C0005FC22B5271B0752F58CCDEF8C8FD856FB7AE21C80B8A2CE983AE94046E53EDE4CB89F42502D31B5360771C01C80155918637490550E3F555E2EE75CC8C636DDE3633CFEDD62E91BF0F7688273694EEEBA20C2FC9F14A2A435517BC1D7373922463409AB603295CEB0BB53787A334C9CA3CA8B30005C5A62FC0715083462E00719A8FA3ED0A9828C3871360A73F8B04A4FC1E71302844E9BB9940B77E745C9D91F226D71AFCAD4B113AAF68D92B24DDB4A2136B55A1CD1ADF39605B63CB639038ED0F4C987689866743A68769CC55847E4A06D6E2E3F1',
  exponent: '0x10001',
  pubkey: <Buffer ... >,
  valid_from: 'Aug 14 00:00:00 2017 GMT',
  valid_to: 'Nov 20 23:59:59 2019 GMT',
  fingerprint: '01:02:59:D9:C3:D2:0D:08:F7:82:4E:44:A4:B4:53:C5:E2:3A:87:4D',
  fingerprint256: '69:AE:1A:6A:D4:3D:C6:C1:1B:EA:C6:23:DE:BA:2A:14:62:62:93:5C:7A:EA:06:41:9B:0B:BC:87:CE:48:4E:02',
  fingerprint512: '19:2B:3E:C3:B3:5B:32:E8:AE:BB:78:97:27:E4:BA:6C:39:C9:92:79:4F:31:46:39:E2:70:E5:5F:89:42:17:C9:E8:64:CA:FF:BB:72:56:73:6E:28:8A:92:7E:A3:2A:15:8B:C2:E0:45:CA:C3:BC:EA:40:52:EC:CA:A2:68:CB:32',
  ext_key_usage: [ '1.3.6.1.5.5.7.3.1', '1.3.6.1.5.5.7.3.2' ],
  serialNumber: '66593D57F20CBC573E433381B5FEC280',
  raw: <Buffer ... > }
```

tlsSocket.getPeerFinished()#
- Returns: <Buffer> | <undefined> The latest Finished message that is expected or has actually been received from the socket as part of a SSL/TLS handshake, or undefined if there is no Finished message so far.
As the Finished messages are message digests of the complete handshake (with a total of 192 bits for TLS 1.0 and more for SSL 3.0), they can be used for external authentication procedures when the authentication provided by SSL/TLS is not desired or is not enough.
Corresponds to the SSL_get_peer_finished routine in OpenSSL and may be used to implement the tls-unique channel binding from RFC 5929.
tlsSocket.getPeerX509Certificate()#
- Returns: <X509Certificate>
Returns the peer certificate as an <X509Certificate> object.
If there is no peer certificate, or the socket has been destroyed, undefined will be returned.
tlsSocket.getProtocol()#
Returns a string containing the negotiated SSL/TLS protocol version of the current connection. The value 'unknown' will be returned for connected sockets that have not completed the handshaking process. The value null will be returned for server sockets or disconnected client sockets.
Protocol versions are:
- 'SSLv3'
- 'TLSv1'
- 'TLSv1.1'
- 'TLSv1.2'
- 'TLSv1.3'
See the OpenSSL SSL_get_version documentation for more information.
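The protocol strings above are ordered from oldest to newest, so a minimum-version policy check can compare positions in that list. A sketch (the helper is illustrative, not part of the API):

```javascript
// Sketch: check a tlsSocket.getProtocol() result against a minimum version.
// Returns false for null or 'unknown' (server socket or incomplete handshake).
const PROTOCOLS = ['SSLv3', 'TLSv1', 'TLSv1.1', 'TLSv1.2', 'TLSv1.3'];
function meetsMinimumProtocol(protocol, min = 'TLSv1.2') {
  const got = PROTOCOLS.indexOf(protocol);
  return got !== -1 && got >= PROTOCOLS.indexOf(min);
}
```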
tlsSocket.getSession()#
- Type: <Buffer>
Returns the TLS session data or undefined if no session was negotiated. On the client, the data can be provided to the session option of tls.connect() to resume the connection. On the server, it may be useful for debugging.
See Session Resumption for more information.
Note: getSession() works only for TLSv1.2 and below. For TLSv1.3, applications must use the 'session' event (it also works for TLSv1.2 and below).
tlsSocket.getSharedSigalgs()#
- Returns: <Array> List of signature algorithms shared between the server and the client in the order of decreasing preference.
See SSL_get_shared_sigalgs for more information.
tlsSocket.getTLSTicket()#
- Type: <Buffer>
For a client, returns the TLS session ticket if one is available, or undefined. For a server, always returns undefined.
It may be useful for debugging.
See Session Resumption for more information.
tlsSocket.getX509Certificate()#
- Returns: <X509Certificate>
Returns the local certificate as an <X509Certificate> object.
If there is no local certificate, or the socket has been destroyed, undefined will be returned.
tlsSocket.isSessionReused()#
- Returns: <boolean>
true if the session was reused, false otherwise.
See Session Resumption for more information.
tlsSocket.localAddress#
- Type: <string>
Returns the string representation of the local IP address.
tlsSocket.localPort#
- Type: <integer>
Returns the numeric representation of the local port.
tlsSocket.remoteAddress#
- Type: <string>
Returns the string representation of the remote IP address. For example, '74.125.127.100' or '2001:4860:a005::68'.
tlsSocket.remoteFamily#
- Type: <string>
Returns the string representation of the remote IP family. 'IPv4' or 'IPv6'.
tlsSocket.remotePort#
- Type: <integer>
Returns the numeric representation of the remote port. For example, 443.
tlsSocket.renegotiate(options, callback)#
History
| Version | Changes |
|---|---|
| v18.0.0 | Passing an invalid callback to the |
| v0.11.8 | Added in: v0.11.8 |
- options <Object>
  - rejectUnauthorized <boolean> If not false, the server certificate is verified against the list of supplied CAs. An 'error' event is emitted if verification fails; err.code contains the OpenSSL error code. Default: true.
  - requestCert
- callback <Function> If renegotiate() returned true, callback is attached once to the 'secure' event. If renegotiate() returned false, callback will be called in the next tick with an error, unless the tlsSocket has been destroyed, in which case callback will not be called at all.
- Returns: <boolean> true if renegotiation was initiated, false otherwise.
The tlsSocket.renegotiate() method initiates a TLS renegotiation process. Upon completion, the callback function will be passed a single argument that is either an Error (if the request failed) or null.
This method can be used to request a peer's certificate after the secure connection has been established.
When running as the server, the socket will be destroyed with an error after the handshakeTimeout timeout expires.
For TLSv1.3, renegotiation cannot be initiated because it is not supported by the protocol.
tlsSocket.setKeyCert(context)#
- context <Object> | <tls.SecureContext> An object containing at least key and cert properties from the tls.createSecureContext() options, or a TLS context object created with tls.createSecureContext() itself.
The tlsSocket.setKeyCert() method sets the private key and certificate to use for the socket. This is mainly useful if you wish to select a server certificate from a TLS server's ALPNCallback.
tlsSocket.setMaxSendFragment(size)#
- size <number> The maximum TLS fragment size. The maximum value is 16384. Default: 16384.
- Returns: <boolean>
The tlsSocket.setMaxSendFragment() method sets the maximum TLS fragment size. Returns true if setting the limit succeeded; false otherwise.
Smaller fragment sizes decrease the buffering latency on the client: larger fragments are buffered by the TLS layer until the entire fragment is received and its integrity is verified; large fragments can span multiple roundtrips and their processing can be delayed due to packet loss or reordering. However, smaller fragments add extra TLS framing bytes and CPU overhead, which may decrease overall server throughput.
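The trade-off can be estimated: a payload of N bytes is split into ceil(N / fragmentSize) records, each adding a fixed number of framing bytes. A sketch, where the per-record figure of 29 bytes is an illustrative assumption (the actual overhead depends on the negotiated cipher suite):

```javascript
// Sketch: estimate how many TLS records a payload needs at a given
// fragment size, and the total framing overhead those records add.
// PER_RECORD_BYTES is an assumed figure for illustration only.
const PER_RECORD_BYTES = 29;
function estimateRecords(payloadBytes, fragmentSize = 16384) {
  const records = Math.ceil(payloadBytes / fragmentSize);
  return { records, overheadBytes: records * PER_RECORD_BYTES };
}
```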
tls.checkServerIdentity(hostname, cert)#
History
| Version | Changes |
|---|---|
| v17.3.1, v16.13.2, v14.18.3, v12.22.9 | Support for |
| v0.8.4 | Added in: v0.8.4 |
- hostname <string> The host name or IP address to verify the certificate against.
- cert <Object> A certificate object representing the peer's certificate.
- Returns: <Error> | <undefined>
Verifies the certificate cert is issued to hostname.
Returns an <Error> object, populating it with reason, host, and cert on failure. On success, returns <undefined>.
This function is intended to be used in combination with the checkServerIdentity option that can be passed to tls.connect() and as such operates on a certificate object. For other purposes, consider using x509.checkHost() instead.
This function can be overwritten by providing an alternative function as the options.checkServerIdentity option that is passed to tls.connect(). The overwriting function can call tls.checkServerIdentity() of course, to augment the checks done with additional verification.
This function is only called if the certificate passed all other checks, such as being issued by a trusted CA (options.ca).
Earlier versions of Node.js incorrectly accepted certificates for a given hostname if a matching uniformResourceIdentifier subject alternative name was present (see CVE-2021-44531). Applications that wish to accept uniformResourceIdentifier subject alternative names can use a custom options.checkServerIdentity function that implements the desired behavior.
tls.connect(options[, callback])#
History
| Version | Changes |
|---|---|
| v15.1.0, v14.18.0 | Added |
| v14.1.0, v13.14.0 | The |
| v13.6.0, v12.16.0 | The |
| v12.9.0 | Support the |
| v12.4.0 | The |
| v12.2.0 | The |
| v11.8.0, v10.16.0 | The |
| v8.0.0 | The |
| v8.0.0 | The |
| v5.0.0 | ALPN options are supported now. |
| v5.3.0, v4.7.0 | The |
| v0.11.3 | Added in: v0.11.3 |
- options <Object>
  - enableTrace: See tls.createServer()
  - host <string> Host the client should connect to. Default: 'localhost'.
  - port <number> Port the client should connect to.
  - path <string> Creates Unix socket connection to path. If this option is specified, host and port are ignored.
  - socket <stream.Duplex> Establish secure connection on a given socket rather than creating a new socket. Typically, this is an instance of net.Socket, but any Duplex stream is allowed. If this option is specified, path, host, and port are ignored, except for certificate validation. Usually, a socket is already connected when passed to tls.connect(), but it can be connected later. Connection/disconnection/destruction of socket is the user's responsibility; calling tls.connect() will not cause net.connect() to be called.
  - allowHalfOpen <boolean> If set to false, then the socket will automatically end the writable side when the readable side ends. If the socket option is set, this option has no effect. See the allowHalfOpen option of net.Socket for details. Default: false.
  - rejectUnauthorized <boolean> If not false, the server certificate is verified against the list of supplied CAs. An 'error' event is emitted if verification fails; err.code contains the OpenSSL error code. Default: true.
  - pskCallback <Function> For TLS-PSK negotiation, see Pre-shared keys.
  - ALPNProtocols <string[]> | <Buffer> | <TypedArray> | <DataView> An array of strings, or a single Buffer, TypedArray, or DataView containing the supported ALPN protocols. Buffers should have the format [len][name][len][name]..., e.g. '\x08http/1.1\x08http/1.0', where the len byte is the length of the next protocol name. Passing an array is usually much simpler, e.g. ['http/1.1', 'http/1.0']. Protocols earlier in the list have higher preference than those later.
  - servername <string> Server name for the SNI (Server Name Indication) TLS extension. It is the name of the host being connected to, and must be a host name, and not an IP address. It can be used by a multi-homed server to choose the correct certificate to present to the client, see the SNICallback option to tls.createServer().
  - checkServerIdentity(servername, cert) <Function> A callback function to be used (instead of the builtin tls.checkServerIdentity() function) when checking the server's host name (or the provided servername when explicitly set) against the certificate. This should return an <Error> if verification fails. The method should return undefined if the servername and cert are verified.
  - session <Buffer> A Buffer instance, containing TLS session.
  - requestOCSP <boolean> If true, specifies that the OCSP status request extension will be added to the client hello and an 'OCSPResponse' event will be emitted on the socket before establishing a secure communication.
  - minDHSize <number> Minimum size of the DH parameter in bits to accept a TLS connection. When a server offers a DH parameter with a size less than minDHSize, the TLS connection is destroyed and an error is thrown. Default: 1024.
  - highWaterMark <number> Consistent with the readable stream highWaterMark parameter. Default: 16 * 1024.
  - timeout <number> If set and if a socket is created internally, will call socket.setTimeout(timeout) after the socket is created, but before it starts the connection.
  - secureContext: TLS context object created with tls.createSecureContext(). If a secureContext is not provided, one will be created by passing the entire options object to tls.createSecureContext().
  - onread <Object> If the socket option is missing, incoming data is stored in a single buffer and passed to the supplied callback when data arrives on the socket, otherwise the option is ignored. See the onread option of net.Socket for details.
  - ...: tls.createSecureContext() options that are used if the secureContext option is missing, otherwise they are ignored.
  - ...: Any socket.connect() option not already listed.
- callback <Function>
- Returns: <tls.TLSSocket>
The callback function, if specified, will be added as a listener for the 'secureConnect' event.
tls.connect() returns a tls.TLSSocket object.
Unlike the https API, tls.connect() does not enable the SNI (Server Name Indication) extension by default, which may cause some servers to return an incorrect certificate or reject the connection altogether. To enable SNI, set the servername option in addition to host.
The following illustrates a client for the echo server example from tls.createServer():
```js
// Assumes an echo server that is listening on port 8000.
import { connect } from 'node:tls';
import { readFileSync } from 'node:fs';
import { stdin } from 'node:process';

const options = {
  // Necessary only if the server requires client certificate authentication.
  key: readFileSync('client-key.pem'),
  cert: readFileSync('client-cert.pem'),

  // Necessary only if the server uses a self-signed certificate.
  ca: [ readFileSync('server-cert.pem') ],

  // Necessary only if the server's cert isn't for "localhost".
  checkServerIdentity: () => { return null; },
};

const socket = connect(8000, options, () => {
  console.log('client connected',
              socket.authorized ? 'authorized' : 'unauthorized');
  stdin.pipe(socket);
  stdin.resume();
});
socket.setEncoding('utf8');
socket.on('data', (data) => {
  console.log(data);
});
socket.on('end', () => {
  console.log('server ends connection');
});
```

```js
// Assumes an echo server that is listening on port 8000.
const { connect } = require('node:tls');
const { readFileSync } = require('node:fs');

const options = {
  // Necessary only if the server requires client certificate authentication.
  key: readFileSync('client-key.pem'),
  cert: readFileSync('client-cert.pem'),

  // Necessary only if the server uses a self-signed certificate.
  ca: [ readFileSync('server-cert.pem') ],

  // Necessary only if the server's cert isn't for "localhost".
  checkServerIdentity: () => { return null; },
};

const socket = connect(8000, options, () => {
  console.log('client connected',
              socket.authorized ? 'authorized' : 'unauthorized');
  process.stdin.pipe(socket);
  process.stdin.resume();
});
socket.setEncoding('utf8');
socket.on('data', (data) => {
  console.log(data);
});
socket.on('end', () => {
  console.log('server ends connection');
});
```
To generate the certificate and key for this example, run:
```bash
openssl req -x509 -newkey rsa:2048 -nodes -sha256 -subj '/CN=localhost' \
  -keyout client-key.pem -out client-cert.pem
```

Then, to generate the server-cert.pem certificate for this example, run:

```bash
openssl pkcs12 -certpbe AES-256-CBC -export -out server-cert.pem \
  -inkey client-key.pem -in client-cert.pem
```

tls.connect(path[, options][, callback])#
- path <string> Default value for options.path.
- options <Object> See tls.connect().
- callback <Function> See tls.connect().
- Returns: <tls.TLSSocket>
Same as tls.connect() except that path can be provided as an argument instead of an option.
A path option, if specified, will take precedence over the path argument.
tls.connect(port[, host][, options][, callback])#
- port <number> Default value for options.port.
- host <string> Default value for options.host.
- options <Object> See tls.connect().
- callback <Function> See tls.connect().
- Returns: <tls.TLSSocket>
Same as tls.connect() except that port and host can be provided as arguments instead of options.
A port or host option, if specified, will take precedence over any port or host argument.
tls.createSecureContext([options])#
History
| Version | Changes |
|---|---|
| v22.9.0, v20.18.0 | The |
| v22.4.0, v20.16.0 | The |
| v19.8.0, v18.16.0 | The |
| v12.12.0 | Added |
| v12.11.0 | Added |
| v12.0.0 | TLSv1.3 support added. |
| v11.5.0 | The |
| v11.4.0, v10.16.0 | The |
| v10.0.0 | The |
| v9.3.0 | The |
| v9.0.0 | The |
| v7.3.0 | If the |
| v5.2.0 | The |
| v0.11.13 | Added in: v0.11.13 |
- options <Object>
  - allowPartialTrustChain <boolean> Treat intermediate (non-self-signed) certificates in the trust CA certificate list as trusted.
  - ca <string> | <string[]> | <Buffer> | <Buffer[]> Optionally override the trusted CA certificates. If not specified, the CA certificates trusted by default are the same as the ones returned by tls.getCACertificates() using the default type. If specified, the default list would be completely replaced (instead of being concatenated) by the certificates in the ca option. Users need to concatenate manually if they wish to add additional certificates instead of completely overriding the default. The value can be a string or Buffer, or an Array of strings and/or Buffers. Any string or Buffer can contain multiple PEM CAs concatenated together. The peer's certificate must be chainable to a CA trusted by the server for the connection to be authenticated. When using certificates that are not chainable to a well-known CA, the certificate's CA must be explicitly specified as trusted or the connection will fail to authenticate. If the peer uses a certificate that doesn't match or chain to one of the default CAs, use the ca option to provide a CA certificate that the peer's certificate can match or chain to. For self-signed certificates, the certificate is its own CA, and must be provided. For PEM encoded certificates, supported types are "TRUSTED CERTIFICATE", "X509 CERTIFICATE", and "CERTIFICATE".
  - cert <string> | <string[]> | <Buffer> | <Buffer[]> Cert chains in PEM format. One cert chain should be provided per private key. Each cert chain should consist of the PEM formatted certificate for a provided private key, followed by the PEM formatted intermediate certificates (if any), in order, and not including the root CA (the root CA must be pre-known to the peer, see ca). When providing multiple cert chains, they do not have to be in the same order as their private keys in key. If the intermediate certificates are not provided, the peer will not be able to validate the certificate, and the handshake will fail.
  - sigalgs <string> Colon-separated list of supported signature algorithms. The list can contain digest algorithms (SHA256, MD5 etc.), public key algorithms (RSA-PSS, ECDSA etc.), combinations of both (e.g. 'RSA+SHA384'), or TLS v1.3 scheme names (e.g. rsa_pss_pss_sha512). See OpenSSL man pages for more info.
  - ciphers <string> Cipher suite specification, replacing the default. For more information, see Modifying the default TLS cipher suite. Permitted ciphers can be obtained via tls.getCiphers(). Cipher names must be uppercased in order for OpenSSL to accept them.
  - clientCertEngine <string> Name of an OpenSSL engine which can provide the client certificate. Deprecated.
  - crl <string> | <string[]> | <Buffer> | <Buffer[]> PEM formatted CRLs (Certificate Revocation Lists).
  - dhparam <string> | <Buffer> 'auto' or custom Diffie-Hellman parameters, required for non-ECDHE perfect forward secrecy. If omitted or invalid, the parameters are silently discarded and DHE ciphers will not be available. ECDHE-based perfect forward secrecy will still be available.
  - ecdhCurve <string> A string describing a named curve or a colon separated list of curve NIDs or names, for example P-521:P-384:P-256, to use for ECDH key agreement. Set to auto to select the curve automatically. Use crypto.getCurves() to obtain a list of available curve names. On recent releases, openssl ecparam -list_curves will also display the name and description of each available elliptic curve. Default: tls.DEFAULT_ECDH_CURVE.
  - honorCipherOrder <boolean> Attempt to use the server's cipher suite preferences instead of the client's. When true, causes SSL_OP_CIPHER_SERVER_PREFERENCE to be set in secureOptions, see OpenSSL Options for more information.
  - key <string> | <string[]> | <Buffer> | <Buffer[]> | <Object[]> Private keys in PEM format. PEM allows the option of private keys being encrypted. Encrypted keys will be decrypted with options.passphrase. Multiple keys using different algorithms can be provided either as an array of unencrypted key strings or buffers, or an array of objects in the form {pem: <string|buffer>[, passphrase: <string>]}. The object form can only occur in an array. object.passphrase is optional. Encrypted keys will be decrypted with object.passphrase if provided, or options.passphrase if it is not.
  - privateKeyEngine <string> Name of an OpenSSL engine to get private key from. Should be used together with privateKeyIdentifier. Deprecated.
  - privateKeyIdentifier <string> Identifier of a private key managed by an OpenSSL engine. Should be used together with privateKeyEngine. Should not be set together with key, because both options define a private key in different ways. Deprecated.
  - maxVersion <string> Optionally set the maximum TLS version to allow. One of 'TLSv1.3', 'TLSv1.2', 'TLSv1.1', or 'TLSv1'. Cannot be specified along with the secureProtocol option; use one or the other. Default: tls.DEFAULT_MAX_VERSION.
  - minVersion <string> Optionally set the minimum TLS version to allow. One of 'TLSv1.3', 'TLSv1.2', 'TLSv1.1', or 'TLSv1'. Cannot be specified along with the secureProtocol option; use one or the other. Avoid setting to less than TLSv1.2, but it may be required for interoperability. Versions before TLSv1.2 may require downgrading the OpenSSL Security Level. Default: tls.DEFAULT_MIN_VERSION.
  - passphrase <string> Shared passphrase used for a single private key and/or a PFX.
  - pfx <string> | <string[]> | <Buffer> | <Buffer[]> | <Object[]> PFX or PKCS12 encoded private key and certificate chain. pfx is an alternative to providing key and cert individually. PFX is usually encrypted, and if it is, passphrase will be used to decrypt it. Multiple PFX can be provided either as an array of unencrypted PFX buffers, or an array of objects in the form {buf: <string|buffer>[, passphrase: <string>]}. The object form can only occur in an array. object.passphrase is optional. Encrypted PFX will be decrypted with object.passphrase if provided, or options.passphrase if it is not.
  - secureOptions <number> Optionally affect the OpenSSL protocol behavior, which is not usually necessary. This should be used carefully if at all! Value is a numeric bitmask of the SSL_OP_* options from OpenSSL Options.
  - secureProtocol <string> Legacy mechanism to select the TLS protocol version to use, it does not support independent control of the minimum and maximum version, and does not support limiting the protocol to TLSv1.3. Use minVersion and maxVersion instead. The possible values are listed as SSL_METHODS, use the function names as strings. For example, use 'TLSv1_1_method' to force TLS version 1.1, or 'TLS_method' to allow any TLS protocol version up to TLSv1.3. It is not recommended to use TLS versions less than 1.2, but it may be required for interoperability. Default: none, see minVersion.
  - sessionIdContext <string> Opaque identifier used by servers to ensure session state is not shared between applications. Unused by clients.
  - ticketKeys <Buffer> 48-bytes of cryptographically strong pseudorandom data. See Session Resumption for more information.
  - sessionTimeout <number> The number of seconds after which a TLS session created by the server will no longer be resumable. See Session Resumption for more information. Default: 300.
tls.createServer() sets the default value of the honorCipherOrder option to true; other APIs that create secure contexts leave it unset.
tls.createServer() uses a 128 bit truncated SHA1 hash value generated from process.argv as the default value of the sessionIdContext option; other APIs that create secure contexts have no default value.
The tls.createSecureContext() method creates a SecureContext object. It is usable as an argument to several tls APIs, such as server.addContext(), but has no public methods. The tls.Server constructor and the tls.createServer() method do not support the secureContext option.
A key is required for ciphers that use certificates. Either key or pfx can be used to provide it.
If the ca option is not given, then Node.js will default to using Mozilla's publicly trusted list of CAs.
Custom DHE parameters are discouraged in favor of the new dhparam: 'auto' option. When set to 'auto', well-known DHE parameters of sufficient strength will be selected automatically. Otherwise, if necessary, openssl dhparam can be used to create custom parameters. The key length must be greater than or equal to 1024 bits or else an error will be thrown. Although 1024 bits is permissible, use 2048 bits or larger for stronger security.
tls.createServer([options][, secureConnectionListener])#
History
| Version | Changes |
|---|---|
| v22.4.0, v20.16.0 | The |
| v19.0.0 | If |
| v20.4.0, v18.19.0 | The |
| v12.3.0 | The |
| v9.3.0 | The |
| v8.0.0 | The |
| v5.0.0 | ALPN options are supported now. |
| v0.3.2 | Added in: v0.3.2 |
- options <Object>
  - ALPNProtocols <string[]> | <Buffer> | <TypedArray> | <DataView> An array of strings, or a single Buffer, TypedArray, or DataView containing the supported ALPN protocols. Buffers should have the format [len][name][len][name]..., e.g. 0x05hello0x05world, where the first byte is the length of the next protocol name. Passing an array is usually much simpler, e.g. ['hello', 'world']. (Protocols should be ordered by their priority.)
  - ALPNCallback <Function> If set, this will be called when a client opens a connection using the ALPN extension. One argument will be passed to the callback: an object containing servername and protocols fields, respectively containing the server name from the SNI extension (if any) and an array of ALPN protocol name strings. The callback must return either one of the strings listed in protocols, which will be returned to the client as the selected ALPN protocol, or undefined, to reject the connection with a fatal alert. If a string is returned that does not match one of the client's ALPN protocols, an error will be thrown. This option cannot be used with the ALPNProtocols option, and setting both options will throw an error.
  - clientCertEngine <string> Name of an OpenSSL engine which can provide the client certificate. Deprecated.
  - enableTrace <boolean> If true, tls.TLSSocket.enableTrace() will be called on new connections. Tracing can be enabled after the secure connection is established, but this option must be used to trace the secure connection setup. Default: false.
  - handshakeTimeout <number> Abort the connection if the SSL/TLS handshake does not finish in the specified number of milliseconds. A 'tlsClientError' is emitted on the tls.Server object whenever a handshake times out. Default: 120000 (120 seconds).
  - rejectUnauthorized <boolean> If not false the server will reject any connection which is not authorized with the list of supplied CAs. This option only has an effect if requestCert is true. Default: true.
  - requestCert <boolean> If true the server will request a certificate from clients that connect and attempt to verify that certificate. Default: false.
  - sessionTimeout <number> The number of seconds after which a TLS session created by the server will no longer be resumable. See Session Resumption for more information. Default: 300.
  - SNICallback(servername, callback) <Function> A function that will be called if the client supports SNI TLS extension. Two arguments will be passed when called: servername and callback. callback is an error-first callback that takes two optional arguments: error and ctx. ctx, if provided, is a SecureContext instance. tls.createSecureContext() can be used to get a proper SecureContext. If callback is called with a falsy ctx argument, the default secure context of the server will be used. If SNICallback wasn't provided the default callback with high-level API will be used (see below).
  - ticketKeys <Buffer> 48-bytes of cryptographically strong pseudorandom data. See Session Resumption for more information.
  - pskCallback <Function> For TLS-PSK negotiation, see Pre-shared keys.
  - pskIdentityHint <string> Optional hint to send to a client to help with selecting the identity during TLS-PSK negotiation. Will be ignored in TLS 1.3. Upon failing to set pskIdentityHint, 'tlsClientError' will be emitted with the 'ERR_TLS_PSK_SET_IDENTITY_HINT_FAILED' code.
  - ...: Any tls.createSecureContext() option can be provided. For servers, the identity options (pfx, key/cert, or pskCallback) are usually required.
  - ...: Any net.createServer() option can be provided.
- secureConnectionListener <Function>
- Returns: <tls.Server>
Creates a new tls.Server. The secureConnectionListener, if provided, is automatically set as a listener for the 'secureConnection' event.
The ticketKeys option is automatically shared between node:cluster module workers.
The following illustrates a simple echo server:
```js
import { createServer } from 'node:tls';
import { readFileSync } from 'node:fs';

const options = {
  key: readFileSync('server-key.pem'),
  cert: readFileSync('server-cert.pem'),
  // This is necessary only if using client certificate authentication.
  requestCert: true,
  // This is necessary only if the client uses a self-signed certificate.
  ca: [readFileSync('client-cert.pem')],
};

const server = createServer(options, (socket) => {
  console.log('server connected',
              socket.authorized ? 'authorized' : 'unauthorized');
  socket.write('welcome!\n');
  socket.setEncoding('utf8');
  socket.pipe(socket);
});
server.listen(8000, () => {
  console.log('server bound');
});
```

```js
const { createServer } = require('node:tls');
const { readFileSync } = require('node:fs');

const options = {
  key: readFileSync('server-key.pem'),
  cert: readFileSync('server-cert.pem'),
  // This is necessary only if using client certificate authentication.
  requestCert: true,
  // This is necessary only if the client uses a self-signed certificate.
  ca: [readFileSync('client-cert.pem')],
};

const server = createServer(options, (socket) => {
  console.log('server connected',
              socket.authorized ? 'authorized' : 'unauthorized');
  socket.write('welcome!\n');
  socket.setEncoding('utf8');
  socket.pipe(socket);
});
server.listen(8000, () => {
  console.log('server bound');
});
```
To generate the certificate and key for this example, run:
```bash
openssl req -x509 -newkey rsa:2048 -nodes -sha256 -subj '/CN=localhost' \
  -keyout server-key.pem -out server-cert.pem
```

Then, to generate the `client-cert.pem` certificate for this example, run:
```bash
openssl pkcs12 -certpbe AES-256-CBC -export -out client-cert.pem \
  -inkey server-key.pem -in server-cert.pem
```

The server can be tested by connecting to it using the example client from `tls.connect()`.
tls.setDefaultCACertificates(certs)#
- `certs` <string[]> | <ArrayBufferView[]> An array of CA certificates in PEM format.
Sets the default CA certificates used by Node.js TLS clients. If the provided certificates are parsed successfully, they will become the default CA certificate list returned by `tls.getCACertificates()` and used by subsequent TLS connections that don't specify their own CA certificates. The certificates will be deduplicated before being set as the default.

This function only affects the current Node.js thread. Previous sessions cached by the HTTPS agent won't be affected by this change, so this method should be called before any unwanted cachable TLS connections are made.
To use system CA certificates as the default:
```js
const tls = require('node:tls');
tls.setDefaultCACertificates(tls.getCACertificates('system'));
```

```js
import tls from 'node:tls';
tls.setDefaultCACertificates(tls.getCACertificates('system'));
```
This function completely replaces the default CA certificate list. To add additional certificates to the existing defaults, get the current certificates and append to them:
```js
const tls = require('node:tls');
const currentCerts = tls.getCACertificates('default');
const additionalCerts = ['-----BEGIN CERTIFICATE-----\n...'];
tls.setDefaultCACertificates([...currentCerts, ...additionalCerts]);
```

```js
import tls from 'node:tls';
const currentCerts = tls.getCACertificates('default');
const additionalCerts = ['-----BEGIN CERTIFICATE-----\n...'];
tls.setDefaultCACertificates([...currentCerts, ...additionalCerts]);
```
tls.getCACertificates([type])#
- `type` <string> | <undefined> The type of CA certificates that will be returned. Valid values are `"default"`, `"system"`, `"bundled"`, and `"extra"`. **Default:** `"default"`.
- Returns: <string[]> An array of PEM-encoded certificates. The array may contain duplicates if the same certificate is repeatedly stored in multiple sources.

Returns an array containing the CA certificates from various sources, depending on `type`:

- `"default"`: return the CA certificates that will be used by the Node.js TLS clients by default.
  - When `--use-bundled-ca` is enabled (default), or `--use-openssl-ca` is not enabled, this would include CA certificates from the bundled Mozilla CA store.
  - When `--use-system-ca` is enabled, this would also include certificates from the system's trusted store.
  - When `NODE_EXTRA_CA_CERTS` is used, this would also include certificates loaded from the specified file.
- `"system"`: return the CA certificates that are loaded from the system's trusted store, according to rules set by `--use-system-ca`. This can be used to get the certificates from the system when `--use-system-ca` is not enabled.
- `"bundled"`: return the CA certificates from the bundled Mozilla CA store. This would be the same as `tls.rootCertificates`.
- `"extra"`: return the CA certificates loaded from `NODE_EXTRA_CA_CERTS`. It's an empty array if `NODE_EXTRA_CA_CERTS` is not set.
tls.getCiphers()#
- Returns:<string[]>
Returns an array with the names of the supported TLS ciphers. The names are lower-case for historical reasons, but must be uppercased to be used in the `ciphers` option of `tls.createSecureContext()`.
Not all supported ciphers are enabled by default. See Modifying the default TLS cipher suite.

Cipher names that start with `'tls_'` are for TLSv1.3; all the others are for TLSv1.2 and below.
```js
console.log(tls.getCiphers()); // ['aes128-gcm-sha256', 'aes128-sha', ...]
```

tls.rootCertificates#
- Type:<string[]>
An immutable array of strings representing the root certificates (in PEM format) from the bundled Mozilla CA store as supplied by the current Node.js version.

The bundled CA store, as supplied by Node.js, is a snapshot of the Mozilla CA store that is fixed at release time. It is identical on all supported platforms.

To get the actual CA certificates used by the current Node.js instance, which may include certificates loaded from the system store (if `--use-system-ca` is used) or loaded from a file indicated by `NODE_EXTRA_CA_CERTS`, use `tls.getCACertificates()`.
tls.DEFAULT_ECDH_CURVE#
History
| Version | Changes |
|---|---|
| v10.0.0 | Default value changed to |
| v0.11.13 | Added in: v0.11.13 |
The default curve name to use for ECDH key agreement in a TLS server. The default value is `'auto'`. See `tls.createSecureContext()` for further information.
tls.DEFAULT_MAX_VERSION#
- Type: <string> The default value of the `maxVersion` option of `tls.createSecureContext()`. It can be assigned any of the supported TLS protocol versions, `'TLSv1.3'`, `'TLSv1.2'`, `'TLSv1.1'`, or `'TLSv1'`. **Default:** `'TLSv1.3'`, unless changed using CLI options. Using `--tls-max-v1.2` sets the default to `'TLSv1.2'`. Using `--tls-max-v1.3` sets the default to `'TLSv1.3'`. If multiple of the options are provided, the highest maximum is used.
tls.DEFAULT_MIN_VERSION#
- Type: <string> The default value of the `minVersion` option of `tls.createSecureContext()`. It can be assigned any of the supported TLS protocol versions, `'TLSv1.3'`, `'TLSv1.2'`, `'TLSv1.1'`, or `'TLSv1'`. Versions before TLSv1.2 may require downgrading the OpenSSL Security Level. **Default:** `'TLSv1.2'`, unless changed using CLI options. Using `--tls-min-v1.0` sets the default to `'TLSv1'`. Using `--tls-min-v1.1` sets the default to `'TLSv1.1'`. Using `--tls-min-v1.3` sets the default to `'TLSv1.3'`. If multiple of the options are provided, the lowest minimum is used.
tls.DEFAULT_CIPHERS#
- Type: <string> The default value of the `ciphers` option of `tls.createSecureContext()`. It can be assigned any of the supported OpenSSL ciphers. Defaults to the content of `crypto.constants.defaultCoreCipherList`, unless changed using CLI options using `--tls-default-ciphers`.
Trace events#
Source Code:lib/trace_events.js
The `node:trace_events` module provides a mechanism to centralize tracing information generated by V8, Node.js core, and userspace code.

Tracing can be enabled with the `--trace-event-categories` command-line flag or by using the `node:trace_events` module. The `--trace-event-categories` flag accepts a list of comma-separated category names.
The available categories are:
- `node`: An empty placeholder.
- `node.async_hooks`: Enables capture of detailed `async_hooks` trace data. The `async_hooks` events have a unique `asyncId` and a special `triggerAsyncId` property.
- `node.bootstrap`: Enables capture of Node.js bootstrap milestones.
- `node.console`: Enables capture of `console.time()` and `console.count()` output.
- `node.threadpoolwork.sync`: Enables capture of trace data for threadpool synchronous operations, such as `blob`, `zlib`, `crypto`, and `node_api`.
- `node.threadpoolwork.async`: Enables capture of trace data for threadpool asynchronous operations, such as `blob`, `zlib`, `crypto`, and `node_api`.
- `node.dns.native`: Enables capture of trace data for DNS queries.
- `node.net.native`: Enables capture of trace data for network.
- `node.environment`: Enables capture of Node.js Environment milestones.
- `node.fs.sync`: Enables capture of trace data for file system sync methods.
- `node.fs_dir.sync`: Enables capture of trace data for file system sync directory methods.
- `node.fs.async`: Enables capture of trace data for file system async methods.
- `node.fs_dir.async`: Enables capture of trace data for file system async directory methods.
- `node.perf`: Enables capture of Performance API measurements.
- `node.perf.usertiming`: Enables capture of only Performance API User Timing measures and marks.
- `node.perf.timerify`: Enables capture of only Performance API timerify measurements.
- `node.promises.rejections`: Enables capture of trace data tracking the number of unhandled Promise rejections and handled-after-rejections.
- `node.vm.script`: Enables capture of trace data for the `node:vm` module's `runInNewContext()`, `runInContext()`, and `runInThisContext()` methods.
- `v8`: The V8 events are GC, compiling, and execution related.
- `node.http`: Enables capture of trace data for http request / response.
- `node.module_timer`: Enables capture of trace data for CJS Module loading.
By default the `node`, `node.async_hooks`, and `v8` categories are enabled.
```bash
node --trace-event-categories v8,node,node.async_hooks server.js
```

Prior versions of Node.js required the use of the `--trace-events-enabled` flag to enable trace events. This requirement has been removed. However, the `--trace-events-enabled` flag may still be used and will enable the `node`, `node.async_hooks`, and `v8` trace event categories by default.
```bash
node --trace-events-enabled

# is equivalent to

node --trace-event-categories v8,node,node.async_hooks
```

Alternatively, trace events may be enabled using the `node:trace_events` module:
```js
import { createTracing } from 'node:trace_events';
const tracing = createTracing({ categories: ['node.perf'] });
tracing.enable();  // Enable trace event capture for the 'node.perf' category

// do work

tracing.disable();  // Disable trace event capture for the 'node.perf' category
```

```js
const { createTracing } = require('node:trace_events');
const tracing = createTracing({ categories: ['node.perf'] });
tracing.enable();  // Enable trace event capture for the 'node.perf' category

// do work

tracing.disable();  // Disable trace event capture for the 'node.perf' category
```
Running Node.js with tracing enabled will produce log files that can be opened in the `chrome://tracing` tab of Chrome.

The logging file is by default called `node_trace.${rotation}.log`, where `${rotation}` is an incrementing log-rotation id. The filepath pattern can be specified with `--trace-event-file-pattern` that accepts a template string that supports `${rotation}` and `${pid}`:
```bash
node --trace-event-categories v8 --trace-event-file-pattern '${pid}-${rotation}.log' server.js
```

To guarantee that the log file is properly generated after signal events like `SIGINT`, `SIGTERM`, or `SIGBREAK`, make sure to have the appropriate handlers in your code, such as:
```js
process.on('SIGINT', function onSigint() {
  console.info('Received SIGINT.');
  process.exit(130);  // Or applicable exit code depending on OS and signal
});
```

The tracing system uses the same time source as the one used by `process.hrtime()`. However, the trace-event timestamps are expressed in microseconds, unlike `process.hrtime()` which returns nanoseconds.
The features from this module are not available in Worker threads.

The `node:trace_events` module#
Tracing object#
The `Tracing` object is used to enable or disable tracing for sets of categories. Instances are created using the `trace_events.createTracing()` method.

When created, the `Tracing` object is disabled. Calling the `tracing.enable()` method adds the categories to the set of enabled trace event categories. Calling `tracing.disable()` will remove the categories from the set of enabled trace event categories.
tracing.categories#
- Type: <string>

A comma-separated list of the trace event categories covered by this `Tracing` object.
tracing.disable()#
Disables this `Tracing` object.

Only trace event categories *not* covered by other enabled `Tracing` objects and *not* specified by the `--trace-event-categories` flag will be disabled.
```js
import { createTracing, getEnabledCategories } from 'node:trace_events';

const t1 = createTracing({ categories: ['node', 'v8'] });
const t2 = createTracing({ categories: ['node.perf', 'node'] });

t1.enable();
t2.enable();

// Prints 'node,node.perf,v8'
console.log(getEnabledCategories());

t2.disable(); // Will only disable emission of the 'node.perf' category

// Prints 'node,v8'
console.log(getEnabledCategories());
```

```js
const { createTracing, getEnabledCategories } = require('node:trace_events');

const t1 = createTracing({ categories: ['node', 'v8'] });
const t2 = createTracing({ categories: ['node.perf', 'node'] });

t1.enable();
t2.enable();

// Prints 'node,node.perf,v8'
console.log(getEnabledCategories());

t2.disable(); // Will only disable emission of the 'node.perf' category

// Prints 'node,v8'
console.log(getEnabledCategories());
```
tracing.enable()#
Enables this `Tracing` object for the set of categories covered by the `Tracing` object.
trace_events.createTracing(options)#
- `options` <Object>
  - `categories` <string[]> An array of trace category names. Values included in the array are coerced to a string when possible. An error will be thrown if the value cannot be coerced.
- Returns:<Tracing>.
Creates and returns a `Tracing` object for the given set of `categories`.
```js
import { createTracing } from 'node:trace_events';

const categories = ['node.perf', 'node.async_hooks'];
const tracing = createTracing({ categories });
tracing.enable();
// do stuff
tracing.disable();
```

```js
const { createTracing } = require('node:trace_events');

const categories = ['node.perf', 'node.async_hooks'];
const tracing = createTracing({ categories });
tracing.enable();
// do stuff
tracing.disable();
```
trace_events.getEnabledCategories()#
- Returns:<string>
Returns a comma-separated list of all currently-enabled trace event categories. The current set of enabled trace event categories is determined by the *union* of all currently-enabled `Tracing` objects and any categories enabled using the `--trace-event-categories` flag.

Given the file `test.js` below, the command `node --trace-event-categories node.perf test.js` will print `'node.async_hooks,node.perf'` to the console.
```js
import { createTracing, getEnabledCategories } from 'node:trace_events';

const t1 = createTracing({ categories: ['node.async_hooks'] });
const t2 = createTracing({ categories: ['node.perf'] });
const t3 = createTracing({ categories: ['v8'] });

t1.enable();
t2.enable();

console.log(getEnabledCategories());
```

```js
const { createTracing, getEnabledCategories } = require('node:trace_events');

const t1 = createTracing({ categories: ['node.async_hooks'] });
const t2 = createTracing({ categories: ['node.perf'] });
const t3 = createTracing({ categories: ['v8'] });

t1.enable();
t2.enable();

console.log(getEnabledCategories());
```
Examples#
Collect trace events data by inspector#
```js
import { Session } from 'node:inspector';
const session = new Session();
session.connect();

function post(message, data) {
  return new Promise((resolve, reject) => {
    session.post(message, data, (err, result) => {
      if (err)
        reject(new Error(JSON.stringify(err)));
      else
        resolve(result);
    });
  });
}

async function collect() {
  const data = [];
  session.on('NodeTracing.dataCollected', (chunk) => data.push(chunk));
  session.on('NodeTracing.tracingComplete', () => {
    // done
  });
  const traceConfig = { includedCategories: ['v8'] };
  await post('NodeTracing.start', { traceConfig });
  // do something
  setTimeout(() => {
    post('NodeTracing.stop').then(() => {
      session.disconnect();
      console.log(data);
    });
  }, 1000);
}

collect();
```

```js
'use strict';

const { Session } = require('node:inspector');
const session = new Session();
session.connect();

function post(message, data) {
  return new Promise((resolve, reject) => {
    session.post(message, data, (err, result) => {
      if (err)
        reject(new Error(JSON.stringify(err)));
      else
        resolve(result);
    });
  });
}

async function collect() {
  const data = [];
  session.on('NodeTracing.dataCollected', (chunk) => data.push(chunk));
  session.on('NodeTracing.tracingComplete', () => {
    // done
  });
  const traceConfig = { includedCategories: ['v8'] };
  await post('NodeTracing.start', { traceConfig });
  // do something
  setTimeout(() => {
    post('NodeTracing.stop').then(() => {
      session.disconnect();
      console.log(data);
    });
  }, 1000);
}

collect();
```
TTY#
Source Code:lib/tty.js
The `node:tty` module provides the `tty.ReadStream` and `tty.WriteStream` classes. In most cases, it will not be necessary or possible to use this module directly. However, it can be accessed using:

```js
const tty = require('node:tty');
```

When Node.js detects that it is being run with a text terminal ("TTY") attached, `process.stdin` will, by default, be initialized as an instance of `tty.ReadStream` and both `process.stdout` and `process.stderr` will, by default, be instances of `tty.WriteStream`. The preferred method of determining whether Node.js is being run within a TTY context is to check that the value of the `process.stdout.isTTY` property is `true`:

```console
$ node -p -e "Boolean(process.stdout.isTTY)"
true
$ node -p -e "Boolean(process.stdout.isTTY)" | cat
false
```

In most cases, there should be little to no reason for an application to manually create instances of the `tty.ReadStream` and `tty.WriteStream` classes.
Class:tty.ReadStream#
- Extends:<net.Socket>
Represents the readable side of a TTY. In normal circumstances `process.stdin` will be the only `tty.ReadStream` instance in a Node.js process and there should be no reason to create additional instances.
readStream.isRaw#
A `boolean` that is `true` if the TTY is currently configured to operate as a raw device.

This flag is always `false` when a process starts, even if the terminal is operating in raw mode. Its value will change with subsequent calls to `setRawMode`.
readStream.setRawMode(mode)#
- `mode` <boolean> If `true`, configures the `tty.ReadStream` to operate as a raw device. If `false`, configures the `tty.ReadStream` to operate in its default mode. The `readStream.isRaw` property will be set to the resulting mode.
- Returns: <this> The read stream instance.

Allows configuration of `tty.ReadStream` so that it operates as a raw device.

When in raw mode, input is always available character-by-character, not including modifiers. Additionally, all special processing of characters by the terminal is disabled, including echoing input characters. Ctrl+C will no longer cause a SIGINT when in this mode.
Class:tty.WriteStream#
- Extends:<net.Socket>
Represents the writable side of a TTY. In normal circumstances, `process.stdout` and `process.stderr` will be the only `tty.WriteStream` instances created for a Node.js process and there should be no reason to create additional instances.
new tty.ReadStream(fd[, options])#
History
| Version | Changes |
|---|---|
| v0.9.4 | The |
| v0.5.8 | Added in: v0.5.8 |
- `fd` <number> A file descriptor associated with a TTY.
- `options` <Object> Options passed to parent `net.Socket`, see `options` of `net.Socket` constructor.
- Returns: <tty.ReadStream>

Creates a `ReadStream` for `fd` associated with a TTY.
new tty.WriteStream(fd)#
- `fd` <number> A file descriptor associated with a TTY.
- Returns: <tty.WriteStream>

Creates a `WriteStream` for `fd` associated with a TTY.
Event:'resize'#
The `'resize'` event is emitted whenever either of the `writeStream.columns` or `writeStream.rows` properties have changed. No arguments are passed to the listener callback when called.

```js
process.stdout.on('resize', () => {
  console.log('screen size has changed!');
  console.log(`${process.stdout.columns}x${process.stdout.rows}`);
});
```

writeStream.clearLine(dir[, callback])#
History
| Version | Changes |
|---|---|
| v12.7.0 | The stream's write() callback and return value are exposed. |
| v0.7.7 | Added in: v0.7.7 |
- `dir` <number>
  - `-1`: to the left from cursor
  - `1`: to the right from cursor
  - `0`: the entire line
- `callback` <Function> Invoked once the operation completes.
- Returns: <boolean> `false` if the stream wishes for the calling code to wait for the `'drain'` event to be emitted before continuing to write additional data; otherwise `true`.

`writeStream.clearLine()` clears the current line of this `WriteStream` in a direction identified by `dir`.
writeStream.clearScreenDown([callback])#
History
| Version | Changes |
|---|---|
| v12.7.0 | The stream's write() callback and return value are exposed. |
| v0.7.7 | Added in: v0.7.7 |
- `callback` <Function> Invoked once the operation completes.
- Returns: <boolean> `false` if the stream wishes for the calling code to wait for the `'drain'` event to be emitted before continuing to write additional data; otherwise `true`.

`writeStream.clearScreenDown()` clears this `WriteStream` from the current cursor down.
writeStream.columns#
A `number` specifying the number of columns the TTY currently has. This property is updated whenever the `'resize'` event is emitted.
writeStream.cursorTo(x[, y][, callback])#
History
| Version | Changes |
|---|---|
| v12.7.0 | The stream's write() callback and return value are exposed. |
| v0.7.7 | Added in: v0.7.7 |
- `x` <number>
- `y` <number>
- `callback` <Function> Invoked once the operation completes.
- Returns: <boolean> `false` if the stream wishes for the calling code to wait for the `'drain'` event to be emitted before continuing to write additional data; otherwise `true`.

`writeStream.cursorTo()` moves this `WriteStream`'s cursor to the specified position.
writeStream.getColorDepth([env])#
- `env` <Object> An object containing the environment variables to check. This enables simulating the usage of a specific terminal. **Default:** `process.env`.
- Returns: <number>

Returns:

- `1` for 2,
- `4` for 16,
- `8` for 256,
- `24` for 16,777,216 colors supported.

Use this to determine what colors the terminal supports. Due to the nature of colors in terminals it is possible to either have false positives or false negatives. It depends on process information and the environment variables that may lie about what terminal is used. It is possible to pass in an `env` object to simulate the usage of a specific terminal. This can be useful to check how specific environment settings behave.
To enforce a specific color support, use one of the below environment settings.
- 2 colors: `FORCE_COLOR = 0` (Disables colors)
- 16 colors: `FORCE_COLOR = 1`
- 256 colors: `FORCE_COLOR = 2`
- 16,777,216 colors: `FORCE_COLOR = 3`

Disabling color support is also possible by using the `NO_COLOR` and `NODE_DISABLE_COLORS` environment variables.
writeStream.getWindowSize()#
- Returns: <number[]>

`writeStream.getWindowSize()` returns the size of the TTY corresponding to this `WriteStream`. The array is of the type `[numColumns, numRows]` where `numColumns` and `numRows` represent the number of columns and rows in the corresponding TTY.
writeStream.hasColors([count][, env])#
- `count` <integer> The number of colors that are requested (minimum 2). **Default:** 16.
- `env` <Object> An object containing the environment variables to check. This enables simulating the usage of a specific terminal. **Default:** `process.env`.
- Returns: <boolean>

Returns `true` if the `writeStream` supports at least as many colors as provided in `count`. Minimum support is 2 (black and white).

This has the same false positives and negatives as described in `writeStream.getColorDepth()`.

```js
process.stdout.hasColors();
// Returns true or false depending on if `stdout` supports at least 16 colors.
process.stdout.hasColors(256);
// Returns true or false depending on if `stdout` supports at least 256 colors.
process.stdout.hasColors({ TMUX: '1' });
// Returns true.
process.stdout.hasColors(2 ** 24, { TMUX: '1' });
// Returns false (the environment setting pretends to support 2 ** 8 colors).
```

writeStream.moveCursor(dx, dy[, callback])#
History
| Version | Changes |
|---|---|
| v12.7.0 | The stream's write() callback and return value are exposed. |
| v0.7.7 | Added in: v0.7.7 |
- `dx` <number>
- `dy` <number>
- `callback` <Function> Invoked once the operation completes.
- Returns: <boolean> `false` if the stream wishes for the calling code to wait for the `'drain'` event to be emitted before continuing to write additional data; otherwise `true`.

`writeStream.moveCursor()` moves this `WriteStream`'s cursor *relative* to its current position.
writeStream.rows#
A `number` specifying the number of rows the TTY currently has. This property is updated whenever the `'resize'` event is emitted.
tty.isatty(fd)#
The `tty.isatty()` method returns `true` if the given `fd` is associated with a TTY and `false` if it is not, including whenever `fd` is not a non-negative integer.
UDP/datagram sockets#
Source Code:lib/dgram.js
The `node:dgram` module provides an implementation of UDP datagram sockets.

```js
import dgram from 'node:dgram';
const server = dgram.createSocket('udp4');

server.on('error', (err) => {
  console.error(`server error:\n${err.stack}`);
  server.close();
});

server.on('message', (msg, rinfo) => {
  console.log(`server got: ${msg} from ${rinfo.address}:${rinfo.port}`);
});

server.on('listening', () => {
  const address = server.address();
  console.log(`server listening ${address.address}:${address.port}`);
});

server.bind(41234);
// Prints: server listening 0.0.0.0:41234
```

```js
const dgram = require('node:dgram');
const server = dgram.createSocket('udp4');

server.on('error', (err) => {
  console.error(`server error:\n${err.stack}`);
  server.close();
});

server.on('message', (msg, rinfo) => {
  console.log(`server got: ${msg} from ${rinfo.address}:${rinfo.port}`);
});

server.on('listening', () => {
  const address = server.address();
  console.log(`server listening ${address.address}:${address.port}`);
});

server.bind(41234);
// Prints: server listening 0.0.0.0:41234
```
Class:dgram.Socket#
- Extends:<EventEmitter>
Encapsulates the datagram functionality.
New instances of `dgram.Socket` are created using `dgram.createSocket()`. The `new` keyword is not to be used to create `dgram.Socket` instances.
Event:'close'#
The `'close'` event is emitted after a socket is closed with `close()`. Once triggered, no new `'message'` events will be emitted on this socket.
Event:'connect'#
The `'connect'` event is emitted after a socket is associated to a remote address as a result of a successful `connect()` call.
Event:'error'#
- `exception` <Error>

The `'error'` event is emitted whenever any error occurs. The event handler function is passed a single `Error` object.
Event:'listening'#
The `'listening'` event is emitted once the `dgram.Socket` is addressable and can receive data. This happens either explicitly with `socket.bind()` or implicitly the first time data is sent using `socket.send()`. Until the `dgram.Socket` is listening, the underlying system resources do not exist and calls such as `socket.address()` and `socket.setTTL()` will fail.
Event:'message'#
History
| Version | Changes |
|---|---|
| v18.4.0 | The |
| v18.0.0 | The |
| v0.1.99 | Added in: v0.1.99 |
The `'message'` event is emitted when a new datagram is available on a socket. The event handler function is passed two arguments: `msg` and `rinfo`.

If the source address of the incoming packet is an IPv6 link-local address, the interface name is added to the `address`. For example, a packet received on the `en0` interface might have the `address` field set to `'fe80::2618:1234:ab11:3b9c%en0'`, where `'%en0'` is the interface name as a zone ID suffix.
socket.addMembership(multicastAddress[, multicastInterface])#
Tells the kernel to join a multicast group at the given `multicastAddress` and `multicastInterface` using the `IP_ADD_MEMBERSHIP` socket option. If the `multicastInterface` argument is not specified, the operating system will choose one interface and will add membership to it. To add membership to every available interface, call `addMembership` multiple times, once per interface.
When called on an unbound socket, this method will implicitly bind to a randomport, listening on all interfaces.
When sharing a UDP socket across multiple `cluster` workers, the `socket.addMembership()` function must be called only once or an `EADDRINUSE` error will occur:

```js
import cluster from 'node:cluster';
import dgram from 'node:dgram';

if (cluster.isPrimary) {
  cluster.fork(); // Works ok.
  cluster.fork(); // Fails with EADDRINUSE.
} else {
  const s = dgram.createSocket('udp4');
  s.bind(1234, () => {
    s.addMembership('224.0.0.114');
  });
}
```

```js
const cluster = require('node:cluster');
const dgram = require('node:dgram');

if (cluster.isPrimary) {
  cluster.fork(); // Works ok.
  cluster.fork(); // Fails with EADDRINUSE.
} else {
  const s = dgram.createSocket('udp4');
  s.bind(1234, () => {
    s.addMembership('224.0.0.114');
  });
}
```
socket.addSourceSpecificMembership(sourceAddress, groupAddress[, multicastInterface])#
Tells the kernel to join a source-specific multicast channel at the given `sourceAddress` and `groupAddress`, using the `multicastInterface` with the `IP_ADD_SOURCE_MEMBERSHIP` socket option. If the `multicastInterface` argument is not specified, the operating system will choose one interface and will add membership to it. To add membership to every available interface, call `socket.addSourceSpecificMembership()` multiple times, once per interface.
When called on an unbound socket, this method will implicitly bind to a randomport, listening on all interfaces.
socket.address()#
- Returns:<Object>
Returns an object containing the address information for a socket. For UDP sockets, this object will contain `address`, `family`, and `port` properties.

This method throws `EBADF` if called on an unbound socket.
socket.bind([port][, address][, callback])#
History
| Version | Changes |
|---|---|
| v0.9.1 | The method was changed to an asynchronous execution model. Legacy code would need to be changed to pass a callback function to the method call. |
| v0.1.99 | Added in: v0.1.99 |
- `port` <integer>
- `address` <string>
- `callback` <Function> with no parameters. Called when binding is complete.

For UDP sockets, causes the `dgram.Socket` to listen for datagram messages on a named `port` and optional `address`. If `port` is not specified or is `0`, the operating system will attempt to bind to a random port. If `address` is not specified, the operating system will attempt to listen on all addresses. Once binding is complete, a `'listening'` event is emitted and the optional `callback` function is called.

Specifying both a `'listening'` event listener and passing a `callback` to the `socket.bind()` method is not harmful but not very useful.

A bound datagram socket keeps the Node.js process running to receive datagram messages.

If binding fails, an `'error'` event is generated. In rare cases (e.g. attempting to bind with a closed socket), an `Error` may be thrown.
Example of a UDP server listening on port 41234:
```js
import dgram from 'node:dgram';
const server = dgram.createSocket('udp4');

server.on('error', (err) => {
  console.error(`server error:\n${err.stack}`);
  server.close();
});

server.on('message', (msg, rinfo) => {
  console.log(`server got: ${msg} from ${rinfo.address}:${rinfo.port}`);
});

server.on('listening', () => {
  const address = server.address();
  console.log(`server listening ${address.address}:${address.port}`);
});

server.bind(41234);
// Prints: server listening 0.0.0.0:41234
```

```js
const dgram = require('node:dgram');
const server = dgram.createSocket('udp4');

server.on('error', (err) => {
  console.error(`server error:\n${err.stack}`);
  server.close();
});

server.on('message', (msg, rinfo) => {
  console.log(`server got: ${msg} from ${rinfo.address}:${rinfo.port}`);
});

server.on('listening', () => {
  const address = server.address();
  console.log(`server listening ${address.address}:${address.port}`);
});

server.bind(41234);
// Prints: server listening 0.0.0.0:41234
```
socket.bind(options[, callback])#
- `options` <Object> Required. Supports the following properties:
- `callback` <Function>

For UDP sockets, causes the `dgram.Socket` to listen for datagram messages on a named `port` and optional `address` that are passed as properties of an `options` object passed as the first argument. If `port` is not specified or is `0`, the operating system will attempt to bind to a random port. If `address` is not specified, the operating system will attempt to listen on all addresses. Once binding is complete, a `'listening'` event is emitted and the optional `callback` function is called.

The `options` object may contain a `fd` property. When a `fd` greater than `0` is set, it will wrap around an existing socket with the given file descriptor. In this case, the properties of `port` and `address` will be ignored.

Specifying both a `'listening'` event listener and passing a `callback` to the `socket.bind()` method is not harmful but not very useful.

The `options` object may contain an additional `exclusive` property that is used when using `dgram.Socket` objects with the `cluster` module. When `exclusive` is set to `false` (the default), cluster workers will use the same underlying socket handle allowing connection handling duties to be shared. When `exclusive` is `true`, however, the handle is not shared and attempted port sharing results in an error. Creating a `dgram.Socket` with the `reusePort` option set to `true` causes `exclusive` to always be `true` when `socket.bind()` is called.
A bound datagram socket keeps the Node.js process running to receivedatagram messages.
If binding fails, an'error' event is generated. In rare case (e.g.attempting to bind with a closed socket), anError may be thrown.
An example socket listening on an exclusive port is shown below.
```js
socket.bind({
  address: 'localhost',
  port: 8000,
  exclusive: true,
});
```

socket.close([callback])#
- `callback` <Function> Called when the socket has been closed.

Close the underlying socket and stop listening for data on it. If a callback is provided, it is added as a listener for the `'close'` event.
socket[Symbol.asyncDispose]()#
History
| Version | Changes |
|---|---|
| v24.2.0 | No longer experimental. |
| v20.5.0, v18.18.0 | Added in: v20.5.0, v18.18.0 |
Calls `socket.close()` and returns a promise that fulfills when the socket has closed.
socket.connect(port[, address][, callback])#
- `port` <integer>
- `address` <string>
- `callback` <Function> Called when the connection is completed or on error.

Associates the `dgram.Socket` to a remote address and port. Every message sent by this handle is automatically sent to that destination. Also, the socket will only receive messages from that remote peer. Trying to call `connect()` on an already connected socket will result in an `ERR_SOCKET_DGRAM_IS_CONNECTED` exception. If `address` is not provided, `'127.0.0.1'` (for `udp4` sockets) or `'::1'` (for `udp6` sockets) will be used by default. Once the connection is complete, a `'connect'` event is emitted and the optional `callback` function is called. In case of failure, the `callback` is called or, failing this, an `'error'` event is emitted.
socket.disconnect()#
A synchronous function that disassociates a connected `dgram.Socket` from its remote address. Trying to call `disconnect()` on an unbound or already disconnected socket will result in an `ERR_SOCKET_DGRAM_NOT_CONNECTED` exception.
socket.dropMembership(multicastAddress[, multicastInterface])#
Instructs the kernel to leave a multicast group at `multicastAddress` using the `IP_DROP_MEMBERSHIP` socket option. This method is automatically called by the kernel when the socket is closed or the process terminates, so most apps will never have reason to call this.

If `multicastInterface` is not specified, the operating system will attempt to drop membership on all valid interfaces.
socket.dropSourceSpecificMembership(sourceAddress, groupAddress[, multicastInterface])#
Instructs the kernel to leave a source-specific multicast channel at the given `sourceAddress` and `groupAddress` using the `IP_DROP_SOURCE_MEMBERSHIP` socket option. This method is automatically called by the kernel when the socket is closed or the process terminates, so most apps will never have reason to call this.

If `multicastInterface` is not specified, the operating system will attempt to drop membership on all valid interfaces.
socket.getRecvBufferSize()#
- Returns: <number> the `SO_RCVBUF` socket receive buffer size in bytes.

This method throws `ERR_SOCKET_BUFFER_SIZE` if called on an unbound socket.
socket.getSendBufferSize()#
- Returns: <number> the `SO_SNDBUF` socket send buffer size in bytes.

This method throws `ERR_SOCKET_BUFFER_SIZE` if called on an unbound socket.
socket.getSendQueueSize()#
- Returns: <number> Number of bytes queued for sending.

socket.getSendQueueCount()#
- Returns: <number> Number of send requests currently in the queue awaiting to be processed.
socket.ref()#
- Returns: <dgram.Socket>

By default, binding a socket will cause it to block the Node.js process from exiting as long as the socket is open. The `socket.unref()` method can be used to exclude the socket from the reference counting that keeps the Node.js process active. The `socket.ref()` method adds the socket back to the reference counting and restores the default behavior.

Calling `socket.ref()` multiple times will have no additional effect.

The `socket.ref()` method returns a reference to the socket so calls can be chained.
socket.remoteAddress()#
- Returns: <Object>

Returns an object containing the `address`, `family`, and `port` of the remote endpoint. This method throws an `ERR_SOCKET_DGRAM_NOT_CONNECTED` exception if the socket is not connected.
socket.send(msg[, offset, length][, port][, address][, callback])#
History
| Version | Changes |
|---|---|
| v17.0.0 | The |
| v14.5.0, v12.19.0 | The |
| v12.0.0 | Added support for sending data on connected sockets. |
| v8.0.0 | The |
| v8.0.0 | The |
| v6.0.0 | On success, |
| v5.7.0 | The |
| v0.1.99 | Added in: v0.1.99 |
- `msg` <Buffer> | <TypedArray> | <DataView> | <string> | <Array> Message to be sent.
- `offset` <integer> Offset in the buffer where the message starts.
- `length` <integer> Number of bytes in the message.
- `port` <integer> Destination port.
- `address` <string> Destination host name or IP address.
- `callback` <Function> Called when the message has been sent.
Broadcasts a datagram on the socket. For connectionless sockets, the destination `port` and `address` must be specified. Connected sockets, on the other hand, will use their associated remote endpoint, so the `port` and `address` arguments must not be set.

The `msg` argument contains the message to be sent. Depending on its type, different behavior can apply. If `msg` is a `Buffer`, any `TypedArray`, or a `DataView`, the `offset` and `length` specify the offset within the `Buffer` where the message begins and the number of bytes in the message, respectively. If `msg` is a `String`, then it is automatically converted to a `Buffer` with `'utf8'` encoding. With messages that contain multi-byte characters, `offset` and `length` will be calculated with respect to byte length and not the character position. If `msg` is an array, `offset` and `length` must not be specified.

The `address` argument is a string. If the value of `address` is a host name, DNS will be used to resolve the address of the host. If `address` is not provided or otherwise nullish, `'127.0.0.1'` (for `udp4` sockets) or `'::1'` (for `udp6` sockets) will be used by default.

If the socket has not been previously bound with a call to `bind`, the socket is assigned a random port number and is bound to the "all interfaces" address (`'0.0.0.0'` for `udp4` sockets, `'::0'` for `udp6` sockets).

An optional `callback` function may be specified as a way of reporting DNS errors or for determining when it is safe to reuse the `buf` object. DNS lookups delay the time to send for at least one tick of the Node.js event loop.

The only way to know for sure that the datagram has been sent is by using a `callback`. If an error occurs and a `callback` is given, the error will be passed as the first argument to the `callback`. If a `callback` is not given, the error is emitted as an `'error'` event on the `socket` object.

Offset and length are optional but both must be set if either are used. They are supported only when the first argument is a `Buffer`, a `TypedArray`, or a `DataView`.

This method throws `ERR_SOCKET_BAD_PORT` if called on an unbound socket.
Example of sending a UDP packet to a port on `localhost`:
```mjs
import dgram from 'node:dgram';
import { Buffer } from 'node:buffer';

const message = Buffer.from('Some bytes');
const client = dgram.createSocket('udp4');
client.send(message, 41234, 'localhost', (err) => {
  client.close();
});
```

```cjs
const dgram = require('node:dgram');
const { Buffer } = require('node:buffer');

const message = Buffer.from('Some bytes');
const client = dgram.createSocket('udp4');
client.send(message, 41234, 'localhost', (err) => {
  client.close();
});
```
Example of sending a UDP packet composed of multiple buffers to a port on `127.0.0.1`:
```mjs
import dgram from 'node:dgram';
import { Buffer } from 'node:buffer';

const buf1 = Buffer.from('Some ');
const buf2 = Buffer.from('bytes');
const client = dgram.createSocket('udp4');
client.send([buf1, buf2], 41234, (err) => {
  client.close();
});
```

```cjs
const dgram = require('node:dgram');
const { Buffer } = require('node:buffer');

const buf1 = Buffer.from('Some ');
const buf2 = Buffer.from('bytes');
const client = dgram.createSocket('udp4');
client.send([buf1, buf2], 41234, (err) => {
  client.close();
});
```
Sending multiple buffers might be faster or slower depending on the application and operating system. Run benchmarks to determine the optimal strategy on a case-by-case basis. Generally speaking, however, sending multiple buffers is faster.

Example of sending a UDP packet using a socket connected to a port on `localhost`:
```mjs
import dgram from 'node:dgram';
import { Buffer } from 'node:buffer';

const message = Buffer.from('Some bytes');
const client = dgram.createSocket('udp4');
client.connect(41234, 'localhost', (err) => {
  client.send(message, (err) => {
    client.close();
  });
});
```

```cjs
const dgram = require('node:dgram');
const { Buffer } = require('node:buffer');

const message = Buffer.from('Some bytes');
const client = dgram.createSocket('udp4');
client.connect(41234, 'localhost', (err) => {
  client.send(message, (err) => {
    client.close();
  });
});
```
Note about UDP datagram size#
The maximum size of an IPv4/v6 datagram depends on the `MTU` (Maximum Transmission Unit) and on the `Payload Length` field size.
- The `Payload Length` field is 16 bits wide, which means that a normal payload cannot exceed 64K octets including the internet header and data (65,507 bytes = 65,535 − 8 bytes UDP header − 20 bytes IP header); this is generally true for loopback interfaces, but such long datagram messages are impractical for most hosts and networks.
- The `MTU` is the largest size a given link layer technology can support for datagram messages. For any link, IPv4 mandates a minimum `MTU` of 68 octets, while the recommended `MTU` for IPv4 is 576 (typically recommended as the `MTU` for dial-up type applications), whether they arrive whole or in fragments.
- For IPv6, the minimum `MTU` is 1280 octets. However, the mandatory minimum fragment reassembly buffer size is 1500 octets. The value of 68 octets is very small, since most current link layer technologies, like Ethernet, have a minimum `MTU` of 1500.
It is impossible to know in advance the MTU of each link through which a packet might travel. Sending a datagram greater than the receiver `MTU` will not work because the packet will get silently dropped without informing the source that the data did not reach its intended recipient.
socket.setBroadcast(flag)#
- `flag` <boolean>

Sets or clears the `SO_BROADCAST` socket option. When set to `true`, UDP packets may be sent to a local interface's broadcast address.

This method throws `EBADF` if called on an unbound socket.
socket.setMulticastInterface(multicastInterface)#
- `multicastInterface` <string>

All references to scope in this section are referring to IPv6 Zone Indexes, which are defined by RFC 4007. In string form, an IP with a scope index is written as `'IP%scope'` where scope is an interface name or interface number.

Sets the default outgoing multicast interface of the socket to a chosen interface or back to system interface selection. The `multicastInterface` must be a valid string representation of an IP from the socket's family.

For IPv4 sockets, this should be the IP configured for the desired physical interface. All packets sent to multicast on the socket will be sent on the interface determined by the most recent successful use of this call.

For IPv6 sockets, `multicastInterface` should include a scope to indicate the interface as in the examples that follow. In IPv6, individual `send` calls can also use explicit scope in addresses, so only packets sent to a multicast address without specifying an explicit scope are affected by the most recent successful use of this call.

This method throws `EBADF` if called on an unbound socket.
Example: IPv6 outgoing multicast interface#
On most systems, where scope format uses the interface name:
```js
const socket = dgram.createSocket('udp6');

socket.bind(1234, () => {
  socket.setMulticastInterface('::%eth1');
});
```

On Windows, where scope format uses an interface number:
```js
const socket = dgram.createSocket('udp6');

socket.bind(1234, () => {
  socket.setMulticastInterface('::%2');
});
```

Example: IPv4 outgoing multicast interface#
All systems use an IP of the host on the desired physical interface:
```js
const socket = dgram.createSocket('udp4');

socket.bind(1234, () => {
  socket.setMulticastInterface('10.0.0.2');
});
```

Call results#
A call on a socket that is not ready to send or no longer open may throw a "Not running" `Error`.

If `multicastInterface` can not be parsed into an IP then an `EINVAL` `System Error` is thrown.

On IPv4, if `multicastInterface` is a valid address but does not match any interface, or if the address does not match the family, then a `System Error` such as `EADDRNOTAVAIL` or `EPROTONOSUP` is thrown.

On IPv6, most errors with specifying or omitting scope will result in the socket continuing to use (or returning to) the system's default interface selection.

A socket's address family's ANY address (IPv4 `'0.0.0.0'` or IPv6 `'::'`) can be used to return control of the socket's default outgoing interface to the system for future multicast packets.
socket.setMulticastLoopback(flag)#
- `flag` <boolean>

Sets or clears the `IP_MULTICAST_LOOP` socket option. When set to `true`, multicast packets will also be received on the local interface.

This method throws `EBADF` if called on an unbound socket.
socket.setMulticastTTL(ttl)#
- `ttl` <integer>

Sets the `IP_MULTICAST_TTL` socket option. While TTL generally stands for "Time to Live", in this context it specifies the number of IP hops that a packet is allowed to travel through, specifically for multicast traffic. Each router or gateway that forwards a packet decrements the TTL. If the TTL is decremented to 0 by a router, it will not be forwarded.

The `ttl` argument may be between 0 and 255. The default on most systems is `1`.

This method throws `EBADF` if called on an unbound socket.
socket.setRecvBufferSize(size)#
- `size` <integer>

Sets the `SO_RCVBUF` socket option. Sets the maximum socket receive buffer in bytes.

This method throws `ERR_SOCKET_BUFFER_SIZE` if called on an unbound socket.
socket.setSendBufferSize(size)#
- `size` <integer>

Sets the `SO_SNDBUF` socket option. Sets the maximum socket send buffer in bytes.

This method throws `ERR_SOCKET_BUFFER_SIZE` if called on an unbound socket.
socket.setTTL(ttl)#
- `ttl` <integer>

Sets the `IP_TTL` socket option. While TTL generally stands for "Time to Live", in this context it specifies the number of IP hops that a packet is allowed to travel through. Each router or gateway that forwards a packet decrements the TTL. If the TTL is decremented to 0 by a router, it will not be forwarded. Changing TTL values is typically done for network probes or when multicasting.

The `ttl` argument may be between 1 and 255. The default on most systems is 64.

This method throws `EBADF` if called on an unbound socket.
socket.unref()#
- Returns: <dgram.Socket>

By default, binding a socket will cause it to block the Node.js process from exiting as long as the socket is open. The `socket.unref()` method can be used to exclude the socket from the reference counting that keeps the Node.js process active, allowing the process to exit even if the socket is still listening.

Calling `socket.unref()` multiple times will have no additional effect.

The `socket.unref()` method returns a reference to the socket so calls can be chained.
node:dgram module functions#
dgram.createSocket(options[, callback])#
History
| Version | Changes |
|---|---|
| v23.1.0, v22.12.0 | The |
| v15.8.0 | AbortSignal support was added. |
| v11.4.0 | The |
| v8.7.0 | The |
| v8.6.0 | The |
| v0.11.13 | Added in: v0.11.13 |
- `options` <Object> Available options are:
  - `type` <string> The family of socket. Must be either `'udp4'` or `'udp6'`. Required.
  - `reuseAddr` <boolean> When `true` `socket.bind()` will reuse the address, even if another process has already bound a socket on it, but only one socket can receive the data. Default: `false`.
  - `reusePort` <boolean> When `true` `socket.bind()` will reuse the port, even if another process has already bound a socket on it. Incoming datagrams are distributed to listening sockets. The option is available only on some platforms, such as Linux 3.9+, DragonFlyBSD 3.6+, FreeBSD 12.0+, Solaris 11.4, and AIX 7.2.5+. On unsupported platforms, this option raises an error when the socket is bound. Default: `false`.
  - `ipv6Only` <boolean> Setting `ipv6Only` to `true` will disable dual-stack support, i.e., binding to address `::` won't make `0.0.0.0` be bound. Default: `false`.
  - `recvBufferSize` <number> Sets the `SO_RCVBUF` socket value.
  - `sendBufferSize` <number> Sets the `SO_SNDBUF` socket value.
  - `lookup` <Function> Custom lookup function. Default: `dns.lookup()`.
  - `signal` <AbortSignal> An AbortSignal that may be used to close a socket.
  - `receiveBlockList` <net.BlockList> `receiveBlockList` can be used for discarding inbound datagrams to specific IP addresses, IP ranges, or IP subnets. This does not work if the server is behind a reverse proxy, NAT, etc. because the address checked against the blocklist is the address of the proxy, or the one specified by the NAT.
  - `sendBlockList` <net.BlockList> `sendBlockList` can be used for disabling outbound access to specific IP addresses, IP ranges, or IP subnets.
- `callback` <Function> Attached as a listener for `'message'` events. Optional.
- Returns: <dgram.Socket>
Creates a `dgram.Socket` object. Once the socket is created, calling `socket.bind()` will instruct the socket to begin listening for datagram messages. When `address` and `port` are not passed to `socket.bind()` the method will bind the socket to the "all interfaces" address on a random port (it does the right thing for both `udp4` and `udp6` sockets). The bound address and port can be retrieved using `socket.address().address` and `socket.address().port`.

If the `signal` option is enabled, calling `.abort()` on the corresponding `AbortController` is similar to calling `.close()` on the socket:
```js
const controller = new AbortController();
const { signal } = controller;
const server = dgram.createSocket({ type: 'udp4', signal });
server.on('message', (msg, rinfo) => {
  console.log(`server got: ${msg} from ${rinfo.address}:${rinfo.port}`);
});
// Later, when you want to close the server.
controller.abort();
```

dgram.createSocket(type[, callback])#
- `type` <string> Either `'udp4'` or `'udp6'`.
- `callback` <Function> Attached as a listener to `'message'` events.
- Returns: <dgram.Socket>
Creates a `dgram.Socket` object of the specified `type`.

Once the socket is created, calling `socket.bind()` will instruct the socket to begin listening for datagram messages. When `address` and `port` are not passed to `socket.bind()` the method will bind the socket to the "all interfaces" address on a random port (it does the right thing for both `udp4` and `udp6` sockets). The bound address and port can be retrieved using `socket.address().address` and `socket.address().port`.
URL#
Source Code:lib/url.js
The `node:url` module provides utilities for URL resolution and parsing. It can be accessed using:
```mjs
import url from 'node:url';
```

```cjs
const url = require('node:url');
```
URL strings and URL objects#
A URL string is a structured string containing multiple meaningful components. When parsed, a URL object is returned containing properties for each of these components.

The `node:url` module provides two APIs for working with URLs: a legacy API that is Node.js specific, and a newer API that implements the same WHATWG URL Standard used by web browsers.

A comparison between the WHATWG and legacy APIs is provided below. Above the URL `'https://user:pass@sub.example.com:8080/p/a/t/h?query=string#hash'`, properties of an object returned by the legacy `url.parse()` are shown. Below it are properties of a WHATWG `URL` object.

WHATWG URL's `origin` property includes `protocol` and `host`, but not `username` or `password`.
```text
┌────────────────────────────────────────────────────────────────────────────────────────────────┐
│                                              href                                              │
├──────────┬──┬─────────────────────┬────────────────────────┬───────────────────────────┬───────┤
│ protocol │  │        auth         │          host          │           path            │ hash  │
│          │  │                     ├─────────────────┬──────┼──────────┬────────────────┤       │
│          │  │                     │    hostname     │ port │ pathname │     search     │       │
│          │  │                     │                 │      │          ├─┬──────────────┤       │
│          │  │                     │                 │      │          │ │    query     │       │
"  https:   //    user   :   pass   @ sub.example.com : 8080   /p/a/t/h  ?  query=string   #hash "
│          │  │          │          │    hostname     │ port │          │                │       │
│          │  │          │          ├─────────────────┴──────┤          │                │       │
│ protocol │  │ username │ password │          host          │          │                │       │
├──────────┴──┼──────────┴──────────┼────────────────────────┤          │                │       │
│   origin    │                     │         origin         │ pathname │     search     │ hash  │
├─────────────┴─────────────────────┴────────────────────────┴──────────┴────────────────┴───────┤
│                                              href                                              │
└────────────────────────────────────────────────────────────────────────────────────────────────┘
```

(All spaces in the `""` line should be ignored. They are purely for formatting.)

Parsing the URL string using the WHATWG API:
```js
const myURL =
  new URL('https://user:pass@sub.example.com:8080/p/a/t/h?query=string#hash');
```

Parsing the URL string using the legacy API:
```mjs
import url from 'node:url';
const myURL =
  url.parse('https://user:pass@sub.example.com:8080/p/a/t/h?query=string#hash');
```

```cjs
const url = require('node:url');
const myURL =
  url.parse('https://user:pass@sub.example.com:8080/p/a/t/h?query=string#hash');
```
Constructing a URL from component parts and getting the constructed string#
It is possible to construct a WHATWG URL from component parts using either the property setters or a template literal string:
```js
const myURL = new URL('https://example.org');
myURL.pathname = '/a/b/c';
myURL.search = '?d=e';
myURL.hash = '#fgh';
```

```js
const pathname = '/a/b/c';
const search = '?d=e';
const hash = '#fgh';
const myURL = new URL(`https://example.org${pathname}${search}${hash}`);
```

To get the constructed URL string, use the `href` property accessor:
```js
console.log(myURL.href);
```

The WHATWG URL API#
Class:URL#
History
| Version | Changes |
|---|---|
| v10.0.0 | The class is now available on the global object. |
| v7.0.0, v6.13.0 | Added in: v7.0.0, v6.13.0 |
Browser-compatible `URL` class, implemented by following the WHATWG URL Standard. Examples of parsed URLs may be found in the Standard itself. The `URL` class is also available on the global object.

In accordance with browser conventions, all properties of `URL` objects are implemented as getters and setters on the class prototype, rather than as data properties on the object itself. Thus, unlike legacy `urlObject`s, using the `delete` keyword on any properties of `URL` objects (e.g. `delete myURL.protocol`, `delete myURL.pathname`, etc) has no effect but will still return `true`.
new URL(input[, base])#
History
| Version | Changes |
|---|---|
| v20.0.0, v18.17.0 | ICU requirement is removed. |
- `input` <string> The absolute or relative input URL to parse. If `input` is relative, then `base` is required. If `input` is absolute, the `base` is ignored. If `input` is not a string, it is converted to a string first.
- `base` <string> The base URL to resolve against if the `input` is not absolute. If `base` is not a string, it is converted to a string first.

Creates a new `URL` object by parsing the `input` relative to the `base`. If `base` is passed as a string, it will be parsed equivalent to `new URL(base)`.
```js
const myURL = new URL('/foo', 'https://example.org/');
// https://example.org/foo
```

The URL constructor is accessible as a property on the global object. It can also be imported from the built-in url module:
```mjs
import { URL } from 'node:url';
console.log(URL === globalThis.URL); // Prints 'true'.
```

```cjs
console.log(URL === require('node:url').URL); // Prints 'true'.
```
A `TypeError` will be thrown if the `input` or `base` are not valid URLs. Note that an effort will be made to coerce the given values into strings. For instance:
```js
const myURL = new URL({ toString: () => 'https://example.org/' });
// https://example.org/
```

Unicode characters appearing within the host name of `input` will be automatically converted to ASCII using the Punycode algorithm.
```js
const myURL = new URL('https://測試');
// https://xn--g6w251d/
```

In cases where it is not known in advance if `input` is an absolute URL and a `base` is provided, it is advised to validate that the `origin` of the `URL` object is what is expected.
```js
let myURL = new URL('http://Example.com/', 'https://example.org/');
// http://example.com/

myURL = new URL('https://Example.com/', 'https://example.org/');
// https://example.com/

myURL = new URL('foo://Example.com/', 'https://example.org/');
// foo://Example.com/

myURL = new URL('http:Example.com/', 'https://example.org/');
// http://example.com/

myURL = new URL('https:Example.com/', 'https://example.org/');
// https://example.org/Example.com/

myURL = new URL('foo:Example.com/', 'https://example.org/');
// foo:Example.com/
```

url.hash#
- Type: <string>

Gets and sets the fragment portion of the URL.

```js
const myURL = new URL('https://example.org/foo#bar');
console.log(myURL.hash);
// Prints #bar

myURL.hash = 'baz';
console.log(myURL.href);
// Prints https://example.org/foo#baz
```

Invalid URL characters included in the value assigned to the `hash` property are percent-encoded. The selection of which characters to percent-encode may vary somewhat from what the `url.parse()` and `url.format()` methods would produce.
url.host#
- Type: <string>

Gets and sets the host portion of the URL.

```js
const myURL = new URL('https://example.org:81/foo');
console.log(myURL.host);
// Prints example.org:81

myURL.host = 'example.com:82';
console.log(myURL.href);
// Prints https://example.com:82/foo
```

Invalid host values assigned to the `host` property are ignored.
url.hostname#
- Type: <string>

Gets and sets the host name portion of the URL. The key difference between `url.host` and `url.hostname` is that `url.hostname` does not include the port.

```js
const myURL = new URL('https://example.org:81/foo');
console.log(myURL.hostname);
// Prints example.org

// Setting the hostname does not change the port
myURL.hostname = 'example.com';
console.log(myURL.href);
// Prints https://example.com:81/foo

// Use myURL.host to change the hostname and port
myURL.host = 'example.org:82';
console.log(myURL.href);
// Prints https://example.org:82/foo
```

Invalid host name values assigned to the `hostname` property are ignored.
url.href#
- Type: <string>

Gets and sets the serialized URL.

```js
const myURL = new URL('https://example.org/foo');
console.log(myURL.href);
// Prints https://example.org/foo

myURL.href = 'https://example.com/bar';
console.log(myURL.href);
// Prints https://example.com/bar
```

Getting the value of the `href` property is equivalent to calling `url.toString()`.

Setting the value of this property to a new value is equivalent to creating a new `URL` object using `new URL(value)`. Each of the `URL` object's properties will be modified.

If the value assigned to the `href` property is not a valid URL, a `TypeError` will be thrown.
url.origin#
History
| Version | Changes |
|---|---|
| v15.0.0 | The scheme "gopher" is no longer special and |
- Type: <string>

Gets the read-only serialization of the URL's origin.

```js
const myURL = new URL('https://example.org/foo/bar?baz');
console.log(myURL.origin);
// Prints https://example.org

const idnURL = new URL('https://測試');
console.log(idnURL.origin);
// Prints https://xn--g6w251d

console.log(idnURL.hostname);
// Prints xn--g6w251d
```

url.password#
- Type: <string>

Gets and sets the password portion of the URL.

```js
const myURL = new URL('https://abc:xyz@example.com');
console.log(myURL.password);
// Prints xyz

myURL.password = '123';
console.log(myURL.href);
// Prints https://abc:123@example.com/
```

Invalid URL characters included in the value assigned to the `password` property are percent-encoded. The selection of which characters to percent-encode may vary somewhat from what the `url.parse()` and `url.format()` methods would produce.
url.pathname#
- Type: <string>

Gets and sets the path portion of the URL.

```js
const myURL = new URL('https://example.org/abc/xyz?123');
console.log(myURL.pathname);
// Prints /abc/xyz

myURL.pathname = '/abcdef';
console.log(myURL.href);
// Prints https://example.org/abcdef?123
```

Invalid URL characters included in the value assigned to the `pathname` property are percent-encoded. The selection of which characters to percent-encode may vary somewhat from what the `url.parse()` and `url.format()` methods would produce.
url.port#
History
| Version | Changes |
|---|---|
| v15.0.0 | The scheme "gopher" is no longer special. |
- Type: <string>

Gets and sets the port portion of the URL.

The port value may be a number or a string containing a number in the range `0` to `65535` (inclusive). Setting the value to the default port of the `URL` object's given `protocol` will result in the `port` value becoming the empty string (`''`).

The port value can be an empty string in which case the port depends on the protocol/scheme:
| protocol | port |
|---|---|
| "ftp" | 21 |
| "file" | |
| "http" | 80 |
| "https" | 443 |
| "ws" | 80 |
| "wss" | 443 |
Upon assigning a value to the port, the value will first be converted to a string using `.toString()`.

If that string is invalid but it begins with a number, the leading number is assigned to `port`. If the number lies outside the range denoted above, it is ignored.

```js
const myURL = new URL('https://example.org:8888');
console.log(myURL.port);
// Prints 8888

// Default ports are automatically transformed to the empty string
// (HTTPS protocol's default port is 443)
myURL.port = '443';
console.log(myURL.port);
// Prints the empty string
console.log(myURL.href);
// Prints https://example.org/

myURL.port = 1234;
console.log(myURL.port);
// Prints 1234
console.log(myURL.href);
// Prints https://example.org:1234/

// Completely invalid port strings are ignored
myURL.port = 'abcd';
console.log(myURL.port);
// Prints 1234

// Leading numbers are treated as a port number
myURL.port = '5678abcd';
console.log(myURL.port);
// Prints 5678

// Non-integers are truncated
myURL.port = 1234.5678;
console.log(myURL.port);
// Prints 1234

// Out-of-range numbers which are not represented in scientific notation
// will be ignored.
myURL.port = 1e10; // 10000000000, will be range-checked as described below
console.log(myURL.port);
// Prints 1234
```

Numbers which contain a decimal point, such as floating-point numbers or numbers in scientific notation, are not an exception to this rule. Leading numbers up to the decimal point will be set as the URL's port, assuming they are valid:

```js
myURL.port = 4.567e21;
console.log(myURL.port);
// Prints 4 (because it is the leading number in the string '4.567e21')
```

url.protocol#
- Type: <string>

Gets and sets the protocol portion of the URL.

```js
const myURL = new URL('https://example.org');
console.log(myURL.protocol);
// Prints https:

myURL.protocol = 'ftp';
console.log(myURL.href);
// Prints ftp://example.org/
```

Invalid URL protocol values assigned to the `protocol` property are ignored.
Special schemes#
History
| Version | Changes |
|---|---|
| v15.0.0 | The scheme "gopher" is no longer special. |
The WHATWG URL Standard considers a handful of URL protocol schemes to be special in terms of how they are parsed and serialized. When a URL is parsed using one of these special protocols, the `url.protocol` property may be changed to another special protocol but cannot be changed to a non-special protocol, and vice versa.

For instance, changing from `http` to `https` works:

```js
const u = new URL('http://example.org');
u.protocol = 'https';
console.log(u.href);
// https://example.org/
```

However, changing from `http` to a hypothetical `fish` protocol does not because the new protocol is not special.

```js
const u = new URL('http://example.org');
u.protocol = 'fish';
console.log(u.href);
// http://example.org/
```

Likewise, changing from a non-special protocol to a special protocol is also not permitted:

```js
const u = new URL('fish://example.org');
u.protocol = 'http';
console.log(u.href);
// fish://example.org
```

According to the WHATWG URL Standard, special protocol schemes are `ftp`, `file`, `http`, `https`, `ws`, and `wss`.
url.search#
- Type: <string>

Gets and sets the serialized query portion of the URL.

```js
const myURL = new URL('https://example.org/abc?123');
console.log(myURL.search);
// Prints ?123

myURL.search = 'abc=xyz';
console.log(myURL.href);
// Prints https://example.org/abc?abc=xyz
```

Any invalid URL characters appearing in the value assigned the `search` property will be percent-encoded. The selection of which characters to percent-encode may vary somewhat from what the `url.parse()` and `url.format()` methods would produce.
url.searchParams#
- Type: <URLSearchParams>

Gets the `URLSearchParams` object representing the query parameters of the URL. This property is read-only but the `URLSearchParams` object it provides can be used to mutate the URL instance; to replace the entirety of query parameters of the URL, use the `url.search` setter. See `URLSearchParams` documentation for details.

Use care when using `.searchParams` to modify the `URL` because, per the WHATWG specification, the `URLSearchParams` object uses different rules to determine which characters to percent-encode. For instance, the `URL` object will not percent-encode the ASCII tilde (`~`) character, while `URLSearchParams` will always encode it:

```js
const myURL = new URL('https://example.org/abc?foo=~bar');
console.log(myURL.search);
// prints ?foo=~bar

// Modify the URL via searchParams...
myURL.searchParams.sort();

console.log(myURL.search);
// prints ?foo=%7Ebar
```

url.username#
- Type:<string>
Gets and sets the username portion of the URL.
```js
const myURL = new URL('https://abc:xyz@example.com');
console.log(myURL.username);
// Prints abc

myURL.username = '123';
console.log(myURL.href);
// Prints https://123:xyz@example.com/
```

Any invalid URL characters appearing in the value assigned to the `username` property will be percent-encoded. The selection of which characters to percent-encode may vary somewhat from what the `url.parse()` and `url.format()` methods would produce.
url.toString()#
- Returns:<string>
The `toString()` method on the `URL` object returns the serialized URL. The value returned is equivalent to that of `url.href` and `url.toJSON()`.
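Since all three accessors agree, a minimal sketch (the URL value here is illustrative, using the global `URL` class):

```js
const u = new URL('https://example.org/p?q=1#frag');

console.log(u.toString());
// Prints https://example.org/p?q=1#frag

// toString(), href, and toJSON() all produce the same serialization.
console.log(u.toString() === u.href && u.toString() === u.toJSON());
// Prints true
```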
url.toJSON()#
- Returns:<string>
The `toJSON()` method on the `URL` object returns the serialized URL. The value returned is equivalent to that of `url.href` and `url.toString()`.

This method is automatically called when a `URL` object is serialized with `JSON.stringify()`.

```js
const myURLs = [
  new URL('https://www.example.com'),
  new URL('https://test.example.org'),
];
console.log(JSON.stringify(myURLs));
// Prints ["https://www.example.com/","https://test.example.org/"]
```

URL.createObjectURL(blob)#
History
| Version | Changes |
|---|---|
| v24.0.0, v22.17.0 | Marking the API stable. |
| v16.7.0 | Added in: v16.7.0 |
Creates a `'blob:nodedata:...'` URL string that represents the given <Blob> object and can be used to retrieve the `Blob` later.

```js
const {
  Blob,
  resolveObjectURL,
} = require('node:buffer');

const blob = new Blob(['hello']);
const id = URL.createObjectURL(blob);

// later...

const otherBlob = resolveObjectURL(id);
console.log(otherBlob.size);
```

The data stored by the registered <Blob> will be retained in memory until `URL.revokeObjectURL()` is called to remove it.

`Blob` objects are registered within the current thread. If using Worker threads, `Blob` objects registered within one Worker will not be available to other workers or the main thread.
URL.revokeObjectURL(id)#
History
| Version | Changes |
|---|---|
| v24.0.0, v22.17.0 | Marking the API stable. |
| v16.7.0 | Added in: v16.7.0 |
- `id` <string> A `'blob:nodedata:...'` URL string returned by a prior call to `URL.createObjectURL()`.

Removes the stored <Blob> identified by the given ID. Attempting to revoke an ID that isn't registered will silently fail.
URL.canParse(input[, base])#
- `input` <string> The absolute or relative input URL to parse. If `input` is relative, then `base` is required. If `input` is absolute, the `base` is ignored. If `input` is not a string, it is converted to a string first.
- `base` <string> The base URL to resolve against if the `input` is not absolute. If `base` is not a string, it is converted to a string first.
- Returns: <boolean>

Checks if an `input` relative to the `base` can be parsed to a `URL`.

```js
const isValid = URL.canParse('/foo', 'https://example.org/'); // true

const isNotValid = URL.canParse('/foo'); // false
```

URL.parse(input[, base])#

- `input` <string> The absolute or relative input URL to parse. If `input` is relative, then `base` is required. If `input` is absolute, the `base` is ignored. If `input` is not a string, it is converted to a string first.
- `base` <string> The base URL to resolve against if the `input` is not absolute. If `base` is not a string, it is converted to a string first.
- Returns: <URL> | <null>

Parses a string as a URL. If `base` is provided, it will be used as the base URL for the purpose of resolving non-absolute `input` URLs. Returns `null` if the parameters can't be resolved to a valid URL.
Class: URLPattern#
The `URLPattern` API provides an interface to match URLs or parts of URLs against a pattern.

```js
const myPattern = new URLPattern('https://nodejs.org/docs/latest/api/*.html');
console.log(myPattern.exec('https://nodejs.org/docs/latest/api/dns.html'));
// Prints:
// {
//   "hash": { "groups": { "0": "" }, "input": "" },
//   "hostname": { "groups": {}, "input": "nodejs.org" },
//   "inputs": [
//     "https://nodejs.org/docs/latest/api/dns.html"
//   ],
//   "password": { "groups": { "0": "" }, "input": "" },
//   "pathname": { "groups": { "0": "dns" }, "input": "/docs/latest/api/dns.html" },
//   "port": { "groups": {}, "input": "" },
//   "protocol": { "groups": {}, "input": "https" },
//   "search": { "groups": { "0": "" }, "input": "" },
//   "username": { "groups": { "0": "" }, "input": "" }
// }

console.log(myPattern.test('https://nodejs.org/docs/latest/api/dns.html'));
// Prints: true
```

new URLPattern()#

Instantiate a new empty `URLPattern` object.
new URLPattern(string[, baseURL][, options])#
- `string` <string> A URL string
- `baseURL` <string> | <undefined> A base URL string
- `options` <Object> Options

Parse the `string` as a URL, and use it to instantiate a new `URLPattern` object.

If `baseURL` is not specified, it defaults to `undefined`.

The options object can have an `ignoreCase` boolean attribute which enables case-insensitive matching if set to `true`.

The constructor can throw a `TypeError` to indicate parsing failure.
new URLPattern(obj[, baseURL][, options])#
- `obj` <Object> An input pattern
- `baseURL` <string> | <undefined> A base URL string
- `options` <Object> Options

Parse the `Object` as an input pattern, and use it to instantiate a new `URLPattern` object. The object members can be any of `protocol`, `username`, `password`, `hostname`, `port`, `pathname`, `search`, `hash`, or `baseURL`.

If `baseURL` is not specified, it defaults to `undefined`.

The options object can have an `ignoreCase` boolean attribute which enables case-insensitive matching if set to `true`.

The constructor can throw a `TypeError` to indicate parsing failure.
urlPattern.exec(input[, baseURL])#
- `input` <string> | <Object> A URL or URL parts
- `baseURL` <string> | <undefined> A base URL string

Input can be a string or an object providing the individual URL parts. The object members can be any of `protocol`, `username`, `password`, `hostname`, `port`, `pathname`, `search`, `hash`, or `baseURL`.

If `baseURL` is not specified, it will default to `undefined`.

Returns an object with an `inputs` key containing the array of arguments passed into the function, plus a key for each URL component containing the matched input and matched groups.

```js
const myPattern = new URLPattern('https://nodejs.org/docs/latest/api/*.html');
console.log(myPattern.exec('https://nodejs.org/docs/latest/api/dns.html'));
// Prints:
// {
//   "hash": { "groups": { "0": "" }, "input": "" },
//   "hostname": { "groups": {}, "input": "nodejs.org" },
//   "inputs": [
//     "https://nodejs.org/docs/latest/api/dns.html"
//   ],
//   "password": { "groups": { "0": "" }, "input": "" },
//   "pathname": { "groups": { "0": "dns" }, "input": "/docs/latest/api/dns.html" },
//   "port": { "groups": {}, "input": "" },
//   "protocol": { "groups": {}, "input": "https" },
//   "search": { "groups": { "0": "" }, "input": "" },
//   "username": { "groups": { "0": "" }, "input": "" }
// }
```

urlPattern.test(input[, baseURL])#
- `input` <string> | <Object> A URL or URL parts
- `baseURL` <string> | <undefined> A base URL string

Input can be a string or an object providing the individual URL parts. The object members can be any of `protocol`, `username`, `password`, `hostname`, `port`, `pathname`, `search`, `hash`, or `baseURL`.

If `baseURL` is not specified, it will default to `undefined`.

Returns a boolean indicating if the input matches the current pattern.

```js
const myPattern = new URLPattern('https://nodejs.org/docs/latest/api/*.html');
console.log(myPattern.test('https://nodejs.org/docs/latest/api/dns.html'));
// Prints: true
```

Class: URLSearchParams#
History
| Version | Changes |
|---|---|
| v10.0.0 | The class is now available on the global object. |
| v7.5.0, v6.13.0 | Added in: v7.5.0, v6.13.0 |
The `URLSearchParams` API provides read and write access to the query of a `URL`. The `URLSearchParams` class can also be used standalone with one of the four following constructors. The `URLSearchParams` class is also available on the global object.

The WHATWG `URLSearchParams` interface and the `querystring` module have a similar purpose, but the purpose of the `querystring` module is more general, as it allows the customization of delimiter characters (`&` and `=`). On the other hand, this API is designed purely for URL query strings.

```js
const myURL = new URL('https://example.org/?abc=123');
console.log(myURL.searchParams.get('abc'));
// Prints 123

myURL.searchParams.append('abc', 'xyz');
console.log(myURL.href);
// Prints https://example.org/?abc=123&abc=xyz

myURL.searchParams.delete('abc');
myURL.searchParams.set('a', 'b');
console.log(myURL.href);
// Prints https://example.org/?a=b

const newSearchParams = new URLSearchParams(myURL.searchParams);
// The above is equivalent to
// const newSearchParams = new URLSearchParams(myURL.search);

newSearchParams.append('a', 'c');
console.log(myURL.href);
// Prints https://example.org/?a=b
console.log(newSearchParams.toString());
// Prints a=b&a=c

// newSearchParams.toString() is implicitly called
myURL.search = newSearchParams;
console.log(myURL.href);
// Prints https://example.org/?a=b&a=c
newSearchParams.delete('a');
console.log(myURL.href);
// Prints https://example.org/?a=b&a=c
```

new URLSearchParams()#

Instantiate a new empty `URLSearchParams` object.
new URLSearchParams(string)#
- `string` <string> A query string

Parse the `string` as a query string, and use it to instantiate a new `URLSearchParams` object. A leading `'?'`, if present, is ignored.

```js
let params;

params = new URLSearchParams('user=abc&query=xyz');
console.log(params.get('user'));
// Prints 'abc'
console.log(params.toString());
// Prints 'user=abc&query=xyz'

params = new URLSearchParams('?user=abc&query=xyz');
console.log(params.toString());
// Prints 'user=abc&query=xyz'
```

new URLSearchParams(obj)#
- `obj` <Object> An object representing a collection of key-value pairs

Instantiate a new `URLSearchParams` object with a query hash map. The key and value of each property of `obj` are always coerced to strings.

Unlike the `querystring` module, duplicate keys in the form of array values are not allowed. Arrays are stringified using `array.toString()`, which simply joins all array elements with commas.

```js
const params = new URLSearchParams({
  user: 'abc',
  query: ['first', 'second'],
});
console.log(params.getAll('query'));
// Prints [ 'first,second' ]
console.log(params.toString());
// Prints 'user=abc&query=first%2Csecond'
```

new URLSearchParams(iterable)#
- `iterable` <Iterable> An iterable object whose elements are key-value pairs

Instantiate a new `URLSearchParams` object with an iterable map in a way that is similar to <Map>'s constructor. `iterable` can be an `Array` or any iterable object. That means `iterable` can be another `URLSearchParams`, in which case the constructor will simply create a clone of the provided `URLSearchParams`. Elements of `iterable` are key-value pairs, and can themselves be any iterable object.

Duplicate keys are allowed.

```js
let params;

// Using an array
params = new URLSearchParams([
  ['user', 'abc'],
  ['query', 'first'],
  ['query', 'second'],
]);
console.log(params.toString());
// Prints 'user=abc&query=first&query=second'

// Using a Map object
const map = new Map();
map.set('user', 'abc');
map.set('query', 'xyz');
params = new URLSearchParams(map);
console.log(params.toString());
// Prints 'user=abc&query=xyz'

// Using a generator function
function* getQueryPairs() {
  yield ['user', 'abc'];
  yield ['query', 'first'];
  yield ['query', 'second'];
}
params = new URLSearchParams(getQueryPairs());
console.log(params.toString());
// Prints 'user=abc&query=first&query=second'

// Each key-value pair must have exactly two elements
new URLSearchParams([
  ['user', 'abc', 'error'],
]);
// Throws TypeError [ERR_INVALID_TUPLE]:
//        Each query pair must be an iterable [name, value] tuple
```

urlSearchParams.append(name, value)#
Append a new name-value pair to the query string.
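Appended pairs are kept even when a pair with the same name already exists; a minimal sketch (illustrative values):

```js
const params = new URLSearchParams('foo=bar');
params.append('foo', 'baz');
console.log(params.toString());
// Prints foo=bar&foo=baz
```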
urlSearchParams.delete(name[, value])#
History
| Version | Changes |
|---|---|
| v20.2.0, v18.18.0 | Add support for optional `value` argument. |
If `value` is provided, removes all name-value pairs where name is `name` and value is `value`.

If `value` is not provided, removes all name-value pairs whose name is `name`.
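A minimal sketch of both forms (illustrative values; the two-argument form requires Node.js v20.2.0 or v18.18.0 and later, per the history table above):

```js
const params = new URLSearchParams('a=1&a=2&b=3');

params.delete('a', '2');  // removes only the pair a=2
console.log(params.toString());
// Prints a=1&b=3

params.delete('b');       // removes every pair named b
console.log(params.toString());
// Prints a=1
```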
urlSearchParams.entries()#
- Returns:<Iterator>
Returns an ES6 `Iterator` over each of the name-value pairs in the query. Each item of the iterator is a JavaScript `Array`. The first item of the `Array` is the `name`, the second item of the `Array` is the `value`.

Alias for `urlSearchParams[Symbol.iterator]()`.
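For example (illustrative values):

```js
const params = new URLSearchParams('user=abc&query=xyz');
for (const [name, value] of params.entries()) {
  console.log(name, value);
}
// Prints:
// user abc
// query xyz
```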
urlSearchParams.forEach(fn[, thisArg])#
History
| Version | Changes |
|---|---|
| v18.0.0 | Passing an invalid callback to the |
- `fn` <Function> Invoked for each name-value pair in the query
- `thisArg` <Object> To be used as `this` value for when `fn` is called
Iterates over each name-value pair in the query and invokes the given function.
```js
const myURL = new URL('https://example.org/?a=b&c=d');
myURL.searchParams.forEach((value, name, searchParams) => {
  console.log(name, value, myURL.searchParams === searchParams);
});
// Prints:
// a b true
// c d true
```

urlSearchParams.get(name)#

- `name` <string>
- Returns: <string> | <null> A string or `null` if there is no name-value pair with the given `name`.

Returns the value of the first name-value pair whose name is `name`. If there are no such pairs, `null` is returned.
urlSearchParams.getAll(name)#
- `name` <string>
- Returns: <string[]>

Returns the values of all name-value pairs whose name is `name`. If there are no such pairs, an empty array is returned.
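A minimal sketch contrasting `get()` and `getAll()` (illustrative values):

```js
const params = new URLSearchParams('foo=bar&foo=baz');

console.log(params.get('foo'));
// Prints bar
console.log(params.getAll('foo'));
// Prints [ 'bar', 'baz' ]

console.log(params.get('missing'));
// Prints null
console.log(params.getAll('missing'));
// Prints []
```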
urlSearchParams.has(name[, value])#
History
| Version | Changes |
|---|---|
| v20.2.0, v18.18.0 | Add support for optional `value` argument. |
Checks if the `URLSearchParams` object contains key-value pair(s) based on `name` and an optional `value` argument.

If `value` is provided, returns `true` when a name-value pair with the same `name` and `value` exists.

If `value` is not provided, returns `true` if there is at least one name-value pair whose name is `name`.
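A minimal sketch (illustrative values; the two-argument form requires Node.js v20.2.0 or v18.18.0 and later, per the history table above):

```js
const params = new URLSearchParams('foo=bar&foo=baz');

console.log(params.has('foo'));
// Prints true
console.log(params.has('foo', 'bar'));
// Prints true
console.log(params.has('foo', 'qux'));
// Prints false
```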
urlSearchParams.keys()#
- Returns:<Iterator>
Returns an ES6 `Iterator` over the names of each name-value pair.

```js
const params = new URLSearchParams('foo=bar&foo=baz');
for (const name of params.keys()) {
  console.log(name);
}
// Prints:
// foo
// foo
```

urlSearchParams.set(name, value)#

Sets the value in the `URLSearchParams` object associated with `name` to `value`. If there are any pre-existing name-value pairs whose names are `name`, set the first such pair's value to `value` and remove all others. If not, append the name-value pair to the query string.

```js
const params = new URLSearchParams();
params.append('foo', 'bar');
params.append('foo', 'baz');
params.append('abc', 'def');
console.log(params.toString());
// Prints foo=bar&foo=baz&abc=def

params.set('foo', 'def');
params.set('xyz', 'opq');
console.log(params.toString());
// Prints foo=def&abc=def&xyz=opq
```

urlSearchParams.sort()#

Sort all existing name-value pairs in-place by their names. Sorting is done with a stable sorting algorithm, so relative order between name-value pairs with the same name is preserved.
This method can be used, in particular, to increase cache hits.
```js
const params = new URLSearchParams('query[]=abc&type=search&query[]=123');
params.sort();
console.log(params.toString());
// Prints query%5B%5D=abc&query%5B%5D=123&type=search
```

urlSearchParams.toString()#
- Returns:<string>
Returns the search parameters serialized as a string, with characters percent-encoded where necessary.
urlSearchParams.values()#
- Returns:<Iterator>
Returns an ES6 `Iterator` over the values of each name-value pair.
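For example (illustrative values):

```js
const params = new URLSearchParams('foo=bar&xyz=baz');
for (const value of params.values()) {
  console.log(value);
}
// Prints:
// bar
// baz
```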
urlSearchParams[Symbol.iterator]()#
- Returns:<Iterator>
Returns an ES6 `Iterator` over each of the name-value pairs in the query string. Each item of the iterator is a JavaScript `Array`. The first item of the `Array` is the `name`, the second item of the `Array` is the `value`.

Alias for `urlSearchParams.entries()`.

```js
const params = new URLSearchParams('foo=bar&xyz=baz');
for (const [name, value] of params) {
  console.log(name, value);
}
// Prints:
// foo bar
// xyz baz
```

url.domainToASCII(domain)#
History
| Version | Changes |
|---|---|
| v20.0.0, v18.17.0 | ICU requirement is removed. |
| v7.4.0, v6.13.0 | Added in: v7.4.0, v6.13.0 |
Returns the Punycode ASCII serialization of the `domain`. If `domain` is an invalid domain, the empty string is returned.

It performs the inverse operation to `url.domainToUnicode()`.

```mjs
import url from 'node:url';

console.log(url.domainToASCII('español.com'));
// Prints xn--espaol-zwa.com
console.log(url.domainToASCII('中文.com'));
// Prints xn--fiq228c.com
console.log(url.domainToASCII('xn--iñvalid.com'));
// Prints an empty string
```

```cjs
const url = require('node:url');

console.log(url.domainToASCII('español.com'));
// Prints xn--espaol-zwa.com
console.log(url.domainToASCII('中文.com'));
// Prints xn--fiq228c.com
console.log(url.domainToASCII('xn--iñvalid.com'));
// Prints an empty string
```
url.domainToUnicode(domain)#
History
| Version | Changes |
|---|---|
| v20.0.0, v18.17.0 | ICU requirement is removed. |
| v7.4.0, v6.13.0 | Added in: v7.4.0, v6.13.0 |
Returns the Unicode serialization of the `domain`. If `domain` is an invalid domain, the empty string is returned.

It performs the inverse operation to `url.domainToASCII()`.

```mjs
import url from 'node:url';

console.log(url.domainToUnicode('xn--espaol-zwa.com'));
// Prints español.com
console.log(url.domainToUnicode('xn--fiq228c.com'));
// Prints 中文.com
console.log(url.domainToUnicode('xn--iñvalid.com'));
// Prints an empty string
```

```cjs
const url = require('node:url');

console.log(url.domainToUnicode('xn--espaol-zwa.com'));
// Prints español.com
console.log(url.domainToUnicode('xn--fiq228c.com'));
// Prints 中文.com
console.log(url.domainToUnicode('xn--iñvalid.com'));
// Prints an empty string
```
url.fileURLToPath(url[, options])#
History
| Version | Changes |
|---|---|
| v22.1.0, v20.13.0 | The |
| v10.12.0 | Added in: v10.12.0 |
- `url` <URL> | <string> The file URL string or URL object to convert to a path.
- `options` <Object>
  - `windows` <boolean> | <undefined> `true` if the `path` should be returned as a Windows filepath, `false` for posix, and `undefined` for the system default. Default: `undefined`.
- Returns: <string> The fully resolved platform-specific Node.js file path.

This function ensures correct decoding of percent-encoded characters as well as ensuring a cross-platform valid absolute path string.
Security Considerations:
This function decodes percent-encoded characters, including encoded dot-segments (`%2e` as `.` and `%2e%2e` as `..`), and then normalizes the resulting path. This means that encoded directory traversal sequences (such as `%2e%2e`) are decoded and processed as actual path traversal, even though encoded slashes (`%2F`, `%5C`) are correctly rejected.

Applications must not rely on `fileURLToPath()` alone to prevent directory traversal attacks. Always perform explicit path validation and security checks on the returned path value to ensure it remains within expected boundaries before using it for file system operations.

```mjs
import { fileURLToPath } from 'node:url';

const __filename = fileURLToPath(import.meta.url);

new URL('file:///C:/path/').pathname;      // Incorrect: /C:/path/
fileURLToPath('file:///C:/path/');         // Correct:   C:\path\ (Windows)

new URL('file://nas/foo.txt').pathname;    // Incorrect: /foo.txt
fileURLToPath('file://nas/foo.txt');       // Correct:   \\nas\foo.txt (Windows)

new URL('file:///你好.txt').pathname;      // Incorrect: /%E4%BD%A0%E5%A5%BD.txt
fileURLToPath('file:///你好.txt');         // Correct:   /你好.txt (POSIX)

new URL('file:///hello world').pathname;   // Incorrect: /hello%20world
fileURLToPath('file:///hello world');      // Correct:   /hello world (POSIX)
```

```cjs
const { fileURLToPath } = require('node:url');

new URL('file:///C:/path/').pathname;      // Incorrect: /C:/path/
fileURLToPath('file:///C:/path/');         // Correct:   C:\path\ (Windows)

new URL('file://nas/foo.txt').pathname;    // Incorrect: /foo.txt
fileURLToPath('file://nas/foo.txt');       // Correct:   \\nas\foo.txt (Windows)

new URL('file:///你好.txt').pathname;      // Incorrect: /%E4%BD%A0%E5%A5%BD.txt
fileURLToPath('file:///你好.txt');         // Correct:   /你好.txt (POSIX)

new URL('file:///hello world').pathname;   // Incorrect: /hello%20world
fileURLToPath('file:///hello world');      // Correct:   /hello world (POSIX)
```
url.fileURLToPathBuffer(url[, options])#
- `url` <URL> | <string> The file URL string or URL object to convert to a path.
- `options` <Object>
  - `windows` <boolean> | <undefined> `true` if the `path` should be returned as a Windows filepath, `false` for posix, and `undefined` for the system default. Default: `undefined`.
- Returns: <Buffer> The fully resolved platform-specific Node.js file path as a <Buffer>.

Like `url.fileURLToPath(...)` except that instead of returning a string representation of the path, a `Buffer` is returned. This conversion is helpful when the input URL contains percent-encoded segments that are not valid UTF-8 / Unicode sequences.
Security Considerations:
This function has the same security considerations as `url.fileURLToPath()`. It decodes percent-encoded characters, including encoded dot-segments (`%2e` as `.` and `%2e%2e` as `..`), and normalizes the path. Applications must not rely on this function alone to prevent directory traversal attacks. Always perform explicit path validation on the returned buffer value before using it for file system operations.
url.format(URL[, options])#
- `URL` <URL> A WHATWG URL object
- `options` <Object>
  - `auth` <boolean> `true` if the serialized URL string should include the username and password, `false` otherwise. Default: `true`.
  - `fragment` <boolean> `true` if the serialized URL string should include the fragment, `false` otherwise. Default: `true`.
  - `search` <boolean> `true` if the serialized URL string should include the search query, `false` otherwise. Default: `true`.
  - `unicode` <boolean> `true` if Unicode characters appearing in the host component of the URL string should be encoded directly as opposed to being Punycode encoded. Default: `false`.
- Returns: <string>

Returns a customizable serialization of a URL String representation of a WHATWG URL object.

The URL object has both a `toString()` method and `href` property that return string serializations of the URL. These are not, however, customizable in any way. The `url.format(URL[, options])` method allows for basic customization of the output.

```mjs
import url from 'node:url';
const myURL = new URL('https://a:b@測試?abc#foo');

console.log(myURL.href);
// Prints https://a:b@xn--g6w251d/?abc#foo

console.log(myURL.toString());
// Prints https://a:b@xn--g6w251d/?abc#foo

console.log(url.format(myURL, { fragment: false, unicode: true, auth: false }));
// Prints 'https://測試/?abc'
```

```cjs
const url = require('node:url');
const myURL = new URL('https://a:b@測試?abc#foo');

console.log(myURL.href);
// Prints https://a:b@xn--g6w251d/?abc#foo

console.log(myURL.toString());
// Prints https://a:b@xn--g6w251d/?abc#foo

console.log(url.format(myURL, { fragment: false, unicode: true, auth: false }));
// Prints 'https://測試/?abc'
```
url.pathToFileURL(path[, options])#
History
| Version | Changes |
|---|---|
| v22.1.0, v20.13.0 | The |
| v10.12.0 | Added in: v10.12.0 |
- `path` <string> The path to convert to a File URL.
- `options` <Object>
  - `windows` <boolean> | <undefined> `true` if the `path` should be treated as a Windows filepath, `false` for posix, and `undefined` for the system default. Default: `undefined`.
- Returns: <URL> The file URL object.

This function ensures that `path` is resolved absolutely, and that the URL control characters are correctly encoded when converting into a File URL.

```mjs
import { pathToFileURL } from 'node:url';

new URL('/foo#1', 'file:');           // Incorrect: file:///foo#1
pathToFileURL('/foo#1');              // Correct:   file:///foo%231 (POSIX)

new URL('/some/path%.c', 'file:');    // Incorrect: file:///some/path%.c
pathToFileURL('/some/path%.c');       // Correct:   file:///some/path%25.c (POSIX)
```

```cjs
const { pathToFileURL } = require('node:url');

new URL(__filename);                  // Incorrect: throws (POSIX)
new URL(__filename);                  // Incorrect: C:\... (Windows)
pathToFileURL(__filename);            // Correct:   file:///... (POSIX)
pathToFileURL(__filename);            // Correct:   file:///C:/... (Windows)

new URL('/foo#1', 'file:');           // Incorrect: file:///foo#1
pathToFileURL('/foo#1');              // Correct:   file:///foo%231 (POSIX)

new URL('/some/path%.c', 'file:');    // Incorrect: file:///some/path%.c
pathToFileURL('/some/path%.c');       // Correct:   file:///some/path%25.c (POSIX)
```
url.urlToHttpOptions(url)#
History
| Version | Changes |
|---|---|
| v19.9.0, v18.17.0 | The returned object will also contain all the own enumerable properties of the `url` argument. |
| v15.7.0, v14.18.0 | Added in: v15.7.0, v14.18.0 |
- `url` <URL> The WHATWG URL object to convert to an options object.
- Returns: <Object> Options object with the following properties:
  - `protocol` <string> Protocol to use.
  - `hostname` <string> A domain name or IP address of the server to issue the request to.
  - `hash` <string> The fragment portion of the URL.
  - `search` <string> The serialized query portion of the URL.
  - `pathname` <string> The path portion of the URL.
  - `path` <string> Request path. Should include query string if any. E.g. `'/index.html?page=12'`. An exception is thrown when the request path contains illegal characters. Currently, only spaces are rejected but that may change in the future.
  - `href` <string> The serialized URL.
  - `port` <number> Port of remote server.
  - `auth` <string> Basic authentication i.e. `'user:password'` to compute an Authorization header.

This utility function converts a URL object into an ordinary options object as expected by the `http.request()` and `https.request()` APIs.

```mjs
import { urlToHttpOptions } from 'node:url';
const myURL = new URL('https://a:b@測試?abc#foo');

console.log(urlToHttpOptions(myURL));
/*
{
  protocol: 'https:',
  hostname: 'xn--g6w251d',
  hash: '#foo',
  search: '?abc',
  pathname: '/',
  path: '/?abc',
  href: 'https://a:b@xn--g6w251d/?abc#foo',
  auth: 'a:b'
}
*/
```

```cjs
const { urlToHttpOptions } = require('node:url');
const myURL = new URL('https://a:b@測試?abc#foo');

console.log(urlToHttpOptions(myURL));
/*
{
  protocol: 'https:',
  hostname: 'xn--g6w251d',
  hash: '#foo',
  search: '?abc',
  pathname: '/',
  path: '/?abc',
  href: 'https://a:b@xn--g6w251d/?abc#foo',
  auth: 'a:b'
}
*/
```
Legacy URL API#
History
| Version | Changes |
|---|---|
| v15.13.0, v14.17.0 | Deprecation revoked. Status changed to "Legacy". |
| v11.0.0 | This API is deprecated. |
Legacy urlObject#
History
| Version | Changes |
|---|---|
| v15.13.0, v14.17.0 | Deprecation revoked. Status changed to "Legacy". |
| v11.0.0 | The Legacy URL API is deprecated. Use the WHATWG URL API. |
The legacy `urlObject` (`require('node:url').Url` or `import { Url } from 'node:url'`) is created and returned by the `url.parse()` function.
urlObject.auth#
The `auth` property is the username and password portion of the URL, also referred to as userinfo. This string subset follows the `protocol` and double slashes (if present) and precedes the `host` component, delimited by `@`. The string is either the username, or it is the username and password separated by `:`.

For example: `'user:pass'`.
urlObject.hash#
The `hash` property is the fragment identifier portion of the URL including the leading `#` character.

For example: `'#hash'`.
urlObject.host#
The `host` property is the full lower-cased host portion of the URL, including the `port` if specified.

For example: `'sub.example.com:8080'`.
urlObject.hostname#
The `hostname` property is the lower-cased host name portion of the `host` component without the `port` included.

For example: `'sub.example.com'`.
urlObject.href#
The `href` property is the full URL string that was parsed with both the `protocol` and `host` components converted to lower-case.

For example: `'http://user:pass@sub.example.com:8080/p/a/t/h?query=string#hash'`.
urlObject.path#
The `path` property is a concatenation of the `pathname` and `search` components.

For example: `'/p/a/t/h?query=string'`.

No decoding of the `path` is performed.
urlObject.pathname#
The `pathname` property consists of the entire path section of the URL. This is everything following the `host` (including the `port`) and before the start of the `query` or `hash` components, delimited by either the ASCII question mark (`?`) or hash (`#`) characters.

For example: `'/p/a/t/h'`.
No decoding of the path string is performed.
urlObject.port#
The `port` property is the numeric port portion of the `host` component.

For example: `'8080'`.
urlObject.protocol#
The `protocol` property identifies the URL's lower-cased protocol scheme.

For example: `'http:'`.
urlObject.query#
The `query` property is either the query string without the leading ASCII question mark (`?`), or an object returned by the `querystring` module's `parse()` method. Whether the `query` property is a string or object is determined by the `parseQueryString` argument passed to `url.parse()`.

For example: `'query=string'` or `{'query': 'string'}`.

If returned as a string, no decoding of the query string is performed. If returned as an object, both keys and values are decoded.
urlObject.search#
The `search` property consists of the entire "query string" portion of the URL, including the leading ASCII question mark (`?`) character.

For example: `'?query=string'`.
No decoding of the query string is performed.
urlObject.slashes#
The `slashes` property is a `boolean` with a value of `true` if two ASCII forward-slash characters (`/`) are required following the colon in the `protocol`.
url.format(urlObject)#
History
| Version | Changes |
|---|---|
| v17.0.0 | Now throws an |
| v15.13.0, v14.17.0 | Deprecation revoked. Status changed to "Legacy". |
| v11.0.0 | The Legacy URL API is deprecated. Use the WHATWG URL API. |
| v7.0.0 | URLs with a |
| v0.1.25 | Added in: v0.1.25 |
- `urlObject` <Object> A URL object (as returned by `url.parse()` or constructed otherwise).

The `url.format()` method returns a formatted URL string derived from `urlObject`.

```js
const url = require('node:url');
url.format({
  protocol: 'https',
  hostname: 'example.com',
  pathname: '/some/path',
  query: {
    page: 1,
    format: 'json',
  },
});

// => 'https://example.com/some/path?page=1&format=json'
```

If `urlObject` is not an object or a string, `url.format()` will throw a `TypeError`.
The formatting process operates as follows:

1. A new empty string `result` is created.
2. If `urlObject.protocol` is a string, it is appended as-is to `result`.
3. Otherwise, if `urlObject.protocol` is not `undefined` and is not a string, an `Error` is thrown.
4. For all string values of `urlObject.protocol` that do not end with an ASCII colon (`:`) character, the literal string `:` will be appended to `result`.
5. If either of the following conditions is true, then the literal string `//` will be appended to `result`:
   - `urlObject.slashes` property is true;
   - `urlObject.protocol` begins with `http`, `https`, `ftp`, `gopher`, or `file`;
6. If the value of the `urlObject.auth` property is truthy, and either `urlObject.host` or `urlObject.hostname` are not `undefined`, the value of `urlObject.auth` will be coerced into a string and appended to `result` followed by the literal string `@`.
7. If the `urlObject.host` property is `undefined` then:
   - If the `urlObject.hostname` is a string, it is appended to `result`.
   - Otherwise, if `urlObject.hostname` is not `undefined` and is not a string, an `Error` is thrown.
   - If the `urlObject.port` property value is truthy, and `urlObject.hostname` is not `undefined`:
     - The literal string `:` is appended to `result`, and
     - The value of `urlObject.port` is coerced to a string and appended to `result`.
8. Otherwise, if the `urlObject.host` property value is truthy, the value of `urlObject.host` is coerced to a string and appended to `result`.
9. If the `urlObject.pathname` property is a string that is not an empty string:
   - If the `urlObject.pathname` does not start with an ASCII forward slash (`/`), then the literal string `'/'` is appended to `result`.
   - The value of `urlObject.pathname` is appended to `result`.
10. Otherwise, if `urlObject.pathname` is not `undefined` and is not a string, an `Error` is thrown.
11. If the `urlObject.search` property is `undefined` and if the `urlObject.query` property is an `Object`, the literal string `?` is appended to `result` followed by the output of calling the `querystring` module's `stringify()` method passing the value of `urlObject.query`.
12. Otherwise, if `urlObject.search` is a string:
    - If the value of `urlObject.search` does not start with the ASCII question mark (`?`) character, the literal string `?` is appended to `result`.
    - The value of `urlObject.search` is appended to `result`.
13. Otherwise, if `urlObject.search` is not `undefined` and is not a string, an `Error` is thrown.
14. If the `urlObject.hash` property is a string:
    - If the value of `urlObject.hash` does not start with the ASCII hash (`#`) character, the literal string `#` is appended to `result`.
    - The value of `urlObject.hash` is appended to `result`.
15. Otherwise, if the `urlObject.hash` property is not `undefined` and is not a string, an `Error` is thrown.
16. `result` is returned.
An automated migration is available (source).
```bash
npx codemod@latest @nodejs/node-url-to-whatwg-url
```

url.format(urlString)#
History
| Version | Changes |
|---|---|
| v24.0.0 | Application deprecation. |
| v0.1.25 | Added in: v0.1.25 |
- `urlString` <string> A string that will be passed to `url.parse()` and then formatted.

`url.format(urlString)` is shorthand for `url.format(url.parse(urlString))`.

Because it invokes the deprecated `url.parse()`, passing a string argument to `url.format()` is itself deprecated.

Canonicalizing a URL string can be performed using the WHATWG URL API, by constructing a new URL object and calling `url.toString()`.
```mjs
import { URL } from 'node:url';

const unformatted = 'http://[fe80:0:0:0:0:0:0:1]:/a/b?a=b#abc';
const formatted = new URL(unformatted).toString();
console.log(formatted);
// Prints: http://[fe80::1]/a/b?a=b#abc
```

```cjs
const { URL } = require('node:url');

const unformatted = 'http://[fe80:0:0:0:0:0:0:1]:/a/b?a=b#abc';
const formatted = new URL(unformatted).toString();
console.log(formatted);
// Prints: http://[fe80::1]/a/b?a=b#abc
```
url.parse(urlString[, parseQueryString[, slashesDenoteHost]])#
History
| Version | Changes |
|---|---|
| v24.0.0 | Application deprecation. |
| v19.9.0, v18.17.0 | Added support for |
| v19.0.0, v18.13.0 | Documentation-only deprecation. |
| v15.13.0, v14.17.0 | Deprecation revoked. Status changed to "Legacy". |
| v11.14.0 | The |
| v11.0.0 | The Legacy URL API is deprecated. Use the WHATWG URL API. |
| v9.0.0 | The |
| v0.1.25 | Added in: v0.1.25 |
- `urlString` <string> The URL string to parse.
- `parseQueryString` <boolean> If `true`, the `query` property will always be set to an object returned by the `querystring` module's `parse()` method. If `false`, the `query` property on the returned URL object will be an unparsed, undecoded string. Default: `false`.
- `slashesDenoteHost` <boolean> If `true`, the first token after the literal string `//` and preceding the next `/` will be interpreted as the `host`. For instance, given `//foo/bar`, the result would be `{host: 'foo', pathname: '/bar'}` rather than `{pathname: '//foo/bar'}`. Default: `false`.
The `url.parse()` method takes a URL string, parses it, and returns a URL object.

A `TypeError` is thrown if `urlString` is not a string.

A `URIError` is thrown if the `auth` property is present but cannot be decoded.
`url.parse()` uses a lenient, non-standard algorithm for parsing URL strings. It is prone to security issues such as host name spoofing and incorrect handling of usernames and passwords. Do not use it with untrusted input. CVEs are not issued for `url.parse()` vulnerabilities. Use the WHATWG URL API instead, for example:
```js
function getURL(req) {
  const proto = req.headers['x-forwarded-proto'] || 'https';
  const host = req.headers['x-forwarded-host'] || req.headers.host || 'example.com';
  return new URL(`${proto}://${host}${req.url || '/'}`);
}
```

The example above assumes well-formed headers are forwarded from a reverse proxy to your Node.js server. If you are not using a reverse proxy, use the example below:

```js
function getURL(req) {
  return new URL(`https://example.com${req.url || '/'}`);
}
```

An automated migration is available (source).

```shell
npx codemod@latest @nodejs/node-url-to-whatwg-url
```

url.resolve(from, to)#
History
| Version | Changes |
|---|---|
| v15.13.0, v14.17.0 | Deprecation revoked. Status changed to "Legacy". |
| v11.0.0 | The Legacy URL API is deprecated. Use the WHATWG URL API. |
| v6.6.0 | The |
| v6.0.0 | The |
| v6.5.0, v4.6.2 | The |
| v0.1.25 | Added in: v0.1.25 |
The `url.resolve()` method resolves a target URL relative to a base URL in a manner similar to that of a web browser resolving an anchor tag.

```js
const url = require('node:url');
url.resolve('/one/two/three', 'four');         // '/one/two/four'
url.resolve('http://example.com/', '/one');    // 'http://example.com/one'
url.resolve('http://example.com/one', '/two'); // 'http://example.com/two'
```

To achieve the same result using the WHATWG URL API:

```js
function resolve(from, to) {
  const resolvedUrl = new URL(to, new URL(from, 'resolve://'));
  if (resolvedUrl.protocol === 'resolve:') {
    // `from` is a relative URL.
    const { pathname, search, hash } = resolvedUrl;
    return pathname + search + hash;
  }
  return resolvedUrl.toString();
}

resolve('/one/two/three', 'four');         // '/one/two/four'
resolve('http://example.com/', '/one');    // 'http://example.com/one'
resolve('http://example.com/one', '/two'); // 'http://example.com/two'
```

Percent-encoding in URLs#
URLs are permitted to contain only a certain range of characters. Any character falling outside of that range must be encoded. How such characters are encoded, and which characters must be encoded, depends entirely on where the character is located within the structure of the URL.
Legacy API#
Within the Legacy API, spaces (`' '`) and the following characters will be automatically escaped in the properties of URL objects:

```text
< > " ` \r \n \t { } | \ ^ '
```

For example, the ASCII space character (`' '`) is encoded as `%20`, and the ASCII less-than (`<`) character is encoded as `%3C`.
WHATWG API#
The WHATWG URL Standard uses a more selective and fine-grained approach to selecting encoded characters than that used by the Legacy API.

The WHATWG algorithm defines four "percent-encode sets" that describe ranges of characters that must be percent-encoded:

- The C0 control percent-encode set includes code points in the range U+0000 to U+001F (inclusive) and all code points greater than U+007E (`~`).
- The fragment percent-encode set includes the C0 control percent-encode set and code points U+0020 SPACE, U+0022 (`"`), U+003C (`<`), U+003E (`>`), and U+0060 (`` ` ``).
- The path percent-encode set includes the C0 control percent-encode set and code points U+0020 SPACE, U+0022 (`"`), U+0023 (`#`), U+003C (`<`), U+003E (`>`), U+003F (`?`), U+0060 (`` ` ``), U+007B (`{`), and U+007D (`}`).
- The userinfo percent-encode set includes the path percent-encode set and code points U+002F (`/`), U+003A (`:`), U+003B (`;`), U+003D (`=`), U+0040 (`@`), U+005B (`[`) to U+005E (`^`), and U+007C (`|`).

The userinfo percent-encode set is used exclusively for usernames and passwords encoded within the URL. The path percent-encode set is used for the path of most URLs. The fragment percent-encode set is used for URL fragments. The C0 control percent-encode set is used for hosts and paths under certain specific conditions, in addition to all other cases.
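A small sketch of how these sets differ in practice: `{` and `}` belong to the path percent-encode set but not to the fragment percent-encode set, so the same characters are escaped in the pathname but left intact in the hash (the URL is arbitrary):

```javascript
// '{' and '}' are only in the path percent-encode set,
// so they are escaped in the pathname but not in the fragment.
const u = new URL('https://example.com/a{b}#c{d}');

console.log(u.pathname);
// /a%7Bb%7D
console.log(u.hash);
// #c{d}
```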
When non-ASCII characters appear within a host name, the host name is encoded using the Punycode algorithm. Note, however, that a host name may contain both Punycode-encoded and percent-encoded characters:

```js
const myURL = new URL('https://%CF%80.example.com/foo');
console.log(myURL.href);
// Prints https://xn--1xa.example.com/foo
console.log(myURL.origin);
// Prints https://xn--1xa.example.com
```

Util#
Source Code: lib/util.js

The `node:util` module supports the needs of Node.js internal APIs. Many of the utilities are useful for application and module developers as well. To access it:

```js
const util = require('node:util');
```
util.callbackify(original)#
- `original` <Function> An `async` function
- Returns: <Function> a callback style function

Takes an `async` function (or a function that returns a `Promise`) and returns a function following the error-first callback style, i.e. taking an `(err, value) => ...` callback as the last argument. In the callback, the first argument will be the rejection reason (or `null` if the `Promise` resolved), and the second argument will be the resolved value.

```js
const { callbackify } = require('node:util');

async function fn() {
  return 'hello world';
}
const callbackFunction = callbackify(fn);

callbackFunction((err, ret) => {
  if (err) throw err;
  console.log(ret);
});
```
Will print:
```text
hello world
```

The callback is executed asynchronously and will have a limited stack trace. If the callback throws, the process will emit an 'uncaughtException' event and, if not handled, will exit.
Since `null` has a special meaning as the first argument to a callback, if a wrapped function rejects a `Promise` with a falsy value as a reason, the value is wrapped in an `Error` with the original value stored in a field named `reason`.

```js
function fn() {
  return Promise.reject(null);
}
const callbackFunction = util.callbackify(fn);

callbackFunction((err, ret) => {
  // When the Promise was rejected with `null` it is wrapped with an Error and
  // the original value is stored in `reason`.
  err && Object.hasOwn(err, 'reason') && err.reason === null;  // true
});
```

util.convertProcessSignalToExitCode(signalCode)#

- `signalCode` <string> A signal name (e.g., `'SIGTERM'`, `'SIGKILL'`).
- Returns: <number> | <null> The exit code, or `null` if the signal is invalid.
The `util.convertProcessSignalToExitCode()` method converts a signal name to its corresponding POSIX exit code. Following the POSIX standard, the exit code for a process terminated by a signal is calculated as `128 + signal number`.

```js
const { convertProcessSignalToExitCode } = require('node:util');

console.log(convertProcessSignalToExitCode('SIGTERM'));
// 143 (128 + 15)
console.log(convertProcessSignalToExitCode('SIGKILL'));
// 137 (128 + 9)
console.log(convertProcessSignalToExitCode('INVALID'));
// null
```

This is particularly useful when working with processes to determine the exit code based on the signal that terminated the process.
util.debuglog(section[, callback])#
- `section` <string> A string identifying the portion of the application for which the `debuglog` function is being created.
- `callback` <Function> A callback invoked the first time the logging function is called, with a function argument that is a more optimized logging function.
- Returns: <Function> The logging function

The `util.debuglog()` method is used to create a function that conditionally writes debug messages to `stderr` based on the existence of the `NODE_DEBUG` environment variable. If the `section` name appears within the value of that environment variable, then the returned function operates similarly to `console.error()`. If not, then the returned function is a no-op.

```js
const { debuglog } = require('node:util');
const log = debuglog('foo');

log('hello from foo [%d]', 123);
```

If this program is run with `NODE_DEBUG=foo` in the environment, then it will output something like:

```text
FOO 3245: hello from foo [123]
```

where `3245` is the process id. If it is not run with that environment variable set, then it will not print anything.
The `section` also supports wildcards:

```js
const { debuglog } = require('node:util');
const log = debuglog('foo-bar');

log('hi there, it\'s foo-bar [%d]', 2333);
```

If it is run with `NODE_DEBUG=foo*` in the environment, then it will output something like:

```text
FOO-BAR 3257: hi there, it's foo-bar [2333]
```

Multiple comma-separated `section` names may be specified in the `NODE_DEBUG` environment variable: `NODE_DEBUG=fs,net,tls`.
The optional `callback` argument can be used to replace the logging function with a different function that doesn't have any initialization or unnecessary wrapping.

```js
const { debuglog } = require('node:util');
let log = debuglog('internals', (debug) => {
  // Replace with a logging function that optimizes out
  // testing if the section is enabled
  log = debug;
});
```
debuglog().enabled#
- Type:<boolean>
The `util.debuglog().enabled` getter is used to create a test that can be used in conditionals based on the existence of the `NODE_DEBUG` environment variable. If the `section` name appears within the value of that environment variable, then the returned value will be `true`. If not, then the returned value will be `false`.

```js
const { debuglog } = require('node:util');
const enabled = debuglog('foo').enabled;

if (enabled) {
  console.log('hello from foo [%d]', 123);
}
```

If this program is run with `NODE_DEBUG=foo` in the environment, then it will output something like:

```text
hello from foo [123]
```

util.debug(section)#

Alias for `util.debuglog`. The alias improves readability in code that only uses `util.debuglog().enabled`, since `util.debug` doesn't imply logging.
util.deprecate(fn, msg[, code[, options]])#
History
| Version | Changes |
|---|---|
| v25.2.0 | Add options object with modifyPrototype to conditionally modify the prototype of the deprecated object. |
| v10.0.0 | Deprecation warnings are only emitted once for each code. |
| v0.8.0 | Added in: v0.8.0 |
- `fn` <Function> The function that is being deprecated.
- `msg` <string> A warning message to display when the deprecated function is invoked.
- `code` <string> A deprecation code. See the list of deprecated APIs for a list of codes.
- `options` <Object>
  - `modifyPrototype` <boolean> When `false`, do not change the prototype of the object while emitting the deprecation warning. Default: `true`.
- Returns: <Function> The deprecated function wrapped to emit a warning.

The `util.deprecate()` method wraps `fn` (which may be a function or class) in such a way that it is marked as deprecated.

```js
const { deprecate } = require('node:util');

exports.obsoleteFunction = deprecate(() => {
  // Do something here.
}, 'obsoleteFunction() is deprecated. Use newShinyFunction() instead.');
```
When called, `util.deprecate()` will return a function that will emit a `DeprecationWarning` using the 'warning' event. The warning will be emitted and printed to `stderr` the first time the returned function is called. After the warning is emitted, the wrapped function is called without emitting a warning.

If the same optional `code` is supplied in multiple calls to `util.deprecate()`, the warning will be emitted only once for that `code`.

```js
const { deprecate } = require('node:util');

const fn1 = deprecate(
  () => 'a value',
  'deprecation message',
  'DEP0001',
);
const fn2 = deprecate(
  () => 'a different value',
  'other dep message',
  'DEP0001',
);
fn1(); // Emits a deprecation warning with code DEP0001
fn2(); // Does not emit a deprecation warning because it has the same code
```
If either the `--no-deprecation` or `--no-warnings` command-line flags are used, or if the `process.noDeprecation` property is set to `true` prior to the first deprecation warning, the `util.deprecate()` method does nothing.

If the `--trace-deprecation` or `--trace-warnings` command-line flags are set, or the `process.traceDeprecation` property is set to `true`, a warning and a stack trace are printed to `stderr` the first time the deprecated function is called.

If the `--throw-deprecation` command-line flag is set, or the `process.throwDeprecation` property is set to `true`, then an exception will be thrown when the deprecated function is called.

The `--throw-deprecation` command-line flag and `process.throwDeprecation` property take precedence over `--trace-deprecation` and `process.traceDeprecation`.
util.diff(actual, expected)#
- Returns: <Array> An array of difference entries. Each entry is an array with two elements:
  - Index 0: <number> the operation code: `1` for values present only in `actual`, `-1` for values present only in `expected`, and `0` for values common to both (see the example below).
  - Index 1: the value associated with the operation.

Algorithm complexity: O(N*D), where:

- `N` is the total length of the two sequences combined (`N = actual.length + expected.length`)
- `D` is the edit distance (the minimum number of operations required to transform one sequence into the other).

`util.diff()` compares two string or array values and returns an array of difference entries. It uses the Myers diff algorithm to compute minimal differences, which is the same algorithm used internally by assertion error messages.

If the values are equal, an empty array is returned.

```js
const { diff } = require('node:util');

// Comparing strings
const actualString = '12345678';
const expectedString = '12!!5!7!';
console.log(diff(actualString, expectedString));
// [
//   [0, '1'],
//   [0, '2'],
//   [1, '3'],
//   [1, '4'],
//   [-1, '!'],
//   [-1, '!'],
//   [0, '5'],
//   [1, '6'],
//   [-1, '!'],
//   [0, '7'],
//   [1, '8'],
//   [-1, '!'],
// ]

// Comparing arrays
const actualArray = ['1', '2', '3'];
const expectedArray = ['1', '3', '4'];
console.log(diff(actualArray, expectedArray));
// [
//   [0, '1'],
//   [1, '2'],
//   [0, '3'],
//   [-1, '4'],
// ]

// Equal values return an empty array
console.log(diff('same', 'same'));
// []
```

util.format(format[, ...args])#
History
| Version | Changes |
|---|---|
| v12.11.0 | The |
| v12.0.0 | The |
| v12.0.0 | If the |
| v11.4.0 | The |
| v11.4.0 | The |
| v11.0.0 | The |
| v10.12.0 | The |
| v8.4.0 | The |
| v0.5.3 | Added in: v0.5.3 |
- `format` <string> A printf-like format string.

The `util.format()` method returns a formatted string using the first argument as a printf-like format string which can contain zero or more format specifiers. Each specifier is replaced with the converted value from the corresponding argument. Supported specifiers are:

- `%s`: `String` will be used to convert all values except `BigInt`, `Object` and `-0`. `BigInt` values will be represented with an `n` and Objects that have neither a user defined `toString` function nor `Symbol.toPrimitive` function are inspected using `util.inspect()` with options `{ depth: 0, colors: false, compact: 3 }`.
- `%d`: `Number` will be used to convert all values except `BigInt` and `Symbol`.
- `%i`: `parseInt(value, 10)` is used for all values except `BigInt` and `Symbol`.
- `%f`: `parseFloat(value)` is used for all values except `Symbol`.
- `%j`: JSON. Replaced with the string `'[Circular]'` if the argument contains circular references.
- `%o`: `Object`. A string representation of an object with generic JavaScript object formatting. Similar to `util.inspect()` with options `{ showHidden: true, showProxy: true }`. This will show the full object including non-enumerable properties and proxies.
- `%O`: `Object`. A string representation of an object with generic JavaScript object formatting. Similar to `util.inspect()` without options. This will show the full object not including non-enumerable properties and proxies.
- `%c`: CSS. This specifier is ignored and will skip any CSS passed in.
- `%%`: single percent sign (`'%'`). This does not consume an argument.
- Returns: <string> The formatted string
If a specifier does not have a corresponding argument, it is not replaced:

```js
util.format('%s:%s', 'foo');
// Returns: 'foo:%s'
```

Values that are not part of the format string are formatted using `util.inspect()` if their type is not `string`.

If there are more arguments passed to the `util.format()` method than the number of specifiers, the extra arguments are concatenated to the returned string, separated by spaces:

```js
util.format('%s:%s', 'foo', 'bar', 'baz');
// Returns: 'foo:bar baz'
```

If the first argument does not contain a valid format specifier, `util.format()` returns a string that is the concatenation of all arguments separated by spaces:

```js
util.format(1, 2, 3);
// Returns: '1 2 3'
```

If only one argument is passed to `util.format()`, it is returned as it is without any formatting:

```js
util.format('%% %s');
// Returns: '%% %s'
```

`util.format()` is a synchronous method that is intended as a debugging tool. Some input values can have a significant performance overhead that can block the event loop. Use this function with care and never in a hot code path.
util.formatWithOptions(inspectOptions, format[, ...args])#
This function is identical to `util.format()`, except that it takes an `inspectOptions` argument which specifies options that are passed along to `util.inspect()`.

```js
util.formatWithOptions({ colors: true }, 'See object %O', { foo: 42 });
// Returns 'See object { foo: 42 }', where `42` is colored as a number
// when printed to a terminal.
```

util.getCallSites([frameCount][, options])#
History
| Version | Changes |
|---|---|
| v23.7.0, v22.14.0 | Property |
| v23.7.0, v22.14.0 | Property |
| v23.3.0, v22.12.0 | The API is renamed from |
| v22.9.0 | Added in: v22.9.0 |
- `frameCount` <integer> Optional number of frames to capture as call site objects. Default: `10`. Allowable range is between 1 and 200.
- `options` <Object> Optional
  - `sourceMap` <boolean> Reconstruct the original location in the stack trace from the source map. Enabled by default with the flag `--enable-source-maps`.
- Returns: <Object[]> An array of call site objects
  - `functionName` <string> Returns the name of the function associated with this call site.
  - `scriptName` <string> Returns the name of the resource that contains the script for the function for this call site.
  - `scriptId` <string> Returns the unique id of the script, as in Chrome DevTools protocol `Runtime.ScriptId`.
  - `lineNumber` <number> Returns the JavaScript script line number (1-based).
  - `columnNumber` <number> Returns the JavaScript script column number (1-based).

Returns an array of call site objects containing the stack of the caller function.

Unlike accessing `error.stack`, the result returned from this API is not affected by `Error.prepareStackTrace`.

```js
const { getCallSites } = require('node:util');

function exampleFunction() {
  const callSites = getCallSites();

  console.log('Call Sites:');
  callSites.forEach((callSite, index) => {
    console.log(`CallSite ${index + 1}:`);
    console.log(`Function Name: ${callSite.functionName}`);
    console.log(`Script Name: ${callSite.scriptName}`);
    console.log(`Line Number: ${callSite.lineNumber}`);
    console.log(`Column Number: ${callSite.columnNumber}`);
  });
  // CallSite 1:
  // Function Name: exampleFunction
  // Script Name: /home/example.js
  // Line Number: 5
  // Column Number: 26

  // CallSite 2:
  // Function Name: anotherFunction
  // Script Name: /home/example.js
  // Line Number: 22
  // Column Number: 3
  // ...
}

// A function to simulate another stack layer
function anotherFunction() {
  exampleFunction();
}

anotherFunction();
```
It is possible to reconstruct the original locations by setting the option `sourceMap` to `true`. If the source map is not available, the original location will be the same as the current location. When the `--enable-source-maps` flag is enabled, for example when using `--experimental-transform-types`, `sourceMap` will be `true` by default.

```ts
import { getCallSites } from 'node:util';

interface Foo {
  foo: string;
}

const callSites = getCallSites({ sourceMap: true });
// With sourceMap:
// Function Name: ''
// Script Name: example.js
// Line Number: 7
// Column Number: 26

// Without sourceMap:
// Function Name: ''
// Script Name: example.js
// Line Number: 2
// Column Number: 26
```

util.getSystemErrorName(err)#
Returns the string name for a numeric error code that comes from a Node.js API. The mapping between error codes and error names is platform-dependent. See Common System Errors for the names of common errors.

```js
fs.access('file/that/does/not/exist', (err) => {
  const name = util.getSystemErrorName(err.errno);
  console.error(name);  // ENOENT
});
```

util.getSystemErrorMap()#

- Returns: <Map>

Returns a Map of all system error codes available from the Node.js API. The mapping between error codes and error names is platform-dependent. See Common System Errors for the names of common errors.

```js
fs.access('file/that/does/not/exist', (err) => {
  const errorMap = util.getSystemErrorMap();
  const name = errorMap.get(err.errno);
  console.error(name);  // ENOENT
});
```

util.getSystemErrorMessage(err)#

Returns the string message for a numeric error code that comes from a Node.js API. The mapping between error codes and string messages is platform-dependent.

```js
fs.access('file/that/does/not/exist', (err) => {
  const message = util.getSystemErrorMessage(err.errno);
  console.error(message);  // No such file or directory
});
```

util.setTraceSigInt(enable)#
- `enable` <boolean>

Enable or disable printing a stack trace on `SIGINT`. The API is only available on the main thread.
util.inherits(constructor, superConstructor)#
History
| Version | Changes |
|---|---|
| v5.0.0 | The |
| v0.3.0 | Added in: v0.3.0 |
Legacy: Use ES2015 class syntax and `extends` keyword instead.

- `constructor` <Function>
- `superConstructor` <Function>

Usage of `util.inherits()` is discouraged. Please use the ES6 `class` and `extends` keywords to get language-level inheritance support. Also note that the two styles are semantically incompatible.

Inherit the prototype methods from one constructor into another. The prototype of `constructor` will be set to a new object created from `superConstructor`.

This mainly adds some input validation on top of `Object.setPrototypeOf(constructor.prototype, superConstructor.prototype)`. As an additional convenience, `superConstructor` will be accessible through the `constructor.super_` property.
```js
const util = require('node:util');
const EventEmitter = require('node:events');

function MyStream() {
  EventEmitter.call(this);
}

util.inherits(MyStream, EventEmitter);

MyStream.prototype.write = function(data) {
  this.emit('data', data);
};

const stream = new MyStream();

console.log(stream instanceof EventEmitter); // true
console.log(MyStream.super_ === EventEmitter); // true

stream.on('data', (data) => {
  console.log(`Received data: "${data}"`);
});
stream.write('It works!'); // Received data: "It works!"
```

ES6 example using `class` and `extends`:

```js
const EventEmitter = require('node:events');

class MyStream extends EventEmitter {
  write(data) {
    this.emit('data', data);
  }
}

const stream = new MyStream();

stream.on('data', (data) => {
  console.log(`Received data: "${data}"`);
});
stream.write('With ES6');
```
util.inspect(object[, options])#
util.inspect(object[, showHidden[, depth[, colors]]])#
History
| Version | Changes |
|---|---|
| v25.0.0 | The util.inspect.styles.regexp style is now a method that is invoked for coloring the stringified regular expression. |
| v16.18.0 | add support for |
| v17.3.0, v16.14.0 | The |
| v13.0.0 | Circular references now include a marker to the reference. |
| v14.6.0, v12.19.0 | If |
| v13.13.0, v12.17.0 | The |
| v13.5.0, v12.16.0 | User defined prototype properties are inspected in case |
| v12.0.0 | The |
| v12.0.0 | Internal properties no longer appear in the context argument of a custom inspection function. |
| v11.11.0 | The |
| v11.7.0 | ArrayBuffers now also show their binary contents. |
| v11.5.0 | The |
| v11.4.0 | The |
| v11.0.0 | The |
| v11.0.0 | The inspection output is now limited to about 128 MiB. Data above that size will not be fully inspected. |
| v10.12.0 | The |
| v10.6.0 | Inspecting linked lists and similar objects is now possible up to the maximum call stack size. |
| v10.0.0 | The |
| v9.9.0 | The |
| v6.6.0 | Custom inspection functions can now return |
| v6.3.0 | The |
| v6.1.0 | The |
| v6.1.0 | The |
| v0.3.0 | Added in: v0.3.0 |
- `object` <any> Any JavaScript primitive or `Object`.
- `options` <Object>
  - `showHidden` <boolean> If `true`, `object`'s non-enumerable symbols and properties are included in the formatted result. <WeakMap> and <WeakSet> entries are also included, as well as user defined prototype properties (excluding method properties). Default: `false`.
  - `depth` <number> Specifies the number of times to recurse while formatting `object`. This is useful for inspecting large objects. To recurse up to the maximum call stack size pass `Infinity` or `null`. Default: `2`.
  - `colors` <boolean> If `true`, the output is styled with ANSI color codes. Colors are customizable. See Customizing `util.inspect` colors. Default: `false`.
  - `customInspect` <boolean> If `false`, `[util.inspect.custom](depth, opts, inspect)` functions are not invoked. Default: `true`.
  - `showProxy` <boolean> If `true`, `Proxy` inspection includes the `target` and `handler` objects. Default: `false`.
  - `maxArrayLength` <integer> Specifies the maximum number of `Array`, <TypedArray>, <Map>, <WeakMap>, and <WeakSet> elements to include when formatting. Set to `null` or `Infinity` to show all elements. Set to `0` or negative to show no elements. Default: `100`.
  - `maxStringLength` <integer> Specifies the maximum number of characters to include when formatting. Set to `null` or `Infinity` to show all elements. Set to `0` or negative to show no characters. Default: `10000`.
  - `breakLength` <integer> The length at which input values are split across multiple lines. Set to `Infinity` to format the input as a single line (in combination with `compact` set to `true` or any number >= `1`). Default: `80`.
  - `compact` <boolean> | <integer> Setting this to `false` causes each object key to be displayed on a new line. It will break on new lines in text that is longer than `breakLength`. If set to a number, the most `n` inner elements are united on a single line as long as all properties fit into `breakLength`. Short array elements are also grouped together. For more information, see the example below. Default: `3`.
  - `sorted` <boolean> | <Function> If set to `true` or a function, all properties of an object, and `Set` and `Map` entries are sorted in the resulting string. If set to `true` the default sort is used. If set to a function, it is used as a compare function.
  - `getters` <boolean> | <string> If set to `true`, getters are inspected. If set to `'get'`, only getters without a corresponding setter are inspected. If set to `'set'`, only getters with a corresponding setter are inspected. This might cause side effects depending on the getter function. Default: `false`.
  - `numericSeparator` <boolean> If set to `true`, an underscore is used to separate every three digits in all bigints and numbers. Default: `false`.
- Returns: <string> The representation of `object`.

The `util.inspect()` method returns a string representation of `object` that is intended for debugging. The output of `util.inspect` may change at any time and should not be depended upon programmatically. Additional `options` may be passed that alter the result. `util.inspect()` will use the constructor's name and/or `Symbol.toStringTag` property to make an identifiable tag for an inspected value.
```js
class Foo {
  get [Symbol.toStringTag]() {
    return 'bar';
  }
}

class Bar {}

const baz = Object.create(null, { [Symbol.toStringTag]: { value: 'foo' } });

util.inspect(new Foo()); // 'Foo [bar] {}'

util.inspect(new Bar()); // 'Bar {}'

util.inspect(baz); // '[foo] {}'
```

Circular references point to their anchor by using a reference index:

```js
const { inspect } = require('node:util');

const obj = {};
obj.a = [obj];
obj.b = {};
obj.b.inner = obj.b;
obj.b.obj = obj;

console.log(inspect(obj));
// <ref *1> {
//   a: [ [Circular *1] ],
//   b: <ref *2> { inner: [Circular *2], obj: [Circular *1] }
// }
```
The following example inspects all properties of the `util` object:

```js
const util = require('node:util');

console.log(util.inspect(util, { showHidden: true, depth: null }));
```
The following example highlights the effect of the `compact` option:

```js
const { inspect } = require('node:util');

const o = {
  a: [1, 2, [[
    'Lorem ipsum dolor sit amet,\nconsectetur adipiscing elit, sed do ' +
      'eiusmod \ntempor incididunt ut labore et dolore magna aliqua.',
    'test',
    'foo']], 4],
  b: new Map([['za', 1], ['zb', 'test']]),
};
console.log(inspect(o, { compact: true, depth: 5, breakLength: 80 }));

// { a:
//   [ 1,
//     2,
//     [ [ 'Lorem ipsum dolor sit amet,\nconsectetur [...]', // A long line
//         'test',
//         'foo' ] ],
//     4 ],
//   b: Map(2) { 'za' => 1, 'zb' => 'test' } }

// Setting `compact` to false or an integer creates more reader friendly output.
console.log(inspect(o, { compact: false, depth: 5, breakLength: 80 }));

// {
//   a: [
//     1,
//     2,
//     [
//       [
//         'Lorem ipsum dolor sit amet,\n' +
//           'consectetur adipiscing elit, sed do eiusmod \n' +
//           'tempor incididunt ut labore et dolore magna aliqua.',
//         'test',
//         'foo'
//       ]
//     ],
//     4
//   ],
//   b: Map(2) {
//     'za' => 1,
//     'zb' => 'test'
//   }
// }

// Setting `breakLength` to e.g. 150 will print the "Lorem ipsum" text in a
// single line.
```
The `showHidden` option allows <WeakMap> and <WeakSet> entries to be inspected. If there are more entries than `maxArrayLength`, there is no guarantee which entries are displayed. That means retrieving the same <WeakSet> entries twice may result in different output. Furthermore, entries with no remaining strong references may be garbage collected at any time.

```js
import { inspect } from 'node:util';

const obj = { a: 1 };
const obj2 = { b: 2 };
const weakSet = new WeakSet([obj, obj2]);

console.log(inspect(weakSet, { showHidden: true }));
// WeakSet { { a: 1 }, { b: 2 } }
```
The `sorted` option ensures that an object's property insertion order does not impact the result of `util.inspect()`.

```js
import { inspect } from 'node:util';
import assert from 'node:assert';

const o1 = {
  b: [2, 3, 1],
  a: '`a` comes before `b`',
  c: new Set([2, 3, 1]),
};
console.log(inspect(o1, { sorted: true }));
// { a: '`a` comes before `b`', b: [ 2, 3, 1 ], c: Set(3) { 1, 2, 3 } }
console.log(inspect(o1, { sorted: (a, b) => b.localeCompare(a) }));
// { c: Set(3) { 3, 2, 1 }, b: [ 2, 3, 1 ], a: '`a` comes before `b`' }

const o2 = {
  c: new Set([2, 1, 3]),
  a: '`a` comes before `b`',
  b: [2, 3, 1],
};
assert.strict.equal(
  inspect(o1, { sorted: true }),
  inspect(o2, { sorted: true }),
);
```
The `numericSeparator` option adds an underscore every three digits to all numbers.

```js
import { inspect } from 'node:util';

const thousand = 1000;
const million = 1000000;
const bigNumber = 123456789n;
const bigDecimal = 1234.12345;

console.log(inspect(thousand, { numericSeparator: true }));
// 1_000
console.log(inspect(million, { numericSeparator: true }));
// 1_000_000
console.log(inspect(bigNumber, { numericSeparator: true }));
// 123_456_789n
console.log(inspect(bigDecimal, { numericSeparator: true }));
// 1_234.123_45
```

`util.inspect()` is a synchronous method intended for debugging. Its maximum output length is approximately 128 MiB. Inputs that result in longer output will be truncated.
Customizing `util.inspect` colors#

Color output (if enabled) of `util.inspect` is customizable globally via the `util.inspect.styles` and `util.inspect.colors` properties.

`util.inspect.styles` is a map associating a style name to a color from `util.inspect.colors`.
The default styles and associated colors are:
- `bigint`: `yellow`
- `boolean`: `yellow`
- `date`: `magenta`
- `module`: `underline`
- `name`: (no styling)
- `null`: `bold`
- `number`: `yellow`
- `regexp`: A method that colors character classes, groups, assertions, and other parts for improved readability. To customize the coloring, change the `colors` property. It is set to `['red', 'green', 'yellow', 'cyan', 'magenta']` by default and may be adjusted as needed. The array is repetitively iterated through depending on the "depth".
- `special`: `cyan` (e.g., Proxies)
- `string`: `green`
- `symbol`: `green`
- `undefined`: `grey`
Color styling uses ANSI control codes that may not be supported on all terminals. To verify color support, use `tty.hasColors()`.

Predefined control codes are listed below (grouped as "Modifiers", "Foreground colors", and "Background colors").
Complex custom coloring#
It is possible to define a method as a style. The method receives the stringified value of the input and is invoked when coloring is active and the type is inspected.

Example: `util.inspect.styles.regexp(value)`
Modifiers#
Modifier support varies across terminals. Modifiers that are not supported are mostly ignored.

- `reset` - Resets all (color) modifiers to their defaults
- `bold` - Make text bold
- `italic` - Make text italic
- `underline` - Make text underlined
- `strikethrough` - Puts a horizontal line through the center of the text (Alias: `strikeThrough`, `crossedout`, `crossedOut`)
- `hidden` - Prints the text, but makes it invisible (Alias: `conceal`)
- `dim` - Decreased color intensity (Alias: `faint`)
- `overlined` - Make text overlined
- `blink` - Hides and shows the text in an interval
- `inverse` - Swap foreground and background colors (Alias: `swapcolors`, `swapColors`)
- `doubleunderline` - Make text double underlined (Alias: `doubleUnderline`)
- `framed` - Draw a frame around the text
Foreground colors#
- `black`
- `red`
- `green`
- `yellow`
- `blue`
- `magenta`
- `cyan`
- `white`
- `gray` (alias: `grey`, `blackBright`)
- `redBright`
- `greenBright`
- `yellowBright`
- `blueBright`
- `magentaBright`
- `cyanBright`
- `whiteBright`
Background colors#
- `bgBlack`
- `bgRed`
- `bgGreen`
- `bgYellow`
- `bgBlue`
- `bgMagenta`
- `bgCyan`
- `bgWhite`
- `bgGray` (alias: `bgGrey`, `bgBlackBright`)
- `bgRedBright`
- `bgGreenBright`
- `bgYellowBright`
- `bgBlueBright`
- `bgMagentaBright`
- `bgCyanBright`
- `bgWhiteBright`
Custom inspection functions on objects#
History
| Version | Changes |
|---|---|
| v17.3.0, v16.14.0 | The inspect argument is added for more interoperability. |
| v0.1.97 | Added in: v0.1.97 |
Objects may also define their own `[util.inspect.custom](depth, opts, inspect)` function, which `util.inspect()` will invoke and use the result of when inspecting the object.

```js
import { inspect } from 'node:util';

class Box {
  constructor(value) {
    this.value = value;
  }

  [inspect.custom](depth, options, inspect) {
    if (depth < 0) {
      return options.stylize('[Box]', 'special');
    }

    const newOptions = Object.assign({}, options, {
      depth: options.depth === null ? null : options.depth - 1,
    });

    // Five space padding because that's the size of "Box< ".
    const padding = ' '.repeat(5);
    const inner = inspect(this.value, newOptions)
      .replace(/\n/g, `\n${padding}`);
    return `${options.stylize('Box', 'special')}< ${inner} >`;
  }
}

const box = new Box(true);

console.log(inspect(box));
// "Box< true >"
```

Custom `[util.inspect.custom](depth, opts, inspect)` functions typically return a string but may return a value of any type that will be formatted accordingly by `util.inspect()`.

```js
import { inspect } from 'node:util';

const obj = { foo: 'this will not show up in the inspect() output' };
obj[inspect.custom] = (depth) => {
  return { bar: 'baz' };
};

console.log(inspect(obj));
// "{ bar: 'baz' }"
```
util.inspect.custom#
History
| Version | Changes |
|---|---|
| v10.12.0 | This is now defined as a shared symbol. |
| v6.6.0 | Added in: v6.6.0 |
- Type: <symbol> that can be used to declare custom inspect functions.

In addition to being accessible through `util.inspect.custom`, this symbol is registered globally and can be accessed in any environment as `Symbol.for('nodejs.util.inspect.custom')`.

Using this allows code to be written in a portable fashion, so that the custom inspect function is used in a Node.js environment and ignored in the browser. The `util.inspect()` function itself is passed as the third argument to the custom inspect function to allow further portability.
```js
const customInspectSymbol = Symbol.for('nodejs.util.inspect.custom');

class Password {
  constructor(value) {
    this.value = value;
  }

  toString() {
    return 'xxxxxxxx';
  }

  [customInspectSymbol](depth, inspectOptions, inspect) {
    return `Password <${this.toString()}>`;
  }
}

const password = new Password('r0sebud');
console.log(password);
// Prints Password <xxxxxxxx>
```

See Custom inspection functions on Objects for more details.
util.inspect.defaultOptions#
The `defaultOptions` value allows customization of the default options used by `util.inspect`. This is useful for functions like `console.log` or `util.format` which implicitly call into `util.inspect`. It must be set to an object containing one or more valid `util.inspect()` options. Setting option properties directly is also supported.

```js
import { inspect } from 'node:util';

const arr = Array(156).fill(0);
console.log(arr); // Logs the truncated array
inspect.defaultOptions.maxArrayLength = null;
console.log(arr); // logs the full array
```
util.isDeepStrictEqual(val1, val2[, options])#
History
| Version | Changes |
|---|---|
| v24.9.0 | Added |
| v9.0.0 | Added in: v9.0.0 |
- `val1` <any>
- `val2` <any>
- `skipPrototype` <boolean> If `true`, prototype and constructor comparison is skipped during the deep strict equality check. **Default:** `false`.
- Returns: <boolean>
Returns `true` if there is deep strict equality between `val1` and `val2`. Otherwise, returns `false`.
By default, deep strict equality includes comparison of object prototypes and constructors. When `skipPrototype` is `true`, objects with different prototypes or constructors can still be considered equal if their enumerable properties are deeply strictly equal.
```js
const util = require('node:util');

class Foo {
  constructor(a) {
    this.a = a;
  }
}

class Bar {
  constructor(a) {
    this.a = a;
  }
}

const foo = new Foo(1);
const bar = new Bar(1);

// Different constructors, same properties
console.log(util.isDeepStrictEqual(foo, bar));
// false
console.log(util.isDeepStrictEqual(foo, bar, true));
// true
```

See `assert.deepStrictEqual()` for more information about deep strict equality.
Class: util.MIMEType#
History
| Version | Changes |
|---|---|
| v23.11.0, v22.15.0 | Marking the API stable. |
| v19.1.0, v18.13.0 | Added in: v19.1.0, v18.13.0 |
An implementation of the MIMEType class.

In accordance with browser conventions, all properties of `MIMEType` objects are implemented as getters and setters on the class prototype, rather than as data properties on the object itself.

A MIME string is a structured string containing multiple meaningful components. When parsed, a `MIMEType` object is returned containing properties for each of these components.
new MIMEType(input)#
- `input` <string> The input MIME to parse

Creates a new `MIMEType` object by parsing the `input`.

```js
import { MIMEType } from 'node:util';

const myMIME = new MIMEType('text/plain');
```

A `TypeError` will be thrown if the `input` is not a valid MIME. Note that an effort will be made to coerce the given values into strings. For instance:

```js
import { MIMEType } from 'node:util';

const myMIME = new MIMEType({ toString: () => 'text/plain' });
console.log(String(myMIME));
// Prints: text/plain
```
mime.type#
- Type: <string>

Gets and sets the type portion of the MIME.

```js
import { MIMEType } from 'node:util';

const myMIME = new MIMEType('text/javascript');
console.log(myMIME.type);
// Prints: text
myMIME.type = 'application';
console.log(myMIME.type);
// Prints: application
console.log(String(myMIME));
// Prints: application/javascript
```
mime.subtype#
- Type: <string>

Gets and sets the subtype portion of the MIME.

```js
import { MIMEType } from 'node:util';

const myMIME = new MIMEType('text/ecmascript');
console.log(myMIME.subtype);
// Prints: ecmascript
myMIME.subtype = 'javascript';
console.log(myMIME.subtype);
// Prints: javascript
console.log(String(myMIME));
// Prints: text/javascript
```
mime.essence#
- Type: <string>

Gets the essence of the MIME. This property is read-only. Use `mime.type` or `mime.subtype` to alter the MIME.

```js
import { MIMEType } from 'node:util';

const myMIME = new MIMEType('text/javascript;key=value');
console.log(myMIME.essence);
// Prints: text/javascript
myMIME.type = 'application';
console.log(myMIME.essence);
// Prints: application/javascript
console.log(String(myMIME));
// Prints: application/javascript;key=value
```
mime.params#
- Type: <MIMEParams>

Gets the `MIMEParams` object representing the parameters of the MIME. This property is read-only. See the `MIMEParams` documentation for details.
mime.toString()#
- Returns: <string>

The `toString()` method on the `MIMEType` object returns the serialized MIME.

Because of the need for standard compliance, this method does not allow users to customize the serialization process of the MIME.
mime.toJSON()#
- Returns: <string>

Alias for `mime.toString()`.

This method is automatically called when a `MIMEType` object is serialized with `JSON.stringify()`.

```js
import { MIMEType } from 'node:util';

const myMIMES = [
  new MIMEType('image/png'),
  new MIMEType('image/gif'),
];
console.log(JSON.stringify(myMIMES));
// Prints: ["image/png", "image/gif"]
```
Class: util.MIMEParams#

The `MIMEParams` API provides read and write access to the parameters of a `MIMEType`.
new MIMEParams()#
Creates a new `MIMEParams` object with empty parameters.

```js
import { MIMEParams } from 'node:util';

const myParams = new MIMEParams();
```
mimeParams.entries()#
- Returns: <Iterator>

Returns an iterator over each of the name-value pairs in the parameters. Each item of the iterator is a JavaScript `Array`. The first item of the array is the `name`, the second item of the array is the `value`.
mimeParams.get(name)#
- `name` <string>
- Returns: <string> | <null> A string or `null` if there is no name-value pair with the given `name`.

Returns the value of the first name-value pair whose name is `name`. If there are no such pairs, `null` is returned.
mimeParams.has(name)#
Returns `true` if there is at least one name-value pair whose name is `name`.
mimeParams.keys()#
- Returns: <Iterator>

Returns an iterator over the names of each name-value pair.

```js
import { MIMEType } from 'node:util';

const { params } = new MIMEType('text/plain;foo=0;bar=1');
for (const name of params.keys()) {
  console.log(name);
}
// Prints:
//   foo
//   bar
```
mimeParams.set(name, value)#
Sets the value in the `MIMEParams` object associated with `name` to `value`. If there are any pre-existing name-value pairs whose names are `name`, set the first such pair's value to `value`.

```js
import { MIMEType } from 'node:util';

const { params } = new MIMEType('text/plain;foo=0;bar=1');
params.set('foo', 'def');
params.set('baz', 'xyz');
console.log(params.toString());
// Prints: foo=def;bar=1;baz=xyz
```
mimeParams[Symbol.iterator]()#
- Returns: <Iterator>

Alias for `mimeParams.entries()`.

```js
import { MIMEType } from 'node:util';

const { params } = new MIMEType('text/plain;foo=bar;xyz=baz');
for (const [name, value] of params) {
  console.log(name, value);
}
// Prints:
//   foo bar
//   xyz baz
```
util.parseArgs([config])#
History
| Version | Changes |
|---|---|
| v22.4.0, v20.16.0 | add support for allowing negative options in input |
| v20.0.0 | The API is no longer experimental. |
| v18.11.0, v16.19.0 | Add support for default values in input |
| v18.7.0, v16.17.0 | add support for returning detailed parse information using |
| v18.3.0, v16.17.0 | Added in: v18.3.0, v16.17.0 |
- `config` <Object> Used to provide arguments for parsing and to configure the parser. `config` supports the following properties:
  - `args` <string[]> array of argument strings. **Default:** `process.argv` with `execPath` and `filename` removed.
  - `options` <Object> Used to describe arguments known to the parser. Keys of `options` are the long names of options and values are an <Object> accepting the following properties:
    - `type` <string> Type of argument, which must be either `boolean` or `string`.
    - `multiple` <boolean> Whether this option can be provided multiple times. If `true`, all values will be collected in an array. If `false`, values for the option are last-wins. **Default:** `false`.
    - `short` <string> A single character alias for the option.
    - `default` <string> | <boolean> | <string[]> | <boolean[]> The value to assign to the option if it does not appear in the arguments to be parsed. The value must match the type specified by the `type` property. If `multiple` is `true`, it must be an array. No default value is applied when the option does appear in the arguments to be parsed, even if the provided value is falsy.
  - `strict` <boolean> Should an error be thrown when unknown arguments are encountered, or when arguments are passed that do not match the `type` configured in `options`. **Default:** `true`.
  - `allowPositionals` <boolean> Whether this command accepts positional arguments. **Default:** `false` if `strict` is `true`, otherwise `true`.
  - `allowNegative` <boolean> If `true`, allows explicitly setting boolean options to `false` by prefixing the option name with `--no-`. **Default:** `false`.
  - `tokens` <boolean> Return the parsed tokens. This is useful for extending the built-in behavior, from adding additional checks through to reprocessing the tokens in different ways. **Default:** `false`.
- Returns: <Object> The parsed command line arguments:
  - `values` <Object> A mapping of parsed option names with their <string> or <boolean> values.
  - `positionals` <string[]> Positional arguments.
  - `tokens` <Object[]> | <undefined> See the parseArgs tokens section. Only returned if `config` includes `tokens: true`.

Provides a higher level API for command-line argument parsing than interacting with `process.argv` directly. Takes a specification for the expected arguments and returns a structured object with the parsed options and positionals.

```js
import { parseArgs } from 'node:util';

const args = ['-f', '--bar', 'b'];
const options = {
  foo: {
    type: 'boolean',
    short: 'f',
  },
  bar: {
    type: 'string',
  },
};
const {
  values,
  positionals,
} = parseArgs({ args, options });
console.log(values, positionals);
// Prints: [Object: null prototype] { foo: true, bar: 'b' } []
```
parseArgs tokens#

Detailed parse information is available for adding custom behaviors by specifying `tokens: true` in the configuration. The returned tokens have properties describing:

- all tokens
- option tokens
  - `name` <string> Long name of option.
  - `rawName` <string> How option used in args, like `-f` of `--foo`.
  - `value` <string> | <undefined> Option value specified in args. Undefined for boolean options.
  - `inlineValue` <boolean> | <undefined> Whether option value specified inline, like `--foo=bar`.
- positional tokens
  - `value` <string> The value of the positional argument in args (i.e. `args[index]`).
- option-terminator token

The returned tokens are in the order encountered in the input args. Options that appear more than once in args produce a token for each use. Short option groups like `-xy` expand to a token for each option. So `-xxx` produces three tokens.

For example, to add support for a negated option like `--no-color` (which `allowNegative` supports when the option is of `boolean` type), the returned tokens can be reprocessed to change the value stored for the negated option.

```js
import { parseArgs } from 'node:util';

const options = {
  'color': { type: 'boolean' },
  'no-color': { type: 'boolean' },
  'logfile': { type: 'string' },
  'no-logfile': { type: 'boolean' },
};
const { values, tokens } = parseArgs({ options, tokens: true });

// Reprocess the option tokens and overwrite the returned values.
tokens
  .filter((token) => token.kind === 'option')
  .forEach((token) => {
    if (token.name.startsWith('no-')) {
      // Store foo:false for --no-foo
      const positiveName = token.name.slice(3);
      values[positiveName] = false;
      delete values[token.name];
    } else {
      // Resave value so last one wins if both --foo and --no-foo.
      values[token.name] = token.value ?? true;
    }
  });

const color = values.color;
const logfile = values.logfile ?? 'default.log';

console.log({ logfile, color });
```
Example usage showing negated options, and that when an option is used multiple ways, the last one wins.

```console
$ node negate.js
{ logfile: 'default.log', color: undefined }
$ node negate.js --no-logfile --no-color
{ logfile: false, color: false }
$ node negate.js --logfile=test.log --color
{ logfile: 'test.log', color: true }
$ node negate.js --no-logfile --logfile=test.log --color --no-color
{ logfile: 'test.log', color: false }
```

util.parseEnv(content)#
History
| Version | Changes |
|---|---|
| v24.10.0 | This API is no longer experimental. |
| v21.7.0, v20.12.0 | Added in: v21.7.0, v20.12.0 |
- `content` <string> The raw contents of a `.env` file.
- Returns: <Object>

Given an example `.env` file:

```js
import { parseEnv } from 'node:util';

parseEnv('HELLO=world\nHELLO=oh my\n');
// Returns: { HELLO: 'oh my' }
```
util.promisify(original)#
History
| Version | Changes |
|---|---|
| v20.8.0 | Calling |
| v8.0.0 | Added in: v8.0.0 |
- `original` <Function>
- Returns: <Function>

Takes a function following the common error-first callback style, i.e. taking an `(err, value) => ...` callback as the last argument, and returns a version that returns promises.

```js
import { promisify } from 'node:util';
import { stat } from 'node:fs';

const promisifiedStat = promisify(stat);
promisifiedStat('.').then((stats) => {
  // Do something with `stats`
}).catch((error) => {
  // Handle the error.
});
```

Or, equivalently using `async` functions:

```js
import { promisify } from 'node:util';
import { stat } from 'node:fs';

const promisifiedStat = promisify(stat);

async function callStat() {
  const stats = await promisifiedStat('.');
  console.log(`This directory is owned by ${stats.uid}`);
}

callStat();
```
If there is an `original[util.promisify.custom]` property present, `promisify` will return its value, see Custom promisified functions.

`promisify()` assumes that `original` is a function taking a callback as its final argument in all cases. If `original` is not a function, `promisify()` will throw an error. If `original` is a function but its last argument is not an error-first callback, it will still be passed an error-first callback as its last argument.

Using `promisify()` on class methods or other methods that use `this` may not work as expected unless handled specially:

```js
import { promisify } from 'node:util';

class Foo {
  constructor() {
    this.a = 42;
  }

  bar(callback) {
    callback(null, this.a);
  }
}

const foo = new Foo();

const naiveBar = promisify(foo.bar);
// TypeError: Cannot read properties of undefined (reading 'a')
// naiveBar().then(a => console.log(a));

naiveBar.call(foo).then((a) => console.log(a)); // '42'

const bindBar = naiveBar.bind(foo);
bindBar().then((a) => console.log(a)); // '42'
```
Custom promisified functions#
Using the `util.promisify.custom` symbol one can override the return value of `util.promisify()`:

```js
import { promisify } from 'node:util';

function doSomething(foo, callback) {
  // ...
}

doSomething[promisify.custom] = (foo) => {
  return getPromiseSomehow();
};

const promisified = promisify(doSomething);
console.log(promisified === doSomething[promisify.custom]);
// prints 'true'
```

This can be useful for cases where the original function does not follow the standard format of taking an error-first callback as the last argument.

For example, with a function that takes in `(foo, onSuccessCallback, onErrorCallback)`:

```js
doSomething[util.promisify.custom] = (foo) => {
  return new Promise((resolve, reject) => {
    doSomething(foo, resolve, reject);
  });
};
```

If `promisify.custom` is defined but is not a function, `promisify()` will throw an error.
util.promisify.custom#
History
| Version | Changes |
|---|---|
| v13.12.0, v12.16.2 | This is now defined as a shared symbol. |
| v8.0.0 | Added in: v8.0.0 |
- Type: <symbol> that can be used to declare custom promisified variants of functions, see Custom promisified functions.

In addition to being accessible through `util.promisify.custom`, this symbol is registered globally and can be accessed in any environment as `Symbol.for('nodejs.util.promisify.custom')`.

For example, with a function that takes in `(foo, onSuccessCallback, onErrorCallback)`:

```js
const kCustomPromisifiedSymbol = Symbol.for('nodejs.util.promisify.custom');

doSomething[kCustomPromisifiedSymbol] = (foo) => {
  return new Promise((resolve, reject) => {
    doSomething(foo, resolve, reject);
  });
};
```

util.stripVTControlCharacters(str)#

Returns `str` with any ANSI escape codes removed.

```js
console.log(util.stripVTControlCharacters('\u001B[4mvalue\u001B[0m'));
// Prints "value"
```

util.styleText(format, text[, options])#
History
| Version | Changes |
|---|---|
| v24.2.0, v22.17.0 | Added the |
| v23.5.0, v22.13.0 | styleText is now stable. |
| v22.8.0, v20.18.0 | Respect isTTY and environment variables such as NO_COLOR, NODE_DISABLE_COLORS, and FORCE_COLOR. |
| v21.7.0, v20.12.0 | Added in: v21.7.0, v20.12.0 |
- `format` <string> | <Array> A text format or an Array of text formats defined in `util.inspect.colors`.
- `text` <string> The text to be formatted.
- `options` <Object>

This function returns formatted text, considering the `format` passed, for printing in a terminal. It is aware of the terminal's capabilities and acts according to the configuration set via the `NO_COLOR`, `NODE_DISABLE_COLORS`, and `FORCE_COLOR` environment variables.

```js
import { styleText } from 'node:util';
import { stderr } from 'node:process';

const successMessage = styleText('green', 'Success!');
console.log(successMessage);

const errorMessage = styleText(
  'red',
  'Error! Error!',
  // Validate if process.stderr has TTY
  { stream: stderr },
);
console.error(errorMessage);
```

`util.inspect.colors` also provides text formats such as `italic` and `underline`, and you can combine both:

```js
console.log(
  util.styleText(['underline', 'italic'], 'My italic underlined message'),
);
```

When passing an array of formats, the formats are applied left to right, so a later style might overwrite the previous one.

```js
console.log(
  util.styleText(['red', 'green'], 'text'), // green
);
```

The special format value `none` applies no additional styling to the text.
The full list of formats can be found in modifiers.
Class: util.TextDecoder#
History
| Version | Changes |
|---|---|
| v11.0.0 | The class is now available on the global object. |
| v8.3.0 | Added in: v8.3.0 |
An implementation of the WHATWG Encoding Standard `TextDecoder` API.

```js
const decoder = new TextDecoder();
const u8arr = new Uint8Array([72, 101, 108, 108, 111]);
console.log(decoder.decode(u8arr));
// Hello
```

WHATWG supported encodings#

Per the WHATWG Encoding Standard, the encodings supported by the `TextDecoder` API are outlined in the tables below. For each encoding, one or more aliases may be used.

Different Node.js build configurations support different sets of encodings (see Internationalization).
Encodings supported by default (with full ICU data)#
| Encoding | Aliases |
|---|---|
| 'ibm866' | '866', 'cp866', 'csibm866' |
| 'iso-8859-2' | 'csisolatin2', 'iso-ir-101', 'iso8859-2', 'iso88592', 'iso_8859-2', 'iso_8859-2:1987', 'l2', 'latin2' |
| 'iso-8859-3' | 'csisolatin3', 'iso-ir-109', 'iso8859-3', 'iso88593', 'iso_8859-3', 'iso_8859-3:1988', 'l3', 'latin3' |
| 'iso-8859-4' | 'csisolatin4', 'iso-ir-110', 'iso8859-4', 'iso88594', 'iso_8859-4', 'iso_8859-4:1988', 'l4', 'latin4' |
| 'iso-8859-5' | 'csisolatincyrillic', 'cyrillic', 'iso-ir-144', 'iso8859-5', 'iso88595', 'iso_8859-5', 'iso_8859-5:1988' |
| 'iso-8859-6' | 'arabic', 'asmo-708', 'csiso88596e', 'csiso88596i', 'csisolatinarabic', 'ecma-114', 'iso-8859-6-e', 'iso-8859-6-i', 'iso-ir-127', 'iso8859-6', 'iso88596', 'iso_8859-6', 'iso_8859-6:1987' |
| 'iso-8859-7' | 'csisolatingreek', 'ecma-118', 'elot_928', 'greek', 'greek8', 'iso-ir-126', 'iso8859-7', 'iso88597', 'iso_8859-7', 'iso_8859-7:1987', 'sun_eu_greek' |
| 'iso-8859-8' | 'csiso88598e', 'csisolatinhebrew', 'hebrew', 'iso-8859-8-e', 'iso-ir-138', 'iso8859-8', 'iso88598', 'iso_8859-8', 'iso_8859-8:1988', 'visual' |
| 'iso-8859-8-i' | 'csiso88598i', 'logical' |
| 'iso-8859-10' | 'csisolatin6', 'iso-ir-157', 'iso8859-10', 'iso885910', 'l6', 'latin6' |
| 'iso-8859-13' | 'iso8859-13', 'iso885913' |
| 'iso-8859-14' | 'iso8859-14', 'iso885914' |
| 'iso-8859-15' | 'csisolatin9', 'iso8859-15', 'iso885915', 'iso_8859-15', 'l9' |
| 'koi8-r' | 'cskoi8r', 'koi', 'koi8', 'koi8_r' |
| 'koi8-u' | 'koi8-ru' |
| 'macintosh' | 'csmacintosh', 'mac', 'x-mac-roman' |
| 'windows-874' | 'dos-874', 'iso-8859-11', 'iso8859-11', 'iso885911', 'tis-620' |
| 'windows-1250' | 'cp1250', 'x-cp1250' |
| 'windows-1251' | 'cp1251', 'x-cp1251' |
| 'windows-1252' | 'ansi_x3.4-1968', 'ascii', 'cp1252', 'cp819', 'csisolatin1', 'ibm819', 'iso-8859-1', 'iso-ir-100', 'iso8859-1', 'iso88591', 'iso_8859-1', 'iso_8859-1:1987', 'l1', 'latin1', 'us-ascii', 'x-cp1252' |
| 'windows-1253' | 'cp1253', 'x-cp1253' |
| 'windows-1254' | 'cp1254', 'csisolatin5', 'iso-8859-9', 'iso-ir-148', 'iso8859-9', 'iso88599', 'iso_8859-9', 'iso_8859-9:1989', 'l5', 'latin5', 'x-cp1254' |
| 'windows-1255' | 'cp1255', 'x-cp1255' |
| 'windows-1256' | 'cp1256', 'x-cp1256' |
| 'windows-1257' | 'cp1257', 'x-cp1257' |
| 'windows-1258' | 'cp1258', 'x-cp1258' |
| 'x-mac-cyrillic' | 'x-mac-ukrainian' |
| 'gbk' | 'chinese', 'csgb2312', 'csiso58gb231280', 'gb2312', 'gb_2312', 'gb_2312-80', 'iso-ir-58', 'x-gbk' |
| 'gb18030' | |
| 'big5' | 'big5-hkscs', 'cn-big5', 'csbig5', 'x-x-big5' |
| 'euc-jp' | 'cseucpkdfmtjapanese', 'x-euc-jp' |
| 'iso-2022-jp' | 'csiso2022jp' |
| 'shift_jis' | 'csshiftjis', 'ms932', 'ms_kanji', 'shift-jis', 'sjis', 'windows-31j', 'x-sjis' |
| 'euc-kr' | 'cseuckr', 'csksc56011987', 'iso-ir-149', 'korean', 'ks_c_5601-1987', 'ks_c_5601-1989', 'ksc5601', 'ksc_5601', 'windows-949' |
Encodings supported when Node.js is built with the small-icu option#
| Encoding | Aliases |
|---|---|
| 'utf-8' | 'unicode-1-1-utf-8', 'utf8' |
| 'utf-16le' | 'utf-16' |
| 'utf-16be' | |
Encodings supported when ICU is disabled#
| Encoding | Aliases |
|---|---|
| 'utf-8' | 'unicode-1-1-utf-8', 'utf8' |
| 'utf-16le' | 'utf-16' |
The 'iso-8859-16' encoding listed in the WHATWG Encoding Standard is not supported.
new TextDecoder([encoding[, options]])#
- encoding <string> Identifies the encoding that this TextDecoder instance supports. Default: 'utf-8'.
- options <Object>
  - fatal <boolean> true if decoding failures are fatal. This option is not supported when ICU is disabled (see Internationalization). Default: false.
  - ignoreBOM <boolean> When true, the TextDecoder will include the byte order mark in the decoded result. When false, the byte order mark will be removed from the output. This option is only used when encoding is 'utf-8', 'utf-16be', or 'utf-16le'. Default: false.
Creates a new TextDecoder instance. The encoding may specify one of the supported encodings or an alias.
The TextDecoder class is also available on the global object.
textDecoder.decode([input[, options]])#
- input <ArrayBuffer> | <DataView> | <TypedArray> An ArrayBuffer, DataView, or TypedArray instance containing the encoded data.
- options <Object>
  - stream <boolean> true if additional chunks of data are expected. Default: false.
- Returns: <string>
Decodes the input and returns a string. If options.stream is true, any incomplete byte sequences occurring at the end of the input are buffered internally and emitted after the next call to textDecoder.decode().
If textDecoder.fatal is true, decoding errors that occur will result in a TypeError being thrown.
Class: util.TextEncoder#
History
| Version | Changes |
|---|---|
| v11.0.0 | The class is now available on the global object. |
| v8.3.0 | Added in: v8.3.0 |
An implementation of the WHATWG Encoding Standard TextEncoder API. All instances of TextEncoder only support UTF-8 encoding.

```js
const encoder = new TextEncoder();
const uint8array = encoder.encode('this is some data');
```

The TextEncoder class is also available on the global object.
textEncoder.encode([input])#
- input <string> The text to encode. Default: an empty string.
- Returns: <Uint8Array>
UTF-8 encodes the input string and returns a Uint8Array containing the encoded bytes.
textEncoder.encodeInto(src, dest)#
- src <string> The text to encode.
- dest <Uint8Array> The array to hold the encode result.
- Returns: <Object>
UTF-8 encodes the src string to the dest Uint8Array and returns an object containing the read Unicode code units and written UTF-8 bytes.

```js
const encoder = new TextEncoder();
const src = 'this is some data';
const dest = new Uint8Array(10);
const { read, written } = encoder.encodeInto(src, dest);
```

util.toUSVString(string)#
- string <string>

Returns the string after replacing any surrogate code points (or equivalently, any unpaired surrogate code units) with the Unicode "replacement character" U+FFFD.
util.transferableAbortController()#
History
| Version | Changes |
|---|---|
| v23.11.0, v22.15.0 | Marking the API stable. |
| v18.11.0 | Added in: v18.11.0 |
Creates and returns an <AbortController> instance whose <AbortSignal> is marked as transferable and can be used with structuredClone() or postMessage().
util.transferableAbortSignal(signal)#
History
| Version | Changes |
|---|---|
| v23.11.0, v22.15.0 | Marking the API stable. |
| v18.11.0 | Added in: v18.11.0 |
- signal <AbortSignal>
- Returns: <AbortSignal>

Marks the given <AbortSignal> as transferable so that it can be used with structuredClone() and postMessage().

```js
const signal = transferableAbortSignal(AbortSignal.timeout(100));
const channel = new MessageChannel();
channel.port2.postMessage(signal, [signal]);
```

util.aborted(signal, resource)#
History
| Version | Changes |
|---|---|
| v24.0.0, v22.16.0 | Change stability index for this feature from Experimental to Stable. |
| v19.7.0, v18.16.0 | Added in: v19.7.0, v18.16.0 |
- signal <AbortSignal>
- resource <Object> Any non-null object tied to the abortable operation and held weakly. If resource is garbage collected before the signal aborts, the promise remains pending, allowing Node.js to stop tracking it. This helps prevent memory leaks in long-running or non-cancelable operations.
- Returns: <Promise>

Listens to the abort event on the provided signal and returns a promise that resolves when the signal is aborted. If resource is provided, it weakly references the operation's associated object, so if resource is garbage collected before the signal aborts, then the returned promise shall remain pending. This prevents memory leaks in long-running or non-cancelable operations.

```js
const { aborted } = require('node:util');

// Obtain an object with an abortable signal, like a custom resource or operation.
const dependent = obtainSomethingAbortable();

// Pass `dependent` as the resource, indicating the promise should only resolve
// if `dependent` is still in memory when the signal is aborted.
aborted(dependent.signal, dependent).then(() => {
  // This code runs when `dependent` is aborted.
  console.log('Dependent resource was aborted.');
});

// Simulate an event that triggers the abort.
dependent.on('event', () => {
  dependent.abort();  // This will cause the `aborted` promise to resolve.
});
```

```js
import { aborted } from 'node:util';

// Obtain an object with an abortable signal, like a custom resource or operation.
const dependent = obtainSomethingAbortable();

// Pass `dependent` as the resource, indicating the promise should only resolve
// if `dependent` is still in memory when the signal is aborted.
aborted(dependent.signal, dependent).then(() => {
  // This code runs when `dependent` is aborted.
  console.log('Dependent resource was aborted.');
});

// Simulate an event that triggers the abort.
dependent.on('event', () => {
  dependent.abort();  // This will cause the `aborted` promise to resolve.
});
```
util.types#
History
| Version | Changes |
|---|---|
| v15.3.0 | Exposed as require('util/types'). |
| v10.0.0 | Added in: v10.0.0 |
util.types provides type checks for different kinds of built-in objects. Unlike instanceof or Object.prototype.toString.call(value), these checks do not inspect properties of the object that are accessible from JavaScript (like their prototype), and usually have the overhead of calling into C++.
The result generally does not make any guarantees about what kinds of properties or behavior a value exposes in JavaScript. They are primarily useful for addon developers who prefer to do type checking in JavaScript.
The API is accessible via require('node:util').types or require('node:util/types').
util.types.isAnyArrayBuffer(value)#
Returns true if the value is a built-in <ArrayBuffer> or <SharedArrayBuffer> instance.
See also util.types.isArrayBuffer() and util.types.isSharedArrayBuffer().

```js
util.types.isAnyArrayBuffer(new ArrayBuffer());  // Returns true
util.types.isAnyArrayBuffer(new SharedArrayBuffer());  // Returns true
```

util.types.isArrayBufferView(value)#
Returns true if the value is an instance of one of the <ArrayBuffer> views, such as typed array objects or <DataView>. Equivalent to ArrayBuffer.isView().

```js
util.types.isArrayBufferView(new Int8Array());  // true
util.types.isArrayBufferView(Buffer.from('hello world'));  // true
util.types.isArrayBufferView(new DataView(new ArrayBuffer(16)));  // true
util.types.isArrayBufferView(new ArrayBuffer());  // false
```

util.types.isArgumentsObject(value)#
Returns true if the value is an arguments object.

```js
function foo() {
  util.types.isArgumentsObject(arguments);  // Returns true
}
```

util.types.isArrayBuffer(value)#
Returns true if the value is a built-in <ArrayBuffer> instance. This does not include <SharedArrayBuffer> instances. Usually, it is desirable to test for both; see util.types.isAnyArrayBuffer() for that.

```js
util.types.isArrayBuffer(new ArrayBuffer());  // Returns true
util.types.isArrayBuffer(new SharedArrayBuffer());  // Returns false
```

util.types.isAsyncFunction(value)#
Returns true if the value is an async function. This only reports back what the JavaScript engine is seeing; in particular, the return value may not match the original source code if a transpilation tool was used.

```js
util.types.isAsyncFunction(function foo() {});  // Returns false
util.types.isAsyncFunction(async function foo() {});  // Returns true
```

util.types.isBigInt64Array(value)#
Returns true if the value is a BigInt64Array instance.

```js
util.types.isBigInt64Array(new BigInt64Array());  // Returns true
util.types.isBigInt64Array(new BigUint64Array());  // Returns false
```

util.types.isBigIntObject(value)#
Returns true if the value is a BigInt object, e.g. created by Object(BigInt(123)).

```js
util.types.isBigIntObject(Object(BigInt(123)));  // Returns true
util.types.isBigIntObject(BigInt(123));  // Returns false
util.types.isBigIntObject(123);  // Returns false
```

util.types.isBigUint64Array(value)#
Returns true if the value is a BigUint64Array instance.

```js
util.types.isBigUint64Array(new BigInt64Array());  // Returns false
util.types.isBigUint64Array(new BigUint64Array());  // Returns true
```

util.types.isBooleanObject(value)#
Returns true if the value is a boolean object, e.g. created by new Boolean().

```js
util.types.isBooleanObject(false);  // Returns false
util.types.isBooleanObject(true);  // Returns false
util.types.isBooleanObject(new Boolean(false));  // Returns true
util.types.isBooleanObject(new Boolean(true));  // Returns true
util.types.isBooleanObject(Boolean(false));  // Returns false
util.types.isBooleanObject(Boolean(true));  // Returns false
```

util.types.isBoxedPrimitive(value)#
Returns true if the value is any boxed primitive object, e.g. created by new Boolean(), new String() or Object(Symbol()).

For example:

```js
util.types.isBoxedPrimitive(false);  // Returns false
util.types.isBoxedPrimitive(new Boolean(false));  // Returns true
util.types.isBoxedPrimitive(Symbol('foo'));  // Returns false
util.types.isBoxedPrimitive(Object(Symbol('foo')));  // Returns true
util.types.isBoxedPrimitive(Object(BigInt(5)));  // Returns true
```

util.types.isDataView(value)#
Returns true if the value is a built-in <DataView> instance.

```js
const ab = new ArrayBuffer(20);
util.types.isDataView(new DataView(ab));  // Returns true
util.types.isDataView(new Float64Array());  // Returns false
```

See also ArrayBuffer.isView().
util.types.isDate(value)#
Returns true if the value is a built-in <Date> instance.

```js
util.types.isDate(new Date());  // Returns true
```

util.types.isExternal(value)#
Returns true if the value is a native External value.

A native External value is a special type of object that contains a raw C++ pointer (void*) for access from native code, and has no other properties. Such objects are created either by Node.js internals or native addons. In JavaScript, they are frozen objects with a null prototype.

```c
#include <js_native_api.h>
#include <stdlib.h>
napi_value result;
static napi_value MyNapi(napi_env env, napi_callback_info info) {
  int* raw = (int*) malloc(1024);
  napi_status status = napi_create_external(env, (void*) raw, NULL, NULL, &result);
  if (status != napi_ok) {
    napi_throw_error(env, NULL, "napi_create_external failed");
    return NULL;
  }
  return result;
}
...
DECLARE_NAPI_PROPERTY("myNapi", MyNapi)
...
```

```js
import native from 'napi_addon.node';
import { types } from 'node:util';

const data = native.myNapi();
types.isExternal(data);  // returns true
types.isExternal(0);  // returns false
types.isExternal(new String('foo'));  // returns false
```

```js
const native = require('napi_addon.node');
const { types } = require('node:util');

const data = native.myNapi();
types.isExternal(data);  // returns true
types.isExternal(0);  // returns false
types.isExternal(new String('foo'));  // returns false
```

For further information on napi_create_external, refer to napi_create_external().
util.types.isFloat16Array(value)#
Returns true if the value is a built-in <Float16Array> instance.

```js
util.types.isFloat16Array(new ArrayBuffer());  // Returns false
util.types.isFloat16Array(new Float16Array());  // Returns true
util.types.isFloat16Array(new Float32Array());  // Returns false
```

util.types.isFloat32Array(value)#
Returns true if the value is a built-in <Float32Array> instance.

```js
util.types.isFloat32Array(new ArrayBuffer());  // Returns false
util.types.isFloat32Array(new Float32Array());  // Returns true
util.types.isFloat32Array(new Float64Array());  // Returns false
```

util.types.isFloat64Array(value)#
Returns true if the value is a built-in <Float64Array> instance.

```js
util.types.isFloat64Array(new ArrayBuffer());  // Returns false
util.types.isFloat64Array(new Uint8Array());  // Returns false
util.types.isFloat64Array(new Float64Array());  // Returns true
```

util.types.isGeneratorFunction(value)#
Returns true if the value is a generator function. This only reports back what the JavaScript engine is seeing; in particular, the return value may not match the original source code if a transpilation tool was used.

```js
util.types.isGeneratorFunction(function foo() {});  // Returns false
util.types.isGeneratorFunction(function* foo() {});  // Returns true
```

util.types.isGeneratorObject(value)#
Returns true if the value is a generator object as returned from a built-in generator function. This only reports back what the JavaScript engine is seeing; in particular, the return value may not match the original source code if a transpilation tool was used.

```js
function* foo() {}
const generator = foo();
util.types.isGeneratorObject(generator);  // Returns true
```

util.types.isInt8Array(value)#
Returns true if the value is a built-in <Int8Array> instance.

```js
util.types.isInt8Array(new ArrayBuffer());  // Returns false
util.types.isInt8Array(new Int8Array());  // Returns true
util.types.isInt8Array(new Float64Array());  // Returns false
```

util.types.isInt16Array(value)#
Returns true if the value is a built-in <Int16Array> instance.

```js
util.types.isInt16Array(new ArrayBuffer());  // Returns false
util.types.isInt16Array(new Int16Array());  // Returns true
util.types.isInt16Array(new Float64Array());  // Returns false
```

util.types.isInt32Array(value)#
Returns true if the value is a built-in <Int32Array> instance.

```js
util.types.isInt32Array(new ArrayBuffer());  // Returns false
util.types.isInt32Array(new Int32Array());  // Returns true
util.types.isInt32Array(new Float64Array());  // Returns false
```

util.types.isMap(value)#
Returns true if the value is a built-in <Map> instance.

```js
util.types.isMap(new Map());  // Returns true
```

util.types.isMapIterator(value)#
Returns true if the value is an iterator returned for a built-in <Map> instance.

```js
const map = new Map();
util.types.isMapIterator(map.keys());  // Returns true
util.types.isMapIterator(map.values());  // Returns true
util.types.isMapIterator(map.entries());  // Returns true
util.types.isMapIterator(map[Symbol.iterator]());  // Returns true
```

util.types.isModuleNamespaceObject(value)#
Returns true if the value is an instance of a Module Namespace Object.

```js
import * as ns from './a.js';

util.types.isModuleNamespaceObject(ns);  // Returns true
```

util.types.isNativeError(value)#
Use Error.isError() instead. Note: As of Node.js 24, Error.isError() is currently slower than util.types.isNativeError(). If performance is critical, consider benchmarking both in your environment.
Returns true if the value was returned by the constructor of a built-in Error type.

```js
console.log(util.types.isNativeError(new Error()));  // true
console.log(util.types.isNativeError(new TypeError()));  // true
console.log(util.types.isNativeError(new RangeError()));  // true
```

Subclasses of the native error types are also native errors:

```js
class MyError extends Error {}
console.log(util.types.isNativeError(new MyError()));  // true
```

A value being instanceof a native error class is not equivalent to isNativeError() returning true for that value. isNativeError() returns true for errors which come from a different realm while instanceof Error returns false for these errors:

```js
import { createContext, runInContext } from 'node:vm';
import { types } from 'node:util';

const context = createContext({});
const myError = runInContext('new Error()', context);
console.log(types.isNativeError(myError));  // true
console.log(myError instanceof Error);  // false
```

```js
const { createContext, runInContext } = require('node:vm');
const { types } = require('node:util');

const context = createContext({});
const myError = runInContext('new Error()', context);
console.log(types.isNativeError(myError));  // true
console.log(myError instanceof Error);  // false
```

Conversely, isNativeError() returns false for all objects which were not returned by the constructor of a native error. That includes values which are instanceof native errors:

```js
const myError = { __proto__: Error.prototype };
console.log(util.types.isNativeError(myError));  // false
console.log(myError instanceof Error);  // true
```

util.types.isNumberObject(value)#
Returns true if the value is a number object, e.g. created by new Number().

```js
util.types.isNumberObject(0);  // Returns false
util.types.isNumberObject(new Number(0));  // Returns true
```

util.types.isPromise(value)#
Returns true if the value is a built-in <Promise>.

```js
util.types.isPromise(Promise.resolve(42));  // Returns true
```

util.types.isProxy(value)#
Returns true if the value is a <Proxy> instance.

```js
const target = {};
const proxy = new Proxy(target, {});
util.types.isProxy(target);  // Returns false
util.types.isProxy(proxy);  // Returns true
```

util.types.isRegExp(value)#
Returns true if the value is a regular expression object.

```js
util.types.isRegExp(/abc/);  // Returns true
util.types.isRegExp(new RegExp('abc'));  // Returns true
```

util.types.isSet(value)#
Returns true if the value is a built-in <Set> instance.

```js
util.types.isSet(new Set());  // Returns true
```

util.types.isSetIterator(value)#
Returns true if the value is an iterator returned for a built-in <Set> instance.

```js
const set = new Set();
util.types.isSetIterator(set.keys());  // Returns true
util.types.isSetIterator(set.values());  // Returns true
util.types.isSetIterator(set.entries());  // Returns true
util.types.isSetIterator(set[Symbol.iterator]());  // Returns true
```

util.types.isSharedArrayBuffer(value)#
Returns true if the value is a built-in <SharedArrayBuffer> instance. This does not include <ArrayBuffer> instances. Usually, it is desirable to test for both; see util.types.isAnyArrayBuffer() for that.

```js
util.types.isSharedArrayBuffer(new ArrayBuffer());  // Returns false
util.types.isSharedArrayBuffer(new SharedArrayBuffer());  // Returns true
```

util.types.isStringObject(value)#
Returns true if the value is a string object, e.g. created by new String().

```js
util.types.isStringObject('foo');  // Returns false
util.types.isStringObject(new String('foo'));  // Returns true
```

util.types.isSymbolObject(value)#
Returns true if the value is a symbol object, created by calling Object() on a Symbol primitive.

```js
const symbol = Symbol('foo');
util.types.isSymbolObject(symbol);  // Returns false
util.types.isSymbolObject(Object(symbol));  // Returns true
```

util.types.isTypedArray(value)#
Returns true if the value is a built-in <TypedArray> instance.

```js
util.types.isTypedArray(new ArrayBuffer());  // Returns false
util.types.isTypedArray(new Uint8Array());  // Returns true
util.types.isTypedArray(new Float64Array());  // Returns true
```

See also ArrayBuffer.isView().
util.types.isUint8Array(value)#
Returns true if the value is a built-in <Uint8Array> instance.

```js
util.types.isUint8Array(new ArrayBuffer());  // Returns false
util.types.isUint8Array(new Uint8Array());  // Returns true
util.types.isUint8Array(new Float64Array());  // Returns false
```

util.types.isUint8ClampedArray(value)#
Returns true if the value is a built-in <Uint8ClampedArray> instance.

```js
util.types.isUint8ClampedArray(new ArrayBuffer());  // Returns false
util.types.isUint8ClampedArray(new Uint8ClampedArray());  // Returns true
util.types.isUint8ClampedArray(new Float64Array());  // Returns false
```

util.types.isUint16Array(value)#
Returns true if the value is a built-in <Uint16Array> instance.

```js
util.types.isUint16Array(new ArrayBuffer());  // Returns false
util.types.isUint16Array(new Uint16Array());  // Returns true
util.types.isUint16Array(new Float64Array());  // Returns false
```

util.types.isUint32Array(value)#
Returns true if the value is a built-in <Uint32Array> instance.

```js
util.types.isUint32Array(new ArrayBuffer());  // Returns false
util.types.isUint32Array(new Uint32Array());  // Returns true
util.types.isUint32Array(new Float64Array());  // Returns false
```

Deprecated APIs#
The following APIs are deprecated and should no longer be used. Existing applications and modules should be updated to find alternative approaches.
util._extend(target, source)#
Use Object.assign() instead. The util._extend() method was never intended to be used outside of internal Node.js modules. The community found and used it anyway.
It is deprecated and should not be used in new code. JavaScript comes with very similar built-in functionality through Object.assign().
An automated migration is available (source):
```bash
npx codemod@latest @nodejs/util-extend-to-object-assign
```

util.isArray(object)#
Use Array.isArray() instead. Alias for Array.isArray().
Returns true if the given object is an Array. Otherwise, returns false.

```js
const util = require('node:util');

util.isArray([]);
// Returns: true
util.isArray(new Array());
// Returns: true
util.isArray({});
// Returns: false
```

An automated migration is available (source):

```bash
npx codemod@latest @nodejs/util-is
```

V8#
Source Code: lib/v8.js
The node:v8 module exposes APIs that are specific to the version of V8 built into the Node.js binary. It can be accessed using:

```js
import v8 from 'node:v8';
```

```js
const v8 = require('node:v8');
```
v8.cachedDataVersionTag()#
- Returns:<integer>
Returns an integer representing a version tag derived from the V8 version, command-line flags, and detected CPU features. This is useful for determining whether a vm.Script cachedData buffer is compatible with this instance of V8.

```js
console.log(v8.cachedDataVersionTag());  // 3947234607

// The value returned by v8.cachedDataVersionTag() is derived from the V8
// version, command-line flags, and detected CPU features. Test that the value
// does indeed update when flags are toggled.
v8.setFlagsFromString('--allow_natives_syntax');
console.log(v8.cachedDataVersionTag());  // 183726201
```

v8.getHeapCodeStatistics()#
- Returns:<Object>
Get statistics about code and its metadata in the heap; see the V8 GetHeapCodeAndMetadataStatistics API. Returns an object with the following properties:

- code_and_metadata_size <number>
- bytecode_and_metadata_size <number>
- external_script_source_size <number>
- cpu_profiler_metadata_size <number>

```js
{
  code_and_metadata_size: 212208,
  bytecode_and_metadata_size: 161368,
  external_script_source_size: 1410794,
  cpu_profiler_metadata_size: 0,
}
```

v8.getHeapSnapshot([options])#
History
| Version | Changes |
|---|---|
| v19.1.0 | Support options to configure the heap snapshot. |
| v11.13.0 | Added in: v11.13.0 |
- options <Object>
- Returns: <stream.Readable> A Readable containing the V8 heap snapshot.

Generates a snapshot of the current V8 heap and returns a Readable Stream that may be used to read the JSON serialized representation. This JSON stream format is intended to be used with tools such as Chrome DevTools. The JSON schema is undocumented and specific to the V8 engine. Therefore, the schema may change from one version of V8 to the next.

Creating a heap snapshot requires memory about twice the size of the heap at the time the snapshot is created. This results in the risk of OOM killers terminating the process.

Generating a snapshot is a synchronous operation which blocks the event loop for a duration depending on the heap size.

```js
// Print heap snapshot to the console
import { getHeapSnapshot } from 'node:v8';
import process from 'node:process';

const stream = getHeapSnapshot();
stream.pipe(process.stdout);
```

```js
// Print heap snapshot to the console
const v8 = require('node:v8');
const process = require('node:process');

const stream = v8.getHeapSnapshot();
stream.pipe(process.stdout);
```

v8.getHeapSpaceStatistics()#
History
| Version | Changes |
|---|---|
| v7.5.0 | Support values exceeding the 32-bit unsigned integer range. |
| v6.0.0 | Added in: v6.0.0 |
- Returns:<Object[]>
Returns statistics about the V8 heap spaces, i.e. the segments which make up the V8 heap. Neither the ordering of heap spaces, nor the availability of a heap space can be guaranteed as the statistics are provided via the V8 GetHeapSpaceStatistics function and may change from one V8 version to the next.
The value returned is an array of objects containing the following properties:
- space_name <string>
- space_size <number>
- space_used_size <number>
- space_available_size <number>
- physical_space_size <number>

```json
[
  {
    "space_name": "new_space",
    "space_size": 2063872,
    "space_used_size": 951112,
    "space_available_size": 80824,
    "physical_space_size": 2063872
  },
  {
    "space_name": "old_space",
    "space_size": 3090560,
    "space_used_size": 2493792,
    "space_available_size": 0,
    "physical_space_size": 3090560
  },
  {
    "space_name": "code_space",
    "space_size": 1260160,
    "space_used_size": 644256,
    "space_available_size": 960,
    "physical_space_size": 1260160
  },
  {
    "space_name": "map_space",
    "space_size": 1094160,
    "space_used_size": 201608,
    "space_available_size": 0,
    "physical_space_size": 1094160
  },
  {
    "space_name": "large_object_space",
    "space_size": 0,
    "space_used_size": 0,
    "space_available_size": 1490980608,
    "physical_space_size": 0
  }
]
```

v8.getHeapStatistics()#
History
| Version | Changes |
|---|---|
| v7.5.0 | Support values exceeding the 32-bit unsigned integer range. |
| v7.2.0 | Added |
| v1.0.0 | Added in: v1.0.0 |
- Returns:<Object>
Returns an object with the following properties:
- total_heap_size <number>
- total_heap_size_executable <number>
- total_physical_size <number>
- total_available_size <number>
- used_heap_size <number>
- heap_size_limit <number>
- malloced_memory <number>
- peak_malloced_memory <number>
- does_zap_garbage <number>
- number_of_native_contexts <number>
- number_of_detached_contexts <number>
- total_global_handles_size <number>
- used_global_handles_size <number>
- external_memory <number>
- total_allocated_bytes <number>
total_heap_size The value of total_heap_size is the number of bytes V8 has allocated for the heap. This can grow if used_heap needs more memory.
total_heap_size_executable The value of total_heap_size_executable is the portion of the heap that can contain executable code, in bytes. This includes memory used by JIT-compiled code and any memory that must be kept executable.
total_physical_size The value of total_physical_size is the actual physical memory used by the V8 heap, in bytes. This is the amount of memory that is committed (or in use) rather than reserved.
total_available_size The value of total_available_size is the number of bytes of memory available to the V8 heap. This value represents how much more memory V8 can use before it exceeds the heap limit.
used_heap_size The value of used_heap_size is the number of bytes currently being used by V8's JavaScript objects. This is the actual memory in use and does not include memory that has been allocated but not yet used.
heap_size_limit The value of heap_size_limit is the maximum size of the V8 heap, in bytes (either the default limit, determined by system resources, or the value passed to the --max_old_space_size option).
malloced_memory The value of malloced_memory is the number of bytes allocated through malloc by V8.
peak_malloced_memory The value of peak_malloced_memory is the peak number of bytes allocated through malloc by V8 during the lifetime of the process.
does_zap_garbage is a 0/1 boolean, which signifies whether the --zap_code_space option is enabled or not. This makes V8 overwrite heap garbage with a bit pattern. The RSS footprint (resident set size) gets bigger because it continuously touches all heap pages and that makes them less likely to get swapped out by the operating system.
number_of_native_contexts The value of number_of_native_contexts is the number of top-level contexts currently active. An increase of this number over time indicates a memory leak.
number_of_detached_contexts The value of number_of_detached_contexts is the number of contexts that were detached and not yet garbage collected. This number being non-zero indicates a potential memory leak.
total_global_handles_size The value of total_global_handles_size is the total memory size of V8 global handles.
used_global_handles_size The value of used_global_handles_size is the used memory size of V8 global handles.
external_memory The value of external_memory is the memory size of array buffers and external strings.
total_allocated_bytes The value of total_allocated_bytes is the total number of bytes allocated since the creation of the Isolate.
```js
{
  total_heap_size: 7326976,
  total_heap_size_executable: 4194304,
  total_physical_size: 7326976,
  total_available_size: 1152656,
  used_heap_size: 3476208,
  heap_size_limit: 1535115264,
  malloced_memory: 16384,
  peak_malloced_memory: 1127496,
  does_zap_garbage: 0,
  number_of_native_contexts: 1,
  number_of_detached_contexts: 0,
  total_global_handles_size: 8192,
  used_global_handles_size: 3296,
  external_memory: 318824,
  total_allocated_bytes: 45224088
}
```

v8.getCppHeapStatistics([detailLevel])#
Retrieves CppHeap statistics regarding memory consumption and utilization using the V8 CollectStatistics() function, which may change from one V8 version to the next.

- detailLevel <string> | <undefined> Specifies the level of detail in the returned statistics. Default: 'detailed'. Accepted values are:
  - 'brief': Brief statistics contain only the top-level allocated and used memory statistics for the entire heap.
  - 'detailed': Detailed statistics also contain a breakdown per space and page, as well as free list statistics and object type histograms.

It returns an object with a structure similar to the cppgc::HeapStatistics object. See the V8 documentation for more information about the properties of the object.
```js
// Detailed
({
  committed_size_bytes: 131072,
  resident_size_bytes: 131072,
  used_size_bytes: 152,
  space_statistics: [
    {
      name: 'NormalPageSpace0',
      committed_size_bytes: 0,
      resident_size_bytes: 0,
      used_size_bytes: 0,
      page_stats: [{}],
      free_list_stats: {},
    },
    {
      name: 'NormalPageSpace1',
      committed_size_bytes: 131072,
      resident_size_bytes: 131072,
      used_size_bytes: 152,
      page_stats: [{}],
      free_list_stats: {},
    },
    {
      name: 'NormalPageSpace2',
      committed_size_bytes: 0,
      resident_size_bytes: 0,
      used_size_bytes: 0,
      page_stats: [{}],
      free_list_stats: {},
    },
    {
      name: 'NormalPageSpace3',
      committed_size_bytes: 0,
      resident_size_bytes: 0,
      used_size_bytes: 0,
      page_stats: [{}],
      free_list_stats: {},
    },
    {
      name: 'LargePageSpace',
      committed_size_bytes: 0,
      resident_size_bytes: 0,
      used_size_bytes: 0,
      page_stats: [{}],
      free_list_stats: {},
    },
  ],
  type_names: [],
  detail_level: 'detailed',
});

// Brief
({
  committed_size_bytes: 131072,
  resident_size_bytes: 131072,
  used_size_bytes: 128864,
  space_statistics: [],
  type_names: [],
  detail_level: 'brief',
});
```

v8.queryObjects(ctor[, options])#
History
| Version | Changes |
|---|---|
| v25.4.0 | This API is no longer experimental. |
| v22.0.0, v20.13.0 | Added in: v22.0.0, v20.13.0 |
- ctor <Function> The constructor that can be used to search on the prototype chain in order to filter target objects in the heap.
- options <undefined> | <Object>
  - format <string> If it's 'count', the count of matched objects is returned. If it's 'summary', an array with summary strings of the matched objects is returned.
- Returns: <number> | <string[]>
This is similar to the queryObjects() console API provided by the Chromium DevTools console. It can be used to search for objects that have the matching constructor on their prototype chain in the heap after a full garbage collection, which can be useful for memory leak regression tests. To avoid surprising results, users should avoid using this API on constructors whose implementation they don't control, or on constructors that can be invoked by other parties in the application.
To avoid accidental leaks, this API does not return raw references to the objects found. By default, it returns the count of the objects found. If options.format is 'summary', it returns an array containing brief string representations for each object. The visibility provided in this API is similar to what the heap snapshot provides, while users can save the cost of serialization and parsing and directly filter the target objects during the search.
Only objects created in the current execution context are included in the results.
```js
const { queryObjects } = require('node:v8');

class A { foo = 'bar'; }
console.log(queryObjects(A)); // 0
const a = new A();
console.log(queryObjects(A)); // 1
// [ "A { foo: 'bar' }" ]
console.log(queryObjects(A, { format: 'summary' }));

class B extends A { bar = 'qux'; }
const b = new B();
console.log(queryObjects(B)); // 1
// [ "B { foo: 'bar', bar: 'qux' }" ]
console.log(queryObjects(B, { format: 'summary' }));

// Note that, when there are child classes inheriting from a constructor,
// the constructor also shows up in the prototype chain of the child
// classes' prototype, so the child classes' prototype would also be
// included in the result.
console.log(queryObjects(A)); // 3
// [ "B { foo: 'bar', bar: 'qux' }", 'A {}', "A { foo: 'bar' }" ]
console.log(queryObjects(A, { format: 'summary' }));
```
v8.setFlagsFromString(flags)#
flags <string>

The v8.setFlagsFromString() method can be used to programmatically set V8 command-line flags. This method should be used with care. Changing settings after the VM has started may result in unpredictable behavior, including crashes and data loss; or it may simply do nothing.

The V8 options available for a version of Node.js may be determined by running node --v8-options.
Usage:
```js
import { setFlagsFromString } from 'node:v8';
import { setInterval } from 'node:timers';

// setFlagsFromString to trace garbage collection events
setFlagsFromString('--trace-gc');

// Trigger GC events by using some memory
let arrays = [];
const interval = setInterval(() => {
  for (let i = 0; i < 500; i++) {
    arrays.push(new Array(10000).fill(Math.random()));
  }
  if (arrays.length > 5000) {
    arrays = arrays.slice(-1000);
  }
  console.log(`\n* Created ${arrays.length} arrays\n`);
}, 100);

// setFlagsFromString to stop tracing GC events after 1.5 seconds
setTimeout(() => {
  setFlagsFromString('--notrace-gc');
  console.log('\nStopped tracing!\n');
}, 1500);

// Stop triggering GC events altogether after 2.5 seconds
setTimeout(() => {
  clearInterval(interval);
}, 2500);
```
v8.stopCoverage()#
The v8.stopCoverage() method allows the user to stop the coverage collection started by NODE_V8_COVERAGE, so that V8 can release the execution count records and optimize code. This can be used in conjunction with v8.takeCoverage() if the user wants to collect the coverage on demand.
v8.takeCoverage()#
The v8.takeCoverage() method allows the user to write the coverage started by NODE_V8_COVERAGE to disk on demand. This method can be invoked multiple times during the lifetime of the process. Each time the execution counter will be reset and a new coverage report will be written to the directory specified by NODE_V8_COVERAGE.

When the process is about to exit, one last coverage will still be written to disk unless v8.stopCoverage() is invoked before the process exits.
v8.writeHeapSnapshot([filename[,options]])#
History
| Version | Changes |
|---|---|
| v19.1.0 | Support options to configure the heap snapshot. |
| v18.0.0 | An exception will now be thrown if the file could not be written. |
| v18.0.0 | Make the returned error codes consistent across all platforms. |
| v11.13.0 | Added in: v11.13.0 |
- filename <string> The file path where the V8 heap snapshot is to be saved. If not specified, a file name with the pattern 'Heap-${yyyymmdd}-${hhmmss}-${pid}-${thread_id}.heapsnapshot' will be generated, where {pid} will be the PID of the Node.js process, and {thread_id} will be 0 when writeHeapSnapshot() is called from the main Node.js thread or the id of a worker thread.
- options <Object>
- Returns: <string> The filename where the snapshot was saved.
Generates a snapshot of the current V8 heap and writes it to a JSON file. This file is intended to be used with tools such as Chrome DevTools. The JSON schema is undocumented and specific to the V8 engine, and may change from one version of V8 to the next.

A heap snapshot is specific to a single V8 isolate. When using worker threads, a heap snapshot generated from the main thread will not contain any information about the workers, and vice versa.

Creating a heap snapshot requires memory about twice the size of the heap at the time the snapshot is created. This results in the risk of OOM killers terminating the process.

Generating a snapshot is a synchronous operation which blocks the event loop for a duration depending on the heap size.
```js
import { writeHeapSnapshot } from 'node:v8';
import { Worker, isMainThread, parentPort } from 'node:worker_threads';
import { fileURLToPath } from 'node:url';

if (isMainThread) {
  const __filename = fileURLToPath(import.meta.url);
  const worker = new Worker(__filename);

  worker.once('message', (filename) => {
    console.log(`worker heapdump: ${filename}`);
    // Now get a heapdump for the main thread.
    console.log(`main thread heapdump: ${writeHeapSnapshot()}`);
  });

  // Tell the worker to create a heapdump.
  worker.postMessage('heapdump');
} else {
  parentPort.once('message', (message) => {
    if (message === 'heapdump') {
      // Generate a heapdump for the worker
      // and return the filename to the parent.
      parentPort.postMessage(writeHeapSnapshot());
    }
  });
}
```
v8.setHeapSnapshotNearHeapLimit(limit)#
History
| Version | Changes |
|---|---|
| v24.0.0, v22.17.0 | Marking the API stable. |
| v18.10.0, v16.18.0 | Added in: v18.10.0, v16.18.0 |
limit <integer>

The API is a no-op if --heapsnapshot-near-heap-limit is already set from the command line or the API is called more than once. limit must be a positive integer. See --heapsnapshot-near-heap-limit for more information.
Serialization API#
The serialization API provides means of serializing JavaScript values in a way that is compatible with the HTML structured clone algorithm.

The format is backward-compatible (i.e. safe to store to disk). Equal JavaScript values may result in different serialized output.
v8.serialize(value)#
Uses a DefaultSerializer to serialize value into a buffer.

ERR_BUFFER_TOO_LARGE will be thrown when trying to serialize a huge object which requires a buffer larger than buffer.constants.MAX_LENGTH.
v8.deserialize(buffer)#
buffer <Buffer> | <TypedArray> | <DataView> A buffer returned by serialize().

Uses a DefaultDeserializer with default options to read a JS value from a buffer.
Class: v8.Serializer#
new Serializer()#
Creates a new Serializer object.
serializer.writeHeader()#
Writes out a header, which includes the serialization format version.
serializer.writeValue(value)#
value <any>

Serializes a JavaScript value and adds the serialized representation to the internal buffer.

This throws an error if value cannot be serialized.
serializer.releaseBuffer()#
- Returns:<Buffer>
Returns the stored internal buffer. This serializer should not be used once the buffer is released. Calling this method results in undefined behavior if a previous write has failed.
serializer.transferArrayBuffer(id, arrayBuffer)#
- id <integer> A 32-bit unsigned integer.
- arrayBuffer <ArrayBuffer> An ArrayBuffer instance.

Marks an ArrayBuffer as having its contents transferred out of band. Pass the corresponding ArrayBuffer in the deserializing context to deserializer.transferArrayBuffer().
serializer.writeUint32(value)#
value <integer>

Write a raw 32-bit unsigned integer. For use inside of a custom serializer._writeHostObject().
serializer.writeUint64(hi, lo)#
Write a raw 64-bit unsigned integer, split into high and low 32-bit parts. For use inside of a custom serializer._writeHostObject().
serializer.writeDouble(value)#
value <number>

Write a JS number value. For use inside of a custom serializer._writeHostObject().
serializer.writeRawBytes(buffer)#
buffer <Buffer> | <TypedArray> | <DataView>

Write raw bytes into the serializer's internal buffer. The deserializer will require a way to compute the length of the buffer. For use inside of a custom serializer._writeHostObject().
serializer._writeHostObject(object)#
object <Object>

This method is called to write some kind of host object, i.e. an object created by native C++ bindings. If it is not possible to serialize object, a suitable exception should be thrown.

This method is not present on the Serializer class itself but can be provided by subclasses.
serializer._getDataCloneError(message)#
message <string>

This method is called to generate error objects that will be thrown when an object cannot be cloned.

This method defaults to the Error constructor and can be overridden on subclasses.
serializer._getSharedArrayBufferId(sharedArrayBuffer)#
sharedArrayBuffer <SharedArrayBuffer>

This method is called when the serializer is going to serialize a SharedArrayBuffer object. It must return an unsigned 32-bit integer ID for the object, using the same ID if this SharedArrayBuffer has already been serialized. When deserializing, this ID will be passed to deserializer.transferArrayBuffer().
If the object cannot be serialized, an exception should be thrown.
This method is not present on the Serializer class itself but can be provided by subclasses.
serializer._setTreatArrayBufferViewsAsHostObjects(flag)#
flag <boolean> Default: false

Indicate whether to treat TypedArray and DataView objects as host objects, i.e. pass them to serializer._writeHostObject().
Class: v8.Deserializer#

new Deserializer(buffer)#

buffer <Buffer> | <TypedArray> | <DataView> A buffer returned by serializer.releaseBuffer().

Creates a new Deserializer object.
deserializer.readHeader()#
Reads and validates a header (including the format version). May, for example, reject an invalid or unsupported wire format. In that case, an Error is thrown.
deserializer.readValue()#
Deserializes a JavaScript value from the buffer and returns it.
deserializer.transferArrayBuffer(id, arrayBuffer)#
- id <integer> A 32-bit unsigned integer.
- arrayBuffer <ArrayBuffer> | <SharedArrayBuffer> An ArrayBuffer instance.

Marks an ArrayBuffer as having its contents transferred out of band. Pass the corresponding ArrayBuffer in the serializing context to serializer.transferArrayBuffer() (or return the id from serializer._getSharedArrayBufferId() in the case of SharedArrayBuffers).
deserializer.getWireFormatVersion()#
- Returns:<integer>
Reads the underlying wire format version. Likely mostly to be useful to legacy code reading old wire format versions. May not be called before .readHeader().
deserializer.readUint32()#
- Returns:<integer>
Read a raw 32-bit unsigned integer and return it. For use inside of a custom deserializer._readHostObject().
deserializer.readUint64()#
- Returns:<integer[]>
Read a raw 64-bit unsigned integer and return it as an array [hi, lo] with two 32-bit unsigned integer entries. For use inside of a custom deserializer._readHostObject().
deserializer.readDouble()#
- Returns:<number>
Read a JS number value. For use inside of a custom deserializer._readHostObject().
deserializer.readRawBytes(length)#
Read raw bytes from the deserializer's internal buffer. The length parameter must correspond to the length of the buffer that was passed to serializer.writeRawBytes(). For use inside of a custom deserializer._readHostObject().
deserializer._readHostObject()#
This method is called to read some kind of host object, i.e. an object that is created by native C++ bindings. If it is not possible to deserialize the data, a suitable exception should be thrown.

This method is not present on the Deserializer class itself but can be provided by subclasses.
Class: v8.DefaultSerializer#

A subclass of Serializer that serializes TypedArray (in particular Buffer) and DataView objects as host objects, and only stores the part of their underlying ArrayBuffers that they are referring to.
Class: v8.DefaultDeserializer#

A subclass of Deserializer corresponding to the format written by DefaultSerializer.
Promise hooks#
The promiseHooks interface can be used to track promise lifecycle events. To track all async activity, see async_hooks, which internally uses this module to produce promise lifecycle events in addition to events for other async resources. For request context management, see AsyncLocalStorage.
```js
import { promiseHooks } from 'node:v8';

// There are four lifecycle events produced by promises:

// The `init` event represents the creation of a promise. This could be a
// direct creation such as with `new Promise(...)` or a continuation such
// as `then()` or `catch()`. It also happens whenever an async function is
// called or does an `await`. If a continuation promise is created, the
// `parent` will be the promise it is a continuation from.
function init(promise, parent) {
  console.log('a promise was created', { promise, parent });
}

// The `settled` event happens when a promise receives a resolution or
// rejection value. This may happen synchronously such as when using
// `Promise.resolve()` on non-promise input.
function settled(promise) {
  console.log('a promise resolved or rejected', { promise });
}

// The `before` event runs immediately before a `then()` or `catch()` handler
// runs or an `await` resumes execution.
function before(promise) {
  console.log('a promise is about to call a then handler', { promise });
}

// The `after` event runs immediately after a `then()` handler runs or when
// an `await` begins after resuming from another.
function after(promise) {
  console.log('a promise is done calling a then handler', { promise });
}

// Lifecycle hooks may be started and stopped individually
const stopWatchingInits = promiseHooks.onInit(init);
const stopWatchingSettleds = promiseHooks.onSettled(settled);
const stopWatchingBefores = promiseHooks.onBefore(before);
const stopWatchingAfters = promiseHooks.onAfter(after);

// Or they may be started and stopped in groups
const stopHookSet = promiseHooks.createHook({
  init,
  settled,
  before,
  after,
});

// Trigger the hooks by using promises
const promiseLog = (word) => Promise.resolve(word).then(console.log);
promiseLog('Hello');
promiseLog('World');

// To stop a hook, call the function returned at its creation.
stopWatchingInits();
stopWatchingSettleds();
stopWatchingBefores();
stopWatchingAfters();
stopHookSet();
```
promiseHooks.onInit(init)#
- init <Function> The init callback to call when a promise is created.
- Returns: <Function> Call to stop the hook.

The init hook must be a plain function. Providing an async function will throw as it would produce an infinite microtask loop.
```js
import { promiseHooks } from 'node:v8';

const stop = promiseHooks.onInit((promise, parent) => {});
```
promiseHooks.onSettled(settled)#
- settled <Function> The settled callback to call when a promise is resolved or rejected.
- Returns: <Function> Call to stop the hook.

The settled hook must be a plain function. Providing an async function will throw as it would produce an infinite microtask loop.
```js
import { promiseHooks } from 'node:v8';

const stop = promiseHooks.onSettled((promise) => {});
```
promiseHooks.onBefore(before)#
- before <Function> The before callback to call before a promise continuation executes.
- Returns: <Function> Call to stop the hook.

The before hook must be a plain function. Providing an async function will throw as it would produce an infinite microtask loop.
```js
import { promiseHooks } from 'node:v8';

const stop = promiseHooks.onBefore((promise) => {});
```
promiseHooks.onAfter(after)#
- after <Function> The after callback to call after a promise continuation executes.
- Returns: <Function> Call to stop the hook.

The after hook must be a plain function. Providing an async function will throw as it would produce an infinite microtask loop.
```js
import { promiseHooks } from 'node:v8';

const stop = promiseHooks.onAfter((promise) => {});
```
promiseHooks.createHook(callbacks)#
- callbacks <Object> The Hook Callbacks to register
  - init <Function> The init callback.
  - before <Function> The before callback.
  - after <Function> The after callback.
  - settled <Function> The settled callback.
- Returns: <Function> Used for disabling hooks

The hook callbacks must be plain functions. Providing async functions will throw as it would produce an infinite microtask loop.
Registers functions to be called for different lifetime events of each promise.
The callbacks init()/before()/after()/settled() are called for the respective events during a promise's lifetime.

All callbacks are optional. For example, if only promise creation needs to be tracked, then only the init callback needs to be passed. The specifics of all functions that can be passed to callbacks are in the Hook Callbacks section.
```js
import { promiseHooks } from 'node:v8';

const stopAll = promiseHooks.createHook({
  init(promise, parent) {},
});
```
Hook callbacks#
Key events in the lifetime of a promise have been categorized into four areas: creation of a promise, before/after a continuation handler is called or around an await, and when the promise resolves or rejects.

While these hooks are similar to those of async_hooks they lack a destroy hook. Other types of async resources typically represent sockets or file descriptors which have a distinct "closed" state to express the destroy lifecycle event while promises remain usable for as long as code can still reach them. Garbage collection tracking is used to make promises fit into the async_hooks event model, however this tracking is very expensive and they may not necessarily ever even be garbage collected.

Because promises are asynchronous resources whose lifecycle is tracked via the promise hooks mechanism, the init(), before(), after(), and settled() callbacks must not be async functions as they create more promises which would produce an infinite loop.

While this API is used to feed promise events into async_hooks, the ordering between the two is undefined. Both APIs are multi-tenant and therefore could produce events in any order relative to each other.
init(promise, parent)#
- promise <Promise> The promise being created.
- parent <Promise> The promise continued from, if applicable.

Called when a promise is constructed. This does not mean that corresponding before/after events will occur, only that the possibility exists. This will happen if a promise is created without ever getting a continuation.
before(promise)#
promise <Promise>

Called before a promise continuation executes. This can be in the form of then(), catch(), or finally() handlers or an await resuming.

The before callback will be called 0 to N times. The before callback will typically be called 0 times if no continuation was ever made for the promise. The before callback may be called many times in the case where many continuations have been made from the same promise.
Startup Snapshot API#
History
| Version | Changes |
|---|---|
| v24.0.0, v22.17.0 | Marking the API stable. |
| v18.6.0, v16.17.0 | Added in: v18.6.0, v16.17.0 |
The v8.startupSnapshot interface can be used to add serialization and deserialization hooks for custom startup snapshots.
```bash
$ node --snapshot-blob snapshot.blob --build-snapshot entry.js
# This launches a process with the snapshot
$ node --snapshot-blob snapshot.blob
```

In the example above, entry.js can use methods from the v8.startupSnapshot interface to specify how to save information for custom objects in the snapshot during serialization and how the information can be used to synchronize these objects during deserialization of the snapshot. For example, if entry.js contains the following script:
```js
'use strict';
const fs = require('node:fs');
const zlib = require('node:zlib');
const path = require('node:path');
const assert = require('node:assert');
const v8 = require('node:v8');

class BookShelf {
  storage = new Map();

  // Reading a series of files from directory and store them into storage.
  constructor(directory, books) {
    for (const book of books) {
      this.storage.set(book, fs.readFileSync(path.join(directory, book)));
    }
  }

  static compressAll(shelf) {
    for (const [ book, content ] of shelf.storage) {
      shelf.storage.set(book, zlib.gzipSync(content));
    }
  }

  static decompressAll(shelf) {
    for (const [ book, content ] of shelf.storage) {
      shelf.storage.set(book, zlib.gunzipSync(content));
    }
  }
}

// __dirname here is where the snapshot script is placed
// during snapshot building time.
const shelf = new BookShelf(__dirname, [
  'book1.en_US.txt',
  'book1.es_ES.txt',
  'book2.zh_CN.txt',
]);

assert(v8.startupSnapshot.isBuildingSnapshot());
// On snapshot serialization, compress the books to reduce size.
v8.startupSnapshot.addSerializeCallback(BookShelf.compressAll, shelf);
// On snapshot deserialization, decompress the books.
v8.startupSnapshot.addDeserializeCallback(BookShelf.decompressAll, shelf);
v8.startupSnapshot.setDeserializeMainFunction((shelf) => {
  // process.env and process.argv are refreshed during snapshot
  // deserialization.
  const lang = process.env.BOOK_LANG || 'en_US';
  const book = process.argv[1];
  const name = `${book}.${lang}.txt`;
  console.log(shelf.storage.get(name));
}, shelf);
```

The resulting binary will print the data deserialized from the snapshot during startup, using the refreshed process.env and process.argv of the launched process:
```bash
$ BOOK_LANG=es_ES node --snapshot-blob snapshot.blob book1
# Prints content of book1.es_ES.txt deserialized from the snapshot.
```

Currently the application deserialized from a user-land snapshot cannot be snapshotted again, so these APIs are only available to applications that are not deserialized from a user-land snapshot.
v8.startupSnapshot.addSerializeCallback(callback[, data])#
- callback <Function> Callback to be invoked before serialization.
- data <any> Optional data that will be passed to the callback when it gets called.
Add a callback that will be called when the Node.js instance is about toget serialized into a snapshot and exit. This can be used to releaseresources that should not or cannot be serialized or to convert user datainto a form more suitable for serialization.
Callbacks are run in the order in which they are added.
v8.startupSnapshot.addDeserializeCallback(callback[, data])#
- callback <Function> Callback to be invoked after the snapshot is deserialized.
- data <any> Optional data that will be passed to the callback when it gets called.
Add a callback that will be called when the Node.js instance is deserialized from a snapshot. The callback and the data (if provided) will be serialized into the snapshot; they can be used to re-initialize the state of the application or to re-acquire resources that the application needs when the application is restarted from the snapshot.
Callbacks are run in the order in which they are added.
v8.startupSnapshot.setDeserializeMainFunction(callback[, data])#
- callback <Function> Callback to be invoked as the entry point after the snapshot is deserialized.
- data <any> Optional data that will be passed to the callback when it gets called.
This sets the entry point of the Node.js application when it is deserialized from a snapshot. This can be called only once in the snapshot building script. If called, the deserialized application no longer needs an additional entry point script to start up and will simply invoke the callback along with the deserialized data (if provided); otherwise an entry point script still needs to be provided to the deserialized application.
Class: v8.GCProfiler#

This API collects GC data in the current thread.
new v8.GCProfiler()#
Create a new instance of the v8.GCProfiler class. This API supports using syntax.
profiler.stop()#
Stop collecting GC data and return an object. The content of the object is as follows.
```json
{
  "version": 1,
  "startTime": 1674059033862,
  "statistics": [
    {
      "gcType": "Scavenge",
      "beforeGC": {
        "heapStatistics": {
          "totalHeapSize": 5005312,
          "totalHeapSizeExecutable": 524288,
          "totalPhysicalSize": 5226496,
          "totalAvailableSize": 4341325216,
          "totalGlobalHandlesSize": 8192,
          "usedGlobalHandlesSize": 2112,
          "usedHeapSize": 4883840,
          "heapSizeLimit": 4345298944,
          "mallocedMemory": 254128,
          "externalMemory": 225138,
          "peakMallocedMemory": 181760
        },
        "heapSpaceStatistics": [
          {
            "spaceName": "read_only_space",
            "spaceSize": 0,
            "spaceUsedSize": 0,
            "spaceAvailableSize": 0,
            "physicalSpaceSize": 0
          }
        ]
      },
      "cost": 1574.14,
      "afterGC": {
        "heapStatistics": {
          "totalHeapSize": 6053888,
          "totalHeapSizeExecutable": 524288,
          "totalPhysicalSize": 5500928,
          "totalAvailableSize": 4341101384,
          "totalGlobalHandlesSize": 8192,
          "usedGlobalHandlesSize": 2112,
          "usedHeapSize": 4059096,
          "heapSizeLimit": 4345298944,
          "mallocedMemory": 254128,
          "externalMemory": 225138,
          "peakMallocedMemory": 181760
        },
        "heapSpaceStatistics": [
          {
            "spaceName": "read_only_space",
            "spaceSize": 0,
            "spaceUsedSize": 0,
            "spaceAvailableSize": 0,
            "physicalSpaceSize": 0
          }
        ]
      }
    }
  ],
  "endTime": 1674059036865
}
```

Here's an example.
```js
import { GCProfiler } from 'node:v8';

const profiler = new GCProfiler();
profiler.start();
setTimeout(() => {
  console.log(profiler.stop());
}, 1000);
```
Class: SyncCPUProfileHandle#
syncCpuProfileHandle.stop()#
- Returns:<string>
Stops collecting the profile and returns the profile data.
syncCpuProfileHandle[Symbol.dispose]()#
Stops collecting the profile; the collected profile data is discarded.
Class: CPUProfileHandle#
Class: HeapProfileHandle#
v8.isStringOneByteRepresentation(content)#
V8 only supports Latin-1/ISO-8859-1 and UTF16 as the underlying representation of a string. If the content uses Latin-1/ISO-8859-1 as the underlying representation, this function will return true; otherwise, it returns false.

If this method returns false, that does not mean that the string contains some characters not in Latin-1/ISO-8859-1. Sometimes a Latin-1 string may also be represented as UTF16.
```js
import { isStringOneByteRepresentation } from 'node:v8';
import { Buffer } from 'node:buffer';

const Encoding = {
  latin1: 1,
  utf16le: 2,
};
const buffer = Buffer.alloc(100);

function writeString(input) {
  if (isStringOneByteRepresentation(input)) {
    console.log(`input: '${input}'`);
    buffer.writeUint8(Encoding.latin1);
    buffer.writeUint32LE(input.length, 1);
    buffer.write(input, 5, 'latin1');
    console.log(`decoded: '${buffer.toString('latin1', 5, 5 + input.length)}'\n`);
  } else {
    console.log(`input: '${input}'`);
    buffer.writeUint8(Encoding.utf16le);
    buffer.writeUint32LE(input.length * 2, 1);
    buffer.write(input, 5, 'utf16le');
    console.log(`decoded: '${buffer.toString('utf16le', 5, 5 + input.length * 2)}'`);
  }
}
writeString('hello');
writeString('你好');
```
v8.startCpuProfile()#
- Returns:<SyncCPUProfileHandle>
Starts a CPU profile and returns a SyncCPUProfileHandle object. This API supports using syntax.
```js
const v8 = require('node:v8');

const handle = v8.startCpuProfile();
const profile = handle.stop();
console.log(profile);
```

VM (executing JavaScript)#
Source Code: lib/vm.js

The node:vm module enables compiling and running code within V8 Virtual Machine contexts.

The node:vm module is not a security mechanism. Do not use it to run untrusted code.

JavaScript code can be compiled and run immediately or compiled, saved, and run later.

A common use case is to run the code in a different V8 Context. This means invoked code has a different global object than the invoking code.

One can provide the context by contextifying an object. The invoked code treats any property in the context like a global variable. Any changes to global variables caused by the invoked code are reflected in the context object.
```js
const vm = require('node:vm');

const x = 1;

const context = { x: 2 };
vm.createContext(context); // Contextify the object.

const code = 'x += 40; var y = 17;';
// `x` and `y` are global variables in the context.
// Initially, x has the value 2 because that is the value of context.x.
vm.runInContext(code, context);

console.log(context.x); // 42
console.log(context.y); // 17

console.log(x); // 1; y is not defined.
```

Class: vm.Script#
Instances of the `vm.Script` class contain precompiled scripts that can be executed in specific contexts.
new vm.Script(code[, options])#
History
| Version | Changes |
|---|---|
| v21.7.0, v20.12.0 | Added support for |
| v17.0.0, v16.12.0 | Added support for import attributes to the |
| v10.6.0 | The |
| v5.7.0 | The |
| v0.3.1 | Added in: v0.3.1 |
- `code` <string> The JavaScript code to compile.
- `options` <Object> | <string>
  - `filename` <string> Specifies the filename used in stack traces produced by this script. Default: `'evalmachine.<anonymous>'`.
  - `lineOffset` <number> Specifies the line number offset that is displayed in stack traces produced by this script. Default: `0`.
  - `columnOffset` <number> Specifies the first-line column number offset that is displayed in stack traces produced by this script. Default: `0`.
  - `cachedData` <Buffer> | <TypedArray> | <DataView> Provides an optional `Buffer` or `TypedArray`, or `DataView` with V8's code cache data for the supplied source. When supplied, the `cachedDataRejected` value will be set to either `true` or `false` depending on acceptance of the data by V8.
  - `produceCachedData` <boolean> When `true` and no `cachedData` is present, V8 will attempt to produce code cache data for `code`. Upon success, a `Buffer` with V8's code cache data will be produced and stored in the `cachedData` property of the returned `vm.Script` instance. The `cachedDataProduced` value will be set to either `true` or `false` depending on whether code cache data is produced successfully. This option is deprecated in favor of `script.createCachedData()`. Default: `false`.
  - `importModuleDynamically` <Function> | <vm.constants.USE_MAIN_CONTEXT_DEFAULT_LOADER> Used to specify how the modules should be loaded during the evaluation of this script when `import()` is called. This option is part of the experimental modules API. We do not recommend using it in a production environment. For detailed information, see Support of dynamic `import()` in compilation APIs.

If `options` is a string, then it specifies the filename.

Creating a new `vm.Script` object compiles `code` but does not run it. The compiled `vm.Script` can be run later multiple times. The `code` is not bound to any global object; rather, it is bound before each run, just for that run.
script.cachedDataRejected#
- Type:<boolean> |<undefined>
When `cachedData` is supplied to create the `vm.Script`, this value will be set to either `true` or `false` depending on acceptance of the data by V8. Otherwise the value is `undefined`.
script.createCachedData()#
- Returns:<Buffer>
Creates a code cache that can be used with the `Script` constructor's `cachedData` option. Returns a `Buffer`. This method may be called at any time and any number of times.

The code cache of the `Script` doesn't contain any JavaScript observable states. The code cache is safe to be saved alongside the script source and used to construct new `Script` instances multiple times.

Functions in the `Script` source can be marked as lazily compiled and they are not compiled at construction of the `Script`. These functions are going to be compiled when they are invoked the first time. The code cache serializes the metadata that V8 currently knows about the `Script` that it can use to speed up future compilations.

```js
const script = new vm.Script(`
function add(a, b) {
  return a + b;
}

const x = add(1, 2);
`);

const cacheWithoutAdd = script.createCachedData();
// In `cacheWithoutAdd` the function `add()` is marked for full compilation
// upon invocation.

script.runInThisContext();

const cacheWithAdd = script.createCachedData();
// `cacheWithAdd` contains fully compiled function `add()`.
```

script.runInContext(contextifiedObject[, options])#
History
| Version | Changes |
|---|---|
| v6.3.0 | The |
| v0.3.1 | Added in: v0.3.1 |
- `contextifiedObject` <Object> A contextified object as returned by the `vm.createContext()` method.
- `options` <Object>
  - `displayErrors` <boolean> When `true`, if an `Error` occurs while compiling the `code`, the line of code causing the error is attached to the stack trace. Default: `true`.
  - `timeout` <integer> Specifies the number of milliseconds to execute `code` before terminating execution. If execution is terminated, an `Error` will be thrown. This value must be a strictly positive integer.
  - `breakOnSigint` <boolean> If `true`, receiving `SIGINT` (Ctrl+C) will terminate execution and throw an `Error`. Existing handlers for the event that have been attached via `process.on('SIGINT')` are disabled during script execution, but continue to work after that. Default: `false`.
- Returns: <any> the result of the very last statement executed in the script.
Runs the compiled code contained by the `vm.Script` object within the given `contextifiedObject` and returns the result. Running code does not have access to local scope.

The following example compiles code that increments a global variable, sets the value of another global variable, then executes the code multiple times. The globals are contained in the `context` object.

```js
const vm = require('node:vm');

const context = {
  animal: 'cat',
  count: 2,
};

const script = new vm.Script('count += 1; name = "kitty";');

vm.createContext(context);
for (let i = 0; i < 10; ++i) {
  script.runInContext(context);
}

console.log(context);
// Prints: { animal: 'cat', count: 12, name: 'kitty' }
```

Using the `timeout` or `breakOnSigint` options will result in new event loops and corresponding threads being started, which have a non-zero performance overhead.
script.runInNewContext([contextObject[, options]])#
History
| Version | Changes |
|---|---|
| v22.8.0, v20.18.0 | The |
| v14.6.0 | The |
| v10.0.0 | The |
| v6.3.0 | The |
| v0.3.1 | Added in: v0.3.1 |
- `contextObject` <Object> | <vm.constants.DONT_CONTEXTIFY> | <undefined> Either `vm.constants.DONT_CONTEXTIFY` or an object that will be contextified. If `undefined`, an empty contextified object will be created for backwards compatibility.
- `options` <Object>
  - `displayErrors` <boolean> When `true`, if an `Error` occurs while compiling the `code`, the line of code causing the error is attached to the stack trace. Default: `true`.
  - `timeout` <integer> Specifies the number of milliseconds to execute `code` before terminating execution. If execution is terminated, an `Error` will be thrown. This value must be a strictly positive integer.
  - `breakOnSigint` <boolean> If `true`, receiving `SIGINT` (Ctrl+C) will terminate execution and throw an `Error`. Existing handlers for the event that have been attached via `process.on('SIGINT')` are disabled during script execution, but continue to work after that. Default: `false`.
  - `contextName` <string> Human-readable name of the newly created context. Default: `'VM Context i'`, where `i` is an ascending numerical index of the created context.
  - `contextOrigin` <string> Origin corresponding to the newly created context for display purposes. The origin should be formatted like a URL, but with only the scheme, host, and port (if necessary), like the value of the `url.origin` property of a `URL` object. Most notably, this string should omit the trailing slash, as that denotes a path. Default: `''`.
  - `contextCodeGeneration` <Object>
  - `microtaskMode` <string> If set to `afterEvaluate`, microtasks (tasks scheduled through `Promise`s and `async function`s) will be run immediately after the script has run. They are included in the `timeout` and `breakOnSigint` scopes in that case.
- Returns: <any> the result of the very last statement executed in the script.
This method is a shortcut to `script.runInContext(vm.createContext(options), options)`. It does several things at once:

- Creates a new context.
- If `contextObject` is an object, contextifies it with the new context. If `contextObject` is `undefined`, creates a new object and contextifies it. If `contextObject` is `vm.constants.DONT_CONTEXTIFY`, doesn't contextify anything.
- Runs the compiled code contained by the `vm.Script` object within the created context. The code does not have access to the scope in which this method is called.
- Returns the result.

The following example compiles code that sets a global variable, then executes the code multiple times in different contexts. The globals are set on and contained within each individual `context`.
```js
const vm = require('node:vm');

const script = new vm.Script('globalVar = "set"');

const contexts = [{}, {}, {}];
contexts.forEach((context) => {
  script.runInNewContext(context);
});

console.log(contexts);
// Prints: [{ globalVar: 'set' }, { globalVar: 'set' }, { globalVar: 'set' }]

// This would throw if the context is created from a contextified object.
// vm.constants.DONT_CONTEXTIFY allows creating contexts with ordinary
// global objects that can be frozen.
const freezeScript = new vm.Script('Object.freeze(globalThis); globalThis;');
const frozenContext = freezeScript.runInNewContext(vm.constants.DONT_CONTEXTIFY);
```

script.runInThisContext([options])#
History
| Version | Changes |
|---|---|
| v6.3.0 | The |
| v0.3.1 | Added in: v0.3.1 |
- `options` <Object>
  - `displayErrors` <boolean> When `true`, if an `Error` occurs while compiling the `code`, the line of code causing the error is attached to the stack trace. Default: `true`.
  - `timeout` <integer> Specifies the number of milliseconds to execute `code` before terminating execution. If execution is terminated, an `Error` will be thrown. This value must be a strictly positive integer.
  - `breakOnSigint` <boolean> If `true`, receiving `SIGINT` (Ctrl+C) will terminate execution and throw an `Error`. Existing handlers for the event that have been attached via `process.on('SIGINT')` are disabled during script execution, but continue to work after that. Default: `false`.
- Returns: <any> the result of the very last statement executed in the script.

Runs the compiled code contained by the `vm.Script` within the context of the current `global` object. Running code does not have access to local scope, but does have access to the current `global` object.

The following example compiles code that increments a `global` variable then executes that code multiple times:
```js
const vm = require('node:vm');

global.globalVar = 0;

const script = new vm.Script('globalVar += 1', { filename: 'myfile.vm' });

for (let i = 0; i < 1000; ++i) {
  script.runInThisContext();
}

console.log(globalVar);
// 1000
```

script.sourceMapURL#
- Type:<string> |<undefined>
When the script is compiled from a source that contains a source map magic comment, this property will be set to the URL of the source map.

```mjs
import vm from 'node:vm';

const script = new vm.Script(`
function myFunc() {}
//# sourceMappingURL=sourcemap.json
`);

console.log(script.sourceMapURL);
// Prints: sourcemap.json
```

```cjs
const vm = require('node:vm');

const script = new vm.Script(`
function myFunc() {}
//# sourceMappingURL=sourcemap.json
`);

console.log(script.sourceMapURL);
// Prints: sourcemap.json
```
Class:vm.Module#
This feature is only available with the `--experimental-vm-modules` command flag enabled.

The `vm.Module` class provides a low-level interface for using ECMAScript modules in VM contexts. It is the counterpart of the `vm.Script` class that closely mirrors Module Records as defined in the ECMAScript specification.

Unlike `vm.Script` however, every `vm.Module` object is bound to a context from its creation.

Using a `vm.Module` object requires three distinct steps: creation/parsing, linking, and evaluation. These three steps are illustrated in the following example.

This implementation lies at a lower level than the ECMAScript Module loader. There is also no way to interact with the Loader yet, though support is planned.
```mjs
import vm from 'node:vm';

const contextifiedObject = vm.createContext({
  secret: 42,
  print: console.log,
});

// Step 1
//
// Create a Module by constructing a new `vm.SourceTextModule` object. This
// parses the provided source text, throwing a `SyntaxError` if anything goes
// wrong. By default, a Module is created in the top context. But here, we
// specify `contextifiedObject` as the context this Module belongs to.
//
// Here, we attempt to obtain the default export from the module "foo", and
// put it into local binding "secret".

const rootModule = new vm.SourceTextModule(`
  import s from 'foo';
  s;
  print(s);
`, { context: contextifiedObject });

// Step 2
//
// "Link" the imported dependencies of this Module to it.
//
// Obtain the requested dependencies of a SourceTextModule by
// `sourceTextModule.moduleRequests` and resolve them.
//
// Even top-level Modules without dependencies must be explicitly linked. The
// array passed to `sourceTextModule.linkRequests(modules)` can be
// empty, however.
//
// Note: This is a contrived example in that the resolveAndLinkDependencies
// creates a new "foo" module every time it is called. In a full-fledged
// module system, a cache would probably be used to avoid duplicated modules.

const moduleMap = new Map([
  ['root', rootModule],
]);
function resolveAndLinkDependencies(module) {
  const requestedModules = module.moduleRequests.map((request) => {
    // In a full-fledged module system, the resolveAndLinkDependencies would
    // resolve the module with the module cache key `[specifier, attributes]`.
    // In this example, we just use the specifier as the key.
    const specifier = request.specifier;
    let requestedModule = moduleMap.get(specifier);
    if (requestedModule === undefined) {
      requestedModule = new vm.SourceTextModule(`
        // The "secret" variable refers to the global variable we added to
        // "contextifiedObject" when creating the context.
        export default secret;
      `, { context: module.context });
      moduleMap.set(specifier, requestedModule);
      // Resolve the dependencies of the new module as well.
      resolveAndLinkDependencies(requestedModule);
    }
    return requestedModule;
  });
  module.linkRequests(requestedModules);
}

resolveAndLinkDependencies(rootModule);
rootModule.instantiate();

// Step 3
//
// Evaluate the Module. The evaluate() method returns a promise which will
// resolve after the module has finished evaluating.

// Prints 42.
await rootModule.evaluate();
```

```cjs
const vm = require('node:vm');

const contextifiedObject = vm.createContext({
  secret: 42,
  print: console.log,
});

(async () => {
  // Step 1
  //
  // Create a Module by constructing a new `vm.SourceTextModule` object. This
  // parses the provided source text, throwing a `SyntaxError` if anything goes
  // wrong. By default, a Module is created in the top context. But here, we
  // specify `contextifiedObject` as the context this Module belongs to.
  //
  // Here, we attempt to obtain the default export from the module "foo", and
  // put it into local binding "secret".

  const rootModule = new vm.SourceTextModule(`
    import s from 'foo';
    s;
    print(s);
  `, { context: contextifiedObject });

  // Step 2
  //
  // "Link" the imported dependencies of this Module to it.
  //
  // Obtain the requested dependencies of a SourceTextModule by
  // `sourceTextModule.moduleRequests` and resolve them.
  //
  // Even top-level Modules without dependencies must be explicitly linked. The
  // array passed to `sourceTextModule.linkRequests(modules)` can be
  // empty, however.
  //
  // Note: This is a contrived example in that the resolveAndLinkDependencies
  // creates a new "foo" module every time it is called. In a full-fledged
  // module system, a cache would probably be used to avoid duplicated modules.

  const moduleMap = new Map([
    ['root', rootModule],
  ]);
  function resolveAndLinkDependencies(module) {
    const requestedModules = module.moduleRequests.map((request) => {
      // In a full-fledged module system, the resolveAndLinkDependencies would
      // resolve the module with the module cache key `[specifier, attributes]`.
      // In this example, we just use the specifier as the key.
      const specifier = request.specifier;
      let requestedModule = moduleMap.get(specifier);
      if (requestedModule === undefined) {
        requestedModule = new vm.SourceTextModule(`
          // The "secret" variable refers to the global variable we added to
          // "contextifiedObject" when creating the context.
          export default secret;
        `, { context: module.context });
        moduleMap.set(specifier, requestedModule);
        // Resolve the dependencies of the new module as well.
        resolveAndLinkDependencies(requestedModule);
      }
      return requestedModule;
    });
    module.linkRequests(requestedModules);
  }

  resolveAndLinkDependencies(rootModule);
  rootModule.instantiate();

  // Step 3
  //
  // Evaluate the Module. The evaluate() method returns a promise which will
  // resolve after the module has finished evaluating.

  // Prints 42.
  await rootModule.evaluate();
})();
```
module.error#
- Type:<any>
If the `module.status` is `'errored'`, this property contains the exception thrown by the module during evaluation. If the status is anything else, accessing this property will result in a thrown exception.

The value `undefined` cannot be used for cases where there is not a thrown exception due to possible ambiguity with `throw undefined;`.

Corresponds to the `[[EvaluationError]]` field of Cyclic Module Records in the ECMAScript specification.
module.evaluate([options])#
- `options` <Object>
  - `timeout` <integer> Specifies the number of milliseconds to evaluate before terminating execution. If execution is interrupted, an `Error` will be thrown. This value must be a strictly positive integer.
  - `breakOnSigint` <boolean> If `true`, receiving `SIGINT` (Ctrl+C) will terminate execution and throw an `Error`. Existing handlers for the event that have been attached via `process.on('SIGINT')` are disabled during script execution, but continue to work after that. Default: `false`.
- Returns: <Promise> Fulfills with `undefined` upon success.

Evaluate the module and its dependencies. Corresponds to the Evaluate() concrete method field of Cyclic Module Records in the ECMAScript specification.
If the module is a `vm.SourceTextModule`, `evaluate()` must be called after the module has been instantiated; otherwise `evaluate()` will return a rejected promise.

For a `vm.SourceTextModule`, the promise returned by `evaluate()` may be fulfilled either synchronously or asynchronously:

- If the `vm.SourceTextModule` has no top-level `await` in itself or any of its dependencies, the promise will be fulfilled synchronously after the module and all its dependencies have been evaluated.
  - If the evaluation succeeds, the promise will be synchronously resolved to `undefined`.
  - If the evaluation results in an exception, the promise will be synchronously rejected with the exception that causes the evaluation to fail, which is the same as `module.error`.
- If the `vm.SourceTextModule` has top-level `await` in itself or any of its dependencies, the promise will be fulfilled asynchronously after the module and all its dependencies have been evaluated.
  - If the evaluation succeeds, the promise will be asynchronously resolved to `undefined`.
  - If the evaluation results in an exception, the promise will be asynchronously rejected with the exception that causes the evaluation to fail.

If the module is a `vm.SyntheticModule`, `evaluate()` always returns a promise that fulfills synchronously, see the specification of Evaluate() of a Synthetic Module Record:

- If the `evaluateCallback` passed to its constructor throws an exception synchronously, `evaluate()` returns a promise that will be synchronously rejected with that exception.
- If the `evaluateCallback` does not throw an exception, `evaluate()` returns a promise that will be synchronously resolved to `undefined`.
The `evaluateCallback` of a `vm.SyntheticModule` is executed synchronously within the `evaluate()` call, and its return value is discarded. This means if `evaluateCallback` is an asynchronous function, the promise returned by `evaluate()` will not reflect its asynchronous behavior, and any rejections from an asynchronous `evaluateCallback` will be lost.

`evaluate()` could also be called again after the module has already been evaluated, in which case:

- If the initial evaluation ended in success (`module.status` is `'evaluated'`), it will do nothing and return a promise that resolves to `undefined`.
- If the initial evaluation resulted in an exception (`module.status` is `'errored'`), it will re-reject the exception that the initial evaluation resulted in.

This method cannot be called while the module is being evaluated (`module.status` is `'evaluating'`).
module.link(linker)#
History
| Version | Changes |
|---|---|
| v21.1.0, v20.10.0, v18.19.0 | The option |
- `linker` <Function>
  - `specifier` <string> The specifier of the requested module:
    ```js
    import foo from 'foo';
    //              ^^^^^ the module specifier
    ```
  - `referencingModule` <vm.Module> The `Module` object `link()` is called on.
  - `extra` <Object>
  - Returns: <vm.Module> | <Promise>
- Returns: <Promise>
Link module dependencies. This method must be called before evaluation, and can only be called once per module.

Use `sourceTextModule.linkRequests(modules)` and `sourceTextModule.instantiate()` to link modules either synchronously or asynchronously.

The function is expected to return a `Module` object or a `Promise` that eventually resolves to a `Module` object. The returned `Module` must satisfy the following two invariants:

- It must belong to the same context as the parent `Module`.
- Its `status` must not be `'errored'`.

If the returned `Module`'s `status` is `'unlinked'`, this method will be recursively called on the returned `Module` with the same provided `linker` function.

`link()` returns a `Promise` that will either get resolved when all linking instances resolve to a valid `Module`, or rejected if the linker function either throws an exception or returns an invalid `Module`.

The linker function roughly corresponds to the implementation-defined HostResolveImportedModule abstract operation in the ECMAScript specification, with a few key differences:

- The linker function is allowed to be asynchronous while HostResolveImportedModule is synchronous.

The actual HostResolveImportedModule implementation used during module linking is one that returns the modules linked during linking. Since at that point all modules would have been fully linked already, the HostResolveImportedModule implementation is fully synchronous per specification.

Corresponds to the Link() concrete method field of Cyclic Module Records in the ECMAScript specification.
module.namespace#
- Type:<Object>
The namespace object of the module. This is only available after linking (`module.link()`) has completed.
Corresponds to the GetModuleNamespace abstract operation in the ECMAScript specification.
module.status#
- Type:<string>
The current status of the module. Will be one of:
- `'unlinked'`: `module.link()` has not yet been called.
- `'linking'`: `module.link()` has been called, but not all Promises returned by the linker function have been resolved yet.
- `'linked'`: The module has been linked successfully, and all of its dependencies are linked, but `module.evaluate()` has not yet been called.
- `'evaluating'`: The module is being evaluated through a `module.evaluate()` on itself or a parent module.
- `'evaluated'`: The module has been successfully evaluated.
- `'errored'`: The module has been evaluated, but an exception was thrown.

Other than `'errored'`, this status string corresponds to the specification's Cyclic Module Record's `[[Status]]` field. `'errored'` corresponds to `'evaluated'` in the specification, but with `[[EvaluationError]]` set to a value that is not `undefined`.
Class:vm.SourceTextModule#
This feature is only available with the `--experimental-vm-modules` command flag enabled.

- Extends: <vm.Module>

The `vm.SourceTextModule` class provides the Source Text Module Record as defined in the ECMAScript specification.
new vm.SourceTextModule(code[, options])#
History
| Version | Changes |
|---|---|
| v17.0.0, v16.12.0 | Added support for import attributes to the |
- `code` <string> JavaScript Module code to parse
- `options`
  - `identifier` <string> String used in stack traces. Default: `'vm:module(i)'` where `i` is a context-specific ascending index.
  - `cachedData` <Buffer> | <TypedArray> | <DataView> Provides an optional `Buffer` or `TypedArray`, or `DataView` with V8's code cache data for the supplied source. The `code` must be the same as the module from which this `cachedData` was created.
  - `context` <Object> The contextified object as returned by the `vm.createContext()` method, to compile and evaluate this `Module` in. If no context is specified, the module is evaluated in the current execution context.
  - `lineOffset` <integer> Specifies the line number offset that is displayed in stack traces produced by this `Module`. Default: `0`.
  - `columnOffset` <integer> Specifies the first-line column number offset that is displayed in stack traces produced by this `Module`. Default: `0`.
  - `initializeImportMeta` <Function> Called during evaluation of this `Module` to initialize the `import.meta`.
    - `meta` <import.meta>
    - `module` <vm.SourceTextModule>
  - `importModuleDynamically` <Function> Used to specify how the modules should be loaded during the evaluation of this module when `import()` is called. This option is part of the experimental modules API. We do not recommend using it in a production environment. For detailed information, see Support of dynamic `import()` in compilation APIs.

Creates a new `SourceTextModule` instance.
Properties assigned to the `import.meta` object that are objects may allow the module to access information outside the specified `context`. Use `vm.runInContext()` to create objects in a specific context.

```mjs
import vm from 'node:vm';

const contextifiedObject = vm.createContext({ secret: 42 });

const module = new vm.SourceTextModule(
  'Object.getPrototypeOf(import.meta.prop).secret = secret;',
  {
    initializeImportMeta(meta) {
      // Note: this object is created in the top context. As such,
      // Object.getPrototypeOf(import.meta.prop) points to the
      // Object.prototype in the top context rather than that in
      // the contextified object.
      meta.prop = {};
    },
  });
// The module has an empty `moduleRequests` array.
module.linkRequests([]);
module.instantiate();
await module.evaluate();

// Now, Object.prototype.secret will be equal to 42.
//
// To fix this problem, replace
//     meta.prop = {};
// above with
//     meta.prop = vm.runInContext('{}', contextifiedObject);
```

```cjs
const vm = require('node:vm');

const contextifiedObject = vm.createContext({ secret: 42 });

(async () => {
  const module = new vm.SourceTextModule(
    'Object.getPrototypeOf(import.meta.prop).secret = secret;',
    {
      initializeImportMeta(meta) {
        // Note: this object is created in the top context. As such,
        // Object.getPrototypeOf(import.meta.prop) points to the
        // Object.prototype in the top context rather than that in
        // the contextified object.
        meta.prop = {};
      },
    });
  // The module has an empty `moduleRequests` array.
  module.linkRequests([]);
  module.instantiate();
  await module.evaluate();

  // Now, Object.prototype.secret will be equal to 42.
  //
  // To fix this problem, replace
  //     meta.prop = {};
  // above with
  //     meta.prop = vm.runInContext('{}', contextifiedObject);
})();
```
sourceTextModule.createCachedData()#
- Returns:<Buffer>
Creates a code cache that can be used with the `SourceTextModule` constructor's `cachedData` option. Returns a `Buffer`. This method may be called any number of times before the module has been evaluated.

The code cache of the `SourceTextModule` doesn't contain any JavaScript observable states. The code cache is safe to be saved alongside the script source and used to construct new `SourceTextModule` instances multiple times.

Functions in the `SourceTextModule` source can be marked as lazily compiled and they are not compiled at construction of the `SourceTextModule`. These functions are going to be compiled when they are invoked the first time. The code cache serializes the metadata that V8 currently knows about the `SourceTextModule` that it can use to speed up future compilations.

```js
// Create an initial module
const module = new vm.SourceTextModule('const a = 1;');

// Create cached data from this module
const cachedData = module.createCachedData();

// Create a new module using the cached data. The code must be the same.
const module2 = new vm.SourceTextModule('const a = 1;', { cachedData });
```

sourceTextModule.dependencySpecifiers#
History
| Version | Changes |
|---|---|
| v24.4.0, v22.20.0 | This is deprecated in favour of `sourceTextModule.moduleRequests`. |

- Type: <string[]>
The specifiers of all dependencies of this module. The returned array is frozen to disallow any changes to it.

Corresponds to the `[[RequestedModules]]` field of Cyclic Module Records in the ECMAScript specification.
sourceTextModule.hasAsyncGraph()#
- Returns:<boolean>
Iterates over the dependency graph and returns `true` if any module in its dependencies or this module itself contains top-level `await` expressions, otherwise returns `false`.

The search may be slow for large dependency graphs.

This requires the module to be instantiated first. If the module is not instantiated yet, an error will be thrown.
sourceTextModule.hasTopLevelAwait()#
- Returns:<boolean>
Returns whether the module itself contains any top-level `await` expressions.

This corresponds to the field `[[HasTLA]]` in Cyclic Module Record in the ECMAScript specification.
sourceTextModule.instantiate()#
- Returns:<undefined>
Instantiate the module with the linked requested modules.

This resolves the imported bindings of the module, including re-exported binding names. When there are any bindings that cannot be resolved, an error will be thrown synchronously.

If the requested modules include cyclic dependencies, the `sourceTextModule.linkRequests(modules)` method must be called on all modules in the cycle before calling this method.
sourceTextModule.linkRequests(modules)#
- `modules` <vm.Module[]> Array of `vm.Module` objects that this module depends on. The order of the modules in the array is the order of `sourceTextModule.moduleRequests`.
- Returns: <undefined>

Link module dependencies. This method must be called before evaluation, and can only be called once per module.

The order of the module instances in the `modules` array should correspond to the order of `sourceTextModule.moduleRequests` being resolved. If two module requests have the same specifier and import attributes, they must be resolved with the same module instance or an `ERR_MODULE_LINK_MISMATCH` would be thrown. For example, when linking requests for this module:
```js
import foo from 'foo';
import source sourceFoo from 'foo';
```

The `modules` array must contain two references to the same instance, because the two module requests are identical except for their phases.
If the module has no dependencies, the `modules` array can be empty.

Users can use `sourceTextModule.moduleRequests` to implement the host-defined HostLoadImportedModule abstract operation in the ECMAScript specification, and use `sourceTextModule.linkRequests()` to invoke the specification-defined FinishLoadingImportedModule on the module with all dependencies in a batch.

It's up to the creator of the `SourceTextModule` to determine if the resolution of the dependencies is synchronous or asynchronous.

After each module in the `modules` array is linked, call `sourceTextModule.instantiate()`.
sourceTextModule.moduleRequests#
- Type: <ModuleRequest[]> Dependencies of this module.

The requested import dependencies of this module. The returned array is frozen to disallow any changes to it.
For example, given a source text:
```js
import foo from 'foo';
import fooAlias from 'foo';
import bar from './bar.js';
import withAttrs from '../with-attrs.ts' with { arbitraryAttr: 'attr-val' };
import source sourceModule from 'wasm-mod.wasm';
```

The value of the `sourceTextModule.moduleRequests` will be:

```js
[
  {
    specifier: 'foo',
    attributes: {},
    phase: 'evaluation',
  },
  {
    specifier: 'foo',
    attributes: {},
    phase: 'evaluation',
  },
  {
    specifier: './bar.js',
    attributes: {},
    phase: 'evaluation',
  },
  {
    specifier: '../with-attrs.ts',
    attributes: { arbitraryAttr: 'attr-val' },
    phase: 'evaluation',
  },
  {
    specifier: 'wasm-mod.wasm',
    attributes: {},
    phase: 'source',
  },
];
```

Class: vm.SyntheticModule#
This feature is only available with the `--experimental-vm-modules` command flag enabled.
- Extends:<vm.Module>
The `vm.SyntheticModule` class provides the Synthetic Module Record as defined in the WebIDL specification. The purpose of synthetic modules is to provide a generic interface for exposing non-JavaScript sources to ECMAScript module graphs.

```js
const vm = require('node:vm');

const source = '{ "a": 1 }';
const module = new vm.SyntheticModule(['default'], function() {
  const obj = JSON.parse(source);
  this.setExport('default', obj);
});

// Use `module` in linking...
```

new vm.SyntheticModule(exportNames, evaluateCallback[, options])#

- `exportNames` <string[]> Array of names that will be exported from the module.
- `evaluateCallback` <Function> Called when the module is evaluated.
- `options`
  - `identifier` <string> String used in stack traces. Default: `'vm:module(i)'` where `i` is a context-specific ascending index.
  - `context` <Object> The contextified object as returned by the `vm.createContext()` method, to compile and evaluate this `Module` in.

Creates a new `SyntheticModule` instance.

Objects assigned to the exports of this instance may allow importers of the module to access information outside the specified `context`. Use `vm.runInContext()` to create objects in a specific context.
syntheticModule.setExport(name, value)#
History
| Version | Changes |
|---|---|
| v24.8.0 | No longer need to call |
| v13.0.0, v12.16.0 | Added in: v13.0.0, v12.16.0 |
This method sets the module export binding slots with the given value.
```js
import vm from 'node:vm';
import assert from 'node:assert';

const m = new vm.SyntheticModule(['x'], () => {
  m.setExport('x', 1);
});

await m.evaluate();
assert.strictEqual(m.namespace.x, 1);
```

```js
const vm = require('node:vm');
const assert = require('node:assert');

(async () => {
  const m = new vm.SyntheticModule(['x'], () => {
    m.setExport('x', 1);
  });
  await m.evaluate();
  assert.strictEqual(m.namespace.x, 1);
})();
```
Type: ModuleRequest#
- Type: <Object>
  - `specifier` <string> The specifier of the requested module.
  - `attributes` <Object> The `"with"` value passed to the `WithClause` in an `ImportDeclaration`, or an empty object if no value was provided.
  - `phase` <string> The phase of the requested module (`"source"` or `"evaluation"`).

A `ModuleRequest` represents the request to import a module with given import attributes and phase.
vm.compileFunction(code[, params[, options]])#
History
| Version | Changes |
|---|---|
| v21.7.0, v20.12.0 | Added support for |
| v19.6.0, v18.15.0 | The return value now includes |
| v17.0.0, v16.12.0 | Added support for import attributes to the |
| v15.9.0 | Added |
| v14.3.0 | Removal of |
| v14.1.0, v13.14.0 | The |
| v10.10.0 | Added in: v10.10.0 |
- `code` <string> The body of the function to compile.
- `params` <string[]> An array of strings containing all parameters for the function.
- `options` <Object>
  - `filename` <string> Specifies the filename used in stack traces produced by this script. **Default:** `''`.
  - `lineOffset` <number> Specifies the line number offset that is displayed in stack traces produced by this script. **Default:** `0`.
  - `columnOffset` <number> Specifies the first-line column number offset that is displayed in stack traces produced by this script. **Default:** `0`.
  - `cachedData` <Buffer> | <TypedArray> | <DataView> Provides an optional `Buffer` or `TypedArray`, or `DataView` with V8's code cache data for the supplied source. This must be produced by a prior call to `vm.compileFunction()` with the same `code` and `params`.
  - `produceCachedData` <boolean> Specifies whether to produce new cache data. **Default:** `false`.
  - `parsingContext` <Object> The contextified object in which the said function should be compiled in.
  - `contextExtensions` <Object[]> An array containing a collection of context extensions (objects wrapping the current scope) to be applied while compiling. **Default:** `[]`.
  - `importModuleDynamically` <Function> | <vm.constants.USE_MAIN_CONTEXT_DEFAULT_LOADER> Used to specify how modules should be loaded during the evaluation of this function when `import()` is called. This option is part of the experimental modules API. We do not recommend using it in a production environment. For detailed information, see Support of dynamic `import()` in compilation APIs.
- Returns: <Function>
Compiles the given code into the provided context (if no context is supplied, the current context is used), and returns it wrapped inside a function with the given `params`.
vm.constants#
- Type: <Object>
Returns an object containing commonly used constants for VM operations.
vm.constants.USE_MAIN_CONTEXT_DEFAULT_LOADER#
A constant that can be used as the `importModuleDynamically` option to `vm.Script` and `vm.compileFunction()` so that Node.js uses the default ESM loader from the main context to load the requested module.

For detailed information, see Support of dynamic `import()` in compilation APIs.
vm.createContext([contextObject[, options]])#
History
| Version | Changes |
|---|---|
| v22.8.0, v20.18.0 | The |
| v21.7.0, v20.12.0 | Added support for |
| v21.2.0, v20.11.0 | The |
| v14.6.0 | The |
| v10.0.0 | The first argument can no longer be a function. |
| v10.0.0 | The |
| v0.3.1 | Added in: v0.3.1 |
- `contextObject` <Object> | <vm.constants.DONT_CONTEXTIFY> | <undefined> Either `vm.constants.DONT_CONTEXTIFY` or an object that will be contextified. If `undefined`, an empty contextified object will be created for backwards compatibility.
- `options` <Object>
  - `name` <string> Human-readable name of the newly created context. **Default:** `'VM Context i'`, where `i` is an ascending numerical index of the created context.
  - `origin` <string> Origin corresponding to the newly created context for display purposes. The origin should be formatted like a URL, but with only the scheme, host, and port (if necessary), like the value of the `url.origin` property of a `URL` object. Most notably, this string should omit the trailing slash, as that denotes a path. **Default:** `''`.
  - `codeGeneration` <Object>
  - `microtaskMode` <string> If set to `afterEvaluate`, microtasks (tasks scheduled through `Promise`s and `async function`s) will be run immediately after a script has run through `script.runInContext()`. They are included in the `timeout` and `breakOnSigint` scopes in that case.
  - `importModuleDynamically` <Function> | <vm.constants.USE_MAIN_CONTEXT_DEFAULT_LOADER> Used to specify how modules should be loaded when `import()` is called in this context without a referrer script or module. This option is part of the experimental modules API. We do not recommend using it in a production environment. For detailed information, see Support of dynamic `import()` in compilation APIs.
- Returns: <Object> contextified object.
If the given `contextObject` is an object, the `vm.createContext()` method will prepare that object and return a reference to it so that it can be used in calls to `vm.runInContext()` or `script.runInContext()`. Inside such scripts, the global object will be wrapped by the `contextObject`, retaining all of its existing properties but also having the built-in objects and functions any standard global object has. Outside of scripts run by the vm module, global variables will remain unchanged.
```js
const vm = require('node:vm');

global.globalVar = 3;

const context = { globalVar: 1 };
vm.createContext(context);

vm.runInContext('globalVar *= 2;', context);

console.log(context);
// Prints: { globalVar: 2 }

console.log(global.globalVar);
// Prints: 3
```

If `contextObject` is omitted (or passed explicitly as `undefined`), a new, empty contextified object will be returned.
When the global object in the newly created context is contextified, it has some quirks compared to ordinary global objects. For example, it cannot be frozen. To create a context without the contextifying quirks, pass `vm.constants.DONT_CONTEXTIFY` as the `contextObject` argument. See the documentation of `vm.constants.DONT_CONTEXTIFY` for details.

The `vm.createContext()` method is primarily useful for creating a single context that can be used to run multiple scripts. For instance, if emulating a web browser, the method can be used to create a single context representing a window's global object, then run all `<script>` tags together within that context.

The provided `name` and `origin` of the context are made visible through the Inspector API.
vm.isContext(object)#
Returns `true` if the given `object` has been contextified using `vm.createContext()`, or if it's the global object of a context created using `vm.constants.DONT_CONTEXTIFY`.
vm.measureMemory([options])#
Measure the memory known to V8 and used by all contexts known to the current V8 isolate, or the main context.

- `options` <Object> Optional.
  - `mode` <string> Either `'summary'` or `'detailed'`. In summary mode, only the memory measured for the main context will be returned. In detailed mode, the memory measured for all contexts known to the current V8 isolate will be returned. **Default:** `'summary'`
  - `execution` <string> Either `'default'` or `'eager'`. With default execution, the promise will not resolve until after the next scheduled garbage collection starts, which may take a while (or never if the program exits before the next GC). With eager execution, the GC will be started right away to measure the memory. **Default:** `'default'`
- Returns: <Promise> If the memory is successfully measured, the promise will resolve with an object containing information about the memory usage. Otherwise it will be rejected with an `ERR_CONTEXT_NOT_INITIALIZED` error.
The format of the object that the returned Promise may resolve with is specific to the V8 engine and may change from one version of V8 to the next.

The returned result is different from the statistics returned by `v8.getHeapSpaceStatistics()` in that `vm.measureMemory()` measures the memory reachable by each V8-specific context in the current instance of the V8 engine, while the result of `v8.getHeapSpaceStatistics()` measures the memory occupied by each heap space in the current V8 instance.
```js
const vm = require('node:vm');
// Measure the memory used by the main context.
vm.measureMemory({ mode: 'summary' })
  // This is the same as vm.measureMemory()
  .then((result) => {
    // The current format is:
    // {
    //   total: {
    //      jsMemoryEstimate: 2418479, jsMemoryRange: [ 2418479, 2745799 ]
    //   }
    // }
    console.log(result);
  });

const context = vm.createContext({ a: 1 });
vm.measureMemory({ mode: 'detailed', execution: 'eager' })
  .then((result) => {
    // Reference the context here so that it won't be GC'ed
    // until the measurement is complete.
    console.log(context.a);
    // {
    //   total: {
    //     jsMemoryEstimate: 2574732,
    //     jsMemoryRange: [ 2574732, 2904372 ]
    //   },
    //   current: {
    //     jsMemoryEstimate: 2438996,
    //     jsMemoryRange: [ 2438996, 2768636 ]
    //   },
    //   other: [
    //     {
    //       jsMemoryEstimate: 135736,
    //       jsMemoryRange: [ 135736, 465376 ]
    //     }
    //   ]
    // }
    console.log(result);
  });
```

vm.runInContext(code, contextifiedObject[, options])#
History
| Version | Changes |
|---|---|
| v21.7.0, v20.12.0 | Added support for |
| v17.0.0, v16.12.0 | Added support for import attributes to the |
| v6.3.0 | The |
| v0.3.1 | Added in: v0.3.1 |
- `code` <string> The JavaScript code to compile and run.
- `contextifiedObject` <Object> The contextified object that will be used as the `global` when the `code` is compiled and run.
- `options` <Object> | <string>
  - `filename` <string> Specifies the filename used in stack traces produced by this script. **Default:** `'evalmachine.<anonymous>'`.
  - `lineOffset` <number> Specifies the line number offset that is displayed in stack traces produced by this script. **Default:** `0`.
  - `columnOffset` <number> Specifies the first-line column number offset that is displayed in stack traces produced by this script. **Default:** `0`.
  - `displayErrors` <boolean> When `true`, if an `Error` occurs while compiling the `code`, the line of code causing the error is attached to the stack trace. **Default:** `true`.
  - `timeout` <integer> Specifies the number of milliseconds to execute `code` before terminating execution. If execution is terminated, an `Error` will be thrown. This value must be a strictly positive integer.
  - `breakOnSigint` <boolean> If `true`, receiving `SIGINT` (Ctrl+C) will terminate execution and throw an `Error`. Existing handlers for the event that have been attached via `process.on('SIGINT')` are disabled during script execution, but continue to work after that. **Default:** `false`.
  - `cachedData` <Buffer> | <TypedArray> | <DataView> Provides an optional `Buffer` or `TypedArray`, or `DataView` with V8's code cache data for the supplied source.
  - `importModuleDynamically` <Function> | <vm.constants.USE_MAIN_CONTEXT_DEFAULT_LOADER> Used to specify how modules should be loaded during the evaluation of this script when `import()` is called. This option is part of the experimental modules API. We do not recommend using it in a production environment. For detailed information, see Support of dynamic `import()` in compilation APIs.
The `vm.runInContext()` method compiles `code`, runs it within the context of the `contextifiedObject`, then returns the result. Running code does not have access to the local scope. The `contextifiedObject` object must have been previously contextified using the `vm.createContext()` method.

If `options` is a string, then it specifies the filename.
The following example compiles and executes different scripts using a single contextified object:

```js
const vm = require('node:vm');

const contextObject = { globalVar: 1 };
vm.createContext(contextObject);

for (let i = 0; i < 10; ++i) {
  vm.runInContext('globalVar *= 2;', contextObject);
}
console.log(contextObject);
// Prints: { globalVar: 1024 }
```

vm.runInNewContext(code[, contextObject[, options]])#
History
| Version | Changes |
|---|---|
| v22.8.0, v20.18.0 | The |
| v21.7.0, v20.12.0 | Added support for |
| v17.0.0, v16.12.0 | Added support for import attributes to the |
| v14.6.0 | The |
| v10.0.0 | The |
| v6.3.0 | The |
| v0.3.1 | Added in: v0.3.1 |
- `code` <string> The JavaScript code to compile and run.
- `contextObject` <Object> | <vm.constants.DONT_CONTEXTIFY> | <undefined> Either `vm.constants.DONT_CONTEXTIFY` or an object that will be contextified. If `undefined`, an empty contextified object will be created for backwards compatibility.
- `options` <Object> | <string>
  - `filename` <string> Specifies the filename used in stack traces produced by this script. **Default:** `'evalmachine.<anonymous>'`.
  - `lineOffset` <number> Specifies the line number offset that is displayed in stack traces produced by this script. **Default:** `0`.
  - `columnOffset` <number> Specifies the first-line column number offset that is displayed in stack traces produced by this script. **Default:** `0`.
  - `displayErrors` <boolean> When `true`, if an `Error` occurs while compiling the `code`, the line of code causing the error is attached to the stack trace. **Default:** `true`.
  - `timeout` <integer> Specifies the number of milliseconds to execute `code` before terminating execution. If execution is terminated, an `Error` will be thrown. This value must be a strictly positive integer.
  - `breakOnSigint` <boolean> If `true`, receiving `SIGINT` (Ctrl+C) will terminate execution and throw an `Error`. Existing handlers for the event that have been attached via `process.on('SIGINT')` are disabled during script execution, but continue to work after that. **Default:** `false`.
  - `contextName` <string> Human-readable name of the newly created context. **Default:** `'VM Context i'`, where `i` is an ascending numerical index of the created context.
  - `contextOrigin` <string> Origin corresponding to the newly created context for display purposes. The origin should be formatted like a URL, but with only the scheme, host, and port (if necessary), like the value of the `url.origin` property of a `URL` object. Most notably, this string should omit the trailing slash, as that denotes a path. **Default:** `''`.
  - `contextCodeGeneration` <Object>
  - `cachedData` <Buffer> | <TypedArray> | <DataView> Provides an optional `Buffer` or `TypedArray`, or `DataView` with V8's code cache data for the supplied source.
  - `importModuleDynamically` <Function> | <vm.constants.USE_MAIN_CONTEXT_DEFAULT_LOADER> Used to specify how modules should be loaded during the evaluation of this script when `import()` is called. This option is part of the experimental modules API. We do not recommend using it in a production environment. For detailed information, see Support of dynamic `import()` in compilation APIs.
  - `microtaskMode` <string> If set to `afterEvaluate`, microtasks (tasks scheduled through `Promise`s and `async function`s) will be run immediately after the script has run. They are included in the `timeout` and `breakOnSigint` scopes in that case.
- Returns: <any> the result of the very last statement executed in the script.
This method is a shortcut to `(new vm.Script(code, options)).runInContext(vm.createContext(options), options)`. If `options` is a string, then it specifies the filename.
It does several things at once:
- Creates a new context.
- If `contextObject` is an object, contextifies it with the new context. If `contextObject` is `undefined`, creates a new object and contextifies it. If `contextObject` is `vm.constants.DONT_CONTEXTIFY`, doesn't contextify anything.
- Compiles the code as a `vm.Script`.
- Runs the compiled code within the created context. The code does not have access to the scope in which this method is called.
- Returns the result.
The following example compiles and executes code that increments a global variable and sets a new one. These globals are contained in the `contextObject`.
```js
const vm = require('node:vm');

const contextObject = {
  animal: 'cat',
  count: 2,
};

vm.runInNewContext('count += 1; name = "kitty"', contextObject);
console.log(contextObject);
// Prints: { animal: 'cat', count: 3, name: 'kitty' }

// This would throw if the context is created from a contextified object.
// vm.constants.DONT_CONTEXTIFY allows creating contexts with ordinary
// global objects that can be frozen.
const frozenContext = vm.runInNewContext(
  'Object.freeze(globalThis); globalThis;',
  vm.constants.DONT_CONTEXTIFY,
);
```

vm.runInThisContext(code[, options])#
History
| Version | Changes |
|---|---|
| v21.7.0, v20.12.0 | Added support for |
| v17.0.0, v16.12.0 | Added support for import attributes to the |
| v6.3.0 | The |
| v0.3.1 | Added in: v0.3.1 |
- `code` <string> The JavaScript code to compile and run.
- `options` <Object> | <string>
  - `filename` <string> Specifies the filename used in stack traces produced by this script. **Default:** `'evalmachine.<anonymous>'`.
  - `lineOffset` <number> Specifies the line number offset that is displayed in stack traces produced by this script. **Default:** `0`.
  - `columnOffset` <number> Specifies the first-line column number offset that is displayed in stack traces produced by this script. **Default:** `0`.
  - `displayErrors` <boolean> When `true`, if an `Error` occurs while compiling the `code`, the line of code causing the error is attached to the stack trace. **Default:** `true`.
  - `timeout` <integer> Specifies the number of milliseconds to execute `code` before terminating execution. If execution is terminated, an `Error` will be thrown. This value must be a strictly positive integer.
  - `breakOnSigint` <boolean> If `true`, receiving `SIGINT` (Ctrl+C) will terminate execution and throw an `Error`. Existing handlers for the event that have been attached via `process.on('SIGINT')` are disabled during script execution, but continue to work after that. **Default:** `false`.
  - `cachedData` <Buffer> | <TypedArray> | <DataView> Provides an optional `Buffer` or `TypedArray`, or `DataView` with V8's code cache data for the supplied source.
  - `importModuleDynamically` <Function> | <vm.constants.USE_MAIN_CONTEXT_DEFAULT_LOADER> Used to specify how modules should be loaded during the evaluation of this script when `import()` is called. This option is part of the experimental modules API. We do not recommend using it in a production environment. For detailed information, see Support of dynamic `import()` in compilation APIs.
- Returns: <any> the result of the very last statement executed in the script.
`vm.runInThisContext()` compiles `code`, runs it within the context of the current `global` and returns the result. Running code does not have access to local scope, but does have access to the current `global` object.

If `options` is a string, then it specifies the filename.

The following example illustrates using both `vm.runInThisContext()` and the JavaScript `eval()` function to run the same code:
```js
const vm = require('node:vm');
let localVar = 'initial value';

const vmResult = vm.runInThisContext('localVar = "vm";');
console.log(`vmResult: '${vmResult}', localVar: '${localVar}'`);
// Prints: vmResult: 'vm', localVar: 'initial value'

const evalResult = eval('localVar = "eval";');
console.log(`evalResult: '${evalResult}', localVar: '${localVar}'`);
// Prints: evalResult: 'eval', localVar: 'eval'
```

Because `vm.runInThisContext()` does not have access to the local scope, `localVar` is unchanged. In contrast, a direct `eval()` call does have access to the local scope, so the value of `localVar` is changed. In this way `vm.runInThisContext()` is much like an indirect `eval()` call, e.g. `(0,eval)('code')`.
Example: Running an HTTP server within a VM#
When using either `script.runInThisContext()` or `vm.runInThisContext()`, the code is executed within the current V8 global context. The code passed to this VM context will have its own isolated scope.

In order to run a simple web server using the `node:http` module the code passed to the context must either call `require('node:http')` on its own, or have a reference to the `node:http` module passed to it. For instance:
```js
'use strict';
const vm = require('node:vm');

const code = `
((require) => {
  const http = require('node:http');

  http.createServer((request, response) => {
    response.writeHead(200, { 'Content-Type': 'text/plain' });
    response.end('Hello World\\n');
  }).listen(8124);

  console.log('Server running at http://127.0.0.1:8124/');
})`;

vm.runInThisContext(code)(require);
```

The `require()` in the above case shares the state with the context it is passed from. This may introduce risks when untrusted code is executed, e.g. altering objects in the context in unwanted ways.
What does it mean to "contextify" an object?#
All JavaScript executed within Node.js runs within the scope of a "context". According to the V8 Embedder's Guide:

In V8, a context is an execution environment that allows separate, unrelated, JavaScript applications to run in a single instance of V8. You must explicitly specify the context in which you want any JavaScript code to be run.

When the method `vm.createContext()` is called with an object, the `contextObject` argument will be used to wrap the global object of a new instance of a V8 Context (if `contextObject` is `undefined`, a new object will be created from the current context before it's contextified). This V8 Context provides the code run using the `node:vm` module's methods with an isolated global environment within which it can operate. The process of creating the V8 Context and associating it with the `contextObject` in the outer context is what this document refers to as "contextifying" the object.

Contextifying introduces some quirks to the `globalThis` value in the context. For example, it cannot be frozen, and it is not reference equal to the `contextObject` in the outer context.
```js
const vm = require('node:vm');

// An undefined `contextObject` option makes the global object contextified.
const context = vm.createContext();
console.log(vm.runInContext('globalThis', context) === context);  // false
// A contextified global object cannot be frozen.
try {
  vm.runInContext('Object.freeze(globalThis);', context);
} catch (e) {
  console.log(e); // TypeError: Cannot freeze
}
console.log(vm.runInContext('globalThis.foo = 1; foo;', context));  // 1
```

To create a context with an ordinary global object and get access to a global proxy in the outer context with fewer quirks, specify `vm.constants.DONT_CONTEXTIFY` as the `contextObject` argument.
vm.constants.DONT_CONTEXTIFY#
This constant, when used as the `contextObject` argument in vm APIs, instructs Node.js to create a context without wrapping its global object with another object in a Node.js-specific manner. As a result, the `globalThis` value inside the new context behaves more like an ordinary one.
```js
const vm = require('node:vm');

// Use vm.constants.DONT_CONTEXTIFY to freeze the global object.
const context = vm.createContext(vm.constants.DONT_CONTEXTIFY);
vm.runInContext('Object.freeze(globalThis);', context);
try {
  vm.runInContext('bar = 1; bar;', context);
} catch (e) {
  console.log(e); // Uncaught ReferenceError: bar is not defined
}
```

When `vm.constants.DONT_CONTEXTIFY` is used as the `contextObject` argument to `vm.createContext()`, the returned object is a proxy-like object to the global object in the newly created context with fewer Node.js-specific quirks. It is reference equal to the `globalThis` value in the new context, can be modified from outside the context, and can be used to access built-ins in the new context directly.
```js
const vm = require('node:vm');

const context = vm.createContext(vm.constants.DONT_CONTEXTIFY);

// Returned object is reference equal to globalThis in the new context.
console.log(vm.runInContext('globalThis', context) === context);  // true

// Can be used to access globals in the new context directly.
console.log(context.Array);  // [Function: Array]
vm.runInContext('foo = 1;', context);
console.log(context.foo);  // 1
context.bar = 1;
console.log(vm.runInContext('bar;', context));  // 1

// Can be frozen and it affects the inner context.
Object.freeze(context);
try {
  vm.runInContext('baz = 1; baz;', context);
} catch (e) {
  console.log(e); // Uncaught ReferenceError: baz is not defined
}
```

Timeout interactions with asynchronous tasks and Promises#
`Promise`s and `async function`s can schedule tasks run by the JavaScript engine asynchronously. By default, these tasks are run after all JavaScript functions on the current stack are done executing. This allows escaping the functionality of the `timeout` and `breakOnSigint` options.

For example, the following code executed by `vm.runInNewContext()` with a timeout of 5 milliseconds schedules an infinite loop to run after a promise resolves. The scheduled loop is never interrupted by the timeout:
```js
const vm = require('node:vm');

function loop() {
  console.log('entering loop');
  while (1) console.log(Date.now());
}

vm.runInNewContext(
  'Promise.resolve().then(() => loop());',
  { loop, console },
  { timeout: 5 },
);
// This is printed *before* 'entering loop' (!)
console.log('done executing');
```

This can be addressed by passing `microtaskMode: 'afterEvaluate'` to the code that creates the `Context`:
```js
const vm = require('node:vm');

function loop() {
  while (1) console.log(Date.now());
}

vm.runInNewContext(
  'Promise.resolve().then(() => loop());',
  { loop, console },
  { timeout: 5, microtaskMode: 'afterEvaluate' },
);
```

In this case, the microtask scheduled through `promise.then()` will be run before returning from `vm.runInNewContext()`, and will be interrupted by the `timeout` functionality. This applies only to code running in a `vm.Context`, so e.g. `vm.runInThisContext()` does not take this option.
Promise callbacks are entered into the microtask queue of the context in which they were created. For example, if `() => loop()` is replaced with just `loop` in the above example, then `loop` will be pushed into the global microtask queue, because it is a function from the outer (main) context, and thus will also be able to escape the timeout.

If asynchronous scheduling functions such as `process.nextTick()`, `queueMicrotask()`, `setTimeout()`, `setImmediate()`, etc. are made available inside a `vm.Context`, functions passed to them will be added to global queues, which are shared by all contexts. Therefore, callbacks passed to those functions are not controllable through the timeout either.
When `microtaskMode` is `'afterEvaluate'`, beware sharing Promises between Contexts#

In `'afterEvaluate'` mode, the `Context` has its own microtask queue, separate from the global microtask queue used by the outer (main) context. While this mode is necessary to enforce `timeout` and enable `breakOnSigint` with asynchronous tasks, it also makes sharing promises between contexts challenging.

In the example below, a promise is created in the inner context and shared with the outer context. When the outer context `await`s the promise, the execution flow of the outer context is disrupted in a surprising way: the log statement is never executed.
```js
import * as vm from 'node:vm';

const inner_context = vm.createContext({}, { microtaskMode: 'afterEvaluate' });

// runInContext() returns a Promise created in the inner context.
const inner_promise = vm.runInContext(
  'Promise.resolve()',
  inner_context,
);

// As part of performing `await`, the JavaScript runtime must enqueue a task
// on the microtask queue of the context where `inner_promise` was created.
// A task is added on the inner microtask queue, but **it will not be run
// automatically**: this task will remain pending indefinitely.
//
// Since the outer microtask queue is empty, execution in the outer module
// falls through, and the log statement below is never executed.
await inner_promise;
console.log('this will NOT be printed');
```

To successfully share promises between contexts with different microtask queues, it is necessary to ensure that tasks on the inner microtask queue will be run whenever the outer context enqueues a task on the inner microtask queue.
The tasks on the microtask queue of a given context are run whenever `runInContext()` or `SourceTextModule.evaluate()` are invoked on a script or module using this context. In our example, the normal execution flow can be restored by scheduling a second call to `runInContext()` before `await inner_promise`.
```js
// Schedule `runInContext()` to manually drain the inner context microtask
// queue; it will run after the `await` statement below.
setImmediate(() => {
  vm.runInContext('', inner_context);
});

await inner_promise;
console.log('OK');
```

Note: Strictly speaking, in this mode, `node:vm` departs from the letter of the ECMAScript specification for enqueuing jobs, by allowing asynchronous tasks from different contexts to run in a different order than they were enqueued.
Support of dynamic `import()` in compilation APIs#

The following APIs support an `importModuleDynamically` option to enable dynamic `import()` in code compiled by the vm module.

- `new vm.Script`
- `vm.compileFunction()`
- `new vm.SourceTextModule`
- `vm.runInThisContext()`
- `vm.runInContext()`
- `vm.runInNewContext()`
- `vm.createContext()`

This option is still part of the experimental modules API. We do not recommend using it in a production environment.
When the `importModuleDynamically` option is not specified or undefined#

If this option is not specified, or if it's `undefined`, code containing `import()` can still be compiled by the vm APIs, but when the compiled code is executed and it actually calls `import()`, the result will reject with `ERR_VM_DYNAMIC_IMPORT_CALLBACK_MISSING`.
When `importModuleDynamically` is `vm.constants.USE_MAIN_CONTEXT_DEFAULT_LOADER`#

This option is currently not supported for `vm.SourceTextModule`.

With this option, when an `import()` is initiated in the compiled code, Node.js uses the default ESM loader from the main context to load the requested module and return it to the code being executed.

This gives access to Node.js built-in modules such as `fs` or `http` to the code being compiled. If the code is executed in a different context, be aware that the objects created by modules loaded from the main context are still from the main context and not `instanceof` built-in classes in the new context.
```js
const { Script, constants } = require('node:vm');
const script = new Script(
  'import("node:fs").then(({ readFile }) => readFile instanceof Function)',
  { importModuleDynamically: constants.USE_MAIN_CONTEXT_DEFAULT_LOADER });
// false: readFile loaded from the main context is not an instance of the
// Function class in the new context.
script.runInNewContext().then(console.log);
```

```js
import { Script, constants } from 'node:vm';
const script = new Script(
  'import("node:fs").then(({ readFile }) => readFile instanceof Function)',
  { importModuleDynamically: constants.USE_MAIN_CONTEXT_DEFAULT_LOADER });
// false: readFile loaded from the main context is not an instance of the
// Function class in the new context.
script.runInNewContext().then(console.log);
```
This option also allows the script or function to load user modules:
```js
import { Script, constants } from 'node:vm';
import { resolve } from 'node:path';
import { writeFileSync } from 'node:fs';

// Write test.mjs and test.json to the directory where the current
// script being run is located.
writeFileSync(resolve(import.meta.dirname, 'test.mjs'),
              'export const filename = "./test.json";');
writeFileSync(resolve(import.meta.dirname, 'test.json'),
              '{"hello": "world"}');

// Compile a script that loads test.mjs and then test.json
// as if the script is placed in the same directory.
const script = new Script(
  `(async function() {
    const { filename } = await import('./test.mjs');
    return import(filename, { with: { type: 'json' } })
  })();`,
  {
    filename: resolve(import.meta.dirname, 'test-with-default.js'),
    importModuleDynamically: constants.USE_MAIN_CONTEXT_DEFAULT_LOADER,
  });

// { default: { hello: 'world' } }
script.runInThisContext().then(console.log);
```

```js
const { Script, constants } = require('node:vm');
const { resolve } = require('node:path');
const { writeFileSync } = require('node:fs');

// Write test.mjs and test.json to the directory where the current
// script being run is located.
writeFileSync(resolve(__dirname, 'test.mjs'),
              'export const filename = "./test.json";');
writeFileSync(resolve(__dirname, 'test.json'),
              '{"hello": "world"}');

// Compile a script that loads test.mjs and then test.json
// as if the script is placed in the same directory.
const script = new Script(
  `(async function() {
    const { filename } = await import('./test.mjs');
    return import(filename, { with: { type: 'json' } })
  })();`,
  {
    filename: resolve(__dirname, 'test-with-default.js'),
    importModuleDynamically: constants.USE_MAIN_CONTEXT_DEFAULT_LOADER,
  });

// { default: { hello: 'world' } }
script.runInThisContext().then(console.log);
```
There are a few caveats with loading user modules using the default loader from the main context:

- The module being resolved would be relative to the `filename` option passed to `vm.Script` or `vm.compileFunction()`. The resolution can work with a `filename` that's either an absolute path or a URL string. If `filename` is a string that's neither an absolute path nor a URL, or if it's undefined, the resolution will be relative to the current working directory of the process. In the case of `vm.createContext()`, the resolution is always relative to the current working directory since this option is only used when there isn't a referrer script or module.
- For any given `filename` that resolves to a specific path, once the process manages to load a particular module from that path, the result may be cached, and subsequent loads of the same module from the same path would return the same thing. If the `filename` is a URL string, the cache would not be hit if it has different search parameters. For `filename`s that are not URL strings, there is currently no way to bypass the caching behavior.
When `importModuleDynamically` is a function#
When `importModuleDynamically` is a function, it will be invoked when `import()` is called in the compiled code for users to customize how the requested module should be compiled and evaluated. Currently, the Node.js instance must be launched with the `--experimental-vm-modules` flag for this option to work. If the flag isn't set, this callback will be ignored. If the evaluated code actually calls `import()`, the result will reject with `ERR_VM_DYNAMIC_IMPORT_CALLBACK_MISSING_FLAG`.
The callback `importModuleDynamically(specifier, referrer, importAttributes)` has the following signature:

- `specifier` <string> specifier passed to `import()`
- `referrer` <vm.Script> | <Function> | <vm.SourceTextModule> | <Object> The referrer is the compiled `vm.Script` for `new vm.Script`, `vm.runInThisContext`, `vm.runInContext` and `vm.runInNewContext`. It's the compiled `Function` for `vm.compileFunction`, the compiled `vm.SourceTextModule` for `new vm.SourceTextModule`, and the context `Object` for `vm.createContext()`.
- `importAttributes` <Object> The `"with"` value passed to the `optionsExpression` optional parameter, or an empty object if no value was provided.
- `phase` <string> The phase of the dynamic import (`"source"` or `"evaluation"`).
- Returns: <Module Namespace Object> | <vm.Module> Returning a `vm.Module` is recommended in order to take advantage of error tracking, and to avoid issues with namespaces that contain `then` function exports.
```js
// This script must be run with --experimental-vm-modules.
import { Script, SyntheticModule } from 'node:vm';

const script = new Script(
  'import("foo.json", { with: { type: "json" } })',
  {
    async importModuleDynamically(specifier, referrer, importAttributes) {
      console.log(specifier);  // 'foo.json'
      console.log(referrer);  // The compiled script
      console.log(importAttributes);  // { type: 'json' }
      const m = new SyntheticModule(['bar'], () => { });
      await m.link(() => { });
      m.setExport('bar', { hello: 'world' });
      return m;
    },
  });
const result = await script.runInThisContext();
console.log(result);  // { bar: { hello: 'world' } }
```

```js
// This script must be run with --experimental-vm-modules.
const { Script, SyntheticModule } = require('node:vm');

(async function main() {
  const script = new Script(
    'import("foo.json", { with: { type: "json" } })',
    {
      async importModuleDynamically(specifier, referrer, importAttributes) {
        console.log(specifier);  // 'foo.json'
        console.log(referrer);  // The compiled script
        console.log(importAttributes);  // { type: 'json' }
        const m = new SyntheticModule(['bar'], () => { });
        await m.link(() => { });
        m.setExport('bar', { hello: 'world' });
        return m;
      },
    });
  const result = await script.runInThisContext();
  console.log(result);  // { bar: { hello: 'world' } }
})();
```
WebAssembly System Interface (WASI)#
The `node:wasi` module does not currently provide the comprehensive file system security properties provided by some WASI runtimes. Full support for secure file system sandboxing may or may not be implemented in the future. In the meantime, do not rely on it to run untrusted code.
Source Code:lib/wasi.js
The WASI API provides an implementation of the WebAssembly System Interface specification. WASI gives WebAssembly applications access to the underlying operating system via a collection of POSIX-like functions.
```js
import { readFile } from 'node:fs/promises';
import { WASI } from 'node:wasi';
import { argv, env } from 'node:process';

const wasi = new WASI({
  version: 'preview1',
  args: argv,
  env,
  preopens: {
    '/local': '/some/real/path/that/wasm/can/access',
  },
});

const wasm = await WebAssembly.compile(
  await readFile(new URL('./demo.wasm', import.meta.url)),
);
const instance = await WebAssembly.instantiate(wasm, wasi.getImportObject());

wasi.start(instance);
```

```js
'use strict';
const { readFile } = require('node:fs/promises');
const { WASI } = require('node:wasi');
const { argv, env } = require('node:process');
const { join } = require('node:path');

const wasi = new WASI({
  version: 'preview1',
  args: argv,
  env,
  preopens: {
    '/local': '/some/real/path/that/wasm/can/access',
  },
});

(async () => {
  const wasm = await WebAssembly.compile(
    await readFile(join(__dirname, 'demo.wasm')),
  );
  const instance = await WebAssembly.instantiate(wasm, wasi.getImportObject());

  wasi.start(instance);
})();
```
To run the above example, create a new WebAssembly text format file named `demo.wat`:
```wat
(module
    ;; Import the required fd_write WASI function which will write the given io vectors to stdout
    ;; The function signature for fd_write is:
    ;; (File Descriptor, *iovs, iovs_len, nwritten) -> Returns number of bytes written
    (import "wasi_snapshot_preview1" "fd_write" (func $fd_write (param i32 i32 i32 i32) (result i32)))

    (memory 1)
    (export "memory" (memory 0))

    ;; Write 'hello world\n' to memory at an offset of 8 bytes
    ;; Note the trailing newline which is required for the text to appear
    (data (i32.const 8) "hello world\n")

    (func $main (export "_start")
        ;; Creating a new io vector within linear memory
        (i32.store (i32.const 0) (i32.const 8))  ;; iov.iov_base - This is a pointer to the start of the 'hello world\n' string
        (i32.store (i32.const 4) (i32.const 12))  ;; iov.iov_len - The length of the 'hello world\n' string

        (call $fd_write
            (i32.const 1) ;; file_descriptor - 1 for stdout
            (i32.const 0) ;; *iovs - The pointer to the iov array, which is stored at memory location 0
            (i32.const 1) ;; iovs_len - We're printing 1 string stored in an iov - so one.
            (i32.const 20) ;; nwritten - A place in memory to store the number of bytes written
        )
        drop ;; Discard the number of bytes written from the top of the stack
    )
)
```

Use wabt to compile `.wat` to `.wasm`:

```console
wat2wasm demo.wat
```

Security#
History
| Version | Changes |
|---|---|
| v21.2.0, v20.11.0 | Clarify WASI security properties. |
| v21.2.0, v20.11.0 | Added in: v21.2.0, v20.11.0 |
WASI provides a capabilities-based model through which applications are provided their own custom `env`, `preopens`, `stdin`, `stdout`, `stderr`, and `exit` capabilities.
The current Node.js threat model does not provide secure sandboxing as ispresent in some WASI runtimes.
While the capability features are supported, they do not form a security model in Node.js. For example, the file system sandboxing can be escaped with various techniques. The project is exploring whether these security guarantees could be added in the future.
Class:WASI#
The `WASI` class provides the WASI system call API and additional convenience methods for working with WASI-based applications. Each `WASI` instance represents a distinct environment.
new WASI([options])#
History
| Version | Changes |
|---|---|
| v20.1.0 | default value of returnOnExit changed to true. |
| v20.0.0 | The version option is now required and has no default value. |
| v19.8.0 | version field added to options. |
| v13.3.0, v12.16.0 | Added in: v13.3.0, v12.16.0 |
- `options` <Object>
  - `args` <Array> An array of strings that the WebAssembly application will see as command-line arguments. The first argument is the virtual path to the WASI command itself. Default: `[]`.
  - `env` <Object> An object similar to `process.env` that the WebAssembly application will see as its environment. Default: `{}`.
  - `preopens` <Object> This object represents the WebAssembly application's local directory structure. The string keys of `preopens` are treated as directories within the file system. The corresponding values in `preopens` are the real paths to those directories on the host machine.
  - `returnOnExit` <boolean> By default, when WASI applications call `__wasi_proc_exit()`, `wasi.start()` will return with the exit code specified rather than terminating the process. Setting this option to `false` will cause the Node.js process to exit with the specified exit code instead. Default: `true`.
  - `stdin` <integer> The file descriptor used as standard input in the WebAssembly application. Default: `0`.
  - `stdout` <integer> The file descriptor used as standard output in the WebAssembly application. Default: `1`.
  - `stderr` <integer> The file descriptor used as standard error in the WebAssembly application. Default: `2`.
  - `version` <string> The version of WASI requested. Currently the only supported versions are `unstable` and `preview1`. This option is mandatory.
wasi.getImportObject()#
Return an import object that can be passed to `WebAssembly.instantiate()` if no other WASM imports are needed beyond those provided by WASI.
If version `unstable` was passed into the constructor it will return:

```js
{ wasi_unstable: wasi.wasiImport }
```

If version `preview1` was passed into the constructor or no version was specified it will return:

```js
{ wasi_snapshot_preview1: wasi.wasiImport }
```

wasi.start(instance)#
instance<WebAssembly.Instance>
Attempt to begin execution of `instance` as a WASI command by invoking its `_start()` export. If `instance` does not contain a `_start()` export, or if `instance` contains an `_initialize()` export, then an exception is thrown.

`start()` requires that `instance` exports a `WebAssembly.Memory` named `memory`. If `instance` does not have a `memory` export an exception is thrown.

If `start()` is called more than once, an exception is thrown.
wasi.initialize(instance)#
instance<WebAssembly.Instance>
Attempt to initialize `instance` as a WASI reactor by invoking its `_initialize()` export, if it is present. If `instance` contains a `_start()` export, then an exception is thrown.

`initialize()` requires that `instance` exports a `WebAssembly.Memory` named `memory`. If `instance` does not have a `memory` export an exception is thrown.

If `initialize()` is called more than once, an exception is thrown.
wasi.finalizeBindings(instance[, options])#
- `instance` <WebAssembly.Instance>
- `options` <Object>
  - `memory` <WebAssembly.Memory> Default: `instance.exports.memory`.

Set up WASI host bindings for `instance` without calling `initialize()` or `start()`. This method is useful when the WASI module is instantiated in child threads, to share the memory across threads.

`finalizeBindings()` requires that either `instance` exports a `WebAssembly.Memory` named `memory` or the user specifies a `WebAssembly.Memory` object in `options.memory`. If the memory is invalid an exception is thrown.

`start()` and `initialize()` call `finalizeBindings()` internally. If `finalizeBindings()` is called more than once, an exception is thrown.
wasi.wasiImport#
- Type:<Object>
`wasiImport` is an object that implements the WASI system call API. This object should be passed as the `wasi_snapshot_preview1` import during the instantiation of a `WebAssembly.Instance`.
Web Crypto API#
History
| Version | Changes |
|---|---|
| v24.8.0 | KMAC algorithms are now supported. |
| v24.8.0 | Argon2 algorithms are now supported. |
| v24.7.0 | AES-OCB algorithm is now supported. |
| v24.7.0 | ML-KEM algorithms are now supported. |
| v24.7.0 | ChaCha20-Poly1305 algorithm is now supported. |
| v24.7.0 | SHA-3 algorithms are now supported. |
| v24.7.0 | SHAKE algorithms are now supported. |
| v24.7.0 | ML-DSA algorithms are now supported. |
| v23.5.0, v22.13.0, v20.19.3 | Algorithms |
| v19.0.0 | No longer experimental except for the |
| v20.0.0, v18.17.0 | Arguments are now coerced and validated as per their WebIDL definitions like in other Web Crypto API implementations. |
| v18.4.0, v16.17.0 | Removed proprietary |
| v18.4.0, v16.17.0 | Removed proprietary |
| v18.4.0, v16.17.0 | Added |
| v18.4.0, v16.17.0 | Removed proprietary |
| v18.4.0, v16.17.0 | Removed proprietary |
Node.js provides an implementation of the Web Crypto API standard.

Use `globalThis.crypto` or `require('node:crypto').webcrypto` to access this module.

```js
const { subtle } = globalThis.crypto;

(async function() {
  const key = await subtle.generateKey({
    name: 'HMAC',
    hash: 'SHA-256',
    length: 256,
  }, true, ['sign', 'verify']);

  const enc = new TextEncoder();
  const message = enc.encode('I love cupcakes');

  const digest = await subtle.sign({
    name: 'HMAC',
  }, key, message);
})();
```

Modern Algorithms in the Web Cryptography API#
Node.js provides an implementation of the following features from the Modern Algorithms in the Web Cryptography API WICG proposal:
Algorithms:
- 'AES-OCB'1
- 'Argon2d'2
- 'Argon2i'2
- 'Argon2id'2
- 'ChaCha20-Poly1305'
- 'cSHAKE128'
- 'cSHAKE256'
- 'KMAC128'1
- 'KMAC256'1
- 'ML-DSA-44'3
- 'ML-DSA-65'3
- 'ML-DSA-87'3
- 'ML-KEM-512'3
- 'ML-KEM-768'3
- 'ML-KEM-1024'3
- 'SHA3-256'
- 'SHA3-384'
- 'SHA3-512'
Key Formats:
- 'raw-public'
- 'raw-secret'
- 'raw-seed'
Methods:
Secure Curves in the Web Cryptography API#
Node.js provides an implementation of the following features from the Secure Curves in the Web Cryptography API WICG proposal:
Algorithms:
- 'Ed448'
- 'X448'
Examples#
Generating keys#
The<SubtleCrypto> class can be used to generate symmetric (secret) keysor asymmetric key pairs (public key and private key).
AES keys#
```js
const { subtle } = globalThis.crypto;

async function generateAesKey(length = 256) {
  const key = await subtle.generateKey({
    name: 'AES-CBC',
    length,
  }, true, ['encrypt', 'decrypt']);

  return key;
}
```

ECDSA key pairs#
```js
const { subtle } = globalThis.crypto;

async function generateEcKey(namedCurve = 'P-521') {
  const {
    publicKey,
    privateKey,
  } = await subtle.generateKey({
    name: 'ECDSA',
    namedCurve,
  }, true, ['sign', 'verify']);

  return { publicKey, privateKey };
}
```

Ed25519/X25519 key pairs#
```js
const { subtle } = globalThis.crypto;

async function generateEd25519Key() {
  return subtle.generateKey({
    name: 'Ed25519',
  }, true, ['sign', 'verify']);
}

async function generateX25519Key() {
  return subtle.generateKey({
    name: 'X25519',
  }, true, ['deriveKey']);
}
```

HMAC keys#
```js
const { subtle } = globalThis.crypto;

async function generateHmacKey(hash = 'SHA-256') {
  const key = await subtle.generateKey({
    name: 'HMAC',
    hash,
  }, true, ['sign', 'verify']);

  return key;
}
```

RSA key pairs#
```js
const { subtle } = globalThis.crypto;
const publicExponent = new Uint8Array([1, 0, 1]);

async function generateRsaKey(modulusLength = 2048, hash = 'SHA-256') {
  const {
    publicKey,
    privateKey,
  } = await subtle.generateKey({
    name: 'RSASSA-PKCS1-v1_5',
    modulusLength,
    publicExponent,
    hash,
  }, true, ['sign', 'verify']);

  return { publicKey, privateKey };
}
```

Encryption and decryption#
```js
const crypto = globalThis.crypto;

async function aesEncrypt(plaintext) {
  const ec = new TextEncoder();
  const key = await generateAesKey();
  const iv = crypto.getRandomValues(new Uint8Array(16));

  const ciphertext = await crypto.subtle.encrypt({
    name: 'AES-CBC',
    iv,
  }, key, ec.encode(plaintext));

  return {
    key,
    iv,
    ciphertext,
  };
}

async function aesDecrypt(ciphertext, key, iv) {
  const dec = new TextDecoder();
  const plaintext = await crypto.subtle.decrypt({
    name: 'AES-CBC',
    iv,
  }, key, ciphertext);

  return dec.decode(plaintext);
}
```

Exporting and importing keys#
```js
const { subtle } = globalThis.crypto;

async function generateAndExportHmacKey(format = 'jwk', hash = 'SHA-512') {
  const key = await subtle.generateKey({
    name: 'HMAC',
    hash,
  }, true, ['sign', 'verify']);

  return subtle.exportKey(format, key);
}

async function importHmacKey(keyData, format = 'jwk', hash = 'SHA-512') {
  const key = await subtle.importKey(format, keyData, {
    name: 'HMAC',
    hash,
  }, true, ['sign', 'verify']);

  return key;
}
```

Wrapping and unwrapping keys#
```js
const { subtle } = globalThis.crypto;

async function generateAndWrapHmacKey(format = 'jwk', hash = 'SHA-512') {
  const [
    key,
    wrappingKey,
  ] = await Promise.all([
    subtle.generateKey({
      name: 'HMAC', hash,
    }, true, ['sign', 'verify']),
    subtle.generateKey({
      name: 'AES-KW',
      length: 256,
    }, true, ['wrapKey', 'unwrapKey']),
  ]);

  const wrappedKey = await subtle.wrapKey(format, key, wrappingKey, 'AES-KW');

  return { wrappedKey, wrappingKey };
}

async function unwrapHmacKey(
  wrappedKey,
  wrappingKey,
  format = 'jwk',
  hash = 'SHA-512') {
  const key = await subtle.unwrapKey(
    format,
    wrappedKey,
    wrappingKey,
    'AES-KW',
    { name: 'HMAC', hash },
    true,
    ['sign', 'verify']);

  return key;
}
```

Sign and verify#
```js
const { subtle } = globalThis.crypto;

async function sign(key, data) {
  const ec = new TextEncoder();
  const signature =
    await subtle.sign('RSASSA-PKCS1-v1_5', key, ec.encode(data));
  return signature;
}

async function verify(key, signature, data) {
  const ec = new TextEncoder();
  const verified =
    await subtle.verify('RSASSA-PKCS1-v1_5', key, signature, ec.encode(data));
  return verified;
}
```

Deriving bits and keys#
```js
const { subtle } = globalThis.crypto;

async function pbkdf2(pass, salt, iterations = 1000, length = 256) {
  const ec = new TextEncoder();
  const key = await subtle.importKey(
    'raw',
    ec.encode(pass),
    'PBKDF2',
    false,
    ['deriveBits']);
  const bits = await subtle.deriveBits({
    name: 'PBKDF2',
    hash: 'SHA-512',
    salt: ec.encode(salt),
    iterations,
  }, key, length);
  return bits;
}

async function pbkdf2Key(pass, salt, iterations = 1000, length = 256) {
  const ec = new TextEncoder();
  const keyMaterial = await subtle.importKey(
    'raw',
    ec.encode(pass),
    'PBKDF2',
    false,
    ['deriveKey']);
  const key = await subtle.deriveKey({
    name: 'PBKDF2',
    hash: 'SHA-512',
    salt: ec.encode(salt),
    iterations,
  }, keyMaterial, {
    name: 'AES-GCM',
    length,
  }, true, ['encrypt', 'decrypt']);
  return key;
}
```

Digest#
```js
const { subtle } = globalThis.crypto;

async function digest(data, algorithm = 'SHA-512') {
  const ec = new TextEncoder();
  const digest = await subtle.digest(algorithm, ec.encode(data));
  return digest;
}
```

Checking for runtime algorithm support#
SubtleCrypto.supports() allows feature detection in Web Crypto API,which can be used to detect whether a given algorithm identifier(including its parameters) is supported for the given operation.
This example derives a key from a password using Argon2 if available, or PBKDF2 otherwise; it then encrypts and decrypts some text with that key using AES-OCB if available, or AES-GCM otherwise.
```js
const { SubtleCrypto, crypto } = globalThis;

const password = 'correct horse battery staple';

const derivationAlg =
  SubtleCrypto.supports?.('importKey', 'Argon2id') ?
    'Argon2id' :
    'PBKDF2';
const encryptionAlg =
  SubtleCrypto.supports?.('importKey', 'AES-OCB') ?
    'AES-OCB' :
    'AES-GCM';

const passwordKey = await crypto.subtle.importKey(
  derivationAlg === 'Argon2id' ? 'raw-secret' : 'raw',
  new TextEncoder().encode(password),
  derivationAlg,
  false,
  ['deriveKey'],
);

const nonce = crypto.getRandomValues(new Uint8Array(16));

const derivationParams =
  derivationAlg === 'Argon2id' ?
    {
      nonce,
      parallelism: 4,
      memory: 2 ** 21,
      passes: 1,
    } :
    {
      salt: nonce,
      iterations: 100_000,
      hash: 'SHA-256',
    };

const key = await crypto.subtle.deriveKey(
  {
    name: derivationAlg,
    ...derivationParams,
  },
  passwordKey,
  {
    name: encryptionAlg,
    length: 256,
  },
  false,
  ['encrypt', 'decrypt'],
);

const plaintext = 'Hello, world!';
const iv = crypto.getRandomValues(new Uint8Array(16));

const encrypted = await crypto.subtle.encrypt(
  { name: encryptionAlg, iv },
  key,
  new TextEncoder().encode(plaintext),
);

const decrypted = new TextDecoder().decode(await crypto.subtle.decrypt(
  { name: encryptionAlg, iv },
  key,
  encrypted,
));
```

Algorithm matrix#
The tables below detail the algorithms supported by the Node.js Web Crypto API implementation and the APIs supported for each:
Key Management APIs#
| Algorithm | subtle.generateKey() | subtle.exportKey() | subtle.importKey() | subtle.getPublicKey() |
|---|---|---|---|---|
'AES-CBC' | ✔ | ✔ | ✔ | |
'AES-CTR' | ✔ | ✔ | ✔ | |
'AES-GCM' | ✔ | ✔ | ✔ | |
'AES-KW' | ✔ | ✔ | ✔ | |
'AES-OCB' | ✔ | ✔ | ✔ | |
'Argon2d' | ✔ | |||
'Argon2i' | ✔ | |||
'Argon2id' | ✔ | |||
'ChaCha20-Poly1305'4 | ✔ | ✔ | ✔ | |
'ECDH' | ✔ | ✔ | ✔ | ✔ |
'ECDSA' | ✔ | ✔ | ✔ | ✔ |
'Ed25519' | ✔ | ✔ | ✔ | ✔ |
'Ed448'5 | ✔ | ✔ | ✔ | ✔ |
'HKDF' | ✔ | |||
'HMAC' | ✔ | ✔ | ✔ | |
'KMAC128'4 | ✔ | ✔ | ✔ | |
'KMAC256'4 | ✔ | ✔ | ✔ | |
'ML-DSA-44'4 | ✔ | ✔ | ✔ | ✔ |
'ML-DSA-65'4 | ✔ | ✔ | ✔ | ✔ |
'ML-DSA-87'4 | ✔ | ✔ | ✔ | ✔ |
'ML-KEM-512'4 | ✔ | ✔ | ✔ | ✔ |
'ML-KEM-768'4 | ✔ | ✔ | ✔ | ✔ |
'ML-KEM-1024'4 | ✔ | ✔ | ✔ | ✔ |
'PBKDF2' | ✔ | |||
'RSA-OAEP' | ✔ | ✔ | ✔ | ✔ |
'RSA-PSS' | ✔ | ✔ | ✔ | ✔ |
'RSASSA-PKCS1-v1_5' | ✔ | ✔ | ✔ | ✔ |
'X25519' | ✔ | ✔ | ✔ | ✔ |
'X448'5 | ✔ | ✔ | ✔ | ✔ |
Crypto Operation APIs#
Column Legend:
- Encryption: `subtle.encrypt()` / `subtle.decrypt()`
- Signatures and MAC: `subtle.sign()` / `subtle.verify()`
- Key or Bits Derivation: `subtle.deriveBits()` / `subtle.deriveKey()`
- Key Wrapping: `subtle.wrapKey()` / `subtle.unwrapKey()`
- Key Encapsulation: `subtle.encapsulateBits()` / `subtle.decapsulateBits()` / `subtle.encapsulateKey()` / `subtle.decapsulateKey()`
- Digest: `subtle.digest()`
| Algorithm | Encryption | Signatures and MAC | Key or Bits Derivation | Key Wrapping | Key Encapsulation | Digest |
|---|---|---|---|---|---|---|
'AES-CBC' | ✔ | ✔ | ||||
'AES-CTR' | ✔ | ✔ | ||||
'AES-GCM' | ✔ | ✔ | ||||
'AES-KW' | ✔ | |||||
'AES-OCB' | ✔ | ✔ | ||||
'Argon2d' | ✔ | |||||
'Argon2i' | ✔ | |||||
'Argon2id' | ✔ | |||||
'ChaCha20-Poly1305'4 | ✔ | ✔ | ||||
'cSHAKE128'4 | ✔ | |||||
'cSHAKE256'4 | ✔ | |||||
'ECDH' | ✔ | |||||
'ECDSA' | ✔ | |||||
'Ed25519' | ✔ | |||||
'Ed448'5 | ✔ | |||||
'HKDF' | ✔ | |||||
'HMAC' | ✔ | |||||
'KMAC128'4 | ✔ | |||||
'KMAC256'4 | ✔ | |||||
'ML-DSA-44'4 | ✔ | |||||
'ML-DSA-65'4 | ✔ | |||||
'ML-DSA-87'4 | ✔ | |||||
'ML-KEM-512'4 | ✔ | |||||
'ML-KEM-768'4 | ✔ | |||||
'ML-KEM-1024'4 | ✔ | |||||
'PBKDF2' | ✔ | |||||
'RSA-OAEP' | ✔ | ✔ | ||||
'RSA-PSS' | ✔ | |||||
'RSASSA-PKCS1-v1_5' | ✔ | |||||
'SHA-1' | ✔ | |||||
'SHA-256' | ✔ | |||||
'SHA-384' | ✔ | |||||
'SHA-512' | ✔ | |||||
'SHA3-256'4 | ✔ | |||||
'SHA3-384'4 | ✔ | |||||
'SHA3-512'4 | ✔ | |||||
'X25519' | ✔ | |||||
'X448'5 | ✔ |
Class:Crypto#
`globalThis.crypto` is an instance of the `Crypto` class. `Crypto` is a singleton that provides access to the remainder of the crypto API.
crypto.getRandomValues(typedArray)#
- `typedArray` <Buffer> | <TypedArray>
- Returns: <Buffer> | <TypedArray>
Generates cryptographically strong random values. The given `typedArray` is filled with random values, and a reference to `typedArray` is returned.

The given `typedArray` must be an integer-based instance of <TypedArray>, i.e. `Float32Array` and `Float64Array` are not accepted.
An error will be thrown if the giventypedArray is larger than 65,536 bytes.
Class:CryptoKey#
cryptoKey.algorithm#
- Type:<KeyAlgorithm> |<RsaHashedKeyAlgorithm> |<EcKeyAlgorithm> |<AesKeyAlgorithm> |<HmacKeyAlgorithm> |<KmacKeyAlgorithm>
An object detailing the algorithm for which the key can be used along withadditional algorithm-specific parameters.
Read-only.
cryptoKey.extractable#
- Type:<boolean>
When `true`, the <CryptoKey> can be extracted using either `subtle.exportKey()` or `subtle.wrapKey()`.
Read-only.
cryptoKey.type#
- Type: <string> One of `'secret'`, `'private'`, or `'public'`.
A string identifying whether the key is a symmetric (`'secret'`) or asymmetric (`'private'` or `'public'`) key.
cryptoKey.usages#
- Type:<string[]>
An array of strings identifying the operations for which thekey may be used.
The possible usages are:
- `'encrypt'` - Enable using the key with `subtle.encrypt()`.
- `'decrypt'` - Enable using the key with `subtle.decrypt()`.
- `'sign'` - Enable using the key with `subtle.sign()`.
- `'verify'` - Enable using the key with `subtle.verify()`.
- `'deriveKey'` - Enable using the key with `subtle.deriveKey()`.
- `'deriveBits'` - Enable using the key with `subtle.deriveBits()`.
- `'encapsulateBits'` - Enable using the key with `subtle.encapsulateBits()`.
- `'decapsulateBits'` - Enable using the key with `subtle.decapsulateBits()`.
- `'encapsulateKey'` - Enable using the key with `subtle.encapsulateKey()`.
- `'decapsulateKey'` - Enable using the key with `subtle.decapsulateKey()`.
- `'wrapKey'` - Enable using the key with `subtle.wrapKey()`.
- `'unwrapKey'` - Enable using the key with `subtle.unwrapKey()`.
Valid key usages depend on the key algorithm (identified by `cryptokey.algorithm.name`).
Column Legend:
- Encryption: `subtle.encrypt()` / `subtle.decrypt()`
- Signatures and MAC: `subtle.sign()` / `subtle.verify()`
- Key or Bits Derivation: `subtle.deriveBits()` / `subtle.deriveKey()`
- Key Wrapping: `subtle.wrapKey()` / `subtle.unwrapKey()`
- Key Encapsulation: `subtle.encapsulateBits()` / `subtle.decapsulateBits()` / `subtle.encapsulateKey()` / `subtle.decapsulateKey()`
| Supported Key Algorithm | Encryption | Signatures and MAC | Key or Bits Derivation | Key Wrapping | Key Encapsulation |
|---|---|---|---|---|---|
'AES-CBC' | ✔ | ✔ | |||
'AES-CTR' | ✔ | ✔ | |||
'AES-GCM' | ✔ | ✔ | |||
'AES-KW' | ✔ | ||||
'AES-OCB' | ✔ | ✔ | |||
'Argon2d' | ✔ | ||||
'Argon2i' | ✔ | ||||
'Argon2id' | ✔ | ||||
'ChaCha20-Poly1305'4 | ✔ | ✔ | |||
'ECDH' | ✔ | ||||
'ECDSA' | ✔ | ||||
'Ed25519' | ✔ | ||||
'Ed448'5 | ✔ | ||||
'HKDF' | ✔ ||||
'HMAC' | ✔ | ||||
'KMAC128'4 | ✔ | ||||
'KMAC256'4 | ✔ | ||||
'ML-DSA-44'4 | ✔ | ||||
'ML-DSA-65'4 | ✔ | ||||
'ML-DSA-87'4 | ✔ | ||||
'ML-KEM-512'4 | ✔ | ||||
'ML-KEM-768'4 | ✔ | ||||
'ML-KEM-1024'4 | ✔ | ||||
'PBKDF2' | ✔ | ||||
'RSA-OAEP' | ✔ | ✔ | |||
'RSA-PSS' | ✔ | ||||
'RSASSA-PKCS1-v1_5' | ✔ | ||||
'X25519' | ✔ | ||||
'X448'5 | ✔ |
Class:CryptoKeyPair#
The `CryptoKeyPair` is a simple dictionary object with `publicKey` and `privateKey` properties, representing an asymmetric key pair.
Class:SubtleCrypto#
Static method:SubtleCrypto.supports(operation, algorithm[, lengthOrAdditionalAlgorithm])#
- `operation` <string> One of "encrypt", "decrypt", "sign", "verify", "digest", "generateKey", "deriveKey", "deriveBits", "importKey", "exportKey", "getPublicKey", "wrapKey", "unwrapKey", "encapsulateBits", "encapsulateKey", "decapsulateBits", or "decapsulateKey"
- `algorithm` <string> | <Algorithm>
- `lengthOrAdditionalAlgorithm` <null> | <number> | <string> | <Algorithm> | <undefined> Depending on the operation this is either ignored, the value of the length argument when operation is "deriveBits", the algorithm of the key to be derived when operation is "deriveKey", the algorithm of the key to be exported before wrapping when operation is "wrapKey", the algorithm of the key to be imported after unwrapping when operation is "unwrapKey", or the algorithm of the key to be imported after en/decapsulating a key when operation is "encapsulateKey" or "decapsulateKey". Default: `null` when operation is "deriveBits", `undefined` otherwise.
- Returns: <boolean> Indicating whether the implementation supports the given operation
Allows feature detection in Web Crypto API,which can be used to detect whether a given algorithm identifier(including its parameters) is supported for the given operation.
SeeChecking for runtime algorithm support for an example use of this method.
subtle.decapsulateBits(decapsulationAlgorithm, decapsulationKey, ciphertext)#
- `decapsulationAlgorithm` <string> | <Algorithm>
- `decapsulationKey` <CryptoKey>
- `ciphertext` <ArrayBuffer> | <TypedArray> | <DataView> | <Buffer>
- Returns: <Promise> Fulfills with <ArrayBuffer> upon success.

A message recipient uses their asymmetric private key to decrypt an "encapsulated key" (`ciphertext`), thereby recovering a temporary symmetric key (represented as <ArrayBuffer>) which is then used to decrypt a message.
The algorithms currently supported include:
subtle.decapsulateKey(decapsulationAlgorithm, decapsulationKey, ciphertext, sharedKeyAlgorithm, extractable, usages)#
- `decapsulationAlgorithm` <string> | <Algorithm>
- `decapsulationKey` <CryptoKey>
- `ciphertext` <ArrayBuffer> | <TypedArray> | <DataView> | <Buffer>
- `sharedKeyAlgorithm` <string> | <Algorithm> | <HmacImportParams> | <AesDerivedKeyParams> | <KmacImportParams>
- `extractable` <boolean>
- `usages` <string[]> See Key usages.
- Returns: <Promise> Fulfills with <CryptoKey> upon success.

A message recipient uses their asymmetric private key to decrypt an "encapsulated key" (`ciphertext`), thereby recovering a temporary symmetric key (represented as <CryptoKey>) which is then used to decrypt a message.
The algorithms currently supported include:
subtle.decrypt(algorithm, key, data)#
History
| Version | Changes |
|---|---|
| v24.7.0 | AES-OCB algorithm is now supported. |
| v24.7.0 | ChaCha20-Poly1305 algorithm is now supported. |
| v15.0.0 | Added in: v15.0.0 |
- `algorithm` <RsaOaepParams> | <AesCtrParams> | <AesCbcParams> | <AeadParams>
- `key` <CryptoKey>
- `data` <ArrayBuffer> | <TypedArray> | <DataView> | <Buffer>
- Returns: <Promise> Fulfills with an <ArrayBuffer> upon success.

Using the method and parameters specified in `algorithm` and the keying material provided by `key`, this method attempts to decipher the provided `data`. If successful, the returned promise will be resolved with an <ArrayBuffer> containing the plaintext result.
The algorithms currently supported include:
subtle.deriveBits(algorithm, baseKey[, length])#
History
| Version | Changes |
|---|---|
| v24.8.0 | Argon2 algorithms are now supported. |
| v22.5.0, v20.17.0, v18.20.5 | The length parameter is now optional for |
| v18.4.0, v16.17.0 | Added |
| v15.0.0 | Added in: v15.0.0 |
- `algorithm` <EcdhKeyDeriveParams> | <HkdfParams> | <Pbkdf2Params> | <Argon2Params>
- `baseKey` <CryptoKey>
- `length` <number> | <null> Default: `null`
- Returns: <Promise> Fulfills with an <ArrayBuffer> upon success.
Using the method and parameters specified inalgorithm and the keyingmaterial provided bybaseKey, this method attempts to generatelength bits.
When `length` is not provided or `null`, the maximum number of bits for a given algorithm is generated. This is allowed for the `'ECDH'`, `'X25519'`, and `'X448'` algorithms; for other algorithms `length` is required to be a number.
If successful, the returned promise will be resolved with an<ArrayBuffer>containing the generated data.
The algorithms currently supported include:
subtle.deriveKey(algorithm, baseKey, derivedKeyAlgorithm, extractable, keyUsages)#
History
| Version | Changes |
|---|---|
| v24.8.0 | Argon2 algorithms are now supported. |
| v18.4.0, v16.17.0 | Added |
| v15.0.0 | Added in: v15.0.0 |
- `algorithm` <EcdhKeyDeriveParams> | <HkdfParams> | <Pbkdf2Params> | <Argon2Params>
- `baseKey` <CryptoKey>
- `derivedKeyAlgorithm` <string> | <Algorithm> | <HmacImportParams> | <AesDerivedKeyParams> | <KmacImportParams>
- `extractable` <boolean>
- `keyUsages` <string[]> See Key usages.
- Returns: <Promise> Fulfills with a <CryptoKey> upon success.
Using the method and parameters specified in `algorithm`, and the keying material provided by `baseKey`, this method attempts to generate a new <CryptoKey> based on the method and parameters in `derivedKeyAlgorithm`.

Calling this method is equivalent to calling `subtle.deriveBits()` to generate raw keying material, then passing the result into the `subtle.importKey()` method using the `derivedKeyAlgorithm`, `extractable`, and `keyUsages` parameters as input.
The algorithms currently supported include:
subtle.digest(algorithm, data)#
History
| Version | Changes |
|---|---|
| v24.7.0 | SHA-3 algorithms are now supported. |
| v24.7.0 | SHAKE algorithms are now supported. |
| v15.0.0 | Added in: v15.0.0 |
- `algorithm` <string> | <Algorithm> | <CShakeParams>
- `data` <ArrayBuffer> | <TypedArray> | <DataView> | <Buffer>
- Returns: <Promise> Fulfills with an <ArrayBuffer> upon success.

Using the method identified by `algorithm`, this method attempts to generate a digest of `data`. If successful, the returned promise is resolved with an <ArrayBuffer> containing the computed digest.

If `algorithm` is provided as a <string>, it must be one of:

If `algorithm` is provided as an <Object>, it must have a `name` property whose value is one of the above.
subtle.encapsulateBits(encapsulationAlgorithm, encapsulationKey)#
- `encapsulationAlgorithm` <string> | <Algorithm>
- `encapsulationKey` <CryptoKey>
- Returns: <Promise> Fulfills with <EncapsulatedBits> upon success.

Uses a message recipient's asymmetric public key to encrypt a temporary symmetric key. This encrypted key is the "encapsulated key", represented as <EncapsulatedBits>.
The algorithms currently supported include:
subtle.encapsulateKey(encapsulationAlgorithm, encapsulationKey, sharedKeyAlgorithm, extractable, usages)#
- `encapsulationAlgorithm` <string> | <Algorithm>
- `encapsulationKey` <CryptoKey>
- `sharedKeyAlgorithm` <string> | <Algorithm> | <HmacImportParams> | <AesDerivedKeyParams> | <KmacImportParams>
- `extractable` <boolean>
- `usages` <string[]> See Key usages.
- Returns: <Promise> Fulfills with <EncapsulatedKey> upon success.

Uses a message recipient's asymmetric public key to encrypt a temporary symmetric key. This encrypted key is the "encapsulated key", represented as <EncapsulatedKey>.
The algorithms currently supported include:
subtle.encrypt(algorithm, key, data)#
History
| Version | Changes |
|---|---|
| v24.7.0 | AES-OCB algorithm is now supported. |
| v24.7.0 | ChaCha20-Poly1305 algorithm is now supported. |
| v15.0.0 | Added in: v15.0.0 |
- `algorithm` <RsaOaepParams> | <AesCtrParams> | <AesCbcParams> | <AeadParams>
- `key` <CryptoKey>
- `data` <ArrayBuffer> | <TypedArray> | <DataView> | <Buffer>
- Returns: <Promise> Fulfills with an <ArrayBuffer> upon success.

Using the method and parameters specified by `algorithm` and the keying material provided by `key`, this method attempts to encipher `data`. If successful, the returned promise is resolved with an <ArrayBuffer> containing the encrypted result.
The algorithms currently supported include:
subtle.exportKey(format, key)#
History
| Version | Changes |
|---|---|
| v24.8.0 | KMAC algorithms are now supported. |
| v24.7.0 | ML-KEM algorithms are now supported. |
| v24.7.0 | ChaCha20-Poly1305 algorithm is now supported. |
| v24.7.0 | ML-DSA algorithms are now supported. |
| v18.4.0, v16.17.0 | Added |
| v15.9.0 | Removed |
| v15.0.0 | Added in: v15.0.0 |
- `format` <string> Must be one of `'raw'`, `'pkcs8'`, `'spki'`, `'jwk'`, `'raw-secret'`4, `'raw-public'`4, or `'raw-seed'`4.
- `key` <CryptoKey>
- Returns: <Promise> Fulfills with an <ArrayBuffer> | <Object> upon success.

Exports the given key into the specified format, if supported.

If the <CryptoKey> is not extractable, the returned promise will reject.

When `format` is either `'pkcs8'` or `'spki'` and the export is successful, the returned promise will be resolved with an <ArrayBuffer> containing the exported key data.

When `format` is `'jwk'` and the export is successful, the returned promise will be resolved with a JavaScript object conforming to the JSON Web Key specification.
| Supported Key Algorithm | 'spki' | 'pkcs8' | 'jwk' | 'raw' | 'raw-secret' | 'raw-public' | 'raw-seed' |
|---|---|---|---|---|---|---|---|
| 'AES-CBC' | | | ✔ | ✔ | ✔ | | |
| 'AES-CTR' | | | ✔ | ✔ | ✔ | | |
| 'AES-GCM' | | | ✔ | ✔ | ✔ | | |
| 'AES-KW' | | | ✔ | ✔ | ✔ | | |
| 'AES-OCB'4 | | | ✔ | | ✔ | | |
| 'ChaCha20-Poly1305'4 | | | ✔ | | ✔ | | |
| 'ECDH' | ✔ | ✔ | ✔ | ✔ | | ✔ | |
| 'ECDSA' | ✔ | ✔ | ✔ | ✔ | | ✔ | |
| 'Ed25519' | ✔ | ✔ | ✔ | ✔ | | ✔ | |
| 'Ed448'5 | ✔ | ✔ | ✔ | ✔ | | ✔ | |
| 'HMAC' | | | ✔ | ✔ | ✔ | | |
| 'KMAC128'4 | | | ✔ | | ✔ | | |
| 'KMAC256'4 | | | ✔ | | ✔ | | |
| 'ML-DSA-44'4 | ✔ | ✔ | ✔ | | | ✔ | ✔ |
| 'ML-DSA-65'4 | ✔ | ✔ | ✔ | | | ✔ | ✔ |
| 'ML-DSA-87'4 | ✔ | ✔ | ✔ | | | ✔ | ✔ |
| 'ML-KEM-512'4 | ✔ | ✔ | | | | ✔ | ✔ |
| 'ML-KEM-768'4 | ✔ | ✔ | | | | ✔ | ✔ |
| 'ML-KEM-1024'4 | ✔ | ✔ | | | | ✔ | ✔ |
| 'RSA-OAEP' | ✔ | ✔ | ✔ | | | | |
| 'RSA-PSS' | ✔ | ✔ | ✔ | | | | |
| 'RSASSA-PKCS1-v1_5' | ✔ | ✔ | ✔ | | | | |
subtle.getPublicKey(key, keyUsages)#
- `key` <CryptoKey> A private key from which to derive the corresponding public key.
- `keyUsages` <string[]> See Key usages.
- Returns: <Promise> Fulfills with a <CryptoKey> upon success.

Derives the public key from a given private key.
subtle.generateKey(algorithm, extractable, keyUsages)#
History
| Version | Changes |
|---|---|
| v24.8.0 | KMAC algorithms are now supported. |
| v24.7.0 | ML-KEM algorithms are now supported. |
| v24.7.0 | ChaCha20-Poly1305 algorithm is now supported. |
| v24.7.0 | ML-DSA algorithms are now supported. |
| v15.0.0 | Added in: v15.0.0 |
- `algorithm` <string> | <Algorithm> | <RsaHashedKeyGenParams> | <EcKeyGenParams> | <HmacKeyGenParams> | <AesKeyGenParams> | <KmacKeyGenParams>
- `extractable` <boolean>
- `keyUsages` <string[]> See Key usages.
- Returns: <Promise> Fulfills with a <CryptoKey> | <CryptoKeyPair> upon success.

Using the parameters provided in `algorithm`, this method attempts to generate new keying material. Depending on the algorithm used, either a single <CryptoKey> or a <CryptoKeyPair> is generated.
The <CryptoKeyPair> (public and private key) generating algorithms supported include:

- `'ECDH'`
- `'ECDSA'`
- `'Ed25519'`
- `'Ed448'`5
- `'ML-DSA-44'`4
- `'ML-DSA-65'`4
- `'ML-DSA-87'`4
- `'ML-KEM-512'`4
- `'ML-KEM-768'`4
- `'ML-KEM-1024'`4
- `'RSA-OAEP'`
- `'RSA-PSS'`
- `'RSASSA-PKCS1-v1_5'`
- `'X25519'`
- `'X448'`5
The <CryptoKey> (secret key) generating algorithms supported include:
subtle.importKey(format, keyData, algorithm, extractable, keyUsages)#
History
| Version | Changes |
|---|---|
| v24.8.0 | KMAC algorithms are now supported. |
| v24.7.0 | ML-KEM algorithms are now supported. |
| v24.7.0 | ChaCha20-Poly1305 algorithm is now supported. |
| v24.7.0 | ML-DSA algorithms are now supported. |
| v18.4.0, v16.17.0 | Added |
| v15.9.0 | Removed |
| v15.0.0 | Added in: v15.0.0 |
- `format` <string> Must be one of `'raw'`, `'pkcs8'`, `'spki'`, `'jwk'`, `'raw-secret'`4, `'raw-public'`4, or `'raw-seed'`4.
- `keyData` <ArrayBuffer> | <TypedArray> | <DataView> | <Buffer> | <Object>
- `algorithm` <string> | <Algorithm> | <RsaHashedImportParams> | <EcKeyImportParams> | <HmacImportParams> | <KmacImportParams>
- `extractable` <boolean>
- `keyUsages` <string[]> See Key usages.
- Returns: <Promise> Fulfills with a <CryptoKey> upon success.

This method attempts to interpret the provided `keyData` as the given `format` to create a <CryptoKey> instance using the provided `algorithm`, `extractable`, and `keyUsages` arguments. If the import is successful, the returned promise will be resolved with a <CryptoKey> representation of the key material.

If importing KDF algorithm keys, `extractable` must be `false`.
The algorithms currently supported include:
| Supported Key Algorithm | 'spki' | 'pkcs8' | 'jwk' | 'raw' | 'raw-secret' | 'raw-public' | 'raw-seed' |
|---|---|---|---|---|---|---|---|
| 'AES-CBC' | | | ✔ | ✔ | ✔ | | |
| 'AES-CTR' | | | ✔ | ✔ | ✔ | | |
| 'AES-GCM' | | | ✔ | ✔ | ✔ | | |
| 'AES-KW' | | | ✔ | ✔ | ✔ | | |
| 'AES-OCB'4 | | | ✔ | | ✔ | | |
| 'Argon2d'4 | | | | | ✔ | | |
| 'Argon2i'4 | | | | | ✔ | | |
| 'Argon2id'4 | | | | | ✔ | | |
| 'ChaCha20-Poly1305'4 | | | ✔ | | ✔ | | |
| 'ECDH' | ✔ | ✔ | ✔ | ✔ | | ✔ | |
| 'ECDSA' | ✔ | ✔ | ✔ | ✔ | | ✔ | |
| 'Ed25519' | ✔ | ✔ | ✔ | ✔ | | ✔ | |
| 'Ed448'5 | ✔ | ✔ | ✔ | ✔ | | ✔ | |
| 'HKDF' | | | | ✔ | ✔ | | |
| 'HMAC' | | | ✔ | ✔ | ✔ | | |
| 'KMAC128'4 | | | ✔ | | ✔ | | |
| 'KMAC256'4 | | | ✔ | | ✔ | | |
| 'ML-DSA-44'4 | ✔ | ✔ | ✔ | | | ✔ | ✔ |
| 'ML-DSA-65'4 | ✔ | ✔ | ✔ | | | ✔ | ✔ |
| 'ML-DSA-87'4 | ✔ | ✔ | ✔ | | | ✔ | ✔ |
| 'ML-KEM-512'4 | ✔ | ✔ | | | | ✔ | ✔ |
| 'ML-KEM-768'4 | ✔ | ✔ | | | | ✔ | ✔ |
| 'ML-KEM-1024'4 | ✔ | ✔ | | | | ✔ | ✔ |
| 'PBKDF2' | | | | ✔ | ✔ | | |
| 'RSA-OAEP' | ✔ | ✔ | ✔ | | | | |
| 'RSA-PSS' | ✔ | ✔ | ✔ | | | | |
| 'RSASSA-PKCS1-v1_5' | ✔ | ✔ | ✔ | | | | |
| 'X25519' | ✔ | ✔ | ✔ | ✔ | | ✔ | |
| 'X448'5 | ✔ | ✔ | ✔ | ✔ | | ✔ | |
subtle.sign(algorithm, key, data)#
History
| Version | Changes |
|---|---|
| v24.8.0 | KMAC algorithms are now supported. |
| v24.7.0 | ML-DSA algorithms are now supported. |
| v18.4.0, v16.17.0 | Added |
| v15.0.0 | Added in: v15.0.0 |
- `algorithm` <string> | <Algorithm> | <RsaPssParams> | <EcdsaParams> | <ContextParams> | <KmacParams>
- `key` <CryptoKey>
- `data` <ArrayBuffer> | <TypedArray> | <DataView> | <Buffer>
- Returns: <Promise> Fulfills with an <ArrayBuffer> upon success.

Using the method and parameters given by `algorithm` and the keying material provided by `key`, this method attempts to generate a cryptographic signature of `data`. If successful, the returned promise is resolved with an <ArrayBuffer> containing the generated signature.
The algorithms currently supported include:
subtle.unwrapKey(format, wrappedKey, unwrappingKey, unwrapAlgo, unwrappedKeyAlgo, extractable, keyUsages)#
History
| Version | Changes |
|---|---|
| v24.7.0 | AES-OCB algorithm is now supported. |
| v24.7.0 | ChaCha20-Poly1305 algorithm is now supported. |
| v15.0.0 | Added in: v15.0.0 |
- `format` <string> Must be one of `'raw'`, `'pkcs8'`, `'spki'`, `'jwk'`, `'raw-secret'`4, `'raw-public'`4, or `'raw-seed'`4.
- `wrappedKey` <ArrayBuffer> | <TypedArray> | <DataView> | <Buffer>
- `unwrappingKey` <CryptoKey>
- `unwrapAlgo` <string> | <Algorithm> | <RsaOaepParams> | <AesCtrParams> | <AesCbcParams> | <AeadParams>
- `unwrappedKeyAlgo` <string> | <Algorithm> | <RsaHashedImportParams> | <EcKeyImportParams> | <HmacImportParams> | <KmacImportParams>
- `extractable` <boolean>
- `keyUsages` <string[]> See Key usages.
- Returns: <Promise> Fulfills with a <CryptoKey> upon success.

In cryptography, "wrapping a key" refers to exporting and then encrypting the keying material. This method attempts to decrypt a wrapped key and create a <CryptoKey> instance. It is equivalent to calling `subtle.decrypt()` first on the encrypted key data (using the `wrappedKey`, `unwrapAlgo`, and `unwrappingKey` arguments as input) then passing the results to the `subtle.importKey()` method using the `unwrappedKeyAlgo`, `extractable`, and `keyUsages` arguments as inputs. If successful, the returned promise is resolved with a <CryptoKey> object.
The wrapping algorithms currently supported include:
The unwrapped key algorithms supported include:
subtle.verify(algorithm, key, signature, data)#
History
| Version | Changes |
|---|---|
| v24.8.0 | KMAC algorithms are now supported. |
| v24.7.0 | ML-DSA algorithms are now supported. |
| v18.4.0, v16.17.0 | Added |
| v15.0.0 | Added in: v15.0.0 |
- `algorithm` <string> | <Algorithm> | <RsaPssParams> | <EcdsaParams> | <ContextParams> | <KmacParams>
- `key` <CryptoKey>
- `signature` <ArrayBuffer> | <TypedArray> | <DataView> | <Buffer>
- `data` <ArrayBuffer> | <TypedArray> | <DataView> | <Buffer>
- Returns: <Promise> Fulfills with a <boolean> upon success.

Using the method and parameters given in `algorithm` and the keying material provided by `key`, this method attempts to verify that `signature` is a valid cryptographic signature of `data`. The returned promise is resolved with either `true` or `false`.
The algorithms currently supported include:
subtle.wrapKey(format, key, wrappingKey, wrapAlgo)#
History
| Version | Changes |
|---|---|
| v24.7.0 | AES-OCB algorithm is now supported. |
| v24.7.0 | ChaCha20-Poly1305 algorithm is now supported. |
| v15.0.0 | Added in: v15.0.0 |
- `format` <string> Must be one of `'raw'`, `'pkcs8'`, `'spki'`, `'jwk'`, `'raw-secret'`4, `'raw-public'`4, or `'raw-seed'`4.
- `key` <CryptoKey>
- `wrappingKey` <CryptoKey>
- `wrapAlgo` <string> | <Algorithm> | <RsaOaepParams> | <AesCtrParams> | <AesCbcParams> | <AeadParams>
- Returns: <Promise> Fulfills with an <ArrayBuffer> upon success.

In cryptography, "wrapping a key" refers to exporting and then encrypting the keying material. This method exports the keying material into the format identified by `format`, then encrypts it using the method and parameters specified by `wrapAlgo` and the keying material provided by `wrappingKey`. It is the equivalent of calling `subtle.exportKey()` using `format` and `key` as the arguments, then passing the result to the `subtle.encrypt()` method using `wrappingKey` and `wrapAlgo` as inputs. If successful, the returned promise will be resolved with an <ArrayBuffer> containing the encrypted key data.
The wrapping algorithms currently supported include:
Algorithm parameters#
The algorithm parameter objects define the methods and parameters used by the various <SubtleCrypto> methods. While described here as "classes", they are simple JavaScript dictionary objects.
Class: AeadParams#
aeadParams.additionalData#
Extra input that is not encrypted but is included in the authentication of the data. The use of `additionalData` is optional.
aeadParams.iv#
- Type:<ArrayBuffer> |<TypedArray> |<DataView> |<Buffer>
The initialization vector must be unique for every encryption operation using agiven key.
Class: AesDerivedKeyParams#
Class: AesCbcParams#
aesCbcParams.iv#
- Type: <ArrayBuffer> | <TypedArray> | <DataView> | <Buffer>
Provides the initialization vector. It must be exactly 16 bytes in length and should be unpredictable and cryptographically random.
Class: AesCtrParams#
aesCtrParams.counter#
- Type: <ArrayBuffer> | <TypedArray> | <DataView> | <Buffer>
The initial value of the counter block. This must be exactly 16 bytes long.
The AES-CTR method uses the rightmost `length` bits of the block as the counter and the remaining bits as the nonce.
Class: AesKeyAlgorithm#
Class: AesKeyGenParams#
Class: Argon2Params#
argon2Params.associatedData#
- Type:<ArrayBuffer> |<TypedArray> |<DataView> |<Buffer>
Represents the optional associated data.
argon2Params.memory#
- Type:<number>
Represents the memory size in kibibytes. It must be at least 8 times the degree of parallelism.
argon2Params.nonce#
- Type:<ArrayBuffer> |<TypedArray> |<DataView> |<Buffer>
Represents the nonce, which is a salt for password hashing applications.
argon2Params.secretValue#
- Type:<ArrayBuffer> |<TypedArray> |<DataView> |<Buffer>
Represents the optional secret value.
Class: ContextParams#
contextParams.name#
contextParams.context#
History
| Version | Changes |
|---|---|
| v24.8.0 | Non-empty context is now supported. |
| v24.7.0 | Added in: v24.7.0 |
The `context` member represents the optional context data to associate with the message.
Class: CShakeParams#
cShakeParams.customization#
The `customization` member represents the customization string. The Node.js Web Crypto API implementation only supports zero-length customization, which is equivalent to not providing customization at all.
cShakeParams.functionName#
The `functionName` member represents the function name, used by NIST to define functions based on cSHAKE. The Node.js Web Crypto API implementation only supports zero-length `functionName`, which is equivalent to not providing `functionName` at all.
Class: EcdhKeyDeriveParams#
ecdhKeyDeriveParams.public#
- Type: <CryptoKey>
ECDH key derivation operates by taking as input one party's private key and another party's public key, using both to generate a common shared secret. The `ecdhKeyDeriveParams.public` property is set to the other party's public key.
Class:EcdsaParams#
ecdsaParams.hash#
History
| Version | Changes |
|---|---|
| v24.7.0 | SHA-3 algorithms are now supported. |
| v15.0.0 | Added in: v15.0.0 |
- Type: <string> | <Algorithm>
If represented as a <string>, the value must be one of:
If represented as an <Algorithm>, the object's `name` property must be one of the above listed values.
Class: EcKeyAlgorithm#
Class: EcKeyGenParams#
Class: EcKeyImportParams#
Class: EncapsulatedBits#
A temporary symmetric secret key (represented as <ArrayBuffer>) for message encryption, along with the ciphertext (which can be transmitted to the message recipient alongside the message) of that key encrypted to the recipient. The recipient uses their private key to recover the shared key, which then allows them to decrypt the message.
Class: EncapsulatedKey#
A temporary symmetric secret key (represented as <CryptoKey>) for message encryption, along with the ciphertext (which can be transmitted to the message recipient alongside the message) of that key encrypted to the recipient. The recipient uses their private key to recover the shared key, which then allows them to decrypt the message.
Class: HkdfParams#
hkdfParams.hash#
History
| Version | Changes |
|---|---|
| v24.7.0 | SHA-3 algorithms are now supported. |
| v15.0.0 | Added in: v15.0.0 |
- Type: <string> | <Algorithm>
If represented as a <string>, the value must be one of:
If represented as an <Algorithm>, the object's `name` property must be one of the above listed values.
hkdfParams.info#
- Type:<ArrayBuffer> |<TypedArray> |<DataView> |<Buffer>
Provides application-specific contextual input to the HKDF algorithm. This can be zero-length but must be provided.
hkdfParams.salt#
- Type:<ArrayBuffer> |<TypedArray> |<DataView> |<Buffer>
The salt value significantly improves the strength of the HKDF algorithm. It should be random or pseudorandom and should be the same length as the output of the digest function (for instance, if using `'SHA-256'` as the digest, the salt should be 256 bits of random data).
Class:HmacImportParams#
hmacImportParams.hash#
History
| Version | Changes |
|---|---|
| v24.7.0 | SHA-3 algorithms are now supported. |
| v15.0.0 | Added in: v15.0.0 |
- Type: <string> | <Algorithm>
If represented as a <string>, the value must be one of:
If represented as an <Algorithm>, the object's `name` property must be one of the above listed values.
Class: HmacKeyAlgorithm#
Class: HmacKeyGenParams#
hmacKeyGenParams.hash#
History
| Version | Changes |
|---|---|
| v24.7.0 | SHA-3 algorithms are now supported. |
| v15.0.0 | Added in: v15.0.0 |
- Type: <string> | <Algorithm>
If represented as a <string>, the value must be one of:
If represented as an <Algorithm>, the object's `name` property must be one of the above listed values.
Class: KmacImportParams#
Class: KmacKeyAlgorithm#
Class: KmacKeyGenParams#
Class: KmacParams#
kmacParams.customization#
The `customization` member represents the optional customization string.
Class: Pbkdf2Params#
pbkdf2Params.hash#
History
| Version | Changes |
|---|---|
| v24.7.0 | SHA-3 algorithms are now supported. |
| v15.0.0 | Added in: v15.0.0 |
- Type: <string> | <Algorithm>
If represented as a <string>, the value must be one of:
If represented as an <Algorithm>, the object's `name` property must be one of the above listed values.
pbkdf2Params.iterations#
- Type:<number>
The number of iterations the PBKDF2 algorithm should make when deriving bits.
pbkdf2Params.salt#
- Type:<ArrayBuffer> |<TypedArray> |<DataView> |<Buffer>
Should be at least 16 random or pseudorandom bytes.
Class: RsaHashedImportParams#
rsaHashedImportParams.hash#
History
| Version | Changes |
|---|---|
| v24.7.0 | SHA-3 algorithms are now supported. |
| v15.0.0 | Added in: v15.0.0 |
- Type: <string> | <Algorithm>
If represented as a <string>, the value must be one of:
If represented as an <Algorithm>, the object's `name` property must be one of the above listed values.
Class: RsaHashedKeyAlgorithm#
Class: RsaHashedKeyGenParams#
rsaHashedKeyGenParams.hash#
History
| Version | Changes |
|---|---|
| v24.7.0 | SHA-3 algorithms are now supported. |
| v15.0.0 | Added in: v15.0.0 |
- Type: <string> | <Algorithm>
If represented as a <string>, the value must be one of:
If represented as an <Algorithm>, the object's `name` property must be one of the above listed values.
rsaHashedKeyGenParams.modulusLength#
- Type:<number>
The length in bits of the RSA modulus. As a best practice, this should be at least `2048`.
rsaHashedKeyGenParams.name#
- Type: <string> Must be one of `'RSASSA-PKCS1-v1_5'`, `'RSA-PSS'`, or `'RSA-OAEP'`.
rsaHashedKeyGenParams.publicExponent#
- Type:<Uint8Array>
The RSA public exponent. This must be a <Uint8Array> containing a big-endian, unsigned integer that must fit within 32 bits. The <Uint8Array> may contain an arbitrary number of leading zero bits. The value must be a prime number. Unless there is reason to use a different value, use `new Uint8Array([1, 0, 1])` (65537) as the public exponent.
Class: RsaOaepParams#
rsaOaepParams.label#
- Type:<ArrayBuffer> |<TypedArray> |<DataView> |<Buffer>
An additional collection of bytes that will not be encrypted, but will be boundto the generated ciphertext.
The `rsaOaepParams.label` parameter is optional.
Class: RsaPssParams#
rsaPssParams.saltLength#
- Type:<number>
The length (in bytes) of the random salt to use.
Footnotes
4. See Modern Algorithms in the Web Cryptography API
5. See Secure Curves in the Web Cryptography API
Web Streams API#
History
| Version | Changes |
|---|---|
| v21.0.0 | No longer experimental. |
| v18.0.0 | Use of this API no longer emits a runtime warning. |
| v16.5.0 | Added in: v16.5.0 |
An implementation of the WHATWG Streams Standard.
Overview#
The WHATWG Streams Standard (or "web streams") defines an API for handling streaming data. It is similar to the Node.js Streams API but emerged later and has become the "standard" API for streaming data across many JavaScript environments.
There are three primary types of objects:
- `ReadableStream` - Represents a source of streaming data.
- `WritableStream` - Represents a destination for streaming data.
- `TransformStream` - Represents an algorithm for transforming streaming data.
Example ReadableStream#
This example creates a simple `ReadableStream` that pushes the current `performance.now()` timestamp once every second, forever. An async iterable is used to read the data from the stream.

```mjs
import { ReadableStream } from 'node:stream/web';
import { setInterval as every } from 'node:timers/promises';
import { performance } from 'node:perf_hooks';

const SECOND = 1000;

const stream = new ReadableStream({
  async start(controller) {
    for await (const _ of every(SECOND))
      controller.enqueue(performance.now());
  },
});

for await (const value of stream)
  console.log(value);
```

```cjs
const { ReadableStream } = require('node:stream/web');
const { setInterval: every } = require('node:timers/promises');
const { performance } = require('node:perf_hooks');

const SECOND = 1000;

const stream = new ReadableStream({
  async start(controller) {
    for await (const _ of every(SECOND))
      controller.enqueue(performance.now());
  },
});

(async () => {
  for await (const value of stream)
    console.log(value);
})();
```
Node.js streams interoperability#
Node.js streams can be converted to web streams and vice versa via the `toWeb` and `fromWeb` methods present on `stream.Readable`, `stream.Writable`, and `stream.Duplex` objects.
For more details refer to the relevant documentation:
API#
Class: ReadableStream#
History
| Version | Changes |
|---|---|
| v18.0.0 | This class is now exposed on the global object. |
| v16.5.0 | Added in: v16.5.0 |
new ReadableStream([underlyingSource [, strategy]])#
- `underlyingSource` <Object>
  - `start` <Function> A user-defined function that is invoked immediately when the `ReadableStream` is created.
    - `controller` <ReadableStreamDefaultController> | <ReadableByteStreamController>
    - Returns: `undefined` or a promise fulfilled with `undefined`.
  - `pull` <Function> A user-defined function that is called repeatedly when the `ReadableStream` internal queue is not full. The operation may be sync or async. If async, the function will not be called again until the previously returned promise is fulfilled.
    - `controller` <ReadableStreamDefaultController> | <ReadableByteStreamController>
    - Returns: A promise fulfilled with `undefined`.
  - `cancel` <Function> A user-defined function that is called when the `ReadableStream` is canceled.
    - `reason` <any>
    - Returns: A promise fulfilled with `undefined`.
  - `type` <string> Must be `'bytes'` or `undefined`.
  - `autoAllocateChunkSize` <number> Used only when `type` is equal to `'bytes'`. When set to a non-zero value a view buffer is automatically allocated to `ReadableByteStreamController.byobRequest`. When not set, one must use the stream's internal queues to transfer data via the default reader `ReadableStreamDefaultReader`.
- `strategy` <Object>
  - `highWaterMark` <number> The maximum internal queue size before backpressure is applied.
  - `size` <Function> A user-defined function used to identify the size of each chunk of data.
readableStream.locked#
- Type: <boolean> Set to `true` if there is an active reader for this <ReadableStream>.

The `readableStream.locked` property is `false` by default, and is switched to `true` while there is an active reader consuming the stream's data.
readableStream.cancel([reason])#
- `reason` <any>
- Returns: A promise fulfilled with `undefined` once cancelation has been completed.
readableStream.getReader([options])#
- `options` <Object>
  - `mode` <string> `'byob'` or `undefined`
- Returns: <ReadableStreamDefaultReader> | <ReadableStreamBYOBReader>

```mjs
import { ReadableStream } from 'node:stream/web';

const stream = new ReadableStream();

const reader = stream.getReader();

console.log(await reader.read());
```

```cjs
const { ReadableStream } = require('node:stream/web');

const stream = new ReadableStream();

const reader = stream.getReader();

reader.read().then(console.log);
```

Causes `readableStream.locked` to be `true`.
readableStream.pipeThrough(transform[, options])#
- `transform` <Object>
  - `readable` <ReadableStream> The `ReadableStream` to which `transform.writable` will push the potentially modified data it receives from this `ReadableStream`.
  - `writable` <WritableStream> The `WritableStream` to which this `ReadableStream`'s data will be written.
- `options` <Object>
  - `preventAbort` <boolean> When `true`, errors in this `ReadableStream` will not cause `transform.writable` to be aborted.
  - `preventCancel` <boolean> When `true`, errors in the destination `transform.writable` do not cause this `ReadableStream` to be canceled.
  - `preventClose` <boolean> When `true`, closing this `ReadableStream` does not cause `transform.writable` to be closed.
  - `signal` <AbortSignal> Allows the transfer of data to be canceled using an <AbortController>.
- Returns: <ReadableStream> From `transform.readable`.

Connects this <ReadableStream> to the pair of <ReadableStream> and <WritableStream> provided in the `transform` argument such that the data from this <ReadableStream> is written into `transform.writable`, possibly transformed, then pushed to `transform.readable`. Once the pipeline is configured, `transform.readable` is returned.

Causes `readableStream.locked` to be `true` while the pipe operation is active.
```mjs
import {
  ReadableStream,
  TransformStream,
} from 'node:stream/web';

const stream = new ReadableStream({
  start(controller) {
    controller.enqueue('a');
  },
});

const transform = new TransformStream({
  transform(chunk, controller) {
    controller.enqueue(chunk.toUpperCase());
  },
});

const transformedStream = stream.pipeThrough(transform);

for await (const chunk of transformedStream)
  console.log(chunk);
  // Prints: A
```

```cjs
const {
  ReadableStream,
  TransformStream,
} = require('node:stream/web');

const stream = new ReadableStream({
  start(controller) {
    controller.enqueue('a');
  },
});

const transform = new TransformStream({
  transform(chunk, controller) {
    controller.enqueue(chunk.toUpperCase());
  },
});

const transformedStream = stream.pipeThrough(transform);

(async () => {
  for await (const chunk of transformedStream)
    console.log(chunk);
    // Prints: A
})();
```
readableStream.pipeTo(destination[, options])#
- `destination` <WritableStream> A <WritableStream> to which this `ReadableStream`'s data will be written.
- `options` <Object>
  - `preventAbort` <boolean> When `true`, errors in this `ReadableStream` will not cause `destination` to be aborted.
  - `preventCancel` <boolean> When `true`, errors in the `destination` will not cause this `ReadableStream` to be canceled.
  - `preventClose` <boolean> When `true`, closing this `ReadableStream` does not cause `destination` to be closed.
  - `signal` <AbortSignal> Allows the transfer of data to be canceled using an <AbortController>.
- Returns: A promise fulfilled with `undefined`

Causes `readableStream.locked` to be `true` while the pipe operation is active.
readableStream.tee()#
History
| Version | Changes |
|---|---|
| v18.10.0, v16.18.0 | Support teeing a readable byte stream. |
| v16.5.0 | Added in: v16.5.0 |
- Returns:<ReadableStream[]>
Returns a pair of new <ReadableStream> instances to which this `ReadableStream`'s data will be forwarded. Each will receive the same data.

Causes `readableStream.locked` to be `true`.
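A sketch of teeing in action (assuming Node.js 18+, where `ReadableStream` is exposed on the global object); both branches receive the same chunk, and the original stream becomes locked:

```javascript
// Sketch: tee a stream and read the same chunk from both branches.
async function teeDemo() {
  const stream = new ReadableStream({
    start(controller) {
      controller.enqueue('shared chunk');
      controller.close();
    },
  });

  const [a, b] = stream.tee();
  const first = await a.getReader().read();
  const second = await b.getReader().read();
  return [first.value, second.value, stream.locked];
}

teeDemo().then(console.log); // Prints: [ 'shared chunk', 'shared chunk', true ]
```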
readableStream.values([options])#
- `options` <Object>
  - `preventCancel` <boolean> When `true`, prevents the <ReadableStream> from being closed when the async iterator abruptly terminates. Default: `false`.

Creates and returns an async iterator usable for consuming this `ReadableStream`'s data.

Causes `readableStream.locked` to be `true` while the async iterator is active.

```mjs
import { Buffer } from 'node:buffer';

const stream = new ReadableStream(getSomeSource());

for await (const chunk of stream.values({ preventCancel: true }))
  console.log(Buffer.from(chunk).toString());
```

Async Iteration#
The <ReadableStream> object supports the async iterator protocol using `for await` syntax.

```mjs
import { Buffer } from 'node:buffer';

const stream = new ReadableStream(getSomeSource());

for await (const chunk of stream)
  console.log(Buffer.from(chunk).toString());
```

The async iterator will consume the <ReadableStream> until it terminates.

By default, if the async iterator exits early (via either a `break`, `return`, or a `throw`), the <ReadableStream> will be closed. To prevent automatic closing of the <ReadableStream>, use the `readableStream.values()` method to acquire the async iterator and set the `preventCancel` option to `true`.

The <ReadableStream> must not be locked (that is, it must not have an existing active reader). During the async iteration, the <ReadableStream> will be locked.
Transferring with postMessage()#
A <ReadableStream> instance can be transferred using a <MessagePort>.

```mjs
const stream = new ReadableStream(getReadableSourceSomehow());

const { port1, port2 } = new MessageChannel();

port1.onmessage = ({ data }) => {
  data.getReader().read().then((chunk) => {
    console.log(chunk);
  });
};

port2.postMessage(stream, [stream]);
```

ReadableStream.from(iterable)#
- `iterable` <Iterable> Object implementing the `Symbol.asyncIterator` or `Symbol.iterator` iterable protocol.
A utility method that creates a new<ReadableStream> from an iterable.
```mjs
import { ReadableStream } from 'node:stream/web';

async function* asyncIterableGenerator() {
  yield 'a';
  yield 'b';
  yield 'c';
}

const stream = ReadableStream.from(asyncIterableGenerator());

for await (const chunk of stream)
  console.log(chunk); // Prints: 'a', 'b', 'c'
```

```cjs
const { ReadableStream } = require('node:stream/web');

async function* asyncIterableGenerator() {
  yield 'a';
  yield 'b';
  yield 'c';
}

(async () => {
  const stream = ReadableStream.from(asyncIterableGenerator());

  for await (const chunk of stream)
    console.log(chunk); // Prints: 'a', 'b', 'c'
})();
```
To pipe the resulting <ReadableStream> into a <WritableStream>, the <Iterable> should yield a sequence of <Buffer>, <TypedArray>, or <DataView> objects.

```mjs
import { ReadableStream } from 'node:stream/web';
import { Buffer } from 'node:buffer';

async function* asyncIterableGenerator() {
  yield Buffer.from('a');
  yield Buffer.from('b');
  yield Buffer.from('c');
}

const stream = ReadableStream.from(asyncIterableGenerator());

await stream.pipeTo(createWritableStreamSomehow());
```

```cjs
const { ReadableStream } = require('node:stream/web');
const { Buffer } = require('node:buffer');

async function* asyncIterableGenerator() {
  yield Buffer.from('a');
  yield Buffer.from('b');
  yield Buffer.from('c');
}

const stream = ReadableStream.from(asyncIterableGenerator());

(async () => {
  await stream.pipeTo(createWritableStreamSomehow());
})();
```
Class: ReadableStreamDefaultReader#
History
| Version | Changes |
|---|---|
| v18.0.0 | This class is now exposed on the global object. |
| v16.5.0 | Added in: v16.5.0 |
By default, calling `readableStream.getReader()` with no arguments will return an instance of `ReadableStreamDefaultReader`. The default reader treats the chunks of data passed through the stream as opaque values, which allows the <ReadableStream> to work with generally any JavaScript value.
new ReadableStreamDefaultReader(stream)#
stream<ReadableStream>
Creates a new <ReadableStreamDefaultReader> that is locked to the given <ReadableStream>.
readableStreamDefaultReader.cancel([reason])#
- `reason` <any>
- Returns: A promise fulfilled with `undefined`.

Cancels the <ReadableStream> and returns a promise that is fulfilled when the underlying stream has been canceled.
readableStreamDefaultReader.closed#
- Type: <Promise> Fulfilled with `undefined` when the associated <ReadableStream> is closed or rejected if the stream errors or the reader's lock is released before the stream finishes closing.
readableStreamDefaultReader.read()#
Requests the next chunk of data from the underlying <ReadableStream> and returns a promise that is fulfilled with the data once it is available.
readableStreamDefaultReader.releaseLock()#
Releases this reader's lock on the underlying<ReadableStream>.
Class: ReadableStreamBYOBReader#
History
| Version | Changes |
|---|---|
| v18.0.0 | This class is now exposed on the global object. |
| v16.5.0 | Added in: v16.5.0 |
The `ReadableStreamBYOBReader` is an alternative consumer for byte-oriented <ReadableStream>s (those that are created with `underlyingSource.type` set equal to `'bytes'` when the `ReadableStream` was created).

BYOB is short for "bring your own buffer". This is a pattern that allows for more efficient reading of byte-oriented data that avoids extraneous copying.
```mjs
import { open } from 'node:fs/promises';
import { ReadableStream } from 'node:stream/web';
import { Buffer } from 'node:buffer';

class Source {
  type = 'bytes';
  autoAllocateChunkSize = 1024;

  async start(controller) {
    this.file = await open(new URL(import.meta.url));
    this.controller = controller;
  }

  async pull(controller) {
    const view = controller.byobRequest?.view;
    const {
      bytesRead,
    } = await this.file.read({
      buffer: view,
      offset: view.byteOffset,
      length: view.byteLength,
    });

    if (bytesRead === 0) {
      await this.file.close();
      this.controller.close();
    }
    controller.byobRequest.respond(bytesRead);
  }
}

const stream = new ReadableStream(new Source());

async function read(stream) {
  const reader = stream.getReader({ mode: 'byob' });

  const chunks = [];
  let result;
  do {
    result = await reader.read(Buffer.alloc(100));
    if (result.value !== undefined)
      chunks.push(Buffer.from(result.value));
  } while (!result.done);

  return Buffer.concat(chunks);
}

const data = await read(stream);
console.log(Buffer.from(data).toString());
```

new ReadableStreamBYOBReader(stream)#
stream<ReadableStream>
Creates a new `ReadableStreamBYOBReader` that is locked to the given <ReadableStream>.
readableStreamBYOBReader.cancel([reason])#
- `reason` <any>
- Returns: A promise fulfilled with `undefined`.

Cancels the <ReadableStream> and returns a promise that is fulfilled when the underlying stream has been canceled.
readableStreamBYOBReader.closed#
- Type: <Promise> Fulfilled with `undefined` when the associated <ReadableStream> is closed or rejected if the stream errors or the reader's lock is released before the stream finishes closing.
readableStreamBYOBReader.read(view[, options])#
History
| Version | Changes |
|---|---|
| v21.7.0, v20.17.0 | Added |
| v16.5.0 | Added in: v16.5.0 |
- `view` <Buffer> | <TypedArray> | <DataView>
- `options` <Object>
  - `min` <number> When set, the returned promise will only be fulfilled as soon as `min` number of elements are available. When not set, the promise fulfills when at least one element is available.
- Returns: A promise fulfilled with an object:
  - `value` <TypedArray> | <DataView>
  - `done` <boolean>

Requests the next chunk of data from the underlying <ReadableStream> and returns a promise that is fulfilled with the data once it is available.

Do not pass a pooled <Buffer> object instance in to this method. Pooled `Buffer` objects are created using `Buffer.allocUnsafe()` or `Buffer.from()`, or are often returned by various `node:fs` module callbacks. These types of `Buffer`s use a shared underlying <ArrayBuffer> object that contains all of the data from all of the pooled `Buffer` instances. When a `Buffer`, <TypedArray>, or <DataView> is passed in to `readableStreamBYOBReader.read()`, the view's underlying `ArrayBuffer` is detached, invalidating all existing views that may exist on that `ArrayBuffer`. This can have disastrous consequences for your application.
readableStreamBYOBReader.releaseLock()#
Releases this reader's lock on the underlying <ReadableStream>.
Class: ReadableStreamDefaultController#
Every <ReadableStream> has a controller that is responsible for the internal state and management of the stream's queue. The `ReadableStreamDefaultController` is the default controller implementation for `ReadableStream`s that are not byte-oriented.
readableStreamDefaultController.close()#
Closes the <ReadableStream> to which this controller is associated.
readableStreamDefaultController.desiredSize#
- Type: <number>

Returns the amount of data remaining to fill the <ReadableStream>'s queue.
readableStreamDefaultController.enqueue([chunk])#
- `chunk` <any>

Appends a new chunk of data to the <ReadableStream>'s queue.
readableStreamDefaultController.error([error])#
- `error` <any>

Signals an error that causes the <ReadableStream> to error and close.
Class: ReadableByteStreamController#
History
| Version | Changes |
|---|---|
| v18.10.0 | Support handling a BYOB pull request from a released reader. |
| v16.5.0 | Added in: v16.5.0 |
Every <ReadableStream> has a controller that is responsible for the internal state and management of the stream's queue. The `ReadableByteStreamController` is for byte-oriented `ReadableStream`s.
readableByteStreamController.close()#
Closes the <ReadableStream> to which this controller is associated.
readableByteStreamController.desiredSize#
- Type: <number>

Returns the amount of data remaining to fill the <ReadableStream>'s queue.
readableByteStreamController.enqueue(chunk)#
- `chunk` <Buffer> | <TypedArray> | <DataView>

Appends a new chunk of data to the <ReadableStream>'s queue.
readableByteStreamController.error([error])#
- `error` <any>

Signals an error that causes the <ReadableStream> to error and close.
Class: ReadableStreamBYOBRequest#
History
| Version | Changes |
|---|---|
| v18.0.0 | This class is now exposed on the global object. |
| v16.5.0 | Added in: v16.5.0 |
When using `ReadableByteStreamController` in byte-oriented streams, and when using the `ReadableStreamBYOBReader`, the `readableByteStreamController.byobRequest` property provides access to a `ReadableStreamBYOBRequest` instance that represents the current read request. The object is used to gain access to the `ArrayBuffer`/`TypedArray` that has been provided for the read request to fill, and provides methods for signaling that the data has been provided.
readableStreamBYOBRequest.respond(bytesWritten)#
- `bytesWritten` <number>

Signals that a `bytesWritten` number of bytes have been written to `readableStreamBYOBRequest.view`.
readableStreamBYOBRequest.respondWithNewView(view)#
- `view` <Buffer> | <TypedArray> | <DataView>

Signals that the request has been fulfilled with bytes written to a new `Buffer`, `TypedArray`, or `DataView`.
Class: WritableStream#
History
| Version | Changes |
|---|---|
| v18.0.0 | This class is now exposed on the global object. |
| v16.5.0 | Added in: v16.5.0 |
The `WritableStream` is a destination to which stream data is sent.
```mjs
import { WritableStream } from 'node:stream/web';

const stream = new WritableStream({
  write(chunk) {
    console.log(chunk);
  },
});

await stream.getWriter().write('Hello World');
```

new WritableStream([underlyingSink[, strategy]])#
- `underlyingSink` <Object>
  - `start` <Function> A user-defined function that is invoked immediately when the `WritableStream` is created.
    - `controller` <WritableStreamDefaultController>
    - Returns: `undefined` or a promise fulfilled with `undefined`.
  - `write` <Function> A user-defined function that is invoked when a chunk of data has been written to the `WritableStream`.
    - `chunk` <any>
    - `controller` <WritableStreamDefaultController>
    - Returns: A promise fulfilled with `undefined`.
  - `close` <Function> A user-defined function that is called when the `WritableStream` is closed.
    - Returns: A promise fulfilled with `undefined`.
  - `abort` <Function> A user-defined function that is called to abruptly close the `WritableStream`.
    - `reason` <any>
    - Returns: A promise fulfilled with `undefined`.
  - `type` <any> The `type` option is reserved for future use and must be `undefined`.
- `strategy` <Object>
  - `highWaterMark` <number> The maximum internal queue size before backpressure is applied.
  - `size` <Function> A user-defined function used to identify the size of each chunk of data.
writableStream.abort([reason])#
- `reason` <any>
- Returns: A promise fulfilled with `undefined`.

Abruptly terminates the `WritableStream`. All queued writes will be canceled with their associated promises rejected.
writableStream.close()#
- Returns: A promise fulfilled with `undefined`.

Closes the `WritableStream` when no additional writes are expected.
writableStream.getWriter()#
- Returns: <WritableStreamDefaultWriter>

Creates and returns a new writer instance that can be used to write data into the `WritableStream`.
writableStream.locked#
- Type: <boolean>

The `writableStream.locked` property is `false` by default, and is switched to `true` while there is an active writer attached to this `WritableStream`.
Transferring with postMessage()#
A <WritableStream> instance can be transferred using a <MessagePort>.
```js
const stream = new WritableStream(getWritableSinkSomehow());

const { port1, port2 } = new MessageChannel();
port1.onmessage = ({ data }) => {
  data.getWriter().write('hello');
};
port2.postMessage(stream, [stream]);
```

Class: WritableStreamDefaultWriter#
History
| Version | Changes |
|---|---|
| v18.0.0 | This class is now exposed on the global object. |
| v16.5.0 | Added in: v16.5.0 |
new WritableStreamDefaultWriter(stream)#
- `stream` <WritableStream>

Creates a new `WritableStreamDefaultWriter` that is locked to the given `WritableStream`.
writableStreamDefaultWriter.abort([reason])#
- `reason` <any>
- Returns: A promise fulfilled with `undefined`.

Abruptly terminates the `WritableStream`. All queued writes will be canceled with their associated promises rejected.
writableStreamDefaultWriter.close()#
- Returns: A promise fulfilled with `undefined`.

Closes the `WritableStream` when no additional writes are expected.
writableStreamDefaultWriter.closed#
- Type: <Promise> Fulfilled with `undefined` when the associated <WritableStream> is closed, or rejected if the stream errors or the writer's lock is released before the stream finishes closing.
writableStreamDefaultWriter.desiredSize#
- Type: <number>

The amount of data required to fill the <WritableStream>'s queue.
writableStreamDefaultWriter.ready#
- Type: <Promise> Fulfilled with `undefined` when the writer is ready to be used.
writableStreamDefaultWriter.releaseLock()#
Releases this writer's lock on the underlying <WritableStream>.
writableStreamDefaultWriter.write([chunk])#
- `chunk` <any>
- Returns: A promise fulfilled with `undefined`.

Appends a new chunk of data to the <WritableStream>'s queue.
Class: WritableStreamDefaultController#
History
| Version | Changes |
|---|---|
| v18.0.0 | This class is now exposed on the global object. |
| v16.5.0 | Added in: v16.5.0 |
The `WritableStreamDefaultController` manages the <WritableStream>'s internal state.
writableStreamDefaultController.error([error])#
- `error` <any>

Called by user code to signal that an error has occurred while processing the `WritableStream` data. When called, the <WritableStream> will be aborted, with currently pending writes canceled.
writableStreamDefaultController.signal#
- Type: <AbortSignal> An `AbortSignal` that can be used to cancel pending write or close operations when a <WritableStream> is aborted.
Class: TransformStream#
History
| Version | Changes |
|---|---|
| v18.0.0 | This class is now exposed on the global object. |
| v16.5.0 | Added in: v16.5.0 |
A `TransformStream` consists of a <ReadableStream> and a <WritableStream> that are connected such that the data written to the `WritableStream` is received, and potentially transformed, before being pushed into the `ReadableStream`'s queue.
```mjs
import { TransformStream } from 'node:stream/web';

const transform = new TransformStream({
  transform(chunk, controller) {
    controller.enqueue(chunk.toUpperCase());
  },
});

await Promise.all([
  transform.writable.getWriter().write('A'),
  transform.readable.getReader().read(),
]);
```

new TransformStream([transformer[, writableStrategy[, readableStrategy]]])#
- `transformer` <Object>
  - `start` <Function> A user-defined function that is invoked immediately when the `TransformStream` is created.
    - `controller` <TransformStreamDefaultController>
    - Returns: `undefined` or a promise fulfilled with `undefined`.
  - `transform` <Function> A user-defined function that receives, and potentially modifies, a chunk of data written to `transformStream.writable`, before forwarding that on to `transformStream.readable`.
    - `chunk` <any>
    - `controller` <TransformStreamDefaultController>
    - Returns: A promise fulfilled with `undefined`.
  - `flush` <Function> A user-defined function that is called immediately before the writable side of the `TransformStream` is closed, signaling the end of the transformation process.
    - `controller` <TransformStreamDefaultController>
    - Returns: A promise fulfilled with `undefined`.
  - `readableType` <any> The `readableType` option is reserved for future use and must be `undefined`.
  - `writableType` <any> The `writableType` option is reserved for future use and must be `undefined`.
- `writableStrategy` <Object>
  - `highWaterMark` <number> The maximum internal queue size before backpressure is applied.
  - `size` <Function> A user-defined function used to identify the size of each chunk of data.
- `readableStrategy` <Object>
  - `highWaterMark` <number> The maximum internal queue size before backpressure is applied.
  - `size` <Function> A user-defined function used to identify the size of each chunk of data.
Transferring with postMessage()#
A <TransformStream> instance can be transferred using a <MessagePort>.
```js
const stream = new TransformStream();

const { port1, port2 } = new MessageChannel();
port1.onmessage = ({ data }) => {
  const { writable, readable } = data;
  // ...
};
port2.postMessage(stream, [stream]);
```

Class: TransformStreamDefaultController#
History
| Version | Changes |
|---|---|
| v18.0.0 | This class is now exposed on the global object. |
| v16.5.0 | Added in: v16.5.0 |
The `TransformStreamDefaultController` manages the internal state of the `TransformStream`.
transformStreamDefaultController.desiredSize#
- Type: <number>
The amount of data required to fill the readable side's queue.
transformStreamDefaultController.enqueue([chunk])#
- `chunk` <any>
Appends a chunk of data to the readable side's queue.
transformStreamDefaultController.error([reason])#
- `reason` <any>

Signals to both the readable and writable side that an error has occurred while processing the transform data, causing both sides to be abruptly closed.
transformStreamDefaultController.terminate()#
Closes the readable side of the transform and causes the writable side to be abruptly closed with an error.
Class: ByteLengthQueuingStrategy#
History
| Version | Changes |
|---|---|
| v18.0.0 | This class is now exposed on the global object. |
| v16.5.0 | Added in: v16.5.0 |
Class: CountQueuingStrategy#
History
| Version | Changes |
|---|---|
| v18.0.0 | This class is now exposed on the global object. |
| v16.5.0 | Added in: v16.5.0 |
Class: TextEncoderStream#
History
| Version | Changes |
|---|---|
| v18.0.0 | This class is now exposed on the global object. |
| v16.6.0 | Added in: v16.6.0 |
Class: TextDecoderStream#
History
| Version | Changes |
|---|---|
| v18.0.0 | This class is now exposed on the global object. |
| v16.6.0 | Added in: v16.6.0 |
new TextDecoderStream([encoding[, options]])#
- `encoding` <string> Identifies the `encoding` that this `TextDecoder` instance supports. **Default:** `'utf-8'`.
- `options` <Object>
  - `fatal` <boolean> `true` if decoding failures are fatal.
  - `ignoreBOM` <boolean> When `true`, the `TextDecoderStream` will include the byte order mark in the decoded result. When `false`, the byte order mark will be removed from the output. This option is only used when `encoding` is `'utf-8'`, `'utf-16be'`, or `'utf-16le'`. **Default:** `false`.

Creates a new `TextDecoderStream` instance.
textDecoderStream.encoding#
- Type: <string>

The encoding supported by the `TextDecoderStream` instance.
textDecoderStream.fatal#
- Type: <boolean>

The value will be `true` if decoding errors result in a `TypeError` being thrown.
Class: CompressionStream#
History
| Version | Changes |
|---|---|
| v18.0.0 | This class is now exposed on the global object. |
| v17.0.0 | Added in: v17.0.0 |
Class: DecompressionStream#
History
| Version | Changes |
|---|---|
| v18.0.0 | This class is now exposed on the global object. |
| v17.0.0 | Added in: v17.0.0 |
Utility Consumers#
The utility consumer functions provide common options for consuming streams.
They are accessed using:
```mjs
import {
  arrayBuffer,
  blob,
  buffer,
  json,
  text,
} from 'node:stream/consumers';
```

```cjs
const {
  arrayBuffer,
  blob,
  buffer,
  json,
  text,
} = require('node:stream/consumers');
```
streamConsumers.arrayBuffer(stream)#
- `stream` <ReadableStream> | <stream.Readable> | <AsyncIterator>
- Returns: <Promise> Fulfills with an `ArrayBuffer` containing the full contents of the stream.
```mjs
import { arrayBuffer } from 'node:stream/consumers';
import { Readable } from 'node:stream';
import { TextEncoder } from 'node:util';

const encoder = new TextEncoder();
const dataArray = encoder.encode('hello world from consumers!');

const readable = Readable.from(dataArray);
const data = await arrayBuffer(readable);
console.log(`from readable: ${data.byteLength}`);
// Prints: from readable: 76
```

```cjs
const { arrayBuffer } = require('node:stream/consumers');
const { Readable } = require('node:stream');
const { TextEncoder } = require('node:util');

const encoder = new TextEncoder();
const dataArray = encoder.encode('hello world from consumers!');
const readable = Readable.from(dataArray);
arrayBuffer(readable).then((data) => {
  console.log(`from readable: ${data.byteLength}`);
  // Prints: from readable: 76
});
```
streamConsumers.blob(stream)#
- `stream` <ReadableStream> | <stream.Readable> | <AsyncIterator>
- Returns: <Promise> Fulfills with a <Blob> containing the full contents of the stream.
```mjs
import { blob } from 'node:stream/consumers';

const dataBlob = new Blob(['hello world from consumers!']);

const readable = dataBlob.stream();
const data = await blob(readable);
console.log(`from readable: ${data.size}`);
// Prints: from readable: 27
```

```cjs
const { blob } = require('node:stream/consumers');

const dataBlob = new Blob(['hello world from consumers!']);

const readable = dataBlob.stream();
blob(readable).then((data) => {
  console.log(`from readable: ${data.size}`);
  // Prints: from readable: 27
});
```
streamConsumers.buffer(stream)#
- `stream` <ReadableStream> | <stream.Readable> | <AsyncIterator>
- Returns: <Promise> Fulfills with a <Buffer> containing the full contents of the stream.
```mjs
import { buffer } from 'node:stream/consumers';
import { Readable } from 'node:stream';
import { Buffer } from 'node:buffer';

const dataBuffer = Buffer.from('hello world from consumers!');

const readable = Readable.from(dataBuffer);
const data = await buffer(readable);
console.log(`from readable: ${data.length}`);
// Prints: from readable: 27
```

```cjs
const { buffer } = require('node:stream/consumers');
const { Readable } = require('node:stream');
const { Buffer } = require('node:buffer');

const dataBuffer = Buffer.from('hello world from consumers!');

const readable = Readable.from(dataBuffer);
buffer(readable).then((data) => {
  console.log(`from readable: ${data.length}`);
  // Prints: from readable: 27
});
```
streamConsumers.bytes(stream)#
- `stream` <ReadableStream> | <stream.Readable> | <AsyncIterator>
- Returns: <Promise> Fulfills with a <Uint8Array> containing the full contents of the stream.
```mjs
import { bytes } from 'node:stream/consumers';
import { Readable } from 'node:stream';
import { Buffer } from 'node:buffer';

const dataBuffer = Buffer.from('hello world from consumers!');

const readable = Readable.from(dataBuffer);
const data = await bytes(readable);
console.log(`from readable: ${data.length}`);
// Prints: from readable: 27
```

```cjs
const { bytes } = require('node:stream/consumers');
const { Readable } = require('node:stream');
const { Buffer } = require('node:buffer');

const dataBuffer = Buffer.from('hello world from consumers!');

const readable = Readable.from(dataBuffer);
bytes(readable).then((data) => {
  console.log(`from readable: ${data.length}`);
  // Prints: from readable: 27
});
```
streamConsumers.json(stream)#
- `stream` <ReadableStream> | <stream.Readable> | <AsyncIterator>
- Returns: <Promise> Fulfills with the contents of the stream parsed as a UTF-8 encoded string that is then passed through `JSON.parse()`.
```mjs
import { json } from 'node:stream/consumers';
import { Readable } from 'node:stream';

const items = Array.from(
  { length: 100 },
  () => ({
    message: 'hello world from consumers!',
  }),
);

const readable = Readable.from(JSON.stringify(items));
const data = await json(readable);
console.log(`from readable: ${data.length}`);
// Prints: from readable: 100
```

```cjs
const { json } = require('node:stream/consumers');
const { Readable } = require('node:stream');

const items = Array.from(
  { length: 100 },
  () => ({
    message: 'hello world from consumers!',
  }),
);

const readable = Readable.from(JSON.stringify(items));
json(readable).then((data) => {
  console.log(`from readable: ${data.length}`);
  // Prints: from readable: 100
});
```
streamConsumers.text(stream)#
- `stream` <ReadableStream> | <stream.Readable> | <AsyncIterator>
- Returns: <Promise> Fulfills with the contents of the stream parsed as a UTF-8 encoded string.
```mjs
import { text } from 'node:stream/consumers';
import { Readable } from 'node:stream';

const readable = Readable.from('Hello world from consumers!');
const data = await text(readable);
console.log(`from readable: ${data.length}`);
// Prints: from readable: 27
```

```cjs
const { text } = require('node:stream/consumers');
const { Readable } = require('node:stream');

const readable = Readable.from('Hello world from consumers!');
text(readable).then((data) => {
  console.log(`from readable: ${data.length}`);
  // Prints: from readable: 27
});
```
Worker threads#
Source Code: lib/worker_threads.js
The `node:worker_threads` module enables the use of threads that execute JavaScript in parallel. To access it:
```mjs
import worker_threads from 'node:worker_threads';
```

```cjs
'use strict';

const worker_threads = require('node:worker_threads');
```
Workers (threads) are useful for performing CPU-intensive JavaScript operations. They do not help much with I/O-intensive work. The Node.js built-in asynchronous I/O operations are more efficient than Workers can be.

Unlike `child_process` or `cluster`, `worker_threads` can share memory. They do so by transferring `ArrayBuffer` instances or sharing `SharedArrayBuffer` instances.
```mjs
import {
  Worker,
  isMainThread,
  parentPort,
  workerData,
} from 'node:worker_threads';

if (!isMainThread) {
  const { parse } = await import('some-js-parsing-library');
  const script = workerData;
  parentPort.postMessage(parse(script));
}

export default function parseJSAsync(script) {
  return new Promise((resolve, reject) => {
    const worker = new Worker(new URL(import.meta.url), {
      workerData: script,
    });
    worker.on('message', resolve);
    worker.once('error', reject);
    worker.once('exit', (code) => {
      if (code !== 0)
        reject(new Error(`Worker stopped with exit code ${code}`));
    });
  });
}
```

```cjs
'use strict';

const {
  Worker,
  isMainThread,
  parentPort,
  workerData,
} = require('node:worker_threads');

if (isMainThread) {
  module.exports = function parseJSAsync(script) {
    return new Promise((resolve, reject) => {
      const worker = new Worker(__filename, {
        workerData: script,
      });
      worker.on('message', resolve);
      worker.once('error', reject);
      worker.once('exit', (code) => {
        if (code !== 0)
          reject(new Error(`Worker stopped with exit code ${code}`));
      });
    });
  };
} else {
  const { parse } = require('some-js-parsing-library');
  const script = workerData;
  parentPort.postMessage(parse(script));
}
```
The above example spawns a Worker thread for each `parseJSAsync()` call. In practice, use a pool of Workers for these kinds of tasks. Otherwise, the overhead of creating Workers would likely exceed their benefit.

When implementing a worker pool, use the `AsyncResource` API to inform diagnostic tools (e.g. to provide asynchronous stack traces) about the correlation between tasks and their outcomes. See "Using `AsyncResource` for a `Worker` thread pool" in the `async_hooks` documentation for an example implementation.

Worker threads inherit non-process-specific options by default. Refer to Worker constructor options to know how to customize worker thread options, specifically the `argv` and `execArgv` options.
worker_threads.getEnvironmentData(key)#
History
| Version | Changes |
|---|---|
| v17.5.0, v16.15.0 | No longer experimental. |
| v15.12.0, v14.18.0 | Added in: v15.12.0, v14.18.0 |
Within a worker thread, `worker.getEnvironmentData()` returns a clone of data passed to the spawning thread's `worker.setEnvironmentData()`. Every new `Worker` receives its own copy of the environment data automatically.
```mjs
import {
  Worker,
  isMainThread,
  setEnvironmentData,
  getEnvironmentData,
} from 'node:worker_threads';

if (isMainThread) {
  setEnvironmentData('Hello', 'World!');
  const worker = new Worker(new URL(import.meta.url));
} else {
  console.log(getEnvironmentData('Hello'));  // Prints 'World!'.
}
```

```cjs
'use strict';

const {
  Worker,
  isMainThread,
  setEnvironmentData,
  getEnvironmentData,
} = require('node:worker_threads');

if (isMainThread) {
  setEnvironmentData('Hello', 'World!');
  const worker = new Worker(__filename);
} else {
  console.log(getEnvironmentData('Hello'));  // Prints 'World!'.
}
```
worker_threads.isInternalThread#
- Type: <boolean>

Is `true` if this code is running inside of an internal `Worker` thread (e.g. the loader thread).
```bash
node --experimental-loader ./loader.js main.js
```

```mjs
// loader.js
import { isInternalThread } from 'node:worker_threads';
console.log(isInternalThread);  // true
```

```cjs
// loader.js
'use strict';
const { isInternalThread } = require('node:worker_threads');
console.log(isInternalThread);  // true
```
```mjs
// main.js
import { isInternalThread } from 'node:worker_threads';
console.log(isInternalThread);  // false
```

```cjs
// main.js
'use strict';
const { isInternalThread } = require('node:worker_threads');
console.log(isInternalThread);  // false
```
worker_threads.isMainThread#
- Type: <boolean>

Is `true` if this code is not running inside of a `Worker` thread.
```mjs
import { Worker, isMainThread } from 'node:worker_threads';

if (isMainThread) {
  // This re-loads the current file inside a Worker instance.
  new Worker(new URL(import.meta.url));
} else {
  console.log('Inside Worker!');
  console.log(isMainThread);  // Prints 'false'.
}
```

```cjs
'use strict';

const { Worker, isMainThread } = require('node:worker_threads');

if (isMainThread) {
  // This re-loads the current file inside a Worker instance.
  new Worker(__filename);
} else {
  console.log('Inside Worker!');
  console.log(isMainThread);  // Prints 'false'.
}
```
worker_threads.markAsUntransferable(object)#
- `object` <any> Any arbitrary JavaScript value.

Mark an object as not transferable. If `object` occurs in the transfer list of a `port.postMessage()` call, an error is thrown. This is a no-op if `object` is a primitive value.

In particular, this makes sense for objects that can be cloned, rather than transferred, and which are used by other objects on the sending side. For example, Node.js marks the `ArrayBuffer`s it uses for its `Buffer` pool with this. `ArrayBuffer.prototype.transfer()` is disallowed on such array buffer instances.
This operation cannot be undone.
```mjs
import { MessageChannel, markAsUntransferable } from 'node:worker_threads';

const pooledBuffer = new ArrayBuffer(8);
const typedArray1 = new Uint8Array(pooledBuffer);
const typedArray2 = new Float64Array(pooledBuffer);

markAsUntransferable(pooledBuffer);

const { port1 } = new MessageChannel();

try {
  // This will throw an error, because pooledBuffer is not transferable.
  port1.postMessage(typedArray1, [ typedArray1.buffer ]);
} catch (error) {
  // error.name === 'DataCloneError'
}

// The following line prints the contents of typedArray1 -- it still owns
// its memory and has not been transferred. Without
// `markAsUntransferable()`, this would print an empty Uint8Array and the
// postMessage call would have succeeded.
// typedArray2 is intact as well.
console.log(typedArray1);
console.log(typedArray2);
```

```cjs
'use strict';

const { MessageChannel, markAsUntransferable } = require('node:worker_threads');

const pooledBuffer = new ArrayBuffer(8);
const typedArray1 = new Uint8Array(pooledBuffer);
const typedArray2 = new Float64Array(pooledBuffer);

markAsUntransferable(pooledBuffer);

const { port1 } = new MessageChannel();

try {
  // This will throw an error, because pooledBuffer is not transferable.
  port1.postMessage(typedArray1, [ typedArray1.buffer ]);
} catch (error) {
  // error.name === 'DataCloneError'
}

// The following line prints the contents of typedArray1 -- it still owns
// its memory and has not been transferred. Without
// `markAsUntransferable()`, this would print an empty Uint8Array and the
// postMessage call would have succeeded.
// typedArray2 is intact as well.
console.log(typedArray1);
console.log(typedArray2);
```
There is no equivalent to this API in browsers.
worker_threads.isMarkedAsUntransferable(object)#
Check if an object is marked as not transferable with `markAsUntransferable()`.
```mjs
import { markAsUntransferable, isMarkedAsUntransferable } from 'node:worker_threads';

const pooledBuffer = new ArrayBuffer(8);
markAsUntransferable(pooledBuffer);

isMarkedAsUntransferable(pooledBuffer);  // Returns true.
```

```cjs
'use strict';

const { markAsUntransferable, isMarkedAsUntransferable } = require('node:worker_threads');

const pooledBuffer = new ArrayBuffer(8);
markAsUntransferable(pooledBuffer);

isMarkedAsUntransferable(pooledBuffer);  // Returns true.
```
There is no equivalent to this API in browsers.
worker_threads.markAsUncloneable(object)#
- `object` <any> Any arbitrary JavaScript value.

Mark an object as not cloneable. If `object` is used as `message` in a `port.postMessage()` call, an error is thrown. This is a no-op if `object` is a primitive value.

This has no effect on `ArrayBuffer`, or any `Buffer`-like objects.
This operation cannot be undone.
```mjs
import { markAsUncloneable } from 'node:worker_threads';

const anyObject = { foo: 'bar' };
markAsUncloneable(anyObject);
const { port1 } = new MessageChannel();

try {
  // This will throw an error, because anyObject is not cloneable.
  port1.postMessage(anyObject);
} catch (error) {
  // error.name === 'DataCloneError'
}
```

```cjs
'use strict';

const { markAsUncloneable } = require('node:worker_threads');

const anyObject = { foo: 'bar' };
markAsUncloneable(anyObject);
const { port1 } = new MessageChannel();

try {
  // This will throw an error, because anyObject is not cloneable.
  port1.postMessage(anyObject);
} catch (error) {
  // error.name === 'DataCloneError'
}
```
There is no equivalent to this API in browsers.
worker_threads.moveMessagePortToContext(port, contextifiedSandbox)#
- `port` <MessagePort> The message port to transfer.
- `contextifiedSandbox` <Object> A contextified object as returned by the `vm.createContext()` method.
- Returns: <MessagePort>

Transfer a `MessagePort` to a different `vm` Context. The original `port` object is rendered unusable, and the returned `MessagePort` instance takes its place.

The returned `MessagePort` is an object in the target context and inherits from its global `Object` class. Objects passed to the `port.onmessage()` listener are also created in the target context and inherit from its global `Object` class.

However, the created `MessagePort` no longer inherits from <EventTarget>, and only `port.onmessage()` can be used to receive events using it.
worker_threads.parentPort#
- Type: <null> | <MessagePort>

If this thread is a `Worker`, this is a `MessagePort` allowing communication with the parent thread. Messages sent using `parentPort.postMessage()` are available in the parent thread using `worker.on('message')`, and messages sent from the parent thread using `worker.postMessage()` are available in this thread using `parentPort.on('message')`.
```mjs
import { Worker, isMainThread, parentPort } from 'node:worker_threads';

if (isMainThread) {
  const worker = new Worker(new URL(import.meta.url));
  worker.once('message', (message) => {
    console.log(message);  // Prints 'Hello, world!'.
  });
  worker.postMessage('Hello, world!');
} else {
  // When a message from the parent thread is received, send it back:
  parentPort.once('message', (message) => {
    parentPort.postMessage(message);
  });
}
```

```cjs
'use strict';

const { Worker, isMainThread, parentPort } = require('node:worker_threads');

if (isMainThread) {
  const worker = new Worker(__filename);
  worker.once('message', (message) => {
    console.log(message);  // Prints 'Hello, world!'.
  });
  worker.postMessage('Hello, world!');
} else {
  // When a message from the parent thread is received, send it back:
  parentPort.once('message', (message) => {
    parentPort.postMessage(message);
  });
}
```
worker_threads.postMessageToThread(threadId, value[, transferList][, timeout])#
- `threadId` <number> The target thread ID. If the thread ID is invalid, an `ERR_WORKER_MESSAGING_FAILED` error will be thrown. If the target thread ID is the current thread ID, an `ERR_WORKER_MESSAGING_SAME_THREAD` error will be thrown.
- `value` <any> The value to send.
- `transferList` <Object[]> If one or more `MessagePort`-like objects are passed in `value`, a `transferList` is required for those items or `ERR_MISSING_MESSAGE_PORT_IN_TRANSFER_LIST` is thrown. See `port.postMessage()` for more information.
- `timeout` <number> Time to wait for the message to be delivered in milliseconds. By default it's `undefined`, which means wait forever. If the operation times out, an `ERR_WORKER_MESSAGING_TIMEOUT` error is thrown.
- Returns: <Promise> A promise which is fulfilled if the message was successfully processed by the destination thread.
Sends a value to another worker, identified by its thread ID.
If the target thread has no listener for the `workerMessage` event, then the operation will throw an `ERR_WORKER_MESSAGING_FAILED` error.

If the target thread threw an error while processing the `workerMessage` event, then the operation will throw an `ERR_WORKER_MESSAGING_ERRORED` error.

This method should be used when the target thread is not the direct parent or child of the current thread. If the two threads are parent and child, use `require('node:worker_threads').parentPort.postMessage()` and `worker.postMessage()` to let the threads communicate.

The example below shows the use of `postMessageToThread`: it creates 10 nested threads, the last of which will try to communicate with the main thread.
```mjs
import process from 'node:process';
import {
  postMessageToThread,
  threadId,
  workerData,
  Worker,
} from 'node:worker_threads';

const channel = new BroadcastChannel('sync');
const level = workerData?.level ?? 0;

if (level < 10) {
  const worker = new Worker(new URL(import.meta.url), {
    workerData: { level: level + 1 },
  });
}

if (level === 0) {
  process.on('workerMessage', (value, source) => {
    console.log(`${source} -> ${threadId}:`, value);
    postMessageToThread(source, { message: 'pong' });
  });
} else if (level === 10) {
  process.on('workerMessage', (value, source) => {
    console.log(`${source} -> ${threadId}:`, value);
    channel.postMessage('done');
    channel.close();
  });

  await postMessageToThread(0, { message: 'ping' });
}

channel.onmessage = channel.close;
```

```cjs
'use strict';

const process = require('node:process');
const {
  postMessageToThread,
  threadId,
  workerData,
  Worker,
} = require('node:worker_threads');

const channel = new BroadcastChannel('sync');
const level = workerData?.level ?? 0;

if (level < 10) {
  const worker = new Worker(__filename, {
    workerData: { level: level + 1 },
  });
}

if (level === 0) {
  process.on('workerMessage', (value, source) => {
    console.log(`${source} -> ${threadId}:`, value);
    postMessageToThread(source, { message: 'pong' });
  });
} else if (level === 10) {
  process.on('workerMessage', (value, source) => {
    console.log(`${source} -> ${threadId}:`, value);
    channel.postMessage('done');
    channel.close();
  });

  postMessageToThread(0, { message: 'ping' });
}

channel.onmessage = channel.close;
```
worker_threads.receiveMessageOnPort(port)#
History
| Version | Changes |
|---|---|
| v15.12.0 | The port argument can also refer to a |
| v12.3.0 | Added in: v12.3.0 |
- Returns: <Object> | <undefined>
Receive a single message from a given MessagePort. If no message is available, undefined is returned, otherwise an object with a single message property that contains the message payload, corresponding to the oldest message in the MessagePort's queue.
```mjs
import { MessageChannel, receiveMessageOnPort } from 'node:worker_threads';

const { port1, port2 } = new MessageChannel();
port1.postMessage({ hello: 'world' });

console.log(receiveMessageOnPort(port2));
// Prints: { message: { hello: 'world' } }
console.log(receiveMessageOnPort(port2));
// Prints: undefined
```

```cjs
'use strict';
const { MessageChannel, receiveMessageOnPort } = require('node:worker_threads');

const { port1, port2 } = new MessageChannel();
port1.postMessage({ hello: 'world' });

console.log(receiveMessageOnPort(port2));
// Prints: { message: { hello: 'world' } }
console.log(receiveMessageOnPort(port2));
// Prints: undefined
```

When this function is used, no 'message' event is emitted and the onmessage listener is not invoked.
worker_threads.resourceLimits#
- Type:<Object>
Provides the set of JS engine resource constraints inside this Worker thread. If the resourceLimits option was passed to the Worker constructor, this matches its values.
If this is used in the main thread, its value is an empty object.
worker_threads.SHARE_ENV#
- Type:<symbol>
A special value that can be passed as the env option of the Worker constructor, to indicate that the current thread and the Worker thread should share read and write access to the same set of environment variables.

```mjs
import process from 'node:process';
import { Worker, SHARE_ENV } from 'node:worker_threads';

new Worker('process.env.SET_IN_WORKER = "foo"', { eval: true, env: SHARE_ENV })
  .once('exit', () => {
    console.log(process.env.SET_IN_WORKER);  // Prints 'foo'.
  });
```

```cjs
'use strict';
const { Worker, SHARE_ENV } = require('node:worker_threads');

new Worker('process.env.SET_IN_WORKER = "foo"', { eval: true, env: SHARE_ENV })
  .once('exit', () => {
    console.log(process.env.SET_IN_WORKER);  // Prints 'foo'.
  });
```
worker_threads.setEnvironmentData(key[, value])#
History
| Version | Changes |
|---|---|
| v17.5.0, v16.15.0 | No longer experimental. |
| v15.12.0, v14.18.0 | Added in: v15.12.0, v14.18.0 |
- key <any> Any arbitrary, cloneable JavaScript value that can be used as a <Map> key.
- value <any> Any arbitrary, cloneable JavaScript value that will be cloned and passed automatically to all new Worker instances. If value is passed as undefined, any previously set value for the key will be deleted.

The worker.setEnvironmentData() API sets the content of worker.getEnvironmentData() in the current thread and all new Worker instances spawned from the current context.
worker_threads.threadId#
- Type:<integer>
An integer identifier for the current thread. On the corresponding worker object (if there is any), it is available as worker.threadId. This value is unique for each Worker instance inside a single process.
worker_threads.threadName#
A string identifier for the current thread, or null if the thread is not running. On the corresponding worker object (if there is any), it is available as worker.threadName.
worker_threads.workerData#
An arbitrary JavaScript value that contains a clone of the data passed to this thread's Worker constructor.
The data is cloned as if using postMessage(), according to the HTML structured clone algorithm.

```mjs
import { Worker, isMainThread, workerData } from 'node:worker_threads';

if (isMainThread) {
  const worker = new Worker(new URL(import.meta.url), { workerData: 'Hello, world!' });
} else {
  console.log(workerData);  // Prints 'Hello, world!'.
}
```

```cjs
'use strict';
const { Worker, isMainThread, workerData } = require('node:worker_threads');

if (isMainThread) {
  const worker = new Worker(__filename, { workerData: 'Hello, world!' });
} else {
  console.log(workerData);  // Prints 'Hello, world!'.
}
```
worker_threads.locks#
An instance of a LockManager that can be used to coordinate access to resources that may be shared across multiple threads within the same process. The API mirrors the semantics of the browser LockManager API.
Class:Lock#
The Lock interface provides information about a lock that has been granted via locks.request().
Class:LockManager#
The LockManager interface provides methods for requesting and introspecting locks. To obtain a LockManager instance, use:

```mjs
import { locks } from 'node:worker_threads';
```

```cjs
'use strict';
const { locks } = require('node:worker_threads');
```

This implementation matches the browser LockManager API.
locks.request(name[, options], callback)#
- name <string>
- options <Object>
  - mode <string> Either 'exclusive' or 'shared'. Default: 'exclusive'.
  - ifAvailable <boolean> If true, the request will only be granted if the lock is not already held. If it cannot be granted, callback will be invoked with null instead of a Lock instance. Default: false.
  - steal <boolean> If true, any existing locks with the same name are released and the request is granted immediately, pre-empting any queued requests. Default: false.
  - signal <AbortSignal> An AbortSignal that can be used to abort a pending (but not yet granted) lock request.
- callback <Function> Invoked once the lock is granted (or immediately with null if ifAvailable is true and the lock is unavailable). The lock is released automatically when the function returns, or, if the function returns a promise, when that promise settles.
- Returns: <Promise> Resolves once the lock has been released.

```mjs
import { locks } from 'node:worker_threads';

await locks.request('my_resource', async (lock) => {
  // The lock has been acquired.
});
// The lock has been released here.
```

```cjs
'use strict';
const { locks } = require('node:worker_threads');

locks.request('my_resource', async (lock) => {
  // The lock has been acquired.
}).then(() => {
  // The lock has been released here.
});
```
locks.query()#
- Returns:<Promise>
Resolves with a LockManagerSnapshot describing the currently held and pending locks for the current process.

```mjs
import { locks } from 'node:worker_threads';

const snapshot = await locks.query();
for (const lock of snapshot.held) {
  console.log(`held lock: name ${lock.name}, mode ${lock.mode}`);
}
for (const pending of snapshot.pending) {
  console.log(`pending lock: name ${pending.name}, mode ${pending.mode}`);
}
```

```cjs
'use strict';
const { locks } = require('node:worker_threads');

locks.query().then((snapshot) => {
  for (const lock of snapshot.held) {
    console.log(`held lock: name ${lock.name}, mode ${lock.mode}`);
  }
  for (const pending of snapshot.pending) {
    console.log(`pending lock: name ${pending.name}, mode ${pending.mode}`);
  }
});
```
Class:BroadcastChannel extends EventTarget#
History
| Version | Changes |
|---|---|
| v18.0.0 | No longer experimental. |
| v15.4.0 | Added in: v15.4.0 |
Instances of BroadcastChannel allow asynchronous one-to-many communication with all other BroadcastChannel instances bound to the same channel name.

```mjs
import {
  isMainThread,
  BroadcastChannel,
  Worker,
} from 'node:worker_threads';

const bc = new BroadcastChannel('hello');

if (isMainThread) {
  let c = 0;
  bc.onmessage = (event) => {
    console.log(event.data);
    if (++c === 10) bc.close();
  };
  for (let n = 0; n < 10; n++)
    new Worker(new URL(import.meta.url));
} else {
  bc.postMessage('hello from every worker');
  bc.close();
}
```

```cjs
'use strict';
const {
  isMainThread,
  BroadcastChannel,
  Worker,
} = require('node:worker_threads');

const bc = new BroadcastChannel('hello');

if (isMainThread) {
  let c = 0;
  bc.onmessage = (event) => {
    console.log(event.data);
    if (++c === 10) bc.close();
  };
  for (let n = 0; n < 10; n++)
    new Worker(__filename);
} else {
  bc.postMessage('hello from every worker');
  bc.close();
}
```
new BroadcastChannel(name)#
- name <any> The name of the channel to connect to. Any JavaScript value that can be converted to a string using `${name}` is permitted.
broadcastChannel.onmessage#
- Type: <Function> Invoked with a single MessageEvent argument when a message is received.
broadcastChannel.onmessageerror#
- Type: <Function> Invoked when a received message cannot be deserialized.
broadcastChannel.ref()#
Opposite of unref(). Calling ref() on a previously unref()ed BroadcastChannel does not let the program exit if it's the only active handle left (the default behavior). If the channel is ref()ed, calling ref() again has no effect.
broadcastChannel.unref()#
Calling unref() on a BroadcastChannel allows the thread to exit if this is the only active handle in the event system. If the BroadcastChannel is already unref()ed, calling unref() again has no effect.
Class:MessageChannel#
Instances of the worker.MessageChannel class represent an asynchronous, two-way communications channel. The MessageChannel has no methods of its own. new MessageChannel() yields an object with port1 and port2 properties, which refer to linked MessagePort instances.

```mjs
import { MessageChannel } from 'node:worker_threads';

const { port1, port2 } = new MessageChannel();
port1.on('message', (message) => console.log('received', message));
port2.postMessage({ foo: 'bar' });
// Prints: received { foo: 'bar' } from the `port1.on('message')` listener
```

```cjs
'use strict';
const { MessageChannel } = require('node:worker_threads');

const { port1, port2 } = new MessageChannel();
port1.on('message', (message) => console.log('received', message));
port2.postMessage({ foo: 'bar' });
// Prints: received { foo: 'bar' } from the `port1.on('message')` listener
```
Class:MessagePort#
History
| Version | Changes |
|---|---|
| v14.7.0 | This class now inherits from |
| v10.5.0 | Added in: v10.5.0 |
- Extends:<EventTarget>
Instances of the worker.MessagePort class represent one end of an asynchronous, two-way communications channel. It can be used to transfer structured data, memory regions and other MessagePorts between different Workers.
This implementation matches browser MessagePorts.
Event:'close'#
The 'close' event is emitted once either side of the channel has been disconnected.

```mjs
import { MessageChannel } from 'node:worker_threads';

const { port1, port2 } = new MessageChannel();

// Prints:
//   foobar
//   closed!
port2.on('message', (message) => console.log(message));
port2.once('close', () => console.log('closed!'));

port1.postMessage('foobar');
port1.close();
```

```cjs
'use strict';
const { MessageChannel } = require('node:worker_threads');

const { port1, port2 } = new MessageChannel();

// Prints:
//   foobar
//   closed!
port2.on('message', (message) => console.log(message));
port2.once('close', () => console.log('closed!'));

port1.postMessage('foobar');
port1.close();
```
Event:'message'#
value<any> The transmitted value
The 'message' event is emitted for any incoming message, containing the cloned input of port.postMessage().
Listeners on this event receive a clone of the value parameter as passed to postMessage() and no further arguments.
Event:'messageerror'#
error<Error> An Error object
The 'messageerror' event is emitted when deserializing a message failed.
Currently, this event is emitted when there is an error occurring while instantiating the posted JS object on the receiving end. Such situations are rare, but can happen, for instance, when certain Node.js API objects are received in a vm.Context (where Node.js APIs are currently unavailable).
port.close()#
Disables further sending of messages on either side of the connection. This method can be called when no further communication will happen over this MessagePort.
The 'close' event is emitted on both MessagePort instances that are part of the channel.
port.postMessage(value[, transferList])#
History
| Version | Changes |
|---|---|
| v21.0.0 | An error is thrown when an untransferable object is in the transfer list. |
| v15.6.0 | Added |
| v15.0.0 | Added |
| v15.14.0, v14.18.0 | Add 'BlockList' to the list of cloneable types. |
| v15.9.0, v14.18.0 | Add 'Histogram' types to the list of cloneable types. |
| v14.5.0, v12.19.0 | Added |
| v14.5.0, v12.19.0 | Added |
| v10.5.0 | Added in: v10.5.0 |
value<any>transferList<Object[]>
Sends a JavaScript value to the receiving side of this channel. value is transferred in a way which is compatible with the HTML structured clone algorithm.
In particular, the significant differences to JSON are:
- value may contain circular references.
- value may contain instances of builtin JS types such as RegExps, BigInts, Maps, Sets, etc.
- value may contain typed arrays, both using ArrayBuffers and SharedArrayBuffers.
- value may contain WebAssembly.Module instances.
- value may not contain native (C++-backed) objects other than:

```mjs
import { MessageChannel } from 'node:worker_threads';

const { port1, port2 } = new MessageChannel();

port1.on('message', (message) => console.log(message));

const circularData = {};
circularData.foo = circularData;
// Prints: { foo: [Circular] }
port2.postMessage(circularData);
```

```cjs
'use strict';
const { MessageChannel } = require('node:worker_threads');

const { port1, port2 } = new MessageChannel();

port1.on('message', (message) => console.log(message));

const circularData = {};
circularData.foo = circularData;
// Prints: { foo: [Circular] }
port2.postMessage(circularData);
```
transferList may be a list of <ArrayBuffer>, MessagePort, and FileHandle objects. After transferring, they are not usable on the sending side of the channel anymore (even if they are not contained in value). Unlike with child processes, transferring handles such as network sockets is currently not supported.
If value contains <SharedArrayBuffer> instances, those are accessible from either thread. They cannot be listed in transferList.
value may still contain ArrayBuffer instances that are not in transferList; in that case, the underlying memory is copied rather than moved.
```mjs
import { MessageChannel } from 'node:worker_threads';

const { port1, port2 } = new MessageChannel();

port1.on('message', (message) => console.log(message));

const uint8Array = new Uint8Array([ 1, 2, 3, 4 ]);
// This posts a copy of `uint8Array`:
port2.postMessage(uint8Array);
// This does not copy data, but renders `uint8Array` unusable:
port2.postMessage(uint8Array, [ uint8Array.buffer ]);

// The memory for the `sharedUint8Array` is accessible from both the
// original and the copy received by `.on('message')`:
const sharedUint8Array = new Uint8Array(new SharedArrayBuffer(4));
port2.postMessage(sharedUint8Array);

// This transfers a freshly created message port to the receiver.
// This can be used, for example, to create communication channels between
// multiple `Worker` threads that are children of the same parent thread.
const otherChannel = new MessageChannel();
port2.postMessage({ port: otherChannel.port1 }, [ otherChannel.port1 ]);
```

```cjs
'use strict';
const { MessageChannel } = require('node:worker_threads');

const { port1, port2 } = new MessageChannel();

port1.on('message', (message) => console.log(message));

const uint8Array = new Uint8Array([ 1, 2, 3, 4 ]);
// This posts a copy of `uint8Array`:
port2.postMessage(uint8Array);
// This does not copy data, but renders `uint8Array` unusable:
port2.postMessage(uint8Array, [ uint8Array.buffer ]);

// The memory for the `sharedUint8Array` is accessible from both the
// original and the copy received by `.on('message')`:
const sharedUint8Array = new Uint8Array(new SharedArrayBuffer(4));
port2.postMessage(sharedUint8Array);

// This transfers a freshly created message port to the receiver.
// This can be used, for example, to create communication channels between
// multiple `Worker` threads that are children of the same parent thread.
const otherChannel = new MessageChannel();
port2.postMessage({ port: otherChannel.port1 }, [ otherChannel.port1 ]);
```
The message object is cloned immediately, and can be modified afterposting without having side effects.
For more information on the serialization and deserialization mechanismsbehind this API, see theserialization API of thenode:v8 module.
Considerations when transferring TypedArrays and Buffers#
All <TypedArray> | <Buffer> instances are views over an underlying <ArrayBuffer>. That is, it is the ArrayBuffer that actually stores the raw data while the TypedArray and Buffer objects provide a way of viewing and manipulating the data. It is possible and common for multiple views to be created over the same ArrayBuffer instance. Great care must be taken when using a transfer list to transfer an ArrayBuffer as doing so causes all TypedArray and Buffer instances that share that same ArrayBuffer to become unusable.

```js
const ab = new ArrayBuffer(10);

const u1 = new Uint8Array(ab);
const u2 = new Uint16Array(ab);

console.log(u2.length);  // prints 5

port.postMessage(u1, [u1.buffer]);

console.log(u2.length);  // prints 0
```

For Buffer instances, specifically, whether the underlying ArrayBuffer can be transferred or cloned depends entirely on how instances were created, which often cannot be reliably determined.
An ArrayBuffer can be marked with markAsUntransferable() to indicate that it should always be cloned and never transferred.
Depending on how a Buffer instance was created, it may or may not own its underlying ArrayBuffer. An ArrayBuffer must not be transferred unless it is known that the Buffer instance owns it. In particular, for Buffers created from the internal Buffer pool (using, for instance, Buffer.from() or Buffer.allocUnsafe()), transferring them is not possible and they are always cloned, which sends a copy of the entire Buffer pool. This behavior may come with unintended higher memory usage and possible security concerns.
See Buffer.allocUnsafe() for more details on Buffer pooling.
The ArrayBuffers for Buffer instances created using Buffer.alloc() or Buffer.allocUnsafeSlow() can always be transferred, but doing so renders all other existing views of those ArrayBuffers unusable.
Considerations when cloning objects with prototypes, classes, and accessors#
Because object cloning uses the HTML structured clone algorithm, non-enumerable properties, property accessors, and object prototypes are not preserved. In particular, <Buffer> objects will be read as plain <Uint8Array>s on the receiving side, and instances of JavaScript classes will be cloned as plain JavaScript objects.

```js
const b = Symbol('b');

class Foo {
  #a = 1;
  constructor() {
    this[b] = 2;
    this.c = 3;
  }

  get d() { return 4; }
}

const { port1, port2 } = new MessageChannel();
port1.onmessage = ({ data }) => console.log(data);
port2.postMessage(new Foo());

// Prints: { c: 3 }
```

This limitation extends to many built-in objects, such as the global URL object:

```js
const { port1, port2 } = new MessageChannel();
port1.onmessage = ({ data }) => console.log(data);
port2.postMessage(new URL('https://example.org'));

// Prints: { }
```

port.hasRef()#
History
| Version | Changes |
|---|---|
| v24.0.0, v22.17.0 | Marking the API stable. |
| v18.1.0, v16.17.0 | Added in: v18.1.0, v16.17.0 |
- Returns:<boolean>
If true, the MessagePort object will keep the Node.js event loop active.
port.ref()#
Opposite of unref(). Calling ref() on a previously unref()ed port does not let the program exit if it's the only active handle left (the default behavior). If the port is ref()ed, calling ref() again has no effect.
If listeners are attached or removed using .on('message'), the port is ref()ed and unref()ed automatically depending on whether listeners for the event exist.
port.start()#
Starts receiving messages on this MessagePort. When using this port as an event emitter, this is called automatically once 'message' listeners are attached.
This method exists for parity with the Web MessagePort API. In Node.js, it is only useful for ignoring messages when no event listener is present. Node.js also diverges in its handling of .onmessage. Setting it automatically calls .start(), but unsetting it lets messages queue up until a new handler is set or the port is discarded.
port.unref()#
Calling unref() on a port allows the thread to exit if this is the only active handle in the event system. If the port is already unref()ed, calling unref() again has no effect.
If listeners are attached or removed using .on('message'), the port is ref()ed and unref()ed automatically depending on whether listeners for the event exist.
Class:Worker#
- Extends:<EventEmitter>
The Worker class represents an independent JavaScript execution thread. Most Node.js APIs are available inside of it.
Notable differences inside a Worker environment are:
- The process.stdin, process.stdout, and process.stderr streams may be redirected by the parent thread.
- The require('node:worker_threads').isMainThread property is set to false.
- The require('node:worker_threads').parentPort message port is available.
- process.exit() does not stop the whole program, just the single thread, and process.abort() is not available.
- process.chdir() and process methods that set group or user ids are not available.
- process.env is a copy of the parent thread's environment variables, unless otherwise specified. Changes to one copy are not visible in other threads, and are not visible to native add-ons (unless worker.SHARE_ENV is passed as the env option to the Worker constructor). On Windows, unlike the main thread, a copy of the environment variables operates in a case-sensitive manner.
- process.title cannot be modified.
- Signals are not delivered through process.on('...').
- Execution may stop at any point as a result of worker.terminate() being invoked.
- IPC channels from parent processes are not accessible.
- The trace_events module is not supported.
- Native add-ons can only be loaded from multiple threads if they fulfill certain conditions.
Creating Worker instances inside of other Workers is possible.
Like Web Workers and the node:cluster module, two-way communication can be achieved through inter-thread message passing. Internally, a Worker has a built-in pair of MessagePorts that are already associated with each other when the Worker is created. While the MessagePort object on the parent side is not directly exposed, its functionalities are exposed through worker.postMessage() and the worker.on('message') event on the Worker object for the parent thread.
To create custom messaging channels (which is encouraged over using the default global channel because it facilitates separation of concerns), users can create a MessageChannel object on either thread and pass one of the MessagePorts on that MessageChannel to the other thread through a pre-existing channel, such as the global one.
See port.postMessage() for more information on how messages are passed, and what kind of JavaScript values can be successfully transported through the thread barrier.
```mjs
import assert from 'node:assert';
import {
  Worker,
  MessageChannel,
  MessagePort,
  isMainThread,
  parentPort,
} from 'node:worker_threads';

if (isMainThread) {
  const worker = new Worker(new URL(import.meta.url));

  const subChannel = new MessageChannel();
  worker.postMessage({ hereIsYourPort: subChannel.port1 }, [subChannel.port1]);
  subChannel.port2.on('message', (value) => {
    console.log('received:', value);
  });
} else {
  parentPort.once('message', (value) => {
    assert(value.hereIsYourPort instanceof MessagePort);
    value.hereIsYourPort.postMessage('the worker is sending this');
    value.hereIsYourPort.close();
  });
}
```

```cjs
'use strict';
const assert = require('node:assert');
const {
  Worker,
  MessageChannel,
  MessagePort,
  isMainThread,
  parentPort,
} = require('node:worker_threads');

if (isMainThread) {
  const worker = new Worker(__filename);

  const subChannel = new MessageChannel();
  worker.postMessage({ hereIsYourPort: subChannel.port1 }, [subChannel.port1]);
  subChannel.port2.on('message', (value) => {
    console.log('received:', value);
  });
} else {
  parentPort.once('message', (value) => {
    assert(value.hereIsYourPort instanceof MessagePort);
    value.hereIsYourPort.postMessage('the worker is sending this');
    value.hereIsYourPort.close();
  });
}
```
new Worker(filename[, options])#
History
| Version | Changes |
|---|---|
| v19.8.0, v18.16.0 | Added support for a |
| v14.9.0 | The |
| v14.9.0 | The |
| v14.6.0, v12.19.0 | The |
| v13.13.0, v12.17.0 | The |
| v13.12.0, v12.17.0 | The |
| v13.4.0, v12.16.0 | The |
| v13.2.0, v12.16.0 | The |
| v10.5.0 | Added in: v10.5.0 |
- filename <string> | <URL> The path to the Worker's main script or module. Must be either an absolute path or a relative path (i.e. relative to the current working directory) starting with ./ or ../, or a WHATWG URL object using file: or data: protocol. When using a data: URL, the data is interpreted based on MIME type using the ECMAScript module loader. If options.eval is true, this is a string containing JavaScript code rather than a path.
- options <Object>
  - argv <any[]> List of arguments which would be stringified and appended to process.argv in the worker. This is mostly similar to the workerData but the values are available on the global process.argv as if they were passed as CLI options to the script.
  - env <Object> If set, specifies the initial value of process.env inside the Worker thread. As a special value, worker.SHARE_ENV may be used to specify that the parent thread and the child thread should share their environment variables; in that case, changes to one thread's process.env object affect the other thread as well. Default: process.env.
  - eval <boolean> If true and the first argument is a string, interpret the first argument to the constructor as a script that is executed once the worker is online.
  - execArgv <string[]> List of node CLI options passed to the worker. V8 options (such as --max-old-space-size) and options that affect the process (such as --title) are not supported. If set, this is provided as process.execArgv inside the worker. By default, options are inherited from the parent thread.
  - stdin <boolean> If this is set to true, then worker.stdin provides a writable stream whose contents appear as process.stdin inside the Worker. By default, no data is provided.
  - stdout <boolean> If this is set to true, then worker.stdout is not automatically piped through to process.stdout in the parent.
  - stderr <boolean> If this is set to true, then worker.stderr is not automatically piped through to process.stderr in the parent.
  - workerData <any> Any JavaScript value that is cloned and made available as require('node:worker_threads').workerData. The cloning occurs as described in the HTML structured clone algorithm, and an error is thrown if the object cannot be cloned (e.g. because it contains functions).
  - trackUnmanagedFds <boolean> If this is set to true, then the Worker tracks raw file descriptors managed through fs.open() and fs.close(), and closes them when the Worker exits, similar to other resources like network sockets or file descriptors managed through the FileHandle API. This option is automatically inherited by all nested Workers. Default: true.
  - transferList <Object[]> If one or more MessagePort-like objects are passed in workerData, a transferList is required for those items or ERR_MISSING_MESSAGE_PORT_IN_TRANSFER_LIST is thrown. See port.postMessage() for more information.
  - resourceLimits <Object> An optional set of resource limits for the new JS engine instance. Reaching these limits leads to termination of the Worker instance. These limits only affect the JS engine, and no external data, including no ArrayBuffers. Even if these limits are set, the process may still abort if it encounters a global out-of-memory situation.
    - maxOldGenerationSizeMb <number> The maximum size of the main heap in MB. If the command-line argument --max-old-space-size is set, it overrides this setting.
    - maxYoungGenerationSizeMb <number> The maximum size of a heap space for recently created objects. If the command-line argument --max-semi-space-size is set, it overrides this setting.
    - codeRangeSizeMb <number> The size of a pre-allocated memory range used for generated code.
    - stackSizeMb <number> The default maximum stack size for the thread. Small values may lead to unusable Worker instances. Default: 4.
  - name <string> An optional name to be appended to the thread name and to the worker title for debugging/identification purposes, making the final title [worker ${id}] ${name}. This parameter has a maximum allowed size, depending on the operating system; if the provided name exceeds the limit, it is truncated. Default: 'WorkerThread'.
    - Maximum sizes:
      - Windows: 32,767 characters
      - macOS: 64 characters
      - Linux: 16 characters
      - NetBSD: limited to PTHREAD_MAX_NAMELEN_NP
      - FreeBSD and OpenBSD: limited to MAXCOMLEN
Event:'error'#
err<any>
The 'error' event is emitted if the worker thread throws an uncaught exception. In that case, the worker is terminated.
Event:'exit'#
exitCode<integer>
The 'exit' event is emitted once the worker has stopped. If the worker exited by calling process.exit(), the exitCode parameter is the passed exit code. If the worker was terminated, the exitCode parameter is 1.
This is the final event emitted by any Worker instance.
Event:'message'#
value<any> The transmitted value
The 'message' event is emitted when the worker thread has invoked require('node:worker_threads').parentPort.postMessage(). See the port.on('message') event for more details.
All messages sent from the worker thread are emitted before the 'exit' event is emitted on the Worker object.
Event:'messageerror'#
error<Error> An Error object
The 'messageerror' event is emitted when deserializing a message failed.
Event:'online'#
The 'online' event is emitted when the worker thread has started executing JavaScript code.
worker.cpuUsage([prev])#
- Returns:<Promise>
This method returns a Promise that resolves with an object identical to the output of process.threadCpuUsage(), or rejects with an ERR_WORKER_NOT_RUNNING error if the worker is no longer running. This method allows the statistics to be observed from outside the actual thread.
worker.getHeapSnapshot([options])#
History
| Version | Changes |
|---|---|
| v19.1.0 | Support options to configure the heap snapshot. |
| v13.9.0, v12.17.0 | Added in: v13.9.0, v12.17.0 |
Returns a readable stream for a V8 snapshot of the current state of the Worker. See v8.getHeapSnapshot() for more details.
If the Worker thread is no longer running, which may occur before the 'exit' event is emitted, the returned Promise is rejected immediately with an ERR_WORKER_NOT_RUNNING error.
worker.getHeapStatistics()#
- Returns:<Promise>
This method returns a Promise that resolves with an object identical to the output of v8.getHeapStatistics(), or rejects with an ERR_WORKER_NOT_RUNNING error if the worker is no longer running. This method allows the statistics to be observed from outside the actual thread.
worker.performance#
An object that can be used to query performance information from a workerinstance.
performance.eventLoopUtilization([utilization1[, utilization2]])#
utilization1<Object> The result of a previous call toeventLoopUtilization().utilization2<Object> The result of a previous call toeventLoopUtilization()prior toutilization1.- Returns:<Object>
The same call as perf_hooks' eventLoopUtilization(), except the values of the worker instance are returned.
One difference is that, unlike the main thread, bootstrapping within a worker is done within the event loop. So the event loop utilization is immediately available once the worker's script begins execution.
An idle time that does not increase does not indicate that the worker is stuck in bootstrap. The following example shows how the worker's entire lifetime never accumulates any idle time, yet it is still able to process messages.
```mjs
import { Worker, isMainThread, parentPort } from 'node:worker_threads';

if (isMainThread) {
  const worker = new Worker(new URL(import.meta.url));
  setInterval(() => {
    worker.postMessage('hi');
    console.log(worker.performance.eventLoopUtilization());
  }, 100).unref();
} else {
  parentPort.on('message', () => console.log('msg')).unref();
  (function r(n) {
    if (--n < 0) return;
    const t = Date.now();
    while (Date.now() - t < 300);
    setImmediate(r, n);
  })(10);
}
```

```cjs
'use strict';
const { Worker, isMainThread, parentPort } = require('node:worker_threads');

if (isMainThread) {
  const worker = new Worker(__filename);
  setInterval(() => {
    worker.postMessage('hi');
    console.log(worker.performance.eventLoopUtilization());
  }, 100).unref();
} else {
  parentPort.on('message', () => console.log('msg')).unref();
  (function r(n) {
    if (--n < 0) return;
    const t = Date.now();
    while (Date.now() - t < 300);
    setImmediate(r, n);
  })(10);
}
```

The event loop utilization of a worker is available only after the 'online' event is emitted; if called before this, or after the 'exit' event, then all properties have the value of 0.
worker.postMessage(value[, transferList])#
value<any>transferList<Object[]>
Send a message to the worker that is received via require('node:worker_threads').parentPort.on('message'). See port.postMessage() for more details.
worker.ref()#
Opposite of unref(), calling ref() on a previously unref()ed worker does not let the program exit if it's the only active handle left (the default behavior). If the worker is ref()ed, calling ref() again has no effect.
worker.resourceLimits#
- Type:<Object>
Provides the set of JS engine resource constraints for this Worker thread. If the resourceLimits option was passed to the Worker constructor, this matches its values.
If the worker has stopped, the return value is an empty object.
worker.startCpuProfile()#
- Returns: <Promise>

Starts a CPU profile and returns a Promise that fulfills with a `CPUProfileHandle` object, or rejects with an error. This API supports `await using` syntax.
```js
const { Worker } = require('node:worker_threads');
const worker = new Worker(`
  const { parentPort } = require('worker_threads');
  parentPort.on('message', () => {});
  `, { eval: true });
worker.on('online', async () => {
  const handle = await worker.startCpuProfile();
  const profile = await handle.stop();
  console.log(profile);
  worker.terminate();
});
```

An example using `await using`:
```js
const { Worker } = require('node:worker_threads');
const w = new Worker(`
  const { parentPort } = require('node:worker_threads');
  parentPort.on('message', () => {});
  `, { eval: true });
w.on('online', async () => {
  // The profile is stopped automatically when the handle goes out of
  // scope, and the collected profile is discarded.
  await using handle = await w.startCpuProfile();
});
```

worker.startHeapProfile()#
- Returns: <Promise>

Starts a Heap profile and returns a Promise that fulfills with a `HeapProfileHandle` object, or rejects with an error. This API supports `await using` syntax.
```js
const { Worker } = require('node:worker_threads');
const worker = new Worker(`
  const { parentPort } = require('worker_threads');
  parentPort.on('message', () => {});
  `, { eval: true });
worker.on('online', async () => {
  const handle = await worker.startHeapProfile();
  const profile = await handle.stop();
  console.log(profile);
  worker.terminate();
});
```

An example using `await using`:
```js
const { Worker } = require('node:worker_threads');
const w = new Worker(`
  const { parentPort } = require('node:worker_threads');
  parentPort.on('message', () => {});
  `, { eval: true });
w.on('online', async () => {
  // The profile is stopped automatically when the handle goes out of
  // scope, and the collected profile is discarded.
  await using handle = await w.startHeapProfile();
});
```

worker.stderr#
- Type: <stream.Readable>

This is a readable stream which contains data written to `process.stderr` inside the worker thread. If `stderr: true` was not passed to the `Worker` constructor, then data is piped to the parent thread's `process.stderr` stream.
worker.stdin#
- Type: <null> | <stream.Writable>

If `stdin: true` was passed to the `Worker` constructor, this is a writable stream. The data written to this stream will be made available in the worker thread as `process.stdin`.
worker.stdout#
- Type: <stream.Readable>

This is a readable stream which contains data written to `process.stdout` inside the worker thread. If `stdout: true` was not passed to the `Worker` constructor, then data is piped to the parent thread's `process.stdout` stream.
worker.terminate()#
History
| Version | Changes |
|---|---|
| v12.5.0 | This function now returns a Promise. Passing a callback is deprecated, and was useless up to this version, as the Worker was actually terminated synchronously. Terminating is now a fully asynchronous operation. |
| v10.5.0 | Added in: v10.5.0 |
- Returns: <Promise>

Stop all JavaScript execution in the worker thread as soon as possible. Returns a Promise for the exit code that is fulfilled when the `'exit'` event is emitted.
worker.threadId#
- Type: <integer>

An integer identifier for the referenced thread. Inside the worker thread, it is available as `require('node:worker_threads').threadId`. This value is unique for each `Worker` instance inside a single process.
worker.threadName#
A string identifier for the referenced thread, or `null` if the thread is not running. Inside the worker thread, it is available as `require('node:worker_threads').threadName`.
worker.unref()#
Calling `unref()` on a worker allows the thread to exit if this is the only active handle in the event system. If the worker is already `unref()`ed, calling `unref()` again has no effect.
worker[Symbol.asyncDispose]()#
Calls `worker.terminate()` when the dispose scope is exited.
```js
async function example() {
  await using worker = new Worker('for (;;) {}', { eval: true });
  // The worker is automatically terminated when the scope is exited.
}
```

Notes#
Synchronous blocking of stdio#
Workers utilize message passing via <MessagePort> to implement interactions with `stdio`. This means that `stdio` output originating from a `Worker` can get blocked by synchronous code on the receiving end that is blocking the Node.js event loop.
```mjs
import {
  Worker,
  isMainThread,
} from 'node:worker_threads';

if (isMainThread) {
  new Worker(new URL(import.meta.url));
  for (let n = 0; n < 1e10; n++) {
    // Looping to simulate work.
  }
} else {
  // This output will be blocked by the for loop in the main thread.
  console.log('foo');
}
```

```cjs
'use strict';

const {
  Worker,
  isMainThread,
} = require('node:worker_threads');

if (isMainThread) {
  new Worker(__filename);
  for (let n = 0; n < 1e10; n++) {
    // Looping to simulate work.
  }
} else {
  // This output will be blocked by the for loop in the main thread.
  console.log('foo');
}
```
Launching worker threads from preload scripts#
Take care when launching worker threads from preload scripts (scripts loaded and run using the `-r` command line flag). Unless the `execArgv` option is explicitly set, new Worker threads automatically inherit the command line flags from the running process and will preload the same preload scripts as the main thread. If the preload script unconditionally launches a worker thread, every thread spawned will spawn another until the application crashes.
Zlib#
Source Code: lib/zlib.js
The `node:zlib` module provides compression functionality implemented using Gzip, Deflate/Inflate, Brotli, and Zstd.
To access it:
```mjs
import zlib from 'node:zlib';
```

```cjs
const zlib = require('node:zlib');
```
Compression and decompression are built around the Node.js Streams API.
Compressing or decompressing a stream (such as a file) can be accomplished by piping the source stream through a zlib `Transform` stream into a destination stream:
```mjs
import {
  createReadStream,
  createWriteStream,
} from 'node:fs';
import process from 'node:process';
import { createGzip } from 'node:zlib';
import { pipeline } from 'node:stream';

const gzip = createGzip();
const source = createReadStream('input.txt');
const destination = createWriteStream('input.txt.gz');

pipeline(source, gzip, destination, (err) => {
  if (err) {
    console.error('An error occurred:', err);
    process.exitCode = 1;
  }
});
```

```cjs
const {
  createReadStream,
  createWriteStream,
} = require('node:fs');
const process = require('node:process');
const { createGzip } = require('node:zlib');
const { pipeline } = require('node:stream');

const gzip = createGzip();
const source = createReadStream('input.txt');
const destination = createWriteStream('input.txt.gz');

pipeline(source, gzip, destination, (err) => {
  if (err) {
    console.error('An error occurred:', err);
    process.exitCode = 1;
  }
});
```
Or, using the promise `pipeline` API:
```mjs
import {
  createReadStream,
  createWriteStream,
} from 'node:fs';
import { createGzip } from 'node:zlib';
import { pipeline } from 'node:stream/promises';

async function do_gzip(input, output) {
  const gzip = createGzip();
  const source = createReadStream(input);
  const destination = createWriteStream(output);
  await pipeline(source, gzip, destination);
}

await do_gzip('input.txt', 'input.txt.gz');
```

```cjs
const {
  createReadStream,
  createWriteStream,
} = require('node:fs');
const process = require('node:process');
const { createGzip } = require('node:zlib');
const { pipeline } = require('node:stream/promises');

async function do_gzip(input, output) {
  const gzip = createGzip();
  const source = createReadStream(input);
  const destination = createWriteStream(output);
  await pipeline(source, gzip, destination);
}

do_gzip('input.txt', 'input.txt.gz')
  .catch((err) => {
    console.error('An error occurred:', err);
    process.exitCode = 1;
  });
```
It is also possible to compress or decompress data in a single step:
```mjs
import process from 'node:process';
import { Buffer } from 'node:buffer';
import { deflate, unzip } from 'node:zlib';

const input = '.................................';
deflate(input, (err, buffer) => {
  if (err) {
    console.error('An error occurred:', err);
    process.exitCode = 1;
  }
  console.log(buffer.toString('base64'));
});

const buffer = Buffer.from('eJzT0yMAAGTvBe8=', 'base64');
unzip(buffer, (err, buffer) => {
  if (err) {
    console.error('An error occurred:', err);
    process.exitCode = 1;
  }
  console.log(buffer.toString());
});

// Or, promisified
import { promisify } from 'node:util';
const do_unzip = promisify(unzip);

const unzippedBuffer = await do_unzip(buffer);
console.log(unzippedBuffer.toString());
```

```cjs
const { deflate, unzip } = require('node:zlib');

const input = '.................................';
deflate(input, (err, buffer) => {
  if (err) {
    console.error('An error occurred:', err);
    process.exitCode = 1;
  }
  console.log(buffer.toString('base64'));
});

const buffer = Buffer.from('eJzT0yMAAGTvBe8=', 'base64');
unzip(buffer, (err, buffer) => {
  if (err) {
    console.error('An error occurred:', err);
    process.exitCode = 1;
  }
  console.log(buffer.toString());
});

// Or, promisified
const { promisify } = require('node:util');
const do_unzip = promisify(unzip);

do_unzip(buffer)
  .then((buf) => console.log(buf.toString()))
  .catch((err) => {
    console.error('An error occurred:', err);
    process.exitCode = 1;
  });
```
Threadpool usage and performance considerations#
All `zlib` APIs, except those that are explicitly synchronous, use the Node.js internal threadpool. This can lead to surprising effects and performance limitations in some applications.
Creating and using a large number of zlib objects simultaneously can causesignificant memory fragmentation.
```mjs
import zlib from 'node:zlib';
import { Buffer } from 'node:buffer';

const payload = Buffer.from('This is some data');

// WARNING: DO NOT DO THIS!
for (let i = 0; i < 30000; ++i) {
  zlib.deflate(payload, (err, buffer) => {});
}
```

```cjs
const zlib = require('node:zlib');

const payload = Buffer.from('This is some data');

// WARNING: DO NOT DO THIS!
for (let i = 0; i < 30000; ++i) {
  zlib.deflate(payload, (err, buffer) => {});
}
```
In the preceding example, 30,000 deflate instances are created concurrently.Because of how some operating systems handle memory allocation anddeallocation, this may lead to significant memory fragmentation.
It is strongly recommended that the results of compressionoperations be cached to avoid duplication of effort.
Compressing HTTP requests and responses#
The `node:zlib` module can be used to implement support for the `gzip`, `deflate`, `br`, and `zstd` content-encoding mechanisms defined by HTTP.
The HTTPAccept-Encoding header is used within an HTTP request to identifythe compression encodings accepted by the client. TheContent-Encodingheader is used to identify the compression encodings actually applied to amessage.
The examples given below are drastically simplified to show the basic concept. Using `zlib` encoding can be expensive, and the results ought to be cached. See Memory usage tuning for more information on the speed/memory/compression tradeoffs involved in `zlib` usage.
```mjs
// Client request example
import fs from 'node:fs';
import zlib from 'node:zlib';
import http from 'node:http';
import process from 'node:process';
import { pipeline } from 'node:stream';

const request = http.get({ host: 'example.com',
                           path: '/',
                           port: 80,
                           headers: { 'Accept-Encoding': 'br,gzip,deflate,zstd' } });
request.on('response', (response) => {
  const output = fs.createWriteStream('example.com_index.html');
  const onError = (err) => {
    if (err) {
      console.error('An error occurred:', err);
      process.exitCode = 1;
    }
  };

  switch (response.headers['content-encoding']) {
    case 'br':
      pipeline(response, zlib.createBrotliDecompress(), output, onError);
      break;
    // Or, just use zlib.createUnzip() to handle both of the following cases:
    case 'gzip':
      pipeline(response, zlib.createGunzip(), output, onError);
      break;
    case 'deflate':
      pipeline(response, zlib.createInflate(), output, onError);
      break;
    case 'zstd':
      pipeline(response, zlib.createZstdDecompress(), output, onError);
      break;
    default:
      pipeline(response, output, onError);
      break;
  }
});
```

```cjs
// Client request example
const zlib = require('node:zlib');
const http = require('node:http');
const fs = require('node:fs');
const { pipeline } = require('node:stream');

const request = http.get({ host: 'example.com',
                           path: '/',
                           port: 80,
                           headers: { 'Accept-Encoding': 'br,gzip,deflate,zstd' } });
request.on('response', (response) => {
  const output = fs.createWriteStream('example.com_index.html');
  const onError = (err) => {
    if (err) {
      console.error('An error occurred:', err);
      process.exitCode = 1;
    }
  };

  switch (response.headers['content-encoding']) {
    case 'br':
      pipeline(response, zlib.createBrotliDecompress(), output, onError);
      break;
    // Or, just use zlib.createUnzip() to handle both of the following cases:
    case 'gzip':
      pipeline(response, zlib.createGunzip(), output, onError);
      break;
    case 'deflate':
      pipeline(response, zlib.createInflate(), output, onError);
      break;
    case 'zstd':
      pipeline(response, zlib.createZstdDecompress(), output, onError);
      break;
    default:
      pipeline(response, output, onError);
      break;
  }
});
```
```mjs
// Server example
// Running a gzip operation on every request is quite expensive.
// It would be much more efficient to cache the compressed buffer.
import zlib from 'node:zlib';
import http from 'node:http';
import fs from 'node:fs';
import { pipeline } from 'node:stream';

http.createServer((request, response) => {
  const raw = fs.createReadStream('index.html');
  // Store both a compressed and an uncompressed version of the resource.
  response.setHeader('Vary', 'Accept-Encoding');
  const acceptEncoding = request.headers['accept-encoding'] || '';

  const onError = (err) => {
    if (err) {
      // If an error occurs, there's not much we can do because
      // the server has already sent the 200 response code and
      // some amount of data has already been sent to the client.
      // The best we can do is terminate the response immediately
      // and log the error.
      response.end();
      console.error('An error occurred:', err);
    }
  };

  // Note: This is not a conformant accept-encoding parser.
  // See https://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.3
  if (/\bdeflate\b/.test(acceptEncoding)) {
    response.writeHead(200, { 'Content-Encoding': 'deflate' });
    pipeline(raw, zlib.createDeflate(), response, onError);
  } else if (/\bgzip\b/.test(acceptEncoding)) {
    response.writeHead(200, { 'Content-Encoding': 'gzip' });
    pipeline(raw, zlib.createGzip(), response, onError);
  } else if (/\bbr\b/.test(acceptEncoding)) {
    response.writeHead(200, { 'Content-Encoding': 'br' });
    pipeline(raw, zlib.createBrotliCompress(), response, onError);
  } else if (/\bzstd\b/.test(acceptEncoding)) {
    response.writeHead(200, { 'Content-Encoding': 'zstd' });
    pipeline(raw, zlib.createZstdCompress(), response, onError);
  } else {
    response.writeHead(200, {});
    pipeline(raw, response, onError);
  }
}).listen(1337);
```

```cjs
// Server example
// Running a gzip operation on every request is quite expensive.
// It would be much more efficient to cache the compressed buffer.
const zlib = require('node:zlib');
const http = require('node:http');
const fs = require('node:fs');
const { pipeline } = require('node:stream');

http.createServer((request, response) => {
  const raw = fs.createReadStream('index.html');
  // Store both a compressed and an uncompressed version of the resource.
  response.setHeader('Vary', 'Accept-Encoding');
  const acceptEncoding = request.headers['accept-encoding'] || '';

  const onError = (err) => {
    if (err) {
      // If an error occurs, there's not much we can do because
      // the server has already sent the 200 response code and
      // some amount of data has already been sent to the client.
      // The best we can do is terminate the response immediately
      // and log the error.
      response.end();
      console.error('An error occurred:', err);
    }
  };

  // Note: This is not a conformant accept-encoding parser.
  // See https://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.3
  if (/\bdeflate\b/.test(acceptEncoding)) {
    response.writeHead(200, { 'Content-Encoding': 'deflate' });
    pipeline(raw, zlib.createDeflate(), response, onError);
  } else if (/\bgzip\b/.test(acceptEncoding)) {
    response.writeHead(200, { 'Content-Encoding': 'gzip' });
    pipeline(raw, zlib.createGzip(), response, onError);
  } else if (/\bbr\b/.test(acceptEncoding)) {
    response.writeHead(200, { 'Content-Encoding': 'br' });
    pipeline(raw, zlib.createBrotliCompress(), response, onError);
  } else if (/\bzstd\b/.test(acceptEncoding)) {
    response.writeHead(200, { 'Content-Encoding': 'zstd' });
    pipeline(raw, zlib.createZstdCompress(), response, onError);
  } else {
    response.writeHead(200, {});
    pipeline(raw, response, onError);
  }
}).listen(1337);
```
By default, the `zlib` methods will throw an error when decompressing truncated data. However, if it is known that the data is incomplete, or the desire is to inspect only the beginning of a compressed file, it is possible to suppress the default error handling by changing the flushing method that is used to decompress the last chunk of input data:
```js
// This is a truncated version of the buffer from the above examples
const buffer = Buffer.from('eJzT0yMA', 'base64');

zlib.unzip(
  buffer,
  // For Brotli, the equivalent is zlib.constants.BROTLI_OPERATION_FLUSH.
  // For Zstd, the equivalent is zlib.constants.ZSTD_e_flush.
  { finishFlush: zlib.constants.Z_SYNC_FLUSH },
  (err, buffer) => {
    if (err) {
      console.error('An error occurred:', err);
      process.exitCode = 1;
    }
    console.log(buffer.toString());
  });
```

This will not change the behavior in other error-throwing situations, e.g. when the input data has an invalid format. Using this method, it will not be possible to determine whether the input ended prematurely or lacks the integrity checks, making it necessary to manually check that the decompressed result is valid.
Memory usage tuning#
For zlib-based streams#
From `zlib/zconf.h`, modified for Node.js usage:
The memory requirements for deflate are (in bytes):
```js
(1 << (windowBits + 2)) + (1 << (memLevel + 9))
```

That is: 128K for `windowBits = 15` + 128K for `memLevel = 8` (default values) plus a few kilobytes for small objects.
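The arithmetic can be checked directly. The following is a small illustration only; `deflateMemory` is a hypothetical helper, not part of the zlib API:

```javascript
// Deflate memory estimate from the formula above.
function deflateMemory(windowBits, memLevel) {
  return (1 << (windowBits + 2)) + (1 << (memLevel + 9));
}

console.log(deflateMemory(15, 8)); // 262144 (128K + 128K = 256K)
console.log(deflateMemory(14, 7)); // 131072 (128K)
```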
For example, to reduce the default memory requirements from 256K to 128K, theoptions should be set to:
```js
const options = { windowBits: 14, memLevel: 7 };
```

This will, however, generally degrade compression.
The memory requirements for inflate are (in bytes) `1 << windowBits`. That is, 32K for `windowBits = 15` (default value) plus a few kilobytes for small objects.
This is in addition to a single internal output slab buffer of size `chunkSize`, which defaults to 16K.
The speed of `zlib` compression is affected most dramatically by the `level` setting. A higher level will result in better compression, but will take longer to complete. A lower level will result in less compression, but will be much faster.
In general, greater memory usage options will mean that Node.js has to make fewer calls to `zlib` because it will be able to process more data on each `write` operation. So, this is another factor that affects the speed, at the cost of memory usage.
For Brotli-based streams#
There are equivalents to the zlib options for Brotli-based streams, althoughthese options have different ranges than the zlib ones:
- zlib's `level` option matches Brotli's `BROTLI_PARAM_QUALITY` option.
- zlib's `windowBits` option matches Brotli's `BROTLI_PARAM_LGWIN` option.
See below for more details on Brotli-specific options.
For Zstd-based streams#
There are equivalents to the zlib options for Zstd-based streams, althoughthese options have different ranges than the zlib ones:
- zlib's `level` option matches Zstd's `ZSTD_c_compressionLevel` option.
- zlib's `windowBits` option matches Zstd's `ZSTD_c_windowLog` option.
See below for more details on Zstd-specific options.
Flushing#
Calling `.flush()` on a compression stream will make `zlib` return as much output as currently possible. This may come at the cost of degraded compression quality, but can be useful when data needs to be available as soon as possible.
In the following example, `flush()` is used to write a compressed partial HTTP response to the client:
```mjs
import zlib from 'node:zlib';
import http from 'node:http';
import { pipeline } from 'node:stream';

http.createServer((request, response) => {
  // For the sake of simplicity, the Accept-Encoding checks are omitted.
  response.writeHead(200, { 'content-encoding': 'gzip' });
  const output = zlib.createGzip();
  let i;

  pipeline(output, response, (err) => {
    if (err) {
      // If an error occurs, there's not much we can do because
      // the server has already sent the 200 response code and
      // some amount of data has already been sent to the client.
      // The best we can do is terminate the response immediately
      // and log the error.
      clearInterval(i);
      response.end();
      console.error('An error occurred:', err);
    }
  });

  i = setInterval(() => {
    output.write(`The current time is ${Date()}\n`, () => {
      // The data has been passed to zlib, but the compression algorithm may
      // have decided to buffer the data for more efficient compression.
      // Calling .flush() will make the data available as soon as the client
      // is ready to receive it.
      output.flush();
    });
  }, 1000);
}).listen(1337);
```

```cjs
const zlib = require('node:zlib');
const http = require('node:http');
const { pipeline } = require('node:stream');

http.createServer((request, response) => {
  // For the sake of simplicity, the Accept-Encoding checks are omitted.
  response.writeHead(200, { 'content-encoding': 'gzip' });
  const output = zlib.createGzip();
  let i;

  pipeline(output, response, (err) => {
    if (err) {
      // If an error occurs, there's not much we can do because
      // the server has already sent the 200 response code and
      // some amount of data has already been sent to the client.
      // The best we can do is terminate the response immediately
      // and log the error.
      clearInterval(i);
      response.end();
      console.error('An error occurred:', err);
    }
  });

  i = setInterval(() => {
    output.write(`The current time is ${Date()}\n`, () => {
      // The data has been passed to zlib, but the compression algorithm may
      // have decided to buffer the data for more efficient compression.
      // Calling .flush() will make the data available as soon as the client
      // is ready to receive it.
      output.flush();
    });
  }, 1000);
}).listen(1337);
```
Constants#
zlib constants#
All of the constants defined in `zlib.h` are also defined on `require('node:zlib').constants`. In the normal course of operations, it will not be necessary to use these constants. They are documented so that their presence is not surprising. This section is taken almost directly from the zlib documentation.
Previously, the constants were available directly from `require('node:zlib')`, for instance `zlib.Z_NO_FLUSH`. Accessing the constants directly from the module is currently still possible but is deprecated.
Allowed flush values.
- `zlib.constants.Z_NO_FLUSH`
- `zlib.constants.Z_PARTIAL_FLUSH`
- `zlib.constants.Z_SYNC_FLUSH`
- `zlib.constants.Z_FULL_FLUSH`
- `zlib.constants.Z_FINISH`
- `zlib.constants.Z_BLOCK`
Return codes for the compression/decompression functions. Negativevalues are errors, positive values are used for special but normalevents.
- `zlib.constants.Z_OK`
- `zlib.constants.Z_STREAM_END`
- `zlib.constants.Z_NEED_DICT`
- `zlib.constants.Z_ERRNO`
- `zlib.constants.Z_STREAM_ERROR`
- `zlib.constants.Z_DATA_ERROR`
- `zlib.constants.Z_MEM_ERROR`
- `zlib.constants.Z_BUF_ERROR`
- `zlib.constants.Z_VERSION_ERROR`
Compression levels.
- `zlib.constants.Z_NO_COMPRESSION`
- `zlib.constants.Z_BEST_SPEED`
- `zlib.constants.Z_BEST_COMPRESSION`
- `zlib.constants.Z_DEFAULT_COMPRESSION`
Compression strategy.
- `zlib.constants.Z_FILTERED`
- `zlib.constants.Z_HUFFMAN_ONLY`
- `zlib.constants.Z_RLE`
- `zlib.constants.Z_FIXED`
- `zlib.constants.Z_DEFAULT_STRATEGY`
Brotli constants#
There are several options and other constants available for Brotli-based streams:
Flush operations#
The following values are valid flush operations for Brotli-based streams:
- `zlib.constants.BROTLI_OPERATION_PROCESS` (default for all operations)
- `zlib.constants.BROTLI_OPERATION_FLUSH` (default when calling `.flush()`)
- `zlib.constants.BROTLI_OPERATION_FINISH` (default for the last chunk)
- `zlib.constants.BROTLI_OPERATION_EMIT_METADATA`
  - This particular operation may be hard to use in a Node.js context, as the streaming layer makes it hard to know which data will end up in this frame. Also, there is currently no way to consume this data through the Node.js API.
Compressor options#
There are several options that can be set on Brotli encoders, affecting compression efficiency and speed. Both the keys and the values can be accessed as properties of the `zlib.constants` object.
The most important options are:
- `BROTLI_PARAM_MODE`
  - `BROTLI_MODE_GENERIC` (default)
  - `BROTLI_MODE_TEXT`, adjusted for UTF-8 text
  - `BROTLI_MODE_FONT`, adjusted for WOFF 2.0 fonts
- `BROTLI_PARAM_QUALITY`
  - Ranges from `BROTLI_MIN_QUALITY` to `BROTLI_MAX_QUALITY`, with a default of `BROTLI_DEFAULT_QUALITY`.
- `BROTLI_PARAM_SIZE_HINT`
  - Integer value representing the expected input size; defaults to `0` for an unknown input size.
The following flags can be set for advanced control over the compressionalgorithm and memory usage tuning:
- `BROTLI_PARAM_LGWIN`
  - Ranges from `BROTLI_MIN_WINDOW_BITS` to `BROTLI_MAX_WINDOW_BITS`, with a default of `BROTLI_DEFAULT_WINDOW`, or up to `BROTLI_LARGE_MAX_WINDOW_BITS` if the `BROTLI_PARAM_LARGE_WINDOW` flag is set.
- `BROTLI_PARAM_LGBLOCK`
  - Ranges from `BROTLI_MIN_INPUT_BLOCK_BITS` to `BROTLI_MAX_INPUT_BLOCK_BITS`.
- `BROTLI_PARAM_DISABLE_LITERAL_CONTEXT_MODELING`
  - Boolean flag that decreases compression ratio in favour of decompression speed.
- `BROTLI_PARAM_LARGE_WINDOW`
  - Boolean flag enabling “Large Window Brotli” mode (not compatible with the Brotli format as standardized in RFC 7932).
- `BROTLI_PARAM_NPOSTFIX`
  - Ranges from `0` to `BROTLI_MAX_NPOSTFIX`.
- `BROTLI_PARAM_NDIRECT`
  - Ranges from `0` to `15 << NPOSTFIX` in steps of `1 << NPOSTFIX`.
Decompressor options#
These advanced options are available for controlling decompression:
- `BROTLI_DECODER_PARAM_DISABLE_RING_BUFFER_REALLOCATION`
  - Boolean flag that affects internal memory allocation patterns.
- `BROTLI_DECODER_PARAM_LARGE_WINDOW`
  - Boolean flag enabling “Large Window Brotli” mode (not compatible with the Brotli format as standardized in RFC 7932).
Zstd constants#
There are several options and other constants available for Zstd-based streams:
Flush operations#
The following values are valid flush operations for Zstd-based streams:
- `zlib.constants.ZSTD_e_continue` (default for all operations)
- `zlib.constants.ZSTD_e_flush` (default when calling `.flush()`)
- `zlib.constants.ZSTD_e_end` (default for the last chunk)
Compressor options#
There are several options that can be set on Zstd encoders, affecting compression efficiency and speed. Both the keys and the values can be accessed as properties of the `zlib.constants` object.
The most important options are:
- `ZSTD_c_compressionLevel`
  - Set compression parameters according to a pre-defined cLevel table. The default level is `ZSTD_CLEVEL_DEFAULT == 3`.
- `ZSTD_c_strategy`
  - Select the compression strategy. Possible values are listed in the strategy options section below.
Strategy options#
The following constants can be used as values for the `ZSTD_c_strategy` parameter:
- `zlib.constants.ZSTD_fast`
- `zlib.constants.ZSTD_dfast`
- `zlib.constants.ZSTD_greedy`
- `zlib.constants.ZSTD_lazy`
- `zlib.constants.ZSTD_lazy2`
- `zlib.constants.ZSTD_btlazy2`
- `zlib.constants.ZSTD_btopt`
- `zlib.constants.ZSTD_btultra`
- `zlib.constants.ZSTD_btultra2`
Example:
```js
const stream = zlib.createZstdCompress({
  params: {
    [zlib.constants.ZSTD_c_strategy]: zlib.constants.ZSTD_btultra,
  },
});
```

Pledged Source Size#
It's possible to specify the expected total size of the uncompressed input via `opts.pledgedSrcSize`. If the size doesn't match at the end of the input, compression will fail with the code `ZSTD_error_srcSize_wrong`.
Decompressor options#
These advanced options are available for controlling decompression:
- `ZSTD_d_windowLogMax`
  - Select a size limit (in power of 2) beyond which the streaming API will refuse to allocate a memory buffer in order to protect the host from unreasonable memory requirements.
Class: Options#
History
| Version | Changes |
|---|---|
| v14.5.0, v12.19.0 | The |
| v9.4.0 | The |
| v8.0.0 | The |
| v5.11.0 | The |
| v0.11.1 | Added in: v0.11.1 |
Each zlib-based class takes an `options` object. No options are required.
Some options are only relevant when compressing and are ignored by the decompression classes.
- `flush` <integer> Default: `zlib.constants.Z_NO_FLUSH`
- `finishFlush` <integer> Default: `zlib.constants.Z_FINISH`
- `chunkSize` <integer> Default: `16 * 1024`
- `windowBits` <integer>
- `level` <integer> (compression only)
- `memLevel` <integer> (compression only)
- `strategy` <integer> (compression only)
- `dictionary` <Buffer> | <TypedArray> | <DataView> | <ArrayBuffer> (deflate/inflate only, empty dictionary by default)
- `info` <boolean> (If `true`, returns an object with `buffer` and `engine`.)
- `maxOutputLength` <integer> Limits output size when using convenience methods. Default: `buffer.kMaxLength`
See the `deflateInit2` and `inflateInit2` documentation for more information.
Class: BrotliOptions#
History
| Version | Changes |
|---|---|
| v14.5.0, v12.19.0 | The |
| v11.7.0 | Added in: v11.7.0 |
Each Brotli-based class takes an `options` object. All options are optional.
- `flush` <integer> Default: `zlib.constants.BROTLI_OPERATION_PROCESS`
- `finishFlush` <integer> Default: `zlib.constants.BROTLI_OPERATION_FINISH`
- `chunkSize` <integer> Default: `16 * 1024`
- `params` <Object> Key-value object containing indexed Brotli parameters.
- `maxOutputLength` <integer> Limits output size when using convenience methods. Default: `buffer.kMaxLength`
- `info` <boolean> If `true`, returns an object with `buffer` and `engine`. Default: `false`
For example:
```js
const stream = zlib.createBrotliCompress({
  chunkSize: 32 * 1024,
  params: {
    [zlib.constants.BROTLI_PARAM_MODE]: zlib.constants.BROTLI_MODE_TEXT,
    [zlib.constants.BROTLI_PARAM_QUALITY]: 4,
    [zlib.constants.BROTLI_PARAM_SIZE_HINT]: fs.statSync(inputFile).size,
  },
});
```

Class: zlib.BrotliCompress#
- Extends: `ZlibBase`

Compress data using the Brotli algorithm.
Class: zlib.BrotliDecompress#

- Extends: `ZlibBase`

Decompress data using the Brotli algorithm.

Class: zlib.Deflate#

- Extends: `ZlibBase`

Compress data using deflate.

Class: zlib.DeflateRaw#

- Extends: `ZlibBase`

Compress data using deflate, and do not append a zlib header.
Class: zlib.Gunzip#
History
| Version | Changes |
|---|---|
| v6.0.0 | Trailing garbage at the end of the input stream will now result in an |
| v5.9.0 | Multiple concatenated gzip file members are supported now. |
| v5.0.0 | A truncated input stream will now result in an |
| v0.5.8 | Added in: v0.5.8 |
- Extends: `ZlibBase`

Decompress a gzip stream.

Class: zlib.Gzip#

- Extends: `ZlibBase`

Compress data using gzip.

Class: zlib.Inflate#
History
| Version | Changes |
|---|---|
| v5.0.0 | A truncated input stream will now result in an |
| v0.5.8 | Added in: v0.5.8 |
- Extends: `ZlibBase`

Decompress a deflate stream.

Class: zlib.InflateRaw#
History
| Version | Changes |
|---|---|
| v6.8.0 | Custom dictionaries are now supported by |
| v5.0.0 | A truncated input stream will now result in an |
| v0.5.8 | Added in: v0.5.8 |
- Extends: `ZlibBase`

Decompress a raw deflate stream.

Class: zlib.Unzip#

- Extends: `ZlibBase`

Decompress either a Gzip- or Deflate-compressed stream by auto-detecting the header.

Class: zlib.ZlibBase#
History
| Version | Changes |
|---|---|
| v11.7.0, v10.16.0 | This class was renamed from |
| v0.5.8 | Added in: v0.5.8 |
- Extends: `stream.Transform`

Not exported by the `node:zlib` module. It is documented here because it is the base class of the compressor/decompressor classes.

This class inherits from `stream.Transform`, allowing `node:zlib` objects to be used in pipes and similar stream operations.
zlib.bytesWritten#
- Type: <number>

The `zlib.bytesWritten` property specifies the number of bytes written to the engine, before the bytes are processed (compressed or decompressed, as appropriate for the derived class).
zlib.flush([kind, ]callback)#
- `kind` Default: `zlib.constants.Z_FULL_FLUSH` for zlib-based streams, `zlib.constants.BROTLI_OPERATION_FLUSH` for Brotli-based streams.
- `callback` <Function>
Flush pending data. Don't call this frivolously; premature flushes negatively impact the effectiveness of the compression algorithm.
Calling this only flushes data from the internalzlib state, and does notperform flushing of any kind on the streams level. Rather, it behaves like anormal call to.write(), i.e. it will be queued up behind other pendingwrites and will only produce output when data is being read from the stream.
zlib.params(level, strategy, callback)#
- `level` <integer>
- `strategy` <integer>
- `callback` <Function>
This function is only available for zlib-based streams, i.e. not Brotli.
Dynamically update the compression level and compression strategy. Only applicable to the deflate algorithm.
zlib.reset()#
Reset the compressor/decompressor to factory defaults. Only applicable tothe inflate and deflate algorithms.
Class: ZstdOptions#
Each Zstd-based class takes an `options` object. All options are optional.
- `flush` <integer> Default: `zlib.constants.ZSTD_e_continue`
- `finishFlush` <integer> Default: `zlib.constants.ZSTD_e_end`
- `chunkSize` <integer> Default: `16 * 1024`
- `params` <Object> Key-value object containing indexed Zstd parameters.
- `maxOutputLength` <integer> Limits output size when using convenience methods. Default: `buffer.kMaxLength`
- `info` <boolean> If `true`, returns an object with `buffer` and `engine`. Default: `false`
- `dictionary` <Buffer> Optional dictionary used to improve compression efficiency when compressing or decompressing data that shares common patterns with the dictionary.
For example:
```js
const stream = zlib.createZstdCompress({
  chunkSize: 32 * 1024,
  params: {
    [zlib.constants.ZSTD_c_compressionLevel]: 10,
    [zlib.constants.ZSTD_c_checksumFlag]: 1,
  },
});
```

Class: zlib.ZstdCompress#
Compress data using the Zstd algorithm.
Class: zlib.ZstdDecompress#
Decompress data using the Zstd algorithm.
zlib.constants#
Provides an object enumerating Zlib-related constants.
zlib.crc32(data[, value])#
- data <string> | <Buffer> | <TypedArray> | <DataView> When data is a string, it will be encoded as UTF-8 before being used for computation.
- value <integer> An optional starting value. It must be a 32-bit unsigned integer. Default: 0
- Returns: <integer> A 32-bit unsigned integer containing the checksum.
Computes a 32-bit Cyclic Redundancy Check checksum of data. If value is specified, it is used as the starting value of the checksum; otherwise, 0 is used as the starting value.
The CRC algorithm is designed to compute checksums and to detect errors in data transmission. It's not suitable for cryptographic authentication.
To be consistent with other APIs, if data is a string, it will be encoded with UTF-8 before being used for computation. If users only use Node.js to compute and match the checksums, this works well with other APIs that use the UTF-8 encoding by default.
Some third-party JavaScript libraries compute the checksum on a string based on str.charCodeAt() so that they can run in browsers. If users want to match the checksum computed with this kind of library in the browser, it's better to use the same library in Node.js if it also runs in Node.js. If users have to use zlib.crc32() to match the checksum produced by such a third-party library:
- If the library accepts Uint8Array as input, use TextEncoder in the browser to encode the string into a Uint8Array with UTF-8 encoding, and compute the checksum based on the UTF-8 encoded string in the browser.
- If the library only takes a string and computes the data based on str.charCodeAt(), on the Node.js side, convert the string into a buffer using Buffer.from(str, 'utf16le').
```mjs
import zlib from 'node:zlib';
import { Buffer } from 'node:buffer';

let crc = zlib.crc32('hello'); // 907060870
crc = zlib.crc32('world', crc); // 4192936109

crc = zlib.crc32(Buffer.from('hello', 'utf16le')); // 1427272415
crc = zlib.crc32(Buffer.from('world', 'utf16le'), crc); // 4150509955
```

```cjs
const zlib = require('node:zlib');
const { Buffer } = require('node:buffer');

let crc = zlib.crc32('hello'); // 907060870
crc = zlib.crc32('world', crc); // 4192936109

crc = zlib.crc32(Buffer.from('hello', 'utf16le')); // 1427272415
crc = zlib.crc32(Buffer.from('world', 'utf16le'), crc); // 4150509955
```
zlib.createBrotliCompress([options])#
- options <brotli options>

Creates and returns a new BrotliCompress object.
zlib.createBrotliDecompress([options])#
- options <brotli options>

Creates and returns a new BrotliDecompress object.
zlib.createDeflate([options])#
- options <zlib options>

Creates and returns a new Deflate object.
zlib.createDeflateRaw([options])#
- options <zlib options>

Creates and returns a new DeflateRaw object.
An upgrade of zlib from 1.2.8 to 1.2.11 changed behavior when windowBits is set to 8 for raw deflate streams. zlib would automatically set windowBits to 9 if it was initially set to 8. Newer versions of zlib will throw an exception, so Node.js restored the original behavior of upgrading a value of 8 to 9, since passing windowBits = 9 to zlib actually results in a compressed stream that effectively uses an 8-bit window only.
zlib.createGunzip([options])#
- options <zlib options>

Creates and returns a new Gunzip object.
zlib.createGzip([options])#
- options <zlib options>

Creates and returns a new Gzip object.
zlib.createInflate([options])#
- options <zlib options>

Creates and returns a new Inflate object.
zlib.createInflateRaw([options])#
- options <zlib options>

Creates and returns a new InflateRaw object.
zlib.createUnzip([options])#
- options <zlib options>

Creates and returns a new Unzip object.
zlib.createZstdCompress([options])#
- options <zstd options>

Creates and returns a new ZstdCompress object.
zlib.createZstdDecompress([options])#
- options <zstd options>

Creates and returns a new ZstdDecompress object.
Convenience methods#
All of these take a <Buffer>, <TypedArray>, <DataView>, <ArrayBuffer>, or string as the first argument, an optional second argument to supply options to the zlib classes, and will call the supplied callback with callback(error, result).
Every method has a *Sync counterpart, which accepts the same arguments, but without a callback.
zlib.brotliCompress(buffer[, options], callback)#
- buffer <Buffer> | <TypedArray> | <DataView> | <ArrayBuffer> | <string>
- options <brotli options>
- callback <Function>
zlib.brotliCompressSync(buffer[, options])#
- buffer <Buffer> | <TypedArray> | <DataView> | <ArrayBuffer> | <string>
- options <brotli options>

Compress a chunk of data with BrotliCompress.
zlib.brotliDecompress(buffer[, options], callback)#
- buffer <Buffer> | <TypedArray> | <DataView> | <ArrayBuffer> | <string>
- options <brotli options>
- callback <Function>
zlib.brotliDecompressSync(buffer[, options])#
- buffer <Buffer> | <TypedArray> | <DataView> | <ArrayBuffer> | <string>
- options <brotli options>

Decompress a chunk of data with BrotliDecompress.
zlib.deflate(buffer[, options], callback)#
History
| Version | Changes |
|---|---|
| v9.4.0 | The |
| v8.0.0 | The |
| v8.0.0 | The |
| v0.6.0 | Added in: v0.6.0 |
- buffer <Buffer> | <TypedArray> | <DataView> | <ArrayBuffer> | <string>
- options <zlib options>
- callback <Function>
zlib.deflateSync(buffer[, options])#
History
| Version | Changes |
|---|---|
| v9.4.0 | The |
| v8.0.0 | The |
| v8.0.0 | The |
| v0.11.12 | Added in: v0.11.12 |
- buffer <Buffer> | <TypedArray> | <DataView> | <ArrayBuffer> | <string>
- options <zlib options>

Compress a chunk of data with Deflate.
zlib.deflateRaw(buffer[, options], callback)#
History
| Version | Changes |
|---|---|
| v8.0.0 | The |
| v8.0.0 | The |
| v0.6.0 | Added in: v0.6.0 |
- buffer <Buffer> | <TypedArray> | <DataView> | <ArrayBuffer> | <string>
- options <zlib options>
- callback <Function>
zlib.deflateRawSync(buffer[, options])#
History
| Version | Changes |
|---|---|
| v9.4.0 | The |
| v8.0.0 | The |
| v8.0.0 | The |
| v0.11.12 | Added in: v0.11.12 |
- buffer <Buffer> | <TypedArray> | <DataView> | <ArrayBuffer> | <string>
- options <zlib options>

Compress a chunk of data with DeflateRaw.
zlib.gunzip(buffer[, options], callback)#
History
| Version | Changes |
|---|---|
| v9.4.0 | The |
| v8.0.0 | The |
| v8.0.0 | The |
| v0.6.0 | Added in: v0.6.0 |
- buffer <Buffer> | <TypedArray> | <DataView> | <ArrayBuffer> | <string>
- options <zlib options>
- callback <Function>
zlib.gunzipSync(buffer[, options])#
History
| Version | Changes |
|---|---|
| v9.4.0 | The |
| v8.0.0 | The |
| v8.0.0 | The |
| v0.11.12 | Added in: v0.11.12 |
- buffer <Buffer> | <TypedArray> | <DataView> | <ArrayBuffer> | <string>
- options <zlib options>

Decompress a chunk of data with Gunzip.
zlib.gzip(buffer[, options], callback)#
History
| Version | Changes |
|---|---|
| v9.4.0 | The |
| v8.0.0 | The |
| v8.0.0 | The |
| v0.6.0 | Added in: v0.6.0 |
- buffer <Buffer> | <TypedArray> | <DataView> | <ArrayBuffer> | <string>
- options <zlib options>
- callback <Function>
zlib.gzipSync(buffer[, options])#
History
| Version | Changes |
|---|---|
| v9.4.0 | The |
| v8.0.0 | The |
| v8.0.0 | The |
| v0.11.12 | Added in: v0.11.12 |
- buffer <Buffer> | <TypedArray> | <DataView> | <ArrayBuffer> | <string>
- options <zlib options>

Compress a chunk of data with Gzip.
zlib.inflate(buffer[, options], callback)#
History
| Version | Changes |
|---|---|
| v9.4.0 | The |
| v8.0.0 | The |
| v8.0.0 | The |
| v0.6.0 | Added in: v0.6.0 |
- buffer <Buffer> | <TypedArray> | <DataView> | <ArrayBuffer> | <string>
- options <zlib options>
- callback <Function>
zlib.inflateSync(buffer[, options])#
History
| Version | Changes |
|---|---|
| v9.4.0 | The |
| v8.0.0 | The |
| v8.0.0 | The |
| v0.11.12 | Added in: v0.11.12 |
- buffer <Buffer> | <TypedArray> | <DataView> | <ArrayBuffer> | <string>
- options <zlib options>

Decompress a chunk of data with Inflate.
zlib.inflateRaw(buffer[, options], callback)#
History
| Version | Changes |
|---|---|
| v9.4.0 | The |
| v8.0.0 | The |
| v8.0.0 | The |
| v0.6.0 | Added in: v0.6.0 |
- buffer <Buffer> | <TypedArray> | <DataView> | <ArrayBuffer> | <string>
- options <zlib options>
- callback <Function>
zlib.inflateRawSync(buffer[, options])#
History
| Version | Changes |
|---|---|
| v9.4.0 | The |
| v8.0.0 | The |
| v8.0.0 | The |
| v0.11.12 | Added in: v0.11.12 |
- buffer <Buffer> | <TypedArray> | <DataView> | <ArrayBuffer> | <string>
- options <zlib options>

Decompress a chunk of data with InflateRaw.
zlib.unzip(buffer[, options], callback)#
History
| Version | Changes |
|---|---|
| v9.4.0 | The |
| v8.0.0 | The |
| v8.0.0 | The |
| v0.6.0 | Added in: v0.6.0 |
- buffer <Buffer> | <TypedArray> | <DataView> | <ArrayBuffer> | <string>
- options <zlib options>
- callback <Function>
zlib.unzipSync(buffer[, options])#
History
| Version | Changes |
|---|---|
| v9.4.0 | The |
| v8.0.0 | The |
| v8.0.0 | The |
| v0.11.12 | Added in: v0.11.12 |
- buffer <Buffer> | <TypedArray> | <DataView> | <ArrayBuffer> | <string>
- options <zlib options>

Decompress a chunk of data with Unzip.
zlib.zstdCompress(buffer[, options], callback)#
- buffer <Buffer> | <TypedArray> | <DataView> | <ArrayBuffer> | <string>
- options <zstd options>
- callback <Function>
zlib.zstdCompressSync(buffer[, options])#
- buffer <Buffer> | <TypedArray> | <DataView> | <ArrayBuffer> | <string>
- options <zstd options>

Compress a chunk of data with ZstdCompress.
zlib.zstdDecompress(buffer[, options], callback)#
- buffer <Buffer> | <TypedArray> | <DataView> | <ArrayBuffer> | <string>
- options <zstd options>
- callback <Function>
zlib.zstdDecompressSync(buffer[, options])#
- buffer <Buffer> | <TypedArray> | <DataView> | <ArrayBuffer> | <string>
- options <zstd options>

Decompress a chunk of data with ZstdDecompress.