Node JS Notes
Stability: 3 - Locked
The assert module provides a simple set of assertion tests that can be
used to test invariants. The module is intended for internal use by Node.js,
but can be used in application code via require('assert').
However, assert is not a testing framework, and is not intended to be used
as a general purpose assertion library.
The API for the assert module is Locked. This means that there will be no
additions or changes to any of the methods implemented and exposed by
the module.
assert(value[, message])#
Added in: v0.5.9
An alias of assert.ok().
const assert = require('assert');
assert(true); // OK
assert(1);
// OK
assert(false);
// throws "AssertionError: false == true"
assert(0);
// throws "AssertionError: 0 == true"
assert(false, 'it\'s false');
// throws "AssertionError: it's false"
assert.deepEqual(actual, expected[, message])#
Added in: v0.1.21
Tests for deep equality between the actual and expected parameters.
Primitive values are compared with the equal comparison operator ( == ).
Only enumerable "own" properties are considered.
The deepEqual() implementation does not test object prototypes, attached
symbols, or non-enumerable properties. This can lead to some potentially
surprising results. For example, the following example does not throw
an AssertionError because the properties on the Error object are nonenumerable:
// WARNING: This does not throw an AssertionError!
assert.deepEqual(Error('a'), Error('b'));
"Deep" equality means that the enumerable "own" properties of child
objects are evaluated also:
const obj1 = {
  a: {
    b: 1
  }
};
const obj2 = {
  a: {
    b: 2
  }
};
const obj3 = {
  a: {
    b: 1
  }
};
const obj4 = Object.create(obj1);
assert.deepEqual(obj1, obj1);
// OK, object is equal to itself
assert.deepEqual(obj1, obj2);
// AssertionError: { a: { b: 1 } } deepEqual { a: { b: 2 } }
// values of b are different
assert.deepEqual(obj1, obj3);
// OK, objects are equal
assert.deepEqual(obj1, obj4);
// AssertionError: { a: { b: 1 } } deepEqual {}
// Prototypes are ignored
If the values are not equal, an AssertionError is thrown with
a message property set equal to the value of the message parameter. If
the message parameter is undefined, a default error message is assigned.
assert.deepStrictEqual(actual, expected[, message])#
Added in: v1.2.0
Generally identical to assert.deepEqual() with the exception that primitive
values are compared using the strict equality operator ( === ).
assert.doesNotThrow(block[, error][, message])#
Added in: v0.1.21
Asserts that the function block does not throw an error. If an error is
thrown and it is the same type as that specified by the error parameter,
then an AssertionError is thrown:
assert.doesNotThrow(
  () => {
    throw new TypeError('Wrong value');
  },
  TypeError
);
If an AssertionError is thrown and a value is provided for
the message parameter, the value of message will be appended to
the AssertionError message:
assert.doesNotThrow(
() => {
throw new TypeError('Wrong value');
},
TypeError,
'Whoops'
);
// Throws: AssertionError: Got unwanted exception (TypeError). Whoops
assert.equal(actual, expected[, message])#
Added in: v0.1.21
Tests shallow, coercive equality between
the actual and expected parameters using the equal comparison operator
( == ).
const assert = require('assert');
assert.equal(1, 1);
// OK, 1 == 1
assert.equal(1, '1');
// OK, 1 == '1'
assert.equal(1, 2);
// AssertionError: 1 == 2
assert.equal({a: {b: 1}}, {a: {b: 1}});
// AssertionError: { a: { b: 1 } } == { a: { b: 1 } }
If the values are not equal, an AssertionError is thrown with
a message property set equal to the value of the message parameter. If
the message parameter is undefined, a default error message is assigned.
assert.fail(actual, expected, message, operator)#
Added in: v0.1.21
Throws an AssertionError. If message is falsy, the error message is set as
the values of actual and expected separated by the provided operator.
Otherwise, the error message is the value of message.
const assert = require('assert');
assert.fail(1, 2, undefined, '>');
// AssertionError: 1 > 2
assert.notDeepEqual(actual, expected[, message])#
Added in: v0.1.21
Tests for any deep inequality. Opposite of assert.deepEqual().
const obj1 = {
  a: {
    b: 1
  }
};
const obj2 = {
  a: {
    b: 2
  }
};
const obj3 = {
  a: {
    b: 1
  }
};
const obj4 = Object.create(obj1);
assert.notDeepEqual(obj1, obj1);
// AssertionError: { a: { b: 1 } } notDeepEqual { a: { b: 1 } }
assert.notDeepEqual(obj1, obj2);
// OK, obj1 and obj2 are not deeply equal
assert.notDeepEqual(obj1, obj3);
// AssertionError: { a: { b: 1 } } notDeepEqual { a: { b: 1 } }
assert.notDeepEqual(obj1, obj4);
// OK, obj1 and obj4 are not deeply equal
If the values are deeply equal, an AssertionError is thrown with
a message property set equal to the value of the message parameter. If
the message parameter is undefined, a default error message is assigned.
assert.notDeepStrictEqual(actual, expected[, message])#
Added in: v1.2.0
Tests for deep strict inequality. Opposite of assert.deepStrictEqual().
const assert = require('assert');
assert.notDeepEqual({a:1}, {a:'1'});
// AssertionError: { a: 1 } notDeepEqual { a: '1' }
assert.notDeepStrictEqual({a:1}, {a:'1'});
// OK
If the values are deeply and strictly equal, an AssertionError is thrown with
a message property set equal to the value of the message parameter. If
the message parameter is undefined, a default error message is assigned.
assert.notEqual(actual, expected[, message])#
Added in: v0.1.21
Tests shallow, coercive inequality with the not equal comparison operator
( != ).
const assert = require('assert');
assert.notEqual(1, 2);
// OK
assert.notEqual(1, 1);
// AssertionError: 1 != 1
assert.notEqual(1, '1');
// AssertionError: 1 != '1'
If the values are equal, an AssertionError is thrown with
a message property set equal to the value of the message parameter. If
the message parameter is undefined, a default error message is assigned.
assert.notStrictEqual(actual, expected[, message])#
Added in: v0.1.21
Tests strict inequality as determined by the strict not equal operator
( !== ).
const assert = require('assert');
assert.notStrictEqual(1, 2);
// OK
assert.notStrictEqual(1, 1);
// AssertionError: 1 != 1
assert.notStrictEqual(1, '1');
// OK
If the values are strictly equal, an AssertionError is thrown with
a message property set equal to the value of the message parameter. If
the message parameter is undefined, a default error message is assigned.
assert.ok(value[, message])#
Added in: v0.1.21
Tests if value is truthy. It is equivalent to assert.equal(!!value, true,
message).
If value is not truthy, an AssertionError is thrown with a message property
set equal to the value of the message parameter. If
the message parameter is undefined, a default error message is assigned.
const assert = require('assert');
assert.ok(true); // OK
assert.ok(1);
// OK
assert.ok(false);
// throws "AssertionError: false == true"
assert.ok(0);
// throws "AssertionError: 0 == true"
assert.ok(false, 'it\'s false');
// throws "AssertionError: it's false"
assert.strictEqual(actual, expected[, message])#
Added in: v0.1.21
Tests strict equality as determined by the strict equality operator ( === ).
const assert = require('assert');
assert.strictEqual(1, 2);
// AssertionError: 1 === 2
assert.strictEqual(1, 1);
// OK
assert.strictEqual(1, '1');
// AssertionError: 1 === '1'
If the values are not strictly equal, an AssertionError is thrown with
a message property set equal to the value of the message parameter. If
the message parameter is undefined, a default error message is assigned.
assert.throws(block[, error][, message])#
Added in: v0.1.21
Expects the function block to throw an error.
If specified, error can be a constructor, RegExp, or validation function.
If specified, message will be the message provided by the AssertionError if
the block fails to throw.
Validate instanceof using constructor:
assert.throws(
() => {
throw new Error('Wrong value');
},
Error
);
Validate error message using RegExp:
assert.throws(
() => {
throw new Error('Wrong value');
},
/value/
);
Custom error validation:
assert.throws(
() => {
throw new Error('Wrong value');
},
function(err) {
if ( (err instanceof Error) && /value/.test(err) ) {
return true;
}
},
'unexpected error'
);
Note that error can not be a string. If a string is provided as the second
argument, then error is assumed to be omitted and the string will be used
for message instead. This can lead to easy-to-miss mistakes:
// THIS IS A MISTAKE! DO NOT DO THIS!
assert.throws(myFunction, 'missing foo', 'did not throw with expected message');
// Do this instead.
assert.throws(myFunction, /missing foo/, 'did not throw with expected message');
Buffer
Class: Buffer
new Buffer(array)
new Buffer(buffer)
new Buffer(size)
buf[index]
buf.entries()
buf.equals(otherBuffer)
buf.keys()
buf.length
buf.readDoubleBE(offset[, noAssert])
buf.readDoubleLE(offset[, noAssert])
buf.readFloatBE(offset[, noAssert])
buf.readFloatLE(offset[, noAssert])
buf.readInt8(offset[, noAssert])
buf.readInt16BE(offset[, noAssert])
buf.readInt16LE(offset[, noAssert])
buf.readInt32BE(offset[, noAssert])
buf.readInt32LE(offset[, noAssert])
buf.readUInt8(offset[, noAssert])
buf.readUInt16BE(offset[, noAssert])
buf.readUInt16LE(offset[, noAssert])
buf.readUInt32BE(offset[, noAssert])
buf.readUInt32LE(offset[, noAssert])
buf.slice([start[, end]])
buf.swap16()
buf.swap32()
buf.toJSON()
buf.values()
buffer.INSPECT_MAX_BYTES
Class: SlowBuffer
new SlowBuffer(size)
Buffer#
Stability: 2 - Stable
Prior to the introduction of TypedArray in ECMAScript 2015 (ES6), the
JavaScript language had no mechanism for reading or manipulating
streams of binary data. The Buffer class was introduced as part of the
Node.js API to make it possible to interact with octet streams in the
context of things like TCP streams and file system operations.
Now that TypedArray has been added in ES6, the Buffer class implements
the Uint8Array API in a manner that is more optimized and suitable for
Node.js' use cases.
Instances of the Buffer class are similar to arrays of integers but
correspond to fixed-sized, raw memory allocations outside the V8 heap.
The size of the Buffer is established when it is created and cannot be
resized.
The Buffer class is a global within Node.js, making it unlikely that one
would need to ever use require('buffer').
const buf1 = Buffer.alloc(10);
// Creates a zero-filled Buffer of length 10.
const buf2 = Buffer.alloc(10, 1);
// Creates a Buffer of length 10, filled with 0x01.
const buf3 = Buffer.allocUnsafe(10);
// Creates an uninitialized buffer of length 10.
// This is faster than calling Buffer.alloc() but the returned
// Buffer instance might contain old data that needs to be
// overwritten using either fill() or write().
'ascii' - for 7-bit ASCII data only. This encoding method is very fast
and will strip the high bit if set.
Buffer.from(array)
Buffer.from(buffer)
Buffer.from(str[, encoding])
const arr = new Uint16Array(2);
arr[0] = 5000;
arr[1] = 4000;
// Shares memory with `arr`
const buf = Buffer.from(arr.buffer);
console.log(buf);
// Prints: <Buffer 88 13 a0 0f>
// changing the TypedArray changes the Buffer also
arr[1] = 6000;
console.log(buf);
// Prints: <Buffer 88 13 70 17>
new Buffer(size)#
Deprecated since: v6.0.0
Stability: 0 - Deprecated: Use
Buffer.alloc(size[, fill[, encoding]]) instead (also
see Buffer.allocUnsafe(size)).
size <Number>
Allocates a new Buffer of size bytes. The size must be less than or equal to
the value of require('buffer').kMaxLength (on 64-bit
architectures, kMaxLength is (2^31)-1). Otherwise, a RangeError is
thrown. A zero-length Buffer will be created if a size less than or equal to 0
is specified.
Unlike ArrayBuffers, the underlying memory for Buffer instances created in
this way is not initialized. The contents of a newly created Buffer are
unknown and could contain sensitive data. Use buf.fill(0) to initialize
a Buffer to zeroes.
const buf = new Buffer(5);
console.log(buf);
// <Buffer 78 e0 82 02 01>
// (octets will be different, every time)
buf.fill(0);
console.log(buf);
// <Buffer 00 00 00 00 00>
new Buffer(str[, encoding])#
Deprecated since: v6.0.0
Stability: 0 - Deprecated:
Creates a new Buffer containing the given JavaScript string str. If provided,
the encoding parameter identifies the string's character encoding.
const buf1 = new Buffer('this is a tést');
console.log(buf1.toString());
// prints: this is a tést
console.log(buf1.toString('ascii'));
// prints: this is a tC)st
const buf2 = new Buffer('7468697320697320612074c3a97374', 'hex');
console.log(buf2.toString());
// prints: this is a tést
Class Method: Buffer.alloc(size[, fill[, encoding]])#
Added in: v5.10.0
size <Number>
Allocates a new Buffer of size bytes. If fill is undefined, the Buffer will
be zero-filled.
const buf = Buffer.alloc(5);
console.log(buf);
// <Buffer 00 00 00 00 00>
The size must be less than or equal to the value
of require('buffer').kMaxLength (on 64-bit
architectures, kMaxLength is (2^31)-1). Otherwise, a RangeError is thrown.
A zero-length Buffer will be created if a size less than or equal to 0 is
specified.
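When fill and encoding are both given, the new Buffer is initialized from the decoded fill value. A quick sketch, not from the original text (the base64 string below decodes to the 11-byte ASCII text 'hello world'):

```javascript
// Allocate 11 bytes and initialize them from a base64-decoded fill value.
const buf = Buffer.alloc(11, 'aGVsbG8gd29ybGQ=', 'base64');
console.log(buf.toString());
// Prints: hello world
```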
Class Method: Buffer.allocUnsafe(size)#
Added in: v5.10.0
size <Number>
Allocates a new non-zero-filled Buffer of size bytes. The size must be less
than or equal to the value of require('buffer').kMaxLength (on 64-bit
architectures, kMaxLength is (2^31)-1). Otherwise, a RangeError is
thrown. A zero-length Buffer will be created if a size less than or equal to 0
is specified.
The underlying memory for Buffer instances created in this way is not
initialized. The contents of the newly created Buffer are unknown and may
contain sensitive data. Use buf.fill(0) to initialize such Buffer instances to
zeroes.
const buf = Buffer.allocUnsafe(5);
console.log(buf);
// <Buffer 78 e0 82 02 01>
// (octets will be different, every time)
buf.fill(0);
console.log(buf);
// <Buffer 00 00 00 00 00>
A TypeError will be thrown if size is not a number.
Note that the Buffer module pre-allocates an internal Buffer instance of
size Buffer.poolSize that is used as a pool for the fast allocation of
new Buffer instances created using Buffer.allocUnsafe(size) (and the
deprecated new Buffer(size) constructor) only when size is less than or
equal to Buffer.poolSize >> 1 (floor of Buffer.poolSize divided by two). The
default value of Buffer.poolSize is 8192 but can be modified.
Use of this pre-allocated internal memory pool is a key difference between
calling Buffer.alloc(size, fill) vs. Buffer.allocUnsafe(size).fill(fill).
Specifically, Buffer.alloc(size, fill) will never use the internal Buffer pool,
while Buffer.allocUnsafe(size).fill(fill) will use the internal Buffer pool
if size is less than or equal to half Buffer.poolSize. The difference is subtle
but can be important when an application requires the additional
performance that Buffer.allocUnsafe(size) provides.
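Either way the resulting contents are identical; only the allocation path differs. A small sketch illustrating the equivalence (not from the original text):

```javascript
// Both buffers end up as 8 bytes of 0x78 ('x'); only the second
// allocation may be served from the pre-allocated internal pool.
const a = Buffer.alloc(8, 'x');
const b = Buffer.allocUnsafe(8).fill('x');
console.log(a.equals(b));
// Prints: true
```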
Class Method: Buffer.allocUnsafeSlow(size)#
Added in: v5.10.0
size <Number>
Allocates a new non-zero-filled and non-pooled Buffer of size bytes. The
underlying memory is not initialized, so the contents are unknown.
Class Method: Buffer.byteLength(string[, encoding])#
string <String>
encoding <String> Default: 'utf8'
Return: <Number>
Returns the actual byte length of a string. This is not the same
as String.prototype.length since that returns the number of characters in a
string.
Class Method: Buffer.compare(buf1, buf2)#
buf1 <Buffer>
buf2 <Buffer>
Return: <Number>
Compares buf1 to buf2, typically for the purpose of sorting arrays of
Buffers.
Class Method: Buffer.concat(list[, totalLength])#
list <Array>
totalLength <Number>
Return: <Buffer>
Returns a new Buffer which is the result of concatenating all the Buffers in
the list together.
If the list has no items, or if the totalLength is 0, then a new zero-length
Buffer is returned.
const buf1 = Buffer.alloc(10);
const buf2 = Buffer.alloc(14);
const buf3 = Buffer.alloc(18);
const totalLength = buf1.length + buf2.length + buf3.length;
console.log(totalLength);
// 42
const bufA = Buffer.concat([buf1, buf2, buf3], totalLength);
console.log(bufA);
// <Buffer 00 00 00 00 ...>
console.log(bufA.length);
// 42
Class Method: Buffer.from(array)#
Added in: v3.0.0
array <Array>
Allocates a new Buffer using an array of octets.
A TypeError will be thrown if array is not an Array.
Class Method: Buffer.from(buffer)#
Added in: v3.0.0
buffer <Buffer>
Copies the passed buffer's data onto a new Buffer instance.
const buf1 = Buffer.from('buffer');
const buf2 = Buffer.from(buf1);
buf1[0] = 0x61;
console.log(buf1.toString());
// 'auffer'
console.log(buf2.toString());
// 'buffer' (copy is not changed)
A TypeError will be thrown if buffer is not a Buffer.
Class Method: Buffer.from(str[, encoding])#
Added in: v5.10.0
Creates a new Buffer containing the given JavaScript string str. If provided,
the encoding parameter identifies the character encoding. If not
provided, encoding defaults to 'utf8'.
const buf1 = Buffer.from('this is a tést');
console.log(buf1.toString());
// prints: this is a tést
console.log(buf1.toString('ascii'));
// prints: this is a tC)st
const buf2 = Buffer.from('7468697320697320612074c3a97374', 'hex');
console.log(buf2.toString());
// prints: this is a tést
A TypeError will be thrown if str is not a string.
Class Method: Buffer.isBuffer(obj)#
obj <Object>
Return: <Boolean>
Returns 'true' if obj is a Buffer.
buf.compare(target[, targetStart[, targetEnd[, sourceStart[, sourceEnd]]]])#
target <Buffer>
Return: <Number>
Compares buf with target and returns a number indicating whether buf
comes before, after, or is the same as target in sort order.
console.log(buf1.compare(buf2, 5, 6, 5));
// Prints: 1
A RangeError will be thrown if: targetStart < 0, sourceStart < 0, targetEnd
> target.byteLength or sourceEnd > source.byteLength.
buf.copy(targetBuffer[, targetStart[, sourceStart[, sourceEnd]]])#
Copies data from a region of this Buffer to a region in the target Buffer
even if the target memory region overlaps with the source.
Example: build two Buffers, then copy buf1 from byte 16 through byte 19
into buf2, starting at the 8th byte in buf2.
const buf1 = Buffer.allocUnsafe(26);
const buf2 = Buffer.allocUnsafe(26).fill('!');
for (let i = 0 ; i < 26 ; i++) {
buf1[i] = i + 97; // 97 is ASCII a
}
buf1.copy(buf2, 8, 16, 20);
console.log(buf2.toString('ascii', 0, 25));
// Prints: !!!!!!!!qrst!!!!!!!!!!!!!
Example: Build a single Buffer, then copy data from one region to an
overlapping region in the same Buffer:
const buf = Buffer.allocUnsafe(26);
for (var i = 0 ; i < 26 ; i++) {
buf[i] = i + 97; // 97 is ASCII a
}
buf.copy(buf, 0, 4, 10);
console.log(buf.toString());
// efghijghijklmnopqrstuvwxyz
buf.entries()#
Added in: v1.1.0
Return: <Iterator>
Creates and returns an iterator of [index, byte] pairs from the Buffer
contents.
const buf = Buffer.from('buffer');
for (var pair of buf.entries()) {
console.log(pair);
}
// prints:
// [0, 98]
// [1, 117]
// [2, 102]
// [3, 102]
// [4, 101]
// [5, 114]
buf.equals(otherBuffer)#
Added in: v1.0.0
otherBuffer <Buffer>
Return: <Boolean>
Returns true if both buf and otherBuffer have exactly the same bytes,
false otherwise.
const buf1 = Buffer.from('ABC');
const buf2 = Buffer.from('414243', 'hex');
const buf3 = Buffer.from('ABCD');
console.log(buf1.equals(buf2));
// Prints: true
console.log(buf1.equals(buf3));
// Prints: false
buf.fill(value[, offset[, end]][, encoding])#
Added in: v0.5.0
Return: <Buffer>
Fills the Buffer with the specified value. If the offset (defaults to 0)
and end (defaults to buf.length) are not given the entire buffer will be
filled. The method returns a reference to the Buffer, so calls can be
chained. This is meant as a small simplification, allowing the creation
and fill of a Buffer to be done on a single line:
const b = Buffer.allocUnsafe(50).fill('h');
console.log(b.toString());
// Prints: hhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhh
encoding is only relevant if value is a string. Otherwise it is
ignored. value is coerced to a uint32 value if it is not a String or Number.
The fill() operation writes bytes into the Buffer dumbly. If the final write
falls in between a multi-byte character then whatever bytes fit into the
buffer are written.
Buffer.allocUnsafe(3).fill('\u0222');
// Prints: <Buffer c8 a2 c8>
buf.indexOf(value[, byteOffset][, encoding])#
Added in: v1.5.0
Return: <Number>
buf.includes(value[, byteOffset][, encoding])#
Added in: v5.3.0
Return: <Boolean>
Equivalent to buf.indexOf() !== -1.
buf.keys()#
Added in: v1.1.0
Return: <Iterator>
Creates and returns an iterator of Buffer keys (indices).
buf.lastIndexOf(value[, byteOffset][, encoding])#
Added in: v6.0.0
Return: <Number>
const utf16Buffer = Buffer.from('\u039a\u0391\u03a3\u03a3\u0395', 'ucs2');
utf16Buffer.lastIndexOf('\u03a3', undefined, 'ucs2');
// returns 6
utf16Buffer.lastIndexOf('\u03a3', -5, 'ucs2');
// returns 4
buf.length#
<Number>
Returns the amount of memory allocated for the Buffer in bytes. Note that
this does not necessarily reflect the amount of usable data.
buf.readDoubleBE(offset[, noAssert])#
buf.readDoubleLE(offset[, noAssert])#
Return: <Number>
Reads a 64-bit double from the Buffer at the specified offset with specified
endian format (readDoubleBE() returns big endian, readDoubleLE() returns
little endian).
Setting noAssert to true skips validation of the offset. This allows
the offset to be beyond the end of the Buffer.
const buf = Buffer.from([1,2,3,4,5,6,7,8]);
buf.readDoubleBE();
// Returns: 8.20788039913184e-304
buf.readDoubleLE();
// Returns: 5.447603722011605e-270
buf.readDoubleLE(1);
// throws RangeError: Index out of range
buf.readDoubleLE(1, true); // Warning: reads past end of buffer!
// Segmentation fault! don't do this!
buf.readFloatBE(offset[, noAssert])#
buf.readFloatLE(offset[, noAssert])#
Return: <Number>
Reads a 32-bit float from the Buffer at the specified offset with specified
endian format (readFloatBE() returns big endian, readFloatLE() returns little
endian).
Setting noAssert to true skips validation of the offset. This allows
the offset to be beyond the end of the Buffer.
const buf = Buffer.from([1,2,3,4]);
buf.readFloatBE();
// Returns: 2.387939260590663e-38
buf.readFloatLE();
// Returns: 1.539989614439558e-36
buf.readFloatLE(1);
// throws RangeError: Index out of range
buf.readInt8(offset[, noAssert])#
Return: <Number>
Reads a signed 8-bit integer from the Buffer at the specified offset.
Setting noAssert to true skips validation of the offset. This allows
the offset to be beyond the end of the Buffer.
Integers read from the Buffer are interpreted as two's complement signed
values.
const buf = Buffer.from([1,-2,3,4]);
buf.readInt8(0);
// returns 1
buf.readInt8(1);
// returns -2
buf.readInt16BE(offset[, noAssert])#
buf.readInt16LE(offset[, noAssert])#
Return: <Number>
Reads a signed 16-bit integer from the Buffer at the specified offset with
the specified endian format (readInt16BE() returns big
endian, readInt16LE() returns little endian).
Setting noAssert to true skips validation of the offset. This allows
the offset to be beyond the end of the Buffer.
Integers read from the Buffer are interpreted as two's complement signed
values.
const buf = Buffer.from([1,-2,3,4]);
buf.readInt16BE();
// returns 510
buf.readInt16LE(1);
// returns 1022
buf.readInt32BE(offset[, noAssert])#
buf.readInt32LE(offset[, noAssert])#
Return: <Number>
Reads a signed 32-bit integer from the Buffer at the specified offset with
the specified endian format (readInt32BE() returns big
endian, readInt32LE() returns little endian).
Setting noAssert to true skips validation of the offset. This allows
the offset to be beyond the end of the Buffer.
Integers read from the Buffer are interpreted as two's complement signed
values.
const buf = Buffer.from([1,-2,3,4]);
buf.readInt32BE();
// returns 33424132
buf.readInt32LE();
// returns 67370497
buf.readInt32LE(1);
// throws RangeError: Index out of range
buf.readIntBE(offset, byteLength[, noAssert])#
buf.readIntLE(offset, byteLength[, noAssert])#
Added in: v1.0.0
Return: <Number>
Reads byteLength number of bytes from the Buffer at the specified offset
and interprets the result as a two's complement signed value. Supports up
to 48 bits of accuracy.
buf.readUInt8(offset[, noAssert])#
Return: <Number>
Reads an unsigned 8-bit integer from the Buffer at the specified offset.
Setting noAssert to true skips validation of the offset. This allows
the offset to be beyond the end of the Buffer.
const buf = Buffer.from([1,-2,3,4]);
buf.readUInt8(0);
// returns 1
buf.readUInt8(1);
// returns 254
buf.readUInt16BE(offset[, noAssert])#
buf.readUInt16LE(offset[, noAssert])#
Return: <Number>
Reads an unsigned 16-bit integer from the Buffer at the specified offset
with the specified endian format.
buf.readUInt32BE(offset[, noAssert])#
buf.readUInt32LE(offset[, noAssert])#
Return: <Number>
buf.readUIntBE(offset, byteLength[, noAssert])#
buf.readUIntLE(offset, byteLength[, noAssert])#
Return: <Number>
buf.slice([start[, end]])#
Return: <Buffer>
Returns a new Buffer that references the same memory as the original,
but offset and cropped by the start and end indices.
Note that modifying the new Buffer slice will modify the memory
in the original Buffer because the allocated memory of the two
objects overlap.
Example: build a Buffer with the ASCII alphabet, take a slice, then modify
one byte from the original Buffer.
const buf1 = Buffer.allocUnsafe(26);
for (var i = 0 ; i < 26 ; i++) {
buf1[i] = i + 97; // 97 is ASCII a
}
const buf2 = buf1.slice(0, 3);
buf2.toString('ascii', 0, buf2.length);
// Returns: 'abc'
buf1[0] = 33;
buf2.toString('ascii', 0, buf2.length);
// Returns: '!bc'
Specifying negative indexes causes the slice to be generated relative to
the end of the Buffer rather than the beginning.
const buf = Buffer.from('buffer');
buf.slice(-6, -1).toString();
// Returns 'buffe', equivalent to buf.slice(0, 5)
buf.slice(-6, -2).toString();
// Returns 'buff', equivalent to buf.slice(0, 4)
buf.slice(-5, -2).toString();
// Returns 'uff', equivalent to buf.slice(1, 4)
buf.swap16()#
Added in: v5.10.0
Return: <Buffer>
Interprets the Buffer as an array of unsigned 16-bit integers and swaps
the byte-order in-place. Returns a reference to the Buffer.
buf.swap32()#
Added in: v5.10.0
Return: <Buffer>
Interprets the Buffer as an array of unsigned 32-bit integers and swaps
the byte-order in-place. Returns a reference to the Buffer.
buf.toString([encoding[, start[, end]]])#
Return: <String>
Decodes and returns a string from the Buffer data using the specified
character set encoding.
const buf = Buffer.allocUnsafe(26);
for (var i = 0 ; i < 26 ; i++) {
buf[i] = i + 97; // 97 is ASCII a
}
buf.toString('ascii');
// Returns: 'abcdefghijklmnopqrstuvwxyz'
buf.toString('ascii',0,5);
// Returns: 'abcde'
buf.toString('utf8',0,5);
// Returns: 'abcde'
buf.toString(undefined,0,5);
// Returns: 'abcde', encoding defaults to 'utf8'
buf.toJSON()#
Added in: v0.9.2
Return: <Object>
Returns a JSON representation of the Buffer instance. JSON.stringify()
implicitly calls this function when stringifying a Buffer instance.
const buf = Buffer.from('test');
const json = JSON.stringify(buf);
console.log(json);
// Prints: '{"type":"Buffer","data":[116,101,115,116]}'
const copy = JSON.parse(json, (key, value) => {
  return value && value.type === 'Buffer'
    ? Buffer.from(value.data)
    : value;
});
console.log(copy.toString());
// Prints: 'test'
buf.values()#
Added in: v1.1.0
Return: <Iterator>
Creates and returns an iterator for Buffer values (bytes). This function is
called automatically when the Buffer is used in a for..of statement.
const buf = Buffer.from('buffer');
for (var value of buf.values()) {
console.log(value);
}
// prints:
// 98
// 117
// 102
// 102
// 101
// 114
for (var value of buf) {
console.log(value);
}
// prints:
// 98
// 117
// 102
// 102
// 101
// 114
buf.write(string[, offset[, length]][, encoding])#
Writes string to the Buffer at offset using the given encoding and returns
the number of bytes written.
buf.writeDoubleBE(value, offset[, noAssert])#
buf.writeDoubleLE(value, offset[, noAssert])#
Writes value to the Buffer at the specified offset with specified endian
format (writeDoubleBE() writes big endian, writeDoubleLE() writes little
endian). The value argument should be a valid 64-bit double. Behavior is
not defined when value is anything other than a 64-bit double.
Set noAssert to true to skip validation of value and offset. This means
that value may be too large for the specific function and offset may be
beyond the end of the Buffer leading to the values being silently dropped.
This should not be used unless you are certain of correctness.
Example:
const buf = Buffer.allocUnsafe(8);
buf.writeDoubleBE(0xdeadbeefcafebabe, 0);
console.log(buf);
// Prints: <Buffer 43 eb d5 b7 dd f9 5f d7>
buf.writeDoubleLE(0xdeadbeefcafebabe, 0);
console.log(buf);
// Prints: <Buffer d7 5f f9 dd b7 d5 eb 43>
buf.writeFloatBE(value, offset[, noAssert])#
buf.writeFloatLE(value, offset[, noAssert])#
Writes value to the Buffer at the specified offset with specified endian
format (writeFloatBE() writes big endian, writeFloatLE() writes little
endian). Behavior is not defined when value is anything other than a
32-bit float.
Set noAssert to true to skip validation of value and offset. This means
that value may be too large for the specific function and offset may be
beyond the end of the Buffer leading to the values being silently dropped.
This should not be used unless you are certain of correctness.
Example:
const buf = Buffer.allocUnsafe(4);
buf.writeFloatBE(0xcafebabe, 0);
console.log(buf);
// Prints: <Buffer 4f 4a fe bb>
buf.writeFloatLE(0xcafebabe, 0);
console.log(buf);
// Prints: <Buffer bb fe 4a 4f>
buf.writeInt8(value, offset[, noAssert])#
Writes value to the Buffer at the specified offset. The value should be a
valid signed 8-bit integer. Behavior is not defined when value is anything
other than a signed 8-bit integer.
Set noAssert to true to skip validation of value and offset. This means
that value may be too large for the specific function and offset may be
beyond the end of the Buffer leading to the values being silently dropped.
This should not be used unless you are certain of correctness.
The value is interpreted and written as a two's complement signed
integer.
const buf = Buffer.allocUnsafe(2);
buf.writeInt8(2, 0);
buf.writeInt8(-2, 1);
console.log(buf);
// Prints: <Buffer 02 fe>
buf.writeInt16BE(value, offset[, noAssert])#
buf.writeInt16LE(value, offset[, noAssert])#
Writes value to the Buffer at the specified offset with specified endian
format (writeInt16BE() writes big endian, writeInt16LE() writes little
endian). The value should be a valid signed 16-bit integer. Behavior is not
defined when value is anything other than a signed 16-bit integer.
Set noAssert to true to skip validation of value and offset. This means
that value may be too large for the specific function and offset may be
beyond the end of the Buffer leading to the values being silently dropped.
This should not be used unless you are certain of correctness.
The value is interpreted and written as a two's complement signed
integer.
const buf = Buffer.allocUnsafe(4);
buf.writeInt16BE(0x0102,0);
buf.writeInt16LE(0x0304,2);
console.log(buf);
// Prints: <Buffer 01 02 04 03>
buf.writeInt32BE(value, offset[, noAssert])#
buf.writeInt32LE(value, offset[, noAssert])#
Writes value to the Buffer at the specified offset with specified endian
format (writeInt32BE() writes big endian, writeInt32LE() writes little
endian). The value should be a valid signed 32-bit integer. Behavior is not
defined when value is anything other than a signed 32-bit integer.
Set noAssert to true to skip validation of value and offset. This means
that value may be too large for the specific function and offset may be
beyond the end of the Buffer leading to the values being silently dropped.
This should not be used unless you are certain of correctness.
The value is interpreted and written as a two's complement signed
integer.
const buf = Buffer.allocUnsafe(8);
buf.writeInt32BE(0x01020304,0);
buf.writeInt32LE(0x05060708,4);
console.log(buf);
// Prints: <Buffer 01 02 03 04 08 07 06 05>
buf.writeIntBE(value, offset, byteLength[, noAssert])#
buf.writeIntLE(value, offset, byteLength[, noAssert])#
Added in: v0.11.15
Writes value to the Buffer at the specified offset and byteLength. Supports
up to 48 bits of accuracy. For example:
const buf1 = Buffer.allocUnsafe(6);
buf1.writeUIntBE(0x1234567890ab, 0, 6);
console.log(buf1);
// Prints: <Buffer 12 34 56 78 90 ab>
const buf2 = Buffer.allocUnsafe(6);
buf2.writeUIntLE(0x1234567890ab, 0, 6);
console.log(buf2);
// Prints: <Buffer ab 90 78 56 34 12>
Set noAssert to true to skip validation of value and offset. This means
that value may be too large for the specific function and offset may be
beyond the end of the Buffer leading to the values being silently dropped.
This should not be used unless you are certain of correctness.
Behavior is not defined when value is anything other than an integer.
buf.writeUInt8(value, offset[, noAssert])#
Writes value to the Buffer at the specified offset. The value should be a
valid unsigned 8-bit integer. Behavior is not defined when value is
anything other than an unsigned 8-bit integer.
Set noAssert to true to skip validation of value and offset. This means
that value may be too large for the specific function and offset may be
beyond the end of the Buffer leading to the values being silently dropped.
This should not be used unless you are certain of correctness.
Example:
const buf = Buffer.allocUnsafe(4);
buf.writeUInt8(0x3, 0);
buf.writeUInt8(0x4, 1);
buf.writeUInt8(0x23, 2);
buf.writeUInt8(0x42, 3);
console.log(buf);
// Prints: <Buffer 03 04 23 42>
buf.writeUInt16BE(value, offset[, noAssert])#
buf.writeUInt16LE(value, offset[, noAssert])#
Writes value to the Buffer at the specified offset with specified endian
format (writeUInt16BE() writes big endian, writeUInt16LE() writes little
endian). The value should be a valid unsigned 16-bit integer. Behavior is
not defined when value is anything other than an unsigned 16-bit integer.
Set noAssert to true to skip validation of value and offset. This means
that value may be too large for the specific function and offset may be
beyond the end of the Buffer leading to the values being silently dropped.
This should not be used unless you are certain of correctness.
Example:
const buf = Buffer.allocUnsafe(4);
buf.writeUInt16BE(0xdead, 0);
buf.writeUInt16BE(0xbeef, 2);
console.log(buf);
// Prints: <Buffer de ad be ef>
buf.writeUInt16LE(0xdead, 0);
buf.writeUInt16LE(0xbeef, 2);
console.log(buf);
// Prints: <Buffer ad de ef be>
buf.writeUInt32BE(value, offset[, noAssert])#
buf.writeUInt32LE(value, offset[, noAssert])#
Writes value to the Buffer at the specified offset with specified endian
format (writeUInt32BE() writes big endian, writeUInt32LE() writes little
endian). The value should be a valid unsigned 32-bit integer. Behavior is
not defined when value is anything other than an unsigned 32-bit integer.
Set noAssert to true to skip validation of value and offset. This means
that value may be too large for the specific function and offset may be
beyond the end of the Buffer leading to the values being silently dropped.
This should not be used unless you are certain of correctness.
Example:
const buf = Buffer.allocUnsafe(4);
buf.writeUInt32BE(0xfeedface, 0);
console.log(buf);
// Prints: <Buffer fe ed fa ce>
buf.writeUInt32LE(0xfeedface, 0);
console.log(buf);
// Prints: <Buffer ce fa ed fe>
buf.writeUIntBE(value, offset, byteLength[, noAssert])#
buf.writeUIntLE(value, offset, byteLength[, noAssert])#
Added in: v0.5.5
Writes value to the Buffer at the specified offset and byteLength. Supports
up to 48 bits of accuracy. For example:
const buf = Buffer.allocUnsafe(6);
buf.writeUIntBE(0x1234567890ab, 0, 6);
console.log(buf);
// Prints: <Buffer 12 34 56 78 90 ab>
Set noAssert to true to skip validation of value and offset. This means
that value may be too large for the specific function and offset may be
beyond the end of the Buffer leading to the values being silently dropped.
This should not be used unless you are certain of correctness.
Behavior is not defined when value is anything other than an unsigned
integer.
buffer.INSPECT_MAX_BYTES#
<Number> Default: 50
Returns the maximum number of bytes that will be returned
when buf.inspect() is called. This can be overridden by user modules.
Class: SlowBuffer#
new SlowBuffer(size)#
size <Number>
Allocates a new SlowBuffer of size bytes. The size must be less than or
equal to the value of require('buffer').kMaxLength (on 64-bit
architectures, kMaxLength is (2^31)-1). Otherwise, a RangeError is
thrown. A zero-length Buffer will be created if a size less than or equal to 0
is specified.
The underlying memory for SlowBuffer instances is not initialized. The
contents of a newly created SlowBuffer are unknown and could contain
sensitive data. Use buf.fill(0) to initialize a SlowBuffer to zeroes.
const SlowBuffer = require('buffer').SlowBuffer;
const buf = new SlowBuffer(5);
console.log(buf);
// <Buffer 78 e0 82 02 01>
// (octets will be different, every time)
buf.fill(0);
console.log(buf);
// <Buffer 00 00 00 00 00>
Addons
Hello world
Building
Addon examples
Function arguments
Callbacks
Object factory
Function factory
AtExit hooks
Addons#
Node.js Addons are dynamically-linked shared objects, written in C or
C++, that can be loaded into Node.js using the require() function, and used
just as if they were an ordinary Node.js module. They are used primarily to
provide an interface between JavaScript running in Node.js and C/C++
libraries.
At the moment, the method for implementing Addons is rather
complicated, involving knowledge of several components and APIs:
V8: the C++ library Node.js currently uses to provide the JavaScript
implementation. V8 provides the mechanisms for creating objects,
calling functions, etc. V8's API is documented mostly in
the v8.h header file (deps/v8/include/v8.h in the Node.js source tree),
which is also available online.
libuv: The C library that implements the Node.js event loop, its
worker threads and all of the asynchronous behaviors of the platform.
It also serves as a cross-platform abstraction library, giving easy,
POSIX-like access across all major operating systems to many
common system tasks, such as interacting with the filesystem,
sockets, timers and system events. libuv also provides a pthreads-like
threading abstraction that may be used to power more sophisticated
asynchronous Addons that need to move beyond the standard event
loop. Addon authors are encouraged to think about how to avoid
blocking the event loop with I/O or other time-intensive tasks by offloading work via libuv to non-blocking system operations, worker
threads or a custom use of libuv's threads.
All of the following examples are available for download and may be used
as a starting-point for your own Addon.
Hello world#
This "Hello world" example is a simple Addon, written in C++, that is the
equivalent of the following JavaScript code:
module.exports.hello = () => 'world';
First, create the file hello.cc:
// hello.cc
#include <node.h>
namespace demo {
using v8::FunctionCallbackInfo;
using v8::Isolate;
using v8::Local;
using v8::Object;
using v8::String;
using v8::Value;
try {
return require('./build/Release/addon.node');
} catch (err) {
return require('./build/Debug/addon.node');
}
Linking to Node.js' own dependencies#
Node.js uses a number of statically linked libraries such as V8, libuv and
OpenSSL. All Addons are required to link to V8 and may link to any of the
other dependencies as well. Typically, this is as simple as including the
appropriate #include <...> statements (e.g. #include <v8.h>) and node-gyp will locate the appropriate headers automatically. However, there are
a few caveats to be aware of:
Each of the examples illustrated in this document make direct use of the
Node.js and V8 APIs for implementing Addons. It is important to
understand that the V8 API can, and has, changed dramatically from one
V8 release to the next (and one major Node.js release to the next). With
each change, Addons may need to be updated and recompiled in order to
continue functioning. The Node.js release schedule is designed to
minimize the frequency and impact of such changes but there is little that
Node.js can do currently to ensure stability of the V8 APIs.
The Native Abstractions for Node.js (or nan) provide a set of tools that
Addon developers are recommended to use to keep compatibility between
past and future releases of V8 and Node.js. See the nan examples for an
illustration of how it can be used.
Addon examples#
Following are some example Addons intended to help developers get
started. The examples make use of the V8 APIs. Refer to the online V8
reference for help with the various V8 calls, and V8's Embedder's
Guide for an explanation of several concepts used such as handles,
scopes, function templates, etc.
Each of these examples uses the following binding.gyp file:
{
"targets": [
{
"target_name": "addon",
"sources": [ "addon.cc" ]
}
]
}
In cases where there is more than one .cc file, simply add the additional
filename to the sources array. For example:
"sources": ["addon.cc", "myexample.cc"]
Once the binding.gyp file is ready, the example Addons can be configured
and built using node-gyp:
using v8::Exception;
using v8::FunctionCallbackInfo;
using v8::Isolate;
using v8::Local;
using v8::Number;
using v8::Object;
using v8::String;
using v8::Value;
namespace demo {
using v8::Function;
using v8::FunctionCallbackInfo;
using v8::Isolate;
using v8::Local;
using v8::Null;
using v8::Object;
using v8::String;
using v8::Value;
addon((msg) => {
console.log(msg); // 'hello world'
});
Note that, in this example, the callback function is invoked synchronously.
Object factory#
Addons can create and return new objects from within a C++ function as
illustrated in the following example. An object is created and returned with
a property msg that echoes the string passed to createObject():
// addon.cc
#include <node.h>
namespace demo {
using v8::FunctionCallbackInfo;
using v8::Isolate;
using v8::Local;
using v8::Object;
using v8::String;
using v8::Value;
} // namespace demo
To test it in JavaScript:
// test.js
const addon = require('./build/Release/addon');
var obj1 = addon('hello');
var obj2 = addon('world');
console.log(obj1.msg + ' ' + obj2.msg); // 'hello world'
Function factory#
Another common scenario is creating JavaScript functions that wrap C++
functions and returning those back to JavaScript:
// addon.cc
#include <node.h>
namespace demo {
using v8::Function;
using v8::FunctionCallbackInfo;
using v8::FunctionTemplate;
using v8::Isolate;
using v8::Local;
using v8::Object;
using v8::String;
using v8::Value;
using v8::Context;
using v8::Function;
using v8::FunctionCallbackInfo;
using v8::FunctionTemplate;
using v8::Isolate;
using v8::Local;
using v8::Number;
using v8::Object;
using v8::Persistent;
using v8::String;
using v8::Value;
Persistent<Function> MyObject::constructor;
MyObject::MyObject(double value) : value_(value) {
}
MyObject::~MyObject() {
}
void MyObject::Init(Local<Object> exports) {
Isolate* isolate = exports->GetIsolate();
// Prepare constructor template
Local<FunctionTemplate> tpl = FunctionTemplate::New(isolate, New);
tpl->SetClassName(String::NewFromUtf8(isolate, "MyObject"));
tpl->InstanceTemplate()->SetInternalFieldCount(1);
// Prototype
NODE_SET_PROTOTYPE_METHOD(tpl, "plusOne", PlusOne);
constructor.Reset(isolate, tpl->GetFunction());
exports->Set(String::NewFromUtf8(isolate, "MyObject"),
tpl->GetFunction());
}
void MyObject::New(const FunctionCallbackInfo<Value>& args) {
Isolate* isolate = args.GetIsolate();
if (args.IsConstructCall()) {
// Invoked as constructor: `new MyObject(...)`
double value = args[0]->IsUndefined() ? 0 : args[0]->NumberValue();
MyObject* obj = new MyObject(value);
obj->Wrap(args.This());
args.GetReturnValue().Set(args.This());
} else {
// Invoked as plain function `MyObject(...)`, turn into construct call.
const int argc = 1;
Local<Value> argv[argc] = { args[0] };
Local<Context> context = isolate->GetCurrentContext();
Local<Function> cons = Local<Function>::New(isolate, constructor);
Local<Object> result =
cons->NewInstance(context, argc, argv).ToLocalChecked();
args.GetReturnValue().Set(result);
}
}
void MyObject::PlusOne(const FunctionCallbackInfo<Value>& args) {
Isolate* isolate = args.GetIsolate();
MyObject* obj = ObjectWrap::Unwrap<MyObject>(args.Holder());
obj->value_ += 1;
args.GetReturnValue().Set(Number::New(isolate, obj->value_));
}
} // namespace demo
To build this example, the myobject.cc file must be added to
the binding.gyp:
{
"targets": [
{
"target_name": "addon",
"sources": [
"addon.cc",
"myobject.cc"
]
}
]
}
Test it with:
// test.js
const addon = require('./build/Release/addon');
var obj = new addon.MyObject(10);
console.log(obj.plusOne()); // 11
console.log(obj.plusOne()); // 12
console.log(obj.plusOne()); // 13
Factory of wrapped objects#
Alternatively, it is possible to use a factory pattern to avoid explicitly
creating object instances using the JavaScript new operator:
var obj = addon.createObject();
// instead of:
// var obj = new addon.Object();
First, the createObject() method is implemented in addon.cc:
// addon.cc
#include <node.h>
#include "myobject.h"
namespace demo {
using v8::FunctionCallbackInfo;
using v8::Isolate;
using v8::Local;
using v8::Object;
using v8::String;
using v8::Value;
using v8::Context;
using v8::Function;
using v8::FunctionCallbackInfo;
using v8::FunctionTemplate;
using v8::Isolate;
using v8::Local;
using v8::Number;
using v8::Object;
using v8::Persistent;
using v8::String;
using v8::Value;
Persistent<Function> MyObject::constructor;
MyObject::MyObject(double value) : value_(value) {
}
MyObject::~MyObject() {
}
void MyObject::Init(Isolate* isolate) {
// Prepare constructor template
Local<FunctionTemplate> tpl = FunctionTemplate::New(isolate, New);
tpl->SetClassName(String::NewFromUtf8(isolate, "MyObject"));
tpl->InstanceTemplate()->SetInternalFieldCount(1);
// Prototype
NODE_SET_PROTOTYPE_METHOD(tpl, "plusOne", PlusOne);
constructor.Reset(isolate, tpl->GetFunction());
}
void MyObject::New(const FunctionCallbackInfo<Value>& args) {
Isolate* isolate = args.GetIsolate();
if (args.IsConstructCall()) {
// Invoked as constructor: `new MyObject(...)`
double value = args[0]->IsUndefined() ? 0 : args[0]->NumberValue();
MyObject* obj = new MyObject(value);
obj->Wrap(args.This());
args.GetReturnValue().Set(args.This());
} else {
// Invoked as plain function `MyObject(...)`, turn into construct call.
const int argc = 1;
Local<Value> argv[argc] = { args[0] };
{
  "targets": [
    {
      "target_name": "addon",
      "sources": [
        "addon.cc",
        "myobject.cc"
      ]
    }
  ]
}
Test it with:
// test.js
const createObject = require('./build/Release/addon');
var obj = createObject(10);
console.log(obj.plusOne()); // 11
console.log(obj.plusOne()); // 12
console.log(obj.plusOne()); // 13
var obj2 = createObject(20);
console.log(obj2.plusOne()); // 21
console.log(obj2.plusOne()); // 22
console.log(obj2.plusOne()); // 23
Passing wrapped objects around#
In addition to wrapping and returning C++ objects, it is possible to pass
wrapped objects around by unwrapping them with the Node.js helper
function node::ObjectWrap::Unwrap. The following examples shows a
function add() that can take two MyObject objects as input arguments:
// addon.cc
#include <node.h>
#include <node_object_wrap.h>
#include "myobject.h"
namespace demo {
using v8::FunctionCallbackInfo;
using v8::Isolate;
using v8::Local;
using v8::Number;
using v8::Object;
using v8::String;
using v8::Value;
#ifndef MYOBJECT_H
#define MYOBJECT_H
#include <node.h>
#include <node_object_wrap.h>
namespace demo {
class MyObject : public node::ObjectWrap {
public:
static void Init(v8::Isolate* isolate);
static void NewInstance(const v8::FunctionCallbackInfo<v8::Value>& args);
inline double value() const { return value_; }
private:
explicit MyObject(double value = 0);
~MyObject();
static void New(const v8::FunctionCallbackInfo<v8::Value>& args);
static v8::Persistent<v8::Function> constructor;
double value_;
};
} // namespace demo
#endif
The implementation of myobject.cc is similar to before:
// myobject.cc
#include <node.h>
#include "myobject.h"
namespace demo {
using v8::Context;
using v8::Function;
using v8::FunctionCallbackInfo;
using v8::FunctionTemplate;
using v8::Isolate;
using v8::Local;
using v8::Object;
using v8::Persistent;
using v8::String;
using v8::Value;
Persistent<Function> MyObject::constructor;
MyObject::MyObject(double value) : value_(value) {
}
MyObject::~MyObject() {
}
void MyObject::Init(Isolate* isolate) {
// Prepare constructor template
Local<FunctionTemplate> tpl = FunctionTemplate::New(isolate, New);
tpl->SetClassName(String::NewFromUtf8(isolate, "MyObject"));
tpl->InstanceTemplate()->SetInternalFieldCount(1);
constructor.Reset(isolate, tpl->GetFunction());
}
void MyObject::New(const FunctionCallbackInfo<Value>& args) {
Isolate* isolate = args.GetIsolate();
if (args.IsConstructCall()) {
// Invoked as constructor: `new MyObject(...)`
double value = args[0]->IsUndefined() ? 0 : args[0]->NumberValue();
MyObject* obj = new MyObject(value);
obj->Wrap(args.This());
args.GetReturnValue().Set(args.This());
} else {
// Invoked as plain function `MyObject(...)`, turn into construct call.
const int argc = 1;
AtExit hooks#
An "AtExit" hook is a function that is invoked after the Node.js event loop
has ended but before the JavaScript VM is terminated and Node.js shuts
down. "AtExit" hooks are registered using the node::AtExit API.
void AtExit(callback, args)#
Registers exit hooks that run after the event loop has ended but before
the VM is killed.
AtExit takes two parameters: a pointer to a callback function to run at exit,
and a pointer to untyped context data to be passed to that callback.
Callbacks are run in last-in first-out order.
The following addon.cc implements AtExit:
// addon.cc
#undef NDEBUG
#include <assert.h>
#include <stdlib.h>
#include <node.h>
namespace demo {
using node::AtExit;
using v8::HandleScope;
using v8::Isolate;
using v8::Local;
using v8::Object;
options.detached
options.stdio
child_process.execSync(command[, options])
Event: 'close'
Event: 'disconnect'
Event: 'error'
Event: 'exit'
Event: 'message'
child.connected
child.disconnect()
child.kill([signal])
child.pid
child.stdin
child.stdio
child.stdout
in a synchronous manner that blocks the event loop until the spawned
process either exits or is terminated.
For convenience, the child_process module provides a handful of
synchronous and asynchronous alternatives
to child_process.spawn() and child_process.spawnSync(). Note that each of
these alternatives is implemented on top
of child_process.spawn() or child_process.spawnSync().
For certain use cases, such as automating shell scripts, the synchronous
counterparts may be more convenient. In many cases, however, the
synchronous methods can have significant impact on performance due to
stalling the event loop while spawned processes complete.
Asynchronous Process Creation#
The child_process.spawn(), child_process.fork(), child_process.exec(),
and child_process.execFile() methods all follow the idiomatic
asynchronous programming pattern typical of other Node.js APIs.
Each of the methods returns a ChildProcess instance. These objects
implement the Node.js EventEmitter API, allowing the parent process to
register listener functions that are called when certain events occur during
the life cycle of the child process.
child_process.exec(command[, options][, callback])#
error <Error>
Return: <ChildProcess>
Spawns a shell then executes the command within that shell, buffering
any generated output.
const exec = require('child_process').exec;
exec('cat *.js bad_file | wc -l', (error, stdout, stderr) => {
if (error) {
console.error(`exec error: ${error}`);
return;
}
console.log(`stdout: ${stdout}`);
console.log(`stderr: ${stderr}`);
});
If a callback function is provided, it is called with the arguments (error,
stdout, stderr). On success, error will be null. On error, error will be an
instance of Error. The error.code property will be the exit code of the child
process while error.signal will be set to the signal that terminated the
process. Any exit code other than 0 is considered to be an error.
The stdout and stderr arguments passed to the callback will contain the
stdout and stderr output of the child process. By default, Node.js will
decode the output as UTF-8 and pass strings to the callback.
The encoding option can be used to specify the character encoding used
to decode the stdout and stderr output.
If encoding is 'buffer', Buffer objects will be passed to the callback instead.
The options argument may be passed as the second argument to
customize how the process is spawned. The default options are:
{
encoding: 'utf8',
timeout: 0,
maxBuffer: 200*1024,
killSignal: 'SIGTERM',
cwd: null,
env: null
}
If timeout is greater than 0, the parent will send the signal identified
by the killSignal property (the default is 'SIGTERM') if the child runs longer
than timeout milliseconds.
Note: Unlike the exec(3) POSIX system call, child_process.exec() does not
replace the existing process and uses a shell to execute the command.
child_process.execFile(file[, args][, options][, callback])#
options <Object>
error <Error>
Return: <ChildProcess>
options <Object>
Return: <ChildProcess>
options <Object>
return: <ChildProcess>
Use env to specify environment variables that will be visible to the new
process. The default is process.env.
Example of running ls -lh /usr, capturing stdout, stderr, and the exit code:
const spawn = require('child_process').spawn;
const ls = spawn('ls', ['-lh', '/usr']);
ls.stdout.on('data', (data) => {
console.log(`stdout: ${data}`);
});
ls.stderr.on('data', (data) => {
console.log(`stderr: ${data}`);
});
ls.on('close', (code) => {
console.log(`child process exited with code ${code}`);
});
Example: A very elaborate way to run ps ax | grep ssh
const spawn = require('child_process').spawn;
const ps = spawn('ps', ['ax']);
const grep = spawn('grep', ['ssh']);
ps.stdout.on('data', (data) => {
grep.stdin.write(data);
});
ps.stderr.on('data', (data) => {
console.log(`ps stderr: ${data}`);
});
ps.on('close', (code) => {
if (code !== 0) {
console.log(`ps process exited with code ${code}`);
}
grep.stdin.end();
});
grep.stdout.on('data', (data) => {
console.log(`${data}`);
});
grep.stderr.on('data', (data) => {
console.log(`grep stderr: ${data}`);
});
grep.on('close', (code) => {
if (code !== 0) {
console.log(`grep process exited with code ${code}`);
}
});
Example of checking for failed exec:
const spawn = require('child_process').spawn;
const child = spawn('bad_command');
child.on('error', (err) => {
console.log('Failed to start child process.');
});
options.detached#
const fs = require('fs');
const spawn = require('child_process').spawn;
const out = fs.openSync('./out.log', 'a');
const err = fs.openSync('./out.log', 'a');
options.stdio#
The options.stdio option is used to configure the pipes that are established
between the parent and child process. By default, the child's stdin, stdout,
and stderr are redirected to the corresponding child.stdin, child.stdout,
and child.stderr streams.
'pipe' - Create a pipe between the child process and the parent
process. The parent end of the pipe is exposed to the parent as a
property on the child_process object as child.stdio[fd]. Pipes created
for fds 0 - 2 are also available
as child.stdin,child.stdout and child.stderr, respectively.
null, undefined - Use default value. For stdio fds 0, 1 and 2 (in other
words, stdin, stdout, and stderr) a pipe is created. For fd 3 and up, the
default is 'ignore'.
Example:
const spawn = require('child_process').spawn;
// Child will use parent's stdios
spawn('prg', [], { stdio: 'inherit' });
// Spawn child sharing only stderr
spawn('prg', [], { stdio: ['pipe', 'pipe', process.stderr] });
// Open an extra fd=4, to interact with programs presenting a
// startd-style interface.
spawn('prg', [], { stdio: ['pipe', null, null, null, 'pipe'] });
It is worth noting that when an IPC channel is established between the
parent and child processes, and the child is a Node.js process, the child is
launched with the IPC channel unreferenced (using unref()) until the child
registers an event handler for the process.on('disconnect')event. This
allows the child to exit normally without the process being held open by
the open IPC channel.
See also: child_process.exec() and child_process.fork()
Synchronous Process Creation#
The child_process.spawnSync(), child_process.execSync(),
and child_process.execFileSync() methods
are synchronous and WILL block the Node.js event loop, pausing
execution of any additional code until the spawned process exits.
Blocking calls like these are mostly useful for simplifying general purpose
scripting tasks and for simplifying the loading/processing of application
configuration at startup.
options <Object>
cwd <String> Current working directory of the child process
encoding <String> The encoding used for all stdio inputs and
outputs. (Default: 'buffer')
and handles the SIGTERM signal and does not exit, the parent process will
still wait until the child process has exited.
If the process times out, or has a non-zero exit code, this
method will throw. The Error object will contain the entire result
from child_process.spawnSync().
child_process.execSync(command[, options])#
options <Object>
cwd <String> Current working directory of the child process
encoding <String> The encoding used for all stdio inputs and
outputs. (Default: 'buffer')
options <Object>
cwd <String> Current working directory of the child process
encoding <String> The encoding used for all stdio inputs and
outputs. (Default: 'buffer')
return: <Object>
code <Number> the exit code if the child exited on its own.
signal <String> the signal by which the child process was
terminated.
The 'close' event is emitted when the stdio streams of a child process
have been closed. This is distinct from the 'exit' event, since multiple
processes might share the same stdio streams.
Event: 'disconnect'#
The 'disconnect' event is emitted after calling
the child.disconnect() method in the parent process or process.disconnect() in
the child process. After disconnecting it is no longer possible to send or
receive messages, and the child.connected property is false.
Event: 'error'#
Note that the 'exit' event may or may not fire after an error has occurred.
If you are listening to both the 'exit' and 'error' events, it is important to
guard against accidentally invoking handler functions multiple times.
See also child.kill() and child.send().
Event: 'exit'#
code <Number> the exit code if the child exited on its own.
The 'exit' event is emitted after the child process ends. If the process
exited, code is the final exit code of the process, otherwise null. If the
process terminated due to receipt of a signal, signal is the string name of
the signal, otherwise null. One of the two will always be non-null.
Note that when the 'exit' event is triggered, child process stdio streams
might still be open.
Also, note that Node.js establishes signal handlers
for SIGINT and SIGTERM and Node.js processes will not terminate
immediately due to receipt of those signals. Rather, Node.js will perform a
sequence of cleanup actions and then will re-raise the handled signal.
See waitpid(2).
Event: 'message'#
in both the parent and child (respectively) will be set to false, and it will be
no longer possible to pass messages between the processes.
The 'disconnect' event will be emitted when there are no messages in the
process of being received. This will most often be triggered immediately
after calling child.disconnect().
Note that when the child process is a Node.js instance (e.g. spawned
using child_process.fork()), the process.disconnect() method can be
invoked within the child process to close the IPC channel as well.
child.kill([signal])#
signal <String>
<Number> Integer
message <Object>
sendHandle <Handle>
options <Object>
callback <Function>
Return: <Boolean>
When an IPC channel has been established between the parent and child
(i.e. when using child_process.fork()), the child.send() method can be used
to send messages to the child process. When the child process is a Node.js
instance, these messages can be received via
the process.on('message') event.
For example, in the parent script:
const cp = require('child_process');
const n = cp.fork(`${__dirname}/sub.js`);
n.on('message', (m) => {
console.log('PARENT got message:', m);
});
n.send({ hello: 'world' });
And then the child script, 'sub.js' might look like this:
process.on('message', (m) => {
console.log('CHILD got message:', m);
});
process.send({ foo: 'bar' });
Child Node.js processes will have a process.send() method of their own
that allows the child to send messages back to the parent.
There is a special case when sending a {cmd: 'NODE_foo'} message. All
messages containing a NODE_ prefix in their cmd property are considered to
be reserved for use within Node.js core and will not be emitted in the
child's process.on('message') event. Rather, such messages are emitted
using the process.on('internalMessage') event and are consumed
internally by Node.js. Applications should avoid using such messages or
listening for 'internalMessage' events as it is subject to change without
notice.
The sendHandle argument can be used, for instance, to pass the handle of
a TCP server object to the child process as illustrated in the example
below:
const child = require('child_process').fork('child.js');
// Open up the server object and send the handle.
const server = require('net').createServer();
server.on('connection', (socket) => {
socket.end('handled by parent');
});
server.listen(1337, () => {
child.send('server', server);
});
The child would then receive the server object as:
process.on('message', (m, server) => {
if (m === 'server') {
server.on('connection', (socket) => {
socket.end('handled by child');
});
}
});
Once the server is shared between the parent and child, some
connections can be handled by the parent and some by the child.
While the example above uses a server created using
the net module, dgram module servers use exactly the same workflow
with the exceptions of listening on a 'message' event instead
of 'connection' and using server.bind() instead of server.listen(). This is,
however, currently only supported on UNIX platforms.
Example: sending a socket object#
normal.send('socket', socket);
});
server.listen(1337);
The child.js would receive the socket handle as the second argument
passed to the event callback function:
process.on('message', (m, socket) => {
if (m === 'socket') {
socket.end(`Request handled with ${process.argv[2]} priority`);
}
});
Once a socket has been passed to a child, the parent is no longer capable
of tracking when the socket is destroyed. To indicate this,
the .connections property becomes null. It is recommended not to
use .maxConnections when this occurs.
Note: this function uses JSON.stringify() internally to serialize
the message.
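Because the payload goes through JSON.stringify(), values without a JSON representation do not survive the trip. A sketch of the effect, with no child process needed:

```javascript
// What the receiving end sees after JSON round-tripping the message.
const message = { n: 1, when: new Date(0), fn: () => {} };
const received = JSON.parse(JSON.stringify(message));
console.log(received.n);           // 1
console.log(typeof received.when); // 'string' (the Date became its ISO string)
console.log(received.fn);          // undefined (functions are dropped)
```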
child.stderr#
<Stream>
child.stdin#
<Stream>
If the child was spawned with stdio[0] set to anything other than 'pipe',
then this will be undefined.
child.stdin is an alias for child.stdio[0]. Both properties will refer to the
same value.
child.stdio#
<Array>
child.stdout#
<Stream>
Cluster
How It Works
Class: Worker
Event: 'disconnect'
Event: 'error'
Event: 'exit'
Event: 'listening'
Event: 'message'
Event: 'online'
worker.disconnect()
worker.exitedAfterDisconnect
worker.id
worker.isConnected()
worker.isDead()
worker.kill([signal='SIGTERM'])
worker.process
worker.suicide
Event: 'disconnect'
Event: 'exit'
Event: 'fork'
Event: 'listening'
Event: 'message'
Event: 'online'
Event: 'setup'
cluster.disconnect([callback])
cluster.fork([env])
cluster.isMaster
cluster.isWorker
cluster.schedulingPolicy
cluster.settings
cluster.setupMaster([settings])
cluster.worker
Cluster#
cluster.workers
Stability: 2 - Stable
A single instance of Node.js runs in a single thread. To take advantage of
multi-core systems the user will sometimes want to launch a cluster of
Node.js processes to handle the load.
The cluster module allows you to easily create child processes that all
share server ports.
const cluster = require('cluster');
const http = require('http');
const numCPUs = require('os').cpus().length;
if (cluster.isMaster) {
// Fork workers.
for (var i = 0; i < numCPUs; i++) {
cluster.fork();
}
cluster.on('exit', (worker, code, signal) => {
console.log(`worker ${worker.process.pid} died`);
});
} else {
// Workers can share any TCP connection
// In this case it is an HTTP server
http.createServer((req, res) => {
res.writeHead(200);
res.end('hello world\n');
}).listen(8000);
}
Running Node.js will now share port 8000 between the workers:
$ NODE_DEBUG=cluster node server.js
23521,Master Worker 23524 online
23521,Master Worker 23526 online
23521,Master Worker 23523 online
signal <String> the name of the signal (eg. 'SIGHUP') that caused
the process to be killed.
address <Object>
message <Object>
if (cluster.isMaster) {
// Keep track of http requests
var numReqs = 0;
setInterval(() => {
console.log('numReqs =', numReqs);
}, 1000);
// Count requests
function messageHandler(msg) {
if (msg.cmd && msg.cmd == 'notifyRequest') {
numReqs += 1;
}
}
// Start workers and listen for messages containing notifyRequest
const numCPUs = require('os').cpus().length;
for (var i = 0; i < numCPUs; i++) {
cluster.fork();
}
Object.keys(cluster.workers).forEach((id) => {
cluster.workers[id].on('message', messageHandler);
});
} else {
// Worker processes have a http server.
http.Server((req, res) => {
res.writeHead(200);
res.end('hello world\n');
// notify master about the request
process.send({ cmd: 'notifyRequest' });
}).listen(8000);
}
Event: 'online'#
Similar to the cluster.on('online') event, but specific to this worker.
cluster.fork().on('online', () => {
// Worker is online
});
It is not emitted in the worker.
worker.disconnect()#
In a worker, this function will close all servers, wait for the 'close' event on
those servers, and then disconnect the IPC channel.
In the master, an internal message is sent to the worker causing it to
call .disconnect() on itself.
Causes .exitedAfterDisconnect to be set.
Note that after a server is closed, it will no longer accept new connections,
but connections may be accepted by any other listening worker. Existing
connections will be allowed to close as usual. When no more connections
exist, see server.close(), the IPC channel to the worker will close allowing
it to die gracefully.
The above applies only to server connections, client connections are not
automatically closed by workers, and disconnect does not wait for them to
close before exiting.
Note that in a worker, process.disconnect exists, but it is not this function,
it is disconnect.
Because long living server connections may block workers from
disconnecting, it may be useful to send a message, so application specific
actions may be taken to close them. It also may be useful to implement a
timeout, killing a worker if the 'disconnect' event has not been emitted
after some time.
if (cluster.isMaster) {
var worker = cluster.fork();
var timeout;
worker.on('listening', (address) => {
worker.send('shutdown');
worker.disconnect();
timeout = setTimeout(() => {
worker.kill();
}, 2000);
});
worker.on('disconnect', () => {
clearTimeout(timeout);
});
} else if (cluster.isWorker) {
const net = require('net');
var server = net.createServer((socket) => {
// connections never end
});
server.listen(8000);
process.on('message', (msg) => {
if (msg === 'shutdown') {
// initiate graceful close of any connections to server
}
});
}
worker.exitedAfterDisconnect#
<Boolean>
<Number>
Each new worker is given its own unique id; this id is stored in the id
property. While a worker is alive, this is the key that indexes it in cluster.workers.
worker.isConnected()#
This function returns true if the worker is connected to its master via its
IPC channel, false otherwise. A worker is connected to its master after it's
been created. It is disconnected after the 'disconnect' event is emitted.
worker.isDead()#
This function returns true if the worker's process has terminated (either
because of exiting or being signaled). Otherwise, it returns false.
worker.kill([signal='SIGTERM'])#
This function will kill the worker. In the master, it does this by
disconnecting the worker.process, and once disconnected, killing
with signal. In the worker, it does it by disconnecting the channel, and then
exiting with code 0.
Causes .exitedAfterDisconnect to be set.
This method is aliased as worker.destroy() for backwards compatibility.
Note that in a worker, process.kill() exists, but it is not this function, it
is kill.
worker.process#
<ChildProcess>
All workers are created using child_process.fork(), the returned object from
this function is stored as .process. In a worker, the global process is
stored.
See: Child Process module
Note that workers will call process.exit(0) if the 'disconnect' event occurs
on process and .exitedAfterDisconnect is not true. This protects against
accidental disconnection.
worker.send(message[, sendHandle][, callback])#
message <Object>
sendHandle <Handle>
callback <Function>
Return: <Boolean>
Send a message to a worker or master, optionally with a handle. In the
master this sends a message to a specific worker; it is identical
to ChildProcess.send(). In a worker this sends a message to the master; it
is identical to process.send().
worker.suicide#
Stability: 0 - Deprecated: Use worker.exitedAfterDisconnect instead.
An alias to worker.exitedAfterDisconnect.
Set by calling .kill() or .disconnect(). Until then, it is undefined.
The boolean worker.suicide lets you distinguish between voluntary and
accidental exit; the master may choose not to respawn a worker based on
this value.
cluster.on('exit', (worker, code, signal) => {
if (worker.suicide === true) {
console.log('Oh, it was just voluntary, no need to worry');
}
});
// kill worker
worker.kill();
This API only exists for backwards compatibility and will be removed in the
future.
Event: 'disconnect'#
worker <cluster.Worker>
Emitted after the worker IPC channel has disconnected. This can occur
when a worker exits gracefully, is killed, or is disconnected manually (such
as with worker.disconnect()).
There may be a delay between the 'disconnect' and 'exit' events. These
events can be used to detect if the process is stuck in a cleanup or if there
are long-living connections.
cluster.on('disconnect', (worker) => {
console.log(`The worker #${worker.id} has disconnected`);
});
Event: 'exit'#
worker <cluster.Worker>
code <Number> the exit code, if it exited normally.
signal <String> the name of the signal (e.g. 'SIGHUP') that caused
the process to be killed.
When any of the workers die the cluster module will emit the 'exit' event.
This can be used to restart the worker by calling .fork() again.
cluster.on('exit', (worker, code, signal) => {
console.log('worker %d died (%s). restarting...',
worker.process.pid, signal || code);
cluster.fork();
});
See child_process event: 'exit'.
Event: 'fork'#
worker <cluster.Worker>
When a new worker is forked the cluster module will emit a 'fork' event.
This can be used to log worker activity, and create your own timeout.
var timeouts = [];
function errorMsg() {
console.error('Something must be wrong with the connection ...');
}
cluster.on('fork', (worker) => {
timeouts[worker.id] = setTimeout(errorMsg, 2000);
});
cluster.on('listening', (worker, address) => {
clearTimeout(timeouts[worker.id]);
});
cluster.on('exit', (worker, code, signal) => {
clearTimeout(timeouts[worker.id]);
errorMsg();
});
Event: 'listening'#
worker <cluster.Worker>
address <Object>
After calling listen() from a worker, when the 'listening' event is emitted on
the server, a 'listening' event will also be emitted on cluster in the master.
The event handler is executed with two arguments, the worker contains
the worker object and the address object contains the following
connection properties: address, port and addressType. This is very useful if
the worker is listening on more than one address.
cluster.on('listening', (worker, address) => {
console.log(
`A worker is now connected to ${address.address}:${address.port}`);
});
The addressType is one of:
4 (TCPv4)
6 (TCPv6)
Event: 'message'#
worker <cluster.Worker>
message <Object>
Emitted when the cluster master receives a message from any worker.
Before Node.js v6.0, this event emitted only the message and the handle,
but not the worker object, contrary to what the documentation stated.
If you need to support older versions and don't need the worker object,
you can work around the discrepancy by checking the number of
arguments:
cluster.on('message', function(worker, message, handle) {
if (arguments.length === 2) {
handle = message;
message = worker;
worker = undefined;
}
// ...
});
Event: 'online'#
worker <cluster.Worker>
After forking a new worker, the worker should respond with an online
message. When the master receives an online message it will emit this
event. The difference between 'fork' and 'online' is that fork is emitted
when the master forks a worker, and 'online' is emitted when the worker is
running.
cluster.on('online', (worker) => {
console.log('Yay, the worker responded after it was forked');
});
Event: 'setup'#
settings <Object>
Emitted every time .setupMaster() is called.
cluster.disconnect([callback])#
callback <Function> called when all workers are disconnected and
handles are closed.
Calls .disconnect() on each worker in cluster.workers. When they are
disconnected, all internal handles will be closed, allowing the master
process to die gracefully if no other event is waiting.
cluster.isMaster#
<Boolean>
True if the process is a master. This is determined
by process.env.NODE_UNIQUE_ID. If process.env.NODE_UNIQUE_ID is
undefined, then isMaster is true.
cluster.isWorker#
<Boolean>
True if the process is not a master (it is the negation
of cluster.isMaster).
cluster.schedulingPolicy#
The scheduling policy, either cluster.SCHED_RR for round-robin
or cluster.SCHED_NONE to leave it to the operating system. This is a
global setting and effectively frozen once you spawn the first worker or
call cluster.setupMaster(), whichever comes first.
SCHED_RR is the default on all operating systems except Windows.
Windows will change to SCHED_RR once libuv is able to effectively
distribute IOCP handles without incurring a large performance hit.
cluster.schedulingPolicy can also be set through
the NODE_CLUSTER_SCHED_POLICY environment variable. Valid values
are "rr" and "none".
cluster.settings#
<Object>
After calling .setupMaster() (or .fork()) this settings object will contain the
settings, including the default values.
It is effectively frozen after being set, because .setupMaster() can only be
called once.
This object is not intended to be changed or set manually.
cluster.setupMaster([settings])#
settings <Object>
setupMaster is used to change the default 'fork' behavior. Once called, the
settings will be present in cluster.settings.
Note that:
any settings changes only affect future calls to .fork() and have no
effect on workers that are already running
the defaults above apply to the first call only; the defaults for later
calls are the current values at the time cluster.setupMaster() is called
Example:
const cluster = require('cluster');
cluster.setupMaster({
exec: 'worker.js',
args: ['--use', 'https'],
silent: true
});
cluster.fork(); // https worker
cluster.setupMaster({
exec: 'worker.js',
args: ['--use', 'http']
});
cluster.fork(); // http worker
This can only be called from the master process.
cluster.worker#
<Object>
A reference to the current worker object. Not available in the master
process.
cluster.workers#
<Object>
A hash that stores the active worker objects, keyed by id field. Makes it
easy to loop through all the workers. It is only available in the master
process.
A worker is removed from cluster.workers after the worker has
disconnected and exited. The order between these two events cannot be
determined in advance. However, it is guaranteed that the removal from
the cluster.workers list happens before the last 'disconnect' or 'exit' event is
emitted.
// Go through all workers
function eachWorker(callback) {
for (var id in cluster.workers) {
callback(cluster.workers[id]);
}
}
eachWorker((worker) => {
worker.send('big announcement to all workers');
});
Should you wish to reference a worker over a communication channel,
using the worker's unique id is the easiest way to find the worker.
--zero-fill-buffers#
Automatically zero-fills all newly
allocated Buffer and SlowBuffer instances.
--preserve-symlinks#
Instructs the module loader to preserve symbolic links when resolving and
caching modules.
By default, when Node.js loads a module from a path that is symbolically
linked to a different on-disk location, Node.js will dereference the link and
use the actual on-disk "real path" of the module as both an identifier and
as a root path to locate other dependency modules. In most cases, this
default behavior is acceptable. However, when using symbolically linked
peer dependencies, as illustrated in the example below, the default
behavior causes an exception to be thrown if moduleA attempts to
require moduleB as a peer dependency:
{appDir}
app
index.js
node_modules
moduleB
index.js
package.json
moduleA
index.js
package.json
The --preserve-symlinks command line flag instructs Node.js to use the
symlink path for modules as opposed to the real path, allowing
symbolically linked peer dependencies to be found.
Note, however, that using --preserve-symlinks can have other side effects.
Specifically, symbolically linked native modules can fail to load if those are
linked from more than one location in the dependency tree (Node.js would
see those as two separate modules and would attempt to load the module
multiple times, causing an exception to be thrown).
--track-heap-objects#
Track heap object allocations for heap snapshots.
--prof-process#
Process v8 profiler output generated using the v8 option --prof.
--v8-options#
Print v8 command line options.
Note: v8 options allow words to be separated by both dashes (-) or
underscores (_).
For example, --stack-trace-limit is equivalent to --stack_trace_limit.
--tls-cipher-list=list#
Specify an alternative default TLS cipher list. (Requires Node.js to be built
with crypto support, which is the default.)
--enable-fips#
Enable FIPS-compliant crypto at startup. (Requires Node.js to be built
with ./configure --openssl-fips)
--force-fips#
Force FIPS-compliant crypto on startup. (Cannot be disabled from script
code.) (Same requirements as --enable-fips)
--icu-data-dir=file#
Specify ICU data load path. (overrides NODE_ICU_DATA)
Environment Variables#
NODE_DEBUG=module[,]#
','-separated list of core modules that should print debug information.
NODE_PATH=path[:]#
':'-separated list of directories prefixed to the module search path.
Note: on Windows, this is a ';'-separated list instead.
NODE_DISABLE_COLORS=1#
When set to 1, colors will not be used in the REPL.
NODE_ICU_DATA=file#
Data path for ICU (Intl object) data. Will extend linked-in data when
compiled with small-icu support.
NODE_REPL_HISTORY=file#
Path to the file used to store the persistent REPL history. The default path
is ~/.node_repl_history, which is overridden by this variable. Setting the
value to an empty string ("" or " ") disables persistent REPL history.
Console
Class: Console
console.dir(obj[, options])
console.error([data][, ...])
console.info([data][, ...])
console.log([data][, ...])
console.time(label)
console.timeEnd(label)
Console#
console.trace(message[, ...])
console.warn([data][, ...])
Stability: 2 - Stable
The console module provides a simple debugging console that is similar to
the JavaScript console mechanism provided by web browsers.
The module exports two specific components:
A Console class with methods such as console.log(), console.error(),
and console.warn() that can be used to write to any Node.js stream.
A global console instance configured to write to stdout and stderr.
console.dir(obj[, options])#
Uses util.inspect() on obj and prints the resulting string to stdout. An
options object may be passed that alters certain aspects of the formatted
string:
colors - if true, then the output will be styled with ANSI color codes.
Defaults to false. Colors are customizable;
see customizing util.inspect() colors.
console.error([data][, ...])#
Prints to stderr with newline. Multiple arguments can be passed, with the
first used as the primary message and all additional used as substitution
values similar to printf(3) (the arguments are all passed to util.format()).
const code = 5;
console.error('error #%d', code);
console.time(label)#
Starts a timer that can be used to compute the duration of an operation.
Timers are identified by a unique label. Use the same label when
calling console.timeEnd() to stop the timer and output the elapsed time in
milliseconds to stdout.
console.timeEnd(label)#
Stops a timer that was previously started by calling console.time() and
prints the result to stdout:
console.time('100-elements');
for (var i = 0; i < 100; i++) {
;
}
console.timeEnd('100-elements');
// prints 100-elements: 225.438ms
Note: As of Node.js v6.0.0, console.timeEnd() deletes the timer to avoid
leaking it. On older versions, the timer persisted. This
allowed console.timeEnd() to be called multiple times for the same label.
This functionality was unintended and is no longer supported.
console.trace(message[, ...])#
Prints to stderr the string 'Trace: ', followed by the util.format() formatted
message and stack trace to the current position in the code.
console.trace('Show me');
// Prints: (stack trace will vary based on where trace is called)
// Trace: Show me
// at repl:2:9
// at REPLServer.defaultEval (repl.js:248:27)
// at bound (domain.js:287:14)
// at REPLServer.runBound [as eval] (domain.js:300:12)
// at REPLServer.<anonymous> (repl.js:412:12)
// at emitOne (events.js:82:20)
// at REPLServer.emit (events.js:169:7)
// at REPLServer.Interface._onLine (readline.js:210:10)
// at REPLServer.Interface._line (readline.js:549:8)
// at REPLServer.Interface._ttyWrite (readline.js:826:14)
console.warn([data][, ...])#
The console.warn() function is an alias for console.error().
Crypto
Class: Certificate
new crypto.Certificate()
certificate.exportChallenge(spkac)
certificate.exportPublicKey(spkac)
certificate.verifySpkac(spkac)
Class: Cipher
cipher.final([output_encoding])
cipher.setAAD(buffer)
cipher.getAuthTag()
cipher.setAutoPadding(auto_padding=true)
cipher.update(data[, input_encoding][,
output_encoding])
Class: Decipher
decipher.final([output_encoding])
decipher.setAAD(buffer)
decipher.setAuthTag(buffer)
decipher.setAutoPadding(auto_padding=true)
decipher.update(data[, input_encoding][,
output_encoding])
Class: DiffieHellman
diffieHellman.computeSecret(other_public_key[,
input_encoding][, output_encoding])
diffieHellman.generateKeys([encoding])
diffieHellman.getGenerator([encoding])
diffieHellman.getPrime([encoding])
diffieHellman.getPrivateKey([encoding])
diffieHellman.getPublicKey([encoding])
diffieHellman.setPrivateKey(private_key[, encoding])
diffieHellman.setPublicKey(public_key[, encoding])
diffieHellman.verifyError
Class: ECDH
ecdh.computeSecret(other_public_key[, input_encoding]
[, output_encoding])
ecdh.generateKeys([encoding[, format]])
ecdh.getPrivateKey([encoding])
ecdh.getPublicKey([encoding[, format]])
ecdh.setPrivateKey(private_key[, encoding])
ecdh.setPublicKey(public_key[, encoding])
Class: Hash
hash.digest([encoding])
hash.update(data[, input_encoding])
Class: Hmac
hmac.digest([encoding])
hmac.update(data[, input_encoding])
Class: Sign
sign.sign(private_key[, output_format])
sign.update(data[, input_encoding])
Class: Verify
verifier.update(data[, input_encoding])
crypto.DEFAULT_ENCODING
crypto.fips
crypto.createCipher(algorithm, password)
crypto.createCredentials(details)
crypto.createDecipher(algorithm, password)
crypto.createDiffieHellman(prime[, prime_encoding][,
generator][, generator_encoding])
crypto.createDiffieHellman(prime_length[, generator])
crypto.createECDH(curve_name)
crypto.createHash(algorithm)
crypto.createHmac(algorithm, key)
crypto.createSign(algorithm)
crypto.createVerify(algorithm)
crypto.getCiphers()
crypto.getCurves()
crypto.getDiffieHellman(group_name)
crypto.getHashes()
crypto.privateDecrypt(private_key, buffer)
crypto.privateEncrypt(private_key, buffer)
crypto.publicDecrypt(public_key, buffer)
crypto.publicEncrypt(public_key, buffer)
crypto.randomBytes(size[, callback])
crypto.setEngine(engine[, flags])
Notes
Crypto#
Stability: 2 - Stable
The crypto module provides cryptographic functionality that includes a set
of wrappers for OpenSSL's hash, HMAC, cipher, decipher, sign and verify
functions.
Use require('crypto') to access this module.
const crypto = require('crypto');
const secret = 'abcdefg';
const hash = crypto.createHmac('sha256', secret)
.update('I love cupcakes')
.digest('hex');
console.log(hash);
// Prints:
//   c0fa1bc00531bd78ef38c628449c5102aeabd49b5dc3a2a516ea6ea959d6658e
Determining if crypto support is unavailable#
It is possible for Node.js to be built without including support for
the crypto module. In such cases, calling require('crypto') will result in an
error being thrown.
var crypto;
try {
crypto = require('crypto');
} catch (err) {
console.log('crypto support is disabled!');
}
Class: Certificate#
SPKAC is a Certificate Signing Request mechanism originally implemented
by Netscape and now specified formally as part
of HTML5's keygen element.
The crypto module provides the Certificate class for working with SPKAC
data. The most common usage is handling output generated by the
HTML5 <keygen> element. Node.js uses OpenSSL's SPKAC
implementation internally.
new crypto.Certificate()#
Instances of the Certificate class can be created using the new keyword or
by calling crypto.Certificate() as a function:
const crypto = require('crypto');
const cert1 = new crypto.Certificate();
const cert2 = crypto.Certificate();
certificate.exportChallenge(spkac)#
The spkac data structure includes a public key and a challenge.
The certificate.exportChallenge() method returns the challenge component in the
form of a Node.js Buffer. The spkac argument can be either a string or
a Buffer.
const cert = require('crypto').Certificate();
const spkac = getSpkacSomehow();
const challenge = cert.exportChallenge(spkac);
console.log(challenge.toString('utf8'));
// Prints the challenge as a UTF8 string
certificate.exportPublicKey(spkac)#
The spkac data structure includes a public key and a challenge.
The certificate.exportPublicKey() method returns the public key component
in the form of a Node.js Buffer. The spkac argument can be either a string
or a Buffer.
Class: Cipher#
Instances of the Cipher class are used to encrypt data. Cipher objects are
streams that are both readable and writable.
Example: Using Cipher objects as streams:
const crypto = require('crypto');
const cipher = crypto.createCipher('aes192', 'a password');
var encrypted = '';
cipher.on('readable', () => {
var data = cipher.read();
if (data)
encrypted += data.toString('hex');
});
cipher.on('end', () => {
console.log(encrypted);
// Prints:
//   ca981be48e90867604588e75d04feabb63cc007a8f8ad89b10616ed84d815504
});
cipher.write('some clear text data');
cipher.end();
Example: Using Cipher and piped streams:
const crypto = require('crypto');
const fs = require('fs');
const cipher = crypto.createCipher('aes192', 'a password');
const input = fs.createReadStream('test.js');
const output = fs.createWriteStream('test.enc');
input.pipe(cipher).pipe(output);
Example: Using the cipher.update() and cipher.final() methods:
const crypto = require('crypto');
const cipher = crypto.createCipher('aes192', 'a password');
var encrypted = cipher.update('some clear text data', 'utf8', 'hex');
encrypted += cipher.final('hex');
console.log(encrypted);
// Prints:
//   ca981be48e90867604588e75d04feabb63cc007a8f8ad89b10616ed84d815504
cipher.final([output_encoding])#
Returns any remaining enciphered contents. If output_encoding parameter
is one of 'binary', 'base64' or 'hex', a string is returned. If
an output_encoding is not provided, a Buffer is returned.
Once the cipher.final() method has been called, the Cipher object can no
longer be used to encrypt data. Attempts to call cipher.final() more than
once will result in an error being thrown.
cipher.setAAD(buffer)#
When using an authenticated encryption mode (only GCM is currently
supported), the cipher.setAAD() method sets the value used for
the additional authenticated data (AAD) input parameter.
cipher.getAuthTag()#
When using an authenticated encryption mode (only GCM is currently
supported), the cipher.getAuthTag() method returns a Buffer containing
the authentication tag that has been computed from the given data.
The cipher.getAuthTag() method should only be called after encryption has
been completed using the cipher.final() method.
cipher.setAutoPadding(auto_padding=true)#
When using block encryption algorithms, the Cipher class will
automatically add padding to the input data to the appropriate block size.
To disable the default padding call cipher.setAutoPadding(false).
When auto_padding is false, the length of the entire input data must be a
multiple of the cipher's block size or cipher.final() will throw an Error.
Disabling automatic padding is useful for non-standard padding, for
instance using 0x0 instead of PKCS padding.
The cipher.setAutoPadding() method must be called before cipher.final().
cipher.update(data[, input_encoding][, output_encoding])#
Updates the cipher with data. If the input_encoding argument is given, its
value must be one of 'utf8', 'ascii', or 'binary' and the data argument is a
string using the specified encoding. If the input_encoding argument is not
given, data must be a Buffer. If data is a Buffer then input_encoding is
ignored.
The output_encoding specifies the output format of the enciphered data,
and can be 'binary', 'base64' or 'hex'. If the output_encoding is specified,
a string using the specified encoding is returned. If no output_encoding is
provided, a Buffer is returned.
The cipher.update() method can be called multiple times with new data
until cipher.final() is called. Calling cipher.update() after cipher.final() will
result in an error being thrown.
Class: Decipher#
Instances of the Decipher class are used to decrypt data. Decipher objects
are streams that are both readable and writable.
Example: Using Decipher objects as streams:
const crypto = require('crypto');
const decipher = crypto.createDecipher('aes192', 'a password');
var decrypted = '';
decipher.on('readable', () => {
var data = decipher.read();
if (data)
decrypted += data.toString('utf8');
});
decipher.on('end', () => {
console.log(decrypted);
// Prints: some clear text data
});
var encrypted =
'ca981be48e90867604588e75d04feabb63cc007a8f8ad89b10616ed84d815504';
decipher.write(encrypted, 'hex');
decipher.end();
Example: Using Decipher and piped streams:
const crypto = require('crypto');
const fs = require('fs');
const decipher = crypto.createDecipher('aes192', 'a password');
const input = fs.createReadStream('test.enc');
const output = fs.createWriteStream('test.js');
input.pipe(decipher).pipe(output);
Example: Using the decipher.update() and decipher.final() methods:
const crypto = require('crypto');
const decipher = crypto.createDecipher('aes192', 'a password');
var encrypted =
'ca981be48e90867604588e75d04feabb63cc007a8f8ad89b10616ed84d815504';
var decrypted = decipher.update(encrypted, 'hex', 'utf8');
decrypted += decipher.final('utf8');
console.log(decrypted);
// Prints: some clear text data
decipher.final([output_encoding])#
Returns any remaining deciphered contents. If output_encoding parameter
is one of 'binary', 'base64' or 'hex', a string is returned. If
an output_encoding is not provided, a Buffer is returned.
Once the decipher.final() method has been called, the Decipher object can
no longer be used to decrypt data. Attempts to call decipher.final() more
than once will result in an error being thrown.
decipher.setAAD(buffer)#
When using an authenticated encryption mode (only GCM is currently
supported), the decipher.setAAD() method sets the value used for
the additional authenticated data (AAD) input parameter.
decipher.setAuthTag(buffer)#
When using an authenticated encryption mode (only GCM is currently
supported), the decipher.setAuthTag() method is used to pass in the
received authentication tag. If no tag is provided, or if the cipher text has
been tampered with, decipher.final() will throw, indicating that the cipher
text should be discarded due to failed authentication.
decipher.setAutoPadding(auto_padding=true)#
When data has been encrypted without standard block padding,
calling decipher.setAutoPadding(false) will disable automatic padding to
preventdecipher.final() from checking for and removing padding.
Turning auto padding off will only work if the input data's length is a
multiple of the cipher's block size.
The decipher.setAutoPadding() method must be called
before decipher.update().
decipher.update(data[, input_encoding][, output_encoding])#
Updates the decipher with data. If the input_encoding argument is given,
its value must be one of 'binary', 'base64', or 'hex' and the data argument
is a string using the specified encoding. If the input_encoding argument is
not given, data must be a Buffer. If data is a Buffer then input_encoding is
ignored.
ignored.
The output_encoding specifies the output format of the deciphered data,
and can be 'binary', 'ascii' or 'utf8'. If the output_encoding is specified, a
string using the specified encoding is returned. If no output_encoding is
provided, a Buffer is returned.
diffieHellman.setPrivateKey(private_key[, encoding])#
Sets the Diffie-Hellman private key. If the encoding argument is provided
and is either 'binary', 'hex', or 'base64', private_key is expected to be a
string. If no encoding is provided, private_key is expected to be a Buffer.
diffieHellman.setPublicKey(public_key[, encoding])#
Sets the Diffie-Hellman public key. If the encoding argument is provided
and is either 'binary', 'hex' or 'base64', public_key is expected to be a
string. If no encoding is provided, public_key is expected to be a Buffer.
diffieHellman.verifyError#
A bit field containing any warnings and/or errors resulting from a check
performed during initialization of the DiffieHellman object.
The following values are valid for this property (as defined
in constants module):
DH_CHECK_P_NOT_SAFE_PRIME
DH_CHECK_P_NOT_PRIME
DH_UNABLE_TO_CHECK_GENERATOR
DH_NOT_SUITABLE_GENERATOR
Class: ECDH#
The ECDH class is a utility for creating Elliptic Curve Diffie-Hellman (ECDH)
key exchanges.
Instances of the ECDH class can be created using
the crypto.createECDH() function.
const crypto = require('crypto');
const assert = require('assert');
// Generate Alice's keys...
const alice = crypto.createECDH('secp521r1');
const alice_key = alice.generateKeys();
// Generate Bob's keys...
const bob = crypto.createECDH('secp521r1');
const bob_key = bob.generateKeys();
// Exchange and generate the secret...
const alice_secret = alice.computeSecret(bob_key);
const bob_secret = bob.computeSecret(alice_key);
assert.strictEqual(alice_secret.toString('hex'), bob_secret.toString('hex'));
ecdh.getPrivateKey([encoding])#
Returns the EC Diffie-Hellman private key in the specified encoding, which
can be 'binary', 'hex', or 'base64'. If encoding is provided a string is
returned; otherwise a Buffer is returned.
ecdh.getPublicKey([encoding[, format]])#
Returns the EC Diffie-Hellman public key in the
specified encoding and format.
The format argument specifies point encoding and can
be 'compressed', 'uncompressed', or 'hybrid'. If format is not specified the
point will be returned in 'uncompressed' format.
The encoding argument can be 'binary', 'hex', or 'base64'. If encoding is
specified, a string is returned; otherwise a Buffer is returned.
ecdh.setPrivateKey(private_key[, encoding])#
Sets the EC Diffie-Hellman private key. The encoding can
be 'binary', 'hex' or 'base64'. If encoding is provided, private_key is
expected to be a string; otherwise private_key is expected to be a Buffer.
If private_key is not valid for the curve specified when the ECDH object
was created, an error is thrown. Upon setting the private key, the
associated public point (key) is also generated and set in the ECDH object.
ecdh.setPublicKey(public_key[, encoding])#
Stability: 0 - Deprecated
Sets the EC Diffie-Hellman public key. Key encoding can
be 'binary', 'hex' or 'base64'. If encoding is provided public_key is
expected to be a string; otherwise a Buffer is expected.
Note that there is not normally a reason to call this method
because ECDH only requires a private key and the other party's public key
to compute the shared secret. Typically
either ecdh.generateKeys() or ecdh.setPrivateKey() will be called.
The ecdh.setPrivateKey() method attempts to generate the public
point/key associated with the private key being set.
hash.digest([encoding])#
Calculates the digest of all of the data passed to be hashed (using
the hash.update() method). The encoding can
be 'hex', 'binary' or 'base64'. If encoding is provided a string will be
returned; otherwise a Buffer is returned.
The Hash object cannot be used again after the hash.digest() method has
been called. Multiple calls will cause an error to be thrown.
hash.update(data[, input_encoding])#
Updates the hash content with the given data, the encoding of which is
given in input_encoding and can be 'utf8', 'ascii' or 'binary'. If encoding is
not provided, and the data is a string, an encoding of 'utf8' is enforced.
If data is a Buffer then input_encoding is ignored.
This can be called many times with new data as it is streamed.
Class: Hmac#
The Hmac Class is a utility for creating cryptographic HMAC digests. It can
be used in one of two ways:
As a stream that is both readable and writable, where data is written to
produce a computed HMAC digest on the readable side, or
Using the hmac.update() and hmac.digest() methods to produce the
computed HMAC digest.
Example: Using Hmac objects as streams:
const crypto = require('crypto');
const hmac = crypto.createHmac('sha256', 'a secret');
hmac.on('readable', () => {
var data = hmac.read();
if (data) {
console.log(data.toString('hex'));
// Prints:
//   7fd04df92f636fd450bc841c9418e5825c17f33ad9c87c518115a45971f7f77e
}
});
hmac.write('some data to hash');
hmac.end();
Example: Using Hmac and piped streams:
const crypto = require('crypto');
const fs = require('fs');
const hmac = crypto.createHmac('sha256', 'a secret');
const input = fs.createReadStream('test.js');
input.pipe(hmac).pipe(process.stdout);
Example: Using the hmac.update() and hmac.digest() methods:
const crypto = require('crypto');
const hmac = crypto.createHmac('sha256', 'a secret');
hmac.update('some data to hash');
console.log(hmac.digest('hex'));
// Prints:
//   7fd04df92f636fd450bc841c9418e5825c17f33ad9c87c518115a45971f7f77e
hmac.digest([encoding])#
Calculates the HMAC digest of all of the data passed using hmac.update().
The encoding can be 'hex', 'binary' or 'base64'. If encoding is provided a
string is returned; otherwise a Buffer is returned.
The Hmac object cannot be used again after hmac.digest() has been
called. Multiple calls to hmac.digest() will result in an error being thrown.
hmac.update(data[, input_encoding])#
Updates the Hmac content with the given data, the encoding of which is
given in input_encoding and can be 'utf8', 'ascii' or 'binary'. If encoding is
not provided, and the data is a string, an encoding of 'utf8' is enforced.
If data is a Buffer then input_encoding is ignored.
This can be called many times with new data as it is streamed.
Class: Sign#
The Sign Class is a utility for generating signatures. It can be used in one
of two ways:
As a writable stream, where data to be signed is written, or
Using the sign.update() and sign.sign() methods to produce the signature.
crypto.DEFAULT_ENCODING#
The default encoding to use for functions that can take either strings
or buffers. The default value is 'buffer', which makes methods default
to Bufferobjects.
The crypto.DEFAULT_ENCODING mechanism is provided for backwards
compatibility with legacy programs that expect 'binary' to be the default
encoding.
New applications should expect the default to be 'buffer'. This property
may become deprecated in a future Node.js release.
crypto.fips#
Property for checking and controlling whether a FIPS compliant crypto
provider is currently in use. Setting to true requires a FIPS build of Node.js.
crypto.createCipher(algorithm, password)#
Creates and returns a Cipher object that uses the
given algorithm and password.
The algorithm is dependent on OpenSSL, examples are 'aes192', etc. On
recent OpenSSL releases, openssl list-cipher-algorithms will display the
available cipher algorithms.
The password is used to derive the cipher key and initialization vector (IV).
The value must be either a 'binary' encoded string or a Buffer.
The implementation of crypto.createCipher() derives keys using the
OpenSSL function EVP_BytesToKey with the digest algorithm set to MD5,
one iteration, and no salt. The lack of salt allows dictionary attacks as the
same password always creates the same key. The low iteration count and
non-cryptographically secure hash algorithm allow passwords to be tested
very rapidly.
In line with OpenSSL's recommendation to use pbkdf2 instead
of EVP_BytesToKey it is recommended that developers derive a key and IV
on their own using crypto.pbkdf2() and to use crypto.createCipheriv() to
create the Cipher object.
crypto.createCredentials(details)#
Stability: 0 - Deprecated: Use tls.createSecureContext() instead.
Creates and returns a credentials object with the given details. If no 'ca'
details are given, Node.js will use Mozilla's default publicly
trusted list of CAs.
crypto.createDecipher(algorithm, password)#
Creates and returns a Decipher object that uses the
given algorithm and password (key).
The implementation of crypto.createDecipher() derives keys using the
OpenSSL function EVP_BytesToKey with the digest algorithm set to MD5,
one iteration, and no salt. The lack of salt allows dictionary attacks as the
same password always creates the same key. The low iteration count and
non-cryptographically secure hash algorithm allow passwords to be tested
very rapidly.
In line with OpenSSL's recommendation to use pbkdf2 instead
of EVP_BytesToKey it is recommended that developers derive a key and IV
on their own using crypto.pbkdf2() and to use crypto.createDecipheriv() to
create the Decipher object.
crypto.createDecipheriv(algorithm, key, iv)#
Creates and returns a Decipher object that uses the
given algorithm, key and initialization vector (iv).
The algorithm is dependent on OpenSSL, examples are 'aes192', etc. On
recent OpenSSL releases, openssl list-cipher-algorithms will display the
available cipher algorithms.
The key is the raw key used by the algorithm and iv is an initialization
vector. Both arguments must be 'binary' encoded strings or buffers.
crypto.createDiffieHellman(prime[, prime_encoding][, generator][,
generator_encoding])#
Creates a DiffieHellman key exchange object using the supplied prime and
an optional specific generator.
The generator argument can be a number, string, or Buffer. If generator is
not specified, the value 2 is used.
The prime_encoding and generator_encoding arguments can
be 'binary', 'hex', or 'base64'.
crypto.createHash(algorithm)#
Creates and returns a Hash object that can be used to generate hash
digests using the given algorithm.
Example: generating the sha256 sum of a file
const filename = process.argv[2];
const crypto = require('crypto');
const fs = require('fs');
const hash = crypto.createHash('sha256');
const input = fs.createReadStream(filename);
input.on('readable', () => {
var data = input.read();
if (data)
hash.update(data);
else {
console.log(`${hash.digest('hex')} ${filename}`);
}
});
crypto.createHmac(algorithm, key)#
Creates and returns an Hmac object that uses the
given algorithm and key.
The algorithm is dependent on the available algorithms supported by the
version of OpenSSL on the platform. Examples are 'sha256', 'sha512', etc.
On recent releases of OpenSSL, openssl list-message-digest-algorithms will display the available digest algorithms.
The key is the HMAC key used to generate the cryptographic HMAC hash.
Example: generating the sha256 HMAC of a file
const filename = process.argv[2];
const crypto = require('crypto');
const fs = require('fs');
const hmac = crypto.createHmac('sha256', 'a secret');
const input = fs.createReadStream(filename);
input.on('readable', () => {
var data = input.read();
if (data)
hmac.update(data);
else {
console.log(`${hmac.digest('hex')} ${filename}`);
}
});
crypto.createSign(algorithm)#
Creates and returns a Sign object that uses the given algorithm. On recent
OpenSSL releases, openssl list-public-key-algorithms will display the
available signing algorithms. One example is 'RSA-SHA256'.
crypto.createVerify(algorithm)#
Creates and returns a Verify object that uses the given algorithm. On
recent OpenSSL releases, openssl list-public-key-algorithms will display
the available signing algorithms. One example is 'RSA-SHA256'.
crypto.getCiphers()#
Returns an array with the names of the supported cipher algorithms.
Example:
const ciphers = crypto.getCiphers();
console.log(ciphers); // ['aes-128-cbc', 'aes-128-ccm', ...]
crypto.getCurves()#
Returns an array with the names of the supported elliptic curves.
Example:
const curves = crypto.getCurves();
console.log(curves); // ['secp256k1', 'secp384r1', ...]
crypto.getDiffieHellman(group_name)#
Creates a predefined DiffieHellman key exchange object. The supported
groups are: 'modp1', 'modp2', 'modp5' (defined in RFC 2412, but
see Caveats)
and 'modp14', 'modp15', 'modp16', 'modp17', 'modp18' (defined in RFC
3526). The returned object mimics the interface of objects created
by crypto.createDiffieHellman(), but will not allow changing the keys
(with diffieHellman.setPublicKey() for example). The advantage of using
this method is that the parties do not have to generate nor exchange a
group modulus beforehand, saving both processor and communication
time.
Example (obtaining a shared secret):
const crypto = require('crypto');
const alice = crypto.getDiffieHellman('modp14');
const bob = crypto.getDiffieHellman('modp14');
alice.generateKeys();
bob.generateKeys();
const alice_secret = alice.computeSecret(bob.getPublicKey(), null, 'hex');
const bob_secret = bob.computeSecret(alice.getPublicKey(), null, 'hex');
/* alice_secret and bob_secret should be the same */
console.log(alice_secret === bob_secret);
crypto.getHashes()#
Returns an array with the names of the supported hash algorithms.
Example:
const hashes = crypto.getHashes();
console.log(hashes); // ['sha', 'sha1', 'sha1WithRSAEncryption', ...]
crypto.pbkdf2(password, salt, iterations, keylen, digest,
callback)#
Provides an asynchronous Password-Based Key Derivation Function 2
(PBKDF2) implementation. A selected HMAC digest algorithm specified
by digest is applied to derive a key of the requested byte length (keylen)
from the password, salt and iterations.
The supplied callback function is called with two
arguments: err and derivedKey. If an error occurs, err will be set;
otherwise err will be null. The successfully generated derivedKey will be
passed as a Buffer.
The iterations argument must be a number set as high as possible. The
higher the number of iterations, the more secure the derived key will be,
but will take a longer amount of time to complete.
The salt should also be as unique as possible. It is recommended that the
salts are random and their lengths are greater than 16 bytes. See NIST SP
800-132 for details.
Example:
constants.RSA_NO_PADDING
constants.RSA_PKCS1_PADDING
constants.RSA_PKCS1_OAEP_PADDING
Because RSA public keys can be derived from private keys, a private key
may be passed instead of a public key.
All paddings are defined in the constants module.
crypto.publicEncrypt(public_key, buffer)#
Encrypts buffer with public_key.
public_key can be an object or a string. If public_key is a string, it is
treated as the key with no passphrase and will
use RSA_PKCS1_OAEP_PADDING. If public_key is an object, it is interpreted
as a hash object with the keys:
constants.RSA_NO_PADDING
constants.RSA_PKCS1_PADDING
constants.RSA_PKCS1_OAEP_PADDING
Because RSA public keys can be derived from private keys, a private key
may be passed instead of a public key.
ENGINE_METHOD_RSA
ENGINE_METHOD_DSA
ENGINE_METHOD_DH
ENGINE_METHOD_RAND
ENGINE_METHOD_ECDH
ENGINE_METHOD_ECDSA
ENGINE_METHOD_CIPHERS
ENGINE_METHOD_DIGESTS
ENGINE_METHOD_STORE
ENGINE_METHOD_PKEY_METHS
ENGINE_METHOD_PKEY_ASN1_METHS
ENGINE_METHOD_ALL
ENGINE_METHOD_NONE
Notes#
Legacy Streams API (pre Node.js v0.10)#
The Crypto module was added to Node.js before there was the concept of
a unified Stream API, and before there were Buffer objects for handling
binary data. As such, many of the classes defined by the crypto module
have methods not typically found on other Node.js classes that implement
the streams API (e.g. update(), final(), or digest()). Also, many methods
accepted and returned 'binary' encoded strings by default rather than
Buffers. This default was changed after Node.js v0.8 to use Buffer objects
by default instead.
Debugger
Watchers
Commands reference
Stepping
Breakpoints
Info
Execution control
Various
Advanced Usage
Debugger#
Stability: 2 - Stable
Node.js includes a full-featured out-of-process debugging utility accessible
via a simple TCP-based protocol and built-in debugging client. To use it,
start Node.js with the debug argument followed by the path to the script
to debug; a prompt will be displayed indicating successful launch of the
debugger:
$ node debug myscript.js
< debugger listening on port 5858
connecting... ok
break in /home/indutny/Code/git/indutny/myscript.js:1
1 x = 5;
2 setTimeout(() => {
3 debugger;
debug>
Node.js's debugger client does not yet support the full range of
commands, but simple step and inspection are possible.
Inserting the statement debugger; into the source code of a script will
enable a breakpoint at that position in the code.
For example, suppose myscript.js is written as:
// myscript.js
x = 5;
setTimeout(() => {
debugger;
console.log('world');
}, 1000);
console.log('hello');
Once the debugger is run, a breakpoint will occur at line 4:
$ node debug myscript.js
< debugger listening on port 5858
connecting... ok
break in /home/indutny/Code/git/indutny/myscript.js:1
1 x = 5;
2 setTimeout(() => {
3 debugger;
debug> cont
< hello
break in /home/indutny/Code/git/indutny/myscript.js:3
1 x = 5;
2 setTimeout(() => {
3 debugger;
4 console.log('world');
5 }, 1000);
debug> next
break in /home/indutny/Code/git/indutny/myscript.js:4
2 setTimeout(() => {
3 debugger;
4 console.log('world');
5 }, 1000);
6 console.log('hello');
debug> repl
Press Ctrl + C to leave debug repl
> x
5
> 2+2
4
debug> next
< world
break in /home/indutny/Code/git/indutny/myscript.js:5
3 debugger;
4 console.log('world');
5 }, 1000);
6 console.log('hello');
7
debug> quit
The repl command allows code to be evaluated remotely.
The next command steps over to the next line. Type help to see what
other commands are available.
Pressing enter without typing a command will repeat the previous
debugger command.
Watchers#
It is possible to watch expressions and variable values while debugging. On
every breakpoint, each expression from the watchers list will be evaluated
in the current context and displayed immediately before the breakpoint's
source code listing.
To begin watching an expression, type watch('my_expression'). The
command watchers will print the active watchers. To remove a watcher,
type unwatch('my_expression').
Commands reference#
Stepping#
step, s - Step in
Advanced Usage#
An alternative way of enabling and accessing the debugger is to start
Node.js with the --debug command-line flag or by signaling an existing
Node.js process with SIGUSR1.
Once a process has been set in debug mode this way, it can be connected
to using the Node.js debugger by either connecting to the pid of the
running process or via URI reference to the listening debugger:
node debug <URI> - Connects to the process via the URI such as
localhost:5858
DNS#
Stability: 2 - Stable
The dns module contains functions belonging to two different categories:
1) Functions that use the underlying operating system facilities to perform
name resolution, and that do not necessarily perform any network
communication. This category contains only one function: dns.lookup().
2) Functions that connect to an actual DNS server to perform name
resolution, and that always use the network to perform DNS queries. This
category contains all functions in the dns module except dns.lookup().
There are subtle consequences in choosing one over the other, please
consult the Implementation considerations section for more information.
dns.getServers()#
Returns an array of IP address strings that are being used for name
resolution.
dns.lookup(hostname[, options], callback)#
Resolves a hostname (e.g. 'nodejs.org') into the first found A (IPv4) or
AAAA (IPv6) record. options can be an object or integer. If options is not
provided, then IPv4 and IPv6 addresses are both valid. If options is an
integer, then it must be 4 or 6.
Alternatively, options can be an object containing these properties:
On error, err is an Error object, where err.code is one of the error codes
listed here.
dns.resolve4(hostname, callback)#
Uses the DNS protocol to resolve IPv4 addresses (A records) for
the hostname. The addresses argument passed to the callback function
will contain an array of IPv4 addresses (e.g. ['74.125.79.104',
'74.125.79.105', '74.125.79.106']).
dns.resolve6(hostname, callback)#
Uses the DNS protocol to resolve IPv6 addresses (AAAA records) for
the hostname. The addresses argument passed to the callback function
will contain an array of IPv6 addresses.
dns.resolveCname(hostname, callback)#
Uses the DNS protocol to resolve CNAME records for the hostname.
The addresses argument passed to the callback function will contain an
array of canonical name records available for
the hostname (e.g. ['bar.example.com']).
dns.resolveMx(hostname, callback)#
Uses the DNS protocol to resolve mail exchange records (MX records) for
the hostname. The addresses argument passed to the callback function
will contain an array of objects containing both
a priority and exchange property (e.g. [{priority: 10, exchange:
'mx.example.com'}, ...]).
dns.resolveNs(hostname, callback)#
Uses the DNS protocol to resolve name server records (NS records) for
the hostname. The addresses argument passed to the callback function
will contain an array of name server records available
for hostname (e.g., ['ns1.example.com', 'ns2.example.com']).
dns.resolveSoa(hostname, callback)#
Uses the DNS protocol to resolve a start of authority record (SOA record)
for the hostname. The addresses argument passed to
the callback function will be an object with the following properties:
nsname
hostmaster
serial
refresh
retry
expire
minttl
{
nsname: 'ns.example.com',
hostmaster: 'root.example.com',
serial: 2013101809,
refresh: 10000,
retry: 2400,
expire: 604800,
minttl: 3600
}
dns.resolveSrv(hostname, callback)#
Uses the DNS protocol to resolve service records (SRV records) for
the hostname. The addresses argument passed to the callback function
will be an array of objects with the following properties:
priority
weight
port
name
{
priority: 10,
weight: 5,
port: 21223,
name: 'service.example.com'
}
dns.resolvePtr(hostname, callback)#
Uses the DNS protocol to resolve pointer records (PTR records) for
the hostname. The addresses argument passed to the callback function
will be an array of strings containing the reply records.
dns.resolveTxt(hostname, callback)#
Uses the DNS protocol to resolve text queries (TXT records) for
the hostname. The addresses argument passed to the callback function is
a two-dimensional array of the text records available
for hostname (e.g., [ ['v=spf1 ip4:0.0.0.0 ', '~all' ] ]). Each sub-array
contains TXT chunks of one record. Depending on the use case, these
could be either joined together or treated separately.
dns.reverse(ip, callback)#
Performs a reverse DNS query that resolves an IPv4 or IPv6 address to an
array of hostnames.
The callback function has arguments (err, hostnames),
where hostnames is an array of resolved hostnames for the given ip.
On error, err is an Error object, where err.code is one of the DNS error
codes.
dns.setServers(servers)#
Sets the IP addresses of the servers to be used when resolving.
The servers argument is an array of IPv4 or IPv6 addresses.
If a port is specified on the address, it will be removed.
An error will be thrown if an invalid address is provided.
The dns.setServers() method must not be called while a DNS query is in
progress.
Error codes#
Each DNS query can return one of the following error codes:
Implementation considerations#
Although dns.lookup() and the
various dns.resolve*()/dns.reverse() functions have the same goal of
associating a network name with a network address (or vice versa), their
behavior is quite different. These differences can have subtle but
significant consequences on the behavior of Node.js programs.
dns.lookup()#
Under the hood, dns.lookup() uses the same operating system facilities as
most other programs. For instance, dns.lookup() will almost always resolve
a given name the same way as the ping command. On most POSIX-like
operating systems, the behavior of the dns.lookup() function can be
modified by changing settings in nsswitch.conf(5) and/or resolv.conf(5),
but note that changing these files will change the behavior of all other
programs running on the same operating system.
Though the call to dns.lookup() will be asynchronous from JavaScript's
perspective, it is implemented as a synchronous call to getaddrinfo(3) that
runs on libuv's threadpool. Because libuv's threadpool has a fixed size, it
means that if for whatever reason the call to getaddrinfo(3) takes a long
time, other operations that could run on libuv's threadpool (such as
filesystem operations) will experience degraded performance. In order to
mitigate this issue, one potential solution is to increase the size of libuv's
threadpool by setting the 'UV_THREADPOOL_SIZE' environment variable to
a value greater than 4 (its current default value). For more information on
libuv's threadpool, see the official libuv documentation.
dns.resolve(), dns.resolve*() and dns.reverse()#
These functions are implemented quite differently than dns.lookup(). They
do not use getaddrinfo(3) and they always perform a DNS query on the
network. This network communication is always done asynchronously, and
does not use libuv's threadpool.
Domain#
Stability: 0 - Deprecated
This module is pending deprecation. Once a replacement API has
been finalized, this module will be fully deprecated. Most end users
should not have cause to use this module. Users who absolutely must
have the functionality that domains provide may rely on it for the time
being but should expect to have to migrate to a different solution in the
future.
Domains provide a way to handle multiple different IO operations as a
single group. If any of the event emitters or callbacks registered to a
domain emit an 'error' event, or throw an error, then the domain object will
be notified, rather than losing the context of the error in
the process.on('uncaughtException') handler, or causing the program to
exit immediately with an error code.
Warning: Don't Ignore Errors!#
Domain error handlers are not a substitute for closing down your process
when an error occurs.
By the very nature of how throw works in JavaScript, there is almost never
any way to safely "pick up where you left off", without leaking references,
or creating some other sort of undefined brittle state.
The safest way to respond to a thrown error is to shut down the process.
Of course, in a normal web server, you might have many connections
open, and it is not reasonable to abruptly shut those down because an
error was triggered by someone else.
The better approach is to send an error response to the request that
triggered the error, while letting the others finish in their normal time, and
stop listening for new requests in that worker.
In this way, domain usage goes hand-in-hand with the cluster module,
since the master process can fork a new worker when a worker
encounters an error. For Node.js programs that scale to multiple machines,
the terminating proxy or service registry can take note of the failure, and
react accordingly.
For example, this is not a good idea:
// XXX WARNING! BAD IDEA!
var d = require('domain').create();
d.on('error', (er) => {
// The error won't crash the process, but what it does is worse!
// Though we've prevented abrupt process restarting, we are leaking
// resources like crazy if this ever happens.
// This is no better than process.on('uncaughtException')!
console.log('error, but oh well', er.message);
});
d.run(() => {
require('http').createServer((req, res) => {
handleRequest(req, res);
}).listen(PORT);
});
By using the context of a domain, and the resilience of separating our
program into multiple worker processes, we can react more appropriately,
and handle errors with much greater safety.
// Much better!
const cluster = require('cluster');
try {
// make sure we close down within 30 seconds
var killtimer = setTimeout(() => {
process.exit(1);
}, 30000);
// But don't keep the process open just for that!
killtimer.unref();
// stop taking new requests.
server.close();
// Let the master know we're dead. This will trigger a
// 'disconnect' in the cluster master, and then it will fork
// a new worker.
cluster.worker.disconnect();
// try to send an error to the request that triggered the problem
res.statusCode = 500;
res.setHeader('content-type', 'text/plain');
res.end('Oops, there was a problem!\n');
} catch (er2) {
// oh well, not much we can do at this point.
console.error('Error sending 500!', er2.stack);
}
});
// Because req and res were created before this domain existed,
// we need to explicitly add them.
// See the explanation of implicit vs explicit binding below.
d.add(req);
d.add(res);
// Now run the handler function in the domain.
d.run(() => {
handleRequest(req, res);
});
});
server.listen(PORT);
}
// This part isn't important. Just an example routing thing.
// You'd put your fancy application logic here.
function handleRequest(req, res) {
switch(req.url) {
case '/error':
// We do some async stuff, and then...
setTimeout(() => {
// Whoops!
flerb.bark();
});
break;
default:
res.end('ok');
}
}
Additions to Error objects#
Any time an Error object is routed through a domain, a few extra fields are
added to it.
Implicit Binding#
If domains are in use, then all new EventEmitter objects (including Stream
objects, requests, responses, etc.) will be implicitly bound to the active
domain at the time of their creation.
Additionally, callbacks passed to low-level event loop requests (such as to
fs.open(), or other callback-taking methods) will automatically be bound to
the active domain. If they throw, then the domain will catch the error.
In order to prevent excessive memory usage, Domain objects themselves
are not implicitly added as children of the active domain. If they were,
then it would be too easy to prevent request and response objects from
being properly garbage collected.
If you want to nest Domain objects as children of a parent Domain, then
you must explicitly add them.
Implicit binding routes thrown errors and 'error' events to the
Domain's 'error' event, but does not register the EventEmitter on the
Domain, so domain.dispose() will not shut down the EventEmitter. Implicit
binding only takes care of thrown errors and 'error' events.
Explicit Binding#
Sometimes, the domain in use is not the one that ought to be used for a
specific event emitter. Or, the event emitter could have been created in
the context of one domain, but ought to instead be bound to some other
domain.
For example, there could be one domain in use for an HTTP server, but
perhaps we would like to have a separate domain to use for each request.
That is possible via explicit binding.
For example:
// create a top-level domain for the server
const domain = require('domain');
const http = require('http');
const serverDomain = domain.create();
serverDomain.run(() => {
  // server is created in the scope of serverDomain
  http.createServer((req, res) => {
    // req and res are also created in the scope of serverDomain;
    // a separate domain could be created per request here, with
    // req and res added to it explicitly.
    res.end('ok');
  }).listen(1337);
});
domain.create()#
Returns: <Domain>
domain.run(fn[, ...args])#
fn <Function>
Run the supplied function in the context of the domain, implicitly binding
all event emitters, timers, and low-level requests that are created in that
context. Optionally, arguments can be passed to the function.
domain.members#
<Array>
An array of timers and event emitters that have been explicitly added to
the domain.
domain.add(emitter)#
Explicitly adds an emitter to the domain. If any event handlers called by
the emitter throw an error, or if the emitter emits an 'error' event, it will
be routed to the domain's 'error' event, just like with implicit binding.
domain.bind(callback)#
const d = domain.create();
function readSomeFile(filename, cb) {
fs.readFile(filename, 'utf8', d.bind((er, data) => {
// if this throws, it will also be passed to the domain
return cb(er, data ? JSON.parse(data) : null);
}));
}
d.on('error', (er) => {
// an error occurred somewhere.
// if we throw it now, it will crash the program
// with the normal line number and stack message.
});
domain.intercept(callback)#
const d = domain.create();
function readSomeFile(filename, cb) {
fs.readFile(filename, 'utf8', d.intercept((data) => {
// note, the first argument is never passed to the
// callback since it is assumed to be the 'Error' argument
// and thus intercepted by the domain.
// if this throws, it will also be passed to the domain
// so the error-handling logic can be moved to the 'error'
// event on the domain instead of being repeated throughout
// the program.
return cb(null, JSON.parse(data));
}));
}
d.on('error', (er) => {
// an error occurred somewhere.
// if we throw it now, it will crash the program
// with the normal line number and stack message.
});
domain.enter()#
The enter method is plumbing used by the run, bind,
and intercept methods to set the active domain. It sets domain.active
and process.domain to the domain, and implicitly pushes the domain onto
the domain stack managed by the domain module.
Errors#
Applications running in Node.js will generally experience four categories of
errors:
All JavaScript and System errors raised by Node.js inherit from, or are
instances of, the standard JavaScript <Error> class and are guaranteed to
provide at least the properties available on that class.
Error Propagation and Interception#
Node.js supports several mechanisms for propagating and handling errors
that occur while an application is running. How these errors are reported
and handled depends entirely on the type of Error and the style of the API
that is called.
const fs = require('fs');
fs.readFile('a file that does not exist', (err, data) => {
if (err) {
console.error('There was an error reading the file!', err);
return;
}
// Otherwise handle the data
});
When an asynchronous method is called on an object that is
an EventEmitter, errors can be routed to that object's 'error' event.
const net = require('net');
const connection = net.connect('localhost');
// Adding an 'error' event handler to a stream:
connection.on('error', (err) => {
  // If the connection is reset by the server, or if it can't
  // connect at all, the error will be sent here.
  console.error(err);
});
connection.pipe(process.stdout);
A handful of typically asynchronous methods in the Node.js API may
still use the throw mechanism to raise exceptions that must be
handled using try / catch. There is no comprehensive list of such
methods; please refer to the documentation of each method to
determine the appropriate error handling mechanism required.
The use of the 'error' event mechanism is most common for stream-based and event emitter-based APIs, which themselves represent a series
of asynchronous operations over time (as opposed to a single operation
that may pass or fail).
For all EventEmitter objects, if an 'error' event handler is not provided, the
error will be thrown, causing the Node.js process to report an unhandled
exception and crash unless either: The domain module is used
appropriately or a handler has been registered for
the process.on('uncaughtException') event.
const EventEmitter = require('events');
const ee = new EventEmitter();
setImmediate(() => {
// This will crash the process because no 'error' event
// handler has been added.
ee.emit('error', new Error('This will crash'));
});
// This will not work:
try {
  fs.readFile('/some/file/that/does-not-exist', (err, data) => {
    // mistaken assumption: throwing here...
    if (err) {
      throw err;
    }
  });
} catch(err) {
  // This will not catch the throw!
  console.log(err);
}
This will not work because the callback function passed to fs.readFile() is
called asynchronously. By the time the callback has been called, the
surrounding code (including the try { } catch(err) { } block) will have
already exited. Throwing an error inside the callback can crash the
Node.js process in most cases. If domains are enabled, or a handler has
been registered with process.on('uncaughtException'), such errors can be
intercepted.
Class: Error#
A generic JavaScript Error object that does not denote any specific
circumstance of why the error occurred. Error objects capture a "stack
trace" detailing the point in the code at which the Error was instantiated,
and may provide a text description of the error.
All errors generated by Node.js, including all System and JavaScript errors,
will either be instances of, or inherit from, the Error class.
new Error(message)#
Creates a new Error object and sets the error.message property to the
provided text message. If an object is passed as message, the text
message is generated by calling message.toString().
The error.stack property will represent the point in the code at which new
Error() was called. Stack traces are dependent on V8's stack trace API.
Stack traces extend only to either (a) the beginning of synchronous code
execution, or (b) the number of frames given by the
property Error.stackTraceLimit, whichever is smaller.
Error.captureStackTrace(targetObject[, constructorOpt])#
Creates a .stack property on targetObject, which when accessed returns a
string representing the location in the code at
whichError.captureStackTrace() was called.
const myObject = {};
Error.captureStackTrace(myObject);
myObject.stack // similar to `new Error().stack`
The first line of the trace, instead of being prefixed with ErrorType:
message, will be the result of calling targetObject.toString().
The optional constructorOpt argument accepts a function. If given, all
frames above constructorOpt, including constructorOpt, will be omitted
from the generated stack trace.
The constructorOpt argument is useful for hiding implementation details
of error generation from an end user. For instance:
function MyError() {
Error.captureStackTrace(this, MyError);
}
// Without passing MyError to captureStackTrace, the MyError
// frame would show up in the .stack property. By passing
// the constructor, we omit that frame and all frames above it.
new MyError().stack
Error.stackTraceLimit#
The Error.stackTraceLimit property specifies the number of stack frames
collected by a stack trace (whether generated by new
Error().stack orError.captureStackTrace(obj)).
The default value is 10 but may be set to any valid JavaScript number.
Changes will affect any stack trace captured after the value has been
changed.
If set to a non-number value, or set to a negative number, stack traces
will not capture any frames.
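A minimal sketch of the limit in action (the value 2 is illustrative):

```javascript
// Limit captured stacks to two frames.
Error.stackTraceLimit = 2;

function inner() { return new Error('boom'); }
function outer() { return inner(); }

const err = outer();
// One "Error: boom" header line plus at most two "at ..." frames:
console.log(err.stack.split('\n').length); // 3
```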
error.message#
Returns the string description of the error as set by calling new
Error(message). The message passed to the constructor will also appear
in the first line of the stack trace of the Error.
error.stack#
Returns a string describing the point in the code at which the Error was
instantiated.
For example:
Error: Things keep happening!
at /home/gbusey/file.js:525:2
at Frobnicator.refrobulate (/home/gbusey/business-logic.js:424:21)
at Actor.<anonymous> (/home/gbusey/actors.js:400:8)
at increaseSynergy (/home/gbusey/actors.js:701:6)
The first line is formatted as <error class name>: <error message>, and
is followed by a series of stack frames (each line beginning with "at ").
Each frame describes a call site within the code that led to the error
being generated. V8 attempts to display a name for each function (by
variable name, function name, or object method name), but occasionally it
will not be able to find a suitable name. If V8 cannot determine a name for
the function, only location information will be displayed for that frame.
Otherwise, the determined function name will be displayed with location
information appended in parentheses.
It is important to note that frames are only generated for JavaScript
functions. If, for example, execution synchronously passes through a C++
addon function called cheetahify, which itself calls a JavaScript function,
the frame representing the cheetahify call will not be present in the stack
traces:
const assert = require('assert');
try {
  doesNotExist;
} catch(err) {
  assert(err.arguments[0], 'doesNotExist');
}
Unless an application is dynamically generating and running
code, ReferenceError instances should always be considered a bug in the
code or its dependencies.
Class: SyntaxError#
A subclass of Error that indicates that a program is not valid JavaScript.
These errors may only be generated and propagated as a result of code
evaluation. Code evaluation may happen as a result
of eval, Function, require, or vm. These errors are almost always indicative
of a broken program.
try {
require('vm').runInThisContext('binary ! isNotOk');
} catch(err) {
// err will be a SyntaxError
}
SyntaxError instances are unrecoverable in the context that created them;
they may only be caught by other contexts.
Class: TypeError#
A subclass of Error that indicates that a provided argument is not an
allowable type. For example, passing a function to a parameter which
expects a string would be considered a TypeError.
require('url').parse(() => { });
// throws TypeError, since it expected a string
Node.js will generate and throw TypeError instances immediately as a
form of argument validation.
Exceptions vs. Errors#
Events#
Stability: 2 - Stable
Much of the Node.js core API is built around an idiomatic asynchronous
event-driven architecture in which certain kinds of objects (called
"emitters") periodically emit named events that cause Function objects
("listeners") to be called.
For instance: a net.Server object emits an event each time a peer
connects to it; a fs.ReadStream emits an event when the file is opened;
a stream emits an event whenever data is available to be read.
All objects that emit events are instances of the EventEmitter class. These
objects expose an eventEmitter.on() function that allows one or more
Functions to be attached to named events emitted by the object. Typically,
event names are camel-cased strings but any valid JavaScript property
key can be used.
When the EventEmitter object emits an event, all of the Functions
attached to that specific event are called synchronously. Any values
returned by the called listeners are ignored and will be discarded.
The following example shows a simple EventEmitter instance with a single
listener. The eventEmitter.on() method is used to register listeners, while
the eventEmitter.emit() method is used to trigger the event.
const EventEmitter = require('events');
class MyEmitter extends EventEmitter {}
const myEmitter = new MyEmitter();
myEmitter.on('event', () => {
console.log('an event occurred!');
});
myEmitter.emit('event');
Passing arguments and this to listeners#
The eventEmitter.emit() method allows an arbitrary set of arguments to be
passed to the listener functions. It is important to keep in mind that when
an ordinary listener function is called by the EventEmitter, the
standard this keyword is intentionally set to reference the EventEmitter to
which the listener is attached.
const myEmitter = new MyEmitter();
myEmitter.on('event', function(a, b) {
console.log(a, b, this);
// Prints:
// a b MyEmitter {
//
domain: null,
//
_events: { event: [Function] },
//
_eventsCount: 1,
//
_maxListeners: undefined }
});
myEmitter.emit('event', 'a', 'b');
It is possible to use ES6 Arrow Functions as listeners, however, when doing
so, the this keyword will no longer reference the EventEmitter instance:
const myEmitter = new MyEmitter();
myEmitter.on('event', (a, b) => {
console.log(a, b, this);
// Prints: a b {}
});
myEmitter.emit('event', 'a', 'b');
Asynchronous vs. Synchronous#
The EventEmitter calls all listeners synchronously in the order in which
they were registered. This is important to ensure the proper sequencing of
events and to avoid race conditions or logic errors. When appropriate,
listener functions can switch to an asynchronous mode of operation using
the setImmediate() or process.nextTick() methods:
const myEmitter = new MyEmitter();
myEmitter.on('event', (a, b) => {
setImmediate(() => {
console.log('this happens asynchronously');
});
});
myEmitter.emit('event', 'a', 'b');
Handling events only once#
When a listener is registered using the eventEmitter.on() method, that
listener will be invoked every time the named event is emitted.
const myEmitter = new MyEmitter();
var m = 0;
myEmitter.on('event', () => {
console.log(++m);
});
myEmitter.emit('event');
// Prints: 1
myEmitter.emit('event');
// Prints: 2
Using the eventEmitter.once() method, it is possible to register a listener
that is unregistered before it is called.
const myEmitter = new MyEmitter();
var m = 0;
myEmitter.once('event', () => {
console.log(++m);
});
myEmitter.emit('event');
// Prints: 1
myEmitter.emit('event');
// Ignored
Error events#
When an error occurs within an EventEmitter instance, the typical action is
for an 'error' event to be emitted. These are treated as a special case
within Node.js.
If an EventEmitter does not have at least one listener registered for
the 'error' event, and an 'error' event is emitted, the error is thrown, a
stack trace is printed, and the Node.js process exits.
const myEmitter = new MyEmitter();
myEmitter.emit('error', new Error('whoops!'));
// Throws and crashes Node.js
To guard against crashing the Node.js process, developers can either
register a listener for the process.on('uncaughtException') event or use
the domain module (Note, however, that the domain module has been
deprecated).
const myEmitter = new MyEmitter();
process.on('uncaughtException', (err) => {
  console.error('whoops! there was an error');
});
myEmitter.emit('error', new Error('whoops!'));
// Prints: whoops! there was an error
Event: 'newListener'#
The EventEmitter instance will emit its own 'newListener' event before a
listener is added to its internal array of listeners.
Listeners registered for the 'newListener' event will be passed the event
name and a reference to the listener being added.
The fact that the event is triggered before adding the listener has a subtle
but important side effect: any additional listeners registered to the
same name within the 'newListener' callback will be inserted before the
listener that is in the process of being added.
emitter.on(eventName, listener)#
Adds the listener function to the end of the listeners array for the event
named eventName. No checks are made to see if the listener has already
been added. Multiple calls passing the same combination
of eventName and listener will result in the listener being added, and
called, multiple times.
server.on('connection', (stream) => {
console.log('someone connected!');
});
Returns a reference to the EventEmitter so calls can be chained.
By default, event listeners are invoked in the order they are added.
The emitter.prependListener() method can be used as an alternative to
add the event listener to the beginning of the listeners array.
const myEE = new EventEmitter();
myEE.on('foo', () => console.log('a'));
myEE.prependListener('foo', () => console.log('b'));
myEE.emit('foo');
// Prints:
// b
// a
emitter.once(eventName, listener)#
Adds a one time listener function for the event named eventName. The
next time eventName is triggered, this listener is removed and then
invoked.
server.once('connection', (stream) => {
console.log('Ah, we have our first user!');
});
Returns a reference to the EventEmitter so calls can be chained.
By default, event listeners are invoked in the order they are added.
The emitter.prependOnceListener() method can be used as an alternative
to add the event listener to the beginning of the listeners array.
const myEE = new EventEmitter();
myEE.once('foo', () => console.log('a'));
myEE.prependOnceListener('foo', () => console.log('b'));
myEE.emit('foo');
// Prints:
// b
// a
emitter.prependListener(eventName, listener)#
Adds the listener function to the beginning of the listeners array for the
event named eventName. No checks are made to see if the listener has
already been added. Multiple calls passing the same combination
of eventName and listener will result in the listener being added, and
called, multiple times.
server.prependListener('connection', (stream) => {
console.log('someone connected!');
});
Returns a reference to the EventEmitter so calls can be chained.
emitter.prependOnceListener(eventName, listener)#
Adds a one time listener function for the event named eventName to
the beginning of the listeners array. The next time eventName is
triggered, this listener is removed, and then invoked.
server.prependOnceListener('connection', (stream) => {
console.log('Ah, we have our first user!');
});
Returns a reference to the EventEmitter so calls can be chained.
emitter.removeAllListeners([eventName])#
Removes all listeners, or those of the specified eventName.
emitter.removeListener(eventName, listener)#
Removes the specified listener from the listener array for the event
named eventName. removeListener will remove, at most, one instance of a
listener from the listener array.
const myEmitter = new MyEmitter();
var callbackA = () => {
console.log('A');
myEmitter.removeListener('event', callbackB);
};
var callbackB = () => {
console.log('B');
};
myEmitter.on('event', callbackA);
myEmitter.on('event', callbackB);
// callbackA removes listener callbackB but it will still be called.
// Internal listener array at time of emit [callbackA, callbackB]
myEmitter.emit('event');
// Prints:
// A
// B
// callbackB is now removed.
// Internal listener array [callbackA]
myEmitter.emit('event');
// Prints:
// A
Because listeners are managed using an internal array, calling this will
change the position indices of any listener registered after the listener
being removed. This will not impact the order in which listeners are called,
but it means that any copies of the listener array, as returned by
the emitter.listeners() method, will need to be recreated.
Returns a reference to the EventEmitter so calls can be chained.
emitter.setMaxListeners(n)#
By default EventEmitters will print a warning if more than 10 listeners are
added for a particular event. This is a useful default that helps finding
memory leaks. Obviously, not all events should be limited to just 10
listeners. The emitter.setMaxListeners() method allows the limit to be
modified for this specific EventEmitter instance. The value can be set
to Infinity (or 0) to indicate an unlimited number of listeners.
Returns a reference to the EventEmitter so calls can be chained.
File System#
Stability: 2 - Stable
File I/O is provided by simple wrappers around standard POSIX functions.
To use this module do require('fs'). All the methods have asynchronous
and synchronous forms.
The asynchronous form always takes a completion callback as its last
argument. The arguments passed to the completion callback depend on
the method, but the first argument is always reserved for an exception. If
the operation was completed successfully, then the first argument will
be null or undefined.
When using the synchronous form any exceptions are immediately
thrown. You can use try/catch to handle exceptions or allow them to
bubble up.
Here is an example of the asynchronous version:
const fs = require('fs');
fs.unlink('/tmp/hello', (err) => {
if (err) throw err;
console.log('successfully deleted /tmp/hello');
});
Here is the synchronous version:
const fs = require('fs');
fs.unlinkSync('/tmp/hello');
console.log('successfully deleted /tmp/hello');
With the asynchronous methods there is no guaranteed ordering. So the
following is prone to error:
fs.rename('/tmp/hello', '/tmp/world', (err) => {
if (err) throw err;
console.log('rename complete');
});
fs.stat('/tmp/world', (err, stats) => {
if (err) throw err;
console.log(`stats: ${JSON.stringify(stats)}`);
});
It could be that fs.stat is executed before fs.rename. The correct way to
do this is to chain the callbacks.
Class: fs.FSWatcher#
Objects returned from fs.watch() are of this type.
watcher.close()#
Stop watching for changes on the given fs.FSWatcher.
Class: fs.ReadStream#
ReadStream is a Readable Stream.
Event: 'open'#
fd <Integer> file descriptor used by the ReadStream.
Emitted when the ReadStream's file is opened.
Class: fs.Stats#
Objects returned from fs.stat(), fs.lstat() and fs.fstat() and their
synchronous counterparts are of this type.
stats.isFile()
stats.isDirectory()
stats.isBlockDevice()
stats.isCharacterDevice()
stats.isFIFO()
stats.isSocket()
atime "Access Time" - Time when file data last accessed. Changed
by the mknod(2), utimes(2), and read(2) system calls.
mtime "Modified Time" - Time when file data last modified. Changed
by the mknod(2), utimes(2), and write(2) system calls.
ctime "Change Time" - Time when file status was last changed
(inode data modification). Changed by the chmod(2), chown(2), link(2),
mknod(2), rename(2), unlink(2), utimes(2), read(2), and write(2) system
calls.
birthtime "Birth Time" - Time of file creation. Set once when the file
is created. On filesystems where birthtime is not available, this field
may instead hold either the ctime or 1970-01-01T00:00Z (ie, unix
epoch timestamp 0). Note that this value may be greater
than atime or mtime in this case. On Darwin and other FreeBSD
variants, also set if the atime is explicitly set to an earlier value than
the current birthtime using the utimes(2) system call.
Prior to Node v0.12, the ctime held the birthtime on Windows systems.
Note that as of v0.12, ctime is not "creation time", and on Unix systems, it
never was.
Class: fs.WriteStream#
WriteStream is a Writable Stream.
Event: 'open'#
fd <Integer> file descriptor used by the WriteStream.
Emitted when the WriteStream's file is opened.
fs.appendFile(file, data[, options], callback)#
Asynchronously append data to a file, creating the file if it does not yet
exist. data can be a string or a buffer.
Example:
fs.appendFile('message.txt', 'data to append', (err) => {
if (err) throw err;
console.log('The "data to append" was appended to file!');
});
If options is a string, then it specifies the encoding. Example:
fs.appendFile('message.txt', 'data to append', 'utf8', callback);
Any specified file descriptor has to have been opened for appending.
Note: Specified file descriptors will not be closed automatically.
fs.appendFileSync(file, data[, options])#
The synchronous version of fs.appendFile(). Returns undefined.
fs.createReadStream(path[, options])#
Returns a new ReadStream object.
options is an object or string with the following defaults:
{
flags: 'r',
encoding: null,
fd: null,
mode: 0o666,
autoClose: true
}
options can include start and end values to read a range of bytes from the
file instead of the entire file. Both start and end are inclusive and start at
0. The encoding can be any one of those accepted by Buffer.
If fd is specified, ReadStream will ignore the path argument and will use
the specified file descriptor. This means that no 'open' event will be
emitted. Note that fd should be blocking; non-blocking fds should be
passed to net.Socket.
If autoClose is false, then the file descriptor won't be closed, even if
there's an error. It is your responsibility to close it and make sure there's
no file descriptor leak. If autoClose is set to true (default behavior),
on error or end the file descriptor will be closed automatically.
mode sets the file mode (permission and sticky bits), but only if the file
was created.
An example to read the last 10 bytes of a file which is 100 bytes long:
fs.createReadStream('sample.txt', {start: 90, end: 99});
If options is a string, then it specifies the encoding.
fs.createWriteStream(path[, options])#
options is an object or string with the following defaults:
{
flags: 'w',
defaultEncoding: 'utf8',
fd: null,
mode: 0o666,
autoClose: true
}
fs.exists(path, callback)#
Stability: 0 - Deprecated: Use fs.stat() or fs.access() instead.
Test whether or not the given path exists by checking with the file system.
Then call the callback argument with either true or false. Example:
fs.exists('/etc/passwd', (exists) => {
console.log(exists ? 'it\'s there' : 'no passwd!');
});
fs.fsync(fd, callback)#
fd <Integer>
callback <Function>
Asynchronous fsync(2). No arguments other than a possible exception are
given to the completion callback.
fs.open(path, flags[, mode], callback)#
Asynchronous file open. See open(2). flags can be:
'r' - Open file for reading. An exception occurs if the file does not
exist.
'r+' - Open file for reading and writing. An exception occurs if the file
does not exist.
'w' - Open file for writing. The file is created (if it does not exist) or
truncated (if it exists).
'w+' - Open file for reading and writing. The file is created (if it does
not exist) or truncated (if it exists).
'a' - Open file for appending. The file is created if it does not exist.
'a+' - Open file for reading and appending. The file is created if it
does not exist.
mode sets the file mode (permission and sticky bits), but only if the file
was created. It defaults to 0666, readable and writable.
The callback gets two arguments (err, fd).
The exclusive flag 'x' (O_EXCL flag in open(2)) ensures that path is newly
created. On POSIX systems, path is considered to exist even if it is a
symlink to a non-existent file. The exclusive flag may or may not work
with network file systems.
flags can also be a number as documented by open(2); commonly used
constants are available from require('constants'). On Windows, flags are
translated to their equivalent ones where applicable,
e.g. O_WRONLY to FILE_GENERIC_WRITE, or O_EXCL|
O_CREAT to CREATE_NEW, as accepted by CreateFileW.
On Linux, positional writes don't work when the file is opened in append
mode. The kernel ignores the position argument and always appends the
data to the end of the file.
Note: The behavior of fs.open() is platform specific for some flags. As
such, opening a directory on OS X and Linux with the 'a+' flag - see
example below - will return an error. In contrast, on Windows and
FreeBSD, a file descriptor will be returned.
// OS X and Linux
fs.open('<directory>', 'a+', (err, fd) => {
// => [Error: EISDIR: illegal operation on a directory, open <directory>]
});
// Windows and FreeBSD
fs.open('<directory>', 'a+', (err, fd) => {
// => null, <fd>
});
fs.rmdir(path, callback)#
callback <Function>
Asynchronous rmdir(2). No arguments other than a possible exception are
given to the completion callback.
fs.watch(filename[, options][, listener])#
Watch for changes on filename, where filename is either a file or a
directory. The returned object is a fs.FSWatcher.
Caveats#
The fs.watch API is not 100% consistent across platforms, and is
unavailable in some situations.
The recursive option is only supported on OS X and Windows.
Availability#
fs.watchFile(filename[, options], listener)#
options <Object>
persistent <Boolean>
interval <Integer>
listener <Function>
Watch for changes on filename. The callback listener will be called each
time the file is accessed.
The options argument may be omitted. If provided, it should be an object.
The options object may contain a boolean named persistent that indicates
whether the process should continue to run as long as files are being
watched. The options object may specify an interval property indicating
how often the target should be polled in milliseconds. The default
is { persistent: true, interval: 5007 }.
The listener gets two arguments: the current stat object and the previous
stat object:
fs.watchFile('message.text', (curr, prev) => {
console.log(`the current mtime is: ${curr.mtime}`);
console.log(`the previous mtime was: ${prev.mtime}`);
});
These stat objects are instances of fs.Stat.
fs.write(fd, buffer, offset, length[, position], callback)#
fd <Integer>
buffer <Buffer>
offset <Integer>
length <Integer>
position <Integer>
callback <Function>
Write buffer to the file specified by fd.
Note that it is unsafe to use fs.write multiple times on the same file
without waiting for the callback. For this scenario, fs.createWriteStream is
strongly recommended.
On Linux, positional writes don't work when the file is opened in append
mode. The kernel ignores the position argument and always appends the
data to the end of the file.
fs.write(fd, data[, position[, encoding]], callback)#
fd <Integer>
position <Integer>
encoding <String>
callback <Function>
Write data to the file specified by fd. If data is not a Buffer instance then
the value will be coerced to a string.
position refers to the offset from the beginning of the file where this data
should be written. If typeof position !== 'number' the data will be written
at the current position. See pwrite(2).
encoding is the expected string encoding.
The callback will receive the arguments (err, written,
string) where written specifies how many bytes the passed string required
to be written. Note that bytes written is not the same as string characters.
See Buffer.byteLength.
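The character/byte distinction can be sketched directly: a string's length counts characters, while Buffer.byteLength counts the encoded bytes, and the two diverge as soon as a multi-byte character appears.

```javascript
// 'é' is one character but two bytes in UTF-8, so the counts differ.
const s = 'héllo';
const chars = s.length;                     // 5 characters
const bytes = Buffer.byteLength(s, 'utf8'); // 6 bytes
```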
Unlike when writing buffer, the entire string must be written. No substring
may be specified. This is because the byte offset of the resulting data may
not be the same as the string offset.
Note that it is unsafe to use fs.write multiple times on the same file
without waiting for the callback. For this scenario, fs.createWriteStream is
strongly recommended.
On Linux, positional writes don't work when the file is opened in append
mode. The kernel ignores the position argument and always appends the
data to the end of the file.
fs.writeFile(file, data[, options], callback)#
callback <Function>
Asynchronously writes data to a file, replacing the file if it already
exists. data can be a string or a buffer.
Example:
fs.writeFile('message.txt', 'Hello Node.js', (err) => {
if (err) throw err;
console.log('It\'s saved!');
});
If options is a string, then it specifies the encoding.
Global Objects#
These objects are available in all modules. Some of these objects aren't
actually in the global scope but in the module scope - this will be noted.
Class: Buffer#
<Function>
Used to handle binary data. See the buffer section.
__dirname#
<String>
The name of the directory that the currently executing script resides in.
Example: running node example.js from /Users/mjr
console.log(__dirname);
// /Users/mjr
__dirname isn't actually a global but rather local to each module.
For instance, given two modules: a and b, where b is a dependency
of a and there is a directory structure of:
/Users/mjr/app/a.js
/Users/mjr/app/node_modules/b/b.js
References to __dirname within b.js will return
/Users/mjr/app/node_modules/b while references to __dirname
within a.js will return /Users/mjr/app.
__filename#
<String>
The filename of the code being executed. This is the resolved absolute
path of this code file. For a main program this is not necessarily the same
filename used in the command line. The value inside a module is the path
to that module file.
Example: running node example.js from /Users/mjr
console.log(__filename);
// /Users/mjr/example.js
global#
<Object>
In browsers, the top-level scope is the global scope. That means that in
browsers if you're in the global scope var something will define a global
variable. In Node.js this is different. The top-level scope is not the global
scope; var something inside a Node.js module will be local to that
module.
module#
<Object>
A reference to the current module. In particular, module.exports is used
for defining what a module exports and makes available
through require(). module isn't actually a global but rather local to
each module.
require()#
<Function>
To require modules. See the Modules section. require isn't actually a global
but rather local to each module.
require.cache#
<Object>
Modules are cached in this object when they are required. By deleting a
key value from this object, the next require will reload the module. Note
that this does not apply to native addons, for which reloading will result in
an Error.
require.extensions#
Stability: 0 - Deprecated
<Object>
Instruct require on how to handle certain file extensions. In the past,
this list had been used to load non-JavaScript modules into Node.js by
compiling them on-demand. However, in practice, there are much better
ways to do this, such as loading modules via some other Node.js program,
or compiling them to JavaScript ahead of time.
Since the Module system is locked, this feature will probably never go
away. However, it may have subtle bugs and complexities that are best
left untouched.
require.resolve()#
Use the internal require() machinery to look up the location of a module,
but rather than loading the module, just return the resolved filename.
setImmediate(callback[, arg][, ...])#
setImmediate is described in the timers section.
setInterval(callback, delay[, arg][, ...])#
setInterval is described in the timers section.
setTimeout(callback, delay[, arg][, ...])#
setTimeout is described in the timers section.
HTTP#
Stability: 2 - Stable
To use the HTTP server and client one must require('http').
The HTTP interfaces in Node.js are designed to support many features of
the protocol which have been traditionally difficult to use. In particular,
large, possibly chunk-encoded, messages. The interface is careful to never
buffer entire requests or responses--the user is able to stream data.
HTTP message headers are represented by an object like this:
{ 'content-length': '123',
'content-type': 'text/plain',
'connection': 'keep-alive',
'host': 'mysite.com',
'accept': '*/*' }
Keys are lowercased. Values are not modified.
In order to support the full spectrum of possible HTTP applications,
Node.js's HTTP API is very low-level. It deals with stream handling and
message parsing only. It parses a message into headers and body but it
does not parse the actual headers or the body.
See message.headers for details on how duplicate headers are handled.
The raw headers as they were received are retained in
the rawHeaders property, which is an array of [key, value, key2,
value2, ...]. For example, the previous message header object might have
a rawHeaders list like the following:
[ 'ConTent-Length', '123456',
'content-LENGTH', '123',
'content-type', 'text/plain',
'CONNECTION', 'keep-alive',
'Host', 'mysite.com',
'accepT', '*/*' ]
Class: http.Agent#
The HTTP Agent is used for pooling sockets used in HTTP client requests.
The HTTP Agent also defaults client requests to using
Connection: keep-alive. If no pending HTTP requests are waiting on a
socket to become free, the socket is closed. This means that Node.js's
pool has the benefit of keep-alive when under load but still does not
require developers to manually close the HTTP clients using KeepAlive.
If you opt into using HTTP KeepAlive, you can create an Agent object with
that flag set to true. (See the constructor options.) Then, the Agent will
keep unused sockets in a pool for later use. They will be explicitly marked
so as to not keep the Node.js process running. However, it is still a good
idea to explicitly destroy() KeepAlive agents when they are no longer in
use, so that the Sockets will be shut down.
Sockets are removed from the agent's pool when the socket emits either
a 'close' event or a special 'agentRemove' event. This means that if you
intend to keep one HTTP request open for a long time and don't want it to
stay in the pool you can do something along the lines of:
http.get(options, (res) => {
// Do stuff
}).on('socket', (socket) => {
socket.emit('agentRemove');
});
Alternatively, you could just opt out of pooling entirely using agent:false:
http.get({
hostname: 'localhost',
port: 80,
path: '/',
agent: false // create a new agent just for this one request
}, (res) => {
// Do stuff with response
})
new Agent([options])#
The default http.globalAgent that is used by http.request() has all of
these values set to their respective defaults. To configure any of them,
you must create your own http.Agent object.
const http = require('http');
var keepAliveAgent = new http.Agent({ keepAlive: true });
options.agent = keepAliveAgent;
http.request(options, onResponseCallback);
agent.createConnection(options[, callback])#
Produces a socket/stream to be used for HTTP requests.
By default, this function is the same as net.createConnection(). However,
custom Agents may override this method in case greater flexibility is
desired.
A socket/stream can be supplied in one of two ways: by returning the
socket/stream from this function, or by passing the socket/stream
to callback.
callback has a signature of (err, stream).
agent.destroy()#
Destroy any sockets that are currently in use by the agent.
It is usually not necessary to do this. However, if you are using an agent
with KeepAlive enabled, then it is best to explicitly shut down the agent
when you know that it will no longer be used. Otherwise, sockets may
hang open for quite a long time before the server terminates them.
agent.freeSockets#
An object which contains arrays of sockets currently awaiting use by the
Agent when HTTP KeepAlive is used. Do not modify.
agent.getName(options)#
Get a unique name for a set of request options, to determine whether a
connection can be reused. In the http agent, this
returns host:port:localAddress. In the https agent, the name includes the
CA, cert, ciphers, and other HTTPS/TLS-specific options that determine
socket reusability.
Class: http.ClientRequest#
This object is created internally and returned from http.request(). It
represents an in-progress request whose header has already been queued.
During the 'response' event, one can add listeners to the response object;
particularly to listen for the 'data' event.
If no 'response' handler is added, then the response will be entirely
discarded. However, if you add a 'response' event handler, then
you must consume the data from the response object, either by
calling response.read() whenever there is a 'readable' event, or by adding
a 'data' handler, or by calling the .resume() method. Until the data is
consumed, the 'end' event will not fire. Also, until the data is read it will
consume memory that can eventually lead to a 'process out of memory'
error.
Note: Node.js does not check whether Content-Length and the length of
the body which has been transmitted are equal or not.
The request implements the Writable Stream interface. This is
an EventEmitter with the following events:
Event: 'abort'#
function () { }
Emitted when the request has been aborted by the client. This event is
only emitted on the first call to abort().
Event: 'checkExpectation'#
function (request, response) { }
Emitted each time a request with an HTTP Expect header is received,
where the value is not 100-continue. If this event isn't listened for, the
server will automatically respond with a 417 Expectation Failed as
appropriate.
Note that when this event is emitted and handled, the request event will
not be emitted.
Event: 'connect'#
function (response, socket, head) { }
Event: 'upgrade'#
function (response, socket, head) { }
Emitted each time a server responds to a request with an upgrade. If this
event isn't being listened for, clients receiving an upgrade header will
have their connections closed.
A client-server pair demonstrating how to listen for the 'upgrade' event:
const http = require('http');
// Create an HTTP server
var srv = http.createServer( (req, res) => {
res.writeHead(200, {'Content-Type': 'text/plain'});
res.end('okay');
});
srv.on('upgrade', (req, socket, head) => {
socket.write('HTTP/1.1 101 Web Socket Protocol Handshake\r\n' +
'Upgrade: WebSocket\r\n' +
'Connection: Upgrade\r\n' +
'\r\n');
socket.pipe(socket); // echo back
});
// now that server is running
srv.listen(1337, '127.0.0.1', () => {
// make a request
var options = {
port: 1337,
hostname: '127.0.0.1',
headers: {
'Connection': 'Upgrade',
'Upgrade': 'websocket'
}
};
var req = http.request(options);
req.end();
req.on('upgrade', (res, socket, upgradeHead) => {
console.log('got upgraded!');
socket.end();
process.exit(0);
});
});
request.setNoDelay([noDelay])#
Once a socket is assigned to this request and is
connected socket.setNoDelay() will be called.
request.setSocketKeepAlive([enable][, initialDelay])#
Once a socket is assigned to this request and is
connected socket.setKeepAlive() will be called.
request.setTimeout(timeout[, callback])#
Once a socket is assigned to this request and is
connected socket.setTimeout() will be called.
Event: 'clientError'#
function (exception, socket) { }
If a client connection emits an 'error' event, it will be forwarded here.
When this happens there is no response object, so any reply must be
written directly to the socket object. Care must be taken to ensure the
response is a properly formatted HTTP response message.
Event: 'close'#
function () { }
Emitted when the server closes.
Event: 'connect'#
function (request, socket, head) { }
Emitted each time a client requests an HTTP CONNECT method. If this event
isn't listened for, then clients requesting a CONNECT method will have
their connections closed.
After this event is emitted, the request's socket will not have
a 'data' event listener, meaning you will need to bind to it in order to
handle data sent to the server on that socket.
Event: 'connection'#
function (socket) { }
When a new TCP stream is established. socket is an object of
type net.Socket. Usually users will not want to access this event. In
particular, the socket will not emit 'readable' events because of how the
protocol parser attaches to the socket. The socket can also be accessed
at request.connection.
Event: 'request'#
function (request, response) { }
Emitted each time there is a request. Note that there may be multiple
requests per connection (in the case of keep-alive connections). request is
an instance of http.IncomingMessage and response is an instance
of http.ServerResponse.
Event: 'upgrade'#
function (request, socket, head) { }
Emitted each time a client requests an HTTP upgrade. If this event isn't
listened for, then clients requesting an upgrade will have their
connections closed.
After this event is emitted, the request's socket will not have
a 'data' event listener, meaning you will need to bind to it in order to
handle data sent to the server on that socket.
server.close([callback])#
Stops the server from accepting new connections. See net.Server.close().
server.listen(handle[, callback])#
handle <Object>
callback <Function>
The handle object can be set to either a server or socket (anything with an
underlying _handle member), or a {fd: <n>} object.
This will cause the server to accept connections on the specified handle,
but it is presumed that the file descriptor or handle has already been
bound to a port or domain socket.
Listening on a file descriptor is not supported on Windows.
msecs <Number>
callback <Function>
Sets the timeout value for sockets, and emits a 'timeout' event on the
Server object, passing the socket as an argument, if a timeout occurs.
If there is a 'timeout' event listener on the Server object, then it will be
called with the timed-out socket as an argument.
By default, the Server's timeout value is 2 minutes, and sockets are
destroyed automatically if they time out. However, if you assign a callback
to the Server's 'timeout' event, then you are responsible for handling
socket timeouts.
Returns server.
server.timeout#
<Number> Default = 120000 (2 minutes)
The number of milliseconds of inactivity before a socket is presumed to
have timed out.
Event: 'finish'#
function () { }
Emitted when the response has been sent. More specifically, this event is
emitted when the last segment of the response headers and body have
been handed off to the operating system for transmission over the
network. It does not imply that the client has received anything yet.
After this event, no more events will be emitted on the response object.
response.addTrailers(headers)#
This method adds HTTP trailing headers (a header but at the end of the
message) to the response.
Trailers will only be emitted if chunked encoding is used for the response;
if it is not (e.g., if the request was HTTP/1.0), they will be silently
discarded.
Note that HTTP requires the Trailer header to be sent if you intend to emit
trailers, with a list of the header fields in its value. E.g.,
response.writeHead(200, { 'Content-Type': 'text/plain',
'Trailer': 'Content-MD5' });
response.write(fileData);
response.addTrailers({'Content-MD5':
'7895bf4b8828b55ceaf47747b4bca667'});
response.end();
Attempting to set a header field name or value that contains invalid
characters will result in a TypeError being thrown.
response.end([data][, encoding][, callback])#
This method signals to the server that all of the response headers and
body have been sent; the server should consider this message complete.
The method response.end() MUST be called on each response.
If data is specified, it is equivalent to calling response.write(data,
encoding) followed by response.end(callback).
response.setHeader(name, value)#
Sets a single header value for implicit headers. If this header already
exists in the to-be-sent headers, its value will be replaced. Use an
array of strings here to send multiple headers with the same name.
Example:
response.setHeader('Content-Type', 'text/html');
or
response.setHeader('Set-Cookie', ['type=ninja',
'language=javascript']);
Attempting to set a header field name or value that contains invalid
characters will result in a TypeError being thrown.
When headers have been set with response.setHeader(), they will be
merged with any headers passed to response.writeHead(), with the
headers passed to response.writeHead() given precedence.
// returns content-type = text/plain
const server = http.createServer((req,res) => {
res.setHeader('Content-Type', 'text/html');
res.setHeader('X-Foo', 'bar');
res.writeHead(200, {'Content-Type': 'text/plain'});
res.end('ok');
});
response.setTimeout(msecs, callback)#
msecs <Number>
callback <Function>
response.statusCode#
When using implicit headers (not calling response.writeHead() explicitly),
this property controls the status code that will be sent to the client when
the headers get flushed.
Example:
response.statusCode = 404;
After the response header has been sent to the client, this property
indicates the
status code which was sent out.
response.statusMessage#
When using implicit headers (not calling response.writeHead() explicitly),
this property controls the status message that will be sent to the client
when the headers get flushed. If this is left as undefined then the standard
message for the status code will be used.
Example:
response.statusMessage = 'Not found';
After the response header has been sent to the client, this property
indicates the
status message which was sent out.
response.write(chunk[, encoding][, callback])#
If this method is called and response.writeHead() has not been called, it
will switch to implicit header mode and flush the implicit headers.
This sends a chunk of the response body. This method may be called
multiple times to provide successive parts of the body.
chunk can be a string or a buffer. If chunk is a string, the second
parameter specifies how to encode it into a byte stream. By default
the encoding is'utf8'. The last parameter callback will be called when this
chunk of data is flushed.
Note: This is the raw HTTP body and has nothing to do with higher-level
multi-part body encodings that may be used.
The first time response.write() is called, it will send the buffered header
information and the first body to the client. The second
time response.write() is called, Node.js assumes you're going to be
streaming data, and sends that separately. That is, the response is
buffered up to the first chunk of body.
Returns true if the entire data was flushed successfully to the kernel
buffer. Returns false if all or part of the data was queued in user
memory. 'drain' will be emitted when the buffer is free again.
response.writeContinue()#
Sends a HTTP/1.1 100 Continue message to the client, indicating that the
request body should be sent. See the 'checkContinue' event on Server.
response.writeHead(statusCode[, statusMessage][, headers])#
Sends a response header to the request. The status code is a 3-digit HTTP
status code, like 404. The last argument, headers, are the response
headers. Optionally one can give a human-readable statusMessage as the
second argument.
Example:
var body = 'hello world';
response.writeHead(200, {
'Content-Length': body.length,
'Content-Type': 'text/plain' });
This method must only be called once on a message and it must be called
before response.end() is called.
If you call response.write() or response.end() before calling this, the
implicit/mutable headers will be calculated and this function will be
called for you.
When headers have been set with response.setHeader(), they will be
merged with any headers passed to response.writeHead(), with the
headers passed to response.writeHead() given precedence.
// returns content-type = text/plain
const server = http.createServer((req, res) => {
res.setHeader('Content-Type', 'text/html');
res.setHeader('X-Foo', 'bar');
res.writeHead(200, {'Content-Type': 'text/plain'});
res.end('ok');
});
Note that Content-Length is given in bytes, not characters. The above
example works because the string 'hello world' contains only single-byte
characters. If the body contains multi-byte characters,
then Buffer.byteLength() should be used to determine the number of bytes
in a given encoding. Node.js does not check whether the Content-Length
header and the length of the body that has been transmitted are equal.
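For instance, with a multi-byte string the character count and the byte count differ, so the header value should come from Buffer.byteLength() (the sample string below is illustrative):

```javascript
const body = 'héllo';  // 'é' encodes to two bytes in UTF-8

console.log(body.length);                     // 5 (characters)
console.log(Buffer.byteLength(body, 'utf8')); // 6 (bytes)

// Correct header for this body:
// 'Content-Length': Buffer.byteLength(body)
```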
Attempting to set a header field name or value that contains invalid
characters will result in a TypeError being thrown.
Class: http.IncomingMessage#
An IncomingMessage object is created
by http.Server or http.ClientRequest and passed as the first argument to
the 'request' and 'response' events, respectively. It may be used to access
response status, headers and data.
It implements the Readable Stream interface, as well as the following
additional events, methods, and properties.
Event: 'close'#
function () { }
Indicates that the underlying connection was closed. Just like 'end', this
event occurs only once per response.
message.headers#
The request/response headers object.
Key-value pairs of header names and values. Header names are lowercased. Example:
// Prints something like:
//
// { 'user-agent': 'curl/7.22.0',
// host: '127.0.0.1:8000',
// accept: '*/*' }
console.log(request.headers);
Duplicates in raw headers are handled in the following ways, depending
on the header name:
Duplicates of age, authorization, content-length, content-type, etag,
expires, from, host, if-modified-since, if-unmodified-since, last-modified,
location, max-forwards, proxy-authorization, referer, retry-after,
or user-agent are discarded.
set-cookie is always an array. Duplicates are added to the array.
For all other headers, the values are joined together with ', '.
message.httpVersion#
In the case of a server request, the HTTP version sent by the client. In the
case of a client response, the HTTP version of the connected-to server.
Probably either '1.1' or '1.0'.
Also message.httpVersionMajor is the first integer
and message.httpVersionMinor is the second.
message.method#
Only valid for requests obtained from http.Server.
The request method as a string. Read only. Example: 'GET', 'DELETE'.
message.rawHeaders#
The raw request/response headers list exactly as they were received.
Note that the keys and values are in the same list. It is not a list of tuples.
So, the even-numbered offsets are key values, and the odd-numbered
offsets are the associated values.
Header names are not lowercased, and duplicates are not merged.
// Prints something like:
//
// [ 'user-agent',
// 'this is invalid because there can be only one',
// 'User-Agent',
// 'curl/7.22.0',
// 'Host',
// '127.0.0.1:8000',
// 'ACCEPT',
// '*/*' ]
console.log(request.rawHeaders);
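Because names sit at even offsets and values at odd offsets, the flat list can be turned into name/value pairs with a simple loop; the sample array below is illustrative:

```javascript
// Sample flat list in the rawHeaders layout described above:
// even offsets are header names, odd offsets are the values.
const rawHeaders = ['Host', '127.0.0.1:8000', 'ACCEPT', '*/*'];

const pairs = [];
for (let i = 0; i < rawHeaders.length; i += 2) {
  pairs.push([rawHeaders[i], rawHeaders[i + 1]]);
}

console.log(pairs);
// [ [ 'Host', '127.0.0.1:8000' ], [ 'ACCEPT', '*/*' ] ]
```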
message.rawTrailers#
The raw request/response trailer keys and values exactly as they were
received. Only populated at the 'end' event.
message.setTimeout(msecs, callback)#
msecs <Number>
callback <Function>
Calls message.connection.setTimeout(msecs, callback).
message.url#
Only valid for requests obtained from http.Server.
The request URL string. This contains only the URL that is present in the
actual HTTP request. For a request for '/status?name=ryan',
url.parse(request.url, true) returns an object like:
{
href: '/status?name=ryan',
search: '?name=ryan',
query: { name: 'ryan' },
pathname: '/status'
}
http.METHODS#
<Array>
A list of the HTTP methods that are supported by the parser.
http.STATUS_CODES#
<Object>
A collection of all the standard HTTP response status codes, and the short
description of each. For example, http.STATUS_CODES[404] === 'Not
Found'.
http.createClient([port][, host])#
Stability: 0 - Deprecated: Use http.request() instead.
Constructs a new HTTP client. port and host refer to the server to be
connected to.
http.createServer([requestListener])#
Returns a new instance of http.Server.
The requestListener is a function which is automatically added to
the 'request' event.
http.get(options[, callback])#
Since most requests are GET requests without bodies, Node.js provides
this convenience method. The only difference between this method
and http.request() is that it sets the method to GET and
calls req.end() automatically.
http.request(options[, callback])#
options can be an object or a string. If options is a string, it is
automatically parsed with url.parse(). Among the options:
path: Request path. Defaults to '/'. Should include the query string, if any,
e.g. '/index.html?page=12'. An exception is thrown when the request
path contains illegal characters. Currently, only spaces are rejected,
but that may change in the future.
headers: An object containing request headers.
The optional callback parameter will be added as a one time listener for
the 'response' event.
http.request() returns an instance of the http.ClientRequest class.
The ClientRequest instance is a writable stream. If one needs to upload a
file with a POST request, then write to the ClientRequest object.
Example:
var postData = querystring.stringify({
'msg' : 'Hello World!'
});
var options = {
hostname: 'www.google.com',
port: 80,
path: '/upload',
method: 'POST',
headers: {
'Content-Type': 'application/x-www-form-urlencoded',
'Content-Length': Buffer.byteLength(postData)
}
};
var req = http.request(options, (res) => {
console.log(`STATUS: ${res.statusCode}`);
console.log(`HEADERS: ${JSON.stringify(res.headers)}`);
res.setEncoding('utf8');
res.on('data', (chunk) => {
console.log(`BODY: ${chunk}`);
});
res.on('end', () => {
console.log('No more data in response.');
});
});
req.on('error', (e) => {
console.log(`problem with request: ${e.message}`);
});
// write data to request body
req.write(postData);
req.end();
Note that in the example req.end() was called. With http.request() one
must always call req.end() to signify the end of the request, even if there
is no data being written to the request body.
If any error is encountered during the request (be that with DNS
resolution, TCP level errors, or actual HTTP parse errors) an 'error' event is
emitted on the returned request object. As with all 'error' events, if no
listeners are registered the error will be thrown.
There are a few special headers that should be noted.
HTTPS#
Stability: 2 - Stable
HTTPS is the HTTP protocol over TLS/SSL. In Node.js this is implemented
as a separate module.
Class: https.Agent#
An Agent object for HTTPS similar to http.Agent. See https.request() for
more information.
Class: https.Server#
This class is a subclass of tls.Server and emits the same events
as http.Server. See http.Server for more information.
server.setTimeout(msecs, callback)#
See http.Server#setTimeout().
server.timeout#
See http.Server#timeout.
https.createServer(options[, requestListener])#
Returns a new HTTPS web server object. The options are similar to
tls.createServer(). The requestListener is a function which is
automatically added to the 'request' event.
server.listen(handle[, callback])#
server.listen(path[, callback])#
server.listen(port[, host][, backlog][, callback])#
See http.listen() for details.
https.get(options, callback)#
Like http.get() but for HTTPS.
options can be an object or a string. If options is a string, it is
automatically parsed with url.parse().
Example:
const https = require('https');
https.get('https://encrypted.google.com/', (res) => {
console.log('statusCode: ', res.statusCode);
console.log('headers: ', res.headers);
res.on('data', (d) => {
process.stdout.write(d);
});
}).on('error', (e) => {
console.error(e);
});
https.globalAgent#
Global instance of https.Agent for all HTTPS client requests.
https.request(options, callback)#
Makes a request to a secure web server.
options can be an object or a string. If options is a string, it is
automatically parsed with url.parse().
All options from http.request() are valid.
Example:
const https = require('https');
var options = {
hostname: 'encrypted.google.com',
port: 443,
path: '/',
method: 'GET'
};
var req = https.request(options, (res) => {
console.log('statusCode: ', res.statusCode);
console.log('headers: ', res.headers);
res.on('data', (d) => {
process.stdout.write(d);
});
});
req.end();
req.on('error', (e) => {
console.error(e);
});
Modules#
Stability: 3 - Locked
Node.js has a simple module loading system. In Node.js, files and modules
are in one-to-one correspondence. As an example, foo.js loads the
module circle.js in the same directory.
The contents of foo.js:
const circle = require('./circle.js');
console.log( `The area of a circle of radius 4 is ${circle.area(4)}`);
The contents of circle.js:
const PI = Math.PI;
exports.area = (r) => PI * r * r;
exports.circumference = (r) => 2 * PI * r;
The module circle.js has exported the
functions area() and circumference(). To add functions and objects to the
root of your module, you can add them to the special exports object.
Variables local to the module will be private, because the module is
wrapped in a function by Node.js (see module wrapper). In this example,
the variable PI is private to circle.js.
If you want the root of your module's export to be a function (such as a
constructor) or if you want to export a complete object in one assignment
instead of building it one property at a time, assign it
to module.exports instead of exports.
Cycles#
When there are circular require() calls, a module might not have finished
executing by the time it is returned. Consider this situation:
a.js:
console.log('a starting');
exports.done = false;
const b = require('./b.js');
console.log('in a, b.done = %j', b.done);
exports.done = true;
console.log('a done');
b.js:
console.log('b starting');
exports.done = false;
const a = require('./a.js');
console.log('in b, a.done = %j', a.done);
exports.done = true;
console.log('b done');
main.js:
console.log('main starting');
const a = require('./a.js');
const b = require('./b.js');
console.log('in main, a.done=%j, b.done=%j', a.done, b.done);
When main.js loads a.js, then a.js in turn loads b.js. At that point, b.js tries
to load a.js. In order to prevent an infinite loop, an unfinished copy of
the a.js exports object is returned to the b.js module. b.js then finishes
loading, and its exports object is provided to the a.js module.
By the time main.js has loaded both modules, they're both finished. The
output of this program would thus be:
$ node main.js
main starting
a starting
b starting
in b, a.done = false
b done
in a, b.done = true
a done
in main, a.done=true, b.done=true
If you have cyclic module dependencies in your program, make sure to
plan accordingly.
File Modules#
If the exact filename is not found, then Node.js will attempt to load the
required filename with the added extensions: .js, .json, and finally .node.
.js files are interpreted as JavaScript text files, and .json files are parsed as
JSON text files. .node files are interpreted as compiled addon modules
loaded with dlopen.
A required module prefixed with '/' is an absolute path to the file. For
example, require('/home/marco/foo.js') will load the file
at/home/marco/foo.js.
A required module prefixed with './' is relative to the file calling require().
That is, circle.js must be in the same directory
as foo.js for require('./circle') to find it.
Without a leading '/', './', or '../' to indicate a file, the module must either
be a core module or be loaded from a node_modules folder.
If the given path does not exist, require() will throw an Error with
its code property set to 'MODULE_NOT_FOUND'.
Folders as Modules#
It is convenient to organize programs and libraries into self-contained
directories, and then provide a single entry point to that library. There are
three ways in which a folder may be passed to require() as an argument.
The first is to create a package.json file in the root of the folder, which
specifies a main module. An example package.json file might look like
this:
{ "name" : "some-library",
"main" : "./lib/some-library.js" }
If this was in a folder at ./some-library, then require('./some-library') would
attempt to load ./some-library/lib/some-library.js.
This is the extent of Node.js's awareness of package.json files.
Note: If the file specified by the "main" entry of package.json is missing
and can not be resolved, Node.js will report the entire module as missing
with the default error:
Error: Cannot find module 'some-library'
If there is no package.json file present in the directory, then Node.js will
attempt to load an index.js or index.node file out of that directory. For
example, if there was no package.json file in the above example,
then require('./some-library') would attempt to load:
./some-library/index.js
./some-library/index.node
Loading from node_modules Folders#
If the module identifier passed to require() is not a core module, and does
not begin with '/', '../', or './', then Node.js starts at the directory of the
current module, appends /node_modules, and attempts to load the
module from that location. If it is not found there, it moves to the parent
directory, and so on, until the root of the file system is reached.
For example, if the file at '/home/ry/projects/foo.js' called
require('bar.js'), then Node.js would look in the following locations, in
this order:
/home/ry/projects/node_modules/bar.js
/home/ry/node_modules/bar.js
/home/node_modules/bar.js
/node_modules/bar.js
Loading from the global folders#
Additionally, Node.js will search in the following locations:
1: $HOME/.node_modules
2: $HOME/.node_libraries
3: $PREFIX/lib/node
The module and exports objects that the implementor can use
to export values from the module.
module.exports#
<Object>
The module.exports object is created by the Module system. Assigning a
new object to module.exports lets a module export a complete object in
one assignment. For example, suppose we were making a module
called a.js:
const EventEmitter = require('events');
module.exports = new EventEmitter();
// Do some work, and after some time emit
// the 'ready' event from the module itself.
setTimeout(() => {
module.exports.emit('ready');
}, 1000);
Then in another file we could do
const a = require('./a');
a.on('ready', () => {
console.log('module a is ready');
});
Note that assignment to module.exports must be done immediately. It
cannot be done in any callbacks. This does not work:
x.js:
setTimeout(() => {
module.exports = { a: 'hello' };
}, 0);
y.js:
const x = require('./x');
console.log(x.a);
exports alias#
The exports variable available within a module starts as a reference
to module.exports. As with any variable, if a new value is assigned to it,
it is no longer bound to the previous value.
As a guideline, if the relationship
between exports and module.exports seems like magic to you,
ignore exports and only use module.exports.
module.filename#
<String>
The fully resolved filename to the module.
module.id#
<String>
The identifier for the module. Typically this is the fully resolved filename.
module.loaded#
<Boolean>
Whether or not the module is done loading, or is in the process of loading.
module.require(id)#
id <String>