author     James M Snell <jasnell@gmail.com>   2015-06-23 20:42:49 -0700
committer  James M Snell <jasnell@gmail.com>   2015-08-05 08:44:55 -0700
commit     936c9ffb0fa8e023b40e20882c16aaaef0ba4178 (patch)
tree       f92d3b160166f4726ffd40dec487a15b3c9bf09a
parent     d88194d261ddf7f0b4b821158244ad0b4d0f1279 (diff)
doc: multiple documentation updates cherry picked from v0.12
* doc: improve http.abort description
* doc: mention that mode is ignored if file exists
* docs: Fix default options for fs.createWriteStream()
* Documentation update about Buffer initialization
* doc: add a note about readable in flowing mode
* doc: Document http.request protocol option
* doc, comments: Grammar and spelling fixes
* updated documentation for fs.createReadStream
* Update child_process.markdown, spelling
* doc: Clarified read method with specified size argument.
* docs: events clarify emitter.listener() behavior
* doc: two minor stream doc improvements
* doc: clarify Readable._read and Readable.push
* doc: stream.unshift does not reset reading state
* doc: readable event clarification
* doc: additional refinement to readable event

Reviewed-By: James M Snell <jasnell@gmail.com>
Reviewed-By: Ben Noordhuis <ben@strongloop.com>
PR-URL: https://github.com/nodejs/io.js/pull/2302
-rw-r--r--  doc/api/buffer.markdown          6
-rw-r--r--  doc/api/child_process.markdown   2
-rw-r--r--  doc/api/cluster.markdown         4
-rw-r--r--  doc/api/dns.markdown             4
-rw-r--r--  doc/api/events.markdown          2
-rw-r--r--  doc/api/fs.markdown             11
-rw-r--r--  doc/api/http.markdown            4
-rw-r--r--  doc/api/stream.markdown         96
-rw-r--r--  lib/_http_client.js              4
-rw-r--r--  lib/url.js                       4
-rw-r--r--  src/node.cc                      2
-rw-r--r--  src/node_object_wrap.h           2
12 files changed, 98 insertions, 43 deletions
diff --git a/doc/api/buffer.markdown b/doc/api/buffer.markdown
index 92b7f2ba93..94f3e3d8e1 100644
--- a/doc/api/buffer.markdown
+++ b/doc/api/buffer.markdown
@@ -43,7 +43,7 @@ Creating a typed array from a `Buffer` works with the following caveats:
2. The buffer's memory is interpreted as an array, not a byte array. That is,
`new Uint32Array(new Buffer([1,2,3,4]))` creates a 4-element `Uint32Array`
- with elements `[1,2,3,4]`, not an `Uint32Array` with a single element
+ with elements `[1,2,3,4]`, not a `Uint32Array` with a single element
`[0x1020304]` or `[0x4030201]`.
NOTE: Node.js v0.8 simply retained a reference to the buffer in `array.buffer`
@@ -67,6 +67,10 @@ Allocates a new buffer of `size` bytes. `size` must be less than
2,147,483,648 bytes (2 GB) on 64 bits architectures,
otherwise a `RangeError` is thrown.
+Unlike `ArrayBuffers`, the underlying memory for buffers is not initialized. So
+the contents of a newly created `Buffer` are unknown. Use `buf.fill(0)` to
+initialize a buffer to zeroes.
+
### new Buffer(array)
* `array` Array
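A minimal sketch of the zero-fill guidance added above, using the `Buffer(size)` constructor this release documents (the size is arbitrary):

```javascript
// Allocate 1024 bytes; the contents are whatever happened to be in memory.
var buf = new Buffer(1024);

// Zero the buffer before use so no stale data leaks out.
buf.fill(0);

console.log(buf[0]); // 0
```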
diff --git a/doc/api/child_process.markdown b/doc/api/child_process.markdown
index 3d6e5f5fe2..4acd61bfa6 100644
--- a/doc/api/child_process.markdown
+++ b/doc/api/child_process.markdown
@@ -279,7 +279,7 @@ Here is an example of sending a server:
child.send('server', server);
});
-And the child would the receive the server object as:
+And the child would then receive the server object as:
process.on('message', function(m, server) {
if (m === 'server') {
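For context, a self-contained sketch of the handle-passing pattern the corrected sentence refers to (the file names `parent.js`/`child.js` and the port are placeholders):

```javascript
// parent.js
var child_process = require('child_process');
var net = require('net');

var child = child_process.fork('child.js');
var server = net.createServer();

server.listen(1337, function() {
  // Send the listening server handle to the child.
  child.send('server', server);
});

// child.js
process.on('message', function(m, server) {
  if (m === 'server') {
    // The child now accepts connections on the shared handle.
    server.on('connection', function(socket) {
      socket.end('handled by child\n');
    });
  }
});
```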
diff --git a/doc/api/cluster.markdown b/doc/api/cluster.markdown
index b7e76fcb8f..abf05bf9f4 100644
--- a/doc/api/cluster.markdown
+++ b/doc/api/cluster.markdown
@@ -121,7 +121,7 @@ values are `"rr"` and `"none"`.
## cluster.settings
* {Object}
- * `execArgv` {Array} list of string arguments passed to the io.js executable.
+ * `execArgv` {Array} list of string arguments passed to the io.js executable.
(Default=`process.execArgv`)
* `exec` {String} file path to worker file. (Default=`process.argv[1]`)
* `args` {Array} string arguments passed to worker.
@@ -613,7 +613,7 @@ It is not emitted in the worker.
### Event: 'disconnect'
-Similar to the `cluster.on('disconnect')` event, but specfic to this worker.
+Similar to the `cluster.on('disconnect')` event, but specific to this worker.
cluster.fork().on('disconnect', function() {
// Worker has disconnected
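A hedged sketch of how the `cluster.settings` fields listed above get populated, and of the per-worker `'disconnect'` event (the worker file name and arguments are illustrative):

```javascript
var cluster = require('cluster');

cluster.setupMaster({
  exec: 'worker.js',          // file path to the worker file
  args: ['--use', 'https']    // string arguments passed to the worker
});

// execArgv was not overridden, so it defaults to process.execArgv.
console.log(cluster.settings.execArgv);
console.log(cluster.settings.exec); // 'worker.js'

cluster.fork().on('disconnect', function() {
  // Worker has disconnected (the per-worker event described above).
});
```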
diff --git a/doc/api/dns.markdown b/doc/api/dns.markdown
index d8ed53e3fa..7c9f419ce0 100644
--- a/doc/api/dns.markdown
+++ b/doc/api/dns.markdown
@@ -85,7 +85,7 @@ All properties are optional. An example usage of options is shown below.
```
The callback has arguments `(err, address, family)`. `address` is a string
-representation of a IP v4 or v6 address. `family` is either the integer 4 or 6
+representation of an IP v4 or v6 address. `family` is either the integer 4 or 6
and denotes the family of `address` (not necessarily the value initially passed
to `lookup`).
@@ -163,7 +163,7 @@ attribute (e.g. `[{'priority': 10, 'exchange': 'mx.example.com'},...]`).
## dns.resolveTxt(hostname, callback)
The same as `dns.resolve()`, but only for text queries (`TXT` records).
-`addresses` is an 2-d array of the text records available for `hostname` (e.g.,
+`addresses` is a 2-d array of the text records available for `hostname` (e.g.,
`[ ['v=spf1 ip4:0.0.0.0 ', '~all' ] ]`). Each sub-array contains TXT chunks of
one record. Depending on the use case, these could be either joined together or
treated separately.
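A small sketch of consuming the 2-d array shape described above (the domain is a placeholder):

```javascript
var dns = require('dns');

dns.resolveTxt('example.com', function(err, records) {
  if (err) throw err;
  records.forEach(function(chunks) {
    // Each sub-array holds the chunks of one TXT record; join to reassemble it.
    console.log(chunks.join(''));
  });
});
```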
diff --git a/doc/api/events.markdown b/doc/api/events.markdown
index fbc04a9623..a51905ab4d 100644
--- a/doc/api/events.markdown
+++ b/doc/api/events.markdown
@@ -122,7 +122,7 @@ Note that `emitter.setMaxListeners(n)` still has precedence over
### emitter.listeners(event)
-Returns an array of listeners for the specified event.
+Returns a copy of the array of listeners for the specified event.
server.on('connection', function (stream) {
console.log('someone connected!');
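A quick sketch of what "copy" means here, assuming a plain `EventEmitter`: mutating the returned array does not affect the emitter's own listener list.

```javascript
var EventEmitter = require('events').EventEmitter;

var server = new EventEmitter();
server.on('connection', function(stream) {
  console.log('someone connected!');
});

var listeners = server.listeners('connection');
listeners.length = 0; // empties only the copy

console.log(server.listeners('connection').length); // still 1
```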
diff --git a/doc/api/fs.markdown b/doc/api/fs.markdown
index 5f96b10763..ebc2046d76 100644
--- a/doc/api/fs.markdown
+++ b/doc/api/fs.markdown
@@ -801,6 +801,10 @@ on Unix systems, it never was.
Returns a new ReadStream object (See `Readable Stream`).
+Be aware that, unlike the default value set for `highWaterMark` on a
+readable stream (16 kb), the stream returned by this method has a
+default value of 64 kb for the same parameter.
+
`options` is an object or string with the following defaults:
{ flags: 'r',
@@ -823,6 +827,9 @@ there's no file descriptor leak. If `autoClose` is set to true (default
behavior), on `error` or `end` the file descriptor will be closed
automatically.
+`mode` sets the file mode (permission and sticky bits), but only if the
+file was created.
+
An example to read the last 10 bytes of a file which is 100 bytes long:
fs.createReadStream('sample.txt', {start: 90, end: 99});
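An illustrative sketch of overriding the 64 kb default mentioned above (the file name and chunk size are arbitrary):

```javascript
var fs = require('fs');

// Ask for smaller internal buffering than the 64 kb fs default.
var rr = fs.createReadStream('sample.txt', { highWaterMark: 16 * 1024 });

rr.on('data', function(chunk) {
  console.log('read %d bytes', chunk.length); // chunks of at most 16 kb
});
```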
@@ -847,14 +854,14 @@ Returns a new WriteStream object (See `Writable Stream`).
`options` is an object or string with the following defaults:
{ flags: 'w',
- encoding: null,
+ defaultEncoding: 'utf8',
fd: null,
mode: 0o666 }
`options` may also include a `start` option to allow writing data at
some position past the beginning of the file. Modifying a file rather
than replacing it may require a `flags` mode of `r+` rather than the
-default mode `w`. The `encoding` can be `'utf8'`, `'ascii'`, `binary`,
+default mode `w`. The `defaultEncoding` can be `'utf8'`, `'ascii'`, `binary`,
or `'base64'`.
Like `ReadStream` above, if `fd` is specified, `WriteStream` will ignore the
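A minimal sketch of the renamed `defaultEncoding` option in use (the file name is a placeholder); it applies when string chunks are written without an explicit encoding:

```javascript
var fs = require('fs');

var ws = fs.createWriteStream('out.txt', {
  flags: 'w',
  defaultEncoding: 'utf8',   // used for the string chunk written below
  mode: 0o666
});

ws.write('héllo\n'); // encoded as UTF-8 because of defaultEncoding
ws.end();
```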
diff --git a/doc/api/http.markdown b/doc/api/http.markdown
index 5cf3b07a9c..966b304b56 100644
--- a/doc/api/http.markdown
+++ b/doc/api/http.markdown
@@ -462,6 +462,7 @@ automatically parsed with [url.parse()][].
Options:
+- `protocol`: Protocol to use. Defaults to `'http'`.
- `host`: A domain name or IP address of the server to issue the request to.
Defaults to `'localhost'`.
- `hostname`: Alias for `host`. To support `url.parse()` `hostname` is
@@ -911,7 +912,8 @@ is finished.
### request.abort()
-Aborts a request. (New since v0.3.8.)
+Marks the request as aborting. Calling this will cause remaining data
+in the response to be dropped and the socket to be destroyed.
### request.setTimeout(timeout[, callback])
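A hedged sketch tying the new `protocol` option and the clarified `abort()` behaviour together (host, port and path are placeholders; the options come from `url.parse()`, which fills in `protocol`):

```javascript
var http = require('http');
var url = require('url');

var options = url.parse('http://localhost:8080/');

var req = http.request(options, function(res) {
  res.once('data', function() {
    // Remaining response data is dropped and the socket is destroyed.
    req.abort();
  });
});

req.end();
```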
diff --git a/doc/api/stream.markdown b/doc/api/stream.markdown
index a7a78f229e..ffad1717f7 100644
--- a/doc/api/stream.markdown
+++ b/doc/api/stream.markdown
@@ -164,6 +164,34 @@ readable.on('readable', function() {
Once the internal buffer is drained, a `readable` event will fire
again when more data is available.
+The `readable` event is not emitted in the "flowing" mode with the
+sole exception of the last one, on end-of-stream.
+
+The 'readable' event indicates that the stream has new information:
+either new data is available or the end of the stream has been reached.
+In the former case, `.read()` will return that data. In the latter case,
+`.read()` will return null. For instance, in the following example, `foo.txt`
+is an empty file:
+
+```javascript
+var fs = require('fs');
+var rr = fs.createReadStream('foo.txt');
+rr.on('readable', function() {
+ console.log('readable:', rr.read());
+});
+rr.on('end', function() {
+ console.log('end');
+});
+```
+
+The output of running this script is:
+
+```
+bash-3.2$ node test.js
+readable: null
+end
+```
+
#### Event: 'data'
* `chunk` {Buffer | String} The chunk of data.
@@ -221,7 +249,9 @@ returns it. If there is no data available, then it will return
`null`.
If you pass in a `size` argument, then it will return that many
-bytes. If `size` bytes are not available, then it will return `null`.
+bytes. If `size` bytes are not available, then it will return `null`,
+unless we've ended, in which case it will return the data remaining
+in the buffer.
If you do not specify a `size` argument, then it will return all the
data in the internal buffer.
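A small sketch of the `size` behaviour described above, using a hand-fed `Readable` (the pushed data is arbitrary):

```javascript
var Readable = require('stream').Readable;

var rr = new Readable();
rr._read = function() {}; // data is pushed by hand below

rr.push('abc');
rr.push(null); // signal end-of-stream

rr.on('readable', function() {
  // Only 3 bytes are buffered but 10 were requested; because the stream
  // has ended, read() returns the remaining data instead of null.
  console.log(rr.read(10)); // <Buffer 61 62 63>
});
```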
@@ -243,6 +273,9 @@ readable.on('readable', function() {
If this method returns a data chunk, then it will also trigger the
emission of a [`'data'` event][].
+Note that calling `readable.read([size])` after the `end` event has been
+triggered will return `null`. No runtime error will be raised.
+
#### readable.setEncoding(encoding)
* `encoding` {String} The encoding to use.
@@ -414,6 +447,9 @@ parser, which needs to "un-consume" some data that it has
optimistically pulled out of the source, so that the stream can be
passed on to some other party.
+Note that `stream.unshift(chunk)` cannot be called after the `end` event
+has been triggered; a runtime error will be raised.
+
If you find that you must often call `stream.unshift(chunk)` in your
programs, consider implementing a [Transform][] stream instead. (See API
for Stream Implementors, below.)
@@ -452,6 +488,13 @@ function parseHeader(stream, callback) {
}
}
```
+Note that, unlike `stream.push(chunk)`, `stream.unshift(chunk)` will not
+end the reading process by resetting the internal reading state of the
+stream. This can cause unexpected results if `unshift` is called during a
+read (i.e. from within a `_read` implementation on a custom stream). Following
+the call to `unshift` with an immediate `stream.push('')` will reset the
+reading state appropriately, however it is best to simply avoid calling
+`unshift` while in the process of performing a read.
#### readable.wrap(stream)
@@ -883,6 +926,10 @@ SimpleProtocol.prototype._read = function(n) {
// back into the read queue so that our consumer will see it.
var b = chunk.slice(split);
this.unshift(b);
+ // calling unshift by itself does not reset the reading state
+ // of the stream; since we're inside _read, doing an additional
+ // push('') will reset the state appropriately.
+ this.push('');
// and let them know that we are done parsing the header.
this.emit('header', this.header);
@@ -922,24 +969,22 @@ initialized.
* `size` {Number} Number of bytes to read asynchronously
-Note: **Implement this function, but do NOT call it directly.**
+Note: **Implement this method, but do NOT call it directly.**
-This function should NOT be called directly. It should be implemented
-by child classes, and only called by the internal Readable class
-methods.
+This method is prefixed with an underscore because it is internal to the
+class that defines it and should only be called by the internal Readable
+class methods. All Readable stream implementations must provide a _read
+method to fetch data from the underlying resource.
-All Readable stream implementations must provide a `_read` method to
-fetch data from the underlying resource.
-
-This method is prefixed with an underscore because it is internal to
-the class that defines it, and should not be called directly by user
-programs. However, you **are** expected to override this method in
-your own extension classes.
+When _read is called, if data is available from the resource, `_read` should
+start pushing that data into the read queue by calling `this.push(dataChunk)`.
+`_read` should continue reading from the resource and pushing data until push
+returns false, at which point it should stop reading from the resource. Only
+when _read is called again after it has stopped should it start reading
+more data from the resource and pushing that data onto the queue.
-When data is available, put it into the read queue by calling
-`readable.push(chunk)`. If `push` returns false, then you should stop
-reading. When `_read` is called again, you should start pushing more
-data.
+Note: once the `_read()` method is called, it will not be called again until
+the `push` method is called.
The `size` argument is advisory. Implementations where a "read" is a
single call that returns data can use this to know how much data to
@@ -955,19 +1000,16 @@ becomes available. There is no need, for example to "wait" until
Buffer encoding, such as `'utf8'` or `'ascii'`
* return {Boolean} Whether or not more pushes should be performed
-Note: **This function should be called by Readable implementors, NOT
+Note: **This method should be called by Readable implementors, NOT
by consumers of Readable streams.**
-The `_read()` function will not be called again until at least one
-`push(chunk)` call is made.
-
-The `Readable` class works by putting data into a read queue to be
-pulled out later by calling the `read()` method when the `'readable'`
-event fires.
+If a value other than null is passed, the `push()` method adds a chunk of data
+into the queue for subsequent stream processors to consume. If `null` is
+passed, it signals the end of the stream (EOF), after which no more data
+can be written.
-The `push()` method will explicitly insert some data into the read
-queue. If it is called with `null` then it will signal the end of the
-data (EOF).
+The data added with `push` can be pulled out by calling the `read()` method
+when the `'readable'` event fires.
This API is designed to be as flexible as possible. For example,
you may be wrapping a lower-level source which has some sort of
@@ -1315,7 +1357,7 @@ for examples and testing, but there are occasionally use cases where
it can come in handy as a building block for novel sorts of streams.
-## Simplified Constructor API
+## Simplified Constructor API
<!--type=misc-->
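As a compact sketch of the `_read`/`push` contract described in the rewritten text above (the class name and data are made up), written in the same constructor style as the `SimpleProtocol` example:

```javascript
var Readable = require('stream').Readable;
var util = require('util');

function CounterStream(limit) {
  Readable.call(this);
  this._n = 0;
  this._limit = limit;
}
util.inherits(CounterStream, Readable);

CounterStream.prototype._read = function() {
  // Keep pushing until push() returns false or the data runs out;
  // _read will be called again when the consumer wants more.
  var more = true;
  while (more && this._n < this._limit) {
    more = this.push(this._n++ + '\n');
  }
  if (this._n >= this._limit) {
    this.push(null); // EOF
  }
};

new CounterStream(3).pipe(process.stdout); // prints 0, 1 and 2
```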
diff --git a/lib/_http_client.js b/lib/_http_client.js
index a7d714f7e0..50d1052b44 100644
--- a/lib/_http_client.js
+++ b/lib/_http_client.js
@@ -359,7 +359,7 @@ function parserOnIncomingClient(res, shouldKeepAlive) {
var req = socket._httpMessage;
- // propogate "domain" setting...
+ // propagate "domain" setting...
if (req.domain && !res.domain) {
debug('setting "res.domain"');
res.domain = req.domain;
@@ -465,7 +465,7 @@ function tickOnSocket(req, socket) {
socket.parser = parser;
socket._httpMessage = req;
- // Setup "drain" propogation.
+ // Setup "drain" propagation.
httpSocketSetup(socket);
// Propagate headers limit from request object to parser
diff --git a/lib/url.js b/lib/url.js
index 55c5248e47..45155fee93 100644
--- a/lib/url.js
+++ b/lib/url.js
@@ -587,7 +587,7 @@ Url.prototype.resolveObject = function(relative) {
if (psychotic) {
result.hostname = result.host = srcPath.shift();
//occasionally the auth can get stuck only in host
- //this especialy happens in cases like
+ //this especially happens in cases like
//url.resolveObject('mailto:local1@domain1', 'local2@domain2')
var authInHost = result.host && result.host.indexOf('@') > 0 ?
result.host.split('@') : false;
@@ -669,7 +669,7 @@ Url.prototype.resolveObject = function(relative) {
result.hostname = result.host = isAbsolute ? '' :
srcPath.length ? srcPath.shift() : '';
//occasionally the auth can get stuck only in host
- //this especialy happens in cases like
+ //this especially happens in cases like
//url.resolveObject('mailto:local1@domain1', 'local2@domain2')
var authInHost = result.host && result.host.indexOf('@') > 0 ?
result.host.split('@') : false;
diff --git a/src/node.cc b/src/node.cc
index 7c0a80ec31..c272139e55 100644
--- a/src/node.cc
+++ b/src/node.cc
@@ -2184,7 +2184,7 @@ static void OnFatalError(const char* location, const char* message) {
NO_RETURN void FatalError(const char* location, const char* message) {
OnFatalError(location, message);
- // to supress compiler warning
+ // to suppress compiler warning
abort();
}
diff --git a/src/node_object_wrap.h b/src/node_object_wrap.h
index d00e1484b7..f022662227 100644
--- a/src/node_object_wrap.h
+++ b/src/node_object_wrap.h
@@ -80,7 +80,7 @@ class ObjectWrap {
* attached to detached state it will be freed. Be careful not to access
* the object after making this call as it might be gone!
* (A "weak reference" means an object that only has a
- * persistant handle.)
+ * persistent handle.)
*
* DO NOT CALL THIS FROM DESTRUCTOR
*/