author    Joyee Cheung <joyeec9h3@gmail.com>  2017-02-08 01:10:09 +0800
committer Joyee Cheung <joyeec9h3@gmail.com>  2017-02-17 23:50:41 +0800
commit    8926f1110dec9a9d16e54cc25b8bcf707c7f656e (patch)
tree      9df7f98da2abaddc12369531baed062ebdc9f942
parent    4bed6d6e93b48715281f0783d3695d360056f967 (diff)
doc: add benchmark/README.md and fix guide
* Write a new benchmark/README.md describing the benchmark directory
  layout and common API.
* Fix the moved benchmarking guide accordingly, add tips about how to
  get the help text from the benchmarking tools.

PR-URL: https://github.com/nodejs/node/pull/11237
Fixes: https://github.com/nodejs/node/issues/11190
Reviewed-By: James M Snell <jasnell@gmail.com>
Reviewed-By: Andreas Madsen <amwebdk@gmail.com>
-rw-r--r--  benchmark/README.md                           246
-rw-r--r--  doc/guides/writing-and-running-benchmarks.md   54
2 files changed, 278 insertions(+), 22 deletions(-)
diff --git a/benchmark/README.md b/benchmark/README.md
new file mode 100644
index 0000000000..6fd9a97bdf
--- /dev/null
+++ b/benchmark/README.md
@@ -0,0 +1,246 @@
+# Node.js Core Benchmarks
+
+This folder contains code and data used to measure performance
+of different Node.js implementations and different ways of
+writing JavaScript run by the built-in JavaScript engine.
+
+For a detailed guide on how to write and run benchmarks in this
+directory, see [the guide on benchmarks](../doc/guides/writing-and-running-benchmarks.md).
+
+## Table of Contents
+
+* [Benchmark directories](#benchmark-directories)
+* [Common API](#common-api)
+
+## Benchmark Directories
+
+<table>
+ <thead>
+ <tr>
+ <th>Directory</th>
+ <th>Purpose</th>
+ </tr>
+ </thead>
+ <tbody>
+ <tr>
+ <td>arrays</td>
+ <td>
+ Benchmarks for various operations on array-like objects,
+ including <code>Array</code>, <code>Buffer</code>, and typed arrays.
+ </td>
+ </tr>
+ <tr>
+ <td>assert</td>
+ <td>
+ Benchmarks for the <code>assert</code> subsystem.
+ </td>
+ </tr>
+ <tr>
+ <td>buffers</td>
+ <td>
+ Benchmarks for the <code>buffer</code> subsystem.
+ </td>
+ </tr>
+ <tr>
+ <td>child_process</td>
+ <td>
+ Benchmarks for the <code>child_process</code> subsystem.
+ </td>
+ </tr>
+ <tr>
+ <td>crypto</td>
+ <td>
+ Benchmarks for the <code>crypto</code> subsystem.
+ </td>
+ </tr>
+ <tr>
+ <td>dgram</td>
+ <td>
+ Benchmarks for the <code>dgram</code> subsystem.
+ </td>
+ </tr>
+ <tr>
+ <td>domain</td>
+ <td>
+ Benchmarks for the <code>domain</code> subsystem.
+ </td>
+ </tr>
+ <tr>
+ <td>es</td>
+ <td>
+ Benchmarks for various new ECMAScript features and their
+ pre-ES2015 counterparts.
+ </td>
+ </tr>
+ <tr>
+ <td>events</td>
+ <td>
+ Benchmarks for the <code>events</code> subsystem.
+ </td>
+ </tr>
+ <tr>
+ <td>fixtures</td>
+ <td>
+      Benchmark fixtures used in various benchmarks throughout
+      the benchmark suite.
+ </td>
+ </tr>
+ <tr>
+ <td>fs</td>
+ <td>
+ Benchmarks for the <code>fs</code> subsystem.
+ </td>
+ </tr>
+ <tr>
+ <td>http</td>
+ <td>
+ Benchmarks for the <code>http</code> subsystem.
+ </td>
+ </tr>
+ <tr>
+ <td>misc</td>
+ <td>
+ Miscellaneous benchmarks and benchmarks for shared
+ internal modules.
+ </td>
+ </tr>
+ <tr>
+ <td>module</td>
+ <td>
+ Benchmarks for the <code>module</code> subsystem.
+ </td>
+ </tr>
+ <tr>
+ <td>net</td>
+ <td>
+ Benchmarks for the <code>net</code> subsystem.
+ </td>
+ </tr>
+ <tr>
+ <td>path</td>
+ <td>
+ Benchmarks for the <code>path</code> subsystem.
+ </td>
+ </tr>
+ <tr>
+ <td>process</td>
+ <td>
+ Benchmarks for the <code>process</code> subsystem.
+ </td>
+ </tr>
+ <tr>
+ <td>querystring</td>
+ <td>
+ Benchmarks for the <code>querystring</code> subsystem.
+ </td>
+ </tr>
+ <tr>
+ <td>streams</td>
+ <td>
+ Benchmarks for the <code>streams</code> subsystem.
+ </td>
+ </tr>
+ <tr>
+ <td>string_decoder</td>
+ <td>
+ Benchmarks for the <code>string_decoder</code> subsystem.
+ </td>
+ </tr>
+ <tr>
+ <td>timers</td>
+ <td>
+ Benchmarks for the <code>timers</code> subsystem, including
+      <code>setTimeout</code>, <code>setInterval</code>, etc.
+ </td>
+ </tr>
+ <tr>
+ <td>tls</td>
+ <td>
+ Benchmarks for the <code>tls</code> subsystem.
+ </td>
+ </tr>
+ <tr>
+ <td>url</td>
+ <td>
+ Benchmarks for the <code>url</code> subsystem, including the legacy
+ <code>url</code> implementation and the WHATWG URL implementation.
+ </td>
+ </tr>
+ <tr>
+ <td>util</td>
+ <td>
+ Benchmarks for the <code>util</code> subsystem.
+ </td>
+ </tr>
+ <tr>
+ <td>vm</td>
+ <td>
+ Benchmarks for the <code>vm</code> subsystem.
+ </td>
+ </tr>
+ </tbody>
+</table>
+
+### Other Top-level Files
+
+The top-level files include common dependencies of the benchmarks
+and the tools for launching benchmarks and visualizing their output.
+The actual benchmark scripts should be placed in their corresponding
+directories.
+
+* `_benchmark_progress.js`: implements the progress bar displayed
+  when running `compare.js`.
+* `_cli.js`: parses the command line arguments passed to `compare.js`,
+  `run.js` and `scatter.js`.
+* `_cli.R`: parses the command line arguments passed to `compare.R`.
+* `_http-benchmarkers.js`: selects and runs external tools for benchmarking
+  the `http` subsystem.
+* `common.js`: see [Common API](#common-api).
+* `compare.js`: command line tool for comparing performance between different
+  Node.js binaries.
+* `compare.R`: R script for statistically analyzing the output of
+  `compare.js`.
+* `run.js`: command line tool for running one or more benchmark suites
+  (see the usage sketch after this list).
+* `scatter.js`: command line tool for comparing the performance
+  between different parameters in benchmark configurations,
+  for example to analyze the time complexity.
+* `scatter.R`: R script for visualizing the output of `scatter.js` with
+  scatter plots.
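+
+As a quick illustration of how these tools fit together, a benchmark group
+can be run on its own or fed through `scatter.js` (invocations follow the
+patterns shown in the benchmarking guide):
+
+```console
+$ node benchmark/run.js string_decoder
+$ node benchmark/scatter.js benchmark/string_decoder/string-decoder.js > scatter.csv
+```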
+
+## Common API
+
+The `common.js` module is used by benchmarks for consistency across repeated
+tasks. It has a number of helpful functions and properties for writing
+benchmarks.
+
+### createBenchmark(fn, configs[, options])
+
+See [the guide on writing benchmarks](../doc/guides/writing-and-running-benchmarks.md#basics-of-a-benchmark).
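+
+As a sketch only (the configuration values here are illustrative, not
+prescriptive), a minimal benchmark built on `createBenchmark` has this shape:
+
+```js
+'use strict';
+const common = require('../common.js');
+
+// Each combination of the config values below is benchmarked separately.
+const bench = common.createBenchmark(main, {
+  n: [1e6],
+  type: ['fast', 'slow']
+});
+
+function main(conf) {
+  bench.start();
+  for (let i = 0; i < conf.n; i++) {
+    // ... perform the operation being measured ...
+  }
+  bench.end(conf.n);
+}
+```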
+
+### default\_http\_benchmarker
+
+The default benchmarker used to run HTTP benchmarks.
+See [the guide on writing HTTP benchmarks](../doc/guides/writing-and-running-benchmarks.md#creating-an-http-benchmark).
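+
+For example, the default can be overridden for a single run by passing a
+`benchmarker` argument on the command line, as described in the guide:
+
+```console
+$ node benchmark/http/simple.js benchmarker=autocannon
+```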
+
+### PORT
+
+The default port used to run HTTP benchmarks.
+See [the guide on writing HTTP benchmarks](../doc/guides/writing-and-running-benchmarks.md#creating-an-http-benchmark).
+
+### sendResult(data)
+
+Used in special benchmarks that can't use `createBenchmark` and the object
+it returns to accomplish what they need. This function reports timing
+data to the parent process (usually created by running `compare.js`, `run.js` or
+`scatter.js`).
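+
+A sketch of manual timing with `sendResult` (the result fields shown,
+`name`, `conf`, `rate` and `time`, follow the pattern used by other
+benchmarks in this suite; the benchmark name is made up):
+
+```js
+'use strict';
+const common = require('../common.js');
+
+const start = process.hrtime();
+// ... run the operation being measured ...
+const elapsed = process.hrtime(start);
+const seconds = elapsed[0] + elapsed[1] / 1e9;
+
+// Report the timing data to the parent process, if any.
+common.sendResult({
+  name: 'misc/manual-timing',  // hypothetical benchmark name
+  conf: { n: 1 },
+  rate: 1 / seconds,
+  time: seconds
+});
+```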
+
+### v8ForceOptimization(method[, ...args])
+
+Force V8 to mark the `method` for optimization with the native function
+`%OptimizeFunctionOnNextCall()` and return the optimization status
+after that.
+
+It can be used to prevent the benchmark from getting disrupted by the optimizer
+kicking in halfway through. However, this could result in a less effective
+optimization. In general, only use it if you know what it actually does.
diff --git a/doc/guides/writing-and-running-benchmarks.md b/doc/guides/writing-and-running-benchmarks.md
index d123347075..a20f321b7c 100644
--- a/doc/guides/writing-and-running-benchmarks.md
+++ b/doc/guides/writing-and-running-benchmarks.md
@@ -1,26 +1,34 @@
-# Node.js core benchmark
+# How to Write and Run Benchmarks in Node.js Core
-This folder contains benchmarks to measure the performance of the Node.js APIs.
-
-## Table of Content
+## Table of Contents
* [Prerequisites](#prerequisites)
+ * [HTTP Benchmark Requirements](#http-benchmark-requirements)
+ * [Benchmark Analysis Requirements](#benchmark-analysis-requirements)
* [Running benchmarks](#running-benchmarks)
- * [Running individual benchmarks](#running-individual-benchmarks)
- * [Running all benchmarks](#running-all-benchmarks)
- * [Comparing node versions](#comparing-node-versions)
- * [Comparing parameters](#comparing-parameters)
+ * [Running individual benchmarks](#running-individual-benchmarks)
+ * [Running all benchmarks](#running-all-benchmarks)
+ * [Comparing Node.js versions](#comparing-nodejs-versions)
+ * [Comparing parameters](#comparing-parameters)
* [Creating a benchmark](#creating-a-benchmark)
+ * [Basics of a benchmark](#basics-of-a-benchmark)
+ * [Creating an HTTP benchmark](#creating-an-http-benchmark)
## Prerequisites
+Basic Unix tools are required for some benchmarks.
+[Git for Windows][git-for-windows] includes Git Bash and the necessary tools,
+which need to be included in the global Windows `PATH`.
+
+### HTTP Benchmark Requirements
+
Most of the HTTP benchmarks require a benchmarker to be installed; this can be
either [`wrk`][wrk] or [`autocannon`][autocannon].
-`Autocannon` is a Node script that can be installed using
-`npm install -g autocannon`. It will use the Node executable that is in the
+`Autocannon` is a Node.js script that can be installed using
+`npm install -g autocannon`. It will use the Node.js executable that is in the
path, hence if you want to compare two HTTP benchmark runs, make sure that the
-Node version in the path is not altered.
+Node.js version in the path is not altered.
`wrk` may be available through your preferred package manager. If not, you can
easily build it [from source][wrk] via `make`.
@@ -34,9 +42,7 @@ benchmarker to be used by providing it as an argument, e. g.:
`node benchmark/http/simple.js benchmarker=autocannon`
-Basic Unix tools are required for some benchmarks.
-[Git for Windows][git-for-windows] includes Git Bash and the necessary tools,
-which need to be included in the global Windows `PATH`.
+### Benchmark Analysis Requirements
To analyze the results, `R` should be installed. Check your package manager or
download it from https://www.r-project.org/.
@@ -50,7 +56,6 @@ install.packages("ggplot2")
install.packages("plyr")
```
-### CRAN Mirror Issues
In the event you get a message that you need to select a CRAN mirror first,
you can specify a mirror by adding the `repo` parameter.
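For example, using the `repo` parameter just described:
```R
install.packages("ggplot2", repo="http://cran.us.r-project.org")
```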
@@ -108,7 +113,8 @@ buffers/buffer-tostring.js n=10000000 len=1024 arg=false: 3783071.1678948295
### Running all benchmarks
Similar to running individual benchmarks, a group of benchmarks can be executed
-by using the `run.js` tool. Again this does not provide the statistical
+by using the `run.js` tool. To see how to use this script,
+run `node benchmark/run.js`. Again this does not provide the statistical
information to make any conclusions.
```console
@@ -135,18 +141,19 @@ It is possible to execute more groups by adding extra process arguments.
$ node benchmark/run.js arrays buffers
```
-### Comparing node versions
+### Comparing Node.js versions
-To compare the effect of a new node version use the `compare.js` tool. This
+To compare the effect of a new Node.js version use the `compare.js` tool. This
will run each benchmark multiple times, making it possible to calculate
-statistics on the performance measures.
+statistics on the performance measures. To see how to use this script,
+run `node benchmark/compare.js`.
To show how to check for a possible performance improvement, the
[#5134](https://github.com/nodejs/node/pull/5134) pull request will be used as
an example. This pull request _claims_ to improve the performance of the
`string_decoder` module.
-First build two versions of node, one from the master branch (here called
+First build two versions of Node.js, one from the master branch (here called
`./node-master`) and another with the pull request applied (here called
`./node-pr-5134`).
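With both binaries built, the comparison can then be run and analyzed along
these lines (a sketch based on the tool descriptions above):
```console
$ node benchmark/compare.js --old ./node-master --new ./node-pr-5134 string_decoder > compare.csv
$ cat compare.csv | Rscript benchmark/compare.R
```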
@@ -219,7 +226,8 @@ It can be useful to compare the performance for different parameters, for
example to analyze the time complexity.
To do this, use the `scatter.js` tool; this will run a benchmark multiple times
-and generate a csv with the results.
+and generate a csv with the results. To see how to use this script,
+run `node benchmark/scatter.js`.
```console
$ node benchmark/scatter.js benchmark/string_decoder/string-decoder.js > scatter.csv
@@ -286,6 +294,8 @@ chunk encoding mean confidence.interval
## Creating a benchmark
+### Basics of a benchmark
+
All benchmarks use the `require('../common.js')` module. This contains the
`createBenchmark(main, configs[, options])` method which will set up your
benchmark.
@@ -369,7 +379,7 @@ function main(conf) {
}
```
-## Creating HTTP benchmark
+### Creating an HTTP benchmark
The `bench` object returned by `createBenchmark` implements the
`http(options, callback)` method. It can be used to run an external tool to