# CSuite: Local benchmarking help for V8 performance analysis

CSuite helps you make N averaged runs of a benchmark, then compare the
results against a different binary and/or different flags. It knows about
the "classic" benchmark suites SunSpider, Kraken and Octane, which are
still useful for investigating peak performance scenarios. Each suite has
a default number of runs:

 * SunSpider - 100 runs
 * Kraken - 80 runs
 * Octane - 10 runs

# Usage

Say you want to see how much optimization buys you:

    ./csuite.py kraken baseline ~/src/v8/out/d8 -x="--noopt"
    ./csuite.py kraken compare ~/src/v8/out/d8


Suppose you are comparing two binaries and want a quick look at the results.
Octane normally uses 10 runs, but 3 will take only a few minutes:

    ./csuite.py -r 3 octane baseline ~/src/v8/out-master/d8
    ./csuite.py -r 3 octane compare ~/src/v8/out-mine/d8

You can run csuite.py from any directory:

    ../../somewhere-strange/csuite.py sunspider baseline ./d8
    ../../somewhere-strange/csuite.py sunspider compare ./d8-better

Note that all output files are created in the directory you run from: a
`_benchmark_runner_data` directory is created to store the raw run output,
and a `_results` directory to store the scores.
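
As a rough sketch (the exact contents are an assumption and will vary by
suite and run), the working directory might end up looking like this after
a baseline run:

    _benchmark_runner_data/    # cached raw output from each run
    _results/                  # averaged scores used by `compare`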

For more detailed documentation, see:

    ./csuite.py --help

Output from the runners is captured into files and cached, so you can cancel
and resume multi-hour benchmark runs with minimal loss of data and time. The
`-f` flag forces re-running even if these cached files still exist.
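
For example, to discard the cached results from the Octane comparison above
and measure again from scratch, you might combine it with the earlier
options (a sketch; the flag placement follows the examples above):

    ./csuite.py -f -r 3 octane compare ~/src/v8/out-mine/d8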