BenchmarkTools.jl

A benchmarking framework for the Julia language

BenchmarkTools makes performance tracking of Julia code easy by supplying a framework for writing and running groups of benchmarks as well as comparing benchmark results.

This package is used to write and run the benchmarks found in BaseBenchmarks.jl.

The CI infrastructure for automated performance testing of the Julia language is not in this package, but can be found in Nanosoldier.jl.

Installation

BenchmarkTools is a Julia Language package. To install BenchmarkTools, open Julia's interactive session (known as the REPL), press the ] key to enter package mode, and then type the following command:

```julia
pkg> add BenchmarkTools
```
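
Equivalently, if you prefer to install from a script rather than the package-mode prompt, the standard Pkg API does the same thing (this is general Julia tooling rather than something specific to BenchmarkTools):

```julia
import Pkg
Pkg.add("BenchmarkTools")   # same effect as `pkg> add BenchmarkTools`
```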

Documentation

If you're just getting started, check out the manual for a thorough explanation of BenchmarkTools.

If you want to explore the BenchmarkTools API, see the reference document.

If you want a short example of a toy benchmark suite, see the sample file in this repo (benchmark/benchmarks.jl).

If you want an extensive example of a benchmark suite being used in the real world, you can look at the source code of BaseBenchmarks.jl.

If you're benchmarking on Linux, I wrote up a series of tips and tricks to help eliminate noise during performance tests.

Quick Start

The primary macro provided by BenchmarkTools is @benchmark:

```julia
julia> using BenchmarkTools

# The `setup` expression is run once per sample, and is not included in the
# timing results. Note that each sample can require multiple benchmark
# kernel evaluations. See the BenchmarkTools manual for details.
julia> @benchmark sort(data) setup=(data=rand(10))
BenchmarkTools.Trial: 10000 samples with 972 evaluations.
 Range (min … max):  69.399 ns …  1.066 μs  ┊ GC (min … max): 0.00% … 0.00%
 Time  (median):     83.850 ns              ┊ GC (median):    0.00%
 Time  (mean ± σ):   89.471 ns ± 53.666 ns  ┊ GC (mean ± σ):  3.25% ± 5.16%

          ▁▄▇█▇▆▃▁
  ▂▁▁▂▂▃▄▆████████▆▅▄▃▃▃▃▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂ ▂
  69.4 ns          Histogram: frequency by time          145 ns  (top 1%)

 Memory estimate: 160 bytes, allocs estimate: 1.
```
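
The value returned by @benchmark is a BenchmarkTools.Trial, so results can also be inspected programmatically rather than read off the printed report. A minimal sketch of that workflow, using the package's documented estimator accessors:

```julia
using BenchmarkTools

t = @benchmark sort(data) setup=(data = rand(10))

minimum(t)              # TrialEstimate for the fastest sample
median(t)               # a more noise-robust location estimate
allocs(t), memory(t)    # allocation count and bytes allocated
```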

For quick sanity checks, one can use the @btime macro, which is a convenience wrapper around @benchmark whose output is analogous to Julia's built-in @time macro:

```julia
# The `seconds` expression helps set a rough time budget, see the manual for more explanation
julia> @btime sin(x) setup=(x=rand()) seconds=3
  4.361 ns (0 allocations: 0 bytes)
0.49587200950472454
```

If the expression you want to benchmark depends on external variables, you should use $ to "interpolate" them into the benchmark expression to avoid the problems of benchmarking with globals. Essentially, any interpolated variable $x or expression $(...) is "pre-computed" before benchmarking begins, and passed to the benchmark as a function argument:

```julia
julia> A = rand(3,3);

julia> @btime inv($A);            # we interpolate the global variable A with $A
  1.191 μs (10 allocations: 2.31 KiB)

julia> @btime inv($(rand(3,3)));  # interpolation: the rand(3,3) call occurs before benchmarking
  1.192 μs (10 allocations: 2.31 KiB)

julia> @btime inv(rand(3,3));     # the rand(3,3) call is included in the benchmark time
  1.295 μs (11 allocations: 2.47 KiB)
```

Sometimes, inline values in simple expressions can give the compiler more information than you intended, causing it to "cheat" the benchmark by hoisting the calculation out of the benchmarked code:

```julia
julia> @btime 1 + 2
  0.024 ns (0 allocations: 0 bytes)
3
```

As a rule of thumb, if a benchmark reports that it took less than a nanosecond to perform, this hoisting probably occurred. You can avoid this using interpolation:

```julia
julia> a = 1; b = 2
2

julia> @btime $a + $b
  1.277 ns (0 allocations: 0 bytes)
3
```
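
A related workaround, sketched below rather than quoted from the package documentation, is to wrap the inputs in Ref and dereference them inside the benchmarked expression; this keeps the compiler from constant-folding values it would otherwise know at compile time:

```julia
using BenchmarkTools

a, b = 1, 2

# The Ref cells are interpolated once; the `[]` loads happen inside the
# benchmark kernel, so the addition cannot be folded away to a constant.
@btime $(Ref(a))[] + $(Ref(b))[]
```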

As described in the manual, the BenchmarkTools package supports many other features, both for additional output and for more fine-grained control over the benchmarking process.
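
For instance, benchmarks can be defined without running them immediately, tuned, grouped, and compared across runs. The following is a hedged sketch of that workflow using the documented BenchmarkGroup, @benchmarkable, tune!, run, and judge APIs; the group keys are illustrative:

```julia
using BenchmarkTools

# Organize related benchmarks into a group (keys chosen for illustration).
suite = BenchmarkGroup()
suite["sort"] = @benchmarkable sort(x) setup=(x = rand(1000))
suite["sum"]  = @benchmarkable sum(x)  setup=(x = rand(1000))

tune!(suite)            # choose evals/samples for each benchmark
baseline = run(suite)   # run everything and collect Trials

# ...after changing the code under test, run again and check for regressions:
results = run(suite)
judge(median(results["sort"]), median(baseline["sort"]))
```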

Why does this package exist?

Our story begins with two packages, "Benchmarks" and "BenchmarkTrackers". The Benchmarks package implemented an execution strategy for collecting and summarizing individual benchmark results, while BenchmarkTrackers implemented a framework for organizing, running, and determining regressions of groups of benchmarks. Under the hood, BenchmarkTrackers relied on Benchmarks for actual benchmark execution.

For a while, the Benchmarks + BenchmarkTrackers system was used for automated performance testing of Julia's Base library. It soon became apparent that the system suffered from a variety of issues:

  1. Individual sample noise could significantly change the execution strategy used to collect further samples.
  2. The estimates used to characterize benchmark results and to detect regressions were statistically vulnerable to noise (i.e. not robust).
  3. Different benchmarks have different noise tolerances, but there was no way to tune this parameter on a per-benchmark basis.
  4. Running benchmarks took a long time - an order of magnitude longer than theoretically necessary for many functions.
  5. Using the system in the REPL (for example, to reproduce regressions locally) was often cumbersome.

The BenchmarkTools package is a response to these issues, designed by examining user reports and the benchmark data generated by the old system. BenchmarkTools offers the following solutions to the corresponding issues above:

  1. Benchmark execution parameters are configured separately from the execution of the benchmark itself. This means that subsequent experiments are performed more consistently, avoiding branching "substrategies" based on small numbers of samples.
  2. A variety of simple estimators are supported, and the user can pick which one to use for regression detection.
  3. Noise tolerance has been made a per-benchmark configuration parameter.
  4. Benchmark configuration parameters can be easily cached and reloaded, significantly reducing benchmark execution time (a sketch of this workflow follows the list).
  5. The API is simpler, more transparent, and overall easier to use.
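
As a concrete illustration of point 4, tuned parameters can be saved to disk and reloaded in later sessions so that the expensive tuning step is skipped; a minimal sketch based on the manual's caching workflow (the file name is illustrative):

```julia
using BenchmarkTools

suite = BenchmarkGroup()
suite["sort"] = @benchmarkable sort(x) setup=(x = rand(1000))

# One-time (slow) tuning, then save the chosen parameters to disk.
tune!(suite)
BenchmarkTools.save("params.json", params(suite))

# In later sessions, reload the cached parameters instead of re-tuning.
loadparams!(suite, BenchmarkTools.load("params.json")[1], :evals, :samples)
run(suite)
```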

Acknowledgements

This package was authored primarily by Jarrett Revels (@jrevels). Additionally, I'd like to thank the following people:

  • John Myles White, for authoring the original Benchmarks package, which greatly inspired BenchmarkTools
  • Andreas Noack, for statistics help and investigating weird benchmark time distributions
  • Oscar Blumberg, for discussions on noise robustness
  • Jiahao Chen, for discussions on error analysis
