NumPy benchmarks#
Benchmarking NumPy with Airspeed Velocity.
Usage#
Airspeed Velocity manages building and Python virtualenvs by itself, unless told otherwise. To run the benchmarks, you do not need to install a development version of NumPy to your current Python environment.
Before beginning, ensure that airspeed velocity is installed. By default, asv ships with support for anaconda and virtualenv:
    pip install asv
    pip install virtualenv
After contributing new benchmarks, you should test them locally before submitting a pull request.
To run all benchmarks, navigate to the root NumPy directory at the command line and execute:
    spin bench
This builds NumPy and runs all available benchmarks defined in benchmarks/. (Note: this could take a while. Each benchmark is run multiple times to measure the distribution in execution times.)
For testing benchmarks locally, it may be better to run these without replications:
    cd benchmarks/
    export REGEXP="bench.*Ufunc"
    asv run --dry-run --show-stderr --python=same --quick -b $REGEXP
Here the regular expression used to match benchmarks is stored in $REGEXP, and --quick is used to avoid repetitions.
To run benchmarks from a particular benchmark module, such as bench_core.py, simply append the filename without the extension:
    spin bench -t bench_core
To run a benchmark defined in a class, such as MeshGrid from bench_creation.py:
    spin bench -t bench_creation.MeshGrid
To compare changes in benchmark results against another version/commit/branch, use the --compare option (or the equivalent -c):
    spin bench --compare v1.6.2 -t bench_core
    spin bench --compare 20d03bcfd -t bench_core
    spin bench -c main -t bench_core
All of the commands above display the results in plain text in the console, and the results are not saved for comparison with future commits. For greater control, a graphical view, and to have results saved for future comparison, you can run ASV commands (record results and generate HTML):
    cd benchmarks
    asv run -n -e --python=same
    asv publish
    asv preview
More on how to use asv can be found in the ASV documentation. Command-line help is available as usual via asv --help and asv run --help.
Benchmarking versions#
To benchmark or visualize only releases on different machines locally, the commit hashes for the release tags can be generated and then run with asv, that is:
    cd benchmarks
    # Get commits for tags
    # delete tag_commits.txt before re-runs
    for gtag in $(git tag --list --sort taggerdate | grep "^v"); do
        git log $gtag --oneline -n1 --decorate=no | awk '{print $1;}' >> tag_commits.txt
    done
    # Use the last 20
    tail --lines=20 tag_commits.txt > 20_vers.txt
    asv run HASHFILE:20_vers.txt
    # Publish and view
    asv publish
    asv preview

For details on contributing these, see the benchmark results repository.
Writing benchmarks#
See the ASV documentation for basics on how to write benchmarks.
Some things to consider:
The benchmark suite should be importable with any NumPy version.
The benchmark parameters etc. should not depend on which NumPy version is installed.
Try to keep the runtime of the benchmark reasonable.
Prefer ASV’s time_ methods for benchmarking times rather than cooking up time measurements via time.clock, even if it requires some juggling when writing the benchmark.

Preparing arrays etc. should generally be put in the setup method rather than the time_ methods, to avoid counting preparation time together with the time of the benchmarked operation.

Be mindful that large arrays created with np.empty or np.zeros might not be allocated in physical memory until the memory is accessed. If this is desired behaviour, make sure to comment it in your setup function. If you are benchmarking an algorithm, it is unlikely that a user will be executing said algorithm on a newly created empty/zero array. One can force page faults to occur in the setup phase either by calling np.ones or arr.fill(value) after creating the array.
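The considerations above can be illustrated with a short, self-contained benchmark class. This is only a sketch with hypothetical names (HypotheticalAddition and time_add are not part of the existing suite): the parameters are plain Python values that do not depend on the installed NumPy version, the arrays are prepared in setup with np.ones so the page faults happen outside the timed region, and only the body of the time_ method is measured.

    import numpy as np


    class HypotheticalAddition:
        # Parameters are plain Python values, independent of the installed
        # NumPy version; ASV runs the benchmark once per parameter value.
        params = [100, 10_000, 1_000_000]
        param_names = ['size']

        def setup(self, size):
            # Preparation goes in setup so it is not counted in the timing.
            # np.ones touches every element, forcing page faults to occur
            # here rather than inside the timed method.
            self.a = np.ones(size)
            self.b = np.ones(size)

        def time_add(self, size):
            # Only this statement is timed.
            self.a + self.b

If a benchmark exercises functionality that is missing from some NumPy versions, raising NotImplementedError in setup makes ASV skip that benchmark, which helps keep the suite importable with any NumPy version.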