GitHub Action for continuous benchmarking to keep performance

benchmark-action/github-action-benchmark


This repository provides a GitHub Action for continuous benchmarking. If your project has some benchmark suites, this action collects data from the benchmark outputs and monitors the results on the GitHub Actions workflow.

  • This action can store collected benchmark results in a GitHub Pages branch and provide a chart view. Benchmark results are visualized on the GitHub Pages of your project.
  • This action can detect possible performance regressions by comparing benchmark results. When benchmark results get worse than the previous run beyond the specified threshold, it can raise an alert via a commit comment or a workflow failure.

This action currently supports the following tools:

  • cargo bench for Rust projects
  • go test -bench for Go projects
  • benchmark.js for JavaScript/TypeScript projects
  • pytest-benchmark for Python projects with pytest
  • Google Benchmark Framework for C++ projects
  • Catch2 for C++ projects
  • BenchmarkTools.jl for Julia packages
  • Benchmark.Net for .NET projects
  • JMH for Java projects
  • Custom benchmarks where either a bigger or a smaller result is better

Multiple languages in the same repository are supported for polyglot projects.

Japanese Blog post

Examples

Example projects for each language are in the examples/ directory. Live example workflow definitions are in the .github/workflows/ directory. Live workflows are:

| Language     | Workflow                           | Example Project          |
|--------------|------------------------------------|--------------------------|
| Rust         | Rust Example Workflow              | examples/rust            |
| Go           | Go Example Workflow                | examples/go              |
| JavaScript   | JavaScript Example Workflow        | examples/benchmarkjs     |
| Python       | pytest-benchmark Example Workflow  | examples/pytest          |
| C++          | C++ Example Workflow               | examples/cpp             |
| C++ (Catch2) | C++ Catch2 Example Workflow        | examples/catch2          |
| Julia        | Julia Example                      | examples/julia           |
| .NET         | C# Benchmark.Net Example Workflow  | examples/benchmarkdotnet |
| Java         | Java Example Workflow              | examples/java            |
| Luau         | Coming soon                        | Coming soon              |

All benchmark charts from above workflows are gathered in GitHub pages:

https://benchmark-action.github.io/github-action-benchmark/dev/bench/

Additionally, even though there is no explicit example for them, you can use customBiggerIsBetter and customSmallerIsBetter to use this action and create your own graphs from your own benchmark data. The name of these tools defines which direction "is better" for your benchmarks.

Every entry in the JSON file you provide only needs the name, unit, and value properties. You can also provide the optional range (the results' variance) and extra (any additional information that might be useful to your benchmark's context) properties. Like this:

```json
[
  {
    "name": "My Custom Smaller Is Better Benchmark - CPU Load",
    "unit": "Percent",
    "value": 50
  },
  {
    "name": "My Custom Smaller Is Better Benchmark - Memory Used",
    "unit": "Megabytes",
    "value": 100,
    "range": "3",
    "extra": "Value for Tooltip: 25\nOptional Num #2: 100\nAnything Else!"
  }
]
```
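Such a file can be produced by any script that runs your own measurements. Below is a minimal sketch; the measurement values and the output.json filename are illustrative, not something this action prescribes:

```python
import json

# Illustrative measurements; replace these with numbers from your own benchmarks.
results = [
    {"name": "My Custom Smaller Is Better Benchmark - CPU Load",
     "unit": "Percent", "value": 50},
    {"name": "My Custom Smaller Is Better Benchmark - Memory Used",
     "unit": "Megabytes", "value": 100, "range": "3",
     "extra": "Value for Tooltip: 25\nOptional Num #2: 100\nAnything Else!"},
]

# Write the file that the output-file-path input should point at.
with open("output.json", "w") as f:
    json.dump(results, f, indent=2)
```

The workflow step then sets tool to 'customSmallerIsBetter' and output-file-path to output.json.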

Screenshots

Charts on GitHub Pages

page screenshot

Hovering over a data point shows a tooltip. It includes:

  • Commit hash
  • Commit message
  • Date and committer
  • Benchmark value

Clicking a data point in a chart opens the commit page on the GitHub repository.

tooltip

At the bottom of the page, a download button is available for downloading the benchmark results as a JSON file.

download button

Alert comment on commit page

This action can raise an alert comment on the commit when its benchmark results are worse than the previous run beyond a specified threshold.

alert comment

Why?

Performance matters. Writing benchmarks is a popular and correct way to measure software performance. Benchmarks help us to maintain performance and to confirm the effects of optimizations. To maintain performance, it's important to monitor benchmark results along with changes to the software, and to notice performance regressions quickly, it's useful to monitor the benchmarking results continuously.

However, there is no good free tool to watch performance easily and continuously across languages (as far as I looked into). So I built a new tool on top of GitHub Actions.

How to use

This action takes a file that contains benchmark output and outputs the results to a GitHub Pages branch and/or an alert commit comment.

Minimal setup

Let's start with a minimal workflow setup. For explanation, let's say we have a Go project, but the basic setup is the same for other languages. For language-specific setup, please read the later section.

```yaml
name: Minimal setup
on:
  push:
    branches:
      - master
jobs:
  benchmark:
    name: Performance regression check
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-go@v4
        with:
          go-version: "stable"
      # Run benchmark with `go test -bench` and store the output to a file
      - name: Run benchmark
        run: go test -bench 'BenchmarkFib' | tee output.txt
      # Download previous benchmark result from cache (if exists)
      - name: Download previous benchmark data
        uses: actions/cache@v4
        with:
          path: ./cache
          key: ${{ runner.os }}-benchmark
      # Run `github-action-benchmark` action
      - name: Store benchmark result
        uses: benchmark-action/github-action-benchmark@v1
        with:
          # What benchmark tool the output.txt came from
          tool: 'go'
          # Where the output from the benchmark tool is stored
          output-file-path: output.txt
          # Where the previous data file is stored
          external-data-json-path: ./cache/benchmark-data.json
          # Workflow will fail when an alert happens
          fail-on-alert: true
          # Upload the updated cache file for the next job by actions/cache
```

The step which runs github-action-benchmark does the following:

  1. Extract the benchmark result from the output in output.txt
  2. Update the downloaded cache file with the extracted result
  3. Compare the result with the previous one. If it gets worse than the previous run beyond the 200% threshold, the workflow fails and the failure is notified to you

By default, this action marks the result as a performance regression when it is worse than the previous result beyond a 200% threshold. For example, if the previous benchmark result was 100 iter/ns and this time it is 230 iter/ns, that is 230% of the previous result and an alert will happen. The threshold can be changed with the alert-threshold input.
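The ratio arithmetic can be sketched as follows (a hand-rolled illustration of the documented comparison, not the action's actual source):

```python
def is_alert(prev: float, curr: float, threshold_pct: float = 200.0) -> bool:
    """Return True when the current result is worse than the previous one
    beyond the threshold. For a metric where bigger is worse (e.g. a
    duration per iteration), worse means a larger value."""
    ratio = curr / prev * 100.0  # e.g. 230 / 100 -> 230%
    return ratio > threshold_pct

# 230 vs. a previous 100 is 230%, which exceeds the default 200% threshold.
print(is_alert(100, 230))  # True
print(is_alert(100, 180))  # False
```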

A live workflow example is here. And the results of the workflow can be seen here.

Commit comment

In addition to the above setup, a GitHub API token needs to be given to enable the comment-on-alert feature.

```yaml
- name: Store benchmark result
  uses: benchmark-action/github-action-benchmark@v1
  with:
    tool: 'go'
    output-file-path: output.txt
    external-data-json-path: ./cache/benchmark-data.json
    fail-on-alert: true
    # GitHub API token to make a commit comment
    github-token: ${{ secrets.GITHUB_TOKEN }}
    # Enable alert commit comment
    comment-on-alert: true
    # Mention @rhysd in the commit comment
    alert-comment-cc-users: '@rhysd'
```

secrets.GITHUB_TOKEN is a GitHub API token automatically generated for each workflow run. It is necessary to send a commit comment when the benchmark result of the commit is detected as a possible performance regression.

Now, in addition to making the workflow fail, the step leaves a commit comment when it detects a performance regression, like this. Though the alert-comment-cc-users input is not mandatory for this, I recommend setting it to make sure you notice the comment via a GitHub notification. Please note that this value must be quoted like '@rhysd' because @ is an indicator in YAML syntax.

A live workflow example is here. And the results of the workflow can be seen here.

Job Summary

Similar to the Commit comment feature, GitHub Actions Job Summaries are also supported. In order to use Job Summaries, turn on the summary-always option.

```yaml
- name: Store benchmark result
  uses: benchmark-action/github-action-benchmark@v1
  with:
    tool: 'cargo'
    output-file-path: output.txt
    external-data-json-path: ./cache/benchmark-data.json
    fail-on-alert: true
    # GitHub API token to make a commit comment
    github-token: ${{ secrets.GITHUB_TOKEN }}
    # Enable alert commit comment
    comment-on-alert: true
    # Enable Job Summary for PRs
    summary-always: true
    # Mention @rhysd in the commit comment
    alert-comment-cc-users: '@rhysd'
```

Charts on GitHub Pages

It is useful to see how the benchmark results change over time in time-series charts. This action provides a chart dashboard on GitHub Pages.

It requires some preparations before the workflow setup.

You need to create a branch for GitHub Pages if you haven't created it yet.

```sh
# Create a local branch
$ git checkout --orphan gh-pages
# Push it to create a remote branch
$ git push origin gh-pages:gh-pages
```

Now you're ready for workflow setup.

```yaml
# Do not run this workflow on pull request since this workflow has permission to modify contents.
on:
  push:
    branches:
      - master
permissions:
  # deployments permission to deploy GitHub pages website
  deployments: write
  # contents permission to update benchmark contents in gh-pages branch
  contents: write
jobs:
  benchmark:
    name: Performance regression check
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-go@v4
        with:
          go-version: "stable"
      # Run benchmark with `go test -bench` and store the output to a file
      - name: Run benchmark
        run: go test -bench 'BenchmarkFib' | tee output.txt
      # gh-pages branch is updated and pushed automatically with extracted benchmark data
      - name: Store benchmark result
        uses: benchmark-action/github-action-benchmark@v1
        with:
          name: My Project Go Benchmark
          tool: 'go'
          output-file-path: output.txt
          # Access token to deploy GitHub Pages branch
          github-token: ${{ secrets.GITHUB_TOKEN }}
          # Push and deploy GitHub pages branch automatically
          auto-push: true
```

The step which runs github-action-benchmark does the following:

  1. Extract the benchmark result from the output in output.txt
  2. Switch the branch to gh-pages
  3. Read existing benchmark results from dev/bench/data.js
  4. Update dev/bench/data.js with the extracted benchmark result
  5. Generate a commit to store the update in the gh-pages branch
  6. Push the gh-pages branch to the remote
  7. Compare the results with previous results and raise an alert if a possible performance regression is detected

After the first workflow run, you will get the first result on https://you.github.io/repo/dev/bench like this.

By default, this action assumes that gh-pages is your GitHub Pages branch and that /dev/bench is the path to put the benchmark dashboard page. If they don't fit your use case, please tweak them with the gh-pages-branch, gh-repository, and benchmark-data-dir-path inputs.

This action merges all benchmark results into one GitHub Pages branch. If your workflows have multiple steps to check benchmarks from multiple tools, please give a name input to each step so that each benchmark result can be distinguished.

Please see the 'Examples' section above for live workflow examples for each language.

If you don't want to pass a GitHub API token to this action, that's still OK.

```yaml
- name: Store benchmark result
  uses: benchmark-action/github-action-benchmark@v1
  with:
    name: My Project Go Benchmark
    tool: 'go'
    output-file-path: output.txt
    # Set auto-push to false since GitHub API token is not given
    auto-push: false
# Push gh-pages branch by yourself
- name: Push benchmark result
  run: git push 'https://you:${{ secrets.GITHUB_TOKEN }}@github.com/you/repo-name.git' gh-pages:gh-pages
```

Please add a step to push the branch to the remote.

Tool specific setup

Please read the README.md files in each example directory. Usually, take stdout from the benchmark tool and store it to a file. Then specify the file path in the output-file-path input.

These examples are run in workflows of this repository as described in the 'Examples' section above.

Action inputs

Input definitions are written inaction.yml.

name (Required)

  • Type: String
  • Default:"Benchmark"

Name of the benchmark. This value must be identical across all benchmarks in your repository.

tool (Required)

  • Type: String
  • Default: N/A

Tool for running benchmarks. The value must be one of "cargo", "go", "benchmarkjs", "pytest", "googlecpp", "catch2", "julia", "jmh", "benchmarkdotnet", "benchmarkluau", "customBiggerIsBetter", "customSmallerIsBetter".

output-file-path (Required)

  • Type: String
  • Default: N/A

Path to a file which contains the output from the benchmark tool. The path can be relative to the repository root.

gh-pages-branch (Required)

  • Type: String
  • Default:"gh-pages"

Name of your GitHub pages branch.

Note: If you're using the docs/ directory of the master branch for GitHub Pages, please set gh-pages-branch to master and benchmark-data-dir-path to the directory under docs, like docs/dev/bench.

gh-repository

  • Type: String

URL of an optional different repository to store benchmark results (e.g. github.com/benchmark-action/github-action-benchmark-results).

NOTE: If you want to auto-push to a different repository, you need to use a separate Personal Access Token that has write access to the specified repository. If you are not using the auto-push option, then you can avoid passing the github-token if your data repository is public.

benchmark-data-dir-path (Required)

  • Type: String
  • Default:"dev/bench"

Path to a directory that contains benchmark files on the GitHub pages branch. For example, when this valueis set to"path/to/bench",https://you.github.io/repo-name/path/to/bench will be available as benchmarksdashboard page. If it does not containindex.html, this action automatically generates it at first run.The path can be relative to repository root.

github-token (Optional)

  • Type: String
  • Default: N/A

GitHub API access token.

ref (Optional)

  • Type: String
  • Default: N/A

Ref to use for reporting the commit.

auto-push (Optional)

  • Type: Boolean
  • Default:false

If it is set to true, this action automatically pushes the generated commit to the GitHub Pages branch. Otherwise, you need to push it on your own. Please read the 'Charts on GitHub Pages' section above for more details.

comment-always (Optional)

  • Type: Boolean
  • Default:false

If it is set to true, this action will leave a commit comment comparing the current benchmark with the previous one. github-token is necessary as well.

save-data-file (Optional)

  • Type: Boolean
  • Default:true

If it is set to false, this action will not save the current benchmark to the external data file. You can use this option to set up your action to compare the benchmarks between a PR and its base branch.

alert-threshold (Optional)

  • Type: String
  • Default:"200%"

A percentage value like "150%". It is a ratio indicating how much worse the current benchmark result is. For example, if we now get 150 ns/iter and previously got 100 ns/iter, the current result is 150% of the previous one.

If the current benchmark result is worse than the previous one beyond the threshold, an alert will happen. See comment-on-alert and fail-on-alert also.

comment-on-alert (Optional)

  • Type: Boolean
  • Default:false

If it is set to true, this action will leave a commit comment when an alert happens, like this. github-token is necessary as well. For the threshold, please see alert-threshold also.

fail-on-alert (Optional)

  • Type: Boolean
  • Default:false

If it is set to true, the workflow will fail when an alert happens. For the threshold, please see alert-threshold and fail-threshold also.

fail-threshold (Optional)

  • Type: String
  • Default: The same value asalert-threshold

A percentage value in the same format as alert-threshold. If this value is set, it will be used as the threshold to determine whether the workflow should fail. The default value is the same as the alert-threshold input. This value must be equal to or larger than the alert-threshold value.
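The relationship between the two thresholds can be sketched like this (an illustration of the documented semantics, not the action's code):

```python
from typing import Optional

def check(prev: float, curr: float, alert_threshold: str = "200%",
          fail_threshold: Optional[str] = None) -> tuple:
    """Return (alert, fail) flags. Thresholds are percentage strings such
    as "150%"; fail-threshold defaults to alert-threshold and must not be
    smaller than it."""
    alert_pct = float(alert_threshold.rstrip("%"))
    fail_pct = float((fail_threshold or alert_threshold).rstrip("%"))
    if fail_pct < alert_pct:
        raise ValueError("fail-threshold must be >= alert-threshold")
    ratio = curr / prev * 100.0
    return ratio > alert_pct, ratio > fail_pct

print(check(100, 180, "150%", "200%"))  # (True, False): alert comment only
print(check(100, 230, "150%", "200%"))  # (True, True): workflow also fails
```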

alert-comment-cc-users (Optional)

  • Type: String
  • Default: N/A

Comma-separated GitHub user names to mention in the alert commit comment, like "@foo,@bar". These users will be mentioned in a commit comment when an alert happens. For configuring alerts, please see alert-threshold and comment-on-alert also.

external-data-json-path (Optional)

  • Type: String
  • Default: N/A

An external JSON file which contains the benchmark results up to the previous job run. When this value is set, this action updates the file content instead of generating a Git commit in the GitHub Pages branch. This option is useful if you don't want to put benchmark results in a GitHub Pages branch. Instead, you need to keep the JSON file persistent across job runs. One option is using a workflow cache with the actions/cache action. Please read the 'Minimal setup' section above.
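Conceptually, each job run performs a read-modify-write cycle on that file, roughly like this (a simplified sketch; the field names in the entry are illustrative, and the real file written by the action carries more metadata such as commit information):

```python
import json
import os
import time

DATA_FILE = "./cache/benchmark-data.json"  # matches external-data-json-path

def append_result(entry: dict) -> None:
    # Restore previous data if the cache was hit; start fresh otherwise.
    history = []
    if os.path.exists(DATA_FILE):
        with open(DATA_FILE) as f:
            history = json.load(f)
    # Append this run's result with a timestamp and write the file back,
    # so actions/cache can persist it for the next run.
    history.append({"date": int(time.time() * 1000), "benches": [entry]})
    os.makedirs(os.path.dirname(DATA_FILE), exist_ok=True)
    with open(DATA_FILE, "w") as f:
        json.dump(history, f)

append_result({"name": "BenchmarkFib", "unit": "ns/op", "value": 325})
```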

max-items-in-chart (Optional)

  • Type: Number
  • Default: N/A

The maximum number of data points in a chart, to avoid overly busy charts. This value must be an unsigned integer larger than zero. If the number of benchmark results for some benchmark suite exceeds this value, the oldest ones will be removed before storing the results to file. By default this value is empty, which means there is no limit.
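The trimming behavior can be illustrated as follows (a sketch of the documented behavior, not the action's implementation):

```python
def trim(history: list, max_items=None) -> list:
    """Keep at most max_items newest results; None means no limit."""
    if max_items is None or len(history) <= max_items:
        return history
    # Drop the oldest entries so only the newest max_items remain.
    return history[len(history) - max_items:]

history = [{"value": v} for v in (10, 11, 12, 13)]
print(trim(history, 2))          # [{'value': 12}, {'value': 13}]
print(trim(history, None) is history)  # True: no limit, unchanged
```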

skip-fetch-gh-pages (Optional)

  • Type: Boolean
  • Default:false

If set to true, the workflow will skip fetching the branch defined by the gh-pages-branch input.

Action outputs

No action output is set by this action for the parent GitHub workflow.

Caveats

Run only on your branches

Please ensure that your benchmark workflow runs only on your branches, and avoid running it on pull requests. If the GitHub Pages branch could be pushed from a pull request, anyone who creates a pull request on your repository could modify your GitHub Pages branch.

For this, you can specify the branches that run your benchmark workflow in the on: section, or set a proper condition in the if: section of the step which pushes to GitHub Pages.

e.g. Run only on the master branch

```yaml
on:
  push:
    branches:
      - master
```

e.g. Push only when not running for a pull request

```yaml
- name: Push benchmark result
  run: git push ...
  if: github.event_name != 'pull_request'
```

Stability of Virtual Environment

Watching the benchmark results of the examples in this repository, the variance of the benchmarks is about ±10-20%. If your benchmarks use resources such as networks or file I/O, the variance might be bigger.

If the variance is not acceptable, please prepare a stable environment to run benchmarks in. GitHub Actions supports self-hosted runners.

Customizing the benchmarks result page

This action creates a default index.html in the directory specified with the benchmark-data-dir-path input. By default, every benchmark test case has its own chart on the page. Charts are drawn with Chart.js.

If it does not fit your use case, please modify the HTML file or replace it with your favorite one. All benchmark data is stored in window.BENCHMARK_DATA, so you can create your favorite view.
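If you post-process the data outside the browser, the payload can be recovered from data.js by stripping the JavaScript assignment around the JSON object, e.g. (assuming the `window.BENCHMARK_DATA = {...}` layout described above; the field names in the sample are illustrative):

```python
import json

def load_benchmark_data(source: str) -> dict:
    """Extract the JSON object assigned to window.BENCHMARK_DATA in data.js."""
    start = source.index("{")        # first brace opens the object
    end = source.rindex("}") + 1     # last brace closes it
    return json.loads(source[start:end])

sample = 'window.BENCHMARK_DATA = {"lastUpdate": 1700000000, "entries": {}}\n'
data = load_benchmark_data(sample)
print(data["lastUpdate"])  # 1700000000
```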

Versioning

This action conforms to Semantic Versioning 2.0.

For example, benchmark-action/github-action-benchmark@v1 means the latest version of 1.x.y. And benchmark-action/github-action-benchmark@v1.0.2 always uses v1.0.2, even if a newer version is published.

The master branch of this repository is for development and does not work as an action.

Track updates of this action

To be notified of new version releases, please watch 'releases only' at this repository. Every release will appear on your GitHub notifications page.

Future work

  • Support pull requests. Instead of updating GitHub Pages, add a comment to the pull request to explain the benchmark results.
  • Add more benchmark tools
  • Allow uploading results to metrics services such as mackerel
  • Show the extracted benchmark data in the output from this action
  • Add a table view to the dashboard page to see all data points in a table

Related actions

  • lighthouse-ci-action is an action for Lighthouse CI. If you're measuring the performance of your web application, using Lighthouse CI and lighthouse-ci-action would be better than using this action.

License

the MIT License

