Yet another implementation of the Computer Language Benchmarks Game.
The idea is to build an automated process for benchmark generation and publishing.
It currently uses CI to generate benchmark results so that all the numbers come from the same environment at nearly the same time. All benchmark tests are executed in a single CI job.
Once a change is merged into the main branch, the CI job regenerates and publishes the static website.
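As an illustration of this flow, a minimal GitHub Actions workflow could look like the sketch below. The file name, triggers, and runner are assumptions for illustration only; the dotnet run commands mirror the build / test / bench steps documented later in this README.

```yaml
# Hypothetical sketch of .github/workflows/bench.yml (not the repo's actual workflow)
name: bench
on:
  push:
    branches: [main]
jobs:
  bench:
    # single job, so all numbers come from the same environment
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build all benchmark programs
        run: dotnet run -p tool -- --task build
        working-directory: bench
      - name: Verify program output
        run: dotnet run -p tool -- --task test
        working-directory: bench
      - name: Run benchmarks
        run: dotnet run -p tool -- --task bench
        working-directory: bench
      # publishing the prerendered site (e.g. via Vercel) would follow here
```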
- Compare performance differences between languages. Note that implementations may use different optimizations (e.g. with or without multithreading), so please read the source code to check whether a comparison is fair.
- Compare performance differences between different compilers or runtimes of the same language with the same source code.
- Facilitate benchmarking on real server environments, as more and more applications are deployed with docker/k8s nowadays. You are likely to get very different results from what you see on your dev machine.
- A reference for CI setup / dev environment setup / package management setup for different languages. Refer to the GitHub Actions workflows.
- It focuses more on new programming languages; classic programming languages already covered by CLBG receive limited or no maintenance, depending on their popularity.
To achieve better SEO, the published site is static and prerendered, powered by nuxt.js.
The website is hosted on Vercel.
```
git clone https://github.com/hanabi1224/Programming-Language-Benchmarks.git
cd website
pnpm i
pnpm build
pnpm dev
```

All benchmarks are defined in bench.yaml.
Current benchmark problems and their implementations are from The Computer Language Benchmarks Game (repo).
podman (or docker, by changing `docker_cmd: podman` to `docker_cmd: docker` in bench/bench.yaml)
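For example, switching runtimes is a one-line change in bench/bench.yaml. Only the docker_cmd key below comes from the note above; the surrounding structure of the file is omitted here.

```yaml
# bench/bench.yaml (excerpt, simplified)
docker_cmd: podman   # change to `docker` to run containers with docker instead
```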
The first step is to build source code from various languages:
```
cd bench
# To build a subset
dotnet run -p tool -- --task build --langs lisp go --problems nbody helloworld --force-rebuild
# To build all
dotnet run -p tool -- --task build
```
The second step is to test the built binaries to ensure the correctness of their implementations:
```
cd bench
# To test a subset
dotnet run -p tool -- --task test --langs lisp go --problems nbody helloworld
# To test all
dotnet run -p tool -- --task test
```
The third step is to generate the benchmarks:
```
cd bench
# To bench a subset
dotnet run -p tool -- --task bench --langs lisp go --problems nbody helloworld
# To bench all
dotnet run -p tool -- --task bench
```
For full usage information:
```
cd bench
dotnet run -p tool -- -h

BenchTool
  Main function

Usage:
  BenchTool [options]

Options:
  --config <config>                Path to benchmark config file [default: bench.yaml]
  --algorithm <algorithm>          Root path that contains all algorithm code [default: algorithm]
  --include <include>              Root path that contains all include project templates [default: include]
  --build-output <build-output>    Output folder of build step [default: build]
  --task <task>                    Benchmark task to run, valid values: build, test, bench [default: build]
  --force-pull-docker              A flag that indicates whether to force pull docker image even when it exists [default: False]
  --force-rebuild                  A flag that indicates whether to force rebuild [default: False]
  --fail-fast                      A flag that indicates whether to fail fast when an error occurs [default: False]
  --build-pool                     A flag that indicates whether builds can run in parallel [default: False]
  --verbose                        A flag that indicates whether to print verbose information [default: False]
  --no-docker                      A flag that forces disabling docker [default: False]
  --langs <langs>                  Languages to include, e.g. --langs go csharp [default: ]
  --problems <problems>            Problems to include, e.g. --problems binarytrees nbody [default: ]
  --environments <environments>    OS environments to include, e.g. --environments linux windows [default: ]
  --version                        Show version information
  -?, -h, --help                   Show help and usage information
```
Lastly, you can regenerate the website with the latest benchmark numbers:
```
cd website
pnpm i
pnpm content
pnpm build
serve dist
```

TODO

- Integrate test environment info into the website
- Integrate build / test / benchmark information into the website
- ...
This is inspired by The Computer Language Benchmarks Game, thanks to its curator.
Code of problem implementations from The Computer Language Benchmarks Game is under their Revised BSD license.
Other code in this repo is under the MIT license.