hanabi1224/Programming-Language-Benchmarks

Yet another implementation of the computer language benchmarks game.

bench (MIT License)

Why Build This

The idea is to build an automated process for benchmark generation and publishing.

Comparable numbers

CI is currently used to generate the benchmark results, which guarantees that all numbers come from the same environment at nearly the same time. All benchmark tests are executed in a single CI job.

Automatic publishing

Once a change is merged into the main branch, the CI job regenerates and publishes the static website.
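
In practice, the pipeline boils down to the same build / test / bench / publish steps described under "Local development" below. A minimal sketch of what the CI job effectively runs, assuming it simply chains those documented commands (the actual workflow lives in the repo's Github action config and may differ):

cd bench
dotnet run --project tool -- --task build    # build all implementations
dotnet run --project tool -- --task test     # verify correctness
dotnet run --project tool -- --task bench    # generate the benchmark numbers
cd ../website
pnpm i && pnpm content && pnpm build         # regenerate the static site for publishing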

Main Goals

  • Compare performance differences between languages. Note that implementations may use different optimizations (e.g. with or without multithreading), so please read the source code to check whether a comparison is fair.
  • Compare performance differences between different compilers or runtimes of the same language running the same source code.
  • Facilitate benchmarking in real server environments, since more and more applications are deployed with docker/k8s; results there can differ greatly from what you get on a dev machine.
  • Serve as a reference for CI setup, dev environment setup, and package management setup for different languages. Refer to the Github action.
  • Focus more on new programming languages; classic programming languages that are already covered by CLBG receive limited or no maintenance here, depending on their popularity.

Build

To achieve better SEO, the published site is static and prerendered, powered by nuxt.js.

Host

The website is hosted on Vercel.

Development

git clone https://github.com/hanabi1224/Programming-Language-Benchmarks.git
cd website
pnpm i
pnpm build
pnpm dev

Benchmarks

All benchmarks are defined in bench.yaml.

The current benchmark problems and their implementations are from The Computer Language Benchmarks Game (repo).
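
The tool reads bench.yaml by default; the config path can also be passed explicitly via the --config option shown in the tool's help output further below. A minimal example, equivalent to the default behaviour:

cd bench
# Explicitly point the tool at the benchmark config (bench.yaml is already the default)
dotnet run --project tool -- --config bench.yaml --task build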

Local development

Prerequisites

.NET 9 SDK

Node.js 14

pnpm

podman (or docker, by changing docker_cmd: podman to docker_cmd: docker in bench/bench.yaml; see the one-liner below)
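
If you prefer docker, one quick way to apply the switch mentioned above (assuming GNU sed; editing bench/bench.yaml by hand works just as well):

# From the repository root: switch the container runtime from podman to docker
sed -i 's/docker_cmd: podman/docker_cmd: docker/' bench/bench.yaml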

Build

The first step is to build the source code for the various languages:

cd bench
# To build a subset
dotnet run --project tool -- --task build --langs lisp go --problems nbody helloworld --force-rebuild
# To build all
dotnet run --project tool -- --task build

Test

The second step is to test the built binaries to ensure the correctness of their implementations:

cd bench
# To test a subset
dotnet run --project tool -- --task test --langs lisp go --problems nbody helloworld
# To test all
dotnet run --project tool -- --task test

Bench

The third step is to run the benchmarks and generate the results:

cd bench
# To bench a subset
dotnet run --project tool -- --task bench --langs lisp go --problems nbody helloworld
# To bench all
dotnet run --project tool -- --task bench

For usage information, run:

cd bench
dotnet run --project tool -- -h

BenchTool
  Main function

Usage:
  BenchTool [options]

Options:
  --config <config>              Path to benchmark config file [default: bench.yaml]
  --algorithm <algorithm>        Root path that contains all algorithm code [default: algorithm]
  --include <include>            Root path that contains all include project templates [default: include]
  --build-output <build-output>  Output folder of build step [default: build]
  --task <task>                  Benchmark task to run, valid values: build, test, bench [default: build]
  --force-pull-docker            A flag that indicates whether to force pull docker image even when it exists [default: False]
  --force-rebuild                A flag that indicates whether to force rebuild [default: False]
  --fail-fast                    A flag that indicates whether to fail fast when error occurs [default: False]
  --build-pool                   A flag that indicates whether builds can run in parallel [default: False]
  --verbose                      A flag that indicates whether to print verbose information [default: False]
  --no-docker                    A flag that forces disabling docker [default: False]
  --langs <langs>                Languages to include, e.g. --langs go csharp [default: ]
  --problems <problems>          Problems to include, e.g. --problems binarytrees nbody [default: ]
  --environments <environments>  OS environments to include, e.g. --environments linux windows [default: ]
  --version                      Show version information
  -?, -h, --help                 Show help and usage information
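
The options compose, so subsets and flags can be combined freely. For example, to bench only the Go and C# nbody implementations directly on the host with verbose output (all flags taken from the help text above; the combination itself is just an illustration):

cd bench
dotnet run --project tool -- --task bench --langs go csharp --problems nbody --no-docker --verbose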

Refresh website

Lastly, you can regenerate the website with the latest benchmark numbers:

cd website
pnpm i
pnpm content
pnpm build
serve dist

TODOs

Integrate test environment info into website

Integrate build / test / benchmark information into website

...

How to contribute

TODO

Thanks

This is inspired by The Computer Language Benchmarks Game, thanks to the curator.

LICENSES

The code of the problem implementations from The Computer Language Benchmarks Game is under their Revised BSD license.

Other code in this repo is under MIT.
