# Benchmarks for Django using asv
This repository contains the benchmarks for measuring Django's performance over time.
The benchmarks are run with the [airspeed velocity](https://github.com/airspeed-velocity/asv) (asv) benchmarking tool, and the results can be viewed here.
## Running the benchmarks

Conda is used to run the benchmarks against different versions of Python.

If you already have conda or miniconda installed, you can run the benchmarks against the latest commit with:

```
pip install asv
asv run
```
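Beyond a full run, asv can also run just a subset of the benchmarks or serve the collected results as a local website. A couple of commonly useful invocations (see the asv documentation for the full set of options):

```
# Run only the benchmarks whose names match a regular expression
asv run --bench <regexp>

# Build the HTML report from collected results and serve it locally
asv publish
asv preview
```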
If you do not have conda or miniconda installed, change the contents of the file `asv.conf.json` as follows to use virtualenv to run the benchmarks:
{"version":1,"project":"Django","project_url":"https://www.djangoproject.com/","repo":"https://github.com/django/django.git","branches": ["main"],"environment_type":"virtualenv","show_commit_url":"http://github.com/django/django/commit/",}
and run the benchmarks using:

```
pip install asv
asv run
```
Note: asv prompts you to set a machine name on the first run. Please do not set it to 'ubuntu-22.04', 'windows-2022' or 'macos-12', as results for machines with those names are already stored in the repository.
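If you prefer to record the machine name ahead of time rather than answering the interactive prompt, asv's `machine` subcommand can do it (the name below is a placeholder of your choosing):

```
# Register machine details under a custom name, accepting defaults for the rest
asv machine --machine my-laptop --yes
```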
Benchmark results for different branches can be compared as follows:

```
asv run <commit1 SHA or branch1 name>
asv run <commit2 SHA or branch2 name>
asv compare <commit1 SHA or branch1 name> <commit2 SHA or branch2 name>
```
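For example, to compare Django's main branch against a hypothetical feature branch named `my-feature` (the `^!` suffix is git revision syntax telling asv to benchmark only that commit rather than its whole history):

```
asv run main^!
asv run my-feature^!
asv compare main my-feature
```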
## Writing benchmarks

To contribute a new benchmark:

1. Fork this repository and create a new branch.
2. Install pre-commit and run `pre-commit install` to set up the pre-commit hooks that format the code.
3. Create a new directory named `benchmark_name` under the appropriate category of benchmarks.
4. Add the files `__init__.py` and `benchmark.py` to the directory.
5. Add the directory to the list of `INSTALLED_APPS` in settings.py.
6. Write your benchmark in the file `benchmark.py` using the following format (a fuller sketch follows this list):

   ```python
   from ...utils import bench_setup

   class BenchmarkClass:
       def setup(self):
           bench_setup()
           # If your benchmark makes use of models, use
           # bench_setup(migrate=True) instead.
           ...

       def time_benchmark_name(self):
           ...
   ```

7. Commit your changes and create a pull request.
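As a fuller illustration of the format above, here is a minimal sketch of a hypothetical benchmark that times a simple queryset. The `Book` model and its `models.py` are assumptions made for illustration; they do not exist in this repository, and the relative import of `bench_setup` follows the directory layout described in the steps above.

```python
# models.py -- a hypothetical model for the benchmark app
from django.db import models

class Book(models.Model):
    title = models.CharField(max_length=100)
```

```python
# benchmark.py -- times iterating over a full queryset
from ...utils import bench_setup

from .models import Book  # hypothetical model defined above

class QueryAllBenchmark:
    def setup(self):
        # This benchmark uses models, so apply migrations first.
        bench_setup(migrate=True)
        Book.objects.bulk_create(Book(title=f"Book {i}") for i in range(100))

    def time_query_all(self):
        # asv times methods whose names start with time_.
        list(Book.objects.all())
```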