scikit-learn_bench benchmarks various implementations of machine learning algorithms across data analytics frameworks. It currently supports the scikit-learn, DAAL4PY, cuML, and XGBoost frameworks for commonly used machine learning algorithms.
Scikit-learn_bench is a benchmark tool for libraries and frameworks implementing Scikit-learn-like APIs and other workloads.
Benefits:
- Full control of the benchmark suite through the CLI
- Flexible and powerful benchmark config structure
- Support for advanced profiling tools, such as Intel(R) VTune* Profiler
- Automated benchmarks report generation
How to create a usable Python environment with the required frameworks:
- sklearn, sklearnex, and gradient boosting frameworks:

  ```bash
  # with pip
  pip install -r envs/requirements-sklearn.txt
  # or with conda
  conda env create -n sklearn -f envs/conda-env-sklearn.yml
  ```
- RAPIDS:

  ```bash
  conda env create -n rapids --solver=libmamba -f envs/conda-env-rapids.yml
  ```
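Before running benchmarks, it can help to confirm that the environment resolved correctly. The following is an optional sketch, not part of scikit-learn_bench; the module names are simply the frameworks mentioned above:

```python
# Optional sanity check (not part of scikit-learn_bench): verify that the
# frameworks to be benchmarked import cleanly in the active environment.
import importlib

for name in ("sklearn", "sklearnex", "xgboost"):
    try:
        module = importlib.import_module(name)
        print(name, getattr(module, "__version__", "unknown version"))
    except ImportError as exc:
        print(f"{name}: not available ({exc})")
```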
How to run benchmarks using the `sklbench` module and a specific configuration:

```bash
python -m sklbench --config configs/sklearn_example.json
```
The default output is a file with JSON-formatted results of benchmarking cases. To generate a human-readable report, use the following command:

```bash
python -m sklbench --config configs/sklearn_example.json --report
```
By default, the output and report file paths are `result.json` and `report.xlsx`. To specify custom file paths, run:

```bash
python -m sklbench --config configs/sklearn_example.json --report --result-file result_example.json --report-file report_example.xlsx
```
For a description of all benchmarks runner arguments, refer to the documentation.
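Because the raw output is plain JSON, it can also be inspected outside the built-in report generator. A minimal sketch, assuming only that the result file is valid JSON; its exact layout depends on the benchmarking cases that were run:

```python
# Peek at the structure of a raw result file before deeper analysis.
# Assumes only that the file is valid JSON; its exact layout depends
# on the benchmarking cases that were run.
import json

with open("result_example.json") as f:
    results = json.load(f)

if isinstance(results, dict):
    print("top-level keys:", sorted(results))
else:
    print(f"top-level list with {len(results)} entries")
```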
To combine raw result files gathered from different environments, call the report generator:
```bash
python -m sklbench.report --result-files result_1.json result_2.json --report-file report_example.xlsx
```
For a description of all report generator arguments, refer to the documentation.
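When gathering results across several configurations or repeated runs, the runner and report generator can be scripted. Below is a minimal sketch using only the CLI arguments shown above; the config list and output file names are placeholders:

```python
# Run several benchmark configurations, then merge the raw results into
# one report. Uses only the CLI arguments shown above; the config list
# and output file names are placeholders.
import subprocess
import sys

configs = ["configs/sklearn_example.json"]  # extend with your own configs
result_files = []

for i, config in enumerate(configs):
    result_file = f"result_{i}.json"
    subprocess.run(
        [sys.executable, "-m", "sklbench",
         "--config", config, "--result-file", result_file],
        check=True,
    )
    result_files.append(result_file)

# Combine all raw result files into a single human-readable report.
subprocess.run(
    [sys.executable, "-m", "sklbench.report",
     "--result-files", *result_files,
     "--report-file", "combined_report.xlsx"],
    check=True,
)
```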
```mermaid
flowchart TB
    A[User] -- High-level arguments --> B[Benchmarks runner]
    B -- Generated benchmarking cases --> C["Benchmarks collection"]
    C -- Raw JSON-formatted results --> D[Report generator]
    D -- Human-readable report --> A

    classDef userStyle fill:#44b,color:white,stroke-width:2px,stroke:white;
    class A userStyle
```

Scikit-learn_bench supports the following types of benchmarks:
- Scikit-learn estimator - Measures performance and quality metrics of a sklearn-like estimator.
- Function - Measures performance metrics of a specified function.