Update example commands in README #182
Conversation
> By default, output and report file paths are `result.json` and `report.xlsx`. To specify custom file paths, run:
>
> In order to optimize datasets downloading and get more verbose output, use `--prefetch-datasets` and `-l INFO` arguments:
Please remember to provide info about the requirements for the Kaggle data.
```bash
# Same command with shorter argument aliases for typing convenience
python -m sklbench -c configs/regular \
    -f algorithm:library=sklearnex algorithm:device=cpu \
    -e ENV_NAME -r result_sklearnex_cpu_regular.json
```
This `ENV_NAME` is very unclear to me.
The docs say:

> Environment name to use instead of its configuration hash.

But that doesn't tell me what the environment is or what it is used for. Should `ENV_NAME` be substituted with something else? Is it required?
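The docs' wording suggests that when no name is given, sklbench derives one from a hash of the environment's configuration. A minimal sketch of that idea, purely to illustrate the concept (the function and naming scheme here are my own guesses, not sklbench's actual implementation):

```python
import hashlib
import json


def default_env_name(config: dict, prefix: str = "env") -> str:
    """Derive a stable environment name from a config's contents.

    Hypothetical illustration: hash the canonical JSON form of the
    config so that identical configs always map to the same name.
    """
    canonical = json.dumps(config, sort_keys=True).encode("utf-8")
    digest = hashlib.sha256(canonical).hexdigest()[:12]
    return f"{prefix}-{digest}"


config = {"algorithm": {"library": "sklearnex", "device": "cpu"}}
name = default_env_name(config)
print(name)
# The same config always yields the same name:
assert default_env_name(dict(config)) == name
```

Under this reading, `-e ENV_NAME` would simply override an auto-generated label like the one above, which is why the example command works with any placeholder string.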
Same question.
```bash
python -m sklbench.report --result-files result_1.json result_2.json --report-file report_example.xlsx
python -m sklbench.report \
```
I think this one is missing the flag that's needed when mixing sklearn and sklearnex.
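As a rough mental model of what the report command above does with several `--result-files` (this is an illustrative guess at the data flow, not sklbench's real schema or code), merging amounts to concatenating the per-run records from each JSON file before building the report:

```python
import json
import os
import tempfile


def merge_results(paths):
    """Concatenate benchmark records from several result files.

    Assumes each file holds a JSON list of run records; the record
    fields below are made up for the demo.
    """
    merged = []
    for path in paths:
        with open(path) as f:
            merged.extend(json.load(f))
    return merged


# Demo with two throwaway result files.
with tempfile.TemporaryDirectory() as tmp:
    p1 = os.path.join(tmp, "result_1.json")
    p2 = os.path.join(tmp, "result_2.json")
    with open(p1, "w") as f:
        json.dump([{"estimator": "PCA", "time_s": 0.41}], f)
    with open(p2, "w") as f:
        json.dump([{"estimator": "KMeans", "time_s": 1.72}], f)
    records = merge_results([p1, p2])
    print(len(records))  # 2
```

When the merged files come from different libraries (e.g. sklearn and sklearnex runs), the report presumably needs to know how to line the records up against each other, which is likely where the missing comparison flag comes in.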
```bash
python -m sklbench --config configs/sklearn_example.json --report --result-file result_example.json --report-file report_example.xlsx
# ...
  -f algorithm:library=sklearnex algorithm:device=cpu algorithm:estimator=PCA,KMeans
```
Here it could mention that these algorithms need to be listed in the config JSON.
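For example, a config of roughly this shape (the field names here are illustrative guesses based on the `-f` keys above, not the exact sklbench schema) would need to list the estimators before the filter can select them:

```json
{
  "algorithm": {
    "library": "sklearnex",
    "device": "cpu",
    "estimator": ["PCA", "KMeans"]
  }
}
```

The `-f algorithm:estimator=PCA,KMeans` filter then narrows the run to those entries; an estimator absent from the config has nothing to match against.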
> ```
> --prefetch-datasets -l INFO
> ```
>
> To select measurement for few algorithms only, extend filter (`-f`) argument:
```diff
- To select measurement for few algorithms only, extend filter (`-f`) argument:
+ To run benchmarks for a few algorithms only, extend the filter (`-f`) argument:
```
> By default, output and report file paths are `result.json` and `report.xlsx`. To specify custom file paths, run:
>
> In order to optimize datasets downloading and get more verbose output, use `--prefetch-datasets` and `-l INFO` arguments:
Why not use `--prefetch-datasets` as the default recommendation for regular runs?
Does it have any drawbacks?
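One plausible trade-off (speculation on my part, not documented sklbench behavior): prefetching pays the full download and disk cost for every dataset up front, while lazy fetching touches only the datasets a run actually needs. A toy cache-style sketch of that difference, with hypothetical names:

```python
import os
import tempfile


def fetch_dataset(name: str, cache_dir: str) -> str:
    """Return the cached path for a dataset, 'downloading' it on first use.

    Stand-in for a real downloader: the file write below is a
    placeholder for fetching the dataset over the network.
    """
    path = os.path.join(cache_dir, f"{name}.bin")
    if not os.path.exists(path):
        with open(path, "wb") as f:
            f.write(b"fake dataset bytes")  # placeholder for a download
    return path


with tempfile.TemporaryDirectory() as cache:
    # Prefetching resolves every dataset immediately, even ones a
    # filtered run would never touch; lazy fetching would call
    # fetch_dataset only when a benchmark first needs the data.
    all_names = ["dataset_a", "dataset_b", "dataset_c"]
    prefetched = [fetch_dataset(n, cache) for n in all_names]
    print(len(os.listdir(cache)))  # 3
```

If that model is right, prefetching mainly helps when the whole suite will run anyway (downloads overlap with nothing else), and mainly hurts on filtered runs or constrained disks.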
Description
The PR should start as a draft, then move to the ready-for-review state after CI has passed and all applicable checkboxes are checked.
This approach ensures that reviewers don't spend extra time asking for routine requirements.
You can remove a checkbox as not applicable only if it doesn't relate to this PR in any way.
Checklist to comply with before moving the PR from draft:
PR completeness and readability
Testing