Update example commands in README #182


Open

Alexsandruss wants to merge 2 commits into IntelPython:main from Alexsandruss:dev/readme-update

Conversation

@Alexsandruss (Contributor)

Description

  • Change example commands to frequently used ones.

The PR should start as a draft and move to the ready-for-review state only after CI has passed and all applicable checkboxes are checked.
This approach ensures that reviewers don't spend extra time asking for routine requirements.

You can remove a checkbox as not applicable only if it doesn't relate to this PR in any way.

Checklist to comply with before moving the PR from draft:

PR completeness and readability

  • I have reviewed my changes thoroughly before submitting this pull request.
  • I have commented my code, particularly in hard-to-understand areas.
  • I have updated the documentation to reflect the changes or created a separate PR with the update and provided its number in the description, if necessary.
  • The Git commit message contains an appropriate signed-off-by string (see CONTRIBUTING.md for details).
  • I have added the respective label(s) to the PR if I have permission to do so.
  • I have resolved any merge conflicts that might occur with the base branch.

Testing

  • I have run it locally and tested the changes extensively.
  • All CI jobs are green or I have provided justification why they aren't.
  • I have extended the testing suite if new functionality was introduced in this PR.

@Alexsandruss added the docs (documentation and readme update) label on Apr 15, 2025
By default, output and report file paths are `result.json` and `report.xlsx`. To specify custom file paths, run:
In order to optimize datasets downloading and get more verbose output, use the `--prefetch-datasets` and `-l INFO` arguments:
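The exact commands these two README lines introduce are not visible in this diff view; the sketch below is a hedged reconstruction that only combines flags appearing elsewhere in this PR (`--config`, `--report`, `--result-file`, `--report-file`, `--prefetch-datasets`, `-l`), so the actual README text may differ:

```bash
# Hedged sketch: write results and the report to custom paths instead of
# the default result.json / report.xlsx (file names are illustrative)
python -m sklbench --config configs/sklearn_example.json --report \
    --result-file my_result.json --report-file my_report.xlsx

# Hedged sketch: prefetch datasets before benchmarking and enable verbose logging
python -m sklbench --config configs/sklearn_example.json \
    --prefetch-datasets -l INFO
```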
Contributor:

Please remember to provide info about the requirements for kaggle data.
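For reference, if the Kaggle-hosted datasets are fetched through the standard `kaggle` Python package (an assumption; this PR does not show the download mechanism), the requirement usually amounts to providing Kaggle API credentials, roughly like this:

```bash
# Hedged sketch, assuming the standard `kaggle` package is used for downloads.
# Create an API token on kaggle.com (Account -> Create New API Token), then
# either place the downloaded kaggle.json at the default location ...
mkdir -p ~/.kaggle
cp ~/Downloads/kaggle.json ~/.kaggle/kaggle.json   # source path is illustrative
chmod 600 ~/.kaggle/kaggle.json

# ... or export the credentials as environment variables instead
export KAGGLE_USERNAME=<your-kaggle-username>
export KAGGLE_KEY=<your-kaggle-api-key>
```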

```bash
# Same command with shorter argument aliases for typing convenience
python -m sklbench -c configs/regular \
    -f algorithm:library=sklearnex algorithm:device=cpu \
    -e ENV_NAME -r result_sklearnex_cpu_regular.json
```
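For readers less familiar with the aliases, here is a hedged expansion of the same command; `--config` and `--result-file` appear elsewhere in this PR, while the long forms assumed for `-f` and `-e` (`--filters`, `--environment-name`) are guesses that may not match the actual CLI:

```bash
# Hedged long-form equivalent of the aliased command above.
# NOTE: --filters and --environment-name are assumed spellings of -f and -e,
# not verified against the sklbench CLI; --config and --result-file are
# taken from other snippets in this PR.
python -m sklbench --config configs/regular \
    --filters algorithm:library=sklearnex algorithm:device=cpu \
    --environment-name ENV_NAME \
    --result-file result_sklearnex_cpu_regular.json
```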
Contributor:

This `ENV_NAME` is very unclear to me.

The docs say:

"Environment name to use instead of its configuration hash."

But that doesn't tell me what the environment is or what it is used for. Should `ENV_NAME` be substituted with something else? Is it required?

Vika-F reacted with a thumbs-up emoji.


Same question.


```bash
python -m sklbench.report --result-files result_1.json result_2.json --report-file report_example.xlsx
python -m sklbench.report \
```
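The continuation of the new multi-line form is cut off in this view. Assuming it takes the same arguments as the one-line command above (an assumption, since the remaining lines are simply not shown here), it would expand to something like:

```bash
# Hedged expansion of the truncated multi-line form, reusing the arguments
# from the one-line command above; the actual README text may differ.
python -m sklbench.report \
    --result-files result_1.json result_2.json \
    --report-file report_example.xlsx
```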
Contributor:

I think this one is missing the flag that's needed when mixing sklearn and sklearnex.

```bash
python -m sklbench --config configs/sklearn_example.json --report --result-file result_example.json --report-file report_example.xlsx
# ...
-f algorithm:library=sklearnex algorithm:device=cpu algorithm:estimator=PCA,KMeans
```
Contributor:

Here it could mention that these algorithms need to be in the config JSON.

```bash
    --prefetch-datasets -l INFO
```

To select measurement for few algorithms only, extend filter (`-f`) argument:
Contributor:

Suggested change:

- To select measurement for few algorithms only, extend filter (`-f`) argument:
+ To run benchmarks for a few algorithms only, extend filter (`-f`) argument:
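Putting the snippets from this PR together, a run restricted to a couple of estimators would look roughly like this (a hedged sketch that only combines flags shown above; the config path and output file names are illustrative):

```bash
# Hedged sketch: benchmark only PCA and KMeans from sklearnex on CPU.
# All flags appear elsewhere in this PR; file names are illustrative.
python -m sklbench -c configs/regular \
    -f algorithm:library=sklearnex algorithm:device=cpu algorithm:estimator=PCA,KMeans \
    --report --result-file result_pca_kmeans.json --report-file report_pca_kmeans.xlsx
```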

By default, output and report file paths are `result.json` and `report.xlsx`. To specify custom file paths, run:
In order to optimize datasets downloading and get more verbose output, use the `--prefetch-datasets` and `-l INFO` arguments:


Why not use `--prefetch-datasets` as the default recommendation for regular runs?
Does it have any drawbacks?


Reviewers

@Vika-F left review comments

@david-cortes-intel requested changes

Requested changes must be addressed to merge this pull request.

Assignees

No one assigned

Labels

docs (documentation and readme update)

Projects

None yet

Milestone

No milestone


3 participants

@Alexsandruss, @Vika-F, @david-cortes-intel
