Add benchmarks for ctypes function call overhead #197
base: main
Conversation
Wanted to add a note: This is definitely a "microbenchmark" that isn't a great example of a real-world workload.
Mostly LGTM. There are some slight changes we should consider though, particularly using runner.bench_time_func() for micro benchmarks.
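For reference, a minimal sketch of the bench_time_func() pattern, assuming a pyperf-based benchmark script; the ctypes calls themselves are stubbed out here:

    import time
    import pyperf

    def bench_calls(loops):
        # pyperf passes the loop count; we time the whole batch ourselves and
        # return the elapsed seconds, which keeps per-call timer overhead out
        # of a micro benchmark.
        t0 = time.perf_counter()
        for _ in range(loops):
            pass  # placeholder for the ctypes calls being measured
        return time.perf_counter() - t0

    if __name__ == "__main__":
        runner = pyperf.Runner()
        runner.bench_time_func("ctypes", bench_calls)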
void_foo_void()
int_foo_int(1)
void_foo_int(1)
void_foo_int_int(1, 2)
void_foo_int_int_int(1, 2, 3)
void_foo_int_int_int_int(1, 2, 3, 4)
void_foo_constchar(b"bytes")
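For context, a rough sketch of how functions like these are typically declared with ctypes before being called in the benchmark loop; the library name cfoo.so is hypothetical, not the one used in this PR:

    import ctypes

    # Hypothetical shared library built from the benchmark's C sources.
    lib = ctypes.CDLL("./cfoo.so")

    # Declaring restype/argtypes up front is presumably what the
    # ctypes_argtypes variant measures, as opposed to letting ctypes
    # infer argument types on each call.
    lib.void_foo_void.restype = None
    lib.void_foo_void.argtypes = []

    lib.int_foo_int.restype = ctypes.c_int
    lib.int_foo_int.argtypes = [ctypes.c_int]

    lib.void_foo_constchar.restype = None
    lib.void_foo_constchar.argtypes = [ctypes.c_char_p]

    lib.void_foo_void()
    lib.int_foo_int(1)
    lib.void_foo_constchar(b"bytes")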
The real benefit of micro-benchmarks is that they narrow down where performance regressions might be. With that in mind, would these different signatures have enough independent potential for regression that it would make sense to have a separate benchmark for each? Would it be worth bothering even if they did?
pyperformance/data-files/benchmarks/bm_ctypes/run_benchmark_argtypes.py (outdated; resolved)
Thanks for the changes. I just have a couple more suggestions.
LGTM, but I am a bit concerned that benchmarking all of the call argtypes in one iteration can hide regressions. I can't think of any way to fix that apart from splitting this benchmark further, one benchmark per call signature.
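If splitting is ever wanted, one hedged way to do it is to register one pyperf benchmark per call signature so each gets its own timing; the names and no-op stand-ins below are illustrative only:

    import time
    import pyperf

    # No-op stand-ins for the real ctypes calls (void_foo_void, int_foo_int, ...)
    # loaded from the benchmark's C extension.
    CASES = {
        "ctypes_void_foo_void": lambda: None,
        "ctypes_int_foo_int": lambda: None,
        "ctypes_void_foo_constchar": lambda: None,
    }

    def make_time_func(call):
        def time_func(loops):
            t0 = time.perf_counter()
            for _ in range(loops):
                call()
            return time.perf_counter() - t0
        return time_func

    if __name__ == "__main__":
        runner = pyperf.Runner()
        for bench_name, call in CASES.items():
            # Each signature becomes a separate benchmark, so a regression in
            # one calling convention shows up as its own result instead of
            # being averaged away.
            runner.bench_time_func(bench_name, make_time_func(call))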
Mostly LGTM. I've left a few comments on minor things and to get clarification in a couple of spots.
A quick follow-up suggestion...
LGTM
I'll leave my recommended change up to you.
LGTM
There's only one small suggestion, which I'll leave to your discretion.
if os.path.isfile(req):
    name = os.path.basename(req)
    if name == "setup.py":
        req = os.path.dirname(req)
The reason we use the dirname isn't obvious, so it may be worth adding a comment here indicating pip's limitations.
# pip doesn't support installing a setup.py,
# but it does support installing from the directory it is in.
This comment is what I was thinking of above. Consider moving it there.
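A small sketch of how the moved comment could sit next to the dirname logic; the enclosing helper name here is made up for illustration:

    import os.path

    def _resolve_requirement(req):
        # Hypothetical wrapper around the snippet from the diff above.
        if os.path.isfile(req):
            name = os.path.basename(req)
            if name == "setup.py":
                # pip doesn't support installing a setup.py,
                # but it does support installing from the directory it is in.
                req = os.path.dirname(req)
        return req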
Add ctypes and ctypes_argtypes benchmarks
This additionally adds support for building C extensions as part of creating a benchmark's virtual environment, since that didn't seem to be possible prior to this change.
See faster-cpython/ideas#370 for some background discussion.
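As a rough illustration of the C-extension build support mentioned above, a benchmark could ship a minimal setuptools setup.py and have it installed while its virtual environment is created; the package, module, and file names below are placeholders, not the ones added in this PR:

    from setuptools import Extension, setup

    setup(
        name="bm_ctypes_cext",  # placeholder package name
        version="0.0.1",
        # Compiles cfoo.c into a shared object the benchmark can load.
        ext_modules=[Extension("_cfoo", sources=["cfoo.c"])],
    )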