gh-85160: improve performance of singledispatchmethod #107148
Conversation
Co-authored-by: Alex Waygood <Alex.Waygood@Gmail.com>
lgtm with nit comment
It's become a lot faster
Benchmark script:

```python
import pyperf

runner = pyperf.Runner()
runner.timeit(
    name="bench singledispatchmethod",
    stmt="_ = t.go(1, 1)",
    setup="""
from functools import singledispatch, singledispatchmethod

class Test:
    @singledispatchmethod
    def go(self, item, arg):
        pass

    @go.register
    def _(self, item: int, arg):
        return item + arg

t = Test()
""",
)
```

Mean +- std dev: [base] 1.37 us +- 0.01 us -> [opt] 410 ns +- 2 ns: 3.35x faster
AlexWaygood commented Jul 31, 2023 (edited)
I did some more playing around locally and found we could improve performance even more with a few tweaks. Proposed a PR to your branch at eendebakpt#5, which also adds some more test coverage.
eendebakpt commented Jul 31, 2023 (edited)
@AlexWaygood I merged your PR. One more optimization is in https://github.com/eendebakpt/cpython/pull/6/files; it eliminates the caching variable. I still have to do benchmarking on this.
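To make the discussion in this thread easier to follow, here is a minimal, hypothetical sketch of the caching idea being reviewed (not CPython's actual implementation): a descriptor that caches the generated dispatch method per instance in a `WeakKeyDictionary`, and falls back to rebuilding it when instances cannot be weak-referenced. The class name `CachingDispatchMethod` and the `_all_weakrefable_instances` flag follow the snippets quoted later in the review.

```python
import weakref
from functools import singledispatch, update_wrapper


class CachingDispatchMethod:
    """Hypothetical sketch of a singledispatchmethod-like descriptor
    that caches the generated dispatch method per instance."""

    def __init__(self, func):
        self.func = func
        self.dispatcher = singledispatch(func)
        self.register = self.dispatcher.register
        self._method_cache = weakref.WeakKeyDictionary()
        self._all_weakrefable_instances = True

    def __get__(self, obj, cls=None):
        if self._all_weakrefable_instances and obj is not None:
            try:
                return self._method_cache[obj]
            except TypeError:
                # obj cannot be weak-referenced: stop caching altogether
                self._all_weakrefable_instances = False
            except KeyError:
                pass

        dispatch = self.dispatcher.dispatch

        def _method(*args, **kwargs):
            # Dispatch on the class of the first non-self argument,
            # then bind the chosen implementation to obj.
            return dispatch(args[0].__class__).__get__(obj, cls)(*args, **kwargs)

        update_wrapper(_method, self.func)
        if self._all_weakrefable_instances and obj is not None:
            self._method_cache[obj] = _method
        return _method


class Test:
    @CachingDispatchMethod
    def go(self, item, arg):
        return "default"

    @go.register
    def _(self, item: int, arg):
        return item + arg
```

With the cache in place, repeated attribute accesses return the same bound wrapper (`t.go is t.go`), which is exactly the cost the benchmark above measures.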
LGTM. @ethanhs, how does this look to you now?
Latest benchmarks:

So the performance improves for the common case (normal methods) and is more or less equal for class methods and slotted classes. One more variation is to eliminate attribute lookups in the dispatch wrapper: eendebakpt/cpython@singledispatchmethod3...eendebakpt:cpython:singledispatchmethod3b
AlexWaygood commented Aug 1, 2023 (edited)
I can reproduce the speedup locally (great spot!), and it seems like a reasonable thing to do. Please name the variable

One other optimisation that shaves around 5 microseconds off the benchmark for slotted classes and other objects that can't be weakref'd (but doesn't do much for those that can):

```diff
--- a/Lib/functools.py
+++ b/Lib/functools.py
@@ -959,7 +959,7 @@
         def _method(*args, **kwargs):
             method = self.dispatcher.dispatch(args[0].__class__)
             return method.__get__(obj, cls)(*args, **kwargs)
-        _method.__isabstractmethod__ = self.__isabstractmethod__
+        _method.__isabstractmethod__ = getattr(self.func, "__isabstractmethod__", False)
```

But if you want to do ^that, you should probably add a comment about why we're duplicating the logic in the
Hmm, I thought I could see a speedup from this earlier, but now I can no longer reproduce it. I assume it was only noise -- best to leave it.
@mental32 and @eendebakpt, thanks so much for all your work on this!
```python
try:
    _method = self._method_cache[obj]
except TypeError:
    self._all_weakrefable_instances = False
```
Hmm, I wonder if this would have been a good idea, to keep memory usage down for the slow path (since in the slow path, we don't really need the WeakKeyDictionary at all):
Suggested change:

```diff
-        self._all_weakrefable_instances = False
+        del self._method_cache
```
@corona10, what do you think? Does it matter?
One could even delay the creation of the WeakKeyDictionary until the first invocation of `__get__`. The code will look a bit odd though; a simple `del` looks cleaner.
Yeah, I think I prefer the approach with `del`.
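A minimal sketch of the `del` variant discussed above, isolated into a hypothetical helper class (the names `LazyMethodCache` and `get_or_build` are illustrative, not from the PR): once an instance turns out not to be weak-referenceable, the `WeakKeyDictionary` is deleted outright, so the slow path carries neither the dictionary nor a boolean flag.

```python
import weakref


class LazyMethodCache:
    """Hypothetical helper showing the 'del' pattern: abandon the
    per-instance cache entirely on the first non-weakrefable key."""

    def __init__(self):
        self._method_cache = weakref.WeakKeyDictionary()

    def get_or_build(self, obj, build):
        try:
            cache = self._method_cache
        except AttributeError:
            # Caching was abandoned earlier; always take the slow path.
            return build(obj)
        try:
            return cache[obj]
        except TypeError:
            # obj cannot be weak-referenced: delete the cache for good.
            del self._method_cache
            return build(obj)
        except KeyError:
            method = cache[obj] = build(obj)
            return method
```

The `AttributeError` branch is what the "simple `del`" buys: after the first slotted (non-weakrefable) instance, every lookup short-circuits without touching a dictionary at all.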
A small followup to #107148

Co-authored-by: Serhiy Storchaka <storchaka@gmail.com>
This PR implements the idea from #85160 to improve performance of the singledispatchmethod by caching the generated dispatch method. It is a continuation of #23213.

Also see #106448
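For readers unfamiliar with the API being optimized, here is the standard-library usage pattern for `functools.singledispatchmethod` (adapted from the `Negator` example in the functools documentation). Every call like `n.neg(...)` goes through the descriptor's `__get__`, which is the path this PR speeds up by caching the generated method.

```python
from functools import singledispatchmethod


class Negator:
    @singledispatchmethod
    def neg(self, arg):
        raise NotImplementedError(f"Cannot negate {arg!r}")

    @neg.register
    def _(self, arg: int):
        return -arg

    @neg.register
    def _(self, arg: bool):
        return not arg


n = Negator()
assert n.neg(5) == -5        # dispatches on int
assert n.neg(True) is False  # bool is more specific than int
```

Note that dispatch happens on the type of the first argument after `self`, and `bool` wins over `int` for `True` because it is the more derived class.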