gh-127750: Fix singledispatchmethod caching #127839
Conversation
vodik commented Dec 12, 2024
I tried this locally and it seems to be working correctly, though I haven't benchmarked it either. The cached methods seem to stick around in the cache until the GC fires; I haven't been able to find a way to make them stick around after that.
dg-pb commented Jan 7, 2025
Does CPython re-use `id()` values? SO says yes, but that was in 2018. Is it still true? If so, can identity be reliably cached at all? Maybe a similar approach to `cached_property` would work?
eendebakpt commented Jan 7, 2025
Yes, CPython re-uses the id (the id is equal to the memory address, and memory is reused), but the id is unique during the lifetime of an object, see https://docs.python.org/3/library/functions.html#id. Your remark got me thinking, though: the PR right now is reliable, but maybe not very efficient.
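To make the id-reuse point concrete, here is a minimal constructed example (not from the PR), relying on CPython's reference-counting behaviour:

```python
class A:
    pass

a = A()
first = id(a)
del a          # refcount drops to zero, the instance is freed immediately
b = A()        # the new instance is often allocated at the same address
print(first == id(b))  # frequently True on CPython, so a cache keyed only
                       # by id() could silently hit on a different object
```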
dg-pb commented Jan 7, 2025
Could you clarify what you mean by this?
eendebakpt commented Jan 8, 2025
Right after running this code the cache is empty again, because nothing else holds a strong reference to the cached methods. We can address the issue by creating a reference from the instance to its cached method. This works, but has the disadvantage that we are adding some (private) attributes to the object instance. Note: the documentation https://docs.python.org/3/library/weakref.html#weakref.WeakValueDictionary states:
That text is a bit misleading: there is a reference to …
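A small sketch of the behaviour described above, using a stand-in class and cache rather than the PR's actual code (the names `Widget`, `cache`, and `_cached_method` are hypothetical): a `WeakValueDictionary` drops an entry as soon as nothing else references the value, and a strong reference from the instance keeps the entry alive at the cost of an extra attribute on the instance.

```python
import weakref

class Widget:
    def method(self):
        return 42

cache = weakref.WeakValueDictionary()  # stand-in for the per-method cache

obj = Widget()
bound = obj.method           # a fresh bound-method object
cache[id(obj)] = bound       # the cache holds it only weakly
del bound
print(len(cache))            # 0: with no strong reference left, the entry
                             # is discarded, so every lookup is a miss

bound = obj.method
cache[id(obj)] = bound
obj._cached_method = bound   # strong reference from the instance to the value
del bound
print(len(cache))            # 1: the entry now lives as long as the instance
```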
dg-pb commented Jan 8, 2025 • edited
I see now. Also, I would like the method to be cached for the lifetime of my instance; doesn't it currently garbage-collect perfectly good cached methods?
What about the same way as `cached_property`?

```python
class singledispatchmethod:
    def __set_name__(self, owner, name):
        self.attrname = name

    def __get__(self, instance, owner=None):
        try:
            cache = instance.__dict__
        except AttributeError:
            # disable caching going forward
            ...
        else:
            method = cache.get(self.attrname)
            if method is not None:
                return method
            else:
                ...
```
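A more complete, self-contained sketch of that idea (the class name, wrapper body, and fallback behaviour here are assumptions, not the PR's code): the descriptor stores its per-instance wrapper in the instance's `__dict__` under its own attribute name, so, like `functools.cached_property`, later lookups find it there directly and the cached wrapper lives exactly as long as the instance.

```python
from functools import singledispatch

class cached_singledispatchmethod:
    """Hypothetical descriptor caching its per-instance wrapper in __dict__."""

    def __init__(self, func):
        self.func = func
        self.dispatcher = singledispatch(func)
        self.register = self.dispatcher.register
        self.attrname = None

    def __set_name__(self, owner, name):
        self.attrname = name

    def __get__(self, instance, owner=None):
        if instance is None:
            return self
        try:
            cache = instance.__dict__
        except AttributeError:
            cache = None  # e.g. __slots__ without __dict__: skip caching
        if cache is not None:
            method = cache.get(self.attrname)
            if method is not None:
                return method

        def _method(*args, **kwargs):
            impl = self.dispatcher.dispatch(args[0].__class__)
            return impl(instance, *args, **kwargs)

        if cache is not None:
            # Non-data descriptor: the entry in the instance __dict__ shadows
            # the descriptor, so later attribute lookups never re-enter __get__.
            cache[self.attrname] = _method
        return _method

class Formatter:
    @cached_singledispatchmethod
    def render(self, arg):
        return f"object: {arg!r}"

    @render.register
    def _(self, arg: int):
        return f"int: {arg}"

f = Formatter()
print(f.render(3))              # int: 3
print(f.render("hi"))           # object: 'hi'
print("render" in f.__dict__)   # True: the wrapper is cached on the instance
```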
The review comment below refers to these context lines from the diff:

```python
        return _method
    dispatch = self.dispatcher.dispatch
    funcname = getattr(self.func, '__name__', 'singledispatchmethod method')
```
Maybe moving this line just above the `TypeError` would be a bit more readable?
The performance benefit this offers would only matter if there were code that actively depends on handling this exception.
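To make the suggestion concrete, a small self-contained sketch (the wrapper shape here mirrors the plain `functools.singledispatch` wrapper and is an assumption, not the PR's exact diff): `funcname` is only used in the error message, so it can be looked up immediately before the `TypeError` is raised.

```python
from functools import singledispatch

def make_method(func, instance):
    """Hypothetical per-instance wrapper factory, for illustration only."""
    dispatcher = singledispatch(func)

    def _method(*args, **kwargs):
        if not args:
            # funcname is needed only here, on the error path
            funcname = getattr(func, '__name__', 'singledispatchmethod method')
            raise TypeError(f'{funcname} requires at least 1 positional argument')
        impl = dispatcher.dispatch(args[0].__class__)
        return impl(instance, *args, **kwargs)

    return _method
```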
eendebakpt commented Jan 8, 2025 • edited
Yes, that is why I did not like the approach in this PR.
Nice idea. I created an alternative PR for this variation.
eendebakpt commented Mar 12, 2025
Issue resolved in other PRs, closing.
We address two issues:
- `singledispatchmethod` would work incorrectly with objects that compare equal (but are not identical)
- the `singledispatchmethod` cache would hold on to objects forever

See #127751
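For illustration, a minimal sketch of both symptoms, using a hypothetical value-like class and a `WeakKeyDictionary` as a stand-in for the kind of per-instance cache being discussed (none of this is the stdlib's actual code):

```python
import gc
import weakref

class Point:
    """Hypothetical class whose instances compare equal by value."""
    def __init__(self, x, y):
        self.x, self.y = x, y
    def __eq__(self, other):
        return (self.x, self.y) == (other.x, other.y)
    def __hash__(self):
        return hash((self.x, self.y))
    def describe(self):
        return f"Point({self.x}, {self.y}) at {id(self):#x}"

cache = weakref.WeakKeyDictionary()

# Issue 1: the cache compares keys by equality, so two equal but distinct
# instances share a single entry and one receives a method bound to the other.
a, b = Point(1, 2), Point(1, 2)
cache[b] = b.describe
print(cache[a]())            # reports b's address even when looked up via a

# Issue 2: the cached bound method holds a strong reference back to its
# instance, so the weakly referenced key can never be collected.
ref = weakref.ref(b)
del b
gc.collect()
print(ref() is not None)     # True: the cache keeps the instance alive
```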