gh-134761: Use deferred reference counting for threading concurrency primitives #134762
base: main
Conversation
I'd appreciate feedback from anyone here. There's no precedent in the standard library for using deferred reference counting at the Python level, so we're in some uncharted waters here. I don't think anyone would have been relying on when
Doc/library/sys.rst Outdated
@@ -290,6 +290,25 @@ always available. Unless explicitly noted otherwise, all variables are read-only
This function is specific to CPython. The exact output format is not
defined here, and may change.
.. function:: _defer_refcount(op)
I didn't take a look overall, but I'm not sure it's worth exposing this through the documentation, since it is not a public API.
Yeah, I'm a little bit on the fence about it. We already do this for some other functions in sys: _is_gil_enabled, _is_interned, _is_immortal, and a few others. Basically, it's a way to expose implementation details that are useful at a Python level.
Overall, I think this is worth documenting. There's no other way to improve scaling from Python right now, and it's similar to using an unstable C API (in the sense that it could be removed in any version).
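To illustrate, the private helpers mentioned above can be probed defensively. This is only a sketch: it assumes nothing about which of these functions the running interpreter actually provides, so every lookup is guarded.

```python
import sys

# Private CPython helpers; availability varies by version and build,
# so guard each lookup instead of assuming it exists.
checks = {
    name: hasattr(sys, name)
    for name in ("_is_gil_enabled", "_is_interned", "_is_immortal")
}
print(checks)
```

Because these are underscore-prefixed implementation details, any code touching them should degrade gracefully when they are absent.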
What about documenting this in https://github.com/python/cpython/tree/main/InternalDocs somewhere? I fully agree with removing this from the normal documentation (for the reasons mentioned), but as a user/developer/tester I find the Python API to _defer_refcount (and others like _is_immortal) useful, and some documentation is always nice.
corona10 commented May 27, 2025 (edited)
Personal opinion: I think that we should not expose the free-threaded implementation to the Python level and to end users. Fragmentation in the C API alone is enough.
Reiterating my previous comment: I'm +0.1 for documenting it. I'll happily yield if others are strongly opposed.
corona10 commented May 27, 2025 (edited)
I am -1 on documentation. We have a bunch of private Python APIs that are related to the C API, but we don't expose them in the Python documentation, since we don't want people to depend on them.
Same. I truly think we shouldn't be exposing anything that relies on our implementation of refcounting. It's causing a hard enough time for other implementations already, and it even causes problems for ourselves when we try to optimize away refcounting.
In general, I agree. I've removed the documentation note from this PR. That said, I do think it's important that we expose and document some (unstable) APIs for working with the current reference counting implementation, because otherwise people will try to hack it together themselves. For example, Nanobind was using silly hacks with immortalization before we exposed some better APIs for messing with reference counting. I think that's much more dangerous.
Modules/_threadmodule.c Outdated
@@ -950,6 +950,7 @@ lock_new_impl(PyTypeObject *type)
if (self == NULL) {
return NULL;
}
_PyObject_SetDeferredRefcount((PyObject *)self);
Can you move the call after initializing self->lock? Same remark for rlock_new_impl() below.
@@ -2653,6 +2654,23 @@ sys__is_gil_enabled_impl(PyObject *module)
#endif
}
/*[clinic input]
sys._defer_refcount -> bool
Please document it in Doc/library/sys.rst. If it "should not be used", add a clear explanation there of why it should not be used. If it's not documented, the lack of documentation doesn't prevent users from using it.
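For context, this is roughly how the proposed function would be called from Python. The sketch below is an assumption-laden illustration, not part of the PR: sys._defer_refcount may not exist in the running interpreter, so the lookup is guarded, and on non-free-threaded builds the call (if present) is expected to simply report failure.

```python
import sys
import threading

# sys._defer_refcount is the private API proposed in this PR; guard the
# lookup since it may not exist in the running interpreter (assumption).
defer = getattr(sys, "_defer_refcount", None)

lock = threading.Lock()
if defer is not None:
    enabled = defer(lock)  # True if deferred refcounting was applied
else:
    enabled = False
print(enabled)
```

The guarded lookup matters precisely because this is an unstable, underscore-prefixed API that could disappear in any version.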
See Ken and Donghee's comments.
Well, I disagree with them. IMO we should document sys functions.
I would only be supportive of documenting this if we were allowed to change it in a minor version with no deprecation period. My understanding is that PyUnstable in the C API allows that, but exposing it as sys._x means we are stuck with at least 2 deprecation cycles (and a recommended 5). Users should not rely on this function in the first place, except in very specific scenarios.
One way to "bypass" this is to make the function a no-op in future versions of Python, once we solve this issue altogether. But I don't know what users will rely on by then, so I'm a bit worried.
Hmm, I thought we were allowed to change sys._x things in minor versions without deprecation. If not, that's a problem.
I think we're getting a bit hung up on this point. We can add or remove the documentation for _defer_refcount later; it's not too important. Does everything else look fine here?
Well, I am preparing a better proposal for this approach. Give me a few hours.
ZeroIntensity commented May 28, 2025 (edited)
Ok, cool. Feel free to cc me on it.
Something we also need to consider is whether we want to address this for 3.14. Should this general idea be considered a bugfix or a feature?
See: #134819
I think that this is an improvement rather than a bug fix.
I don't understand your benchmark well. Can you show numbers before/after this change?
Updated the description. The benchmark is just trying to measure the amount of time spent accessing the object. See also the issue.
@Fidget-Spinner Out of curiosity, can we just apply deferred refcounting to traced objects that have a possibility of high contention?
From my understanding, the issue with automatically applying DRC is that it only lets an object be deallocated during a garbage collection, which breaks code that relies on immediate reclamation. Maybe we could enable it only for objects without a finalizer?
You mean track objects that are frequently shared and apply it to those? It would be quite easy to implement; the only problem, as Peter said, is that we can't defer finalizers of things that take up a huge amount of memory and rely on immediate reclamation.
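The finalizer-timing hazard discussed above can be sketched in a few lines. This is a minimal illustration assuming standard CPython semantics, where dropping the last reference runs __del__ immediately; under deferred reference counting, the finalizer would instead wait for a GC pass.

```python
import gc

class Resource:
    finalized = False
    def __del__(self):
        # Marks reclamation; code in the wild often frees files,
        # sockets, or large buffers here and expects it to run promptly.
        Resource.finalized = True

r = Resource()
del r                       # immediate refcounting: __del__ runs right here
ran_immediately = Resource.finalized
gc.collect()                # a deferred object would only die in a GC pass
print(ran_immediately, Resource.finalized)
```

Code that depends on `ran_immediately` being true is exactly the code that would break if DRC were applied automatically to objects with finalizers.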
corona10 commented May 27, 2025 (edited)
If we can apply it conditionally, as Peter said, the behavior change would not be an issue. OTOH, memory consumption also would not be an issue if we can provide an option to disable tracing on devices with low memory. Most production machines have enough memory, so it would not be an issue, IIUC. (Please let me know if it will consume a huge amount of memory.)
Side note: the reason I am talking about tracing is that I am now starting to fear that we will attach the deferred refcount API all over the codebase for performance reasons. It would be chaos for us.
It's generally a one-line change to an object's initializer, so why would that be chaos? We could also shift the decision to the user and add a generic
corona10 commented May 27, 2025 (edited)
No. You are now starting with microbenchmarks and then applying changes based on them. Can you guarantee that the case is limited? A better solution would be one where we don't have to care about such cases and the interpreter applies it automatically. I am fine with applying it just for this case, but let's discuss a better solution and think about what we can provide.
You can work around the scaling issues in the repro pretty easily by passing the lock as an argument:

```python
import threading
import time

lock = threading.Lock()

def scale(lock):
    a = time.perf_counter()
    for _ in range(10000000):
        lock.locked()
    b = time.perf_counter()
    print(b - a, "s")

threads = [threading.Thread(target=scale, args=(lock,)) for _ in range(8)]
for thread in threads:
    thread.start()
```

I think we should be very cautious about exposing implementation details like deferred reference counting.
FWIW, I don't think we're exposing it, per se. Things will "just work" from the user's perspective. I'd really rather the interpreter do it than force weird implementation details, like using locals.
Maybe you can extract the Modules/_threadmodule.c changes into a separate PR, since these changes don't require adding sys._defer_refcount().
Ok.
I don't really see the motivation for these particular classes. It seems unlikely to me that someone is checking

I agree with @mpage that we should be cautious about exposing implementation details. The motivation here doesn't seem sufficient to me to overcome that caution.
I don't think we're exposing any actual implementation details, right? The reference count is still hidden to
This scales much better. Using this script:

With this applied, I see similar performance to if lock was a local variable:

Prior to this change, I see:
threading primitives are subject to reference count contention #134761

📚 Documentation preview 📚: https://cpython-previews--134762.org.readthedocs.build/