gh-109653: Fix py312 regression in the import time of random #110221
Conversation
Given that this is a regression, should we backport it into 3.12?
AlexWaygood commented Oct 2, 2023 • edited
I was wondering that. I'd vote in favour of doing so, since it doesn't seem particularly high-risk to me. But I'd like to hear Raymond's and/or Greg's thoughts.
A backport to 3.12 would be reasonable.
Great, I've scheduled the backport. Thanks for the review!
Thanks @AlexWaygood for the PR 🌮🎉. I'm working now to backport this PR to: 3.12.
GH-110247 is a backport of this pull request to the 3.12 branch.
@@ -65,7 +65,7 @@
try:
    # hashlib is pretty heavy to load, try lean internal module first
This code is technically awkward... It tries to speed up import time by circumventing hashlib, which means it loads and uses the slowest possible sha512 implementation by default (on most platforms, hashlib will pick up OpenSSL 3's accelerated sha512 support by default and not use our builtin). So: faster startup time in exchange for a slower runtime computation? Thankfully this is only ever used by seed(), which is a single/constant number of calls for most programs, on tiny data, so there is zero reason to care about sha512 performance for its purposes. The slower implementation may even be faster on small seed data anyway, due to less setup overhead.
Nothing to do here. This works for random's purposes, and thanks for the fixup. But this being a regression in the first place demonstrates how fragile direct use of internal details can be (and, indirectly, how much of an overhaul hashlib.py could use).
I'd file an issue rather than leaving this comment in the merged-PR void if there were anything concrete to describe and tackle, but there isn't. :)
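For reference, the pattern under discussion is the module-level import fallback near the top of random.py. The snippet below is an illustrative sketch rather than a verbatim copy of the merged diff; it assumes the lean internal module is _sha2, the consolidated SHA-2 extension module that replaced _sha512 in 3.12:

```python
try:
    # Lean internal extension module: cheap to import, so `import random`
    # stays fast, even though this C implementation may hash more slowly
    # than an OpenSSL-backed one.
    from _sha2 import sha512 as _sha512
except ImportError:
    # Fall back to hashlib, which is heavier to import but will typically
    # select an accelerated (e.g. OpenSSL) sha512 implementation.
    from hashlib import sha512 as _sha512
```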
As an optimisation to reduce the import time of the module, random first tries to import sha512 from the internal _sha512 module before falling back to hashlib. The problem, however, is that Python no longer has a _sha512 module! It was removed in 0b13575, by @gpshead. That means we're currently always falling back to the slow path in random.py, leading to the import time of random being far slower than it should be.
Importing sha512 from the correct module in the fast path cuts 60% off the import time of random.
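Not part of the PR itself, but one quick way to check an improvement like this is CPython's -X importtime option, which writes a cumulative per-module import-time breakdown to stderr. A rough sketch:

```python
import subprocess
import sys

# Launch a fresh interpreter so `random` isn't already cached in sys.modules,
# and capture the per-module import timings that -X importtime writes to stderr.
result = subprocess.run(
    [sys.executable, "-X", "importtime", "-c", "import random"],
    capture_output=True,
    text=True,
)

# The final lines include the cumulative time attributed to importing `random`.
print(result.stderr)
```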