gh-103092: Add a mutex to make the random state of rotatingtree concurrent-safe #115301


Merged

erlend-aasland merged 5 commits into python:main from aisk:isolate-lsprof-lock on Feb 29, 2024

Conversation

aisk (Member) commented Feb 11, 2024 (edited)

The only two static variables in rotatingtree.c, random_value and random_stream (the file is only used by the _lsprof module), are just the state of a pseudo-random generator. They can be shared between interpreters if we add a mutex to make them concurrent-safe. This can be done easily, and it makes the _lsprof module isolated.

Another way to isolate _lsprof is to store the two variables in module state. That would involve more work, and reviewing modifications to existing functions to pass the module state around. See #115130.

Adding the mutex does not introduce a noticeable performance decrease. See the comment below for a micro benchmark.
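The pattern described above can be sketched in Python. This is a minimal analogue, not the actual C code in rotatingtree.c: the generator shape (a multiplicative-congruential value plus a bit stream drained a few bits at a time) and the multiplier are assumptions for illustration. The point is simply that both pieces of shared state are read and written under a single lock.

```python
import threading

class SharedRandomBits:
    """Illustrative analogue of a two-variable PRNG state
    (value + bit stream) guarded by one lock, so concurrent
    callers can never observe a half-updated refill."""

    def __init__(self):
        self._lock = threading.Lock()
        self._value = 1    # generator state (multiplier below is illustrative)
        self._stream = 0   # bits not yet handed out

    def randombits(self, bits):
        # the lock plays the role of the mutex this PR adds around the C state
        with self._lock:
            if self._stream < (1 << bits):
                # refill: advance the generator and restock the bit stream
                self._value = (self._value * 1082527) & 0xFFFFFFFF
                self._stream = self._value
            result = self._stream & ((1 << bits) - 1)
            self._stream >>= bits
            return result
```

Because every read-modify-write of the two variables happens while the lock is held, interleaved callers from different threads (or interpreters, in the C case) cannot corrupt the state, which is all the concurrent-safety a profiler's tie-breaking randomness needs.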

@erlend-aasland

aisk (Member, Author) commented Feb 11, 2024 (edited)

Code

```python
import sys
import threading
import _xxsubinterpreters
import cProfile

code = """
def f():
    import re
    import json
    import pickle
    d = {str(x): {x: x} for x in range(1000)}
    for _ in range(100):
        re.compile("foo|bar")
        json.loads(json.dumps(d))
        pickle.loads(pickle.dumps(d))
"""

def run_single():
    ctx = {}
    exec(code, ctx)
    cProfile.runctx("f()", ctx, {})

def run_multi():
    ts = []
    interps = []
    for _ in range(4):
        interp = _xxsubinterpreters.create(isolated=1)
        interps.append(interp)
        c = code + "import cProfile; cProfile.run('f()')"
        t = threading.Thread(target=_xxsubinterpreters.run_string,
                             args=[int(interp), c])
        t.start()
        ts.append(t)
    for t in ts:
        t.join()
    for interp in interps:
        _xxsubinterpreters.destroy(int(interp))

if len(sys.argv) > 1 and sys.argv[1] == 'multi':
    run_multi()
else:
    run_single()
```

Single interpreter:

base (b104360):

```
❯ ./python.exe foo.py
         12726 function calls (12464 primitive calls) in 96.526 seconds
```

current:

```
❯ ./python.exe foo.py
         12726 function calls (12464 primitive calls) in 97.027 seconds
```

Multiple interpreters:

I'm using an Intel MacBook with 4 physical cores. As the code above shows, 4 isolated interpreters are used for this benchmark.

base (b104360):

With cpython/Modules/_lsprof.c, lines 1008 to 1009 at b104360:

```c
{Py_mod_multiple_interpreters, Py_MOD_MULTIPLE_INTERPRETERS_NOT_SUPPORTED},
// {Py_mod_multiple_interpreters, Py_MOD_PER_INTERPRETER_GIL_SUPPORTED},
```

modified to enable `Py_MOD_PER_INTERPRETER_GIL_SUPPORTED`.

Although this is not safe, I think it doesn't matter for this microbenchmark.

```
❯ ./python.exe foo.py multi
         13086 function calls (12807 primitive calls) in 118.085 seconds
```

All 4 interpreters finished at roughly the same time (±0.x seconds).

current:

```
❯ ./python.exe foo.py multi
         13086 function calls (12807 primitive calls) in 115.202 seconds
```

Summary

On my machine, the execution time before and after the change varies from run to run, sometimes better and sometimes worse. I believe the performance difference introduced falls within measurement noise.
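The "within noise" judgment above can be made concrete with a small statistical check. This is a hypothetical sketch (the timing samples below are illustrative stand-ins, not the runs reported above): treat the patched mean as indistinguishable from baseline if it falls within a couple of standard deviations of the baseline sample.

```python
import statistics

def within_noise(base_runs, new_runs, k=2.0):
    """Crude check: is the mean of new_runs within k standard
    deviations of the baseline sample's mean?"""
    mu = statistics.mean(base_runs)
    sigma = statistics.stdev(base_runs)
    return abs(statistics.mean(new_runs) - mu) <= k * sigma

# Illustrative timings in seconds (hypothetical samples):
base = [96.5, 97.1, 96.8, 97.3]
patched = [97.0, 96.6, 97.2]
print(within_noise(base, patched))  # → True for these samples
```

A proper benchmark would use many more runs (e.g. via pyperf), but even this rough test shows why a half-second swing on a ~97 second workload is not evidence of a regression.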

erlend-aasland (Contributor) left a comment

AFAICS, it should be OK to share the pseudo-random generator between interpreters. @ericsnowcurrently, thoughts?

erlend-aasland (Contributor) commented

(Needs a NEWS entry, though.)

erlend-aasland (Contributor) commented

I'll land this sometime next week, to give @ericsnowcurrently a chance to chime in.

ericsnowcurrently (Member) commented

I'll take a look in the next couple days. Thanks for the heads-up!


ericsnowcurrently (Member) left a comment


LGTM

erlend-aasland (Contributor) commented

Thanks for the PR, @aisk, and thanks for the review, @ericsnowcurrently 🎉


erlend-aasland merged commit ca56c3a into python:main on Feb 29, 2024
aisk deleted the isolate-lsprof-lock branch on Mar 1, 2024 at 15:51
woodruffw pushed a commit to woodruffw-forks/cpython that referenced this pull request on Mar 4, 2024
adorilson pushed a commit to adorilson/cpython that referenced this pull request on Mar 25, 2024
diegorusso pushed a commit to diegorusso/cpython that referenced this pull request on Apr 17, 2024
Reviewers

@ericsnowcurrently approved these changes
@erlend-aasland approved these changes

3 participants: @aisk, @erlend-aasland, @ericsnowcurrently
