gh-132519: fix excessive mem usage in QSBR with large blocks #132520


Closed
tom-pytel wants to merge 2 commits into python:main from tom-pytel:fix-issue-132519

Conversation

@tom-pytel (Contributor) commented Apr 14, 2025 (edited)

Memory usage numbers (proposed fix explained below):

```
          VmHWM
GIL      135104 kB  - normal GIL-enabled baseline
noGIL   6702788 kB  - free-threaded current QSBR behavior
fix      517760 kB  - free-threaded with _PyMem_ProcessDelayed() in _Py_HandlePending()
```

Test script:

```
import threading
from queue import Queue

def thrdfunc(queue):
    while True:
        l = queue.get()
        l.append(0)  # force resize in non-parent thread which will free using _PyMem_FreeDelayed()

queue = Queue(maxsize=2)
threading.Thread(target=thrdfunc, args=(queue,)).start()

while True:
    l = [None] * int(3840*2160*3/8)  # sys.getsizeof(l) ~= 3840*2160*3 bytes
    queue.put(l)
```

Delayed memory free checks (and subsequent frees if applicable) currently only occur in one of two situations:

  • Garbage collection, which doesn't trigger often enough in this script (though triggering it manually does solve the problem).
  • On a _PyMem_FreeDelayed() call, once the number of pending delayed-free memory blocks reaches 254. It then waits another 254 frees before checking again, even if none of the pending blocks could be freed this time, which is a long wait for big buffers.

This works well for many small objects, but larger buffers can accumulate quickly, so checks need to happen more often.
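For scale: each list in the test script occupies roughly 3840*2160*3 ≈ 24.9 MB, so a backlog of 254 such blocks already pins about 6.3 GB, which is the same order as the ~6.7 GB noGIL high-water mark above. A toy calculation (illustrative only, not CPython code):

```
QSBR_DEFERRED_LIMIT = 254              # count threshold described above
BLOCK_BYTES = 3840 * 2160 * 3          # ~24.9 MB per list in the test script

# Worst-case memory pinned before the first reclamation check is even attempted:
backlog = QSBR_DEFERRED_LIMIT * BLOCK_BYTES
print(f"{backlog / 1e9:.1f} GB")       # ~6.3 GB, same order as the noGIL VmHWM above
```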

I tried a few things, but adding _PyMem_ProcessDelayed() to _Py_HandlePending() seems to work well and be safe: a QSBR_QUIESCENT_STATE has just been reported at that point, so there is a fresh chance to actually free. It happens often enough that memory usage is kept down, and if there is nothing to free then _PyMem_ProcessDelayed() is very cheap.
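As a rough sketch of the difference between the two triggers, here is a pure-Python model; the names pending and goal_reached are illustrative assumptions standing in for the per-thread delayed-free list and the QSBR sequence check, not the actual C code:

```
def process_delayed(pending, goal_reached):
    # cheap no-op when nothing is queued; otherwise drop every block whose
    # QSBR goal has been reached and which is therefore safe to free
    pending[:] = [b for b in pending if not goal_reached(b)]

def free_delayed(pending, block, goal_reached, limit=254):
    # existing trigger: only scan once the pending count reaches the limit
    pending.append(block)
    if len(pending) >= limit:
        process_delayed(pending, goal_reached)

def handle_pending(pending, goal_reached):
    # trigger added by this PR: also scan on the eval-breaker path, right
    # after the thread has reported a quiescent state
    process_delayed(pending, goal_reached)
```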

Another option would be to track the amount of pending memory to be freed and increase the frequency of free attempts if that number gets too large, but to start with, this small change seems to solve the problem well enough. Scheduling a GC when pending frees get too high would also work, but that seems like a roundabout way to arrive at _PyMem_ProcessDelayedNoDealloc().
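If the byte-tracking heuristic were pursued instead, it might look roughly like the following model (the names and the budget value are assumptions for illustration, not anything from the PR):

```
COUNT_LIMIT = 254                  # existing count threshold
BYTE_BUDGET = 64 * 1024 * 1024     # hypothetical budget, not a value from the PR

def free_delayed_tracked(pending, block, nbytes, goal_reached):
    # scan when either the pending count or the total pending bytes gets large
    pending.append((block, nbytes))
    total_bytes = sum(n for _, n in pending)
    if len(pending) >= COUNT_LIMIT or total_bytes >= BYTE_BUDGET:
        pending[:] = [(b, n) for b, n in pending if not goal_reached(b)]
```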

Performance, as checked with the full pyperformance suite, is unchanged with the fix (0.17% better on average, i.e. noise).

@tom-pytel (Contributor, Author)

Ping @colesbury, @kumaraditya303. Is there a better place for the _PyMem_ProcessDelayed()? I thought _PyThreadState_Attach() at first, but that is too low level.

@colesbury (Contributor) left a comment

I don't think we should do this. You risk accidentally introducing quadratic behavior.

We will likely tweak the heuristics in the future for when _PyMem_ProcessDelayed() is called, but that should be based on data for real applications.
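To make the quadratic-behavior concern concrete with a toy example (hypothetical, not from the thread): if the pending list is rescanned on every eval-breaker trip while readers prevent anything from being freed, n deferred blocks cost on the order of n^2 scan steps in total:

```
def scan_steps(n_blocks):
    # each trip scans the whole list, but nothing is freeable yet
    pending, work = [], 0
    for i in range(n_blocks):
        pending.append(i)
        work += len(pending)
    return work

print(scan_steps(1000))   # 500500 scan steps for 1000 blocks, i.e. O(n**2)
```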

@tom-pytel deleted the fix-issue-132519 branch April 14, 2025 18:41
Reviewers

@colesbury left review comments

@ericsnowcurrently awaiting requested review (code owner)

2 participants
@tom-pytel, @colesbury
