gh-140739: Fix crashes from corrupted remote memory #143190


Merged

pablogsal merged 3 commits into python:main from pablogsal:gh-140739 on Dec 26, 2025

Conversation

@pablogsal (Member) commented on Dec 26, 2025 (edited by the bedevere-app bot):

The remote debugging module reads memory from another Python process
which can be modified or freed at any time due to race conditions.
When garbage data is read, various code paths could cause SIGSEGV
crashes in the profiler process itself rather than gracefully
rejecting the sample.

Add bounds checking and validation for data read from remote memory:
linetable parsing now checks buffer bounds, PyLong reading validates
digit count, stack chunk sizes are bounded, set iteration limits
table size, task pointer arithmetic checks for underflow, TLBC index
is validated against array bounds, and thread list iteration detects
cycles. All cases now reject the sample with an exception instead of
crashing or looping forever.
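Two of the guard styles listed above (bounds-checked parsing of a variable-length buffer, and capped iteration over a linked structure) can be illustrated with a small, self-contained sketch. This is not the PR's actual code: the function names, the entry encoding, and the index-based list terminator are all invented for illustration; the real checks live in CPython's remote debugging module.

```c
#include <stddef.h>
#include <stdint.h>

/* Sum variable-length entries (first byte = payload length) without
   ever reading past `len`; returns -1 on truncated or garbage input,
   mirroring the "reject the sample" strategy described above. */
static int
parse_entries_bounded(const uint8_t *buf, size_t len)
{
    size_t pos = 0;
    int entries = 0;
    while (pos < len) {
        size_t payload = buf[pos];
        if (payload == 0 || pos + 1 + payload > len) {
            return -1;          /* entry runs past the buffer: reject */
        }
        pos += 1 + payload;
        entries++;
    }
    return entries;
}

/* Follow `next` indices at most `max_steps` times so a corrupted
   cycle cannot hang the walker; index 0 is the list terminator. */
static int
walk_bounded(const size_t *next, size_t nnodes, size_t head,
             size_t max_steps)
{
    size_t steps = 0, cur = head;
    while (cur != 0) {
        if (steps++ >= max_steps || cur >= nnodes) {
            return -1;          /* cycle or wild pointer: reject sample */
        }
        cur = next[cur];
    }
    return (int)steps;
}
```

The key property in both helpers is that every read and every step is bounded by data the profiler controls, never by data read from the target process.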

@Fidget-Spinner (Member) left a comment:


Looks good. Just a question: is remote debugging expected to work on FT? Because a lot of these operations don't look thread safe to me.

@pablogsal (Member, Author) replied:

> Looks good. Just a question: is remote debugging expected to work on FT? Because a lot of these operations don't look thread safe to me.

Yeah, it is, and we do have tests for that. The same profiler object cannot be entered from different threads (it's locked), so there is no need to protect internal state.
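The locking discipline pablogsal describes can be sketched roughly like this, assuming a hypothetical profiler struct (the real object and its lock live in CPython's profiling code; all names here are invented):

```c
#include <pthread.h>

/* A trylock guards entry, so a second thread is rejected rather than
   racing on the internal state the lock protects. */
typedef struct {
    pthread_mutex_t lock;
    int samples_taken;          /* internal state protected by the lock */
} profiler_t;

/* Returns 0 on success, -1 if another thread is already inside. */
static int
profiler_sample(profiler_t *p)
{
    if (pthread_mutex_trylock(&p->lock) != 0) {
        return -1;              /* already entered: refuse reentry */
    }
    p->samples_taken++;
    pthread_mutex_unlock(&p->lock);
    return 0;
}
```

Because every mutation happens between the trylock and the unlock, per-field atomics or other free-threading protections are unnecessary for this state.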

Commit: …AbZTo.rst (Co-authored-by: Ken Jin <kenjin4096@gmail.com>)
if (actual_size != current_size) {
// Validate size: reject garbage (too small or unreasonably large)
// Size must be at least enough for the header and reasonably bounded
if (actual_size <= offsetof(_PyStackChunk, data) || actual_size > MAX_STACK_CHUNK_SIZE) {

@Fidget-Spinner commented on this diff:

I think this is looser than I'd like for _PyStackChunk, but whatever.

@pablogsal (Member, Author) replied:


Do you have any suggestions?

@Fidget-Spinner replied:

I wish we had a tighter bound, but I'm not sure what would be appropriate for a real-world stack frame. So that's why I said "whatever" :)

@pablogsal (Member, Author) replied on Dec 26, 2025 (edited):


> I wish we had a tighter bound, but I'm not sure what would be appropriate for a real-world stack frame?

But this is for the entire chunk, no? Chunks grow from 16 KiB to whatever, so we just need to ensure we don't copy too much when we've read a garbage size. Chunks here are just an optimization, so if we fail to read a chunk we fall back to reading frame-by-frame (which is slower, but it works).
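The validate-then-fall-back pattern described here can be sketched as follows. The names, the 16-byte header, and the 4 MiB cap are invented for illustration (the PR's actual bound is its MAX_STACK_CHUNK_SIZE constant, and the real read goes through the remote-memory API rather than memcpy):

```c
#include <stdlib.h>
#include <string.h>

#define CHUNK_HEADER_SIZE 16          /* illustrative header size */
#define MAX_CHUNK_COPY    (4u << 20)  /* illustrative cap: 4 MiB */

/* Copy a whole chunk using a size field read from the target process.
   Returns NULL when the size is implausible (too small to hold the
   header, or absurdly large), so the caller can fall back to reading
   frames one at a time instead of copying a garbage amount of memory. */
static void *
copy_chunk_or_fail(const void *remote, size_t size)
{
    if (size <= CHUNK_HEADER_SIZE || size > MAX_CHUNK_COPY) {
        return NULL;                  /* garbage size: use the fallback */
    }
    void *buf = malloc(size);
    if (buf != NULL) {
        memcpy(buf, remote, size);    /* stand-in for a remote read */
    }
    return buf;
}
```

The upper bound does not need to be tight: its only job is to keep a corrupted size field from triggering a huge allocation and copy, since correctness is preserved by the per-frame fallback path.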

@Fidget-Spinner replied:

Oh, I didn't know you do the second part (falling back to reading one by one). I think that's fine then.

pablogsal enabled auto-merge (squash) on December 26, 2025 at 16:01.
pablogsal merged commit d3d4cf9 into python:main on Dec 26, 2025; 50 checks passed.
pablogsal deleted the gh-140739 branch on December 26, 2025 at 16:06.

Reviewers

@Fidget-Spinner approved these changes


2 participants: @pablogsal, @Fidget-Spinner
