Try simple-minded call expression cache #19505


Merged
ilevkivskyi merged 6 commits into python:master from ilevkivskyi:call-expr-cache on Jul 28, 2025

Conversation

@ilevkivskyi
Member

This gives a modest 1% improvement on self-check (compiled), but it gives almost 40% on mypy -c "import colour". Some comments:

  • I only cache CallExpr, ListExpr, and TupleExpr. This is not very principled; I found it to be the best balance between rare cases like colour and more common cases like self-check.
  • Caching is fragile within lambdas, so I simply disable it there; it rarely matters anyway.
  • I cache both messages and the type map; surprisingly, the latter only affects a couple of test cases, but I still do it generally for peace of mind.
  • It looks like there are only three things that require cache invalidation: the binder, partial types, and deferrals.

In general, this is a bit scary (as it is a major change), but the perf improvements for slow libraries are very tempting.
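A minimal sketch of the mechanism described above, assuming hypothetical names (ExpressionCache, check_expr); this illustrates the idea, not mypy's actual implementation:

```python
class ExpressionCache:
    """Illustrative cache of per-expression checking results."""

    def __init__(self) -> None:
        # Key: (expression identity, external type context); value: the
        # inferred type plus the error messages produced while checking.
        self._cache: dict[tuple[int, str], tuple[str, list[str]]] = {}
        self.errors: list[str] = []

    def check_expr(self, node: object, type_context: str) -> str:
        key = (id(node), type_context)
        if key in self._cache:
            # Cache hit: replay both the inferred type and the messages.
            typ, messages = self._cache[key]
            self.errors.extend(messages)
            return typ
        before = len(self.errors)
        typ = self._infer(node, type_context)  # the real work
        self._cache[key] = (typ, self.errors[before:])
        return typ

    def _infer(self, node: object, type_context: str) -> str:
        return "Any"  # placeholder for actual type inference

    def invalidate(self) -> None:
        # Per the description, the cache is flushed when the binder
        # state, partial types, or deferrals change.
        self._cache.clear()
```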

@github-actions

This comment has been minimized.

@ilevkivskyi
Member, Author

Nice! Interpreted colour in mypy_primer got 3x faster.


@ilevkivskyi
Member, Author

Hm, both changes in the primer look suspicious, however; I will take a look at them later.

# Deeply nested generic calls can deteriorate performance dramatically.
# Although in most cases caching makes little difference, in worst case
# it avoids exponential complexity.
# TODO: figure out why caching within lambdas is fragile.
Collaborator

I also discovered that in #19408. Accepting a lambda has an explicit dependency on the type context (infer_lambda_type_using_context), so generic calls really do require re-accepting the lambda every time; context from the outer generic call may be propagated later.
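A hedged illustration of that context dependency (the apply helper below is hypothetical, not from mypy's test suite):

```python
from typing import Callable, TypeVar

T = TypeVar("T")

def apply(f: Callable[[T], T], x: T) -> T:
    return f(x)

# The lambda's parameter type is not written down; it is inferred from
# the enclosing call's type context (what infer_lambda_type_using_context
# does in mypy). During inference of a generic call that context can be
# refined in later passes, so a result cached on the first pass could be
# stale.
result = apply(lambda v: v + 1, 1)  # v is inferred as int from context
```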

Member, Author

OK, thanks for the pointer. I will update the comment.

    # Errors ignored, so no point generating fancy messages
    return True
-   for _watcher in self._watchers:
+   if self._watchers:
Collaborator

Could you explain this change? Watchers used to be additive and that sounded reasonable to me...

Member, Author

Previously, if any of the active watchers was ignoring errors, we could use simpler messages, but in the presence of caching this is no longer valid. For example, we can accept an expression while there is an enclosing ignoring watcher; the caching watcher will then record the simple message, and if next time we happen to accept the same expression in the same type context, but without the ignoring watcher, an incorrect (i.e. way too terse) error message will be pulled from the cache.

Without this change, 6 tests fail because terse/simplistic error messages are used.
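A minimal sketch of the stale-message hazard, with a hypothetical cache class (not mypy's actual watcher machinery):

```python
class MessageCache:
    """Illustrates why cached messages must not depend on ignoring watchers."""

    def __init__(self) -> None:
        self._cache: dict[tuple[str, str], str] = {}

    def check(self, expr: str, type_context: str, ignoring: bool) -> str:
        key = (expr, type_context)
        if key not in self._cache:
            # Under an enclosing error-ignoring watcher a terse message
            # suffices, but caching it poisons later lookups.
            self._cache[key] = "terse" if ignoring else "detailed"
        return self._cache[key]

cache = MessageCache()
cache.check("f(x)", "int", ignoring=True)         # records "terse"
msg = cache.check("f(x)", "int", ignoring=False)  # cache hit
assert msg == "terse"  # wrong: a detailed message was needed here
```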

# Although in most cases caching makes little difference, in worst case
# it avoids exponential complexity.
# TODO: figure out why caching within lambdas is fragile.
elif isinstance(node, (CallExpr, ListExpr, TupleExpr)) and not (
Collaborator

Would it be difficult to allow dicts and sets here? Inline dictionaries are relatively common and even heavier than lists, and sets just for consistency.

Also, operator exprs can be really heavy (#14978) and are fundamentally similar to CallExpr; are they worth considering?

Member, Author

The problem with dicts/sets is that I see around a 0.3% regression on self-check when I add them (though maybe this is just noise). My reasoning is that most code has a bunch of shallow dictionaries, and for those caching is just busywork that will never pay off (note that caching is not free, since mypyc is slow at creating local type maps and watchers).

Anyway, I am open to considering more expression kinds to cache, but let's put those in separate PR(s).

@ilevkivskyi
Member, Author

OK, it looks like I found the problem with mitmproxy; it was a pre-existing bug in multiassign_from_union. I will push a fix in a minute.

@github-actions
Contributor

Diff from mypy_primer, showing the effect of this PR on open source code:

werkzeug (https://github.com/pallets/werkzeug)
+ src/werkzeug/datastructures/structures.py:193: error: Generator has incompatible item type "tuple[Union[K, Any], Union[list[V], list[Any]]]"; expected "tuple[K, V]"  [misc]

@ilevkivskyi
Member, Author

OK, the two previous changes in mypy_primer are fixed, and the one new error is correct; it was previously hidden by the bug in multi-assign from union.

@sterliakov
Collaborator

LG! I'm still a bit scared to approve this right now; I will take another look at it tomorrow :)

@ilevkivskyi
Member, Author

@sterliakov Did you have time for another look?

Btw, the comparison with current master went below the noise level, probably because of #19515. But I also played with some "micro-benchmarks" involving deeply nested calls, and this PR gives performance improvements on the order of 100x (no exaggeration). If there are no concrete suggestions, I think we should give this a try.
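A hedged example of the kind of micro-benchmark meant here (illustrative only; the actual benchmarks are not shown in this thread):

```python
from typing import TypeVar

T = TypeVar("T")

def f(x: T) -> list[T]:
    return [x]

# Deeply nested generic calls: without caching, inner calls can be
# re-accepted in many type contexts as outer inference proceeds, and
# in the worst case checking time grows exponentially with the depth.
deep = f(f(f(f(f(f(f(f(0))))))))
```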

@sterliakov
Collaborator

Yeah, sorry, I think this is good to go!

> comparison with current master went below noise level

That sounds OK: mypy itself isn't generic-heavy, so it shouldn't demonstrate huge improvements. The colour changes speak for themselves :)

ilevkivskyi merged commit bd94bcb into python:master on Jul 28, 2025
20 checks passed
ilevkivskyi deleted the call-expr-cache branch on July 28, 2025 17:18
agentydragon pushed a commit to agentydragon/ducktape that referenced this pull request on Nov 15, 2025:

Added the disable_expression_cache flag to the [tool.mypy] section in pyproject.toml to ensure mypy runs use the workaround everywhere, not just in pre-commit hooks.

Added documentation linking to:
- PR #19505 that introduced the expression cache bug
- TODO to file upstream issue with reproduction case
- TODO to remove workaround once bug is fixed

The flag is needed in both places:
- .pre-commit-config.yaml: for pre-commit hook runs
- pyproject.toml: for direct mypy invocations and IDE integrations

References:
- python/mypy#19505 (expression cache PR)
- mypy 1.18.1+ affected by cross-module Final[str] type loss