Use checkmember.py to check protocol subtyping #18943


Merged
ilevkivskyi merged 8 commits into python:master from ilevkivskyi:checkmember-proto on May 30, 2025

Conversation

@ilevkivskyi (Member) commented Apr 19, 2025 (edited):

Fixes #18024
Fixes #18706
Fixes #17734
Fixes #15097
Fixes #14814
Fixes #14806
Fixes #14259
Fixes #13041
Fixes #11993
Fixes #9585
Fixes #9266
Fixes #9202
Fixes #5481

This is the fourth "major" PR toward #7724. This one is the watershed/crux of the whole series (but to set correct expectations, there are almost a dozen smaller follow-up/clean-up PRs in the pipeline).

The core of the idea is to set the current type checker as part of the global state. There are, however, some details:

  • There are cases where we call is_subtype() before type checking. For now, I fall back to the old logic in these cases. In follow-up PRs we may switch to using type-checker instances before the type checking phase (this requires some care).
  • This increases the typeops import cycle by a few modules, but unfortunately this is inevitable.
  • This PR increases the potential for infinite recursion in protocols. To mitigate this I add one legitimate fix for __call__, and one temporary hack for freshen_all_functions_type_vars (to reduce performance impact).
  • Finally, I change the semantics of method access on class objects to match the one in the old find_member(). Now we will expand the type by the instance, so we have something like this:

    ```python
    class B(Generic[T]):
        def foo(self, x: T) -> T: ...

    class C(B[str]): ...

    reveal_type(C.foo)  # def (self: B[str], x: str) -> str
    ```

    FWIW, I am not even 100% sure this is correct; it seems to me we may keep the method generic. But in any case, what we do currently is definitely wrong (we infer a non-generic `def (x: T) -> T`).
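
Below is a minimal, self-contained sketch of this "current type checker in global state" pattern. All names here (TypeCheckerLike, the lookup helpers) are illustrative stand-ins rather than mypy's actual API; the point is the install/restore context manager and the fallback to the old logic when no checker is installed yet (the pre-type-checking case from the first bullet).

```python
from contextlib import contextmanager
from typing import Iterator, Optional, Protocol


class TypeCheckerLike(Protocol):
    """Hypothetical stand-in for the shared type-checker interface."""

    def lookup_member(self, type_name: str, member: str) -> Optional[str]: ...


_current_checker: Optional[TypeCheckerLike] = None


@contextmanager
def type_checker_set(checker: TypeCheckerLike) -> Iterator[None]:
    """Install `checker` as the global current checker for the duration of a pass."""
    global _current_checker
    saved = _current_checker
    _current_checker = checker
    try:
        yield
    finally:
        _current_checker = saved


def find_member(type_name: str, member: str) -> Optional[str]:
    if _current_checker is not None:
        # Full member analysis via the installed checker (checkmember-style logic).
        return _current_checker.lookup_member(type_name, member)
    # No checker installed yet (e.g. is_subtype() called before type checking):
    # fall back to the old, simpler lookup.
    return _legacy_find_member(type_name, member)


def _legacy_find_member(type_name: str, member: str) -> Optional[str]:
    return None  # placeholder for the pre-existing lightweight lookup
```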


@ilevkivskyi (Member, Author):

OK, again as expected, it looks like the biggest problem here is performance. But it also looks very heterogeneous: unit tests, as well as most of the mypy_primer batches, show roughly the same performance, but a couple of batches are much slower. I remember mypy_primer used to show the performance difference; @hauntsaninja, is there a way to put that feature back? Even if it is noisy, it would be helpful for this PR.

In the meantime, @sterliakov, could you please run your magic "found fixed issues" tool on this PR?


@sterliakov (Collaborator):

Sure, there you go!

Results are available at sterliakov/mypy-issues#11 (won't copy here to avoid triggering a bunch of PR-issue links).

Ignore one FP with topic-incremental, that's a caching issue I haven't resolved yet, but the rest should be representative. Quite a few good changes!

BTW, you can also just create an issue with the PR number in the title and wait for results :)



@sterliakov (Collaborator) commented Apr 19, 2025 (edited):

> it seems to me we may keep the method generic

IMO your current implementation is reasonable - consistent behavior of typevars during specialization is more important.

Given that we accept `def foo(self, x: str) -> str` as an override signature for a B[str] subclass, it isn't so important to infer `def foo[T: str](self, x: T) -> T` for a non-overridden method IMO. Doing that can even be a source of very funny [incompatible-override] errors in future subclasses?

```python
class A[T]:
    def foo(self, x: T) -> T: ...

class B(A[str]):
    pass

class C(B):
    def foo(self, x: str) -> str: ...
```

If B.foo is generic, then C is an incompatible override. But we would have considered it compatible if that override was put in B.

(In other words, A.foo has never been generic, and it shouldn't become generic in subclasses by inheritance: for any type X, A[X]().foo has type `def (x: X) -> X` and not a generic `def [T] (x: T) -> T`.)

@hauntsaninja (Collaborator) commented Apr 20, 2025 (edited):

In addition to pushing a commit re-enabling the mypy_primer timings, I ran some benchmarking on the mypy_primer corpus myself. I think this PR does make mypy single-digit percentages slower for the median project, with some projects seeing a 10-20% slowdown.

I think part of what you're seeing with the batching is unrelated to this PR, though. Before yesterday, for dumb reasons, we weren't really checking https://github.com/colour-science/colour properly in primer, and that project is exceptionally slow for us now (it's possible there was some other regression). That said, colour-science also happens to be the project most adversely affected by this PR on performance (and most positively impacted in type checking); the 30% slowdown in GitHub Actions matches what I measure.


@ilevkivskyi (Member, Author):

@sterliakov OK, yes, this is a good argument.
@hauntsaninja Yeah, it is single-digit percentages for most projects, but still a bit slower than ideal. I have a few ideas on how to speed things up; I will have an offline chat tomorrow with Jukka about how best to proceed with this.

Over the next few days I will spot-check the mypy_primer changes and also update the list of fixed issues (with tests added). Potentially I will add some simple perf improvements.


@JukkaL (Collaborator):

I'm planning to profile this in case there is some simple way to reduce the performance regression. I want to unblock the next public release first, however. I'm investigating a few potential regressions.

@ilevkivskyi (Member, Author):

@JukkaL I agree unblocking the release is higher priority.

@JukkaL (Collaborator) left a review:

I did some profiling and analyzed the sources of the performance regression. Not a full review.

mypy/types.py (outdated diff context):

```python
def relevant_items(self) -> list[Type]:
    """Removes NoneTypes from Unions when strict Optional checking is off."""
    from mypy.state import state
```
@JukkaL (Collaborator):

Nested imports are slow when compiled. This causes a performance regression, since this is called a lot.
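
As a self-contained illustration of the general issue (not the actual mypy change), the sketch below contrasts a per-call nested import with a module-level one; per the comment above, the per-call form is especially costly once the module is compiled with mypyc, and relevant_items() is on a hot path.

```python
import timeit

import math  # hoisted once, at module import time


def hypot_nested(x: float, y: float) -> float:
    # The import statement runs on every call; per the review comment above,
    # this is particularly costly when the module is compiled with mypyc.
    import math
    return math.hypot(x, y)


def hypot_hoisted(x: float, y: float) -> float:
    # Uses the module-level import; no per-call import machinery.
    return math.hypot(x, y)


if __name__ == "__main__":
    print("nested :", timeit.timeit(lambda: hypot_nested(3.0, 4.0), number=200_000))
    print("hoisted:", timeit.timeit(lambda: hypot_hoisted(3.0, 4.0), number=200_000))
```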

Diff context:

```python
    is_operator: bool = False,
    class_obj: bool = False,
    is_lvalue: bool = False,
) -> Type | None:
```
@JukkaL (Collaborator):

Most of the remaining performance regression comes from find_member. Would it be feasible to add a fast path that could be used in the majority of (simple) cases? This only makes sense if the semantics would remain identical. We'd first try the fast path, and if it can't be used (not a simple case), we'd fall back to the general implementation that is added here (after `from mypy.checkmember import ...`). The fast path might look a bit like `find_member_simple`.

That fast path might cover access to a normal attribute/method via an instance when there are no self types or properties, for example. Maybe we can avoid creating MemberContext and using filter_errors.
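
A rough sketch of that structure with a toy data model (every name below is hypothetical; the real fast path would operate on mypy's Instance/TypeInfo and fall back to the checkmember-based analysis): try the cheap lookup first, and defer to the general implementation whenever the fast path cannot guarantee identical semantics.

```python
from dataclasses import dataclass, field
from typing import Optional, Tuple


@dataclass
class ToyClassInfo:
    attrs: dict = field(default_factory=dict)  # member name -> type (as text)
    has_self_types: bool = False
    has_properties: bool = False


def find_member_simple(info: ToyClassInfo, name: str) -> Tuple[bool, Optional[str]]:
    """Fast path: plain attribute/method access only.

    Returns (applies, result); `applies` is False when the case is not simple
    and the general machinery must be used instead.
    """
    if info.has_self_types or info.has_properties:
        return False, None
    return True, info.attrs.get(name)


def find_member_full(info: ToyClassInfo, name: str) -> Optional[str]:
    """Stand-in for the general (expensive) checkmember-based analysis."""
    return info.attrs.get(name)


def find_member(info: ToyClassInfo, name: str) -> Optional[str]:
    applies, result = find_member_simple(info, name)
    if applies:
        return result  # must be identical to what the slow path would produce
    return find_member_full(info, name)
```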

@ilevkivskyi (Member, Author):

This may be harder to tune, so although I agree we should do this, I would leave this optimization for later.

Diff context:

```python
    * Remove everything else if there is an `object`
    * Remove strict duplicate types
    """
    from mypy.state import state
```
@JukkaL (Collaborator):

This nested import also causes a small performance regression (maybe 0.1% to 0.2%).

Diff context:

```python
    original_type: Type,
    mx: MemberContext,
    original_vars: Sequence[TypeVarLikeType] | None = None,
) -> Type:
```
@JukkaL (Collaborator):

This function does not appear to be a performance bottleneck (at least in self check).

@ilevkivskyi (Member, Author):

@JukkaL If you have time, could you please check whether there is any slowness because of bind_self() and check_self_arg()? Although they are not modified, they may be called much more often now.

@JukkaL (Collaborator):

check_self_arg could be more expensive -- it appears to consume an extra ~0.5% of runtime in this PR. We are now spending maybe 2-3% of CPU in it, so it's quite hot, but it already was pretty hot before this PR. This could be noise though.

I didn't see any major change in bind_self when doing self check; it's pretty hot both before and after, though less hot than check_self_arg.

mypy/state.py (outdated diff context):

```python
        self.strict_optional = saved

    @contextmanager
    def type_checker_set(self, value: TypeCheckerSharedApi) -> Iterator[None]:
```
@JukkaL (Collaborator):

Dependency on TypeCheckerSharedApi probably makes various import cycles worse, and I assume this is why there are some additional nested imports. Defining type_checker_set in a new module would improve things, right? Splitting this module seems better than making import cycles bigger, and it should also reduce the performance regression.

@ilevkivskyi (Member, Author):

> Defining type_checker_set in a new module would improve things, right?

Yeah, I think this should be better. I will play with this.
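
As a follow-up to the module-splitting suggestion, here is a hedged sketch of what such a tiny standalone module could look like (the module path, class name, and the TYPE_CHECKING-only import are assumptions made for illustration, not necessarily what the PR ended up doing):

```python
# hypothetical new module, e.g. mypy/checker_state.py
from __future__ import annotations

from contextlib import contextmanager
from typing import TYPE_CHECKING, Iterator, Optional

if TYPE_CHECKING:
    # Annotation-only import: no runtime dependency, so no new import cycle.
    from mypy.checker_shared import TypeCheckerSharedApi  # assumed location


class CheckerState:
    # Slot holding the type checker currently in use (None outside a pass).
    type_checker: Optional[TypeCheckerSharedApi] = None

    @contextmanager
    def set(self, value: TypeCheckerSharedApi) -> Iterator[None]:
        saved = self.type_checker
        self.type_checker = value
        try:
            yield
        finally:
            self.type_checker = saved


checker_state = CheckerState()
```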

Diff context:

```python
with type_checker.msg.filter_errors(filter_deprecated=True):
    if class_obj:
        fallback = itype.type.metaclass_type or mx.named_type("builtins.type")
        return analyze_class_attribute_access(itype, name, mx, mcs_fallback=fallback)
```
@JukkaL (Collaborator):

I can't tell why this PR doesn't fix #17567, because this should work?

@ilevkivskyi (Member, Author):

Hm, this is because the fallback to the metaclass is handled by the caller of analyze_class_attribute_access(), not by analyze_class_attribute_access() itself. I guess I will need to copy this fallback logic (which may be a bit ugly, because we will also need to update get_member_flags()). I will look into it later.


@ilevkivskyi (Member, Author):

OK, I updated the list of issues this PR will fix using the mypy-issues script results: 13 issues in total. The fun part is that only ~half of those are directly related to protocols; the other half is fixed by a "small" fix for an inconsistency in checkmember.py itself, uncovered by this PR (it was correctly implemented in find_member()).

I also measured the performance impact after implementing a couple of the (easy) improvements discussed above: this PR still makes self-check 2-3% slower, but IMO this is now acceptable.

@JukkaL I would propose to merge this, and then work on trickier performance improvements in separate PR(s).



@github-actions (Contributor):

Diff from mypy_primer, showing the effect of this PR on open source code:

colour (https://github.com/colour-science/colour): 1.08x slower (1556.4s -> 1673.8s in single noisy sample)- colour/algebra/interpolation.py:692: error: Value of type variable "SupportsRichComparisonT" of "min" cannot be "floating[_16Bit] | floating[_32Bit] | float64"  [type-var]- colour/algebra/interpolation.py:693: error: Value of type variable "SupportsRichComparisonT" of "max" cannot be "floating[_16Bit] | floating[_32Bit] | float64"  [type-var]- colour/algebra/interpolation.py:700: error: Value of type variable "SupportsRichComparisonT" of "min" cannot be "floating[_16Bit] | floating[_32Bit] | float64"  [type-var]- colour/io/luts/lut.py:1544: error: Value of type "floating[_16Bit] | floating[_32Bit] | float64" is not indexable  [index]- colour/colorimetry/spectrum.py:1611: error: Value of type variable "SupportsRichComparisonT" of "max" cannot be "floating[_16Bit] | floating[_32Bit] | float64"  [type-var]- colour/colorimetry/generation.py:762: error: Argument 1 to "sd_single_led_Ohno2005" has incompatible type "floating[_16Bit] | floating[_32Bit] | float64"; expected "float"  [arg-type]- colour/plotting/volume.py:261: error: Value of type variable "SupportsRichComparisonT" of "sorted" cannot be "floating[_16Bit] | floating[_32Bit] | float64"  [type-var]- colour/plotting/volume.py:268: error: Incompatible types in assignment (expression has type "int | floating[_16Bit] | floating[_32Bit] | float64", variable has type "floating[_16Bit] | floating[_32Bit] | float64")  [assignment]- colour/plotting/colorimetry.py:195: error: No overload variant of "max" matches argument types "floating[_16Bit] | floating[_32Bit] | float64", "floating[_16Bit] | floating[_32Bit] | float64"  [call-overload]- colour/plotting/colorimetry.py:195: note: Possible overload variants:- colour/plotting/colorimetry.py:195: note:     def [SupportsRichComparisonT: SupportsDunderLT[Any] | SupportsDunderGT[Any]] max(SupportsRichComparisonT, SupportsRichComparisonT, /, *_args: SupportsRichComparisonT, key: None = ...) -> SupportsRichComparisonT- colour/plotting/colorimetry.py:195: note:     def [_T] max(_T, _T, /, *_args: _T, key: Callable[[_T], SupportsDunderLT[Any] | SupportsDunderGT[Any]]) -> _T- colour/plotting/colorimetry.py:195: note:     def [SupportsRichComparisonT: SupportsDunderLT[Any] | SupportsDunderGT[Any]] max(Iterable[SupportsRichComparisonT], /, *, key: None = ...) 
-> SupportsRichComparisonT- colour/plotting/colorimetry.py:195: note:     def [_T] max(Iterable[_T], /, *, key: Callable[[_T], SupportsDunderLT[Any] | SupportsDunderGT[Any]]) -> _T- colour/plotting/colorimetry.py:195: note:     def [SupportsRichComparisonT: SupportsDunderLT[Any] | SupportsDunderGT[Any], _T] max(Iterable[SupportsRichComparisonT], /, *, key: None = ..., default: _T) -> SupportsRichComparisonT | _T- colour/plotting/colorimetry.py:195: note:     def [_T1, _T2] max(Iterable[_T1], /, *, key: Callable[[_T1], SupportsDunderLT[Any] | SupportsDunderGT[Any]], default: _T2) -> _T1 | _T2- colour/plotting/colorimetry.py:195: error: Value of type variable "SupportsRichComparisonT" of "min" cannot be "floating[_16Bit] | floating[_32Bit] | float64"  [type-var]- colour/plotting/colorimetry.py:196: error: No overload variant of "min" matches argument types "floating[_16Bit] | floating[_32Bit] | float64", "floating[_16Bit] | floating[_32Bit] | float64"  [call-overload]- colour/plotting/colorimetry.py:196: note: Possible overload variants:- colour/plotting/colorimetry.py:196: note:     def [SupportsRichComparisonT: SupportsDunderLT[Any] | SupportsDunderGT[Any]] min(SupportsRichComparisonT, SupportsRichComparisonT, /, *_args: SupportsRichComparisonT, key: None = ...) -> SupportsRichComparisonT- colour/plotting/colorimetry.py:196: note:     def [_T] min(_T, _T, /, *_args: _T, key: Callable[[_T], SupportsDunderLT[Any] | SupportsDunderGT[Any]]) -> _T- colour/plotting/colorimetry.py:196: note:     def [SupportsRichComparisonT: SupportsDunderLT[Any] | SupportsDunderGT[Any]] min(Iterable[SupportsRichComparisonT], /, *, key: None = ...) -> SupportsRichComparisonT- colour/plotting/colorimetry.py:196: note:     def [_T] min(Iterable[_T], /, *, key: Callable[[_T], SupportsDunderLT[Any] | SupportsDunderGT[Any]]) -> _T- colour/plotting/colorimetry.py:196: note:     def [SupportsRichComparisonT: SupportsDunderLT[Any] | SupportsDunderGT[Any], _T] min(Iterable[SupportsRichComparisonT], /, *, key: None = ..., default: _T) -> SupportsRichComparisonT | _T- colour/plotting/colorimetry.py:196: note:     def [_T1, _T2] min(Iterable[_T1], /, *, key: Callable[[_T1], SupportsDunderLT[Any] | SupportsDunderGT[Any]], default: _T2) -> _T1 | _T2- colour/plotting/colorimetry.py:196: error: Value of type variable "SupportsRichComparisonT" of "max" cannot be "floating[_16Bit] | floating[_32Bit] | float64"  [type-var]- colour/plotting/colorimetry.py:223: error: Value of type variable "SupportsRichComparisonT" of "min" cannot be "floating[_16Bit] | floating[_32Bit] | float64"  [type-var]- colour/plotting/colorimetry.py:223: error: Value of type variable "SupportsRichComparisonT" of "max" cannot be "floating[_16Bit] | floating[_32Bit] | float64"  [type-var]- colour/plotting/colorimetry.py:224: error: Value of type variable "SupportsRichComparisonT" of "max" cannot be "floating[_16Bit] | floating[_32Bit] | float64"  [type-var]- colour/plotting/colorimetry.py:229: error: List item 0 has incompatible type "tuple[floating[_16Bit] | floating[_32Bit] | float64, int]"; expected "_SupportsArray[dtype[Any]] | _NestedSequence[_SupportsArray[dtype[Any]]]"  [list-item]- colour/plotting/colorimetry.py:231: error: List item 2 has incompatible type "tuple[floating[_16Bit] | floating[_32Bit] | float64, int]"; expected "_SupportsArray[dtype[Any]] | _NestedSequence[_SupportsArray[dtype[Any]]]"  [list-item]- colour/plotting/colorimetry.py:243: error: Value of type variable "SupportsRichComparisonT" of "max" cannot be "floating[_16Bit] | 
floating[_32Bit] | float64"  [type-var]- colour/plotting/colorimetry.py:406: error: Value of type variable "SupportsRichComparisonT" of "min" cannot be "floating[_16Bit] | floating[_32Bit] | float64"  [type-var]- colour/plotting/colorimetry.py:407: error: Value of type variable "SupportsRichComparisonT" of "max" cannot be "floating[_16Bit] | floating[_32Bit] | float64"  [type-var]- colour/plotting/colorimetry.py:423: error: Value of type variable "SupportsRichComparisonT" of "min" cannot be "floating[_16Bit] | floating[_32Bit] | float64"  [type-var]- colour/plotting/colorimetry.py:424: error: Value of type variable "SupportsRichComparisonT" of "max" cannot be "floating[_16Bit] | floating[_32Bit] | float64"  [type-var]- colour/plotting/colorimetry.py:543: error: Incompatible types in assignment (expression has type "list[Any]", variable has type "floating[_16Bit] | floating[_32Bit] | float64")  [assignment]- colour/plotting/colorimetry.py:544: error: Item "floating[_16Bit]" of "floating[_16Bit] | floating[_32Bit] | float64" has no attribute "__iter__" (not iterable)  [union-attr]- colour/plotting/colorimetry.py:544: error: Item "floating[_32Bit]" of "floating[_16Bit] | floating[_32Bit] | float64" has no attribute "__iter__" (not iterable)  [union-attr]- colour/plotting/colorimetry.py:544: error: Item "float64" of "floating[_16Bit] | floating[_32Bit] | float64" has no attribute "__iter__" (not iterable)  [union-attr]- colour/plotting/colorimetry.py:768: error: Value of type variable "SupportsRichComparisonT" of "min" cannot be "floating[_16Bit] | floating[_32Bit] | float64"  [type-var]- colour/plotting/colorimetry.py:768: error: Value of type variable "SupportsRichComparisonT" of "max" cannot be "floating[_16Bit] | floating[_32Bit] | float64"  [type-var]- colour/plotting/tm3018/components.py:553: error: Argument "xy" to "annotate" of "Axes" has incompatible type "tuple[int, floating[_16Bit] | float64 | floating[_32Bit]]"; expected "tuple[float, float]"  [arg-type]- colour/plotting/tm3018/components.py:568: error: Argument "xy" to "annotate" of "Axes" has incompatible type "tuple[int, floating[_16Bit] | float64 | floating[_32Bit]]"; expected "tuple[float, float]"  [arg-type]- colour/notation/tests/test_munsell.py:558: error: Argument 1 to "as_array" has incompatible type "list[tuple[floating[Any], list[float]]]"; expected "Buffer | _SupportsArray[dtype[Any]] | _NestedSequence[_SupportsArray[dtype[Any]]] | bool | int | float | complex | str | bytes | _NestedSequence[bool | int | float | complex | str | bytes] | KeysView[Any] | ValuesView[Any]"  [arg-type]- colour/notation/tests/test_munsell.py:2045: error: List item 0 has incompatible type "list[floating[_16Bit] | floating[_32Bit] | float64]"; expected "float"  [list-item]- colour/notation/tests/test_munsell.py:2301: error: "floating[_16Bit]" object is not iterable  [misc]- colour/notation/tests/test_munsell.py:2301: error: "floating[_32Bit]" object is not iterable  [misc]- colour/notation/tests/test_munsell.py:2301: error: "float64" object is not iterable  [misc]- colour/notation/tests/test_munsell.py:2360: error: "floating[_16Bit]" object is not iterable  [misc]- colour/notation/tests/test_munsell.py:2360: error: "floating[_32Bit]" object is not iterable  [misc]- colour/notation/tests/test_munsell.py:2360: error: "float64" object is not iterable  [misc]- colour/notation/tests/test_munsell.py:2391: error: "floating[_16Bit]" object is not iterable  [misc]- colour/notation/tests/test_munsell.py:2391: error: "floating[_32Bit]" object is not 
iterable  [misc]- colour/notation/tests/test_munsell.py:2391: error: "float64" object is not iterable  [misc]- colour/examples/plotting/examples_section_plots.py:130: error: Argument "color" to "Line2D" has incompatible type "floating[_16Bit] | floating[_32Bit] | float64"; expected "tuple[float, float, float] | str | str | tuple[float, float, float, float] | tuple[tuple[float, float, float] | str, float] | tuple[tuple[float, float, float, float], float] | None"  [arg-type]- colour/examples/algebra/examples_interpolation.py:135: error: Value of type variable "SupportsRichComparisonT" of "min" cannot be "floating[_16Bit] | floating[_32Bit] | float64"  [type-var]- colour/examples/algebra/examples_interpolation.py:136: error: Value of type variable "SupportsRichComparisonT" of "max" cannot be "floating[_16Bit] | floating[_32Bit] | float64"  [type-var]- colour/examples/algebra/examples_interpolation.py:148: error: Value of type variable "SupportsRichComparisonT" of "min" cannot be "floating[_16Bit] | floating[_32Bit] | float64"  [type-var]- colour/examples/algebra/examples_interpolation.py:149: error: Value of type variable "SupportsRichComparisonT" of "max" cannot be "floating[_16Bit] | floating[_32Bit] | float64"  [type-var]- colour/algebra/tests/test_extrapolation.py:140: note:     x: expected setter type "Buffer | _SupportsArray[dtype[Any]] | _NestedSequence[_SupportsArray[dtype[Any]]] | builtins.bool | int | float | complex | str | bytes | _NestedSequence[builtins.bool | int | float | complex | str | bytes]", got "ndarray[tuple[int], dtype[floating[Any] | integer[Any] | numpy.bool[builtins.bool]]]"+ colour/algebra/tests/test_extrapolation.py:140: note:     x: expected setter type "Buffer | _SupportsArray[dtype[Any]] | _NestedSequence[_SupportsArray[dtype[Any]]] | int | float | complex | str | _NestedSequence[builtins.bool | int | float | complex | str | bytes]", got "ndarray[tuple[int], dtype[floating[Any] | integer[Any] | numpy.bool[builtins.bool]]]"- colour/algebra/tests/test_extrapolation.py:140: note:     y: expected setter type "Buffer | _SupportsArray[dtype[Any]] | _NestedSequence[_SupportsArray[dtype[Any]]] | bool | int | float | complex | str | bytes | _NestedSequence[bool | int | float | complex | str | bytes]", got "ndarray[tuple[Any, ...], dtype[inexact[Any, float | complex]]]"+ colour/algebra/tests/test_extrapolation.py:140: note:     y: expected setter type "Buffer | _SupportsArray[dtype[Any]] | _NestedSequence[_SupportsArray[dtype[Any]]] | int | float | complex | str | _NestedSequence[bool | int | float | complex | str | bytes]", got "ndarray[tuple[Any, ...], dtype[inexact[Any, float | complex]]]"- colour/algebra/tests/test_extrapolation.py:150: note:     x: expected setter type "Buffer | _SupportsArray[dtype[Any]] | _NestedSequence[_SupportsArray[dtype[Any]]] | bool | int | float | complex | str | bytes | _NestedSequence[bool | int | float | complex | str | bytes]", got "ndarray[tuple[int], dtype[float64]]"+ colour/algebra/tests/test_extrapolation.py:150: note:     x: expected setter type "Buffer | _SupportsArray[dtype[Any]] | _NestedSequence[_SupportsArray[dtype[Any]]] | int | float | complex | str | _NestedSequence[bool | int | float | complex | str | bytes]", got "ndarray[tuple[int], dtype[float64]]"antidote (https://github.com/Finistere/antidote): 1.20x slower (74.4s -> 88.9s in single noisy sample)pandera (https://github.com/pandera-dev/pandera): 1.05x slower (127.0s -> 133.5s in single noisy sample)hydpy (https://github.com/hydpy-dev/hydpy)- 
hydpy/core/itemtools.py:951: error: Argument 2 to "update_variable" of "ChangeItem" has incompatible type "float64"; expected "ndarray[tuple[int, ...], dtype[float64]]"  [arg-type]- hydpy/core/itemtools.py:954: error: Argument 2 to "update_variable" of "ChangeItem" has incompatible type "float64"; expected "ndarray[tuple[int, ...], dtype[float64]]"  [arg-type]- hydpy/auxs/ppolytools.py:253: error: Value of type variable "_AnyShapeType" of "__call__" of "_ConstructorEmpty" cannot be "tuple[int, signedinteger[_64Bit]]"  [type-var]- hydpy/auxs/ppolytools.py:253: error: Value of type variable "SupportsRichComparisonT" of "max" cannot be "signedinteger[_64Bit]"  [type-var]- hydpy/auxs/ppolytools.py:256: error: Incompatible types in assignment (expression has type "ndarray[tuple[int, signedinteger[_64Bit]], dtype[float64]]", variable has type "Sequence[Sequence[float] | ndarray[tuple[int, ...], dtype[float64]]] | ndarray[tuple[int, ...], dtype[float64]]")  [assignment]- hydpy/auxs/ppolytools.py:661: error: Value of type "float64" is not indexable  [index]pandas (https://github.com/pandas-dev/pandas)+ pandas/core/array_algos/quantile.py:199: error: Unused "type: ignore" comment  [unused-ignore]+ pandas/core/sorting.py:479: error: Unused "type: ignore" comment  [unused-ignore]+ pandas/io/parsers/python_parser.py:1471: error: Unused "type: ignore" comment  [unused-ignore]- pandas/core/internals/managers.py:972: error: Incompatible types in assignment (expression has type "signedinteger[_32Bit | _64Bit]", variable has type "int")  [assignment]+ pandas/core/internals/construction.py:637: error: Unused "type: ignore" comment  [unused-ignore]+ pandas/core/internals/blocks.py:2101: error: Unused "type: ignore" comment  [unused-ignore]scipy (https://github.com/scipy/scipy)- scipy/spatial/tests/test_spherical_voronoi.py:194: error: Value of type variable "SupportsRichComparisonT" of "sorted" cannot be "floating[Any]"  [type-var]jax (https://github.com/google/jax)+ jax/_src/pallas/hlo_interpreter.py:93: error: Unused "type: ignore" comment  [unused-ignore]hydra-zen (https://github.com/mit-ll-responsible-ai/hydra-zen): 1.08x slower (80.8s -> 87.4s in single noisy sample)pyinstrument (https://github.com/joerick/pyinstrument)- pyinstrument/vendor/decorator.py:295: error: Incompatible types in assignment (expression has type "Callable[[Any, Any, VarArg(Any), KwArg(Any)], Any]", variable has type "Callable[[_GeneratorContextManagerBase[_G_co], Callable[..., _G_co], tuple[Any, ...], dict[str, Any]], None]")  [assignment]+ pyinstrument/vendor/decorator.py:295: error: Incompatible types in assignment (expression has type "Callable[[Any, Any, VarArg(Any), KwArg(Any)], Any]", variable has type "Callable[[_GeneratorContextManagerBase[Generator[Any, None, None]], Callable[..., Generator[Any, None, None]], tuple[Any, ...], dict[str, Any]], None]")  [assignment]- pyinstrument/vendor/decorator.py:301: error: Incompatible types in assignment (expression has type "Callable[[Any, Any, VarArg(Any), KwArg(Any)], Any]", variable has type "Callable[[_GeneratorContextManagerBase[_G_co], Callable[..., _G_co], tuple[Any, ...], dict[str, Any]], None]")  [assignment]+ pyinstrument/vendor/decorator.py:301: error: Incompatible types in assignment (expression has type "Callable[[Any, Any, VarArg(Any), KwArg(Any)], Any]", variable has type "Callable[[_GeneratorContextManagerBase[Generator[Any, None, None]], Callable[..., Generator[Any, None, None]], tuple[Any, ...], dict[str, Any]], None]")  [assignment]rclip 
(https://github.com/yurijmikhalevich/rclip)- rclip/model.py:218: error: Argument "key" to "sorted" has incompatible type "Callable[[tuple[float64, int]], float64]"; expected "Callable[[tuple[float64, int]], SupportsDunderLT[Any] | SupportsDunderGT[Any]]"  [arg-type]- rclip/model.py:218: error: Incompatible return value type (got "float64", expected "SupportsDunderLT[Any] | SupportsDunderGT[Any]")  [return-value]- rclip/model.py:220: error: Incompatible return value type (got "list[tuple[float64, int]]", expected "list[tuple[float, int]]")  [return-value]- rclip/model.py:220: note: "list" is invariant -- see https://mypy.readthedocs.io/en/stable/common_issues.html#variance- rclip/model.py:220: note: Consider using "Sequence" instead, which is covariant- rclip/model.py:220: note: Perhaps you need a type annotation for "sorted_similarities"? Suggestion: "list[tuple[float, int]]"werkzeug (https://github.com/pallets/werkzeug)+ src/werkzeug/datastructures/structures.py:711: error: Unused "type: ignore[arg-type, attr-defined]" comment  [unused-ignore]+ src/werkzeug/datastructures/structures.py:711: error: Cannot infer type argument 2 of "setdefault" of "MutableMapping"  [misc]+ src/werkzeug/datastructures/structures.py:711: note: Error code "misc" not covered by "type: ignore" comment+ src/werkzeug/datastructures/structures.py:792: error: Unused "type: ignore[assignment]" comment  [unused-ignore]+ src/werkzeug/datastructures/structures.py:806: error: Unused "type: ignore[assignment]" comment  [unused-ignore]- tests/test_local.py:146: error: No overload variant of "list" matches argument type "LocalProxy[Any]"  [call-overload]- tests/test_local.py:146: note: Possible overload variants:- tests/test_local.py:146: note:     def [_T] __init__(self) -> list[_T]- tests/test_local.py:146: note:     def [_T] __init__(self, Iterable[_T], /) -> list[_T]discord.py (https://github.com/Rapptz/discord.py): 1.05x slower (276.5s -> 290.7s in single noisy sample)- discord/enums.py:90: error: "tuple[_T_co, ...]" has no attribute "value"  [attr-defined]+ discord/enums.py:90: error: "tuple[Never, ...]" has no attribute "value"  [attr-defined]- discord/enums.py:91: error: "tuple[_T_co, ...]" has no attribute "value"  [attr-defined]+ discord/enums.py:91: error: "tuple[Never, ...]" has no attribute "value"  [attr-defined]- discord/enums.py:92: error: "tuple[_T_co, ...]" has no attribute "value"  [attr-defined]+ discord/enums.py:92: error: "tuple[Never, ...]" has no attribute "value"  [attr-defined]- discord/enums.py:93: error: "tuple[_T_co, ...]" has no attribute "value"  [attr-defined]+ discord/enums.py:93: error: "tuple[Never, ...]" has no attribute "value"  [attr-defined]- discord/ext/commands/_types.py:62: error: Unused "type: ignore" comment  [unused-ignore]scrapy (https://github.com/scrapy/scrapy)+ scrapy/utils/datatypes.py:85: error: Unused "type: ignore" comment  [unused-ignore]xarray (https://github.com/pydata/xarray)+ xarray/structure/concat.py: note: In function "_dataset_concat":+ xarray/structure/concat.py:507: error: Incompatible types in assignment (expression has type "None", variable has type "Variable")  [assignment]steam.py (https://github.com/Gobot1234/steam.py)- steam/user.py:506: error: Argument "type" to "__init__" of "ID" has incompatible type "int"; expected "TypeT | None"  [arg-type]- steam/message.py:74: error: Argument 1 to "__init__" of "Message" has incompatible type "Self"; expected "Message[UserT, ChannelT]"  [arg-type]

@ilevkivskyi (Member, Author):

I didn't hear any objections since my last post, and this PR is starting to get some merge conflicts, so I am going to merge it now.


ilevkivskyi merged commit 3957025 into python:master on May 30, 2025
19 checks passed
ilevkivskyi deleted the checkmember-proto branch on May 30, 2025 21:30
cdce8p pushed a commit to cdce8p/mypy that referenced this pull request on May 31, 2025
hauntsaninja mentioned this pull request on Jun 6, 2025
@embray:

This is just an observation, not a complaint, but this PR broke some of my plugins that used get_attribute_hook. If anything, this was more a bug in my plugins, which always assumed that the attribute hooks would be called in the context of a MemberExpr; now they are being called in new, different contexts, resulting in AttributeError on ctx.context.name, for example.

Just noting it in here in case anyone else has the same problem.
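
For anyone hit by the same thing, a hedged sketch of the defensive pattern described above (the plugin class and the mylib.Config.value attribute are made up for illustration): check that the hook's context really is a MemberExpr before touching .name, and fall back to the default attribute type otherwise.

```python
from typing import Callable, Optional

from mypy.nodes import MemberExpr
from mypy.plugin import AttributeContext, Plugin
from mypy.types import Type


class MyPlugin(Plugin):
    def get_attribute_hook(
        self, fullname: str
    ) -> Optional[Callable[[AttributeContext], Type]]:
        if fullname == "mylib.Config.value":  # hypothetical attribute
            return config_value_hook
        return None


def config_value_hook(ctx: AttributeContext) -> Type:
    # Previously: ctx.context.name -- this raises AttributeError when the hook
    # is invoked from protocol subtyping checks, where ctx.context is not a
    # MemberExpr.
    if isinstance(ctx.context, MemberExpr):
        # Safe to look at the attribute name here for name-specific inference.
        _ = ctx.context.name
    return ctx.default_attr_type


def plugin(version: str) -> type:
    return MyPlugin
```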

@ilevkivskyi (Member, Author):

@embray Yeah, good point, I will leave a comment in the plugin announcement issue #6617.


@embray:

Great fix, by the way! At first I didn't understand what it was about, but it actually allowed me to remove some hacks for protocols that involved descriptors.



Reviewers: JukkaL and A5rocks left review comments; a review was requested from hauntsaninja.

6 participants: @ilevkivskyi, @sterliakov, @hauntsaninja, @JukkaL, @embray, @A5rocks
