TYP: optional type parameters for ndarray and flatiter #28940
Conversation
5000f1f to 7c66326 Compare
jorenham commented May 10, 2025 (edited)
The remaining errors appear to be a consequence of mypy's type-inference limitations in the case of redefinitions (which I believe will no longer be an issue in the upcoming mypy 1.16 release, which has already been branched).
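For context, the redefinition pattern that trips current mypy looks roughly like this (a hypothetical minimal reproducer, not code from this PR; the exact inferred shape types depend on the stubs in use):

```python
import numpy as np

# mypy pins the type of `x` at its first assignment; with shape-typed stubs
# that may be a 1-d type such as ndarray[tuple[int], dtype[float64]].
x = np.zeros(3)

# `concatenate` is typed with the general shape tuple[int, ...], so mypy
# before 1.16 rejects this redefinition even though it is fine at runtime.
x = np.concatenate([x, x])
print(x.shape)
```

At runtime this runs without issue; only the static redefinition check complains.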
jorenham commented May 10, 2025

@mroeschke, any thoughts on this?
mroeschke commented May 10, 2025

Thanks for the ping again. Happy to address the pandas errors during release. cc @Dr-Irv for the pandas-stubs errors
Dr-Irv commented May 10, 2025

Same for
Dr-Irv commented May 10, 2025

@jorenham I've created a PR in
jorenham commented May 10, 2025

Good! Mypy primer runs on the main branch, so once it's merged, rerunning CI should do the trick 👌
7c66326 to ede9ea4 Compare

Diff from mypy_primer, showing the effect of this PR on type check results on a corpus of open source code:

xarray (https://github.com/pydata/xarray)
+ xarray/tests/test_parallelcompat.py: note: In member "compute" of class "DummyChunkManager":
+ xarray/tests/test_parallelcompat.py:93: error: Return type "tuple[ndarray[tuple[int, ...], dtype[Any]], ...]" of "compute" incompatible with return type "tuple[ndarray[Any, _DType_co], ...]" in supertype "ChunkManagerEntrypoint" [override]
+ xarray/tests/test_parallelcompat.py: note: At top level:

dedupe (https://github.com/dedupeio/dedupe)
- dedupe/api.py:551: error: Unused "type: ignore" comment [unused-ignore]

optuna (https://github.com/optuna/optuna)
+ optuna/_hypervolume/hssp.py:108: error: Incompatible types in assignment (expression has type "ndarray[tuple[int, ...], dtype[signedinteger[Any]]]", variable has type "ndarray[tuple[int], dtype[signedinteger[Any]]]") [assignment]
+ optuna/_hypervolume/box_decomposition.py:92: error: Incompatible types in assignment (expression has type "ndarray[tuple[int, ...], dtype[Any]]", variable has type "ndarray[tuple[int, int, int], dtype[Any]]") [assignment]
+ tests/hypervolume_tests/test_wfg.py:26: error: Incompatible types in assignment (expression has type "ndarray[tuple[int, ...], dtype[Any]]", variable has type "ndarray[tuple[int, int], dtype[Any]]") [assignment]

pandas (https://github.com/pandas-dev/pandas)
- pandas/core/_numba/executor.py:90: error: Incompatible redefinition (redefinition with type "Callable[[ndarray[Any, Any], ndarray[Any, Any], ndarray[Any, Any], int, VarArg(Any)], Any]", original type "Callable[[ndarray[Any, Any], ndarray[Any, Any], int, int, VarArg(Any)], Any]") [misc]
+ pandas/core/window/numba_.py:238: error: Incompatible types in assignment (expression has type "ndarray[tuple[int, ...], dtype[Any]]", variable has type "ndarray[tuple[int, int], dtype[float64]]") [assignment]
+ pandas/core/_numba/executor.py:90: error: Incompatible redefinition (redefinition with type "Callable[[ndarray[tuple[int, ...], dtype[Any]], ndarray[tuple[int, ...], dtype[Any]], ndarray[tuple[int, ...], dtype[Any]], int, VarArg(Any)], Any]", original type "Callable[[ndarray[tuple[int, ...], dtype[Any]], ndarray[tuple[int, ...], dtype[Any]], int, int, VarArg(Any)], Any]") [misc]
+ pandas/core/util/hashing.py:327: error: Incompatible types in assignment (expression has type "CategoricalDtype", variable has type "dtype[Any]") [assignment]
+ pandas/core/util/hashing.py:328: error: Argument 2 to "_simple_new" of "Categorical" has incompatible type "dtype[Any]"; expected "CategoricalDtype" [arg-type]
+ pandas/core/nanops.py:656: error: Unused "type: ignore" comment [unused-ignore]
+ pandas/core/construction.py:687: error: Incompatible types in assignment (expression has type "ndarray[tuple[int, ...], dtype[Any]]", variable has type "ndarray[tuple[int], dtype[Any]]") [assignment]
+ pandas/core/construction.py:689: error: Incompatible types in assignment (expression has type "ndarray[tuple[int, ...], dtype[Any]]", variable has type "ndarray[tuple[int], dtype[Any]]") [assignment]
+ pandas/core/arrays/_utils.py:48: error: Item "dtype[Any]" of "ExtensionDtype | dtype[Any]" has no attribute "na_value" [union-attr]
+ pandas/core/arrays/_utils.py:56: error: Item "dtype[Any]" of "ExtensionDtype | dtype[Any]" has no attribute "na_value" [union-attr]
+ pandas/core/arrays/datetimes.py:816: error: Argument 1 to "view" of "ndarray" has incompatible type "dtype[datetime64[date | int | None]] | DatetimeTZDtype"; expected "dtype[Any] | _HasDType[dtype[Any]]" [arg-type]
+ pandas/core/arrays/_mixins.py:150: error: Argument "dtype" to "view" of "ndarray" has incompatible type "ExtensionDtype | dtype[Any]"; expected "dtype[Any] | _HasDType[dtype[Any]]" [arg-type]
+ pandas/core/arrays/categorical.py:1858: error: Incompatible types in assignment (expression has type "ndarray[tuple[int, ...], dtype[Any]]", variable has type "ndarray[tuple[int], dtype[signedinteger[Any]]]") [assignment]
+ pandas/io/pytables.py:3306: error: Unused "type: ignore" comment [unused-ignore]
+ pandas/io/pytables.py:3306: error: "ExtensionArray" has no attribute "asi8" [attr-defined]
+ pandas/io/pytables.py:3306: note: Error code "attr-defined" not covered by "type: ignore" comment
+ pandas/io/pytables.py:3312: error: Unused "type: ignore" comment [unused-ignore]
+ pandas/io/pytables.py:3312: error: "ExtensionArray" has no attribute "tz" [attr-defined]
+ pandas/io/pytables.py:3312: note: Error code "attr-defined" not covered by "type: ignore" comment
+ pandas/io/pytables.py:5196: error: Unused "type: ignore" comment [unused-ignore]
+ pandas/core/frame.py:11368: error: Incompatible types in assignment (expression has type "ndarray[tuple[int, ...], dtype[float64]]", variable has type "ndarray[tuple[int, int], dtype[float64]]") [assignment]
+ pandas/core/reshape/merge.py:1737: error: Item "ExtensionArray" of "ExtensionArray | Any" has no attribute "all" [union-attr]
+ pandas/core/reshape/merge.py:1757: error: Item "ExtensionArray" of "ExtensionArray | Any" has no attribute "all" [union-attr]

static-frame (https://github.com/static-frame/static-frame)
- static_frame/core/join.py:98: error: Argument 1 to "nonzero_1d" has incompatible type "numpy.bool[builtins.bool] | ndarray[tuple[int, ...], dtype[numpy.bool[builtins.bool]]]"; expected "ndarray[Any, Any]" [arg-type]
+ static_frame/core/join.py:98: error: Argument 1 to "nonzero_1d" has incompatible type "numpy.bool[builtins.bool] | ndarray[tuple[int, ...], dtype[numpy.bool[builtins.bool]]]"; expected "ndarray[tuple[int, ...], dtype[Any]]" [arg-type]
+ static_frame/core/frame.py:7524: error: Incompatible types in assignment (expression has type "ndarray[tuple[int, ...], dtype[Any]]", variable has type "ndarray[tuple[int], dtype[Any]]") [assignment]

spark (https://github.com/apache/spark)
- python/pyspark/pandas/namespace.py:1139: note: def [IntStrT: (int, str)] read_excel(io: str | PathLike[str] | ReadBuffer[bytes] | ExcelFile | Any | Any | Any | Any, sheet_name: list[IntStrT], *, header: int | Sequence[int] | None = ..., names: MutableSequence[Any] | ndarray[Any, Any] | tuple[Any, ...] | range | None = ..., index_col: int | Sequence[int] | str | None = ..., usecols: str | SequenceNotStr[Hashable] | range | ExtensionArray | ndarray[Any, Any] | Index[Any] | Series[Any] | Callable[[Any], bool] | None = ..., dtype: str | ExtensionDtype | str | dtype[generic[Any]] | type[str] | type[complex] | type[bool] | type[object] | Mapping[str, str | ExtensionDtype | str | dtype[generic[Any]] | type[str] | type[complex] | type[bool] | type[object]] | None = ..., engine: Literal['xlrd', 'openpyxl', 'odf', 'pyxlsb', 'calamine'] | None = ..., converters: Mapping[int | str, Callable[[object], object]] | None = ..., true_values: Iterable[Hashable] | None = ..., false_values: Iterable[Hashable] | None = ..., skiprows: int | Sequence[int] | Callable[[object], bool] | None = ..., nrows: int | None = ..., na_values: Sequence[str] | dict[str | int, Sequence[str]] = ..., keep_default_na: bool = ..., na_filter: bool = ..., verbose: bool = ..., parse_dates: bool | Sequence[int] | Sequence[Sequence[str] | Sequence[int]] | dict[str, Sequence[int] | list[str]] = ..., date_format: dict[Hashable, str] | str | None = ..., thousands: str | None = ..., decimal: str = ..., comment: str | None = ..., skipfooter: int = ..., storage_options: dict[str, Any] | None = ..., dtype_backend: Literal['pyarrow', 'numpy_nullable'] | Literal[_NoDefault.no_default] = ...) -> dict[IntStrT, DataFrame]
+ python/pyspark/pandas/namespace.py:1139: note: def [IntStrT: (int, str)] read_excel(io: str | PathLike[str] | ReadBuffer[bytes] | ExcelFile | Any | Any | Any | Any, sheet_name: list[IntStrT], *, header: int | Sequence[int] | None = ..., names: MutableSequence[Any] | ndarray[tuple[int, ...], dtype[Any]] | tuple[Any, ...] | range | None = ..., index_col: int | Sequence[int] | str | None = ..., usecols: str | SequenceNotStr[Hashable] | range | ExtensionArray | ndarray[Any, Any] | Index[Any] | Series[Any] | Callable[[Any], bool] | None = ..., dtype: str | ExtensionDtype | str | dtype[generic[Any]] | type[str] | type[complex] | type[bool] | type[object] | Mapping[str, str | ExtensionDtype | str | dtype[generic[Any]] | type[str] | type[complex] | type[bool] | type[object]] | None = ..., engine: Literal['xlrd', 'openpyxl', 'odf', 'pyxlsb', 'calamine'] | None = ..., converters: Mapping[int | str, Callable[[object], object]] | None = ..., true_values: Iterable[Hashable] | None = ..., false_values: Iterable[Hashable] | None = ..., skiprows: int | Sequence[int] | Callable[[object], bool] | None = ..., nrows: int | None = ..., na_values: Sequence[str] | dict[str | int, Sequence[str]] = ..., keep_default_na: bool = ..., na_filter: bool = ..., verbose: bool = ..., parse_dates: bool | Sequence[int] | Sequence[Sequence[str] | Sequence[int]] | dict[str, Sequence[int] | list[str]] = ..., date_format: dict[Hashable, str] | str | None = ..., thousands: str | None = ..., decimal: str = ..., comment: str | None = ..., skipfooter: int = ..., storage_options: dict[str, Any] | None = ..., dtype_backend: Literal['pyarrow', 'numpy_nullable'] | Literal[_NoDefault.no_default] = ...) -> dict[IntStrT, DataFrame]
- python/pyspark/pandas/namespace.py:1139: note: def read_excel(io: str | PathLike[str] | ReadBuffer[bytes] | ExcelFile | Any | Any | Any | Any, sheet_name: None, *, header: int | Sequence[int] | None = ..., names: MutableSequence[Any] | ndarray[Any, Any] | tuple[Any, ...] | range | None = ..., index_col: int | Sequence[int] | str | None = ..., usecols: str | SequenceNotStr[Hashable] | range | ExtensionArray | ndarray[Any, Any] | Index[Any] | Series[Any] | Callable[[Any], bool] | None = ..., dtype: str | ExtensionDtype | str | dtype[generic[Any]] | type[str] | type[complex] | type[bool] | type[object] | Mapping[str, str | ExtensionDtype | str | dtype[generic[Any]] | type[str] | type[complex] | type[bool] | type[object]] | None = ..., engine: Literal['xlrd', 'openpyxl', 'odf', 'pyxlsb', 'calamine'] | None = ..., converters: Mapping[int | str, Callable[[object], object]] | None = ..., true_values: Iterable[Hashable] | None = ..., false_values: Iterable[Hashable] | None = ..., skiprows: int | Sequence[int] | Callable[[object], bool] | None = ..., nrows: int | None = ..., na_values: Sequence[str] | dict[str | int, Sequence[str]] = ..., keep_default_na: bool = ..., na_filter: bool = ..., verbose: bool = ..., parse_dates: bool | Sequence[int] | Sequence[Sequence[str] | Sequence[int]] | dict[str, Sequence[int] | list[str]] = ..., date_format: dict[Hashable, str] | str | None = ..., thousands: str | None = ..., decimal: str = ..., comment: str | None = ..., skipfooter: int = ..., storage_options: dict[str, Any] | None = ..., dtype_backend: Literal['pyarrow', 'numpy_nullable'] | Literal[_NoDefault.no_default] = ...) -> dict[str, DataFrame]
+ python/pyspark/pandas/namespace.py:1139: note: def read_excel(io: str | PathLike[str] | ReadBuffer[bytes] | ExcelFile | Any | Any | Any | Any, sheet_name: None, *, header: int | Sequence[int] | None = ..., names: MutableSequence[Any] | ndarray[tuple[int, ...], dtype[Any]] | tuple[Any, ...] | range | None = ..., index_col: int | Sequence[int] | str | None = ..., usecols: str | SequenceNotStr[Hashable] | range | ExtensionArray | ndarray[Any, Any] | Index[Any] | Series[Any] | Callable[[Any], bool] | None = ..., dtype: str | ExtensionDtype | str | dtype[generic[Any]] | type[str] | type[complex] | type[bool] | type[object] | Mapping[str, str | ExtensionDtype | str | dtype[generic[Any]] | type[str] | type[complex] | type[bool] | type[object]] | None = ..., engine: Literal['xlrd', 'openpyxl', 'odf', 'pyxlsb', 'calamine'] | None = ..., converters: Mapping[int | str, Callable[[object], object]] | None = ..., true_values: Iterable[Hashable] | None = ..., false_values: Iterable[Hashable] | None = ..., skiprows: int | Sequence[int] | Callable[[object], bool] | None = ..., nrows: int | None = ..., na_values: Sequence[str] | dict[str | int, Sequence[str]] = ..., keep_default_na: bool = ..., na_filter: bool = ..., verbose: bool = ..., parse_dates: bool | Sequence[int] | Sequence[Sequence[str] | Sequence[int]] | dict[str, Sequence[int] | list[str]] = ..., date_format: dict[Hashable, str] | str | None = ..., thousands: str | None = ..., decimal: str = ..., comment: str | None = ..., skipfooter: int = ..., storage_options: dict[str, Any] | None = ..., dtype_backend: Literal['pyarrow', 'numpy_nullable'] | Literal[_NoDefault.no_default] = ...) -> dict[str, DataFrame]
- python/pyspark/pandas/namespace.py:1139: note: def read_excel(io: str | PathLike[str] | ReadBuffer[bytes] | ExcelFile | Any | Any | Any | Any, sheet_name: list[int | str], *, header: int | Sequence[int] | None = ..., names: MutableSequence[Any] | ndarray[Any, Any] | tuple[Any, ...] | range | None = ..., index_col: int | Sequence[int] | str | None = ..., usecols: str | SequenceNotStr[Hashable] | range | ExtensionArray | ndarray[Any, Any] | Index[Any] | Series[Any] | Callable[[Any], bool] | None = ..., dtype: str | ExtensionDtype | str | dtype[generic[Any]] | type[str] | type[complex] | type[bool] | type[object] | Mapping[str, str | ExtensionDtype | str | dtype[generic[Any]] | type[str] | type[complex] | type[bool] | type[object]] | None = ..., engine: Literal['xlrd', 'openpyxl', 'odf', 'pyxlsb', 'calamine'] | None = ..., converters: Mapping[int | str, Callable[[object], object]] | None = ..., true_values: Iterable[Hashable] | None = ..., false_values: Iterable[Hashable] | None = ..., skiprows: int | Sequence[int] | Callable[[object], bool] | None = ..., nrows: int | None = ..., na_values: Sequence[str] | dict[str | int, Sequence[str]] = ..., keep_default_na: bool = ..., na_filter: bool = ..., verbose: bool = ..., parse_dates: bool | Sequence[int] | Sequence[Sequence[str] | Sequence[int]] | dict[str, Sequence[int] | list[str]] = ..., date_format: dict[Hashable, str] | str | None = ..., thousands: str | None = ..., decimal: str = ..., comment: str | None = ..., skipfooter: int = ..., storage_options: dict[str, Any] | None = ..., dtype_backend: Literal['pyarrow', 'numpy_nullable'] | Literal[_NoDefault.no_default] = ...) -> dict[int | str, DataFrame]
+ python/pyspark/pandas/namespace.py:1139: note: def read_excel(io: str | PathLike[str] | ReadBuffer[bytes] | ExcelFile | Any | Any | Any | Any, sheet_name: list[int | str], *, header: int | Sequence[int] | None = ..., names: MutableSequence[Any] | ndarray[tuple[int, ...], dtype[Any]] | tuple[Any, ...] | range | None = ..., index_col: int | Sequence[int] | str | None = ..., usecols: str | SequenceNotStr[Hashable] | range | ExtensionArray | ndarray[Any, Any] | Index[Any] | Series[Any] | Callable[[Any], bool] | None = ..., dtype: str | ExtensionDtype | str | dtype[generic[Any]] | type[str] | type[complex] | type[bool] | type[object] | Mapping[str, str | ExtensionDtype | str | dtype[generic[Any]] | type[str] | type[complex] | type[bool] | type[object]] | None = ..., engine: Literal['xlrd', 'openpyxl', 'odf', 'pyxlsb', 'calamine'] | None = ..., converters: Mapping[int | str, Callable[[object], object]] | None = ..., true_values: Iterable[Hashable] | None = ..., false_values: Iterable[Hashable] | None = ..., skiprows: int | Sequence[int] | Callable[[object], bool] | None = ..., nrows: int | None = ..., na_values: Sequence[str] | dict[str | int, Sequence[str]] = ..., keep_default_na: bool = ..., na_filter: bool = ..., verbose: bool = ..., parse_dates: bool | Sequence[int] | Sequence[Sequence[str] | Sequence[int]] | dict[str, Sequence[int] | list[str]] = ..., date_format: dict[Hashable, str] | str | None = ..., thousands: str | None = ..., decimal: str = ..., comment: str | None = ..., skipfooter: int = ..., storage_options: dict[str, Any] | None = ..., dtype_backend: Literal['pyarrow', 'numpy_nullable'] | Literal[_NoDefault.no_default] = ...) -> dict[int | str, DataFrame]
- python/pyspark/pandas/namespace.py:1139: note: def read_excel(io: str | PathLike[str] | ReadBuffer[bytes] | ExcelFile | Any | Any | Any | Any, sheet_name: int | str = ..., *, header: int | Sequence[int] | None = ..., names: MutableSequence[Any] | ndarray[Any, Any] | tuple[Any, ...] | range | None = ..., index_col: int | Sequence[int] | str | None = ..., usecols: str | SequenceNotStr[Hashable] | range | ExtensionArray | ndarray[Any, Any] | Index[Any] | Series[Any] | Callable[[Any], bool] | None = ..., dtype: str | ExtensionDtype | str | dtype[generic[Any]] | type[str] | type[complex] | type[bool] | type[object] | Mapping[str, str | ExtensionDtype | str | dtype[generic[Any]] | type[str] | type[complex] | type[bool] | type[object]] | None = ..., engine: Literal['xlrd', 'openpyxl', 'odf', 'pyxlsb', 'calamine'] | None = ..., converters: Mapping[int | str, Callable[[object], object]] | None = ..., true_values: Iterable[Hashable] | None = ..., false_values: Iterable[Hashable] | None = ..., skiprows: int | Sequence[int] | Callable[[object], bool] | None = ..., nrows: int | None = ..., na_values: Sequence[str] | dict[str | int, Sequence[str]] = ..., keep_default_na: bool = ..., na_filter: bool = ..., verbose: bool = ..., parse_dates: bool | Sequence[int] | Sequence[Sequence[str] | Sequence[int]] | dict[str, Sequence[int] | list[str]] = ..., date_format: dict[Hashable, str] | str | None = ..., thousands: str | None = ..., decimal: str = ..., comment: str | None = ..., skipfooter: int = ..., storage_options: dict[str, Any] | None = ..., dtype_backend: Literal['pyarrow', 'numpy_nullable'] | Literal[_NoDefault.no_default] = ...) -> DataFrame
+ python/pyspark/pandas/namespace.py:1139: note: def read_excel(io: str | PathLike[str] | ReadBuffer[bytes] | ExcelFile | Any | Any | Any | Any, sheet_name: int | str = ..., *, header: int | Sequence[int] | None = ..., names: MutableSequence[Any] | ndarray[tuple[int, ...], dtype[Any]] | tuple[Any, ...] | range | None = ..., index_col: int | Sequence[int] | str | None = ..., usecols: str | SequenceNotStr[Hashable] | range | ExtensionArray | ndarray[Any, Any] | Index[Any] | Series[Any] | Callable[[Any], bool] | None = ..., dtype: str | ExtensionDtype | str | dtype[generic[Any]] | type[str] | type[complex] | type[bool] | type[object] | Mapping[str, str | ExtensionDtype | str | dtype[generic[Any]] | type[str] | type[complex] | type[bool] | type[object]] | None = ..., engine: Literal['xlrd', 'openpyxl', 'odf', 'pyxlsb', 'calamine'] | None = ..., converters: Mapping[int | str, Callable[[object], object]] | None = ..., true_values: Iterable[Hashable] | None = ..., false_values: Iterable[Hashable] | None = ..., skiprows: int | Sequence[int] | Callable[[object], bool] | None = ..., nrows: int | None = ..., na_values: Sequence[str] | dict[str | int, Sequence[str]] = ..., keep_default_na: bool = ..., na_filter: bool = ..., verbose: bool = ..., parse_dates: bool | Sequence[int] | Sequence[Sequence[str] | Sequence[int]] | dict[str, Sequence[int] | list[str]] = ..., date_format: dict[Hashable, str] | str | None = ..., thousands: str | None = ..., decimal: str = ..., comment: str | None = ..., skipfooter: int = ..., storage_options: dict[str, Any] | None = ..., dtype_backend: Literal['pyarrow', 'numpy_nullable'] | Literal[_NoDefault.no_default] = ...) -> DataFrame
+ python/pyspark/ml/linalg/__init__.py:1145: error: Incompatible types in assignment (expression has type "ndarray[tuple[int, ...], dtype[Any]]", variable has type "ndarray[tuple[int], dtype[Any]]") [assignment]
+ python/pyspark/mllib/linalg/__init__.py:1327: error: Incompatible types in assignment (expression has type "ndarray[tuple[int, ...], dtype[Any]]", variable has type "ndarray[tuple[int], dtype[Any]]") [assignment]

scikit-learn (https://github.com/scikit-learn/scikit-learn)
+ sklearn/model_selection/tests/test_search.py:2894: error: Unused "type: ignore" comment [unused-ignore]

pandera (https://github.com/pandera-dev/pandera)
- pandera/strategies/pandas_strategies.py:71: note: def mask(self, cond: Series[Any] | Series[bool] | ndarray[Any, Any] | Callable[[Series[Any]], Series[bool]] | Callable[[Any], bool], other: str | bytes | date | datetime | timedelta | <12 more items> | None = ..., *, inplace: Literal[True], axis: Literal['index', 0] | None = ..., level: Hashable | int | None = ...) -> None
+ pandera/strategies/pandas_strategies.py:71: note: def mask(self, cond: Series[Any] | Series[bool] | ndarray[tuple[int, ...], dtype[Any]] | Callable[[Series[Any]], Series[bool]] | Callable[[Any], bool], other: str | bytes | date | datetime | timedelta | <12 more items> | None = ..., *, inplace: Literal[True], axis: Literal['index', 0] | None = ..., level: Hashable | int | None = ...) -> None
- pandera/strategies/pandas_strategies.py:71: note: def mask(self, cond: Series[Any] | Series[bool] | ndarray[Any, Any] | Callable[[Series[Any]], Series[bool]] | Callable[[Any], bool], other: str | bytes | date | datetime | timedelta | <12 more items> | None = ..., *, inplace: Literal[False] = ..., axis: Literal['index', 0] | None = ..., level: Hashable | int | None = ...) -> Series[Any]
+ pandera/strategies/pandas_strategies.py:71: note: def mask(self, cond: Series[Any] | Series[bool] | ndarray[tuple[int, ...], dtype[Any]] | Callable[[Series[Any]], Series[bool]] | Callable[[Any], bool], other: str | bytes | date | datetime | timedelta | <12 more items> | None = ..., *, inplace: Literal[False] = ..., axis: Literal['index', 0] | None = ..., level: Hashable | int | None = ...) -> Series[Any]
- pandera/strategies/pandas_strategies.py:73: note: def mask(self, cond: Series[Any] | Series[bool] | ndarray[Any, Any] | Callable[[Series[Any]], Series[bool]] | Callable[[Any], bool], other: str | bytes | date | datetime | timedelta | <12 more items> | None = ..., *, inplace: Literal[True], axis: Literal['index', 0] | None = ..., level: Hashable | int | None = ...) -> None
+ pandera/strategies/pandas_strategies.py:73: note: def mask(self, cond: Series[Any] | Series[bool] | ndarray[tuple[int, ...], dtype[Any]] | Callable[[Series[Any]], Series[bool]] | Callable[[Any], bool], other: str | bytes | date | datetime | timedelta | <12 more items> | None = ..., *, inplace: Literal[True], axis: Literal['index', 0] | None = ..., level: Hashable | int | None = ...) -> None
- pandera/strategies/pandas_strategies.py:73: note: def mask(self, cond: Series[Any] | Series[bool] | ndarray[Any, Any] | Callable[[Series[Any]], Series[bool]] | Callable[[Any], bool], other: str | bytes | date | datetime | timedelta | <12 more items> | None = ..., *, inplace: Literal[False] = ..., axis: Literal['index', 0] | None = ..., level: Hashable | int | None = ...) -> Series[Any]
+ pandera/strategies/pandas_strategies.py:73: note: def mask(self, cond: Series[Any] | Series[bool] | ndarray[tuple[int, ...], dtype[Any]] | Callable[[Series[Any]], Series[bool]] | Callable[[Any], bool], other: str | bytes | date | datetime | timedelta | <12 more items> | None = ..., *, inplace: Literal[False] = ..., axis: Literal['index', 0] | None = ..., level: Hashable | int | None = ...) -> Series[Any]
- pandera/strategies/pandas_strategies.py:74: note: def mask(self, cond: Series[Any] | Series[bool] | ndarray[Any, Any] | Callable[[Series[Any]], Series[bool]] | Callable[[Any], bool], other: str | bytes | date | datetime | timedelta | <12 more items> | None = ..., *, inplace: Literal[True], axis: Literal['index', 0] | None = ..., level: Hashable | int | None = ...) -> None
+ pandera/strategies/pandas_strategies.py:74: note: def mask(self, cond: Series[Any] | Series[bool] | ndarray[tuple[int, ...], dtype[Any]] | Callable[[Series[Any]], Series[bool]] | Callable[[Any], bool], other: str | bytes | date | datetime | timedelta | <12 more items> | None = ..., *, inplace: Literal[True], axis: Literal['index', 0] | None = ..., level: Hashable | int | None = ...) -> None
- pandera/strategies/pandas_strategies.py:74: note: def mask(self, cond: Series[Any] | Series[bool] | ndarray[Any, Any] | Callable[[Series[Any]], Series[bool]] | Callable[[Any], bool], other: str | bytes | date | datetime | timedelta | <12 more items> | None = ..., *, inplace: Literal[False] = ..., axis: Literal['index', 0] | None = ..., level: Hashable | int | None = ...) -> Series[Any]
+ pandera/strategies/pandas_strategies.py:74: note: def mask(self, cond: Series[Any] | Series[bool] | ndarray[tuple[int, ...], dtype[Any]] | Callable[[Series[Any]], Series[bool]] | Callable[[Any], bool], other: str | bytes | date | datetime | timedelta | <12 more items> | None = ..., *, inplace: Literal[False] = ..., axis: Literal['index', 0] | None = ..., level: Hashable | int | None = ...) -> Series[Any]

arviz (https://github.com/arviz-devs/arviz)
- arviz/stats/ecdf_utils.py:83: error: Incompatible return value type (got "tuple[float | ndarray[tuple[int, ...], dtype[float64]], float | ndarray[tuple[int, ...], dtype[float64]]]", expected "tuple[ndarray[Any, Any], ndarray[Any, Any]]") [return-value]
+ arviz/stats/ecdf_utils.py:83: error: Incompatible return value type (got "tuple[float | ndarray[tuple[int, ...], dtype[float64]], float | ndarray[tuple[int, ...], dtype[float64]]]", expected "tuple[ndarray[tuple[int, ...], dtype[Any]], ndarray[tuple[int, ...], dtype[Any]]]") [return-value]
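Most of the new errors above follow one pattern: an expression typed with the general shape `tuple[int, ...]` is assigned to a variable whose first assignment pinned a fixed-rank shape type. A shape-agnostic annotation up front is one way downstream code can sidestep this (an illustrative snippet, not code from any of the listed projects):

```python
import numpy as np
import numpy.typing as npt

# npt.NDArray[np.intp] is ndarray[tuple[int, ...], dtype[np.intp]], so both
# assignments below type-check: the shape parameter stays the general one
# instead of being narrowed to a fixed rank by the first assignment.
indices: npt.NDArray[np.intp] = np.arange(5)
indices = np.unique(indices)  # result is tuple[int, ...]-shaped; still OK
print(indices.tolist())  # → [0, 1, 2, 3, 4]
```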
Dr-Irv commented May 11, 2025

Looks like the fix for
jorenham commented May 15, 2025

This is ready to merge, as far as I'm concerned.
charris commented May 15, 2025

OK then, in it goes. Thanks Joren.
c3fbf03 into numpy:main
Static type-checkers now consider `_: np.ndarray` as equivalent to `_: npt.NDArray[Any]`. Similarly, `_: np.flatiter` will now be considered equivalent to `_: np.flatiter[np.ndarray]`.
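In practice that means annotations like the following now type-check the same way (a sketch; `total` is a made-up function name, not part of the PR):

```python
import numpy as np
import numpy.typing as npt

def total(values: np.ndarray) -> np.floating:
    # With this change, the bare `np.ndarray` annotation above is treated
    # as `npt.NDArray[Any]`, i.e. any shape and any dtype.
    return values.sum()

arr: npt.NDArray[np.float64] = np.array([1.0, 2.0, 3.0])
it: np.flatiter = arr.flat  # now equivalent to np.flatiter[np.ndarray]
print(total(arr))  # → 6.0
```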