[MPS] Add support for two more isin variants #154010
Conversation
pytorch-bot commented May 21, 2025 • edited
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/154010
Note: Links to docs will display an error until the docs builds have been completed.
⏳ No Failures, 72 Pending as of commit 64c8b36 with merge base 2e56ce0.
UNSTABLE - The following job is marked as unstable, possibly due to flakiness on trunk.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
`isin_Tensor_Scalar_out` is just a redispatch to eq/neq. `isin_Scalar_Tensor_out` redispatches back to the generic `isin` op. Add unittests to validate that. Before this change both of those failed:

```python
>>> import torch
>>> t = torch.tensor([0, 1, 2], device='mps')
>>> torch.isin(t, 1)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
NotImplementedError: The operator 'aten::isin.Tensor_Scalar_out' is not currently implemented for the MPS device. If you want this op to be considered for addition please comment on #141287 and mention use-case, that resulted in missing op as well as commit hash 3b875c2. As a temporary fix, you can set the environment variable `PYTORCH_ENABLE_MPS_FALLBACK=1` to use the CPU as a fallback for this op. WARNING: this will be slower than running natively on MPS.
>>> torch.isin(1, t)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
NotImplementedError: The operator 'aten::isin.Scalar_Tensor_out' is not currently implemented for the MPS device. If you want this op to be considered for addition please comment on #141287 and mention use-case, that resulted in missing op as well as commit hash 3b875c2. As a temporary fix, you can set the environment variable `PYTORCH_ENABLE_MPS_FALLBACK=1` to use the CPU as a fallback for this op. WARNING: this will be slower than running natively on MPS.
```

ghstack-source-id: 9c8eb66
Pull Request resolved: #154010
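For reference, the temporary workaround suggested by the error message above is to enable the CPU fallback. A minimal sketch, assuming the environment variable must be set before PyTorch dispatches its first MPS op (the exact point at which it is read is an assumption):

```python
import os

# Suggested by the error message as a temporary fix: route unsupported MPS ops
# to the CPU. Set it before importing torch to be safe.
os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1"

import torch

t = torch.tensor([0, 1, 2], device="mps")
# Prior to this PR, this call fell back to the CPU implementation,
# which is slower than running natively on MPS.
print(torch.isin(t, 1))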
Attention! native_functions.yaml was changed. If you are adding a new function or defaulted argument to native_functions.yaml, you cannot use it from pre-existing Python frontend code until our FC window passes (two weeks). Split your PR into two PRs, one which adds the new C++ functionality, and one that makes use of it from Python, and land them two weeks apart. See https://github.com/pytorch/pytorch/wiki/PyTorch's-Python-Frontend-Backward-and-Forward-Compatibility-Policy#forwards-compatibility-fc for more info.
malfet commented May 22, 2025
@pytorchbot merge -f "Roses are red, violets are blue, I want to land this PR now"
pytorchmergebot commented May 22, 2025
Merge started. Your change will be merged immediately since you used the force (-f) flag, bypassing any CI checks (ETA: 1-5 minutes). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Stack from ghstack (oldest at bottom):
- `isin_Tensor_Scalar_out` is just a redispatch to eq/neq
- `isin_Scalar_Tensor_out` redispatches back to the generic `isin` op, but needs a small tweak to handle float scalars
- Make sure that `out` is resized to the expected size in `isin_Tensor_Tensor_out_mps`
- Add unittests to validate that (a rough sketch follows this list), but skip them on MacOS-13, where the MPS op just returns garbage

Before this change both of those variants failed (see the traceback in the commit message above).
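As a rough illustration of the behavior the new unittests are meant to cover, the following assertions express the intended semantics of the two variants; they are an assumption based on the description above, not the actual test code added in the PR:

```python
import torch

t = torch.tensor([0, 1, 2], device="mps")

# Tensor/Scalar variant: semantically equivalent to an element-wise eq/ne,
# which is what the MPS kernel redispatches to.
assert torch.equal(torch.isin(t, 1).cpu(), torch.eq(t, 1).cpu())
assert torch.equal(torch.isin(t, 1, invert=True).cpu(), torch.ne(t, 1).cpu())

# Scalar/Tensor variant: redispatches back to the generic Tensor/Tensor isin,
# so the result should match wrapping the scalar in a 0-dim tensor.
assert torch.equal(
    torch.isin(1, t).cpu(),
    torch.isin(torch.tensor(1, device="mps"), t).cpu(),
)
```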
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov