Fix subclass access custom op bug #149698
Conversation
pytorch-bot commented Mar 21, 2025 (edited)
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/149698
Note: Links to docs will display an error until the docs builds have been completed. ✅ You can merge normally! (1 unrelated failure) As of commit 18e895c with merge base 5d4b5ee. FLAKY: the following job failed but was likely due to flakiness present on trunk:
This comment was automatically generated by Dr. CI and updates every 15 minutes.
facebook-github-bot commented Mar 21, 2025
This pull request was exported from Phabricator. Differential Revision: D71599541
torch/export/custom_ops.py (Outdated)
we don't have much precedent for direct registrations to the Python key. But I guess this seems... ok. The argument is that this is never an op we want anyone to override, and we effectively want it to behave as if it were a python function
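For context on "as if it were a python function": a plain Python function is invisible to `__torch_dispatch__`, since subclasses only ever see the aten ops it desugars into, while a registered custom op arrives as one opaque call. A minimal sketch (hypothetical `LoggingTensor` and `scale_and_add`, not code from this PR) that logs which ops a wrapper subclass actually observes:

```python
# Hypothetical sketch: a wrapper subclass that logs every op reaching
# __torch_dispatch__. A plain Python function like scale_and_add never shows
# up in the log; only the aten ops it desugars into do, which is the behavior
# a Python-key decomposition aims to reproduce for a custom op.
import torch
from torch.utils._pytree import tree_map_only


class LoggingTensor(torch.Tensor):
    seen = []  # ops observed by __torch_dispatch__

    @staticmethod
    def __new__(cls, elem):
        return torch.Tensor._make_wrapper_subclass(cls, elem.shape, dtype=elem.dtype)

    def __init__(self, elem):
        self.elem = elem

    @classmethod
    def __torch_dispatch__(cls, func, types, args=(), kwargs=None):
        cls.seen.append(str(func))
        args, kwargs = tree_map_only(
            LoggingTensor, lambda t: t.elem, (args, kwargs or {})
        )
        # Return plain tensors for brevity; a real subclass would re-wrap outputs.
        return func(*args, **kwargs)


def scale_and_add(x, y):
    # A plain Python function: the subclass never sees "scale_and_add" itself.
    return x * 2 + y


x, y = LoggingTensor(torch.randn(2)), LoggingTensor(torch.randn(2))
scale_and_add(x, y)
print(LoggingTensor.seen)  # e.g. ['aten.mul.Tensor', 'aten.add.Tensor']
```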
Summary: When we call torch.inference_mode, we seem to skip the Autograd key, causing the custom op that export uses to not be decomposed properly before subclass dispatching starts. We fix this by force-desugaring this op at the Python key.
Test Plan: test
Reviewed By: bdhirsh
Differential Revision: D71599541
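To make the "skip Autograd key" part concrete: tensors created under torch.inference_mode are inference tensors and carry no Autograd dispatch keys, so anything registered at the Autograd step never runs for them and dispatch falls through to the Python (subclass) key directly. A quick check using the internal `torch._C._dispatch_keys` debugging helper (illustration only, not code from this PR):

```python
# Minimal illustration (not from this PR): inference tensors lack the Autograd
# dispatch keys, so no Autograd-level decomposition runs before subclass
# (__torch_dispatch__) handling sees the op.
import torch

regular = torch.randn(2)
with torch.inference_mode():
    inference = torch.randn(2)

print(regular.is_inference())              # False
print(inference.is_inference())            # True
print(torch._C._dispatch_keys(regular))    # includes AutogradCPU
print(torch._C._dispatch_keys(inference))  # no Autograd keys present
```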
6897a9c to 18e895c (Compare)
facebook-github-bot commented Mar 21, 2025
This pull request was exported from Phabricator. Differential Revision: D71599541
facebook-github-bot commented Mar 21, 2025
@pytorchbot merge (Initiating merge automatically since Phabricator Diff has merged)
pytorchmergebot commented Mar 21, 2025
Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Summary: When we call torch.inference_mode, we seem to skip the Autograd key, causing the custom op that export uses to not be decomposed properly before subclass dispatching starts. We fix this by force-desugaring this op at the Python key.
Test Plan: test
Differential Revision: D71599541
Pull Request resolved: pytorch#149698
Approved by: https://github.com/bdhirsh