[quant][pt2e] Add fold_quantize=True for all convert_pt2e calls #117797
Conversation
pytorch-bot (bot) commented Jan 18, 2024 (edited)
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/117797
Note: Links to docs will display an error until the docs builds have been completed.

✅ You can merge normally! (5 Unrelated Failures)

As of commit 5189971 with merge base 8f91a53:

- FLAKY - The following jobs failed but were likely due to flakiness present on trunk:
- BROKEN TRUNK - The following job failed but was present on the merge base: 👉 Rebase onto the `viable/strict` branch to avoid these failures
- UNSTABLE - The following job failed but was likely due to flakiness present on trunk and has been marked as unstable:
This comment was automatically generated by Dr. CI and updates every 15 minutes.
This pull request was exported from Phabricator. Differential Revision: D52879612
Force-pushed from 969b001 to ea34b75 (Compare)

This pull request was exported from Phabricator. Differential Revision: D52879612
Force-pushed from ea34b75 to 7bcd54c (Compare)

…rch#117797)
Summary: X-link: pytorch/executorch#1640
In preparation for enabling fold_quantize=True by default
Test Plan: CI
Reviewed By: andrewor14
Differential Revision: D52879612

Summary: X-link: pytorch/pytorch#117797
In preparation for enabling fold_quantize=True by default
Reviewed By: andrewor14
Differential Revision: D52879612
This pull request was exported from Phabricator. Differential Revision: D52879612
Force-pushed from 7bcd54c to 5189971 (Compare)
This pull request was exported from Phabricator. Differential Revision: D52879612
Summary: X-link: pytorch/pytorch#117797
Pull Request resolved: #1640
In preparation for enabling fold_quantize=True by default
Reviewed By: andrewor14
Differential Revision: D52879612
fbshipit-source-id: a3db1319b8ee4fac713946453eafdcd437f63a7e
@pytorchbot merge -f 'Landed internally'

(Initiating merge automatically since Phabricator Diff has merged, using force because this PR might not pass merge_rules.json but landed internally)
Merge started. Your change will be merged immediately since you used the force (-f) flag, bypassing any CI checks (ETA: 1-5 minutes). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Summary: In preparation for enabling fold_quantize=True by default
Test Plan: CI
Differential Revision: D52879612
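
For context, `convert_pt2e` is the conversion step of the PT2E quantization flow in `torch.ao.quantization.quantize_pt2e`, and this PR passes `fold_quantize=True` explicitly at existing call sites ahead of it becoming the default. Below is a minimal sketch of such a call site, assuming the early-2024 workflow with `capture_pre_autograd_graph` and `XNNPACKQuantizer`; the toy model, inputs, and quantizer configuration are illustrative and not taken from this PR.

```python
import torch
from torch._export import capture_pre_autograd_graph
from torch.ao.quantization.quantize_pt2e import prepare_pt2e, convert_pt2e
from torch.ao.quantization.quantizer.xnnpack_quantizer import (
    XNNPACKQuantizer,
    get_symmetric_quantization_config,
)


# Toy model for illustration only (not from the PR).
class M(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(16, 8)

    def forward(self, x):
        return self.linear(x)


example_inputs = (torch.randn(2, 16),)
model = M().eval()

# Capture the model into a pre-autograd ATen graph (capture entry point as of
# early 2024; it has changed in later releases).
captured = capture_pre_autograd_graph(model, example_inputs)

# Annotate with a quantizer and insert observers.
quantizer = XNNPACKQuantizer().set_global(get_symmetric_quantization_config())
prepared = prepare_pt2e(captured, quantizer)
prepared(*example_inputs)  # calibration pass

# The change this PR rolls out: pass fold_quantize=True explicitly so that
# quantize ops on weights are constant-folded into quantized weight tensors,
# matching the behavior that will later become the default.
quantized = convert_pt2e(prepared, fold_quantize=True)
```

With `fold_quantize=False`, the converted graph keeps float weights followed by quantize ops; with `fold_quantize=True`, those ops are folded away at convert time, which is why existing call sites are updated before the default flips.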