Fix too big to optimize in test, actually use O0 when aot_inductor.compile_wrapper_with_O0 is set #148714
Conversation
pytorch-bot bot commented Mar 6, 2025 • edited
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/148714
Note: Links to docs will display an error until the docs builds have been completed. ✅ No Failures as of commit b20b51c with merge base 1e37e5b. This comment was automatically generated by Dr. CI and updates every 15 minutes.
facebook-github-bot commented Mar 6, 2025
This pull request was exported from Phabricator. Differential Revision: D70670957
…mpile_wrapper_with_O0 is set (pytorch#148714)

Summary:
1. The '\0' char cannot be properly recognized. Change to use '0' instead.
2. We got the following error when using anything other than the O0 flag: `error: Function ZN5torch12aot_inductorL22__check_inputs_outputsEPP16AtenTensorOpaqueS3 is too big to optimize [-Werror,-Wignored-optimization-argument]`, so we use the O0 flag in wrapper code when `aot_inductor.compile_wrapper_with_O0` is set.

Test Plan:
```
buck run 'fbcode//mode/opt' fbcode//deeplearning/aot_inductor/cpu/test:ads_second_stage_dsnn_models_aoti_lowering_test -- -r AdsSecondStageDSNNModelsAOTILoweringTest
```

Differential Revision: D70670957
facebook-github-bot commented Mar 8, 2025
This pull request was exported from Phabricator. Differential Revision: D70670957
| """ | ||
| bool _check_aoti_runtime_check_inputs_env() { | ||
| const static char* env_var_value = getenv("AOTI_RUNTIME_CHECK_INPUTS"); | ||
| const static bool result = env_var_value != nullptr && env_var_value[0] != '\0'; |
Is just escaping the backslash with a double backslash the proper fix?
@Skylion007 I intend to check whether it's a '0', not compare against the null character. The original code was wrong. I guess the PR summary was misleading; I've updated it now.
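The distinction being discussed can be sketched as follows. This is a hypothetical standalone helper (`check_inputs_enabled` is not the PR's actual function name) illustrating why `env_var_value[0] != '\0'` was wrong: it only tests that the variable is non-empty, so `AOTI_RUNTIME_CHECK_INPUTS=0` would still enable the check, whereas comparing against the character `'0'` lets "0" disable it:

```cpp
// Hypothetical helper mirroring the corrected logic. In the real wrapper
// the value comes from getenv("AOTI_RUNTIME_CHECK_INPUTS") and is cached
// in a function-local static.
bool check_inputs_enabled(const char* env_var_value) {
    // '\0' would mean "any non-empty value enables the check";
    // '0' means "the literal value 0 disables it".
    return env_var_value != nullptr && env_var_value[0] != '0';
}
```

Note that with this logic an empty string still enables the check, since its first character is `'\0'`, which differs from `'0'`.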
desertfire commented Mar 10, 2025
The second bullet of your PR description is also stale. Please update.
…mpile_wrapper_with_O0 is set (pytorch#148714)

Summary:
1. Check against the "0" char instead.
2. We got the following error when using anything other than the O0 flag: `error: Function ZN5torch12aot_inductorL22__check_inputs_outputsEPP16AtenTensorOpaqueS3 is too big to optimize [-Werror,-Wignored-optimization-argument]`, so we use the O0 flag in wrapper code when `aot_inductor.compile_wrapper_opt_level` is set to `O0`.

Test Plan:
```
buck run 'fbcode//mode/opt' fbcode//deeplearning/aot_inductor/cpu/test:ads_second_stage_dsnn_models_aoti_lowering_test -- -r AdsSecondStageDSNNModelsAOTILoweringTest
```

Reviewed By: desertfire
Differential Revision: D70670957
facebook-github-bot commented Mar 12, 2025
This pull request was exported from Phabricator. Differential Revision: D70670957
facebook-github-bot commented Mar 13, 2025
@pytorchbot merge (Initiating merge automatically since Phabricator Diff has merged)
pytorchmergebot commented Mar 13, 2025
Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Summary:
1. Check against the "0" char instead.
2. We got the following error when using anything other than the O0 flag: `error: Function ZN5torch12aot_inductorL22__check_inputs_outputsEPP16AtenTensorOpaqueS3 is too big to optimize [-Werror,-Wignored-optimization-argument]`, so we use the O0 flag in wrapper code when `aot_inductor.compile_wrapper_opt_level` is set to `O0`.

Test Plan:
```
buck run 'fbcode//mode/opt' fbcode//deeplearning/aot_inductor/cpu/test:ads_second_stage_dsnn_models_aoti_lowering_test -- -r AdsSecondStageDSNNModelsAOTILoweringTest
```

Differential Revision: D70670957
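For context, the `aot_inductor.compile_wrapper_opt_level` knob named above would presumably be set through the Inductor config; the exact module path below is an assumption based on where other `aot_inductor` options live, not something stated in this PR:

```python
# Config fragment (assumed path): ask AOTInductor to compile the generated
# wrapper code at -O0, avoiding the "too big to optimize" clang error on
# very large generated functions.
import torch._inductor.config as inductor_config

inductor_config.aot_inductor.compile_wrapper_opt_level = "O0"
```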
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov