Fix too big to optimize in test, actually use O0 when aot_inductor.compile_wrapper_with_O0 is set #148714


Closed

yushangdi wants to merge 1 commit into pytorch:main from yushangdi:export-D70670957

Conversation

@yushangdi (Contributor) commented Mar 6, 2025 (edited)

Summary:

  1. Check the env var's first character against the `'0'` char instead of the null character `'\0'`.

  2. We got the following error when using anything other than the `O0` flag:

     `error: Function ZN5torch12aot_inductorL22__check_inputs_outputsEPP16AtenTensorOpaqueS3 is too big to optimize [-Werror,-Wignored-optimization-argument]`

     So we use the `O0` flag in the wrapper code when `aot_inductor.compile_wrapper_opt_level` is set to `O0`.

Test Plan:

```
buck run 'fbcode//mode/opt' fbcode//deeplearning/aot_inductor/cpu/test:ads_second_stage_dsnn_models_aoti_lowering_test -- -r AdsSecondStageDSNNModelsAOTILoweringTest
```

Differential Revision: D70670957

cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @yf225 @chenyang78 @kadeng @muchulee8 @amjames @chauhang @aakhundov

@pytorch-bot (bot) commented Mar 6, 2025 (edited)

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/148714

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit b20b51c with merge base 1e37e5b:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@facebook-github-bot (Contributor) commented:

This pull request was exported from Phabricator. Differential Revision: D70670957


@yushangdi added the `topic: not user facing` label Mar 6, 2025
yushangdi added a commit to yushangdi/pytorch that referenced this pull request Mar 8, 2025:

> …mpile_wrapper_with_O0 is set (pytorch#148714)
>
> Summary:
> 1. the '\0' char cannot be properly recognized. Change to use '0' instead.
> 2. We got the following error when using anything other than O0 flag: `error: Function ZN5torch12aot_inductorL22__check_inputs_outputsEPP16AtenTensorOpaqueS3 is too big to optimize [-Werror,-Wignored-optimization-argument]` So we use O0 flag in wrapper code when `aot_inductor.compile_wrapper_with_O0` is set
>
> Test Plan: `buck run 'fbcode//mode/opt' fbcode//deeplearning/aot_inductor/cpu/test:ads_second_stage_dsnn_models_aoti_lowering_test -- -r AdsSecondStageDSNNModelsAOTILoweringTest`
>
> Differential Revision: D70670957

"""
bool _check_aoti_runtime_check_inputs_env() {
const static char* env_var_value = getenv("AOTI_RUNTIME_CHECK_INPUTS");
const static bool result = env_var_value != nullptr && env_var_value[0] != '\0';
@Skylion007 (Collaborator) commented:

Is just escaping the backslash with a double backslash the proper fix?

@yushangdi (Contributor, Author) commented Mar 9, 2025 (edited):

@Skylion007 I intend to check whether it's a 0, not check against the null character. The original code was wrong. I guess the PR summary was misleading; I've updated it now.

@desertfire (Contributor) commented:

The second bullet of your PR description is also stale. Please update.

@pytorch-bot added the `ciflow/trunk` (Trigger trunk jobs on your pull request) label Mar 10, 2025
> …mpile_wrapper_with_O0 is set (pytorch#148714)
>
> Summary:
> 1. Check against the "0" char instead
> 2. We got the following error when using anything other than O0 flag: `error: Function ZN5torch12aot_inductorL22__check_inputs_outputsEPP16AtenTensorOpaqueS3 is too big to optimize [-Werror,-Wignored-optimization-argument]` So we use O0 flag in wrapper code when `aot_inductor.compile_wrapper_opt_level` is set to `O0`.
>
> Test Plan: `buck run 'fbcode//mode/opt' fbcode//deeplearning/aot_inductor/cpu/test:ads_second_stage_dsnn_models_aoti_lowering_test -- -r AdsSecondStageDSNNModelsAOTILoweringTest`
>
> Reviewed By: desertfire
>
> Differential Revision: D70670957

@facebook-github-bot (Contributor) commented:

@pytorchbot merge

(Initiating merge automatically since Phabricator Diff has merged)

@pytorchmergebot (Collaborator) commented:

Merge started

Your change will be merged once all checks pass (ETA 0-4 hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team.


Reviewers

@Skylion007 left review comments
@desertfire approved these changes

5 participants

@yushangdi @facebook-github-bot @desertfire @pytorchmergebot @Skylion007
