Refactor dynamo converter and ir.Model support #1765


Open

justinchuby wants to merge 31 commits into main from justinchu/dynamo-save (base: main)

Conversation

@justinchuby (Contributor) commented Apr 18, 2025 (edited)

Describe your changes

  • Avoid redundant disk IO when saving a dynamo exported model by saving the ir.Model directly
  • Refactor torch.onnx conversion passes
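The first bullet can be illustrated with a toy sketch. All names below (FakeIRModel, save_ir_model) are hypothetical stand-ins for onnxscript's ir.Model and the new ir_model_to_olive_model; the point is that the exported model is written to disk once, instead of being serialized to an intermediate proto file and re-loaded before saving.

```python
# Illustrative sketch only: FakeIRModel stands in for the in-memory IR model.
# Saving it directly avoids the old write-then-reload round-trip on disk.
from dataclasses import dataclass
from pathlib import Path
import tempfile


@dataclass
class FakeIRModel:
    payload: bytes

    def serialize(self) -> bytes:
        # In the real flow this would serialize the IR graph to ONNX bytes.
        return self.payload


def save_ir_model(model: FakeIRModel, output_path: Path) -> Path:
    # Single write: no intermediate ModelProto file on disk.
    output_path.write_bytes(model.serialize())
    return output_path


with tempfile.TemporaryDirectory() as tmp:
    out = save_ir_model(FakeIRModel(b"onnx-bytes"), Path(tmp) / "model.onnx")
    print(out.read_bytes())  # b'onnx-bytes'
```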

Checklist before requesting a review

  • Add unit tests for this change.
  • Make sure all tests can pass.
  • Update documents if necessary.
  • Lint and apply fixes to your code by running lintrunner -a
  • Is this a user-facing change? If yes, give a description of this change to be included in the release notes.
  • Does this PR include example changes? If yes, please remember to update the example documentation in a follow-up PR.

@justinchuby requested review from titaiwangms, Copilot, and jambayk and removed the request for titaiwangms on April 18, 2025 00:52
@CopilotAI (Copilot) left a comment

Pull Request Overview

This PR refactors the model conversion flow to avoid redundant disk IO when saving a Dynamo exported model and enables direct saving of ONNX IR models by introducing the ir_model_to_olive_model function.

  • Updated onnxscript_fusion.py to replace model_proto_to_olive_model with ir_model_to_olive_model.
  • Refactored conversion.py to add separate export methods for torchscript and dynamo exporters.
  • Added ir_model_to_olive_model in onnx/common.py to support saving IR models.

Reviewed Changes

Copilot reviewed 3 out of 3 changed files in this pull request and generated 1 comment.

  • olive/passes/onnx/onnxscript_fusion.py: Updated the model conversion call to use ir_model_to_olive_model.
  • olive/passes/onnx/conversion.py: Refactored export methods and introduced dynamo exporter changes.
  • olive/passes/onnx/common.py: Added the new function ir_model_to_olive_model for saving IR models.

@justinchuby (Contributor, Author) commented Apr 22, 2025 (edited)

@titaiwangms (Contributor) commented, quoting the author:

> https://aiinfra.visualstudio.com/PublicPackages/_build/results?buildId=758867&view=logs&j=168020a6-ab1b-592e-b9e6-a6cd0eb1939c&t=f9534d71-ad75-594d-34df-0afc963a2a4d&l=10801
>
> @titaiwangms do you have an idea about this error in _rename_dynamic_shapes_with_model_inputs?

Yes, it's a bug, and the function will be deleted in 2.7. I didn't consider that the input names of the ONNX graph can mismatch those of the nn.Module, which causes a KeyError during matching. (That's why I scheduled the removal for 2.7.)
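The mismatch described here can be reproduced with a toy example. The names below are hypothetical; the real lookup lives inside torch.onnx internals:

```python
# Hypothetical reproduction of the bug described above: dynamic_shapes is
# keyed by nn.Module forward() argument names, but the exported ONNX graph
# may rename its inputs, so a direct lookup raises KeyError.
dynamic_shapes = {"input_ids": {0: "batch"}}   # keyed by forward() args
onnx_graph_inputs = ["l_input_ids_"]           # hypothetical renamed graph inputs

try:
    shapes = [dynamic_shapes[name] for name in onnx_graph_inputs]
except KeyError as exc:
    print(f"KeyError: {exc}")  # KeyError: 'l_input_ids_'
```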


justinchuby marked this pull request as ready for review on April 22, 2025 17:26
@CopilotAI (Copilot) left a comment

Pull Request Overview

This PR refactors the dynamo exporter flow to avoid redundant disk IO and updates the torch.onnx conversion passes while improving type annotations and generic typing.

  • Update test conditions to account for torch version 2.7.0 and dynamic shape handling.
  • Refactor type hints in IoConfig by marking several fields as Optional.
  • Enhance generic typing in config_utils with a type variable.
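The version-gated test skips mentioned above rely on comparing the installed torch version against 2.7.0. The helper below is a stdlib-only sketch of such a check (the tests' own helper, _torch_is_older_than, presumably consults torch.__version__; it is reimplemented here for illustration):

```python
# Hedged sketch of a version-comparison helper, using stdlib-only tuple
# comparison on the leading numeric release segment of each version string.
def _version_tuple(version: str) -> tuple:
    parts = []
    # Drop local-version suffixes (e.g. "+cu121") and non-numeric tails.
    for piece in version.split("+")[0].split("."):
        digits = "".join(ch for ch in piece if ch.isdigit())
        if not digits:
            break
        parts.append(int(digits))
    return tuple(parts)


def is_older_than(current: str, target: str) -> bool:
    # Tuple comparison handles multi-digit components correctly (2.10 > 2.7).
    return _version_tuple(current) < _version_tuple(target)


print(is_older_than("2.6.0", "2.7.0"))   # True
print(is_older_than("2.7.0", "2.7.0"))   # False
print(is_older_than("2.10.0", "2.7.0"))  # False
```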

Reviewed Changes

Copilot reviewed 4 out of 4 changed files in this pull request and generated no comments.

  • test/unit_test/passes/onnx/test_conversion.py: Adjusted test skip conditions for Windows and dynamic shapes based on torch version.
  • olive/model/config/io_config.py: Updated type annotations to use Optional for various input/output fields.
  • olive/common/config_utils.py: Modified the config union type to use a generic type variable T.
Comments suppressed due to low confidence (1)

olive/common/config_utils.py:317

  • Ensure that the type variable T is properly defined (e.g., via 'from typing import TypeVar' and 'T = TypeVar("T")') to support the generic type annotation.
config: Union[dict[str, Any], T, None],
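Copilot's suggestion amounts to the standard TypeVar pattern. A minimal self-contained sketch (validate_config is simplified here, and DummyConfig is a hypothetical config class):

```python
# Minimal sketch of the generic signature being checked: T must be declared
# via TypeVar for Union[dict[str, Any], T, None] to be a valid annotation.
from typing import Any, Optional, TypeVar, Union

T = TypeVar("T")


def validate_config(
    config: Union[dict[str, Any], T, None],
    config_class: type,
) -> T:
    # Simplified: build from a dict, pass an instance through, default otherwise.
    if isinstance(config, dict):
        return config_class(**config)
    if isinstance(config, config_class):
        return config
    return config_class()


class DummyConfig:
    def __init__(self, name: str = "default"):
        self.name = name


print(validate_config({"name": "x"}, DummyConfig).name)  # x
print(validate_config(None, DummyConfig).name)           # default
```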

A Copilot comment was marked as outdated.

@titaiwangms (Contributor):

@justinchuby @jambayk Are we merging this soon? I am working on a PR to refactor the patch and dynamic_shapes to the latest version for better LLM support.

@justinchuby (Contributor, Author):

I think the PR is ready; I would need some reviews.

@CopilotAI (Copilot) left a comment

Pull Request Overview

This PR refactors the dynamo exporter and onnx conversion passes by reducing redundant disk IO and updating type annotations.

  • Updated test conditions for dynamo export based on torch versions
  • Changed type annotations in IoConfig to use Optional
  • Updated generic type hints in config_utils

Reviewed Changes

Copilot reviewed 4 out of 4 changed files in this pull request and generated 1 comment.

  • test/unit_test/passes/onnx/test_conversion.py: Adjusted test conditions with additional torch version checks.
  • olive/model/config/io_config.py: Updated type hints to use Optional for better type clarity and safety.
  • olive/common/config_utils.py: Modified the function signature to use a generic type T for the configuration input.
Comments suppressed due to low confidence (2)

test/unit_test/passes/onnx/test_conversion.py:36

  • [nitpick] Consider adding an inline comment to explain why skipping is conditioned on the torch version for Windows platforms, which can help future maintainers understand the rationale.
if platform.system() == "Windows" and use_dynamo_exporter and _torch_is_older_than("2.7.0"):

olive/model/config/io_config.py:37

  • [nitpick] Updating type annotations to Optional is a good improvement. It would be helpful to update the corresponding docstrings to reflect these changes.
input_shapes: Optional[list[list[int]]] = None
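The Optional annotations Copilot refers to follow the usual pattern for fields that default to None. A hedged sketch with a plain dataclass (the real IoConfig is a pydantic-style config class; IoConfigSketch and its fields other than input_shapes are illustrative):

```python
# Sketch of the annotation change: fields defaulting to None are typed
# Optional[...] so type checkers accept the None default.
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class IoConfigSketch:
    input_names: list = field(default_factory=list)
    input_shapes: Optional[list] = None  # e.g. [[1, 3, 224, 224]]
    output_names: list = field(default_factory=list)


cfg = IoConfigSketch(input_names=["x"], input_shapes=[[1, 3, 224, 224]])
print(cfg.input_shapes)                 # [[1, 3, 224, 224]]
print(IoConfigSketch().input_shapes)    # None
```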

@justinchuby (Contributor, Author):
=================================== FAILURES ===================================
____________________________ test_inc_quantization _____________________________
test/unit_test/passes/inc/test_inc_quantization.py:43: in test_inc_quantization
quantized_model = p.run(ov_model, output_folder)
/passes/_pass.py:242: in run
output_model = self._run_for_config(model, self.config, output_model_path)
***/passes/onnx/inc_quantization.py:536: in _run_for_config
if q_model.is_large_model:
E AttributeError: 'NoneType' object has no attribute 'is_large_model'

@@ -531,13 +531,17 @@ def _run_for_config(
"find any quantized model which meet accuracy goal. "
"Try to increase 'max_trials' in 'tuning_criterion'."
)
if q_model is None:
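The diff above (truncated in this page) adds a None check before q_model attributes are accessed. The guard pattern, sketched with a hypothetical function name and error message based on the surrounding strings:

```python
# Hedged sketch of the fix: raise a clear error when INC tuning returns no
# quantized model, instead of AttributeError on NoneType.is_large_model.
def ensure_quantized_model(q_model):
    if q_model is None:
        raise RuntimeError(
            "Quantization failed to find a model that meets the accuracy goal. "
            "Try increasing 'max_trials' in 'tuning_criterion'."
        )
    return q_model


class FakeQModel:
    is_large_model = False


print(ensure_quantized_model(FakeQModel()).is_large_model)  # False
```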
A reviewer (Contributor) commented:

Thanks! The changes look good to me. I was waiting for the //build release to stabilize before we merged this. There's a failing test for this, though.

@justinchuby (Contributor, Author) replied:

Yes, I need to figure that out. Somehow the ir_model_to_olive_model method seems to be missing data. Do you have an idea? Otherwise I will do some more debugging.

A reviewer (Contributor) replied:

I am not familiar with this failure.

Reviewers

  • @jambayk left review comments
  • Copilot left review comments
  • Awaiting requested review from @titaiwangms

At least 1 approving review is required to merge this pull request.

Assignees

@jambayk

Labels

None yet

Projects

None yet

Milestone

No milestone

Development

Successfully merging this pull request may close these issues.

4 participants: @justinchuby, @titaiwangms, @jambayk, @xiaoyu-work
