[None][chore] Update the Flux autodeploy example #8434


Merged

Conversation

@ajrasane (Collaborator) commented Oct 16, 2025 · edited by the coderabbitai bot

Summary by CodeRabbit

  • New Features

    • Added a --max_batch_size CLI parameter to configure the maximum batch size for the model.
    • Introduced a wrapper module for compiled models to provide consistent integration with existing pipelines.
  • Refactor

    • Updated the quantized model loading process.
    • Streamlined the main execution flow for model compilation and initialization.

Description

Updated the Flux example to be compatible with the latest version of AutoDeploy.

Test Coverage

TODO: Add integration test

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR follows the TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see the test instructions).

  • Any new dependencies have been scanned for license and vulnerabilities.

  • CODEOWNERS updated if ownership changes.

  • Documentation updated as needed.

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • [ ] Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provides a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

run [--reuse-test [pipeline-id] --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug (experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test [pipeline-id] (OPTIONAL): Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL): Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensures that all builds and tests are run regardless of previous successes.

--disable-fail-fast (OPTIONAL): Disable fail-fast on build/test/infra failures.

--skip-test (OPTIONAL): Skip all test stages, but still run build stages, package stages, and sanity check stages. Note: does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL): Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL): Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL): Skip test stages that don't match the specified backends. Only supports [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with the tensorrt or triton backend). Note: does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL): Only run the multi-GPU tests. Note: does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL): Disable the multi-GPU tests. Note: does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL): Force-run the multi-GPU tests in addition to running the L0 pre-merge pipeline.

--post-merge (OPTIONAL): Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL): Run the ordinary L0 pre-merge pipeline plus the specified test stages. Example: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL): Enable flushing all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL): Experimental feature. Enable access to the CI container for debugging purposes. Note: specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md and the scripts/test_to_stage_mapping.py helper.
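For example, to rerun a single stage with fail-fast disabled (combining flags exactly as documented above):

/bot run --stage-list "A10-PyTorch-1" --disable-fail-fast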

kill

kill

Kills all running builds associated with the pull request.

skip

skip --comment COMMENT

Skips testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous, since lack of user care and validation can cause the top of tree to break.

reuse-pipeline

reuse-pipeline

Reuses a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous, since lack of user care and validation can cause the top of tree to break.

@coderabbitai bot (Contributor) commented Oct 16, 2025 · edited

📝 Walkthrough

Modifies the Flux model compilation pipeline: replaces the graph fusion and quantization steps with load_buffers_and_params for quantized state restoration, introduces TransformerWrapper for a consistent module interface, adds a max_batch_size CLI parameter, and propagates it through the compiler configuration.
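To make the wrapper concrete, here is a minimal sketch consistent with the walkthrough. It assumes the wrapper simply delegates to the compiled graph; the actual class in examples/auto_deploy/build_and_run_flux.py may differ in detail:

```python
import torch.nn as nn


class TransformerWrapper(nn.Module):
    """Sketch of the wrapper described in the walkthrough, not the exact source."""

    def __init__(self, gm: nn.Module, config):
        super().__init__()
        self.gm = gm          # the compiled graph module
        self.config = config  # transformer config the diffusers pipeline expects

    def forward(self, *args, **kwargs):
        # Delegate straight to the compiled graph.
        return self.gm(*args, **kwargs)

    def cache_context(self, *args, **kwargs):
        # Intentional no-op: the pipeline may call this, but the compiled
        # graph requires no cache-context handling.
        pass
```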

Changes

Cohort: Quantized State Loading & Model Wrapping
File(s): examples/auto_deploy/build_and_run_flux.py
Summary: Removes the fuse_gemms and quantize imports; adds a load_buffers_and_params import from tensorrt_llm._torch.auto_deploy.transformations._graph; introduces a new TransformerWrapper class exposing a forward method and a no-op cache_context method; updates the model assignment to wrap the compiled graph with TransformerWrapper(gm, config) instead of assigning it directly.

Cohort: Compiler Configuration & CLI
File(s): examples/auto_deploy/build_and_run_flux.py
Summary: Adds a --max_batch_size CLI argument with type=int and default=1; propagates max_batch_size from the CLI args through the compiler invocation via a new parameter in compiler_cls(gm, args=(), max_batch_size=args.max_batch_size, kwargs=flux_kwargs).
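The CLI wiring described above could look roughly like this (a sketch: the --max_batch_size definition matches the table, while compiler_cls, gm, and flux_kwargs are names quoted from the example script and are not defined here):

```python
import argparse

parser = argparse.ArgumentParser()
# New CLI knob from the table above: maximum batch size for the compiled model.
parser.add_argument("--max_batch_size", type=int, default=1,
                    help="Maximum batch size for the compiled Flux model.")
args = parser.parse_args()

# Propagation through the compiler invocation, as quoted in the summary:
# compiler_cls(gm, args=(), max_batch_size=args.max_batch_size, kwargs=flux_kwargs)
```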

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Pre-merge checks and finishing touches

❌ Failed checks (2 warnings, 1 inconclusive)
  • Description Check: ⚠️ Warning
    Explanation: The PR description is incomplete and lacks sufficient detail. The "Description" section only states "Updated the Flux example to be compatible with the latest version of auto deploy," which is vague and does not explain what specifically changed or why those changes were necessary. The "Test Coverage" section explicitly marks the item as "TODO: Add integration test," indicating that no test coverage has been provided for these significant changes. The PR checklist remains unchecked, suggesting the author did not complete the pre-submission review items. Given that the raw summary shows substantial structural changes, including a new public class, modified compilation paths, and removed transformations, the description fails to adequately communicate these modifications.
    Resolution: Expand the "Description" section to explain the specific changes: introduce TransformerWrapper as a wrapper around compiled models, switch from fuse_gemms/quantize to load_buffers_and_params for quantized state restoration, add max_batch_size CLI parameter support, and why these changes improve the autodeploy pipeline. For "Test Coverage," either provide concrete test names/locations for existing tests or explicitly commit to adding integration tests before merge. Finally, review and check off the items in the PR checklist that apply, to validate compliance with coding guidelines and testing requirements.
  • Docstring Coverage: ⚠️ Warning
    Explanation: Docstring coverage is 16.67%, which is insufficient; the required threshold is 80.00%.
    Resolution: You can run @coderabbitai generate docstrings to improve docstring coverage.
  • Title Check: ❓ Inconclusive
    Explanation: The PR title "[None][chore] Update the Flux autodeploy example" is generic and vague. While it correctly refers to the file being changed and acknowledges that updates are being made, it fails to convey the specific nature or significance of the changes. The raw summary shows substantial technical modifications, including a new TransformerWrapper class, a changed quantization approach via load_buffers_and_params, new max_batch_size CLI support, and a modified compilation flow, none of which are reflected in the title. The title reads like a catch-all descriptor rather than a specific summary of the primary changes from the developer's perspective.
    Resolution: Consider using a more descriptive title that captures the key architectural changes, such as "[None][feat] Introduce TransformerWrapper and refactor Flux autodeploy compilation path" or "[None][refactor] Update Flux autodeploy with new quantization and batch size support." This would give reviewers and future maintainers a clearer understanding of what the PR accomplishes at a glance.

📜 Recent review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 46ee7ac and 08b4363.

📒 Files selected for processing (1)
  • examples/auto_deploy/build_and_run_flux.py (3 hunks)
🧰 Additional context used
📓 Path-based instructions (3)
**/*.{h,hpp,hh,hxx,cpp,cxx,cc,cu,cuh,py}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

Use only spaces, no tabs; indent with 4 spaces.

Files:

  • examples/auto_deploy/build_and_run_flux.py
**/*.py

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

**/*.py: Python code must target Python 3.8+.
Indent Python code with 4 spaces; do not use tabs.
Maintain module namespace when importing; prefer 'from package.subpackage import foo' then 'foo.SomeClass()' instead of importing the class directly.
Python filenames should be snake_case (e.g., some_file.py).
Python classes use PascalCase names.
Functions and methods use snake_case names.
Local variables use snake_case; prefix 'k' for variables that start with a number (e.g., k_99th_percentile).
Global variables use upper SNAKE_CASE prefixed with 'G' (e.g., G_MY_GLOBAL).
Constants use upper SNAKE_CASE (e.g., MY_CONSTANT).
Avoid shadowing variables from an outer scope.
Initialize all externally visible members of a class in the constructor.
Prefer docstrings for interfaces that may be used outside a file; comments for in-function or file-local interfaces.
Use Google-style docstrings for classes and functions (Sphinx-parsable).
Document attributes and variables inline so they render under the class/function docstring.
Avoid reflection when a simpler, explicit approach suffices (e.g., avoid dict(**locals()) patterns).
In try/except, catch the most specific exceptions possible.
For duck-typing try/except, keep the try body minimal and use else for the main logic.

Files:

  • examples/auto_deploy/build_and_run_flux.py
**/*.{cpp,cxx,cc,h,hpp,hh,hxx,cu,cuh,py}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

Prepend the NVIDIA Apache-2.0 copyright header with current year to the top of all source files (e.g., .cpp, .h, .cu, .py).

Files:

  • examples/auto_deploy/build_and_run_flux.py
🧬 Code graph analysis (1)
examples/auto_deploy/build_and_run_flux.py (3)
tensorrt_llm/_torch/auto_deploy/transformations/_graph.py (1)
  • load_buffers_and_params (32-68)
tensorrt_llm/_torch/auto_deploy/compile/compiler.py (3)
  • CompileBackendRegistry (12-31)
  • get (25-27)
  • compile (47-48)
tensorrt_llm/_torch/auto_deploy/compile/backends/torch_opt.py (1)
  • compile (26-28)
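As a hedged illustration of how the symbols listed above might fit together (the backend key "torch-opt" and the exact call shape are assumptions, not confirmed by this page):

```python
# Look up a compiler backend class from the registry, then compile the graph.
compiler_cls = CompileBackendRegistry.get("torch-opt")  # key is hypothetical
compiled = compiler_cls(gm, args=(), max_batch_size=1, kwargs={}).compile()
```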
🪛 Ruff (0.14.0)
examples/auto_deploy/build_and_run_flux.py

26-26: Unused method argument: args

(ARG002)


26-26: Unused method argument: kwargs

(ARG002)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Pre-commit Check
🔇 Additional comments (6)
examples/auto_deploy/build_and_run_flux.py (6)

10-10: LGTM!

The import of load_buffers_and_params is appropriate for the new quantized weight loading approach.


139-144: LGTM!

The max_batch_size CLI argument is well defined, with a sensible default value of 1.


177-178: LGTM!

The TransformerWrapper instantiation correctly wraps the compiled model with the configuration, providing a consistent interface for the pipeline.


167-171: Confirm permissive weight-loading settings

The call at examples/auto_deploy/build_and_run_flux.py:169-171 uses strict_missing=False, strict_unexpected=False, and clone=False, which deviates from the usual strict_missing=True pattern and silently ignores mismatched keys while sharing tensor memory. Confirm this is intentional and won't mask missing parameters or cause aliasing issues.
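For reference, the call shape under discussion might look like this (the keyword arguments are taken from the comment above; the positional arguments and their names are assumptions):

```python
# Hypothetical call shape; see build_and_run_flux.py:169-171 for the real one.
load_buffers_and_params(
    gm,                       # compiled graph module (assumed positional arg)
    state_dict,               # quantized weights to restore (assumed name)
    strict_missing=False,     # tolerate parameters absent from the checkpoint
    strict_unexpected=False,  # tolerate extra keys in the checkpoint
    clone=False,              # share tensor storage rather than copying
)
```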


16-36: Track the cache_context limitation and file a follow-up issue. The no-op implementation is safe, as there are no call sites, and the unused args/kwargs are required by the interface (prefix them with underscores to silence linters if desired); a sketch of that fix follows.
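A sketch of that linter-friendly fix (not the merged code):

```python
def cache_context(self, *_args, **_kwargs):
    # Interface-required no-op; the underscore prefixes tell Ruff (ARG002)
    # that the arguments are intentionally unused.
    pass
```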


174-174: max_batch_size is supported by all compiler backends

TorchCudagraphCompiler (and thus TorchOptCompiler) explicitly handles max_batch_size, and the base CompilerBackend's __init__ accepts extra kwargs, so passing this parameter will not cause a runtime error. A rough sketch of that pattern follows.
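The tolerant-constructor pattern the comment describes might look roughly as follows (a sketch under the assumption that the base class simply accepts and stores extra keyword arguments; the real code lives in tensorrt_llm/_torch/auto_deploy/compile/compiler.py):

```python
class CompilerBackend:
    """Illustrative base class: extra kwargs are accepted even if unused."""

    def __init__(self, gm, args=(), kwargs=None, **backend_kwargs):
        self.gm = gm
        self.args = args
        self.kwargs = kwargs or {}
        # Options like max_batch_size land here, so passing them is never
        # a construction-time error for backends that ignore them.
        self.backend_kwargs = backend_kwargs
```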


Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.

Comment @coderabbitai help to get the list of available commands and usage tips.

@ajrasane marked this pull request as draft on October 16, 2025 16:26
@ajrasane self-assigned this on Oct 16, 2025
@ajrasane force-pushed the user/ajrasane/torch_diffusers branch 2 times, most recently from 813986e to 315756e, on October 28, 2025 04:45
@ajrasane force-pushed the user/ajrasane/torch_diffusers branch from d9329d5 to 623ae89 on November 7, 2025 00:37
@suyoggupta (Collaborator)

/bot run

Signed-off-by: ajrasane <131806219+ajrasane@users.noreply.github.com>
@ajrasane force-pushed the user/ajrasane/torch_diffusers branch from 623ae89 to 6dcb2c7 on November 7, 2025 03:32
@suyoggupta (Collaborator)

/bot run

@tensorrt-cicd (Collaborator)

PR_Github #23802 [ run ] triggered by Bot. Commit: 6dcb2c7

Signed-off-by: ajrasane <131806219+ajrasane@users.noreply.github.com>

@ajrasane marked this pull request as ready for review on November 7, 2025 03:55
@ajrasane requested a review from a team as a code owner on November 7, 2025 03:55
@tensorrt-cicd (Collaborator)

PR_Github #23802 [ run ] completed with state FAILURE. Commit: 6dcb2c7
/LLM/main/L0_MergeRequest_PR pipeline #17918 completed with status: 'FAILURE'

Signed-off-by: ajrasane <131806219+ajrasane@users.noreply.github.com>

@ajrasane force-pushed the user/ajrasane/torch_diffusers branch from 1b735d5 to 48f8a38 on November 14, 2025 13:57
@Fridah-nv (Collaborator)

/bot run

@tensorrt-cicd (Collaborator)

PR_Github #24628 [ run ] triggered by Bot. Commit: bcc687f

@tensorrt-cicd (Collaborator)

PR_Github #24628 [ run ] completed with state SUCCESS. Commit: bcc687f
/LLM/main/L0_MergeRequest_PR pipeline #18592 completed with status: 'FAILURE'

@Fridah-nv (Collaborator)

/bot run

@tensorrt-cicd (Collaborator)

PR_Github #24730 [ run ] triggered by Bot. Commit: bcc687f

@tensorrt-cicd (Collaborator)

PR_Github #24730 [ run ] completed with state FAILURE. Commit: bcc687f
/LLM/main/L0_MergeRequest_PR pipeline #18664 completed with status: 'FAILURE'

@Fridah-nv (Collaborator)

/bot run

@tensorrt-cicd (Collaborator)

PR_Github #24739 [ run ] triggered by Bot. Commit: d486f99

@tensorrt-cicd (Collaborator)

PR_Github #24739 [ run ] completed with state FAILURE. Commit: d486f99

@Fridah-nv (Collaborator)

/bot run

@tensorrt-cicd (Collaborator)

PR_Github #24794 [ run ] triggered by Bot. Commit: 7e5f4cf

@tensorrt-cicd (Collaborator)

PR_Github #24794 [ run ] completed with state SUCCESS. Commit: 7e5f4cf
/LLM/main/L0_MergeRequest_PR pipeline #18709 completed with status: 'FAILURE'

@Fridah-nv (Collaborator)

/bot run

@tensorrt-cicd (Collaborator)

PR_Github #24805 [ run ] triggered by Bot. Commit: 7e5f4cf

@tensorrt-cicd (Collaborator)

PR_Github #24805 [ run ] completed with state SUCCESS. Commit: 7e5f4cf
/LLM/main/L0_MergeRequest_PR pipeline #18718 completed with status: 'FAILURE'

@Fridah-nv (Collaborator)

/bot run

@tensorrt-cicd (Collaborator)

PR_Github #24933 [ run ] triggered by Bot. Commit: 7e5f4cf

@tensorrt-cicd (Collaborator)

PR_Github #24933 [ run ] completed with state SUCCESS. Commit: 7e5f4cf
/LLM/main/L0_MergeRequest_PR pipeline #18832 completed with status: 'SUCCESS'

@Fridah-nv merged commit 8d7cda2 into NVIDIA:main on Nov 18, 2025
5 checks passed
@github-project-automation bot moved this from In review to Done in AutoDeploy Board on Nov 18, 2025
lkomali pushed a commit to lkomali/TensorRT-LLM that referenced this pull request on Nov 19, 2025

Signed-off-by: ajrasane <131806219+ajrasane@users.noreply.github.com>
Co-authored-by: Frida Hou <201670829+Fridah-nv@users.noreply.github.com>
Signed-off-by: lkomali <lkomali@nvidia.com>

Reviewers

@lucaslie left review comments

@VALLIS-NERIA left review comments

@juney-nvidia approved these changes

@Fridah-nv approved these changes

@kaiyux awaiting requested review; kaiyux is a code owner automatically assigned from NVIDIA/trt-llm-doc-owners

@nv-guomingz awaiting requested review; nv-guomingz is a code owner automatically assigned from NVIDIA/trt-llm-doc-owners

@cjluo-nv awaiting requested review

Assignees

@ajrasane

Labels

None yet

Projects

Archived in project

Milestone

No milestone

Development

Successfully merging this pull request may close these issues.

7 participants

@ajrasane, @suyoggupta, @tensorrt-cicd, @Fridah-nv, @lucaslie, @VALLIS-NERIA, @juney-nvidia
