[None][fix] : Fix OOM issue when dp padding is enabled #8052


Merged
peaceh-nv merged 2 commits into NVIDIA:main from peaceh-nv:fix-dp-oom on Oct 1, 2025

Conversation

@peaceh-nv (Collaborator) commented Sep 29, 2025 (edited)

@coderabbitai summary

Description

When dp padding is enabled, the number of chunks should be re-calculated using the padded token counts rather than the original all_rank_num_tokens. The OOM issue was found on SM120 because dp padding is only enabled on that platform; see https://github.com/NVIDIA/TensorRT-LLM/pull/7965/files
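
For illustration, a minimal Python sketch of the recalculation described above (the helper name `compute_num_chunks` and its exact signature are hypothetical; `num_rows`, `all_rank_num_tokens`, and `moe_max_num_tokens` are the names used in this PR, but this is not the merged fused_moe_cutlass.py code):

```python
import math


def compute_num_chunks(num_rows, all_rank_num_tokens, moe_max_num_tokens,
                       use_dp_padding):
    # Hypothetical sketch of the fix described above, not the merged code.
    if use_dp_padding and len(all_rank_num_tokens) > 1:
        # With DP padding, every rank is padded up to the largest rank's
        # token count, so chunks must be sized for the padded total.
        padded_total = max(all_rank_num_tokens) * len(all_rank_num_tokens)
        return math.ceil(padded_total / moe_max_num_tokens)
    # Original behavior: chunk by this rank's own (unpadded) row count.
    return math.ceil(num_rows / moe_max_num_tokens)
```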

Signed-off-by: peaceh <103117813+peaceh-nv@users.noreply.github.com>
peaceh-nv requested a review from a team as a code owner on September 29, 2025 at 05:02
@peaceh-nv (Collaborator, Author)

/bot run

peaceh-nv changed the title from "[None][fix] : Fix OOM issue when dp padding is enabled on SM120" to "[None][fix] : Fix OOM issue when dp padding is enabled" on Sep 29, 2025
@coderabbitai (Contributor)

📝 Walkthrough

Refines chunking logic in fused MoE forward_impl for distributed mode: when DP with padding is enabled and parallel_size > 1, num_chunks is computed from max(all_rank_num_tokens) and world size; otherwise, the original num_rows-based computation remains. No public APIs changed.

Changes

Cohort / File(s) | Summary
Fused MoE DP chunking (tensorrt_llm/_torch/modules/fused_moe/fused_moe_cutlass.py) | In forward_impl, the num_chunks calculation is now conditional: for DP with padding and parallel_size > 1, use ceil((max(all_rank_num_tokens) × num_ranks) ÷ moe_max_num_tokens); otherwise, use the prior num_rows-based formula. No other logic altered.

Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    participant Caller
    participant FusedMoE as FusedMoE.forward_impl
    participant DPState as DP Context
    participant Chunker as Chunking Logic
    Caller->>FusedMoE: forward_impl(input, ...)
    FusedMoE->>DPState: check use_dp, parallel_size, use_dp_padding
    alt DP with padding enabled and parallel_size > 1
        DPState-->>FusedMoE: all_rank_num_tokens
        FusedMoE->>Chunker: compute num_chunks = ceil(max(all_rank_num_tokens)*num_ranks / moe_max_num_tokens)
        note right of Chunker: New branch for DP padding
    else
        FusedMoE->>Chunker: compute num_chunks = ceil(num_rows / moe_max_num_tokens)
        note right of Chunker: Original behavior
    end
    FusedMoE->>FusedMoE: split inputs into chunks
    FusedMoE-->>Caller: proceed with existing processing per chunk
```

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Pre-merge checks and finishing touches

❌ Failed checks (2 warnings)
Check name | Status | Explanation | Resolution
Description Check | ⚠️ Warning | The description includes only the placeholder summary tag and a brief description of the fix but omits the actual summary under "@coderabbitai summary" and is missing the required "## Test Coverage" and "## PR Checklist" sections from the repository's template. | Please populate the summary content under the "@coderabbitai summary" heading and add the "## Test Coverage" section with relevant tests as well as the "## PR Checklist" section to fully comply with the repository's description template.
Docstring Coverage | ⚠️ Warning | Docstring coverage is 0.00%, which is below the required threshold of 80.00%. | You can run @coderabbitai generate docstrings to improve docstring coverage.
✅ Passed checks (1 passed)
Check name | Status | Explanation
Title Check | ✅ Passed | The title "[None][fix] : Fix OOM issue when dp padding is enabled" clearly and concisely identifies the primary change by specifying that an out-of-memory bug with data-parallel padding is being fixed, and it follows the required ticket and type tagging convention.

📜 Recent review details

Configuration used: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 9cea6bf and f17dc12.

📒 Files selected for processing (1)
  • tensorrt_llm/_torch/modules/fused_moe/fused_moe_cutlass.py (1 hunks)
🧰 Additional context used
📓 Path-based instructions (3)
**/*.{h,hpp,hh,hxx,cpp,cxx,cc,cu,cuh,py}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

Use only spaces, no tabs; indent with 4 spaces.

Files:

  • tensorrt_llm/_torch/modules/fused_moe/fused_moe_cutlass.py
**/*.py

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

**/*.py: Python code must target Python 3.8+.
Indent Python code with 4 spaces; do not use tabs.
Maintain module namespace when importing; prefer 'from package.subpackage import foo' then 'foo.SomeClass()' instead of importing the class directly.
Python filenames should be snake_case (e.g., some_file.py).
Python classes use PascalCase names.
Functions and methods use snake_case names.
Local variables use snake_case; prefix 'k' for variables that start with a number (e.g., k_99th_percentile).
Global variables use upper SNAKE_CASE prefixed with 'G' (e.g., G_MY_GLOBAL).
Constants use upper SNAKE_CASE (e.g., MY_CONSTANT).
Avoid shadowing variables from an outer scope.
Initialize all externally visible members of a class in the constructor.
Prefer docstrings for interfaces that may be used outside a file; comments for in-function or file-local interfaces.
Use Google-style docstrings for classes and functions (Sphinx-parsable).
Document attributes and variables inline so they render under the class/function docstring.
Avoid reflection when a simpler, explicit approach suffices (e.g., avoid dict(**locals()) patterns).
In try/except, catch the most specific exceptions possible.
For duck-typing try/except, keep the try body minimal and use else for the main logic.

Files:

  • tensorrt_llm/_torch/modules/fused_moe/fused_moe_cutlass.py
**/*.{cpp,cxx,cc,h,hpp,hh,hxx,cu,cuh,py}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

Prepend the NVIDIA Apache-2.0 copyright header with current year to the top of all source files (e.g., .cpp, .h, .cu, .py).

Files:

  • tensorrt_llm/_torch/modules/fused_moe/fused_moe_cutlass.py
🧠 Learnings (2)
📓 Common learnings
Learnt from: nv-lschneider
PR: NVIDIA/TensorRT-LLM#7910
File: cpp/tensorrt_llm/kernels/nccl_device/config.cu:42-49
Timestamp: 2025-09-23T14:58:05.372Z
Learning: In TensorRT-LLM NCCL device kernels (cpp/tensorrt_llm/kernels/nccl_device/), the token partitioning intentionally uses ceil-like distribution (same token_per_rank for all ranks) to ensure all ranks launch the same number of blocks. This is required for optimal NCCL device API barrier performance, even though it may launch extra blocks for non-existent tokens on later ranks. Runtime bounds checking in the kernel (blockID validation) handles the overshoot cases.
📚 Learning: 2025-08-09T20:57:04.084Z
Learnt from: sklevtsov-nvidia
PR: NVIDIA/TensorRT-LLM#3294
File: cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_gemm_tma_warp_specialized_input.cu:118-127
Timestamp: 2025-08-09T20:57:04.084Z
Learning: In the CUTLASS MoE finalize fusion implementation (cpp/tensorrt_llm/kernels/cutlass_kernels/moe_gemm/moe_gemm_tma_warp_specialized_input.cu), when setting `fused_finalize_epilogue.stride_final_output` with shape `(hidden_size, num_output_tokens, 1)`, the `num_rows_in_final_output` should be set to `num_output_tokens` (not `hidden_size`) because of a swap+transpose operation that maps rows of the output tensor to `hidden_size` and columns to `num_output_tokens`.

Applied to files:

  • tensorrt_llm/_torch/modules/fused_moe/fused_moe_cutlass.py
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Pre-commit Check
🔇 Additional comments (1)
tensorrt_llm/_torch/modules/fused_moe/fused_moe_cutlass.py (1)

472-478: LGTM! Conditional chunking logic correctly addresses OOM with DP padding.

The change properly handles the chunking calculation when DP padding is enabled:

  • Uses max(all_rank_num_tokens) * len(all_rank_num_tokens) for padded scenarios to ensure chunks are sized for the maximum token count across all ranks
  • Falls back to the original num_rows calculation for non-padded cases
  • This prevents OOM by accounting for the actual memory requirements when padding is applied

The implementation aligns with the retrieved learning about token partitioning using ceil-like distribution for optimal performance in distributed scenarios.

Based on learnings
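
As a concrete illustration of why the padded recalculation matters (the numbers below are invented for this example, not taken from the PR):

```python
import math

# Invented example values, for illustration only.
all_rank_num_tokens = [64, 512]   # tokens per DP rank before padding
moe_max_num_tokens = 256          # per-chunk token budget

# Original formula on the rank holding 64 rows: a single chunk.
old_chunks = math.ceil(64 / moe_max_num_tokens)  # -> 1

# With DP padding, every rank is padded to 512 tokens, so the gathered
# workload spans 512 * 2 = 1024 tokens and needs four bounded chunks.
new_chunks = math.ceil(max(all_rank_num_tokens)
                       * len(all_rank_num_tokens) / moe_max_num_tokens)  # -> 4
```

Sizing buffers for one oversized chunk instead of several bounded ones exceeds the moe_max_num_tokens workspace, which matches the OOM mode this fix addresses.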


@tensorrt-cicd (Collaborator)

PR_Github #20215 [ run ] triggered by Bot

Signed-off-by: peaceh <103117813+peaceh-nv@users.noreply.github.com>
@peaceh-nv (Collaborator, Author)

/bot run

@tensorrt-cicd (Collaborator)

PR_Github #20222 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator)

PR_Github #20215 [ run ] completed with state ABORTED
LLM/main/L0_MergeRequest_PR #15247 (Blue Ocean) completed with status: ABORTED

@tensorrt-cicd (Collaborator)

PR_Github #20222 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #15251 completed with status: 'SUCCESS'
Pipeline passed with automatic retried tests. Check the rerun report for details.

peaceh-nv merged commit 808e556 into NVIDIA:main on Oct 1, 2025
5 checks passed
faradawn pushed a commit to faradawn/TensorRT-LLM that referenced this pull request on Oct 2, 2025
Signed-off-by: peaceh <103117813+peaceh-nv@users.noreply.github.com>
Signed-off-by: Faradawn Yang <faradawny@gmail.com>
evezhier pushed a commit to evezhier/TensorRT-LLM that referenced this pull request on Oct 3, 2025
Signed-off-by: peaceh <103117813+peaceh-nv@users.noreply.github.com>
faradawn pushed a commit to faradawn/TensorRT-LLM that referenced this pull request on Oct 3, 2025
Signed-off-by: peaceh <103117813+peaceh-nv@users.noreply.github.com>
Signed-off-by: Faradawn Yang <faradawny@gmail.com>
dominicshanshan pushed a commit to dominicshanshan/TensorRT-LLM that referenced this pull request on Nov 1, 2025
Signed-off-by: peaceh <103117813+peaceh-nv@users.noreply.github.com>
dominicshanshan pushed a commit to dominicshanshan/TensorRT-LLM that referenced this pull request on Nov 3, 2025
Signed-off-by: peaceh <103117813+peaceh-nv@users.noreply.github.com>
dominicshanshan pushed a commit to dominicshanshan/TensorRT-LLM that referenced this pull request on Nov 3, 2025
Signed-off-by: peaceh <103117813+peaceh-nv@users.noreply.github.com>
dominicshanshan pushed a commit to dominicshanshan/TensorRT-LLM that referenced this pull request on Nov 3, 2025
Signed-off-by: peaceh <103117813+peaceh-nv@users.noreply.github.com>

Reviewers

@mikeiovine approved these changes

Assignees

No one assigned

Labels

None yet

Projects

None yet

Milestone

No milestone


3 participants

@peaceh-nv, @tensorrt-cicd, @mikeiovine
