
[https://nvbugs/5509024][fix] Print full parsed outputs and update keywords for multimodal model #7670


Conversation

@Wanli-Jiang (Collaborator) commented Sep 10, 2025 (edited)
Summary by CodeRabbit

  • Tests
    • Reused parsed multimodal outputs across end-to-end tests to reduce redundancy and improve efficiency.
    • Enhanced failure messages to include full parsed outputs for easier debugging.
    • Updated expected multimodal token sets for several models (including Gemma, Mistral, Phi-4) and adjusted a mixture-text-image expectation.
    • Enabled additional Gemma-specific CLI options to support a flashinfer backend during tests.

Description

Test Coverage

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR follows TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provides a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug (experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test [pipeline-id] (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from the specified pipeline, or from the last pipeline if no pipeline-id is given. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensures that all builds and tests run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/test/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages, and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages that don't match the specified backends. Only supports [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with the tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force-run the multi-GPU tests in addition to running the L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline plus the specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purposes. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.
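For illustration, typical bot comments combining the flags documented above might look like the following (the stage and GPU names are just the examples from the option descriptions, not a recommendation):

```
/bot run --disable-fail-fast --stage-list "A10-PyTorch-1"
/bot run --gpu-type "A30, H100_PCIe" --post-merge
/bot reuse-pipeline
```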

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

@Wanli-Jiang requested a review from a team as a code owner September 10, 2025 09:30
@coderabbitai (bot, Contributor) commented Sep 10, 2025 (edited)

📝 Walkthrough

Precomputes parsed multimodal outputs once per test, reuses them in assertions, expands assertion failure messages to include full parsed outputs, updates expected token lists for several models, and enables additional Gemma-specific CLI options in multimodal end-to-end tests. Changes are limited to tests.

Changes

Cohort / File(s): E2E multimodal tests (parsed output reuse, assertions, expected tokens, Gemma CLI flags)
tests/integration/defs/test_e2e.py

Summary:
- Introduced parsed_outputs = parse_output(output) and reused it across multiple multimodal tests (test_ptp_quickstart_multimodal, test_ptp_quickstart_multimodal_phi4mm, test_ptp_quickstart_multimodal_2gpu, test_ptp_quickstart_multimodal_multiturn).
- Replaced zip(parse_output(output), expected_keywords[model_name][modality]) with zip(parsed_outputs, ...) to avoid repeated parsing.
- Augmented assertion failure messages to append "Parsed output for all prompts: {parsed_outputs}" for better debugging.
- Updated expected token lists for several models (e.g., gemma-3-27b-it, mistral-small-3.1-24b-instruct, Phi-4-multimodal-instruct) and adjusted mixture-text-image expectations.
- Enabled additional Gemma-specific CLI options for the flashinfer backend (e.g., image_format=pil, attention_backend=FLASHINFER, disable_kv_cache_reuse, kv_cache_fraction=0.5, max_seq_len=1024).
- No changes to public function/class signatures.
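The parse-once-and-reuse pattern the walkthrough describes can be sketched in isolation. This is not the actual test file: parse_output below is a stand-in for the helper in tests/integration/defs/common.py, check_keywords is a hypothetical name, and the prompts and keywords are invented data.

```python
# Minimal sketch of the pattern described above: parse the model output
# once, reuse the parsed list across assertions, and include the full
# parsed list in every failure message for easier debugging.

def parse_output(output):
    # Stand-in parser: one generated string per prompt, one per line.
    return [line.strip() for line in output.strip().splitlines()]

def check_keywords(parsed_outputs, expected, match_ratio=0.6):
    ratios = []
    for prompt_output, prompt_keywords in zip(parsed_outputs, expected):
        matches = [kw in prompt_output.lower() for kw in prompt_keywords]
        obs_match_ratio = sum(matches) / len(matches)
        # The PR appends the full parsed_outputs list to the message.
        assert obs_match_ratio >= match_ratio, (
            f'Incorrect output!\nGenerated "{prompt_output}"\n'
            f"Parsed output for all prompts: {parsed_outputs}")
        ratios.append(obs_match_ratio)
    return ratios

output = "The image shows a dog on grass.\nA red bus on the street."
parsed_outputs = parse_output(output)   # parsed once, reused everywhere
expected = [["dog", "grass"], ["bus", "street"]]
print(check_keywords(parsed_outputs, expected))  # [1.0, 1.0]
```

Reusing parsed_outputs both avoids redundant parsing and guarantees the failure message shows exactly the data the assertions ran against.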

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~10 minutes

✨ Finishing touches
  • 📝 Generate Docstrings
🧪 Generate unit tests
  • Create PR with unit tests
  • Post copyable unit tests in a comment

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.

❤️ Share

Comment@coderabbitai help to get the list of available commands and usage tips.

Pre-merge checks

❌ Failed checks (2 warnings)
Check name | Status | Explanation / Resolution
Docstring Coverage | ⚠️ Warning | Docstring coverage is 0.00%, which is insufficient; the required threshold is 80.00%. You can run @coderabbitai generate docstrings to improve docstring coverage.
Description Check | ⚠️ Warning | The PR description includes CI links but does not follow the repository's required PR template: the formatted PR title and the key sections (Description, Test Coverage, and PR Checklist) are left empty or unpopulated, and there is no concise summary of what changed or why. Because those template fields are required to evaluate scope, test coverage, and reviewer responsibilities, the current description is largely incomplete and insufficient for a proper review. Please update the PR by adding a title that follows the template (e.g. [TRTLLM-1234][fix] Short summary), fill the Description with a short "what and why" summary and a list of changed files/impacts, populate Test Coverage with the relevant tests and commands/CI stages that validate the change, and complete the PR Checklist (including CODEOWNERS/docs/dependency notes). Also keep the provided CI links, add a brief note indicating which stages passed and any remaining checks, then assign appropriate reviewers and labels.

✅ Passed checks (1 passed)

Check name | Status | Explanation
Title Check | ✅ Passed | The PR title "[https://nvbugs/5509024][fix] Print full parsed outputs and update keywords for multimodal model" accurately summarizes the primary changes in the diff: adding full parsed-output printing for debugging and updating expected multimodal keywords. It follows the repository's NVBugs ticket + type format and is concise and specific. A reviewer scanning history can understand the main purpose without extra noise.

@coderabbitai (bot, Contributor) left a comment

Actionable comments posted: 3

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
tests/integration/defs/test_e2e.py (1)

2728-2740: Do not rely on zip() truncation; assert full coverage first

Same fix pattern as above to guarantee 1:1 checking.

```diff
-    parsed_outputs = parse_output(output)
-    for prompt_output, prompt_keywords in zip(
-            parsed_outputs, expected_keywords[model_name]["image"]):
+    parsed_outputs = parse_output(output)
+    expected = expected_keywords[model_name]["image"]
+    assert len(parsed_outputs) == len(expected), (
+        f"Expected {len(expected)} outputs, got {len(parsed_outputs)}. "
+        f"Full output: {parsed_outputs}"
+    )
+    for prompt_output, prompt_keywords in zip(parsed_outputs, expected):
         matches = [
             keyword in prompt_output.lower() for keyword in prompt_keywords
         ]
         obs_match_ratio = 1. * sum(matches) / len(matches)
         print("prompt_output:", prompt_output)
         print("prompt_keywords:", prompt_keywords)
         print("matches:", matches)
         print("obs_match_ratio:", obs_match_ratio)
         assert obs_match_ratio >= match_ratio, f"Incorrect output!\nFull output: {parsed_outputs}\nGenerated \"{prompt_output}\"\nExpected keywords \"{prompt_keywords}\"\n Matched keywords: {matches}\n Observed match ratio {obs_match_ratio} below threshold {match_ratio}"
```
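The concern behind this fix is easy to reproduce in isolation: Python's zip() stops at the shorter iterable, so missing parsed outputs are silently skipped instead of failing the test. A standalone illustration with hypothetical data:

```python
# zip() silently truncates to the shorter iterable, so if the parser
# returns fewer outputs than there are expected-keyword sets, the
# missing prompts are simply never checked.
parsed_outputs = ["only one parsed output"]
expected = [["cat"], ["dog"], ["bus"]]  # three prompts expected

pairs = list(zip(parsed_outputs, expected))
print(len(pairs))  # 1 -- two expected-keyword sets were never compared

# The length guard proposed in the review turns this into a loud failure:
try:
    assert len(parsed_outputs) == len(expected), (
        f"Expected {len(expected)} outputs, got {len(parsed_outputs)}")
except AssertionError as err:
    print(err)  # Expected 3 outputs, got 1
```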
📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 7c2f04f and ad210d1.

📒 Files selected for processing (1)
  • tests/integration/defs/test_e2e.py (5 hunks)
🧰 Additional context used
📓 Path-based instructions (3)
**/*.{h,hpp,hh,hxx,cpp,cxx,cc,cu,cuh,py}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

Use only spaces, no tabs; indent with 4 spaces.

Files:

  • tests/integration/defs/test_e2e.py
**/*.py

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

**/*.py: Python code must target Python 3.8+.
Indent Python code with 4 spaces; do not use tabs.
Maintain module namespace when importing; prefer 'from package.subpackage import foo' then 'foo.SomeClass()' instead of importing the class directly.
Python filenames should be snake_case (e.g., some_file.py).
Python classes use PascalCase names.
Functions and methods use snake_case names.
Local variables use snake_case; prefix 'k' for variables that start with a number (e.g., k_99th_percentile).
Global variables use upper SNAKE_CASE prefixed with 'G' (e.g., G_MY_GLOBAL).
Constants use upper SNAKE_CASE (e.g., MY_CONSTANT).
Avoid shadowing variables from an outer scope.
Initialize all externally visible members of a class in the constructor.
Prefer docstrings for interfaces that may be used outside a file; comments for in-function or file-local interfaces.
Use Google-style docstrings for classes and functions (Sphinx-parsable).
Document attributes and variables inline so they render under the class/function docstring.
Avoid reflection when a simpler, explicit approach suffices (e.g., avoid dict(**locals()) patterns).
In try/except, catch the most specific exceptions possible.
For duck-typing try/except, keep the try body minimal and use else for the main logic.

Files:

  • tests/integration/defs/test_e2e.py
**/*.{cpp,cxx,cc,h,hpp,hh,hxx,cu,cuh,py}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

Prepend the NVIDIA Apache-2.0 copyright header with current year to the top of all source files (e.g., .cpp, .h, .cu, .py).

Files:

  • tests/integration/defs/test_e2e.py
🧬 Code graph analysis (1)
tests/integration/defs/test_e2e.py (1)
tests/integration/defs/common.py (1)
  • parse_output (665-681)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Pre-commit Check

@Wanli-Jiang force-pushed the user/williamj/add-outputs-e2e-test branch from ad210d1 to e3ac09d September 15, 2025 06:42
@Wanli-Jiang changed the title from "[https://nvbugs/5509024][fix] Add full outputs in test_e2e" to "[https://nvbugs/5509024][fix] Print full parsed outputs and update keywords for multimodal model" Sep 15, 2025
@Wanli-Jiang force-pushed the user/williamj/add-outputs-e2e-test branch 2 times, most recently from 294ae94 to d84fde4 September 15, 2025 10:06
* Also print parsed_ouput for easy debugging. Signed-off-by: Wanli Jiang <35160485+Wanli-Jiang@users.noreply.github.com>
@Wanli-Jiang force-pushed the user/williamj/add-outputs-e2e-test branch from d84fde4 to 2a6a059 September 16, 2025 01:41
@Wanli-Jiang (Collaborator, Author) commented:

/bot run

@tensorrt-cicd (Collaborator) commented:

PR_Github #18684 [ run ] triggered by Bot

@coderabbitai (bot, Contributor) left a comment

Actionable comments posted: 0

♻️ Duplicate comments (3)
tests/integration/defs/test_e2e.py (3)

2381-2388: Add a length assertion before zipping to avoid silent truncation

Zip truncates on length mismatch; assert counts first and reuse a local expected variable.

```diff
-    parsed_outputs = parse_output(output)
-    for prompt_output, prompt_keywords in zip(
-            parsed_outputs, expected_keywords[model_name][modality]):
+    parsed_outputs = parse_output(output)
+    expected = expected_keywords[model_name][modality]
+    assert len(parsed_outputs) == len(expected), (
+        f"Expected {len(expected)} outputs, got {len(parsed_outputs)}. "
+        f"Parsed outputs: {parsed_outputs}"
+    )
+    for prompt_output, prompt_keywords in zip(parsed_outputs, expected):
         matches = [
             keyword in prompt_output.lower() for keyword in prompt_keywords
         ]
         obs_match_ratio = 1. * sum(matches) / len(matches)
-        assert obs_match_ratio >= match_ratio, f"Incorrect output!\nGenerated \"{prompt_output}\"\nExpected keywords \"{prompt_keywords}\"\n Matched keywords: {matches}\n Observed match ratio {obs_match_ratio} below threshold {match_ratio}\n\nParsed output for all prompts: {parsed_outputs}"
+        assert obs_match_ratio >= match_ratio, f"Incorrect output!\nGenerated \"{prompt_output}\"\nExpected keywords \"{prompt_keywords}\"\n Matched keywords: {matches}\n Observed match ratio {obs_match_ratio} below threshold {match_ratio}\n\nParsed output for all prompts: {parsed_outputs}"
```

2521-2528: Same here: guard against mismatched counts before zip()

Mirror the length check and local expected var for Phi‑4‑MM.

```diff
-    parsed_outputs = parse_output(output)
-    for prompt_output, prompt_keywords in zip(parsed_outputs,
-                                              expected_keywords[modality]):
+    parsed_outputs = parse_output(output)
+    expected = expected_keywords[modality]
+    assert len(parsed_outputs) == len(expected), (
+        f"Expected {len(expected)} outputs, got {len(parsed_outputs)}. "
+        f"Parsed outputs: {parsed_outputs}"
+    )
+    for prompt_output, prompt_keywords in zip(parsed_outputs, expected):
         matches = [
             keyword in prompt_output.lower() for keyword in prompt_keywords
         ]
         obs_match_ratio = 1. * sum(matches) / len(matches)
-        assert obs_match_ratio >= match_ratio, f"Incorrect output!\nGenerated \"{prompt_output}\"\nExpected keywords \"{prompt_keywords}\"\n Matched keywords: {matches}\n Observed match ratio {obs_match_ratio} below threshold {match_ratio}\n\nParsed output for all prompts: {parsed_outputs}"
+        assert obs_match_ratio >= match_ratio, f"Incorrect output!\nGenerated \"{prompt_output}\"\nExpected keywords \"{prompt_keywords}\"\n Matched keywords: {matches}\n Observed match ratio {obs_match_ratio} below threshold {match_ratio}\n\nParsed output for all prompts: {parsed_outputs}"
```

2628-2635: Also add the count check for the 2-GPU path

Prevent silent passes in the multi‑GPU variant.

```diff
-    parsed_outputs = parse_output(output)
-    for prompt_output, prompt_keywords in zip(
-            parsed_outputs, expected_keywords[model_name]["image"]):
+    parsed_outputs = parse_output(output)
+    expected = expected_keywords[model_name]["image"]
+    assert len(parsed_outputs) == len(expected), (
+        f"Expected {len(expected)} outputs, got {len(parsed_outputs)}. "
+        f"Parsed outputs: {parsed_outputs}"
+    )
+    for prompt_output, prompt_keywords in zip(parsed_outputs, expected):
         matches = [
             keyword in prompt_output.lower() for keyword in prompt_keywords
         ]
         obs_match_ratio = 1. * sum(matches) / len(matches)
-        assert obs_match_ratio >= match_ratio, f"Incorrect output!\nGenerated \"{prompt_output}\"\nExpected keywords \"{prompt_keywords}\"\n Matched keywords: {matches}\n Observed match ratio {obs_match_ratio} below threshold {match_ratio}\n\nParsed output for all prompts: {parsed_outputs}"
+        assert obs_match_ratio >= match_ratio, f"Incorrect output!\nGenerated \"{prompt_output}\"\nExpected keywords \"{prompt_keywords}\"\n Matched keywords: {matches}\n Observed match ratio {obs_match_ratio} below threshold {match_ratio}\n\nParsed output for all prompts: {parsed_outputs}"
```
🧹 Nitpick comments (3)
tests/integration/defs/test_e2e.py (3)

1-14: Update SPDX year range to include 2025

Coding guideline requests current year in headers. Update to 2022-2025.

```diff
-# SPDX-FileCopyrightText: Copyright (c) 2022-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+# SPDX-FileCopyrightText: Copyright (c) 2022-2025 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
```

2731-2743: Add length assertion before zipping in multiturn test

Apply the same guard here to avoid truncated comparisons.

```diff
-    parsed_outputs = parse_output(output)
-    for prompt_output, prompt_keywords in zip(
-            parsed_outputs, expected_keywords[model_name]["image"]):
+    parsed_outputs = parse_output(output)
+    expected = expected_keywords[model_name]["image"]
+    assert len(parsed_outputs) == len(expected), (
+        f"Expected {len(expected)} outputs, got {len(parsed_outputs)}. "
+        f"Parsed outputs: {parsed_outputs}"
+    )
+    for prompt_output, prompt_keywords in zip(parsed_outputs, expected):
         matches = [
             keyword in prompt_output.lower() for keyword in prompt_keywords
         ]
         obs_match_ratio = 1. * sum(matches) / len(matches)
         print("prompt_output:", prompt_output)
         print("prompt_keywords:", prompt_keywords)
         print("matches:", matches)
         print("obs_match_ratio:", obs_match_ratio)
-        assert obs_match_ratio >= match_ratio, f"Incorrect output!\nGenerated \"{prompt_output}\"\nExpected keywords \"{prompt_keywords}\"\n Matched keywords: {matches}\n Observed match ratio {obs_match_ratio} below threshold {match_ratio}"
+        assert obs_match_ratio >= match_ratio, f"Incorrect output!\nGenerated \"{prompt_output}\"\nExpected keywords \"{prompt_keywords}\"\n Matched keywords: {matches}\n Observed match ratio {obs_match_ratio} below threshold {match_ratio}\n\nParsed output for all prompts: {parsed_outputs}"
```

2381-2388: Optional: factor repeated matching logic into a tiny helper

Reduces duplication across four tests; keeps failure messaging consistent.

```diff
+def _assert_keywords_match_all(parsed_outputs, expected, match_ratio):
+    assert len(parsed_outputs) == len(expected), (
+        f"Expected {len(expected)} outputs, got {len(parsed_outputs)}. "
+        f"Parsed outputs: {parsed_outputs}"
+    )
+    for prompt_output, prompt_keywords in zip(parsed_outputs, expected):
+        matches = [kw in prompt_output.lower() for kw in prompt_keywords]
+        obs_match_ratio = 1.0 * sum(matches) / len(matches)
+        assert obs_match_ratio >= match_ratio, (
+            f"Incorrect output!\nGenerated \"{prompt_output}\"\n"
+            f"Expected keywords \"{prompt_keywords}\"\n"
+            f"Matched keywords: {matches}\n"
+            f"Observed match ratio {obs_match_ratio} below threshold {match_ratio}\n\n"
+            f"Parsed output for all prompts: {parsed_outputs}"
+        )
```

Then call it with the appropriate expected list in each test.

Also applies to: 2521-2528, 2628-2635, 2731-2743

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between e3ac09d and 2a6a059.

📒 Files selected for processing (1)
  • tests/integration/defs/test_e2e.py (7 hunks)
🔇 Additional comments (2)
tests/integration/defs/test_e2e.py (2)

2337-2337: Keyword tweaks look fine

Updated expected tokens for mixture_text_image improve stability.


2672-2690: Keyword updates LGTM

Revised tokens for gemma/mistral/phi multiturn look reasonable.

Please confirm these keywords reflect the latest model baselines captured in CI to avoid flakiness.

@tensorrt-cicd (Collaborator) commented:

PR_Github #18684 [ run ] completed with state SUCCESS
/LLM/release-1.0/L0_MergeRequest_PR pipeline #391 completed with status: 'FAILURE'

@Wanli-Jiang (Collaborator, Author) commented:

/bot run

1 similar comment

@Wanli-Jiang (Collaborator, Author) commented:

/bot run

@tensorrt-cicd (Collaborator) commented:

PR_Github #18741 [ run ] triggered by Bot

@tensorrt-cicd (Collaborator) commented:

PR_Github #18741 [ run ] completed with state SUCCESS
/LLM/release-1.0/L0_MergeRequest_PR pipeline #394 completed with status: 'SUCCESS'

@Wanli-Jiang merged commit 14aa34f into NVIDIA:release/1.0 Sep 16, 2025
6 checks passed
dominicshanshan pushed a commit to dominicshanshan/TensorRT-LLM that referenced this pull request Sep 16, 2025
…ywords for multimodal model (NVIDIA#7670) Signed-off-by: Wanli Jiang <35160485+Wanli-Jiang@users.noreply.github.com>
chzblych pushed a commit that referenced this pull request Sep 22, 2025
…ywords for multimodal model (#7670) Signed-off-by: Wanli Jiang <35160485+Wanli-Jiang@users.noreply.github.com> Signed-off-by: Wangshanshan <30051912+dominicshanshan@users.noreply.github.com>
JunyiXu-nv pushed a commit to JunyiXu-nv/TensorRT-LLM that referenced this pull request Sep 22, 2025
…ywords for multimodal model (NVIDIA#7670) Signed-off-by: Wanli Jiang <35160485+Wanli-Jiang@users.noreply.github.com> Signed-off-by: Wangshanshan <30051912+dominicshanshan@users.noreply.github.com>
nv-lschneider pushed a commit to nv-lschneider/TensorRT-LLM that referenced this pull request Sep 22, 2025
…ywords for multimodal model (NVIDIA#7670) Signed-off-by: Wanli Jiang <35160485+Wanli-Jiang@users.noreply.github.com> Signed-off-by: Wangshanshan <30051912+dominicshanshan@users.noreply.github.com>

Reviewers

@amukkara left review comments

@coderabbitai[bot] left review comments

@chzblych approved these changes

Assignees

No one assigned

Labels

None yet

Projects

None yet

Milestone

No milestone

Development

Successfully merging this pull request may close these issues.

4 participants

@Wanli-Jiang, @tensorrt-cicd, @chzblych, @amukkara
