[None][feat] Pass KvCacheRetentionConfig to torch LlmRequest #8634


Merged
achartier merged 1 commit into NVIDIA:main from achartier:llmapi-gds
Oct 24, 2025

Conversation

@achartier (Collaborator) commented Oct 23, 2025 (edited by the coderabbitai bot)

Summary by CodeRabbit

  • Refactor
    • Enhanced internal configuration handling for KV cache retention, improving data propagation through the request processing pipeline.

Description

The KV cache retention config of a request is not passed to the C++ request when using the torch backend. This change fixes that behavior.
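As context, a minimal self-contained sketch of the forwarding pattern this PR adds; LlmRequest and the executor request are stubbed here, while in the real code they come from tensorrt_llm/_torch/pyexecutor/llm_request.py and the executor bindings:

```python
# Minimal sketch of the forwarding fix, with stubs standing in for the real
# classes. Only the kv_cache_retention_config line reflects this PR's change.
from dataclasses import dataclass
from typing import Any, Optional


@dataclass
class LlmRequest:  # stub for the torch backend's LlmRequest
    request_id: int
    kv_cache_retention_config: Optional[Any] = None


def executor_request_to_llm_request(req_id: int, executor_request: Any) -> LlmRequest:
    # Previously the retention config stayed behind on the executor request;
    # the fix forwards it like other standard request attributes.
    return LlmRequest(
        request_id=req_id,
        kv_cache_retention_config=getattr(
            executor_request, "kv_cache_retention_config", None),
    )
```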

Test Coverage

Existing torch tests

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR follows TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provides a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

run [--reuse-test (optional) pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug (experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional) pipeline-id (OPTIONAL): Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL): Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensures that all builds and tests are run regardless of previous successes.

--disable-fail-fast (OPTIONAL): Disable fail-fast on build/test/infra failures.

--skip-test (OPTIONAL): Skip all test stages, but still run build stages, package stages, and sanity check stages. Note: does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL): Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL): Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL): Skip test stages that don't match the specified backends. Only supports [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with the tensorrt or triton backend). Note: does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL): Only run the multi-GPU tests. Note: does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL): Disable the multi-GPU tests. Note: does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL): Force-run the multi-GPU tests in addition to running the L0 pre-merge pipeline.

--post-merge (OPTIONAL): Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL): Run the ordinary L0 pre-merge pipeline and the specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL): Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL): Experimental feature. Enable access to the CI container for debugging purposes. Note: specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.
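
For example, a hypothetical invocation combining several of the flags documented above (flag combinations should be checked against your pipeline setup):

/bot run --disable-fail-fast --stage-list "A10-PyTorch-1"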

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous, since lack of user care and validation can cause the top of tree to break.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous, since lack of user care and validation can cause the top of tree to break.

@achartier requested a review from a team as a code owner October 23, 2025 22:03
@achartier (Collaborator, Author)

/bot run

@coderabbitai (Contributor)

📝 Walkthrough

Walkthrough

A new kv_cache_retention_config field is passed from executor_request to the LlmRequest constructor in executor_request_to_llm_request. This extends existing request data propagation without modifying public signatures or introducing behavioral changes.

Changes

Cohort / File(s): KV cache retention config wiring — tensorrt_llm/_torch/pyexecutor/llm_request.py
Summary: Added kv_cache_retention_config parameter forwarding from executor request to LlmRequest constructor alongside existing multimodal data wiring

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~5–10 minutes

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
Check name: Docstring Coverage
Status: ⚠️ Warning
Explanation: Docstring coverage is 0.00%, which is insufficient. The required threshold is 80.00%.
Resolution: You can run @coderabbitai generate docstrings to improve docstring coverage.
✅ Passed checks (2 passed)
Check name: Title Check
Status: ✅ Passed
Explanation: The pull request title "[None][feat] Pass KvCacheRetentionConfig to torch LlmRequest" is specific and clearly describes the main change documented in the raw summary. The title follows the required template format with the ticket identifier and type prefix, accurately reflects that a kv_cache_retention_config field is being passed through to the LlmRequest, and is concise enough for easy scanning of commit history. This directly aligns with the core objective of the PR.

Check name: Description Check
Status: ✅ Passed
Explanation: The pull request description includes all required sections from the template: the title follows the correct format with the "[None][feat]" prefix, a Description section explains the issue and solution clearly, a Test Coverage section is provided specifying "Existing torch tests," and a PR Checklist is completed with the final confirmation checkbox marked. The description adequately conveys what is being changed and why, satisfying the template requirements for a complete submission.
✨ Finishing touches
  • 📝 Generate docstrings
🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Post copyable unit tests in a comment

📜 Recent review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 2392022 and 3cdeb9b.

📒 Files selected for processing (1)
  • tensorrt_llm/_torch/pyexecutor/llm_request.py (1 hunks)
🧰 Additional context used
📓 Path-based instructions (3)
**/*.{h,hpp,hh,hxx,cpp,cxx,cc,cu,cuh,py}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

Use only spaces, no tabs; indent with 4 spaces.

Files:

  • tensorrt_llm/_torch/pyexecutor/llm_request.py
**/*.py

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

Python code must target Python 3.8+.
Indent Python code with 4 spaces; do not use tabs.
Maintain module namespace when importing; prefer 'from package.subpackage import foo' then 'foo.SomeClass()' instead of importing the class directly (see the sketch below).
Python filenames should be snake_case (e.g., some_file.py).
Python classes use PascalCase names.
Functions and methods use snake_case names.
Local variables use snake_case; prefix 'k' for variables that start with a number (e.g., k_99th_percentile).
Global variables use upper SNAKE_CASE prefixed with 'G' (e.g., G_MY_GLOBAL).
Constants use upper SNAKE_CASE (e.g., MY_CONSTANT).
Avoid shadowing variables from an outer scope.
Initialize all externally visible members of a class in the constructor.
Prefer docstrings for interfaces that may be used outside a file; comments for in-function or file-local interfaces.
Use Google-style docstrings for classes and functions (Sphinx-parsable).
Document attributes and variables inline so they render under the class/function docstring.
Avoid reflection when a simpler, explicit approach suffices (e.g., avoid dict(**locals()) patterns).
In try/except, catch the most specific exceptions possible.
For duck-typing try/except, keep the try body minimal and use else for the main logic.

Files:

  • tensorrt_llm/_torch/pyexecutor/llm_request.py
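
A short illustrative sketch of several of these conventions; the module and all names in it are hypothetical, not taken from the repository:

```python
"""Illustrative sketch of the Python conventions above; not repository code."""
from concurrent import futures  # keep the module namespace: futures.ThreadPoolExecutor

MAX_WORKERS = 4  # constant: upper SNAKE_CASE
G_CALL_COUNT = 0  # externally visible global: upper SNAKE_CASE with 'G' prefix


def tag_items(items):
    """Tags each item with a percentile label using a thread pool.

    Args:
        items: Iterable of values to tag.

    Returns:
        A list of (item, percentile) tuples, one per input item.
    """
    global G_CALL_COUNT
    G_CALL_COUNT += 1
    k_99th_percentile = 99  # 'k' prefix for a name that would start with a digit
    with futures.ThreadPoolExecutor(max_workers=MAX_WORKERS) as pool:
        return list(pool.map(lambda item: (item, k_99th_percentile), items))
```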
**/*.{cpp,cxx,cc,h,hpp,hh,hxx,cu,cuh,py}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

Prepend the NVIDIA Apache-2.0 copyright header with current year to the top of all source files (e.g., .cpp, .h, .cu, .py).

Files:

  • tensorrt_llm/_torch/pyexecutor/llm_request.py
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Pre-commit Check
🔇 Additional comments (1)
tensorrt_llm/_torch/pyexecutor/llm_request.py (1)

767-768: LGTM! Correctly propagates KV cache retention config.

The change properly passes kv_cache_retention_config from the executor request to the LlmRequest constructor, fixing the bug where this configuration was not being propagated to the C++ backend. The implementation follows the established pattern for standard executor request attributes.
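
For reference, a hypothetical client-side sketch of attaching a retention config to a request through the executor bindings; the module path, class nesting, and argument names here are assumptions and should be verified against the installed tensorrt_llm version:

```python
# Hypothetical usage sketch only: the exact binding names and constructor
# arguments are assumptions, not confirmed by this PR.
from tensorrt_llm.bindings import executor as trtllm

retention_config = trtllm.KvCacheRetentionConfig(
    token_range_retention_configs=[
        # Keep the first 64 prompt tokens at elevated retention priority.
        trtllm.KvCacheRetentionConfig.TokenRangeRetentionConfig(
            token_start=0, token_end=64, priority=80),
    ],
    decode_retention_priority=20,
)

request = trtllm.Request(
    input_token_ids=[1, 2, 3],
    max_tokens=16,
    kv_cache_retention_config=retention_config,
)
```

With this PR, a config attached this way is now forwarded to the C++ request when the torch backend is used.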


Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.

@tensorrt-cicd (Collaborator)

PR_Github #22323 [ run ] triggered by Bot. Commit: 3cdeb9b

@tensorrt-cicd (Collaborator)

PR_Github #22323 [ run ] completed with state SUCCESS. Commit: 3cdeb9b
/LLM/main/L0_MergeRequest_PR pipeline #16830 completed with status: 'FAILURE'

Signed-off-by: Aurelien Chartier <2567591+achartier@users.noreply.github.com>
@Funatiq (Collaborator)

/bot run

@tensorrt-cicd (Collaborator)

PR_Github #22420 [ run ] triggered by Bot. Commit: e354174

@tensorrt-cicd (Collaborator)

PR_Github #22420 [ run ] completed with state SUCCESS. Commit: e354174
/LLM/main/L0_MergeRequest_PR pipeline #16897 completed with status: 'SUCCESS'
Pipeline passed with automatically retried tests. Check the rerun report for details.

@achartier merged commit cdf0403 into NVIDIA:main Oct 24, 2025
5 checks passed
yufeiwu-nv pushed a commit to yufeiwu-nv/TensorRT-LLM that referenced this pull request Oct 24, 2025
…8634) Signed-off-by: Aurelien Chartier <2567591+achartier@users.noreply.github.com> Signed-off-by: yufeiwu-nv <230315618+yufeiwu-nv@users.noreply.github.com>
@achartier deleted the llmapi-gds branch October 24, 2025 17:25
dominicshanshan pushed a commit to dominicshanshan/TensorRT-LLM that referenced this pull request Nov 1, 2025
…8634) Signed-off-by: Aurelien Chartier <2567591+achartier@users.noreply.github.com>
dominicshanshan pushed a commit to dominicshanshan/TensorRT-LLM that referenced this pull request Nov 3, 2025
…8634) Signed-off-by: Aurelien Chartier <2567591+achartier@users.noreply.github.com>

Reviewers

@Funatiq approved these changes

Assignees

No one assigned

Labels

None yet

Projects

None yet

Milestone

No milestone

Development

Successfully merging this pull request may close these issues.

3 participants

@achartier, @tensorrt-cicd, @Funatiq
