
[TRTLLM-7292][feat] Support multi-threaded tokenizers for trtllm-serve#7515


Conversation

@nv-yilinf
Collaborator

nv-yilinf commented Sep 4, 2025 (edited)
Summary by CodeRabbit

  • New Features
    • Added an optional preprocessed-inputs flow to reuse tokenization across repeated requests.
    • Expanded support for vision-language/multimodal prompts with unified handling.
  • Performance
    • Input tokenization and preprocessing now run concurrently with generation, reducing end-to-end latency and improving throughput.
  • Refactor
    • Centralized input preprocessing to provide consistent behavior across text and multimodal requests.
  • Developer Notes
    • Generation APIs accept an optional preprocessed payload; existing usage continues to work.

Description

For some models (e.g., gpt_oss) that are very small and well optimized, the tokenizer can become a bottleneck, because a single CPU thread is responsible for tokenizing every request.

This PR leverages the fact that the tokenizer itself is typically written in Rust, so the GIL is released during tokenization; request tokenization can therefore be accelerated with multiple threads. Note that we cannot directly apply asyncio.to_thread() to BaseLLM.generate_async(), because parts of it (GenerationResult) assume an event loop is present, and there is no event loop when running inside to_thread().
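The pattern being exploited can be sketched as follows. This is a minimal illustration, not the PR's code: tokenize() is a stand-in for a Rust-backed tokenizer's encode(), which releases the GIL so multiple threads make real progress.

```python
import asyncio


def tokenize(text: str) -> list:
    # Stand-in for a Rust-backed tokenizer's encode(); the real encoder
    # releases the GIL, so several of these calls can run in parallel threads.
    return [ord(ch) for ch in text]


async def preprocess(text: str) -> list:
    # Offload CPU-bound tokenization to a worker thread so the event loop
    # stays free to drive generation for other requests.
    return await asyncio.to_thread(tokenize, text)


async def main() -> None:
    # Two requests tokenize concurrently instead of serially on one thread.
    results = await asyncio.gather(preprocess("hello"), preprocess("world"))
    print(results)


if __name__ == "__main__":
    asyncio.run(main())
```

With a pure-Python tokenizer the GIL would serialize the threads; the speedup depends on the encoder releasing the GIL, as Rust-backed tokenizers do.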

Below is an nsys profile that shows multi-threaded tokenization in effect:
[Screenshot: nsys profile, 2025-09-02]

Test Coverage

Existing unit tests. If you have a good idea of how to test multi-thread tokenization, please comment.

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR follows the TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provide a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug (experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test [pipeline-id] (OPTIONAL): Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or the last pipeline if no pipeline-id is given. If the Git commit ID has changed, this option is always ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL): Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensures that all builds and tests are run regardless of previous successes.

--disable-fail-fast (OPTIONAL): Disable fail-fast on build/test/infra failures.

--skip-test (OPTIONAL): Skip all test stages, but still run build, package, and sanity-check stages. Note: does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL): Only run the specified test stages. Example: "A10-PyTorch-1, xxx". Note: does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL): Only run the test stages on the specified GPU types. Example: "A30, H100_PCIe". Note: does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL): Skip test stages that don't match the specified backends. Supported backends: [pytorch, cpp, tensorrt, triton]. Example: "pytorch, cpp" (does not run test stages with the tensorrt or triton backend). Note: does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL): Only run the multi-GPU tests. Note: does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL): Disable the multi-GPU tests. Note: does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL): Force-run the multi-GPU tests in addition to the L0 pre-merge pipeline.

--post-merge (OPTIONAL): Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL): Run the ordinary L0 pre-merge pipeline plus the specified test stages. Example: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL): Enable flushing all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL): Experimental feature. Enable access to the CI container for debugging purposes. Note: specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: this is dangerous, since lack of user care and validation can cause the top of tree to break.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: this is dangerous, since lack of user care and validation can cause the top of tree to break.

nv-yilinf changed the title from "[https://jirasw.nvidia.com/browse/TRTLLM-7292][feat] Support multi-threaded tokenizers for trtllm-serve" to "[TRTLLM-7292][feat] Support multi-threaded tokenizers for trtllm-serve" on Sep 4, 2025
nv-yilinf force-pushed the optimize-serve-host-overhead branch 2 times, most recently from 2a7c69f to 7f940c8, on September 4, 2025 05:59
nv-yilinf marked this pull request as ready for review September 4, 2025 05:59
nv-yilinf requested a review from a team as a code owner September 4, 2025 05:59
@nv-yilinf
Collaborator (Author)

/bot run

@tensorrt-cicd
Collaborator

PR_Github #17634 [ run ] triggered by Bot

@coderabbitai
Contributor

coderabbitai bot commented Sep 4, 2025 (edited)

📝 Walkthrough

Introduces a preprocessing stage for prompts. Adds a PreprocessedInputs TypedDict. BaseLLM gains preprocess_inputs and generate_async accepts optional preprocessed_inputs. OpenAIServer adds an async wrapper that preprocesses inputs in a background thread and then calls generate_async with the preprocessed data. Multimodal handling is centralized.

Changes

Cohort / File(s): Summary of changes

Input typing and structures
tensorrt_llm/inputs/data.py
Adds an Optional import and a new TypedDict PreprocessedInputs with fields: prompt_token_ids, prompt, query_token_ids, sampling_params, multimodal_params. Existing public types unchanged.

LLM preprocessing and generation path
tensorrt_llm/llmapi/llm.py
Adds BaseLLM.preprocess_inputs(inputs, sampling_params) -> PreprocessedInputs. Updates imports to include PreprocessedInputs, TextPrompt. generate_async now accepts preprocessed_inputs; when absent, it calls preprocess_inputs. Control flow derives ctx/gen-only, adjusts sampling_params.max_tokens for the non-TRT path, and feeds unified multimodal params.

OpenAI server wrapper and call-site updates
tensorrt_llm/serve/openai_server.py
Adds OpenAIServer.generate_async_wrapper(inputs, sampling_params, **kwargs) using asyncio.to_thread to run llm.preprocess_inputs, then calls llm.generate_async(preprocessed_inputs=...). Replaces direct generate_async calls across: openai_chat, openai_mm_encoder, openai_completion, chat_harmony, generator_wrapper. No other logic changes.

Sequence Diagram(s)

sequenceDiagram
  autonumber
  participant C as Client
  participant S as OpenAIServer
  participant BG as To-Thread (CPU)
  participant L as BaseLLM
  participant EX as Executor
  C->>S: Request (PromptInputs, SamplingParams)
  S->>BG: llm.preprocess_inputs(inputs, sampling_params)
  Note right of BG: Tokenization, multimodal packing<br/>returns PreprocessedInputs
  BG-->>S: PreprocessedInputs
  S->>L: generate_async(preprocessed_inputs=..., other kwargs)
  L->>L: derive ctx/gen-only flags<br/>adjust sampling params if needed
  L->>EX: generate_async(prompt_token_ids, query_token_ids, multimodal_params, ...)
  EX-->>L: GenerationResult
  L-->>S: RequestOutput
  S-->>C: Response
sequenceDiagram
  autonumber
  participant U as User Code
  participant L as BaseLLM
  U->>L: generate_async(inputs=PromptInputs, no preprocessed_inputs)
  L->>L: preprocess_inputs(inputs, sampling_params)
  L->>L: compute is_ctx_only / is_gen_only
  L->>L: set sampling_params.max_tokens (non-TRT path)
  L->>L: route unified multimodal_params
  L-->>U: RequestOutput (via executor)

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related PRs

Suggested reviewers

  • yizhang-nv
  • rakib-hasan
  • juney-nvidia


Contributor

coderabbitai[bot] left a comment

Actionable comments posted: 1

🧹 Nitpick comments (1)
tensorrt_llm/llmapi/llm.py (1)

426-433: Add consistency validation for preprocessed_inputs
No existing validation logic was found in PreprocessedInputs. To prevent subtle mismatches when preprocessed_inputs is passed to generate_async, include a hash (or signature) of the original inputs in PreprocessedInputs and verify it matches before proceeding.
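One way to implement the suggested check could look like the sketch below. The helper names and the "inputs_fingerprint" key are hypothetical; the PR does not include this logic.

```python
import hashlib
import json


def inputs_fingerprint(inputs) -> str:
    # Hypothetical helper: a stable digest of the raw inputs that
    # preprocess_inputs could store alongside the token IDs.
    payload = json.dumps(inputs, sort_keys=True, default=str)
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()


def validate_preprocessed(inputs, preprocessed: dict) -> None:
    # Hypothetical check generate_async could run before trusting
    # caller-supplied preprocessed_inputs.
    expected = preprocessed.get("inputs_fingerprint")
    if expected is not None and inputs_fingerprint(inputs) != expected:
        raise ValueError("preprocessed_inputs does not match the given inputs")
```

Serializing with sort_keys gives a deterministic digest; default=str is a coarse fallback for non-JSON values, so a production version would need a more careful canonicalization.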

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

💡 Knowledge Base configuration:

  • MCP integration is disabled by default for public repositories
  • Jira integration is disabled by default for public repositories
  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between d32e462 and 7f940c8.

📒 Files selected for processing (3)
  • tensorrt_llm/inputs/data.py (2 hunks)
  • tensorrt_llm/llmapi/llm.py (3 hunks)
  • tensorrt_llm/serve/openai_server.py (6 hunks)
🧰 Additional context used
📓 Path-based instructions (4)
**/*

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

Filenames compiled into a target must be case-insensitively unique

Files:

  • tensorrt_llm/inputs/data.py
  • tensorrt_llm/serve/openai_server.py
  • tensorrt_llm/llmapi/llm.py
**/*.{h,hpp,hh,hxx,cc,cpp,cxx,cu,cuh,py}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

Use spaces, not tabs; indent 4 spaces

Files:

  • tensorrt_llm/inputs/data.py
  • tensorrt_llm/serve/openai_server.py
  • tensorrt_llm/llmapi/llm.py
**/*.py

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

**/*.py: Code must target Python 3.8+
Indent with 4 spaces; do not use tabs (Python)
Maintain module namespace on import: prefer from package.subpackage import foo; use foo.Symbol()
Python filenames use snake_case
Python class names use PascalCase
Python functions and methods use snake_case
Python local variables use snake_case; if starting with a number concept, prefix with k (e.g., k_99th_percentile)
Python global variables use G_ prefix with UPPER_SNAKE_CASE
Python constants use UPPER_SNAKE_CASE
Avoid shadowing variables from outer scopes
Initialize all externally visible class members in __init__
For public interfaces, prefer docstrings over comments; comments should be for in-function or file-local interfaces
Use Google-style docstrings for classes and functions (Sphinx-parsable)
Document attributes and variables inline with docstrings immediately after assignment
Avoid reflection when a non-reflective approach suffices
Limit except clauses to specific exceptions where possible
When using try/except for duck-typing, keep try body minimal and move logic to else

Files:

  • tensorrt_llm/inputs/data.py
  • tensorrt_llm/serve/openai_server.py
  • tensorrt_llm/llmapi/llm.py
**/*.{cpp,cc,cxx,h,hpp,hh,hxx,cu,cuh,py}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

Prepend NVIDIA copyright header (current year) to all source files

Files:

  • tensorrt_llm/inputs/data.py
  • tensorrt_llm/serve/openai_server.py
  • tensorrt_llm/llmapi/llm.py
🧬 Code graph analysis (3)
tensorrt_llm/inputs/data.py (4)
tensorrt_llm/sampling_params.py (1)
  • SamplingParams (125-486)
tensorrt_llm/inputs/multimodal.py (1)
  • MultimodalParams (152-410)
tensorrt_llm/executor/result.py (1)
  • prompt_token_ids (525-526)
tensorrt_llm/llmapi/llm.py (1)
  • prompt (79-80)
tensorrt_llm/serve/openai_server.py (3)
tensorrt_llm/inputs/data.py (1)
  • prompt_inputs (67-81)
tensorrt_llm/llmapi/llm.py (4)
  • RequestOutput (46-86)
  • preprocess_inputs (316-392)
  • generate_async (395-471)
  • prompt (79-80)
tensorrt_llm/sampling_params.py (1)
  • SamplingParams (125-486)
tensorrt_llm/llmapi/llm.py (5)
tensorrt_llm/inputs/data.py (2)
  • PreprocessedInputs (56-61)
  • TextPrompt (12-31)
tensorrt_llm/sampling_params.py (1)
  • SamplingParams (125-486)
tensorrt_llm/_utils.py (1)
  • nvtx_range_debug (865-889)
tensorrt_llm/executor/executor.py (2)
  • generate_async (114-155)
  • is_shutdown (277-278)
tensorrt_llm/executor/request.py (2)
  • LoRARequest (24-53)
  • PromptAdapterRequest (57-81)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Pre-commit Check
🔇 Additional comments (10)
tensorrt_llm/inputs/data.py (1)

56-62: LGTM! Well-structured preprocessing data type.

The PreprocessedInputs TypedDict provides a clean, strongly-typed interface for passing preprocessed data between the tokenization and generation stages. This design effectively enables multi-threaded tokenization while maintaining type safety.

tensorrt_llm/serve/openai_server.py (5)

169-183: Good implementation of multi-threaded tokenization!

The async wrapper correctly implements multi-threaded tokenization using asyncio.to_thread, which properly releases the GIL as mentioned in the PR objectives. This enables concurrent tokenization while the generation pipeline still runs in the event loop context.
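The wrapper described here can be approximated as follows. The class and its collaborators are illustrative stand-ins, not the actual OpenAIServer implementation; the real server passes more plumbing through **kwargs.

```python
import asyncio


class ServerSketch:
    # Illustrative stand-in for OpenAIServer; `llm` is any object with
    # preprocess_inputs() and generate_async() methods.
    def __init__(self, llm):
        self.llm = llm

    async def generate_async_wrapper(self, inputs, sampling_params, **kwargs):
        # Tokenize off the event loop; a Rust-backed tokenizer releases the
        # GIL, so concurrent requests can tokenize in parallel threads.
        preprocessed = await asyncio.to_thread(
            self.llm.preprocess_inputs, inputs, sampling_params)
        # Generation then proceeds on the event loop with the precomputed
        # inputs, so GenerationResult sees a running loop as it expects.
        return self.llm.generate_async(
            inputs, sampling_params=sampling_params,
            preprocessed_inputs=preprocessed, **kwargs)
```

Splitting the call this way is what sidesteps the to_thread() limitation from the PR description: only the loop-free preprocessing moves to a thread, while generate_async stays in the event-loop context.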


444-451: Consistent use of the wrapper across chat completions.

The migration to generate_async_wrapper is correctly implemented, maintaining the same parameters and behavior while enabling multi-threaded tokenization.


532-533: Wrapper correctly applied to multimodal encoder path.

The multimodal encoder generation path correctly uses the wrapper, though it only passes the prompt parameter. This is consistent with the original implementation.


651-658: Completion endpoint successfully migrated to use the wrapper.

The per-prompt generation in the completion endpoint correctly uses the wrapper with all necessary parameters.


730-735: Harmony chat completion correctly uses the wrapper.

The harmony adapter path successfully migrates to the wrapper while maintaining the specialized detokenization handling (sampling_params.detokenize = False).

tensorrt_llm/llmapi/llm.py (4)

316-393: Well-designed preprocessing method that centralizes tokenization logic.

The preprocess_inputs method effectively consolidates all input preprocessing logic, including:

  • Tokenization for text prompts
  • Handling of pre-tokenized inputs
  • Multimodal data processing with proper hashing
  • VLM-specific prompt re-encoding when needed

This centralization enables clean separation of concerns and makes the multi-threaded tokenization possible.


394-424: Clean integration of preprocessed inputs in generate_async.

The updated signature and implementation properly support both the traditional path (preprocessing inline) and the new optimized path (using precomputed preprocessed inputs). The docstring is also well updated.


434-439: Good extraction of preprocessed data fields.

The code correctly extracts all necessary fields from the preprocessed_inputs dictionary, maintaining the same data flow as the original implementation.


440-446: Context-only optimization preserved correctly.

The logic for optimizing KV cache allocation in context-only requests is properly maintained after the refactoring.

@tensorrt-cicd
Collaborator

PR_Github #17634 [ run ] completed with state SUCCESS
/LLM/release-1.1.0rc2/L0_MergeRequest_PR pipeline #60 completed with status: 'FAILURE'

@coderabbitai
Contributor

Note

Unit test generation is an Early Access feature. Expect some limitations and changes as we gather feedback and continue to improve it.


Generating unit tests... This may take up to 20 minutes.

@coderabbitai
Contributor

Caution

An unexpected error occurred while opening a pull request: Reference update failed (https://docs.github.com/rest/git/refs#create-a-reference)

@kaiyux
Member

/bot run --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #17701 [ run ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #17701 [ run ] completed with state SUCCESS
/LLM/release-1.1.0rc2/L0_MergeRequest_PR pipeline #70 completed with status: 'FAILURE'

nv-yilinf requested a review from a team as a code owner September 4, 2025 22:34
nv-yilinf force-pushed the optimize-serve-host-overhead branch from c243fe8 to 0d5bfac on September 4, 2025 22:40
Signed-off-by: Yilin Fan <206948969+nv-yilinf@users.noreply.github.com>
Signed-off-by: Yilin Fan <206948969+nv-yilinf@users.noreply.github.com>
…-threading acceleration. Signed-off-by: Yilin Fan <206948969+nv-yilinf@users.noreply.github.com>
This reverts commit fa7f077. Signed-off-by: Yilin Fan <206948969+nv-yilinf@users.noreply.github.com>
Signed-off-by: Yilin Fan <206948969+nv-yilinf@users.noreply.github.com>
Signed-off-by: Yilin Fan <206948969+nv-yilinf@users.noreply.github.com>
nv-yilinf force-pushed the optimize-serve-host-overhead branch from 0d5bfac to c16d826 on September 4, 2025 22:42
@nv-yilinf
Collaborator (Author)

/bot run

@tensorrt-cicd
Collaborator

PR_Github #17715 [ run ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #17715 [ run ] completed with state DISABLED
L0 testing is limited to prioritized users. User nv-yilinf is not in the prioritized list. L0 testing cannot be triggered.

@kaiyux
Member

/bot run --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #17716 [ run ] triggered by Bot

Collaborator

Superjomn left a comment

LGTM for the performance. As we discussed, the generate_async API change isn’t necessary, but feel free to address it in a subsequent PR if timing allows.

@tensorrt-cicd
Collaborator

PR_Github #17716 [ run ] completed with state SUCCESS
/LLM/release-1.1.0rc2/L0_MergeRequest_PR pipeline #72 completed with status: 'FAILURE'

Signed-off-by: Yilin Fan <206948969+nv-yilinf@users.noreply.github.com>
@nv-yilinf
Collaborator (Author)

/bot run --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #17794 [ run ] triggered by Bot

kaiyux enabled auto-merge (squash) September 5, 2025 17:48
@tensorrt-cicd
Collaborator

PR_Github #17794 [ run ] completed with state SUCCESS
/LLM/release-1.1.0rc2/L0_MergeRequest_PR pipeline #80 completed with status: 'SUCCESS'

kaiyux merged commit 6a5806b into NVIDIA:release/1.1.0rc2 on Sep 5, 2025
5 checks passed
kaiyux deleted the optimize-serve-host-overhead branch September 5, 2025 22:31
nv-yilinf added a commit to nv-yilinf/TensorRT-LLM that referenced this pull request Sep 16, 2025 (NVIDIA#7515). Signed-off-by: Yilin Fan <206948969+nv-yilinf@users.noreply.github.com>

Reviewers

@kaiyux left review comments

@coderabbitai[bot] left review comments

@Superjomn approved these changes

@syuoni: awaiting requested review (automatically assigned from NVIDIA/trt-llm-llmapi-devs)

@QiJune: awaiting requested review

@LinPoly: awaiting requested review

@nv-guomingz: awaiting requested review

Assignees

No one assigned

Labels

None yet

Projects

None yet

Milestone

No milestone

Development

Successfully merging this pull request may close these issues.

4 participants

@nv-yilinf, @tensorrt-cicd, @kaiyux, @Superjomn
