pytorch/TensorRT

PyTorch/TorchScript/FX compiler for NVIDIA GPUs using TensorRT

Easily achieve the best inference performance for any PyTorch model on the NVIDIA platform.



Torch-TensorRT brings the power of TensorRT to PyTorch. Accelerate inference latency by up to 5x compared to eager execution in just one line of code.

Installation

Stable versions of Torch-TensorRT are published on PyPI

pip install torch-tensorrt

Nightly versions of Torch-TensorRT are published on the PyTorch package index

pip install --pre torch-tensorrt --index-url https://download.pytorch.org/whl/nightly/cu130

Torch-TensorRT is also distributed in the ready-to-run NVIDIA NGC PyTorch Container, which has all dependencies with the proper versions and example notebooks included.

For more advanced installation methods, please see here.

Quickstart

Option 1: torch.compile

You can use Torch-TensorRT anywhere you use torch.compile:

import torch
import torch_tensorrt

model = MyModel().eval().cuda()  # define your model here
x = torch.randn((1, 3, 224, 224)).cuda()  # define what the inputs to the model will look like

optimized_model = torch.compile(model, backend="tensorrt")
optimized_model(x)  # compiled on first run
optimized_model(x)  # this will be fast!
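The two calls at the end behave differently because the backend compiles lazily: the first invocation triggers an expensive engine build and caches the result, and subsequent invocations reuse it. The caching pattern can be sketched with the standard library alone (the class and names below are illustrative, not part of the Torch-TensorRT API):

```python
class LazyCompiledModel:
    """Illustrative stand-in: compile on first call, reuse the artifact after."""

    def __init__(self, fn):
        self.fn = fn
        self.compiled = None
        self.compile_count = 0

    def _compile(self):
        # Pretend this is an expensive TensorRT engine build.
        self.compile_count += 1
        return self.fn

    def __call__(self, *args):
        if self.compiled is None:        # first run: build and cache the engine
            self.compiled = self._compile()
        return self.compiled(*args)      # later runs: fast path, no rebuild

model = LazyCompiledModel(lambda x: x + 1)
model(1)   # "compiled" on first run
model(2)   # fast: no recompilation
print(model.compile_count)  # 1
```

Note that in real use, torch.compile may also recompile when input shapes or dtypes change; this sketch only shows the cache-on-first-call behavior.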

Option 2: Export

If you want to optimize your model ahead-of-time and/or deploy in a C++ environment, Torch-TensorRT provides an export-style workflow that serializes an optimized module. This module can be deployed in PyTorch or with libtorch (i.e. without a Python dependency).

Step 1: Optimize + serialize

import torch
import torch_tensorrt

model = MyModel().eval().cuda()  # define your model here
inputs = [torch.randn((1, 3, 224, 224)).cuda()]  # define a list of representative inputs here

trt_gm = torch_tensorrt.compile(model, ir="dynamo", inputs=inputs)
torch_tensorrt.save(trt_gm, "trt.ep", inputs=inputs)
# PyTorch only supports the Python runtime for an ExportedProgram. For C++ deployment, use a TorchScript file
torch_tensorrt.save(trt_gm, "trt.ts", output_format="torchscript", inputs=inputs)

Step 2: Deploy

Deployment in PyTorch:

import torch
import torch_tensorrt

inputs = [torch.randn((1, 3, 224, 224)).cuda()]  # your inputs go here

# You can run this in a new Python session!
model = torch.export.load("trt.ep").module()
# model = torch_tensorrt.load("trt.ep").module()  # this also works
model(*inputs)

Deployment in C++:

#include "torch/script.h"
#include "torch_tensorrt/torch_tensorrt.h"

auto trt_mod = torch::jit::load("trt.ts");
auto input_tensor = [...];  // fill this with your inputs
auto results = trt_mod.forward({input_tensor});


Platform Support

  • Linux AMD64 / GPU: Supported
  • Linux SBSA / GPU: Supported
  • Windows / GPU: Supported (Dynamo only)
  • Linux Jetson / GPU: Source compilation supported on JetPack 4.4+
  • Linux Jetson / DLA: Source compilation supported on JetPack 4.4+
  • Linux ppc64le / GPU: Not supported

Note: Refer to the NVIDIA L4T PyTorch NGC container for PyTorch libraries on JetPack.

Dependencies

The following dependencies were used to verify the test cases. Torch-TensorRT can work with other versions, but the tests are not guaranteed to pass.

  • Bazel 8.1.1
  • Libtorch 2.10.0.dev (latest nightly)
  • CUDA 13.0 (CUDA 12.6 on Jetson)
  • TensorRT 10.14.1.48 (TensorRT 10.3 on Jetson)

Deprecation Policy

Deprecation is used to inform developers that some APIs and tools are no longer recommended for use. Beginning with version 2.3, Torch-TensorRT has the following deprecation policy:

  • Deprecation notices are communicated in the Release Notes.
  • Deprecated API functions will have a statement in the source documenting when they were deprecated.
  • Deprecated methods and classes will issue deprecation warnings at runtime, if they are used.
  • Torch-TensorRT provides a 6-month migration period after the deprecation. APIs and tools continue to work during the migration period.
  • After the migration period ends, APIs and tools are removed in a manner consistent with semantic versioning.
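The runtime warnings described above are the standard Python deprecation mechanism. A minimal sketch of how such a warning can be issued and observed, using only the standard library (the decorator and function names here are hypothetical, not part of the Torch-TensorRT API):

```python
import functools
import warnings


def deprecated(since, use_instead):
    """Mark a function as deprecated; emit a DeprecationWarning at call time."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            warnings.warn(
                f"{func.__name__} is deprecated since version {since}; "
                f"use {use_instead} instead.",
                DeprecationWarning,
                stacklevel=2,  # point the warning at the caller, not the wrapper
            )
            return func(*args, **kwargs)
        return wrapper
    return decorator


@deprecated(since="2.3", use_instead="new_api")
def old_api(x):
    return x * 2


# Capture the warning to show it fires; the deprecated call still works
# during the migration period.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    result = old_api(21)

print(result)                        # 42
print(caught[0].category.__name__)   # DeprecationWarning
```

By default, Python hides DeprecationWarning in many contexts, so callers may need `-W default` or a warnings filter to see these notices.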

Contributing

Take a look at CONTRIBUTING.md.

License

The Torch-TensorRT license can be found in the LICENSE file. It is licensed under a BSD-style license.
