TRTorch is a compiler for PyTorch/TorchScript, targeting NVIDIA GPUs via NVIDIA’s TensorRT Deep Learning Optimizer and Runtime. Unlike PyTorch’s Just-In-Time (JIT) compiler, TRTorch is an Ahead-of-Time (AOT) compiler, meaning that before you deploy your TorchScript code, you go through an explicit compile step to convert a standard TorchScript program into a module targeting a TensorRT engine. TRTorch operates as a PyTorch extension and compiles modules that integrate into the JIT runtime seamlessly. After compilation, using the optimized graph should feel no different than running a TorchScript module. You also have access to TensorRT’s suite of configurations at compile time, so you are able to specify operating precision (FP32/FP16/INT8) and other settings for your module.
```c++
#include "torch/script.h"
#include "trtorch/trtorch.h"
...
auto compile_settings = trtorch::CompileSpec(dims);
// FP16 execution
compile_settings.op_precision = torch::kHalf;
// Compile module
auto trt_mod = trtorch::CompileGraph(ts_mod, compile_settings);
// Run like normal
auto results = trt_mod.forward({in_tensor});
// Save module for later
trt_mod.save("trt_torchscript_module.ts");
...
```
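A module saved this way is still a standard TorchScript module, so it can be reloaded later with `torch::jit::load` and run without any TRTorch-specific code. A minimal sketch (the filename comes from the example above; the input shape is illustrative):

```c++
#include "torch/script.h"

// Reload the compiled module like any other TorchScript module
auto trt_mod = torch::jit::load("trt_torchscript_module.ts");
// Inputs must be CUDA tensors, since the embedded engine runs on the GPU
auto in_tensor = torch::randn({1, 3, 224, 224}, torch::kCUDA);
auto out = trt_mod.forward({in_tensor});
```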
Note: Please refer to the installation instructions for prerequisites.
A tarball with the include files and library can then be found in bazel-bin
Running TRTorch on a JIT Graph
Make sure to add LibTorch to your LD_LIBRARY_PATH:

```sh
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$(pwd)/bazel-TRTorch/external/libtorch/lib
```

Then run:

```sh
bazel run //cpp/trtorchexec -- $(realpath <PATH TO GRAPH>) <input-size>
```
Compiling the Python Package
To compile the Python package for your local machine, just run `python3 setup.py install` in the `//py` directory.

To build wheel files for different Python versions, first build the Dockerfile in `//py`, then run the following command:

```sh
docker run -it -v$(pwd)/..:/workspace/TRTorch build_trtorch_wheel /bin/bash /workspace/TRTorch/py/build_whl.sh
```

Python compilation expects the tarball-based compilation strategy described above.
How do I add support for a new op…
In TRTorch?
Thanks for wanting to contribute! There are two main ways to add support for a new op. Either you can write a converter for the op from scratch and register it in the NodeConverterRegistry, or, if you can map the op to a set of ops that already have converters, you can write a graph rewrite pass which replaces your new op with an equivalent subgraph of supported ops. Graph rewriting is preferred, since it avoids maintaining a large library of op converters. Also check the op support trackers in the issues for information on the support status of various operators.
In my application?
The NodeConverterRegistry is not exposed in the top-level API, but it is available in the internal headers shipped with the tarball. You can register a converter for your op using the NodeConverterRegistry inside your application.
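Since the registry lives in internal headers, the snippet below is only an illustrative sketch: the header path, `RegisterNodeConverters`, the converter signature, and the custom op schema `my::relu6` are all assumptions based on TRTorch's internals, not a verified public API.

```c++
// Illustrative sketch only -- names and signatures here are assumptions
// drawn from TRTorch's internal headers, not a stable public API.
#include "core/conversion/converters/converters.h"

namespace {
// Register a converter for a hypothetical custom op "my::relu6" by
// mapping it onto TensorRT's clipped-activation layer.
auto my_relu6_reg = trtorch::core::conversion::converters::RegisterNodeConverters().pattern(
    {"my::relu6(Tensor self) -> Tensor",
     [](trtorch::core::conversion::ConversionCtx* ctx,
        const torch::jit::Node* n,
        trtorch::core::conversion::converters::args& args) -> bool {
          auto in = args[0].ITensor();
          auto layer = ctx->net->addActivation(*in, nvinfer1::ActivationType::kCLIP);
          layer->setAlpha(0.0);  // lower clip bound
          layer->setBeta(6.0);   // upper clip bound
          ctx->AssociateValueAndTensor(n->outputs()[0], layer->getOutput(0));
          return true;
     }});
} // namespace
```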
TRTorch
More Information / System Architecture:
Example Usage
C++
Python
Platform Support
Dependencies
The following dependencies were used to verify the test cases. TRTorch can work with other versions, but the tests are not guaranteed to pass.
Prebuilt Binaries and Wheel files
Releases: https://github.com/NVIDIA/TRTorch/releases
Compiling TRTorch
Installing Dependencies
0. Install Bazel
If you don’t have bazel installed, the easiest way is to install bazelisk using the method of your choosing: https://github.com/bazelbuild/bazelisk
Otherwise you can use the following instructions to install binaries https://docs.bazel.build/versions/master/install.html
Finally, if you need to compile from source (e.g. on aarch64, until bazel distributes binaries for that architecture), you can use these instructions.

You need to start by having CUDA installed on the system; LibTorch will automatically be pulled for you by bazel. Then you have two options.
1. Building using cuDNN & TensorRT tarball distributions
Download the cuDNN and TensorRT tarball distributions and place them in a directory (the directories third_party/dist_dir/[x86_64-linux-gnu | aarch64-linux-gnu] exist for this purpose).

2. Building using locally installed cuDNN & TensorRT
Install TensorRT, CUDA and cuDNN on the system before starting to compile.
In WORKSPACE, comment out:

```py
# Downloaded distributions to use with --distdir
http_archive(
    name = "cudnn",
    urls = ["",],
    build_file = "@//third_party/cudnn/archive:BUILD",
    sha256 = "",
    strip_prefix = "cuda"
)

http_archive(
    name = "tensorrt",
    urls = ["",],
)
```
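For the locally installed path, the archive rules above are typically replaced with Bazel `new_local_repository` rules instead. A sketch, assuming cuDNN and TensorRT are installed under `/usr` and that `local` BUILD files exist under `third_party` (both paths are assumptions):

```py
# Locally installed dependencies (paths are illustrative)
new_local_repository(
    name = "cudnn",
    path = "/usr/",
    build_file = "@//third_party/cudnn/local:BUILD"
)

new_local_repository(
    name = "tensorrt",
    path = "/usr/",
    build_file = "@//third_party/tensorrt/local:BUILD"
)
```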
Debug build
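To build with debug symbols, Bazel's standard `--compilation_mode` flag can be used. The target name below is an assumption based on the repo layout:

```sh
bazel build //:libtrtorch --compilation_mode=dbg
```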
Native compilation on NVIDIA Jetson AGX
Structure of the repo
Contributing
Take a look at the CONTRIBUTING.md
License
The TRTorch license can be found in the LICENSE file. It is licensed under a BSD-style license.