Setting up TensorRT Environment on Ubuntu 22.04 / 20.04
Summary
Ubuntu 22.04 or Ubuntu 20.04
NVIDIA driver version: 550 (550.54.14, bundled with the CUDA local installer)
CUDA version: 12.4.0
cuDNN version: 9.3.0 for CUDA 12.x
TensorRT version: 10.3 GA
Installing CUDA
Follow the official instructions and download CUDA from the NVIDIA website.
Select CUDA 12.4.0 (March 2024), then run the commands the website generates for your platform.
Here's an example set of commands to run:
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/cuda-ubuntu2204.pin
sudo mv cuda-ubuntu2204.pin /etc/apt/preferences.d/cuda-repository-pin-600
wget https://developer.download.nvidia.com/compute/cuda/12.4.0/local_installers/cuda-repo-ubuntu2204-12-4-local_12.4.0-550.54.14-1_amd64.deb
sudo dpkg -i cuda-repo-ubuntu2204-12-4-local_12.4.0-550.54.14-1_amd64.deb
sudo cp /var/cuda-repo-ubuntu2204-12-4-local/cuda-*-keyring.gpg /usr/share/keyrings/
sudo apt-get update
sudo apt-get -y install cuda-toolkit-12-4
After installation, add CUDA to ~/.bashrc:
# ~/.bashrc
...
# CUDA
export PATH="/usr/local/cuda-12.4/bin/:$PATH"
export LD_LIBRARY_PATH="/usr/local/cuda-12.4/lib64/:$LD_LIBRARY_PATH"
...
To test the CUDA installation, run the following command. The system should be able to find nvcc:
$ nvcc -V
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2024 NVIDIA Corporation
Built on Tue_Feb_27_16:19:38_PST_2024
Cuda compilation tools, release 12.4, V12.4.99
Build cuda_12.4.r12.4/compiler.33961263_0
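If you prefer to check the toolkit version from a script rather than by eye, the release number can be parsed out of the `nvcc -V` output. A minimal Python sketch, using the sample output above (`parse_nvcc_release` is a hypothetical helper name, not part of any toolkit):

```python
import re

# Sample `nvcc -V` output, copied from the section above.
NVCC_OUTPUT = """\
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2024 NVIDIA Corporation
Built on Tue_Feb_27_16:19:38_PST_2024
Cuda compilation tools, release 12.4, V12.4.99
Build cuda_12.4.r12.4/compiler.33961263_0
"""

def parse_nvcc_release(output: str) -> str:
    """Extract the CUDA release (e.g. '12.4') from `nvcc -V` output."""
    match = re.search(r"release (\d+\.\d+)", output)
    if match is None:
        raise ValueError("could not find a CUDA release in nvcc output")
    return match.group(1)

print(parse_nvcc_release(NVCC_OUTPUT))  # → 12.4
```

In practice you would feed it the result of `subprocess.run(["nvcc", "-V"], capture_output=True, text=True).stdout`.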
Installing cuDNN
Follow the official instructions and download cuDNN from the NVIDIA website.

Select "cuDNN v9.3.0, for CUDA 12.x" with the .deb file option, then run the commands the website generates.
Here's an example set of commands to run:
wget https://developer.download.nvidia.com/compute/cudnn/9.3.0/local_installers/cudnn-local-repo-ubuntu2204-9.3.0_1.0-1_amd64.deb
sudo dpkg -i cudnn-local-repo-ubuntu2204-9.3.0_1.0-1_amd64.deb
sudo cp /var/cudnn-local-repo-ubuntu2204-9.3.0/cudnn-*-keyring.gpg /usr/share/keyrings/
sudo apt-get update
sudo apt-get -y install cudnn
sudo apt install -y cudnn-cuda-12
(The following cuDNN 8 packages are likely unnecessary when installing cuDNN 9 as above; they are kept for reference.)
Install the runtime library.
sudo apt-get install libcudnn8=8.9.7.29-1+cuda12.2
Install the developer library.
sudo apt-get install libcudnn8-dev=8.9.7.29-1+cuda12.2
Install the code samples.
sudo apt-get install libcudnn8-samples=8.9.7.29-1+cuda12.2
Run the cuDNN sample program to verify that the installation succeeded:
https://forums.developer.nvidia.com/t/verify-cudnn-install-failed/167220/4
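To check from Python whether the cuDNN development headers landed, you can probe the usual install locations. A hedged sketch (the candidate paths are assumptions based on where the apt packages typically place the header; adjust for your system):

```python
from pathlib import Path
from typing import Optional

# Typical locations for the cuDNN version header when installed via the
# NVIDIA apt packages (assumed paths; your system may differ).
CANDIDATES = [
    Path("/usr/include/cudnn_version.h"),
    Path("/usr/include/x86_64-linux-gnu/cudnn_version.h"),
]

def find_cudnn_header() -> Optional[Path]:
    """Return the first cuDNN version header found, or None."""
    for path in CANDIDATES:
        if path.exists():
            return path
    return None

header = find_cudnn_header()
print("cuDNN header:", header if header else "not found")
```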
Installing TensorRT
There are two parts to the TensorRT installation:
TensorRT GA
Go to https://developer.nvidia.com/tensorrt
Download both the "TensorRT 10.3 GA for Linux x86_64 and CUDA 12.0 to 12.5 TAR Package" and the DEB package
Warning: be careful to download the version matching your system's CUDA version.
Install the DEB package with Software Install.
Alternatively, run the following commands:
sudo dpkg -i ./nv-tensorrt-local-repo-ubuntu2204-10.3.0-cuda-12.5_1.0-1_amd64.deb
sudo cp /var/nv-tensorrt-local-repo-ubuntu2204-10.3.0-cuda-12.5/nv-tensorrt-local-620E7D29-keyring.gpg /usr/share/keyrings/
sudo apt update
sudo apt install tensorrt
We also need to link the libraries. Unpack the tar package:
tar xzvf ./TensorRT-10.3.0.26.Linux.x86_64-gnu.cuda-12.5.tar.gz
Then, move the unpacked directory to the installation path (~/Documents/ in this example), and add the library paths to ~/.bashrc:
...
# TensorRT
export TRT_LIBPATH="/home/tk/Documents/TensorRT-10.3.0.26/targets/x86_64-linux-gnu/lib/"
export LD_LIBRARY_PATH="/home/tk/Documents/TensorRT-10.3.0.26/lib/:$TRT_LIBPATH:$LD_LIBRARY_PATH"
...
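The exports above simply prepend directories to colon-separated path variables so the TensorRT libraries are found before any system copies. A tiny Python sketch of that same logic (`prepend_paths` is a hypothetical helper, not part of any toolkit):

```python
def prepend_paths(var_value: str, new_dirs: list[str]) -> str:
    """Prepend directories to a colon-separated path variable,
    mirroring what the bashrc exports above do."""
    return ":".join(new_dirs + ([var_value] if var_value else []))

# Example mirroring the LD_LIBRARY_PATH export above.
trt = "/home/tk/Documents/TensorRT-10.3.0.26"
ld = prepend_paths(
    "/usr/local/cuda-12.4/lib64/",
    [f"{trt}/lib/", f"{trt}/targets/x86_64-linux-gnu/lib/"],
)
print(ld)
```

Because new entries go in front, the TensorRT directories take precedence at load time.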
Install the Python bindings with the following commands (pick the wheel matching your Python version; cp38 below corresponds to Python 3.8):
cd ~/Documents/TensorRT-10.3.0.26/python/
pip install ./tensorrt-10.3.0-cp38-none-linux_x86_64.whl
For TensorRT < 10.3, the graphsurgeon dependency may also be needed:
cd ~/Documents/TensorRT-8.6.1.6/graphsurgeon/
pip install ./graphsurgeon-0.4.6-py2.py3-none-any.whl
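After installing the wheel, a quick sanity check from Python is to import the module and print its version. A sketch that reports rather than fails when TensorRT is absent:

```python
# Hedged import check: prints the installed TensorRT version, or a
# note if the wheel is not installed in the current environment.
try:
    import tensorrt
    status = f"tensorrt {tensorrt.__version__} importable"
except ImportError:
    status = "tensorrt not installed"
print(status)
```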
TensorRT OSS
This is normally not needed. To build the OSS components from source:
git clone -b main https://github.com/nvidia/TensorRT TensorRT
cd TensorRT
git submodule update --init --recursive
mkdir -p build && cd build
cmake .. -DTRT_LIB_DIR=$TRT_LIBPATH -DTRT_OUT_DIR=`pwd`/out
make -j$(nproc)
Other Dependencies
pip3 install cuda-python
sudo apt install libssl-dev
sudo apt install cmake g++
$ cmake --version
cmake version 3.16.3
CMake suite maintained and supported by Kitware (kitware.com/cmake).
pip install onnx
pip install onnxruntime        # CPU-only runtime
pip install onnxruntime-gpu    # GPU runtime; pick one of the two, installing both can conflict
pip install onnx-simplifier
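A quick way to confirm the ONNX tooling installed correctly is a hedged import check from Python (a sketch; it reports missing modules instead of raising):

```python
# Probe which of the ONNX packages installed above are importable,
# collecting each module's version (or a "not installed" marker).
statuses = {}
for mod in ("onnx", "onnxruntime"):
    try:
        imported = __import__(mod)
        statuses[mod] = getattr(imported, "__version__", "unknown")
    except ImportError:
        statuses[mod] = "not installed"

for name, ver in statuses.items():
    print(f"{name}: {ver}")
```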
Final Step
Follow this repo to use TRT
FAQ
Hit:7 http://us.archive.ubuntu.com/ubuntu focal-updates InRelease
Err:3 file:/var/nv-tensorrt-local-repo-ubuntu2004-8.6.1-cuda-12.0 InRelease
The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 6694DE8A9A1EDFBA
Hit:8 http://us.archive.ubuntu.com/ubuntu focal-backports InRelease
Reading package lists... Done
W: GPG error: file:/var/nv-tensorrt-local-repo-ubuntu2004-8.6.1-cuda-12.0 InRelease: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 6694DE8A9A1EDFBA
E: The repository 'file:/var/nv-tensorrt-local-repo-ubuntu2004-8.6.1-cuda-12.0 InRelease' is not signed.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.
Solution:
The repository key is not installed. Run the keyring-copy command from "Installing TensorRT" again, adjusted to your repo version:
sudo cp /var/nv-tensorrt-local-repo-ubuntu2204-8.6.1-cuda-12.0/nv-tensorrt-local-42B2FC56-keyring.gpg /usr/share/keyrings/