
Installing INFERY

Infery is installed using pip, Python's package manager.
Copy the relevant command below and run it in a terminal on the machine where the model will be deployed.
Please make sure the machine and environment match the prerequisites.
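
For example, you can quickly confirm that the interpreter and pip on the target machine are available and recent enough (the exact version requirements are listed in the prerequisites):

python3 --version
python3 -m pip --version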

CPU installation

The "infery" artifact on pypi will install all CPU supported frameworks.
Sub-Packages and plugins in Infery will be available soon.

python3 -m pip install -U pip
python3 -m pip install infery
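
If you prefer not to touch system-wide packages, the same commands work inside a virtual environment; a minimal sketch (the environment name "infery-env" is an arbitrary example):

python3 -m venv infery-env
source infery-env/bin/activate
python3 -m pip install -U pip
python3 -m pip install infery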

Jetson Installation

"infery-jetson" is the infery artifact for Jetson devices (Nano, Xavier and Orin).

The jetson artifacts are not available on pypi, because of the Jetson ARM propietery wheels.
It will install all supported frameworks using pre-built wheels for Jetson.
Some of the frameworks don't release artifacts for ARM, so, we built those wheels for you and they are included in the dependencies.

Infery-Jetson supports only python 3.6 at the moment (following the official TensorRT and JetPack python interpreter, as of this day).
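
Because only Python 3.6 is supported, it is worth confirming the interpreter version on the device before installing:

python3 -c "import sys; print(sys.version_info[:2])"   # expect (3, 6)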

python3 -m pip install -U pip
python3 -m pip install https://deci-packages-public.s3.amazonaws.com/infery_jetson-3.2.2-cp36-cp36m-linux_aarch64.whl
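
After installing, you can check that the pre-built TensorRT wheel is importable; the one-liner below assumes TensorRT was installed by the command above or ships with your JetPack image:

python3 -c "import tensorrt; print(tensorrt.__version__)"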

GPU Installation

"infery-gpu" Is an artifact that installs all required GPU versions of the original 'infery' frameworks, including PyCuda and TensorR, etc.

We recommend installing pycuda and THEN install infery-gpu (Usually works better).
In the command, we use an extra index URL for nvidia NGC to fetch nvidia-tensorrt.

🚧

Install the Prerequisites First

Please install the prerequisites for infery-gpu before proceeding.

python3 -m pip install -U pip

# Compile pycuda against the local CUDA toolkit. The example uses CUDA 11.2; change the paths to match your version.
export PATH=/usr/local/cuda-11.2/bin${PATH:+:${PATH}}
export LD_LIBRARY_PATH=/usr/local/cuda-11.2/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}
python3 -m pip install -U pycuda

# Install infery-gpu from PyPI and TensorRT from NVIDIA's pip repository
python3 -m pip install -U --extra-index-url https://pypi.ngc.nvidia.com infery-gpu
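
To confirm that pycuda compiled against the local CUDA toolkit, a quick sanity check; this assumes at least one CUDA device is visible to the driver:

python3 -c "import pycuda.driver as cuda; cuda.init(); print(cuda.Device.count(), 'CUDA device(s) detected')"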

👍

Installation On Existing Environments

Sometimes Infery needs to be installed into an existing environment that already has pre-installed versions of numpy, TensorRT, Torch, etc.

To install Infery without replacing the versions of those packages, pass the "--no-deps" flag to the pip install command, telling pip to skip these dependencies. They will not be re-installed, and Infery will use the versions already present.
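
For example, installing the CPU artifact this way would look like the following (the same flag applies to infery-gpu):

python3 -m pip install --no-deps infery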

Please keep in mind that in this configuration, some frameworks might not function as expected.

One example that can be problematic is binary packages such as PyTorch, TensorFlow, and ONNX Runtime. These are usually coupled to an explicit numpy version at build time, and a different numpy version can prevent them from loading.
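
A quick way to surface such a mismatch is to import numpy together with the framework; torch is used here purely as an example, and the import will raise an error if the binary package cannot load against the installed numpy:

python3 -c "import numpy, torch; print(numpy.__version__, torch.__version__)"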

Infery automatically manages these dependencies for you, making sure all the packages work together inside your interpreter.

We recommend NOT using the --no-deps flag whenever possible.

Verify INFERY Installation

To verify your installation, simply import INFERY and expect output similar to the following:

$ python3
>>> import infery
-INFO- Infery was successfully imported with 2 CPUS and 1 GPUS.

INFERY should explicitly declare the visible hardware upon successful import.

