Installation

System Requirements

To install Infery, ensure that your system meets the following requirements:

  • Linux 64-bit operating system.
  • Python 3.6.9 or later.
  • Some components of Infery may require the installation of the CUDA driver and a compatible graphics card. For further details, please refer to the CUDA documentation.
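
If you plan to use the GPU components, you can verify that a CUDA-capable driver is present before installing (nvidia-smi ships with the NVIDIA driver):

nvidia-smi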

Python Requirements

Infery does not have any specific Python package requirements for its core functionality.

However, individual components, such as inference and compilation, may have their own specific dependencies. These dependencies can be installed on-demand.

Basic Installation

To install Infery, use an isolated environment such as conda or virtualenv and execute one of the commands below. Note: while it may be possible to install Infery system-wide (outside a virtual environment), this approach is not recommended.
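
For example, a minimal sketch that creates and activates a fresh environment using Python's built-in venv module (the environment name is arbitrary; conda works similarly):

python -m venv infery-env
source infery-env/bin/activate
pip install --upgrade pip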

Install from Artifactory

pip install --index-url=https://[USERNAME]:[TOKEN]@deci.jfrog.io/artifactory/api/pypi/deciExternal/simple infery==x.x.x

Install from wheel

pip install infery-x.x.x-py3-none-any.whl
x.x.x is the Infery version, for example: 4.0.4

Install from source

pip install git+https://github.com/Deci-AI/infery.git

Install using Docker

Login

docker login --username [USERNAME] --password [PASSWORD] deci.jfrog.io

Run

docker run --runtime=nvidia deci.jfrog.io/deci-external-docker-local/infery:x.x.x-<< flavor >>
x.x.x is the Infery version, for example: 4.0.4

<< flavor >> should be replaced with one of the flavors listed below (a worked example follows the list).

Flavors
CPU
  • cpu-torch-onnx-ov2022
  • cpu-torch-onnx-ov2023
GPU
  • gpu-cu117-onnx-tensorrt-8-2-1-8
  • gpu-cu117-onnx-tensorrt-8-4-1-5
  • gpu-cu117-onnx-tensorrt-8-5-2-2
  • gpu-cu117-onnx-tensorrt-8-5-3-1
JETSON (L4T)
  • jetson-r32.6.1
  • jetson-r35.1.0
  • jetson-r35.2.1
  • jetson-r35.3.1
  • jetson-r35.4.1
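
For example, assuming version 4.0.4 and the TensorRT 8.5.3.1 GPU flavor, the full command would be:

docker run --runtime=nvidia deci.jfrog.io/deci-external-docker-local/infery:4.0.4-gpu-cu117-onnx-tensorrt-8-5-3-1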

Dependencies for Code Components

Infery consists of multiple components, each requiring its own set of additional Python packages.

Infery includes a built-in command-line utility for installing additional dependencies, eliminating the need to install individual packages manually. This utility is available via Infery's CLI or by invoking Infery with python -m:

python -m infery install -h

Inference and Analysis

To perform inference with a specific framework (such as ONNX), or to analyze a model's metadata, you can install the required dependencies as follows:

# Installing essential dependencies for predicting on ONNX models
python -m infery install --inference onnx
# Installing essential dependencies for predicting on OPENVINO models
python -m infery install --inference openvino
# If you intend to use multiple frameworks, you can specify them in the same command:
python -m infery install --inference onnx tensorrt

This also installs the dependencies required to load and analyze models. Once the installation process is complete, you can refer to the inference guide or to the analysis guide for detailed instructions.
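
As a quick sanity check after installing the ONNX inference dependencies, you can load a model and run a prediction. The sketch below uses the Model class that also appears in the "Dealing with missing dependencies" section; the predict method name and the NCHW input shape are assumptions here, so consult the inference guide for the exact API:

import numpy as np
from infery import Model

# Load an ONNX model from disk (requires the onnx inference dependencies installed above)
model = Model("resnet50.onnx")

# Run a prediction on a random batch; predict() and the (1, 3, 224, 224) NCHW
# input shape are assumptions -- see the inference guide for the exact API
dummy_input = np.random.rand(1, 3, 224, 224).astype(np.float32)
outputs = model.predict(dummy_input)
print(outputs)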

Compilation

To facilitate the conversion of models between different frameworks, it is essential to have the relevant Python dependencies installed. The following examples show how to install the required dependencies:

# Installing essential dependencies for compiling ONNX models to TensorRT format.
python -m infery install --compile onnx2tensorrt
# Installing essential dependencies for compiling ONNX models to OpenVINO format.
python -m infery install --compile onnx2openvino
# Installing essential dependencies for compiling PyTorch models to ONNX format.
python -m infery install --compile pytorch2onnx
# You can also install multiple compilation flows at once:
python -m infery install --compile onnx2openvino onnx2tensorrt

In general, you can install compilation dependencies using:

python -m infery install --compile [FRAMEWORK_NAME]2[FRAMEWORK_NAME]

For more information, run:

python -m infery install -h

This also installs the dependencies required to perform inference with either framework (if supported) and to analyze models of either framework, as described in the previous sections. Once the installation process is complete, you can refer to the compilation guide for detailed instructions. Make sure to specify multiple compilation flows in the same command as needed.

Remote Deployment

If you wish to run Infery in a client-server setup, you may install the required dependencies like so:

python -m infery install --client http grpc

Or:

python -m infery install --server

Notice that the --server option only installs the requirements for running an Infery server, not for any additional functionality such as analysis, inference, or compilation; the resulting server listens on both HTTP and gRPC. For clients, you may install requirements for the http or grpc client independently of one another.
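
For example, a machine that should both run an Infery server and query other servers over HTTP can combine the flags in a single command (combining flags this way is also shown in the next section):

python -m infery install --server --client http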

Installing dependencies for multiple features

To ensure all the features you wish to use have their dependencies met, it is recommended to include all of them in the same command. For example:

python -m infery install --inference pytorch tensorflow2 tfjs tflite --compile onnx2tensorrt onnx2openvino --server

Inference Hardware

The installation utility automatically identifies your environment's hardware specifics (for example, the CUDA installation and version) and installs the appropriate packages for the detected setup. If you wish to install packages for a different setup (for example, CPU-only packages even though you have CUDA installed), you may override the automatic environment detection:

python -m infery install --env cpu --inference openvino

Notice that while possible, doing so is discouraged and may lead to suboptimal performance or a broken installation. If you wish, you may also override Infery's automatic CUDA version detection by passing the --cuda-version argument:

python -m infery install --env gpu --cuda-version 11.4.3 --inference tensorrt

Automatic Environment Fixes

The installation utility will attempt to detect already installed packages that may conflict with the newly installed ones or with the identified environment (for example, packages that only support certain versions of CUDA). If such packages are found, they will be uninstalled, and you will be prompted for confirmation. You may pass the -y flag to confirm automatically:

python -m infery install -y --inference pytorch

Dry Run

If you wish to manually inspect the packages that will be installed by Infery, you may choose to do a dry run, which will print pip commands to install the needed dependencies instead of running them:

python -m infery install --dry-run --inference keras

requirements.txt Generation

If you wish to incorporate Infery as a dependency in your project, you may want to include the requirements for the features you intend to use in your project's requirements.txt file. For this purpose, run the install CLI with the --output-requirements parameter to get a requirements.txt file with the needed dependencies (excluding Infery itself):

python -m infery install --dry-run --inference onnx --output-requirements ./path/to/write/requirements.txt
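
The generated file can then be consumed like any other requirements file. Since it excludes Infery itself, install Infery separately, for example from the Artifactory index shown above:

pip install -r ./path/to/write/requirements.txt
pip install --index-url=https://[USERNAME]:[TOKEN]@deci.jfrog.io/artifactory/api/pypi/deciExternal/simple infery==x.x.x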

Dealing with missing dependencies

If Infery encounters a missing dependency during its run, it will raise a MissingRequirementsError exception. The error message attached to this exception contains a pip command that you may execute to install the needed dependencies. If running in a REPL (IPython, for example), you may catch the exception and use it to install the dependencies from within Python:

from infery import Model
from infery.exceptions import MissingRequirementsError

try:
    # Loading the model fails if its framework dependencies are missing
    model = Model("resnet50.onnx")
except MissingRequirementsError as e:
    # Install the missing dependencies from within the running interpreter
    e.install_missing()