Building a Docker Image With Infery
A Docker container is the easiest way to try infery on any machine.
- Use IPython inside the container to import infery and evaluate models dynamically.
- You can clone this repo by running /workspace/clone_infery_examples.sh inside the container.
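The tips above can be combined into a quick smoke test. This is a sketch: it assumes the default CPU image name (infery-cpu:latest) used later on this page, and only checks that infery imports cleanly.

```shell
# Start an interactive container (assumes the default CPU image built below)
docker run --rm -it infery-cpu:latest bash

# Inside the container: fetch the examples, then verify infery imports
/workspace/clone_infery_examples.sh
python -c "import infery"
```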
CPU Image
The image is based on the official python:3.8-slim-bullseye image, a lightweight Debian 11 base.
Build Options:
- Default:
- Minimal Installation
- size: ~363MB
./dockerfiles/build_docker.sh cpu
- Full Installation:
- Build with all supported backends:
FULL_INSTALLATION=true ./dockerfiles/build_docker.sh cpu
- Custom Installation:
- Build with custom extra dependencies for your application:
PIP_INSTALL_EXTRA="onnxruntime torch" ./dockerfiles/build_docker.sh cpu
Run Image:
docker run --rm -it infery-cpu:latest bash
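To work with models stored on the host, you can mount a local directory into the container. The ./models host path and /models mount point below are examples, not paths the image defines:

```shell
# Mount a host directory of model files into the container (paths are examples)
docker run --rm -it -v "$(pwd)/models:/models" infery-cpu:latest bash
```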
GPU Image
The image is based on the official NVIDIA CUDA runtime image.
The NVIDIA Container Toolkit is required to run it with GPU access.
For GPU environment dependencies and setup instructions from scratch, see the Installation page.
Build Options:
- Default:
- Minimal Installation
- By default, we include only TensorRT in the container.
- The container size (uncompressed) with TensorRT backend is ~7.01 GB.
./dockerfiles/build_docker.sh gpu
- Full Installation:
- The image size (uncompressed) with all backends installed is ~8.12 GB:
FULL_INSTALLATION=true ./dockerfiles/build_docker.sh gpu
- Custom Installation:
- Build with custom extra dependencies for your application:
- You can choose to add the following dependencies to the Dockerfile:
- tensorflow-gpu==2.7.3 (~1.4 GB)
- onnxruntime-gpu==1.11.0 (~0.33 GB)
PIP_INSTALL_EXTRA="onnxruntime-gpu nvidia-tensorrt==8.0.1.6" ./dockerfiles/build_docker.sh gpu
Run Image:
docker run --rm -it --gpus all infery-gpu:latest bash
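Before running models, it can help to confirm the container actually sees the GPUs. With the NVIDIA Container Toolkit configured, nvidia-smi from the host driver is exposed inside the container:

```shell
# Sanity check: list the GPUs visible inside the container
docker run --rm --gpus all infery-gpu:latest nvidia-smi
```

If this prints the host's GPU table, the toolkit and --gpus flag are working; if it fails, revisit the Installation page before debugging infery itself.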