
Loading Models

In a Python environment, load the model using the infery.load function, as follows:

import infery
import numpy as np

model = infery.load(model_path='model.onnx', framework_type='onnx', inference_hardware='gpu')

infery.load accepts the following parameters:

  • model_path – Specify the exact path to where you downloaded/saved your model.
  • framework_type – Specify the framework used to develop this model. The supported options are listed in the table below.
  • inference_hardware – Specify either gpu or cpu, according to the target hardware on which the model will run.
  • static_batch_size – The static batch size the model graph was frozen with if the model is static; None if it is dynamic.
  • inference_dtype – The NumPy data type to use for inference.
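
Putting these parameters together, here is a sketch of a fully specified call. All five parameter names come from the list above; the values are illustrative assumptions, not defaults:

import numpy as np
import infery

# Illustrative values only; adjust the path, framework, and dtype to your model.
model = infery.load(
    model_path='/tmp/model.onnx',    # path to the downloaded/saved model
    framework_type='onnx',           # framework the model was developed with
    inference_hardware='gpu',        # 'gpu' or 'cpu'
    static_batch_size=None,          # None for a dynamic-batch model
    inference_dtype=np.float32,      # NumPy dtype to use for inference
)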

Model loading example:

# Download a pre-trained ResNet50 (ImageNet) model that supports inference with batch sizes up to 64.
from urllib.request import urlretrieve
urlretrieve('https://dips-models-public.s3.amazonaws.com/resnet50_batchsize_64.onnx',
            '/tmp/model.onnx')

# Load it with infery
import infery
model = infery.load(model_path='/tmp/model.onnx', framework_type='onnx')

If the model loaded successfully, you should see output logs like the following:

__init__ -INFO- Infery was successfully imported with 2 CPUS and 1 GPUS.
infery_manager -INFO- Loading model /tmp/model.onnx to the GPU
infery_manager -INFO- Successfully loaded /tmp/model.onnx to the GPU.
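
Once loaded, the model can be used for inference right away. A minimal sketch, assuming the loaded model exposes a predict method that accepts a NumPy batch (the input shape is the standard ResNet50 ImageNet shape, and the batch size of 8 is within the model's 64-sample limit):

import numpy as np

# A random batch of 8 ImageNet-sized images in NCHW layout.
inputs = np.random.rand(8, 3, 224, 224).astype(np.float32)
outputs = model.predict(inputs)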

If infery fails to load the model for any reason, the cause of the error is reported with a verbose description:

---------------------------------------------------------------------------
      1 import infery
----> 2 model = infery.load(model_path='/tmp/model.onnx', framework_type='onnx')
FileNotFoundError: The model file does not exist at /tmp/model.onnx

In case of errors, please see Error Handling.
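
To handle such failures programmatically instead of letting them propagate, you can wrap the call in a standard try/except. A minimal sketch, catching the FileNotFoundError shown in the traceback above (the fallback behavior here is illustrative):

import infery

try:
    model = infery.load(model_path='/tmp/model.onnx', framework_type='onnx')
except FileNotFoundError as e:
    # The model file is missing; report the cause and re-raise (or fall back as needed).
    print(f'Model could not be loaded: {e}')
    raise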