
Registering a Model

The following describes how to register a model with the RTiC server –

  • From the Deci Platform.
  • From another location, such as an S3 bucket or a public URL.

Registering a Model from the Deci Platform Model Repository

Make sure you are logged in to the Deci Platform. If your RTiC instance is not using an API key, it will not be able to fetch your models and use them. To connect to the Deci Platform in order to register additional models, execute the following command –

client.platform.login('YOUR_EMAIL_ADDRESS', 'YOUR_PASSWORD')
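
The snippets on this page assume a client object that exposes both the client.platform and client.rtic namespaces. A hypothetical initialization sketch – the import path and constructor arguments below are assumptions, so adjust them to the client package installed in your environment:

# NOTE: the import path and constructor arguments are assumptions –
# check the client package installed in your environment.
from deci_client.rtic import Client

# Connect to a locally running RTiC server.
client = Client(api_host='localhost', api_port=8000)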

After you have signed up and authenticated your RTiC instance, you can register any model from the Deci Platform repository using a single command.

RTiC provides the API method register_model_from_repository, which receives model_id and model_name arguments. Real values for these arguments can be fetched from the Deci Platform user interface by using the Deploy button in the Lab (which displays both values) or by calling client.platform.get_all_models().

# You can get a real Model ID from your Deci Platform, using the **Deploy** button.

# Used internally by RTiC. It can be anything.
INFERENCE_MODEL_NAME = 'resnet50'

# The ID of the model to be downloaded and registered.
YOUR_MODEL_ID = 'd5621b65-0513-4b0f-ac45-ea8c5t7d10a5'

# Deci registers the model under the name 'resnet50' for inference,
# with optimal concurrency and replications.
model = client.rtic.register_model_from_repository(model_id=YOUR_MODEL_ID, model_name=INFERENCE_MODEL_NAME)
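
If you prefer to look up model IDs programmatically rather than through the Deploy button, you can list your repository with the client.platform.get_all_models() call mentioned above. A minimal sketch – the structure of each returned entry is not documented here, so print one to inspect it:

# List the models available in your Deci Platform repository.
models = client.platform.get_all_models()

# Print each entry to find the model's ID and name; the exact
# structure of the entries depends on your client version.
for m in models:
    print(m)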

Registering a New Model from Another Location – Not the Deci Platform

The following describes how to register a model that is not located in the Deci Platform repository – for example, a model that is located in an S3 bucket, at a public URL, or on your local machine.

RTiC enables you to register models from various sources using the register_model operation. If the model file is not available locally, RTiC downloads it from the online source and then loads it into memory, making it available for inference.

In the following example, we register a TensorFlow (tf) model for CPU inference. The model is assigned a unique name, and we tell RTiC to fetch it from a public Amazon S3 bucket using the model_weights_source argument.

YOUR_MODEL_NAME = 'Custom_ResNet_50'

# Register the model, and wait until it's available for inference.
model_metadata = client.rtic.register_model(
    model_name=YOUR_MODEL_NAME,
    framework_type='tf',
    inference_hardware='cpu',
    model_weights_source='s3',
    weights_path='https://dips-models-public.s3.amazonaws.com/yonatan_imagenet_resnet18_DIPS.pb',
    tensorflow_input_layer_name='input:0',
    tensorflow_output_layer_name='output:0'
)

Note – In this example, we use random input.
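
For completeness, here is a hypothetical sketch of running inference with random input against the model registered above. The method name client.rtic.predict and its signature are assumptions – consult your RTiC API reference for the exact inference call:

import numpy as np

# Random input batch (assumption: the model expects a single
# 224x224 RGB image in NHWC layout).
random_batch = np.random.rand(1, 224, 224, 3).astype(np.float32)

# Hypothetical inference call – the actual method name and argument
# names may differ in your RTiC client version.
response = client.rtic.predict(model_name=YOUR_MODEL_NAME, data=random_batch)
print(response)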

📘

TensorFlow Models

TensorFlow models require two additional arguments for registration –

  • tensorflow_input_layer_name – input layer name
  • tensorflow_output_layer_name – output layer name
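
If you do not know your model's layer names, you can inspect the frozen graph with TensorFlow itself. A minimal sketch, assuming a TensorFlow 2.x installation and a local copy of the .pb file (the file name is a placeholder):

import tensorflow as tf

# Load the frozen graph definition from the .pb file.
with tf.io.gfile.GFile('your_model.pb', 'rb') as f:
    graph_def = tf.compat.v1.GraphDef()
    graph_def.ParseFromString(f.read())

# Print the node names; the first and last nodes are usually the
# input and output layers (append ':0' to form the tensor name).
for node in graph_def.node:
    print(node.name)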

👍

TIP – Loading Local Models

You can set model_weights_source to 'local', and pass a local weights file path to the RTiC server.

You can run the RTiC container with a volume (docker run -v $(pwd)/models:/models) to mount a local models folder into the container.
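
Putting the two tips together, here is a sketch of registering a locally mounted weights file using the register_model parameters shown above (the model name and the file name under /models are placeholders):

# Register a TensorFlow model whose weights file was mounted into the
# container at /models (see the docker run -v example above).
model_metadata = client.rtic.register_model(
    model_name='local_resnet',             # placeholder name
    framework_type='tf',
    inference_hardware='cpu',
    model_weights_source='local',
    weights_path='/models/your_model.pb',  # placeholder file name
    tensorflow_input_layer_name='input:0',
    tensorflow_output_layer_name='output:0'
)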

