
Step 2 – Exploring an Example Project

Deci’s Lab

Among other things, the Deci platform serves as a repository into which you can upload your models and optimize them. The Lab lists all the models that you have uploaded to Deci and the versions that you have optimized.

The Deci Lab is displayed by default when you launch Deci; you can also open it by clicking the Lab tab in the navigation bar.

By default, the Lab comes pre-loaded with a well-known deep learning model architecture (ResNet50), intended for image classification and trained on the ImageNet dataset.

The ResNet50 model is presented below as an example baseline (unoptimized) model to demonstrate the process of optimizing a model with the Deci platform.

If you are a new user, the top row of the Lab shows this baseline ResNet50 model, and below it, you will see the optimized versions of that same baseline model.

The names and measurements of the optimized models listed in the Optimized Versions section may differ from those presented in this example.

The Lab shows the following data for each model –

Accuracy

The Accuracy column shows the original accuracy of the baseline model (before Deci optimized it – in the top row) and the accuracy of each of the Optimized Versions created by Deci (in the rows beneath it).

Note: Currently, Deci does not validate the accuracy of the model. It simply displays the accuracy that was declared by the user when the model was uploaded.

In the example below, the unoptimized ResNet50 model has an Accuracy of 76.10% (Top-1), measured while it performed image classification on the ImageNet dataset. The Lab shows that Deci's optimized models have a very similar accuracy on each of two production target hardware environments –

  • 76.30% on T4.
  • 75.98% on Intel Xeon CPU.

Deci's objective is to provide the highest score (described below) without compromising accuracy at all, or compromising it only slightly (within a statistical error of 1%). Accuracy may even improve slightly, as in the example shown above for the T4 environment.
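Top-1 accuracy, the metric quoted above, simply counts how often the model's highest-scoring class matches the true label. Here is a minimal sketch in Python; the function name and toy data are illustrative, not part of the Deci platform:

```python
def top1_accuracy(logits, labels):
    """Fraction of samples whose highest-scoring class matches the true label."""
    correct = 0
    for scores, label in zip(logits, labels):
        # Index of the highest score is the predicted class.
        predicted = max(range(len(scores)), key=scores.__getitem__)
        if predicted == label:
            correct += 1
    return correct / len(labels)

# Toy example: 4 samples, 3 classes; the last prediction is wrong.
logits = [[0.1, 0.7, 0.2],
          [0.9, 0.05, 0.05],
          [0.2, 0.3, 0.5],
          [0.6, 0.3, 0.1]]
labels = [1, 0, 2, 1]
print(top1_accuracy(logits, labels))  # → 0.75
```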

Deci Score

Deci assigns a standardized/normalized score to describe the efficiency of a model’s runtime performance on a specific production environment, without compromising accuracy.

A Deci Score is a normalized grade between 1 and 10 that evaluates a model's runtime efficiency on a specific hardware and batch-size setup. It combines performance metrics and measures for Accuracy, Throughput, Latency, Memory Footprint and Model Size. See Score for more details.

In the example above, we see that the unoptimized ResNet50 model has a Score of 7.0 for the primary environment and batch-size it was configured for. It's clear that after Deci has optimized the models, each has a significantly superior score –

  • 9.6 in a T4 production environment
  • 9.4 in an Intel Xeon production environment
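Deci does not publish the exact scoring formula, but a normalized 1–10 grade that combines several metrics can be sketched as a weighted average of each metric's ratio to a reference value. Everything below – the weights, the cap, and the scaling – is a hypothetical illustration, not Deci's actual computation:

```python
def normalized_score(metrics, baselines, weights):
    """Hypothetical 1-10 efficiency score: a weighted average of each metric's
    ratio to a reference value, capped and mapped onto the 1-10 scale.
    NOT Deci's actual formula -- an illustrative sketch only."""
    total, weight_sum = 0.0, 0.0
    for name, weight in weights.items():
        ratio = metrics[name] / baselines[name]
        # For latency, memory and size, lower is better, so invert the ratio.
        if name in ("latency", "memory_footprint", "model_size"):
            ratio = 1.0 / ratio
        total += weight * min(ratio, 2.0)  # cap each metric's contribution
        weight_sum += weight
    # Average ratio lies in (0, 2]; map it linearly onto 1..10.
    return round(1 + 4.5 * (total / weight_sum), 1)

# Illustrative weights and metric values (all hypothetical).
weights = {"accuracy": 0.3, "throughput": 0.25, "latency": 0.25,
           "memory_footprint": 0.1, "model_size": 0.1}
reference = {"accuracy": 0.761, "throughput": 1000.0, "latency": 5.0,
             "memory_footprint": 800.0, "model_size": 100.0}
print(normalized_score(reference, reference, weights))  # → 5.5
```

Under this sketch, a model whose metrics exactly match the reference lands in the middle of the scale, while a model that is uniformly twice as efficient approaches 10.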

Target HW

The Target HW column of the ResNet50 model specifies the primary environment on which this baseline model was run before optimization.

For each optimized version of this model, this column shows the target hardware environment for which it was compiled.

Note: Deci automatically benchmarks each model version for efficiency on all supported CPU and GPU target hardware environments. Deci also automatically benchmarks all available batch sizes on each target hardware. For CPU-optimized models, it benchmarks the batch size selected by the user; for GPU-optimized models, it benchmarks up to the batch size selected by the user on the specified hardware.

In the example above –

  • The baseline ResNet50 model was run on a T4 environment.
  • One optimized version created by Deci, named T4-Optimized v1.1, was created to run on the same T4 environment.
  • The second Deci-optimized version, named T4-Optimized v1.2, was created to run on an Intel Xeon environment.


Quantization

Quantization is a method for reducing a neural network to a manageable size while still achieving high accuracy. It maps a large set of input values to a smaller set of output values, dramatically reducing the memory requirements and computational cost of running the network.

  • For uploaded models (before Deci optimization), this column shows 32 bit, meaning that Deci assumes a quantization level of 32 bits for the baseline model.
  • For optimized model versions, this column shows the quantization level of the optimization.
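As a rough illustration of the 32-bit to 8-bit mapping described above, uniform affine quantization rescales each floating-point value into the integer range 0–255 and back. This is a generic sketch, not the specific quantization scheme Deci applies:

```python
def quantize_int8(values):
    """Uniform affine quantization: map floats onto 8-bit integers (0..255)
    and dequantize back to approximate the originals. A generic sketch of
    the 32-bit -> 8-bit idea, not any particular framework's scheme."""
    lo, hi = min(values), max(values)
    scale = (hi - lo) / 255 or 1.0                   # step size per integer level
    q = [round((v - lo) / scale) for v in values]    # integers in [0, 255]
    dequantized = [qi * scale + lo for qi in q]      # approximate originals
    return q, dequantized

# Example: a handful of float32-style weights.
weights = [-1.2, 0.0, 0.37, 2.5]
q, approx = quantize_int8(weights)
print(q)  # the min maps to 0, the max to 255
```

Each value is now stored in 8 bits instead of 32, at the cost of a reconstruction error no larger than half a quantization step.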


Task

Each model can handle only a single task, which is specified in this column, such as Classification, Semantic Segmentation, Object Detection, Depth Estimation or Pose Estimation.


Dataset

Specifies the name of the dataset on which the model was trained. The optimized versions of a model are usually trained on the same dataset as the baseline model that you uploaded, unless the dataset was changed or updated, in which case the optimized model must be fine-tuned within the Deci platform.


Author

Specifies the Deci user who uploaded or optimized a specific model.


Date

For baseline models, specifies the date when the model was first uploaded to the Deci platform.
For optimized models, specifies the date on which the optimization was completed.

Reviewing a Model’s Performance Metrics in the Lab

Quick review

In the Lab, you can hover over the radar graph icon (shown below) to display the details of a model’s runtime performance as reflected by the metrics that determine its Deci Score.

Hovering over this icon displays a Deci radar graph that shows the absolute value of each metric in the model's score – Accuracy, Throughput, Latency, Memory Footprint and Model Size – measured on the configured primary/target hardware and batch size:

Advanced Benchmark - Insights tab

Benchmarking is a critical part of optimizing the inference performance of your model. It enables you to compare any model’s efficiency according to your goals and objectives. Deci measures and displays performance changes across different target hardware environments and batch sizes by configuring the inference set-up as described below.

The Insights tab uses Deci's Runtime Inference Container (RTiC) to send requests to your model on various target hardware environments while measuring CPU/GPU memory and computational utilization.
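The kind of measurement RTiC performs can be sketched as a simple timing loop: warm up the model, time repeated inference requests, and derive latency and throughput. The `predict` callable and batch below are placeholders for any inference endpoint; this is not the RTiC API:

```python
import statistics
import time

def benchmark(predict, batch, n_warmup=10, n_runs=100):
    """Measure median latency (ms) and throughput (items/sec) of a callable
    model. `predict` and `batch` are placeholders for any inference function
    and input batch; this mirrors, but is not, RTiC's measurement."""
    for _ in range(n_warmup):           # warm-up: stabilize caches and clocks
        predict(batch)
    latencies = []
    for _ in range(n_runs):
        start = time.perf_counter()
        predict(batch)
        latencies.append((time.perf_counter() - start) * 1000)  # ms
    latency_ms = statistics.median(latencies)
    throughput = len(batch) * 1000 / latency_ms     # items processed per second
    return latency_ms, throughput
```

Running such a loop across different batch sizes and hardware targets yields the per-configuration numbers shown in the Insights tab.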

Click a model to open the Insights tab, which allows you to investigate and gain insights into the model's runtime performance.

The Insights page is displayed, as shown below –

See Viewing Benchmark Insights for a description of how to see Deci's insights and to benchmark your model in Deci.
