
Deci Platform Release Notes

Overview

The Deci platform enables you to manage, optimize, deploy, and serve models in your production environment with ease. You can continue using popular DL frameworks such as TensorFlow, PyTorch, Keras, and ONNX. All you need is our web-based platform or our Python client to run it from your code.

What’s New in Version 1.16.1 – 01/06/2021

  • INFERY Inference Engine. Added the ability to download any model from the platform and serve it on any machine using INFERY, our Python package for deep-learning inference, installed with a simple pip install.
  • Failure Handling. In case of failure, you can now see what went wrong and act accordingly to resolve the issue, or schedule a meeting with our support team.
  • Settings Page. Added a settings page that displays the user's personal and team data.
  • Reset Password. Added the ability to securely change the user's password.
  • Edit Model. Users can now edit a model's configuration by simply clicking the "Edit Model" button within the Lab. This allows changing the fields entered in the "Add Model" wizard (name, primary batch size, primary hardware, etc.). Changing the model's input dimensions automatically triggers a new benchmark process. (The platform will also suggest editing the input dimensions if they were entered incorrectly.)

What’s New in Version 1.12.1 – 01/05/2021

  • Deci Model Hub. Added the ability to use public models published by Deci. These models are trained by Deci and provided production-ready, including their runtime performance benchmarks. Click here to visit the Model Hub.

What’s New in Version 1.9.1 – 01/04/2021

  • Invite a Co-worker. Added the ability to share your workspace with colleagues, so you can view and act on the same models under the same account in the Deci platform.
  • Email Verification. Enhanced security measures by verifying users' email ownership upon signing up to the platform.
  • Input Dimensions Separation. When uploading your own model, it's now easier to enter the input dimensions correctly, as we separated the field into three fields: channels, width, and height.
  • Improved model optimization failure report. It is now easier to understand the status of your model's optimization.
  • Optimization Progress Bar Includes Benchmarking Stage. The benchmarking stage is now shown in the progress bar for better visibility and process transparency.

What’s New in Version 1.5.1 – 07/03/2021

  • Multi-Framework Support. Deci now supports the following frameworks for model uploading, optimization, and serving in RTiC:
    • ONNX
    • TensorFlow 2.x
    • Keras
    • TorchScript
  • Added PyTorch to ONNX Converting Guide. For those who use the PyTorch framework, we suggest converting to ONNX using this quick guide in order to achieve better runtime performance in production.
  • Deci Model Hub. Deci now provides visibility into Deci's own models, so that you can choose between uploading your own model or achieving the same value by choosing a Deci model (public or optimized) and benchmarking it within the platform. Contact us for beta access to the Model Hub.
  • Optimization Progress Bar. For improved visibility into the progress, we have added a progress bar within the platform GUI.
  • Memory Footprint Is Measured in MB. Memory footprint is now measured in MB instead of as a percentage, to better represent the absolute memory consumed by the model on the GPU while it runs in RTiC (Run-Time Inference Container).
  • Minor GUI Bug Fixes. Fixed several UI bugs for an improved, more seamless user experience.

What’s New in Version 1.0.1 – 20/01/2020

  • GUI Support. Graphical user interface for using most of the platform features.
  • Automatic Model Optimization. Automatic run of the model optimization algorithms.
  • 3 Communication Protocols in RTiC. RTiC now supports HTTP, gRPC and IPC (inter-process communication).

What’s New in Version 0.1.8

This Beta version of the Deci Platform offers an API-based platform that enables you to manage your deep learning models in a unified repository, execute runtime optimizations for specific hardware (cloud-based CPU/GPU instances), and deploy and serve your deep learning models on any instance.

  • CLI and Python Client Interface. Using the Deci platform is made easy through a designated client container or by simply utilizing a Python client (more clients to be added soon).
  • HTTP API-based Communication. Interaction with the platform is via Native API. See the API documentation for details.
  • Private Model Repository. The Deci Platform includes a built-in model registry, which makes it easy to organize trained and optimized models by providing versioning and labeling tools.
  • Support for Multiple Frameworks. The Deci Platform is designed to handle and manage any type and quantity of models; this version supports ONNX models only.
  • Support for Any Deep Learning Neural Network. The Deci Platform benefits can be achieved on any deep learning neural network.
  • Run-Time or Algorithmic Optimization Request. The Deci Platform offers an on-demand performance boost using Deci’s out-of-the-box runtime optimizer, a hardware-aware optimized graph compiler, or the AutoNAC optimizer for unparalleled algorithmic model optimization.
  • Automatic Realtime Model Benchmark. The Deci Platform allows you to easily measure and compare the performance of a model across various cloud CPU and GPU instance types and evaluate the device sizing requirements needed to deploy your models.
  • Deploy and Serve Your Models. The Deci Platform enables you to export any model from the model repository to an RTiC (Run-Time Inference Container), which offers a hardware-optimized container to deploy, run and benchmark deep-learning models in any framework. This results in an immediate performance boost for any inference process.
