
Introducing RTiC

What is Deci RTiC?

Runtime Inference Container (RTiC) is Deci’s proprietary containerized deep-learning inference engine that turns a model into a siloed, efficient runtime server. It enables efficient inference and seamless deployment, at scale, on any hardware. Deci RTiC is essential for overcoming the complex challenges of making deep learning models production-ready.
RTiC benefits include –

  • Simplifies Deployment – Deci enables quick and simple packaging of AI models into production-ready containers, built for scalability and rapid deployment.
  • Improves Latency and Throughput – Deci accelerates the inference performance of AI models and optimizes them for any given target hardware (CPU or GPU).
  • Runs Anywhere – Deci enables model portability across common frameworks and across various types of production hosts, providing inference performance optimization on multiple hardware platforms.
  • Reduces Cost-to-Serve – Deci reduces total cost of ownership by up to 80% by maximizing hardware utilization. RTiC enables the pipelining and performance scaling of multiple models on a single host.
  • Monitors Your Model's Performance During Production – Deci reveals how your models really behave in production, enabling debugging or scaling when required.

Using Deci RTiC

Deci RTiC provides both a simple web user interface and API commands for optimizing your model’s run-time production performance. With Deci, you optimize and benchmark your models, and then run them in production in Deci’s RTiC Inference Engine container.

The RTiC platform consists of an RTiC server container and an RTiC Python client, and accordingly provides two components to be downloaded, installed and used –

  • RTiC Server – Which handles inference requests sent to your models during production.
  • RTiC Client – Which enables your application to send inference requests to the RTiC server.

Deci provides two versions of the inference engine server that perform the same functionality, one to be installed on a CPU machine and one on a GPU machine. A minimal sketch of the client/server interaction follows.
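
For illustration only, the sketch below shows how an application might use the RTiC Python client to send an inference request to a running RTiC server. The package name deci_client, the RTiCClient class, and the predict method are assumptions made for this example, not the documented client API; refer to the API Reference for the actual calls.

    # A minimal sketch of the client/server interaction.
    # NOTE: the package, class, and method names below are illustrative
    # assumptions, not the documented RTiC client API.
    import numpy as np
    from deci_client import RTiCClient  # hypothetical import

    # Connect to an RTiC server container running on this host.
    client = RTiCClient(host="localhost", port=8000)

    # Build a dummy input batch matching the model's expected input shape.
    batch = np.random.rand(1, 3, 224, 224).astype(np.float32)

    # Send an inference request to a model loaded in the server
    # and read back the prediction.
    prediction = client.predict(model_name="my_model", inputs=batch)
    print(prediction)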

Getting Started with RTiC

Here are two quick tutorials to get you started and to let you experience some of our main capabilities firsthand –

These tutorials show how to deploy an RTiC server Docker image on any machine, how to run and benchmark it, and how to send inference requests to it using Deci’s RTiC Python client. A sketch of this end-to-end workflow follows.
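
To make the workflow concrete before you open the tutorials, here is a hedged sketch of the three steps: deploying and running the server image, benchmarking a model, and sending an inference request. The Docker image name in the comment and all client calls are illustrative assumptions; the tutorials give the exact commands and API.

    # End-to-end sketch of the RTiC workflow (illustrative names throughout;
    # the tutorials contain the exact commands and client API).
    #
    # Step 1 - deploy and run the RTiC server Docker image on the target
    # machine, e.g. something along the lines of:
    #   docker run -d -p 8000:8000 deci/rtic-server:latest   # hypothetical image
    import numpy as np
    from deci_client import RTiCClient  # hypothetical import

    client = RTiCClient(host="localhost", port=8000)

    # Step 2 - benchmark a model that the server is hosting
    # (hypothetical call; real benchmarking may instead be done via the
    # web user interface or a dedicated API command).
    stats = client.benchmark(model_name="my_model", batch_size=1)
    print("latency (ms):", stats["latency_ms"])
    print("throughput (req/s):", stats["throughput"])

    # Step 3 - send an inference request, exactly as in production.
    batch = np.random.rand(1, 3, 224, 224).astype(np.float32)
    prediction = client.predict(model_name="my_model", inputs=batch)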

