Well, it depends. Deci's deep learning algorithms accelerate model inference on two different levels:
Runtime-level optimization (Community tier) – Deci leverages multiple tools to compile a model in a hardware-aware manner, so the model runs faster and more efficiently without compromising accuracy. This method doesn't affect the training process at all.
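Conceptually, hardware-aware compilation means choosing a compiler backend and settings per target device, while the trained weights stay unchanged. The sketch below is purely illustrative — the tool names, precision settings, and selection logic are assumptions for the sake of example, not Deci's actual implementation:

```python
# Illustrative sketch of hardware-aware compilation dispatch.
# NOTE: backends and settings here are hypothetical examples,
# not Deci's actual tool selection logic.

def select_compilation_config(target_hardware: str) -> dict:
    """Map an inference target to compiler settings (assumed values)."""
    configs = {
        "nvidia-gpu": {"backend": "TensorRT", "precision": "fp16", "graph_fusion": True},
        "intel-cpu": {"backend": "OpenVINO", "precision": "int8", "graph_fusion": True},
        "generic": {"backend": "ONNX Runtime", "precision": "fp32", "graph_fusion": False},
    }
    # Fall back to a portable configuration when the hardware is unknown.
    return configs.get(target_hardware, configs["generic"])

print(select_compilation_config("intel-cpu")["backend"])  # prints "OpenVINO"
```

The key point is that the same model artifact gets a different compilation recipe depending on where it will serve inference, which is why no retraining is needed.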
Algorithmic-level optimization (AutoNAC) – Deci applies its proprietary AutoNAC technology, which redesigns your deep learning model to squeeze the maximum utilization out of the hardware targeted for inference in production. Deci's AutoNAC engine contains a neural architecture search (NAS) component that revises a given trained model to speed up its runtime by as much as 10x, while preserving the model's baseline accuracy. A side benefit is shorter training time: the optimization flow makes the model lighter and simpler to train on the same amount of data.
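To make the NAS idea concrete, here is a toy random search — not Deci's AutoNAC algorithm, and with latency and accuracy models invented purely for illustration — that picks the fastest candidate architecture whose simulated accuracy stays within a tolerance of the baseline:

```python
import random

# Toy illustration of the NAS concept: search candidate architectures
# for the lowest latency while keeping accuracy near the baseline.
# The simulate() cost model is made up for this example; a real NAS
# measures or predicts latency and accuracy on the target hardware.

def simulate(depth: int, width: int) -> tuple:
    """Hypothetical latency (ms) and accuracy for an architecture."""
    latency = depth * width * 0.05
    accuracy = min(0.95, 0.70 + 0.01 * depth + 0.002 * width)
    return latency, accuracy

def nas_search(baseline_acc: float, tolerance: float = 0.01, trials: int = 200):
    random.seed(0)  # reproducible demo
    best = None
    for _ in range(trials):
        depth = random.randint(4, 32)
        width = random.choice([32, 64, 128, 256])
        latency, acc = simulate(depth, width)
        # Only keep candidates that preserve baseline accuracy.
        if acc >= baseline_acc - tolerance:
            if best is None or latency < best[0]:
                best = (latency, depth, width, acc)
    return best

baseline_latency, baseline_acc = simulate(depth=24, width=256)
best = nas_search(baseline_acc)
print(f"baseline {baseline_latency:.1f} ms -> optimized {best[0]:.1f} ms")
```

The accuracy constraint is what distinguishes this kind of search from plain model shrinking: candidates that are fast but inaccurate are discarded, so the speedup comes from a better architecture rather than a weaker one.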