Deci’s Deep Learning NAS technology automatically generated a new family of models dubbed DeciSeg, which deliver unparalleled inference performance and accuracy
Tel Aviv, Israel (PRWEB) – September 22, 2022 – Deci, the deep learning company harnessing AI to build AI, today announced a new set of industry-leading semantic segmentation models, dubbed DeciSeg. Deci’s proprietary Automated Neural Architecture Construction (AutoNAC) technology automatically generated semantic segmentation models that significantly outperform the most powerful publicly available models, such as Apple’s MobileViT and Google’s DeepLab family. Deci’s models deliver more than 2x lower latency and 3-7% higher accuracy.
Semantic segmentation is one of the most widely used computer vision tasks across many business verticals, including automotive, smart cities, healthcare, and consumer applications, and is often required for edge AI applications. However, significant barriers stand in the way of running semantic segmentation models directly on edge devices, such as high latency and models too large to deploy.
With DeciSeg models, semantic segmentation tasks that previously could not be carried out at the edge because they were too resource intensive are now possible. This allows companies to develop new use cases and applications on edge devices, reduce inference costs (since AI practitioners will no longer need to run these tasks in expensive cloud environments), open new markets, and shorten development times.
“DeciSeg models demonstrate the power of Deci’s AutoNAC engine to generate custom hardware-aware deep learning models with unparalleled performance on any hardware. AI teams can easily use DeciSeg models, or leverage Deci’s AutoNAC engine to build and deploy custom models that run real-time computer vision tasks on their edge devices,” said Yonatan Geifman, PhD, co-founder and CEO of Deci.
Deci’s platform has a proven track record of enabling AI at the edge and empowering AI teams to build and deploy production-grade deep learning models. Earlier this year, Deci announced the discovery of DeciNets for CPUs, which cut in half the gap between a model’s inference performance on a GPU versus a CPU, without sacrificing accuracy, enabling AI to run on lower-cost, resource-constrained hardware.
“In the world of automated deep neural network design and construction, Deci’s AutoNAC technology is a game changer. It uses deep learning to search vast spaces of neural networks for the model most appropriate for a particular task and a particular AI chip. In this case, AutoNAC was applied to the Pascal VOC semantic segmentation task on NVIDIA’s Jetson Xavier NX™ chip, and we are very pleased with the results,” said Ran El-Yaniv, co-founder and Chief Scientist of Deci and Professor of Computer Science at the Technion – Israel Institute of Technology.
Deci’s platform serves customers across industries in various production environments, including edge, mobile, data center, and cloud. To learn more about how leading AI teams leverage Deci’s platform to build production-grade models and accelerate inference performance, visit here.
About Deci
Deci enables deep learning to live up to its true potential by using AI to build better AI. With the company’s deep learning development platform, AI developers can build, optimize, and deploy faster and more accurate models for any environment, including cloud, edge, and mobile, allowing them to revolutionize industries with innovative products. The platform is powered by Deci’s proprietary Automated Neural Architecture Construction (AutoNAC) technology, which automatically generates and optimizes deep learning model architectures, allowing teams to accelerate inference performance, enable new use cases on limited hardware, shorten development cycles, and reduce computing costs. Founded by Yonatan Geifman, Jonathan Elial, and Professor Ran El-Yaniv, Deci’s team of deep learning engineers and scientists is dedicated to eliminating production-related bottlenecks across the AI lifecycle.