Using Kubernetes to Speed Development and Deployment of Edge Computer Vision Applications
In this 2022 Embedded Vision Summit presentation, Rakshit Agrawal, Vice President of Research and Development at Camio, presents real-world deployments of computer vision solutions using Kubernetes and shows how containerization enables continuous AI updates at massive scale at the edge. Housing the code, dependencies and environment in one logical, portable block enables AI applications to run across different platforms, toolsets and chipsets. This speeds deployment of new applications by 10x or more and makes the development, packaging and deployment process predictable and consistent. You’ll learn how a multinational fuel dispenser manufacturer used a Kubernetes-driven video processing pipeline to train new models in days using existing source data, then deployed them by adding a new container to a cluster at the edge. These models enabled the company to detect physical problems, such as gas nozzles left in vehicles, and to gauge the effectiveness of on-site dispenser marketing initiatives within weeks of the capability being requested.
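As a rough illustration of the pattern described above (not Camio’s actual configuration), the sketch below uses the official Kubernetes Python client to roll out a new model-serving container to an edge cluster. The image name, deployment name and namespace are hypothetical placeholders.

```python
# Minimal sketch: deploying a new vision-model container to an edge cluster
# with the official Kubernetes Python client. All names (image, deployment,
# namespace) are hypothetical placeholders, not Camio's configuration.
from kubernetes import client, config

def deploy_model_container(image: str, name: str, namespace: str = "edge-vision"):
    """Create a single-replica Deployment running the given model image."""
    config.load_kube_config()  # or config.load_incluster_config() on the node
    container = client.V1Container(
        name=name,
        image=image,  # e.g. "registry.example.com/nozzle-detector:v2"
        resources=client.V1ResourceRequirements(limits={"cpu": "1", "memory": "1Gi"}),
    )
    deployment = client.V1Deployment(
        api_version="apps/v1",
        kind="Deployment",
        metadata=client.V1ObjectMeta(name=name),
        spec=client.V1DeploymentSpec(
            replicas=1,
            selector=client.V1LabelSelector(match_labels={"app": name}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": name}),
                spec=client.V1PodSpec(containers=[container]),
            ),
        ),
    )
    client.AppsV1Api().create_namespaced_deployment(namespace, deployment)

deploy_model_container("registry.example.com/nozzle-detector:v2", "nozzle-detector")
```

Because the model, its dependencies and its runtime environment all travel inside the container image, shipping an updated model is just a matter of pushing a new image tag and repeating this deployment step on each edge cluster.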
Data Versioning: Towards Reproducibility in Machine Learning
Reproducibility is still a big pain point in most data science workflows. A critical element required for reproducibility is version control. Unfortunately, machine learning has a notorious lack of standards for version control, so developers typically resort to crafting ad hoc workflows, and they frequently reinvent the wheel because they are unaware of existing solutions. In this 2022 Embedded Vision Summit talk, Nicolás Eiris, Machine Learning Engineer at Tryolabs, introduces DVC, short for “Data Version Control,” an open-source tool that Tryolabs has found can significantly alleviate the pain of reproducibility in data science workflows. He covers the motivation for such a tool, digs into its main features and makes the case that integrating it into your next project will make your life much better. Everything is illustrated through a real-world example of an end-to-end ML pipeline.
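To give a flavor of how DVC ties data versions to Git history, here is a minimal sketch using DVC’s Python API; the repository URL, file path and tag are hypothetical placeholders.

```python
# Reading a DVC-versioned dataset pinned to a specific Git revision.
# The repo URL, file path and tag below are hypothetical placeholders.
import dvc.api

# Open the exact dataset version associated with Git tag "v1.0",
# regardless of what is currently checked out -- the key to making
# an experiment reproducible months later.
with dvc.api.open(
    "data/train.csv",
    repo="https://github.com/example/ml-project",
    rev="v1.0",  # a Git commit, branch, or tag pinning the data version
) as f:
    header = f.readline()
```

Because the data itself lives in remote storage while Git tracks only small metadata files, the same revision always resolves to byte-identical data.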
Knowledge Distillation of Convolutional Neural Networks
Convolutional neural networks are ubiquitous in academia and industry, especially for computer vision and language processing tasks. However, their superior ability to learn meaningful representations from large-scale data comes at a price: they are often over-parameterized, with millions of parameters adding latency and unnecessary cost when deployed in production. In this 2022 Embedded Vision Summit talk, Federico Perazzi, Head of AI at Bending Spoons, presents the foundations of knowledge distillation, an essential technique for compressing neural networks while preserving their performance. Knowledge distillation entails training a lightweight model, referred to as the student, to replicate a pre-trained larger model, called the teacher. He illustrates how this process works in detail using a real-world image restoration task that Bending Spoons recently worked on. By distilling the teacher model’s knowledge, the company obtained a threefold speedup and improved the quality of the reconstructed images.
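For readers unfamiliar with the mechanics, below is a minimal PyTorch sketch of the classic soft-label distillation loss (Hinton et al., 2015) for classification; it is illustrative only and is not Bending Spoons’ image-restoration setup, where the student typically regresses the teacher’s outputs or features directly. The temperature and loss weighting are illustrative choices.

```python
# Minimal PyTorch sketch of soft-label knowledge distillation
# (Hinton et al., 2015). Illustrative only; temperature and loss
# weighting are example values, not settings from the talk.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature: float = 4.0, alpha: float = 0.7):
    """Blend the usual task loss with a KL term pulling the student's
    softened predictions toward the teacher's."""
    # Softened distributions; T > 1 exposes the teacher's "dark knowledge"
    # in the relative probabilities it assigns to wrong classes.
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    # The T^2 factor keeps the KL term's gradient scale independent of T.
    kd = F.kl_div(log_soft_student, soft_teacher,
                  reduction="batchmean") * temperature ** 2
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1.0 - alpha) * ce
```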
Ensuring Quality Data for Deep Learning in Varied Application Domains: Data Collection, Curation and Annotation
In this 2022 Embedded Vision Summit presentation, Gaurav Singh, Perception Lead and System Architect at Nemo @ Ridecell, explores the data lifecycle for deep learning, with a particular emphasis on data curation and how to ensure quality annotations. For improving data curation, he examines techniques like active learning, focusing on how to choose which data to send for annotation. Singh also discusses how to select an annotation partner and how to efficiently do annotation in-house. He details how to frame good annotation instructions for different annotation tasks, such as lidar annotation, semantic segmentation and sequence annotations. And he explains common problems seen in the curation and annotation processes, and how to overcome them.
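As a concrete (and deliberately simplified) example of the active-learning idea mentioned above, the sketch below implements least-confidence sampling, one common heuristic for choosing which unlabeled frames to send for annotation; it is purely illustrative and not the specific pipeline from the talk.

```python
# Minimal sketch of least-confidence active learning: rank unlabeled
# samples by model uncertainty and send the top-k for annotation.
# Purely illustrative; not the specific pipeline from the talk.
import numpy as np

def select_for_annotation(probs: np.ndarray, k: int) -> np.ndarray:
    """probs: (n_samples, n_classes) softmax outputs of the current model.
    Returns indices of the k samples the model is least confident about."""
    confidence = probs.max(axis=1)        # top-class probability per sample
    return np.argsort(confidence)[:k]     # lowest confidence first

# Example: predictions over four unlabeled frames.
probs = np.array([[0.9, 0.1], [0.55, 0.45], [0.7, 0.3], [0.51, 0.49]])
print(select_for_annotation(probs, k=2))  # -> [3 1], the most ambiguous frames
```

Heuristics like this concentrate the annotation budget on samples the model finds hardest, which is usually where new labels improve it most.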
Outsight SHIFT LiDAR Software (Best Edge AI Software or Algorithm)
Outsight’s SHIFT LiDAR Software is the 2023 Edge AI and Vision Product of the Year Award winner in the Edge AI Software and Algorithms category. The SHIFT LiDAR Software is a real-time 3D LiDAR pre-processor that enables application developers and integrators to easily use LiDAR data from any supplier in any use case outside of automotive (e.g., smart infrastructure, robotics and industrial applications). Outsight’s SHIFT LiDAR Software is the industry’s first 3D data pre-processor, providing the essential functions required to integrate any LiDAR into any project (SLAM, object detection and tracking, segmentation and classification, etc.). One of the software’s greatest advantages is that it produces an “explainable” real-time data stream that is low-level enough to feed ML algorithms directly or be fused with other sensors, yet smart enough to reduce network and central processing requirements, thereby enabling a new range of LiDAR applications. Outsight believes that accelerating the adoption of LiDAR technology with easy-to-use, scalable software will contribute meaningfully to transformative solutions and products for a smarter and safer world.
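To illustrate the kind of pre-processing that reduces downstream network and compute load (a generic numpy sketch, emphatically not Outsight’s implementation), here is voxel-grid downsampling, which thins a point cloud before it is streamed onward:

```python
# Generic illustration (not Outsight's implementation) of one LiDAR
# pre-processing step: voxel-grid downsampling, which thins a point
# cloud before transmission, cutting network and compute load.
import numpy as np

def voxel_downsample(points: np.ndarray, voxel_size: float) -> np.ndarray:
    """points: (n, 3) xyz array. Keeps one point per occupied voxel
    (the centroid of the points falling inside it)."""
    voxel_ids = np.floor(points / voxel_size).astype(np.int64)
    # Group points sharing a voxel and average them into a centroid.
    _, inverse, counts = np.unique(voxel_ids, axis=0,
                                   return_inverse=True, return_counts=True)
    centroids = np.zeros((counts.size, 3))
    np.add.at(centroids, inverse, points)   # unbuffered per-voxel sums
    return centroids / counts[:, None]

cloud = np.random.rand(100_000, 3) * 50.0   # synthetic 50 m cube of points
print(voxel_downsample(cloud, voxel_size=0.5).shape)
```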
Please see here for more information on Outsight’s SHIFT LiDAR Software. The Edge AI and Vision Product of the Year Awards celebrate the innovation of the industry’s leading companies that are developing and enabling the next generation of edge AI and computer vision products. Winning a Product of the Year award recognizes a company’s leadership in edge AI and computer vision as evaluated by independent industry experts.