Introduction to Creating a Vision Solution in the Cloud
A growing number of applications rely on cloud computing to execute computer vision algorithms. In this presentation, Nishita Sant, Computer Vision Scientist at GumGum, introduces the basics of creating a cloud-based vision service, drawing on GumGum's experience developing and deploying a computer vision service for enterprises. Sant explores the architecture of a cloud-based computer vision solution in three parts: an API, computer vision modules (each housing an algorithm and a server), and computer vision features (complex pipelines built from modules). Executing in the cloud requires the API to handle a continuous but unpredictable stream of data from multiple sources and to dispatch work to the appropriate computer vision modules. Each module consists of an AWS Simple Queue Service (SQS) queue and an Amazon EC2 auto-scaling group, and is built to handle both images and video. Sant discusses in detail how these modules make optimal use of instance hardware when executing a computer vision algorithm. Further, she discusses GumGum's method of inter-process communication, which enables complex computer vision pipelines that link several modules. Sant also addresses cost and run-time trade-offs between GPU and CPU instances.
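The abstract stops at the architectural description, but a minimal sketch of the module pattern it outlines (an SQS queue feeding workers in an auto-scaling group) might look like the following. The queue name, message format and analyze_image function are illustrative assumptions, not GumGum's implementation.

```python
# Minimal sketch of a cloud vision "module": an SQS queue feeding a worker
# that runs a vision algorithm. Queue name, message fields, and analyze_image
# are illustrative assumptions, not GumGum's actual implementation.
import json
import boto3

sqs = boto3.resource("sqs")
queue = sqs.get_queue_by_name(QueueName="vision-module-tasks")  # hypothetical queue

def analyze_image(image_url):
    """Placeholder for the vision algorithm hosted by this module."""
    return {"url": image_url, "labels": []}

def run_worker():
    # Each EC2 instance in the auto-scaling group would run this loop;
    # scaling policies (not shown) grow or shrink the group with queue depth.
    while True:
        for message in queue.receive_messages(WaitTimeSeconds=20, MaxNumberOfMessages=1):
            task = json.loads(message.body)
            result = analyze_image(task["image_url"])
            # Publish or persist the result, e.g. to another queue or a datastore.
            print(result)
            message.delete()  # remove the task only after successful processing

if __name__ == "__main__":
    run_worker()
```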
Infusing Visual Understanding in Cloud and Edge Solutions Using State-of-the-Art Microsoft Algorithms
Microsoft offers its state-of-the-art computer vision algorithms, used internally in several products, through the Cognitive Services cloud APIs. With best-in-class results on industry benchmarks, these APIs cover a broad spectrum of tasks spanning image classification, object detection, image captioning, face recognition, OCR and more. Beyond the cloud, these algorithms can also run locally on mobile or edge devices, or via Docker containers, for real-time scenarios. In this talk, Anirudh Koul, Senior Data Scientist, and Jin Yamamoto, Principal Program Manager, both from Microsoft, explain these APIs and tools and show how you can easily get started today to make both the cloud and the edge visually intelligent.
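As a concrete starting point, a hedged sketch of calling the Computer Vision "analyze" endpoint over REST is shown below; the resource endpoint, API version and key are placeholders, and the available visual features depend on your subscription.

```python
# Hedged sketch of calling a Cognitive Services Computer Vision "analyze" API
# over REST; the endpoint host, API version, and key below are placeholders.
import requests

ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder
SUBSCRIPTION_KEY = "<your-subscription-key>"                      # placeholder

def analyze_image(image_url):
    response = requests.post(
        f"{ENDPOINT}/vision/v3.2/analyze",
        params={"visualFeatures": "Description,Tags,Objects"},
        headers={
            "Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY,
            "Content-Type": "application/json",
        },
        json={"url": image_url},
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    result = analyze_image("https://example.com/street-scene.jpg")
    # Print any generated captions from the image description, if present.
    print(result.get("description", {}).get("captions", []))
```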
Deploying CNN-based Vision Solutions on a $3 Microcontroller
In this presentation, Greg Lytle, VP of Engineering at Au-Zone Technologies, explains how his company designed, trained and deployed a CNN-based embedded vision solution on a low-cost, Cortex-M-based microcontroller (MCU). He describes the steps taken to design an appropriate neural network and then to implement it within the limited memory and computational constraints of an embedded MCU. He highlights the technical challenges Au-Zone Technologies encountered in the implementation and explains the methods the company used to meet its design objectives. He also explores how this type of solution can be scaled across a range of low-cost processors with different price and performance metrics. Finally, he presents and interprets benchmark data for representative devices.
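Au-Zone used its own design and deployment toolchain, which the talk covers in detail. Purely as a generic illustration of fitting a CNN into MCU-class memory, the sketch below builds a deliberately small Keras model and converts it to a fully int8-quantized TensorFlow Lite flatbuffer, one common route toward Cortex-M deployment; the layer sizes and calibration data are assumptions.

```python
# Illustrative sketch (not Au-Zone's toolchain): build a tiny CNN and convert it
# to a fully int8-quantized TFLite flatbuffer small enough for an MCU.
import numpy as np
import tensorflow as tf

# A deliberately small network; layer sizes are illustrative assumptions.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(32, 32, 1)),
    tf.keras.layers.Conv2D(8, 3, strides=2, activation="relu"),
    tf.keras.layers.Conv2D(16, 3, strides=2, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),
])
# ... training with model.fit(...) would happen here ...

def representative_data():
    # Calibration samples set the int8 activation ranges; random data here
    # stands in for a few hundred real training images.
    for _ in range(100):
        yield [np.random.rand(1, 32, 32, 1).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

tflite_model = converter.convert()
print(f"Quantized model size: {len(tflite_model) / 1024:.1f} KiB")  # must fit MCU flash
```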
Programmable CNN Acceleration in Under 1 Watt
Driven by factors such as privacy concerns, limited network bandwidth and the need for low latency, system designers are increasingly interested in implementing artificial intelligence (AI) at the edge. Low-power (under 1 Watt), low-cost (under $10) FPGAs, such as Lattice’s ECP5, offer an attractive option for implementing AI at the edge. In order to achieve the best balance of accuracy, power and performance, designers need to carefully select the network model and quantization level. This presentation from Gordon Hands, Director of Marketing for IP and Solutions at Lattice Semiconductor, uses two application examples to help system architects better understand feasible solutions. These examples illustrate the trade-offs of network design, quantization and performance.
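To make the quantization trade-off concrete, here is a generic sketch of symmetric fixed-point weight quantization at a few bit widths; the weight distribution and error metric are illustrative, and Lattice's tools apply their own quantization flow for the ECP5.

```python
# Illustrative sketch of symmetric fixed-point weight quantization at several
# bit widths, showing how precision trades against representation error.
# This is generic; Lattice's design tools apply their own quantization flow.
import numpy as np

def quantize_symmetric(weights, num_bits):
    """Quantize to signed fixed point with 2**(num_bits-1) - 1 levels per side."""
    max_level = 2 ** (num_bits - 1) - 1
    scale = np.max(np.abs(weights)) / max_level
    quantized = np.round(weights / scale).clip(-max_level, max_level)
    return quantized * scale  # dequantized values, for comparison against float

rng = np.random.default_rng(0)
weights = rng.normal(0.0, 0.1, size=10_000)  # stand-in for one layer's weights

for bits in (8, 4, 2):
    error = np.mean(np.abs(weights - quantize_symmetric(weights, bits)))
    print(f"{bits}-bit weights: mean abs error {error:.5f}")
```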