Embedded Vision Insights: January 8, 2019 Edition



LETTER FROM THE EDITOR

Dear Colleague,

The Embedded Vision Summit is the preeminent conference on practical computer vision, covering applications at the edge and in the cloud. It attracts a global audience of over one thousand product creators, entrepreneurs and business decision-makers who are creating and using computer vision technology. The Embedded Vision Summit has experienced exciting growth over the last few years, with 97% of 2018 Summit attendees reporting that they'd recommend the event to a colleague. The next Summit will take place May 20-23, 2019 in Santa Clara, California, and online registration is now available. The Summit is the place to learn about the latest applications, techniques, technologies, and opportunities in computer vision and deep learning. And in 2019, the event will feature new, deeper and more technical sessions, with more than 90 expert presenters in four conference tracks and 100+ demonstrations in the Technology Showcase. Register today using promotion code SUPEREBNL19 to save 25% with our limited-time Super Early Bird Discount rate.

Are you an early-stage start-up developing a new product or service that incorporates or enables computer vision or visual AI? Do you want to raise awareness of your company and its products with industry experts, investors and entrepreneurs? The 4th annual Vision Tank competition offers startup companies the opportunity to present their new products and product ideas to attendees at the 2019 Embedded Vision Summit. The Vision Tank is a unique startup competition, judged by accomplished vision industry investors and entrepreneurs. The deadline to enter is January 31, 2019. Finalists will receive a free two-day Embedded Vision Summit registration package, and the final round of competition takes place during the Summit itself. In addition to other prizes, the Judge's Choice and Audience Choice winners will each receive a free one-year membership in the Embedded Vision Alliance, providing unique access to the embedded vision industry ecosystem. For more information, including detailed instructions and an online submission form, please see the event page on the Alliance website. Good luck!

Brian Dipert
Editor-In-Chief, Embedded Vision Alliance

CLOUD, EDGE AND HYBRID PROCESSING

Harnessing the Edge and the Cloud Together for Visual AI (Au-Zone Technologies)
Embedded developers are increasingly comfortable deploying trained neural networks as static elements in edge devices, as well as using cloud-based vision services to implement visual intelligence remotely. In this presentation, Sébastien Taylor, Vision Technology Architect at Au-Zone Technologies, explores the benefits of combining edge and cloud computing to bring added capability and flexibility to edge devices. For example, an edge device can use a locally implemented neural network to address common cases, and utilize larger models in the cloud for unfamiliar events, localization differences and corner cases. Cloud resources can also be used to provide updated neural network models to edge devices. Taylor explores a cascaded machine learning architecture that takes advantage of both edge and cloud computing to create a system that can dynamically adapt to new conditions. Using image classification use cases, he describes the solution in detail, including system capabilities, applications and design trade-offs.
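For readers who want a concrete picture of such an edge/cloud cascade, here is a minimal Python sketch of the general idea. All of the names, the cloud endpoint URL and the confidence threshold below are illustrative assumptions, not Au-Zone's actual design or API:

# Minimal sketch of a cascaded edge/cloud inference flow. Names, endpoint and
# threshold are hypothetical placeholders used only to illustrate the pattern.

import json
import urllib.request

CONF_THRESHOLD = 0.8  # assumed cut-off for trusting the local model
CLOUD_ENDPOINT = "https://example.com/classify"  # hypothetical cloud service


def classify_on_edge(image_bytes):
    """Placeholder for a locally deployed, static neural network."""
    # A real implementation would run a compact on-device model here.
    return {"label": "unknown", "confidence": 0.42}


def classify_in_cloud(image_bytes):
    """Fall back to a larger cloud-hosted model for unfamiliar or corner cases."""
    req = urllib.request.Request(
        CLOUD_ENDPOINT,
        data=image_bytes,
        headers={"Content-Type": "application/octet-stream"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())


def classify(image_bytes):
    result = classify_on_edge(image_bytes)
    if result["confidence"] >= CONF_THRESHOLD:
        return result  # common case handled entirely on-device
    return classify_in_cloud(image_bytes)  # escalate hard cases to the cloud

The key design choice is the confidence threshold: common cases are resolved entirely on-device, while low-confidence inputs are escalated to a larger cloud model, and those same escalations can be logged to drive future model updates pushed back to the edge.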

At the Edge of AI: Ultra-efficient AI on Low-power Compute Platforms (XNOR.AI)
Improvements in deep learning models have increased the demand for AI in several domains. Because these models demand massive amounts of computation and memory, current AI applications often have to resort to cloud-based solutions. However, AI applications cannot always scale via cloud solutions, and sending data to the cloud is not always desirable for many reasons (e.g., privacy, bandwidth, …). There is therefore significant demand for running AI models directly on edge devices. These devices often have limited compute and memory capacity, so porting deep learning algorithms to them is extremely challenging. In this presentation, Mohammad Rastegari, CTO of XNOR.ai, introduces XNOR.ai's optimized software platforms, which enable deploying AI models on a variety of low-power compute platforms with extreme resource constraints. The company's solution is rooted in the efficient design of deep neural networks using binary operations and network compression, along with optimization algorithms for training.
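The binary-operation idea mentioned above can be illustrated with a toy example: when weights and activations are constrained to {-1, +1}, a dot product reduces to an XNOR followed by a popcount. The following Python sketch shows that generic reduction; it is not XNOR.ai's implementation:

# Toy illustration of the XNOR + popcount dot product used by binarized
# neural networks. Generic sketch of the technique, not a production kernel.

def binarize(values):
    """Map real values to {-1, +1}, packed as bits (bit set means +1)."""
    bits = 0
    for i, v in enumerate(values):
        if v >= 0:
            bits |= 1 << i
    return bits


def xnor_dot(a_bits, b_bits, n):
    """Dot product of two {-1, +1} vectors of length n via XNOR and popcount.

    matches = positions where the signs agree, so the dot product equals
    matches - (n - matches) = 2 * matches - n.
    """
    xnor = ~(a_bits ^ b_bits) & ((1 << n) - 1)  # 1 where the signs agree
    matches = bin(xnor).count("1")
    return 2 * matches - n


a = [0.3, -1.2, 0.7, -0.1]
b = [1.0, -0.5, -0.9, 0.2]
print(xnor_dot(binarize(a), binarize(b), len(a)))  # prints 0: two agreements, two disagreements

Because dozens of these binary multiply-accumulates can be packed into a single machine word, this reduction is a large part of what makes binarized networks attractive on resource-constrained edge hardware.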

AUTOMOTIVE VISION APPLICATIONS

Computer Vision Hardware Acceleration for Driver Assistance (Bosch)
With highly automated and fully automated driver assistance systems just around the corner, next-generation ADAS sensors and central ECUs will have to cope with much more stringent safety and functional requirements. This trend translates directly into a huge increase in required compute performance, and hence into much higher power consumption and system cost. Due to the growing number and complexity of visual sensors around the car, the embedded computer vision subsystem accounts for a major share of this increase. To realize efficient, safe and affordable L3-and-higher ADAS solutions, it is important to strike the best possible balance of hardware acceleration between fixed-function logic and specialized programmable processing architectures. This presentation from Markus Tremmel, Chief Expert for ADAS at Bosch, gives an overview of the computer vision processing options available in a next-generation ADAS system and examines the trade-offs between fixed-function logic and programmable processing units, with an eye on the latest developments in deep learning.

Designing Smarter, Safer Cars with Embedded Vision Using EV Processor Cores (Synopsys)
Consumers, the automotive industry and government regulators are requiring greater levels of automotive functional safety with each new generation of cars. Embedded vision, using advanced neural networks, plays a critical role in bringing these high levels of safety to market. This presentation from Fergus Casey, R&D Director for ARC Processors at Synopsys, provides an overview of the latest safety standards, e.g. ISO 26262, explains how they apply to embedded vision applications, and describes the technical features that system architects should look for when selecting an embedded vision processor for their safety-critical automotive ICs/SoCs.

UPCOMING INDUSTRY EVENTS

Consumer Electronics Show: January 8-11, 2019, Las Vegas, Nevada

Embedded Vision Summit: May 20-23, 2019, Santa Clara, California

More Events


FEATURED NEWS

Intel AI Protects Animals with National Geographic Society, Leonardo DiCaprio Foundation

STMicroelectronics Drives AI to Edge and Node Embedded Devices with STM32 Neural-Network Developer Toolbox

ON Semiconductor Announces Strata Developer Studio, Industry’s Most Comprehensive Research, Evaluation and Design Tool

New OmniVision Image Sensor Provides Cost-Effective 16 MP Upgrade for Rear- and Front-Facing Cameras on Mainstream Smartphones With Thin Bezels

A Sharper Digital Eye For Intelligent Devices With the Latest Arm ISP Technology

More News



Contact

Address

Berkeley Design Technology, Inc.
PO Box #4446
Walnut Creek, CA 94596

Phone
+1 (925) 954-1411