Embedded Vision Insights: December 3, 2019 Edition



LETTER FROM THE EDITOR

Dear Colleague,

In two weeks, on Tuesday, December 17 at 9 am PT, Hailo will deliver the free webinar "A Computer Architecture Renaissance: Energy-efficient Deep Learning Processors for Machine Vision" in partnership with the Embedded Vision Alliance. Hailo has developed a specialized deep learning processor that delivers the performance of a data center-class computer to edge devices. Hailo's AI microprocessor is the product of a rethinking of traditional computer architectures, enabling smart devices to perform sophisticated deep learning tasks such as imagery and sensory processing in real time with minimal power consumption, size and cost. In this webinar, the company will navigate through the undercurrents that drove the definition and development of its AI processor, beginning with the theoretical reasoning behind domain-specific architectures and their implementation in the field of deep learning, specifically for machine vision applications. The presenters will also describe various quantitative measures, presenting detailed design examples in order to make a link between theory and practice. For more information and to register, please see the event page.

The Embedded Vision Alliance is now accepting applications for the 2020 Vision Product of the Year Awards competition. The Vision Product of the Year Awards are open to Member companies of the Alliance and celebrate the innovation of the industry's leading companies that are developing and enabling the next generation of computer vision products. Winning a Vision Product of the Year award recognizes your leadership in computer vision as evaluated by independent industry experts. For more information and to enter, please see the program page.

Brian Dipert
Editor-In-Chief, Embedded Vision Alliance

EMERGING VISION OPPORTUNITIES

Enabling the Next Kitchen Experience Through Embedded Vision
Our kitchens are the hubs where we spend quality time with family and friends, preparing and eating meals. Today, instructions for cooking a particular meal are just a few clicks away. However, consumers must still work to translate recipes to the operation of kitchen appliances and, often, the results are disappointing. There’s clearly an unmet need for simplifying the cooking process and ensuring quality results. In addition, appliance performance, which used to be the key differentiator, is rapidly becoming a given, so manufacturers must differentiate through enabling effortless, results-based cooking. For example, in the near future consumers will be offered recipes based on the ingredients in their kitchens and their preferences. The meal preparation process will be effortless and the results predictable. Embedded vision and machine learning will be key enablers of this future. This presentation from Sugosh Venkataraman, Vice President of Technology at Whirlpool, describes the journey that his company, a leading appliance manufacturer, has embarked on to apply embedded vision to realize the next level of consumer kitchen experiences. Based on Whirlpool’s experience to date, Venkataraman assesses aspects of embedded vision technology that are mature and areas where there are opportunities for innovation.


Vision Tank Start-up Competition Finalist Presentations
Bo Zhu, CTO and Co-founder of BlinkAI Technologies, Dwight Linden, COO and Co-founder of Entropix, Austin Miller, Robotics Engineer at Robotic Materials, Ravi Sahu, CEO of Strayos, and Barbara Rosario, CTO and Co-founder of Vyrill, deliver their Vision Tank finalist presentations at the May 2019 Embedded Vision Summit. The Vision Tank recognizes companies that incorporate visual intelligence in their products in an innovative way and that are looking for investment, partnerships, technology, and customers. In a lively, engaging, and interactive format, these companies compete for awards and prizes as well as benefiting from the feedback of an expert panel of judges: Lina Karam, Professor and Computer Engineering Director at Arizona State University; Derek Meyer, experienced semiconductor industry executive; Vin Ratford, Executive Director of the Embedded Vision Alliance; and John Feland, Master of Ceremonies and CEO, Argus Insights.

PROGRAMMABLE LOGIC-BASED VISION PROCESSING

Accelerate the Adoption of AI at the Edge with Easy to Use, Low-power Programmable Solutions
In this talk, Hussein Osman, Consumer Segment Manager at Lattice Semiconductor, shows why Lattice’s low-power FPGA devices, coupled with the sensAI software stack, are a compelling solution for implementation of sophisticated AI capabilities in edge devices. The latest release of the sensAI stack provides a performance increase of more than 10X compared with the previous release. This performance increase is driven by updates to the CNN IP and the neural network compiler tool, including a number of new features, such as support for 8-bit activation quantization and smart merging of layers. For a seamless user experience, the new release expands the list of neural network topologies and machine learning frameworks supported, and automates the quantization and fraction settings processes. In addition, Lattice Semiconductor provides a comprehensive set of reference designs which include training datasets and scripts for popular machine learning frameworks, enabling easy customization. And, to speed time to market for popular use cases, the company provides full turnkey solutions for human counting, human presence detection and key phrase detection. Also see Lattice Semiconductor's recent webinar on this topic, now available as an on-demand archive.


AI+: Combining AI and Other Critical Functions Using FPGAs
AI is increasingly being deployed in vision applications, and most of these applications require other functionality in addition to AI. For example, a system may require AI plus high-performance I/O for lower system latency, AI plus encryption for security or AI plus networking to achieve persistence. In cases like these, the unique flexibility of the Intel FPGA fabric and I/O, along with the FPGA’s optimized compute engines, enable system developers to deliver truly differentiated solutions. In this presentation, Ronak Shah, Director of AI Marketing Strategy at Intel's Programmable Solutions Group, explains how Intel FPGAs enable “AI+”—combining AI and other functionality to create efficient, highly integrated, high-performance solutions. He presents several examples showing how AI+ is being used today. He also explains how Intel’s OpenVINO toolchain enables you to easily develop AI+ solutions.

UPCOMING INDUSTRY EVENTS

Embedded AI Summit: December 6-8, 2019, Shenzhen, China

Hailo Webinar – A Computer Architecture Renaissance: Energy-efficient Deep Learning Processors for Machine Vision: December 17, 2019, 9:00 am PT

Consumer Electronics Show: January 7-10, 2020, Las Vegas, Nevada

Embedded Vision Summit: May 18-21, 2020, Santa Clara, California

More Events

FEATURED NEWS

MediaTek Announces Dimensity Advanced 5G Chipset Family and Dimensity 1000 5G SoC

Microsoft and Graphcore Collaborate to Accelerate Artificial Intelligence

OmniVision Announces Compact Medical Camera Module With Fast Frame Rates at High Resolutions

Codeplay Software Announces Acoran, the Standards-based Platform for AI Programmers

Basler Expands Portfolio for NXP’s i.MX 8 Processor Series

More News

Here you’ll find a wealth of practical technical insights and expert advice to help you bring AI and visual intelligence into your products without flying blind.

Contact

Address

Berkeley Design Technology, Inc.
PO Box #4446
Walnut Creek, CA 94596

Phone

+1 (925) 954-1411