LETTER FROM THE EDITOR
Dear Colleague,
Are you interested in learning more about the key trends driving the proliferation of visual AI? The Embedded Vision Alliance will deliver a free webinar on this topic on October 16. Jeff Bier, founder of the Alliance and co-founder and President of BDTI, will examine the four most important trends fueling the development of vision applications and shaping the future of the industry. He will explain what is driving each trend and highlight its implications for technology suppliers, solution developers and end users. He will also provide technology and application examples illustrating each trend, including a spotlight on the winners of the Alliance's annual Vision Product of the Year Awards (see below for more information on the awards).

Two webinar sessions will be offered: the first, at 9 am Pacific Time (noon Eastern Time), is timed for attendees in Europe and the Americas, while the second, at 6 pm Pacific Time (9 am China Standard Time on October 17), is intended for attendees in Asia. To register, please see the event page for the session you're interested in.
The next session of the Embedded Vision Alliance's in-person technical training class series, Deep Learning for Computer Vision with TensorFlow 2.0, takes place November 1 in Fremont, California, hosted by Alliance Member company Mentor. This one-day, hands-on overview will give you the critical knowledge you need to develop deep learning computer vision applications with TensorFlow. Details, including online registration, can be found here.
Brian Dipert
Editor-In-Chief, Embedded Vision Alliance
LOW POWER DESIGN
Optimizing SSD Object Detection for Low-power Devices
Deep learning-based computer vision models have gained traction in applications requiring object detection, thanks to their accuracy and flexibility. For deployment on low-power hardware, single-shot detection (SSD) models are attractive due to their speed when operating on inputs with small spatial dimensions. The key challenge in creating efficient embedded implementations of SSD lies not in the feature extraction module but in the non-linear bottleneck of the detection stage, which does not lend itself to parallelization. This makes it difficult to reduce per-frame processing time, even with custom hardware. In this presentation, Moses Guttmann, CTO and founder of Allegro, describes in detail a data-centric optimization approach to SSD that drastically reduces the number of priors ("anchors") needed for detection, and thus linearly decreases the time spent on this costly part of the computation. With this approach, specialized processors and custom hardware can be better utilized, yielding higher performance and lower latency regardless of the specific hardware used.
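To see why pruning priors pays off linearly, it helps to count them. The sketch below uses the standard SSD300 configuration (six feature maps, 4 or 6 anchors per cell, 8,732 priors in total); the reduced anchor counts are purely illustrative and are not Allegro's actual method, which selects priors based on the dataset.

```python
# Hypothetical sketch: counting SSD priors ("anchors") for a 300x300 input.
# The baseline follows the standard SSD300 configuration; the reduced
# per-cell counts are made-up numbers for illustration only.

def count_priors(feature_maps, anchors_per_cell):
    """Total priors = sum over feature maps of H * W * anchors per cell."""
    return sum(h * w * a for (h, w), a in zip(feature_maps, anchors_per_cell))

# SSD300 defaults: six feature maps with 4 or 6 anchors per cell.
ssd300_maps = [(38, 38), (19, 19), (10, 10), (5, 5), (3, 3), (1, 1)]
ssd300_anchors = [4, 6, 6, 6, 4, 4]
baseline = count_priors(ssd300_maps, ssd300_anchors)  # 8732 priors

# A data-centric reduction might drop aspect ratios and scales the target
# dataset never uses -- here, halving the anchors at every cell.
reduced_anchors = [2, 3, 3, 3, 2, 2]
reduced = count_priors(ssd300_maps, reduced_anchors)  # 4366 priors

print(baseline, reduced, baseline / reduced)  # 8732 4366 2.0
```

Because the detection stage's cost scales with the prior count, halving the priors roughly halves the time spent in that serial bottleneck.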
Low-power Computer Vision: Status, Challenges and Opportunities
Energy efficiency plays a crucial role in making computer vision successful in battery-powered systems, including drones, mobile phones and autonomous robots. Since 2015, the IEEE has organized an annual low-power computer vision competition to identify the most energy-efficient technologies for detecting objects in images. Each entry's score is the ratio of its accuracy to its energy consumption. Over four years, the winning solutions have improved their scores by a factor of 24. In this presentation, Professor Yung-Hsiang Lu of Purdue University describes the competition and summarizes the winning solutions, including quantization and accuracy-energy trade-offs. Based on technology trends, he identifies challenges and opportunities in enabling energy-efficient computer vision.
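The competition's accuracy-per-energy metric is simple to state, and a toy example shows why it rewards efficiency, not just accuracy. The entry names and numbers below are invented for illustration and do not come from the actual competition results.

```python
# Illustrative sketch of the scoring metric described in the talk:
# score = accuracy / energy. Entries and values are hypothetical.

def score(accuracy, energy_wh):
    """Higher accuracy and lower energy consumption both raise the score."""
    return accuracy / energy_wh

entries = {
    "entry_a": (0.60, 2.0),  # (accuracy, watt-hours) -- made-up values
    "entry_b": (0.55, 1.0),
}
ranked = sorted(entries, key=lambda name: score(*entries[name]), reverse=True)
print(ranked)  # ['entry_b', 'entry_a']
```

Under this metric, entry_b wins despite its lower accuracy, because it uses half the energy: a trade-off that pushes competitors toward techniques like quantization.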
AUTONOMOUS VEHICLES
What’s Changing in Autonomous Vehicle Investments Worldwide—and Why?
So far, industry has invested over $100B in the development of autonomous vehicles (AVs), and the pace of investment has recently accelerated. In this talk, Rudy Burger, Managing Partner at Woodside Capital Partners, presents extensive data and analysis showing how this capital is being allocated among different categories of companies, including the autonomous vehicle makers themselves and the developers of enabling technologies such as sensors, AI software and AI processors. Burger also shows how the flows of AV investment capital are shifting worldwide. He examines the regulatory forces at work in different parts of the world and their likely impact on investment capital, and analyzes the impact of the U.S. government's recent foreign-investment-related actions on investments in U.S. AV companies. Finally, he discusses where the AV market is today relative to Gartner's "hype curve" and how long it will likely take the market participants to start generating returns on invested capital.
Making Cars That See—Failure is Not an Option
Drivers are the biggest source of uncertainty in the operation of cars, and computer vision is helping to eliminate human error and make the roads safer. But 14 years after autonomous vehicles successfully completed the DARPA Grand Challenge, the question remains: "Where's my driverless car?" In this talk, Burkhard Huhnke, Vice President of Automotive Strategy for Synopsys, examines three key areas where development of automotive computer vision and related technologies has lagged. First, achieving robust designs with very low failure rates has proven more difficult than expected. Second, the technology is expensive, and current business cases don't support these costs. And, third, manufacturing has not yet scaled up to mass-produce self-driving cars. Huhnke explains why progress in these areas has been slower than expected and explores the vision processing performance requirements that have proven challenging. Finally, he shares his views on how collaboration among innovative ecosystem suppliers will enable consumer-ready cars with high reliability, safety and security.
UPCOMING INDUSTRY EVENTS
Drive World with ESC: August 27-29, 2019, Santa Clara, California
Embedded Vision Alliance Webinar – Key Trends in the Deployment of Visual AI: October 16, 2019, 9:00 am PT and 6:00 pm PT
Technical Training Class – Deep Learning for Computer Vision with TensorFlow 2.0: November 1, 2019, Fremont, California
Embedded AI Summit: December 6-8, 2019, Shenzhen, China
More Events
VISION PRODUCT OF THE YEAR SHOWCASE
Horizon Robotics Horizon Matrix (Best Automotive Solution)
Horizon Robotics' Horizon Matrix is the 2019 Vision Product of the Year Award winner in the Automotive Solutions category. The Horizon Matrix is an open autonomous driving computing platform built on Horizon Robotics' own edge AI processing unit, the BPU2.0, and encompassing both hardware and deep learning software technologies. The design is power-efficient, yet it delivers powerful visual perception computing capabilities that are already in use in L3 and L4 driving automation systems. Thanks to its open design, the platform offers developers the freedom to implement perception and sensing tasks with their preferred algorithms and models.
Please see here for more information on Horizon Robotics and its Horizon Matrix product. The Vision Product of the Year Awards are open to Member companies of the Embedded Vision Alliance and celebrate the innovation of the industry's leading companies that are developing and enabling the next generation of computer vision products. Winning a Vision Product of the Year award recognizes leadership in computer vision as evaluated by independent industry experts.
FEATURED NEWS
Huawei Launches Ascend 910, the World's Most Powerful AI Processor, and MindSpore, an All-scenario AI Computing Framework
At Hot Chips, Intel Pushes "AI Everywhere"
Khronos Releases New NNEF Convertors, Extensions, and Model Zoo
Basler Embedded Vision Solutions Now Also Available for NXP’s i.MX 8 Applications Processor Series – First Products Launched
MediaTek Introduces New Helio G Series Chipsets – Helio G90 & G90T – and HyperEngine Game Technology to Power Incredible Smartphone Gaming Experiences
More News