Embedded Vision Insights: May 31, 2018 Edition

LETTER FROM THE EDITOR

Dear Colleague,

This year’s Embedded Vision Summit, which took place last week, was
the best yet: more than 1,000 attendees, more than 90 speakers across
six presentation tracks, and more than 50 exhibitors demonstrating
more than 100 vision technologies and products. A downloadable slide
set of the presentations from the Summit is now available on the
Embedded Vision Alliance website. Demo and presentation videos from
the event will follow in the coming weeks. For timely notification of
the publication of this and other new website content, subscribe to
the Alliance’s RSS feed and its Facebook, Google+, LinkedIn (company
page and group), and Twitter social media channels.

Special congratulations go to the winners of the premier Vision
Product of the Year Awards:

  • Best Processor: AImotive aiWare
  • Best Camera: Intel RealSense Depth Cameras: D415 / D435
  • Best Software or Algorithm: MathWorks GPU Coder
  • Best Automotive Solution: Algolux CANA
  • Best AI Technology: Morpho SoftNeuro
  • Best Cloud Technology: Xilinx Machine Learning Suite
  • Best Developer Tools: AImotive aiSim
  • Best End Product: 8tree dentCHECK

Mark your calendar for next year’s Summit,
scheduled for May 20-23, 2019, again at the Santa Clara (California)
Convention Center!

Brian Dipert
Editor-In-Chief, Embedded Vision Alliance

DEEP LEARNING
IMPLEMENTATIONS

Designing Deep Neural Network Algorithms for Embedded Devices
Intel
Deep neural networks have shown
state-of-the-art results in a variety of vision tasks. Although
accurate, most of these deep neural networks are computationally
intensive, creating challenges for embedded devices. In this talk,
Minje Park, Software Engineering Manager at Intel, provides several
ideas and insights on how to design deep neural network architectures
small enough for embedded deployment. He also explores how to further
reduce the processing load by adopting simple but effective compression
and quantization techniques. Park shows a set of practical
applications, such as face recognition, facial attribute
classification, and person detection, that can run in near real time
without a powerful GPU or dedicated DSP and without sacrificing
accuracy.
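
To make the quantization idea concrete, here is a minimal sketch in
Python using PyTorch's post-training dynamic quantization, one of the
simplest techniques of the kind Park describes. The toy model and
layer sizes are illustrative assumptions, not details from the talk.

    # Minimal sketch: post-training dynamic quantization in PyTorch.
    # The network below is a stand-in, not the model from the talk.
    import torch
    import torch.nn as nn

    # A small fully connected network standing in for an embedding head.
    model = nn.Sequential(
        nn.Linear(128, 256),
        nn.ReLU(),
        nn.Linear(256, 64),
    )
    model.eval()

    # Store Linear-layer weights as 8-bit integers; activations are
    # quantized on the fly at inference time. This shrinks the model
    # roughly 4x and speeds up CPU inference, typically with little
    # accuracy loss and no retraining.
    quantized = torch.quantization.quantize_dynamic(
        model, {nn.Linear}, dtype=torch.qint8
    )

    x = torch.randn(1, 128)
    print(quantized(x).shape)  # torch.Size([1, 64])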


Deep Learning and Vision Algorithm Development in MATLAB
Targeting Embedded GPUs
MathWorks
In this presentation from Avinash
Nehemiah, Product Marketing Manager for Computer Vision at MathWorks,
and Girish Venkataramani, MathWorks’ Product Development Manager,
you’ll learn how to adopt a MATLAB-centric workflow to design, verify
and deploy your computer vision and deep learning applications onto
embedded NVIDIA Tegra-based platforms including Jetson TK1/TX1 and
DrivePX boards. The workflow starts with algorithm design in MATLAB,
which enjoys universal appeal among engineers and scientists because of
its expressive power and ease-of-use. The algorithm may employ deep
learning networks augmented with traditional computer vision techniques
and can be tested and verified within MATLAB. Next, a compiler
auto-generates portable and optimized CUDA code from the MATLAB
algorithm, which is then cross-compiled and deployed to the Tegra
board. The workflow affords on-board real-time prototyping and
verification controlled through MATLAB. Examples of common computer
vision algorithms and deep learning networks are used to describe this
workflow, and their performance benchmarks are presented.

ADAS AND AUTONOMOUS
VEHICLES

A Fast Object Detector for ADAS using Deep Learning
Panasonic
Object detection has been one of the most
important research areas in computer vision for decades. Recently, deep
neural networks (DNNs) have led to significant improvement in several
machine learning domains, including computer vision, achieving
state-of-the-art performance thanks to their theoretically proven
modeling and generalization capabilities. However, it is still
challenging to deploy such DNNs on embedded systems, for applications
such as advanced driver assistance systems (ADAS), where computation
power is limited. Minyoung Kim, Senior Research Engineer at Panasonic
Silicon Valley Laboratory, and her team focus on reducing the size of
the network and its required computations, thereby building a fast,
real-time object detection system. They propose a fully convolutional
neural network that can achieve at least 45 fps on 640×480 frames with
competitive performance. With this network, there is no proposal
generation step, which can cause a speed bottleneck; instead, a single
forward propagation of the network approximates the locations of
objects directly.
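
As an illustration of the proposal-free design, here is a minimal
sketch of a single-shot detection head, written in Python with PyTorch
as an assumed framework; the channel counts, class count, and anchor
layout are illustrative, not Panasonic's actual network.

    # Minimal sketch: a fully convolutional, proposal-free detection
    # head. One forward pass maps a feature map directly to per-cell
    # class scores and box offsets; there is no region-proposal step.
    import torch
    import torch.nn as nn

    class SingleShotHead(nn.Module):
        def __init__(self, in_channels=256, num_classes=3, num_anchors=4):
            super().__init__()
            # Class scores: num_anchors * num_classes channels per cell.
            self.cls_head = nn.Conv2d(in_channels,
                                      num_anchors * num_classes,
                                      kernel_size=3, padding=1)
            # Box regression: 4 offsets (dx, dy, dw, dh) per anchor.
            self.box_head = nn.Conv2d(in_channels, num_anchors * 4,
                                      kernel_size=3, padding=1)

        def forward(self, feats):
            # A single forward propagation predicts objects directly
            # at every spatial location of the feature map.
            return self.cls_head(feats), self.box_head(feats)

    # Feature map for a 640x480 input downsampled 16x -> a 40x30 grid.
    feats = torch.randn(1, 256, 30, 40)
    cls_scores, box_offsets = SingleShotHead()(feats)
    print(cls_scores.shape)   # torch.Size([1, 12, 30, 40])
    print(box_offsets.shape)  # torch.Size([1, 16, 30, 40])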


The Path from ADAS to Autonomy
Strategy Analytics
In this presentation, Roger Lanctot,
Director of Automotive Connected Mobility at Strategy Analytics, shares
his unique perspective on what the industry can realistically expect to
achieve with ADAS and autonomous vehicles, using computer vision and
other technologies.

UPCOMING INDUSTRY
EVENTS

Bay Area Computer Vision and Deep Learning Meetup Group: June 6, 2018, 6:00 pm PT, Sunnyvale,
California

Embedded Vision Summit: May 20-23, 2019, Santa Clara, California

More Events


FEATURED NEWS

Ultra-Low Power Lattice sensAI Leads Mass Market Enablement of
Artificial Intelligence in Edge Devices

Intel Vision Intelligence Transforms
IoT Industry

Synopsys Introduces Industry’s First ASIL D Ready Embedded Vision
Processor IP for ADAS Applications and Self-Driving Vehicles

Embedded Vision Alliance Announces 2018
Vision Tank Winners

Khronos Group and Au-Zone Technologies to Develop Open Source
TensorFlow and Caffe2 Converters for NNEF

More News


Contact

Address

Berkeley Design Technology, Inc.
PO Box #4446
Walnut Creek, CA 94596

Phone: +1 (925) 954-1411