Embedded Vision Insights: October 7, 2014 Edition


LETTER FROM THE EDITOR

Dear Colleague,

Advanced driver assistance systems (ADAS) are, as I’m sure anyone
who has followed computer vision applications will agree, one of the
fastest-growing areas of this technology space. Roger Lanctot of
market analyst firm Strategy Analytics spoke on ADAS trends and
opportunities at May’s Embedded Vision Alliance Member Meeting, echoing
and updating the data that IHS senior analyst Helena Perslow had
shared with the Alliance membership two years earlier. I also encourage
you to see my mid-2012 interview with Perslow for additional
insights.

Market analyst firms aren’t the only companies paying
attention to ADAS. Alliance member company representatives frequently
present technical tutorials and demonstrations of ADAS capabilities at
the Embedded Vision Summit conferences, most recently earlier this
year. And all of this industry attention certainly seems justified
when you consider the rapid spread of foundation ADAS features from
initial luxury-car designs into today’s higher-volume mainstream and
even entry-level vehicles. Consider, too, that the features provided by
ADAS are ever-expanding, as cars and trucks drive down the road toward
full autonomy.

Most industry discussion of ADAS has centered on applications
that leverage cameras mounted on the vehicle exterior: front- and
rear-collision passive alerts and active avoidance, as well as parking
assistance, inadvertent lane-change warnings and automated correction,
adaptive cruise control, rear- and side-mirror replacement, and other
opportunities. Such functions were explored in an ADAS overview
technical article published by the Alliance earlier this year. That
same article also introduced ADAS capabilities that harness in-vehicle
cameras: monitoring the driver for signs of drowsiness and distraction,
for example. And those same in-car cameras and associated vision
processors are also capable of supporting facial analysis biometrics,
gesture interfaces, and other features, for drivers and passengers
alike.

These concepts are covered in depth in a just-published
technical article, “Smart In-Vehicle Cameras Increase Driver and
Passenger Safety,” authored by the Alliance and four of its member
companies along with Strategy Analytics director Ian Riches, which I
commend to your inspection. After you read it, please let us know which
topics resonate with you, as well as what additional topics you’d like
to see covered. And while you’re on the Alliance website, make sure you
check out all the other great new content published there in recent
weeks. Thanks as always for your support of the Embedded Vision
Alliance, and for your interest in and contributions to embedded vision
technologies, products and applications. Please don’t hesitate to let
me know how the Alliance can better serve your needs.

Brian Dipert
Editor-In-Chief, Embedded Vision Alliance

FEATURED VIDEOS


“Feature Detection: How It Works, When to
Use It, and a Sample Implementation,” an Embedded Vision Summit
Technical Presentation from Marco Jacobs of videantis
Marco Jacobs, Vice President of Marketing
at videantis, presents the “Feature Detection: How It Works,
When to Use It, and a Sample Implementation” tutorial within the
“Object and Feature Detection” technical session at the October 2013
Embedded Vision Summit East. Feature detection and tracking are key
components of many computer vision applications. In this talk, Jacobs
gives an overview of commonly used feature detectors, and explains in
detail how the Harris feature detector works. He then presents a
pyramidal implementation of the Lucas-Kanade algorithm to track these
features across a series of images. Finally, he shows how videantis has
optimized and parallelized the OpenCV versions of these algorithms,
resulting in a real-time, power efficient embedded implementation on a
videantis unified video/vision processor.
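The Harris detector Jacobs describes scores each pixel with the response R = det(M) − k·trace(M)², where M is the second-moment matrix of image gradients summed over a local window. As a rough, self-contained NumPy sketch of that response computation (an illustration of the textbook algorithm only, not videantis’ optimized implementation; the box window stands in for the Gaussian weighting often used in practice):

```python
import numpy as np

def harris_response(img, window=5, k=0.04):
    """Harris corner response R = det(M) - k * trace(M)^2, where M is
    the second-moment (structure) matrix summed over a local window."""
    img = np.asarray(img, dtype=float)
    Iy, Ix = np.gradient(img)                   # gradients along rows, cols
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy   # gradient products

    def box_sum(a, w):
        # Sum of 'a' over each w-by-w window, via a 2-D prefix sum.
        pad = w // 2
        a = np.pad(a, pad)                      # zero-pad borders
        s = np.cumsum(np.cumsum(a, axis=0), axis=1)
        s = np.pad(s, ((1, 0), (1, 0)))         # prefix sum with leading zeros
        return s[w:, w:] - s[:-w, w:] - s[w:, :-w] + s[:-w, :-w]

    Sxx, Syy, Sxy = (box_sum(a, window) for a in (Ixx, Iyy, Ixy))
    det = Sxx * Syy - Sxy * Sxy
    trace = Sxx + Syy
    return det - k * trace * trace
```

Corners yield R > 0, edges R < 0, and flat regions R ≈ 0; a full detector such as OpenCV’s `cornerHarris` adds Gaussian windowing, thresholding, and non-maximum suppression on top of this response map.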


“Forecasting
Consumer Adoption of Embedded Vision in Mobile Devices in 2014,” an
Embedded Vision Alliance Member Meeting Presentation from John Feland
of Argus Insights
John Feland, Ph.D., CEO of Argus
Insights, delivers the market trends presentation, “Forecasting
Consumer Adoption of Embedded Vision in Mobile Devices in 2014,” at the
March 2014 Embedded Vision Alliance Member Meeting.


More Videos

FEATURED ARTICLES

Accelerate Machine Learning with the cuDNN Deep Neural Network Library (NVIDIA)
Machine Learning (ML) has its origins in
the field of Artificial Intelligence, which started out decades ago
with the lofty goals of creating a computer that could do any work a
human can do. While attaining that goal still appears to be in the
distant future, many useful tools have been developed and successfully
applied to a wide variety of problems. In fact, ML has now become a
pervasive technology, underlying many modern applications. There is a
wide variety of algorithms and processes for implementing ML systems.
The hottest area in ML today, however, is the area of Deep Neural
Networks (DNNs). More

Complex Trends and Challenges in Designing ADAS Systems (Altera)
In the race to develop reliable and
cost-effective advanced driver assistance systems (ADAS), designers
face challenges in integrating functionality, developing scalable
platforms, and designing systems that are robust enough to work in varied
operating conditions. The traditional approach of adding a discrete electronic
control unit (ECU) for each ADAS function, such as lane departure
warning and forward collision warning, is not scalable, and simple
microcontrollers (MCUs) do not have the processing horsepower to
handle the various sensor inputs from multiple radars, cameras, laser
scanners, ultrasonic sensors, on-board telemetry, and vehicle-to-vehicle
or vehicle-to-infrastructure (V2X) communications. In this article, we
will look at the various trends impacting system-level design for ADAS
and how a new approach is necessary to deliver these
systems into production applications. More


More Articles

FEATURED NEWS

ARM Supercharges MCU Market with High Performance Cortex-M7 Processor

ON Semiconductor Expands Options for High-Quality CCD Image Capture

Imagination Drives Highly-Advanced PowerVR Series6 Architecture into All Key Entry-Level Mobile and Consumer Segments

More News

 


Contact

Address

Berkeley Design Technology, Inc.
PO Box #4446
Walnut Creek, CA 94596

Phone
Phone: +1 (925) 954-1411