Dear Colleague,
Happy 2016! I hope that the holiday weeks have been relaxing
and rejuvenating for you, and I wish you the very best professionally
and personally in the new year.
Recently, we have greatly expanded the amount of information
on the Alliance website regarding the upcoming Embedded Vision Summit.
This event, an educational forum for product creators interested in
incorporating visual intelligence into electronic systems and software,
will take place May 2-4, 2016 at the Santa Clara (California)
Convention Center. Register now and receive a 15% Early Bird discount using promotional
code 01EVI! And keep a close eye on the event
highlights page for news on additional Summit activities such as
the Vision
Tank, a deep learning- and vision-based product competition. The
deadline for Vision Tank entries is March
1, so start brainstorming; application submission information is
available on the competition
page.
I’d also like to take this opportunity to remind you about next
month’s full-day tutorial focused on convolutional neural networks
for vision and the Caffe framework for deep learning. Presented by the
primary Caffe developers from U.C. Berkeley’s Vision and Learning
Center and organized by the Embedded Vision Alliance and BDTI, the
tutorial will take place February 22nd from 9AM to 5:30PM at the Hyatt
Regency Hotel in Santa Clara, California. A discounted Early Bird registration rate of $720 is available
until January 22, so don't delay in signing up! For more
tutorial details, including online registration, please visit the event
page.
While you’re on the Alliance website, make sure to also check
out all the other great recently published content, such as an article
on vision in drones from CEVA, and a presentation
video on open standard APIs for vision from Khronos. Thanks as
always for your support of the Embedded Vision Alliance, and for your
interest in and contributions to embedded vision technologies, products
and applications. I
welcome your suggestions on what the Alliance can do to better
serve your needs.
Brian Dipert
Editor-In-Chief, Embedded Vision Alliance
“Computational Photography: An Introduction and Highlights of
Recent Research,” a Presentation from
the University of Wisconsin
Professor Li Zhang of the University of
Wisconsin presents an introduction to computational photography at the
December 2013 Embedded Vision Alliance Member Meeting.
Introduction to OpenCV for Tegra
OpenCV for Tegra is a highly optimized
port of the OpenCV library for NVIDIA’s Tegra processors. It runs on Android
and includes roughly 2,500 image processing and computer vision functions. In this
video, NVIDIA introduces you to OpenCV for Tegra, its functionality,
and some things you can do with it.
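To give a flavor of the kind of functionality the library exposes, here is a minimal sketch of standard OpenCV usage in C++ (not specific to the Tegra port); it loads an image, converts it to grayscale, and runs Canny edge detection. The file names and threshold values below are placeholders.

    #include <opencv2/opencv.hpp>
    #include <iostream>

    int main() {
        // Load an image from disk ("input.jpg" is a placeholder file name).
        cv::Mat img = cv::imread("input.jpg");
        if (img.empty()) {
            std::cerr << "Could not read input.jpg\n";
            return 1;
        }
        // Convert to grayscale, then run Canny edge detection.
        cv::Mat gray, edges;
        cv::cvtColor(img, gray, cv::COLOR_BGR2GRAY);
        cv::Canny(gray, edges, 50, 150);  // thresholds chosen for illustration only
        // Write the edge map back to disk.
        cv::imwrite("edges.jpg", edges);
        return 0;
    }

The appeal of an optimized port such as OpenCV for Tegra is that code along these lines can remain unchanged while the underlying routines are tuned for the target hardware.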
More Videos
A Primer on Mobile Systems Used for Heterogeneous Computing
In the mobile and embedded market, the
design constraints on electronic products can seem tight and even
contradictory: the market demands higher performance yet lower power
consumption, and lower cost yet shorter time-to-market.
These constraints have created a trend toward more specialized hardware
designs tailored to particular applications; when each task is well matched
to a functional unit, fewer transistors are wasted and power efficiency
improves. As a result, application processors have become increasingly heterogeneous
heterogeneous over time, integrating multiple components into a single
SoC. More
Driving Automotive Vision Applications with the Right DSP
We’re not yet riding in driverless cars,
but today’s vehicles are already doing more for us than Henry Ford
could have imagined, suggests Cadence. From pedestrian detection to
driver monitoring, parking assistance and infotainment systems, these
features are making our rides safer, smoother, and more interesting.
And they all have at least one thing in common: their reliance on
image and video data. Indeed, the single biggest driver of modern
electronics is vision and image processing. And automotive applications
like ADAS are among the highest profile users of this type of
processing. More
More Articles