Embedded Vision Insights: September 27, 2016 Edition



LETTER FROM THE EDITOR

Dear Colleague,

In the coming months, Embedded Vision Alliance member companies VeriSilicon and Xilinx will deliver several free, hour-long webinars on various computer vision topics, in partnership with the Alliance.

  • On October 19 at 10 am Pacific Time, VeriSilicon will present "Learning at the Speed of Sight," which discusses how the company's line of vision and image processing IP cores addresses various computer vision design challenges.
  • On November 2, also at 10 am PT, Xilinx will deliver "Vision with Precision: Medical Imaging," which covers the critical challenges facing developers of advanced medical imaging systems.
  • And on December 6, again at 10 am PT, Xilinx will present "Vision with Precision: Augmented Reality," which discusses a number of augmented reality use cases outside of the more commonly known consumer-oriented examples.

Visit the event pages linked above for more information on these free webinars and to register online. And keep an eye on the webinars page at the Alliance website for more webinars to come.

Brian Dipert
Editor-In-Chief, Embedded Vision Alliance

FEATURED VIDEOS

"Enabling Ubiquitous Visual Intelligence Through Deep Learning," a Keynote Presentation from Dr. Ren WuBaidu
Deep learning techniques have been making headlines lately in computer vision research. Using techniques inspired by the human brain, deep learning employs massive replication of simple algorithms which learn to distinguish objects through training on vast numbers of examples. Neural networks trained in this way are gaining the ability to recognize objects as accurately as humans. Some experts believe that deep learning will transform the field of vision, enabling the widespread deployment of visual intelligence in many types of systems and applications. But there are many practical problems to be solved before this goal can be reached. For example, how can we create the massive sets of real-world images required to train neural networks? And given their massive computational requirements, how can we deploy neural networks into applications like mobile and wearable devices with tight cost and power consumption constraints? In this talk, neural network pioneer Dr. Ren Wu shares an insider’s perspective on these and other critical questions related to the practical use of neural networks for vision.

"Overcoming Barriers to Consumer Adoption of Vision-enabled Products and Services," a Presentation from Argus InsightsArgus Insights
Visual intelligence is being deployed in a growing range of consumer products, including smartphones, tablets, security cameras, laptops, and even smartwatches. The demos are always cool. But does vision work for regular consumers? Do consumers see vision as a value add or just another feature to be ignored? In this talk, John Feland, CEO and Founder of Argus Insights, investigates the best and worst of consumer product embedded vision implementations as told by real consumers, based on Argus Insights’ extensive portfolio of consumer data. Feland examines where current products fall short of consumers’ needs, and illuminates successful implementations to show how their vision capabilities create value in the lives of consumers. Case studies include examples from Dropcam, Intel, HTC, and DJI.

More Videos

FEATURED ARTICLES

Vision Processing Opportunities in Drones
UAVs (unmanned aerial vehicles), commonly known as drones, are a rapidly growing market and increasingly leverage embedded vision technology for digital video stabilization, autonomous navigation, and terrain analysis, among other functions. This article reviews drone market sizes and trends, and then discusses embedded vision technology applications in drones, such as image quality optimization, autonomous navigation, collision avoidance, terrain analysis, and subject tracking. More

Going Deep: Why Depth Sensing Will Proliferate
"I believe," writes Embedded Vision Alliance founder Jeff Bier, "that embedded vision – enabling devices to understand the world visually – will be a game-changer for many industries. For humans, vision enables many diverse capabilities: reading your spouse’s facial expression, navigating your car through a parking garage, or threading a needle. Similarly, embedded vision is now enabling all sorts of devices (from vacuum cleaning robots to cars) to be more autonomous, easier to use, safer, more efficient and more capable. When we think about embedded vision (or, more generically, computer vision), we typically think about algorithms for identifying objects: a car, a curb, a pedestrian, etc. And, to be sure, identifying objects is an important part of visual intelligence. But it’s only one part. Particularly for devices that interact with the physical world, it’s important to know not only what objects are in the vicinity, but also where they are." More

More Articles

FEATURED NEWS

ON Semiconductor Expands Breadth of Options for Low-Light Industrial Imaging Applications

NVIDIA Unveils Palm-Sized, Energy-Efficient AI Computer for Self-Driving Cars

Make Amazing Things Happen in IoT and Entrepreneurship with Intel Joule

SoftKinetic Pushes Boundaries of 3D Vision for AR, VR and Automotive Environments

Morpho's MovieSolid and Morpho Video WDR Now Available on Cadence Tensilica Imaging/Vision DSPs

More News

UPCOMING INDUSTRY EVENTS

VeriSilicon Webinar – Learning at the Speed of Sight: October 19, 2016, 10 am PT

Xilinx Webinar Series – Vision with Precision: Medical Imaging: November 2, 2016, 10 am PT

Xilinx Webinar Series – Vision with Precision: Augmented Reality: December 6, 2016, 10 am PT

Embedded Vision Summit: May 1-3, 2017, Santa Clara, California

More Events
