Sensors and Cameras

“Using Inertial Sensors and Sensor Fusion to Enhance the Capabilities of Embedded Vision Systems,” a Presentation from Sensor Platforms

Kevin Shaw, Chief Technology Officer at Sensor Platforms, delivers the presentation "Using Inertial Sensors and Sensor Fusion to Enhance the Capabilities of Embedded Vision Systems" at the May 2014 Embedded Vision Alliance Member Meeting.

“Project Tango: Integrating 3D Vision Into Smartphones,” a Presentation From Google

Johnny Lee, Technical Program Lead at Google, delivers the presentation "Google Project Tango: Integrating 3D Vision Into Smartphones" at the May 2014 Embedded Vision Alliance Member Meeting. Project Tango is an effort to harvest research in computer vision and robotics and concentrate that technology into a mobile platform. It uses vision and sensor fusion to…

May 2014 Embedded Vision Summit Technical Presentation: “Vision-Based Gesture User Interfaces,” Francis MacDougall, Qualcomm

Francis MacDougall, Senior Director of Technology at Qualcomm, presents the "Vision-Based Gesture User Interfaces" tutorial at the May 2014 Embedded Vision Summit. The means by which we interact with the machines around us is undergoing a fundamental transformation. While we may still sometimes need to push buttons, touch displays and trackpads, and raise our voices…

“Self-Driving Cars,” an Embedded Vision Summit Keynote Presentation from Google

Nathaniel Fairfield, Technical Lead at Google, presents the "Self-Driving Cars" keynote at the May 2014 Embedded Vision Summit. Self-driving cars have the potential to transform how we move: they promise to make us safer, give freedom to millions of people who can't drive, and give people back their time. The Google Self-Driving Car project was…

May 2014 Embedded Vision Summit Proceedings

The Embedded Vision Summit was held on May 29, 2014 in Santa Clara, California, as a technical educational forum for product creators interested in incorporating visual intelligence into electronic systems and software. The program for the event included the following presentations, whose PDF-formatted foilsets are available for download…

Embedded Vision: Enabling Smarter Mobile Apps and Devices

For decades, computer vision technology was found mainly in university laboratories and a few niche applications. Today, virtually every tablet and smartphone is capable of sophisticated vision functions such as hand gesture recognition, face recognition, gaze tracking, and object recognition. These capabilities are being used to enable new types of applications, user interfaces, and use…

March 2014 Embedded Vision Alliance Member Meeting Presentation: “Vision-Based Navigation Applications: From Planetary Exploration to Consumer Devices,” Larry Matthies, NASA

Larry Matthies, Supervisor of the Computer Vision Group at NASA's Jet Propulsion Laboratory, delivers the technology presentation "Vision-Based Navigation Applications: From Planetary Exploration to Consumer Devices" at the March 2014 Embedded Vision Alliance Member Meeting. Dr. Matthies is a Senior Research Scientist at JPL and is the Supervisor of the Computer Vision Group in the…

Augmented Reality: A Compelling Mobile Embedded Vision Opportunity

This article was originally published at Electronic Engineering Journal. It is reprinted here with the permission of TechFocus Media. Although augmented reality was first proposed and crudely demonstrated nearly fifty years ago, its implementation was until recently only possible on bulky and expensive computers. Nowadays, however, the fast, low-power and cost-effective processors and high…

Improved Vision Processors, Sensors Enable Proliferation of New and Enhanced ADAS Functions

This article was originally published at John Day's Automotive Electronics News. It is reprinted here with the permission of JHDay Communications. Thanks to the emergence of increasingly capable and cost-effective processors, image sensors, memories and other semiconductor devices, along with robust algorithms, it's now practical to incorporate computer vision into a wide range of embedded…

“Computational Photography: An Introduction and Highlights of Recent Research,” a Presentation from the University of Wisconsin

Professor Li Zhang of the University of Wisconsin presents an introduction to computational photography at the December 2013 Embedded Vision Alliance Member Meeting.
