Processors


May 2014 Embedded Vision Summit Proceedings

The Embedded Vision Summit was held on May 29, 2014 in Santa Clara, California, as a technical educational forum for product creators interested in incorporating visual intelligence into electronic systems and software. The program for the event included the following presentations, whose PDF-formatted foilsets are available for download as a…



Embedded Vision: Enabling Smarter Mobile Apps and Devices

For decades, computer vision technology was found mainly in university laboratories and a few niche applications. Today, virtually every tablet and smartphone is capable of sophisticated vision functions such as hand gesture recognition, face recognition, gaze tracking, and object recognition. These capabilities are being used to enable new types of applications, user interfaces, and use…


March 2014 Embedded Vision Alliance Member Meeting Presentation: “Vision-Based Navigation Applications: From Planetary Exploration to Consumer Devices,” Larry Matthies, NASA

Larry Matthies, Supervisor of the Computer Vision Group at NASA's Jet Propulsion Laboratory, delivers the technology presentation, "Vision-Based Navigation Applications: From Planetary Exploration to Consumer Devices," at the March 2014 Embedded Vision Alliance Member Meeting. Dr. Matthies is a Senior Research Scientist at JPL and is the Supervisor of the Computer Vision Group in the…



Augmented Reality: A Compelling Mobile Embedded Vision Opportunity

This article was originally published at Electronic Engineering Journal. It is reprinted here with the permission of TechFocus Media. Although augmented reality was first proposed and crudely demonstrated nearly fifty years ago, its implementation was until recently only possible on bulky and expensive computers. Nowadays, however, fast, low-power, and cost-effective processors and high…



Real-Time Traffic Sign Recognition on Mobile Processors

There is a growing need for fast and power-efficient computer vision on embedded devices. This session will focus on the computer vision capabilities available to ADAS developers on embedded platforms, covering the OpenCV CUDA implementation and the new computer vision standard, OpenVX. In addition, Itseez's traffic sign detection will be showcased. The algorithm is capable of…
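Production detectors like Itseez's are far beyond a short example, but a common first stage of sign detection, isolating strongly red pixels as candidates for red-rimmed signs, can be sketched in pure Python. The thresholds below are illustrative assumptions, not the Itseez algorithm:

```python
def red_candidate_mask(image):
    """image: list of rows of (r, g, b) tuples.

    Returns a boolean mask marking strongly red pixels -- a crude
    candidate stage for red-rimmed traffic signs. The thresholds
    (r > 150, r dominating g and b by 1.5x) are illustrative guesses.
    """
    return [[(r > 150 and r > 1.5 * g and r > 1.5 * b) for (r, g, b) in row]
            for row in image]

# A tiny 2x2 test image: one bright-red pixel, one gray, one green, one dark-red.
img = [[(200, 40, 40), (90, 90, 90)],
       [(10, 200, 10), (180, 60, 50)]]
mask = red_candidate_mask(img)  # [[True, False], [False, True]]
```

A real pipeline would follow this with connected-component grouping, shape checks, and a classifier; this sketch only shows why a cheap per-pixel stage helps narrow the search space on embedded hardware.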



Getting Started With GPU-Accelerated Computer Vision Using OpenCV and CUDA

OpenCV is a library, free for both research and commercial use, that includes hundreds of optimized computer vision and image processing algorithms. NVIDIA and Itseez have optimized many OpenCV functions using CUDA for desktop machines equipped with NVIDIA GPUs. These functions run 5 to 100 times faster in wall-clock time than their CPU counterparts. Anatoly…



Improved Vision Processors, Sensors Enable Proliferation of New and Enhanced ADAS Functions

This article was originally published at John Day's Automotive Electronics News. It is reprinted here with the permission of JHDay Communications. Thanks to the emergence of increasingly capable and cost-effective processors, image sensors, memories, and other semiconductor devices, along with robust algorithms, it's now practical to incorporate computer vision into a wide range of embedded…


October 2013 Embedded Vision Summit Technical Presentation: “Implementing Real-Time Hyperspectral Imaging,” Kalyanramu Vemishetty, National Instruments

Kalyanramu Vemishetty, Senior Systems Engineer at National Instruments, presents the "Implementing Real-Time Hyperspectral Imaging" tutorial within the "Front-End Image Processing for Vision Applications" technical session at the October 2013 Embedded Vision Summit East. Hyperspectral imaging enables vision systems to use many spectral bands rather than just the typical red, green, and blue bands. This can be…
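The core idea, doing arithmetic across many narrow spectral bands instead of just R, G, and B, can be illustrated with a toy NumPy cube. The band indices and reflectance values below are assumptions for illustration, not taken from the presentation:

```python
import numpy as np

def band_ratio_index(cube, band_a, band_b):
    """Normalized difference between two spectral bands of a
    hyperspectral cube shaped (height, width, bands) -- the same
    form as the NDVI vegetation index, (NIR - red) / (NIR + red).
    """
    a = cube[:, :, band_a].astype(float)
    b = cube[:, :, band_b].astype(float)
    return (a - b) / (a + b + 1e-9)  # epsilon guards divide-by-zero

# Toy 2x2 scene with 10 bands; bands 7 and 3 stand in for
# near-infrared and red (hypothetical band assignments).
cube = np.zeros((2, 2, 10))
cube[:, :, 7] = 0.8  # pretend NIR reflectance
cube[:, :, 3] = 0.2  # pretend red reflectance
ndvi = band_ratio_index(cube, 7, 3)  # ~0.6 everywhere
```

A real-time system does this per pixel across hundreds of bands per frame, which is why the talk pairs hyperspectral imaging with front-end processing hardware.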


Stereo Vision for 3D Depth Perception

Jeff Bier, founder of the Embedded Vision Alliance, interviews Goksel Dedeoglu, Manager of Embedded Vision R&D at Texas Instruments. Beginning with a hands-on demonstration of TI's real-time stereo vision prototype on the C6678 Keystone DSP, Jeff and Goksel touch upon various trade-offs in designing a stereo depth camera: the separation between the sensors, image resolution, field-of-view, and finally,…
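The trade-offs discussed all feed the basic pinhole-stereo relation Z = f·B/d: a wider sensor separation (baseline B) or a longer focal length f (in pixels) produces larger disparities d, and therefore finer depth resolution, at the cost of a narrower usable range. A minimal sketch with assumed camera parameters:

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Pinhole stereo model: depth Z = f * B / d.

    disparity_px -- pixel offset of a feature between left/right images
    focal_px     -- focal length expressed in pixels
    baseline_m   -- separation between the two sensors, in meters
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# With an assumed 700-pixel focal length and 10 cm baseline, a
# 35-pixel disparity corresponds to 700 * 0.10 / 35 = 2.0 m of depth.
z = depth_from_disparity(35.0, focal_px=700.0, baseline_m=0.10)
```

Because depth varies as 1/d, a one-pixel disparity error matters far more for distant objects than near ones, which is exactly why baseline, resolution, and field-of-view must be traded off against the application's working range.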



Contact

Address

Berkeley Design Technology, Inc.
PO Box #4446
Walnut Creek, CA 94596

Phone: +1 (925) 954-1411