
May 2014 Embedded Vision Summit Technical Presentation: “Evolving Algorithmic Requirements for Recognition and Classification in Augmented Reality,” Simon Morris, CogniVue

Simon Morris, CEO of CogniVue, presents the "Evolving Algorithmic Requirements for Recognition and Classification in Augmented Reality" tutorial at the May 2014 Embedded Vision Summit. Augmented reality (AR) applications are based on accurately computing a camera's 6 degrees of freedom (6DOF) position in 3-dimensional space, also known as its "pose". In vision-based approaches to AR, […]
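To make the "6DOF pose" in the excerpt concrete: a camera pose combines three rotation angles and a 3D translation, commonly packed into a 4x4 homogeneous transform. The following is an illustrative sketch of that representation (my own, not taken from the talk):

```python
import numpy as np

def pose_matrix(roll, pitch, yaw, tx, ty, tz):
    """Build a 4x4 homogeneous transform from a 6DOF pose:
    three rotation angles (radians) plus a 3D translation."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    # Rotation composed as Rz(yaw) @ Ry(pitch) @ Rx(roll)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx
    T[:3, 3] = [tx, ty, tz]
    return T

# A camera at (1, 0, 2) rotated 90 degrees about the vertical axis
T = pose_matrix(0.0, 0.0, np.pi / 2, 1.0, 0.0, 2.0)
# Map a point from the camera frame into the world frame
point_world = T @ np.array([0.0, 0.0, 1.0, 1.0])
```

Vision-based AR estimates this transform each frame by matching image features to known 3D structure; the matrix form makes composing and inverting poses a single matrix operation.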



Embedded Vision Insights: June 17, 2014 Edition

In this edition of Embedded Vision Insights: Embedded Vision Summit West content; 3D stereo vision; training resources; embedded vision in the news. LETTER FROM THE EDITOR: Dear Colleague, videos of presentations from the recent Embedded Vision Summit West have begun to appear on the Alliance website. We’ve just published the two outstanding keynotes delivered that […]


May 2014 Embedded Vision Summit Technical Presentation: “Vision-Based Gesture User Interfaces,” Francis MacDougall, Qualcomm

Francis MacDougall, Senior Director of Technology at Qualcomm, presents the "Vision-Based Gesture User Interfaces" tutorial at the May 2014 Embedded Vision Summit. The means by which we interact with the machines around us is undergoing a fundamental transformation. While we may still sometimes need to push buttons, touch displays and trackpads, and raise our voices, […]


May 2014 Embedded Vision Summit Technical Presentation: “Programming Novel Recognition Algorithms on Heterogeneous Architectures,” Kees Vissers, Xilinx

Kees Vissers, Distinguished Engineer at Xilinx, presents the "Programming Novel Recognition Algorithms on Heterogeneous Architectures" tutorial at the May 2014 Embedded Vision Summit. The combination of heterogeneous systems, consisting of processors and FPGAs, is a high-performance implementation platform for image and vision processing. One of the significant hurdles in leveraging the compute potential was the […]


“Convolutional Neural Networks,” an Embedded Vision Summit Keynote Presentation from Facebook

Yann LeCun, Director of AI Research at Facebook and Silver Professor of Data Science, Computer Science, Neural Science, and Electrical Engineering at New York University, presents the "Convolutional Networks: Unleashing the Potential of Machine Learning for Robust Perception Systems" keynote at the May 2014 Embedded Vision Summit. Convolutional Networks (ConvNets) have become the dominant method […]
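The core building block of the ConvNets named in the excerpt is a small filter slid across the image. A minimal illustrative sketch of that operation (my own, not from the keynote; real layers add many filters, nonlinearities, and pooling):

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation, the core operation of a ConvNet layer."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            # Dot product of the kernel with the window under it
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
    return out

# A horizontal-difference kernel responds where intensity changes left to right
edge = np.array([[1.0, -1.0]])
img = np.array([[0.0, 0.0, 1.0, 1.0],
                [0.0, 0.0, 1.0, 1.0]])
response = conv2d(img, edge)
```

In a trained ConvNet the kernel weights are learned rather than hand-designed, which is what makes the approach robust across recognition tasks.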


May 2014 Embedded Vision Summit Technical Presentation: “Fast 3D Object Recognition in Real-World Environments,” Ken Lee, VanGogh Imaging

Ken Lee, Founder of VanGogh Imaging, presents the "Fast 3D Object Recognition in Real-World Environments" tutorial at the May 2014 Embedded Vision Summit. Real-time 3D object recognition can be computationally intensive and difficult to implement when there are many other objects (i.e., clutter) around the target. There are several approaches to deal with […]


“Self-Driving Cars,” an Embedded Vision Summit Keynote Presentation from Google

Nathaniel Fairfield, Technical Lead at Google, presents the "Self-Driving Cars" keynote at the May 2014 Embedded Vision Summit. Self-driving cars have the potential to transform how we move: they promise to make us safer, give freedom to millions of people who can't drive, and give people back their time. The Google Self-Driving Car project was […]


May 2014 Embedded Vision Summit Technical Presentation: “Taming the Beast: Performance and Energy Optimization Across Embedded Feature Detection and Tracking,” Chris Rowen, Cadence

Chris Rowen, Fellow at Cadence, presents the "Taming the Beast: Performance and Energy Optimization Across Embedded Feature Detection and Tracking" tutorial at the May 2014 Embedded Vision Summit. This presentation looks at a cross-section of advanced feature detectors, and considers the algorithm, bit precision, arithmetic primitives and implementation optimizations that yield high pixel processing rates, […]
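As a hedged illustration of the bit-precision trade-off the excerpt mentions (a generic sketch, not code from the talk): many embedded pixel pipelines replace floating-point gradient math with narrow integer arithmetic, widening only enough to avoid overflow and saturating on the way back to 8 bits.

```python
import numpy as np

def gradient_magnitude_q8(image_u8):
    """Approximate gradient magnitude |gx| + |gy| in integer arithmetic,
    a common precision/energy trade-off on embedded pixel pipelines."""
    img = image_u8.astype(np.int16)                    # widen to avoid overflow
    gx = np.abs(img[:, 1:] - img[:, :-1])[:-1, :]      # horizontal difference
    gy = np.abs(img[1:, :] - img[:-1, :])[:, :-1]      # vertical difference
    return np.clip(gx + gy, 0, 255).astype(np.uint8)   # saturate back to 8 bits

# A vertical step edge produces a saturated response along the boundary
img = np.zeros((4, 4), dtype=np.uint8)
img[:, 2:] = 255
mag = gradient_magnitude_q8(img)
```

Keeping the whole pipeline in 8/16-bit integers maps directly onto SIMD lanes and avoids the energy cost of floating-point units, at the price of quantization error that the algorithm designer must budget for.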


May 2014 Embedded Vision Summit Technical Presentation: “How to Create a Great Object Detector,” Avinash Nehemiah, MathWorks

Avinash Nehemiah, Product Marketing Manager for Computer Vision at MathWorks, presents the "How to Create a Great Object Detector" tutorial at the May 2014 Embedded Vision Summit. Detecting objects of interest in images and video is a key part of practical embedded vision systems. Impressive progress has been made over the past few years by […]
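One classical recipe for such a detector is a sliding window scored by a trained classifier. The sketch below is generic (not the MathWorks workflow from the talk); `score_fn` stands in for any classifier and is a hypothetical placeholder here:

```python
import numpy as np

def sliding_window_detect(image, window, score_fn, stride=4, threshold=0.5):
    """Scan a fixed-size window over the image and keep high-scoring locations.
    `score_fn` stands in for a trained classifier (placeholder here)."""
    wh, ww = window
    detections = []
    for y in range(0, image.shape[0] - wh + 1, stride):
        for x in range(0, image.shape[1] - ww + 1, stride):
            score = score_fn(image[y:y + wh, x:x + ww])
            if score > threshold:
                detections.append((x, y, ww, wh, score))
    return detections

# Toy scorer: flag windows whose mean intensity exceeds 0.8
img = np.zeros((32, 32))
img[8:16, 8:16] = 1.0
hits = sliding_window_detect(img, (8, 8), lambda w: w.mean(),
                             stride=8, threshold=0.8)
```

Real detectors add multi-scale scanning and non-maximum suppression on top of this loop; the window size, stride, and score threshold are the main knobs trading detection rate against compute.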


Tegra K1-Powered Project Tango DevKit Opens Door to New Worlds Enabled by Computer Vision

This article was originally published at NVIDIA's blog. It is reprinted here with the permission of NVIDIA. Google’s new Project Tango Tablet Developers’ Kit puts powerful new capabilities in the hands of those ready to harness the promise of computer vision. Fast-forwarding Google’s Project Tango from experimental device to developer kit, the tablet incorporates cameras […]


Here you’ll find a wealth of practical technical insights and expert advice to help you bring AI and visual intelligence into your products without flying blind.

Contact

Address

Berkeley Design Technology, Inc.
PO Box #4446
Walnut Creek, CA 94596

Phone
+1 (925) 954-1411