FUNCTIONS

May 2014 Embedded Vision Summit Technical Presentation: “Embedded Lucas-Kanade Tracking: How It Works, How to Implement It, and How to Use It,” Goksel Dedeoglu, PercepTonic

Goksel Dedeoglu, Ph.D., Founder and Lab Director of PercepTonic, presents the "Embedded Lucas-Kanade Tracking: How It Works, How to Implement It, and How to Use It" tutorial at the May 2014 Embedded Vision Summit. This tutorial is intended for technical audiences interested in learning about the Lucas-Kanade (LK) tracker, also known as the Kanade-Lucas-Tomasi (KLT) […]


May 2014 Embedded Vision Summit Technical Presentation: “Evolving Algorithmic Requirements for Recognition and Classification in Augmented Reality,” Simon Morris, CogniVue

Simon Morris, CEO of CogniVue, presents the "Evolving Algorithmic Requirements for Recognition and Classification in Augmented Reality" tutorial at the May 2014 Embedded Vision Summit. Augmented reality (AR) applications are based on accurately computing a camera's 6 degrees of freedom (6DOF) position in 3-dimensional space, also known as its "pose". In vision-based approaches to AR, […]


May 2014 Embedded Vision Summit Technical Presentation: “Vision-Based Gesture User Interfaces,” Francis MacDougall, Qualcomm

Francis MacDougall, Senior Director of Technology at Qualcomm, presents the "Vision-Based Gesture User Interfaces" tutorial at the May 2014 Embedded Vision Summit. The means by which we interact with the machines around us is undergoing a fundamental transformation. While we may still sometimes need to push buttons, touch displays and trackpads, and raise our voices, […]


“Convolutional Neural Networks,” an Embedded Vision Summit Keynote Presentation from Facebook

Yann LeCun, Director of AI Research at Facebook and Silver Professor of Data Science, Computer Science, Neural Science, and Electrical Engineering at New York University, presents the "Convolutional Networks: Unleashing the Potential of Machine Learning for Robust Perception Systems" keynote at the May 2014 Embedded Vision Summit. Convolutional Networks (ConvNets) have become the dominant method […]


May 2014 Embedded Vision Summit Technical Presentation: “Fast 3D Object Recognition in Real-World Environments,” Ken Lee, VanGogh Imaging

Ken Lee, Founder of VanGogh Imaging, presents the "Fast 3D Object Recognition in Real-World Environments" tutorial at the May 2014 Embedded Vision Summit. Real-time 3D object recognition can be computationally intensive and difficult to implement when there are many other objects (i.e., clutter) around the target. There are several approaches to deal with […]


May 2014 Embedded Vision Summit Technical Presentation: “How to Create a Great Object Detector,” Avinash Nehemiah, MathWorks

Avinash Nehemiah, Product Marketing Manager for Computer Vision at MathWorks, presents the "How to Create a Great Object Detector" tutorial at the May 2014 Embedded Vision Summit. Detecting objects of interest in images and video is a key part of practical embedded vision systems. Impressive progress has been made over the past few years by […]



Augmented Reality: A Compelling Mobile Embedded Vision Opportunity

This article was originally published at Electronic Engineering Journal. It is reprinted here with the permission of TechFocus Media. Although augmented reality was first proposed and crudely demonstrated nearly fifty years ago, its implementation was until recently only possible on bulky and expensive computers. Nowadays, however, fast, low-power, and cost-effective processors and high […]



Improved Vision Processors, Sensors Enable Proliferation of New and Enhanced ADAS Functions

This article was originally published at John Day's Automotive Electronics News. It is reprinted here with the permission of JHDay Communications. Thanks to the emergence of increasingly capable and cost-effective processors, image sensors, memories, and other semiconductor devices, along with robust algorithms, it's now practical to incorporate computer vision into a wide range of embedded […]


“Computational Photography: An Introduction and Highlights of Recent Research,” a Presentation from the University of Wisconsin

Professor Li Zhang of the University of Wisconsin presents an introduction to computational photography at the December 2013 Embedded Vision Alliance Member Meeting.


Lucas-Kanade Feature Tracking

Jeff Bier, founder of the Embedded Vision Alliance, interviews Goksel Dedeoglu, Manager of Embedded Vision R&D at Texas Instruments. They begin with a hands-on demonstration of real-time Lucas-Kanade tracking using TI's Vision Library (VLIB) on the C6678 KeyStone DSP, in which thousands of Harris corner features are detected and tracked in 1080p HD-resolution images at 15 frames per second […]
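The pipeline demonstrated above (detect corner features, then track them between frames) rests on the Lucas-Kanade update: for each feature window, solve a 2x2 least-squares system built from image gradients to estimate the window's displacement. Below is a minimal, self-contained NumPy sketch of a single such step; it is illustrative only and is not TI's VLIB implementation (the function name and window size are arbitrary choices, and production trackers add pyramids, iteration, and validity checks):

```python
import numpy as np

def lucas_kanade_step(prev, curr, x, y, win=7):
    """Estimate the (dx, dy) displacement of the window centered at
    (x, y) between grayscale frames `prev` and `curr`, by solving the
    2x2 normal equations of the optical-flow constraint Ix*dx + Iy*dy + It = 0."""
    h = win // 2
    p = prev[y - h:y + h + 1, x - h:x + h + 1].astype(np.float64)
    c = curr[y - h:y + h + 1, x - h:x + h + 1].astype(np.float64)
    # Spatial gradients of the previous frame (central differences),
    # and the temporal difference between the two windows.
    Ix = np.gradient(p, axis=1)
    Iy = np.gradient(p, axis=0)
    It = c - p
    # Accumulate the structure tensor A and right-hand side b over the window.
    A = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                  [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
    b = -np.array([np.sum(Ix * It), np.sum(Iy * It)])
    return np.linalg.solve(A, b)  # least-squares flow (dx, dy)

# Usage sketch: a smooth synthetic blob shifted one pixel to the right
# should yield a flow estimate of roughly (+1, 0) at the blob's center.
ys, xs = np.mgrid[0:32, 0:32]
frame0 = np.exp(-((xs - 16) ** 2 + (ys - 16) ** 2) / 40.0)
frame1 = np.roll(frame0, 1, axis=1)  # exact 1-pixel shift in x
dx, dy = lucas_kanade_step(frame0, frame1, 16, 16)
```

Note that `A` is invertible only where the window has gradient energy in both directions, which is exactly why corner detectors such as Harris (whose score is built from the same structure tensor) are used to pick the features to track.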



Contact

Address

Berkeley Design Technology, Inc.
PO Box #4446
Walnut Creek, CA 94596

Phone: +1 (925) 954-1411