Object Tracking

May 2016 Embedded Vision Summit Proceedings

The Embedded Vision Summit was held on May 2-4, 2016 in Santa Clara, California, as an educational forum for product creators interested in incorporating visual intelligence into electronic systems and software. The presentations delivered at the Summit are listed below. All of the slides from these presentations are included in… May 2016 Embedded Vision Summit […]

“Techniques for Efficient Implementation of Deep Neural Networks,” a Presentation from Stanford

Song Han, a graduate student at Stanford, delivers the presentation "Techniques for Efficient Implementation of Deep Neural Networks" at the March 2016 Embedded Vision Alliance Member Meeting. Song presents his recent findings on techniques for implementing deep neural networks efficiently.
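
To give a concrete feel for one technique often used to make deep networks cheaper to deploy, here is a minimal NumPy sketch of magnitude-based weight pruning. It is an illustration only, not code from the presentation; the layer shape and sparsity target are made up for the example.

```python
# Illustrative sketch (not from the talk): magnitude-based weight pruning,
# one widely used technique for shrinking deep neural networks.
import numpy as np

def prune_by_magnitude(weights, sparsity=0.9):
    """Zero out the smallest-magnitude weights so that roughly `sparsity`
    fraction of the entries become zero."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy(), np.ones_like(weights, dtype=bool)
    threshold = np.partition(flat, k - 1)[k - 1]   # k-th smallest magnitude
    mask = np.abs(weights) > threshold
    return weights * mask, mask

rng = np.random.default_rng(0)
w = rng.normal(size=(256, 128)).astype(np.float32)  # hypothetical fully connected layer
pruned, mask = prune_by_magnitude(w, sparsity=0.9)
print("fraction of weights kept:", mask.mean())      # roughly 0.1
```

In practice, pruning is typically followed by retraining the surviving weights to recover accuracy.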

“Harman’s Augmented Navigation Platform—The Convergence of ADAS and Navigation,” a Presentation from Harman

Alon Atsmon, Vice President of Technology Strategy at Harman International, presents the "Harman’s Augmented Navigation Platform—The Convergence of ADAS and Navigation" tutorial at the May 2015 Embedded Vision Summit. Until recently, advanced driver assistance systems (ADAS) and in-car navigation systems have evolved as separate standalone systems. Today, however, the combination of available embedded computing power

“Bringing Computer Vision to the Consumer,” a Keynote Presentation from Dyson

Mike Aldred, Electronics Lead at Dyson, presents the "Bringing Computer Vision to the Consumer" keynote at the May 2015 Embedded Vision Summit. While vision has been a research priority for decades, the results have often remained out of reach of the consumer. Huge strides have been made, but the final, and perhaps toughest, hurdle is

Gaze Tracking Using CogniMem Technologies’ CM1K and a Freescale i.MX53

This demonstration, which pairs a Freescale i.MX Quick Start board and CogniMem Technologies CM1K evaluation module, showcases how to use your eyes (specifically where you are looking at any particular point in time) as a mouse. Translating where a customer is looking to actions on a screen, and using gaze tracking to electronically control objects
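
As a rough illustration of the "eyes as a mouse" idea (this is not the CM1K demo code), the sketch below fits a simple affine calibration from an estimated gaze point to screen coordinates. The calibration samples, screen resolution, and function names are all hypothetical.

```python
# Illustrative sketch: map an estimated gaze point (e.g., pupil position in
# camera coordinates) to screen coordinates via a least-squares affine fit.
import numpy as np

def fit_gaze_to_screen(gaze_pts, screen_pts):
    """Solve screen ≈ [gx, gy, 1] @ A in the least-squares sense."""
    G = np.hstack([gaze_pts, np.ones((len(gaze_pts), 1))])  # N x 3
    A, *_ = np.linalg.lstsq(G, screen_pts, rcond=None)      # 3 x 2
    return A

def gaze_to_cursor(A, gaze_xy):
    gx, gy = gaze_xy
    return np.array([gx, gy, 1.0]) @ A

# Hypothetical calibration data: gaze estimates recorded while the user
# looked at four known on-screen targets (1920x1080 layout assumed).
gaze_samples   = np.array([[0.21, 0.33], [0.78, 0.31], [0.22, 0.71], [0.80, 0.69]])
screen_targets = np.array([[100, 100], [1820, 100], [100, 980], [1820, 980]], dtype=float)

A = fit_gaze_to_screen(gaze_samples, screen_targets)
print(gaze_to_cursor(A, (0.5, 0.5)))  # approximate cursor position for a central gaze
```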

“Keeping Brick and Mortar Relevant: A Look Inside Retail Analytics,” a Presentation from Prism Skylabs

Doug Johnston, Founder and Vice President of Technology at Prism Skylabs, delivers the presentation "Keeping Brick and Mortar Relevant: A Look Inside Prism Skylabs and Retail Analytics" at the December 2014 Embedded Vision Alliance Member Meeting. Doug explains how his firm is using vision to provide retailers with actionable intelligence based on consumer behavior.

May 2014 Embedded Vision Summit Technical Presentation: “Embedded Lucas-Kanade Tracking: How It Works, How to Implement It, and How to Use It,” Goksel Dedeoglu, PercepTonic

Goksel Dedeoglu, Ph.D., Founder and Lab Director of PercepTonic, presents the "Embedded Lucas-Kanade Tracking: How It Works, How to Implement It, and How to Use It" tutorial at the May 2014 Embedded Vision Summit. This tutorial is intended for technical audiences interested in learning about the Lucas-Kanade (LK) tracker, also known as the Kanade-Lucas-Tomasi (KLT)
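
As a companion to the abstract, here is a minimal sketch of pyramidal Lucas-Kanade (KLT) feature tracking using OpenCV's stock implementation. It illustrates the typical detect-then-track loop rather than the embedded implementation discussed in the talk; the webcam index and parameter values are arbitrary.

```python
# Minimal KLT sketch: detect Shi-Tomasi corners, then track them frame to
# frame with OpenCV's pyramidal Lucas-Kanade optical flow.
import cv2
import numpy as np

cap = cv2.VideoCapture(0)                      # assumes a webcam at index 0
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

# Select corner features worth tracking.
pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                              qualityLevel=0.01, minDistance=8)

lk_params = dict(winSize=(21, 21), maxLevel=3,
                 criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))

while True:
    ok, frame = cap.read()
    if not ok or pts is None or len(pts) == 0:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Track each feature from the previous frame into the current one.
    new_pts, status, err = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None, **lk_params)
    good = new_pts[status.ravel() == 1]

    for x, y in good.reshape(-1, 2):
        cv2.circle(frame, (int(x), int(y)), 3, (0, 255, 0), -1)
    cv2.imshow("KLT tracks", frame)
    if cv2.waitKey(1) & 0xFF == 27:            # Esc to quit
        break

    prev_gray, pts = gray, good.reshape(-1, 1, 2)

cap.release()
cv2.destroyAllWindows()
```

A practical refinement is to re-detect features periodically, since tracks are lost as points leave the frame or fail the flow status check.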

May 2014 Embedded Vision Summit Technical Presentation: “How to Create a Great Object Detector,” Avinash Nehemiah, MathWorks

Avinash Nehemiah, Product Marketing Manager for Computer Vision at MathWorks, presents the "How to Create a Great Object Detector" tutorial at the May 2014 Embedded Vision Summit. Detecting objects of interest in images and video is a key part of practical embedded vision systems. Impressive progress has been made over the past few years by
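
The presentation comes from MathWorks; as a generic point of reference in Python/OpenCV rather than MATLAB, the sketch below runs a classic HOG-plus-linear-SVM pedestrian detector using OpenCV's built-in people detector. The input image name is hypothetical, and this is only a baseline illustration of object detection, not the workflow from the talk.

```python
# Illustrative baseline: HOG features with a pre-trained linear SVM, via
# OpenCV's default people detector.
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

img = cv2.imread("street.jpg")                 # hypothetical test image; replace with a real path
rects, weights = hog.detectMultiScale(img, winStride=(8, 8),
                                      padding=(8, 8), scale=1.05)

for (x, y, w, h), score in zip(rects, weights.ravel()):
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
    print(f"person at ({x},{y}) size {w}x{h}, score {score:.2f}")

cv2.imwrite("street_detections.jpg", img)
```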

Improved Vision Processors, Sensors Enable Proliferation of New and Enhanced ADAS Functions

This article was originally published at John Day's Automotive Electronics News. It is reprinted here with the permission of JHDay Communications. Thanks to the emergence of increasingly capable and cost-effective processors, image sensors, memories and other semiconductor devices, along with robust algorithms, it's now practical to incorporate computer vision into a wide range of embedded

“Computational Photography: An Introduction and Highlights of Recent Research,” a Presentation from the University of Wisconsin

Professor Li Zhang of the University of Wisconsin presents an introduction to computational photography at the December 2013 Embedded Vision Alliance Member Meeting.
