Robotics

“Bringing Computer Vision to the Consumer,” a Keynote Presentation from Dyson

Mike Aldred, Electronics Lead at Dyson, presents the "Bringing Computer Vision to the Consumer" keynote at the May 2015 Embedded Vision Summit. While vision has been a research priority for decades, the results have often remained out of reach of the consumer. Huge strides have been made, but the final, and perhaps toughest, hurdle is […]

Visual Intelligence Gives Robotic Systems Spatial Sense

This article is an expanded version of one originally published at EE Times' Embedded.com Design Line and is reprinted here with the permission of EE Times. For robots to meaningfully interact with the objects around them and move about their environments, they must be able to see and discern their surroundings. Cost-effective […]

October 2013 Embedded Vision Summit Technical Presentation: “Better Image Understanding Through Better Sensor Understanding,” Michael Tusch, Apical

Michael Tusch, Founder and CEO of Apical Imaging, presents the "Better Image Understanding Through Better Sensor Understanding" tutorial within the "Front-End Image Processing for Vision Applications" technical session at the October 2013 Embedded Vision Summit East. One of the main barriers to widespread use of embedded vision is its reliability. For example, systems that detect […]

September 2013 Qualcomm UPLINQ Conference Presentation: “Accelerating Computer Vision Applications with the Hexagon DSP,” Eric Gregori, BDTI

Eric Gregori, Senior Software Engineer at BDTI, presents the "Accelerating Computer Vision Applications with the Hexagon DSP" tutorial at the September 2013 Qualcomm UPLINQ Conference. Smartphones, tablets and embedded systems increasingly use sophisticated vision algorithms to deliver capabilities like augmented reality and gesture user interfaces. Since vision algorithms are computationally demanding, a key challenge when […]

October 2013 Embedded Vision Summit Technical Presentation: “Embedded Lucas-Kanade Tracking: How it Works, How to Implement It, and How to Use It,” Goksel Dedeoglu, Texas Instruments

Goksel Dedeoglu, Embedded Vision R&D Manager at Texas Instruments, presents the "Embedded Lucas-Kanade Tracking: How it Works, How to Implement It, and How to Use It" tutorial within the "Algorithms and Implementations" technical session at the October 2013 Embedded Vision Summit East. This tutorial is intended for technical audiences interested in learning about the Lucas-Kanade […]
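
The talk covers the Lucas-Kanade tracker itself; for readers who want to try the technique first, below is a minimal sketch of pyramidal Lucas-Kanade feature tracking using OpenCV's goodFeaturesToTrack and calcOpticalFlowPyrLK. It is illustrative only and not drawn from the presentation; the camera index, window size, and other parameter values are assumptions.

```python
# Illustrative sketch of sparse Lucas-Kanade (LK) optical-flow tracking with OpenCV.
# Not from the presentation; camera index and parameter values are assumptions.
import cv2

cap = cv2.VideoCapture(0)            # assumed video source: default camera
ok, prev_frame = cap.read()
prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)

# Seed the tracker with Shi-Tomasi corner features.
p0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200, qualityLevel=0.01, minDistance=7)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    if p0 is None or len(p0) == 0:
        # Re-detect features if all tracks have been lost.
        p0 = cv2.goodFeaturesToTrack(gray, maxCorners=200, qualityLevel=0.01, minDistance=7)
        prev_gray = gray
        continue

    # Pyramidal LK: estimate where each feature moved between the two frames.
    p1, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, gray, p0, None,
                                                winSize=(21, 21), maxLevel=3)

    good_new = p1[status.flatten() == 1]   # keep only successfully tracked points
    good_old = p0[status.flatten() == 1]

    for (x1, y1), (x0, y0) in zip(good_new.reshape(-1, 2), good_old.reshape(-1, 2)):
        cv2.line(frame, (int(x0), int(y0)), (int(x1), int(y1)), (0, 255, 0), 2)

    cv2.imshow("LK tracking", frame)
    if cv2.waitKey(1) & 0xFF == 27:        # Esc to quit
        break

    prev_gray = gray
    p0 = good_new.reshape(-1, 1, 2)

cap.release()
cv2.destroyAllWindows()
```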

October 2013 Embedded Vision Summit Technical Presentation: “Using FPGAs to Accelerate 3D Vision Processing: A System Developer’s View,” Ken Lee, VanGogh Imaging

Ken Lee, CEO of VanGogh Imaging, presents the "Using FPGAs to Accelerate 3D Vision Processing: A System Developer's View" tutorial within the "Implementing Vision Systems" technical session at the October 2013 Embedded Vision Summit East. Embedded vision system designers must consider many factors in choosing a processor. This is especially true for 3D vision systems, […]

October 2013 Embedded Vision Summit Technical Presentation: “Feature Detection: How It Works, When to Use It, and a Sample Implementation,” Marco Jacobs, videantis

Marco Jacobs, Technical Marketing Director at videantis, presents the "Feature Detection: How It Works, When to Use It, and a Sample Implementation" tutorial within the "Object and Feature Detection" technical session at the October 2013 Embedded Vision Summit East. Feature detection and tracking are key components of many computer vision applications. In this talk, Jacobs […]
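
As a hands-on complement to the talk's subject, here is a brief, illustrative feature-detection sketch using OpenCV's ORB detector and a brute-force Hamming matcher. It is not taken from the presentation; the input file names and parameter values are hypothetical.

```python
# Illustrative feature detection and matching with OpenCV's ORB.
# Not from the talk; file names and parameters are placeholders.
import cv2

img1 = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)   # hypothetical input images
img2 = cv2.imread("frame2.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=500)       # FAST-based keypoints with binary descriptors
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Brute-force Hamming matching; cross-check keeps only mutually best matches.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

# Visualize the 50 strongest matches.
vis = cv2.drawMatches(img1, kp1, img2, kp2, matches[:50], None)
cv2.imwrite("matches.png", vis)
```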

“Embedding Computer Vision in Everyday Life,” a Keynote Presentation from iRobot

Mario E. Munich, Vice President of Advanced Development at iRobot, presents the "Embedding Computer Vision in Everyday Life" keynote at the October 2013 Embedded Vision Summit East. Munich speaks about adapting highly complex computer vision technologies to cost-effective consumer robotics applications. Munich currently manages iRobot's research and advanced development efforts. He was formerly the CTO […]

“High Speed Vision and Its Applications,” a Presentation from Professor Masatoshi Ishikawa

Professor Masatoshi Ishikawa of the University of Tokyo delivers the keynote presentation, "High Speed Vision and Its Applications — Sensor Fusion, Dynamic Image Control, Vision Architecture, and Meta-Perception," at the July 2013 Embedded Vision Alliance Member Meeting. High-speed vision processing and the various applications based on it are expected to become increasingly common due to continued improvement […]

Moving Object Detection Through Background Subtraction, Part Two

Jeff Bier, founder of the Embedded Vision Alliance, interviews Goksel Dedeoglu, Manager of Embedded Vision R&D at Texas Instruments. Beginning with a hands-on demonstration of TI's Vision Library (VLIB) on a DaVinci DM6437 evaluation board, Jeff and Goksel touch upon various aspects of embedded vision engineering: algorithm design and prototyping in a PC environment, embedded […]
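
For context on the technique in the title, here is a minimal background-subtraction sketch using OpenCV's MOG2 model. The interview itself demonstrates TI's VLIB on a DM6437 board rather than OpenCV; the input clip name, blob-size threshold, and morphology settings below are assumptions for illustration.

```python
# Illustrative moving-object detection via background subtraction (OpenCV MOG2).
# Not the VLIB implementation discussed in the interview; parameters are assumptions.
import cv2

cap = cv2.VideoCapture("traffic.mp4")     # hypothetical input clip
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                                detectShadows=True)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))

while True:
    ok, frame = cap.read()
    if not ok:
        break

    # Per-pixel Gaussian-mixture background model; moving pixels become foreground.
    fg_mask = subtractor.apply(frame)

    # Drop shadow pixels (marked as 127), suppress noise, then find moving regions.
    _, fg_mask = cv2.threshold(fg_mask, 200, 255, cv2.THRESH_BINARY)
    fg_mask = cv2.morphologyEx(fg_mask, cv2.MORPH_OPEN, kernel)
    contours, _ = cv2.findContours(fg_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    for c in contours:
        if cv2.contourArea(c) > 500:       # ignore tiny blobs (assumed threshold)
            x, y, w, h = cv2.boundingRect(c)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 0, 255), 2)

    cv2.imshow("moving objects", frame)
    if cv2.waitKey(1) & 0xFF == 27:        # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```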
