Algorithms

“Implementing Eye Tracking for Medical, Automotive and Headset Applications,” a Presentation From Xilinx and EyeTech Digital Systems

Dan Isaacs, Director of Smarter Connected Systems at Xilinx, and Robert Chappell, Founder of EyeTech Digital Systems, co-present the "Implementing Eye Tracking for Medical, Automotive and Headset Applications" tutorial at the May 2015 Embedded Vision Summit. When humans communicate with each other, we get important cues from watching each other’s eyes. Similarly, machines can gain […]
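For readers who want a feel for the kind of image processing involved, here is a minimal pupil-localization sketch in Python/OpenCV. It is illustrative only, not EyeTech's algorithm; the input file name "eye.png" and the fixed threshold value are assumptions.

```python
# Minimal pupil-localization sketch (illustrative only, not EyeTech's algorithm).
# Assumes a pre-cropped grayscale eye image at "eye.png" and a fixed threshold.
import cv2

eye = cv2.imread("eye.png", cv2.IMREAD_GRAYSCALE)
blur = cv2.GaussianBlur(eye, (7, 7), 0)

# The pupil is typically the darkest compact region: threshold, keep the largest blob.
_, mask = cv2.threshold(blur, 40, 255, cv2.THRESH_BINARY_INV)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)  # OpenCV 4.x return convention

if contours:
    pupil = max(contours, key=cv2.contourArea)
    (cx, cy), radius = cv2.minEnclosingCircle(pupil)
    print(f"Pupil candidate at ({cx:.1f}, {cy:.1f}), radius {radius:.1f} px")
```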


May 2015 Embedded Vision Summit Technical Presentation: “System-level Design for Embedded Vision with FPGA-based Programmable SoCs,” Mario Bergeron, Avnet Electronics Marketing

Mario Bergeron, Technical Marketing Engineer at Avnet Electronics Marketing, presents the "System-Level Design for Embedded Vision with FPGA-based Programmable SoCs" tutorial at the May 2015 Embedded Vision Summit. FPGA-based programmable system-on-chip (SoC) devices offer capabilities beyond those found in traditional embedded processors. The programmability and vast parallel processing capabilities of the FPGA fabric allow developers…
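To make the parallelism point concrete, the sketch below shows a plain 3x3 convolution, the kind of regular per-pixel multiply-accumulate loop that is typically offloaded to the FPGA fabric of a Zynq-class SoC while the ARM cores handle control. The kernel values and image size are illustrative assumptions, not Avnet's reference design.

```python
# Illustrative 3x3 convolution: the nested multiply-accumulate loops below are the
# kind of regular, data-parallel work an FPGA fabric can pipeline and unroll.
import numpy as np

def convolve3x3(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    h, w = image.shape
    out = np.zeros((h - 2, w - 2), dtype=np.float32)
    for y in range(h - 2):
        for x in range(w - 2):
            # 9 multiply-accumulates per output pixel; independent across pixels.
            out[y, x] = np.sum(image[y:y+3, x:x+3] * kernel)
    return out

image = np.random.rand(64, 64).astype(np.float32)   # placeholder frame
sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float32)
edges = convolve3x3(image, sobel_x)
print(edges.shape)  # (62, 62)
```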


May 2015 Embedded Vision Summit Technical Presentation: “3D from 2D: Theory, Implementation, and Applications of Structure from Motion,” Marco Jacobs, videantis

Marco Jacobs, Vice President of Marketing at videantis, presents the "3D from 2D: Theory, Implementation, and Applications of Structure from Motion" tutorial at the May 2015 Embedded Vision Summit. Structure from motion uses a unique combination of algorithms to extract depth information from a single moving 2D camera. Using a calibrated camera, feature detection, and…
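A minimal two-view version of that pipeline (feature detection and matching, essential-matrix estimation, pose recovery, triangulation) can be sketched with OpenCV as shown below. The intrinsic matrix and frame file names are placeholder assumptions, and this is not the videantis implementation.

```python
# Two-frame structure-from-motion sketch (illustrative; not the videantis pipeline).
# Assumes two frames from a calibrated moving camera and a known intrinsic matrix K.
import cv2
import numpy as np

K = np.array([[700.0, 0.0, 320.0],
              [0.0, 700.0, 240.0],
              [0.0,   0.0,   1.0]])  # placeholder intrinsics

img1 = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame2.png", cv2.IMREAD_GRAYSCALE)

# 1. Detect and match features between the two frames.
orb = cv2.ORB_create(2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)

pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# 2. Recover relative camera motion from the essential matrix.
E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)

# 3. Triangulate the matched points into 3D (up to scale).
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([R, t])
points4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
points3d = (points4d[:3] / points4d[3]).T
print(points3d.shape)
```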


May 2015 Embedded Vision Summit Technical Presentation: “Low-power Embedded Vision: A Face Tracker Case Study,” Pierre Paulin, Synopsys

Pierre Paulin, R&D Director for Embedded Vision at Synopsys, presents the "Low-power Embedded Vision: A Face Tracker Case Study" tutorial at the May 2015 Embedded Vision Summit. The ability to reliably detect and track individual objects or people has numerous applications, for example in the video-surveillance and home entertainment fields. While this has proven to…
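As a point of reference for what a basic face tracker does, the sketch below detects faces per frame with an OpenCV Haar cascade and follows one of them by nearest-centroid association. It is illustrative only and unrelated to Synopsys's optimized embedded implementation; the webcam source and detector parameters are assumptions.

```python
# Simple face detect-and-track sketch (illustrative; not the Synopsys implementation).
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)   # assumes a webcam; any video source works
last_center = None

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.2, minNeighbors=5)
    if len(faces) > 0:
        # Prefer the detection closest to the previous position, else the largest face.
        if last_center is not None:
            x, y, w, h = min(
                faces,
                key=lambda f: (f[0] + f[2] / 2 - last_center[0]) ** 2
                            + (f[1] + f[3] / 2 - last_center[1]) ** 2)
        else:
            x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
        last_center = (x + w / 2, y + h / 2)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("face tracker", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```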


“Tailoring Convolutional Neural Networks for Low-Cost, Low-Power Implementation,” a Presentation From Synopsys

Bruno Lavigueur, Project Leader for Embedded Vision at Synopsys, presents the "Tailoring Convolutional Neural Networks for Low-Cost, Low-Power Implementation" tutorial at the May 2015 Embedded Vision Summit. Deep learning-based object detection using convolutional neural networks (CNNs) has recently emerged as one of the leading approaches for achieving state-of-the-art detection accuracy for a wide range of…
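A quick back-of-the-envelope multiply-accumulate (MAC) count shows why tailoring the network matters for cost and power. The layer shapes below are invented for illustration and are not the network discussed in the presentation.

```python
# Back-of-the-envelope compute cost for a small CNN (layer shapes are illustrative
# assumptions, not the network from the presentation).
# MACs per conv layer = out_H * out_W * out_C * in_C * k * k

def conv_macs(out_h, out_w, in_c, out_c, k):
    return out_h * out_w * out_c * in_c * k * k

layers = [
    # (out_h, out_w, in_c, out_c, kernel)
    (112, 112,  3, 16, 3),
    (56,  56,  16, 32, 3),
    (28,  28,  32, 64, 3),
]

total = sum(conv_macs(*layer) for layer in layers)
print(f"Total MACs per frame: {total / 1e6:.1f} M")
# Halving the channel counts cuts the MAC count roughly 4x, which is the kind of
# tailoring that helps a CNN fit a low-cost, low-power embedded vision processor.
```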



May 2015 Embedded Vision Summit Proceedings

The Embedded Vision Summit was held on May 12, 2015 in Santa Clara, California, as an educational forum for product creators interested in incorporating visual intelligence into electronic systems and software. The presentations delivered at the Summit are listed below. All of the slides from these presentations are included in…


OpenCL Eases Development of Computer Vision Software for Heterogeneous Processors

OpenCL™, a maturing set of programming languages and APIs from the Khronos Group, enables software developers to efficiently harness the profusion of diverse processing resources in modern SoCs, for a broad range of applications including embedded vision. Computer scientists describe computer vision, the use of digital processing and intelligent algorithms to interpret meaning from still and…
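For readers new to OpenCL, here is a minimal host-plus-kernel sketch using the pyopencl bindings: it builds a trivial pixel-threshold kernel and runs it on whatever OpenCL device is available (CPU, GPU, or DSP). The kernel, buffer names, and image size are illustrative assumptions.

```python
# Minimal OpenCL example via pyopencl: threshold an 8-bit image on any available
# OpenCL device. Illustrative only; names and sizes are assumptions.
import numpy as np
import pyopencl as cl

kernel_src = """
__kernel void threshold(__global const uchar *src,
                        __global uchar *dst,
                        const uchar level)
{
    int i = get_global_id(0);
    dst[i] = src[i] > level ? 255 : 0;
}
"""

image = np.random.randint(0, 256, size=640 * 480, dtype=np.uint8)  # placeholder frame
result = np.empty_like(image)

ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)
mf = cl.mem_flags

src_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=image)
dst_buf = cl.Buffer(ctx, mf.WRITE_ONLY, result.nbytes)

program = cl.Program(ctx, kernel_src).build()
program.threshold(queue, (image.size,), None, src_buf, dst_buf, np.uint8(128))

cl.enqueue_copy(queue, result, dst_buf)
print(result[:8])
```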



Gaze Tracking Using CogniMem Technologies’ CM1K and a Freescale i.MX53

This demonstration, which pairs a Freescale i.MX Quick Start board and CogniMem Technologies CM1K evaluation module, showcases how to use your eyes (specifically, where you are looking at any particular point in time) as a mouse. Translating where a customer is looking to actions on a screen, and using gaze tracking to electronically control objects…
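At the heart of "eyes as a mouse" is a calibration step that maps pupil coordinates to screen coordinates. The sketch below fits a simple least-squares affine mapping from a few invented calibration fixations; it is a generic illustration, and the CM1K demo itself relies on hardware pattern matching rather than this fit.

```python
# Sketch of gaze-to-screen calibration: fit an affine map from pupil coordinates to
# screen coordinates using a few calibration fixations. Calibration data is invented;
# the CM1K demo uses hardware pattern matching rather than this least-squares fit.
import numpy as np

# (pupil_x, pupil_y) observed while the user fixates known screen targets
pupil = np.array([[12.0, 8.0], [30.0, 8.5], [12.5, 22.0], [29.5, 21.5]])
screen = np.array([[0.0, 0.0], [1920.0, 0.0], [0.0, 1080.0], [1920.0, 1080.0]])

# Solve screen = [pupil_x, pupil_y, 1] @ A in the least-squares sense.
design = np.hstack([pupil, np.ones((len(pupil), 1))])
A, *_ = np.linalg.lstsq(design, screen, rcond=None)

def gaze_to_screen(px, py):
    return np.array([px, py, 1.0]) @ A

print(gaze_to_screen(20.0, 15.0))   # estimated on-screen gaze point
```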


Adding Precise Finger Gesture Recognition Capabilities to the Microsoft Kinect

Chris McCormick, an application engineer at CogniMem, demonstrates how adding general-purpose, scalable pattern recognition can bring enhanced gesture control to the Microsoft Kinect. Envisioned applications include augmenting or eliminating the TV remote control, using American Sign Language for direct text translation, and expanding the game-playing experience. To process even more gestures…
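One common software approach to simple finger gestures is to count convexity defects of the hand contour extracted from a depth or silhouette image; a hedged OpenCV sketch follows. The input file name and defect-depth threshold are assumptions, and the CogniMem demo instead matches patterns in the CM1K's neural hardware.

```python
# Finger-count sketch from a binary hand silhouette using convexity defects
# (illustrative; the CogniMem demo uses the CM1K's hardware pattern matching instead).
# Assumes "hand_mask.png" is a white-on-black hand silhouette, e.g. from Kinect depth data.
import cv2

mask = cv2.imread("hand_mask.png", cv2.IMREAD_GRAYSCALE)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
hand = max(contours, key=cv2.contourArea)

hull = cv2.convexHull(hand, returnPoints=False)
defects = cv2.convexityDefects(hand, hull)

fingers = 0
if defects is not None:
    for start, end, far, depth in defects[:, 0]:
        # Deep defects correspond to the valleys between extended fingers.
        if depth / 256.0 > 20:          # depth is fixed-point (1/256 pixel units)
            fingers += 1
print("Extended fingers (approx.):", fingers + 1 if fingers else 0)
```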

