Processors

May 2014 Embedded Vision Summit Technical Presentation: “Challenges in Object Detection on Embedded Devices,” Adar Paz, CEVA

Adar Paz, Imaging and Computer Vision Team Leader at CEVA, presents the "Challenges in Object Detection on Embedded Devices" tutorial at the May 2014 Embedded Vision Summit. As more products ship with integrated cameras, there is an increased potential for computer vision (CV) to enable innovation. For instance, CV can tackle the "scene understanding" problem […]
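To hint at the kind of workload the talk addresses (an illustrative sketch only, not CEVA's pipeline), the simplest form of object detection slides a window across the image and scores every position; the `detect` function and the sum-of-squared-differences score here are assumptions for illustration:

```python
import numpy as np

def detect(image, template, threshold):
    """Toy sliding-window detector: score every window against a template
    with sum-of-squared-differences (SSD) and keep locations whose score
    falls under a threshold. Real embedded detectors replace SSD with
    learned classifiers and add image pyramids, but the per-window memory
    access pattern, a key cost on embedded devices, is similar."""
    th, tw = template.shape
    ih, iw = image.shape
    hits = []
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            window = image[y:y + th, x:x + tw]
            score = np.sum((window - template) ** 2)
            if score < threshold:
                hits.append((x, y, score))
    return hits

# Embed the template at (x=3, y=2) in an otherwise-zero image; only the
# exact match scores (near) zero.
template = np.array([[1.0, 2.0], [3.0, 4.0]])
image = np.zeros((8, 8))
image[2:4, 3:5] = template
hits = detect(image, template, threshold=1e-6)
```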

May 2014 Embedded Vision Summit Technical Presentation: “Trends and Recent Developments in Processors for Vision,” Jeff Bier, BDTI

Jeff Bier, President and co-founder of BDTI and founder of the Embedded Vision Alliance, presents the "Trends and Recent Developments in Processors for Vision" tutorial at the May 2014 Embedded Vision Summit. Processor suppliers are investing intensively in new processors for vision applications, employing a diverse range of architectural approaches to meet the conflicting requirements […]

May 2014 Embedded Vision Summit Technical Presentation: “Computer Vision Powered by Heterogeneous System Architecture (HSA),” Harris Gasparakis, AMD

Harris Gasparakis, Ph.D., OpenCV manager at AMD, presents the "Computer Vision Powered by Heterogeneous System Architecture (HSA)" tutorial at the May 2014 Embedded Vision Summit. Gasparakis reviews the HSA vision and its current incarnation through OpenCL 2.0, and discusses its relevance and advantages for computer vision applications. HSA unifies CPU cores, GPU compute units, and […]

May 2014 Embedded Vision Summit Technical Presentation: “Multiple Uses of Pipelined Video Pre-Processor Hardware in Vision Applications,” Rajesh Mahapatra, Analog Devices

Rajesh Mahapatra, Engineering Manager at Analog Devices, presents the "Multiple Uses of Pipelined Video Pre-Processor Hardware in Vision Applications" tutorial at the May 2014 Embedded Vision Summit. Significant resemblance and overlap exist among the pre-processing blocks of different vision applications. For instance, image gradients and edges have proven beneficial for a variety of applications, such […]
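To make the shared pre-processing idea concrete, here is a minimal numpy sketch of the gradient stage (3x3 Sobel kernels, written naively in software rather than as pipelined hardware; the function name is an assumption for illustration):

```python
import numpy as np

def sobel_gradients(img):
    """Compute horizontal/vertical image gradients with 3x3 Sobel kernels.

    A shared front-end like this can feed several vision pipelines:
    edge maps, HOG-style descriptors, corner detectors. Border pixels
    are left at zero for simplicity."""
    kx = np.array([[-1.0, 0.0, 1.0],
                   [-2.0, 0.0, 2.0],
                   [-1.0, 0.0, 1.0]])
    ky = kx.T
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            patch = img[y - 1:y + 2, x - 1:x + 2]
            gx[y, x] = np.sum(patch * kx)
            gy[y, x] = np.sum(patch * ky)
    mag = np.hypot(gx, gy)  # gradient magnitude, the usual edge strength
    return gx, gy, mag

# A vertical step edge: horizontal gradient peaks at the boundary column,
# vertical gradient stays zero.
img = np.zeros((5, 6))
img[:, 3:] = 1.0
gx, gy, mag = sobel_gradients(img)
```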

“Project Tango: Integrating 3D Vision Into Smartphones,” a Presentation From Google

Johnny Lee, Technical Program Lead at Google, delivers the presentation "Google Project Tango: Integrating 3D Vision Into Smartphones," at the May 2014 Embedded Vision Alliance Member Meeting. Project Tango is an effort to harvest research in computer vision and robotics and concentrate that technology into a mobile platform. It uses vision and sensor fusion to […]

May 2014 Embedded Vision Summit Technical Presentation: “Embedded Lucas-Kanade Tracking: How It Works, How to Implement It, and How to Use It,” Goksel Dedeoglu, PercepTonic

Goksel Dedeoglu, Ph.D., Founder and Lab Director of PercepTonic, presents the "Embedded Lucas-Kanade Tracking: How It Works, How to Implement It, and How to Use It" tutorial at the May 2014 Embedded Vision Summit. This tutorial is intended for technical audiences interested in learning about the Lucas-Kanade (LK) tracker, also known as the Kanade-Lucas-Tomasi (KLT) […]
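For readers who want the math in code, a single Lucas-Kanade step can be sketched in numpy as below. This is a bare-bones illustration of the normal equations only, without the image pyramids and iterative refinement a production tracker needs; the function name and synthetic test pattern are assumptions for illustration:

```python
import numpy as np

def lk_step(prev, curr, x, y, win=3):
    """One Lucas-Kanade iteration: estimate displacement (dx, dy) at (x, y).

    Builds the 2x2 structure tensor from spatial gradients inside a
    (2*win+1)-pixel window and solves the normal equations against the
    temporal difference between the two frames."""
    Iy, Ix = np.gradient(prev)          # axis 0 is y, axis 1 is x
    It = curr - prev                    # temporal derivative
    sl = (slice(y - win, y + win + 1), slice(x - win, x + win + 1))
    ix, iy, it = Ix[sl].ravel(), Iy[sl].ravel(), It[sl].ravel()
    A = np.array([[ix @ ix, ix @ iy],
                  [ix @ iy, iy @ iy]])  # structure tensor
    b = -np.array([ix @ it, iy @ it])
    dx, dy = np.linalg.solve(A, b)
    return dx, dy

# Smooth sinusoidal pattern translated 0.4 pixels in x between frames;
# the recovered flow should be close to (0.4, 0).
X, Y = np.meshgrid(np.arange(32), np.arange(32))
prev = np.sin(0.3 * X) + np.sin(0.3 * Y)
curr = np.sin(0.3 * (X - 0.4)) + np.sin(0.3 * Y)
dx, dy = lk_step(prev, curr, 16, 16)
```

Small translations of smooth textures are where the first-order LK linearization holds; larger motions are the reason real trackers add coarse-to-fine pyramids.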

May 2014 Embedded Vision Summit Technical Presentation: “Evolving Algorithmic Requirements for Recognition and Classification in Augmented Reality,” Simon Morris, CogniVue

Simon Morris, CEO of CogniVue, presents the "Evolving Algorithmic Requirements for Recognition and Classification in Augmented Reality" tutorial at the May 2014 Embedded Vision Summit. Augmented reality (AR) applications are based on accurately computing a camera's 6 degrees of freedom (6DOF) position in 3-dimensional space, also known as its "pose". In vision-based approaches to AR, […]
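The 6DOF pose the blurb mentions can be made concrete with a small sketch (illustrative only, not CogniVue's method): three Euler angles plus a translation vector are the six parameters, and projecting world points through that pose with a pinhole model is the forward computation that AR pose estimators invert:

```python
import numpy as np

def pose_matrix(rx, ry, rz, tx, ty, tz):
    """Build rotation R and translation t from the six pose parameters
    (three Euler angles in radians, three translations) that an AR
    system must estimate every frame."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx, np.array([tx, ty, tz], dtype=float)

def project(points, R, t, f=100.0):
    """Pinhole-project Nx3 world points through a camera at pose (R, t)
    with focal length f (pixels); returns Nx2 image coordinates."""
    cam = points @ R.T + t          # world -> camera coordinates
    return f * cam[:, :2] / cam[:, 2:3]

# Zero rotation, camera 10 units back: a point one unit off-axis lands
# at f * 1/10 = 10 pixels from the image center.
R, t = pose_matrix(0, 0, 0, 0, 0, 10)
uv = project(np.array([[1.0, 1.0, 0.0]]), R, t)
```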

May 2014 Embedded Vision Summit Technical Presentation: “Vision-Based Gesture User Interfaces,” Francis MacDougall, Qualcomm

Francis MacDougall, Senior Director of Technology at Qualcomm, presents the "Vision-Based Gesture User Interfaces" tutorial at the May 2014 Embedded Vision Summit. The means by which we interact with the machines around us is undergoing a fundamental transformation. While we may still sometimes need to push buttons, touch displays and trackpads, and raise our voices, […]

May 2014 Embedded Vision Summit Technical Presentation: “Programming Novel Recognition Algorithms on Heterogeneous Architectures,” Kees Vissers, Xilinx

Kees Vissers, Distinguished Engineer at Xilinx, presents the "Programming Novel Recognition Algorithms on Heterogeneous Architectures" tutorial at the May 2014 Embedded Vision Summit. The combination of heterogeneous systems, consisting of processors and FPGAs, is a high-performance implementation platform for image and vision processing. One of the significant hurdles in leveraging the compute potential was the […]

“Convolutional Neural Networks,” an Embedded Vision Summit Keynote Presentation from Facebook

Yann LeCun, Director of AI Research at Facebook and Silver Professor of Data Science, Computer Science, Neural Science, and Electrical Engineering at New York University, presents the "Convolutional Networks: Unleashing the Potential of Machine Learning for Robust Perception Systems" keynote at the May 2014 Embedded Vision Summit. Convolutional Networks (ConvNets) have become the dominant method […]
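As a reminder of what a ConvNet layer computes at its core, here is a minimal numpy sketch of one convolution-plus-ReLU stage (illustrative only; real ConvNets stack many such layers over many channels, with kernels learned from data rather than hand-set as here):

```python
import numpy as np

def conv2d_valid(x, k):
    """'Valid' 2D cross-correlation: the core operation a ConvNet layer
    repeats for every learned kernel and input channel."""
    kh, kw = k.shape
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def relu(z):
    """The usual nonlinearity applied after each convolution."""
    return np.maximum(z, 0.0)

# One feature map: a 3x3 horizontal-difference kernel over a 4x4 ramp
# input, then ReLU. The ramp increases by 1 per column, so the response
# is 1 everywhere in the valid region.
x = np.arange(16, dtype=float).reshape(4, 4)
k = np.array([[0.0, 0.0, 0.0],
              [-1.0, 1.0, 0.0],
              [0.0, 0.0, 0.0]])
fmap = relu(conv2d_valid(x, k))
```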


Contact

Address

Berkeley Design Technology, Inc.
PO Box #4446
Walnut Creek, CA 94596

Phone
+1 (925) 954-1411