Summit 2018

“Real-time Calibration for Stereo Cameras Using Machine Learning,” a Presentation from Lucid VR

Sheldon Fernandes, Senior Software and Algorithms Engineer at Lucid VR, presents the “Real-time Calibration for Stereo Cameras Using Machine Learning” tutorial at the May 2018 Embedded Vision Summit. Calibration involves capturing raw data and processing it to get useful information about a camera’s properties. Calibration is essential to ensure that a camera’s output is as […]
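As a concrete illustration of what calibration recovers (not the machine-learning method from the talk), here is a minimal pinhole-projection sketch in Python. The intrinsic matrix `K`, the 3D point, and the observed corner location are all assumed example values:

```python
import numpy as np

# Calibration estimates parameters such as the intrinsic matrix K
# (focal lengths fx, fy and principal point cx, cy). Hypothetical values:
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

def project(K, point_3d):
    """Project a 3D point in camera coordinates to pixel coordinates."""
    p = K @ point_3d
    return p[:2] / p[2]          # perspective divide

# A point 2 m in front of the camera, 0.1 m to the right:
pixel = project(K, np.array([0.1, 0.0, 2.0]))   # -> [360., 240.]

# Reprojection error against an observed detection (e.g. a checkerboard
# corner) is the quantity calibration pipelines typically minimize:
observed = np.array([360.5, 240.2])
err = np.linalg.norm(pixel - observed)
```

For stereo rigs the same idea extends to the extrinsic rotation and translation between the two cameras, which is what drifts over time and motivates real-time recalibration.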

“Think Like an Amateur, Do As an Expert: Lessons from a Career in Computer Vision,” a Keynote Presentation from Dr. Takeo Kanade

Dr. Takeo Kanade, U.A. and Helen Whitaker Professor at Carnegie Mellon University, presents the “Think Like an Amateur, Do As an Expert: Lessons from a Career in Computer Vision” keynote at the May 2018 Embedded Vision Summit. In this presentation, Dr. Kanade shares his experiences and lessons learned in developing a vast range of […]

“Even Faster CNNs: Exploring the New Class of Winograd Algorithms,” a Presentation from Arm

Gian Marco Iodice, Senior Software Engineer in the Machine Learning Group at Arm, presents the “Even Faster CNNs: Exploring the New Class of Winograd Algorithms” tutorial at the May 2018 Embedded Vision Summit. Over the past decade, deep learning networks have revolutionized classification and recognition tasks across a broad range of applications. Deeper […]

“A Physics-based Approach to Removing Shadows and Shading in Real Time,” a Presentation from Tandent Vision Science

Bruce Maxwell, Director of Research at Tandent Vision Science, presents the “A Physics-based Approach to Removing Shadows and Shading in Real Time” tutorial at the May 2018 Embedded Vision Summit. Shadows cast on ground surfaces can create false features and modify the color and appearance of real features, masking important information used by autonomous vehicles […]
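One common physics-based device for illumination invariance is working in log band ratios, where a pure intensity change (a shadow boundary on a matte surface) cancels. The following numpy sketch is a generic illustration of that idea, not Tandent's algorithm:

```python
import numpy as np

def log_chromaticity(rgb):
    """Per-pixel log band ratios log(R/G), log(B/G).
    Under simplifying assumptions (narrow-band sensors, Planckian light),
    an illumination change shifts these values along a fixed direction,
    so projecting orthogonally to it yields a shadow-invariant image."""
    rgb = np.clip(np.asarray(rgb, dtype=float), 1e-6, None)  # avoid log(0)
    return np.stack([np.log(rgb[..., 0] / rgb[..., 1]),
                     np.log(rgb[..., 2] / rgb[..., 1])], axis=-1)

# Same surface, lit vs. in shadow modeled as a pure intensity change:
lit    = np.array([[[200.0, 150.0, 100.0]]])
shadow = lit * 0.4
# The band ratios are identical, so the shadow "disappears" in this space:
assert np.allclose(log_chromaticity(lit), log_chromaticity(shadow))
```

Real shadows also change the illuminant's color, which is why full methods model the illumination direction rather than assuming a pure intensity scale as this toy example does.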

“Generative Sensing: Reliable Recognition from Unreliable Sensor Data,” a Presentation from Arizona State University

Lina Karam, Professor and Computer Engineering Director at Arizona State University, presents the “Generative Sensing: Reliable Recognition from Unreliable Sensor Data” tutorial at the May 2018 Embedded Vision Summit. While deep neural networks (DNNs) perform on par with – or better than – humans on pristine high-resolution images, DNN performance is significantly worse than human […]
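To make "unreliable sensor data" concrete, here is a small sketch that simulates a degraded sensor (resolution decimation plus additive noise), the regime in which DNN accuracy typically collapses. This is an illustration of the problem setting only, not the generative-sensing method itself:

```python
import numpy as np

rng = np.random.default_rng(0)

def degrade(image, noise_sigma=20.0, factor=4):
    """Simulate an unreliable sensor: decimate resolution, add noise."""
    low = image[::factor, ::factor]                  # lose spatial detail
    noisy = low + rng.normal(0.0, noise_sigma, low.shape)
    return np.clip(noisy, 0, 255)                    # stay in 8-bit range

clean = rng.integers(0, 256, size=(64, 64)).astype(float)
bad = degrade(clean)
print(clean.shape, bad.shape)   # -> (64, 64) (16, 16)
```

A recognizer trained only on clean data sees inputs like `bad` as far outside its training distribution, which is the gap the talk's approach targets.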

“Infusing Visual Understanding in Cloud and Edge Solutions Using State-of-the-Art Microsoft Algorithms,” a Presentation from Microsoft

Anirudh Koul, Senior Data Scientist, and Jin Yamamoto, Principal Program Manager, both from Microsoft, present the “Infusing Visual Understanding in Cloud and Edge Solutions Using State-of-the-Art Microsoft Algorithms” tutorial at the May 2018 Embedded Vision Summit. Microsoft offers its state-of-the-art computer vision algorithms, used internally in several products, through the Cognitive Services cloud APIs.
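A minimal sketch of calling the Cognitive Services Computer Vision "analyze" endpoint from the Python standard library. The host, API version, and key shown are placeholders (they vary by Azure resource and region), and the request is built but deliberately not sent:

```python
import json
import urllib.request

def build_analyze_request(endpoint, key, image_url,
                          features=("Description", "Tags")):
    """Build (but do not send) a Computer Vision 'analyze' request.
    The endpoint host and API version are illustrative; `key` is the
    Azure subscription key for the Cognitive Services resource."""
    url = (f"{endpoint}/vision/v2.0/analyze"
           f"?visualFeatures={','.join(features)}")
    body = json.dumps({"url": image_url}).encode()
    return urllib.request.Request(
        url, data=body, method="POST",
        headers={"Ocp-Apim-Subscription-Key": key,
                 "Content-Type": "application/json"})

req = build_analyze_request(
    "https://westus.api.cognitive.microsoft.com",  # example region endpoint
    "YOUR_SUBSCRIPTION_KEY",
    "https://example.com/photo.jpg")
# urllib.request.urlopen(req) would return JSON with tags, captions, etc.
```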

“A New Generation of Camera Modules: A Novel Approach and Its Benefits for Embedded Systems,” a Presentation from Allied Vision Technologies

Paul Maria Zalewski, Product Line Manager at Allied Vision Technologies, presents the “A New Generation of Camera Modules: A Novel Approach and Its Benefits for Embedded Systems” tutorial at the May 2018 Embedded Vision Summit. Embedded vision systems have typically relied on low-cost image sensor modules with a MIPI CSI-2 interface. Now, machine vision camera […]

“The OpenVX Computer Vision and Neural Network Inference Library Standard for Portable, Efficient Code,” a Presentation from AMD

Radhakrishna Giduthuri, Software Architect at Advanced Micro Devices (AMD), presents the “The OpenVX Computer Vision and Neural Network Inference Library Standard for Portable, Efficient Code” tutorial at the May 2018 Embedded Vision Summit. OpenVX is an industry-standard computer vision and neural network inference API designed for efficient implementation on a variety of embedded platforms. The API […]

“High-end Multi-camera Technology, Applications and Examples,” a Presentation from XIMEA

Max Larin, CEO of XIMEA, presents the “High-end Multi-camera Technology, Applications and Examples” tutorial at the May 2018 Embedded Vision Summit. For OEMs and system integrators, many of today’s applications in VR/AR/MR, ADAS, measurement and automation require multiple coordinated high-performance cameras. Current generic components are not optimized to achieve the desired traits in terms […]

“Deploying CNN-based Vision Solutions on a $3 Microcontroller,” a Presentation from Au-Zone Technologies

Greg Lytle, VP of Engineering at Au-Zone Technologies, presents the “Deploying CNN-based Vision Solutions on a $3 Microcontroller” tutorial at the May 2018 Embedded Vision Summit. In this presentation, Lytle explains how his company designed, trained and deployed a CNN-based embedded vision solution on a low-cost, Cortex-M-based microcontroller (MCU). He describes the steps taken to […]
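Fitting a CNN into the memory budget of a Cortex-M class MCU typically involves 8-bit weight quantization. This is an assumed, standard step for illustration, not necessarily Au-Zone's exact flow; a minimal symmetric int8 sketch:

```python
import numpy as np

def quantize_int8(weights):
    """Symmetric per-tensor int8 quantization: map floats into [-127, 127]
    with a single scale factor, cutting weight storage 4x vs. float32."""
    scale = np.abs(weights).max() / 127.0
    q = np.round(weights / scale).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 codes."""
    return q.astype(np.float32) * scale

w = np.array([0.42, -1.27, 0.05, 0.9], dtype=np.float32)
q, scale = quantize_int8(w)
# Round-trip error is bounded by half a quantization step:
assert np.max(np.abs(w - dequantize(q, scale))) <= scale / 2 + 1e-6
```

Per-channel scales and quantization-aware training are common refinements when this simple scheme costs too much accuracy.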

Here you’ll find a wealth of practical technical insights and expert advice to help you bring AI and visual intelligence into your products without flying blind.

Contact

Address

Berkeley Design Technology, Inc.
PO Box #4446
Walnut Creek, CA 94596

Phone
+1 (925) 954-1411