Algorithms

“Getting More from Your Datasets: Data Augmentation, Annotation and Generative Techniques,” a Presentation from Xperi

Peter Corcoran, co-founder of FotoNation (now a core business unit of Xperi) and lead principal investigator and director of C3Imaging (a research partnership between Xperi and the National University of Ireland, Galway), presents the “Getting More from Your Datasets: Data Augmentation, Annotation and Generative Techniques” tutorial at the May 2018 Embedded Vision Summit. Deep learning […]
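
The excerpt above only introduces the topic. As a rough, illustrative sketch of what basic data augmentation looks like in practice (the transforms and parameters below are assumptions for illustration, not taken from the presentation), a few NumPy operations can multiply a single labeled image into many training variants:

```python
# Minimal data-augmentation sketch (illustrative only; the transform choices and
# parameters are assumptions, not taken from the presentation).
import numpy as np

rng = np.random.default_rng(0)

def augment(image, crop_size=28):
    """Return one randomly augmented variant of an HxWxC uint8 image."""
    out = image.copy()

    # Random horizontal flip.
    if rng.random() < 0.5:
        out = out[:, ::-1, :]

    # Random crop back to crop_size x crop_size after 4-pixel zero padding.
    padded = np.pad(out, ((4, 4), (4, 4), (0, 0)))
    top = rng.integers(0, padded.shape[0] - crop_size + 1)
    left = rng.integers(0, padded.shape[1] - crop_size + 1)
    out = padded[top:top + crop_size, left:left + crop_size, :]

    # Random brightness jitter of +/- 20%.
    scale = rng.uniform(0.8, 1.2)
    return np.clip(out.astype(np.float32) * scale, 0, 255).astype(np.uint8)

# Example: expand one 28x28 RGB image into ten augmented variants.
image = rng.integers(0, 256, size=(28, 28, 3), dtype=np.uint8)
augmented = [augment(image) for _ in range(10)]
print(len(augmented), augmented[0].shape)  # 10 (28, 28, 3)
```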

“Deep Quantization for Energy Efficient Inference at the Edge,” a Presentation from Lattice Semiconductor

Hoon Choi, Senior Director of Design Engineering at Lattice Semiconductor, presents the “Deep Quantization for Energy Efficient Inference at the Edge” tutorial at the May 2018 Embedded Vision Summit. Intelligence at the edge is different from intelligence in the cloud in terms of requirements for energy, cost, accuracy and latency. Due to limits on battery […]
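
For context on the quantization theme, the following minimal sketch applies plain symmetric uniform quantization to a weight tensor at several bit widths and reports the resulting error; the scheme and bit widths are illustrative assumptions, not the method presented in the talk:

```python
# Symmetric uniform weight quantization sketch (illustrative; the bit widths and
# scheme are assumptions, not the method described in the presentation).
import numpy as np

def quantize(weights, bits):
    """Quantize a float tensor to signed integers of the given bit width,
    then dequantize back to floats for comparison."""
    qmax = 2 ** (bits - 1) - 1                # e.g. 127 for 8 bits, 7 for 4 bits
    scale = np.max(np.abs(weights)) / qmax    # one scale per tensor
    q = np.clip(np.round(weights / scale), -qmax, qmax).astype(np.int32)
    return q * scale                          # dequantized approximation

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.05, size=(256, 256)).astype(np.float32)

for bits in (8, 4, 2):
    w_hat = quantize(w, bits)
    err = np.mean((w - w_hat) ** 2)
    print(f"{bits}-bit quantization, mean squared error: {err:.2e}")
```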

“Real-time Calibration for Stereo Cameras Using Machine Learning,” a Presentation from Lucid VR

Sheldon Fernandes, Senior Software and Algorithms Engineer at Lucid VR, presents the “Real-time Calibration for Stereo Cameras Using Machine Learning” tutorial at the May 2018 Embedded Vision Summit. Calibration involves capturing raw data and processing it to get useful information about a camera’s properties. Calibration is essential to ensure that a camera’s output is as […]
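
As background on what calibration recovers (the machine-learning approach in the talk is not reproduced here), the sketch below uses the classical direct linear transform to estimate a camera projection matrix from synthetic 3D-to-2D correspondences; the synthetic camera and point data are assumptions for illustration:

```python
# Classical calibration background: recover a 3x4 projection matrix from
# 3D-2D correspondences with the direct linear transform (DLT).
# This is a generic textbook sketch, not the ML method from the presentation.
import numpy as np

rng = np.random.default_rng(0)

# Ground-truth camera: intrinsics K and a simple pose [R | t].
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)
t = np.array([[0.1], [-0.05], [2.0]])
P_true = K @ np.hstack([R, t])

# Synthetic 3D points and their projections (the "raw data").
X = rng.uniform(-1.0, 1.0, size=(20, 3))
X_h = np.hstack([X, np.ones((20, 1))])          # homogeneous 3D points
x_h = (P_true @ X_h.T).T
x = x_h[:, :2] / x_h[:, 2:3]                    # pixel coordinates

# Build the DLT system A p = 0 and solve it with SVD.
rows = []
for Xw, (u, v) in zip(X_h, x):
    rows.append(np.hstack([Xw, np.zeros(4), -u * Xw]))
    rows.append(np.hstack([np.zeros(4), Xw, -v * Xw]))
A = np.array(rows)
_, _, Vt = np.linalg.svd(A)
P_est = Vt[-1].reshape(3, 4)
P_est /= P_est[-1, -1]                          # fix the arbitrary scale

print("max deviation from ground truth:",
      np.abs(P_est - P_true / P_true[-1, -1]).max())
```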

“Think Like an Amateur, Do As an Expert: Lessons from a Career in Computer Vision,” a Keynote Presentation from Dr. Takeo Kanade

Dr. Takeo Kanade, U.A. and Helen Whitaker Professor at Carnegie Mellon University, presents the “Think Like an Amateur, Do As an Expert: Lessons from a Career in Computer Vision” keynote at the May 2018 Embedded Vision Summit. In this keynote presentation, Dr. Kanade shares his experiences and lessons learned in developing a vast range of […]

“Even Faster CNNs: Exploring the New Class of Winograd Algorithms,” a Presentation from Arm

Gian Marco Iodice, Senior Software Engineer in the Machine Learning Group at Arm, presents the “Even Faster CNNs: Exploring the New Class of Winograd Algorithms” tutorial at the May 2018 Embedded Vision Summit. Over the past decade, deep learning networks have revolutionized the task of classification and recognition in a broad area of applications. Deeper […]
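
For readers new to the technique, the sketch below works the smallest Winograd case, F(2,3): two outputs of a 3-tap filter computed with four multiplications instead of six, using the standard transform matrices from Lavin and Gray. Extending this to the 2D tiles used in CNN layers is the subject of the talk and is not shown here:

```python
# Winograd minimal-filtering example F(2,3): two outputs of a 3-tap filter
# over four inputs using 4 multiplications instead of 6.
import numpy as np

# Standard F(2,3) transform matrices (Lavin & Gray, "Fast Algorithms for
# Convolutional Neural Networks").
B_T = np.array([[1, 0, -1, 0],
                [0, 1, 1, 0],
                [0, -1, 1, 0],
                [0, 1, 0, -1]], dtype=float)
G = np.array([[1, 0, 0],
              [0.5, 0.5, 0.5],
              [0.5, -0.5, 0.5],
              [0, 0, 1]], dtype=float)
A_T = np.array([[1, 1, 1, 0],
                [0, 1, -1, -1]], dtype=float)

d = np.array([1.0, 2.0, 3.0, 4.0])   # four input samples
g = np.array([0.5, 0.25, 0.125])     # three filter taps

# Element-wise product in the transform domain: only 4 multiplications.
m = (G @ g) * (B_T @ d)
y_winograd = A_T @ m

# Direct (valid) correlation for reference: y[i] = sum_k d[i+k] * g[k].
y_direct = np.array([d[0:3] @ g, d[1:4] @ g])

print(y_winograd, y_direct)  # both [1.375, 2.25]
```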

“A Physics-based Approach to Removing Shadows and Shading in Real Time,” a Presentation from Tandent Vision Science

Bruce Maxwell, Director of Research at Tandent Vision Science, presents the “A Physics-based Approach to Removing Shadows and Shading in Real Time” tutorial at the May 2018 Embedded Vision Summit. Shadows cast on ground surfaces can create false features and modify the color and appearance of real features, masking important information used by autonomous vehicles […]
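
The presentation's specific method is not reproduced here. The sketch below shows one well-known physics-motivated idea, projecting per-pixel log-chromaticities onto an illumination-invariant direction, which conveys the flavor of physics-based shadow suppression; the invariant angle used is an arbitrary assumed value, and this is not the Tandent approach:

```python
# Generic physics-motivated illustration: project per-pixel log-chromaticities
# onto an illumination-invariant direction (Finlayson-style intrinsic images).
# NOT the Tandent method from the presentation; the invariant angle is an
# arbitrary assumed value.
import numpy as np

def invariant_image(rgb, theta_deg=60.0):
    """Map an HxWx3 float image (values > 0) to a 1-channel, roughly
    illumination-invariant image."""
    eps = 1e-6
    r, g, b = rgb[..., 0] + eps, rgb[..., 1] + eps, rgb[..., 2] + eps
    chroma = np.stack([np.log(r / g), np.log(b / g)], axis=-1)
    theta = np.deg2rad(theta_deg)
    # Unit vector orthogonal to the assumed illumination-variation direction.
    axis = np.array([-np.sin(theta), np.cos(theta)])
    return chroma @ axis

rgb = np.random.default_rng(0).uniform(0.05, 1.0, size=(4, 4, 3))
print(invariant_image(rgb).shape)  # (4, 4)
```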

“Generative Sensing: Reliable Recognition from Unreliable Sensor Data,” a Presentation from Arizona State University

Lina Karam, Professor and Computer Engineering Director at Arizona State University, presents the “Generative Sensing: Reliable Recognition from Unreliable Sensor Data” tutorial at the May 2018 Embedded Vision Summit. While deep neural networks (DNNs) perform on par with – or better than – humans on pristine high-resolution images, DNN performance is significantly worse than human […]
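
To make the problem concrete, the sketch below simulates the kind of degraded sensor data the talk is concerned with (added noise and reduced resolution); the degradation types and strengths are assumptions, and the generative-sensing method itself is not shown:

```python
# Simulating "unreliable" sensor data: noise and resolution loss of the kind
# that causes the DNN accuracy drop motivating the talk. Degradation types and
# strengths are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)

def degrade(image, noise_std=25.0, downsample=4):
    """Return a noisy, low-resolution version of an HxWxC uint8 image."""
    x = image.astype(np.float32)
    # Additive Gaussian sensor noise.
    x = x + rng.normal(0.0, noise_std, size=x.shape)
    # Crude resolution loss: drop pixels, then repeat them back to size.
    x = x[::downsample, ::downsample, :]
    x = np.repeat(np.repeat(x, downsample, axis=0), downsample, axis=1)
    return np.clip(x, 0, 255).astype(np.uint8)

clean = rng.integers(0, 256, size=(224, 224, 3), dtype=np.uint8)
noisy = degrade(clean)
print(noisy.shape)  # (224, 224, 3) -- same size, much less information
```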

“Infusing Visual Understanding in Cloud and Edge Solutions Using State-of-the-Art Microsoft Algorithms,” a Presentation from Microsoft

Anirudh Koul, Senior Data Scientist, and Jin Yamamoto, Principal Program Manager, both from Microsoft, present the “Infusing Visual Understanding in Cloud and Edge Solutions Using State-of-the-Art Microsoft Algorithms” tutorial at the May 2018 Embedded Vision Summit. Microsoft offers its state-of-the-art computer vision algorithms, used internally in several products, through the Cognitive Services cloud APIs.
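
As a minimal sketch of calling one of these APIs from Python, the request below sends an image URL to the Computer Vision analyze endpoint; the region, API version, requested features, placeholder key and image URL are assumptions that may need adjusting for a given subscription:

```python
# Calling the Cognitive Services Computer Vision "analyze" endpoint from Python.
# The region, API version and visual features are assumptions; the subscription
# key and image URL are placeholders.
import requests

endpoint = "https://westus.api.cognitive.microsoft.com/vision/v2.0/analyze"
headers = {
    "Ocp-Apim-Subscription-Key": "YOUR_SUBSCRIPTION_KEY",
    "Content-Type": "application/json",
}
params = {"visualFeatures": "Description,Tags"}
body = {"url": "https://example.com/some-image.jpg"}

response = requests.post(endpoint, headers=headers, params=params, json=body)
response.raise_for_status()
analysis = response.json()

# Print the auto-generated caption and the top few tags.
print(analysis["description"]["captions"][0]["text"])
print([tag["name"] for tag in analysis["tags"][:5]])
```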

“A New Generation of Camera Modules: A Novel Approach and Its Benefits for Embedded Systems,” a Presentation from Allied Vision Technologies

Paul Maria Zalewski, Product Line Manager at Allied Vision Technologies, presents the “A New Generation of Camera Modules: A Novel Approach and Its Benefits for Embedded Systems” tutorial at the May 2018 Embedded Vision Summit. Embedded vision systems have typically relied on low-cost image sensor modules with a MIPI CSI-2 interface. Now, machine vision camera […]

“The OpenVX Computer Vision and Neural Network Inference Library Standard for Portable, Efficient Code,” a Presentation from AMD

Radhakrishna Giduthuri, Software Architect at Advanced Micro Devices (AMD), presents the “OpenVX Computer Vision and Neural Network Inference Library Standard for Portable, Efficient Code” tutorial at the May 2018 Embedded Vision Summit. OpenVX is an industry-standard computer vision and neural network inference API designed for efficient implementation on a variety of embedded platforms. The API […]
