Technical Insights

“New Deep Learning Techniques for Embedded Systems,” a Presentation from Synopsys

Tom Michiels, System Architect for Embedded Vision at Synopsys, presents the “New Deep Learning Techniques for Embedded Systems” tutorial at the May 2018 Embedded Vision Summit. In the past few years, the application domain of deep learning has rapidly expanded. Constant innovation has improved the accuracy and speed of learning and inference. Many techniques are […]

“Creating a Computationally Efficient Embedded CNN Face Recognizer,” a Presentation from PathPartner Technology

Praveen G.B., Technical Lead at PathPartner Technology, presents the “Creating a Computationally Efficient Embedded CNN Face Recognizer” tutorial at the May 2018 Embedded Vision Summit. Face recognition systems have made great progress thanks to the availability of data, deep learning algorithms and better image sensors. Face recognition systems should be tolerant of variations in illumination, pose […]

“Getting More from Your Datasets: Data Augmentation, Annotation and Generative Techniques,” a Presentation from Xperi

Peter Corcoran, co-founder of FotoNation (now a core business unit of Xperi) and lead principal investigator and director of C3Imaging (a research partnership between Xperi and the National University of Ireland, Galway), presents the “Getting More from Your Datasets: Data Augmentation, Annotation and Generative Techniques” tutorial at the May 2018 Embedded Vision Summit. Deep learning […]

“Deep Quantization for Energy Efficient Inference at the Edge,” a Presentation from Lattice Semiconductor

Hoon Choi, Senior Director of Design Engineering at Lattice Semiconductor, presents the “Deep Quantization for Energy Efficient Inference at the Edge” tutorial at the May 2018 Embedded Vision Summit. Intelligence at the edge is different from intelligence in the cloud in terms of requirements for energy, cost, accuracy and latency. Due to limits on battery […]
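
Deep quantization of this kind generally starts from the idea of representing weights and activations with very few bits. As a rough illustration only (this is a generic symmetric uniform quantizer in NumPy, not Lattice's specific scheme; the function names and 8-bit width are illustrative assumptions):

```python
import numpy as np

def quantize(w, bits=8):
    # Symmetric uniform quantizer: map floats onto signed integers in
    # [-(2**(bits-1) - 1), 2**(bits-1) - 1] using a single scale factor.
    scale = np.max(np.abs(w)) / (2 ** (bits - 1) - 1)
    q = np.round(w / scale).astype(np.int32)
    return q, scale

def dequantize(q, scale):
    # Recover approximate float weights, e.g. to measure accuracy loss.
    return q * scale

w = np.array([0.82, -0.41, 0.10, -0.97])
q, s = quantize(w, bits=8)
w_hat = dequantize(q, s)  # round-to-nearest error is bounded by s / 2
```

Shrinking `bits` further (to 4, 2 or 1) trades accuracy for the lower memory and multiplier cost that edge inference budgets demand.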

“Real-time Calibration for Stereo Cameras Using Machine Learning,” a Presentation from Lucid VR

Sheldon Fernandes, Senior Software and Algorithms Engineer at Lucid VR, presents the “Real-time Calibration for Stereo Cameras Using Machine Learning” tutorial at the May 2018 Embedded Vision Summit. Calibration involves capturing raw data and processing it to get useful information about a camera’s properties. Calibration is essential to ensure that a camera’s output is as […]

“Even Faster CNNs: Exploring the New Class of Winograd Algorithms,” a Presentation from Arm

Gian Marco Iodice, Senior Software Engineer in the Machine Learning Group at Arm, presents the “Even Faster CNNs: Exploring the New Class of Winograd Algorithms” tutorial at the May 2018 Embedded Vision Summit. Over the past decade, deep learning networks have revolutionized the task of classification and recognition in a broad range of applications. Deeper […]
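
The core trick behind Winograd convolution can be seen in the smallest case, F(2,3), which produces two outputs of a 3-tap filter from a 4-element input tile using 4 multiplications instead of 6. A minimal 1-D sketch (the transform matrices below are the standard F(2,3) ones; the variable names are illustrative, and this is not Arm's implementation):

```python
import numpy as np

# Standard Winograd F(2,3) transform matrices.
BT = np.array([[1,  0, -1,  0],
               [0,  1,  1,  0],
               [0, -1,  1,  0],
               [0,  1,  0, -1]], dtype=float)   # input transform
G  = np.array([[1.0,  0.0, 0.0],
               [0.5,  0.5, 0.5],
               [0.5, -0.5, 0.5],
               [0.0,  0.0, 1.0]])               # filter transform
AT = np.array([[1, 1,  1,  0],
               [0, 1, -1, -1]], dtype=float)    # output transform

def winograd_f23(d, g):
    """Two outputs of sliding a 3-tap filter g over a 4-element tile d.

    The elementwise product in the transformed domain costs 4 multiplies,
    versus 6 for the direct dot products.
    """
    return AT @ ((G @ g) * (BT @ d))

d = np.array([1.0, 2.0, 3.0, 4.0])   # input tile
g = np.array([1.0, 1.0, 1.0])        # filter taps
y = winograd_f23(d, g)               # equals [d[0:3] @ g, d[1:4] @ g]
```

2-D CNN layers apply the same idea per tile (e.g. F(2x2, 3x3)), where the multiplication savings compound across both dimensions.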

“Generative Sensing: Reliable Recognition from Unreliable Sensor Data,” a Presentation from Arizona State University

Lina Karam, Professor and Computer Engineering Director at Arizona State University, presents the “Generative Sensing: Reliable Recognition from Unreliable Sensor Data” tutorial at the May 2018 Embedded Vision Summit. While deep neural networks (DNNs) perform on par with – or better than – humans on pristine high-resolution images, DNN performance is significantly worse than human […]

“The OpenVX Computer Vision and Neural Network Inference Library Standard for Portable, Efficient Code,” a Presentation from AMD

Radhakrishna Giduthuri, Software Architect at Advanced Micro Devices (AMD), presents the “OpenVX Computer Vision and Neural Network Inference Library Standard for Portable, Efficient Code” tutorial at the May 2018 Embedded Vision Summit. OpenVX is an industry-standard computer vision and neural network inference API designed for efficient implementation on a variety of embedded platforms. The API […]

“Deploying CNN-based Vision Solutions on a $3 Microcontroller,” a Presentation from Au-Zone Technologies

Greg Lytle, VP of Engineering at Au-Zone Technologies, presents the “Deploying CNN-based Vision Solutions on a $3 Microcontroller” tutorial at the May 2018 Embedded Vision Summit. In this presentation, Lytle explains how his company designed, trained and deployed a CNN-based embedded vision solution on a low-cost, Cortex-M-based microcontroller (MCU). He describes the steps taken to […]

“Deep Understanding of Shopper Behaviors and Interactions Using Computer Vision,” a Presentation from the Università Politecnica delle Marche

Emanuele Frontoni, Professor, and Rocco Pietrini, Ph.D. student, both of the Università Politecnica delle Marche, present the “Deep Understanding of Shopper Behaviors and Interactions Using Computer Vision” tutorial at the May 2018 Embedded Vision Summit. In retail environments, there’s great value in understanding how shoppers move in the space and interact with products. And, while […]

Here you’ll find a wealth of practical technical insights and expert advice to help you bring AI and visual intelligence into your products without flying blind.

Contact

Address

Berkeley Design Technology, Inc.
PO Box #4446
Walnut Creek, CA 94596

Phone
+1 (925) 954-1411