Fundamentals

“CMOS Image Sensors: A Guide to Building the Eyes of a Vision System,” a Presentation from GoPro

Jon Stern, Director of Optical Systems at GoPro, presents the “CMOS Image Sensors: A Guide to Building the Eyes of a Vision System” tutorial at the September 2020 Embedded Vision Summit. Improvements in CMOS image sensors have been instrumental in lowering barriers for embedding vision into a broad range of systems. For example, a high […]

“Deep Learning on Mobile Devices,” a Presentation from Siddha Ganju

Independent AI architect Siddha Ganju presents the “Deep Learning on Mobile Devices” tutorial at the September 2020 Embedded Vision Summit. Over the last few years, convolutional neural networks (CNNs) have grown enormously in popularity, especially for computer vision. Many applications running on smartphones and wearable devices could benefit from the capabilities of CNNs. However, CNNs’ […]

“Practical Image Data Augmentation Methods for Training Deep Learning Object Detection Models,” a Presentation from EJ Technology Consultants

Evan Juras, Computer Vision Engineer at EJ Technology Consultants, presents the “Practical Image Data Augmentation Methods for Training Deep Learning Object Detection Models” tutorial at the September 2020 Embedded Vision Summit. Data augmentation is a method of expanding deep learning training datasets by making various automated modifications to existing images in the dataset. The resulting […]
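The excerpt cuts off before Juras’s specific methods are listed, but the general idea of automated image modifications can be sketched briefly. Below is an illustrative NumPy example (not taken from the talk) showing two common augmentations, a horizontal flip and a random brightness shift:

```python
import numpy as np

def augment(image, rng):
    """Return simple augmented variants of an H x W x C uint8 image.

    Two common automated modifications -- an illustrative sketch of the
    general technique, not the specific methods from the presentation.
    """
    variants = []
    # Horizontal flip: mirror the image left-to-right.
    variants.append(image[:, ::-1, :])
    # Brightness shift: add a random offset, clipped to the valid uint8 range.
    offset = int(rng.integers(-40, 41))
    shifted = np.clip(image.astype(np.int16) + offset, 0, 255).astype(np.uint8)
    variants.append(shifted)
    return variants

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
flipped, brightened = augment(img, rng)
```

For object detection, each variant must also carry a correspondingly transformed bounding box (e.g., a horizontal flip mirrors the box’s x-coordinates); production pipelines typically chain many such transforms using library support rather than hand-rolled code.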

“Deploying AI Software to Embedded Devices Using Open Standards,” a Presentation from Codeplay Software

Andrew Richards, Co-founder and CEO of Codeplay Software, presents the “Deploying AI Software to Embedded Devices Using Open Standards” tutorial at the September 2020 Embedded Vision Summit. AI software developers need to deploy diverse classes of algorithms in embedded devices, including deep learning, machine vision and sensor fusion. Adapting these algorithms to run efficiently on […]

“Trends in Neural Network Topologies for Vision at the Edge,” a Presentation from Synopsys

Pierre Paulin, Director of R&D for Embedded Vision at Synopsys, presents the “Trends in Neural Network Topologies for Vision at the Edge” tutorial at the September 2020 Embedded Vision Summit. The widespread adoption of deep neural networks (DNNs) in embedded vision applications has increased the importance of creating DNN topologies that maximize accuracy while minimizing […]

“How to Choose a 3D Vision Sensor,” a Presentation from Capable Robot Components

Chris Osterwood, Founder and CEO of Capable Robot Components, presents the “How to Choose a 3D Vision Sensor” tutorial at the May 2019 Embedded Vision Summit. Designers of autonomous vehicles, robots and many other systems are faced with a critical challenge: Which 3D vision sensor technology to use? There are a wide variety of sensors […]

“Five+ Techniques for Efficient Implementation of Neural Networks,” a Presentation from Synopsys

Bert Moons, Hardware Design Architect at Synopsys, presents the “Five+ Techniques for Efficient Implementation of Neural Networks” tutorial at the May 2019 Embedded Vision Summit. Embedding real-time, large-scale deep learning vision applications at the edge is challenging due to their huge computational, memory and bandwidth requirements. System architects can mitigate these demands by modifying deep […]
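The excerpt is truncated before the techniques are enumerated, but one widely used efficiency technique in this family is post-training quantization, which stores weights as 8-bit integers instead of 32-bit floats. A toy NumPy sketch (illustrative only; not necessarily one of the five-plus techniques Moons presents):

```python
import numpy as np

def quantize_int8(weights):
    """Symmetric post-training quantization: map float weights to int8
    plus a single scale factor, cutting weight storage by 4x."""
    scale = np.abs(weights).max() / 127.0  # largest magnitude maps to +/-127
    q = np.round(weights / scale).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    # Recover approximate float weights for computation or inspection.
    return q.astype(np.float32) * scale

w = np.array([0.5, -1.27, 0.0, 1.0], dtype=np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
```

Beyond the 4x storage saving, int8 arithmetic maps onto much cheaper hardware multipliers than float32, which is why quantization is a staple of edge inference; per-channel scales and quantization-aware training reduce the accuracy loss in practice.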

“Building Complete Embedded Vision Systems on Linux — From Camera to Display,” a Presentation from Montgomery One

Clay D. Montgomery, Freelance Embedded Multimedia Developer at Montgomery One, presents the “Building Complete Embedded Vision Systems on Linux—From Camera to Display” tutorial at the May 2019 Embedded Vision Summit. There’s a huge wealth of open-source software components available today for embedding vision on the latest SoCs from suppliers such as NXP, Broadcom, TI and […]

“Creating Efficient, Flexible and Scalable Cloud Computer Vision Applications: An Introduction,” a Presentation from GumGum

Nishita Sant, Computer Vision Manager, and Greg Chu, Senior Computer Vision Scientist, both of GumGum, present the “Creating Efficient, Flexible and Scalable Cloud Computer Vision Applications: An Introduction” tutorial at the May 2019 Embedded Vision Summit. Given the growing utility of computer vision applications, how can you deploy these services in high-traffic production environments? Sant […]

“Selecting the Right Imager for Your Embedded Vision Application,” a Presentation from Capable Robot Components

Chris Osterwood, Founder and CEO of Capable Robot Components, presents the “Selecting the Right Imager for Your Embedded Vision Application” tutorial at the May 2019 Embedded Vision Summit. The performance of your embedded vision product is inextricably linked to the imager and lens it uses. Selecting these critical components is sometimes overwhelming due to the […]
