Fundamentals

“DNN Training Data: How to Know What You Need and How to Get It,” a Presentation from Tech Mahindra

Abhishek Sharma, Practice Head for Engineering AI at Tech Mahindra, presents the “DNN Training Data: How to Know What You Need and How to Get It” tutorial at the May 2021 Embedded Vision Summit. Successful training of deep neural networks requires the right amounts and types of annotated training data. Collecting, curating and labeling this […]

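The "how much data do I need" question usually starts with an audit of what is already labeled. As a rough sketch of that idea (not material from the presentation; the 500-example target and the class names are purely illustrative), one might count annotations per class and flag the classes that fall short:

from collections import Counter

def audit_labels(labels, min_per_class=500):
    # Count annotated examples per class and flag classes that fall short
    # of the (purely illustrative) target.
    counts = Counter(labels)
    shortfalls = {cls: min_per_class - n
                  for cls, n in counts.items() if n < min_per_class}
    return counts, shortfalls

labels = ["car"] * 1200 + ["pedestrian"] * 800 + ["cyclist"] * 90
counts, shortfalls = audit_labels(labels)
print(counts)      # Counter({'car': 1200, 'pedestrian': 800, 'cyclist': 90})
print(shortfalls)  # {'cyclist': 410} -> roughly 410 more cyclist annotations needed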

“10 Things You Must Know Before Designing Your Own Camera,” a Presentation from Panopteo

Alex Fink, consultant at Panopteo, presents the “10 Things You Must Know Before Designing Your Own Camera” tutorial at the May 2021 Embedded Vision Summit. Computer vision requires vision. This is why companies that use computer vision often decide they need to create a custom camera module (and perhaps other custom sensors) that meets the […]


“Maintaining DNN Accuracy When the Real World is Changing,” a Presentation from Observa

Erik Chelstad, CTO and co-founder of Observa, presents the “Maintaining DNN Accuracy When the Real World is Changing” tutorial at the May 2021 Embedded Vision Summit. We commonly train deep neural networks (DNNs) on existing data and then use the trained model to make predictions on new data. Once trained, these predictive models approximate a […]

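One common way to notice the kind of real-world change the talk addresses (a generic monitoring technique, not necessarily the speaker's) is to log prediction confidences at deployment time and periodically compare the live distribution against that baseline. A minimal sketch using the Population Stability Index, with an assumed rule-of-thumb alert threshold of roughly 0.2:

import numpy as np

def psi(baseline, current, bins=10):
    # Population Stability Index between two confidence distributions in [0, 1].
    # Rule of thumb (an assumption, not from the talk): > 0.2 suggests drift.
    edges = np.linspace(0.0, 1.0, bins + 1)
    b, _ = np.histogram(baseline, bins=edges)
    c, _ = np.histogram(current, bins=edges)
    b = np.clip(b / b.sum(), 1e-6, None)
    c = np.clip(c / c.sum(), 1e-6, None)
    return float(np.sum((c - b) * np.log(c / b)))

rng = np.random.default_rng(0)
baseline_conf = rng.beta(8, 2, 10_000)  # confidences logged at deployment time
current_conf = rng.beta(5, 3, 10_000)   # confidences observed this week
print(f"PSI = {psi(baseline_conf, current_conf):.3f}")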

“Optimizing ML Systems for Real-World Deployment,” a Presentation from iRobot

Danielle Dean, Technical Director of Machine Learning at iRobot, presents the “Optimizing ML Systems for Real-World Deployment” tutorial at the May 2021 Embedded Vision Summit. In the real world, machine learning models are components of a broader software application or system. In this talk, Dean explores the importance of optimizing the system as a whole, not […]

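A concrete instance of optimizing the system rather than the model alone is timing every stage of the pipeline, since capture, preprocessing, or postprocessing often dominates end-to-end latency. The sketch below uses placeholder stage functions (sleeps standing in for real work) purely to show the measurement pattern; it is not code from the talk:

import time

def timed(stage, *args):
    # Run one pipeline stage and return (output, elapsed milliseconds).
    start = time.perf_counter()
    out = stage(*args)
    return out, (time.perf_counter() - start) * 1e3

# Placeholder stages: the sleeps stand in for camera capture, resize/normalize,
# DNN inference and detection decoding in a real pipeline.
def capture():       time.sleep(0.004); return "frame"
def preprocess(f):   time.sleep(0.007); return "tensor"
def infer(t):        time.sleep(0.012); return "raw_output"
def postprocess(r):  time.sleep(0.002); return "detections"

frame, t_cap = timed(capture)
tensor, t_pre = timed(preprocess, frame)
raw, t_inf = timed(infer, tensor)
dets, t_post = timed(postprocess, raw)
total = t_cap + t_pre + t_inf + t_post
print(f"capture {t_cap:.1f} ms | preprocess {t_pre:.1f} ms | "
      f"inference {t_inf:.1f} ms | postprocess {t_post:.1f} ms | "
      f"end-to-end {total:.1f} ms")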

“Introduction to Simultaneous Localization and Mapping (SLAM),” a Presentation from Gareth Cross

Independent game developer (and former technical lead of state estimation at Skydio) Gareth Cross presents the “Introduction to Simultaneous Localization and Mapping (SLAM)” tutorial at the May 2021 Embedded Vision Summit. This talk provides an introduction to the fundamentals of simultaneous localization and mapping (SLAM). Cross aims to provide foundational knowledge, and viewers are not […]

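For readers new to the topic, the "localization" half of SLAM builds on dead reckoning: integrating odometry to propagate a pose estimate, which drifts unless it is corrected by map or landmark observations. A minimal 2D propagation step, written as an illustrative sketch rather than anything from the presentation:

import math

def propagate(pose, v, omega, dt):
    # Advance a 2D pose (x, y, heading) given forward speed v [m/s] and
    # yaw rate omega [rad/s] over dt seconds (simple Euler integration).
    x, y, theta = pose
    return (x + v * math.cos(theta) * dt,
            y + v * math.sin(theta) * dt,
            theta + omega * dt)

pose = (0.0, 0.0, 0.0)
for _ in range(100):  # one second of motion in 10 ms steps
    pose = propagate(pose, v=1.0, omega=0.5, dt=0.01)
print(pose)  # without landmark corrections, odometry error accumulates unbounded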

“Building the Eyes of a Vision System: From Photons to Bits,” a Presentation from GoPro

Jon Stern, Director of Optical Systems at GoPro, presents the “Building the Eyes of a Vision System: From Photons to Bits” tutorial at the May 2021 Embedded Vision Summit. In this tutorial, Stern presents a guide to the multidisciplinary science of building the eyes of a vision system. CMOS image sensors have been instrumental in […]

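The photons-to-bits chain can be made concrete with a back-of-the-envelope pixel SNR estimate: photon arrival is Poisson-distributed, so shot noise grows as the square root of the collected signal, and read noise adds in quadrature. The quantum efficiency and read noise values below are illustrative assumptions, not figures from the talk:

import math

def pixel_snr_db(photons, quantum_efficiency=0.7, read_noise_e=2.0):
    # Signal electrons over shot noise plus read noise (added in quadrature);
    # dark current is ignored for simplicity.
    signal_e = photons * quantum_efficiency
    noise_e = math.sqrt(signal_e + read_noise_e ** 2)
    return 20 * math.log10(signal_e / noise_e)

for photons in (100, 1_000, 10_000):
    print(f"{photons:>6} photons -> SNR {pixel_snr_db(photons):.1f} dB")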

“Modern Machine Vision from Basics to Advanced Deep Learning,” a Presentation from Deep Netts

Zoran Sevarac, Associate Professor at the University of Belgrade and Co-founder and CEO of Deep Netts, presents the “Modern Machine Vision from Basics to Advanced Deep Learning” tutorial at the May 2021 Embedded Vision Summit. In this talk, Sevarac introduces the fundamentals of deep learning for image understanding. He begins by explaining the basics of […]

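A useful bridge from classical machine vision to deep learning is the convolution operation: the same sliding-window filter that detects edges when designed by hand is what a convolutional network learns from data. A small NumPy sketch (illustrative only, not code from the talk) of a single-channel convolution with a Sobel-style kernel:

import numpy as np

def conv2d(image, kernel):
    # Naive "valid" 2D convolution (strictly cross-correlation, which is
    # what most deep learning frameworks compute as well).
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
image = np.zeros((8, 8))
image[:, 4:] = 1.0                 # a vertical step edge
print(conv2d(image, sobel_x))      # strong responses along the edge columns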

“A Survey of CMOS Imagers and Lenses—and the Trade-offs You Should Consider,” a Presentation from Capable Robot Components

Chris Osterwood, Founder and CEO of Capable Robot Components, presents the “A Survey of CMOS Imagers and Lenses—and the Trade-offs You Should Consider” tutorial at the May 2021 Embedded Vision Summit. Selecting the right imager and lens for your vision application is often a daunting challenge due to the vast number of products on the market […]

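One of the first imager/lens trade-offs is field of view versus scene resolution: sensor width and focal length set the angular field of view, and together with pixel count they determine how many pixels land on an object at a given distance. The sketch below uses the simple pinhole-camera model with assumed example values (a roughly 6.2 mm wide sensor and a 4 mm lens), not numbers from the talk:

import math

def horizontal_fov_deg(sensor_width_mm, focal_length_mm):
    # Angular horizontal field of view from the pinhole-camera model.
    return 2 * math.degrees(math.atan(sensor_width_mm / (2 * focal_length_mm)))

def pixels_on_target(target_width_m, distance_m, h_pixels,
                     sensor_width_mm, focal_length_mm):
    # Approximate horizontal pixels covering a target of the given width
    # at the given distance (fronto-parallel target assumed).
    scene_width_m = distance_m * sensor_width_mm / focal_length_mm
    return h_pixels * target_width_m / scene_width_m

# Assumed example: ~6.2 mm wide sensor, 4 mm lens, 1920 horizontal pixels.
print(f"horizontal FOV: {horizontal_fov_deg(6.2, 4.0):.1f} deg")
print(f"pixels across a 0.5 m object at 10 m: "
      f"{pixels_on_target(0.5, 10.0, 1920, 6.2, 4.0):.0f}")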

“Data Collection in the Wild,” a Presentation from BMW Group

Vladimir Haltakov, Self-Driving Car Engineer at BMW Group, presents the “Data Collection in the Wild” tutorial at the May 2021 Embedded Vision Summit. In scientific papers, computer vision models are usually evaluated on well-defined training and test datasets. In practice, however, collecting high-quality data that accurately represents the real world is a challenging problem. Developing […]


“Introduction to DNN Model Compression Techniques,” a Presentation from Xailient

Sabina Pokhrel, Customer Success AI Engineer at Xailient, presents the “Introduction to DNN Model Compression Techniques” tutorial at the May 2021 Embedded Vision Summit. Embedding real-time large-scale deep learning vision applications at the edge is challenging due to their huge computational, memory, and bandwidth requirements. System architects can mitigate these demands by modifying deep neural networks […]

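Quantization is one of the compression techniques in this family: storing weights as 8-bit integers instead of 32-bit floats cuts memory and bandwidth by roughly 4x and enables faster integer arithmetic. The snippet below is a minimal post-training, symmetric per-tensor quantization sketch for illustration only; a production flow would normally use the deployment toolchain's own quantizer:

import numpy as np

def quantize_int8(weights):
    # Symmetric per-tensor quantization of float32 weights to int8.
    scale = np.max(np.abs(weights)) / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.05, size=(256, 256)).astype(np.float32)
q, scale = quantize_int8(w)
err = np.abs(w - dequantize(q, scale)).mean()
print(f"storage: {w.nbytes} B -> {q.nbytes} B, mean abs error {err:.6f}")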

Here you’ll find a wealth of practical technical insights and expert advice to help you bring AI and visual intelligence into your products without flying blind.
