Summit 2019

“Five+ Techniques for Efficient Implementation of Neural Networks,” a Presentation from Synopsys

Bert Moons, Hardware Design Architect at Synopsys, presents the “Five+ Techniques for Efficient Implementation of Neural Networks” tutorial at the May 2019 Embedded Vision Summit. Embedding real-time, large-scale deep learning vision applications at the edge is challenging due to their huge computational, memory and bandwidth requirements. System architects can mitigate these demands by modifying deep […]

“Building Complete Embedded Vision Systems on Linux — From Camera to Display,” a Presentation from Montgomery One

Clay D. Montgomery, Freelance Embedded Multimedia Developer at Montgomery One, presents the “Building Complete Embedded Vision Systems on Linux—From Camera to Display” tutorial at the May 2019 Embedded Vision Summit. There’s a huge wealth of open-source software components available today for embedding vision on the latest SoCs from suppliers such as NXP, Broadcom, TI and

“Creating Efficient, Flexible and Scalable Cloud Computer Vision Applications: An Introduction,” a Presentation from GumGum

Nishita Sant, Computer Vision Manager, and Greg Chu, Senior Computer Vision Scientist, both of GumGum, present the “Creating Efficient, Flexible and Scalable Cloud Computer Vision Applications: An Introduction” tutorial at the May 2019 Embedded Vision Summit. Given the growing utility of computer vision applications, how can you deploy these services in high-traffic production environments? Sant

“Selecting the Right Imager for Your Embedded Vision Application,” a Presentation from Capable Robot Components

Chris Osterwood, Founder and CEO of Capable Robot Components, presents the “Selecting the Right Imager for Your Embedded Vision Application” tutorial at the May 2019 Embedded Vision Summit. The performance of your embedded vision product is inexorably linked to the imager and lens it uses. Selecting these critical components is sometimes overwhelming due to the

“Game Changing Depth Sensing Technique Enables Simpler, More Flexible 3D Solutions,” a Presentation from Magik Eye

Takeo Miyazawa, Founder and CEO of Magik Eye, presents the “Game Changing Depth Sensing Technique Enables Simpler, More Flexible 3D Solutions” tutorial at the May 2019 Embedded Vision Summit. Magik Eye is a global team of computer vision veterans who have developed a new method to determine depth from light directly without the need to

“Machine Learning at the Edge in Smart Factories Using TI Sitara Processors,” a Presentation from Texas Instruments

Manisha Agrawal, Software Applications Engineer at Texas Instruments, presents the “Machine Learning at the Edge in Smart Factories Using TI Sitara Processors” tutorial at the May 2019 Embedded Vision Summit. Whether it’s called “Industry 4.0,” the “industrial internet of things” (IIoT) or “smart factories,” a fundamental shift is underway in manufacturing: factories are becoming smarter. This

“Using High-level Synthesis to Bridge the Gap Between Deep Learning Frameworks and Custom Hardware Accelerators,” a Presentation from Mentor

Michael Fingeroff, HLS Technologist at Mentor, presents the “Using High-level Synthesis to Bridge the Gap Between Deep Learning Frameworks and Custom Hardware Accelerators” tutorial at the May 2019 Embedded Vision Summit. Recent years have seen an explosion in machine learning/AI algorithms with a corresponding need to use custom hardware for best performance and power efficiency.

“Fundamental Security Challenges of Embedded Vision,” a Presentation from Synopsys

Mike Borza, Principal Security Technologist at Synopsys, presents the “Fundamental Security Challenges of Embedded Vision” tutorial at the May 2019 Embedded Vision Summit. As facial recognition, surveillance and smart vehicles become an accepted part of our daily lives, product and chip designers are coming to grips with the business need to secure the data that

“Introduction to Optics for Embedded Vision,” a Presentation from Jessica Gehlhar

Jessica Gehlhar, formerly an imaging engineer at Edmund Optics, presents the “Introduction to Optics for Embedded Vision” tutorial at the May 2019 Embedded Vision Summit. This talk provides an introduction to optics for embedded vision system and algorithm developers. Gehlhar begins by presenting fundamental imaging lens specifications and quality metrics such as MTF. She explains

“Practical Approaches to Training Data Strategy: Bias, Legal and Ethical Considerations,” a Presentation from Samasource

Audrey Jill Boguchwal, Senior Product Manager at Samasource, presents the “Practical Approaches to Training Data Strategy: Bias, Legal and Ethical Considerations” tutorial at the May 2019 Embedded Vision Summit. Recent McKinsey research cites the top five limitations that prevent companies from adopting AI technology. Training data strategy is a common thread. Companies face challenges obtaining
