Edge AI and Vision Alliance

“Selecting the Right Imager for Your Embedded Vision Application,” a Presentation from Capable Robot Components

Chris Osterwood, Founder and CEO of Capable Robot Components, presents the “Selecting the Right Imager for Your Embedded Vision Application” tutorial at the May 2019 Embedded Vision Summit. The performance of your embedded vision product is inextricably linked to the imager and lens it uses. Selecting these critical components is sometimes overwhelming due to the […]
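
As a rough illustration of one trade-off behind imager selection (the helper function and numbers below are illustrative assumptions, not taken from the presentation), this back-of-the-envelope sketch estimates how many horizontal pixels an imager needs in order to resolve a given feature size across the field of view:

```python
import math

def required_horizontal_pixels(fov_width_m: float, smallest_feature_m: float,
                               pixels_per_feature: int = 2) -> int:
    """Estimate the horizontal imager resolution needed to resolve a feature.

    fov_width_m        -- width of the scene covered at the working distance
    smallest_feature_m -- smallest detail that must be resolved
    pixels_per_feature -- sampling margin (Nyquist suggests at least 2)
    """
    return math.ceil(fov_width_m / smallest_feature_m * pixels_per_feature)

# Example: a 4 m wide scene in which 5 mm features must be resolved
# needs roughly 1600 horizontal pixels, so a 1080p-class imager suffices.
print(required_horizontal_pixels(fov_width_m=4.0, smallest_feature_m=0.005))
```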

Computer Vision Developer Survey from the Embedded Vision Alliance — Tell Us What You Think!

The Embedded Vision Alliance is conducting our annual survey to understand what types of technologies are needed by product developers who are incorporating computer vision in new systems and applications. This is our sixth year conducting this survey and we want to make sure we have your input to help guide the focus of technology […]

“Introduction to Optics for Embedded Vision,” a Presentation from Jessica Gehlhar

Jessica Gehlhar, formerly an imaging engineer at Edmund Optics, presents the “Introduction to Optics for Embedded Vision” tutorial at the May 2019 Embedded Vision Summit. This talk provides an introduction to optics for embedded vision system and algorithm developers. Gehlhar begins by presenting fundamental imaging lens specifications and quality metrics such as MTF. She explains […]
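
For readers new to these specifications, here is a minimal sketch (thin-lens approximation, assumed example values; not from Gehlhar's slides) relating focal length and sensor width to angular field of view, and f-number to the diffraction-limited spot size that ultimately bounds MTF:

```python
import math

def horizontal_fov_deg(sensor_width_mm: float, focal_length_mm: float) -> float:
    """Angular field of view under the thin-lens approximation."""
    return 2 * math.degrees(math.atan(sensor_width_mm / (2 * focal_length_mm)))

def airy_disk_diameter_um(f_number: float, wavelength_nm: float = 550.0) -> float:
    """Diffraction-limited spot diameter; compare against the pixel pitch."""
    return 2.44 * (wavelength_nm / 1000.0) * f_number

# Example: a 6 mm lens on a ~5.7 mm wide sensor at f/2.8.
print(f"FOV: {horizontal_fov_deg(5.7, 6.0):.1f} deg")     # ~50.8 deg
print(f"Airy disk: {airy_disk_diameter_um(2.8):.2f} um")  # ~3.76 um
```

If the Airy disk is much larger than the pixel pitch, adding more pixels to the imager will not add resolvable detail, which is why lens and imager choices need to be made together.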

Embedded Vision Insights: September 24, 2019 Edition

LETTER FROM THE EDITOR Dear Colleague, Every year the Embedded Vision Alliance surveys computer vision developers to understand what chips and tools they use to build visual AI systems. This is our sixth year conducting the survey and we want to make sure we have your input, since many technology suppliers use the survey results […]

“OpenCV: Current Status and Future Plans,” a Presentation from OpenCV.org

Satya Mallick, Interim CEO of OpenCV.org, presents the “OpenCV: Current Status and Future Plans” tutorial at the May 2019 Embedded Vision Summit. With over two million downloads per week, OpenCV is the most popular open source computer vision library in the world. It implements over 2500 optimized algorithms, works on all major operating systems, […]
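
As a quick taste of the library the talk covers, the minimal example below (the image file name is only a placeholder) loads a picture and runs the Canny edge detector, one of those optimized algorithms:

```python
import cv2  # pip install opencv-python

# Load an image (substitute any image file for this placeholder name).
image = cv2.imread("example.jpg")
if image is None:
    raise SystemExit("example.jpg not found -- substitute any image file")

# Convert to grayscale and run Canny edge detection with conventional
# low/high hysteresis thresholds.
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 100, 200)

cv2.imwrite("edges.png", edges)
print("Edge map written to edges.png")
```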

“Improving the Safety and Performance of Automated Vehicles Through Precision Localization,” a Presentation from VSI Labs

Phil Magney, founder of VSI Labs, presents the “Improving the Safety and Performance of Automated Vehicles Through Precision Localization” tutorial at the May 2019 Embedded Vision Summit. How does a self-driving car know where it is? Magney explains how autonomous vehicles localize themselves against their surroundings through the use of a variety of sensors along […]
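
As a generic illustration of the underlying idea, and not VSI Labs' implementation, the sketch below fuses dead-reckoned odometry with a noisy GNSS fix using a one-dimensional Kalman filter; the function name and numbers are assumptions made for the example:

```python
def kalman_1d(position_est, variance_est, odometry_delta, odometry_var,
              gnss_measurement, gnss_var):
    """One predict/update cycle of a 1-D Kalman filter for vehicle position."""
    # Predict: propagate the estimate with odometry, inflating uncertainty.
    position_pred = position_est + odometry_delta
    variance_pred = variance_est + odometry_var

    # Update: blend in the GNSS fix, weighted by relative confidence.
    gain = variance_pred / (variance_pred + gnss_var)
    position_new = position_pred + gain * (gnss_measurement - position_pred)
    variance_new = (1.0 - gain) * variance_pred
    return position_new, variance_new

# Example: odometry says we moved 1.0 m, GNSS reports 101.8 m along the lane.
print(kalman_1d(100.0, 0.5, 1.0, 0.1, 101.8, 2.0))
```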

Embedded Vision Insights: September 10, 2019 Edition

LETTER FROM THE EDITOR Dear Colleague, Deep Learning for Computer Vision with TensorFlow 2.0 is the Embedded Vision Alliance's in-person, hands-on technical training class. The next session will take place November 1 in Fremont, California, hosted by Alliance Member company Mentor. This one-day hands-on overview will give you the critical knowledge you need to develop […]

“Eye Tracking for the Future: The Eyes Have It,” a Presentation from Parallel Rules

Peter Milford, President of Parallel Rules, presents the “Eye Tracking for the Future: The Eyes Have It” tutorial at the May 2019 Embedded Vision Summit. Eye interaction technologies complement augmented and virtual reality head-mounted displays. In this presentation, Milford reviews eye tracking technology, concentrating mainly on camera-based solutions and associated system requirements. Wearable eye tracking […]
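
As a minimal starting point for camera-based eye detection (it uses the pretrained Haar eye cascade bundled with the opencv-python package, and is not specific to the talk), the sketch below finds candidate eye regions in a single frame:

```python
import cv2  # pip install opencv-python

# OpenCV's Python package bundles a pretrained Haar cascade for eyes.
eye_detector = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

# The frame path is a placeholder; a real tracker would read from a camera.
frame = cv2.imread("face_frame.jpg")
if frame is None:
    raise SystemExit("face_frame.jpg not found -- substitute any face image")

gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# detectMultiScale returns (x, y, w, h) rectangles for candidate eye regions.
eyes = eye_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in eyes:
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("eyes_detected.png", frame)
print(f"Found {len(eyes)} candidate eye regions")
```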

“Hardware-aware Deep Neural Network Design,” a Presentation from Facebook

Peter Vajda, Research Manager at Facebook, presents the “Hardware-aware Deep Neural Network Design” tutorial at the May 2019 Embedded Vision Summit. A central problem in the deployment of deep neural networks is maximizing accuracy within the compute performance constraints of embedded devices. In this talk, Vajda discusses approaches to addressing this challenge based on automated […]
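
One simple way to see the constraint the talk addresses is to count the multiply-accumulate operations a convolution layer costs on-device; the sketch below (illustrative layer dimensions and helper names, not from the talk) compares a standard convolution against a MobileNet-style depthwise-separable factorization:

```python
def conv2d_macs(h_out: int, w_out: int, c_in: int, c_out: int, k: int) -> int:
    """Multiply-accumulates for a standard (non-depthwise) KxK convolution."""
    return h_out * w_out * c_out * c_in * k * k

def depthwise_separable_macs(h_out: int, w_out: int, c_in: int, c_out: int,
                             k: int) -> int:
    """MACs for a depthwise KxK + pointwise 1x1 factorization."""
    depthwise = h_out * w_out * c_in * k * k
    pointwise = h_out * w_out * c_in * c_out
    return depthwise + pointwise

# Example layer: 112x112 output, 32 -> 64 channels, 3x3 kernel.
std = conv2d_macs(112, 112, 32, 64, 3)
sep = depthwise_separable_macs(112, 112, 32, 64, 3)
print(f"standard: {std / 1e6:.1f} M MACs, separable: {sep / 1e6:.1f} M MACs")
```

With these example dimensions the separable layer needs roughly an eighth of the MACs of the standard one, which is the kind of accuracy-versus-compute trade-off that hardware-aware network design automates.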

“Training Data for Your CNN: What You Need and How to Get It,” a Presentation from Aquifi

Carlo Dal Mutto, CTO of Aquifi, presents the “Training Data for Your CNN: What You Need and How to Get It” tutorial at the May 2019 Embedded Vision Summit. A fundamental building block for AI development is the development of a proper training set to allow effective training of neural nets. Developing such a training […]
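
As one small illustration of stretching a training set further (a generic augmentation pass, not Aquifi's pipeline; the function name is an assumption), the sketch below generates flipped and brightness-jittered variants of each image:

```python
import numpy as np

def augment(image: np.ndarray, rng: np.random.Generator):
    """Return simple label-preserving variants of an HxWxC uint8 image."""
    flipped = image[:, ::-1, :]                      # horizontal flip
    jitter = int(rng.integers(-30, 31))              # random brightness shift
    brightened = np.clip(image.astype(np.int16) + jitter, 0, 255).astype(np.uint8)
    return [image, flipped, brightened]

# Example: one synthetic 64x64 RGB image becomes three training samples.
rng = np.random.default_rng(seed=0)
sample = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
print(len(augment(sample, rng)))  # -> 3
```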

Here you’ll find a wealth of practical technical insights and expert advice to help you bring AI and visual intelligence into your products without flying blind.

Contact

Address

Berkeley Design Technology, Inc.
PO Box #4446
Walnut Creek, CA 94596

Phone
+1 (925) 954-1411