Fundamentals

“Fundamental Security Challenges of Embedded Vision,” a Presentation from Synopsys

Mike Borza, Principal Security Technologist at Synopsys, presents the “Fundamental Security Challenges of Embedded Vision” tutorial at the May 2019 Embedded Vision Summit. As facial recognition, surveillance and smart vehicles become an accepted part of our daily lives, product and chip designers are coming to grips with the business need to secure the data that […]

“Introduction to Optics for Embedded Vision,” a Presentation from Jessica Gehlhar

Jessica Gehlhar, formerly an imaging engineer at Edmund Optics, presents the “Introduction to Optics for Embedded Vision” tutorial at the May 2019 Embedded Vision Summit. This talk provides an introduction to optics for embedded vision system and algorithm developers. Gehlhar begins by presenting fundamental imaging lens specifications and quality metrics such as MTF. She explains […]
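
One of the lens quality metrics the talk mentions, MTF (modulation transfer function), can be illustrated with a small sketch. This is a hypothetical example, not code from the presentation: it estimates MTF as the normalized magnitude of the discrete Fourier transform of a lens's line spread function (LSF), using synthetic Gaussian LSFs.

```python
import math

def mtf_from_lsf(lsf):
    """Return |DFT(lsf)| normalized so that MTF at zero frequency is 1."""
    n = len(lsf)
    mags = []
    for k in range(n):
        re = sum(lsf[i] * math.cos(2 * math.pi * k * i / n) for i in range(n))
        im = -sum(lsf[i] * math.sin(2 * math.pi * k * i / n) for i in range(n))
        mags.append(math.hypot(re, im))
    return [m / mags[0] for m in mags]

# Synthetic LSFs: a narrow (sharp) lens response vs. a wide (blurry) one.
sharp = [math.exp(-(x - 16) ** 2 / (2 * 1.0 ** 2)) for x in range(32)]
blurry = [math.exp(-(x - 16) ** 2 / (2 * 4.0 ** 2)) for x in range(32)]

mtf_sharp = mtf_from_lsf(sharp)
mtf_blurry = mtf_from_lsf(blurry)
```

As expected, the sharper lens retains more contrast at higher spatial frequencies: `mtf_sharp` falls off much more slowly than `mtf_blurry`.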

“Eye Tracking for the Future: The Eyes Have It,” a Presentation from Parallel Rules

Peter Milford, President of Parallel Rules, presents the “Eye Tracking for the Future: The Eyes Have It” tutorial at the May 2019 Embedded Vision Summit. Eye interaction technologies complement augmented and virtual reality head-mounted displays. In this presentation, Milford reviews eye tracking technology, concentrating mainly on camera-based solutions and associated system requirements. Wearable eye tracking […]
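
To give a flavor of how camera-based eye tracking starts, here is a hypothetical toy example (not from the talk): a crude pupil locator that takes the centroid of pixels darker than a threshold in a grayscale eye image, represented as rows of intensity values.

```python
def pupil_centroid(image, threshold):
    """Return (x, y) centroid of pixels darker than threshold, or None."""
    xs, ys = [], []
    for y, row in enumerate(image):
        for x, v in enumerate(row):
            if v < threshold:
                xs.append(x)
                ys.append(y)
    if not xs:
        return None
    return sum(xs) / len(xs), sum(ys) / len(ys)

# Tiny synthetic eye image: a dark 2x2 "pupil" on a bright background.
img = [
    [200, 200, 200, 200],
    [200,  10,  10, 200],
    [200,  10,  10, 200],
    [200, 200, 200, 200],
]
center = pupil_centroid(img, threshold=50)  # -> (1.5, 1.5)
```

Real trackers add glint detection, ellipse fitting, and per-user calibration on top of a step like this.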

“Fundamentals of Monocular SLAM,” a Presentation from Cadence

Shrinivas Gadkari, Design Engineering Director at Cadence, presents the “Fundamentals of Monocular SLAM” tutorial at the May 2019 Embedded Vision Summit. Simultaneous Localization and Mapping (SLAM) refers to a class of algorithms that enables a device with one or more cameras and/or other sensors to create an accurate map of its surroundings, to determine the […]
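
A core geometric step in monocular SLAM is triangulating a landmark's depth from two camera poses. The following is a hypothetical sketch, not code from the talk: it assumes a pinhole camera of focal length `f` (in pixels) translated sideways by a known `baseline`, so depth follows from the pixel disparity between the two views.

```python
def triangulate_depth(f, baseline, x1, x2):
    """Depth Z of a point observed at pixel column x1 in the first view
    and x2 in the second, for a camera translated by `baseline` along x."""
    disparity = x1 - x2
    if disparity <= 0:
        raise ValueError("point must shift between views to triangulate")
    return f * baseline / disparity

# A landmark 5 m away: f = 500 px, 0.1 m translation -> 10 px disparity.
z = triangulate_depth(f=500.0, baseline=0.1, x1=320.0, x2=310.0)  # -> 5.0
```

In a full SLAM pipeline, the camera motion itself is estimated (e.g., from the essential matrix), so monocular depth is only known up to scale; the fixed baseline here is the simplifying assumption.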

“Training Data for Your CNN: What You Need and How to Get It,” a Presentation from Aquifi

Carlo Dal Mutto, CTO of Aquifi, presents the “Training Data for Your CNN: What You Need and How to Get It” tutorial at the May 2019 Embedded Vision Summit. A fundamental building block for AI development is the development of a proper training set to allow effective training of neural nets. Developing such a training […]
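
One basic, universal step in building a training set is a reproducible, shuffled train/validation split. This is a hypothetical illustration, not code from the talk:

```python
import random

def train_val_split(samples, val_fraction=0.2, seed=0):
    """Shuffle samples deterministically and split off a validation set."""
    items = list(samples)
    random.Random(seed).shuffle(items)  # seeded RNG makes the split repeatable
    n_val = int(len(items) * val_fraction)
    return items[n_val:], items[:n_val]

# 100 samples -> 80 for training, 20 held out for validation.
train, val = train_val_split(range(100), val_fraction=0.2, seed=42)
```

Using a seeded `random.Random` instance rather than the module-level RNG keeps the split stable across runs and independent of other code that uses `random`.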

“Object Detection for Embedded Markets,” a Presentation from Imagination Technologies

Paul Brasnett, PowerVR Business Development Director for Vision and AI at Imagination Technologies, presents the “Object Detection for Embedded Markets” tutorial at the May 2019 Embedded Vision Summit. While image classification was the breakthrough use case for deep learning-based computer vision, today it has a limited number of real-world applications. In contrast, object detection is […]
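
Object detectors are conventionally scored by intersection-over-union (IoU) between predicted and ground-truth bounding boxes. The following is a hypothetical example, not code from the talk, with boxes given as `(x_min, y_min, x_max, y_max)`:

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))  # overlap width
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))  # overlap height
    inter = ix * iy
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

score = iou((0, 0, 10, 10), (5, 5, 15, 15))  # 25 / 175 ~= 0.143
```

Benchmarks such as PASCAL VOC typically count a detection as correct when IoU with a ground-truth box exceeds 0.5.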

“Deep Learning for Manufacturing Inspection Applications,” a Presentation from FLIR Systems

Stephen Se, Research Manager at FLIR Systems, presents the “Deep Learning for Manufacturing Inspection Applications” tutorial at the May 2019 Embedded Vision Summit. Recently, deep learning has revolutionized artificial intelligence and has been shown to provide the best solutions to many problems in computer vision, image classification, speech recognition and natural language processing. Se presents […]

“Separable Convolutions for Efficient Implementation of CNNs and Other Vision Algorithms,” a Presentation from Phiar

Chen-Ping Yu, Co-founder and CEO of Phiar, presents the “Separable Convolutions for Efficient Implementation of CNNs and Other Vision Algorithms” tutorial at the May 2019 Embedded Vision Summit. Separable convolutions are an important technique for implementing efficient convolutional neural networks (CNNs), made popular by MobileNet’s use of depthwise separable convolutions. But separable convolutions are not […]
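
The efficiency win behind depthwise separable convolutions is easy to see by counting weights. This hypothetical sketch (not code from the talk) compares a standard k×k convolution against the MobileNet-style factorization into a per-channel k×k depthwise convolution followed by a 1×1 pointwise convolution:

```python
def standard_conv_params(c_in, c_out, k):
    """Weights in a standard k x k convolution layer (biases ignored)."""
    return c_in * c_out * k * k

def separable_conv_params(c_in, c_out, k):
    """Weights in a depthwise separable replacement for the same layer."""
    depthwise = c_in * k * k   # one k x k filter per input channel
    pointwise = c_in * c_out   # 1x1 conv mixes information across channels
    return depthwise + pointwise

std = standard_conv_params(128, 128, 3)   # 147,456 weights
sep = separable_conv_params(128, 128, 3)  # 17,536 weights, roughly 8.4x fewer
```

The same factorization reduces multiply-accumulate counts by a similar ratio, which is why it is attractive for embedded targets.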

“An Introduction to Machine Learning and How to Teach Machines to See,” a Presentation from Tryolabs

Facundo Parodi, Research and Machine Learning Engineer at Tryolabs, presents the “An Introduction to Machine Learning and How to Teach Machines to See” tutorial at the May 2019 Embedded Vision Summit. What is machine learning? How can machines distinguish a cat from a dog in an image? What’s the magic behind convolutional neural networks? These […]
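
To make the "learning from examples" idea concrete, here is a hypothetical toy (not from the talk): a nearest-centroid classifier over 2-D feature vectors, with made-up "cat" and "dog" feature points. Real systems learn the features themselves with CNNs, but the classify-by-similarity intuition is the same.

```python
def centroid(points):
    """Mean of a list of 2-D points."""
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def train_classifier(labeled):
    """'Training' here is just averaging each class's example features."""
    return {label: centroid(pts) for label, pts in labeled.items()}

def predict(model, point):
    """Assign the label whose class centroid is nearest to the point."""
    def dist2(c):
        return (point[0] - c[0]) ** 2 + (point[1] - c[1]) ** 2
    return min(model, key=lambda label: dist2(model[label]))

model = train_classifier({
    "cat": [(1.0, 1.0), (1.2, 0.8)],
    "dog": [(4.0, 4.0), (3.8, 4.2)],
})
label = predict(model, (1.1, 1.0))  # -> "cat"
```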

“Introduction to LiDAR for Machine Perception,” a Presentation from Deepen AI

Mohammad Musa, the Founder and CEO of Deepen AI, presents the “Introduction to LiDAR for Machine Perception” tutorial at the May 2018 Embedded Vision Summit. LiDAR sensors use pulsed laser light to construct 3D representations of objects and terrain. Recently, interest in LiDAR has grown, for example for generating high-definition maps required for autonomous vehicles […]
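
Each LiDAR return is essentially a range plus beam angles; converting those spherical measurements to Cartesian coordinates is how the 3D point cloud is built. A hypothetical example (not from the talk), assuming x forward, y left, z up:

```python
import math

def lidar_to_xyz(rng, azimuth_deg, elevation_deg):
    """Convert one LiDAR return (range in meters, beam angles in degrees)
    to a Cartesian point, with x forward, y left, z up."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = rng * math.cos(el) * math.cos(az)
    y = rng * math.cos(el) * math.sin(az)
    z = rng * math.sin(el)
    return x, y, z

# A level beam (0 deg elevation) at 90 deg azimuth: 10 m directly to the left.
pt = lidar_to_xyz(10.0, 90.0, 0.0)
```

Applying this per return, across a full sweep of the scanner, yields the point cloud that mapping and perception algorithms consume.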
