Edge AI and Vision Alliance

Use a Camera Model to Accelerate Camera System Design

This blog post was originally published by Twisthink and is reprinted here with their permission. The exciting world of embedded cameras is experiencing rapid growth. Digital-imaging technology is being integrated into a wide range of new products and systems. Embedded cameras are becoming widely adopted in the automotive market, security and surveillance markets, […]

“Enabling the Full Potential of Machine Learning,” a Presentation from Wave Computing

Derek Meyer, CEO of Wave Computing, presents the "Enabling the Full Potential of Machine Learning" tutorial at the May 2017 Embedded Vision Summit. With the growing recognition that “data is the new oil,” more companies are looking to machine learning to gain competitive advantages and create new business models. But the machine learning industry is […]

Embedded Vision Insights: September 12, 2017 Edition

LETTER FROM THE EDITOR Dear Colleague, Deep neural networks (DNNs) are proving very effective for a variety of challenging machine perception tasks, but these algorithms are very computationally demanding. To enable DNNs to be used in practical applications, it’s critical to find efficient ways to implement them. The Embedded Vision Alliance will delve into these […]

“How Image Sensor and Video Compression Parameters Impact Vision Algorithms,” a Presentation from Amazon Lab126

Ilya Brailovskiy, Principal Engineer at Amazon Lab126, presents the "How Image Sensor and Video Compression Parameters Impact Vision Algorithms" tutorial at the May 2017 Embedded Vision Summit. Recent advances in deep learning algorithms have brought automated object detection and recognition to human accuracy levels on various test datasets. But algorithms that work well on an […]

Visual Intelligence Opportunities in Industry 4.0

In order for industrial automation systems to meaningfully interact with the objects they're identifying, inspecting and assembling, they must be able to see and understand their surroundings. Cost-effective and capable vision processors, fed by depth-discerning image sensors and running robust software algorithms, continue to transform longstanding industrial automation aspirations into reality. And, with the emergence […]

“Adventures in DIY Embedded Vision: The Can’t-miss Dartboard,” a Presentation from Mark Rober

Engineer, inventor and YouTube personality Mark Rober presents the "Adventures in DIY Embedded Vision: The Can’t-miss Dartboard" tutorial at the May 2017 Embedded Vision Summit. Can a mechanical engineer with no background in computer vision build a complex, robust, real-time computer vision system? Yes, with a little help from his friends. Rober fulfilled a three-year […]

“Performing Multiple Perceptual Tasks With a Single Deep Neural Network,” a Presentation from Magic Leap

Andrew Rabinovich, Director of Deep Learning at Magic Leap, presents the "Performing Multiple Perceptual Tasks With a Single Deep Neural Network" tutorial at the May 2017 Embedded Vision Summit. As more system developers consider incorporating visual perception into smart devices such as self-driving cars, drones and wearable computers, attention is shifting toward practical formulation and […]

Embedded Vision Insights: August 29, 2017 Edition

LETTER FROM THE EDITOR Dear Colleague, TensorFlow has become a popular framework for creating machine learning-based computer vision applications, especially for the development of deep neural networks (DNNs). If you’re planning to develop computer vision applications using deep learning and want to understand how to use TensorFlow to do it, then don’t miss an upcoming […]

“Using Satellites to Extract Insights on the Ground,” a Presentation from Orbital Insight

Boris Babenko, Senior Software Engineer at Orbital Insight, presents the "Using Satellites to Extract Insights on the Ground" tutorial at the May 2017 Embedded Vision Summit. Satellites are great for seeing the world at scale, but analyzing petabytes of images can be extremely time-consuming for humans alone. This is why machine vision is a perfect […]

“How to Choose a 3D Vision Technology,” a Presentation from Carnegie Robotics

Chris Osterwood, Chief Technical Officer at Carnegie Robotics, presents the "How to Choose a 3D Vision Technology" tutorial at the May 2017 Embedded Vision Summit. Designers of autonomous vehicles, robots, and many other systems are faced with a critical challenge: which 3D perception technology to use? There are a wide variety of sensors on the […]

Here you’ll find a wealth of practical technical insights and expert advice to help you bring AI and visual intelligence into your products without flying blind.

Contact

Address

Berkeley Design Technology, Inc.
PO Box #4446
Walnut Creek, CA 94596

Phone
+1 (925) 954-1411