
“Object Detection for Embedded Markets,” a Presentation from Imagination Technologies

Paul Brasnett, PowerVR Business Development Director for Vision and AI at Imagination Technologies, presents the “Object Detection for Embedded Markets” tutorial at the May 2019 Embedded Vision Summit. While image classification was the breakthrough use case for deep learning-based computer vision, today it has a limited number of real-world applications. In contrast, object detection is […]

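The blurb above contrasts classification with detection: detection also localizes objects, and localization quality is conventionally scored by intersection-over-union (IoU) between predicted and ground-truth boxes. A minimal sketch (the `(x1, y1, x2, y2)` box format is an assumption, not something specified in the talk summary):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# A detection is commonly counted as correct when IoU >= 0.5.
score = iou((0, 0, 10, 10), (5, 5, 15, 15))  # intersection 25, union 175
```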

“Sensory Fusion for Scalable Indoor Navigation,” a Presentation from Brain Corp

Oleg Sinyavskiy, Director of Research and Development at Brain Corp, presents the “Sensory Fusion for Scalable Indoor Navigation” tutorial at the May 2019 Embedded Vision Summit. Indoor autonomous navigation requires using a variety of sensors in different modalities. Merging together RGB, depth, lidar and odometry data streams to achieve autonomous operation requires a fusion of […]

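The core idea behind merging odometry with absolute sensor fixes can be illustrated with a one-dimensional Kalman filter. This is a deliberately simplified sketch, not the talk's method; the noise parameters `q` and `r` and the function name are illustrative assumptions:

```python
def fuse_odometry_with_fixes(odometry_steps, position_fixes, q=0.01, r=0.25):
    """1-D Kalman filter: predict from odometry increments, correct with
    absolute position fixes (e.g. from lidar scan matching)."""
    x, p = 0.0, 1.0  # state estimate and its variance
    estimates = []
    for u, z in zip(odometry_steps, position_fixes):
        x, p = x + u, p + q                    # predict (motion model)
        k = p / (p + r)                        # Kalman gain
        x, p = x + k * (z - x), (1 - k) * p    # correct (measurement)
        estimates.append(x)
    return estimates

# Robot moves 1 m per step; odometry drifts +2% while the fixes are unbiased.
est = fuse_odometry_with_fixes([1.02] * 10, [float(i) for i in range(1, 11)])
```

Raw odometry alone would end up at 10.2 m after ten 1 m steps; the filtered estimate stays much closer to the true 10 m because the fixes keep pulling the drift back.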

“Enabling the Next Kitchen Experience Through Embedded Vision,” a Presentation from Whirlpool

Sugosh Venkataraman, Vice President of Technology at Whirlpool, presents the “Enabling the Next Kitchen Experience Through Embedded Vision” tutorial at the May 2019 Embedded Vision Summit. Our kitchens are the hubs where we spend quality time with family and friends, preparing and eating meals. Today, instructions for cooking a particular meal are just a few […]


“Applied Depth Sensing with Intel RealSense,” a Presentation from Intel

Sergey Dorodnicov, Software Architect at Intel, presents the “Applied Depth Sensing with Intel RealSense” tutorial at the May 2019 Embedded Vision Summit. As robust depth cameras become more affordable, many new products will benefit from true 3D vision. This presentation highlights the benefits of depth sensing for tasks such as autonomous navigation, collision avoidance and […]

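Collision avoidance from a depth image largely reduces to finding the nearest valid reading in a region of interest. A hedged numpy sketch: the 0.001 m-per-unit depth scale is a common default for this class of sensor, assumed here for illustration; a real pipeline queries the device for its actual scale.

```python
import numpy as np

def nearest_obstacle_m(depth_raw, depth_scale=0.001, roi=None):
    """Distance to the nearest valid reading in a depth image, in meters.
    depth_raw: uint16 depth image; depth_scale: meters per raw unit
    (0.001 is assumed here; query the device in a real pipeline).
    roi: optional (y0, y1, x0, x1) window, e.g. the robot's path."""
    d = depth_raw.astype(np.float32) * depth_scale
    if roi is not None:
        y0, y1, x0, x1 = roi
        d = d[y0:y1, x0:x1]
    valid = d[d > 0]  # a raw value of 0 means "no reading"
    return float(valid.min()) if valid.size else float("inf")
```

A navigation loop would then stop or replan whenever `nearest_obstacle_m(frame, roi=corridor)` drops below a safety threshold such as 0.5 m.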

“A Self-service Platform to Deploy State-of-the-art Deep Learning Models in Under 30 Minutes,” a Presentation from Xnor.ai

Peter Zatloukal, VP of Engineering at Xnor.ai, presents “A Self-service Platform to Deploy State-of-the-art Deep Learning Models in Under 30 Minutes” at the May 2019 Embedded Vision Summit. The first-of-its-kind, self-service platform described in this presentation makes it possible for software and hardware developers—even those who aren’t skilled in artificial intelligence—to deploy hyper-efficient, […]


“Teaching Machines to See, Understand, Describe and Predict Sports Games in Real Time,” a Presentation from Sportlogiq

Mehrsan Javan, CTO of Sportlogiq, presents the “Teaching Machines to See, Understand, Describe and Predict Sports Games in Real Time” tutorial at the May 2019 Embedded Vision Summit. Sports analytics is about observing, understanding and describing the game in an intelligent manner. In practice, this means designing a fully-automated, robust, end-to-end pipeline: from visual input, […]


“Addressing Corner Cases in Embedded Computer Vision Applications,” a Presentation from Netradyne

David Julian, CTO and Founder of Netradyne, presents the “Addressing Corner Cases in Embedded Computer Vision Applications” tutorial at the May 2019 Embedded Vision Summit. Many embedded vision applications require solutions that are robust in the face of very diverse real-world inputs. For example, in automotive applications, vision-based safety systems may encounter unusual configurations of […]


“Neuromorphic Event-based Vision: From Disruption to Adoption at Scale,” a Presentation from Prophesee

Luca Verre, Co-founder and CEO of Prophesee, presents the “Neuromorphic Event-based Vision: From Disruption to Adoption at Scale” tutorial at the May 2019 Embedded Vision Summit. Neuromorphic event-based vision is a new paradigm in imaging technology, inspired by human biology. It promises to dramatically improve machines’ ability to sense their environments and make intelligent decisions […]

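An event sensor reports per-pixel brightness changes asynchronously rather than full frames. Its behavior can be emulated from a frame sequence; this is a simplified model for intuition only (real sensors fire in continuous time from analog thresholds, not per frame):

```python
import numpy as np

def frames_to_events(frames, threshold=0.2):
    """Emulate an event sensor from a frame sequence: emit an event
    (t, y, x, polarity) whenever the log-intensity at a pixel has
    changed by more than `threshold` since that pixel's last event."""
    log_ref = np.log(frames[0] + 1e-6)  # per-pixel reference level
    events = []
    for t, frame in enumerate(frames[1:], start=1):
        log_f = np.log(frame + 1e-6)
        diff = log_f - log_ref
        ys, xs = np.where(np.abs(diff) >= threshold)
        for y, x in zip(ys, xs):
            events.append((t, int(y), int(x), 1 if diff[y, x] > 0 else -1))
            log_ref[y, x] = log_f[y, x]  # reset reference at fired pixels
    return events

# A static scene produces no events; only the changed pixel fires.
frames = [np.ones((2, 2)), np.ones((2, 2))]
frames[1][0, 0] = 2.0  # one pixel doubles in brightness
events = frames_to_events(frames)  # → [(1, 0, 0, 1)]
```

The output is sparse: nothing is transmitted for unchanging pixels, which is the source of the latency and bandwidth advantages the talk describes.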

“Deep Learning for Manufacturing Inspection Applications,” a Presentation from FLIR Systems

Stephen Se, Research Manager at FLIR Systems, presents the “Deep Learning for Manufacturing Inspection Applications” tutorial at the May 2019 Embedded Vision Summit. Recently, deep learning has revolutionized artificial intelligence and has been shown to provide the best solutions to many problems in computer vision, image classification, speech recognition and natural language processing. Se presents […]


“Separable Convolutions for Efficient Implementation of CNNs and Other Vision Algorithms,” a Presentation from Phiar

Chen-Ping Yu, Co-founder and CEO of Phiar, presents the “Separable Convolutions for Efficient Implementation of CNNs and Other Vision Algorithms” tutorial at the May 2019 Embedded Vision Summit. Separable convolutions are an important technique for implementing efficient convolutional neural networks (CNNs), made popular by MobileNet’s use of depthwise separable convolutions. But separable convolutions are not […]

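The factorization the blurb refers to replaces one k×k×Cin×Cout convolution with a k×k depthwise pass plus a 1×1 pointwise pass, cutting parameters from k²·Cin·Cout to k²·Cin + Cin·Cout. A minimal numpy sketch of the depthwise-separable form, loop-based for clarity rather than speed:

```python
import numpy as np

def depthwise_separable_conv(x, w_dw, w_pw):
    """Depthwise-separable convolution, valid padding, stride 1.
    x: (H, W, Cin) input; w_dw: (k, k, Cin) one spatial filter per
    channel; w_pw: (Cin, Cout) 1x1 pointwise channel mixing."""
    k = w_dw.shape[0]
    H, W, Cin = x.shape
    dw = np.zeros((H - k + 1, W - k + 1, Cin))
    for i in range(dw.shape[0]):
        for j in range(dw.shape[1]):
            # depthwise stage: each channel filtered independently
            dw[i, j] = np.einsum("abc,abc->c", x[i:i + k, j:j + k, :], w_dw)
    return dw @ w_pw  # pointwise stage: 1x1 conv mixes channels

# Parameter count vs. a standard convolution, for k=3, Cin=32, Cout=64:
k, cin, cout = 3, 32, 64
standard = k * k * cin * cout         # 18432
separable = k * k * cin + cin * cout  # 2336 -- roughly 8x fewer
```

The roughly 8x parameter (and multiply-accumulate) reduction at this layer size is why the technique matters so much for embedded targets.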


Contact

Address

Berkeley Design Technology, Inc.
PO Box #4446
Walnut Creek, CA 94596

Phone
+1 (925) 954-1411