Enabling Technologies

“Pioneering Analog Compute for Edge AI to Overcome the End of Digital Scaling,” a Presentation from Mythic

Mike Henry, CEO and Founder of Mythic, presents the “Pioneering Analog Compute for Edge AI to Overcome the End of Digital Scaling” tutorial at the May 2019 Embedded Vision Summit. AI inference at the edge will continue to create insatiable demand for compute performance in power- and cost-constrained form factors. Taking into account past trends, […]

“The Xilinx AI Engine: High Performance with Future-proof Architecture Adaptability,” a Presentation from Xilinx

Nick Ni, Director of Product Marketing at Xilinx, presents the “Xilinx AI Engine: High Performance with Future-proof Architecture Adaptability” tutorial at the May 2019 Embedded Vision Summit. AI inference demands orders of magnitude more compute capacity than today’s SoCs offer. At the same time, neural network topologies are changing too quickly to be addressed by […]

“Efficient Deployment of Quantized ML Models at the Edge Using Snapdragon SoCs,” a Presentation from Qualcomm

Felix Baum, Director of Product Management for AI Software at Qualcomm, presents the “Efficient Deployment of Quantized ML Models at the Edge Using Snapdragon SoCs” tutorial at the May 2019 Embedded Vision Summit. Increasingly, machine learning models are being deployed at the edge, and these models are getting bigger. As a result, we are hitting […]
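The excerpt above refers to deploying quantized models at the edge. As a generic illustration of the underlying idea (not Qualcomm’s Snapdragon tooling), here is a minimal sketch of symmetric post-training int8 weight quantization:

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Symmetric per-tensor int8 quantization: map floats into [-127, 127]."""
    scale = np.abs(weights).max() / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximation of the original float weights."""
    return q.astype(np.float32) * scale

# Hypothetical weight tensor; the reconstruction error is at most scale / 2.
w = np.array([0.5, -1.27, 0.02, 1.0], dtype=np.float32)
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
```

Real deployment flows add per-channel scales, activation calibration, and hardware-specific packing, but the storage win is the same: 8 bits per weight instead of 32.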

“Using Blockchain to Create Trusted Embedded Vision Systems,” a Presentation from Basler

Thies Möller, Technical Architect at Basler, presents the “Using Blockchain to Create Trusted Embedded Vision Systems” tutorial at the May 2019 Embedded Vision Summit. In many IoT architectures, sensor data must be passed to cloud services for further processing. Traditionally, “trusted third parties” have been used to secure this data. In this talk, Möller explores […]

“Tools and Techniques for Optimizing DNNs on Arm-based Processors with Au-Zone’s DeepView ML Toolkit,” a Presentation from Au-Zone Technologies

Sébastien Taylor, Vision Technology Architect at Au-Zone Technologies, presents the “Tools and Techniques for Optimizing DNNs on Arm-based Processors with Au-Zone’s DeepView ML Toolkit” tutorial at the May 2019 Embedded Vision Summit. In this presentation, Taylor describes methods and tools for developing, profiling and optimizing neural network solutions for deployment on Arm MCUs, CPUs and […]

“REAL3 Time of Flight: A New Differentiator for Mobile Phones,” a Presentation from Infineon Technologies

Walter Bell, 3D Imaging Application Engineer at Infineon Technologies, presents the “REAL3 Time of Flight: A New Differentiator for Mobile Phones” tutorial at the May 2019 Embedded Vision Summit. In 2019, 3D imaging has become mainstream in mobile phone cameras. What started in 2016 with the first two smartphones using an Infineon 3D time-of-flight (ToF) […]
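For context on the technology the talk covers: continuous-wave ToF sensors infer distance from the phase shift of a modulated light signal, with range limited by phase wrap-around. A small illustrative sketch of the standard relations (not Infineon’s REAL3 API; the 80 MHz modulation frequency is just an example value):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def tof_distance(phase_rad: float, f_mod_hz: float) -> float:
    """Distance from the phase shift of a continuous-wave ToF signal.
    The light travels out and back, hence the extra factor of 2 in 4*pi."""
    return C * phase_rad / (4 * math.pi * f_mod_hz)

def ambiguity_range(f_mod_hz: float) -> float:
    """Maximum unambiguous distance before the phase wraps past 2*pi."""
    return C / (2 * f_mod_hz)

# e.g. an 80 MHz modulation frequency gives roughly 1.87 m of unambiguous range
print(round(ambiguity_range(80e6), 2))
```

Practical sensors combine several modulation frequencies to extend range beyond a single frequency’s ambiguity limit.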

“Applied Depth Sensing with Intel RealSense,” a Presentation from Intel

Sergey Dorodnicov, Software Architect at Intel, presents the “Applied Depth Sensing with Intel RealSense” tutorial at the May 2019 Embedded Vision Summit. As robust depth cameras become more affordable, many new products will benefit from true 3D vision. This presentation highlights the benefits of depth sensing for tasks such as autonomous navigation, collision avoidance and […]
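RealSense depth cameras are built around stereo matching, where depth follows from disparity via the classic pinhole relation Z = f·B/d. A minimal illustration of that relation (generic textbook math, not the librealsense API; the focal length, baseline and disparity values are made up):

```python
def depth_from_disparity(disparity_px: float, focal_px: float, baseline_m: float) -> float:
    """Classic stereo relation: depth Z = f * B / d.
    focal_px: focal length in pixels; baseline_m: distance between the
    two cameras in meters; disparity_px: pixel shift between the views."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# A 600 px focal length, 5 cm baseline and 15 px disparity put the point at 2 m.
z = depth_from_disparity(disparity_px=15, focal_px=600, baseline_m=0.05)
print(z)  # 2.0
```

The same relation explains why depth error grows with distance: far objects produce small disparities, so a one-pixel matching error shifts the estimate much further.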

“A Self-service Platform to Deploy State-of-the-art Deep Learning Models in Under 30 Minutes,” a Presentation from Xnor.ai

Peter Zatloukal, VP of Engineering at Xnor.ai, presents “A Self-service Platform to Deploy State-of-the-art Deep Learning Models in Under 30 Minutes” at the May 2019 Embedded Vision Summit. The first-of-its-kind self-service platform described in this presentation makes it possible for software and hardware developers—even those who aren’t skilled in artificial intelligence—to deploy hyper-efficient, […]

“Neuromorphic Event-based Vision: From Disruption to Adoption at Scale,” a Presentation from Prophesee

Luca Verre, Co-founder and CEO of Prophesee, presents the “Neuromorphic Event-based Vision: From Disruption to Adoption at Scale” tutorial at the May 2019 Embedded Vision Summit. Neuromorphic event-based vision is a new paradigm in imaging technology, inspired by human biology. It promises to dramatically improve machines’ ability to sense their environments and make intelligent decisions […]
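Unlike frame-based sensors, an event camera emits a sparse stream of per-pixel (x, y, timestamp, polarity) events whenever brightness changes. A common first processing step is accumulating events over a time window into a frame-like image; a simple hedged sketch of that idea (illustrative only, not Prophesee’s SDK):

```python
import numpy as np

def accumulate_events(events, height, width, t_start, t_end):
    """Accumulate (x, y, t, polarity) events within [t_start, t_end) into a
    2D frame. Positive (ON) events increment a pixel, negative (OFF) events
    decrement it, so static scene regions stay at zero."""
    frame = np.zeros((height, width), dtype=np.int32)
    for x, y, t, pol in events:
        if t_start <= t < t_end:
            frame[y, x] += 1 if pol > 0 else -1
    return frame

# Three events at pixel (x=2, y=3): two ON, one OFF -> net value +1
events = [(2, 3, 0.001, +1), (2, 3, 0.002, +1), (2, 3, 0.003, -1)]
frame = accumulate_events(events, height=8, width=8, t_start=0.0, t_end=0.01)
print(frame[3, 2])  # 1
```

Because events only fire on change, this representation is inherently sparse and high-dynamic-range, which is the source of the latency and power advantages the talk describes.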

“Deploying Visual SLAM in Low-power Devices,” a Presentation from CEVA

Ben Weiss, Customer Solutions Engineer in the CSG Group at CEVA, presents the “Deploying Visual SLAM in Low-power Devices” tutorial at the May 2019 Embedded Vision Summit. Simultaneous localization and mapping (SLAM) technology has been evolving for quite some time, including visual SLAM, which relies primarily on image data. But implementing fast, accurate visual SLAM […]

Here you’ll find a wealth of practical technical insights and expert advice to help you bring AI and visual intelligence into your products without flying blind.

Contact

Address

Berkeley Design Technology, Inc.
PO Box #4446
Walnut Creek, CA 94596

Phone
+1 (925) 954-1411