Embedded Vision Summit 2022

“Event-Based Neuromorphic Perception and Computation: The Future of Sensing and AI,” a Keynote Presentation from Ryad Benosman

Ryad Benosman, Professor at the University of Pittsburgh and Adjunct Professor at the CMU Robotics Institute, presents the “Event-Based Neuromorphic Perception and Computation: The Future of Sensing and AI” keynote at the May 2022 Embedded Vision Summit. We say that today’s mainstream computer vision technologies enable machines to “see,” much as humans do. We refer …

“A New AI Platform Architecture for the Smart Toys of the Future,” a Presentation from Xperi

Gabriel Costache, Senior R&D Director at Xperi, presents the “New AI Platform Architecture for the Smart Toys of the Future” tutorial at the May 2022 Embedded Vision Summit. From a parent’s perspective, toys should be safe, private, entertaining and educational, with the ability to adapt and grow with the child. For natural interaction, a toy …

“Human-centric Computer Vision with Synthetic Data,” a Presentation from Unity Technologies

Alex Thaman, Chief Software Architect at Unity Technologies, presents the “Human-centric Computer Vision with Synthetic Data” tutorial at the May 2022 Embedded Vision Summit. Companies are continuing to accelerate the adoption of computer vision to detect, identify and understand humans from camera imagery. Unity sees these human-centric use cases in a growing range of applications …

“Build Smarter, Safer and Efficient Autonomous Robots and Mobile Machines,” a Presentation from Texas Instruments

Manisha Agrawal, Product Marketing Manager at Texas Instruments, presents the “Build Smarter, Safer and Efficient Autonomous Robots and Mobile Machines” tutorial at the May 2022 Embedded Vision Summit. Automation is expanding rapidly from the factory floor to the consumer’s front door. Examples include autonomous mobile robots used in warehouses and last-mile delivery and service …

“Deploying Visual AI on Edge Devices: Lessons From the Real World,” a Presentation from Teledyne Imaging

Luc Chouinard, AI Specialist and Design Architect at Teledyne Imaging, presents the “Deploying Visual AI on Edge Devices: Lessons From the Real World” tutorial at the May 2022 Embedded Vision Summit. Developing an AI-based edge device is complex, involving hardware, algorithms and software. It requires making many technology-selection decisions and balancing numerous trade-offs. In this …

“New Imager Modules and Tools Enable Bringing High-quality Vision Systems to Market Quickly,” a Presentation from onsemi

Ganesh Narayanaswamy, Senior Business Marketing Manager in the Industrial and Commercial Solutions Division at onsemi, presents the “New Imager Modules and Tools Enable Bringing High-quality Vision Systems to Market Quickly” tutorial at the May 2022 Embedded Vision Summit. Traditionally, developers have struggled to select the best image sensors and lenses, integrate these with the rest …

“Intelligent Vision for the Industrial, Automotive and IoT Edge with the i.MX 8M Plus Applications Processor,” a Presentation from NXP Semiconductors

Ali Osman Örs, Director of AI/ML Strategy and Technologies for Edge Processing at NXP Semiconductors, presents the “Intelligent Vision for the Industrial, Automotive and IoT Edge with the i.MX 8M Plus Applications Processor” tutorial at the May 2022 Embedded Vision Summit. Today’s edge-based ML solutions need a powerful multicore processor packed with features and …

“Creating Better Datasets for Training More Robust Models in FiftyOne,” a Presentation from Voxel51

Jason Corso, CEO of Voxel51, presents the “Creating Better Datasets for Training More Robust Models in FiftyOne” tutorial at the May 2022 Embedded Vision Summit. Nothing hinders the success of computer vision and machine learning systems more than poor-quality data. Gone are the days of focusing only on the model while assuming the data is …

“Accelerate All Your Algorithms with the quadric q16 Processor,” a Presentation from Quadric

Daniel Firu, Co-founder and CPO of Quadric, presents the “Accelerate All Your Algorithms with the quadric q16 Processor” tutorial at the May 2022 Embedded Vision Summit. As edge and embedded vision applications increasingly incorporate neural networks, developers are looking to add neural network accelerator functionality to their systems. There is just one problem: we need …

“Natural Intelligence Outperforms Artificial Intelligence for Autonomy and Vision,” a Presentation from Opteran Technologies

James Marshall, Chief Scientific Officer at Opteran Technologies, presents the “Natural Intelligence Outperforms Artificial Intelligence for Autonomy and Vision” tutorial at the May 2022 Embedded Vision Summit. Mainstream approaches to AI for autonomy and computer vision make use of data-, energy- and compute-intensive techniques such as deep learning, which struggle to generalize and are fragile …
