Tools

“A Platform Approach to Developing Networked Visual AI Systems,” a Presentation from Network Optix

Nathan Wheeler, Chairman and CEO, and Tony Luce, Vice President of Product Marketing, both of Network Optix, present the “Platform Approach to Developing Networked Visual AI Systems” tutorial at the May 2022 Embedded Vision Summit. Connected cameras are becoming ubiquitous. Coupled with CV and ML, they enable a growing range of applications that monitor people, […]

Neural Network Optimization with AIMET

This blog post was originally published at Qualcomm’s website. It is reprinted here with the permission of Qualcomm. To run neural networks efficiently at the edge on mobile, IoT, and other embedded devices, developers strive to optimize their machine learning (ML) models’ size and complexity while taking advantage of hardware acceleration for inference. For these […]
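To make the size-versus-accuracy trade-off concrete, here is a minimal sketch of the core idea behind int8 post-training quantization: mapping float weights onto 8-bit integers plus a scale factor. This is a from-scratch illustration of the concept, not AIMET's actual API, and the weight values are made up.

```python
def quantize_int8(values):
    """Symmetric per-tensor quantization: map floats onto integers in [-127, 127]."""
    scale = max(abs(v) for v in values) / 127.0
    quantized = [max(-127, min(127, round(v / scale))) for v in values]
    return quantized, scale

def dequantize(quantized, scale):
    """Recover approximate float values from the int8 representation."""
    return [q * scale for q in quantized]

weights = [0.05, -0.73, 0.31, 1.02, -0.44]  # made-up example weights
q, scale = quantize_int8(weights)
recovered = dequantize(q, scale)

# Each weight now needs 1 byte instead of 4, at the cost of a rounding
# error bounded by scale / 2 per weight.
max_error = max(abs(w - r) for w, r in zip(weights, recovered))
print(q, round(max_error, 4))
```

Toolkits like AIMET automate this kind of transformation across a whole network, and add more sophisticated techniques such as per-channel scaling and quantization-aware training.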

“Enable Spatial Understanding for Embedded/Edge Devices with DepthAI,” a Presentation from Luxonis

Erik Kokalj, Director of Applications Engineering at Luxonis, presents the “Enable Spatial Understanding for Embedded/Edge Devices with DepthAI” tutorial at the May 2022 Embedded Vision Summit. Many systems need to understand not only what objects are nearby, but also where those objects are in the physical world. This is “spatial AI”. In this talk, Kokalj […]
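The core geometry behind spatial AI is straightforward: combine a 2D detection with a depth measurement and the camera intrinsics to recover a 3D position. A minimal sketch of that back-projection using the pinhole camera model — the intrinsic values below are hypothetical, not taken from any actual DepthAI device:

```python
def pixel_to_camera_xyz(u, v, depth_m, fx, fy, cx, cy):
    """Back-project pixel (u, v) with metric depth into camera-space 3D
    coordinates via the pinhole model: X = (u - cx) * Z / fx, etc."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return x, y, depth_m

# Hypothetical intrinsics for a 640x400 depth stream.
fx = fy = 450.0
cx, cy = 320.0, 200.0

# A detection centered at pixel (400, 200) with 2.0 m of measured depth:
x, y, z = pixel_to_camera_xyz(400, 200, 2.0, fx, fy, cx, cy)
print(x, y, z)  # the object sits about 0.36 m right of the optical axis, 2 m away
```

Depth platforms perform this calculation on-device per detection, but the underlying math is just this projection applied with calibrated intrinsics.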

“Accelerate Tomorrow’s Models with Lattice FPGAs,” a Presentation from Lattice Semiconductor

Hussein Osman, Segment Marketing Director at Lattice Semiconductor, presents the “Accelerate Tomorrow’s Models with Lattice FPGAs” tutorial at the May 2022 Embedded Vision Summit. Deep learning models are advancing at a dizzying pace, creating difficult dilemmas for system developers. When you begin developing an edge AI system, you select the best available model for your […]

NVIDIA DeepStream Technical Deep Dive: Multi-object Tracker

This video was originally published at the NVIDIA Developer YouTube channel. It is reprinted here with the permission of NVIDIA. NVIDIA’s DeepStream SDK delivers a complete streaming analytics toolkit for AI-based multi-sensor processing, video, audio and image understanding. This video covers the fundamentals of NVIDIA’s new tracker unified architecture. From the video, you will: Learn […]
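For readers new to the topic, the central job of any multi-object tracker is frame-to-frame association: deciding which detection in the current frame belongs to which existing track. A minimal greedy IoU-based matcher — a from-scratch illustration of the concept, not DeepStream's actual tracker implementation:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def associate(tracks, detections, threshold=0.3):
    """Greedy matching: each track claims its best-overlapping unclaimed detection."""
    matches, unmatched = {}, set(range(len(detections)))
    for tid, tbox in tracks.items():
        best, best_iou = None, threshold
        for di in unmatched:
            score = iou(tbox, detections[di])
            if score > best_iou:
                best, best_iou = di, score
        if best is not None:
            matches[tid] = best
            unmatched.discard(best)
    return matches, unmatched

# Two existing tracks and two new detections (made-up coordinates):
tracks = {1: (10, 10, 50, 50), 2: (100, 100, 140, 140)}
dets = [(12, 11, 52, 49), (200, 200, 240, 240)]
matches, leftover = associate(tracks, dets)
print(matches, leftover)  # track 1 claims detection 0; detection 1 starts a new track
```

Production trackers such as those in DeepStream layer motion models, appearance features, and track lifecycle management on top of this basic association step.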

“Designing the Next Ultra-Low-Power Always-On Solution,” a Presentation from Cadence

Amol Borkar, Director of Product Management and Marketing for Tensilica Vision and AI DSPs at Cadence, presents the “Designing the Next Ultra-Low-Power Always-On Solution” tutorial at the May 2022 Embedded Vision Summit. Increasingly, users expect their systems to be ready to respond at any time—for example, using a voice command to launch a music playlist.

“TensorFlow Lite for Microcontrollers (TFLM): Recent Developments,” a Presentation from BDTI and Google

David Davis, Senior Embedded Software Engineer, and John Withers, Automation and Systems Engineer, both of BDTI, present the “TensorFlow Lite for Microcontrollers (TFLM): Recent Developments” tutorial at the May 2022 Embedded Vision Summit. TensorFlow Lite Micro (TFLM) is a generic inference framework designed to run TensorFlow models on digital signal processors (DSPs), microcontrollers and other […]

“Jumpstart Your Edge AI Vision Application with New Development Kits from Avnet,” a Presentation from Avnet

Monica Houston, Technical Solutions Manager at Avnet, presents the “Jumpstart Your Edge AI Vision Application with New Development Kits from Avnet” tutorial at the May 2022 Embedded Vision Summit. Choosing the right processing solution for your embedded vision application can make or break your next development effort. This presentation introduces three next-generation embedded vision platforms

“Arm Cortex-M Series Processors Spark a New Era of Use Cases, Enabling Low-cost, Low-power Computer Vision and Machine Learning,” a Presentation from Arm

Stephen Su, Senior Product Manager at Arm, presents the “Arm Cortex-M Series Processors Spark a New Era of Use Cases, Enabling Low-cost, Low-power Computer Vision and Machine Learning” tutorial at the May 2022 Embedded Vision Summit. The Arm Cortex-M processor family of microcontrollers is designed and optimized for cost- and energy-efficient devices, and can be […]

“Introducing the Kria Robotics Starter Kit: Robotics and Machine Vision for Smart Factories,” a Presentation from AMD

Chetan Khona, Director of Industrial, Vision, Healthcare and Sciences Markets at AMD, presents the “Introducing the Kria Robotics Starter Kit: Robotics and Machine Vision for Smart Factories” tutorial at the May 2022 Embedded Vision Summit. A robot is a system of systems with diverse sensors and embedded processing nodes focused on core capabilities such as […]

Here you’ll find a wealth of practical technical insights and expert advice to help you bring AI and visual intelligence into your products without flying blind.

Contact

Address

Berkeley Design Technology, Inc.
PO Box #4446
Walnut Creek, CA 94596

Phone
+1 (925) 954-1411