Object Identification


“Augmenting Visual AI through Radar and Camera Fusion,” a Presentation from Au-Zone Technologies

Sébastien Taylor, Vice President of Research and Development for Au-Zone Technologies, presents the “Augmenting Visual AI through Radar and Camera Fusion” tutorial at the May 2024 Embedded Vision Summit. In this presentation, Taylor discusses well-known limitations of camera-based AI and how radar can be leveraged to address these limitations. He covers common radar data representations

Read More »

“Introduction to Visual Simultaneous Localization and Mapping (VSLAM),” a Presentation from Cadence

Amol Borkar, Product Marketing Director, and Shrinivas Gadkari, Design Engineering Director, both of Cadence, co-present the “Introduction to Visual Simultaneous Localization and Mapping (VSLAM)” tutorial at the May 2024 Embedded Vision Summit. Simultaneous localization and mapping (SLAM) is widely used in industry and has numerous applications where camera or ego-motion needs to be accurately determined.

Read More »

Scalable Public Safety with On-device AI: How Startup FocusAI is Filling Enterprise Security Market Gaps

This blog post was originally published at Qualcomm’s website. It is reprinted here with the permission of Qualcomm. Enterprise security is not just big business, it’s about keeping you safe: Here’s how engineer-turned-CTO Sudhakaran Ram collaborated with us to do just that. Key Takeaways: On-device AI enables superior enterprise-grade security. Distributed computing cost-efficiently enables actionable

Read More »

Untether AI Demonstration of Video Analysis Using the runAI Family of Inference Accelerators

Max Sbabo, Senior Application Engineer at Untether AI, demonstrates the company’s latest edge AI and vision technologies and products at the 2024 Embedded Vision Summit. Specifically, Sbabo demonstrates his company’s AI inference technology with AI accelerator cards that leverage the capabilities of the runAI family of ICs in a PCI-Express form factor. This demonstration

Read More »

Inuitive Demonstration of the M4.51 Depth and AI Sensor Module Based on the NU4100 Vision Processor

Shay Harel, field application engineer at Inuitive, demonstrates the company’s latest edge AI and vision technologies and products at the 2024 Embedded Vision Summit. Specifically, Harel demonstrates the capabilities of his company’s M4.51 sensor module using a simple Python script that leverages Inuitive’s API for real-time object detection. The M4.51 sensor module, based on the

Read More »

Gigantor Technologies Demonstration of Removing Resource Contention for Real-time Object Detection

Jessica Jones, Vice President and Chief Marketing Officer at Gigantor Technologies, demonstrates the company’s latest edge AI and vision technologies and products at the 2024 Embedded Vision Summit. Specifically, Jones demonstrates her company’s GigaMAACS Synthetic Scaler with a live facial tracking demo that enables real-time, unlimited object detection at all ranges while only requiring training

Read More »

Avnet Demonstration of an AI-driven Smart Parking Lot Monitoring System Using the RZBoard V2L

Monica Houston, AI Manager of the Advanced Applications Group at Avnet, demonstrates the company’s latest edge AI and vision technologies and products at the 2024 Embedded Vision Summit. Specifically, Houston demonstrates a smart city application based on her company’s RZBoard single-board computer. Using embedded vision and a combination of edge AI and cloud connectivity, the demo

Read More »

Advantech Demonstration of AI Vision with an Edge AI Camera and Deep Learning Software

Brian Lin, Field Sales Engineer at Advantech, demonstrates the company’s latest edge AI and vision technologies and products at the 2024 Embedded Vision Summit. Specifically, Lin demonstrates his company’s edge AI vision solution embedded with NVIDIA Jetson platforms. Lin demonstrates how Advantech’s industrial cameras, equipped with Overview’s deep-learning software, effortlessly capture even the tiniest defects

Read More »

Analog Devices Demonstration of the MAX78000 AI Microcontroller Performing Action Recognition

Navdeep Dhanjal, Executive Business and Product Manager for AI microcontrollers at Analog Devices, demonstrates the company’s latest edge AI and vision technologies and products at the 2024 Embedded Vision Summit. Specifically, Dhanjal demonstrates the MAX78000 AI microcontroller performing action recognition using a temporal convolutional network (TCN). Using a TCN-based model, the MAX78000 accurately recognizes a

Read More »

Accelerating Transformer Neural Networks for Autonomous Driving

This blog post was originally published at Ambarella’s website. It is reprinted here with the permission of Ambarella. Autonomous driving (AD) and advanced driver assistance system (ADAS) providers are deploying more and more AI neural networks (NNs) to offer a human-like driving experience. Several of the leading AD innovators have either deployed, or have a roadmap

Read More »

Sensor Cortek Demonstration of SmarterRoad Running on Synopsys ARC NPX6 NPU IP

Fahed Hassanhat, head of engineering at Sensor Cortek, demonstrates the company’s latest edge AI and vision technologies and products in Synopsys’ booth at the 2024 Embedded Vision Summit. Specifically, Hassanhat demonstrates his company’s latest ADAS neural network (NN) model, SmarterRoad, combining lane detection and open space detection. SmarterRoad is a light integrated convolutional network that

Read More »

Build VLM-powered Visual AI Agents Using NVIDIA NIM and NVIDIA VIA Microservices

This blog post was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. Traditional video analytics applications and their development workflow are typically built on fixed-function, limited models that are designed to detect and identify only a select set of predefined objects. With generative AI, NVIDIA NIM microservices, and foundation

Read More »

NXP Semiconductors Demonstration of Smart Fitness with the i.MX 93 Apps Processor

Manish Bajaj, Systems Engineer at NXP Semiconductors, demonstrates the company’s latest edge AI and vision technologies and products at the 2024 Embedded Vision Summit. Specifically, Bajaj demonstrates how the i.MX 93 applications processor can run machine learning applications with an Arm Ethos U-65 microNPU to accelerate inference on two simultaneously running deep learning vision-based

Read More »

Enhance Multi-camera Tracking Accuracy by Fine-tuning AI Models with Synthetic Data

This article was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. Large-scale, use-case-specific synthetic data has become increasingly important in real-world computer vision and AI workflows. That’s because digital twins are a powerful way to create physics-based virtual replicas of factories, retail spaces, and other assets, enabling precise simulations

Read More »

Nota AI Demonstration of Elevating Traffic Safety with Vision Language Models

Tae-Ho Kim, CTO and Co-founder of Nota AI, demonstrates the company’s latest edge AI and vision technologies and products at the 2024 Embedded Vision Summit. Specifically, Kim demonstrates his company’s Vision Language Model (VLM) solution, designed to elevate vehicle safety. Advanced models analyze and interpret visual data to prevent accidents and enhance driving experiences. The

Read More »

Here you’ll find a wealth of practical technical insights and expert advice to help you bring AI and visual intelligence into your products without flying blind.

Contact

Address

Berkeley Design Technology, Inc.
PO Box #4446
Walnut Creek, CA 94596

Phone
+1 (925) 954-1411