Object Tracking Functions
“Enabling Smart Retail with Visual AI,” a Presentation from 365 Retail Markets
Himanshu Vajaria, Engineering Manager at 365 Retail Markets, presents the “Enabling Smart Retail with Visual AI” tutorial at the May 2024 Embedded Vision Summit. Automated checkout systems are on the rise—preferred by customers and businesses alike. However, most systems rely on the customer scanning one product at a time and…
Simplifying Camera Calibration to Enhance AI-powered Multi-Camera Tracking
This blog post was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. This post is the third in a series on building multi-camera tracking vision AI applications. The first and second parts introduced the overall end-to-end workflow and the fine-tuning process used to enhance system accuracy. NVIDIA Metropolis
ADAS & Autonomous Cars: A $500 Million Opportunity for LWIR Cameras
Autonomous emergency braking (AEB) is one of the most important ADAS (advanced driver assistance systems) features a vehicle can have. Studies have shown that pedestrian crash risk is reduced by 25-27% and pedestrian injury crash risk by 29-30% through the implementation of AEB. As discussed in IDTechEx’s new report, “Infrared (IR) Cameras for Automotive 2025-2035:
“Understand the Multimodal World with Minimal Supervision,” a Keynote Presentation from Yong Jae Lee
Yong Jae Lee, Associate Professor in the Department of Computer Sciences at the University of Wisconsin-Madison and CEO of GivernyAI, presents the “Learning to Understand Our Multimodal World with Minimal Supervision” tutorial at the May 2024 Embedded Vision Summit. The field of computer vision is undergoing another profound change. Recently,…
Ceva Wins Prestigious OFweek China Automotive Industry Award 2024
Ceva-Waves UWB low-power ultra-wideband IP named a winner of the Cabin-Driving Integrated Technology Breakthrough Award in China. ROCKVILLE, MD., August 28, 2024 – Ceva, Inc. (NASDAQ: CEVA), the leading licensor of silicon and software IP that enables Smart Edge devices to connect, sense and infer data more reliably and efficiently, announced today that its
Endeavor Air Expands dentCHECK Use to Enhance the Quality and Efficiency of Dent-mapping Workflows
Endeavor Air has implemented dentCHECK at multiple bases to streamline dent-mapping and reporting workflows. Constance, Germany and Rancho Cucamonga, California – Aug 22, 2024 – “dentCHECK was the right device to expand our capabilities and advance Endeavor Air’s efforts of integrating more technology in our hangars,” said Bob Olson, Director of Quality and Training, Endeavor
“An Introduction to Semantic Segmentation,” a Presentation from Au-Zone Technologies
Sébastien Taylor, Vice President of Research and Development for Au-Zone Technologies, presents the “Introduction to Semantic Segmentation” tutorial at the May 2024 Embedded Vision Summit. Vision applications often rely on object detectors, which determine the nature and location of objects in a scene. But many vision applications require a different…
“Augmenting Visual AI through Radar and Camera Fusion,” a Presentation from Au-Zone Technologies
Sébastien Taylor, Vice President of Research and Development for Au-Zone Technologies, presents the “Augmenting Visual AI through Radar and Camera Fusion” tutorial at the May 2024 Embedded Vision Summit. In this presentation Taylor discusses well-known limitations of camera-based AI and how radar can be leveraged to address these limitations. He…
“Introduction to Visual Simultaneous Localization and Mapping (VSLAM),” a Presentation from Cadence
Amol Borkar, Product Marketing Director, and Shrinivas Gadkari, Design Engineering Director, both of Cadence, co-present the “Introduction to Visual Simultaneous Localization and Mapping (VSLAM)” tutorial at the May 2024 Embedded Vision Summit. Simultaneous localization and mapping (SLAM) is widely used in industry and has numerous applications where camera or ego-motion…
Scalable Public Safety with On-device AI: How Startup FocusAI is Filling Enterprise Security Market Gaps
This blog post was originally published at Qualcomm’s website. It is reprinted here with the permission of Qualcomm. Enterprise security is not just big business, it’s about keeping you safe: Here’s how engineer-turned-CTO Sudhakaran Ram collaborated with us to do just that. Key Takeaways: On-device AI enables superior enterprise-grade security. Distributed computing cost-efficiently enables actionable
Untether AI Demonstration of Video Analysis Using the runAI Family of Inference Accelerators
Max Sbabo, Senior Application Engineer at Untether AI, demonstrates the company’s latest edge AI and vision technologies and products at the 2024 Embedded Vision Summit. Specifically, Sbabo demonstrates his company’s AI inference technology with AI accelerator cards that leverage the capabilities of the runAI family of ICs in a PCI-Express form factor. This demonstration
The Role of AI-driven Embedded Vision Cameras in Self-checkout Loss Prevention
This blog post was originally published at e-con Systems’ website. It is reprinted here with the permission of e-con Systems. Self-checkout usage is rapidly growing and redefining retail experiences. This shift has led to retail losses that can only be overcome by AI-based embedded vision. Explore the types of retail shrinkage, how AI helps, and
Inuitive Demonstration of the M4.51 Depth and AI Sensor Module Based on the NU4100 Vision Processor
Shay Harel, field application engineer at Inuitive, demonstrates the company’s latest edge AI and vision technologies and products at the 2024 Embedded Vision Summit. Specifically, Harel demonstrates the capabilities of his company’s M4.51 sensor module using a simple Python script that leverages Inuitive’s API for real-time object detection. The M4.51 sensor module, based on the
Interactive AI Tool Delivers Immersive Video Content to Blind and Low-vision Viewers
This blog post was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. New research aims to revolutionize video accessibility for blind or low-vision (BLV) viewers with an AI-powered system that gives users the ability to explore content interactively. The innovative system, detailed in a recent paper, addresses significant gaps
Gigantor Technologies Demonstration of Removing Resource Contention for Real-time Object Detection
Jessica Jones, Vice President and Chief Marketing Officer at Gigantor Technologies, demonstrates the company’s latest edge AI and vision technologies and products at the 2024 Embedded Vision Summit. Specifically, Jones demonstrates her company’s GigaMAACS Synthetic Scaler with a live facial tracking demo that enables real-time, unlimited object detection at all ranges while only requiring training
Avnet Demonstration of an AI-driven Smart Parking Lot Monitoring System Using the RZBoard V2L
Monica Houston, AI Manager of the Advanced Applications Group at Avnet, demonstrates the company’s latest edge AI and vision technologies and products at the 2024 Embedded Vision Summit. Specifically, Houston demonstrates a smart city application based on her company’s RZBoard single-board computer. Using embedded vision and a combination of edge AI and cloud connectivity, the demo
Advantech Demonstration of AI Vision with an Edge AI Camera and Deep Learning Software
Brian Lin, Field Sales Engineer at Advantech, demonstrates the company’s latest edge AI and vision technologies and products at the 2024 Embedded Vision Summit. Specifically, Lin demonstrates his company’s edge AI vision solution embedded with NVIDIA Jetson platforms. Lin demonstrates how Advantech’s industrial cameras, equipped with Overview’s deep-learning software, effortlessly capture even the tiniest defects
Analog Devices Demonstration of the MAX78000 AI Microcontroller Performing Action Recognition
Navdeep Dhanjal, Executive Business and Product Manager for AI microcontrollers at Analog Devices, demonstrates the company’s latest edge AI and vision technologies and products at the 2024 Embedded Vision Summit. Specifically, Dhanjal demonstrates the MAX78000 AI microcontroller performing action recognition using a temporal convolutional network (TCN). Using a TCN-based model, the MAX78000 accurately recognizes a