Object Identification Functions
Inuitive Demonstration of the M4.51 Depth and AI Sensor Module Based on the NU4100 Vision Processor
Shay Harel, field application engineer at Inuitive, demonstrates the company’s latest edge AI and vision technologies and products at the 2024 Embedded Vision Summit. Specifically, Harel demonstrates the capabilities of his company’s M4.51 sensor module using a simple Python script that leverages Inuitive’s API for real-time object detection. The M4.51 sensor module, based on the
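The teaser describes a simple Python script that uses Inuitive's API for real-time object detection. Inuitive's actual API is not shown here, so the sketch below is a hedged illustration of the general capture-infer-postprocess loop such a demo script follows; `run_inference` is a hypothetical stand-in for the sensor module's on-device inference call, and the labels, scores, and boxes are invented for illustration.

```python
"""Illustrative real-time object detection loop (not Inuitive's real API)."""

def run_inference(frame):
    # Hypothetical stand-in for the sensor module's on-device NN inference.
    # Returns (label, confidence, (x, y, w, h)) tuples.
    return [("person", 0.91, (40, 30, 80, 160)),
            ("chair", 0.42, (200, 120, 60, 90))]

def postprocess(detections, conf_threshold=0.5):
    """Discard low-confidence detections, as a simple demo script would."""
    return [d for d in detections if d[1] >= conf_threshold]

if __name__ == "__main__":
    frame = None  # a frame captured from the sensor module would go here
    for label, conf, box in postprocess(run_inference(frame)):
        print(f"{label}: {conf:.2f} at {box}")
```

In a live demo, the loop would repeat per frame and draw the surviving boxes onto the video feed.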
Interactive AI Tool Delivers Immersive Video Content to Blind and Low-vision Viewers
This blog post was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. New research aims to revolutionize video accessibility for blind or low-vision (BLV) viewers with an AI-powered system that gives users the ability to explore content interactively. The innovative system, detailed in a recent paper, addresses significant gaps
Gigantor Technologies Demonstration of Removing Resource Contention for Real-time Object Detection
Jessica Jones, Vice President and Chief Marketing Officer at Gigantor Technologies, demonstrates the company’s latest edge AI and vision technologies and products at the 2024 Embedded Vision Summit. Specifically, Jones demonstrates her company’s GigaMAACS’ Synthetic Scaler with a live facial tracking demo that enables real-time, unlimited object detection at all ranges while only requiring training

Avnet Demonstration of an AI-driven Smart Parking Lot Monitoring System Using the RZBoard V2L
Monica Houston, AI Manager of the Advanced Applications Group at Avnet, demonstrates the company’s latest edge AI and vision technologies and products at the 2024 Embedded Vision Summit. Specifically, Houston demonstrates a smart city application based on her company’s RZBoard single-board computer. Using embedded vision and a combination of edge AI and cloud connectivity, the demo
Advantech Demonstration of AI Vision with an Edge AI Camera and Deep Learning Software
Brian Lin, Field Sales Engineer at Advantech, demonstrates the company’s latest edge AI and vision technologies and products at the 2024 Embedded Vision Summit. Specifically, Lin demonstrates his company’s edge AI vision solution embedded with NVIDIA Jetson platforms. Lin demonstrates how Advantech’s industrial cameras, equipped with Overview’s deep-learning software, effortlessly capture even the tiniest defects
Analog Devices Demonstration of the MAX78000 AI Microcontroller Performing Action Recognition
Navdeep Dhanjal, Executive Business and Product Manager for AI microcontrollers at Analog Devices, demonstrates the company’s latest edge AI and vision technologies and products at the 2024 Embedded Vision Summit. Specifically, Dhanjal demonstrates the MAX78000 AI microcontroller performing action recognition using a temporal convolutional network (TCN). Using a TCN-based model, the MAX78000 accurately recognizes a
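The core building block of the temporal convolutional network (TCN) mentioned above is a causal, dilated 1-D convolution over a feature sequence. The pure-Python sketch below illustrates that operation only; the MAX78000's actual model runs quantized on its CNN accelerator, and the weights and sizes here are illustrative, not taken from Analog Devices' implementation.

```python
def causal_dilated_conv1d(x, weights, dilation=1):
    """Compute y[t] = sum_i w[i] * x[t - i*dilation], zero-padded on the
    left, so the output at time t depends only on past and present inputs
    (the 'causal' property a TCN needs for streaming action recognition)."""
    out = []
    for t in range(len(x)):
        acc = 0.0
        for i, w in enumerate(weights):
            j = t - i * dilation
            if j >= 0:  # left zero-padding: indices before t=0 contribute 0
                acc += w * x[j]
        out.append(acc)
    return out

# Stacking layers with dilations 1, 2, 4, ... grows the receptive field
# exponentially with depth, letting the network cover long action
# sequences at low compute cost.
```

For example, `causal_dilated_conv1d([1, 0, 0, 0], [1.0, 0.5], dilation=2)` spreads the impulse to positions 0 and 2, showing how dilation reaches further back in time per tap.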
Accelerating Transformer Neural Networks for Autonomous Driving
This blog post was originally published at Ambarella’s website. It is reprinted here with the permission of Ambarella. Autonomous driving (AD) and advanced driver assistance system (ADAS) providers are deploying more and more AI neural networks (NNs) to offer a human-like driving experience. Several of the leading AD innovators have either deployed, or have a roadmap
Sensor Cortek Demonstration of SmarterRoad Running on Synopsys ARC NPX6 NPU IP
Fahed Hassanhat, head of engineering at Sensor Cortek, demonstrates the company’s latest edge AI and vision technologies and products in Synopsys’ booth at the 2024 Embedded Vision Summit. Specifically, Hassanhat demonstrates his company’s latest ADAS neural network (NN) model, SmarterRoad, combining lane detection and open space detection. SmarterRoad is a lightweight integrated convolutional network that
Build VLM-powered Visual AI Agents Using NVIDIA NIM and NVIDIA VIA Microservices
This blog post was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. Traditional video analytics applications and their development workflow are typically built on fixed-function, limited models that are designed to detect and identify only a select set of predefined objects. With generative AI, NVIDIA NIM microservices, and foundation
Why Ethernet Cameras are Increasingly Used in Medical and Life Sciences Applications
This blog post was originally published at e-con Systems’ website. It is reprinted here with the permission of e-con Systems. In this blog, we will uncover the current medical and life sciences use cases in which Ethernet cameras are integral. The pace of technological transformations in medicine and life sciences is rapid. Imaging technologies used
NXP Semiconductors Demonstration of Smart Fitness with the i.MX 93 Apps Processor
Manish Bajaj, Systems Engineer at NXP Semiconductors, demonstrates the company’s latest edge AI and vision technologies and products at the 2024 Embedded Vision Summit. Specifically, Bajaj demonstrates how the i.MX 93 applications processor can run machine learning applications with an Arm Ethos U-65 microNPU to accelerate inference on two simultaneously running deep learning vision-based
Enhance Multi-camera Tracking Accuracy by Fine-tuning AI Models with Synthetic Data
This article was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. Large-scale, use-case-specific synthetic data has become increasingly important in real-world computer vision and AI workflows. That’s because digital twins are a powerful way to create physics-based virtual replicas of factories, retail spaces, and other assets, enabling precise simulations
Nota AI Demonstration of Elevating Traffic Safety with Vision Language Models
Tae-Ho Kim, CTO and Co-founder of Nota AI, demonstrates the company’s latest edge AI and vision technologies and products at the 2024 Embedded Vision Summit. Specifically, Kim demonstrates his company’s Vision Language Model (VLM) solution, designed to elevate vehicle safety. Advanced models analyze and interpret visual data to prevent accidents and enhance driving experiences. The
Nextchip Demonstration of Its Vision Professional ISP Optimization for Computer Vision
Sophie Jeon, Global Strategy Marketing Manager at Nextchip, demonstrates the company’s latest edge AI and vision technologies and products at the 2024 Embedded Vision Summit. Specifically, Jeon demonstrates her company’s expertise in optimizing ISPs for computer vision by comparing the tuning technologies used for human vision and machine vision applications.
Nextchip Demonstration of the APACHE5 ADAS SoC
Sophie Jeon, Global Strategy Marketing Manager at Nextchip, demonstrates the company’s latest edge AI and vision technologies and products at the 2024 Embedded Vision Summit. Specifically, Jeon demonstrates her company’s APACHE5 ADAS SoC. APACHE5 is ready for market with an accompanying SDK, and has passed all qualifications for production such as PPAP (the Production Part
Nextchip Demonstration of the APACHE6 ADAS SoC
Sophie Jeon, Global Strategy Marketing Manager at Nextchip, demonstrates the company’s latest edge AI and vision technologies and products at the 2024 Embedded Vision Summit. Specifically, Jeon demonstrates her company’s APACHE6 ADAS SoC. With advanced computing power, APACHE6 makes your vehicle smarter, avoiding risk while driving and parking.
Top Camera Features that Empower Smart Traffic Management Systems
This blog post was originally published at e-con Systems’ website. It is reprinted here with the permission of e-con Systems. Traffic systems leverage camera solutions to empower smart cities to handle major traffic challenges. Some of their capabilities include real-time monitoring, incident detection, and law enforcement. Discover the camera’s role in these systems and the
Cadence Demonstration of a Large Vision Model for Generative AI on the Tensilica Vision P6 DSP
Amol Borkar, Director of Product Marketing for Cadence Tensilica DSPs and Automotive Segment Director, demonstrates the company’s latest edge AI and vision technologies and products at the 2024 Embedded Vision Summit. Specifically, Borkar demonstrates the use of a Tensilica Vision P6 DSP for the latest generative AI (GenAI) applications. The Vision P6 DSP is a