Automotive Applications for Embedded Vision
Vision products in automotive applications can make us better and safer drivers
Vision products in automotive applications can enhance the driving experience by making us better and safer drivers through both driver and road monitoring.
Driver monitoring applications use computer vision to ensure that the driver remains alert and awake while operating the vehicle. These systems monitor head movement and body language for indications that the driver is drowsy and thus poses a threat to others on the road. They can also detect distracted behaviors such as texting or eating, responding with a friendly reminder that encourages the driver to focus on the road instead.
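To make the drowsiness cue concrete, here is a minimal sketch of one widely used signal, the eye aspect ratio (EAR): when the eyes stay closed across enough consecutive frames, an alert fires. This is an illustrative approach rather than the method of any particular product; it assumes six (x, y) eye landmarks per eye are already supplied by a face-landmark detector, and the threshold and frame-count values are assumptions to be tuned per camera.

```python
import numpy as np

# Assumed tuning values; real systems calibrate these per camera and driver.
EAR_THRESHOLD = 0.21   # below this, treat the eye as closed
CLOSED_FRAMES = 48     # roughly 2 seconds at 24 fps before alerting

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """EAR from six (x, y) landmarks ordered corner, top, top, corner,
    bottom, bottom: two vertical distances over one horizontal distance."""
    v1 = np.linalg.norm(eye[1] - eye[5])
    v2 = np.linalg.norm(eye[2] - eye[4])
    h = np.linalg.norm(eye[0] - eye[3])
    return (v1 + v2) / (2.0 * h)

closed_count = 0

def update(left_eye: np.ndarray, right_eye: np.ndarray) -> bool:
    """Feed one frame's landmarks (shape (6, 2) each); True means alert."""
    global closed_count
    ear = (eye_aspect_ratio(left_eye) + eye_aspect_ratio(right_eye)) / 2.0
    closed_count = closed_count + 1 if ear < EAR_THRESHOLD else 0
    return closed_count >= CLOSED_FRAMES
```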
In addition to monitoring activity inside the vehicle, exterior applications such as lane departure warning systems apply lane detection algorithms to video in order to recognize lane markings and road edges and estimate the car's position within the lane. The driver can then be warned of unintentional lane departure. Solutions also exist to read roadside warning signs and alert the driver if they are not heeded, as well as for collision mitigation, blind spot detection, park and reverse assist, self-parking vehicles and event data recording.
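To sketch how the lane detection stage of such a system can work, the snippet below shows the classical OpenCV pipeline (grayscale, edge map, region of interest, probabilistic Hough transform) for illustration only, not any vendor's production algorithm. The region-of-interest polygon and Hough parameters are assumptions that depend on how the camera is mounted.

```python
import cv2
import numpy as np

def detect_lane_lines(frame: np.ndarray) -> np.ndarray:
    """Extract candidate lane-line segments from one BGR frame.
    Returns an (N, 4) array of (x1, y1, x2, y2) segments, possibly empty."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(cv2.GaussianBlur(gray, (5, 5), 0), 50, 150)

    # Keep only a trapezoid ahead of the vehicle; corners are camera-specific.
    h, w = edges.shape
    roi = np.zeros_like(edges)
    polygon = np.array([[(0, h), (w // 2 - 50, h // 2 + 50),
                         (w // 2 + 50, h // 2 + 50), (w, h)]], dtype=np.int32)
    cv2.fillPoly(roi, polygon, 255)
    masked = cv2.bitwise_and(edges, roi)

    lines = cv2.HoughLinesP(masked, rho=2, theta=np.pi / 180, threshold=50,
                            minLineLength=40, maxLineGap=100)
    return lines.reshape(-1, 4) if lines is not None else np.empty((0, 4), int)
```

A departure warning would then fit left and right lane lines from these segments and compare the lane center they imply against the camera's position, triggering when the offset grows without an accompanying turn signal.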
Eventually, this technology will lead to cars with self-driving capability; Google, for example, is already testing prototypes. However, many automotive industry experts believe that, at least in the near term, the goal of vision in vehicles is not so much to eliminate the driving experience as to make it safer.
Avnet Demonstration of an AI-driven Smart Parking Lot Monitoring System Using the RZBoard V2L
Monica Houston, AI Manager of the Advanced Applications Group at Avnet, demonstrates the company’s latest edge AI and vision technologies and products at the 2024 Embedded Vision Summit. Specifically, Houston demonstrates a smart city application based on her company’s RZBoard single-board computer. Using embedded vision and a combination of edge AI and cloud connectivity, the demo…
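The demo's internals aren't detailed in this excerpt, but the core of any vision-based parking monitor is mapping vehicle detections onto known spot locations. Below is a minimal sketch under assumed inputs; the detector boxes and the hand-annotated spot map are both hypothetical, not Avnet's actual pipeline.

```python
# Hypothetical spot map: spot ID -> (x1, y1, x2, y2) in image coordinates.
SPOTS = {"A1": (40, 300, 120, 420), "A2": (130, 300, 210, 420)}

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union else 0.0

def occupancy(vehicle_boxes, threshold=0.3):
    """Mark a spot occupied if any detected vehicle overlaps it enough.
    vehicle_boxes: list of (x1, y1, x2, y2) boxes from an object detector."""
    return {spot: any(iou(rect, v) >= threshold for v in vehicle_boxes)
            for spot, rect in SPOTS.items()}
```

Per-spot boolean occupancy is a tiny payload, which is what makes the edge-inference-plus-cloud-dashboard split practical in this kind of demo.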
Accelerating Transformer Neural Networks for Autonomous Driving
This blog post was originally published at Ambarella’s website. It is reprinted here with the permission of Ambarella. Autonomous driving (AD) and advanced driver assistance system (ADAS) providers are deploying more and more AI neural networks (NNs) to offer a human-like driving experience. Several of the leading AD innovators have either deployed, or have a roadmap…
Sensor Cortek Demonstration of SmarterRoad Running on Synopsys ARC NPX6 NPU IP
Fahed Hassanhat, head of engineering at Sensor Cortek, demonstrates the company’s latest edge AI and vision technologies and products in Synopsys’ booth at the 2024 Embedded Vision Summit. Specifically, Hassanhat demonstrates his company’s latest ADAS neural network (NN) model, SmarterRoad, combining lane detection and open space detection. SmarterRoad is a light integrated convolutional network that…
Ambarella and Plus Announce High Performance Transformer-based AD Perception Software Stack, PlusVision, for CV3-AD AI Domain Controller Family With Industry-leading Power Efficiency
Birds-Eye-View Vision Technology Enables OEMs to Offer L2+/L3 Autonomy Across Vehicle Models With Uniform Perception Software
SANTA CLARA, Calif., July 31, 2024 — Ambarella, Inc. (NASDAQ: AMBA), an edge AI semiconductor company, and Plus, an AI-based driver assist and autonomous driving (AD) solutions provider, today announced that Plus’s PlusVision™, a high-performance transformer-based AD perception software stack…
Super-Safety and Self-Steering: Exploring Autonomous Vehicles
With more than 90% of road traffic accidents coming down to human error, the importance of autonomy and its safety benefits cannot be overlooked. As the industry moves into a new era of autonomous driving technologies, safety benchmarks and standards are evolving, forcing car companies to adopt new technologies to keep their cars competitive. IDTechEx's…
Nota AI Demonstration of Elevating Traffic Safety with Vision Language Models
Tae-Ho Kim, CTO and Co-founder of Nota AI, demonstrates the company’s latest edge AI and vision technologies and products at the 2024 Embedded Vision Summit. Specifically, Kim demonstrates his company’s Vision Language Model (VLM) solution, designed to elevate vehicle safety. Advanced models analyze and interpret visual data to prevent accidents and enhance driving experiences. The…
Nota AI Demonstration of Revolutionizing Driver Monitoring Systems
Tae-Ho Kim, CTO and Co-founder of Nota AI, demonstrates the company’s latest edge AI and vision technologies and products at the 2024 Embedded Vision Summit. Specifically, Kim demonstrates Nota DMS, his company’s state-of-the-art driver monitoring system. The solution enhances driver safety by monitoring attention and detecting drowsiness in real-time. Cutting-edge AI techniques make Nota DMS…
Nextchip Demonstration of Its Vision Professional ISP Optimization for Computer Vision
Sophie Jeon, Global Strategy Marketing Manager at Nextchip, demonstrates the company’s latest edge AI and vision technologies and products at the 2024 Embedded Vision Summit. Specifically, Jeon demonstrates her company’s expertise in optimizing ISPs for computer vision by comparing the tuning technologies used for human vision and machine vision applications.
Steering a Revolution: Optimized Automated Driving with Heterogeneous Compute
This blog post was originally published at Qualcomm’s website. It is reprinted here with the permission of Qualcomm. Qualcomm Technologies’ latest whitepaper navigates the advantages of Snapdragon Ride Solutions based on heterogeneous compute SoCs. As the automotive industry continues to progress toward automated driving, advanced driver assistance systems (ADAS) are in high demand. These systems…
Nextchip Demonstration of the APACHE5 ADAS SoC
Sophie Jeon, Global Strategy Marketing Manager at Nextchip, demonstrates the company’s latest edge AI and vision technologies and products at the 2024 Embedded Vision Summit. Specifically, Jeon demonstrates her company’s APACHE5 ADAS SoC. APACHE5 is ready for market with an accompanying SDK, and has passed all qualifications for production such as PPAP (the Production Part Approval Process)…