LETTER FROM THE EDITOR
Dear Colleague,
The 2024 Embedded Vision Summit Call for Presentation Proposals is still open, but not for much longer! I invite you to share your expertise. Our team is curating what will be more than 100 expert sessions and we’d love to see your proposal. From case studies on integrating perceptual AI into products to tutorials on the latest tools and algorithms, send in your session idea today. And if you’re not sure about your topic, check out the topics list to see what’s trending for 2024. The deadline for submissions is December 8.
On Wednesday, January 24, 2024 at 9 am PT, e-con Systems will deliver the free webinar “Mastering Image Quality: The Power of Imaging Signal Processors in Embedded Vision” in partnership with the Edge AI and Vision Alliance. Maximizing image quality is a key objective when developing embedded vision systems, since computer vision algorithms can only interpret what they can “see.” Many factors, such as lighting, optics, image sensor sensitivity, and various resolution attributes, are critical in determining image quality. Another important factor in image quality is the Image Signal Processor (ISP), which analyzes and optimizes the raw data coming from the image sensor.
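As quick background for that last point, here is a minimal sketch of one classic ISP stage, a gray-world auto white balance, written in plain C. It is purely illustrative and is not drawn from e-con Systems' webinar material; real ISPs chain many such stages (demosaicing, denoising, tone mapping and more) in dedicated hardware.

    /* Minimal sketch of one ISP stage: gray-world auto white balance.
     * Illustrative only -- real ISPs chain many such stages in
     * dedicated hardware. */
    #include <stdint.h>
    #include <stdio.h>

    #define W  4
    #define H  2
    #define CH 3  /* interleaved R, G, B */

    static uint8_t clamp_u8(double v) {
        if (v < 0.0)   return 0;
        if (v > 255.0) return 255;
        return (uint8_t)(v + 0.5);
    }

    /* Scale R and B so their means match the G mean (gray-world model). */
    static void gray_world_awb(uint8_t *img, int w, int h) {
        double sum[CH] = {0.0, 0.0, 0.0};
        int n = w * h;
        for (int i = 0; i < n; i++)
            for (int c = 0; c < CH; c++)
                sum[c] += img[i * CH + c];
        double gain_r = sum[1] / sum[0];  /* mean G / mean R */
        double gain_b = sum[1] / sum[2];  /* mean G / mean B */
        for (int i = 0; i < n; i++) {
            img[i * CH + 0] = clamp_u8(img[i * CH + 0] * gain_r);
            img[i * CH + 2] = clamp_u8(img[i * CH + 2] * gain_b);
        }
    }

    int main(void) {
        /* A tiny frame with a warm (reddish) color cast. */
        uint8_t frame[W * H * CH];
        for (int i = 0; i < W * H; i++) {
            frame[i * CH + 0] = 180; /* R */
            frame[i * CH + 1] = 128; /* G */
            frame[i * CH + 2] = 90;  /* B */
        }
        gray_world_awb(frame, W, H);
        printf("corrected pixel 0: R=%u G=%u B=%u\n",
               frame[0], frame[1], frame[2]);
        return 0;
    }

Running this balances the reddish test frame to neutral gray (R=128 G=128 B=128), which is exactly the kind of correction an ISP performs, at full frame rate, before a computer vision algorithm ever sees the image.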
In this webinar, Suresh Madhu, Head of Product Marketing, and Arun Asokan, Head of the ISP Division, both of e-con Systems, will explore the major types of ISPs currently on the market. They will offer insights into the role of ISPs in embedded vision systems, explain why ISPs are the backbone of superior image quality, and show how ISPs can be fine-tuned. Madhu and Asokan will also share real-life case studies that illustrate the power of ISPs, and an interactive question-and-answer session will follow the presentation. For more information and to register, please see the event page.
Brian Dipert
Editor-in-Chief, Edge AI and Vision Alliance
AUTOMOTIVE SENSING AND SIMULATION TECHNIQUES
ADAS and AV Sensors: What’s Winning and Why?
It’s clear that the number of sensors per vehicle—and the sophistication of these sensors—is growing rapidly, largely thanks to increased adoption of advanced safety and driver assistance features. In this presentation, Ian Riches, Vice President of the Global Automotive Practice at TechInsights, explores likely future demand for automotive radars, cameras and LiDARs. Riches examines which vehicle features will drive demand out to 2030, how vehicle architecture change is impacting the market and what sorts of compute platforms these sensors will be connected to. Finally, he shares his firm’s vision of what the landscape could look like far beyond 2030, considering scenarios out to 2050 for automated driving and the resulting sensor demand.
Developing an Efficient Automotive Augmented Reality Solution Using Teacher-student Learning and Sprints
ImmersiView is a deep learning–based augmented reality solution for automotive safety. It uses a head-up display to draw a driver’s attention to important objects. Developing such solutions often resembles research more than engineering, with long development cycles and unpredictable gains in accuracy and inference speed. In this talk, Jack Sim, CTO of STRADVISION, presents an efficient development process for projects involving multiple deep learning tasks. This process decouples task dependencies through teacher-student learning and improves accuracy and speed concurrently via sprints. In each sprint, STRADVISION trains a teacher network for each task, focusing only on improving accuracy; in the same sprint, a unified student network learns all tasks from the most accurate teacher networks. To optimize both accuracy and speed, STRADVISION applies neural architecture search to the student network in the initial sprints and then fixes the architecture. This development process enabled STRADVISION to create the ImmersiView prototype in three months, followed by monthly releases.
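For readers unfamiliar with teacher-student learning (also known as knowledge distillation), the sketch below shows the standard temperature-softened distillation loss in C: the student is trained to match the teacher's softened class probabilities. The class count, temperature and use of a plain KL-divergence loss are illustrative assumptions; STRADVISION's exact formulation is not described in the talk summary.

    /* Minimal sketch of the soft-label loss used in teacher-student
     * (knowledge distillation) training: KL(teacher || student), with
     * both logit vectors softened by a temperature T. The exact losses
     * used for ImmersiView are not public. Compile with -lm. */
    #include <math.h>
    #include <stdio.h>

    #define NUM_CLASSES 4

    static void softmax(const double *logits, double temperature,
                        double *probs) {
        double max = logits[0], sum = 0.0;
        for (int i = 1; i < NUM_CLASSES; i++)
            if (logits[i] > max) max = logits[i];
        for (int i = 0; i < NUM_CLASSES; i++) {
            probs[i] = exp((logits[i] - max) / temperature);
            sum += probs[i];
        }
        for (int i = 0; i < NUM_CLASSES; i++)
            probs[i] /= sum;
    }

    /* KL(p || q): how far the student (q) diverges from the teacher (p). */
    static double distill_loss(const double *teacher_logits,
                               const double *student_logits,
                               double temperature) {
        double p[NUM_CLASSES], q[NUM_CLASSES], kl = 0.0;
        softmax(teacher_logits, temperature, p);
        softmax(student_logits, temperature, q);
        for (int i = 0; i < NUM_CLASSES; i++)
            kl += p[i] * log(p[i] / q[i]);
        /* T^2 keeps gradient magnitudes comparable across temperatures. */
        return kl * temperature * temperature;
    }

    int main(void) {
        double teacher[NUM_CLASSES] = {4.0, 1.0, 0.5, 0.1};
        double student[NUM_CLASSES] = {2.5, 1.5, 0.8, 0.3};
        printf("distillation loss (T=2): %f\n",
               distill_loss(teacher, student, 2.0));
        return 0;
    }

The appeal of this setup for multi-task projects is the decoupling the talk describes: each teacher can be retrained independently to chase accuracy, while the single student network only ever consumes teacher outputs, so its architecture (and thus its inference speed) can be fixed early.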
TUTORIALS ON INDUSTRY STANDARDS
The OpenVX Standard API: Computer Vision for the Masses
Today, a great deal of effort is wasted in optimizing and re-optimizing computer vision and machine learning application software as algorithms change and as developers target different processors. Developers need a way to deploy their applications on any processor, taking advantage of processor features that boost performance without having to worry about the details of mapping each algorithm to each processor. OpenVX is a mature, open and royalty-free standard for cross-platform acceleration. It enables computer vision and machine learning applications to be written once, and then run on a variety of target processors, taking advantage of each processor’s unique capabilities. In this talk, Kiriti Nagesh Gowda, Senior Member of the Technical Staff at AMD and Chair of the OpenVX Working Group at the Khronos Group, explores the key features of OpenVX 1.3.1 and shows how developers are leveraging them. He illustrates the performance, portability and memory footprint advantages of OpenVX via open-source code samples. He also shares new OpenVX extensions and usability enhancements.
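To make the "write once, run on a variety of processors" model concrete, here is a minimal sketch of an OpenVX graph in C: a blur-then-edge pipeline declared once, which the vendor's OpenVX implementation then validates, optimizes and maps onto the target processor. The specific pipeline and image sizes are arbitrary examples, not taken from the talk.

    /* Minimal sketch of the OpenVX graph model: declare a pipeline once,
     * let the vendor's implementation map it to the target processor. */
    #include <VX/vx.h>
    #include <stdio.h>

    int main(void) {
        vx_context ctx = vxCreateContext();
        vx_graph graph = vxCreateGraph(ctx);

        /* Real input/output images; intermediates are "virtual," so the
         * implementation may keep them in on-chip memory or fuse nodes. */
        vx_image in   = vxCreateImage(ctx, 640, 480, VX_DF_IMAGE_U8);
        vx_image blur = vxCreateVirtualImage(graph, 640, 480, VX_DF_IMAGE_U8);
        vx_image gx   = vxCreateVirtualImage(graph, 640, 480, VX_DF_IMAGE_S16);
        vx_image gy   = vxCreateVirtualImage(graph, 640, 480, VX_DF_IMAGE_S16);
        vx_image mag  = vxCreateImage(ctx, 640, 480, VX_DF_IMAGE_S16);

        /* Blur, then edge gradients, then gradient magnitude. */
        vxGaussian3x3Node(graph, in, blur);
        vxSobel3x3Node(graph, blur, gx, gy);
        vxMagnitudeNode(graph, gx, gy, mag);

        /* Verification is where the implementation validates and
         * schedules the graph for the underlying hardware. */
        if (vxVerifyGraph(graph) == VX_SUCCESS) {
            vxProcessGraph(graph);  /* execute; typically once per frame */
            printf("graph executed\n");
        }

        vxReleaseGraph(&graph);
        vxReleaseContext(&ctx);
        return 0;
    }

Because the application describes only the dataflow, the same source can run on a GPU, DSP or dedicated vision accelerator, with each vendor's implementation choosing memory placement and kernel fusion, which is the portability argument at the heart of the talk.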
An Introduction to the CSI-2 Image Sensor Interface Standard
By taking advantage of select features in standardized interfaces, vision system architects can reduce processor load, cost and power consumption while gaining the flexibility to source components from multiple vendors. In this presentation, Haran Thanigasalam, Camera and Imaging Consultant to the MIPI Alliance, introduces MIPI CSI-2, the MIPI Alliance's standardized imaging conduit for interfacing image sensors to processors. Thanigasalam explains how the standard supports basic camera sensor operations, including physical frame transport options and orthogonal commands for autoexposure, autofocus, auto white balance and event detection. He also introduces command sets that help bring up and tune sensors. In addition, Thanigasalam explores provisions to reduce RF emissions, enable aggregation of data from multiple remote sensors, and eliminate the need for dual-voltage signaling to avoid electrical overstress issues. He also touches on emerging support for frame transport over Wi-Fi.
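As background on packet-based frame transport, the sketch below decodes the 32-bit CSI-2 long-packet header (virtual channel, data type, word count, ECC) following the publicly documented CSI-2 v1.x layout. It is a simplified illustration, not reference code from the MIPI Alliance; consult the specification for authoritative details.

    /* Sketch of decoding a MIPI CSI-2 long-packet header (v1.x layout):
     *   byte 0:    Data ID = virtual channel (bits 7:6) + data type (bits 5:0)
     *   bytes 1-2: Word Count (payload length in bytes, little-endian)
     *   byte 3:    ECC over the first three bytes (check omitted here). */
    #include <stdint.h>
    #include <stdio.h>

    typedef struct {
        uint8_t  virtual_channel; /* routes interleaved streams (0-3) */
        uint8_t  data_type;       /* pixel format, e.g. 0x2B = RAW10 */
        uint16_t word_count;      /* payload bytes that follow */
    } csi2_header;

    static csi2_header csi2_parse_header(const uint8_t hdr[4]) {
        csi2_header h;
        h.virtual_channel = (uint8_t)(hdr[0] >> 6);
        h.data_type       = (uint8_t)(hdr[0] & 0x3F);
        h.word_count      = (uint16_t)(hdr[1] | (hdr[2] << 8));
        return h;  /* hdr[3] is the ECC byte, not validated in this sketch */
    }

    int main(void) {
        /* Example: an 800-byte RAW10 line on virtual channel 0. */
        const uint8_t wire[4] = {0x2B, 0x20, 0x03, 0x00};
        csi2_header h = csi2_parse_header(wire);
        printf("VC=%u DT=0x%02X WC=%u bytes\n",
               h.virtual_channel, h.data_type, h.word_count);
        return 0;
    }

The virtual channel field is what enables the aggregation Thanigasalam mentions: frames from multiple remote sensors can be interleaved on one link and demultiplexed at the receiver by channel number.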
UPCOMING INDUSTRY EVENTS
Mastering Image Quality: The Power of Imaging Signal Processors in Embedded Vision – e-con Systems Webinar: January 24, 2024, 9:00 am PT
Embedded Vision Summit: May 21-23, 2024, Santa Clara, California
More Events
FEATURED NEWS
AMD Expands Its Ryzen Embedded Processor Family for High-performance Industrial Automation, Machine Vision and Edge Applications
NVIDIA Supercharges Its Hopper AI Computing Platform
FRAMOS Adds an IR Pass Filter to Its D400e-f Stereo Depth Camera Series
IDS’ New Class of Edge AI Industrial Cameras Enables AI Overlays in Live Video Streams
e-con Systems’ Latest 5 Mpixel Monochrome USB Camera Supports NIR Sensitivity
More News
EDGE AI AND VISION PRODUCT OF THE YEAR WINNER SHOWCASE
Synopsys ARC NPX6 NPU IP (Best Edge AI Processor)
Synopsys’ ARC NPX6 NPU IP is the 2023 Edge AI and Vision Product of the Year Award winner in the Edge AI Processors category. The ARC NPX6 NPU IP is an AI inference engine optimized for the latest neural network models, including newly emerging transformers, and scales from 8 to 3,500 TOPS to meet the demands of AI applications requiring real-time compute with ultra-low power consumption in consumer and safety-critical automotive applications. Key innovations of the NPX6 NPU IP start with a highly optimized 4K MAC building block featuring enhanced utilization, new sparsity features and hardware support for transformer networks. Optional floating-point data types (FP16/BF16) are embedded in the existing 8-bit data paths to minimize area increase and maximize software flexibility. Scaling is accomplished with an innovative interconnect that supports up to 24 4K MAC cores per engine, delivering 96K MACs of single-engine performance (440 TOPS with sparsity), and up to eight engines can be combined for up to 3,500 TOPS on a single SoC. The NPX6 also expands on the ISO 26262 functional safety features of its predecessor, the EV7x vision processor, and both the NPU IP and the new ARC MetaWare MX Development Toolkit integrate the connectivity features that make these multi-engine configurations possible.
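As a quick sanity check of the scaling figures quoted above (using only those figures, not any additional Synopsys data), the arithmetic works out as follows:

    /* Sanity check of the NPX6 scaling figures quoted above:
     * 24 cores x 4K MACs = 96K MACs per engine, and 8 engines at
     * 440 TOPS each land near the quoted 3,500 TOPS ceiling. */
    #include <stdio.h>

    int main(void) {
        const int macs_per_core   = 4096; /* the "4K MAC" building block */
        const int cores_max       = 24;   /* per engine */
        const int engines_max     = 8;    /* per SoC */
        const int tops_per_engine = 440;  /* with sparsity, per the blurb */

        printf("MACs per engine: %d (\"96K\")\n", macs_per_core * cores_max);
        printf("peak SoC TOPS:  ~%d (quoted as 3,500)\n",
               tops_per_engine * engines_max);
        return 0;
    }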
Please see here for more information on Synopsys’ ARC NPX6 NPU IP. The Edge AI and Vision Product of the Year Awards celebrate the innovation of the industry’s leading companies that are developing and enabling the next generation of edge AI and computer vision products. Winning a Product of the Year award recognizes a company’s leadership in edge AI and computer vision as evaluated by independent industry experts.