Dear Colleague,
Next Wednesday, July 24, 2024 at 9 am PT, TechInsights will deliver the free webinar “Who is Winning the Battle for ADAS and Autonomous Vehicle Processing, and How Large is the Prize?” in partnership with the Edge AI and Vision Alliance. Ian Riches, Vice President of the Global Automotive Practice at TechInsights, will share insights on the current and likely future landscape for advanced driver assistance systems (ADAS) and autonomous vehicle processing.
Riches will give an independent assessment of how TechInsights sees NVIDIA, Mobileye, Qualcomm, and other competitors stacking up across the global and Chinese markets. With the ADAS processor market forecast by TechInsights to be worth more than $14B by 2030, the presentation will also look at how this market will likely break down by application and region, as well as make projections for 2040 and 2050, based upon forecasts for the roll-out of automated driving.
A question-and-answer session will follow the presentation. For more information and to register, please see the event page.
Brian Dipert
Editor-In-Chief, Edge AI and Vision Alliance
Building and Scaling AI Applications with the Nx AI Manager
In this presentation, Robin van Emden, Senior Director of Data Science at Network Optix, covers the basics of scaling edge AI solutions using the Nx toolkit. He emphasizes the process of developing AI models and deploying them globally. He also showcases the conversion of AI models and the creation of effective edge AI pipelines, with a focus on pre-processing, model conversion, selecting the appropriate inference engine for the target hardware, and post-processing. Van Emden shows how Nx can simplify the developer’s life and facilitate a rapid transition from concept to production-ready applications. He provides valuable insights into developing scalable and efficient edge AI solutions, with a strong focus on practical implementation.
Build a Tiny Vision Application in Minutes with the Edge App SDK
In the fast-paced world of embedded vision applications, moving rapidly from concept to deployment is crucial. In this talk, Dan Mihai Dumitriu, Chief Technology Officer at Midokura, a Sony Group company, introduces the Edge App Runtime and SDK—groundbreaking tools designed to streamline and accelerate the development process for edge computing solutions. Leveraging a pre-built app skeleton, the SDK simplifies the development journey, allowing developers to focus on customizing event handlers using popular high-level languages such as JavaScript and Python. This approach not only democratizes edge application development, but also significantly reduces the time to market. With an integrated local tool that supports development, testing, building and deployment, the transition from a local environment to cloud deployment becomes seamless. Dumitriu explores how the Edge App Runtime and SDK are enabling the creation and deployment of edge applications in a matter of minutes, making edge application development more accessible and efficient than ever before.
Squeezing the Last Milliwatt and Cubic Millimeter from Smart Cameras Using the Latest FPGAs and DRAMs
Attaining the lowest power, size and cost for a smart camera requires carefully matching the hardware to the actual application requirements. General-purpose media processors may appear attractive and easy to use, but often include unneeded features which increase system size, weight, power and cost. “Right-sizing” the camera design for the application requirements can save significant power, cost, size and weight. In this talk, Hussein Osman, Segment Marketing Director at Lattice Semiconductor, and Richard Crisp, Vice President and Chief Scientist at Etron Technology America, show how you can leverage an advanced power-optimized FPGA incorporating a soft RISC-V core combined with a video-bandwidth, low-pin-count DRAM to cut power consumption roughly in half for endpoint smart cameras used in automotive, industrial and other applications. They examine techniques for reducing power, cost and size including system architecture, memory architecture, packaging, and signaling and termination schemes. They also explore techniques for enhancing system reliability.
How to Run Audio and Vision AI Algorithms at Ultra-low Power
Running AI algorithms on battery-powered, low-cost devices requires a different approach to designing hardware and software. The power requirements are stringent at standby, but the device needs to be able to awaken quickly when an event is detected. The device needs to “pseudo” wake up, determine if the event needs attention, and then either go back to standby or become active to classify the event. This multistage wake-up process and the associated intelligence require tight orchestration of hardware and software. Apart from runtime software, the AI models must be highly optimized to fit and run on the constrained device. To show how this can be done, Deepak Mital, Senior Director of Architectures at Synaptics, presents a solution that combines hardware, software and AI models to enable running audio and video AI algorithms at ultra-low power.
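The multistage wake-up flow described above can be sketched as a simple state machine. This is a minimal, hypothetical illustration only: the state names, the `triage` threshold, and the `classify` placeholder are all invented for clarity and do not represent Synaptics hardware or any real device API.

```python
# Hypothetical sketch of a multistage wake-up flow for an ultra-low-power
# device: standby -> pseudo wake (cheap triage) -> either back to standby
# or fully active to classify the event. All names/thresholds are illustrative.
from enum import Enum

class Power(Enum):
    STANDBY = "standby"       # ultra-low power: only the event detector runs
    PSEUDO_WAKE = "pseudo"    # lightweight triage: does the event need attention?
    ACTIVE = "active"         # full power: run the optimized AI classifier

def triage(event: dict) -> bool:
    """Cheap check (e.g., an energy threshold) deciding if full wake-up is needed."""
    return event.get("energy", 0.0) > 0.5  # illustrative threshold

def classify(event: dict) -> str:
    """Placeholder for the highly optimized on-device AI model."""
    return "speech" if event.get("kind") == "audio" else "motion"

def handle_event(event: dict):
    """Walk one event through standby -> pseudo wake -> active or standby."""
    state = Power.PSEUDO_WAKE           # detector fired: partially wake up
    if not triage(event):               # event not worth attention:
        return Power.STANDBY, None      #   drop back to standby immediately
    state = Power.ACTIVE                # event matters: fully wake up
    return state, classify(event)       # classify, then report final state

# A loud audio event warrants full wake-up; a faint one returns to standby.
print(handle_event({"kind": "audio", "energy": 0.9}))
print(handle_event({"kind": "audio", "energy": 0.1}))
```

The design point the sketch illustrates is that the expensive classifier only runs after a much cheaper triage stage has decided the event is worth the energy, which is what keeps average power low.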