Development Tools for Embedded Vision
ENCOMPASSING MOST OF THE STANDARD ARSENAL USED FOR DEVELOPING REAL-TIME EMBEDDED PROCESSOR SYSTEMS
The software tools (compilers, debuggers, operating systems, libraries, etc.) encompass most of the standard arsenal used for developing real-time embedded processor systems, adding specialized vision libraries and possibly vendor-specific development tools. On the hardware side, the requirements depend on the application space, since the designer may need equipment for monitoring and testing real-time video data. Most of these hardware development tools are already used in other types of video system design.
Both general-purpose and vendor-specific tools
Many vendors of vision devices use integrated CPUs based on common instruction sets (ARM, x86, etc.), allowing a shared set of software development tools. However, even when the base instruction set is the same, each CPU vendor integrates a different set of peripherals with unique software interface requirements. In addition, most vendors accelerate the CPU with specialized computing devices (GPUs, DSPs, FPGAs, etc.). This extended programming model requires customized versions of the standard development tools. Most CPU vendors develop their own optimized software toolchain, while also working with third-party tool suppliers to ensure that their CPU components are broadly supported.
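As a rough illustration of this extended programming model, the sketch below shows a pattern common in heterogeneous vision software: probing for an accelerated backend at startup and falling back to a portable CPU path when none is found. All names here (`probe_accelerator`, `cpu_convolve`) are invented for illustration; a real toolchain would bind the accelerated path to a vendor runtime for a GPU, DSP, or FPGA.

```python
# Minimal sketch of backend dispatch in a heterogeneous vision pipeline.
# Backend names are hypothetical; real code would query a vendor driver.
from typing import Callable, List, Optional

def cpu_convolve(pixels: List[int], kernel: List[int]) -> List[int]:
    """Portable reference path: 1-D convolution on the CPU, zero-padded."""
    k = len(kernel)
    pad = k // 2
    padded = [0] * pad + pixels + [0] * pad
    return [
        sum(padded[i + j] * kernel[j] for j in range(k))
        for i in range(len(pixels))
    ]

def probe_accelerator() -> Optional[Callable]:
    """Stand-in for vendor runtime detection (e.g. querying a GPU or DSP
    driver). In this sketch it always reports no accelerator present."""
    return None

def convolve(pixels: List[int], kernel: List[int]) -> List[int]:
    """Dispatch to an accelerated kernel if one is available; otherwise
    fall back to the portable CPU implementation."""
    accelerated = probe_accelerator()
    impl = accelerated if accelerated is not None else cpu_convolve
    return impl(pixels, kernel)

result = convolve([1, 2, 3, 4], [1, 0, -1])
```

The design point is that application code calls one `convolve` entry point regardless of target; the customized vendor toolchain decides, at build or run time, which implementation backs it.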
Heterogeneous software development in an integrated development environment
Since vision applications often require a mix of processing architectures, the development tools become more complicated and must handle multiple instruction sets and additional system debugging challenges. Most vendors provide a suite of tools that integrate development tasks into a single interface for the developer, simplifying software development and testing.
![](https://www.edge-ai-vision.com/wp-content/uploads/2024/07/53817580969_84f61b886a_o-300x167.jpg)
Microchip Technology Expands Processing Portfolio to Include Multi-core 64-bit Microprocessors
PIC64GX MPU is the first of several product lines planned for Microchip’s PIC64 portfolio CHANDLER, Ariz., July 9, 2024—Real-time, compute-intensive applications such as smart embedded vision and Machine Learning (ML) are pushing the boundaries of embedded processing requirements, demanding greater power efficiency, hardware-level security and high reliability at the edge. With the launch of its
![](https://www.edge-ai-vision.com/wp-content/uploads/2024/07/canvas-app-nv-blog-1280x680-1-300x159.jpg)
Decoding How the Generative AI Revolution BeGAN
This blog post was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. NVIDIA Research’s GauGAN demo set the scene for a new wave of generative AI apps supercharging creative workflows. Editor’s note: This post is part of the AI Decoded series, which demystifies AI by making the technology more
![](https://www.edge-ai-vision.com/wp-content/uploads/2024/07/BhCeHtIM1Mw-300x169.jpg)
Network Optix Demonstration of Instantly Scalable AI Models Using the Nx AI Manager
Wim De Wispelaere, Nx Senior Business Development Manager at Network Optix, demonstrates the company’s latest edge AI and vision technologies and products at the 2024 Embedded Vision Summit. Specifically, De Wispelaere demonstrates the Nx AI Manager, a plugin specifically designed to add AI functionality on the edge to video solutions built using the Nx Enterprise
![](https://www.edge-ai-vision.com/wp-content/uploads/2024/07/n0hA7jOX-AQ-300x169.jpg)
Network Optix Demonstration of Seamlessly Deploying AI Models at the Edge with Nx AI Manager
Wim De Wispelaere, Nx Senior Business Development Manager at Network Optix, demonstrates the company’s latest edge AI and vision technologies and products at the 2024 Embedded Vision Summit. Specifically, De Wispelaere demonstrates Nx AI Manager, the newest addition to the Nx Toolkit, a plugin specifically designed to add AI functionality on the edge to video
![](https://www.edge-ai-vision.com/wp-content/uploads/2024/07/AI-cover-2-300x168.jpg)
AI Startups Continue Breaking Records with $33 Billion Raised in H1, the Highest Figure in the Market’s History
Contrary to some predictions of a slowdown in AI funding for 2024, the reality is quite the opposite. AI startups are not just breaking records; they are doing so at an accelerated pace, with more money raised than ever before. According to data presented by Stocklytics.com, AI startups have achieved an unprecedented feat, raising a
![](https://www.edge-ai-vision.com/wp-content/uploads/2024/07/WardenP_SpeakerCard-300x158.jpg)
“Deploying Large Language Models on a Raspberry Pi,” a Presentation from Useful Sensors
Pete Warden, CEO of Useful Sensors, presents the “Deploying Large Language Models on a Raspberry Pi” tutorial at the May 2024 Embedded Vision Summit. In this presentation, Warden outlines the key steps required to implement a large language model (LLM) on a Raspberry Pi. He begins by outlining the motivations… “Deploying Large Language Models on
![](https://www.edge-ai-vision.com/wp-content/uploads/2024/06/Awards-June-Cover-Image-300x160.png)
Broad Industry Recognition for Our Centrally Processed 4D Imaging Radar Architecture and Corporate Culture
This blog post was originally published at Ambarella’s website. It is reprinted here with the permission of Ambarella. Winning an award is always exciting, but winning eight is truly exhilarating! We’re honored that our groundbreaking architecture—which includes both Ambarella’s Oculii™ adaptive AI radar software and our CV3-AD family of highly efficient 5nm AI central domain
![](https://www.edge-ai-vision.com/wp-content/uploads/2024/07/MitalD_SpeakerCard-300x158.jpg)
“How to Run Audio and Vision AI Algorithms at Ultra-low Power,” a Presentation from Synaptics
Deepak Mital, Senior Director of Architectures at Synaptics, presents the “How to Run Audio and Vision AI Algorithms at Ultra-low Power” tutorial at the May 2024 Embedded Vision Summit. Running AI algorithms on battery-powered, low-cost devices requires a different approach to designing hardware and software. The power requirements are stringent… “How to Run Audio and