
Development Tools for Embedded Vision

Encompassing most of the standard arsenal used for developing real-time embedded processor systems

The software tools (compilers, debuggers, operating systems, libraries, etc.) encompass most of the standard arsenal used for developing real-time embedded processor systems, with the addition of specialized vision libraries and, in some cases, vendor-specific development tools. On the hardware side, the requirements depend on the application space, since the designer may need equipment for monitoring and testing real-time video data. Most of these hardware development tools are already used for other types of video system design.
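For illustration only (this example is not part of the original article), the minimal C++ sketch below assumes OpenCV as the specialized vision library layered on top of a standard embedded tool chain; the camera index, frame count and Canny thresholds are arbitrary placeholders.

// Minimal sketch: a standard C++ tool chain plus a vision library (OpenCV is
// assumed here) forms the software core of an embedded vision application.
#include <opencv2/opencv.hpp>
#include <iostream>

int main() {
    cv::VideoCapture cap(0);                    // placeholder camera index
    if (!cap.isOpened()) {
        std::cerr << "No camera available\n";
        return 1;
    }
    cv::Mat frame, gray, edges;
    for (int i = 0; i < 100 && cap.read(frame); ++i) {
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);  // color conversion stage
        cv::Canny(gray, edges, 50.0, 150.0);            // placeholder processing stage
        // In a real system, the result would feed downstream analysis or be
        // streamed out for monitoring on video test equipment.
    }
    return 0;
}

The same source typically builds with a desktop compiler for early testing and with the vendor's cross-compiler for the target board, which is what lets most of the standard tool arsenal carry over.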

Both general-purpose and vendor-specific tools

Many vendors of vision devices use integrated CPUs based on the same instruction set (ARM, x86, etc.), allowing a common set of software development tools. However, even though the base instruction set is the same, each CPU vendor integrates a different set of peripherals with unique software interface requirements. In addition, most vendors accelerate the CPU with specialized computing devices (GPUs, DSPs, FPGAs, etc.), and this extended programming model requires a customized version of the standard development tools. Most CPU vendors develop their own optimized software tool chains, while also working with third-party tool suppliers to ensure that their CPUs are broadly supported.
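As a deliberately generic sketch, the host code below uses OpenCL, assumed here only as a representative vendor-neutral API rather than anything the article prescribes, to enumerate the compute devices a tool chain might target; GPUs, DSPs and FPGAs typically surface as distinct device types, each backed by its own vendor-supplied driver and compiler.

// Sketch only: enumerate OpenCL platforms and devices to show how different
// accelerators appear behind one host API. Assumes OpenCL headers and a
// vendor-supplied driver are installed on the development host or target.
#include <CL/cl.h>
#include <cstdio>

int main() {
    cl_uint num_platforms = 0;
    clGetPlatformIDs(0, nullptr, &num_platforms);
    if (num_platforms == 0) return 0;           // no OpenCL runtime present
    cl_platform_id platforms[8];
    if (num_platforms > 8) num_platforms = 8;
    clGetPlatformIDs(num_platforms, platforms, nullptr);

    for (cl_uint p = 0; p < num_platforms; ++p) {
        cl_uint num_devices = 0;
        clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_ALL, 0, nullptr, &num_devices);
        if (num_devices == 0) continue;
        cl_device_id devices[8];
        if (num_devices > 8) num_devices = 8;
        clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_ALL, num_devices, devices, nullptr);

        for (cl_uint d = 0; d < num_devices; ++d) {
            char name[256] = {0};
            cl_device_type type = 0;
            clGetDeviceInfo(devices[d], CL_DEVICE_NAME, sizeof(name), name, nullptr);
            clGetDeviceInfo(devices[d], CL_DEVICE_TYPE, sizeof(type), &type, nullptr);
            std::printf("device: %s (type 0x%llx)\n", name, (unsigned long long)type);
        }
    }
    return 0;
}

Writing and debugging kernels for each device is where the vendor-customized tools come in; the enumeration above only shows the common entry point.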

Heterogeneous software development in an integrated development environment

Since vision applications often require a mix of processing architectures, the development tools become more complicated and must handle multiple instruction sets and additional system debugging challenges. Most vendors provide a suite of tools that integrate development tasks into a single interface for the developer, simplifying software development and testing.
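As one illustrative example (assumed here, not taken from any particular vendor suite), OpenCV's transparent API shows the same idea at the code level: a single code path runs on the host CPU, or is dispatched to an OpenCL-capable accelerator when the run-time supports it.

// Sketch: one code path, heterogeneous execution. cv::UMat operations are
// dispatched to an OpenCL-capable accelerator when one is available,
// otherwise they fall back to the CPU. OpenCV is assumed as the library.
#include <opencv2/core.hpp>
#include <opencv2/core/ocl.hpp>
#include <opencv2/imgproc.hpp>
#include <iostream>

int main() {
    cv::ocl::setUseOpenCL(true);
    std::cout << "OpenCL available: "
              << (cv::ocl::haveOpenCL() ? "yes" : "no") << "\n";

    cv::Mat input(480, 640, CV_8UC3, cv::Scalar(64, 64, 64));  // synthetic frame
    cv::UMat uinput, ugray, ublur;
    input.copyTo(uinput);                        // may be placed in device memory

    cv::cvtColor(uinput, ugray, cv::COLOR_BGR2GRAY);
    cv::GaussianBlur(ugray, ublur, cv::Size(5, 5), 1.5);

    cv::Mat result = ublur.getMat(cv::ACCESS_READ).clone();    // copy back to host
    std::cout << "processed " << result.cols << "x" << result.rows << " frame\n";
    return 0;
}

Debugging such code still spans two instruction sets (host CPU plus accelerator), which is exactly the kind of complexity an integrated tool suite is meant to hide.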

