
Development Tools for Embedded Vision

The software tools (compilers, debuggers, operating systems, libraries, etc.) encompass most of the standard arsenal used for developing real-time embedded processor systems, augmented with specialized vision libraries and, in some cases, vendor-specific development tools. On the hardware side, the requirements depend on the application space, since the designer may need equipment for monitoring and testing real-time video data. Most of these hardware development tools are already used for other types of video system design.
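
To make the role of the vision libraries concrete, here is a minimal sketch (assuming OpenCV, one widely used vision library, and a camera at index 0, both placeholders rather than recommendations) of the kind of capture-and-process loop an embedded vision application builds on top of these tools:

```cpp
// Minimal sketch of an embedded vision loop built on a standard vision
// library (OpenCV). The camera index and the edge-detection step are
// illustrative placeholders, not a recommendation.
#include <opencv2/opencv.hpp>
#include <iostream>

int main() {
    cv::VideoCapture cam(0);                 // open the default camera
    if (!cam.isOpened()) {
        std::cerr << "No camera available\n";
        return 1;
    }

    cv::Mat frame, gray, edges;
    for (int i = 0; i < 100 && cam.read(frame); ++i) {  // process 100 frames
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
        cv::Canny(gray, edges, 50, 150);     // stand-in for real vision work
        // A real application would pass 'edges' (or detections derived from
        // it) to downstream application logic here.
    }
    return 0;
}
```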

Both general-purpose and vendor-specific tools

Many vendors of vision devices use integrated CPUs that are based on the same instruction set (ARM, x86, etc.), allowing a common set of software development tools to be used. However, even though the base instruction set is the same, each CPU vendor integrates a different set of peripherals that have unique software interface requirements. In addition, most vendors accelerate the CPU with specialized computing devices (GPUs, DSPs, FPGAs, etc.). This extended CPU programming model requires a customized version of the standard development tools. Most CPU vendors develop their own optimized software tool chain, while also working with third-party software tool suppliers to make sure that their CPU components are broadly supported.
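
This extended programming model is often reached through standard acceleration APIs supplied with the vendor's tool chain, such as OpenCL. As a hedged sketch (assuming OpenCV's transparent API and an input image named frame.png, both placeholders), the code below offloads filtering to an OpenCL-capable accelerator when the vendor runtime exposes one, and falls back to the CPU otherwise:

```cpp
// Sketch: targeting a vendor accelerator through OpenCL using OpenCV's
// transparent API (cv::UMat). Filter parameters and file name are
// illustrative placeholders.
#include <opencv2/opencv.hpp>
#include <opencv2/core/ocl.hpp>
#include <iostream>

int main() {
    // Report whether an OpenCL runtime (typically the vendor's GPU/DSP
    // driver) was detected; OpenCV falls back to the CPU if not.
    std::cout << "OpenCL available: " << std::boolalpha
              << cv::ocl::haveOpenCL() << "\n";
    cv::ocl::setUseOpenCL(true);             // request offload when possible

    cv::Mat input = cv::imread("frame.png"); // placeholder input image
    if (input.empty()) return 1;

    cv::UMat src, blurred, edges;            // UMat data may live in device memory
    input.copyTo(src);
    cv::GaussianBlur(src, blurred, cv::Size(7, 7), 1.5);  // may run on the accelerator
    cv::Canny(blurred, edges, 50, 150);

    cv::Mat result = edges.getMat(cv::ACCESS_READ).clone(); // copy back to host memory
    std::cout << "Processed a " << result.cols << "x" << result.rows << " frame\n";
    return 0;
}
```

The appeal of this style of API on heterogeneous parts is that the same source builds and runs whether or not the accelerator is present, with the vendor's customized tool chain handling the device-specific details.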

Heterogeneous software development in an integrated development environment

Since vision applications often require a mix of processing architectures, the development tools must handle multiple instruction sets as well as the additional debugging challenges of a heterogeneous system. Most vendors provide a suite of tools that integrates these development tasks into a single interface, simplifying software development and testing.
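
Those integrated suites typically bundle profilers and trace viewers for exactly this reason. As a tool-agnostic illustration (the stage names and workloads below are hypothetical), per-stage timing like the following is a common first step when deciding which parts of a pipeline to move onto an accelerator:

```cpp
// Tool-agnostic sketch: per-stage timing of a hypothetical vision pipeline,
// useful when deciding which stages to move onto an accelerator.
#include <chrono>
#include <cstdio>
#include <functional>
#include <string>
#include <vector>

// Run a stage and report its wall-clock duration in milliseconds.
static void timeStage(const std::string& name, const std::function<void()>& stage) {
    auto t0 = std::chrono::steady_clock::now();
    stage();
    auto t1 = std::chrono::steady_clock::now();
    double ms = std::chrono::duration<double, std::milli>(t1 - t0).count();
    std::printf("%-12s %8.3f ms\n", name.c_str(), ms);
}

int main() {
    // Placeholder workloads standing in for real pipeline stages
    // (capture, preprocessing, inference, postprocessing).
    std::vector<float> buf(1 << 20, 1.0f);
    timeStage("capture",    [&] { for (auto& v : buf) v += 0.5f; });
    timeStage("preprocess", [&] { for (auto& v : buf) v *= 0.99f; });
    timeStage("inference",  [&] { for (auto& v : buf) v = v * v + 1.0f; });
    timeStage("postproc",   [&] { for (auto& v : buf) v -= 1.0f; });
    return 0;
}
```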

Harnessing the Power of LLM Models on Arm CPUs for Edge Devices

This blog post was originally published at Digica’s website. It is reprinted here with the permission of Digica. In recent years, the field of machine learning has witnessed significant advancements, particularly with the development of Large Language Models (LLMs) and image generation models. Traditionally, these models have relied on powerful cloud-based infrastructures to deliver impressive

Read More »

AI On the Road: Why AI-powered Cars are the Future

This blog post was originally published at Qualcomm’s website. It is reprinted here with the permission of Qualcomm. AI transforms your driving experience in unexpected ways, as showcased by Qualcomm Technologies’ collaborations. As automotive technology rapidly advances, consumers are looking for vehicles that deliver AI-enhanced experiences through conversational voice assistants and sophisticated user interfaces. Automotive

Read More »

CPUs, GPUs, and AI: Exploring High-performance Computing Hardware

IDTechEx’s latest report, “Hardware for HPC and AI 2025-2035: Technologies, Markets, Forecasts”, provides data showing that graphics processing units (GPUs) have been increasingly adopted within high performance computing (HPC) since the introduction of generative AI and large language models, along with advanced memory, storage, networking, and cooling technologies. Drivers for

Read More »

Visual Intelligence at the Edge

This blog post was originally published at Au-Zone Technologies’ website. It is reprinted here with the permission of Au-Zone Technologies. Optimizing AI-based video telematics deployments on constrained SoC platforms. The demand for advanced video telematics systems is growing rapidly as companies seek to enhance road safety, improve operational efficiency, and manage liability costs with AI-powered

Read More »

Improving Vision Model Performance Using Roboflow and Tenyks

This blog post was originally published at Tenyks’ website. It is reprinted here with the permission of Tenyks. When improving an object detection model, many engineers focus solely on tweaking the model architecture and hyperparameters. However, the root cause of mediocre performance often lies in the data itself. In this collaborative post between Roboflow and

Read More »

Federated Learning: Risks and Challenges

This blog post was originally published at Digica’s website. It is reprinted here with the permission of Digica. In the first article of our mini-series on Federated Learning (FL), Privacy-First AI: Exploring Federated Learning, we introduced the basic concepts behind the decentralized training approach, and we also presented potential applications in certain domains. Undoubtedly, FL

Read More »

