Development Tools for Embedded Vision

ENCOMPASSING MOST OF THE STANDARD ARSENAL USED FOR DEVELOPING REAL-TIME EMBEDDED PROCESSOR SYSTEMS

The software tools (compilers, debuggers, operating systems, libraries, etc.) encompass most of the standard arsenal used for developing real-time embedded processor systems, with the addition of specialized vision libraries and, often, vendor-specific development tools. On the hardware side, the requirements depend on the application space, since the designer may need equipment for monitoring and testing real-time video data. Most of these hardware development tools are already used in other types of video system design.

Both general-purpose and vendor-specific tools

Many vendors of vision devices use integrated CPUs based on the same instruction set (ARM, x86, etc.), allowing a common set of software development tools. However, even though the base instruction set is the same, each CPU vendor integrates a different set of peripherals with unique software interface requirements. In addition, most vendors accelerate the CPU with specialized computing devices (GPUs, DSPs, FPGAs, etc.). This extended CPU programming model requires a customized version of standard development tools. Most CPU vendors develop their own optimized software tool chains, while also working with third-party software tool suppliers to ensure that the CPU components are broadly supported.
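As a concrete illustration of this extended programming model, an application often has to pick a compute backend at runtime from whatever accelerators the vendor has integrated. The sketch below is a minimal, hypothetical example of such dispatch logic; the backend names and preference order are illustrative assumptions, not any vendor's actual API.

```python
# Hedged sketch: runtime dispatch across heterogeneous compute backends.
# The backend names and preference order below are invented for
# illustration; real toolkits expose their own device-query APIs.
PREFERENCE = ("npu", "gpu", "dsp", "cpu")

def select_backend(available):
    """Pick the most capable backend present on this device."""
    for backend in PREFERENCE:
        if backend in available:
            return backend
    raise RuntimeError("no supported compute backend found")
```

A real toolchain wraps this kind of decision in its runtime (selecting kernels compiled for each target), but the fallback-to-CPU pattern is common across vendors.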

Heterogeneous software development in an integrated development environment

Since vision applications often require a mix of processing architectures, the development tools become more complicated and must handle multiple instruction sets and additional system debugging challenges. Most vendors provide a suite of tools that integrate development tasks into a single interface for the developer, simplifying software development and testing.
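Under the hood, such an integrated environment is driving a separate cross-compiler per instruction set. The sketch below shows, in simplified form, how a heterogeneous build might map each module to its target toolchain; the GNU-style toolchain triplets are common examples, but the module-to-target mapping and flags are assumptions for illustration.

```python
# Hedged sketch: one build, multiple instruction sets.
# Toolchain names are typical GNU cross-compiler triplets; the DSP
# entry and the flag choices are illustrative, not a specific vendor's.
TOOLCHAINS = {
    "arm":  "arm-linux-gnueabihf-gcc",
    "dsp":  "c6x-elf-gcc",   # illustrative placeholder
    "host": "gcc",
}

def compile_command(source, target):
    """Assemble the compiler command line for `source` on `target`."""
    cc = TOOLCHAINS[target]
    obj = source.rsplit(".", 1)[0] + ".o"
    return [cc, "-O2", "-c", source, "-o", obj]
```

An IDE-integrated suite hides this dispatch behind a single project view, but each target still gets its own compiler, debugger, and libraries.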

“Transforming Enterprise Intelligence: The Power of Computer Vision and Gen AI at the Edge with OpenVINO,” a Presentation from Intel

Leila Sabeti, Americas AI Technical Sales Lead at Intel, presents the “Transforming Enterprise Intelligence: The Power of Computer Vision and Gen AI at the Edge with OpenVINO” tutorial at the May 2024 Embedded Vision Summit. In this talk, Sabeti focuses on the transformative impact of AI at the edge, highlighting…

Read More »

MEMS Studio: Software Inspired Solution to Make Machine Learning on Sensors Even More Approachable

This blog post was originally published at STMicroelectronics’ website. It is reprinted here with the permission of STMicroelectronics. MEMS Studio now supports ST’s intelligent sensor processing units (ISPUs) like the ISM330ISN and ISM330IS. The tool is ST’s most extensive visualization, evaluation, profiling, processing, and optimization software for machine learning applications running on our sensors. It…

Read More »

“A Cutting-edge Memory Optimization Method for Embedded AI Accelerators,” a Presentation from 7 Sensing Software

Arnaud Collard, Technical Leader for Embedded AI at 7 Sensing Software, presents the “Cutting-edge Memory Optimization Method for Embedded AI Accelerators” tutorial at the May 2024 Embedded Vision Summit. AI hardware accelerators are playing a growing role in enabling AI in embedded systems such as smart devices. In most cases…

Read More »

Decoding How NVIDIA AI Workbench Powers App Development

This blog post was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. Free tool lets developers experiment with, test and prototype AI applications. Editor’s note: This post is part of the AI Decoded series, which demystifies AI by making the technology more accessible and showcases new hardware, software, tools…

Read More »

“Implementing Transformer Neural Networks for Visual Perception on Embedded Devices,” a Presentation from VeriSilicon

Shang-Hung Lin, Vice President of Neural Processing Products at VeriSilicon, presents the “Implementing Transformer Neural Networks for Visual Perception on Embedded Devices” tutorial at the May 2024 Embedded Vision Summit. Transformers are a class of neural network models originally designed for natural language processing. Transformers are also powerful for visual…

Read More »

“Efficiency Unleashed: The Next-gen NXP i.MX 95 Applications Processor for Embedded Vision,” a Presentation from NXP Semiconductors

James Prior, Senior Product Manager at NXP Semiconductors, presents the “Efficiency Unleashed: The Next-gen NXP i.MX 95 Applications Processor for Embedded Vision” tutorial at the May 2024 Embedded Vision Summit. Machine vision is the most obvious way to help humans live better, enabling hundreds of applications spanning security, monitoring, inspection…

Read More »

“Optimized Vision Language Models for Intelligent Transportation System Applications,” a Presentation from Nota AI

Tae-Ho Kim, Co-founder and CTO of Nota AI, presents the “Optimized Vision Language Models for Intelligent Transportation System Applications” tutorial at the May 2024 Embedded Vision Summit. In the rapidly evolving landscape of intelligent transportation systems (ITSs), the demand for efficient and reliable solutions has never been greater. In this…

Read More »
