Object Tracking

Build VLM-powered Visual AI Agents Using NVIDIA NIM and NVIDIA VIA Microservices

This blog post was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. Traditional video analytics applications and their development workflow are typically built on fixed-function, limited models that are designed to detect and identify only a select set of predefined objects. With generative AI, NVIDIA NIM microservices, and foundation […]
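
For a concrete sense of what driving a VLM through a NIM microservice can look like, here is a minimal Python sketch. It assumes a NIM endpoint that exposes the OpenAI-compatible chat completions API; the endpoint URL, model identifier, and API key below are placeholders for illustration, not the configuration described in the full article.

    # Minimal sketch: ask a VLM served by a (hypothetical) local NIM endpoint to
    # describe a video frame. Assumes the endpoint speaks the OpenAI-compatible
    # chat-completions API; URL, model name, and key are placeholders.
    import base64
    from openai import OpenAI

    client = OpenAI(
        base_url="http://localhost:8000/v1",  # placeholder NIM endpoint
        api_key="not-used-locally",           # placeholder credential
    )

    def describe_frame(image_path: str, question: str) -> str:
        # Encode the frame as a base64 data URL so it can ride along in the request.
        with open(image_path, "rb") as f:
            b64 = base64.b64encode(f.read()).decode("ascii")

        response = client.chat.completions.create(
            model="example/vlm-model",  # placeholder model identifier
            messages=[{
                "role": "user",
                "content": [
                    {"type": "text", "text": question},
                    {"type": "image_url",
                     "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
                ],
            }],
            max_tokens=256,
        )
        return response.choices[0].message.content

    if __name__ == "__main__":
        print(describe_frame("frame.jpg", "Is anyone entering the restricted area?"))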

Why Ethernet Cameras are Increasingly Used in Medical and Life Sciences Applications

This blog post was originally published at e-con Systems’ website. It is reprinted here with the permission of e-con Systems. In this blog, we will uncover the current medical and life sciences use cases in which Ethernet cameras are integral. The pace of technological transformations in medicine and life sciences is rapid. Imaging technologies used…

NXP Semiconductors Demonstration of Smart Fitness with the i.MX 93 Apps Processor

Manish Bajaj, Systems Engineer at NXP Semiconductors, demonstrates the company’s latest edge AI and vision technologies and products at the 2024 Embedded Vision Summit. Specifically, Bajaj demonstrates how the i.MX 93 applications processor can run machine learning applications with an Arm Ethos-U65 microNPU to accelerate inference on two simultaneously running deep learning vision-based…
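
As a rough illustration of how an application hands a vision model to the microNPU, the sketch below runs a TensorFlow Lite model through an external delegate, the mechanism typically used to expose the Ethos-U65 on embedded Linux. The delegate library path and model file name are assumptions for illustration, not details from the demo.

    # Sketch: offload a TFLite vision model to the Ethos-U microNPU via an
    # external delegate. Library and model paths are illustrative assumptions.
    import numpy as np
    import tflite_runtime.interpreter as tflite

    DELEGATE_PATH = "/usr/lib/libethosu_delegate.so"   # assumed delegate location
    MODEL_PATH = "pose_model_vela.tflite"              # assumed NPU-compiled model

    interpreter = tflite.Interpreter(
        model_path=MODEL_PATH,
        experimental_delegates=[tflite.load_delegate(DELEGATE_PATH)],
    )
    interpreter.allocate_tensors()

    inp = interpreter.get_input_details()[0]
    out = interpreter.get_output_details()[0]

    # Feed one dummy frame shaped like the model input; a real application
    # would feed camera frames here.
    frame = np.zeros(inp["shape"], dtype=inp["dtype"])
    interpreter.set_tensor(inp["index"], frame)
    interpreter.invoke()
    print(interpreter.get_tensor(out["index"]).shape)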

Enhance Multi-camera Tracking Accuracy by Fine-tuning AI Models with Synthetic Data

This article was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. Large-scale, use-case-specific synthetic data has become increasingly important in real-world computer vision and AI workflows. That’s because digital twins are a powerful way to create physics-based virtual replicas of factories, retail spaces, and other assets, enabling precise simulations…
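
To ground what tracking accuracy depends on, the sketch below shows a generic, greedy IoU-based association of detections to existing tracks, the kind of per-frame matching step whose quality a fine-tuned detector feeds into. It is an illustrative toy, not NVIDIA's multi-camera pipeline.

    # Generic illustration: greedy IoU matching of detections to existing tracks.
    # Boxes are (x1, y1, x2, y2). Not NVIDIA's method; a toy association step.
    from typing import Dict, List, Tuple

    Box = Tuple[float, float, float, float]

    def iou(a: Box, b: Box) -> float:
        # Intersection-over-union of two axis-aligned boxes.
        x1, y1 = max(a[0], b[0]), max(a[1], b[1])
        x2, y2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
        area_a = (a[2] - a[0]) * (a[3] - a[1])
        area_b = (b[2] - b[0]) * (b[3] - b[1])
        union = area_a + area_b - inter
        return inter / union if union > 0 else 0.0

    def associate(tracks: Dict[int, Box], detections: List[Box],
                  threshold: float = 0.3) -> Dict[int, int]:
        # Greedily match each track to its best-overlapping unused detection.
        matches: Dict[int, int] = {}
        used = set()
        for track_id, track_box in tracks.items():
            best, best_iou = None, threshold
            for i, det in enumerate(detections):
                if i in used:
                    continue
                score = iou(track_box, det)
                if score > best_iou:
                    best, best_iou = i, score
            if best is not None:
                matches[track_id] = best
                used.add(best)
        return matches

    if __name__ == "__main__":
        tracks = {7: (10, 10, 50, 80)}
        detections = [(12, 11, 52, 83), (200, 40, 240, 120)]
        print(associate(tracks, detections))  # {7: 0}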

NXP Semiconductors Demonstration of Face Following with the MCX N Microcontroller

Anthony Huereca, Systems Engineer at NXP Semiconductors, demonstrates the company’s latest edge AI and vision technologies and products at the 2024 Embedded Vision Summit. Specifically, Huereca demonstrates some of the features of his company’s MCX N microcontroller. The AI-based face detection model in this demo enables the screen to react to the movements of the…

Nota AI Demonstration of Elevating Traffic Safety with Vision Language Models

Tae-Ho Kim, CTO and Co-founder of Nota AI, demonstrates the company’s latest edge AI and vision technologies and products at the 2024 Embedded Vision Summit. Specifically, Kim demonstrates his company’s Vision Language Model (VLM) solution, designed to elevate vehicle safety. Advanced models analyze and interpret visual data to prevent accidents and enhance driving experiences…

Nextchip Demonstration of Its Vision Professional ISP Optimization for Computer Vision

Sophie Jeon, Global Strategy Marketing Manager at Nextchip, demonstrates the company’s latest edge AI and vision technologies and products at the 2024 Embedded Vision Summit. Specifically, Jeon demonstrates her company’s expertise in optimizing ISPs for computer vision by comparing the tuning technologies used for human vision and machine vision applications.

Nextchip Demonstration of the APACHE5 ADAS SoC

Sophie Jeon, Global Strategy Marketing Manager at Nextchip, demonstrates the company’s latest edge AI and vision technologies and products at the 2024 Embedded Vision Summit. Specifically, Jeon demonstrates her company’s APACHE5 ADAS SoC. APACHE5 is ready for market with an accompanying SDK, and has passed all qualifications for production such as PPAP (the Production Part Approval Process)…

Nextchip Demonstration of the APACHE6 ADAS SoC

Sophie Jeon, Global Strategy Marketing Manager at Nextchip, demonstrates the company’s latest edge AI and vision technologies and products at the 2024 Embedded Vision Summit. Specifically, Jeon demonstrates her company’s APACHE6 ADAS SoC. With advanced computing power, APACHE6 makes your vehicle smarter, avoiding risk while driving and parking.

Top Camera Features that Empower Smart Traffic Management Systems

This blog post was originally published at e-con Systems’ website. It is reprinted here with the permission of e-con Systems. Traffic systems leverage camera solutions to empower smart cities to handle major traffic challenges. Some of their capabilities include real-time monitoring, incident detection, and law enforcement. Discover the camera’s role in these systems and the…
