Object Tracking

SWIR Vision Systems in Agricultural Production

This blog post was originally published at Basler’s website. It is reprinted here with the permission of Basler. Improved produce inspection through short-wave infrared light: ensuring the quality of fruits and vegetables such as apples or potatoes is crucial to meet market standards and consumer expectations. Traditional inspection methods are often based only on visual […]

Qualcomm Dragonwing Intelligent Video Suite Modernizes Video Management with Generative AI at Its Core

This blog post was originally published at Qualcomm’s website. It is reprinted here with the permission of Qualcomm. Video cameras generate a lot of data. Companies that use a video management system (VMS) are left wanting to get more value out of all the video data they generate, enabling them to take the actions that […]

Visual Intelligence: Foundation Models + Satellite Analytics for Deforestation (Part 2)

This blog post was originally published at Tenyks’ website. It is reprinted here with the permission of Tenyks. In Part 2, we explore how Foundation Models can be leveraged to track deforestation patterns. Building upon the insights from our Sentinel-2 pipeline and Central Balkan case study, we dive into the revolution that foundation models have […]

Visual Intelligence: Foundation Models + Satellite Analytics for Deforestation (Part 1)

This blog post was originally published at Tenyks’ website. It is reprinted here with the permission of Tenyks. Satellite imagery has revolutionized how we monitor Earth’s forests, offering unprecedented insights into deforestation patterns. In this two-part series, we explore both traditional and cutting-edge approaches to forest monitoring, using Bulgaria’s Central Balkan National Park as our […]

Andes Technology Demonstration of Its RISC-V IP in a Spherical Image Processor and Meta’s AI Accelerator

Marc Evans, Director of Business Development and Marketing at Andes Technology, demonstrates the company’s latest edge AI and vision technologies and products at the March 2025 Edge AI and Vision Alliance Forum. Specifically, Evans demonstrates the company’s RISC-V semiconductor processor IP, which enables customers to develop leading SoCs for AI, computer vision and other market […]

Radar-enhanced Safety for Advancing Autonomy

Front and side radars may have different primary uses and drivers for their innovation, but together, they form a vital part of ADAS for autonomous vehicles. IDTechEx’s report, “Automotive Radar Market 2025-2045: Robotaxis & Autonomous Cars”, showcases the latest radar developments and explores autonomy leveling up as a result, with Level 2+ asserting itself within […]

Scalable Video Search: Cascading Foundation Models

This article was originally published at Tenyks’ website. It is reprinted here with the permission of Tenyks. Video has become the lingua franca of the digital age, but its ubiquity presents a unique challenge: how do we efficiently extract meaningful information from this ocean of visual data? In Part 1 of this series, we navigate […]
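
As a rough illustration of what such a cascade can look like (our own sketch, not necessarily the pipeline the article describes), a lightweight image-text model such as CLIP can score sampled frames against a query so that only a short list ever reaches a heavier, more expensive model. The video path and query below are placeholders.

    # Hedged sketch of a two-stage cascade for video search: a cheap CLIP pass
    # shortlists frames so that only a handful reach a heavier model.
    import cv2
    import torch
    from PIL import Image
    from transformers import CLIPModel, CLIPProcessor

    model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
    processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

    # Stage 1: sample roughly one frame per second and score each against the query.
    cap = cv2.VideoCapture("video.mp4")
    frames, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % 30 == 0:
            frames.append(Image.fromarray(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)))
        idx += 1
    cap.release()

    inputs = processor(text=["a truck unloading cargo"], images=frames,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        scores = model(**inputs).logits_per_image.squeeze(1)

    # Stage 2: only the top-scoring frames would be passed to an expensive VLM.
    top = torch.topk(scores, k=min(5, len(frames))).indices.tolist()
    print("Candidate frames for the heavy model:", top)

The key design point of any such cascade is that the cheap stage only needs good recall; precision comes from the expensive stage that sees far fewer frames.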

Vision Language Model Prompt Engineering Guide for Image and Video Understanding

This blog post was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. Vision language models (VLMs) are evolving at breakneck speed. In 2020, the first VLMs revolutionized the generative AI landscape by bringing visual understanding to large language models (LLMs) through the use of a vision encoder. These […]
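
As a quick, hedged illustration of the kind of image-plus-text prompting such a guide covers (not NVIDIA’s specific examples), the snippet below sends a single frame and an instruction to an OpenAI-compatible VLM endpoint; the base URL, model name and image URL are placeholders.

    # Hedged sketch: one image plus a text instruction sent to an OpenAI-compatible
    # VLM endpoint. The base URL, model name and image URL below are placeholders.
    from openai import OpenAI

    client = OpenAI(base_url="https://example.com/v1", api_key="YOUR_API_KEY")

    response = client.chat.completions.create(
        model="example-vlm",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "List any safety hazards visible in this scene, one per line."},
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/frame.jpg"}},
            ],
        }],
    )
    print(response.choices[0].message.content)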

SAM 2 + GPT-4o: Cascading Foundation Models via Visual Prompting (Part 2)

This article was originally published at Tenyks’ website. It is reprinted here with the permission of Tenyks. In Part 2 of our Segment Anything Model 2 (SAM 2) Series, we show how foundation models (e.g., GPT-4o, Claude Sonnet 3.5 and YOLO-World) can be used to generate visual inputs (e.g., bounding boxes) for SAM 2. Learn
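
To make the cascading idea concrete, here is a minimal sketch assuming the ultralytics YOLO-World wrapper and Meta’s sam2 package are installed; it illustrates the approach rather than reproducing Tenyks’ code, and method names may differ between releases.

    # Hedged sketch: cascade an open-vocabulary detector into SAM 2 via box prompts.
    # Assumes the `ultralytics` and `sam2` packages; exact APIs may vary by release.
    import numpy as np
    from PIL import Image
    from ultralytics import YOLOWorld
    from sam2.sam2_image_predictor import SAM2ImagePredictor

    image = np.array(Image.open("frame.jpg").convert("RGB"))

    # Stage 1: a text prompt produces bounding boxes (the visual prompt for SAM 2).
    detector = YOLOWorld("yolov8s-world.pt")
    detector.set_classes(["apple", "person"])      # free-form class names
    boxes = detector.predict(image)[0].boxes.xyxy.cpu().numpy()

    # Stage 2: each box prompts SAM 2 for a pixel-accurate mask.
    predictor = SAM2ImagePredictor.from_pretrained("facebook/sam2-hiera-large")
    predictor.set_image(image)
    for box in boxes:
        masks, scores, _ = predictor.predict(box=box, multimask_output=False)
        print("box:", box.tolist(), "mask area:", int(masks[0].sum()))

Any box source fits the same pattern, whether the coordinates come from YOLO-World, from GPT-4o, or from a human annotator.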

Nearly $1B Flows into Automotive Radar Startups

According to IDTechEx’s latest report, “Automotive Radar Market 2025-2045: Robotaxis & Autonomous Cars”, newly established radar startups worldwide have raised nearly US$1.2 billion over the past 12 years, approximately US$980 million of which has been directed predominantly toward the automotive sector. Through more than 40 funding rounds, these companies have driven the implementation and advancement of […]
