Blog Posts

R²D²: Advancing Robot Mobility and Whole-body Control with Novel Workflows and AI Foundation Models from NVIDIA Research

This blog post was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. Welcome to the first edition of the NVIDIA Robotics Research and Development Digest (R2D2). This technical blog series will give developers and researchers deeper insight and access to the latest physical AI and robotics research breakthroughs across […]

LLMOps Unpacked: The Operational Complexities of LLMs

This blog post was originally published at Tryolabs’ website. It is reprinted here with the permission of Tryolabs. Incorporating a Large Language Model (LLM) into a commercial product is a complex endeavor, far beyond the simplicity of prototyping. As Machine Learning and Generative AI (GenAI) evolve, so does the need for specialized operational practices, leading […]

Visual Intelligence: Foundation Models + Satellite Analytics for Deforestation (Part 1)

This blog post was originally published at Tenyks’ website. It is reprinted here with the permission of Tenyks. Satellite imagery has revolutionized how we monitor Earth’s forests, offering unprecedented insights into deforestation patterns. In this two-part series, we explore both traditional and cutting-edge approaches to forest monitoring, using Bulgaria’s Central Balkan National Park as our […]

RGo Robotics Implements Vision-based Perception Engine on Qualcomm SoCs for Robotics Market

This blog post was originally published at Qualcomm’s website. It is reprinted here with the permission of Qualcomm. Mobile robotics developers equip their machines to behave autonomously in the real world by generating facility maps, localizing within them and understanding the geometry of their surroundings. Machines like autonomous mobile robots (AMR), automated guided vehicles (AGV) […]

A Comprehensive Guide to Understand Camera Projection and Parameters

This blog post was originally published at e-con Systems’ website. It is reprinted here with the permission of e-con Systems. Understanding camera projection and parameters is essential for mapping the 3D world into a 2D representation. This blog delves into key concepts like camera projection, intrinsic and extrinsic parameters, and distortion correction, offering a clear […]

Explaining Tokens — The Language and Currency of AI

This blog post was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. Under the hood of every AI application are algorithms that churn through data in their own language, one based on a vocabulary of tokens. Tokens are tiny units of data that come from breaking down bigger chunks […]

Removing the Barriers to Edge and Generative AI in Embedded Vision

This blog post was originally published at Macnica’s website. It is reprinted here with the permission of Macnica. The introduction of artificial intelligence (AI) has ushered in exciting new applications for surveillance cameras and other devices with embedded vision technology. Tools like generative AI (gen AI), ChatGPT and Midjourney are augmenting computer vision-based results. At […]

Powering IoT Developers with Edge AI: the Qualcomm RB3 Gen 2 Kit is Now Supported in the Edge Impulse Platform

This blog post was originally published at Qualcomm’s website. It is reprinted here with the permission of Qualcomm. The Qualcomm RB3 Gen 2 Development Kit has been designed to help you develop high-performance IoT and edge AI applications. With powerful AI acceleration, pre-validated peripherals, and extensive software support, this kit enables every engineer to move […]

What is the Role of Vision Systems in Autonomous Mobility?

This blog post was originally published at e-con Systems’ website. It is reprinted here with the permission of e-con Systems. Vision systems are crucial for autonomous mobility, enabling vehicles to navigate and respond to their environment. This blog covers key camera types like surround-view, forward-facing, and driver monitoring cameras, highlighting features like HDR, LED Flicker […]

Here you’ll find a wealth of practical technical insights and expert advice to help you bring AI and visual intelligence into your products without flying blind.

Contact

Address

Berkeley Design Technology, Inc.
PO Box #4446
Walnut Creek, CA 94596

Phone
+1 (925) 954-1411