Algorithms

Productionizing State-of-the-art Models at the Edge for Smart City Use Cases (Part I)

This blog post was originally published at CLIKA’s website. It is reprinted here with the permission of CLIKA. Approaches to productionizing models for edge applications can vary greatly depending on user priorities, with some models not requiring model optimization at all. An organization can choose pre-existing models designed specifically for edge use cases with performance […]

Qualcomm at Embedded World: Accelerating Digital Transformation with Edge AI

This blog post was originally published at Qualcomm’s website. It is reprinted here with the permission of Qualcomm. An essential partner to the embedded community, Qualcomm Technologies, Inc. strengthens its leadership in intelligent computing with several key announcements. The AI revolution is sparking a wave of innovation in the embedded community, spawning a flurry of […]

AutoML Decoded: The Ultimate Guide and Tools Comparison

This article was originally published at Tryolabs’ website. It is reprinted here with the permission of Tryolabs. The quest for efficient and user-friendly solutions has led to the emergence of a game-changing concept: Automated Machine Learning (AutoML). AutoML is the process of automating the tasks involved in the entire Machine Learning lifecycle, such as data […]
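
To make the concept concrete, here is a minimal sketch (not taken from the article) of automated model selection and hyperparameter search using the open-source FLAML library on a stock scikit-learn dataset; the dataset, time budget, and metric are arbitrary placeholders.

```python
# Minimal AutoML sketch (illustrative only): FLAML searches over candidate
# estimators and their hyperparameters within a fixed time budget.
from flaml import AutoML
from sklearn.datasets import load_breast_cancer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

automl = AutoML()
automl.fit(
    X_train=X_train,
    y_train=y_train,
    task="classification",   # FLAML also supports regression, ranking, etc.
    time_budget=60,          # seconds allotted to the search (placeholder)
    metric="accuracy",
)

print("best learner:", automl.best_estimator)            # e.g. "lgbm"
print("test accuracy:", accuracy_score(y_test, automl.predict(X_test)))
```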

Zero-Shot AI: The End of Fine-tuning as We Know It?

This article was originally published at Tenyks’ website. It is reprinted here with the permission of Tenyks. Models like SAM 2, LLaVA or ChatGPT can do tasks without special training. This has people wondering if the old way (i.e., fine-tuning) of training AI is becoming outdated. In this article, we compare two models: YOLOv8 (fine-tuning) […]
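
As a rough illustration of the zero-shot side of that comparison (using a different model than the article, purely as an example), the sketch below runs open-vocabulary object detection with OWL-ViT via the Hugging Face transformers pipeline; the checkpoint, image URL, and label list are placeholders.

```python
# Zero-shot object detection sketch (illustrative, not the article's setup):
# no fine-tuning step is involved; the text labels supplied at inference time
# are the only task-specific "supervision".
import requests
from PIL import Image
from transformers import pipeline

detector = pipeline(
    "zero-shot-object-detection",
    model="google/owlvit-base-patch32",  # assumed open-vocabulary checkpoint
)

url = "http://images.cocodataset.org/val2017/000000039769.jpg"  # sample COCO image
image = Image.open(requests.get(url, stream=True).raw)

results = detector(image, candidate_labels=["cat", "remote control", "sofa"])
for r in results:
    print(f"{r['label']}: {r['score']:.2f} at {r['box']}")
```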

3LC: What is It and Who is It For?

This blog post was originally published at 3LC’s website. It is reprinted here with the permission of 3LC. AI performance isn’t just about better architectures or more compute – it’s about better data. Even perfectly labeled datasets can hold hidden inefficiencies that limit accuracy. See how teams use 3LC to refine datasets, optimize labeling strategies, […]

How e-con Systems’ TintE ISP IP Core Increases the Efficiency of Embedded Vision Applications

This blog post was originally published at e-con Systems’ website. It is reprinted here with the permission of e-con Systems. e-con Systems has developed TintE™, a ready-to-deploy ISP IP core engineered to enhance image quality in camera systems. Built to deliver high performance on leading FPGA platforms, it accelerates real-time image processing with […]

Vision Language Model Prompt Engineering Guide for Image and Video Understanding

This blog post was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. Vision language models (VLMs) are evolving at a breakneck speed. In 2020, the first VLMs revolutionized the generative AI landscape by bringing visual understanding to large language models (LLMs) through the use of a vision encoder. These […]
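
For readers new to the topic, here is a minimal sketch (not from NVIDIA’s guide) of the basic prompt shape most VLM APIs expect: a text instruction and an image sent together in a single message. It uses the OpenAI Python client as one example; the model name, prompt, and image URL are placeholders.

```python
# Minimal VLM prompting sketch (illustrative only): the user message carries
# both a text instruction and an image reference; the model answers about the image.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder vision-capable model
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "Describe the scene and count the vehicles in the image."},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/traffic.jpg"}},  # placeholder URL
        ],
    }],
)

print(response.choices[0].message.content)
```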

Fine-tuning LLMs for Cost-effective GenAI Inference at Scale

This article was originally published at Tryolabs’ website. It is reprinted here with the permission of Tryolabs. Data is the new oil, fueling the AI revolution. From user-tailored shopping assistants to AI researchers, to recreating the King, the applicability of AI models knows no bounds. Yet these models are only as good as the data […]

SAM 2 + GPT-4o: Cascading Foundation Models via Visual Prompting (Part 2)

This article was originally published at Tenyks’ website. It is reprinted here with the permission of Tenyks. In Part 2 of our Segment Anything Model 2 (SAM 2) Series, we show how foundation models (e.g., GPT-4o, Claude 3.5 Sonnet and YOLO-World) can be used to generate visual inputs (e.g., bounding boxes) for SAM 2. Learn […]
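
As a rough sketch of that cascade (assuming the `sam2` package from the facebookresearch/sam2 repository and its SAM2ImagePredictor interface), the snippet below feeds a bounding box, standing in for whatever an upstream detector or multimodal LLM would return, to SAM 2 as a visual prompt; the checkpoint name, image path, and box coordinates are placeholders.

```python
# Cascade sketch (illustrative only): a box produced upstream (e.g. by GPT-4o,
# Claude, or YOLO-World) is passed to SAM 2 as a visual prompt to get a mask.
import numpy as np
from PIL import Image
from sam2.sam2_image_predictor import SAM2ImagePredictor  # facebookresearch/sam2

# Assumed Hugging Face checkpoint id; a local config + checkpoint works too.
predictor = SAM2ImagePredictor.from_pretrained("facebook/sam2-hiera-large")

image = np.array(Image.open("street.jpg").convert("RGB"))  # placeholder image
predictor.set_image(image)

# Placeholder box in (x_min, y_min, x_max, y_max) pixel coordinates.
box = np.array([120, 80, 420, 360])

masks, scores, _ = predictor.predict(box=box, multimask_output=False)
print(masks.shape, scores)  # one binary mask for the prompted region
```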

Taming LLMs: Strategies and Tools for Controlling Responses

This article was originally published at Tryolabs’ website. It is reprinted here with the permission of Tryolabs. In the ever-evolving landscape of natural language processing, the advent of Large Language Models (LLMs) has ushered in a new era of possibilities and challenges. While these models showcase remarkable capabilities in generating human-like text, the potential for […]

Here you’ll find a wealth of practical technical insights and expert advice to help you bring AI and visual intelligence into your products without flying blind.
