Articles

Taming LLMs: Strategies and Tools for Controlling Responses

This article was originally published at Tryolabs’ website. It is reprinted here with the permission of Tryolabs. In the ever-evolving landscape of natural language processing, the advent of Large Language Models (LLMs) has ushered in a new era of possibilities and challenges. While these models showcase remarkable capabilities in generating human-like text, the potential for […]


AI Disruption is Driving Innovation in On-device Inference

This article was originally published at Qualcomm’s website. It is reprinted here with the permission of Qualcomm. How the proliferation and evolution of generative models will transform the AI landscape and unlock value. The introduction of DeepSeek R1, a cutting-edge reasoning AI model, has caused ripples throughout the tech industry. That’s because its performance is on […]


From Brain to Binary: Can Neuro-inspired Research Make CPUs the Future of AI Inference?

This article was originally published at Tryolabs’ website. It is reprinted here with the permission of Tryolabs. In the ever-evolving landscape of AI, the demand for powerful Large Language Models (LLMs) has surged. This has led to an unrelenting thirst for GPUs and a shortage that causes headaches for many organizations. But what if there […]


DALL-E vs Gemini vs Stability: GenAI Evaluations

This article was originally published at Tenyks’ website. It is reprinted here with the permission of Tenyks. We performed a side-by-side comparison of three models from leading providers in Generative AI for Vision. This is what we found: Despite the subjectivity involved in Human Evaluation, this is the best approach to evaluate state-of-the-art GenAI Vision […]
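As a rough illustration of how such a human-evaluation study can be scored (not the authors' actual pipeline), the sketch below tallies per-prompt human preference votes into win rates; the vote data and model labels are hypothetical placeholders.

```python
from collections import Counter

# Hypothetical votes: for each prompt, the model a human rater preferred.
# These labels and counts are placeholders, not the article's actual data.
votes = ["DALL-E", "Gemini", "DALL-E", "Stability", "Gemini", "DALL-E"]

counts = Counter(votes)
total = sum(counts.values())
for model, wins in counts.most_common():
    print(f"{model}: {wins}/{total} preferred ({wins / total:.0%})")
```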


Enhance Multi-camera Tracking Accuracy by Fine-tuning AI Models with Synthetic Data

This article was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. Large-scale, use-case-specific synthetic data has become increasingly important in real-world computer vision and AI workflows. That’s because digital twins are a powerful way to create physics-based virtual replicas of factories, retail spaces, and other assets, enabling precise simulations […]


Generate Traffic Insights Using YOLOv8 and NVIDIA JetPack 6.0

This article was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. Intelligent Transportation Systems (ITS) applications are becoming increasingly valuable and prevalent in modern urban environments. The benefits of using ITS applications include: Increasing traffic efficiency: By analyzing real-time traffic data, ITS can optimize traffic flow, reducing congestion and […]
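As a minimal sketch of the kind of pipeline the article describes (assuming the ultralytics Python package and a pretrained yolov8n.pt checkpoint; the video path is a placeholder, and the article's actual deployment targets NVIDIA JetPack 6.0), the example below counts detected vehicles per frame with YOLOv8.

```python
from collections import Counter
from ultralytics import YOLO  # assumes the ultralytics package is installed

# Placeholder weights and video path for illustration only.
model = YOLO("yolov8n.pt")
vehicle_classes = {"car", "truck", "bus", "motorcycle"}

# Stream inference over a traffic video and count detected vehicles per frame.
for frame_idx, result in enumerate(model("traffic.mp4", stream=True)):
    names = [model.names[int(c)] for c in result.boxes.cls]
    counts = Counter(n for n in names if n in vehicle_classes)
    print(frame_idx, dict(counts))
```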


NVIDIA DeepStream 7.0 Milestone Release for Next-gen Vision AI Development

This article was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. NVIDIA DeepStream is a powerful SDK that unlocks GPU-accelerated building blocks to build end-to-end vision AI pipelines. With more than 40 plugins available off the shelf, you can deploy fully optimized pipelines with cutting-edge AI inference, object tracking, and seamless […]


Quantization of Convolutional Neural Networks: Quantization Analysis

See “Quantization of Convolutional Neural Networks: Model Quantization” for the previous article in this series. In the previous articles in this series, we discussed quantization schemes and the effect of different choices on model accuracy. The ultimate choice of quantization scheme depends on the available tools. TFLite and PyTorch are the most popular tools used […]
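As a minimal illustration of one of the tool flows mentioned above (a generic sketch, not the series' exact workflow), post-training quantization of a toy Keras model through the TFLite converter looks roughly like this:

```python
import tensorflow as tf

# Toy CNN standing in for the trained model under study.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(32, 32, 3)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10),
])

# Post-training quantization via the TFLite converter: the default
# optimization quantizes weights to 8-bit integers.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

with open("model_int8.tflite", "wb") as f:
    f.write(tflite_model)
```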


Quantization of Convolutional Neural Networks: Model Quantization

See “From Theory to Practice: Quantizing Convolutional Neural Networks for Practical Deployment” for the previous article in this series. Significant progress in Convolutional Neural Networks (CNNs) has focused on enhancing model complexity while managing computational demands. Key advancements include efficient architectures like MobileNet, SqueezeNet, ShuffleNet, and DenseNet, which prioritize compute and memory efficiency. Further innovations […]


