Software

“Bridging Vision and Language: Designing, Training and Deploying Multimodal Large Language Models,” a Presentation from Meta Reality Labs

Adel Ahmadyan, Staff Engineer at Meta Reality Labs, presents the “Bridging Vision and Language: Designing, Training and Deploying Multimodal Large Language Models” tutorial at the May 2024 Embedded Vision Summit. In this talk, Ahmadyan explores the use of multimodal large language models in real-world edge applications. He begins by explaining…

Qualcomm Partners with Meta to Support Llama 3.2. Why This is a Big Deal for On-device AI

This blog post was originally published at Qualcomm’s website. It is reprinted here with the permission of Qualcomm. On-device artificial intelligence (AI) is critical to making your everyday AI experiences fast and security-rich. That’s why it’s such a win that Qualcomm Technologies and Meta have worked together to support the Llama 3.2 large language models (LLMs)…

“Introduction to Depth Sensing,” a Presentation from Meta

Harish Venkataraman, Depth Cameras Architecture and Tech Lead at Meta, presents the “Introduction to Depth Sensing” tutorial at the May 2024 Embedded Vision Summit. We live in a three-dimensional world, and the ability to perceive in three dimensions is essential for many systems. In this talk, Venkataraman introduces the main…
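The excerpt doesn’t include the talk’s technical content, but the core geometry behind stereo depth sensing is compact enough to sketch. The relation below (depth from disparity for a rectified stereo pair) is standard textbook material, not taken from Venkataraman’s presentation; the focal length, baseline, and disparity values are illustrative placeholders.

```python
# Minimal sketch of the standard stereo depth-from-disparity relation:
#   Z = f * B / d
# where f is the focal length (pixels), B is the camera baseline (meters),
# and d is the disparity (pixels) between matched points in the two views.
# Values below are illustrative placeholders, not from the talk.

def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth (meters) of a point seen by a rectified stereo pair."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a visible point")
    return focal_px * baseline_m / disparity_px

# Example: 700 px focal length, 10 cm baseline, 35 px disparity -> 2.0 m
print(depth_from_disparity(700.0, 0.10, 35.0))
```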

Deploying Accelerated Llama 3.2 from the Edge to the Cloud

This blog post was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. Expanding the open-source Meta Llama collection of models, the Llama 3.2 collection includes vision language models (VLMs), small language models (SLMs), and an updated Llama Guard model with support for vision. When paired with the NVIDIA accelerated…
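For readers who want a concrete starting point: NVIDIA’s NIM microservices expose an OpenAI-compatible endpoint, so a deployed Llama 3.2 model can be queried with the standard openai Python client. The base URL and model identifier below follow NVIDIA’s published catalog naming but should be treated as assumptions; verify them against the current NIM documentation.

```python
# Minimal sketch: querying a Llama 3.2 model served behind an
# OpenAI-compatible endpoint (as NVIDIA NIM microservices provide).
# The base_url and model name are assumptions based on NVIDIA's public
# catalog naming; check the current NIM docs before relying on them.
from openai import OpenAI

client = OpenAI(
    base_url="https://integrate.api.nvidia.com/v1",  # hosted NIM endpoint (assumed)
    api_key="YOUR_NVIDIA_API_KEY",                   # placeholder credential
)

response = client.chat.completions.create(
    model="meta/llama-3.2-3b-instruct",              # assumed model identifier
    messages=[{"role": "user", "content": "Summarize edge AI in one sentence."}],
    max_tokens=64,
)
print(response.choices[0].message.content)
```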

“Advancing Embedded Vision Systems: Harnessing Hardware Acceleration and Open Standards,” a Presentation from the Khronos Group

Neil Trevett, President of the Khronos Group, presents the “Advancing Embedded Vision Systems: Harnessing Hardware Acceleration and Open Standards” tutorial at the May 2024 Embedded Vision Summit. Offloading processing to accelerators enables embedded vision systems to process workloads that exceed the capabilities of CPUs. However, parallel processors add complexity as…

When, Where and How AI Should Be Applied

Phil Koopman dissects the strengths and weaknesses of machine learning-based AI. AI does amazing stuff. No question about it. But how hard have we really thought about “machine-learning capabilities” for applications? Phil Koopman, professor at Carnegie Mellon University, delivered a keynote on Sept. 11, 2024, at the Business of Semiconductor Summit (BOSS 2024), concentrating on…

“Using AI to Enhance the Well-being of the Elderly,” a Presentation from Kepler Vision Technologies

Harro Stokman, CEO of Kepler Vision Technologies, presents the “Using Artificial Intelligence to Enhance the Well-being of the Elderly” tutorial at the May 2024 Embedded Vision Summit. This presentation provides insights into an innovative application of artificial intelligence and advanced computer vision technologies in the healthcare sector, specifically focused on…

AI Model Training Costs Have Skyrocketed by More than 4,300% Since 2020

Over the past five years, AI models have become much more complex and capable, tailored to perform specific tasks across industries and to provide better efficiency, accuracy and automation. However, the cost of training these systems has exploded. According to data presented by AltIndex.com, AI model training costs have skyrocketed by more than 4,300% since 2020, i.e., to roughly 44 times their 2020 level…

BrainChip Demonstration of LLM-RAG with a Custom Trained TENNs Model

Kurt Manninen, Senior Solutions Architect at BrainChip, demonstrates the company’s latest edge AI and vision technologies and products at the September 2024 Edge AI and Vision Alliance Forum. Specifically, Manninen demonstrates his company’s Temporal Event-Based Neural Network (TENN) foundational large language model with 330M parameters, augmented with retrieval-augmented generation (RAG) output to replace user…
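BrainChip’s TENN model and its RAG integration are proprietary, but the retrieval-augmented generation pattern the demo describes is generic: embed a query, fetch the closest passages from a document store, and prepend them to the model’s prompt. The sketch below shows that pattern with a toy cosine-similarity retriever; embed() and generate() are hypothetical stand-ins, and none of the names come from BrainChip’s implementation.

```python
# Generic retrieval-augmented generation (RAG) loop, sketched with a toy
# cosine-similarity retriever. `embed` and `generate` are hypothetical
# stand-ins for an embedding model and an LLM (e.g. a 330M-parameter model);
# nothing here reflects BrainChip's actual TENN implementation.
import numpy as np

def embed(text: str) -> np.ndarray:
    # Placeholder embedding: hash characters into a fixed-size unit vector.
    vec = np.zeros(64)
    for ch in text.lower():
        vec[ord(ch) % 64] += 1.0
    return vec / (np.linalg.norm(vec) + 1e-9)

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    # Rank documents by cosine similarity to the query and keep the top k.
    q = embed(query)
    return sorted(docs, key=lambda d: float(q @ embed(d)), reverse=True)[:k]

def generate(prompt: str) -> str:
    # Stand-in for the language model call.
    return f"[LLM answer conditioned on: {prompt[:60]}...]"

docs = ["TENNs process temporal events efficiently.",
        "RAG grounds LLM output in retrieved passages."]
question = "What grounds the model's answers?"
context = "\n".join(retrieve(question, docs))
print(generate(f"Context:\n{context}\n\nQuestion: {question}"))
```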

Advex AI Demonstration of Accelerating Machine Vision with Synthetic AI Data

Pedro Pachuca, CEO at Advex AI, demonstrates the company’s latest edge AI and vision technologies and products at the September 2024 Edge AI and Vision Alliance Forum. Specifically, Pachuca demonstrates Advex’s ability to ingest just 10 images and produce thousands of labeled, synthetic images in just hours. These synthetic images cover the distribution of variations…
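Advex’s generative pipeline is proprietary, so as a rough stand-in for the few-in, many-out idea (a handful of labeled seed images expanded into many labeled variants), here is a classical augmentation sketch using Pillow. It illustrates only the workflow shape, not the generative models Advex actually uses; the paths, filename-based labels, and counts are placeholders.

```python
# Classical augmentation as a stand-in for generative synthetic data:
# expand a few labeled seed images into many labeled variants.
# This is NOT Advex's method (which is generative); it only illustrates
# the few-in, many-out workflow. Paths, labels, and counts are placeholders.
import random
from pathlib import Path
from PIL import Image, ImageEnhance

def variants(img: Image.Image, n: int):
    # Yield n randomly perturbed copies of a seed image.
    for _ in range(n):
        out = img.rotate(random.uniform(-15, 15), expand=True)
        if random.random() < 0.5:
            out = out.transpose(Image.Transpose.FLIP_LEFT_RIGHT)
        out = ImageEnhance.Brightness(out).enhance(random.uniform(0.7, 1.3))
        yield out

seed_dir, out_dir = Path("seeds"), Path("synthetic")
out_dir.mkdir(exist_ok=True)
for seed in seed_dir.glob("*.png"):          # e.g. 10 labeled seed images
    label = seed.stem.split("_")[0]          # label encoded in the filename (assumed)
    for i, v in enumerate(variants(Image.open(seed), 100)):  # 10 seeds -> 1,000 images
        v.save(out_dir / f"{label}_{seed.stem}_{i}.png")     # variants inherit the label
```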
