Multimodal

Smart Glasses for the Consumer Market

There are currently about 250 companies in the head-mounted wearables category, and in aggregate these companies have received over $5B in funding, $700M of it just since the beginning of the year. On the M&A front, there have already been a number of significant acquisitions in the space, notably the […]


“Data-efficient and Generalizable: The Domain-specific Small Vision Model Revolution,” a Presentation from Pixel Scientia Labs

Heather Couture, Founder and Computer Vision Consultant at Pixel Scientia Labs, presents the “Data-efficient and Generalizable: The Domain-specific Small Vision Model Revolution” tutorial at the May 2024 Embedded Vision Summit. Large vision models (LVMs) trained on a large and diverse set of imagery are revitalizing computer vision, just as LLMs…
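
The core idea behind domain-specific small vision models, adapting a pretrained backbone to a narrow domain with limited labels, can be sketched in a few lines. The snippet below is a minimal illustration, not code from the presentation; the dataset path and layout are hypothetical, and it assumes PyTorch with torchvision installed.

```python
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Hypothetical domain dataset laid out for ImageFolder (path is illustrative).
tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
train_ds = datasets.ImageFolder("data/domain_train", transform=tfm)
loader = torch.utils.data.DataLoader(train_ds, batch_size=32, shuffle=True)

# Small pretrained backbone: freeze the features, retrain only the classifier head.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in model.parameters():
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, len(train_ds.classes))

opt = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:
    opt.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    opt.step()
```

With the backbone frozen, only the final linear layer is trained, which is what makes this approach data-efficient on small domain datasets.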


Accelerating LLMs with llama.cpp on NVIDIA RTX Systems

This blog post was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. The NVIDIA RTX AI for Windows PCs platform offers a thriving ecosystem of thousands of open-source models for application developers to leverage and integrate into Windows applications. Notably, llama.cpp is one popular tool, with over 65K GitHub stars…
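
For readers who want to try llama.cpp locally, the sketch below uses the llama-cpp-python bindings to load a quantized GGUF model and offload layers to the GPU. The model path is hypothetical, and the settings are assumptions about a typical RTX setup, not NVIDIA's exact configuration.

```python
from llama_cpp import Llama  # pip install llama-cpp-python (built with CUDA support)

# Hypothetical local GGUF model path; n_gpu_layers=-1 offloads all layers to the GPU.
llm = Llama(
    model_path="models/llama-3.2-3b-instruct-q4_k_m.gguf",
    n_gpu_layers=-1,
    n_ctx=4096,
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what llama.cpp does in one sentence."}],
    max_tokens=64,
)
print(out["choices"][0]["message"]["content"])
```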


“Bridging Vision and Language: Designing, Training and Deploying Multimodal Large Language Models,” a Presentation from Meta Reality Labs

Adel Ahmadyan, Staff Engineer at Meta Reality Labs, presents the “Bridging Vision and Language: Designing, Training and Deploying Multimodal Large Language Models” tutorial at the May 2024 Embedded Vision Summit. In this talk, Ahmadyan explores the use of multimodal large language models in real-world edge applications. He begins by explaining…


Qualcomm Partners with Meta to Support Llama 3.2. Why This is a Big Deal for On-device AI

This blog post was originally published at Qualcomm’s website. It is reprinted here with the permission of Qualcomm. On-device artificial intelligence (AI) is critical to making your everyday AI experiences fast and security-rich. That’s why it’s such a win that Qualcomm Technologies and Meta have worked together to support the Llama 3.2 large language models (LLMs)…


Deploying Accelerated Llama 3.2 from the Edge to the Cloud

This blog post was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. Expanding the open-source Meta Llama collection of models, the Llama 3.2 collection includes vision language models (VLMs), small language models (SLMs), and an updated Llama Guard model with support for vision. When paired with the NVIDIA accelerated…
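
As a rough illustration of the cloud side, NVIDIA's NIM microservices expose an OpenAI-compatible endpoint, so a deployed Llama 3.2 model can be queried with the standard openai client. The base URL, API key, and model name below are placeholders, not values from the post.

```python
from openai import OpenAI  # pip install openai

# Placeholder endpoint and credentials for a locally hosted, OpenAI-compatible service.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed-locally")

resp = client.chat.completions.create(
    model="meta/llama-3.2-11b-vision-instruct",  # placeholder model name
    messages=[{"role": "user", "content": "Describe the Llama 3.2 model family briefly."}],
    max_tokens=128,
)
print(resp.choices[0].message.content)
```

Because the endpoint speaks the same protocol at the edge and in the cloud, the calling code stays identical regardless of where the model is deployed.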


BrainChip Demonstration of LLM-RAG with a Custom Trained TENNs Model

Kurt Manninen, Senior Solutions Architect at BrainChip, demonstrates the company’s latest edge AI and vision technologies and products at the September 2024 Edge AI and Vision Alliance Forum. Specifically, Manninen demonstrates his company’s Temporal Event-Based Neural Network (TENN) foundational large language model with 330M parameters, augmented with a Retrieval-Augmented Generation (RAG) output to replace user…
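
The RAG pattern itself is simple to sketch: embed a document store, retrieve the nearest passages for a query, and prepend them to the LLM prompt. The snippet below is a generic illustration with a hypothetical embed() stand-in for a real sentence-embedding model; it is not BrainChip's TENN implementation.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Hypothetical encoder; in practice this would be a sentence-embedding model."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(384)
    return v / np.linalg.norm(v)

docs = [
    "TENNs are temporal event-based neural networks.",
    "RAG retrieves supporting passages before generation.",
    "Edge AI runs inference on low-power devices.",
]
doc_vecs = np.stack([embed(d) for d in docs])

def retrieve(query: str, k: int = 2) -> list[str]:
    # Cosine similarity reduces to a dot product on unit-normalized vectors.
    scores = doc_vecs @ embed(query)
    return [docs[i] for i in np.argsort(scores)[::-1][:k]]

query = "What does RAG do?"
context = "\n".join(retrieve(query))
prompt = f"Use this context to answer:\n{context}\n\nQuestion: {query}"
# `prompt` would then be passed to the on-device LLM for generation.
print(prompt)
```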


How AI and Smart Glasses Give You a New Perspective on Real Life

This blog post was originally published at Qualcomm’s website. It is reprinted here with the permission of Qualcomm. When smart glasses are paired with generative artificial intelligence, they become the ideal way to interact with your digital assistant. They may be shades, but smart glasses are poised to give you a clearer view of everything…


Using Generative AI to Enable Robots to Reason and Act with ReMEmbR

This blog post was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. Vision-language models (VLMs) combine the powerful language understanding of foundational LLMs with the vision capabilities of vision transformers (ViTs) by projecting text and images into the same embedding space. They can take unstructured multimodal data, reason over…
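
The shared embedding space is easiest to see with a contrastive vision-language model such as CLIP, a simpler relative of the VLMs described here (CLIP is our illustrative choice, not the model used in ReMEmbR): image and text are encoded into the same vector space, where similarity falls out of a dot product. A minimal sketch, assuming the Hugging Face transformers library:

```python
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Stand-in image (a flat gray frame); substitute any real RGB image.
image = Image.new("RGB", (224, 224), color=(128, 128, 128))
texts = ["a robot in a warehouse", "a cat on a sofa"]

# Both modalities are projected into one embedding space, so image-text
# similarity is just a (scaled) dot product between the two projections.
inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)
probs = outputs.logits_per_image.softmax(dim=-1)
print(dict(zip(texts, probs[0].tolist())))
```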


“Entering the Era of Multimodal Perception,” a Presentation from Connected Vision Advisors

Simon Morris, Serial Tech Entrepreneur and Start-Up Advisor at Connected Vision Advisors, presents the “Entering the Era of Multimodal Perception” tutorial at the May 2024 Embedded Vision Summit. Humans rely on multiple senses to quickly and accurately obtain the most important information we need. Similarly, developers have begun using multiple…


