Software

“Data-efficient and Generalizable: The Domain-specific Small Vision Model Revolution,” a Presentation from Pixel Scientia Labs

Heather Couture, Founder and Computer Vision Consultant at Pixel Scientia Labs, presents the “Data-efficient and Generalizable: The Domain-specific Small Vision Model Revolution” tutorial at the May 2024 Embedded Vision Summit. Large vision models (LVMs) trained on a large and diverse set of imagery are revitalizing computer vision, just as LLMs…

“Omnilert Gun Detect: Harnessing Computer Vision to Tackle Gun Violence,” a Presentation from Omnilert

Chad Green, Director of Artificial Intelligence at Omnilert, presents the “Omnilert Gun Detect: Harnessing Computer Vision to Tackle Gun Violence” tutorial at the May 2024 Embedded Vision Summit. In the United States in 2023, there were 658 mass shootings, and 42,996 people lost their lives to gun violence. Detecting and…

Accelerating LLMs with llama.cpp on NVIDIA RTX Systems

This blog post was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. The NVIDIA RTX AI for Windows PCs platform offers a thriving ecosystem of thousands of open-source models for application developers to leverage and integrate into Windows applications. Notably, llama.cpp is one popular tool, with over 65K GitHub…

“Adventures in Moving a Computer Vision Solution from Cloud to Edge,” a Presentation from MetaConsumer

Nate D’Amico, CTO and Head of Product at MetaConsumer, presents the “Adventures in Moving a Computer Vision Solution from Cloud to Edge” tutorial at the May 2024 Embedded Vision Summit. Optix is a computer vision-based AI system that measures advertising and media exposures on mobile devices for real-time marketing optimization.…

Redefining Hybrid Meetings With AI-powered 360° Videoconferencing

This blog post was originally published at Ambarella’s website. It is reprinted here with the permission of Ambarella. The global pandemic catalyzed a boom in videoconferencing that continues to grow as companies embrace hybrid work models and seek more sustainable approaches to business communication with less travel. Now, with videoconferencing becoming a cornerstone of modern…

“Bridging Vision and Language: Designing, Training and Deploying Multimodal Large Language Models,” a Presentation from Meta Reality Labs

Adel Ahmadyan, Staff Engineer at Meta Reality Labs, presents the “Bridging Vision and Language: Designing, Training and Deploying Multimodal Large Language Models” tutorial at the May 2024 Embedded Vision Summit. In this talk, Ahmadyan explores the use of multimodal large language models in real-world edge applications. He begins by explaining…

Qualcomm Partners with Meta to Support Llama 3.2. Why This is a Big Deal for On-device AI

This blog post was originally published at Qualcomm’s website. It is reprinted here with the permission of Qualcomm. On-device artificial intelligence (AI) is critical to making your everyday AI experiences fast and security-rich. That’s why it’s such a win that Qualcomm Technologies and Meta have worked together to support the Llama 3.2 large language models (LLMs)…

“Introduction to Depth Sensing,” a Presentation from Meta

Harish Venkataraman, Depth Cameras Architecture and Tech Lead at Meta, presents the “Introduction to Depth Sensing” tutorial at the May 2024 Embedded Vision Summit. We live in a three-dimensional world, and the ability to perceive in three dimensions is essential for many systems. In this talk, Venkataraman introduces the main…

Deploying Accelerated Llama 3.2 from the Edge to the Cloud

This blog post was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. Expanding the open-source Meta Llama collection of models, the Llama 3.2 collection includes vision language models (VLMs), small language models (SLMs), and an updated Llama Guard model with support for vision. When paired with the NVIDIA accelerated…

“Advancing Embedded Vision Systems: Harnessing Hardware Acceleration and Open Standards,” a Presentation from the Khronos Group

Neil Trevett, President of the Khronos Group, presents the “Advancing Embedded Vision Systems: Harnessing Hardware Acceleration and Open Standards” tutorial at the May 2024 Embedded Vision Summit. Offloading processing to accelerators enables embedded vision systems to process workloads that exceed the capabilities of CPUs. However, parallel processors add complexity as…

Here you’ll find a wealth of practical technical insights and expert advice to help you bring AI and visual intelligence into your products without flying blind.

Contact

Address

Berkeley Design Technology, Inc.
PO Box #4446
Walnut Creek, CA 94596

Phone
+1 (925) 954-1411