TECHNOLOGIES

Accelerating LLMs with llama.cpp on NVIDIA RTX Systems

This blog post was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. The NVIDIA RTX AI for Windows PCs platform offers a thriving ecosystem of thousands of open-source models for application developers to leverage and integrate into Windows applications. Notably, llama.cpp is one popular tool, with over 65K GitHub […]

“Adventures in Moving a Computer Vision Solution from Cloud to Edge,” a Presentation from MetaConsumer

Nate D’Amico, CTO and Head of Product at MetaConsumer, presents the “Adventures in Moving a Computer Vision Solution from Cloud to Edge” tutorial at the May 2024 Embedded Vision Summit. Optix is a computer vision-based AI system that measures advertising and media exposures on mobile devices for real-time marketing optimization…

Transforming Interconnects in AI Systems: Co-Packaged Optics’ Role

In recent years, there has been a noticeable trend in optical transceiver technology, moving toward bringing the transceiver closer to the ASIC. Traditionally, pluggable optics—optical modules inserted and removed from the front panel of a switch—have been located near the edge of the printed circuit board (PCB). These pluggable optics are widely used in data…

Redefining Hybrid Meetings With AI-powered 360° Videoconferencing

This blog post was originally published at Ambarella’s website. It is reprinted here with the permission of Ambarella. The global pandemic catalyzed a boom in videoconferencing that continues to grow as companies embrace hybrid work models and seek more sustainable approaches to business communication with less travel. Now, with videoconferencing becoming a cornerstone of modern…

“Bridging Vision and Language: Designing, Training and Deploying Multimodal Large Language Models,” a Presentation from Meta Reality Labs

Adel Ahmadyan, Staff Engineer at Meta Reality Labs, presents the “Bridging Vision and Language: Designing, Training and Deploying Multimodal Large Language Models” tutorial at the May 2024 Embedded Vision Summit. In this talk, Ahmadyan explores the use of multimodal large language models in real-world edge applications. He begins by explaining…

Qualcomm Partners with Meta to Support Llama 3.2. Why This is a Big Deal for On-device AI

This blog post was originally published at Qualcomm’s website. It is reprinted here with the permission of Qualcomm. On-device artificial intelligence (AI) is critical to making your everyday AI experiences fast and security-rich. That’s why it’s such a win that Qualcomm Technologies and Meta have worked together to support the Llama 3.2 large language models (LLMs)…

“Using MIPI CSI to Interface with Multiple Cameras,” a Presentation from Meta

Karthick Kumaran Ayyalluseshagiri Viswanathan, Staff Software Engineer at Meta, presents the “Using MIPI CSI to Interface with Multiple Cameras” tutorial at the May 2024 Embedded Vision Summit. As demand rises for vision capabilities in robotics, virtual/augmented reality, drones and automotive, there’s a growing need for systems to incorporate multiple cameras…

What Is GMSL Technology And How Does It Work?

This blog post was originally published at e-con Systems’ website. It is reprinted here with the permission of e-con Systems. The GMSL interface plays a key role in embedded vision systems across industries. It can handle high-resolution video with low latency for long-distance data transmission. Discover more about the GMSL camera interface, its principles and…

“Introduction to Depth Sensing,” a Presentation from Meta

Harish Venkataraman, Depth Cameras Architecture and Tech Lead at Meta, presents the “Introduction to Depth Sensing” tutorial at the May 2024 Embedded Vision Summit. We live in a three-dimensional world, and the ability to perceive in three dimensions is essential for many systems. In this talk, Venkataraman introduces the main…

Synaptics Astra AI-native IoT Compute Platform Wins 2024 EDGE Award

SAN JOSE, Calif., Oct. 01, 2024 (GLOBE NEWSWIRE) — Synaptics® Incorporated (Nasdaq: SYNA) today announced that its Synaptics Astra™ AI-native IoT compute platform won in the Machine Learning and Deep Learning category of the 2024 EDGE Awards. The annual awards from Endeavor Media celebrate outstanding innovation in product design and function for the engineering industry.

Here you’ll find a wealth of practical technical insights and expert advice to help you bring AI and visual intelligence into your products without flying blind.

Contact

Address

Berkeley Design Technology, Inc.
PO Box #4446
Walnut Creek, CA 94596

Phone
+1 (925) 954-1411