Multimodal

“Multimodal LLMs at the Edge: Are We There Yet?,” An Embedded Vision Summit Expert Panel Discussion

Sally Ward-Foxton, Senior Reporter at EE Times, moderates the “Multimodal LLMs at the Edge: Are We There Yet?” Expert Panel at the May 2024 Embedded Vision Summit. Other panelists include Adel Ahmadyan, Staff Engineer at Meta Reality Labs; Jilei Hou, Vice President of Engineering and Head of AI Research at…

May 2024 Embedded Vision Summit Opening Remarks (May 23)

Jeff Bier, Founder of the Edge AI and Vision Alliance, welcomes attendees to the May 2024 Embedded Vision Summit on May 23, 2024. Bier provides an overview of the edge AI and vision market opportunities, challenges, solutions and trends. He also introduces the Edge AI and Vision Alliance and the…

May 2024 Embedded Vision Summit Opening Remarks (May 22)

Jeff Bier, Founder of the Edge AI and Vision Alliance, welcomes attendees to the May 2024 Embedded Vision Summit on May 22, 2024. Bier provides an overview of the edge AI and vision market opportunities, challenges, solutions and trends. He also introduces the Edge AI and Vision Alliance and the…

“Understand the Multimodal World with Minimal Supervision,” a Keynote Presentation from Yong Jae Lee

Yong Jae Lee, Associate Professor in the Department of Computer Sciences at the University of Wisconsin-Madison and CEO of GivernyAI, presents the “Learning to Understand Our Multimodal World with Minimal Supervision” keynote at the May 2024 Embedded Vision Summit. The field of computer vision is undergoing another profound change. Recently,…

Snapdragon Powers the Future of AI in Smart Glasses. Here’s How

This blog post was originally published at Qualcomm’s website. It is reprinted here with the permission of Qualcomm. A Snapdragon Insider chats with Qualcomm Technologies’ Said Bakadir about the future of smart glasses and Qualcomm Technologies’ role in turning them into a critical AI tool. Artificial intelligence (AI) is increasingly winding its way through our…

Build VLM-powered Visual AI Agents Using NVIDIA NIM and NVIDIA VIA Microservices

This blog post was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. Traditional video analytics applications and their development workflows are typically built on fixed-function, limited models that are designed to detect and identify only a select set of predefined objects. With generative AI, NVIDIA NIM microservices, and foundation…
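
As a rough illustration of how a VLM-backed microservice replaces a fixed-function detector, the sketch below sends an image and a free-form question to an OpenAI-compatible chat endpoint, the interface style such microservices commonly expose. The endpoint URL, model name and request schema here are placeholders, not actual NVIDIA NIM or VIA values.

    import base64
    import requests

    # Placeholder endpoint and model name; substitute the values for your own deployment.
    ENDPOINT = "http://localhost:8000/v1/chat/completions"
    MODEL = "example/vision-language-model"

    # Encode a camera frame so it can travel inside a JSON request.
    with open("dock_camera.jpg", "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode()

    # Ask an open-ended question instead of relying on a fixed list of object classes.
    payload = {
        "model": MODEL,
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": "Is anything blocking the loading bay? Answer yes or no, then explain."},
                {"type": "image_url", "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
            ],
        }],
        "max_tokens": 128,
    }

    response = requests.post(ENDPOINT, json=payload, timeout=60)
    print(response.json()["choices"][0]["message"]["content"])

Because the question is plain text, changing what the agent looks for is a prompt edit rather than a retraining job.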

Quantization: Unlocking Scalability for Large Language Models

This blog post was originally published at Qualcomm’s website. It is reprinted here with the permission of Qualcomm. Find out how LLM quantization solves the challenges of making AI work on device. In the rapidly evolving world of artificial intelligence (AI), the growth of large language models (LLMs) has been nothing short of astounding. These…
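
For readers unfamiliar with the technique, weight quantization stores each tensor as low-bit integers plus a scale factor and dequantizes on the fly, cutting memory and bandwidth requirements. The snippet below is a generic, minimal sketch of symmetric per-tensor int8 quantization in NumPy; it is not Qualcomm’s implementation, and the function names are ours.

    import numpy as np

    def quantize_int8(w: np.ndarray):
        """Symmetric per-tensor quantization: w is approximated by scale * q."""
        scale = np.abs(w).max() / 127.0                      # map the largest magnitude to 127
        q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
        return q, scale

    def dequantize_int8(q: np.ndarray, scale: float) -> np.ndarray:
        """Recover an approximate float32 tensor from the int8 codes."""
        return q.astype(np.float32) * scale

    # Toy example: one 4096 x 4096 layer shrinks 4x when stored as int8.
    w = np.random.randn(4096, 4096).astype(np.float32)
    q, scale = quantize_int8(w)
    err = np.mean(np.abs(w - dequantize_int8(q, scale)))
    print(f"fp32: {w.nbytes / 1e6:.0f} MB  int8: {q.nbytes / 1e6:.0f} MB  mean abs error: {err:.5f}")

Production toolchains go further, with per-channel scales, 4-bit weights and calibration data, but the memory arithmetic is the same.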

Nota AI Demonstration of Elevating Traffic Safety with Vision Language Models

Tae-Ho Kim, CTO and Co-founder of Nota AI, demonstrates the company’s latest edge AI and vision technologies and products at the 2024 Embedded Vision Summit. Specifically, Kim shows his company’s Vision Language Model (VLM) solution, designed to elevate vehicle safety. Advanced models analyze and interpret visual data to prevent accidents and enhance driving experiences. The…

Develop Generative AI-powered Visual AI Agents for the Edge

This blog post was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. An exciting breakthrough in AI technology, Vision Language Models (VLMs), offers a more dynamic and flexible method for video analysis. VLMs enable users to interact with image and video input using natural language, making the technology more accessible and…
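
To make “interacting with video using natural language” concrete, here is a minimal sketch that samples frames from a video with OpenCV and asks the same plain-English question about each one. The ask_vlm helper is hypothetical and stands in for whichever VLM service or model you actually deploy.

    import cv2  # pip install opencv-python

    def ask_vlm(frame, question: str) -> str:
        """Hypothetical helper: send one frame plus a natural-language question
        to your chosen VLM backend and return its text answer."""
        raise NotImplementedError("wire this up to your VLM service")

    def watch(video_path: str, question: str, every_n_frames: int = 30) -> None:
        """Sample every Nth frame and query the VLM about it."""
        cap = cv2.VideoCapture(video_path)
        index = 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            if index % every_n_frames == 0:
                print(f"frame {index}: {ask_vlm(frame, question)}")
            index += 1
        cap.release()

    # The query is ordinary language, so changing it does not require training a new detector:
    # watch("traffic_cam.mp4", "Is any pedestrian crossing outside the crosswalk?")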

What’s Next in On-device Generative AI?

This blog post was originally published at Qualcomm’s website. It is reprinted here with the permission of Qualcomm. Upcoming generative AI trends and Qualcomm Technologies’ role in enabling the next wave of innovation on-device. The generative artificial intelligence (AI) era has begun. Generative AI innovations continue at a rapid pace and are being woven into…
