LLMs and MLLMs
The past decade-plus has seen remarkable progress in practical computer vision. Thanks to deep learning, computer vision is dramatically more robust and accessible, and it has enabled compelling capabilities in thousands of applications, from automotive safety to healthcare. But today’s widely used deep learning techniques suffer from serious limitations. They often struggle when confronted with ambiguity (e.g., are those people fighting or dancing?) or with challenging imaging conditions (e.g., is that shadow in the fog a person or a shrub?). And for many product developers, computer vision remains out of reach due to the cost and complexity of obtaining the necessary training data, or a lack of the required technical skills.
Recent advances in large language models (LLMs) and their variants, such as vision language models (VLMs), which comprehend both images and text, hold the key to overcoming these challenges. VLMs are an example of multimodal large language models (MLLMs), which integrate multiple data modalities, such as language, images, audio, and video, to enable complex cross-modal understanding and generation tasks. MLLMs represent a significant evolution in AI, combining the capabilities of LLMs with multimodal processing to handle diverse inputs and outputs.
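To make the idea concrete, here is a minimal sketch of cross-modal understanding in practice: a pretrained VLM generating a text description of an image. It assumes the Hugging Face transformers library and the public BLIP captioning checkpoint, chosen purely for illustration; any comparable VLM would serve.

```python
# Minimal VLM sketch: image in, text out (illustrative; assumes the
# Hugging Face transformers library and the public BLIP checkpoint).
import requests
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")

# Any image works; this COCO sample is used only as a stand-in.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")

# The text prompt conditions the generated description.
inputs = processor(image, "a photograph of", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=30)
print(processor.decode(out[0], skip_special_tokens=True))
```

The same image-in, text-out pattern underlies captioning, visual question answering, and the scene-understanding applications discussed throughout this portal.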
The purpose of this portal is to facilitate awareness of, and education regarding, the challenges and opportunities in using LLMs, VLMs, and other types of MLLMs in practical applications — especially applications involving edge AI and machine perception. The content that follows (which is updated regularly) discusses these topics. As a starting point, we encourage you to watch the recording of the symposium “Your Next Computer Vision Model Might be an LLM: Generative AI and the Move From Large Language Models to Vision Language Models”, sponsored by the Edge AI and Vision Alliance. A preview video of the symposium introduction by Jeff Bier, Founder of the Alliance, follows:
If there are topics related to LLMs, VLMs or other types of MLLMs that you’d like to learn about and don’t find covered below, please email us at [email protected] and we’ll consider adding content on these topics in the future.
View all LLM and MLLM Content
How AI On the Edge Fuels the 7 Biggest Consumer Tech Trends of 2025
This blog post was originally published at Qualcomm’s website. It is reprinted here with the permission of Qualcomm. From more on-device AI features on your phone to the future of cars, 2025 is shaping up to be a big year. Over the last two years, generative AI (GenAI) has shaken up, well, everything. Heading into…
NVIDIA Expands Omniverse With Generative Physical AI
New Models, Including Cosmos World Foundation Models, and Omniverse Mega Factory and Robotic Digital Twin Blueprint, Lay the Foundation for Industrial AI. Leading Developers Accenture, Altair, Ansys, Cadence, Microsoft and Siemens Among First to Adopt Platform Libraries. January 6, 2025 — CES — NVIDIA today announced generative AI models and blueprints that expand NVIDIA Omniverse™…
NVIDIA Launches Cosmos World Foundation Model Platform to Accelerate Physical AI Development
New State-of-the-Art Models, Video Tokenizers and an Accelerated Data Processing Pipeline, Optimized for NVIDIA Data Center GPUs, Are Purpose-Built for Developing Robots and Autonomous Vehicles. First Wave of Open Models Available Now to Developer Community. Global Physical AI Leaders 1X, Agile Robots, Agility, Figure AI, Foretellix, Uber, Waabi and XPENG Among First to Adopt. January 6, 2025 —…
NVIDIA Launches AI Foundation Models for RTX AI PCs
NVIDIA NIM Microservices and AI Blueprints Help Developers and Enthusiasts Build AI Agents and Creative Workflows on PC. January 6, 2025 — CES — NVIDIA today announced foundation models running locally on NVIDIA RTX™ AI PCs that supercharge digital humans, content creation, productivity and development. These models — offered as NVIDIA NIM™ microservices — are accelerated by…
Optimizing Multimodal AI Inference
This blog post was originally published at Intel’s website. It is reprinted here with the permission of Intel. Multimodal models are becoming essential for AI, enabling the integration of diverse data types into a single model for richer insights. During the second Intel® Liftoff Days 2024 hackathon, Rahul Nair’s workshop on Inference of Multimodal Models…
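The workshop itself is not reproduced here, but one broadly applicable optimization for multimodal inference is post-training quantization. The sketch below applies PyTorch’s dynamic quantization to a representative captioning model; the model choice and technique are illustrative assumptions, not the workshop’s specific content.

```python
# Illustrative optimization: dynamic (post-training) quantization in PyTorch.
# The model is a stand-in; the workshop's actual techniques may differ.
import torch
from transformers import BlipForConditionalGeneration

model = BlipForConditionalGeneration.from_pretrained(
    "Salesforce/blip-image-captioning-base").eval()

# Swap fp32 Linear weights for int8; activations are quantized on the fly.
# This typically shrinks the model and cuts CPU inference latency.
quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8)

print(type(quantized))  # run the usual processor/generate pipeline on this model
```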
Qualcomm Launches On-prem AI Appliance Solution and Inference Suite to Step-up AI Inference Privacy, Flexibility and Cost Savings Across Enterprise and Industrial Verticals
Highlights: Qualcomm AI On-Prem Appliance Solution is designed for generative AI inference and computer vision workloads on dedicated on-premises hardware – allowing sensitive customer data, fine-tuned models, and inference loads to remain on premises. Qualcomm AI Inference Suite provides ready-to-use AI applications and agents, tools and libraries for operationalizing AI from computer vision to generative…
Why Generative AI is the Catalyst That Mixed Reality Needs
This blog post was originally published at Qualcomm’s website. It is reprinted here with the permission of Qualcomm. From content creation to digital avatars, generative AI is the critical ingredient for building immersive worlds in mixed reality. The promise of mixed reality fundamentally changing the way we interact and live our lives has always been…
From Generative to Agentic AI, Wrapping the Year’s AI Advancements
This blog post was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. Editor’s note: This post is part of the AI Decoded series, which demystifies AI by making the technology more accessible, and showcases new hardware, software, tools and accelerations for GeForce RTX PC and NVIDIA RTX workstation users.
Qualcomm CEO Cristiano Amon at Web Summit: GenAI is the New UI
This blog post was originally published at Qualcomm’s website. It is reprinted here with the permission of Qualcomm. How generative AI (GenAI)-powered “agents” will change the way you interact with the digital world. The rise of artificial intelligence (AI) opens the door to a vast array of possibilities. AI-powered agents will be the key to…
An Easy Introduction to Multimodal Retrieval-augmented Generation for Video and Audio
This blog post was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. Building a multimodal retrieval-augmented generation (RAG) system is challenging. The difficulty comes from capturing and indexing information from across multiple modalities, including text, images, tables, audio, video, and more. In our previous post, An Easy Introduction…
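For readers new to the topic, the sketch below illustrates the retrieval step of such a system: a shared text-image embedding space (CLIP is assumed here) lets a text query rank images for inclusion in an LLM prompt. The file names and query are hypothetical.

```python
# Multimodal retrieval sketch: rank images against a text query in CLIP's
# shared embedding space (an assumed stand-in for a production vector index).
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

corpus = ["factory_floor.jpg", "wiring_diagram.png"]  # hypothetical files
with torch.no_grad():
    img_embeds = torch.cat([
        model.get_image_features(
            **processor(images=Image.open(path), return_tensors="pt"))
        for path in corpus
    ])
img_embeds = img_embeds / img_embeds.norm(dim=-1, keepdim=True)

with torch.no_grad():
    query = model.get_text_features(**processor(
        text=["Where is the main breaker?"], return_tensors="pt", padding=True))
query = query / query.norm(dim=-1, keepdim=True)

scores = (query @ img_embeds.T).squeeze(0)  # cosine similarity per image
print("Top match to feed the LLM:", corpus[scores.argmax().item()])
```

A production system would replace the Python list with a vector database and index text, table, and audio embeddings alongside the images.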
Computer Vision Pipeline v2.0
This blog post was originally published at Tenyks’ website. It is reprinted here with the permission of Tenyks. In the realm of computer vision, a shift is underway. This article explores the transformative power of foundation models, digging into their role in reshaping the entire computer vision pipeline. It also demystifies the hype behind the…
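As a taste of what that shift looks like, the sketch below uses a pretrained foundation model (CLIP, assumed here) to classify an image zero-shot, with no task-specific training data or fine-tuning; the labels and input file are hypothetical.

```python
# Zero-shot classification sketch: a foundation model stands in for a
# trained classifier. CLIP and the labels below are illustrative assumptions.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

labels = ["a person", "a shrub", "a shadow"]  # hypothetical categories
image = Image.open("scene.jpg")  # hypothetical input frame

inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(**inputs).logits_per_image  # image-text similarity scores
probs = logits.softmax(dim=1).squeeze(0)
print({label: round(p.item(), 3) for label, p in zip(labels, probs)})
```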
Virtual and Augmented Reality: The Rise and Drawbacks of AR
While virtual reality (VR) headsets close users off from the real world, augmented reality (AR) combines images and data with real-time views to create an enriched, computing-enhanced experience. IDTechEx’s portfolio of reports, including “Optics for Virtual, Augmented and Mixed Reality 2024-2034: Technologies, Players and Markets”, explores the latest…
An Easy Introduction to Multimodal Retrieval-augmented Generation
This blog post was originally published at NVIDIA’s website. It is reprinted here with the permission of NVIDIA. A retrieval-augmented generation (RAG) application is dramatically more useful if it can work with a wide variety of data types—tables, graphs, charts, and diagrams—and not just text. This requires a framework that can understand and generate responses…
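One simple way to make tables retrievable alongside prose, sketched below with hypothetical data, is to serialize each row into a self-contained text chunk before embedding it into the index.

```python
# Table-to-text serialization sketch for a RAG index (hypothetical data).
import pandas as pd

df = pd.DataFrame({
    "part": ["lens", "sensor"],
    "defect_rate_pct": [0.4, 1.2],
})

# One chunk per row keeps each numeric fact attached to its column label,
# so a text embedder can index it next to ordinary documents.
chunks = [
    "; ".join(f"{col}: {row[col]}" for col in df.columns)
    for _, row in df.iterrows()
]
print(chunks)  # ['part: lens; defect_rate_pct: 0.4', 'part: sensor; ...']
```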
“Using Computer Vision-powered Robots to Improve Retail Operations,” a Presentation from Simbe Robotics
Durgesh Tiwari, VP of Hardware Systems, R&D at Simbe Robotics, presents the “Using Computer Vision-powered Robots to Improve Retail Operations” tutorial at the December 2024 Edge AI and Vision Innovation Forum. In this presentation, you’ll learn how Simbe Robotics’ AI- and CV-enabled robot, Tally, provides store operators with real-time intelligence to improve inventory management, streamline…
Amid the Rise of LLMs, is Computer Vision Dead?
This blog post was originally published at Tenyks’ website. It is reprinted here with the permission of Tenyks. The field of computer vision has seen incredible progress, but some believe there are signs it is stalling. At the International Conference on Computer Vision 2023 workshop “Quo Vadis, Computer Vision?”, researchers discussed what’s next for computer vision…
“Vision Language Models for Regulatory Compliance, Quality Control and Safety Applications,” a Presentation from Camio
Carter Maslan, CEO of Camio, presents the “Vision Language Models for Regulatory Compliance, Quality Control and Safety Applications” tutorial at the December 2024 Edge AI and Vision Innovation Forum. In this presentation, you’ll learn how vision language models interpret policy text to enable much more sophisticated understanding of scenes and human behavior compared with current-generation…
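Camio’s actual system is not shown in the excerpt above, but the general pattern it describes can be sketched: hand a VLM a written policy plus a camera frame and ask for a compliance judgment. The sketch below assumes an OpenAI-compatible API, a hypothetical policy string, and a hypothetical image file.

```python
# Hypothetical zero-shot policy check with a VLM via an OpenAI-compatible API.
import base64
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

policy = "Hard hats must be worn at all times inside the loading dock."
frame_b64 = base64.b64encode(open("dock_frame.jpg", "rb").read()).decode()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any vision-capable chat model would do
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": f"Policy: {policy}\nDoes this frame show a violation? "
                     "Answer yes or no, then explain briefly."},
            {"type": "image_url",
             "image_url": {"url": f"data:image/jpeg;base64,{frame_b64}"}},
        ],
    }],
)
print(response.choices[0].message.content)
```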