Adel Ahmadyan, Staff Engineer at Meta Reality Labs, presents the “Bridging Vision and Language: Designing, Training and Deploying Multimodal Large Language Models” tutorial at the May 2024 Embedded Vision Summit. In this talk, Ahmadyan explores the use of multimodal large language models in real-world edge applications. He begins by explaining…