How AI and Smart Glasses Give You a New Perspective on Real Life

This blog post was originally published at Qualcomm’s website. It is reprinted here with the permission of Qualcomm.

When smart glasses are paired with generative artificial intelligence, they become the ideal way to interact with your digital assistant

They may be shades, but smart glasses are poised to give you a clearer view of everything around you.

The biggest tech innovations of the last decade, whether it’s the widespread adoption of the smartphone or the rise of social media, have resulted in you spending more time staring at a screen. Smart glasses paired with a digital assistant powered by generative artificial intelligence (AI) have the potential to break this trend, and instead let you better live in — and appreciate — the moment.

Think about how you react when you see something funny, unique or memorable: You pull out your phone to take a video. Smart glasses let you capture that moment without taking your eyes off it, letting you fully take it in. This behavior of pulling out your phone is repeated over and over across many of the activities, interactions and search queries we do throughout the day — but there is another way.

Smart glasses and AI together represent a massive shift in how we will interact with the technology around us — one where the screen doesn’t have to be the focal point of your life. It’s just one of a myriad of benefits that will follow a wave of devices hitting the market, starting with the Ray-Ban Meta smart glasses.

You can expect even more pairs of smart glasses to appear in the coming months and with that, a proliferation of features and benefits.

The ideal interface

Walking around with a pair of glasses that just snaps pictures or plays audio is handy, but for those who don’t wear glasses regularly, it might not be reason enough to turn around if you’ve left them at home. The real value comes from the integration of generative AI and a digital assistant that understands your preferences and needs.

Rather than making you pull out a phone to take a photo or type in a query, your glasses are already on your face, and the assistant can see and hear everything around you, giving it access to important context at a moment’s notice. AI is now smart enough to accept multimodal inputs, meaning it can understand not just written questions, but also images, video and sound. That kind of capability supercharges your smart glasses: you can speak to them naturally to get the answers you want or to trigger actions like playing music or taking a picture hands-free.
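To make that flow concrete, here’s a minimal, self-contained sketch of the loop in Python: capture what the glasses see, pass it along with the spoken request to a multimodal model, and speak the answer back. Every name in it (capture_frame, query_lmm, speak) is an illustrative stand-in, not a real smart glasses SDK.

```python
# Illustrative sketch of a multimodal assistant loop on smart glasses.
# All functions here are stand-ins, not a real smart glasses SDK.

def capture_frame() -> bytes:
    """Stand-in for the glasses' camera capture."""
    return b"<jpeg bytes>"

def query_lmm(text: str, image: bytes) -> str:
    """Stand-in for a large multimodal model (LMM) call; a real
    implementation would run on-device or on the paired phone."""
    return f"Answer to {text!r}, given a {len(image)}-byte frame."

def speak(answer: str) -> None:
    """Stand-in for text-to-speech through the glasses' speakers."""
    print(answer)

# The user looks at a foreign-language menu and asks for a translation:
frame = capture_frame()                  # what the glasses see
speak(query_lmm("Translate this menu into English.", frame))
```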

Here are a few examples of what your glasses can do when powered by generative AI:

  • They can scan a sign or menu in a foreign language and translate the contents back to you.
  • They can recognize a toy or shirt you like, tell you where to buy it, or initiate the purchase and have it shipped automatically to your home.
  • They can suggest the ideal matching pair of shoes or pants when you pull a blouse from your closet.
  • They will be able to scan the food you’re about to eat and calculate how many calories you’re consuming; your assistant can take that information and offer appropriate workout recommendations.

That last part is key — the idea that your assistant can take in the important information around you and proactively offer suggestions and recommendations to make your life a little easier and smarter.

Smart glasses see what we see and hear what we hear, making them the best device for capturing camera and audio data. They can also deliver output from a large language model (LLM) or a large multimodal model (LMM) straight to the speakers in the glasses.

Qualcomm Technologies’ expertise

Behind Meta, TCL, Xiaomi and a myriad of other potential smart glasses makers is Qualcomm Technologies. We’ve tapped into our expertise in phones and other mobile devices to lay out many of the building blocks for more capable and powerful smart glasses. Who better to do that than the company that’s been intensely focused on devices capable of running on a battery for extended periods of time?

Our Snapdragon AR1 Gen 1 Platform has served as a foundation for many of the smart glasses either in the market or about to arrive, and it’s one built specifically to handle the unique demands of this category.

We’ve also drawn on our extensive experience building low-power devices, applying that expertise to glasses, which are particularly power-constrained, with small batteries packed into a frame that wraps around your head.

But it’s our work on ensuring that AI operations can be performed on the device itself, rather than through the cloud, that will truly unlock the abilities of smart glasses.

Smart glasses and on-device benefits

Qualcomm Technologies has already established itself as a leader in enabling AI operations to run on a device. Earlier this year, Qualcomm Technologies partnered with Microsoft to introduce a series of Copilot+ PCs running on the Snapdragon X Elite and Snapdragon X Plus platforms, enabling them to run AI tasks without pinging the cloud for extra processing power.

In July, Samsung showed off the AI capabilities of the Galaxy Z Fold6 and Flip6, both running on the Snapdragon 8 Gen 3 for Galaxy processor. Snapdragon MR and AR platforms have neural processing units (NPUs) derived from the same core technology, giving them the unique advantage of delivering high AI performance for a given amount of power (performance per watt). This means you can get a large amount of computation done within a constrained power budget, which is why performance per watt is the key metric for measuring on-device AI inferencing.
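As a back-of-the-envelope illustration of why that metric matters, here’s a tiny Python sketch; every number in it is a made-up assumption for the example, not a Snapdragon specification.

```python
# Why performance per watt, not peak throughput, rules this category.
# All numbers below are invented for illustration, not Snapdragon specs.
TOPS = 10.0        # assumed NPU throughput, tera-operations per second
POWER_W = 1.0      # assumed NPU power draw while inferencing, in watts
BATTERY_WH = 0.6   # assumed glasses battery capacity, in watt-hours

print(f"Efficiency: {TOPS / POWER_W:.1f} TOPS/W")

# At that draw, continuous inferencing exhausts the battery in:
minutes = BATTERY_WH / POWER_W * 60
print(f"Continuous inference budget: {minutes:.0f} minutes")

# Doubling efficiency (same TOPS at half the power) doubles that budget
# without touching the battery, the frame, or the thermals.
```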

There are a lot of advantages to running AI operations on the device:

  • Avoiding a round trip to the cloud means the whole process is much faster and more responsive, and doesn’t require a cellular or Wi-Fi connection to accomplish a task.
  • Keeping things in one location also means that your data — whether it’s your queries or the photos you take — stays local and, most importantly, stays private.

Smart glasses have an accompanying device that has a lot of on-device AI processing capability: a smartphone. If you’re walking around with a pair of smart glasses on your face, chances are you’ve got a smartphone in your pocket too. Making that short Bluetooth or Wi-Fi hop between the glasses and the phone is a lot more efficient and faster than needing to ping the cloud.

What’s coming next? Qualcomm Technologies is working on hybrid systems that utilize the AI you have on various devices — whether it’s your phone or computer — to give you the right processing for the task at hand. We were already the first to demonstrate multimodal AI capabilities working on a phone, and eventually you’ll see smart glasses take advantage of that kind of power.
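Here’s a minimal sketch of what that kind of hybrid routing could look like in Python, choosing between the glasses’ NPU, the paired phone and the cloud. The thresholds, names and fallback order are illustrative assumptions, not Qualcomm’s actual scheduler.

```python
# Illustrative hybrid-AI router: pick where to run an inference task based
# on model size and current connectivity. All thresholds are assumptions.

def pick_target(model_params_b: float, phone_linked: bool, online: bool) -> str:
    """Return where to run inference: 'glasses', 'phone' or 'cloud'.

    model_params_b: model size in billions of parameters (assumed proxy
    for compute cost); phone_linked: Bluetooth/Wi-Fi link to the phone;
    online: cellular or Wi-Fi connectivity to the cloud.
    """
    if model_params_b <= 1:                        # tiny model: glasses' NPU
        return "glasses"
    if phone_linked and model_params_b <= 10:      # short hop to the phone
        return "phone"
    if online:                                     # big models fall back to cloud
        return "cloud"
    return "phone" if phone_linked else "glasses"  # best local effort offline

print(pick_target(0.5, phone_linked=True, online=True))   # glasses
print(pick_target(7.0, phone_linked=True, online=False))  # phone
print(pick_target(70.0, phone_linked=True, online=True))  # cloud
```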

Ziad Asghar
SVP & GM, XR, Qualcomm Technologies
