Google Expands AI-Driven Search Functions With Lens and Multisearch

Powered by Google Bard

Many of Google's Lens search functions are set to receive AI-driven upgrades that take advantage of Bard, the recently announced LaMDA-based chatbot, and make visual searches more intuitive and helpful.

If you've ever wanted more from Google Lens or its multisearch function (introduced in 2022), new features and enhancements are on the way. Google has revealed plans for more in-depth image and video searching with Lens, along with more specific options when you combine an image and text in a multisearch.

"Search your screen" with Google Lens


Lens' upcoming upgrade allows you to "search your screen" for information about what you see in photos and videos—even if you didn't take them. In one example, Lens can search for and provide links to details about a background landmark seen in a friend's video of Paris.

Google's AI-powered multisearch will continue letting you search for more specific subjects using both an image and text, such as a photo of a chair plus text indicating your preferred color. It's also being expanded with Near Me (a.k.a. local) multisearch, which will surface nearby stores and other locations relevant to your query.

Multisearch is also going global, meaning you'll soon be able to search for more variations of the items you're curious about, using any image in your search results, without regional restrictions.

The Lens screen-search update will begin rolling out to Android devices over "the coming months." Multisearch has been available in the US for some time and, as of today, is live globally on all Lens-capable devices in more than 70 languages.

Near Me multisearch is likewise available in the US now and expected to roll out globally soon. As for global multisearch, Google says it should release worldwide within the next few months.
