What is Google Lens?

Google Lens is a visual search engine that can analyze pictures and then provide useful information or perform specific tasks. Anton Petrus / Moment / Getty

Google Lens is an app that analyzes images in order to bring up relevant information and perform other specific tasks. The app is integrated with both Google Photos and Google Assistant, and it leverages artificial intelligence and deep learning to work better and faster than earlier image recognition apps like Google Goggles. It was first announced alongside Google's Pixel 2 and Pixel 2 XL phones, with a wider release to first-generation Pixel phones and other Android devices following later.

Google Lens is a Visual Search Engine

Search has always been Google's flagship product, and Google Lens expands on that core competency in new and exciting ways. At a very basic level, Google Lens is a visual search engine, which means it can analyze the visual data of an image and then perform a number of different tasks based on the contents of the image.

Google, like most other search engines, has included image search functions for a long time, but Google Lens is a different animal.

While some regular search engines are capable of performing a reverse image search, which involves analyzing an image and then searching for similar content on the web, Google Lens goes a whole lot further than that.

One very simple example: if you take a picture of a landmark and then tap the Google Lens icon, Lens will recognize the landmark and pull up relevant information from the internet.

Depending on the specific landmark, this information can include a description, reviews, and even contact information if it's a business.

How Does Google Lens Work?

Google Lens is integrated into Google Photos and Google Assistant, so you can access it directly from those apps. If your phone is capable of using Google Lens, you'll see a Lens icon in your Google Photos app. Tapping that icon activates Lens.

When you use Google Lens, an image is uploaded from your phone to Google's servers, and that's when the magic starts. Using artificial neural networks, Google Lens analyzes the image to determine what it contains.

Once Google Lens figures out the content and context of a picture, the app provides you with information or gives you the option to perform a contextually appropriate action.

For instance, if you see a book sitting on your friend's coffee table, snap a picture, and tap the Google Lens icon, it will automatically identify the book's title and author and provide you with reviews and other details.

Using Google Lens to Capture Email Addresses and Other Information

Google Lens is also able to recognize and transcribe text, like business names on signs, phone numbers, and even email addresses.

This is sort of like the old-school optical character recognition (OCR) you may have used to scan documents in the past, but with far more utility and far greater accuracy thanks to help from Google DeepMind.

This feature is pretty easy to use:

  1. Aim your camera at something that includes text.
  2. Press the Google Lens button.

Depending on what you took a picture of, this will bring up different options:

  • Call phone numbers.
  • Send email to email addresses.
  • Add phone numbers or email addresses to your contacts.
  • Copy the text to paste elsewhere or run a search.
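The flow above — recognize text, then offer a context-appropriate action — can be sketched in plain Python. This is purely illustrative and not how Lens actually works: it assumes the text has already been extracted from the image (Lens's recognition step) and simply classifies each piece of text so an app could offer to call, email, or copy it. The patterns and function name are invented for this sketch.

```python
import re

# Rough patterns for illustration only; real-world phone and email
# detection is considerably more involved.
PHONE_RE = re.compile(r"^\+?[\d\s().-]{7,}$")
EMAIL_RE = re.compile(r"^[\w.+-]+@[\w-]+\.[\w.-]+$")

def suggest_action(token: str) -> str:
    """Map a piece of recognized text to a contextual action."""
    token = token.strip()
    if EMAIL_RE.match(token):
        return "email"        # offer to send an email
    if PHONE_RE.match(token):
        return "call"         # offer to place a call
    return "copy-or-search"   # fall back to copying or searching

for text in ["(555) 867-5309", "hello@example.com", "Joe's Coffee"]:
    print(text, "->", suggest_action(text))
```

A business name like "Joe's Coffee" matches neither pattern, so the sketch falls back to the copy-or-search option — mirroring how Lens offers different actions depending on what the text turns out to be.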

Google Lens and Google Assistant

Google Assistant is, as the name implies, Google's virtual assistant that comes built right into Android phones, Google Home, and many other Android devices. It's also available, in app form, on iPhones.

Assistant is primarily a way to interact with your phone by talking to it, but it also has a text option that allows you to type requests. By speaking the wake word, which is "Okay, Google" by default, you can have Google Assistant place phone calls, check your appointments, search the internet, or even activate your phone's flashlight function.

Google Assistant integration was announced alongside the initial Google Lens reveal. This integration allows you to use Lens directly from Assistant if your phone is capable of doing so, and it works by activating a live feed from the phone's camera.

When you tap a part of the image, Google Lens analyzes it, and Assistant provides information or performs a contextually relevant task.
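Google hasn't published how Lens maps a tap on the live feed to the region it analyzes, but the general idea — turn a tap coordinate into a crop box centered on that point, clamped so it stays inside the camera frame — can be sketched as follows. The function name, the fixed 200-pixel box size, and the coordinate convention are all assumptions for illustration.

```python
def tap_to_crop(x: int, y: int, frame_w: int, frame_h: int,
                box: int = 200) -> tuple[int, int, int, int]:
    """Return a (left, top, right, bottom) crop box centered on a tap,
    clamped so the box stays fully inside the camera frame."""
    half = box // 2
    left = min(max(x - half, 0), frame_w - box)
    top = min(max(y - half, 0), frame_h - box)
    return (left, top, left + box, top + box)

# A tap near the top-left corner gets clamped to the frame edge.
print(tap_to_crop(100, 50, 1920, 1080))  # → (0, 0, 200, 200)
```

The cropped region would then be sent for analysis, so the result reflects the object you tapped rather than the whole scene.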