UPDATED: Final Coverage of the Google Event: ‘Live from Paris’

Not a ton of info, but at least there were Blobs

Live Blog

Update: The Google event is over; feel free to peruse our stunning coverage below, and stay tuned for more live events in the future. See our Google Event page for even more details.

8:30 AM: We're just getting started here; stay tuned for our take on the Google Event.

8:34 AM: Prabhakar Raghavan acknowledges the earthquakes in Turkey and Syria, then begins the presentation, talking about how Search uses AI to inform queries and power Lens.

8:36 AM: The idea for the next level of Search is to use AI to understand information.

Access to info empowers people, he says, but only if the language is understandable. Google Translate helps here: it now works with 133 languages, with 33 new ones added to its offline mode.

8:39 AM: "Your camera is the next keyboard." He's addressing Lens capabilities, including shopping and homework.

New Announcement: "Visual search has moved from a novelty to a reality. The age of visual search is here." He's saying that Lens will now translate the whole picture, not just the text in it. Rolling out now: you can use Lens to translate even more information from a picture.

8:42 AM: Liz Reid is now demoing some Lens capabilities. "With Lens, if you can see it, you can search it."

Lens' multi-search lets you search for color variations of products. The demo phone isn't available, so she's moving on. Multi-search is now officially live on mobile, globally. There's also a new "near me" option available in the US, so you can grab a picture of a food item, for example, and find out where to get it locally.

8:46 AM: Raghavan is talking about large language models. He's introduced LaMDA and now Bard, and is discussing how these technologies can surface helpful insights, including an example of using Bard to create a road trip map.

8:48 AM: He's still on the announcement that was made earlier this week about Bard, but it feels like he's leading into some new information, maybe.

8:49 AM: No One Right Answer, or NORA. Raghavan talks about how Bard can offer up contextual information, rather than just facts.

8:50 AM: Generative AI can create things like 3D views of products such as sneakers or a cake, letting people interact with visual information more easily.

8:51 AM: Next month they'll start onboarding developers for additional work on Generative AI.

8:52 AM: Exploring the real world, via Google Maps. Chris Phillips is up now to talk about how AI will improve things. He's talking about how maps used to be - gasp - printed on paper.

Immersive View is a new way to explore. It uses AI to fuse "billions" of images from Street View. Demo of a museum, with a time slider that shows what the weather will be like when you plan to visit.

8:56 AM: You can also use Immersive View to dig into a map, see inside restaurants, etc. Rolling out today in London, Tokyo, and a few other cities with more coming in the future.

Also introduced: Search with Live View. Currently in cities like Paris, coming to more soon.

8:57 AM: Demo showing Rachel looking for a coffee shop in a neighborhood by just raising her phone. Once she finds a coffee shop, she can tap on it and get all the regular Google Maps info about it.

8:59 AM: Indoor Live View is expanding to 1,000 new places.

Google AI Presentation

9:00 AM: Maps is adding AI-powered EV charging features, with things like a "very fast charging filter" and icons for charging at stores on the map.

Waze will let you specify your EV plug type to find the right charger, as well.

9:02 AM: Marzia Niccoli - Arts & Culture: Marzia is talking about how Google has worked to help expand and unearth different types of art, and how they're integrating artificial intelligence (AI).

She brought out the Blobs!

Google singing blobs

9:06 AM: She’s talking about women in science, using AI to analyze images and historical records to show what these women have achieved. 

9:08 AM: She's talking about how AI will allow greater access to classical art and more.

And now they're adding 3D AR models to the Arts & Culture app.

9:09 AM: Prabhakar Raghavan is back on the stage, summing things up. He’s explaining how AI will help us all communicate via various languages and images.

The presentation ended at 9:10 AM.

Google Presents Live from Paris

Google just announced its next streamed event, "Live from Paris." We'll have a live blog right here to help you stay on top of the latest from the internet behemoth.

The Google event will take place on Wednesday at 8:30 AM Eastern Time via YouTube and will likely include new reveals about the company's Search, Maps, and AI products.

"We're reimagining how people search for, explore and interact with information, making it more natural and intuitive than ever before to find what you need," the company wrote in its YouTube description. "Join us to learn how we're opening up greater access to information for people everywhere, through Search, Maps and beyond."

Google Presents Live from Paris YouTube splash page

