Why Google Bard in Search May Mean You Can No Longer Trust Results

AI doesn’t always get it right

  • Google is adding AI to its search capabilities.
  • Experts warn that current AI chatbots can produce inaccurate results. 
  • Users should confirm the results they get through chatbot searches.
Someone using a computer with the new Google Search Generative Experience displayed on the screen.
Stocksnap / Mockup Photos.

Google is making AI the centerpiece of its search capabilities, and experts say the move could mean users will have to deal with inaccuracies. 

The next evolution of Google Search will use the company's AI-powered Bard chatbot in an attempt to help users find information more quickly. But the current generation of AI chatbots is notoriously prone to errors.

"Bard is, at the end of the day, an automated computer model," Jason Downie, a former director at Google and the current US CEO of Making Science, told Lifewire in an email interview. "And is only as good and accurate as its inputs. 'Garbage in, garbage out,' as the old saying goes. And there is a lot of inaccurate info in the dataset."

Google Search Becomes Search Generative Experience

The familiar Google search results page will soon be a thing of the past. Instead, the company says its new Search Generative Experience will respond to open-ended queries: when you type a query into the main search bar, you will get an AI-generated response in a pop-up, alongside a version of the traditional results.

Users can sign up for a waitlist for the new Google Search, which will first launch in the United States, via the Google app or Chrome's desktop browser. 

"With new breakthroughs in generative AI, we're again reimagining what a search engine can do," Elizabeth Reid, Vice President and GM of Search at Google, wrote on the company's website. "With this powerful new technology, we can unlock entirely new types of questions you never thought Search could answer, and transform the way information is organized, to help you sort through and make sense of what's out there."

But Reid acknowledged that AI-powered search still has limitations. "There are known limitations with generative AI and LLMs, and Search, even today, will not always get it right," she wrote. "We're taking a responsible and deliberate approach to bringing new generative AI capabilities to Search. We've trained these models to uphold Search's high bar for quality, and we will continue to make improvements over time."

Bard's main difference from other chatbots on the market is its use of AI-powered search to provide more comprehensive and context-aware responses, Daniel Chabert, the CEO of the software development firm PurpleFire, said in an email interview.

"While many chatbots rely on simple keyword matching or pre-written responses, Bard leverages Google's search capabilities and natural language understanding to produce more relevant and valuable information to users," he added. 

But Chabert said that issues surrounding Bard and accuracy stem from the fact that it relies heavily on AI search algorithms. He added that despite advancements in AI technology, semantic understanding, and context interpretation, algorithms won't always perfectly grasp the intent and nuance behind every user query. 

"This may result in responses that fail to address the question, are factually inaccurate, or lack depth," he said.

Keeping AI Search Results Accurate

To ensure accuracy, users should compare Bard's responses with other sources of information and do additional research when necessary, Chabert said. This approach will help users verify the information provided and learn from different perspectives. 

"Users should also make use of clarifying questions and precise language when interacting with the chatbot to minimize misunderstandings,” he added. 

Users should be specific in their queries to get the most relevant results, Vladimir Fomenko, the director of Infatica, said via email. 

Someone sitting behind a laptop that's facing the camera with a search graphic overlaying the image.

Surasak Suwanmake / Getty Images

"For example, they should know that the chatbot has its capabilities and restrictions and might only be able to answer some of their questions,” he added. “They should also be wary of the sources offered by Bard and verify their credibility before relying on them in their studies.”

Users should not take search results "as gospel truth in these early stages,” Downie said. “Lots of testing and learning need to happen going forward,” he added. “It's akin to self-driving cars: you can use the computer to park your car or switch lanes, but folks are not ready to turn the entire driving operation over to the machine yet. Much more development is needed.”

The good news is that AI search is likely to get better.

“Of course, the model's job is to get better at interpreting data, throwing out inaccuracies, and learning from its mistakes,” Downie said. “The more the model is trained, the better it becomes. This technology is so new—it will be infinitely better, and quickly, over time.”