What went wrong with Google's new AI search feature — and what the company is doing to try to fix it

Alphabet CEO Sundar Pichai at Google’s I/O developer conference on May 14. (Jeff Chiu/AP)

Over the last two weeks, Google started rolling out AI Overviews to users in the U.S. The feature uses generative artificial intelligence to gather information from around the internet and distill it into a concise synopsis at the top of the first page of search results.

For over a year, Google has been upfront about its plans to incorporate AI into its platform. In May 2023, the company detailed its plans to integrate AI into its search engine, which controls about 91% of the online search market.

Following the launch of the new feature, users have been complaining about the results and mocking the AI-generated answers on social media platforms. Google employees have since manually corrected some of the AI Overviews answers, and the tech giant has detailed what engineers are doing to improve the system.

Soon after the AI Overviews rollout, social media users began pointing out apparent flaws in the search results. In an example that went viral, someone searched “cheese not sticking to pizza,” and AI Overviews suggested adding “1/8 cup of non-toxic glue to the sauce to give it more tackiness.” Users traced the information back to a humorous comment on an 11-year-old Reddit post.

In a May 30 blog post, Liz Reid, the vice president of Google Search, called the viral AI Overviews examples circulating online “odd and erroneous” and alleged “a very large number” of them were “faked screenshots.”

“We hold ourselves to a high standard, as do our users, so we expect and appreciate the feedback, and take it seriously,” she wrote. “At the scale of the web, with billions of queries coming in every day, there are bound to be some oddities and errors.”

Some AI mistakes, however, are more serious than the answer to a search asking whether dogs have ever played in the NBA. One reported example illustrated how Google’s AI regurgitated a widely circulated piece of misinformation that former President Barack Obama is Muslim.

Google struck a $60 million deal with Reddit and said in a policy update in 2023 that it would use “publicly available information” to train its AI models. But AI is not infallible — it pulls results from anywhere, including sources that may not be fact-checked, such as Reddit comments, and even the satirical news website The Onion.

AI does not recognize the difference between a joke or sarcasm and fact. The problem is compounded when a search query is obscure enough that few reliable sources exist.

It’s not an issue exclusive to Google’s AI practices. Other popular AI tools like OpenAI’s ChatGPT have produced wrong answers. A Purdue University study found that the chatbot presented wrong information as fact 52% of the time, especially when faced with more complex questions.

Google’s ecosystem has been built on presenting links to other content and platforms to help users find answers to their questions. As the leading global search engine, with tens of millions of visitors daily, Google Search is responsible for an estimated 63% of all U.S. website traffic referrals.

Not only does AI Overviews push other publishers and links down the first page of search results, but it also paraphrases content taken directly from other websites by writers who do not get credited.

Following the release of AI Overviews, the News/Media Alliance, which represents more than 2,000 print and digital news media companies, called Google’s incorporation of AI into its search engine “catastrophic to our traffic.” Danielle Coffey, the nonprofit’s chief executive, said Google has created “a product that directly competes with our content, using our content to fuel it.”

Gartner, a tech research firm, estimated that publisher traffic generated from search engines will fall 25% within the next two years.

Social media users have noticed that some previous Google searches that had generated “weird and inconsistent” answers suddenly stopped offering an AI Overviews answer — in some cases just hours after the original search.

According to The Verge, a Google spokesperson said the company is removing AI Overviews on certain searches and using those failed AI Overviews answers as “examples to develop broader improvements to our systems.”

For the search engine that processes approximately 99,000 queries per second, this uncertainty and misinformation could “slowly erode our trust in Google,” Chinmay Hegde, associate professor of computer science and engineering at NYU, told Yahoo Finance.

In an announcement on May 21, Google also shared that it would start testing placing search and shopping advertisements in its AI Overviews answers to boost ad sales. Google said it would mark ads as “sponsored” within the AI Overviews results.

Users cannot turn off the feature, but there are work-arounds.

  1. Use a web browser that isn’t Google Chrome. This trick only works on desktop, but using Safari or Firefox should eliminate AI Overviews in Google searches.

  2. Click the “Web” tab that displays above Google search results. Next to tabs like “All” and “Images,” there should be a section titled “Web” that will eliminate the Overviews section from view and only show you links. This should work on desktop and mobile browsers.

  3. Someone created a “Hide Google AI Overviews” Chrome extension. You can download it from the Chrome Web Store.
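For readers who prefer a bookmark or script to clicking the “Web” tab each time, that tab corresponds to a URL parameter. Below is a minimal sketch that builds such a search URL, assuming the widely reported `udm=14` filter; this parameter is undocumented, so Google may change or remove it at any time.

```python
from urllib.parse import urlencode

def web_only_search_url(query: str) -> str:
    """Build a Google search URL requesting the link-only "Web" view.

    udm=14 is the commonly reported (but undocumented) parameter
    behind the "Web" tab; treat it as an assumption, not an API.
    """
    return "https://www.google.com/search?" + urlencode({"q": query, "udm": "14"})

print(web_only_search_url("cheese not sticking to pizza"))
```

Saving the resulting URL pattern as a custom search engine in the browser makes the “Web” view the default without installing an extension.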