Google AI Overviews aims to provide users with concise summaries of complex topics, leveraging the power of the company's advanced language models. However, the feature's initial rollout has been met with some unexpected challenges.
Recent reports suggest that the new feature is producing some unexpected and often humorous results.
From recommending non-toxic glue to prevent cheese from sliding off pizza (citing a sarcastic Reddit post) to misidentifying poisonous mushrooms, the AI-powered summaries have been raising eyebrows and prompting discussions about the reliability of AI-generated content.
What’s up with Google AI Overviews?
Google AI Overviews is designed to extract key information from a vast corpus of data and present it in a simplified format, making it easier for users to grasp complex concepts quickly. This feature has the potential to be incredibly useful for students, researchers, and anyone seeking a quick overview of a particular subject.
However, the recent issues with inaccurate and nonsensical summaries have highlighted the need for further refinement and improvement in the underlying AI algorithms.
Here are some of the questions Google AI Overviews has answered incorrectly:
How many rocks should I eat?
One particularly bizarre example involves the query “How many rocks should I eat?” Google’s AI suggested that users consume at least one small rock per day. While it included some disclaimers about consulting a doctor before ingesting any rocks, this advice is directly contradicted by medical professionals, who advise against eating rocks altogether.
It just keeps going. Google AI telling people to eat rocks 😂 pic.twitter.com/gEUAx7EpD6
— Liam @ GamingOnLinux 🐧🎮 (@gamingonlinux) May 24, 2024
What are some ways to keep cheese from sliding off pizza?
In what sounds like a prank straight out of a cooking forum, Google AI Overviews suggested adding non-toxic glue to pizza sauce to keep cheese from sliding off, advising users to “mix about 1/8 cup of non-toxic glue into the sauce”.
Google is dead beyond comparison pic.twitter.com/EQIJhvPUoI
— PixelButts (@PixelButts) May 22, 2024
Which US president went to the University of Wisconsin-Madison?
The AI Overviews have been producing some outlandish responses, such as claiming that multiple US presidents attended the University of Wisconsin-Madison, despite none actually having done so.
This misinformation stemmed from a tongue-in-cheek page on the university’s alumni website listing “alumni with presidential names”.
Meanwhile, over in Google Search.
Andrew Johnson has been killin it, I never knew. pic.twitter.com/IV2zCmI6Zv
— MMitchell (@mmitchell_ai) May 22, 2024
Google CEO remains optimistic
Despite these issues, Google CEO Sundar Pichai remained optimistic in a recent interview with The Verge, stating that users are responding positively to AI Overviews and are more engaged with the feature than with traditional search results. However, he acknowledged that the technology is not perfect and that Google cannot be expected to produce flawless results every time.
The question of how Google’s AI decides what information to surface in each response remains a key concern. Ideally, it would prioritize well-researched and widely agreed-upon information. However, the pizza glue example suggests that the AI may sometimes prefer to surface a joke from Reddit rather than provide no information at all.
This issue is further complicated by the fact that Google has signed content-licensing deals with platforms like Reddit, allowing the company to use Reddit’s data in its AI products. This raises questions about the potential for misinformation and biased content to be included in AI-generated summaries.
Intentional manipulation and misuse could be at play
The vulnerability of Google’s AI Overviews to surfacing any information, regardless of its accuracy, has also led some users to try to manipulate the results. One user even added a list of made-up awards to their own website, and those awards promptly appeared in Google’s summary.
This raises concerns about the potential for misuse and the need for safeguards to prevent the spread of false or misleading information.
As Google continues to refine AI Overviews, it will need to address these challenges and strike a balance between innovation and accuracy. This may involve improving the AI’s ability to discern reliable sources, implementing stricter fact-checking mechanisms, and being more transparent about the limitations of AI-generated content.
Until then, it is worth knowing how to get rid of Google AI Overviews, at least for now…
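One widely shared workaround (an unofficial parameter, not a Google-documented feature, and subject to change) is to request the classic “Web” results filter by appending `udm=14` to a Google search URL, which shows link-only results without AI Overviews. A minimal sketch in Python, assuming the parameter still behaves as reported:

```python
from urllib.parse import urlencode


def web_only_search_url(query: str) -> str:
    """Build a Google search URL that requests the 'Web' filter.

    udm=14 is an unofficial, widely reported parameter that returns
    classic link-only results without AI Overviews; Google may change
    or remove it at any time.
    """
    return "https://www.google.com/search?" + urlencode({"q": query, "udm": "14"})


# Example: open this URL in a browser to search without AI Overviews.
print(web_only_search_url("how to keep cheese on pizza"))
```

Some browsers also let you register a URL of this shape as a custom search engine, so every address-bar search skips AI Overviews automatically.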