
TL;DR
- Google Search’s AI-powered summary, called AI Overviews, is generating false definitions for entirely fictional idioms.
- Users have found that typing a nonsensical phrase into Google followed by the word “meaning” returns a fabricated but very confident explanation.
- Even though Google labels AI Overviews as experimental, this behavior raises significant concerns about trust and accuracy in Google’s search results.
AI is already everywhere, but that isn’t stopping companies from adding it to even more places. Google is betting big on AI, weaving it into every surface with Gemini Advanced and even bringing it to Google Search with AI Overviews. The elephant in the room, however, is that AI can hallucinate, confidently making up facts that never existed. The latest example comes from Google AI Overviews, which confidently supplies meanings for made-up idioms and phrases.
Bluesky user Greg Jenner highlighted (via Engadget) that you can type any random phrase into Google Search, append “meaning” to the query, and get a completely made-up explanation for the words:
> Someone on Threads noticed you can type any random sentence into Google, then add “meaning” afterwards, and you’ll get an AI explanation of a famous idiom or phrase you just made up. Here is mine
The user searched for the meaning of “you can’t lick a badger twice,” and Google AI Overviews confidently answered that it means you can’t trick someone twice. The phrase, however, is entirely made up, yet Google explains it as if it were an established idiom.

I tried some fake phrases of my own as search queries, and sure enough, Google AI Overviews was confident about what they meant, even though these “idioms” had come into existence mere moments earlier. The same results page also notes that no results were found for the exact phrase, practically confessing that Google dreamt the meaning up!
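If you want to try this yourself, here’s a minimal sketch in Python that builds the kind of query being described. The phrase and the helper name are my own inventions, and the sketch assumes nothing beyond Google’s standard search URL:

```python
# Minimal sketch: build a Google Search URL for a made-up phrase
# with the word "meaning" appended, as described above.
# The phrase and the helper name are invented for illustration.
from urllib.parse import quote_plus

def meaning_query_url(phrase: str) -> str:
    """Return a Google Search URL asking for the 'meaning' of a phrase."""
    return f"https://www.google.com/search?q={quote_plus(phrase + ' meaning')}"

# Any nonsense works; this "idiom" was invented on the spot.
print(meaning_query_url("you can't lick a badger twice"))
```

Open the printed URL in a browser; whether an AI Overview actually appears can vary by account, region, and rollout.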
Google’s only warning on the results page is that “Generative AI is experimental,” which is hardly an adequate disclaimer that the results could be entirely fabricated. Fact and fiction coexist on the same search results page, since AI Overviews sits directly within Google Search.
As the user pointed out, such hallucinations are a warning sign for both us and Google. “Googling” has long been synonymous with the ability to fact-check a quote, verify a source, or track down something only half-remembered. Confident hallucinations like these shake the very foundations of Google Search: separating fact from fiction becomes far harder when the AI treats the truth as an afterthought.
We’ve contacted Google for a statement on these AI Overviews hallucinations and will update this article when we learn more.
Have you tried out AI Overviews? Have you ever received a hallucinated response? What fake phrase did you try, and what did Google say it meant? Let us know in the comments below!