
Here’s what’s going on with Google’s funny explanations of made-up expressions


Google logo at Google NYC headquarters with plants surrounding it

C. Scott Brown / Android Authority

TL;DR

  • Google AI Overviews are confidently trying to explain nonsense phrases, to great hilarity.
  • Ideally this wouldn’t happen, because AI Overviews are only supposed to appear when Google’s confident in the quality of its output.
  • The line between new phrases and nonsense phrases is a fine one, though, and it’s easy to see the logic Google tries to use to divine meaning.

If you haven’t heard about this phenomenon yet, people are asking Google Search to find the meaning behind various phrases. For actual idioms, this can be really useful, but the problem is that Google’s also pretty willing to dream up its own explanations for expressions that aren’t true idioms at all, just meaningless gibberish. Ask Google what “an empty cat is worth a dog’s ransom” means, and it will do its darndest to extract some semblance of meaning, even if that means squeezing blood from a stone.

We got in touch with Google to see what was going on here, and the company lays out its case in this official statement:

When people do nonsensical or ‘false premise’ searches, our systems will try to find the most relevant results based on the limited web content available. This is true of Search overall, and in some cases, AI Overviews will also trigger in an effort to provide helpful context. AI Overviews are designed to show information backed up by top web results, and their high accuracy rate is on par with other Search features like Featured Snippets.

It seems like the big problem is that it’s not always obvious what these “false premise” searches are in the first place. Language is an evolving thing, and new expressions come into being all the time. People are also prone to mishearing or misremembering things, and may not search for a phrase exactly as it’s intended to be used.

What seems clear from the explanation Google provides alongside these nonsense queries is that it’s still approaching them logically, trying to break down each part and figure out what the speaker could have possibly meant:

Google Search AI Overview explaining a searched idiom

And honestly, it doesn’t do a half-bad job. For novel expressions, AI Overview has resources to draw on that at least give it a fighting chance of figuring out the intended meaning. So how do you tell the difference between a genuinely novel expression and a nonsense one, a situation Google refers to as a “data void”?

That’s tricky, and Google tells us that it tries to only surface an AI Overview like this when Search has a certain degree of confidence that a summary would be both helpful and of high quality. It’s also constantly refining that cutoff, and while these public fails may just seem silly and entertaining to us, they give Google useful information about the edge cases where AI Overviews struggle to perform as desired.

One of these “hallucinations” on its own is funny, sure, but in the larger context of Google’s AI efforts, we can absolutely appreciate the genuine attempts the company’s systems are making to successfully communicate with users. When we intentionally try to trip it up, should we be surprised that it stumbles?

Maybe the most frustrating part right now is that it’s not always obvious just how confident Google is in any of the AI Overview results it presents, and a user may not immediately understand whether Google is actually citing someone else’s answer or just making a best guess. The more clearly it can communicate that to users, the less of a problem this sort of situation should prove.



