The tech giant has unveiled a new AI-powered search tool that some say can be laughably wrong, and sometimes dangerously so
Google has announced the rollout of a new search tool that uses AI to help filter and summarize search results. But some early adopters are finding its answers laughably wrong — and sometimes dangerously so.
In a blog post and related video backed by snappy music, the tech giant shows users getting their questions answered by what it calls AI Overviews. One person types “Why does my candle burn unevenly?” and gets a list of reasons that includes wicks, imbalanced wax pool, drafts and more.
Next, someone tries: “Explain how temperature impacts baking,” and gets a short, bullet-pointed essay that creates a look of amazement on her face.
So far, so helpful. But those are Google’s own curated examples of its search engine’s curatorial powers. Out in the real world, things unfolded somewhat differently.
According to the Associated Press, when asked if cats have been to the moon, AI Overviews cheerfully said they had.
“Astronauts have met cats on the moon, played with them, and provided care,” the AI wrote, adding for good measure: “Neil Armstrong said, ‘One small step for man’ because it was a cat’s step. Buzz Aldrin also deployed cats on the Apollo 11 mission.” (Leading one to wonder: How does one “deploy” cats? Is it anything like herding them?)
But at least AI Overviews wasn’t calling the moon landing itself into doubt. When asked how many Muslims have been U.S. president, the search engine regurgitated an old conspiracy theory and said there had been one: Barack Obama. (He’s Protestant, for the record.)
Such answers might seem head-scratching or annoying. But they also call into doubt the veracity of anything the search engine calls forth.
And it delivered some doozies. Asked how to fix the problem of cheese sliding off homemade pizza, it recommended adding glue. “Mix about 1/8 cup of Elmer’s glue in with the sauce,” The Verge reported it saying. “Non-toxic glue will work.”
Another AI Overviews reply to the somewhat leading question “How many rocks should I eat?” suggested not the appropriate answer of zero but “at least one small rock per day,” since rocks are a vital source of minerals (true) and vitamins (false).
Humans on the receiving end of this advice posited that the search engine may have been repurposing satirical web content without “getting” the joke. The glue-in-your-cheese recipe was traced to a decade-old Reddit post that was clearly joking, and the “rock a day” advice came from an article on the satirical website The Onion, which serves up fake news clearly meant to be recognized as such. Or at least it’s clear to humans.
Even more problematic was the AI’s answer to a question from a Rolling Stone magazine reporter, who asked: What bridge is best for jumping off? The magazine noted that Google AI Overview did not assume this was a sporting query about cooling off in the summer; neither did it discuss the new suicide prevention hotline.
Instead it advised: “The Golden Gate Bridge is one of the bridges in the world where people have jumped off the most,” adding: “Some say 98% of falls from this height are fatal.” It went on to provide details about the second- and third-most popular “suicide bridges” in the U.S., too.
“The vast majority of AI Overviews provide high-quality information, with links to dig deeper on the web,” Google said in a written statement when asked about these anomalies. “Many of the examples we’ve seen have been uncommon queries, and we’ve also seen examples that were doctored or that we couldn’t reproduce.”
That may be, but an AI that’s inconsistently wrong seems if anything worse than one that’s consistently so. And “uncommon queries” are the hallmark of human curiosity, as are panicked ones.
“The more you are stressed or hurried or in a rush, the more likely you are to just take that first answer that comes out,” Emily Bender, a linguistics professor and director of the University of Washington’s Computational Linguistics Laboratory, told the Sydney Morning Herald. “And in some cases, those can be life-critical situations.”
Hallucinations, as they are commonly called, have been plaguing AI since the recent rise of the chatbot. In 2022, the website The Daily Beast published a story under the headline “This Bot Is the Most Dangerous Thing Meta’s Made Yet.” It was referring to an AI called Galactica, trained on open-access scientific text and data and touted as being able to call up relevant research from a simple query.
The only problem was that many of its replies were racist, homophobic or just plain wrong, such as a reference to the invention of “gaydar,” or a study on the benefits of eating crushed glass.
Two days after going live, Meta took Galactica down. Google, on the other hand, shows no signs of giving up.
“AI Overviews will begin rolling out to everyone in the U.S., with more countries coming soon,” the company says in its blog post. “That means that this week, hundreds of millions of users will have access to AI Overviews, and we expect to bring them to over a billion people by the end of the year.”
At that rate, even the cats on the moon should have access to it soon.