“Running with scissors is a cardio exercise that may increase your heart rate and require concentration and focus,” says Google’s new AI search feature. “Some say it can also improve your pores and give you strength.”
Google’s AI feature pulled this response from a website called Little Old Lady Comedy, which, as its name makes clear, is a comedy blog. But the gaffe is so ridiculous that it’s been circulating on social media, along with other obviously incorrect AI overviews on Google. Effectively, everyday users are now red teaming these products on social media.
In cybersecurity, some companies will hire “red teams” – ethical hackers – who try to breach their products as if they’re bad actors. If a red team finds a vulnerability, then the company can fix it before the product ships. Google certainly conducted a form of red teaming before releasing an AI product on Google Search, which is estimated to process trillions of queries per year.
It’s surprising, then, when a highly resourced company like Google still ships products with obvious flaws. That’s why it’s now become a meme to clown on the failures of AI products, especially at a time when AI is becoming more ubiquitous. We’ve seen this with bad spelling on ChatGPT, video generators’ failure to understand how humans eat spaghetti, and Grok AI news summaries on X that, like Google, don’t understand satire. But these memes could actually serve as useful feedback for companies developing and testing AI.
Despite the high-profile nature of these flaws, tech companies often downplay their impact.
“The examples we’ve seen are generally very uncommon queries, and aren’t representative of most people’s experiences,” Google told TechCrunch in an emailed statement. “We conducted extensive testing before launching this new experience, and will use these isolated examples as we continue to refine our systems overall.”
Not all users see the same AI results, and by the time a particularly bad AI suggestion gets around, the issue has often already been rectified. In a more recent case that went viral, Google suggested that if you’re making pizza but the cheese won’t stick, you could add about an eighth of a cup of glue to the sauce to “give it more tackiness.” As it turned out, the AI pulled this answer from an eleven-year-old Reddit comment from a user named “f––smith.”
Beyond being an incredible blunder, it also signals that AI content deals may be overvalued. Google has a $60 million contract with Reddit to license its content for AI model training, for instance. Reddit signed a similar deal with OpenAI last week, and Automattic properties WordPress.org and Tumblr are rumored to be in talks to sell data to Midjourney and OpenAI.
To Google’s credit, many of the errors circulating on social media come from unconventional searches designed to trip up the AI. At least, I hope no one is seriously searching for the “health benefits of running with scissors.” But some of these screw-ups are more serious. Science journalist Erin Ross posted on X that Google spat out misinformation about what to do if you get a rattlesnake bite.
Ross’s post, which got over 13,000 likes, shows that the AI recommended applying a tourniquet to the wound, cutting the wound and sucking out the venom. According to the U.S. Forest Service, these are all things you should not do if you get bitten. Meanwhile on Bluesky, the author T Kingfisher amplified a post that shows Google’s Gemini misidentifying a poisonous mushroom as a common white button mushroom – screenshots of the post have spread to other platforms as a cautionary tale.
When a bad AI response goes viral, the AI could get more confused by the new content around the topic that emerges as a result. On Wednesday, New York Times reporter Aric Toler posted a screenshot on X that shows a query asking if a dog has ever played in the NHL. The AI’s response was yes – for some reason, the AI called the Calgary Flames player Martin Pospisil a dog. Now, when you make that same query, the AI pulls up an article from the Daily Dot about how Google’s AI keeps thinking that dogs are playing sports. The AI is being fed its own mistakes, poisoning it further.
This is the inherent problem of training these large-scale AI models on the internet: sometimes, people on the internet lie. But just like how there’s no rule against a dog playing basketball, there’s unfortunately no rule against big tech companies shipping bad AI products.
As the saying goes: garbage in, garbage out.