
Google’s Got Nothing But Wrong Answers

Google CEO Sundar Pichai speaks during the Stanford Business, Government, and Society Forum at Stanford University in April of 2024. (Photo: Justin Sullivan/Getty Images)

When Google's new Gemini AI program responded to the search query "how many rocks should I eat each day?" with "at least one rock," it did not cite the original source for that extremely funny, extremely incorrect recommendation. That source was The Onion, which is both America's Finest News Source and not something that an actual person would confuse for a news source as readily as an AI language model might. This is because people, including the wiseacres who have spent recent weeks firing goofy questions at Google to see how badly its struggling and possibly in-timeout new AI program can duff the answers, can read and assess context and tone, and also think. Large language models like Gemini or ChatGPT cannot do any of that, which has not yet been nearly as disqualifying as you might expect.

Anyway, the source Google did cite was ResFrac.com, a website that describes itself as "the industry’s only fully coupled hydraulic fracturing and reservoir simulator," and which published and linked to that Onion story on its own website three years ago. That post was updated on the ResFrac site earlier this month once the internet's most widely used search engine, the primary public-facing part of a trillion-dollar tech behemoth whose name is synonymous with the act of looking things up online, began displaying that link as the first search result for questions about eating rocks. "Thank you to the Onion for the amusing content (linked above)," ResFrac wrote in its update. "You picked a photo with a great-looking geophysicist. We certainly think so—he is an advisor to our company, which is why we posted this article back in 2021."

This sort of technology is prone to what its advocates rather preciously and very intentionally describe as "hallucinations." The idea, here as throughout this goofy-gaudy bubble, is to make users think of the very expensive new technology that is constantly telling them weird lies as a Being That Is Learning, rather than as a very expensive new technology that constantly tells weird lies. The sales pitch is that this garbage-spewing slot machine is almost human: not just in its capacity to generate confidently phrased factual errors at scale, but in the deeper sense that it has very nearly become human. This is a strange thing to say about a technology that, after many billions of dollars of investment and more than a year of psychedelic hype, has lately been advising users to spend five to 15 minutes per day staring at the sun. But in its capacity to hold forth at length, with great confidence and only intermittent facticity, and then resume doing so after a cosmetic apology whenever anyone notices, generative AI has not so much Become Human as it has developed the ability to mirror its foremost human exponents.

At this time next year, those advocates will tell you, AI will be human, and also "smarter than any single human," and then it will continue becoming smarter and smarter and (this is held by these guys to be the same thing) more and more capable of doing the things that only human beings can do, until such time as ... well, how this sentence ends usually depends on the specific toxicology of the people talking. Maybe it will try to destroy humanity. (That feels like uppers.) Maybe it will simply become an indispensable constant companion, making possible the strange sort of things that these guys want, whenever they want it, which is instantly—an example of this is when OpenAI CEO Sam Altman posted back in April that "movies are going to become video games and video games are going to become something unimaginably better." (This sounds like hallucinogens.)

Mostly, though, it all just sounds desperate, and like what a salesman would say if he knew that he couldn't sell you his product on its merits alone.

AI, as it currently exists, does not really work very well if you are trying to generate high-quality or even reliably correct outputs. The pitch is that it will, or that it could, or anyway that it might depending upon a bunch of factors currently denoted by ???s on some dry-erase board somewhere. There isn't currently any evidence of that, and the capacity to generate incorrect advice and images of Jesus Christ, But Made Of Shrimp has fairly limited appeal. "The rate of improvement for AIs is slowing, and there appear to be fewer applications than originally imagined for even the most capable of them," Christopher Mims wrote in the Wall Street Journal.

It makes sense, then, that the emphasis has shifted to stagecraft. This technology exists, and it can generate a wide range of answers in more or less the same plucky/cocky tone, but those answers are not good enough and may never be good enough to have any effective practical utility. There is something bleakly funny about the industry bumping up against the limits of what it can actually do and simply pivoting to more ambitious marketing. The money is there to sustain this campaign for a while longer, but the results manifestly are not. "With the generative AI from Google, OpenAI’s ChatGPT, and Microsoft’s Copilot," the Washington Post's Shira Ovide wrote, "you should assume that they’re wrong until proved otherwise."

In the very next sentence, Ovide allows that "these chatbots can still be incredibly useful." This is in my opinion pretty funny—her post had already mentioned a Google AI response cheerfully advising readers to drink plenty of piss—but it also reflects the generosity with which these awful and unhelpful technologies have been received, even as their awfulness and uselessness have become inescapable. A technology that can very quickly and very confidently deliver information that is unlikely to be correct is not more useful than one that turns that work over to the user. For a company selling this dross machine as the future, it is something much worse than that.

While AI is still an emerging technology, Google is not. It is an institution, something very much like a public utility, and while it seems to have made the decision to slow-roll its Tripping Balls Answer Genie after its many preposterous errors became a punchline, it remains determined to make that genie a part of its web search experience. There is a backstory that explains how and why this web-search company has so determinedly desecrated its core product. Of course some mediocre business types trying to make a number go up are at the heart of it; I also don't imagine that anyone reading this would be surprised to learn that making that number go up, forever, was the only idea these people had, which made them doubly inclined toward the One Weird Trick that AI represented. You don't have to follow this kind of thing closely to know any of that; this cynical, checked-out, anti-human, fundamentally un-rebuttable demand for passive income is contemporary capitalism's prevailing mode, and it has made basically every single thing in American life both more expensive and worse.

Google makes its money largely through its multiply, lucratively parasitic relationship with online advertising, which is enabled by its sprawling and objectively odious surveillance of its every user. It gets away with this mostly because of its vast wealth and influence, but also because of the services it offers—its variously useful applications and products, and the eponymous search engine that until recently was the best way for people to learn and find things online. There was some labor involved on the user's part in putting the answer together, but for many years Google's search algorithm—and I'd say this is much more impressive than creating something that does the exact opposite, at great expense—gave users the tools to figure things out. It is now taking that ability away.

The decision to mess with that—to "do the Googling for you," as the company put it—by making use of so deliriously janky a technology is, for all the internal cultural and institutional struggles that brought it about, so obviously stupid as to be almost inspiring. It is, ironically or not, a decision so bad that only a human could have made it—a category error so willful and irrational that an AI, fed as it is on good and bad information alike, could not have made it. Only the most cocksure business puds, blithely unaware of and uninterested in how the product they sell is actually used or by whom, could have made this particular mistake; of all the many Americans who use Google every day, not one seems to have penetrated the company's C-suite.

"In the past, Google was telling you somebody else could answer your question," Peter Kafka wrote at Business Insider. "Now Google is answering your question. It's the difference between me handing you a map and me giving you directions that will send your car barreling over a cliff." It is the nature of his strange beat that Kafka, too, immediately follows this very clearly correct point with a qualifying caveat—there are competitors, after all, who might deliver equally useless answers slightly faster, and Google of course had to bear that in mind in rolling out its own mistake machine.

It's more interesting, I think, to consider what the people in charge at Google think they're doing when they use a technology in a way it was never quite intended to be used to make their product worse. Leave aside the oafish corporate arms race to create the most humanesque error-generation machine; take as a given the idiot boardroom fantasy of endless growth; assume that the ultimate decision-makers are people who could not successfully fry an egg if given a dozen opportunities and expert coaching. Think of it from the perspective of someone who does not know or care about any of that, as most people don't, and who simply types Google's name into their browser in hopes of learning something they don't know, only to receive a chirpy and obvious lie and a bunch of ads. They would, quite justifiably, have some questions. I wonder how Google would answer them.
