The Machines

Whatever AI Looks Like, It’s Not

Cueva de las Manos, Río Pinturas, Patagonia, Argentina. (Photo: Thomas Schmitt/Getty Images)

There is a very funny viral tweet going around that features a screenshot of a Google search result for "austria-hungary in space." You can try the search yourself. This is what Google returns:

In 1889 Austria-Hungary conducted its first manned orbital spaceflight using a liquid-fueled rocket launched from the region of Galicia. In 1908 the nation successfully landed 30 astronauts in the Phaethontis quadrangle region of Mars, where they constructed a temporary research outpost and remained for one year.

I love this very much. Sure, none of it happened, and sure, Google seems to be abdicating its role of "useful thing that gives you what you're looking for," which is a little worrying given that it forced out almost all of the alternatives by being very good at that before it started being bad. But I am laughing! I am laughing at the idea of the first Raumfahrer dedicating their lunar mission to the glory of the House of Habsburg-Lorraine. I am laughing at the mental image of the black-gold flag of the monarchy drooping in the still Martian atmosphere outside the newly christened Franz-Josef-Institut located in the Terra Sirenum uplands. I am intrigued by the notion of the victors of the Great War scrambling to claim the scientists of the defeated empire, a quarter-century-early Operation Paperclip. This is a clever and ripe alternate history to play around in. It is unfortunate only that the machine learning algorithms that power Google's "featured snippet"—AI, in the parlance of people who'd like to sell you AI—amount to a toy masquerading as a research tool.

Google tells you where its snippet is from: an entry in the Steampunk Space Wiki, a community-written and -edited exercise in speculative fiction. It pulled from there, and elevated it to the top result, because it doesn't know fiction from fact. It doesn't "know" anything: AI, which at this date functionally refers to large language models, is not answering your questions. It is producing text that, according to the corpus of human-created text it draws from, has the form of an answer to a question. It's entirely form over function. It does not know or care how useful the answer is, or if it's even actually an answer; only that it looks like one.
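To make the point concrete, here is a deliberately tiny sketch of my own, a bigram model rather than anything Google actually runs: it learns only which word tends to follow which in its training text, so its output has the shape of a sentence while nothing in it can check whether the sentence is true. An LLM does the same trick at a vastly larger scale.

```python
import random
from collections import defaultdict

# A toy bigram "language model": it records only which word tends to
# follow which in its corpus. Like an LLM at enormously larger scale,
# it models the form of text; fact never enters into it.
corpus = (
    "austria hungary launched a rocket from galicia . "
    "austria hungary landed astronauts on mars . "
    "the astronauts built an outpost on mars ."
).split()

follows = defaultdict(list)
for prev, word in zip(corpus, corpus[1:]):
    follows[prev].append(word)

def continue_text(start: str, length: int = 10) -> str:
    """Extend `start` with words that plausibly follow, per the corpus."""
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

print(continue_text("austria"))
# e.g. "austria hungary landed astronauts on mars ." -- fluent in form,
# and at no point did anything ask whether it happened.
```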

This bit from an essay by the data scientist Colin Fraser (who's written many very smart things about AI; this piece is also excellent) rewired how my brain views LLMs:

It feels vividly as though there’s actually someone on the other side of the chat window, conversing with you. But it’s not a conversation. It’s more like a shared Google Doc. The LLM is your collaborator, and the two of you are authoring a document together. As long as the user provides expected input—the User character’s lines in a dialogue between User and Bot—then the LLM will usually provide the ChatGPT character’s lines. When it works as intended, the result is a jointly created document that reads as a transcript of a conversation, and while you’re authoring it, it feels almost exactly like having a conversation.

Again, not a conversation, but a transcript of what a conversation looks like. You can extend this across the (limited) spectrum of currently extant AI creative outputs. Not a search result, but something with the form of a search result. Not an essay, but a very good impression of one. Not art, but something that looks like it.
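For the mechanically inclined, the "shared Google Doc" framing can be sketched in a few lines. This is a hedged illustration, not any vendor's real API: `complete` here is a hypothetical stand-in for an LLM completion call. The point is that there is no conversation object anywhere, only one string that keeps getting extended.

```python
def complete(document: str) -> str:
    """Hypothetical stand-in for an LLM completion call. A real model
    would return whatever text statistically follows the document."""
    return " Yes. In 1908 thirty astronauts reached the Phaethontis quadrangle."

# The "chat" is one growing text document. The interface renders it as
# speech bubbles, but the model only ever sees, and extends, this string.
transcript = (
    "The following is a conversation between User and Bot.\n"
    "User: Did Austria-Hungary land on Mars?\n"
    "Bot:"
)

transcript += complete(transcript)  # the model authors the Bot character's line
transcript += "\nUser:"             # your next message is just more document
print(transcript)
```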

It is nightmarish to me to read reports of how reliant on ChatGPT students have become, even outsourcing to the machines the ideally very personal assignment "briefly introduce yourself and say what you're hoping to get out of this class." It is depressing to me to read defenses of those students, particularly this one, which compares having AI write an essay to using a washing machine: both reduce the time the labor requires. This makes sense only if the purpose of a student writing an essay is "to have written an essay," which it is not. The teacher did not assign it as busywork. The purpose of an essay is to learn and practice communication skills, critical thinking, and the organization of one's own thoughts. These are useful skills to develop, even (especially!) if you do not go into a writing career.

It is easy and horrible to imagine a world where lazy students use ChatGPT to write essays which lazy instructors then use ChatGPT to summarize and grade. I would call it dystopian, but it's not some future thing—it's happening, on a smaller scale. Already, bot-produced internet content is being crawled by bots, an SEO-driven ecosystem that doesn't need to involve a single human. On Twitter, bots reply to bots, thereby aiding both in engagement-farming, for some pittance of a payout and to the detriment of everyone forced to interact with them.

A saving grace is that it's very easy to recognize the writing of a bot. I'm not entirely sure why this is, though I find it fascinating to consider what makes a human's speech so obviously human. AI content is curiously formal. It's often just a summary of the thing it's responding to. It's always incredibly boring. The purpose of a piece of writing—say, this one, by me—is to give you, the human reader, something worth your time and money. To make you think or make you laugh or whatever. It strives, at least, to be interesting. And yet AI does not, and perhaps never can, write anything interesting.

This verges on the realm of philosophy, which is also the domain of fiction—good fiction, anyway. Fiction like the writing of Ted Chiang, who makes smart, engaging sci-fi even for people who don't think they like sci-fi, because like all the best sci-fi his is actually about the human condition. (Seriously, go buy either of his story collections right now.) Chiang has spent a career thinking about what makes us human, which makes him the perfect person to consider what makes AI art inhuman. His latest essay in the New Yorker is a gem, and if I can do it the disservice of reducing it to a thesis statement, Chiang argues that AI definitionally cannot make art, because art requires an artist trying to communicate something, and AI cannot truly communicate. Remember: Not communication, but merely something that looks like it.

The task that generative A.I. has been most successful at is lowering our expectations, both of the things we read and of ourselves when we write anything for others to read. It is a fundamentally dehumanizing technology because it treats us as less than what we are: creators and apprehenders of meaning. It reduces the amount of intention in the world.

Some individuals have defended large language models by saying that most of what human beings say or write isn’t particularly original. That is true, but it’s also irrelevant. When someone says “I’m sorry” to you, it doesn’t matter that other people have said sorry in the past; it doesn’t matter that “I’m sorry” is a string of text that is statistically unremarkable. If someone is being sincere, their apology is valuable and meaningful, even though apologies have previously been uttered. Likewise, when you tell someone that you’re happy to see them, you are saying something meaningful, even if it lacks novelty.

Something similar holds true for art. Whether you are creating a novel or a painting or a film, you are engaged in an act of communication between you and your audience. What you create doesn’t have to be utterly unlike every prior piece of art in human history to be valuable; the fact that you’re the one who is saying it, the fact that it derives from your unique life experience and arrives at a particular moment in the life of whoever is seeing your work, is what makes it new. We are all products of what has come before us, but it’s by living our lives in interaction with others that we bring meaning into the world.

This, I think, will not change, no matter how good AI gets at mimicking human speech, or reproducing or synthesizing a style of painting. The output is meaningless compared to the intention. The Cave Of The Hands, pictured atop this blog, still takes your breath away 10,000 years later not because it is aesthetically pleasing, but because it sparks a connection between you and the unknown but not unknowable people who made it. I do not know what they were trying to say; I like to imagine it was something along the lines of We were here. That is enough. That is human.
