Grave editorial failure can only seldom be located in something as precise as a single word choice. The following block-quoted sentence, from a Thursday article by New York Times technology reporter Kevin Roose, contains one, and it feels like journalistic malpractice that his editor or editors let him type it. I bet you can spot the word:
There is a small body of academic research on A.I. model welfare, and a modest but growing number of experts in fields like philosophy and neuroscience are taking the prospect of A.I. consciousness more seriously, as A.I. systems grow more intelligent.
Here's a hint:
—> INTELLIGENT <—
For good measure, the intelligence claim repeats itself a few paragraphs farther along in Roose's article:
Jared Kaplan, [name of company for which Roose's article is plain stoogery]’s chief science officer, told me in a separate interview that he thought it was “pretty reasonable” to study A.I. welfare, given how intelligent the models are getting.
Passing the word "intelligent" into print in this way—casually crediting both intelligence and growing intelligence to the various technologies now doing business as artificial intelligence—should be embarrassing to both Roose and the Times. At best—at absolute best!—it reflects genuine and appalling ignorance by a guy who draws a salary from the most powerful print outlet in the English-speaking world ostensibly for paying close, critical attention to the technology industry and reporting on it for a readership at least theoretically less knowledgeable than he. At worst, it is knowing and intentional bullshit. In any event, it is an editorial failure. For the sake of readers it ought to bear a footnote, clarifying that like everything else in the article it is according to a guy writing like he'd just won a breath-holding contest and whose eyes are moving independently of one another.
These AIs—and they are not AIs, first of all; calling them that is just a branding strategy that blank-eyed tech reporters have agreed to accommodate for fear of irritating the extremely rich men they eventually want to go to work for—are not "intelligent" by any understanding of intelligence that would complicate anyone's understanding of them as insensate, unthinking, lifeless tools. All but the most brazenly cynical pitchmen among their makers and exponents know better than to claim they are.
Intelligence—to whatever extent it truly is any one thing and not a name for a hazily perceived nebula of things which human inquiry has spent millennia trying to understand and map in relation to one another—certainly is not just some word that describes the far-right end of a Calculating Power gradient. It is not an emergent property of being sufficiently good at math. Even if it were, I certainly hope at least someone at the Times would agree that assigning moral consideration and rights on the basis of it—as perks a reasoning apparatus attains with the equivalent of an acceptable SAT score—would be monstrously evil, for reasons that would be depressing to have to enumerate here.
Somehow even more horrible and revolting is the idea, implicit in the following Roose construction—
After all, more people are beginning to treat A.I. systems as if they are conscious—falling in love with them, using them as therapists and soliciting their advice. The smartest A.I. systems are surpassing humans in some domains. Is there any threshold at which an A.I. would start to deserve, if not human-level rights, at least the same moral consideration we give to animals?
—that, say, ChatGPT could unlock moral consideration through a sufficiently convincing imitation of consciousness, experience, and interiority. As though by programming a computer to say "That makes me sad!" with enough pathos to make you feel weird about calling it Lance Corporal Shitbutt, a bunch of pasty Silicon Valley effective altruists could plant a flag and claim to have created an artificial being capable of sadness. Examine the converse to see how obscene this idea is: Should a living human being ever lose their rights because nobody falls in love with them or solicits their advice? Because they are no smarter than anyone else?
I hope Kevin Roose would not need to be told that what endows another person, or an animal, with whatever rights they have is not the quality of their affect, nor their ability to convince him or anyone else that they are conscious and capable of suffering—for all the very same reasons I hope he would not need to be told to stop trying to have a conversation with the characters on his TV screen, and then some. But then again, this is a man who was once convinced a chatbot wanted him to leave his wife.
I suppose the reasonable question to ask in response to all of this foaming at the mouth is something along the lines of: OK then, wise guy, what is intelligence? How could an authentically sentient artificial mind prompt a person of conscience to at least wonder whether certain types of interaction with it might be cruel and immoral? Unfortunately I read all of Kevin Roose's article for the sake of writing this blog, and now all I want in the world is to drill a hole behind my ear and fire a bottle rocket into it.
But seriously: There may someday be conversations to have and questions to examine about the possible rights of, and ethics of dealing with, theoretical sentient and conscious manmade artificial beings. (Those questions will begin with "Will anyone ever dream up even one single not-appalling argument on behalf of the effort to create those beings?" swiftly followed by "Since the answer to the previous question is 'obviously no,' does that imply a societal duty to halt that effort and reassign its resources to something of actual value?") But the effort to spark those conversations about today's large language and generative text models is marketing. That is why it is being driven and publicized by the very companies building and selling those products.
It is so clearly and obviously nothing but marketing that the awareness that I am typing this sentence right now instead of doing anything else makes me want to drop a grand piano on my own head. I would not feel any less insane if I were explaining to an extravagantly paid professional journalist that SpongeBob SquarePants is not a real person, or that the guy in the Female Body Inspector T-shirt on the boardwalk is not actually an official female body inspector, or that Mommy goes right on existing even when she leaves the room.
Parents, never let your children run with scissors. Pet owners, never let your dog roam off-leash near a busy eight-lane superhighway. Editors, never let Kevin Roose use the word "intelligent" unsupervised, ever again. If not for his sake then for mine.