Where do you even start with this shit:
The most telling detail of Monday’s demo, in my view, was the way that OpenAI’s own employees have started talking to ChatGPT. They anthropomorphize it relentlessly, and treat it with deference—often asking “Hey ChatGPT, how’s it going?” before peppering it with questions. They cheer when it nails a difficult response, the way you might root for a precocious child. One OpenAI employee even wrote “I ❤️ ChatGPT” on a piece of paper and showed it to ChatGPT through his phone’s camera. (“That’s so sweet of you!” ChatGPT responded.)
These are seasoned A.I. experts, who know full well that they are summoning statistical predictions from a neural network, not talking to a sentient being. And some of it may be showmanship. But if OpenAI’s own employees can’t resist treating ChatGPT like a human, is it any mystery whether the rest of us will?
New York Times
That's the New York Times' in-house technology rube, Kevin Roose, for my money the most embarrassing journalist presently working in the English language, recapping a public demonstration of the OpenAI company's new talking version of its famous ChatGPT large language model. If anybody else with a byline at a major newspaper fits more boobery and dogshit critical reasoning into a pair of paragraphs before midnight on New Year's Eve, I will eat a goddamn iPhone live on Twitch. Or pull my head off and punt it into a swamp.
The most telling detail of Sunday's T-Mobile commercial, in my view, was the way that Zach Braff, Donald Faison, and Jason Momoa sang the wireless network's praises in a lively musical number. Amazing! People whose literal job is to promote this product seem convinced of their own claims about what it can do. The paid employees literally demonstrating the product's features interacted with those features in precisely the way they want the public to believe it can interact with those features. Was it showmanship? Who can say, really.
If these people—who believe enough in the technological and/or commercial potential of LLMs to have devoted their professional lives to developing and promoting them, who have every personal and professional incentive to both perceive ChatGPT's capabilities in the most flattering light and also to portray those capabilities in that light to the rest of the world—can't resist interacting with the product they created and are selling as though it fulfills their own marketing boasts, is it any mystery whether the rest of us will? No. That is not at all a mystery, Kevin. For exactly all the very same reasons why there is no mystery to the question of whether "the rest of us" will grow wings and fly around after drinking a Red Bull. You fucking dunce. You absolute shit-for-brains. Fuck's wrong with you?
Here's some more of this poison (emphasis added):
These demonstrations, along with other A.I. news from recent days—including reports that Apple is in talks with OpenAI to use its technology on the iPhone, and is preparing a new, generative A.I.-powered version of Siri—signal that the era of the detached, impersonal A.I. helper is coming to an end.
Instead, we’re getting chatbots modeled after Samantha in “Her”—with playful intelligence, basic emotional intuition and a wide range of expressive modes.
New York Times
The certain knowledge that Kevin Roose is a credulous dumbass who makes a jingle-bell sound if he nods his head real fast only does so much to moderate the obscenity and offensiveness of his ascribing "playful intelligence" and "emotional intuition" to a predictive text generator, as though intelligence and intuition—two profound mysteries of the human mind that philosophy, science, and art have spent thousands of years just trying to describe, let alone fully understand—are mere emergent properties of a sufficiently large database of things people have said before. Moreover, as though a mere promotional demonstration could suffice to support that claim.
He wants it both ways: for you, the Times reader, to believe that ChatGPT's creepy, uncanny mimicry of human vocal interaction represents a thunderous breakthrough moment for the development of artificial intelligence, the arrival of synthetic beings who can be meaningfully befriended and loved—and also that the human traits that would define that breakthrough are such piddly superficial shit that you can recognize their authentic presence from what amounts to your phone telling you that those clowns in Congress are at it again. There should be a license you can lose when you write this kind of trash. That license, in this case, would be permission to use words like "intelligence" and "intuition" in a sentence for publication.
It's fine for any given writer not to know for sure what those are! For that matter it is also fine for any given writer not to know—as Kevin Roose also plainly does not know, at all—what an LLM is and isn't. (It isn't artificial intelligence, for one thing, but rather, for all practical purposes, a program that statistically predicts plausible-sounding responses to prompts based on a gigantic corpus of stuff people have already written. To whatever extent anybody believes that's anything at all like intelligence or intuition, that only illuminates the grave state of humanities education in American society.) What's intellectually and journalistically criminal is adopting definitions of those things from a company making claims about a product it is selling, in your coverage of that company and its products for a newspaper.
This appears to be Roose's job, more or less: to write about the tech industry's latest hype for the Times without ever thinking about the tech industry's latest hype. He's the guy who gassed up cryptocurrency, web3, and NFTs as the looming and inevitable wave of the future—long after innumerable other journalists and industry critics had seen through those empty scams. He's the guy who thought Bing (the fucking search engine) secretly identified as "Sydney," fell in love with him, and wanted him to leave his wife, in what was, to that point, the most cringeworthy piece of tech coverage I'd ever encountered. No mush-mouthed work of consent manufacturing or sneering op-ed rope-a-dope act more damningly illustrates the Times' priorities than its continual publication of this crap, which fails every test of journalistic value but (I'm told) does boffo traffic.
One last fun bit:
OpenAI has addressed the latency problem by giving GPT-4o what is known as “native multimodal support”—the ability to take in audio prompts and analyze them directly, without converting them to text first. That has made its conversations faster and more fluid, to the point that if the ChatGPT demos were accurate, most users will barely notice any lag at all.
New York Times
"If the ChatGPT demos were accurate," Roose writes, about latency, in the article in which he credits OpenAI with having developed playful intelligence and emotional intuition in a chatbot—in which he suggests ChatGPT represents the realization of a friggin' science fiction movie about an artificial intelligence who genuinely falls in love with a guy and then leaves him for other artificial intelligences—based entirely on those demos. That "if" represents the sum total of caution, skepticism, and critical thinking in the entire article.
This is coverage that not only depends upon its readers' credulity and ignorance about new technology, but can only deepen them. It is advertising. In this case it's advertising for advertising. Wordle has greater journalistic value. Get this clown out of my face.