Give OpenAI CEO Sam Altman this much credit: I did not think he had the capacity to surprise. Among the uppermost tier of Silicon Valley types, the ones that get referred to as "legends" in the press, Altman is remarkable chiefly for his vacuousness. He is a tech leader without any tech skills or original ideas, someone who internalized the correct lessons first at Stanford and then Y Combinator, which are: what matters most is who you know, the industry is primarily about marketing, and there is a percentage in being willing to break the rules.
I was thinking about that last bit a fair amount over the past week as Altman has posted his way into a stupid and entirely avoidable legal battle with Scarlett Johansson. One of the foundational principles of YC is "naughtiness," which is a cutesy way to describe repeatedly breaking the law; in a 2010 blog post outlining the virtue, YC founder Paul Graham singles out Altman as the alumnus who most embodies the principle. That assessment tracks across Altman's career as a huckster and operator: from dropping out of Stanford as a marketing ploy, to getting fired from YC for being too selfish even for them, to nearly getting fired twice from his deeply stupid first startup Loopt, to actually getting fired by the OpenAI board, only to weaponize his network of powerful rich guys to crush his own board and remake the company in and around his own image at last.
Failure is lionized when you fail upwards; Altman has done this enough to learn that there's nothing he can't get away with. And so when his company's engineers tricked out ChatGPT's new fake voice thingy to give it the illusion of personhood and make Kevin Roose, whose job it is to write about stuff like this, turn into this guy, Altman probably thought it was a good idea to hint that this new technology was just like the 2013 movie Her, in which Joaquin Phoenix turns into that guy over an AI voice lady. The new fake voice thingy offers five options, one of which is named Sky and sounds very much like Johansson. While "Sky's" "voice" is just distinguishable enough from Johansson's that OpenAI could make some claim of plausible deniability, they gave the game away a number of times. They did this first by repeatedly asking Johansson to do the voice herself.
Per a statement the actress gave to NPR, Altman asked her in September, and she declined. At that point, tech reporters were already noting the similarities between the two voices. Two days before the launch earlier this month, Altman contacted Johansson again, and before she responded, OpenAI released Sky and convinced Roose that it would pay him Thursday for a hamburger today. OpenAI has since paused the use of Sky, and claimed they had a different actress lined up before ever reaching out to Johansson.
Let us consider the two positions as they stood then, given the evidence laid out in the paragraphs above: Johansson knows what OpenAI is going for; OpenAI knows they have to avoid the ire of the litigious actress but would also like to do so without having to change anything about Sky. You can see a narrow path to OpenAI getting what they want, by artfully hinting at what they're going for without actually saying it. All they'd have to do is keep from explicitly invoking the movie. Altman decided to go in a different direction: on the day of the launch, he tweeted the single word "her."
Altman's post was fairly naughty, but it was also a stellar example of how to elicit a lawsuit using the fewest characters possible. Johansson referenced the tweet days later when she released a statement threatening legal action and connecting OpenAI's brazen attempted theft of her voice to the larger-order legal battle over large language models. "In a time when we are all grappling with deepfakes and the protection of our own likeness, our own work, our own identities, I believe these are questions that deserve absolute clarity," Johansson wrote. "I look forward to resolution in the form of transparency and the passage of appropriate legislation."
The rollout of this mildly spruced-up ChatGPT product is the first big public move since the briefly successful, eventually futile OpenAI revolt against Altman last November. At that point, the company was still notionally a "capped for-profit" business—a marketing scheme designed to assure the public that, while they surely could, they would not Build Skynet in the name of maximizing profit. The people who wanted to keep those guardrails on saw the increasing greed at the highest levels of the company and sought to boot Altman before his leadership took a dark turn. They lost.
Very quickly, Altman was back on top of the company, having consolidated power. Chief Scientist Ilya Sutskever, who had a hand in the coup, left on May 15, as did fellow executive Jan Leike, who wrote "safety culture and processes have taken a backseat to shiny products." In that quote, he was referring to OpenAI's decision to dissolve its team focused on reining in the potential long-term risks of "AI" less than one year after announcing the initiative. Vox reported that employees who leave OpenAI are subject to ironclad NDAs that forbid them "for the rest of their lives, from criticizing their former employer. Even acknowledging that the NDA exists is a violation of it." OpenAI, embarrassed, has since released former employees from the provision.
There is something remarkable about the arrogance and hubris at work here. Altman runs a company built on stealing and aggregating other people's work and presenting it tied up in a neat bow—they have not really figured out how to do this yet, but they're stealing and aggregating as fast as they can—and which has internalized the value of theft for theft's sake and views illegality as something approximating a market inefficiency. And why would they not? The point of the faux-human turn in ChatGPT's marketing is to obfuscate that the product is, still, just an aggregation machine. And as Kyle Chayka noted in the New Yorker, OpenAI's totalizing aims have at the very least an uneasy relationship with the creation of new stuff on the internet, which is the raw material the company's technology mines and adorns as if it's novel. OpenAI has reportedly been aggressively pursuing deals with media outlets and Hollywood studios, which I read as them essentially running a protection racket on the sum total of human endeavor on the internet.
As with the company's chief, there is something equally dull and sinister about all this. Here, as with its signature technology, OpenAI is more eager than capable about the stuff it swipes. It is, among other things, very tacky to pay such uncanny, sloppy homage to this particular movie, which is, after all, not about how cool and fun it is to fall in love with a robot that has Scarlett Johansson's voice but about someone who tricks himself into falling in love with a robot, only to eventually be forced to grapple with the artifice and self-delusion in his own life. There is a sadness that can't be magicked away with the panacea of a sexy robot. In the movie, anyway.
"Magic" comes up a lot in the AI business, and not just from critics or sycophants. Altman used the word to refer to the quality of the recent demo. As John Herrman wrote for New York, that "could be read as a warning or an admission: ChatGPT is now better than ever at pretending it’s something that it’s not." Between Altman's own marketeering and the way he explicitly aligned his elaborately dressed-up search engine in a costume made in tribute to a sci-fi story he did not understand, he is aligning himself with a storied, ignoble huckster's tradition of pretending that you are crafting a reality out of the movies.
People in Altman's economic and cultural super-class—fabulously wealthy, megalomaniacally ambitious tech executives, imbued with an unearned and flimsy aura of brilliance because of their wealth and megalomania—have expressed a very particular and wildly dishonest reverence for other people's fictional imaginings for decades. They become enamored of some piece of sci-fi technology or fantasy magic-ware and pay homage to it by giving their own products names inspired by or taken from those works (TASER, the names of SpaceX drone ships, the relentless copping of basically everything J.R.R. Tolkien ever wrote, et cetera). The concept of the "metaverse," a word now most associated with Facebook's largely abandoned digital fiefdom, was coined by Neal Stephenson. Elon Musk cannot stop smirking about how the Cybertruck is like something "Blade Runner would have driven."
Doing this is usually corny and unoriginal, and while there's plenty of that here I also detect a certain blissful malice in things like Musk or Zuckerberg adopting essentially dystopian nomenclature, or Peter Thiel naming his company Palantir, or the Thiel acolyte and weird little guy Palmer Luckey doing double-time cover-band shit and naming his own company Anduril. Those two companies exist to grease the wheels of the ugliest and most cynical cogs in the American empire. They take their names from a pair of on-the-nose items in the Tolkien mythos: respectively, a set of seeing stones that grant their wielders power while deceiving them, and a sword whose name translates to "Flame of the West." Taken together, it is a stunning riposte to the most basic notion of subtlety.
These people are not quite stupid enough to miss the overtones in the works they're aping, nor do I think the mimesis ends at the point of founders seeing themselves as the protagonists of stories about broken worlds. The appeal of the worlds of Blade Runner or the Tolkien legendarium is that each represents a brilliant leap of imagination. Yoinking a name from one of them then associates the yoinker with that leap, but also places them within neatly circumscribed bounds. There's a goofy, cartoon quality to a name like Anduril. How evil, really, could you be if you're playful enough to pay homage to Aragorn? If you get the reference, you see the name and think of Viggo Mortensen's fantastic hair, not autonomous drones coldly vaporizing climate refugees.
This is, in essence, the joke Altman is telling by ripping off Spike Jonze. You are not supposed to fall in love with ChatGPT, or really believe anyone could. You are supposed to think OpenAI has a sense of humor, not so much to laugh at the joke as to recognize the shape of something joke-like. The trust that a sufficiently desperate/motivated audience would do the work upon being presented with the barest facsimile of something human is, amusingly, the only bit that suggests Altman has perhaps seen the movie. It's a good thing these bozos are also so full of themselves that they couldn't keep from doing something so hubristic that it actually became funny.