
How Not To Interrogate The Ethics Of Tesla’s Busted Autopilot Technology

Photo: Elon Musk makes a weird face next to a Tesla car (Christian Marquardt - Pool/Getty Images)

On Thursday The New York Times Magazine published a big reported feature on the Tesla car company's skittish, unreliable, occasionally deadly autonomous driving technology, and what that technology's successes and failures might reveal about Elon Musk. Which, right up front, I must say strikes me as a very bizarre framing: On any list of subjects possibly illuminated by the presence on public roads of a fleet of imperfectly autonomized cars with a penchant for mowing down pedestrians and plowing into stationary objects, surely "the personality and ethics of Elon Musk" is among the least important, as well as the absolute least interesting.

For all that, though, it's mostly a fun read. The reporter, Christopher Cox, goes on a pair of surreal and blackly comic ride-alongs with some sweaty Tesla enthusiasts in California who find their proselytizing about the revolutionary lifesaving potential of autonomous self-driving cars occasionally interrupted by the urgent need to prevent their own autonomous self-driving cars from spontaneously killing someone.

After a minute, the car warned Key to keep his hands on the wheel and eyes on the road. “Tesla now is kind of a nanny about that,” he complained. If Autopilot was once dangerously permissive of inattentive drivers — allowing them to nod off behind the wheel, even — that flaw, like the stationary-object bug, had been fixed. “Between the steering wheel and the eye tracking, that’s just a solved problem,” Key said.

[...]

Eventually, Key told F.S.D. to take us back to the cafe. As we started our left turn, though, the steering wheel spasmed and the brake pedal juddered. Key muttered a nervous, “OK. … ”

After another moment, the car pulled halfway across the road and stopped. A line of cars was bearing down on our broadside. Key hesitated a second but then quickly took over and completed the turn. “It probably could have then accelerated, but I wasn’t willing to cut it that close,” he said. If he was wrong, of course, there was a good chance that he would have had his second A.I.-caused accident on the same one-mile stretch of road.

There's also some good reporting in there on Tesla's and Musk's habitual shiftiness in communicating both what the company's cars can do—Musk is, by light years, Earth's most reckless overpromiser—and what those cars have done. On the latter, Tesla compares its self-driving tech's crash statistics to those of human-operated vehicles in ways that at least appear designed to blur context, in support of the false claim that its AI drives better and more safely than humans. That's bad.

The Times blog raises this subject in the context of utilitarianism and risk-reward calculus. You can understand the impulse: Just about the only thing Elon Musk does with any reliability is retreat to simultaneously half-baked and messianic longtermism when confronted with his own malevolence toward others or the fact that his cars kill people. The fact that, well, his cars kill people does give some regrettable weight to that crap. And so, like, here comes Peter Singer, every exhausting online guy's favorite utilitarian philosopher, to analyze the ethics of Musk's willingness to flood the roads with cars that sometimes spontaneously decide the time has come to flatten a small child:

Singer told me that even if Autopilot and human drivers were equally deadly, we should prefer the A.I., provided that the next software update, based on data from crash reports and near misses, would make the system even safer. “That’s a little bit like surgeons doing experimental surgery,” he said. “Probably the first few times they do the surgery, they’re going to lose patients, but the argument for that is they will save more patients in the long run.” It was important, however, Singer added, that the surgeons get the informed consent of the patients.

And here, at the top of the very next paragraph, is the precise moment where my hair burst into flames:

Does Tesla have the informed consent of its drivers?

That's a fine question, in the abstract: Tesla overpromises its cars' capabilities and fudges its safety record; plenty of human Tesla drivers—or, like, operators?—may not know what they're buying, or what they're not buying. It is also completely the wrong question.

The salient informed-consent question to ask about unpredictable self-driving cars careening around public roads is not whether Tesla drivers have enough information to consent to risks to their own safety; the "patients," in Singer's analogy—whether or not he or Cox is aware of it—are not the Tesla drivers, but all the other people out there using public roads and sidewalks and crosswalks, any of whom might get killed or maimed at any time by an unproven technology being tested on them without their knowledge or consent. No plausible experimental surgical technique might randomly slaughter an innocent bystander minding their own business within the same building—but a self-driving car transporting even the very most informed and enthusiastically consenting of Tesla superfans might spontaneously kill just about anybody else it comes across. In fact, that has happened many times. The Times article itself later recounts one such case, when an autonomous-driving Tesla sped through a red light at a Los Angeles intersection and smashed into a human-driven Honda, killing the Honda's occupants—who had no say over whether the Tesla's driver would engage the autopilot system that killed them.

That is to say that in Singer's analogy, the Tesla owners engaging their cars' autonomous driving systems are the surgeons. And not even normal surgeons, but basically the Human Centipede guy, operating on unwitting strangers chosen at random. Who gives a fuck whether that guy gives fully informed consent to the risks he's inflicting on everybody else, in the face of the reality that everybody else doesn't even have a choice in the matter?

This is like reporting on the dangers of assault weapons, and focusing on the risk to a cheerfully ignorant AR-15 owner that his rifle might blow up in his hands while he's using it to spray bullets at schoolchildren. It's like wringing your hands over the right of antivaxxers to decide what goes in their body, and ignoring that this amounts to granting them unilateral power over what goes in everybody else's. You keep thinking the blog might wend its way around to considering the consent of the public—members of which might expect at least a consult on the question of whether they'd like to share their morning commute with psychotic two-ton killbot technology systematically exempted from normal safety inspections and being road-tested in live traffic at highway speeds by amateur fanboy volunteers—but somehow it never gets there.

In this way, intentionally or not, Cox blunders into adopting Musk's and Tesla's either libertarian or sociopathic (assuming you grant, generously, that those are not synonyms) view: that open roads in use by the unwitting public are a legitimate place for testing an inherently dangerous and unproven experimental technology, and building—through trial and sometimes fatal error—the dataset that at some hypothetical future point might make that technology capable of fulfilling its makers' marketing boasts. Further: that unfalsifiable claims about what the ideal future version of self-driving artificial intelligence could do make a self-evident and unalloyed good out of handing over the public infrastructure for what amounts to crash-test trials—like making Big Macs a mandatory part of all grade-school lunches because the CEO of McDonald's says he dreams of the Big Mac one day preventing cancer.

Lurking behind all of this, unexamined in the Times blog, is the question of who, or what, bears ultimate responsibility for keeping the public reasonably protected against the dangers a class of unconstrained shit-for-brains hypercapitalists and their sad personality cults would inflict on all the rest of us. Maybe that fight's already lost: Autonomously incompetent Tesla cars, after all, already are out there on the roads, flipping out spontaneously, creaming pedestrians, broadsiding hapless human drivers in intersections, bursting into flames for no reason, and there appears to be no civic or political will to just... take them off the road. Nor even any widely shared sense that that's a thing any institutional authority has the wherewithal to accomplish.

But still. Imagine the ideal world implied by the Times blog's handwringing over the informed consent of Tesla drivers whose cars are killing other people: They'll know the dangers they themselves face when they engage the Full Self-Driving technology in their new vehicle. Great. And I suppose all the rest of us just have to accept the possibility of getting smeared across a mile of pavement by a marauding robo-car as the cost of walking from here to there outdoors. Does that seem to you like the way things should be? At least we'll be informed!

Dressing up that bleak capitulation with chin-stroking about the painful or possibly necessary trade-offs of coherent ethical philosophies, however well-intentioned the examination might be, is giving away the game. It's simple enough to say that these things are broken-down death machines that have no business on the road, and that in an even marginally functional society the choice about whether to put them there would not reside with a shitposting imbecile like Elon Musk. For the press to dance around any part of that bald truth seems ... well, let's settle for ethically shaky.
