The Body Count of Self-Driving Cars is Dangerously Low

On March 18, 2018, self-driving cars claimed their first victim. Elaine Herzberg was crossing the street when an autonomous car ran her over, killing her. The car’s human minder was distracted by another marvelous piece of technology (she was watching a video on her phone), but still: without self-driving cars, Elaine would still be alive.

As it turns out, a whistleblower had suggested grounding that fleet of cars just days before, which could have averted the tragedy.

What can self-driving car advocates say to this?

We can say: what took so long?

Technology is risky, and technologies involving moving tons of metal around places where humans walk are riskier still. But we shouldn’t privilege the status quo: in the US, there are 30,000 motor vehicle fatalities per year, overwhelmingly caused by human error. Evolution didn’t optimize humans for making split-second decisions at seventy miles per hour, so with human drivers we’re likely close to a local minimum in traffic fatalities; we can lower them further, but only at the cost of tradeoffs (segregating car traffic from foot traffic, reducing speed limits).

You can think of it as an efficient frontier where we use regulations to trade off between convenience and casualties. For now, we’ve decided that 30k is about the lowest we’re willing to go.

Of course, efficient frontiers aren’t static. When technology changes, they shift. Better driving technology could either get us faster commutes for the same death toll, fewer deaths for the same commute time, or some happy compromise between the two.

I’m rooting for compromise.

But to get there, we have to think seriously about risks. This forces us to make some hard decisions, but we’re not the first. When Winston Churchill was First Lord of the Admiralty, he was a strong advocate for oil over coal. Referring to the Germans, he once wrote: “They have killed 15 men in experiments with oil engines and we have not killed one! And a damned fool of an English politician told me the other day that he thinks this is creditable to us!”

More recently, the US space program had its share of tragic deaths early on. And what’s happened since is that, through tens of billions of dollars and decades of research, NASA has discovered a surefire method for safe space travel: staying put. Perhaps you feel that this is all for the best, that space travel isn’t a worthy use of our finite resources. In that case, I hope that when we spot the asteroid, I or my descendants have time to tell you or yours that we told you so.

The Calculus

To decide on an appropriate tradeoff between deaths now and deaths later, we need some measure of the exchange rate between present and future lives. This is hard to calculate: on the one hand, the world’s population is expanding; there are more future people than present people, so perhaps any one of them matters less. On the other hand, rich countries spend a lot on healthcare, and the richer they are, the more of their incremental income they spend this way. So it’s entirely possible that the subjective value of a human life is a superlinear function of economic output.

(If you don’t think the economy will grow in the future, why are you bothering to read an essay about the future? You should be having more fun—since the only coherent options are prosperity or apocalypse, a lack of long-term optimism is just unthinking nihilism. Go see if there’s anything good on TV.)

We can probably settle on the theory that future lives are worth at least as much as present lives, perhaps much more. But to be conservative, we can stick with a 1:1 tradeoff: anything we can do that has an expected cost of roughly one human life due to a poorly-configured self-driving car, but that is likely to accelerate the arrival of self-driving cars by 18 minutes, is a win.
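Where does the 18-minute figure come from? A back-of-the-envelope sketch, using only the 30,000-deaths-per-year figure cited above and the 1:1 exchange rate:

```python
# Back-of-the-envelope arithmetic behind the "18 minutes" figure,
# assuming ~30,000 US road deaths per year and a 1:1 exchange rate
# between present and future lives.

US_ROAD_DEATHS_PER_YEAR = 30_000
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

minutes_per_death = MINUTES_PER_YEAR / US_ROAD_DEATHS_PER_YEAR
print(f"One road death every {minutes_per_death:.1f} minutes")
# -> One road death every 17.5 minutes: accelerating the arrival of
#    self-driving cars by ~18 minutes offsets one expected death.
```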

This sounds like one of those creepy bloodless calculations economists are fond of making, but don’t worry: it’s plenty bloody! Tens of thousands of lives per year are at stake!

The question of whether to accept deaths from self-driving cars, or to hold researchers to a zero-casualties standard, is only easy to answer if you massively privilege the status quo. But by the standards of the future, the present is rife with needless death. Refuse to make hard decisions now, and your descendants will despise you forever.

Stalin probably never said “A single death is a tragedy. A million is a statistic.” Think of it: by delaying self-driving cars, you’re espousing a view so bad that one of the most evil humans in history wasn’t actually willing to say it. (“Maybe he was thinking it, too,” you say. But Stalin probably said what was on his mind: his management strategy was to get all of Russia’s senior leadership blind drunk every single night to see who would admit something unfortunate, and presumably Uncle Joe let some things slip then, too. We probably have a good idea of the most evil things he ever explicitly thought.)

We should view self-driving cars as akin to the Manhattan Project, but with more certainty of long-term benefits. The Manhattan Project claimed some lives directly (Louis Slotin, for example), and of course dropping the bomb killed tens of thousands of civilians; since the project diverted 0.25% of GDP during wartime, the loss of men and materiel probably also led indirectly to Allied servicemen’s deaths. But in the end, it was a massive saver of lives. Not because of its impact on the war itself; the Soviet invasion of Manchuria probably played a larger role. Rather, by pushing nuclear technology forward, the Manhattan Project eliminated the possibility of direct conventional war between superpowers, and also gave us a low-emissions source of energy that crowded out some coal, which still kills a mere 800,000 people per year.

(This calculation is a little tricky as well, since one effect of nuclear weapons is that they raised the odds of a world-ending war from zero to something higher. War deaths follow a power-law distribution, and nuclear weapons fatten its tail, lowering the exponent by some hard-to-estimate increment. A nuclear exchange seems unlikely now. Do our bombs even work any more? Is there anyone still alive who knows how they operate?)
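To make the tail-exponent point concrete, here is an illustrative sketch; the cutoff, threshold, and exponents are hypothetical numbers chosen for illustration, not estimates from the war-deaths literature:

```python
# Illustrative only: how the exponent of a power-law tail changes the
# odds of a catastrophic war. For a Pareto tail,
#   P(deaths > x) = (x_min / x) ** alpha   for x >= x_min,
# so a LOWER alpha means a FATTER tail. All numbers below (x_min, the
# alphas, the catastrophe threshold) are hypothetical.

def tail_probability(x: float, x_min: float, alpha: float) -> float:
    """P(deaths > x) under a Pareto tail starting at x_min."""
    return (x_min / x) ** alpha

X_MIN = 1_000        # smallest conflict counted (hypothetical)
CATASTROPHE = 1e9    # a billion deaths

for alpha in (1.5, 1.2, 0.9):  # hypothetical tail exponents
    p = tail_probability(CATASTROPHE, X_MIN, alpha)
    print(f"alpha = {alpha}: P(> 1B deaths) ~ {p:.1e}")

# Dropping alpha from 1.5 to 0.9 raises the billion-death probability
# from ~1e-9 to ~4e-6: several orders of magnitude.
```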

Concluding Thoughts

It’s a bit extreme to say that self-driving car research is acceptable even if it kills as many people as conventional cars do. On the other hand, it sounds just as extreme to me to say that we should accept the preventable deaths of millions of people because we’ve always killed lots of innocent people that way.

I can say this, because I don’t work in the self-driving car industry. (Having said it, I never will.)

But we can compromise. Let’s set an acceptable death threshold for self-driving car research: as a baseline, R&D can kill 1% of the people that not having self-driving cars kills. As self-driving cars rise from a minuscule fraction of total miles driven, to a single-digit share, to the vast majority (expect that last transition to happen startlingly fast: double-digit months, not double-digit years), we can raise their casualty budget even as we lower the acceptable death toll from the status quo.
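As a sketch of how that sliding budget might work: the 1% starting point comes from the proposal above, but the linear ramp and the end-state target are my own illustrative assumptions, not a worked-out policy.

```python
# Sketch of the sliding casualty budget proposed above. The 1% starting
# budget comes from the text; the linear ramp and the end-state target
# are illustrative assumptions.

BASELINE_DEATHS = 30_000               # status-quo US road deaths/year
START_BUDGET = 0.01 * BASELINE_DEATHS  # 300 deaths/year for early R&D
END_TARGET = 3_000                     # hypothetical tolerated toll at full adoption

def av_casualty_budget(av_mile_share: float) -> float:
    """Acceptable annual self-driving deaths, interpolated linearly as
    autonomous miles rise from ~0% to 100% of all miles driven."""
    if not 0.0 <= av_mile_share <= 1.0:
        raise ValueError("share must be between 0 and 1")
    return START_BUDGET + av_mile_share * (END_TARGET - START_BUDGET)

for share in (0.001, 0.05, 0.5, 1.0):
    print(f"{share:6.1%} of miles -> {av_casualty_budget(share):,.0f} deaths/year")
```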

Eventually, if the world doesn’t end, I expect commutes to happen in autonomous vehicles; manually-steered cars will be fun for hobbyists and essential for revolutionaries, but will otherwise be a nonissue. The only question to me is: how many people have to die before we get there?
