Why the Tesla Autopilot Crash Matters

We talked about this a little on a recent episode of the Beyond Devices Podcast, but I wanted to write down some thoughts as well. The fatal crash involving a Tesla running in Autopilot mode has already sparked lots of news articles, handwringing, an NHTSA investigation, and possibly even an SEC investigation, as well as several defensive tweets and blog posts from Tesla and Elon Musk. But as is so often the case, it feels like there’s not enough nuance on either side. The issues are complex, and I want to address two specific ones: Tesla’s statistical defense, and the danger of a narrative developing around autonomous driving.

Sample size problems

First off, there are the statistics that Elon Musk and Tesla have used to defend Tesla’s Autopilot mode. The one they’ve cited most frequently is that Tesla vehicles had driven 130 million miles with Autopilot engaged before there was a fatal crash, while the US national rate is around 94 million miles per fatality. On paper, that makes Teslas look really good, but it’s a fairly fundamental statistical error to take those numbers at face value.

Most importantly, the sample size for Teslas is much too small. A simple thought experiment will suffice here:

  • The day before the fatal accident, Tesla’s rate was zero per 130 million miles, infinitely superior to the national rate
  • The day after the accident, the rate was one per 130 million miles, somewhat better than the national rate
  • If there had been another accident the day after, the rate would have been one per 65 million miles, worse than the national rate.

There wasn’t another accident the day after, but with a sample this small there easily might have been, or there might not be another for months or years. The point is that the sample is far too small to support any statistically rigorous rate at this stage. Consider that Tesla has racked up 130 million miles in Autopilot mode, while those NHTSA stats are based on over 3 trillion miles traveled by car in the US in 2014.
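To put a rough number on that uncertainty, here’s a minimal sketch in Python. It is not anything Tesla or NHTSA published, just an exact Poisson confidence interval around a rate estimated from a single event; the only inputs taken from the figures above are the 130 million Autopilot miles, the single fatality, and the 94-million-mile national rate, and scipy is assumed to be available.

    # Sketch: how wide is the uncertainty around "1 fatality in 130M miles"?
    from scipy.stats import chi2

    tesla_miles = 130e6                   # Autopilot miles at the time of the crash
    fatalities = 1                        # observed fatal crashes
    national_miles_per_fatality = 94e6    # all-roads US rate cited above

    # Exact (Garwood) 95% confidence interval for a Poisson count k:
    # lower = chi2.ppf(0.025, 2k) / 2, upper = chi2.ppf(0.975, 2k + 2) / 2
    low_count = chi2.ppf(0.025, 2 * fatalities) / 2
    high_count = chi2.ppf(0.975, 2 * fatalities + 2) / 2

    # Convert the count interval into "miles per fatality" bounds
    point_estimate = tesla_miles / fatalities
    optimistic = tesla_miles / low_count    # upper bound on miles per fatality
    pessimistic = tesla_miles / high_count  # lower bound on miles per fatality

    print(f"Point estimate: {point_estimate / 1e6:.0f}M miles per fatality")
    print(f"95% CI: {pessimistic / 1e6:.0f}M to {optimistic / 1e6:.0f}M miles per fatality")
    print(f"National rate ({national_miles_per_fatality / 1e6:.0f}M) inside CI: "
          f"{pessimistic <= national_miles_per_fatality <= optimistic}")

Run as written, the interval spans roughly 23 million to over 5 billion miles per fatality, which comfortably contains the national figure. In other words, one event is consistent with Autopilot being substantially safer or substantially less safe than human driving; it tells us almost nothing either way.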

Driving conditions

The other issue with these statistics is that the NHTSA numbers cover all driving, under all conditions and on all roads in the US in 2014. The Tesla figures, by contrast, cover only the conditions under which Autopilot can be activated, which in many cases means freeways and other larger roads. The problem is that fatal car accidents aren’t evenly distributed across road types and conditions: they disproportionately happen on certain road types, including rural roads, where something like Autopilot is less likely to be used. It’s frustratingly difficult to find good statistics on this breakdown, but I suspect Tesla’s stats benefit from the fact that Autopilot is used in scenarios that are generally lower risk.
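As a toy illustration of why the benchmark matters, here’s a short sketch. The “freeway safety multiplier” is an entirely hypothetical number of my own choosing, not a real statistic (as noted above, the real breakdown is hard to pin down), but it shows how much the comparison shifts once you condition on road type.

    # Toy illustration of the benchmark problem. The multiplier below is
    # HYPOTHETICAL, used only to show how conditioning on road type can
    # flip the comparison.
    national_miles_per_fatality = 94e6   # all roads, all conditions (2014)
    tesla_miles_per_fatality = 130e6     # Autopilot miles per fatal crash so far

    freeway_safety_multiplier = 2.0      # hypothetical: assume freeway-style
                                         # driving is twice as safe per mile
                                         # as the all-roads average
    freeway_miles_per_fatality = national_miles_per_fatality * freeway_safety_multiplier

    print(f"vs. all-roads benchmark:    "
          f"{tesla_miles_per_fatality / national_miles_per_fatality:.2f}x the human rate")
    print(f"vs. freeway-only benchmark: "
          f"{tesla_miles_per_fatality / freeway_miles_per_fatality:.2f}x the human rate")
    # With a 2x multiplier, 130M miles per fatality would trail the comparable
    # human benchmark of 188M miles, even before the sample-size caveats above.

The specific multiplier doesn’t matter; the point is that comparing an Autopilot-only rate to an all-roads average stacks the comparison in Autopilot’s favor.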

It’s the narrative that matters

So far, I’ve dealt solely with the statistics, but I want to turn to what’s actually the bigger issue here, which is the narrative. The power of narratives is something I’ve written about elsewhere, and a theme I often find myself returning to, because it’s frequently underestimated. The problem with the Tesla Autopilot crash is that it challenges the narrative that autonomous vehicles are safer than human driving. That’s not because it proves they’re less safe; if you take Tesla’s numbers at face value, which I see many people doing, they appear to show the opposite.

But the simple fact of such a crash featuring prominently in the news will stick in many people’s minds and affect their perceptions of autonomous driving. And here I think Tesla has done itself a disservice by overselling the feature. The very name Autopilot connotes something very different from, and far beyond, what the feature actually promises to do. Tesla has been at pains to point out since the crash that its own detailed descriptions of the feature tell drivers to keep their hands on the wheel and stay alert and attentive, ready to take over at any moment should the need arise. But the Autopilot branding doesn’t connote that at all.

The secondary problem is that such a feature will inherently lull people into a sense of ease and reduced focus as they drive. What’s the point of the feature unless it frees up the driver in some way, and once they’re freed up, aren’t they almost guaranteed to want to do other things with their time in the car? There have been news reports about people using their phones more while using Autopilot, and there were suggestions that the driver of the car involved in the fatal crash might have been watching a movie on a portable DVD player. There’s a paradox here: on the one hand the driver is freed up for other activities because the car takes over, and on the other they’re supposed to stay focused and not take advantage of that increased freedom.

This is the risk of Tesla’s incremental approach to autonomous driving. In general, I think there are significant advantages to this approach, which helps build driver trust gradually over time. But the downside is that the vehicle isn’t yet capable of fully taking over, and still lulls drivers into a sense that it is. That, in turn, helps feed the negative narrative about self-driving cars in general and Teslas in particular. Tesla and Musk need to tread more carefully, in both their branding of these features and their response to these tragedies, if they want to avoid that narrative taking hold.