Third Tesla Crash in Two Months Raises Safety Questions About Self-Driving Vehicles

Semi-autonomous vehicles are promoted as safer than human-driven vehicles because they operate through a sophisticated system of computers, cameras, sensors, and radar, eliminating the unpredictable, error-prone, and distracted driving of human beings. Automakers market self-driving technology as a safety feature that improves roadway safety by reducing the number of crashes and deaths caused by human error. But three recent crashes involving self-driving vehicles have automakers, regulators, and consumers re-evaluating the safety of the technology, and asking whether semi-autonomous vehicles are ready for the open road.

The three crashes, including one fatality, all involve Tesla electric vehicles operating in autopilot mode. Autopilot is an “assist feature that requires you to keep your hands on the steering wheel at all times,” according to a Tesla statement. Before engaging the system, Tesla vehicles caution the driver that the technology is in beta and “requires explicit acknowledgment that the system is new technology” before the vehicle will operate in autopilot mode. Tesla CEO Elon Musk tweeted of the autopilot assist system, “[P]oint of calling it ‘beta’ was to emphasize to those who chose to use it that it wasn’t perfect.”

The first crash involved a 2015 Tesla Model S sedan operating in autopilot mode in Williston, Florida on May 7, 2016, when neither the vehicle nor its driver detected a tractor-trailer turning in front of it against the bright sky. The driver, Tesla enthusiast Joshua Brown, was killed when his vehicle drove under the 18-wheeler; neither Brown nor the autopilot system applied the brakes. The driver of the tractor-trailer contends Brown was distracted, saying he heard a Harry Potter movie still playing in the Tesla after the crash.

Since Brown’s fatal crash in May, two more crashes involving Tesla vehicles allegedly operating in autopilot mode have occurred in the past two weeks.

A Michigan man and his son-in-law escaped serious injury on the Pennsylvania Turnpike on July 1, 2016, after the 2016 Tesla Model X SUV they were riding in struck a guardrail and hit a concrete embankment before coming to rest on its roof. The driver claims the Tesla autopilot system was in use at the time of the crash, but Pennsylvania State Police cited him for careless driving.

The third crash occurred July 10, 2016 in Whitehall, Montana. A 2016 Tesla Model X SUV operating in autopilot mode was traveling approximately 60 mph in a 55-mph zone when it suddenly jerked to the right and crashed into a guardrail. The two occupants sustained only minor injuries despite significant body damage to the right side of the Tesla. The driver blames the autopilot system for failing to detect an object in the road.

The National Highway Traffic Safety Administration (NHTSA) is currently investigating the first two crashes to determine whether the autopilot system is safe and whether any defect in it contributed to the crashes. In an unusual move, the National Transportation Safety Board (NTSB) has opened its own investigation into the fatal May accident. Tesla is also investigating whether the autopilot feature was, in fact, engaged in all of the crashes as claimed, and whether the technology is safe. To date, neither NHTSA nor Tesla has found any evidence that the autopilot features are defective. Musk told Bloomberg News, “We’re going to be quite clear with customers that the responsibility remains with the driver. [W]e’re not asserting that the car is capable of driving in the absence of driver oversight.”

The three crashes come at a time when NHTSA is set to release new rules and regulations for test-driving semi-autonomous vehicles on public roadways. NHTSA Administrator Mark Rosekind believes semi-autonomous vehicles need to be “at least twice as safe as human drivers” to significantly reduce the number of accidents and roadway deaths due to human error. Automakers counter that the technology can eliminate the human error that contributes to 94% of traffic deaths, and they are racing to get their versions of self-driving vehicles to market. Currently, more than 70,000 vehicles equipped with autopilot features are on roads worldwide.

This quick succession of crashes involving the autopilot assist system raises questions about the safety of self-driving technology, and whether we are ready for its use in the real world. Do you embrace this new technology? Do you believe self-driving vehicles will be more or less safe than vehicles operated by human beings? Do you expect self-driving vehicles to operate accident-free? Do you think people have unrealistic expectations about the safety features and operating capabilities of semi-autonomous vehicles? Do you believe operators of these vehicles have more or less responsibility when in autopilot mode? Who should be responsible if a crash occurs while the vehicle is operating in autopilot mode: the driver or the automaker?

There are also questions surrounding the ethics of operating semi-autonomous vehicles, and whether a computer, rather than a human being, can make ethical distinctions in “split-second, life or death driving decisions on the highway,” according to a related story in the New York Times.

We are still in the early days of self-driving vehicles, and many questions remain about semi-autonomous vehicles, their safety, and their place on American roadways. Federal and state rules and regulations are still being developed to guide their safe operation on public roads where most vehicles are driven by error-prone, distracted humans. But we are not far from a time when self-driving vehicles will become commonplace on American roadways, and maybe even in your own driveway.