Tesla Motors

Joshua Brown was killed on May 7, 2016, when his Tesla Model S crashed into a tractor-trailer. It is the first official fatality attributed specifically to the use of a vehicle with robust autonomous features. Tesla is currently being investigated by the National Highway Traffic Safety Administration (NHTSA) and has been forced to explain to the agency and the public why it released autonomous driving technology that, as some might argue, was not ready for prime time and was not properly explained to its users.

Starting with the introduction of the dual-motor all-wheel-drive version of the Model S, the company equipped its vehicles with cameras, radar, and ultrasonic sensors located around the car’s perimeter. The Autopilot functions became active when Tesla wirelessly updated those vehicles to Version 7.0 of its software in October 2015.

The name “Autopilot” refers to an airplane’s pilot-assistance features. In practical terms, Tesla’s Autopilot functionality is marketed as a set of “active safety” or “driver assistance” features. They are nearly identical to advanced safety systems – such as automatic parking, adaptive cruise control, and lane change assistance – found in many other vehicles in today’s market.


Despite the similarities – and perhaps because of the faith and devotion of Tesla owners to the brand and to the remarkable leadership of Elon Musk, the company’s chief executive – some Model S owners quickly tested the limits of Autopilot, posting their real-world experiments on YouTube and social media. In that sense, the technology itself is less the issue than the behavior of Tesla drivers who pushed the system beyond its intended purpose.

At the same time, Tesla arguably deserves some of the blame for any accidents associated with Autopilot, because it explicitly or implicitly downplayed the risks of pushing the technology to its limits.

Those limits include the inability of the sensors and cameras to detect vehicles and other roadway objects in every possible situation. In fact, Tesla’s Autopilot does not use the most robust technology available for assisted and automated driving: lidar. The radar and camera sensors it does use, which have been employed in self-parking and blind-spot monitoring systems for years, are known to have occasional hiccups. At the same time, Tesla describes Autopilot as being in “Beta,” and therefore not as mature in its development as it will eventually be.

Before his accident, Joshua Brown was known (even by Musk) for posting videos online that demonstrated his Model S in Autopilot mode. In fact, he noted in a video posted a month before his fatal crash that the car had no problems sensing moving vehicles, but that it had a hard time reacting to stopped vehicles. In a statement on Tesla’s blog, the company admitted: “Neither Autopilot nor the driver noticed the white side of the tractor trailer against a brightly lit sky, so the brake was not applied.” Other Model S drivers reported that their cars would unexpectedly try to exit the highway, forcing them to intervene and manually steer the car back onto its intended path.


When auto critics were given the opportunity to test Autopilot last October, several seemed surprised by how well the auto-steering function worked. However, almost all the reviewers experienced problems, such as Autopilot becoming confused when two lanes merged into one. These experienced drivers were well aware of the technology’s limitations and fully prepared to respond to any shortcomings – rather than fully trusting the system as Brown did.


In Popular Mechanics, automotive critic Ezra Dyer explains how easy it is to overestimate the capabilities of radar technology. Generally, Dyer argues, radar-based safety technology is very good, to the extent that many drivers might be lulled into a false sense of security and let the vehicle take over the driving tasks. However, this is not the intent of active safety technology – at least not as it currently exists.

As Dyer explains, drivers who get too complacent behind the wheel of an autonomous car will, sooner or later, encounter one of those failures. When the technology fails, he says, it is too late for the driver to intervene. In other words, drivers are supposed to be the primary source of control over the car; active safety features, no matter how advanced, are intended to play an assistive role. (In a recent editorial for Mashable, Nick Jaynes notes that a fully autonomous car, one of Tesla’s areas of development, should have backup systems in place that the Model S currently lacks.)

Musk often speaks in hyperbolic terms about his company’s products. At the same time, the company has repeatedly stated that human drivers are expected to remain alert and in charge of driving at all times. This contradiction could have played a role in Brown’s death.

The fact that Autopilot is technically in Beta, a testing stage, means that Tesla has been collecting data from customers’ cars and using it to improve the software. This is a common strategy for web products and consumer electronics, but a controversial one when applied to something as potentially dangerous as an automobile. Essentially, Autopilot’s Beta status suggests that Tesla knows the technology has kinks to be worked out, yet still chose to make it available to actual customers on public roads. According to multiple reports, Tesla’s decision to Beta-test safety features on its customers is at the heart of NHTSA’s ongoing investigation.