Thursday, July 14, 2016

On the Business Ethics and Technology of Self-Driving Cars at Tesla

During the summer of 2016, Tesla came under public fire on charges concerning both its technology and its ethics. Both issues can be put into a wider perspective in the company's favor. Put another way, both the technological and the ethical analyses can be enhanced by situating the specific problems within a larger frame, even in terms of time.

Regarding the technological problem at issue, the company's cars running on "auto-pilot" could not yet take into account another car's lateral movement. For example, the technology could not detect a car traveling in an adjacent lane shifting over into the Tesla's lane. A man died in just such an occurrence. He was not paying attention at the time, and yet Tesla's incomplete technology also received much of the blame.

Given the incomplete condition of the technology, and simply for safety's sake, the company communicated to buyers that even though the cars could self-drive, drivers still needed to pay active attention. So a driver who was filmed sleeping behind the wheel of a Tesla during a slow-paced commute was culpable even though the car did not crash.

The argument that the company was culpable held that it had misled customers by stating that the cars were self-driving; in other words, drivers could reasonably assume that they need not pay attention. This argument fails, for pilots know they must still pay attention when their airplanes are on auto-pilot. That a car can self-drive, therefore, does not imply that drivers can take naps or fixate on their smartphones. Such people are not smart at all.

The wider perspective shows that early self-driving technology is apt to have limitations and even faults. Drivers who dismissed these were missing the point of how technology progresses: development takes time rather than being perfected at launch. At the start, drivers keeping this in mind could not reasonably conclude that the technology could support their sleeping or being distracted at the wheel.

As the self-driving technology develops, sadly in part by trial and error, drivers may one day be able to sleep or play on their smartphones with the reasonable expectation that paying active attention is not necessary. Also, as the proportion of self-driving cars on the roads increases, the case for not paying attention while a car is self-driving improves still more.

In 2016 and likely for years to come, the roads could be even more dangerous than they were before the advent of self-driving cars, and than they will be once the technology matures and the proportion of self-driving cars grows. The temporal vulnerability behind the problems of the summer of 2016 is like a donut hole: a sufficient number of drivers of self-driving cars did not adequately understand that the technology was not yet complete enough to justify what they were doing at the expense of paying active attention. Perhaps that is human nature, but Tesla was not at fault on either technological or ethical grounds.