How Safe Are Driverless Cars?

Exploring the ethics of autonomous vehicles.

Science-fiction author Isaac Asimov, imagining a future of artificial intelligence, pictured the 21st century as dominated by humanoid robots. He’s perhaps best known for his three laws of robotics, which were intended to guide the moral decision making of these “mechanical men.”

Now that we are living in the future Asimov tried to imagine, humanoid, all-purpose robots remain more fantasy than reality. Instead, we've made our everyday devices "smart," our phones being the most obvious example. With self-driving cars on the horizon, however, the question of how to program these autonomous vehicles to resolve ethical issues on the road has come to the forefront.

Let’s say your self-driving car of the near future encounters a situation in which it must hit either a girl on a bicycle or a homeless man on the shoulder. Which life should it choose to destroy for the sake of saving the other? This kind of moral dilemma is much discussed among philosophers and psychologists interested in artificial intelligence. However, as Harvard psychologist Julian De Freitas and colleagues argue in a recent opinion piece published in the journal Perspectives on Psychological Science, this debate misses the truly important issues surrounding the interaction between humans and self-driving cars.

People’s thinking on moral dilemmas is often explored by using some version of the so-called “trolley problem.” Imagine an out-of-control trolley racing down its track, and ahead are five workmen who will be killed when they are hit by it. Next to you is a switch, and if you pull it, the trolley will be shunted onto a side track, where it will instead hit and kill one workman. Do you pull the switch or not?

This situation presents a dilemma, in that two moral frameworks are brought into conflict. On the one hand, the utilitarian approach calls for the greatest good for the greatest number. From the utilitarian perspective, you should let one workman die to save five.

On the other hand, the deontological approach defines morality in terms of what one must or must not do in a given situation. The Ten Commandments may be the best-known example of a deontological moral code, though philosophers have since developed far more elaborate accounts of how we should behave.

Since deontological morality does not allow us to kill another person in most cases, it directs us toward passively accepting that five workmen will die as a result of our inaction rather than actively deciding to kill one person for the sake of the others. Interestingly, when confronted with this switch version of the trolley problem, most people report that they would pull the lever, favoring the utilitarian choice; the deontological refusal tends to dominate only in variants where the harm is more direct and personal.
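To make the contrast concrete, here is a toy sketch in Python of how the two frameworks reach different verdicts on the switch dilemma. It is purely illustrative; the scenario encoding and the "rules" are simplifications invented for this article, not anything a real vehicle would use.

```python
# Illustrative only: a toy comparison of the two moral frameworks applied to
# the switch version of the trolley problem.

def utilitarian_choice(deaths_if_no_action, deaths_if_action):
    """Pick whichever option minimizes the total number of deaths."""
    return "pull switch" if deaths_if_action < deaths_if_no_action else "do nothing"

def deontological_choice(action_kills_someone):
    """Refuse any action that directly kills a person, regardless of outcome."""
    return "do nothing" if action_kills_someone else "pull switch"

# The classic switch dilemma: five die if we do nothing, one dies if we act.
print(utilitarian_choice(deaths_if_no_action=5, deaths_if_action=1))  # -> pull switch
print(deontological_choice(action_kills_someone=True))                # -> do nothing
```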

But, De Freitas and colleagues ask, how often are we actually faced with a “trolley problem” type of moral dilemma, especially when we’re behind the wheel of a car? Let’s go back to our earlier example. Instead of your car driving itself, you yourself are driving when you suddenly notice a girl on a bicycle and a homeless man on the shoulder. What will you do?

The answer should be obvious—you’ll try to avoid hitting either of them. Rarely are we faced with a forced choice between two equally bad alternatives, whether in our everyday life or on the road. And since such situations are so rare, De Freitas and colleagues argue, there’s really no need to expend so much effort on trying to figure out how we can program our driverless cars to make the right moral decisions in these hypothetical cases.

Furthermore, the researchers argue, “trolley” dilemmas such as the ones presented here are virtually impossible to recognize in real time. That’s because the split-second reactions needed when driving don’t allow for thoughtful moral consideration. Rather, we react automatically, according to learned behaviors so ingrained in us that our decisions never reach the level of consciousness. Our foot is already on the brake before we’re even consciously aware of the girl on the bike and the man on the shoulder.

In theory, at least, driverless cars have more complete knowledge of their surroundings, faster reaction times, and no lapses in attention. To the extent that these can be realized in practice, driverless cars will be safer than human drivers, so using them will result in fewer traffic fatalities. Whether you take a utilitarian or deontological approach to morality, you no doubt agree that we should save human lives whenever possible.
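A rough back-of-the-envelope calculation shows why reaction time alone matters so much. The figures below are assumptions chosen for illustration: roughly 1.5 seconds is a commonly cited reaction time for an attentive human driver, while 0.2 seconds stands in for a hypothetical automated system's sensing-and-decision latency.

```python
# A back-of-the-envelope illustration of why reaction time matters.
# The speed and reaction times are assumptions chosen for illustration.

speed_kmh = 50                       # assumed urban driving speed
speed_ms = speed_kmh * 1000 / 3600   # convert to metres per second

for label, reaction_time in [("human driver", 1.5), ("automated system", 0.2)]:
    distance = speed_ms * reaction_time  # distance covered before braking even begins
    print(f"{label}: travels {distance:.1f} m before the brakes are applied")

# human driver: travels 20.8 m before the brakes are applied
# automated system: travels 2.8 m before the brakes are applied
```

Under these assumptions, the faster system gains roughly 18 metres of road before the brakes are even touched, which is often the difference between a near miss and a collision.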

Finally, there’s the issue of how we would go about programming moral decision making into an autonomous vehicle in the first place. When Isaac Asimov proposed his three laws of robotics, artificial intelligence was still more of a dream than a reality. It was imagined back then that humans would have to write the code that their robots would follow.

By the 1980s, this so-called “good old-fashioned artificial intelligence” had stalled out. A top-down approach to designing smart devices has many limitations, perhaps the most important being that we often can’t articulate how humans accomplish such mundane tasks as driving a car. And if we can’t spell the skill out ourselves, we can’t tell a computer how to do it, either.
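To see why the top-down approach struggles, consider this caricature of a hand-coded driving policy. Everything here is invented for illustration; no real system works from a rule list this simple.

```python
# A caricature of the top-down, rule-based approach: every situation the car
# might face has to be anticipated and written down by hand.

def handwritten_policy(scene):
    if scene.get("pedestrian_ahead"):
        return "brake"
    if scene.get("red_light"):
        return "stop"
    if scene.get("obstacle_in_lane"):
        return "change lane"
    # Anything the programmer didn't anticipate falls through to a default,
    # which is exactly where this approach breaks down.
    return "keep driving"

print(handwritten_policy({"pedestrian_ahead": True}))   # -> brake
print(handwritten_policy({"cyclist_swerving": True}))   # unanticipated case -> keep driving
```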

Instead, the rapid advances in artificial intelligence that we’ve seen in the last 30 years are largely due to the use of artificial neural networks. These brain-like computational structures learn from their own experiences, rather than being told in advance what to do. In other words, self-driving cars learn to drive in much the same way humans do: by encountering a wide variety of traffic situations and remembering the outcomes of the decisions they made.
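As a minimal sketch of that idea, and nothing like a real self-driving system, here is a single artificial neuron (a perceptron) that learns a brake-or-don't-brake rule from labelled examples rather than from hand-written rules. The training data are made up for illustration.

```python
# A toy neuron learns when to brake from examples instead of explicit rules.
# Each example: (distance to obstacle in metres, speed in m/s), label 1 = brake.
examples = [
    ((5.0, 15.0), 1), ((10.0, 20.0), 1), ((8.0, 10.0), 1),
    ((60.0, 10.0), 0), ((80.0, 20.0), 0), ((50.0, 5.0), 0),
]

weights = [0.0, 0.0]
bias = 0.0
learning_rate = 0.01

def predict(features):
    """Fire (brake) if the weighted sum of the inputs exceeds zero."""
    activation = bias + sum(w * x for w, x in zip(weights, features))
    return 1 if activation > 0 else 0

# Repeatedly adjust the weights whenever the neuron's decision is wrong.
for _ in range(100):
    for features, label in examples:
        error = label - predict(features)
        if error != 0:
            weights[:] = [w + learning_rate * error * x for w, x in zip(weights, features)]
            bias += learning_rate * error

print(predict((6.0, 18.0)))   # close and fast: expect 1 (brake)
print(predict((70.0, 8.0)))   # far and slow: expect 0 (don't brake)
```

The point of the sketch is that nobody tells the neuron what "too close" means; it settles on its own decision boundary from the examples it has seen, which is the essence of learning from experience.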

In sum, De Freitas and colleagues maintain that the debate about autonomous agents making moral decisions in real time is a red herring. That’s because humans can’t make such split-second decisions either. Rather, the researchers argue, engineers should focus their efforts on improving the safety of driverless cars, while legal scholars should work out the assignment of responsibility for those few accidents that do occur. When the data demonstrate that driverless cars save a significant number of lives, the moral decision to use them will be clear.