Last Week in Tech Law and Policy, Vol. 24: Will Your Autonomous Car be Programmed to Kill You?

(by R. Kolton Ray, Colorado Law 2L)

Back to the Future Day—October 21, 2015—was celebrated this past week to commemorate the day that Marty McFly and Doc Brown traveled through time to save Marty’s future son in Back to the Future Part II. It’s easy to laugh at the zany fashion and technology—fax machines, for example—but director Robert Zemeckis got a lot right about 2015. Nike plans to release a pair of self-lacing sneakers next year, hoverboards have come close to reality, and the film even portrayed a current political candidate as a wacky villain.

While we have yet to reach the flying cars depicted in the second film, we are very close to the introduction of self-driving cars into our travel ecosystem. Google’s self-driving car has successfully logged more than one million miles, and the company plans to release a model to the general public by 2017. Automotive powerhouses like GM, Ford, Toyota, Daimler-Chrysler and Volkswagen have all partnered with Google, and Tesla CEO Elon Musk has suggested that manually operated cars may one day be outlawed once autonomous cars become ubiquitous.

However, laws have been slow to keep up with the advent of self-driving cars. Some of our driving laws date back to the days of horses and carriages, and Google has had to seek special legislation just to test its self-driving cars on public roads. Even without clear legislation in place, Google is standing by its vision that autonomous vehicles will become a new form of public transportation and an important facet of our future lives.

Lawmakers must answer many questions to deal with the new problems arising from autonomous cars. For example, should we require an individual to be in the car for the vehicle to operate, or can it be operated remotely, akin to the Batmobile? Will autonomous cars be permitted to drive in any weather conditions, or will autopilot be limited to clear blue skies? What will the security standards be for self-driving cars, and what do we do in the event that an autonomous vehicle is hacked or tampered with?

While these decisions are critical, some of the most important questions policymakers will have to answer are philosophical. In particular, lawmakers will need to answer a version of the Trolley problem before self-driving cars can reach the marketplace:

 You are riding in a trolley on railroad tracks when you reach a fork in the path. If the trolley continues on its current path, it will surely kill five people who are tied to the tracks. However, if you pull a lever and switch to the other path, the trolley will surely kill one person tied to that track. Do you do nothing, and five people die, or do you change course and kill one person who would otherwise have lived?

This is a basic philosophical question whose answer could be guided in different directions by J.S. Mill’s Utilitarianism—under which the moral action is the one that results in the greatest net happiness for the greatest number of people—and Kant’s Categorical Imperative—under which an action is immoral if it treats a person merely as a means and not as an end.

However, what was once a thought exercise will likely play out before self-driving cars are allowed to take the road. Autonomous vehicles are not immune from traffic accidents: Google’s current fleet of 23 self-driving cars has been involved in 14 minor traffic accidents on public roads. Furthermore, the current technology relies heavily on stored data, and the vehicles do not operate as well in heavy fog or snow, or when following temporary road instructions.

At some point, will an autonomous car kill a human being?

This dynamic stirs a debate surrounding the basic precepts of artificial intelligence and protecting humanity. Author Isaac Asimov created the Three Laws of Robotics to address this issue:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Under the First Law, autonomous machines cannot allow a human being to come to harm. But in the world of self-driving cars, some harm seems nearly inevitable. Traffic accidents are called accidents precisely because they do not require intent; through action or inaction, they result in harm to another person.

More troubling is how to solve the Trolley problem for autonomous vehicles. Imagine a self-driving car cruising across a bridge at night. A couple, holding hands and unaware of the vehicle, step into the roadway a mere second before the vehicle reaches them. Through millions of calculations in that second, the car determines that there are two, and only two, possibilities: it will either (1) hit the pedestrians at full speed, surely killing them, or (2) swerve to avoid the pedestrians, careen off the bridge, and surely kill the vehicle’s occupants.
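
To make the bind concrete, here is a minimal, purely hypothetical sketch in Python (the action labels and fatality counts are illustrative, not drawn from any real vehicle’s software) of how a rule that simply forbids harming humans would evaluate the car’s two options:

    # Hypothetical illustration only: a strict "never harm a human" rule
    # cannot choose between the car's two options on the bridge.

    # Each candidate action maps to the number of people it is expected to kill.
    options = {
        "continue_straight": 2,   # the pedestrians in the roadway
        "swerve_off_bridge": 1,   # the vehicle's occupant(s)
    }

    # A First-Law-style filter keeps only actions that harm no one.
    permissible = [action for action, deaths in options.items() if deaths == 0]

    print(permissible)  # prints [] -- the rule leaves the car with no permissible action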

Autonomous vehicles and the nature of automobile accidents make it nearly impossible for an autonomous vehicle to follow the Three Laws of Robotics. A car stuck in a never-ending computing loop, unable to find a permissible action, would plow into the pedestrians by default; thus someone must program autonomous cars with what to do in this scenario. Asimov, perhaps anticipating a dilemma like this one, articulated a “Zeroth” law to address the paradox:

 A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

Asimov seems to endorse the Utilitarian camp of thinking: moral actions are those that promote the greatest net happiness for the greatest number of people. Under that logic, an autonomous vehicle in the example above would be programmed to kill its owner before killing the pedestrians that fate has placed in the vehicle’s path.
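
As a rough sketch of that Utilitarian tie-break (again hypothetical; a real system would have to weigh probabilities and uncertainty rather than clean integer counts), the programmed rule could be as simple as choosing the action expected to kill the fewest people:

    # Hypothetical Utilitarian tie-break: choose the action expected to kill
    # the fewest people, even when that means sacrificing the occupants.
    options = {
        "continue_straight": 2,   # pedestrians killed
        "swerve_off_bridge": 1,   # occupant(s) killed
    }

    choice = min(options, key=options.get)
    print(choice)  # prints "swerve_off_bridge"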

But is this the right approach? If a self-driving car is programmed to kill its owner, then that choice will necessarily affect society’s adoption of self-driving cars. People may prefer having the choice to sacrifice themselves or others if the situation arises, not a built-in requirement that they sacrifice themselves. Additionally, the choice to program a car to kill the driver offends notions of fate, the distinction between killing and letting die, and Kant’s Categorical Imperative.

Back to the Future failed to envision the philosophical conundrums that would arise with the advent of new technologies. How policymakers should decide these questions is an endeavor in morality that might not have a “best” answer.