Autonomous Cars: A Rawlsian Solution

[Image: an autonomous car (courtesy of Michael Shick)]

In the months following the September 11th attacks on the twin towers, Americans took to the roads. As passenger air miles fell, the freeways buzzed. Yet the risk of dying in a hijacked plane was at worst 1 in 540,000. The risk of dying in a car accident? 1 in 7,000. The shift away from flying in the 12 months after the attacks is estimated to have caused an additional 1,595 road deaths. [1]

Yes, improvements to cars have reduced the consequences of collisions, and 2014’s fatalities were the lowest since 1975. We are, however, still losing tens of thousands of people every single year: over 32,000 Americans lost their lives on the road in 2014. [2]

The self-driving car, as pioneered by Google, could prevent the vast majority of these deaths by taking human error out of the equation. There are, however, barriers to the driverless car making it from Google’s test routes to the highway.

One of these is an ethical problem which many see as insurmountable. It is this:

Inevitably some driverless car will find itself in a position where all possible decisions lead to harm.

Here are two examples recently discussed by The Washington Post:

“should your vehicle drive off a bridge to avoid hitting a Boy Scout troop, sacrificing your life to save a dozen?”


“Should a self-driving car veer away from the pedestrians in a crosswalk with a baby stroller and instead hit a lone pedestrian on a sidewalk?”

For the last few decades, ethicists have discussed similar thought experiments called trolley problems. The original trolley problem was presented by the philosopher Philippa Foot but has since been expanded by other philosophers. Here’s a basic trolley problem:

A runaway trolley/tram is headed for five people who are tied to the track. Luckily, you have the option to pull a lever which will divert the trolley onto a sidetrack and away from the five people. Unluckily, another individual is tied to the sidetrack.

What should you do? Is it better to kill one rather than let five die? It’s a fascinating problem without an easy solution.

Excited by this analogy, journalists have rushed to see whether trolley-problem philosophers can solve the real-world problem of driverless cars. They are right to turn to ethicists for a solution but are looking in the wrong place. Traditional trolley problems are a dead end, and here’s why.

In their traditional framing, trolley problems create an impasse. Any philosopher’s answer to a trolley problem depends on the ethical system they subscribe to. Unfortunately, the differences between these ethical systems come down to beliefs, values and intuitions so foundational that their only support is either circular[3] or a sort of self-evident justification where you either “get it” … or you don’t.

The problem is that we simply cannot afford an impasse on driverless cars. The lack of an agreed decision framework holds back their adoption and, in the meantime, allows large numbers of people to die in accidents caused by human error.

One solution to the car problem starts by reframing it as a public policy problem. The question is how we should organise as a society on the issue, not how we as individuals should act. These questions doubtless overlap, but they are distinct. We should start, then, by reframing the trolley problem itself as a policy problem.

Reframed, the trolley problem takes place in a society with technology that can automatically switch the trolley onto the sidetrack where one person is tied down. The problem then becomes one of making trolley policy for a society in which trolley problems are an inevitability (albeit a rare one) rather than a hypothetical. Things get easier once we realise the policy problem is primarily about the distribution of risk in a society.

Spelled out, the question is:

“Knowing that trolley problems are an inevitability, what is the fair distribution of their associated risk across society?”

An answer to this question can be found in the philosopher John Rawls’s “veil of ignorance” heuristic, which he applied to the basic structure of society’s primary institutions.

Rawls suggested that for the basic structure of society’s primary institutions to be fair, they would need to be such that we would agree to them under a hypothetical “veil of ignorance.” That is, the fair basic structure is one we would all agree to if we did not know what position in society we actually occupied or anything about ourselves and our talents, strengths, weaknesses and so on. As an idea this is not that radical; it is similar to the method we use when we divide up a cake along “one cuts, one chooses” lines. The fair distribution of slices is the one we’d all agree to if we didn’t know which slice we were going to get.

Rawls was quite opposed to applying the veil of ignorance to specific policies, but, with apologies, this is exactly what I’d like to do.

The fair distribution of risk in trolley problem policy is, I propose, one which we would agree to if we didn’t know whether we’d be one of the five on the main track, the person tied down on the sidetrack, or lucky enough to be uninvolved.

I believe that in this situation we would all agree to a trolley policy that minimised the aggregate number of deaths and injuries from trolley accidents by programming the diversion technology to divert to the sidetrack in cases like the original trolley problem. So far so good for trolley problems, but a disanalogy with driverless cars means our policy requires a special exception before it can apply to the freeway.
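Read as policy rather than as a moral dilemma, the aggregate-minimising rule for the original trolley case is almost trivially simple. The sketch below is purely illustrative; the function name and inputs are my own invention, not part of any real system.

```python
# Hypothetical aggregate-minimising trolley policy: divert whenever
# the sidetrack puts fewer people at risk than the main track.
def should_divert(on_main_track: int, on_sidetrack: int) -> bool:
    return on_sidetrack < on_main_track

# The classic case: five on the main track, one on the sidetrack.
print(should_divert(on_main_track=5, on_sidetrack=1))  # True
```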

Whatever our driverless car policy, if it significantly reduces the adoption of driverless cars by traditional drivers, then the policy may kill the very people it tries to protect by keeping human error on the road. Any policy will then also need to be acceptable outside the veil of ignorance, in the real world, to traditional drivers. These drivers are unlikely to accept any policy that would increase their own risk of death or injury in a specific scenario. This is, I think, true even where the policy in question not only reduced aggregate deaths and injuries but also reduced every driver’s total risk of the same.

Let’s then combine what we have learned from the reframed trolley case with the need to allay the fears of traditional drivers. The result, it seems to me, is that the fair driverless car policy is one where the car is programmed to minimise the aggregate number of deaths and injuries, with one condition: the “driver’s” risk of death or injury in any particular scenario must not be greater than it would have been had they been driving themselves.
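The proposed policy can be expressed as a selection rule over candidate actions: minimise expected aggregate harm, subject to the occupant never facing more risk than manual driving would have produced. The sketch below is a toy illustration under that framing; the action names, risk numbers and the baseline threshold are all invented, and real risk estimation is of course vastly harder than this.

```python
# Hypothetical sketch of the proposed policy: choose the action that
# minimises expected aggregate harm, subject to the constraint that
# the occupant's risk never exceeds the manual-driving baseline.
# All risk figures are invented for illustration.

def choose_action(actions, baseline_driver_risk):
    """actions: list of dicts with 'driver_risk' and 'others_risk' estimates."""
    # Keep only actions that don't expose the occupant to more risk
    # than a human driver would have faced in the same scenario.
    permitted = [a for a in actions
                 if a["driver_risk"] <= baseline_driver_risk]
    # If nothing satisfies the constraint, fall back to the action
    # that exposes the occupant to the least risk.
    if not permitted:
        return min(actions, key=lambda a: a["driver_risk"])
    # Among permitted actions, minimise aggregate expected harm
    # (occupant risk plus risk to everyone else).
    return min(permitted,
               key=lambda a: a["driver_risk"] + a["others_risk"])

# Example: swerving would save others but raises occupant risk above
# the manual-driving baseline, so braking in lane is chosen instead.
options = [
    {"name": "brake_in_lane", "driver_risk": 0.02, "others_risk": 0.10},
    {"name": "swerve",        "driver_risk": 0.30, "others_risk": 0.01},
]
print(choose_action(options, baseline_driver_risk=0.05)["name"])  # brake_in_lane
```

Note how the constraint, not the aggregate total, decides the example: without it, swerving would win on aggregate harm (0.31 vs 0.12 is worse here, but with other numbers it easily could win), yet the occupant-risk cap rules it out.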

Here my conclusion hinges on the empirical assumption that this stipulation will be enough to get traditional drivers to switch over. It is possible they will only be satisfied if the car preserves their own life regardless of the cost to others. I do not, however, think this is the case, because in the real world drivers are not only pedestrians too; they are also parents, grandparents, brothers, sisters, uncles and aunts. I do not believe they would want, literally and metaphorically, to throw those they care about in front of the bus. I hope I am not wrong.

[1] Passenger miles fell by 12-20%


[3] Whether these circles are fallacious or not (i.e. ‘virtuous’ or ‘vicious’) is a matter of debate
