A shorter – and probably better edited – version of this piece can be found in the University Observer (both online and in the November edition of the broadsheet). An excerpt also features on the Irish Times website.
The vehicle manufacturer Volvo is aiming to end all traffic fatalities caused by its new vehicles from 2020, and the optimism of this statement is typical of the industry in general. With 1.2 million people across the world dying on the roads annually, it is a bold claim – but such is the promise of self-driving cars. The technology is set to put an end to fatigued, angry, careless drivers, and this should come as no surprise. In every domain where computers are useful, their proficiency is streets ahead of what humans are capable of, and driving will be no different. Road deaths are sure to drop immensely.
In a world where a substantial fraction of the cars on the road are self-driving, how we program the vehicles to behave will be of huge importance. The code beneath the hood will dictate the car’s actions, and such decisions cannot be left to software engineers alone. The values we want encoded should be discussed by society as a whole, as the technology will greatly impact our lives. All new technology comes with ethical concerns centred on topics such as privacy and employment, but the stakes are higher for autonomous vehicles. This technology is poised to play out front-and-centre in society, testing our intuitions and judgements on a range of values we hold dear.
Decisions, decisions, decisions
Once consigned to the depths of humdrum philosophy journals, the trolley problem is fast becoming a pertinent issue thanks to self-driving cars. The dilemma centres on how a car should behave when confronted with a choice in an inevitable collision. In the simple case, consider a pedestrian stepping out in front of an oncoming vehicle. Whether the car should be programmed to swerve into a wall (killing the driver) or keep going (killing the pedestrian) is a matter of debate. MIT has developed an online experiment where you can test your intuitions on this topic.
The permutations of the trolley problem are never-ending. If there are passengers in the car, whose life should the car prioritise – the driver’s, the passengers’, or the pedestrians’? We might want the car to protect the passerby, since the driver and passengers assume some level of risk by getting into the car, but injuring multiple passengers to spare a single pedestrian doesn’t seem desirable either.
The utilitarian approach aims to minimise the total number of fatalities, regardless of anyone’s role in the collision. Wanting cars to treat all people equally seems logical, but the manufacturer would surely need to maximise the safety of the driver, or else they won’t sell many vehicles. From the start, there will have to be some bias towards saving the driver.
It might also seem sensible to take account of all possible future flourishing, thereby minimising the total fatalities over all timescales. In that case, it could be argued that the life of a doctor should be prioritised in any collision over that of a convicted murderer, as over the doctor’s career they will go on to save countless others. However, the boundaries quickly blur when societal value is less clear-cut: should the life of an athlete, musician, or actor who entertains millions be prioritised over the driver’s? And as for ranking people by their value to society, a glance at the history books makes it clear that wandering down this path leads only to bigotry.
The car could seek to protect the party that was not at fault. There is logic to this, as the consequences should fall on the person who breaks the law, but the car would then have the really tricky job of deciding who is breaking the law, which comes with many ambiguous edge cases. Even if your set of wheels is the perfect judge, is it right to say that justice should always prevail? When a child wanders out from behind a parked van, I wouldn’t be comfortable saying she deserves what she gets.
Instead, the car could be programmed to take no corrective action at all. Let the chips fall where they may. This is clearly overly simplistic because, by avoiding intervention, the car will never act deliberately when an accident is imminent, even when there is a group of children caught in the headlights and it could swerve into a tree. The other issue is that taking no action is itself an action. The car has to do something, and even if that means indiscriminately keeping to its collision course, this is still a chosen action.
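To make the tension concrete, here is a toy sketch – purely illustrative, and not any manufacturer’s actual code – of how each of these philosophies, once chosen, becomes a hard-coded rule applied identically to every collision:

```python
from dataclasses import dataclass


@dataclass
class Outcome:
    """One possible course of action and the harm it would cause."""
    action: str            # e.g. "swerve" or "stay on course"
    driver_deaths: int
    passenger_deaths: int
    pedestrian_deaths: int

    @property
    def total_deaths(self) -> int:
        return self.driver_deaths + self.passenger_deaths + self.pedestrian_deaths


def utilitarian(outcomes):
    """Minimise total fatalities, treating all lives equally."""
    return min(outcomes, key=lambda o: o.total_deaths)


def driver_first(outcomes):
    """Protect the driver above all; break ties by total fatalities."""
    return min(outcomes, key=lambda o: (o.driver_deaths, o.total_deaths))


def no_intervention(outcomes):
    """Take no corrective action: keep the current course regardless."""
    return next(o for o in outcomes if o.action == "stay on course")


# A group of pedestrians steps out; swerving into a wall would kill the driver.
options = [
    Outcome("stay on course", 0, 0, 3),
    Outcome("swerve", 1, 0, 0),
]

print(utilitarian(options).action)      # "swerve" – one death beats three
print(driver_first(options).action)     # "stay on course" – driver is spared
print(no_intervention(options).action)  # "stay on course" – by definition
```

The point is not the code itself but that whichever philosophy wins the debate, it ends up as a few unambiguous lines like these, executed without reflection at the moment of the crash.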
All of this assumes we could shape the cars’ behaviour exactly as we desire. In 2016, a computer program used by US courts for risk assessment was found to be biased against black prisoners. That same year, an AI that Microsoft unleashed on Twitter had to be deleted in less than a day because it developed a fondness for a certain Adolf, spouting genocidal views. These examples highlight what is known as the value alignment problem: unintentional biases could easily creep into any algorithm, resulting in certain demographics being favoured on the basis of factors such as age, weight, race, and gender.
At the end of the day, the issue boils down to the fact that, on the one hand, human lives are incommensurable and each encounter would ideally be dealt with in a wholly unique way. On the other hand, it is necessary to program the car with a certain set of hard rules to govern its actions.
Accidents will inevitably occur (and already have). The question of responsibility then arises. It is tough to hold the owner of the car fully accountable given that they won’t know how it actually operates. Some blame should sit with the manufacturer, but they will undoubtedly have laid bare the limitations of the vehicle’s artificial intelligence. If we determine that the manufacturer is culpable, there is no reason we shouldn’t continue down this rabbit hole to the individuals within the corporation who are directly responsible. However, it would seem unjustified to convict a software engineer of manslaughter for a bug in their code, even if it did lead to a death. Imagine the pain felt by the loved ones of the deceased upon hearing that line #825127 of the car’s operating system directly led to their relative’s death.
The correct course of action is to apply blame to no one, but this is easier said than done. Nearly every legal system a society has implemented centres on guilt and intent. Holding nobody accountable would, to some, feel like an injustice.
The questions framed by the trolley problem are perhaps the most pertinent ones facing driverless cars, but the ethical consequences reach deeper into society than that.
The good news is that every type of accident need only happen once, because the AI will use machine-learning techniques to improve over time – but this relies on all data being shared. When a crash occurs, the details will be sent instantly to the manufacturer, who can then write a patch for the bug and push it to every model around the world in real time, reducing any further risk of accidents. The price for this is, of course, privacy. From location information to biometrics, your car manufacturer will know you intimately. The questions are, firstly, should societal benefits outweigh the personal right to privacy, and secondly, who owns the wealth of data generated? The rights of consumers, manufacturers, insurers, and government agencies have yet to be specified.
Naturally, we want to minimise the number of road deaths, but those accidents are a major source of organs used for donation. Where we get organs will become a new problem to solve. From a utilitarian viewpoint, you could even argue that the algorithm should take into account who would likely be the best organ donor, making their death more palatable as they can go on to save several others.
In terms of employment, many people rely on driving for a living, and all of these jobs are at risk, with truck and taxi drivers under the most pressure. With very few accidents, insurance premiums should drop dramatically, which could cause difficulties for the insurance industry. Given that it will be rare for a driverless vehicle to pick up a traffic violation, the revenue the state generates from traffic fines will also drastically diminish. In the beginning, when driverless cars are owned exclusively by the wealthiest echelons of society, the police will end up penalising only the least affluent, and this will only worsen until they too can afford driverless cars.
On the flipside, traffic collisions place a massive financial burden on the economy. If this is reduced to a negligible amount, prosperity should increase.
The Bad Egg
Where there is software, there is the potential for it to be hacked. A connection to the cloud will be required to keep the software up to date, but if the cybersecurity isn’t up to scratch, your car could be hacked and stolen or, in the worst case, used as a weapon in a terrorist attack. Oppressive governments could use similar tactics to immobilise the transport of dissidents.
After all of these complications, we should end on a positive note. The obvious benefit of all of this is the reduction in road deaths. Apart from negating the danger posed by cars, the other major upside of autonomous vehicles is the additional free time people will have to be creative and spend as they wish. In Ireland, 1.2m people commute to work by car. Even in this narrow case, we’re talking hundreds of thousands of cumulative hours wasted behind the wheel which could now be used for leisure. You could also potentially begin your work day from your car, with your commute incorporated into your working hours. A 40-hour work week would then amount to 40 hours inclusive of the commute.
The last word
Within a decade, the internet changed the way we do everything, from communication to shopping. It did, however, bring with it many problems. Many world leaders believe AI stands to have a similar impact on society, with driving being just one aspect. This time, however, it is imperative that we be better prepared, as AI will be far more enmeshed in the real world, making the issues we face even more challenging. This is just the tip of the iceberg, and there are surely many other pragmatic questions not considered here.
Whether society is ready or not, the technology is coming, and the time to seek clarity on these ethical issues is now. As soon as we program the car to prefer the life of the driver over hitting a mailbox, we are encoding a value: a human life outweighs a postbox. This trivial example is all we need to admit that moral relativism just won’t cut it. There are right and wrong answers to questions of morality, even if we don’t have them all yet. We will be encoding our ethical framework into the technology, and from that point, there is no way back.
Autonomous vehicle technology marks a rare intersection between abstract philosophy and everyday life. Our intuitions will undoubtedly be tested as the pragmatic aspects become more salient. Although road deaths will drop considerably, the margin for error is razor-thin. Bugs in the software cost lives on the street and the most ethically sanguine course of action will mean little to those who are killed.