The Ethical Knob and Self-driving Vehicles


Francesca Lagioia*

Accidents involving Autonomous Vehicles (AVs) raise difficult ethical dilemmas and legal issues: in hypothetical accident scenarios, AVs have to make decisions involving human lives.

A classic example with which to introduce such an ethical dilemma is based on the following scenario: in a dangerous and unavoidable accident situation, your AV must decide between staying on course, thus hitting several pedestrians, and swerving, thus killing one passer-by or perhaps endangering and killing you. In such a critical scenario, would the AV sacrifice one person to save the lives of many? And what about your life?

Imagine, for instance, an AV carrying a family of three on a one-lane highway. As the AV moves forward, four children run out in front of the car to retrieve a ball, crossing the street without realising they are about to get hit, and it is too late for the car to stop. Should the AV swerve to the side, causing a guardrail collision and risking its passengers’ lives? Or should the AV continue on its path, ensuring its passengers’ safety at the children’s expense?

According to a recent study by the MIT Media Lab, conducted through three online surveys, people are comfortable with the idea that AVs should be programmed to minimise the death toll. At the same time, however, participants showed a preference for riding in cars that would protect their own passengers. Paradoxically, it appears that most participants would prefer others to use impartial AVs, for which all lives have equal importance, while each of them, as a passenger, would make a more selfish choice.

Who should select the criteria that the AV follows in making such choices: should the same mandatory ethics setting (MES) be implemented in all cars, or should every driver be able to select his or her own personal ethics setting (PES)?

This paradoxical situation shows that it would be difficult to pre-program a fixed ethical approach into AVs. People could be unwilling to buy a car that would sacrifice their lives in order to save others. Moreover, if the choice of a fixed ethical setting were left to the manufacturer, market pressures would encourage programming AVs to make choices that harm pedestrians whenever such choices contribute to the safety of passengers.

Our research team – myself, Giovanni Sartor and Giuseppe Contissa – has explored the legal and ethical implications of using pre-programmed autonomous cars; we then came up with the idea of the so-called Ethical Knob (‘The Ethical Knob: ethically-customisable automated vehicles and the law’, Artificial Intelligence and Law, 25, 2017), shifting the basis of moral choices to the AV’s passengers rather than pre-programming them. In this work – which sparked animated discussion in the general media as well as in the research community – we assume that AVs may be designed in such a way that the passenger has the task of deciding what ethical approach should be adopted in unavoidable accident scenarios.

This novel design approach proposes an additional control alongside those of the AV: the ‘Ethical Knob’. It tells the vehicle the value that the driver gives to his or her own life relative to the lives of others. The owner/driver can set this control to any value on a continuous scale from -1 to 1, anchored by three reference positions:

  • Altruistic mode (-1), preference for third parties;
  • Impartial mode (0), equal importance to all parties involved;
  • Egoistic mode (1), preference for the passengers of the vehicle.

In the altruistic mode, the AV is more likely to favour other persons over its own passengers; in the intermediate ‘impartial’ position, the AV assigns equal importance to the life and health of all parties involved; finally, in the ‘egoistic’ mode, the AV assigns more importance to the lives of its passengers, likely favouring them over pedestrians. The car would use this information to calculate the action to be executed, taking into account the probability that the passengers or other parties suffer harm as a consequence of the driving decision, as well as the probable amount of damage for each person involved.
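
To make the mechanism concrete, the sketch below shows, in Python, one way such a calculation could work. It is only an illustration under assumed names and numbers, not the formula from our paper: it supposes that the AV scores each candidate manoeuvre by a weighted sum of expected harms, with the knob value k in [-1, 1] shifting weight between third parties and passengers.

    # Illustrative sketch only: a weighted expected-harm rule for the
    # Ethical Knob. All names and numbers are assumptions, not the
    # actual model from the paper.
    from dataclasses import dataclass

    @dataclass
    class Outcome:
        name: str
        p_passengers: float   # probability that the passengers are harmed
        h_passengers: float   # severity of that harm
        p_others: float       # probability that third parties are harmed
        h_others: float       # severity of that harm

    def weighted_cost(o: Outcome, k: float) -> float:
        """Expected harm under knob setting k in [-1, 1]."""
        w_pass = (1 + k) / 2   # k = 1 (egoistic): full weight on passengers' harm
        w_oth = (1 - k) / 2    # k = -1 (altruistic): full weight on others' harm
        return (w_pass * o.p_passengers * o.h_passengers
                + w_oth * o.p_others * o.h_others)

    def choose(options: list[Outcome], k: float) -> Outcome:
        """Pick the manoeuvre with the lowest knob-weighted expected harm."""
        return min(options, key=lambda o: weighted_cost(o, k))

    # The one-lane-highway example: stay on course (children at risk)
    # or swerve into the guardrail (passengers at risk).
    stay = Outcome('stay on course', 0.0, 0.0, 0.9, 4.0)
    swerve = Outcome('swerve into guardrail', 0.6, 3.0, 0.0, 0.0)
    for k in (-1.0, 0.0, 1.0):
        print(k, '->', choose([stay, swerve], k).name)
    # -1.0 -> swerve into guardrail; 0.0 -> swerve; 1.0 -> stay on course

Under these assumed numbers, only the egoistic setting keeps the car on course; the impartial and altruistic settings both accept the guardrail collision, and the risk to the passengers, in order to spare the children.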

Compared with the opportunity to purchase AVs with different fixed, pre-programmed ethical settings, an Ethical Knob would provide the user with a greater set of choices, which can vary over time depending on the age and number of passengers, life expectancy and other factors.

Our legal analysis has shown that, with regard to their legal regime, ethically customisable AVs facing life-and-death dilemmas would differ significantly from both pre-programmed AVs and human-driven cars.

With manned cars, in a situation in which the law cannot impose a choice between lives of equal importance, such a choice rests with the driver, under the protection of the state-of-necessity defence, even in cases in which the driver chooses to save himself or herself at the cost of killing many pedestrians.

Pre-programmed choices, by contrast, could be considered both morally and legally unacceptable in some scenarios: ‘With pre-programmed AVs, such a choice is shifted to the programmer, who would not be protected by the state-of-necessity defence whenever the choice would result in killing many agents rather than one’.

However, the fact that the knob setting must be selected in advance of the accident matters in some contexts, particularly where the distinction between action and omission could justify a non-consequentialist approach for a human driver. In this respect, ethically customisable AVs would be treated similarly to pre-programmed AVs.

While, on the one hand, this approach is a promising solution to some problems, on the other it leaves open some important issues. First, people could be unwilling to take on the legal and moral responsibility involved; secondly, this large amount of pre-emptive control could lead to an unbalanced situation in which everyone sets the most self-protective mode, increasing the overall risk to road safety. We could thus face a ‘Tragedy of the Commons’ type of scenario. This is currently under investigation, using numerical agent-based simulations, to assess and verify the possible scenarios.
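
To illustrate the kind of simulation involved, here is a minimal toy model, again in Python and under assumed parameters (it is not our actual experimental setup). Drivers are repeatedly paired into risk-transfer dilemmas: keeping course risks a smaller expected harm to oneself, while swerving transfers a larger expected harm to the other road user. When every knob is set to egoistic, each driver shifts harm onto others and the population-wide average harm rises, compared with universal impartiality.

    # Toy agent-based model of the 'Tragedy of the Commons' effect.
    # All parameters are assumptions for illustration only.
    import random

    def average_harm(knobs, rounds=10_000, seed=0):
        """Mean harm per driver after repeated two-car dilemmas."""
        rng = random.Random(seed)
        P = 0.5           # probability that the risked harm materialises
        SELF_HARM = 2.0   # harm to oneself if the active car keeps course
        OTHER_HARM = 3.0  # harm to the other driver if it swerves instead
        harm = [0.0] * len(knobs)
        for _ in range(rounds):
            i, j = rng.sample(range(len(knobs)), 2)  # i faces the dilemma
            k = knobs[i]
            keep = (1 + k) / 2 * P * SELF_HARM       # knob-weighted costs,
            transfer = (1 - k) / 2 * P * OTHER_HARM  # as in the sketch above
            if transfer < keep:                      # swerve: harm lands on j
                harm[j] += OTHER_HARM if rng.random() < P else 0.0
            else:                                    # keep course: harm on i
                harm[i] += SELF_HARM if rng.random() < P else 0.0
        return sum(harm) / len(knobs)

    print('all egoistic :', average_harm([1.0] * 100))  # roughly 150
    print('all impartial:', average_harm([0.0] * 100))  # roughly 100

In this toy world, the egoistic population ends up worse off on average than the impartial one, even though each individual choice is locally self-protective.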

 

* Francesca Lagioia is a Max Weber Fellow at the EUI Law department. Francesca’s research interests lie at the intersection of law and computer science, with a focus on the legal issues related to the development of artificial intelligence systems.
