1
Cecchini D, Dubljević V. Moral Complexity in Traffic: Advancing the ADC Model for Automated Driving Systems. SCIENCE AND ENGINEERING ETHICS 2025; 31:5. [PMID: 39853674 PMCID: PMC11761772 DOI: 10.1007/s11948-025-00528-1]
Abstract
The incorporation of ethical settings in Automated Driving Systems (ADSs) has been extensively discussed in recent years with the goal of enhancing potential stakeholders' trust in the new technology. However, a comprehensive ethical framework for ADS decision-making, capable of merging multiple ethical considerations and investigating their consistency, is currently missing. This paper addresses this gap by providing a taxonomy of ADS decision-making based on the Agent-Deed-Consequences (ADC) model of moral judgment. Specifically, we identify three main components of traffic moral judgment: driving style, traffic rules compliance, and risk distribution. Then, we suggest distinguishable ethical settings for each traffic component.
Affiliation(s)
- Dario Cecchini
- Department of Philosophy and Religious Studies, North Carolina State University, Raleigh, NC, USA
- Veljko Dubljević
- Department of Philosophy and Religious Studies, North Carolina State University, Raleigh, NC, USA.
2
Liu P, Chu Y, Zhai S, Zhang T, Awad E. Morality on the road: Should machine drivers be more utilitarian than human drivers? Cognition 2025; 254:106011. [PMID: 39561525 DOI: 10.1016/j.cognition.2024.106011]
Abstract
Machines powered by artificial intelligence have the potential to replace or collaborate with human decision-makers in moral settings. In these roles, machines would face moral tradeoffs, such as automated vehicles (AVs) distributing inevitable risks among road users. Do people believe that machines should make moral decisions differently from humans? If so, why? To address these questions, we conducted six studies (N = 6805) to examine how people, as observers, believe human drivers and AVs should act in similar moral dilemmas and how they judge their moral decisions. In pedestrian-only dilemmas where the two agents had to sacrifice one pedestrian to save more pedestrians, participants held them to similar utilitarian norms (Study 1). In occupant dilemmas where the agents needed to weigh the in-vehicle occupant against more pedestrians, participants were less accepting of AVs sacrificing their passenger compared to human drivers sacrificing themselves (Studies 1-3) or another passenger (Studies 5-6). The difference was not driven by reduced occupant agency in AVs (Study 4) or by non-voluntary occupant sacrifice in AVs (Study 5), but rather by the perceived social relationship between AVs and their users (Study 6). Thus, even when people adopt an impartial stance as observers, they are more likely to believe that AVs should prioritize serving their users in moral dilemmas. We discuss the theoretical and practical implications for AV morality.
Affiliation(s)
- Peng Liu
- Center for Psychological Sciences, Zhejiang University, Hangzhou, Zhejiang, China.
- Yueying Chu
- Center for Psychological Sciences, Zhejiang University, Hangzhou, Zhejiang, China; Department of Psychology and Behavioral Sciences, Zhejiang University, Hangzhou, Zhejiang, China
- Siming Zhai
- College of Management and Economics, Tianjin University, Tianjin, China
- Tingru Zhang
- Institute of Human Factors and Ergonomics, Shenzhen University, Shenzhen, China.
- Edmond Awad
- The Uehiro Oxford Institute, University of Oxford, Oxford, UK; Department of Economics, University of Exeter, Exeter, UK; Center for Humans and Machines, Max Planck Institute for Human Development, Berlin, Germany
3
Takemoto K. The moral machine experiment on large language models. ROYAL SOCIETY OPEN SCIENCE 2024; 11:231393. [PMID: 38328569 PMCID: PMC10846950 DOI: 10.1098/rsos.231393]
Abstract
As large language models (LLMs) have become more deeply integrated into various sectors, understanding how they make moral judgements has become crucial, particularly in the realm of autonomous driving. This study used the Moral Machine framework to investigate the ethical decision-making tendencies of prominent LLMs, including GPT-3.5, GPT-4, PaLM 2 and Llama 2, and to compare their responses with human preferences. While LLM and human preferences, such as prioritizing humans over pets and favouring saving more lives, are broadly aligned, PaLM 2 and Llama 2 in particular show distinct deviations. Additionally, despite the qualitative similarities between the LLM and human preferences, there are significant quantitative disparities, suggesting that LLMs might lean toward more uncompromising decisions, compared with the milder inclinations of humans. These insights elucidate the ethical frameworks of LLMs and their potential implications for autonomous driving.
Affiliation(s)
- Kazuhiro Takemoto
- Department of Bioscience and Bioinformatics, Kyushu Institute of Technology, Iizuka, Fukuoka 820-8502, Japan
4
Tolmeijer S, Arpatzoglou V, Rossetto L, Bernstein A. Trolleys, crashes, and perception-a survey on how current autonomous vehicles debates invoke problematic expectations. AI AND ETHICS 2023; 4:473-484. [PMID: 38737783 PMCID: PMC11078731 DOI: 10.1007/s43681-023-00284-7]
Abstract
Ongoing debates about ethical guidelines for autonomous vehicles mostly focus on variations of the 'Trolley Problem'. Preference surveys built on variations of this ethical dilemma are then used to discuss possible implications for autonomous vehicle policy. In this work, we argue that the lack of realism in such scenarios leads to limited practical insights. We run an ethical preference survey for autonomous vehicles that includes more realistic features, such as time pressure and a non-binary decision option. Our results indicate that such changes lead to different outcomes, calling into question how the current outcomes can be generalized. Additionally, we investigate the framing effects of the capabilities of autonomous vehicles and indicate that ongoing debates need to set realistic expectations on autonomous vehicle challenges. Based on our results, we call upon the field to re-frame the current debate towards more realistic discussions beyond the Trolley Problem and to focus on which autonomous vehicle behavior is considered unacceptable, since a consensus on what the right solution is cannot be reached.
Affiliation(s)
- Suzanne Tolmeijer
- Information Systems, Socio-Technical Systems Design (WISTS), University of Hamburg, Vogt-Kölln-Straße 30, 22527 Hamburg, Germany
- Vicky Arpatzoglou
- Department of Informatics, University of Zurich, Binzmühlestrasse 14, 8050 Zurich, Switzerland
- Luca Rossetto
- Department of Informatics, University of Zurich, Binzmühlestrasse 14, 8050 Zurich, Switzerland
- Abraham Bernstein
- Department of Informatics, University of Zurich, Binzmühlestrasse 14, 8050 Zurich, Switzerland
5
An ethical trajectory planning algorithm for autonomous vehicles. NAT MACH INTELL 2023. [DOI: 10.1038/s42256-022-00607-z]
6
Mayer MM, Buchner A, Bell R. Humans, machines, and double standards? The moral evaluation of the actions of autonomous vehicles, anthropomorphized autonomous vehicles, and human drivers in road-accident dilemmas. Front Psychol 2023; 13:1052729. [PMID: 36687966 PMCID: PMC9847390 DOI: 10.3389/fpsyg.2022.1052729]
Abstract
A more critical evaluation of the actions of autonomous vehicles in comparison to those of human drivers in accident scenarios may complicate the introduction of autonomous vehicles into daily traffic. In two experiments, we tested whether the evaluation of actions in road-accident scenarios differs as a function of whether the actions were performed by human drivers or autonomous vehicles. Participants judged how morally adequate they found the actions of a non-anthropomorphized autonomous vehicle (Experiments 1 and 2), an anthropomorphized autonomous vehicle (Experiment 2), and a human driver (Experiments 1 and 2) in otherwise identical road-accident scenarios. The more lives were spared, the better the action was evaluated, irrespective of the agent. However, regardless of the specific action that was chosen, the actions of the human driver were always considered more morally justifiable than the corresponding actions of the autonomous vehicle. The differences in the moral evaluations between the human driver and the autonomous vehicle were reduced, albeit not completely eliminated, when the autonomous vehicle was anthropomorphized (Experiment 2). Anthropomorphizing autonomous vehicles may thus influence the processes underlying moral judgments such that the actions of anthropomorphized autonomous vehicles appear closer in moral justifiability to the actions of humans. The observed differences in the moral evaluation of the actions of human drivers and autonomous vehicles could provoke a more critical public response to accidents involving autonomous vehicles than to those involving human drivers, a response that might be reduced by anthropomorphizing the autonomous vehicles.
7
Aguiar F, Hannikainen IR, Aguilar P. Guilt Without Fault: Accidental Agency in the Era of Autonomous Vehicles. SCIENCE AND ENGINEERING ETHICS 2022; 28:11. [PMID: 35201428 DOI: 10.1007/s11948-022-00363-8]
Abstract
The control principle implies that people should not feel guilt for outcomes beyond their control. Yet, the so-called 'agent and observer puzzles' in philosophy demonstrate that people waver in their commitment to the control principle when reflecting on accidental outcomes. In the context of car accidents involving conventional or autonomous vehicles (AVs), Study 1 established that judgments of responsibility are most strongly associated with expressions of guilt, over and above other negative emotions such as sadness, remorse or anger. Studies 2 and 3 then confirmed that, while people generally endorse the control principle, and deny that occupants in an AV should feel guilt when involved in an accident, they nevertheless ascribe guilt to those same occupants. Study 3 also uncovered novel implications of the observer puzzle in the legal context: passengers in an AV were seen as more legally liable than either passengers in a conventional vehicle or even their drivers, especially when participants were prompted to reflect on the passengers' affective experience of guilt. Our findings document an important conflict, in the context of AV accidents, between people's prescriptive reasoning about responsibility and guilt on one hand, and their counter-normative experience of guilt on the other, with apparent implications for liability decisions.
Affiliation(s)
- Pilar Aguilar
- Department of Psychology, Universidad Loyola Andalucía, Sevilla, Spain.