1
Ju U, Kim S. Willingness to take responsibility: Self-sacrifice versus sacrificing others in takeover decisions during autonomous driving. Heliyon 2024; 10:e29616. PMID: 38698973; PMCID: PMC11064069; DOI: 10.1016/j.heliyon.2024.e29616.
Abstract
In Level-3 autonomous driving, drivers are required to take over in an emergency upon receiving a request from the autonomous vehicle (AV). However, until the deadline for the takeover request expires, drivers are not considered fully responsible for an accident, which may make them hesitant to assume control and, with it, full liability. To prevent problems caused by late takeover, it is therefore important to know which factors influence a driver's willingness to take over in an emergency. To address this issue, we recruited 250 participants each for a video-based and a text-based survey to investigate the takeover decision in a dilemmatic situation that can endanger the driver: if participants do not intervene, the AV sacrifices either a group of pedestrians or the driver. The results showed that 88.2% of respondents chose to take over when the AV intended to sacrifice the driver, whereas only 59.4% did so when the pedestrians would be sacrificed. When the AV's chosen path matched the participant's intention, 77.4% still chose to take over when the car intended to sacrifice the driver, compared with only 34.3% when the pedestrians would be sacrificed. Other factors such as sex, driving experience, and driving preferences partially influenced takeover decisions, but their effect was smaller than that of the situational context. Overall, our findings show that regardless of an AV's driving intention, informing drivers that their safety is at risk can enhance their willingness to take over control in critical situations.
Affiliation(s)
- Uijong Ju
- Department of Information Display, Kyung Hee University, Seoul, South Korea
- Sanghyeon Kim
- Department of Information Display, Kyung Hee University, Seoul, South Korea
2
Salagean A, Wu M, Fletcher G, Cosker D, Fraser DS. The utilitarian virtual self: Using embodied personalized avatars to investigate moral decision-making in semi-autonomous vehicle dilemmas. IEEE Transactions on Visualization and Computer Graphics 2024; 30:2162-2172. PMID: 38437115; DOI: 10.1109/tvcg.2024.3372121.
Abstract
Embodied personalized avatars are a promising new tool for investigating moral decision-making, as they transpose the user into the "middle of the action" of moral dilemmas. Here, we tested whether avatar personalization and motor control could impact moral decision-making, physiological reactions and reaction times, as well as embodiment, presence and avatar perception. Seventeen participants, whose personalized avatars had been created in a previous study, took part in a range of incongruent (i.e., harmful action led to better overall outcomes) and congruent (i.e., harmful action led to trivial outcomes) moral dilemmas as the drivers of a semi-autonomous car. They embodied four different avatars (counterbalanced: personalized with motor control, personalized without motor control, generic with motor control, generic without motor control). Overall, participants took a utilitarian approach, performing harmful actions only to maximize outcomes. We found increased physiological arousal (skin conductance responses (SCRs) and heart rate) for personalized avatars compared to generic avatars, and increased SCRs in motor-control conditions compared to no motor control. Participants had slower reaction times when they had motor control over their avatars, possibly hinting at more elaborate decision-making processes. Presence was also higher in motor-control conditions than in no-motor-control conditions. Embodiment ratings were higher for personalized avatars, and, generally, personalization and motor control were perceptually positive features. These findings highlight the utility of personalized avatars and open up a range of future research possibilities that could benefit from the affordances of this technology and simulate, more closely than ever, real-life action.
3
Resnik DB, Andrews SL. A precautionary approach to autonomous vehicles. AI and Ethics 2024; 4:403-418. PMID: 38770187; PMCID: PMC11105117; DOI: 10.1007/s43681-023-00277-6.
Abstract
In this article, we defend an approach to autonomous vehicle ethics and policy based on the precautionary principle. We argue that a precautionary approach is warranted, given the significant scientific and moral uncertainties related to autonomous vehicles, especially higher-level ones. While higher-level autonomous vehicles may offer many important benefits to society, they also pose significant risks, which are not fully understood at this juncture. Risk management strategies traditionally used by government officials to make decisions about new technologies cannot be applied to higher-level autonomous vehicles because these strategies require accurate and reliable probability estimates concerning the outcomes of different policy options and extensive agreement about values, which are not currently available for autonomous vehicles. Although we describe our approach as precautionary, that does not mean that we are opposed to autonomous vehicle development and deployment, because autonomous vehicles offer benefits that should be pursued. The optimal approach to managing the risks of autonomous vehicles is to take reasonable precautions; that is, to adopt policies that attempt to deal with serious risks in a responsible way without depriving society of important benefits.
Affiliation(s)
- David B. Resnik
- National Institute of Environmental Health Sciences, National Institutes of Health, 111 TW Alexander Drive, Mail Drop E1-06, PO Box 12233, Research Triangle Park, NC 27709, USA
- Suzanne L. Andrews
- Master of Science in Bioethics Student, Columbia University, 203 Lewisohn Hall, 2970 Broadway, Mail Code 4119, New York, NY 10027-6902, USA
4
Chen N, Zu Y, Song J. Research on the influence and mechanism of human-vehicle moral matching on trust in autonomous vehicles. Front Psychol 2023; 14:1071872. PMID: 37325750; PMCID: PMC10262084; DOI: 10.3389/fpsyg.2023.1071872.
Abstract
Introduction: Autonomous vehicles can have social attributes and make ethical decisions during driving. In this study, we investigated the impact of human-vehicle moral matching on trust in autonomous vehicles and its mechanism. Methods: A 2×2 experiment involving 200 participants was conducted. Results: Utilitarian individuals showed greater trust than deontological individuals. Perceived value and perceived risk play a double-edged role in people's trust in autonomous vehicles: a person's moral type has a positive impact on trust through perceived value and a negative impact through perceived risk. The vehicle's moral type moderates the impact of the human's moral type on trust through perceived value and perceived risk. Discussion: Heterogeneous moral matching (utilitarian people paired with deontological vehicles) has a more positive effect on trust than homogeneous moral matching (people and vehicles both deontological or both utilitarian), which is consistent with the assumption that individuals have selfish preferences. The results provide theoretical expansion for research on human-vehicle interaction and the social attributes of AI, and they offer exploratory suggestions for the functional design of autonomous vehicles.
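To make the reported structure concrete, here is a minimal sketch of a dual-mediator, moderated regression analysis of the kind the abstract describes. All variable names and the simulated data are illustrative assumptions, not the study's materials.

```python
# Hedged sketch of the dual-mediator (perceived value / perceived risk),
# moderated-mediation structure described in the abstract. Variable names and
# the simulated data are illustrative assumptions, not the study's materials.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "human_utilitarian": rng.integers(0, 2, n),    # 0 = deontological person
    "vehicle_utilitarian": rng.integers(0, 2, n),  # moderator: AV's moral type
})
df["perceived_value"] = 0.5 * df.human_utilitarian + rng.normal(size=n)
df["perceived_risk"] = -0.4 * df.human_utilitarian + rng.normal(size=n)
df["trust"] = (0.6 * df.perceived_value - 0.5 * df.perceived_risk
               + 0.3 * df.human_utilitarian * df.vehicle_utilitarian
               + rng.normal(size=n))

# a-paths: human moral type -> mediators, moderated by the vehicle's moral type
med_value = smf.ols("perceived_value ~ human_utilitarian * vehicle_utilitarian", df).fit()
med_risk = smf.ols("perceived_risk ~ human_utilitarian * vehicle_utilitarian", df).fit()
# b- and c'-paths: mediators and the moral (mis)match -> trust
outcome = smf.ols("trust ~ perceived_value + perceived_risk "
                  "+ human_utilitarian * vehicle_utilitarian", df).fit()
print(outcome.params)
```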
Affiliation(s)
- Na Chen
- College of Economics and Management, Beijing University of Chemical Technology, Beijing, China
- Yao Zu
- College of Economics and Management, Beijing University of Chemical Technology, Beijing, China
- Jing Song
- Management College, Beijing Union University, Beijing, China
5
An ethical trajectory planning algorithm for autonomous vehicles. Nat Mach Intell 2023. DOI: 10.1038/s42256-022-00607-z.
6
Takaguchi K, Kappes A, Yearsley JM, Sawai T, Wilkinson DJC, Savulescu J. Personal ethical settings for driverless cars and the utility paradox: An ethical analysis of public attitudes in UK and Japan. PLoS One 2022; 17:e0275812. PMID: 36378636; PMCID: PMC9665398; DOI: 10.1371/journal.pone.0275812.
Abstract
Driverless cars are predicted to dramatically reduce collisions and casualties on the roads. However, there has been controversy about how they should be programmed to respond in the event of an unavoidable collision. Should they aim to save the most lives, prioritise the lives of pedestrians, or those of the vehicle's occupants? Some have argued that all driverless cars should be programmed to minimise total casualties. While this appears to have wide international public support, previous work has also suggested regional variation and public reluctance to purchase driverless cars with such a mandated ethical setting. The possibility that algorithms designed to minimise collision fatalities would reduce consumer uptake of driverless cars, and thereby lead to higher overall road deaths, represents a potential "utility paradox". To investigate this paradox, we examined the views of the general public about driverless cars in two online surveys in the UK and Japan, examining the influence of the choice of a "personal ethical setting", as well as of framing, on hypothetical purchase decisions. The personal ethical setting would allow respondents to choose between a programme that would save the most lives, save occupants, or save pedestrians. We found striking differences between UK and Japanese respondents. While a majority of UK respondents wished to buy driverless cars that prioritise the most lives or their family members' lives, Japanese participants preferred to save pedestrians. We observed reduced willingness to purchase driverless cars with a mandated ethical setting (compared to offering choice) in both countries. It appears that the public values relevant to the programming of driverless cars differ between the UK and Japan. The highest uptake of driverless cars in both countries can be achieved by providing a personal ethical setting. Since uptake of driverless cars, rather than the specific algorithm used, is potentially the biggest factor in reducing traffic-related deaths, providing some choice of ethical settings may be optimal for driverless cars according to a range of plausible ethical theories.
Affiliation(s)
- Kazuya Takaguchi
- Oxford Uehiro Centre for Practical Ethics, Faculty of Philosophy, University of Oxford, Oxford, United Kingdom
- The Department of Ethics, Kyoto University, Kyoto, Japan
- Tsutomu Sawai
- Graduate School of Humanities and Social Sciences, Hiroshima University, Hiroshima, Japan
- Institute for the Advanced Study of Human Biology (ASHBi), Kyoto University, Kyoto, Japan
- Dominic J. C. Wilkinson
- Oxford Uehiro Centre for Practical Ethics, Faculty of Philosophy, University of Oxford, Oxford, United Kingdom
- John Radcliffe Hospital, Oxford, United Kingdom
- Murdoch Children’s Research Institute, Melbourne, Australia
- Julian Savulescu
- Oxford Uehiro Centre for Practical Ethics, Faculty of Philosophy, University of Oxford, Oxford, United Kingdom
- Melbourne Law School, Melbourne, Australia
7
A study of common principles for decision-making in moral dilemmas for autonomous vehicles. Behav Sci (Basel) 2022; 12:344. PMID: 36135148; PMCID: PMC9495613; DOI: 10.3390/bs12090344.
Abstract
How autonomous vehicles (AVs) should decide when faced with moral dilemmas remains a challenge. To address it, this paper proposes the concept of common principles, which are drawn from the choices of the general public and can be broadly accepted by society. The study established five moral dilemma scenarios with variables including the number of sacrifices, passenger status, the presence of children, the subject holding decision-making power, and laws. Based on existing questionnaire data, gray correlation analysis was used to assess how participants' individual and psychological factors influence decision-making; an independent-samples t-test and analysis of covariance were then used to analyze the relationships between these factors. Finally, by statistically summarizing participants' decision choices and related parameters, we obtain common principles for autonomous vehicles, including protecting law-abiding people, protecting the majority, and protecting children. The principles have different priorities in different scenarios and can accommodate the complex variations of moral dilemmas. This study can alleviate the contradiction between utilitarianism and deontology and the conflict between public and individualized needs, and it can provide a code of conduct for ethical decision-making in future autonomous vehicles.
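For readers unfamiliar with gray (grey) relational analysis, the following is a minimal sketch of the technique under assumed placeholder data; the reference series and factor values are invented, not the study's questionnaire data.

```python
# Minimal grey relational analysis (GRA) sketch of the kind the abstract
# mentions; the reference series and factor data are fabricated placeholders,
# not the study's questionnaire data.
import numpy as np

def grey_relational_grades(reference, factors, rho=0.5):
    """reference: (k,) series; factors: (m, k) candidate factor series."""
    # Normalize each series to [0, 1] so scales are comparable
    def norm(x):
        return (x - x.min()) / (x.max() - x.min())
    ref = norm(reference)
    fac = np.array([norm(f) for f in factors])
    delta = np.abs(fac - ref)                    # pointwise distances
    # Grey relational coefficient with distinguishing coefficient rho
    coef = (delta.min() + rho * delta.max()) / (delta + rho * delta.max())
    return coef.mean(axis=1)                     # one grade per factor

ref = np.array([0.8, 0.6, 0.9, 0.7, 0.5])        # e.g., decision choices
factors = np.array([[30, 25, 40, 35, 20],        # e.g., age
                    [3, 2, 5, 4, 1]])            # e.g., years of driving
print(grey_relational_grades(ref, factors))      # higher = stronger influence
```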
8
Wang Q, Zhou Q, Lin M, Nie B. Human injury-based safety decision of automated vehicles. iScience 2022; 25:104703. PMID: 35856029; PMCID: PMC9287800; DOI: 10.1016/j.isci.2022.104703.
Abstract
Automated vehicles (AVs) are anticipated to improve road traffic safety. However, prevailing decision-making algorithms have largely neglected the potential to mitigate injuries when confronting inevitable obstacles. To explore whether, how, and to what extent AVs can enhance human protection, we propose an injury risk mitigation-based decision-making algorithm. The algorithm is guided by a real-time, data-driven human injury prediction model and is assessed using detailed first-hand information collected from real-world crashes. The results demonstrate that integrating injury prediction into decision-making is promising for reducing traffic casualties. Because safety decisions involve harm distribution for different participants, we further analyze the potential ethical issues quantitatively, providing a technically critical step closer to settling such dilemmas. This work demonstrates the feasibility of applying mining tools to identify the underlying mechanisms embedded in crash data accumulated over time and opens the way for future AVs to facilitate optimal road traffic safety.
Highlights:
- We propose an injury risk mitigation-based decision-making algorithm for AVs
- A real-time, data-driven human injury prediction model was established
- We applied mining tools to identify mechanisms embedded in accumulated crash data
- We analyzed traffic ethical issues quantitatively, closer to feasible solutions
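The core decision rule can be sketched schematically: among feasible maneuvers, choose the one whose predicted total injury risk is lowest. The maneuver names and risk numbers below are invented placeholders; the paper itself uses a data-driven injury prediction model trained on real-world crash data.

```python
# Schematic sketch of an injury-risk-mitigation decision rule in the spirit of
# the abstract: pick the maneuver whose predicted total injury risk is lowest.
# The risk values here are stand-ins, not outputs of the paper's model.
from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    # Predicted injury probability per involved road user, e.g. from a
    # learned model f(impact speed, angle, road-user type, ...)
    injury_risks: dict[str, float]

def choose_maneuver(options: list[Maneuver]) -> Maneuver:
    # Minimize the summed (possibly weighted) injury risk over participants
    return min(options, key=lambda m: sum(m.injury_risks.values()))

options = [
    Maneuver("brake_straight", {"occupant": 0.30, "pedestrian": 0.55}),
    Maneuver("swerve_left",    {"occupant": 0.45, "pedestrian": 0.10}),
]
print(choose_maneuver(options).name)  # -> swerve_left (lower total risk)
```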
Affiliation(s)
- Qingfan Wang
- State Key Laboratory of Automotive Safety and Energy, School of Vehicle and Mobility, Tsinghua University, Beijing 100084, China
- Qing Zhou
- State Key Laboratory of Automotive Safety and Energy, School of Vehicle and Mobility, Tsinghua University, Beijing 100084, China
- Miao Lin
- China Automotive Technology & Research Center (CATARC), Tianjin 300399, China
- Bingbing Nie
- State Key Laboratory of Automotive Safety and Energy, School of Vehicle and Mobility, Tsinghua University, Beijing 100084, China
9
Arfini S, Spinelli D, Chiffi D. Ethics of self-driving cars: A naturalistic approach. Minds Mach (Dordr) 2022. DOI: 10.1007/s11023-022-09604-y.
Abstract
The potential development of self-driving cars (also known as autonomous vehicles or AVs, particularly Level 5 AVs) has called the attention of different interested parties. Yet there are still only a few relevant international regulations on them, no emergency patterns accepted by communities and Original Equipment Manufacturers (OEMs), and no publicly accepted solutions to some of their pending ethical problems. This paper therefore aims to provide some possible answers to these moral and practical dilemmas. In particular, we focus on what AVs should do in no-win scenarios and on who should be held responsible for these types of decisions. A naturalistic perspective on ethics informs our proposal, which, we argue, could represent a pragmatic and realistic solution to the regulation of AVs. We discuss the proposals already set out in the current literature regarding both policy-making strategies and theoretical accounts. We consider and reject descriptive approaches to the problem, as well as the option of using either a strict deontological view or a solely utilitarian one to set AVs' ethical choices. Instead, to provide concrete answers to AVs' ethical problems, we examine three hierarchical levels of decision-making processes: country-wide regulations, OEM policies, and buyers' moral attitudes. By appropriately distributing ethical decisions and considering their practical implications, we maintain that our proposal based on ethical naturalism recognizes the importance of all stakeholders and allows the most able of them (the OEMs and buyers) to act on, and reflect on, the moral leeway and weight of their options.
10
Mayer MM, Bell R, Buchner A. Self-protective and self-sacrificing preferences of pedestrians and passengers in moral dilemmas involving autonomous vehicles. PLoS One 2021; 16:e0261673. PMID: 34941936; PMCID: PMC8700044; DOI: 10.1371/journal.pone.0261673.
Abstract
As autonomous vehicles enter daily traffic, it becomes increasingly likely that they will be involved in accident scenarios in which decisions must be made about how to distribute harm among the involved parties. In four experiments, participants made moral decisions from the perspective of a passenger, a pedestrian, or an observer. The results show that the preferred action of an autonomous vehicle strongly depends on perspective. Participants' judgments reflect self-protective tendencies even when utilitarian motives clearly favor one of the available options. However, with an increasing number of lives at stake, utilitarian preferences increased. In a fifth experiment, we tested whether these results were tainted by social desirability, but this was not the case. Overall, the results confirm that passengers, pedestrians, and observers differ strongly in their preferred course of action in critical incidents. It is therefore important that the actions of autonomous vehicles are not only oriented towards the needs of their passengers but also take the interests of other road users into account. Even though utilitarian motives cannot fully reconcile the conflicting interests of passengers and pedestrians, there seem to be some moral preferences that a majority of participants agree upon regardless of their perspective, including the utilitarian preference to save several other lives over one's own.
Affiliation(s)
- Maike M. Mayer
- Department of Experimental Psychology, Heinrich Heine University Düsseldorf, Düsseldorf, Germany
- Raoul Bell
- Department of Experimental Psychology, Heinrich Heine University Düsseldorf, Düsseldorf, Germany
- Axel Buchner
- Department of Experimental Psychology, Heinrich Heine University Düsseldorf, Düsseldorf, Germany
11
Yokoi R, Nakayachi K. Trust in autonomous cars: Exploring the role of shared moral values, reasoning, and emotion in safety-critical decisions. Human Factors 2021; 63:1465-1484. PMID: 32663047; DOI: 10.1177/0018720820933041.
Abstract
Objective: Autonomous cars (ACs) controlled by artificial intelligence are expected to play a significant role in transportation in the near future. This study investigated determinants of trust in ACs. Background: Trust in ACs influences different variables, including the intention to adopt AC technology. Several studies on risk perception have verified that shared values determine trust in risk managers, and previous research has confirmed the effect of value similarity on trust in artificial intelligence. We focused on moral beliefs, specifically utilitarianism (belief in promoting a greater good) and deontology (belief in condemning deliberate harm), and tested the effects of shared moral beliefs on trust in ACs. Method: We conducted three experiments (N = 128, 71, and 196, respectively), adopting a thought experiment similar to the well-known trolley problem. We manipulated shared moral beliefs (shared vs. unshared) and driver (AC vs. human), providing participants with different moral dilemma scenarios. Trust in ACs was measured through a questionnaire. Results: Experiment 1 showed that shared utilitarian belief strongly influenced trust in ACs. In Experiments 2 and 3, however, we did not find statistical evidence that shared deontological belief had an effect on trust in ACs. Conclusion: The results of the three experiments suggest that the effect of shared moral beliefs on trust varies depending on the values that ACs share with humans. Application: To promote AC implementation, policymakers and developers need to understand which values are shared between ACs and humans to enhance trust in ACs.
12
Siegel J, Pappas G. Morals, ethics, and the technology capabilities and limitations of automated and self-driving vehicles. AI & Society 2021. DOI: 10.1007/s00146-021-01277-y.
13
Autonomous systems in ethical dilemmas: Attitudes toward randomization. Computers in Human Behavior Reports 2021. DOI: 10.1016/j.chbr.2021.100145.
14
Tamai R, Igarashi T. Odd man out for everyone: The justification of ostracism to maximize the whole group's benefits. European Journal of Social Psychology 2021. DOI: 10.1002/ejsp.2725.
Affiliation(s)
- Ryuichi Tamai
- School of Information, Kochi University of Technology, Kami, Japan
- Tasuku Igarashi
- Graduate School of Education and Human Development, Nagoya University, Nagoya, Japan
15
Martin R, Kusev P, Teal J, Baranova V, Rigal B. Moral decision making: From Bentham to veil of ignorance via perspective taking accessibility. Behav Sci (Basel) 2021; 11:66. PMID: 34062808; PMCID: PMC8147336; DOI: 10.3390/bs11050066.
Abstract
Morally sensitive decisions and evaluations pervade many everyday human activities. Philosophers, economists, psychologists and behavioural scientists researching such decision-making typically explore the principles, processes and predictors that constitute human moral decision-making. Crucially, very little research has explored the theoretical and methodological development, supported by empirical evidence, of utilitarian theories of moral decision-making. Accordingly, in this critical review article, we invite the reader on a moral journey from Jeremy Bentham's utilitarianism to veil-of-ignorance reasoning, via a recent theoretical proposal emphasising utilitarian moral behaviour: perspective-taking accessibility (PT accessibility). PT accessibility research revealed that providing participants with access to all situational perspectives in moral scenarios eliminates the inconsistency between moral judgements and choices previously reported in the literature. Moreover, in contrast to previous theoretical and methodological accounts, moral scenarios and tasks with full PT accessibility provide participants with unbiased even odds (neither risk averse nor risk seeking) and impartiality. We conclude that PT accessibility as proposed by Martin et al., a new type of veil of ignorance with even odds that does not trigger self-interest, risk-related preferences or decision biases, is necessary in order to measure humans' prosocial utilitarian behaviour and promote its societal benefits.
Affiliation(s)
- Rose Martin
- Department of People and Organisations, Surrey Business School, University of Surrey, Guildford GU2 7XH, UK
- Petko Kusev
- Behavioural Research Centre, Huddersfield Business School, The University of Huddersfield, Huddersfield HD1 3DH, UK
- Joseph Teal
- Behavioural Research Centre, Huddersfield Business School, The University of Huddersfield, Huddersfield HD1 3DH, UK
- Victoria Baranova
- Department of Psychology, Lomonosov Moscow State University, 125009 Moscow, Russia
- Bruce Rigal
- Institute of Business, Law and Society, St Mary's University, London TW1 4SX, UK
16
Categorization and challenges of utilitarianisms in the context of artificial intelligence. AI & Society 2021. DOI: 10.1007/s00146-021-01169-1.
17
Computer says I don't know: An empirical approach to capture moral uncertainty in artificial intelligence. Minds Mach (Dordr) 2021. DOI: 10.1007/s11023-021-09556-9.
Abstract
As AI systems become increasingly autonomous, they are expected to engage in decision-making processes that have moral implications. In this research we integrate theoretical and empirical lines of thought to address moral reasoning and moral uncertainty in AI systems. We reconceptualize the metanormative framework for decision-making under moral uncertainty and operationalize it through a latent class choice model. The core idea is that moral heterogeneity in society can be codified in terms of a small number of classes with distinct moral preferences, and that this codification can be used to express the moral uncertainty of an AI. Choice analysis allows for the identification of classes and their moral preferences based on observed choice data. Our reformulation of the metanormative framework is theory-rooted and practical in the sense that it avoids runtime issues in real-time applications. To illustrate our approach we conceptualize a society in which AI systems are in charge of making policy choices. While one of the systems uses a baseline morally certain model, the other uses a morally uncertain model. We highlight cases in which the AI systems disagree about the policy to be chosen, thus illustrating the need to capture moral uncertainty in AI systems.
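A toy rendering of this operationalization, with invented class weights and utilities: moral uncertainty is a distribution over latent moral classes, an option's score is its probability-weighted choiceworthiness, and near-ties can be flagged as "I don't know".

```python
# Toy sketch of the metanormative idea: moral uncertainty is a probability
# distribution over latent classes with distinct moral preferences, and an
# option's score is its probability-weighted choiceworthiness. The class
# weights and utilities below are invented for illustration.
import numpy as np

class_weights = np.array([0.6, 0.4])        # P(moral class), e.g. estimated
                                            # by a latent class choice model
# Choiceworthiness of each policy option under each class's preferences
utilities = np.array([[0.9, 0.2],           # class 1: favors option A
                      [0.1, 0.8]])          # class 2: favors option B

expected = class_weights @ utilities        # expected moral choiceworthiness
best = expected.argmax()
# Flag near-ties as "I don't know" rather than forcing a verdict
margin = np.sort(expected)[-1] - np.sort(expected)[-2]
print("option", "AB"[best], "margin", round(margin, 3),
      "abstain" if margin < 0.1 else "choose")
```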
18
de Melo CM, Marsella S, Gratch J. Risk of injury in moral dilemmas with autonomous vehicles. Front Robot AI 2021; 7:572529. PMID: 34212006; PMCID: PMC8239464; DOI: 10.3389/frobt.2020.572529.
Abstract
As autonomous machines, such as automated vehicles (AVs) and robots, become pervasive in society, they will inevitably face moral dilemmas in which they must make decisions that risk injuring humans. Prior research, however, has framed these dilemmas in starkly simple terms, casting decisions as strictly life-and-death and neglecting how the risk of injury to the involved parties influences the outcome. Here, we focus on this gap and present experimental work that systematically studies the effect of risk of injury on the decisions people make in these dilemmas. In four experiments, participants were asked to program their AVs to either save five pedestrians, which we refer to as the utilitarian choice, or save the driver, the nonutilitarian choice. The results indicate that most participants made the utilitarian choice but that this choice was moderated in important ways by perceived risk to the driver and to the pedestrians. As a second contribution, we demonstrate the value of formulating AV moral dilemmas in a game-theoretic framework that considers the possible influence of others' behavior. In the fourth experiment, we show that participants were more (less) likely to make the utilitarian choice the more utilitarian (nonutilitarian) other drivers behaved; furthermore, contrary to the game-theoretic prediction that decision-makers inevitably converge to nonutilitarianism, we found significant evidence of utilitarianism. We discuss theoretical implications for our understanding of human decision-making in moral dilemmas and practical guidelines for the design of autonomous machines that solve these dilemmas while remaining likely to be adopted in practice.
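The game-theoretic prediction mentioned here, that self-interest drives the population toward nonutilitarian programming, can be illustrated with a small replicator-dynamics sketch. The payoff numbers are invented for illustration and are not taken from the paper; note that the authors' data contradicted this predicted convergence.

```python
# Illustrative two-strategy population sketch of the game-theoretic framing:
# each driver programs an AV to be utilitarian (save the pedestrians) or
# nonutilitarian (save the driver). Payoffs are made-up numbers chosen so
# that nonutilitarianism is individually tempting, mirroring the classic
# prediction that self-interest erodes the utilitarian choice.
def replicator_step(x, dt=0.1):
    """x = current share of utilitarian programmers in the population."""
    # Payoff to each strategy given others' mix; everyone is safer as a
    # pedestrian when more drivers are utilitarian.
    u_util = 2.0 * x + 1.0 * (1 - x)     # safer streets, but some self-risk
    u_nonu = 2.5 * x + 1.2 * (1 - x)     # free-rides on others' utilitarianism
    avg = x * u_util + (1 - x) * u_nonu
    return x + dt * x * (u_util - avg)   # replicator dynamics update

x = 0.9
for _ in range(200):
    x = replicator_step(x)
print(round(x, 3))  # drifts toward 0: the predicted nonutilitarian convergence
```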
Affiliation(s)
- Celso M de Melo
- CCDC US Army Research Laboratory, Playa Vista, CA, United States
- Stacy Marsella
- College of Computer and Information Science, Northeastern University, Boston, MA, United States
- Jonathan Gratch
- Institute for Creative Technologies, University of Southern California, Playa Vista, CA, United States
20
Kallioinen N, Pershina M, Zeiser J, Nosrat Nezami F, Pipa G, Stephan A, König P. Moral judgements on the actions of self-driving cars and human drivers in dilemma situations from different perspectives. Front Psychol 2019; 10:2415. PMID: 31749736; PMCID: PMC6844247; DOI: 10.3389/fpsyg.2019.02415.
Abstract
Self-driving cars have the potential to greatly improve public safety. However, their introduction onto public roads must overcome both ethical and technical challenges. To further understand the ethical issues of introducing self-driving cars, we conducted two moral judgement studies investigating potential differences in the moral norms applied to human drivers and self-driving cars. In the experiments, participants made judgements on a series of dilemma situations involving human drivers or self-driving cars. We manipulated the perspective from which situations were presented in order to ascertain its effect on moral judgements. Two main findings emerged. First, human drivers and self-driving cars were largely judged similarly, although there was a stronger tendency to prefer that self-driving cars act in ways that minimize harm, compared to human drivers. Second, perspective influenced judgements in some situations. Specifically, when considering situations from the perspective of a pedestrian, people preferred actions that would endanger car occupants instead of themselves. However, they showed no such self-preservation tendency when the alternative was to endanger other pedestrians to save themselves. This effect was more prevalent for judgements on human drivers than on self-driving cars. Overall, the results extend and agree with previous research, again contradicting existing ethical guidelines for self-driving car decision-making and highlighting the difficulties of reconciling public opinion with decision-making algorithms.
Affiliation(s)
- Noa Kallioinen
- Institute of Cognitive Science, Osnabrück University, Osnabrück, Germany
- Maria Pershina
- Institute of Cognitive Science, Osnabrück University, Osnabrück, Germany
- Jannik Zeiser
- Institute of Cognitive Science, Osnabrück University, Osnabrück, Germany; Institute of Philosophy, Leibniz University Hannover, Hanover, Germany
- Gordon Pipa
- Institute of Cognitive Science, Osnabrück University, Osnabrück, Germany
- Achim Stephan
- Institute of Cognitive Science, Osnabrück University, Osnabrück, Germany
- Peter König
- Institute of Cognitive Science, Osnabrück University, Osnabrück, Germany; Department of Neurophysiology and Pathophysiology, Center of Experimental Medicine, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
21
Sütfeld LR, Ehinger BV, König P, Pipa G. How does the method change what we measure? Comparing virtual reality and text-based surveys for the assessment of moral decisions in traffic dilemmas. PLoS One 2019; 14:e0223108. PMID: 31596864; PMCID: PMC6785059; DOI: 10.1371/journal.pone.0223108.
Abstract
The question of how self-driving cars should behave in dilemma situations has recently attracted much attention in science, the media, and society. A growing number of publications amass insight into the factors underlying the choices we make in such situations, often using forced-choice paradigms closely linked to the trolley dilemma. The methodology used to address these questions, however, varies widely between studies, ranging from fully immersive virtual reality settings to completely text-based surveys. In this paper we compare virtual reality and text-based assessments, analyzing the effect that different methodological factors have on participants' decisions and emotional responses. We present two studies comparing a total of six conditions that vary across three dimensions: the level of abstraction, the use of virtual reality, and time constraints. Our results show that the moral decisions made in this context are not strongly influenced by the assessment method, and the compared methods ultimately appear to measure very similar constructs. Furthermore, we add to the pool of evidence on the underlying factors of moral judgment in traffic dilemmas, both in terms of general preferences, i.e., features of the particular situation and potential victims, and in terms of individual differences between participants, such as their age and gender.
Affiliation(s)
- Leon René Sütfeld
- Institute of Cognitive Science, Osnabrück University, Osnabrück, Germany
- Peter König
- Institute of Cognitive Science, Osnabrück University, Osnabrück, Germany
- Gordon Pipa
- Institute of Cognitive Science, Osnabrück University, Osnabrück, Germany
22
Sivill T. Ethical and statistical considerations in models of moral judgments. Front Robot AI 2019; 6:39. PMID: 33501055; PMCID: PMC7805917; DOI: 10.3389/frobt.2019.00039.
Abstract
This work extends recent advancements in computational models of moral decision making by using mathematical and philosophical theory to suggest adaptations to the state of the art. It demonstrates the importance of model assumptions and considers alternatives to the normal distribution when modeling ethical principles. We show how the ethical theories of utilitarianism and deontology can be embedded into informative prior distributions. We then expand the state of the art to consider ethical dilemmas beyond the trolley problem and show the adaptations needed to address this added complexity. The adaptations made in this work are not solely intended to improve recent models; they also aim to raise awareness of the importance of interpreting results relative to the assumptions made, either implicitly or explicitly, in model construction.
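As one possible reading of how ethical theories can be embedded into informative priors, here is a hedged sketch using Beta priors over the probability that a deliberately harmful but outcome-optimal action is judged permissible. The hyperparameters and data are illustrative assumptions, not the paper's models.

```python
# Hedged sketch of encoding ethical theories as informative priors: Beta
# priors over the probability that a deliberately harmful but outcome-optimal
# action is judged permissible. Hyperparameters are illustrative assumptions,
# not values taken from the paper.
from scipy import stats

priors = {
    # Utilitarianism: mass near 1 (harm is permissible when it maximizes good)
    "utilitarian": stats.beta(8, 2),
    # Deontology: mass near 0 (deliberate harm is condemned regardless of outcome)
    "deontological": stats.beta(2, 8),
}

# Bayesian update with hypothetical judgment data: 6 of 10 respondents endorse
k, n = 6, 10
for name, prior in priors.items():
    a, b = prior.args
    posterior = stats.beta(a + k, b + (n - k))  # conjugate Beta-Binomial update
    print(f"{name}: prior mean {prior.mean():.2f} -> "
          f"posterior mean {posterior.mean():.2f}")
```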
23
Ju U, Kang J, Wallraven C. You or me? Personality traits predict sacrificial decisions in an accident situation. IEEE Transactions on Visualization and Computer Graphics 2019; 25:1898-1907. PMID: 30802865; DOI: 10.1109/tvcg.2019.2899227.
Abstract
Emergency situations during car driving sometimes force the driver to make a sudden decision. Predicting these decisions has important applications in updating risk analyses for insurance and can also inform guidelines for autonomous vehicles. Studying such behavior in experimental settings, however, is limited by ethical issues, as it would endanger people's lives. Here, we employed the potential of virtual reality (VR) to investigate decision-making in an extreme situation in which participants would have to sacrifice others in order to save themselves. In a VR driving simulation, participants first trained to complete a difficult course with multiple crossroads in which a wrong turn would send the car off a cliff. In the testing phase, obstacles suddenly appeared on the "safe" turn of a crossroad: for the control group, the obstacles were trees, whereas for the experimental group, they were pedestrians. In both groups, drivers had to decide between falling off the cliff and colliding with the obstacles. Differences in personality traits predicted this decision: in the experimental group, drivers who collided with the pedestrians had significantly higher psychopathy and impulsivity traits, whereas impulsivity alone was somewhat predictive in the control group. Other factors such as heart rate differences, gender, video game expertise, and driving experience were not predictive of the emergency decision in either group. Our results show that self-interest-related personality traits affect decision-making when choosing between preservation of self or others in extreme situations, and they showcase the potential of virtual reality for studying and modeling human decision-making.
24
Wolff A, Gomez-Pilar J, Nakao T, Northoff G. Interindividual neural differences in moral decision-making are mediated by alpha power and delta/theta phase coherence. Sci Rep 2019; 9:4432. PMID: 30872647; PMCID: PMC6418194; DOI: 10.1038/s41598-019-40743-y.
Abstract
As technology in artificial intelligence has developed, the question of how to program driverless cars to respond in an emergency has arisen. It was recently shown that approval of the consequential behavior of driverless cars varied with the number of lives saved and showed interindividual differences, with approval increasing alongside the number of lives saved. In the present study, interindividual differences in individualized moral decision-making were investigated at both the behavioral and neural level using EEG. It was found that alpha event-related spectral perturbation (ERSP) and delta/theta phase-locking, measured as intertrial coherence (ITC) and phase-locking value (PLV), play a central role in mediating interindividual differences in moral decision-making. In addition, very late alpha activity differences between individualized and shared stimuli, and delta/theta ITC, were shown to be closely related to reaction time and subjectively perceived emotional distress. This demonstrates that interindividual differences in moral decision-making are mediated neuronally by various markers (late alpha ERSP and delta/theta ITC) as well as psychologically by reaction time and perceived emotional distress. Our data show, for the first time, how, and according to which neuronal and behavioral measures, interindividual differences in moral dilemmas can be measured.
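For orientation, intertrial coherence, one of the phase-locking measures named above, can be computed as the magnitude of the trial-averaged unit phase vector. The sketch below uses synthetic signals in place of EEG recordings; all parameters are illustrative.

```python
# Minimal sketch of the intertrial coherence (ITC) measure: band-filter each
# trial, extract instantaneous phase, and average the unit phase vectors
# across trials. Synthetic data stand in for the study's EEG recordings.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 250                                   # sampling rate (Hz), illustrative
t = np.arange(0, 1, 1 / fs)
rng = np.random.default_rng(1)
# 40 trials of a noisy 5 Hz (theta) response with partly consistent phase
trials = np.array([np.cos(2 * np.pi * 5 * t + rng.normal(0, 0.7))
                   + 0.5 * rng.normal(size=t.size) for _ in range(40)])

b, a = butter(4, [4, 8], btype="bandpass", fs=fs)   # theta band of interest
phases = np.angle(hilbert(filtfilt(b, a, trials, axis=1), axis=1))
itc = np.abs(np.exp(1j * phases).mean(axis=0))      # 1 = perfect phase locking
print(itc.mean().round(2))
```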
Affiliation(s)
- Annemarie Wolff
- Institute of Mental Health Research, University of Ottawa, Ottawa, Canada
- Javier Gomez-Pilar
- Biomedical Engineering Group, Higher Technical School of Telecommunications Engineering, University of Valladolid, Valladolid, Spain
- Takashi Nakao
- Department of Psychology, Graduate School of Education, Hiroshima University, Hiroshima, Japan
- Georg Northoff
- Institute of Mental Health Research, University of Ottawa, Ottawa, Canada