1. Salagean A, Wu M, Fletcher G, Cosker D, Fraser DS. The Utilitarian Virtual Self - Using Embodied Personalized Avatars to Investigate Moral Decision-Making in Semi-Autonomous Vehicle Dilemmas. IEEE Transactions on Visualization and Computer Graphics 2024; 30:2162-2172. PMID: 38437115. DOI: 10.1109/tvcg.2024.3372121.
Abstract
Embodied personalized avatars are a promising new tool to investigate moral decision-making by transposing the user into the "middle of the action" in moral dilemmas. Here, we tested whether avatar personalization and motor control could impact moral decision-making, physiological reactions and reaction times, as well as embodiment, presence and avatar perception. Seventeen participants, who had their personalized avatars created in a previous study, took part in a range of incongruent (i.e., harmful action led to better overall outcomes) and congruent (i.e., harmful action led to trivial outcomes) moral dilemmas as the drivers of a semi-autonomous car. They embodied four different avatars (counterbalanced - personalized motor control, personalized no motor control, generic motor control, generic no motor control). Overall, participants took a utilitarian approach by performing harmful actions only to maximize outcomes. We found increased physiological arousal (SCRs and heart rate) for personalized avatars compared to generic avatars, and increased SCRs in motor control conditions compared to no motor control. Participants had slower reaction times when they had motor control over their avatars, possibly hinting at more elaborate decision-making processes. Presence was also higher in motor control compared to no motor control conditions. Embodiment ratings were higher for personalized avatars, and generally, personalization and motor control were perceptually positive features. These findings highlight the utility of personalized avatars and open up a range of future research possibilities that could benefit from the affordances of this technology and simulate, more closely than ever, real-life action.
2. Tolmeijer S, Arpatzoglou V, Rossetto L, Bernstein A. Trolleys, crashes, and perception: a survey on how current autonomous vehicles debates invoke problematic expectations. AI and Ethics 2023; 4:473-484. PMID: 38737783. PMCID: PMC11078731. DOI: 10.1007/s43681-023-00284-7.
Abstract
Ongoing debates about ethical guidelines for autonomous vehicles mostly focus on variations of the 'Trolley Problem'. Using variations of this ethical dilemma in preference surveys, possible implications for autonomous vehicle policy are discussed. In this work, we argue that the lack of realism in such scenarios leads to limited practical insights. We ran an ethical preference survey for autonomous vehicles that included more realistic features, such as time pressure and a non-binary decision option. Our results indicate that such changes lead to different outcomes, calling into question how far current outcomes can be generalized. Additionally, we investigate the framing effects of the capabilities of autonomous vehicles and argue that ongoing debates need to set realistic expectations about autonomous vehicle challenges. Based on our results, we call on the field to re-frame the current debate towards more realistic discussions beyond the Trolley Problem, focusing on which autonomous vehicle behavior is considered unacceptable, since a consensus on the right solution is not reachable.
Affiliation(s)
- Suzanne Tolmeijer
- Information Systems, Socio-Technical Systems Design (WISTS), University of Hamburg, Vogt-Kölln-Straße 30, 22527 Hamburg, Germany
- Vicky Arpatzoglou
- Department of Informatics, University of Zurich, Binzmühlestrasse 14, 8050 Zurich, Switzerland
- Luca Rossetto
- Department of Informatics, University of Zurich, Binzmühlestrasse 14, 8050 Zurich, Switzerland
- Abraham Bernstein
- Department of Informatics, University of Zurich, Binzmühlestrasse 14, 8050 Zurich, Switzerland
3. Bruno G, Sarlo M, Lotto L, Cellini N, Cutini S, Spoto A. Moral judgment, decision times and emotional salience of a new developed set of sacrificial manual driving dilemmas. Current Psychology 2022; 42:1-14. PMID: 35035197. PMCID: PMC8752177. DOI: 10.1007/s12144-021-02511-y.
Abstract
The growing interest in moral judgment in driver and autonomous vehicle behavior highlights the importance of investigating the suitability of sacrificial dilemmas as experimental tools in traffic psychology. To this aim, a set of validated sacrificial trolley problems and a new set of trolley-like driving dilemmas were compared in an online survey experiment, providing normative values for rates of participants' choices, decision times, evaluations of emotional valence and arousal experienced during the decision process, and ratings of moral acceptability. Results showed that while both sets of dilemmas led to a more frequent selection of utilitarian outcomes, the driving-type dilemmas seemed to promote faster decisions based mainly on the utilitarian moral code. No further differences were observed between the two sets, confirming the reliability of the moral dilemma tool in the investigation of moral driving behaviors. We suggest that as moral judgments and behaviors become more lifelike, the individual's moral inclinations emerge more automatically and effectively. This new driving-type dilemma set may help researchers working in traffic psychology and moral decision-making to develop realistic moral scenarios more easily in the context of autonomous and non-autonomous transportation.
Affiliation(s)
- Giovanni Bruno
- Department of General Psychology, University of Padua, Via Venezia 8, 35131 Padua, Italy
- Michela Sarlo
- Department of Communication Sciences, Humanities and International Studies, University of Urbino Carlo Bo, Urbino, Italy
- Lorella Lotto
- Department of Developmental Psychology and Socialization, University of Padua, Padua, Italy
- Nicola Cellini
- Department of General Psychology, University of Padua, Via Venezia 8, 35131 Padua, Italy
- Department of Biomedical Sciences, University of Padua, Padua, Italy
- Padova Neuroscience Center, University of Padua, Padua, Italy
- Human Inspired Technology Center, University of Padua, Padua, Italy
- Simone Cutini
- Department of Developmental Psychology and Socialization, University of Padua, Padua, Italy
- Padova Neuroscience Center, University of Padua, Padua, Italy
- Andrea Spoto
- Department of General Psychology, University of Padua, Via Venezia 8, 35131 Padua, Italy
4. Mayer MM, Bell R, Buchner A. Self-protective and self-sacrificing preferences of pedestrians and passengers in moral dilemmas involving autonomous vehicles. PLoS One 2021; 16:e0261673. PMID: 34941936. PMCID: PMC8700044. DOI: 10.1371/journal.pone.0261673.
Abstract
Upon the introduction of autonomous vehicles into daily traffic, it becomes increasingly likely that they will be involved in accident scenarios in which decisions have to be made about how to distribute harm among the parties involved. In four experiments, participants made moral decisions from the perspective of a passenger, a pedestrian, or an observer. The results show that the preferred action of an autonomous vehicle strongly depends on perspective. Participants' judgments reflect self-protective tendencies even when utilitarian motives clearly favor one of the available options. However, with an increasing number of lives at stake, utilitarian preferences increased. In a fifth experiment, we tested whether these results were tainted by social desirability, but this was not the case. Overall, the results confirm that passengers, pedestrians, and observers differ strongly in their preferred course of action in critical incidents. It is therefore important that the actions of autonomous vehicles are not only oriented towards the needs of their passengers but also take the interests of other road users into account. Even though utilitarian motives cannot fully reconcile the conflicting interests of passengers and pedestrians, there seem to be some moral preferences that a majority of participants agree upon regardless of their perspective, including the utilitarian preference to save several other lives over one's own.
Affiliation(s)
- Maike M. Mayer
- Department of Experimental Psychology, Heinrich Heine University Düsseldorf, Düsseldorf, Germany
- Raoul Bell
- Department of Experimental Psychology, Heinrich Heine University Düsseldorf, Düsseldorf, Germany
- Axel Buchner
- Department of Experimental Psychology, Heinrich Heine University Düsseldorf, Düsseldorf, Germany
5. Yokoi R, Nakayachi K. Trust in Autonomous Cars: Exploring the Role of Shared Moral Values, Reasoning, and Emotion in Safety-Critical Decisions. Human Factors 2021; 63:1465-1484. PMID: 32663047. DOI: 10.1177/0018720820933041.
Abstract
OBJECTIVE: Autonomous cars (ACs) controlled by artificial intelligence are expected to play a significant role in transportation in the near future. This study investigated determinants of trust in ACs.
BACKGROUND: Trust in ACs influences different variables, including the intention to adopt AC technology. Several studies on risk perception have verified that shared values determine trust in risk managers, and previous research has confirmed the effect of value similarity on trust in artificial intelligence. We focused on moral beliefs, specifically utilitarianism (belief in promoting a greater good) and deontology (belief in condemning deliberate harm), and tested the effects of shared moral beliefs on trust in ACs.
METHOD: We conducted three experiments (N = 128, 71, and 196, respectively), adopting a thought experiment similar to the well-known trolley problem. We manipulated shared moral beliefs (shared vs. unshared) and driver (AC vs. human), presenting participants with different moral dilemma scenarios. Trust in ACs was measured through a questionnaire.
RESULTS: Experiment 1 showed that shared utilitarian belief strongly influenced trust in ACs. In Experiments 2 and 3, however, we found no statistical evidence that shared deontological belief had an effect on trust in ACs.
CONCLUSION: The results of the three experiments suggest that the effect of shared moral beliefs on trust varies depending on the values that ACs share with humans.
APPLICATION: To promote AC implementation, policymakers and developers need to understand which values are shared between ACs and humans in order to enhance trust in ACs.
6. Sosa FA, Ullman T, Tenenbaum JB, Gershman SJ, Gerstenberg T. Moral dynamics: Grounding moral judgment in intuitive physics and intuitive psychology. Cognition 2021; 217:104890. PMID: 34487974. DOI: 10.1016/j.cognition.2021.104890.
Abstract
When holding others morally responsible, we care about what they did, and what they thought. Traditionally, research in moral psychology has relied on vignette studies, in which a protagonist's actions and thoughts are explicitly communicated. While this research has revealed what variables are important for moral judgment, such as actions and intentions, it is limited in providing a more detailed understanding of exactly how these variables affect moral judgment. Using dynamic visual stimuli that allow for a more fine-grained experimental control, recent studies have proposed a direct mapping from visual features to moral judgments. We embrace the use of visual stimuli in moral psychology, but question the plausibility of a feature-based theory of moral judgment. We propose that the connection from visual features to moral judgments is mediated by an inference about what the observed action reveals about the agent's mental states, and what causal role the agent's action played in bringing about the outcome. We present a computational model that formalizes moral judgments of agents in visual scenes as computations over an intuitive theory of physics combined with an intuitive theory of mind. We test the model's quantitative predictions in three experiments across a wide variety of dynamic interactions.
Affiliation(s)
- Felix A Sosa
- Department of Psychology, Harvard University, United States; Center for Brains, Minds, and Machines, MIT, United States
- Tomer Ullman
- Department of Psychology, Harvard University, United States; Center for Brains, Minds, and Machines, MIT, United States
- Joshua B Tenenbaum
- Department of Brain and Cognitive Sciences, MIT, United States; Center for Brains, Minds, and Machines, MIT, United States
- Samuel J Gershman
- Department of Psychology, Harvard University, United States; Center for Brain Science, Harvard University, United States; Center for Brains, Minds, and Machines, MIT, United States
7. Savulescu J, Gyngell C, Kahane G. Collective Reflective Equilibrium in Practice (CREP) and controversial novel technologies. Bioethics 2021; 35:652-663. PMID: 33945162. PMCID: PMC8581760. DOI: 10.1111/bioe.12869.
Abstract
In this paper, we investigate how data about public preferences may be used to inform policy around the use of controversial novel technologies, using public preferences about autonomous vehicles (AVs) as a case study. We first summarize the recent 'Moral Machine' study, which generated preference data from millions of people regarding how they think AVs should respond to emergency situations. We argue that while such preferences cannot be used to directly inform policy, they should not be disregarded. We defend an approach that we call 'Collective Reflective Equilibrium in Practice' (CREP). In CREP, data on public attitudes function as an input into a deliberative process that looks for coherence between attitudes, behaviours and competing ethical principles. We argue that in cases of reasonable moral disagreement, data on public attitudes should play a much greater role in shaping policies than in areas of ethical consensus. We apply CREP to some of the global preferences about AVs uncovered by the Moral Machine study. We intend this discussion both as a substantive contribution to the debate about the programming of ethical AVs, and as an illustration of how CREP works. We argue that CREP provides a principled way of using some public preferences as an input for policy, while justifiably disregarding others.
Affiliation(s)
- Julian Savulescu
- Oxford Uehiro Centre for Practical Ethics, University of Oxford, Oxford, United Kingdom
- Wellcome Centre for Ethics and Humanities, University of Oxford, Oxford, United Kingdom
- Biomedical Ethics Research Group, Murdoch Children's Research Institute, Parkville, Australia
- Melbourne Law School, University of Melbourne, Melbourne, Australia
- Christopher Gyngell
- Biomedical Ethics Research Group, Murdoch Children's Research Institute, Parkville, Australia
- Melbourne Law School, University of Melbourne, Melbourne, Australia
- Department of Paediatrics, University of Melbourne, Melbourne, Australia
- Guy Kahane
- Oxford Uehiro Centre for Practical Ethics, University of Oxford, Oxford, United Kingdom
- Wellcome Centre for Ethics and Humanities, University of Oxford, Oxford, United Kingdom
8. Holl E, Bernard S, Melzer A. Moral decision-making in video games: A focus group study on player perceptions. Human Behavior and Emerging Technologies 2020. DOI: 10.1002/hbe2.189.
Affiliation(s)
- Elisabeth Holl
- Department of Behavioural and Cognitive Science, Institute for Health and Behaviour, Media and Experimental Lab, University of Luxembourg, Luxembourg
- Steve Bernard
- Department of Behavioural and Cognitive Science, Institute for Health and Behaviour, Media and Experimental Lab, University of Luxembourg, Luxembourg
- André Melzer
- Department of Behavioural and Cognitive Science, Institute for Health and Behaviour, Media and Experimental Lab, University of Luxembourg, Luxembourg
9. Kallioinen N, Pershina M, Zeiser J, Nosrat Nezami F, Pipa G, Stephan A, König P. Moral Judgements on the Actions of Self-Driving Cars and Human Drivers in Dilemma Situations From Different Perspectives. Front Psychol 2019; 10:2415. PMID: 31749736. PMCID: PMC6844247. DOI: 10.3389/fpsyg.2019.02415.
Abstract
Self-driving cars have the potential to greatly improve public safety. However, their introduction onto public roads must overcome both ethical and technical challenges. To further understand the ethical issues of introducing self-driving cars, we conducted two moral judgement studies investigating potential differences in the moral norms applied to human drivers and self-driving cars. In the experiments, participants made judgements on a series of dilemma situations involving human drivers or self-driving cars. We manipulated the perspective from which situations were presented in order to ascertain the effect of perspective on moral judgements. Two main findings emerged. First, human drivers and self-driving cars were largely judged similarly, although there was a stronger tendency to prefer that self-driving cars act in ways that minimize harm, compared to human drivers. Second, there was an indication that perspective influences judgements in some situations. Specifically, when considering situations from the perspective of a pedestrian, people preferred actions that would endanger car occupants instead of themselves. However, they did not show such a self-preservation tendency when the alternative was to endanger other pedestrians to save themselves. This effect was more prevalent for judgements on human drivers than on self-driving cars. Overall, the results extend and agree with previous research, again contradicting existing ethical guidelines for self-driving car decision making and highlighting the difficulties of reconciling public opinion with decision-making algorithms.
Affiliation(s)
- Noa Kallioinen
- Institute of Cognitive Science, Osnabrück University, Osnabrück, Germany
- Maria Pershina
- Institute of Cognitive Science, Osnabrück University, Osnabrück, Germany
- Jannik Zeiser
- Institute of Cognitive Science, Osnabrück University, Osnabrück, Germany; Institute of Philosophy, Leibniz University Hannover, Hanover, Germany
- Gordon Pipa
- Institute of Cognitive Science, Osnabrück University, Osnabrück, Germany
- Achim Stephan
- Institute of Cognitive Science, Osnabrück University, Osnabrück, Germany
- Peter König
- Institute of Cognitive Science, Osnabrück University, Osnabrück, Germany; Department of Neurophysiology and Pathophysiology, Center of Experimental Medicine, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
10. Sütfeld LR, Ehinger BV, König P, Pipa G. How does the method change what we measure? Comparing virtual reality and text-based surveys for the assessment of moral decisions in traffic dilemmas. PLoS One 2019; 14:e0223108. PMID: 31596864. PMCID: PMC6785059. DOI: 10.1371/journal.pone.0223108.
Abstract
The question of how self-driving cars should behave in dilemma situations has recently attracted a lot of attention in science, media and society. A growing number of publications amass insight into the factors underlying the choices we make in such situations, often using forced-choice paradigms closely linked to the trolley dilemma. The methodology used to address these questions, however, varies widely between studies, ranging from fully immersive virtual reality settings to completely text-based surveys. In this paper we compare virtual reality and text-based assessments, analyzing the effect that different methodological factors have on participants' decisions and emotional responses. We present two studies comparing a total of six conditions varying across three dimensions: the level of abstraction, the use of virtual reality, and time constraints. Our results show that the moral decisions made in this context are not strongly influenced by the assessment method, and the compared methods ultimately appear to measure very similar constructs. Furthermore, we add to the pool of evidence on the underlying factors of moral judgment in traffic dilemmas, both in terms of general preferences, i.e., features of the particular situation and potential victims, and in terms of individual differences between participants, such as age and gender.
Affiliation(s)
- Leon René Sütfeld
- Institute of Cognitive Science, Osnabrück University, Osnabrück, Germany
- Peter König
- Institute of Cognitive Science, Osnabrück University, Osnabrück, Germany
- Gordon Pipa
- Institute of Cognitive Science, Osnabrück University, Osnabrück, Germany
11. Frank DA, Chrysochou P, Mitkidis P, Ariely D. Human decision-making biases in the moral dilemmas of autonomous vehicles. Sci Rep 2019; 9:13080. PMID: 31511560. PMCID: PMC6739396. DOI: 10.1038/s41598-019-49411-7.
Abstract
The development of artificial intelligence has led researchers to study the ethical principles that should guide machine behavior. The challenge in building machine morality based on people's moral decisions, however, is accounting for the biases in human moral decision-making. In seven studies, this paper investigates how people's personal perspectives and decision-making modes affect their decisions in the moral dilemmas faced by autonomous vehicles. Moreover, it determines the variations in people's moral decisions that can be attributed to the situational factors of the dilemmas. The reported studies demonstrate that people's moral decisions, regardless of the presented dilemma, are biased by their decision-making mode and personal perspective. Under intuitive moral decisions, participants shift more towards a deontological doctrine by sacrificing the passenger instead of the pedestrian. In addition, once a personal perspective is made salient, participants preserve the lives associated with that perspective: the passenger shifts towards sacrificing the pedestrian, and vice versa. These biases in people's moral decisions underline the social challenge in the design of a universal moral code for autonomous vehicles. We discuss the implications of our findings and provide directions for future research.
Affiliation(s)
- Polymeros Chrysochou
- Department of Management, Aarhus University, Aarhus, Denmark; Ehrenberg-Bass Institute for Marketing Science, School of Marketing, University of South Australia, South Australia, Australia
- Panagiotis Mitkidis
- Department of Management, Aarhus University, Aarhus, Denmark; Center for Advanced Hindsight, Duke University, Durham, United States
- Dan Ariely
- Center for Advanced Hindsight, Duke University, Durham, United States
12. Uijong J, Kang J, Wallraven C. You or Me? Personality Traits Predict Sacrificial Decisions in an Accident Situation. IEEE Transactions on Visualization and Computer Graphics 2019; 25:1898-1907. PMID: 30802865. DOI: 10.1109/tvcg.2019.2899227.
Abstract
Emergency situations during car driving sometimes force the driver to make a sudden decision. Predicting these decisions has important applications in updating risk analyses for insurance, and can also inform autonomous vehicle guidelines. Studying such behavior in experimental settings, however, is limited by ethical issues, as it would endanger people's lives. Here, we employed the potential of virtual reality (VR) to investigate decision-making in an extreme situation in which participants would have to sacrifice others in order to save themselves. In a VR driving simulation, participants first trained to complete a difficult course with multiple crossroads in which the wrong turn would lead the car to fall down a cliff. In the testing phase, obstacles suddenly appeared on the "safe" turn of a crossroad: for the control group, obstacles consisted of trees, whereas for the experimental group, they were pedestrians. In both groups, drivers had to decide between falling down the cliff or colliding with the obstacles. Results showed that differences in personality traits predicted this decision: in the experimental group, drivers who collided with the pedestrians had significantly higher psychopathy and impulsivity traits, whereas impulsivity alone was to some degree predictive in the control group. Other factors such as heart rate differences, gender, video game expertise, and driving experience were not predictive of the emergency decision in either group. Our results show that self-interest-related personality traits affect decision-making when choosing between preservation of self or others in extreme situations, and showcase the potential of virtual reality in studying and modeling human decision-making.
13. Faulhaber AK, Dittmer A, Blind F, Wächter MA, Timm S, Sütfeld LR, Stephan A, Pipa G, König P. Human Decisions in Moral Dilemmas are Largely Described by Utilitarianism: Virtual Car Driving Study Provides Guidelines for Autonomous Driving Vehicles. Science and Engineering Ethics 2019; 25:399-418. PMID: 29357047. DOI: 10.1007/s11948-018-0020-x.
Abstract
Ethical thought experiments such as the trolley dilemma have been investigated extensively in the past, showing that humans act in utilitarian ways, trying to cause as little overall damage as possible. These trolley dilemmas have gained renewed attention over the past few years, especially due to the necessity of implementing moral decisions in autonomous driving vehicles (ADVs). We conducted a set of experiments in which participants experienced modified trolley dilemmas as drivers in virtual reality environments. Participants had to make decisions between driving in one of two lanes where different obstacles came into view. Eventually, the participants had to decide which of the objects they would crash into. Obstacles included a variety of human-like avatars of different ages and group sizes. Furthermore, the influence of sidewalks as potential safe harbors and a condition implicating self-sacrifice were tested. Results showed that participants generally decided in a utilitarian manner, sparing the highest possible number of avatars, with limited influence from the other variables. Based on these findings, which are in line with the utilitarian approach to moral decision making, we argue for an obligatory ethics setting implemented in ADVs.
Affiliation(s)
- Anja K Faulhaber
- Institute of Cognitive Science, University of Osnabrück, Wachsbleiche 27, 49090 Osnabrück, Germany
- Anke Dittmer
- Institute of Cognitive Science, University of Osnabrück, Wachsbleiche 27, 49090 Osnabrück, Germany
- Felix Blind
- Institute of Cognitive Science, University of Osnabrück, Wachsbleiche 27, 49090 Osnabrück, Germany
- Maximilian A Wächter
- Institute of Cognitive Science, University of Osnabrück, Wachsbleiche 27, 49090 Osnabrück, Germany
- Silja Timm
- Institute of Cognitive Science, University of Osnabrück, Wachsbleiche 27, 49090 Osnabrück, Germany
- Leon R Sütfeld
- Institute of Cognitive Science, University of Osnabrück, Wachsbleiche 27, 49090 Osnabrück, Germany
- Achim Stephan
- Institute of Cognitive Science, University of Osnabrück, Wachsbleiche 27, 49090 Osnabrück, Germany
- Gordon Pipa
- Institute of Cognitive Science, University of Osnabrück, Wachsbleiche 27, 49090 Osnabrück, Germany
- Peter König
- Institute of Cognitive Science, University of Osnabrück, Wachsbleiche 27, 49090 Osnabrück, Germany
- Department of Neurophysiology and Pathophysiology, Center of Experimental Medicine, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
14. Ramirez EJ. Ecological and ethical issues in virtual reality research: A call for increased scrutiny. Philosophical Psychology 2018. DOI: 10.1080/09515089.2018.1532073.
15. Sütfeld LR, Gast R, König P, Pipa G. Response: Commentary: Using Virtual Reality to Assess Ethical Decisions in Road Traffic Scenarios: Applicability of Value-of-Life-Based Models and Influences of Time Pressure. Front Behav Neurosci 2018; 12:128. PMID: 29997485. PMCID: PMC6028605. DOI: 10.3389/fnbeh.2018.00128.
Affiliation(s)
- Leon R Sütfeld
- Institute of Cognitive Science, Osnabrück University, Osnabrück, Germany
- Richard Gast
- Nuclear Magnetic Resonance Unit, Max-Planck-Institut für Kognitions- und Neurowissenschaften, Leipzig, Germany
- Peter König
- Institute of Cognitive Science, Osnabrück University, Osnabrück, Germany
- Gordon Pipa
- Institute of Cognitive Science, Osnabrück University, Osnabrück, Germany
16. Bergmann LT, Schlicht L, Meixner C, König P, Pipa G, Boshammer S, Stephan A. Autonomous Vehicles Require Socio-Political Acceptance: An Empirical and Philosophical Perspective on the Problem of Moral Decision Making. Front Behav Neurosci 2018. PMID: 29541023. PMCID: PMC5835928. DOI: 10.3389/fnbeh.2018.00031.
Abstract
Autonomous vehicles, though having enormous potential, face a number of challenges. As computer systems interacting with society on a large scale, and with human beings in particular, they will encounter situations that require moral assessment. What counts as right behavior in such situations depends on which factors are considered both morally justified and socially acceptable. In an empirical study we investigated what factors people recognize as relevant in driving situations. The study put subjects in several "dilemma" situations, which were designed to isolate different and potentially relevant factors. Subjects showed a surprisingly high willingness to sacrifice themselves to save others, took the age of potential victims in a crash into consideration, and were willing to swerve onto a sidewalk if this saved more lives. The empirical insights are intended to provide a starting point for a discussion, ultimately yielding societal agreement, in which empirical insights are balanced against philosophical considerations.
Affiliation(s)
- Lasse T Bergmann
- Institute of Cognitive Science, Osnabrück University, Osnabrück, Germany
- Larissa Schlicht
- Institute of Cognitive Science, Osnabrück University, Osnabrück, Germany
- Carmen Meixner
- Institute of Cognitive Science, Osnabrück University, Osnabrück, Germany
- Peter König
- Institute of Cognitive Science, Osnabrück University, Osnabrück, Germany
- Gordon Pipa
- Institute of Cognitive Science, Osnabrück University, Osnabrück, Germany
- Achim Stephan
- Institute of Cognitive Science, Osnabrück University, Osnabrück, Germany
17. Keeling G. Commentary: Using Virtual Reality to Assess Ethical Decisions in Road Traffic Scenarios: Applicability of Value-of-Life-Based Models and Influences of Time Pressure. Front Behav Neurosci 2017; 11:247. PMID: 29311864. PMCID: PMC5733039. DOI: 10.3389/fnbeh.2017.00247.
Affiliation(s)
- Geoff Keeling
- Department of Philosophy, University of Bristol, Bristol, United Kingdom