1
Washington Z, Friedman O. The Double Standard of Ownership. Open Mind (Camb) 2025; 9:325-339. [PMID: 40013089] [PMCID: PMC11864797] [DOI: 10.1162/opmi_a_00190]
Abstract
Owners are often blamed when their property causes harm but might not receive corresponding praise when their property does good. This suggests a double standard of ownership, wherein owning property poses risks of moral blame that are not balanced by equal opportunities for credit. We investigated this possibility in three preregistered experiments with 746 US residents. Participants read vignettes in which agentic property (e.g., animals, robots) produced bad or good outcomes, and judged whether owners and the property were morally responsible. With bad outcomes, participants assigned owners more blame than property (Experiments 1 and 2) or similar blame (Experiment 3). But with good outcomes, participants consistently assigned owners much less praise than their property. The first two experiments also examined whether the double standard arises in two other relationships between authorities and subordinates; participants showed the double standard when assessing moral responsibility for parents and children, but not for employers and employees. Together, these findings point to a novel asymmetry in how owners are assigned responsibility.
Affiliation(s)
- Zofia Washington
- Department of Psychology, University of Waterloo, Waterloo, Canada
- Ori Friedman
- Department of Psychology, University of Waterloo, Waterloo, Canada
2
Malle BF, Scheutz M, Cusimano C, Voiklis J, Komatsu T, Thapa S, Aladia S. People's judgments of humans and robots in a classic moral dilemma. Cognition 2025; 254:105958. [PMID: 39362054] [DOI: 10.1016/j.cognition.2024.105958]
Abstract
How do ordinary people evaluate robots that make morally significant decisions? Previous work has found both equal and different evaluations, with differences in either direction. In 13 studies (N = 7670), we asked people to evaluate humans and robots that make decisions in norm conflicts (variants of the classic trolley dilemma). We examined several conditions that may influence whether moral evaluations of human and robot agents are the same or different: the type of moral judgment (norms vs. blame); the structure of the dilemma (side effect vs. means-end); salience of particular information (victim, outcome); culture (Japan vs. US); and encouraged empathy. Norms for humans and robots are broadly similar, but blame judgments show a robust asymmetry under one condition: humans are blamed less than robots specifically for inaction decisions, here refraining from sacrificing one person for the good of many. This asymmetry may emerge because people appreciate that the human faces an impossible decision and deserves mitigated blame for inaction; when evaluating a robot, such appreciation appears to be lacking. However, our evidence for this explanation is mixed. We discuss alternative explanations and offer methodological guidance for future work on people's moral judgments of robots and humans.
3
Wang Y, Zhu J, Wang J, Mu Y. Oxytocin modulation of explicit pandemic stigma in men with varying social anxiety levels. Neuropharmacology 2024; 261:110140. [PMID: 39251086] [DOI: 10.1016/j.neuropharm.2024.110140]
Abstract
OBJECTIVE: Stigma can create divisions within societies, hindering social cohesion and cooperation. Notably, it has significant public health implications, especially during infectious disease outbreaks like COVID-19. However, little is known about the neural and molecular basis of disease-related stigma and its association with individual differences. METHODS: To address this gap, we conducted a double-blind, placebo-controlled, within-subject study with 70 males to investigate the effect of intranasal oxytocin (OT) administration on the explicit and implicit processing of disease-related stigma (i.e., COVID-19 stigma). After self-administering 24 IU of OT or placebo, participants completed a stigma evaluation task and an Implicit Association Test (IAT) to assess explicit and implicit stigma evaluation, respectively. RESULTS: Oxytocin amplified the differences between participants with high and low social anxiety in explicit COVID-19 stigma: under oxytocin compared to placebo, individuals high in social anxiety were more inclined to attribute the stigmatized status of the stigmatized targets (i.e., a COVID-19-related group) to personal causes, whereas individuals low in social anxiety blamed the stigmatized targets less. Furthermore, oxytocin strengthened the connections between responsibility attribution and the other processes (i.e., emotion, approach motivation, and social deviance). While oxytocin did not modulate implicit stigma, it did modulate the associations between specific dimensions of explicit stigma (i.e., social deviance and approach motivation) and implicit stigma. CONCLUSION: These findings demonstrate that intranasal oxytocin administration can temporarily affect explicit cognitive judgments in disease-related stigma but not their implicit counterpart, and that it does so in distinct ways in individuals with different levels of social anxiety. They highlight the trait-dependent modulation of disease-related stigma by oxytocin, implying that oxytocin is partly involved in the endocrine basis of disease-related stigma. By unraveling the molecular basis of stigma and its association with individual traits such as social anxiety, interventions can be tailored to the specific needs of different individuals in the future.
Affiliation(s)
- Yuwei Wang
- CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Jiajia Zhu
- CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Jiaxi Wang
- Beijing Technology and Business University, China
- Yan Mu
- CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China.
4
Arnestad MN, Meyers S, Gray K, Bigman YE. The existence of manual mode increases human blame for AI mistakes. Cognition 2024; 252:105931. [PMID: 39208639] [DOI: 10.1016/j.cognition.2024.105931]
Abstract
People are offloading many tasks to artificial intelligence (AI), including driving, investment decisions, and medical choices, but it is human nature to want to maintain ultimate control. So even when using autonomous machines, people want a "manual mode", an option that shifts control back to themselves. Unfortunately, the mere existence of a manual mode leads to more human blame when the AI makes mistakes. When observers know that a human agent theoretically had the option to take control, the human is assigned more responsibility, even when the agent lacked the time or ability to actually exert control, as with self-driving car crashes. Four experiments reveal that people prefer having a manual mode even when the AI mode is more efficient and adding the manual mode is more expensive (Study 1), and that the existence of a manual mode increases human blame (Studies 2a-3c). We examine two mediators of this effect: increased perceptions of causation and counterfactual cognition (Study 4). The results suggest that the human thirst for illusory control comes with real costs. Implications for AI decision-making are discussed.
Affiliation(s)
- Mads N Arnestad
- Department of Leadership and Organization, BI Norwegian Business School, Norway
- Kurt Gray
- University of North Carolina at Chapel Hill, USA
5
Kneer M, Christen M. Responsibility Gaps and Retributive Dispositions: Evidence from the US, Japan and Germany. Sci Eng Ethics 2024; 30:51. [PMID: 39419906] [PMCID: PMC11486783] [DOI: 10.1007/s11948-024-00509-w]
Abstract
Danaher (2016) has argued that increasing robotization can lead to retribution gaps: situations in which the normative fact that nobody can be justly held responsible for a harmful outcome stands in conflict with our retributivist moral dispositions. In this paper, we report a cross-cultural empirical study, based on Sparrow's (2007) famous example of an autonomous weapon system committing a war crime, conducted with participants from the US, Japan and Germany. We find that (1) people manifest a considerable willingness to hold autonomous systems morally responsible, (2) people partially exculpate human agents when they interact with such systems, and (3) more generally, the possibility of normative responsibility gaps is indeed at odds with people's pronounced retributivist inclinations. We discuss what these results mean for the retribution gap and for other positions in the responsibility gap literature.
Affiliation(s)
- Markus Christen
- Digital Society Initiative, University of Zurich, Zurich, Switzerland
6
Malakar Y, Lacey J. On the interconnected nature of risk and responsibility in the research and development of new and emerging technologies. Risk Anal 2024; 44:1325-1338. [PMID: 37748933] [DOI: 10.1111/risa.14229]
Abstract
Risk analysis of new and emerging technologies requires innovative approaches that are agile, exploratory, and can accommodate broad stakeholder engagement and perspectives. Existing theories of risk governance and responsible innovation suggest that operationalizing guiding principles for engagement such as inclusion and reflection may provide a useful approach to the risk analysis of these technologies. Yet, methodologies to systematically assess how we might operationalize such guiding principles in risk analysis are limited in existing risk research. We contribute to filling this gap by demonstrating a practical methodology for examining and documenting how research and development (R&D) professionals operationalize inclusion and reflection in risk analysis and what value this provides to risk analysis in the R&D context. We use the Australian nanotechnology R&D sector as our case study, interviewing 28 experts to examine how R&D professionals have operationalized inclusion and reflection in their risk analysis practices, generating three findings. First, we describe how our research design enables the successful translation of theory into a methodology that supports an empirical assessment of the integration of these guiding principles into risk analysis practice. Second, we argue that successfully and systematically integrating inclusion and reflection in risk analysis fosters a wider understanding and identification of risk through the activation of multi-actor and multi-institutional stakeholder engagement processes. Third, we outline how this research depicts the outward-facing and introspective nature of risk analysis.
Affiliation(s)
- Yuwan Malakar
- Responsible Innovation Future Science Platform, Commonwealth Scientific and Industrial Research Organisation, Brisbane, Queensland, Australia
- Justine Lacey
- Responsible Innovation Future Science Platform, Commonwealth Scientific and Industrial Research Organisation, Brisbane, Queensland, Australia
7
Gall J, Stanton CJ. Low-rank human-like agents are trusted more and blamed less in human-autonomy teaming. Front Artif Intell 2024; 7:1273350. [PMID: 38742120] [PMCID: PMC11089226] [DOI: 10.3389/frai.2024.1273350]
Abstract
If humans are to team with artificial teammates, factors that influence trust and shared accountability must be considered when designing agents. This study investigates the influence of anthropomorphism, rank, decision cost, and task difficulty on trust in human-autonomy teams (HAT) and how blame is apportioned when shared tasks fail. Participants (N = 31) completed repeated trials with an artificial teammate using a low-fidelity variation of an air-traffic control game. Using a within-subject design, we manipulated anthropomorphism (human-like or machine-like), the military rank of artificial teammates using three-star (superior), two-star (peer), or one-star (subordinate) agents, the perceived payload of vehicles (people or supplies onboard), and task difficulty (easy or hard missions). Trust was inferred behaviourally when participants accepted agent recommendations, and its absence when recommendations were rejected or ignored; the trust data were analysed with binomial logistic regression. After each trial, blame was apportioned using a 2-item scale and analysed with a one-way repeated measures ANOVA. A post-experiment questionnaire measured participants' power distance orientation on a seven-item scale, and possible power-related effects on trust and blame apportioning are discussed. Our findings suggest that artificial agents with higher levels of anthropomorphism and lower rank increased trust and shared accountability, with human team members accepting more blame for team failures.
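For readers who want to see what the trial-level trust analysis described in this abstract might look like in practice, the sketch below fits a binomial logistic regression of recommendation acceptance on the four manipulated factors. It is a minimal illustration on synthetic data, not the authors' analysis code; the column names, factor coding, and effect sizes are assumptions, and participant-level clustering (which a full analysis would model) is ignored.

```python
# Minimal sketch (assumed variable names, synthetic data): binomial logistic
# regression of trial-level trust, i.e., whether the participant accepted the
# agent's recommendation, on anthropomorphism, rank, payload, and difficulty.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_participants, n_trials = 31, 24
n_rows = n_participants * n_trials
df = pd.DataFrame({
    "participant": np.repeat(np.arange(n_participants), n_trials),
    "anthropomorphism": rng.choice(["human_like", "machine_like"], n_rows),
    "agent_rank": rng.choice(["one_star", "two_star", "three_star"], n_rows),
    "payload": rng.choice(["people", "supplies"], n_rows),
    "difficulty": rng.choice(["easy", "hard"], n_rows),
})
# Synthetic outcome: acceptance is somewhat more likely for human-like,
# lower-rank agents and less likely on hard missions.
logit_p = (0.4 * (df["anthropomorphism"] == "human_like")
           + 0.3 * (df["agent_rank"] == "one_star")
           - 0.2 * (df["difficulty"] == "hard"))
df["accepted"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

model = smf.logit(
    "accepted ~ C(anthropomorphism) + C(agent_rank) + C(payload) + C(difficulty)",
    data=df,
).fit()
print(model.summary())
```

Odds ratios for each factor can then be read off as np.exp(model.params); with the real data, a mixed-effects logistic model with a random intercept per participant would be the more faithful analysis.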
Affiliation(s)
- Jody Gall
- The MARCS Institute for Brain Behaviour and Development, Western Sydney University, Parramatta, NSW, Australia
- Christopher J. Stanton
- The MARCS Institute for Brain Behaviour and Development, Western Sydney University, Parramatta, NSW, Australia
8
Ma Z, Zhang Y. Driver-Automated Vehicle Interaction in Mixed Traffic: Types of Interaction and Drivers' Driving Styles. Hum Factors 2024; 66:544-561. [PMID: 35469464] [PMCID: PMC10757400] [DOI: 10.1177/00187208221088358]
Abstract
OBJECTIVE: This study investigated drivers' subjective feelings and decision making in mixed traffic by quantifying drivers' driving styles and the type of interaction. BACKGROUND: Human-driven vehicles (HVs) will share the road with automated vehicles (AVs) in mixed traffic. Previous studies focused on simulating the impacts of AVs on traffic flow and on car-following situations, relying on simulation analyses that lack experimental tests with human drivers. METHOD: Thirty-six drivers were classified into three groups (aggressive, moderate, and defensive drivers) and experienced HV-AV interaction and HV-HV interaction in a supervised web-based experiment. Drivers' subjective feelings and decision making were collected via questionnaires. RESULTS: Aggressive and moderate drivers felt significantly more anxious, less comfortable, and were more likely to behave aggressively in HV-AV interaction than in HV-HV interaction. Aggressive drivers were also more likely to take advantage of AVs on the road. In contrast, no such differences were found for defensive drivers, indicating they were not significantly influenced by the type of vehicle they were interacting with. CONCLUSION: Driving style and type of interaction significantly influenced drivers' subjective feelings and decision making in mixed traffic. This study offers insights into how human drivers perceive and interact with AVs and HVs on the road and how they take advantage of AVs. APPLICATION: This study provides a foundation for developing guidelines for mixed transportation systems to improve driver safety and user experience.
Affiliation(s)
- Zheng Ma
- Penn State College of Engineering, State College, PA, USA
- Yiqi Zhang
- Pennsylvania State University, University Park, PA, USA
9
Pai SN, Jeyaraman M, Jeyaraman N, Nallakumarasamy A, Yadav S. In the Hands of a Robot, From the Operating Room to the Courtroom: The Medicolegal Considerations of Robotic Surgery. Cureus 2023; 15:e43634. [PMID: 37719624] [PMCID: PMC10504870] [DOI: 10.7759/cureus.43634]
Abstract
Robotic surgery has rapidly evolved as a groundbreaking field in medicine, revolutionizing surgical practices across various specialties. Despite its numerous benefits, the adoption of robotic surgery faces significant medicolegal challenges. This article delves into the underexplored legal implications of robotic surgery and identifies three distinct medicolegal problems. First, the lack of standardized training and credentialing for robotic surgery poses potential risks to patient safety and surgeon competence. Second, informed consent processes require additional considerations to ensure patients are fully aware of the technology's capabilities and potential risks. Finally, the issue of legal liability becomes complex due to the involvement of multiple stakeholders in the functioning of robotic systems. The article highlights the need for comprehensive guidelines, regulations, and training programs to navigate the medicolegal aspects of robotic surgery effectively, thereby unlocking its full potential for the future.
Affiliation(s)
- Satvik N Pai
- Orthopaedic Surgery, Hospital for Orthopedics, Sports Medicine, Arthritis, and Trauma (HOSMAT) Hospital, Bangalore, IND
- Madhan Jeyaraman
- Orthopaedics, ACS Medical College and Hospital, Dr. MGR Educational and Research Institute, Chennai, IND
- Naveen Jeyaraman
- Orthopaedics, ACS Medical College and Hospital, Dr. MGR Educational and Research Institute, Chennai, IND
- Arulkumar Nallakumarasamy
- Orthopaedics, ACS Medical College and Hospital, Dr. MGR Educational and Research Institute, Chennai, IND
- Sankalp Yadav
- Medicine, Shri Madan Lal Khurana Chest Clinic, New Delhi, IND
10
Zhai S, Wang L, Liu P. Human and machine drivers: Sharing control, sharing responsibility. Accid Anal Prev 2023; 188:107096. [PMID: 37148677] [DOI: 10.1016/j.aap.2023.107096]
Abstract
Machines are empowered with ever-increasing agency and decision-making authority to augment or even replace humans in various settings, making responsibility attribution less straightforward when they cause harm. Focusing on their applications in transportation, we consider human judgments of responsibility for automated vehicle crashes through a cross-national survey (N = 1657), designing hypothetical crashes after the 2018 Uber automated vehicle crash reportedly caused by a distracted human driver and an inaccurate machine driver. We examine the association between automation level, at which the human and machine drivers have different levels of agency (the human acting as a supervisor, backup driver, or mere passenger, respectively), and human responsibility through the lens of perceived human controllability. We show a negative association between automation level and human responsibility, partly mediated by perceived human controllability, regardless of the responsibility metric involved (rating and allocation), the nationality of the participants (China and South Korea), and crash severity (injury and fatality). When the human and machine drivers in a conditionally automated vehicle jointly cause a crash (e.g., the 2018 Uber crash), the human driver and the car manufacturer are judged to share responsibility. Our findings imply that driver-centric tort law needs to become control-centric, and they offer insights for attributing human responsibility for crashes involving automated vehicles.
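As a rough illustration of the mediation claim in this abstract (automation level, via perceived human controllability, to attributed human responsibility), the sketch below estimates the indirect effect with two OLS regressions and a percentile bootstrap confidence interval. It runs on synthetic data; the variable names, scales, and effect sizes are assumptions for illustration, not the authors' materials or analysis code.

```python
# Minimal mediation sketch (synthetic data, assumed variable names): does
# perceived human controllability partly carry the negative effect of
# automation level on attributed human responsibility?
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 1657
automation = rng.integers(1, 4, n)                       # 1 supervisor, 2 backup driver, 3 passenger
controllability = 6 - automation + rng.normal(0, 1, n)   # higher automation -> lower perceived control
responsibility = 2 + 0.8 * controllability - 0.3 * automation + rng.normal(0, 1, n)
df = pd.DataFrame({"automation": automation,
                   "controllability": controllability,
                   "responsibility": responsibility})

a = smf.ols("controllability ~ automation", df).fit().params["automation"]        # path a
b = smf.ols("responsibility ~ automation + controllability", df).fit().params["controllability"]  # path b
indirect = a * b                                                                   # indirect (mediated) effect

# Percentile bootstrap CI for the indirect effect
boot = []
for _ in range(1000):
    s = df.sample(n, replace=True)
    a_s = smf.ols("controllability ~ automation", s).fit().params["automation"]
    b_s = smf.ols("responsibility ~ automation + controllability", s).fit().params["controllability"]
    boot.append(a_s * b_s)
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {indirect:.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```

A confidence interval excluding zero would be consistent with partial mediation by perceived controllability; an analysis of the actual survey would additionally account for nationality, crash severity, and the rating versus allocation metrics reported above.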
Affiliation(s)
- Siming Zhai
- Center for Psychological Sciences, Zhejiang University, China; College of Management and Economics, Tianjin University, China
- Lin Wang
- Department of Library and Information Science, Incheon National University, Republic of Korea
- Peng Liu
- Center for Psychological Sciences, Zhejiang University, China.