1
Kerstan S, Bienefeld N, Grote G. Choosing human over AI doctors? How comparative trust associations and knowledge relate to risk and benefit perceptions of AI in healthcare. Risk Analysis 2024; 44:939-957. [PMID: 37722964] [DOI: 10.1111/risa.14216]
Abstract
The development of artificial intelligence (AI) in healthcare is accelerating rapidly. Beyond the urge for technological optimization, public perceptions and preferences regarding the application of such technologies remain poorly understood. Risk and benefit perceptions of novel technologies are key drivers for successful implementation. Therefore, it is crucial to understand the factors that condition these perceptions. In this study, we draw on the risk perception and human-AI interaction literature to examine how explicit (i.e., deliberate) and implicit (i.e., automatic) comparative trust associations with AI versus physicians, and knowledge about AI, relate to likelihood perceptions of risks and benefits of AI in healthcare and to preferences for the integration of AI in healthcare. We use survey data (N = 378) to specify a path model. Results reveal that the path from implicit comparative trust associations to relative preferences for AI over physicians is significant only through risk perceptions, not through benefit perceptions. This finding is reversed for AI knowledge. Explicit comparative trust associations relate to AI preference through both risk and benefit perceptions. These findings indicate that risk perceptions of AI in healthcare might be driven more strongly by affect-laden factors than benefit perceptions, which in turn might depend more on reflective cognition. Implications of our findings and directions for future research are discussed considering the conceptualization of trust as a heuristic and dual-process theories of judgment and decision-making. Regarding the design and implementation of AI-based healthcare technologies, our findings suggest that a holistic integration of public viewpoints is warranted.
Affiliation(s)
- Sophie Kerstan
- Department of Management, Technology, and Economics, ETH Zurich, Zurich, Switzerland
- Nadine Bienefeld
- Department of Management, Technology, and Economics, ETH Zurich, Zurich, Switzerland
- Gudela Grote
- Department of Management, Technology, and Economics, ETH Zurich, Zurich, Switzerland
2
Zhou Y, Guo H, Shi H, Jiang S, Liao Y. Key factors capturing the willingness to use automated vehicles for travel in China. PLoS One 2024; 19:e0298348. [PMID: 38363740] [PMCID: PMC10871520] [DOI: 10.1371/journal.pone.0298348]
Abstract
With the continuous advancement of technology, automated vehicle technology is progressively maturing. It is crucial to comprehend the factors influencing individuals' intention to utilize automated vehicles. This study examined user willingness to adopt automated vehicles. By incorporating age and educational background as random parameters, an ordered Probit model with random parameters was constructed to analyze the influential factors affecting respondents' adoption of automated vehicles. We devised and conducted an online questionnaire survey, yielding 2105 valid questionnaires. The findings reveal significant positive correlations between positive social trust, perceived ease of use, perceived usefulness, low levels of perceived risk, and the acceptance of automated vehicles. Additionally, our study identifies extraversion and openness as strong mediators in shaping individuals' intentions to use automated vehicles. Furthermore, prior experience with assisted driving negatively impacts people's inclination toward embracing automated vehicles. Our research also provides insights for promoting the adoption of automated vehicles: favorable media coverage and a reasonable division of responsibilities can enhance individuals' intentions to adopt this technology.
Affiliation(s)
- Yongjiang Zhou
- School of Automobile and Transportation, Xihua University, Chengdu, Sichuan, China
- Hanying Guo
- School of Automobile and Transportation, Xihua University, Chengdu, Sichuan, China
- Hongguo Shi
- School of Transportation and Logistics, Southwest Jiaotong University, Chengdu, Sichuan, China
- Siyi Jiang
- School of Automobile and Transportation, Xihua University, Chengdu, Sichuan, China
- Yang Liao
- School of Automobile and Transportation, Xihua University, Chengdu, Sichuan, China
3
Kox ES, Siegling LB, Kerstholt JH. Trust Development in Military and Civilian Human–Agent Teams: The Effect of Social-Cognitive Recovery Strategies. Int J Soc Robot 2022; 14:1323-1338. [PMID: 35432627] [PMCID: PMC8994847] [DOI: 10.1007/s12369-022-00871-4]
Abstract
Autonomous agents (AA) will increasingly be deployed as teammates instead of tools. In many operational situations, flawless performance from AA cannot be guaranteed. This may lead to a breach in the human’s trust, which can compromise collaboration. This highlights the importance of thinking about how to deal with errors and trust violations when designing AA. The aim of this study was to explore the influence of uncertainty communication and apology on the development of trust in a Human–Agent Team (HAT) when there is a trust violation. Two experimental studies following the same method were performed with (I) a civilian group and (II) a military group of participants. The online task environment resembled a house search in which the participant was accompanied and advised by an AA as their artificial team member. Halfway through the task, an incorrect advice evoked a trust violation. Uncertainty communication was manipulated within-subjects; apology, between-subjects. Our results showed that (a) communicating uncertainty led to higher levels of trust in both studies, (b) an incorrect advice by the agent led to a less severe decline in trust when that advice included a measure of uncertainty, and (c) after a trust violation, trust recovered significantly more when the agent offered an apology. The two latter effects were only found in the civilian study. We conclude that tailored agent communication is a key factor in minimizing trust reduction in the face of agent failure and in maintaining effective long-term relationships in HATs. The difference in findings between participant groups emphasizes the importance of considering the (organizational) culture when designing artificial team members.
4
Efendić E, Chandrashekar SP, Lee CS, Yeung LY, Kim MJ, Lee CY, Feldman G. Risky Therefore Not Beneficial: Replication and Extension of Finucane et al.’s (2000) Affect Heuristic Experiment. Social Psychological and Personality Science 2021. [DOI: 10.1177/19485506211056761]
Abstract
Risks and benefits are negatively related in people’s minds. Finucane et al. causally demonstrated that increasing the risks of a hazard leads people to judge its benefits as lower and, vice versa, that increasing its benefits leads people to judge its risks as lower (original: r = −.74 [−0.92, −0.30]). This finding is consistent with an affective explanation, and the negative relationship is often presented as evidence for an affect heuristic. In two well-powered studies, using a more stringent analytic strategy, we replicated the original finding. We observed a strong negative relationship between judgments of risks and benefits across three technologies, although we found no change in risks when highlighting low benefits. We note that risks seem to be more responsive to manipulation (as opposed to benefits) and find evidence that the negative relationship can depend on incidental mood. Materials, data sets, and analyses are provided at https://osf.io/sufjn/?view_only=6f8f5dc6ff524149a4ed5c6de9296ae8.
Affiliation(s)
- Emir Efendić
- Maastricht University, School of Business and Economics, Department of Marketing and Supply Chain Management, the Netherlands
5
Yokoi R, Nakayachi K. Trust in Autonomous Cars: Exploring the Role of Shared Moral Values, Reasoning, and Emotion in Safety-Critical Decisions. Human Factors 2021; 63:1465-1484. [PMID: 32663047] [DOI: 10.1177/0018720820933041]
Abstract
OBJECTIVE: Autonomous cars (ACs) controlled by artificial intelligence are expected to play a significant role in transportation in the near future. This study investigated determinants of trust in ACs. BACKGROUND: Trust in ACs influences different variables, including the intention to adopt AC technology. Several studies on risk perception have verified that shared value determines trust in risk managers. Previous research has confirmed the effect of value similarity on trust in artificial intelligence. We focused on moral beliefs, specifically utilitarianism (belief in promoting a greater good) and deontology (belief in condemning deliberate harm), and tested the effects of shared moral beliefs on trust in ACs. METHOD: We conducted three experiments (N = 128, 71, and 196, respectively), adopting a thought experiment similar to the well-known trolley problem. We manipulated shared moral beliefs (shared vs. unshared) and driver (AC vs. human), providing participants with different moral dilemma scenarios. Trust in ACs was measured through a questionnaire. RESULTS: The results of Experiment 1 showed that shared utilitarian belief strongly influenced trust in ACs. In Experiments 2 and 3, however, we did not find statistical evidence that shared deontological belief had an effect on trust in ACs. CONCLUSION: The results of the three experiments suggest that the effect of shared moral beliefs on trust varies depending on the values that ACs share with humans. APPLICATION: To promote AC implementation, policymakers and developers need to understand which values are shared between ACs and humans to enhance trust in ACs.
6
Kaye SA, Somoray K, Rodwell D, Lewis I. Users' acceptance of private automated vehicles: A systematic review and meta-analysis. Journal of Safety Research 2021; 79:352-367. [PMID: 34848015] [DOI: 10.1016/j.jsr.2021.10.002]
Abstract
INTRODUCTION: This research systematically reviewed relevant studies on users' acceptance of conditional (Level 3) to full (Level 5) automated vehicles when such vehicles are to be used privately (herein referred to as 'private automated vehicles' or 'private AVs'). METHOD: The search followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines and was undertaken in three databases: APA PsycINFO, Transport Research International Documentation, and Web of Science. Articles were required to focus on individuals' acceptance of private SAE Level 3-5 AVs. Acceptance was defined as individuals' attitudes towards, or intentions and/or willingness to use, AVs in the future. A total of 2,354 articles were identified in the database search. Thirty-five articles were included in the review, six of which included multiple studies and/or comparison groups. RESULTS: Most studies (n = 31) applied self-reported measures to assess user acceptance together with a range of psychosocial factors predicting such acceptance. The meta-analytic correlations revealed that perceived behavioral control, perceived benefits/usefulness, perceived ease of use, and subjective/social norms had significant positive pooled relationships with attitudes and intentions. Trust and sensation seeking also had significant positive pooled correlations with intentions, while knowledge of AVs had a significant negative pooled correlation with intentions. Age did not show any significant pooled relationship with attitudes, intentions, or willingness. CONCLUSIONS: The findings obtained from the systematic review and meta-analysis provide support for psychosocial models to aid understanding of users' acceptance of private AVs. PRACTICAL APPLICATIONS: Examining acceptance of AVs after participants have experienced these vehicles on closed tracks or open roads would advance contemporary knowledge of users' intentions to use these vehicles in the future. Further, experiencing these vehicles firsthand may also help address any perceived barriers reducing acceptance of future use of private AVs.
Affiliation(s)
- Sherrie-Anne Kaye
- Queensland University of Technology (QUT), Centre for Accident Research and Road Safety - Queensland (CARRS-Q), Institute of Health and Biomedical Innovation (IHBI), 130 Victoria Park Road, Kelvin Grove, Queensland 4059, Australia.
- Klaire Somoray
- James Cook University, 1 James Cook Drive, Townsville, Queensland 4811, Australia.
- David Rodwell
- Queensland University of Technology (QUT), Centre for Accident Research and Road Safety - Queensland (CARRS-Q), Institute of Health and Biomedical Innovation (IHBI), 130 Victoria Park Road, Kelvin Grove, Queensland 4059, Australia.
- Ioni Lewis
- Queensland University of Technology (QUT), Centre for Accident Research and Road Safety - Queensland (CARRS-Q), Institute of Health and Biomedical Innovation (IHBI), 130 Victoria Park Road, Kelvin Grove, Queensland 4059, Australia.
7
Kim H, Moon H. Heterogeneous attitudes toward autonomous vehicles: evaluation of consumer acceptance of vehicle automation technology using a latent class approach. Technology Analysis & Strategic Management 2021. [DOI: 10.1080/09537325.2021.1962522]
Affiliation(s)
- Hana Kim
- Business and Technology Management, Korea Advanced Institute of Science and Technology, Daejeon, South Korea
- HyungBin Moon
- Graduate School of Management of Technology, Pukyong National University, Busan, South Korea
8
Siegrist M, Árvai J. Risk Perception: Reflections on 40 Years of Research. Risk Analysis 2020; 40:2191-2206. [PMID: 32949022] [DOI: 10.1111/risa.13599]
Abstract
Numerous studies and practical experiences with risk have demonstrated the importance of risk perceptions for people's behavior. In this narrative review, we describe and reflect upon some of the lines of research that we feel have been important in helping us understand the factors and processes that shape people's risk perceptions. In our review, we propose that much of the research on risk perceptions to date can be grouped according to three dominant perspectives and, thus, approaches to study design: the characteristics of hazards, the characteristics of risk perceivers, and the application of heuristics to inform risk judgments. In making these distinctions, we also highlight what we see as outstanding challenges for researchers and practitioners, as well as a few new research questions that we feel warrant attention.
Affiliation(s)
- Michael Siegrist
- Institute for Environmental Decisions (IED), Zurich, Switzerland
- Joseph Árvai
- Department of Psychology and Wrigley Institute for Environmental Studies, University of Southern California, Los Angeles, CA, USA
- Decision Research, Eugene, OR, USA
9
Insights Before Flights: How Community Perceptions Can Make or Break Medical Drone Deliveries. Drones 2020. [DOI: 10.3390/drones4030051]
Abstract
Drones are increasingly used to transport health products, but life-saving interventions can be stalled if local community concerns and preferences are not assessed and addressed. In order to inform the introduction of drones in new contexts, this paper analyzed similarities and differences in community perceptions of medical delivery drones in Malawi, Mozambique, the Democratic Republic of the Congo (DRC) and the Dominican Republic (DR). Community perceptions were assessed using focus group discussions (FGDs) and key informant interviews (KIIs) conducted with stakeholders at the national level, at health facilities and in communities. Data were collected on respondents’ familiarity with drones, perceptions of benefits and risks of drones, advice on drone operations and recommendations on sharing information with the community. The comparative analysis found similar perceptions around the potential benefits of using drones, as well as important differences in the perceived risks of flying drones and culturally appropriate communication mechanisms based on the local context. Because community perceptions are heavily influenced by culture and local experiences, a similar assessment should be conducted before introducing drone activities in new areas, and two-way feedback channels should be established once drone operations are underway. The extent to which a community understands and supports the use of drones to transport health products will ultimately play a critical role in the success or failure of the drone’s ability to bring life-saving products to those who need them.
10
Lijarcio I, Useche SA, Llamazares J, Montoro L. Perceived benefits and constraints in vehicle automation: Data to assess the relationship between driver's features and their attitudes towards autonomous vehicles. Data Brief 2019; 27:104662. [PMID: 31720323] [PMCID: PMC6838434] [DOI: 10.1016/j.dib.2019.104662]
Abstract
This data article examines the associations between drivers' features, perceptions, and attitudes towards autonomous vehicles (AVs). The data were collected using a structured, self-administered online questionnaire applied to a full sample of 1,205 Spanish drivers. The data comprise four parts: the full set of bivariate correlations between study variables; descriptive statistics and graphical trends for each main study variable according to gender, age group and city/town size; and, finally, the dataset for further exploration. For more information, see the full article entitled “Perceived safety and attributed value as predictors of the intention to use autonomous vehicles: A national study with Spanish drivers” [1].
Affiliation(s)
- Ignacio Lijarcio
- INTRAS (Research Institute on Traffic and Road Safety), University of Valencia, Spain; Spanish Foundation for Road Safety (FESVIAL), Spain
- Sergio A Useche
- INTRAS (Research Institute on Traffic and Road Safety), University of Valencia, Spain
- Luis Montoro
- INTRAS (Research Institute on Traffic and Road Safety), University of Valencia, Spain; Spanish Foundation for Road Safety (FESVIAL), Spain
11
Nordhoff S, Kyriakidis M, van Arem B, Happee R. A multi-level model on automated vehicle acceptance (MAVA): a review-based study. Theoretical Issues in Ergonomics Science 2019. [DOI: 10.1080/1463922x.2019.1621406]
Affiliation(s)
- Sina Nordhoff
- Department Transport & Planning, Delft University of Technology, Delft, The Netherlands
- Innovation Centre for Mobility and Societal Change, Berlin, Germany
- Miltos Kyriakidis
- Laboratory for Energy Systems Analysis, Paul Scherrer Institute, Switzerland
- Bart van Arem
- Department Transport & Planning, Delft University of Technology, Delft, The Netherlands
- Riender Happee
- Department Transport & Planning, Delft University of Technology, Delft, The Netherlands
- Department Cognitive Robotics, Delft University of Technology, The Netherlands
12
Coughlin JF, Raue M, D'Ambrosio LA, Ward C, Lee C. Special Series: Social Science of Automated Driving. Risk Analysis 2019; 39:293-294. [PMID: 30731034] [DOI: 10.1111/risa.13271]
Affiliation(s)
- Martina Raue
- Massachusetts Institute of Technology, AgeLab, Cambridge, MA, USA
- Carley Ward
- Massachusetts Institute of Technology, AgeLab, Cambridge, MA, USA
- Chaiwoo Lee
- Massachusetts Institute of Technology, AgeLab, Cambridge, MA, USA