1. Alarcon GM, Capiola A, Lee MA, Willis S, Hamdan IA, Jessup SA, Harris KN. Development and Validation of the System Trustworthiness Scale. Human Factors 2024;66:1893-1913. [PMID: 37458319] [DOI: 10.1177/00187208231189000]
Abstract
OBJECTIVE We created and validated a scale to measure perceptions of system trustworthiness. BACKGROUND Several scales exist in the literature that attempt to assess trustworthiness of system referents. However, existing measures suffer from limitations in their development and validation. The current study sought to develop a scale based on theory and methodological rigor. METHOD We conducted exploratory and confirmatory factor analyses on data from two online studies to develop the System Trustworthiness Scale (STS). Additional analyses explored the manipulation of the factors and assessed convergent and divergent validity. RESULTS The exploratory factor analyses resulted in a three-factor solution that represented the theoretical constructs of trustworthiness: performance, purpose, and process. Confirmatory factor analyses confirmed the three-factor solution. In addition, correlation and regression analyses demonstrated the scale's divergent and predictive validity. CONCLUSION The STS is a psychometrically valid and predictive scale for assessing trustworthiness perceptions of system referents. APPLICATIONS The STS assesses trustworthiness perceptions of systems. Importantly, the scale differentiates performance, purpose, and process constructs and is adaptable to a variety of system referents.
Affiliation(s)
- Gene M Alarcon: Air Force Research Laboratory, Wright Patterson AFB, OH, USA
- August Capiola: Air Force Research Laboratory, Wright Patterson AFB, OH, USA
- Michael A Lee: General Dynamics Information Technology Inc, Falls Church, VA, USA
- Sasha Willis: General Dynamics Information Technology Inc, Falls Church, VA, USA
- Izz Aldin Hamdan: General Dynamics Information Technology Inc, Falls Church, VA, USA
- Sarah A Jessup: Air Force Research Laboratory, Wright Patterson AFB, OH, USA
2. Zhao J, Wang Y, Mancenido MV, Chiou EK, Maciejewski R. Evaluating the Impact of Uncertainty Visualization on Model Reliance. IEEE Transactions on Visualization and Computer Graphics 2024;30:4093-4107. [PMID: 37028077] [DOI: 10.1109/tvcg.2023.3251950]
Abstract
Machine learning models have gained traction as decision support tools for tasks that require processing copious amounts of data. However, to achieve the primary benefits of automating this part of decision-making, people must be able to trust the machine learning model's outputs. In order to enhance people's trust and promote appropriate reliance on the model, visualization techniques such as interactive model steering, performance analysis, model comparison, and uncertainty visualization have been proposed. In this study, we tested the effects of two uncertainty visualization techniques in a college admissions forecasting task, under two task difficulty levels, using Amazon's Mechanical Turk platform. Results show that (1) people's reliance on the model depends on the task difficulty and level of machine uncertainty and (2) ordinal forms of expressing model uncertainty are more likely to calibrate model usage behavior. These outcomes emphasize that reliance on decision support tools can depend on the cognitive accessibility of the visualization technique and perceptions of model performance and task difficulty.
3. Li M, Erickson IM. It's Not Only What You Say, But Also How You Say It: Machine Learning Approach to Estimate Trust from Conversation. Human Factors 2024;66:1724-1741. [PMID: 37116009] [PMCID: PMC11044523] [DOI: 10.1177/00187208231166624]
Abstract
OBJECTIVE The objective of this study was to estimate trust from conversations using both lexical and acoustic data. BACKGROUND As NASA moves to long-duration space exploration operations, the increasing need for cooperation between humans and virtual agents requires real-time trust estimation by virtual agents. Measuring trust through conversation is a novel and unintrusive approach. METHOD A 2 (reliability) × 2 (cycles) × 3 (events) within-subject study with habitat system maintenance was designed to elicit various levels of trust in a conversational agent. Participants had trust-related conversations with the conversational agent at the end of each decision-making task. To estimate trust, subjective trust ratings were predicted using machine learning models trained on three types of conversational features (i.e., lexical, acoustic, and combined). After training, model explanation was performed using variable importance and partial dependence plots. RESULTS Results showed that a random forest algorithm, trained using the combined lexical and acoustic features, predicted trust in the conversational agent most accurately (adjusted R² = 0.71). The most important predictors were a combination of lexical and acoustic cues: average sentiment considering valence shifters, the mean of formants, and Mel-frequency cepstral coefficients (MFCC). These conversational features were identified as partial mediators predicting people's trust. CONCLUSION Precise trust estimation from conversation requires both lexical and acoustic cues. APPLICATION These results showed the possibility of using conversational data to measure trust, and potentially other dynamic mental states, unobtrusively and dynamically.
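As a rough illustration of the modeling step this abstract describes (a random forest regressor trained on combined lexical and acoustic features to predict subjective trust, evaluated with adjusted R²), a minimal sketch might look as follows. The feature sets, data, and hyperparameters are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch only: predicting subjective trust ratings from combined
# lexical + acoustic conversational features with a random forest regressor.
# Feature sets and data are synthetic stand-ins, not the study's materials.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n_conversations, n_lexical, n_acoustic = 200, 10, 20   # assumed sizes

# Placeholder features, e.g., sentiment scores (lexical) and formant means /
# MFCC statistics (acoustic), concatenated per conversation.
X = np.hstack([rng.normal(size=(n_conversations, n_lexical)),
               rng.normal(size=(n_conversations, n_acoustic))])
y = rng.uniform(1, 7, size=n_conversations)            # subjective trust ratings

model = RandomForestRegressor(n_estimators=500, random_state=0)
y_hat = cross_val_predict(model, X, y, cv=5)

# Adjusted R^2, analogous to the value the abstract reports.
n, p = X.shape
r2 = r2_score(y, y_hat)
r2_adj = 1 - (1 - r2) * (n - 1) / (n - p - 1)
print(f"R2 = {r2:.2f}, adjusted R2 = {r2_adj:.2f}")

# Variable importance, as used for model explanation in the study.
model.fit(X, y)
top = np.argsort(model.feature_importances_)[::-1][:5]
print("most important feature indices:", top)
```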
Affiliation(s)
- Mengyao Li: Department of Industrial and Systems Engineering, University of Wisconsin-Madison, Madison, Wisconsin, USA
- Isabel M Erickson: Department of Industrial and Systems Engineering, University of Wisconsin-Madison, Madison, Wisconsin, USA
4. Li M, Guo F, Li Z, Ma H, Duffy VG. Interactive effects of users' openness and robot reliability on trust: evidence from psychological intentions, task performance, visual behaviours, and cerebral activations. Ergonomics 2024:1-21. [PMID: 38635303] [DOI: 10.1080/00140139.2024.2343954]
Abstract
Although trust plays a vital role in human-robot interaction, there is currently a dearth of literature examining the effect of users' openness, a personality trait, on trust in actual interaction. This study aims to investigate the interaction effects of users' openness and robot reliability on trust. We designed a voice-based walking task and collected subjective trust ratings, task metrics, eye-tracking data, and fNIRS signals from users with different levels of openness to unravel the psychological intentions, task performance, visual behaviours, and cerebral activations underlying trust. The results showed significant interaction effects. Users with low openness exhibited lower subjective trust, more fixations, and higher activation of the right temporo-parietal junction (rTPJ) in the highly reliable condition than those with high openness. The results suggested that users with low openness might be more cautious and suspicious about the highly reliable robot and allocate more visual attention and neural processing to monitoring and inferring robot status than users with high openness.
Affiliation(s)
- Mingming Li: Department of Industrial Engineering, College of Management Science and Engineering, Anhui University of Technology, Maanshan, China; Department of Industrial Engineering, School of Business Administration, Northeastern University, Shenyang, China
- Fu Guo: Department of Industrial Engineering, School of Business Administration, Northeastern University, Shenyang, China
- Zhixing Li: Department of Industrial Engineering, School of Business Administration, Northeastern University, Shenyang, China
- Haiyang Ma: Department of Industrial Engineering, School of Business Administration, Northeastern University, Shenyang, China
- Vincent G Duffy: School of Industrial Engineering, Purdue University, West Lafayette, IN, USA
5. Li Y, Wu B, Huang Y, Luan S. Developing trustworthy artificial intelligence: insights from research on interpersonal, human-automation, and human-AI trust. Front Psychol 2024;15:1382693. [PMID: 38694439] [PMCID: PMC11061529] [DOI: 10.3389/fpsyg.2024.1382693]
Abstract
The rapid advancement of artificial intelligence (AI) has impacted society in many aspects. Alongside this progress, concerns such as privacy violation, discriminatory bias, and safety risks have also surfaced, highlighting the need for the development of ethical, responsible, and socially beneficial AI. In response, the concept of trustworthy AI has gained prominence, and several guidelines for developing trustworthy AI have been proposed. Against this background, we demonstrate the significance of psychological research in identifying factors that contribute to the formation of trust in AI. Specifically, we review research findings on interpersonal, human-automation, and human-AI trust from the perspective of a three-dimensional framework (i.e., the trustor, the trustee, and their interactive context). The framework synthesizes common factors related to trust formation and maintenance across different trust types. These factors point out the foundational requirements for building trustworthy AI and provide pivotal guidance for its development, which also involves communication, education, and training for users. We conclude by discussing how insights from trust research can help enhance AI's trustworthiness and foster its adoption and application.
Affiliation(s)
- Yugang Li: CAS Key Laboratory for Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of the Chinese Academy of Sciences, Beijing, China
- Baizhou Wu: CAS Key Laboratory for Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of the Chinese Academy of Sciences, Beijing, China
- Yuqi Huang: CAS Key Laboratory for Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of the Chinese Academy of Sciences, Beijing, China
- Shenghua Luan: CAS Key Laboratory for Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of the Chinese Academy of Sciences, Beijing, China
6. Deng M, Chen J, Wu Y, Ma S, Li H, Yang Z, Shen Y. Using voice recognition to measure trust during interactions with automated vehicles. Applied Ergonomics 2024;116:104184. [PMID: 38048717] [DOI: 10.1016/j.apergo.2023.104184]
Abstract
Trust in automated vehicle systems (AVs) can impact the experience and safety of drivers and passengers. This work investigates whether speech can be used to measure drivers' trust in AVs. Seventy-five participants were randomly assigned to a high-trust group (an AV with 100% correctness, no crashes, and four system messages with visual-auditory takeover requests, TORs) or a low-trust group (an AV with 60% correctness, a 40% crash rate, and two system messages with visual-only TORs). Voice interaction tasks were used to collect speech information during the driving process. The results revealed that these settings successfully induced trust and distrust states. The extracted speech features of the two trust groups were used to train a back-propagation neural network, which was evaluated on its ability to accurately predict the trust classification. The highest classification accuracy was 90.80%. This study proposes a method for accurately measuring trust in automated vehicles using voice recognition.
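A hedged sketch of the classification step described above — training a back-propagation neural network on extracted speech features to separate the high-trust and low-trust groups — could look like this; the feature matrix, network size, and data split are illustrative assumptions only.

```python
# Illustrative sketch: binary trust classification from speech features with a
# back-propagation neural network (MLP). Data are synthetic stand-ins.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)
n_samples, n_features = 150, 24          # assumed: e.g., pitch, jitter, MFCC statistics
X = rng.normal(size=(n_samples, n_features))
y = rng.integers(0, 2, size=n_samples)   # 0 = low-trust group, 1 = high-trust group

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=1, stratify=y)

scaler = StandardScaler().fit(X_train)
clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=1)
clf.fit(scaler.transform(X_train), y_train)

pred = clf.predict(scaler.transform(X_test))
print(f"classification accuracy: {accuracy_score(y_test, pred):.2%}")
```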
Affiliation(s)
- Miaomiao Deng: Department of Psychology, Zhejiang Sci-Tech University, Hangzhou, China
- Jiaqi Chen: Department of Psychology, Zhejiang Sci-Tech University, Hangzhou, China
- Yue Wu: Department of Psychology, Zhejiang Sci-Tech University, Hangzhou, China
- Shu Ma: Department of Psychology, Zhejiang Sci-Tech University, Hangzhou, China
- Hongting Li: Institute of Applied Psychology, College of Education, Zhejiang University of Technology, Hangzhou, China
- Zhen Yang: Department of Psychology, Zhejiang Sci-Tech University, Hangzhou, China
- Yi Shen: Department of Mathematics, Zhejiang Sci-Tech University, Hangzhou, China
7. Yi B, Cao H, Song X, Wang J, Zhao S, Guo W, Cao D. How Can the Trust-Change Direction be Measured and Identified During Takeover Transitions in Conditionally Automated Driving? Using Physiological Responses and Takeover-Related Factors. Human Factors 2024;66:1276-1301. [PMID: 36625335] [DOI: 10.1177/00187208221143855]
Abstract
OBJECTIVE This paper proposes an objective method to measure and identify trust-change directions during takeover transitions (TTs) in conditionally automated vehicles (AVs). BACKGROUND Takeover requests (TORs) will be recurring events in conditionally automated driving that could undermine trust and lead to inappropriate reliance on conditionally automated vehicles, such as misuse and disuse. METHOD Thirty-four drivers engaged in a non-driving-related task were involved in a sequence of takeover events in a driving simulator. The relationships and effects between drivers' physiological responses, takeover-related factors, and trust-change directions during TTs were explored through a combination of an unsupervised learning algorithm and statistical analyses. Furthermore, several typical machine learning methods were applied to establish recognition models of trust-change directions during TTs based on takeover-related factors and physiological parameters. RESULTS Combining the changes in subjective trust rating and monitoring behavior before and after takeover can reliably measure trust-change directions during TTs. The statistical analyses showed that physiological parameters (i.e., skin conductance and heart rate) during TTs are negatively linked with trust-change directions. Drivers were more likely to increase trust during TTs with longer TOR lead times, more takeover experience, and in the stationary-vehicle scenario. The F1-score of the random forest (RF) model was approximately 77.3%. CONCLUSION The features investigated and the RF model developed can accurately identify trust-change directions during TTs. APPLICATION These findings can support the development of trust monitoring systems to mitigate both drivers' overtrust and undertrust in conditionally automated vehicles.
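For illustration only, the recognition-model step reported here (a random forest classifying trust-change direction from takeover-related and physiological features, scored by F1) might be sketched as below; the feature names and data are hypothetical stand-ins, not the study's dataset.

```python
# Illustrative sketch: classifying trust-change direction during takeover
# transitions from physiological + takeover-related features with a random
# forest, scored by F1. Features and data are hypothetical.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_events = 300
X = pd.DataFrame({
    "skin_conductance_change": rng.normal(size=n_events),
    "heart_rate_change": rng.normal(size=n_events),
    "tor_lead_time_s": rng.uniform(3, 10, size=n_events),
    "takeover_count": rng.integers(1, 9, size=n_events),
    "scenario_stationary": rng.integers(0, 2, size=n_events),
})
y = rng.integers(0, 2, size=n_events)  # 0 = trust decrease, 1 = trust increase

rf = RandomForestClassifier(n_estimators=300, random_state=2)
f1 = cross_val_score(rf, X, y, cv=5, scoring="f1")
print(f"mean F1 across folds: {f1.mean():.3f}")
```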
Affiliation(s)
- Song Zhao: University of Waterloo, Waterloo, ON, Canada
- Dongpu Cao: University of Waterloo, Waterloo, ON, Canada
8. Carter OBJ, Loft S, Visser TAW. Meaningful Communication but not Superficial Anthropomorphism Facilitates Human-Automation Trust Calibration: The Human-Automation Trust Expectation Model (HATEM). Human Factors 2023:187208231218156. [PMID: 38041565] [DOI: 10.1177/00187208231218156]
Abstract
OBJECTIVE The objective was to demonstrate that anthropomorphism needs to communicate contextually useful information to increase user confidence and accurately calibrate human trust in automation. BACKGROUND Anthropomorphism is believed to improve human-automation trust, but supporting evidence remains equivocal. We test the Human-Automation Trust Expectation Model (HATEM), which predicts that improvements to trust calibration and confidence in accepted advice arising from anthropomorphism will be weak unless anthropomorphism aids naturalistic communication of contextually useful information to facilitate prediction of automation failures. METHOD Ninety-eight undergraduates used a submarine periscope simulator to classify ships, aided by the Ship Automated Modelling (SAM) system that was 50% reliable. A between-subjects 2 × 3 design compared SAM appearance (anthropomorphic avatar vs. camera eye) and voice inflection (monotone vs. meaningless vs. meaningful), with the meaningful inflections communicating contextually useful information about automated advice regarding certainty and uncertainty. RESULTS The avatar SAM appearance was rated as more anthropomorphic than the camera eye, and meaningless and meaningful inflections were both rated more anthropomorphic than monotone. However, for subjective trust, trust calibration, and confidence in accepting SAM advice, there was no evidence of anthropomorphic appearance having any impact, while there was decisive evidence that meaningful inflections yielded better outcomes on these trust measures than monotone and meaningless inflections. CONCLUSION Anthropomorphism had negligible impact on human-automation trust unless its execution enhanced communication of relevant information that allowed participants to better calibrate expectations of automation performance. APPLICATION Designers using anthropomorphism to calibrate trust need to consider what contextually useful information will be communicated via anthropomorphic features.
Affiliation(s)
- Shayne Loft: The University of Western Australia, Australia
9. Alsaid A, Li M, Chiou EK, Lee JD. Measuring trust: a text analysis approach to compare, contrast, and select trust questionnaires. Front Psychol 2023;14:1192020. [PMID: 38034296] [PMCID: PMC10684734] [DOI: 10.3389/fpsyg.2023.1192020]
Abstract
Introduction Trust has emerged as a prevalent construct to describe relationships between people and between people and technology in myriad domains. Across disciplines, researchers have relied on many different questionnaires to measure trust. The degree to which these questionnaires differ has not been systematically explored. In this paper, we use a word-embedding text analysis technique to identify the differences and common themes across the most used trust questionnaires and provide guidelines for questionnaire selection. Methods A review was conducted to identify existing trust questionnaires. In total, we included 46 trust questionnaires from three main domains (i.e., Automation, Humans, and E-commerce) with a total of 626 items measuring different trust layers (i.e., Dispositional, Learned, and Situational). Next, we encoded the words within each questionnaire using GloVe word embeddings and computed the embedding for each questionnaire item and for each questionnaire. We reduced the dimensionality of the resulting dataset using UMAP to visualize these embeddings in scatterplots and implemented the visualization in a web app for interactive exploration of the questionnaires (https://areen.shinyapps.io/Trust_explorer/). Results At the word level, the semantic space serves to produce a lexicon of trust-related words. At the item and questionnaire level, the analysis provided recommendations for questionnaire selection based on the dispersion of questionnaire items and on the domain and layer composition of each questionnaire. Along with the web app, the results help explore the semantic space of trust questionnaires and guide the questionnaire selection process. Discussion The results provide a novel means to compare and select trust questionnaires and to glean insights about trust from spoken dialog or written comments.
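A minimal sketch of the embedding-and-projection pipeline outlined above (average pretrained GloVe vectors per questionnaire item, then project items to 2D with UMAP for a scatterplot), assuming gensim for the pretrained vectors and umap-learn for the projection; the example items are invented, and the paper's actual corpus was 626 items from 46 questionnaires.

```python
# Illustrative sketch only: embed questionnaire items with pretrained GloVe
# word vectors and project them to 2D with UMAP. The items below are invented.
import numpy as np
import gensim.downloader as api   # pip install gensim (downloads ~130 MB model)
from umap import UMAP             # pip install umap-learn

glove = api.load("glove-wiki-gigaword-100")   # 100-d GloVe word vectors

items = [
    "the system is dependable",
    "i can rely on the automation",
    "the automation behaves in a consistent manner",
    "the website protects my personal information",
    "this vendor is honest with its customers",
    "i trust other people until they give me a reason not to",
]

def item_embedding(text):
    """Mean GloVe vector of the in-vocabulary words of one questionnaire item."""
    words = [w for w in text.lower().split() if w in glove]
    return np.mean([glove[w] for w in words], axis=0)

item_vectors = np.vstack([item_embedding(t) for t in items])

# 2D coordinates for an item-level scatterplot (as in the paper's web app).
# Random init and a small n_neighbors because this toy set is tiny.
coords = UMAP(n_components=2, n_neighbors=3, init="random",
              random_state=0).fit_transform(item_vectors)
print(coords.shape)   # (6, 2)
```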
Affiliation(s)
- Areen Alsaid: Department of Industrial and Manufacturing Systems Engineering, University of Michigan-Dearborn, Dearborn, MI, United States
- Mengyao Li: Department of Industrial and Systems Engineering, University of Wisconsin-Madison, Madison, WI, United States
- Erin K. Chiou: Department of Human Systems Engineering, Arizona State University, Mesa, AZ, United States
- John D. Lee: Department of Industrial and Systems Engineering, University of Wisconsin-Madison, Madison, WI, United States
10. Bocklisch F, Huchler N. Humans and cyber-physical systems as teammates? Characteristics and applicability of the human-machine-teaming concept in intelligent manufacturing. Front Artif Intell 2023;6:1247755. [PMID: 38028669] [PMCID: PMC10655019] [DOI: 10.3389/frai.2023.1247755]
Abstract
The paper explores and comments on the theoretical concept of human-machine-teaming in intelligent manufacturing. Industrial production is an important area of work applications and should be developed toward a more anthropocentric Industry 4.0/5.0. Teaming is used as a design metaphor for the human-centered integration of workers and complex cyber-physical production systems using artificial intelligence. Concrete algorithmic solutions for technical processes should be based on theoretical concepts. A combination of a scoping literature review and commentary was used to identify key characteristics of teaming applicable to the work environment addressed. From the body of literature, five criteria were selected and commented on. Two characteristics seemed particularly promising to guide the development of human-centered artificial intelligence and create tangible benefits in the mid-term: complementarity and shared knowledge/goals. These criteria are outlined with two industrial examples: human-robot collaboration in assembly and intelligent decision support in thermal spraying. The main objective of the paper is to contribute to the discourse on human-centered artificial intelligence by exploring the theoretical concept of human-machine-teaming from a human-oriented perspective. Future research should focus on the empirical implementation and evaluation of teaming characteristics from different transdisciplinary viewpoints.
Affiliation(s)
- Franziska Bocklisch: Department of Mechanical Engineering, Chemnitz University of Technology, Chemnitz, Germany; Fraunhofer Institute for Machine Tools and Forming Technology, Chemnitz, Germany
11. Schreibelmayr S, Moradbakhti L, Mara M. First impressions of a financial AI assistant: differences between high trust and low trust users. Front Artif Intell 2023;6:1241290. [PMID: 37854078] [PMCID: PMC10579608] [DOI: 10.3389/frai.2023.1241290]
Abstract
Calibrating appropriate trust of non-expert users in artificial intelligence (AI) systems is a challenging yet crucial task. To align subjective levels of trust with the objective trustworthiness of a system, users need information about its strengths and weaknesses. The specific explanations that help individuals avoid over- or under-trust may vary depending on their initial perceptions of the system. In an online study, 127 participants watched a video of a financial AI assistant with varying degrees of decision agency. They generated 358 spontaneous text descriptions of the system and completed standard questionnaires from the Trust in Automation and Technology Acceptance literature (including perceived system competence, understandability, human-likeness, uncanniness, intention of developers, intention to use, and trust). Comparisons between a high trust and a low trust user group revealed significant differences in both open-ended and closed-ended answers. While high trust users characterized the AI assistant as more useful, competent, understandable, and humanlike, low trust users highlighted the system's uncanniness and potential dangers. Manipulating the AI assistant's agency had no influence on trust or intention to use. These findings are relevant for effective communication about AI and trust calibration of users who differ in their initial levels of trust.
Affiliation(s)
- Martina Mara: Robopsychology Lab, Linz Institute of Technology, Johannes Kepler University Linz, Linz, Austria
12. Roeder L, Hoyte P, van der Meer J, Fell L, Johnston P, Kerr G, Bruza P. A Quantum Model of Trust Calibration in Human-AI Interactions. Entropy (Basel) 2023;25:1362. [PMID: 37761661] [PMCID: PMC10528121] [DOI: 10.3390/e25091362]
Abstract
This exploratory study investigates a human agent's evolving judgements of reliability when interacting with an AI system. Two aims drove this investigation: (1) compare the predictive performance of quantum vs. Markov random walk models regarding human reliability judgements of an AI system and (2) identify a neural correlate of the perturbation of a human agent's judgement of the AI's reliability. As AI becomes more prevalent, it is important to understand how humans trust these technologies and how trust evolves when interacting with them. A mixed-methods experiment was developed for exploring reliability calibration in human-AI interactions. The behavioural data collected were used as a baseline to assess the predictive performance of the quantum and Markov models. We found the quantum model to better predict the evolving reliability ratings than the Markov model. This may be due to the quantum model being more amenable to representing the sometimes pronounced within-subject variability of reliability ratings. Additionally, a clear event-related potential response was found in the electroencephalographic (EEG) data, which is attributed to the expectations of reliability being perturbed. The identification of a trust-related EEG-based measure opens the door to exploring how it could be used to adapt the parameters of the quantum model in real time.
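To make the modeling contrast concrete, here is a minimal numerical sketch (an illustration under assumed parameters, not the authors' model code) of how a Markov random walk evolves probabilities over discretized evidence states while a quantum walk evolves complex amplitudes whose squared magnitudes yield probabilities:

```python
# Minimal illustration of the Markov vs. quantum random walk contrast:
# a Markov model evolves probabilities directly, while a quantum model
# evolves complex amplitudes whose squared magnitudes give probabilities.
# The 5-state evidence lattice and step parameters are invented for illustration.
import numpy as np
from scipy.linalg import expm

n_states = 5                       # discretized "reliability evidence" states

# Markov walk: column-stochastic transition matrix (stay / step left / step right).
T = np.zeros((n_states, n_states))
for j in range(n_states):
    T[j, j] += 0.6
    T[max(j - 1, 0), j] += 0.2
    T[min(j + 1, n_states - 1), j] += 0.2

p = np.zeros(n_states)
p[n_states // 2] = 1.0             # start from an undecided state
for _ in range(10):
    p = T @ p                      # probabilities diffuse toward a steady state

# Quantum walk: nearest-neighbour coupling Hamiltonian, unitary time evolution.
H = np.zeros((n_states, n_states))
for j in range(n_states - 1):
    H[j, j + 1] = H[j + 1, j] = 1.0
U = expm(-1j * H * 0.5)            # one unitary step of duration dt = 0.5

amp = np.zeros(n_states, dtype=complex)
amp[n_states // 2] = 1.0
for _ in range(10):
    amp = U @ amp                  # amplitudes interfere rather than diffuse

print("Markov probabilities: ", np.round(p, 3))
print("Quantum probabilities:", np.round(np.abs(amp) ** 2, 3))
```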
Affiliation(s)
- Luisa Roeder: School of Information Systems, Queensland University of Technology, Brisbane 4000, Australia
- Pamela Hoyte: School of Information Systems, Queensland University of Technology, Brisbane 4000, Australia
- Johan van der Meer: School of Information Systems, Queensland University of Technology, Brisbane 4000, Australia
- Lauren Fell: School of Information Systems, Queensland University of Technology, Brisbane 4000, Australia
- Patrick Johnston: School of Exercise and Nutrition Sciences, Queensland University of Technology, Brisbane 4000, Australia
- Graham Kerr: School of Exercise and Nutrition Sciences, Queensland University of Technology, Brisbane 4000, Australia
- Peter Bruza: School of Information Systems, Queensland University of Technology, Brisbane 4000, Australia
13. Centeio Jorge C, Bouman NH, Jonker CM, Tielman ML. Exploring the effect of automation failure on the human's trustworthiness in human-agent teamwork. Front Robot AI 2023;10:1143723. [PMID: 37680760] [PMCID: PMC10482046] [DOI: 10.3389/frobt.2023.1143723]
Abstract
Introduction: Collaboration in teams composed of both humans and automation has an interdependent nature, which demands calibrated trust among all the team members. For building suitable autonomous teammates, we need to study how trust and trustworthiness function in such teams. In particular, automation occasionally fails to do its job, which leads to a decrease in a human's trust. Research has found interesting effects of such a reduction of trust on the human's trustworthiness, i.e., human characteristics that make them more or less reliable. This paper investigates how automation failure in a human-automation collaborative scenario affects the human's trust in the automation, as well as the human's trustworthiness towards the automation. Methods: We present a 2 × 2 mixed design experiment in which the participants perform a simulated task in a 2D grid-world, collaborating with an automation in a "moving-out" scenario. During the experiment, we measure the participants' trustworthiness, trust, and liking regarding the automation, both subjectively and objectively. Results: Our results show that automation failure negatively affects the human's trustworthiness, as well as their trust in and liking of the automation. Discussion: Learning the effects of automation failure on trust and trustworthiness can contribute to a better understanding of the nature and dynamics of trust in these teams and to improving human-automation teamwork.
Affiliation(s)
- Carolina Centeio Jorge: Interactive Intelligence, Intelligent Systems Department, Delft University of Technology, Delft, Netherlands
- Nikki H. Bouman: Interactive Intelligence, Intelligent Systems Department, Delft University of Technology, Delft, Netherlands
- Catholijn M. Jonker: Interactive Intelligence, Intelligent Systems Department, Delft University of Technology, Delft, Netherlands; Leiden Institute of Advanced Computer Science (LIACS), University of Leiden, Leiden, Netherlands
- Myrthe L. Tielman: Interactive Intelligence, Intelligent Systems Department, Delft University of Technology, Delft, Netherlands
14. Momen A, de Visser EJ, Fraune MR, Madison A, Rueben M, Cooley K, Tossell CC. Group trust dynamics during a risky driving experience in a Tesla Model X. Front Psychol 2023;14:1129369. [PMID: 37408965] [PMCID: PMC10319128] [DOI: 10.3389/fpsyg.2023.1129369]
Abstract
The growing concern about the risk and safety of autonomous vehicles (AVs) has made it vital to understand driver trust and behavior when operating AVs. While research has uncovered human factors and design issues based on individual driver performance, there remains a lack of insight into how trust in automation evolves in groups of people who face risk and uncertainty while traveling in AVs. To this end, we conducted a naturalistic experiment with groups of participants who were encouraged to engage in conversation while riding a Tesla Model X on campus roads. Our methodology was uniquely suited to uncover these issues through naturalistic interaction by groups in the face of a risky driving context. Conversations were analyzed, revealing several themes pertaining to trust in automation: (1) collective risk perception, (2) experimenting with automation, (3) group sense-making, (4) human-automation interaction issues, and (5) benefits of automation. Our findings highlight the untested and experimental nature of AVs and confirm serious concerns about the safety and readiness of this technology for on-road use. The process of determining appropriate trust and reliance in AVs will therefore be essential for drivers and passengers to ensure the safe use of this experimental and continuously changing technology. Revealing insights into social group-vehicle interaction, our results speak to the potential dangers and ethical challenges with AVs as well as provide theoretical insights on group trust processes with advanced technology.
Affiliation(s)
- Ali Momen: United States Air Force Academy, Colorado Springs, CO, United States
- Marlena R. Fraune: Department of Psychology, New Mexico State University, Las Cruces, NM, United States
- Anna Madison: United States Air Force Academy, Colorado Springs, CO, United States; United States Army Research Laboratory, Aberdeen Proving Ground, Aberdeen, MD, United States
- Matthew Rueben: Department of Psychology, New Mexico State University, Las Cruces, NM, United States
- Katrina Cooley: United States Air Force Academy, Colorado Springs, CO, United States
- Chad C. Tossell: United States Air Force Academy, Colorado Springs, CO, United States
15. Rodriguez Rodriguez L, Bustamante Orellana CE, Chiou EK, Huang L, Cooke N, Kang Y. A review of mathematical models of human trust in automation. Frontiers in Neuroergonomics 2023;4:1171403. [PMID: 38234493] [PMCID: PMC10790856] [DOI: 10.3389/fnrgo.2023.1171403]
Abstract
Understanding how people trust autonomous systems is crucial to achieving better performance and safety in human-autonomy teaming. Trust in automation is a rich and complex process that has given rise to numerous measures and approaches aimed at comprehending and examining it. Although researchers have been developing models for understanding the dynamics of trust in automation for several decades, these models are primarily conceptual and often involve components that are difficult to measure. Mathematical models have emerged as powerful tools for gaining insightful knowledge about the dynamic processes of trust in automation. This paper provides an overview of various mathematical modeling approaches, their limitations, feasibility, and generalizability for trust dynamics in human-automation interaction contexts. Furthermore, this study proposes a novel and dynamic approach to modeling trust in automation, emphasizing the importance of incorporating different timescales into measurable components. Given the complex nature of trust in automation, we also suggest combining machine learning and dynamic modeling approaches and incorporating physiological data.
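As a generic, hedged example of the kind of dynamic model this review surveys (not any specific model from the paper), a discrete-time trust update in which trust drifts toward observed automation performance on two timescales might be written as:

```python
# Generic illustration of a dynamic trust model: trust drifts toward the
# automation's observed performance, with separate fast (situational) and
# slow (learned) components. Parameters are illustrative, not from the review.
import numpy as np

rng = np.random.default_rng(3)
steps = 50
performance = np.clip(0.8 + 0.1 * rng.normal(size=steps), 0, 1)
performance[20:25] = 0.2          # a stretch of automation failures

alpha_fast, alpha_slow = 0.5, 0.05
fast, slow = 0.5, 0.5             # initial trust components
trust = []
for perf in performance:
    fast += alpha_fast * (perf - fast)   # reacts quickly to recent outcomes
    slow += alpha_slow * (perf - slow)   # changes gradually with experience
    trust.append(0.5 * fast + 0.5 * slow)

print(f"trust before failures: {trust[19]:.2f}, right after: {trust[25]:.2f}")
```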
Affiliation(s)
- Lucero Rodriguez Rodriguez: Simon A. Levin Mathematical and Computational Modeling Sciences Center, Arizona State University, Tempe, AZ, United States
- Carlos E. Bustamante Orellana: Simon A. Levin Mathematical and Computational Modeling Sciences Center, Arizona State University, Tempe, AZ, United States
- Erin K. Chiou: Human Systems Engineering, Arizona State University, Mesa, AZ, United States
- Lixiao Huang: Center for Human, Artificial Intelligence, and Robot Teaming, Global Security Initiative, Arizona State University, Mesa, AZ, United States
- Nancy Cooke: Human Systems Engineering, Arizona State University, Mesa, AZ, United States; Center for Human, Artificial Intelligence, and Robot Teaming, Global Security Initiative, Arizona State University, Mesa, AZ, United States
- Yun Kang: Sciences and Mathematics Faculty, College of Integrative Sciences and Arts, Arizona State University, Mesa, AZ, United States
16. Building trust in automatic video interviews using various AI interfaces: Tangibility, immediacy, and transparency. Computers in Human Behavior 2023. [DOI: 10.1016/j.chb.2023.107713]
17. Alarcon GM, Capiola A, Hamdan IA, Lee MA, Jessup SA. Differential biases in human-human versus human-robot interactions. Applied Ergonomics 2023;106:103858. [PMID: 35994948] [DOI: 10.1016/j.apergo.2022.103858]
Abstract
Research on human-robot interaction indicates possible differences in trust toward robots that do not exist in human-human interactions. Research on these differences has traditionally focused on performance degradations. The current study sought to explore differences in human-robot and human-human trust interactions with performance, consideration, and morality trustworthiness manipulations, which are based on the ability/performance, benevolence/purpose, and integrity/process manipulations, respectively, from previous research. We used a mixed factorial hierarchical linear model design to explore the effects of trustworthiness manipulations on trustworthiness perceptions, trust intentions, and trust behaviors in a trust game. We found partner (human versus robot) differences across all three trustworthiness perceptions, indicating that biases towards robots may be more expansive than previously thought. Additionally, there were marginal effects of partner differences on trust intentions. Interestingly, there were no differences between partners on trust behaviors. Results indicate human biases toward robots may be more complex than considered in the literature.
18. Walliser AC, de Visser EJ, Shaw TH. Exploring system wide trust prevalence and mitigation strategies with multiple autonomous agents. Computers in Human Behavior 2023. [DOI: 10.1016/j.chb.2023.107671]
19. Okuoka K, Enami K, Kimoto M, Imai M. Multi-device trust transfer: Can trust be transferred among multiple devices? Front Psychol 2022;13:920844. [PMID: 35992472] [PMCID: PMC9382300] [DOI: 10.3389/fpsyg.2022.920844]
Abstract
Recent advances in automation technology have increased the opportunity for collaboration between humans and multiple autonomous systems such as robots and self-driving cars. In research on autonomous system collaboration, the trust users have in autonomous systems is an important topic. Previous research suggests that the trust built by observing a task can be transferred to other tasks. However, such research did not focus on trust in multiple different devices but on trust in one device or in several devices of the same type. Thus, we do not know how trust changes in an environment involving the operation of multiple different devices, such as a construction site. We investigated whether trust can be transferred among multiple different devices and examined the effects of two factors on such transfer: the similarity among the devices and the agency attributed to each device. We found that the trust a user has in a device can be transferred to other devices and that attributing different agencies to each device can clarify the distinction among devices, preventing trust from transferring.
Affiliation(s)
- Kohei Okuoka: Graduate School of Science and Technology, Keio University, Yokohama, Japan
- Kouichi Enami: Graduate School of Science and Technology, Keio University, Yokohama, Japan
- Mitsuhiko Kimoto: Graduate School of Science and Technology, Keio University, Yokohama, Japan; Interaction Science Laboratories, ATR, Kyoto, Japan
- Michita Imai: Graduate School of Science and Technology, Keio University, Yokohama, Japan
20. Abubshait A, Parenti L, Perez-Osorio J, Wykowska A. Misleading Robot Signals in a Classification Task Induce Cognitive Load as Measured by Theta Synchronization Between Frontal and Temporo-parietal Brain Regions. Frontiers in Neuroergonomics 2022;3:838136. [PMID: 38235447] [PMCID: PMC10790903] [DOI: 10.3389/fnrgo.2022.838136]
Abstract
As technological advances progress, we find ourselves in situations where we need to collaborate with artificial agents (e.g., robots, autonomous machines and virtual agents). For example, autonomous machines will be part of search and rescue missions, space exploration and decision aids during monitoring tasks (e.g., baggage-screening at the airport). Efficient communication in these scenarios would be crucial to interact fluently. While studies examined the positive and engaging effect of social signals (i.e., gaze communication) on human-robot interaction, little is known about the effects of conflicting robot signals on the human actor's cognitive load. Moreover, it is unclear from a social neuroergonomics perspective how different brain regions synchronize or communicate with one another to deal with the cognitive load induced by conflicting signals in social situations with robots. The present study asked if neural oscillations that correlate with conflict processing are observed between brain regions when participants view conflicting robot signals. Participants classified different objects based on their color after a robot (i.e., iCub), presented on a screen, simulated handing over the object to them. The robot proceeded to cue participants (with a head shift) to the correct or incorrect target location. Since prior work has shown that unexpected cues can interfere with oculomotor planning and induce conflict, we expected that conflicting robot social signals would interfere with the execution of actions. Indeed, we found that conflicting social signals elicited neural correlates of cognitive conflict as measured by mid-brain theta oscillations. More importantly, we found higher coherence values between mid-frontal electrode locations and posterior occipital electrode locations in the theta-frequency band for incongruent vs. congruent cues, which suggests that theta-band synchronization between these two regions allows for communication between cognitive control systems and gaze-related attentional mechanisms. We also found correlations between coherence values and behavioral performance (reaction times), which were moderated by the congruency of the robot signal. In sum, the influence of irrelevant social signals during goal-oriented tasks can be indexed by behavioral, neural oscillation and brain connectivity patterns. These data provide insights about a new measure for cognitive load, which can also be used in predicting human interaction with autonomous machines.
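A minimal sketch of the coherence measure described here (magnitude-squared coherence between a mid-frontal and a posterior channel, averaged over the theta band), with synthetic signals and an assumed sampling rate standing in for real EEG data:

```python
# Illustrative sketch: theta-band (4-7 Hz) coherence between two EEG channels,
# e.g., a mid-frontal and a posterior occipital electrode. The signals below
# are synthetic; real data would come from the EEG recording.
import numpy as np
from scipy.signal import coherence

fs = 500                                   # assumed sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(4)

theta = np.sin(2 * np.pi * 6 * t)          # shared 6 Hz theta component
frontal = theta + 0.5 * rng.normal(size=t.size)     # e.g., a mid-frontal channel
posterior = theta + 0.5 * rng.normal(size=t.size)   # e.g., a posterior occipital channel

f, cxy = coherence(frontal, posterior, fs=fs, nperseg=fs * 2)
theta_band = (f >= 4) & (f <= 7)
print(f"mean theta-band coherence: {cxy[theta_band].mean():.2f}")
```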
Affiliation(s)
- Abdulaziz Abubshait: Social Cognition in Human Robot Interaction (S4HRI), Italian Institute of Technology, Genova, Italy
- Lorenzo Parenti: Social Cognition in Human Robot Interaction (S4HRI), Italian Institute of Technology, Genova, Italy; Department of Psychology, University of Torino, Turin, Italy
- Jairo Perez-Osorio: Social Cognition in Human Robot Interaction (S4HRI), Italian Institute of Technology, Genova, Italy
- Agnieszka Wykowska: Social Cognition in Human Robot Interaction (S4HRI), Italian Institute of Technology, Genova, Italy
21. Krausman A, Neubauer C, Forster D, Lakhmani S, Baker AL, Fitzhugh SM, Gremillion G, Wright JL, Metcalfe JS, Schaefer KE. Trust Measurement in Human-Autonomy Teams: Development of a Conceptual Toolkit. ACM Transactions on Human-Robot Interaction 2022. [DOI: 10.1145/3530874]
Abstract
The rise in artificial intelligence capabilities in autonomy-enabled systems and robotics has pushed research to address the unique nature of human-autonomy team collaboration. The goal of these advanced technologies is to enable rapid decision making, enhance situation awareness, promote shared understanding, and improve team dynamics. Simultaneously, use of these technologies is expected to reduce risk to those who collaborate with these systems. Yet, for appropriate human-autonomy teaming to take place, especially as we move beyond dyadic partnerships, proper calibration of team trust is needed to effectively coordinate interactions during high-risk operations. But to meet this end, critical measures of team trust for this new dynamic of human-autonomy teams are needed. This paper seeks to expand on trust measurement principles and the foundation of human-autonomy teaming to propose a "toolkit" of novel methods that support the development, maintenance and calibration of trust in human-autonomy teams operating within uncertain, risky, and dynamic environments.
Affiliation(s)
- Andrea Krausman: US Army Combat Capabilities Development Command, Army Research Laboratory
- Catherine Neubauer: US Army Combat Capabilities Development Command, Army Research Laboratory
- Daniel Forster: US Army Combat Capabilities Development Command, Army Research Laboratory
- Shan Lakhmani: US Army Combat Capabilities Development Command, Army Research Laboratory
- Anthony L Baker: Oak Ridge Associated Universities, US Army Combat Capabilities Development Command, Army Research Laboratory
- Sean M. Fitzhugh: US Army Combat Capabilities Development Command, Army Research Laboratory
- Gregory Gremillion: US Army Combat Capabilities Development Command, Army Research Laboratory
- Julia L. Wright: US Army Combat Capabilities Development Command, Army Research Laboratory
- Jason S. Metcalfe: US Army Combat Capabilities Development Command, Army Research Laboratory
22. Solberg E, Kaarstad M, Eitrheim MHR, Bisio R, Reegård K, Bloch M. A Conceptual Model of Trust, Perceived Risk, and Reliance on AI Decision Aids. Group & Organization Management 2022. [DOI: 10.1177/10596011221081238]
Abstract
There is increasing interest in the use of artificial intelligence (AI) to improve organizational decision-making. However, research indicates that people’s trust in and choice to rely on “AI decision aids” can be tenuous. In the present paper, we connect research on trust in AI with Mayer, Davis, and Schoorman’s (1995) model of organizational trust to elaborate a conceptual model of trust, perceived risk, and reliance on AI decision aids at work. Drawing from the trust in technology, trust in automation, and decision support systems literatures, we redefine central concepts in Mayer et al.’s (1995) model, expand the model to include new, relevant constructs (like perceived control over an AI decision aid), and refine propositions about the relationships expected in this context. The conceptual model put forward presents a framework that can help researchers studying trust in and reliance on AI decision aids develop their research models, build systematically on each other’s research, and contribute to a more cohesive understanding of the phenomenon. Our paper concludes with five next steps to take research on the topic forward.
Affiliation(s)
- Elizabeth Solberg: Department of Human-Centred Digitalization, Institute for Energy Technology, Halden, Norway
- Magnhild Kaarstad: Department of Human-Centred Digitalization, Institute for Energy Technology, Halden, Norway
- Maren H. Rø Eitrheim: Department of Human-Centred Digitalization, Institute for Energy Technology, Halden, Norway
- Rossella Bisio: Department of Humans and Automation, Institute for Energy Technology, Halden, Norway
- Kine Reegård: Department of Human-Centred Digitalization, Institute for Energy Technology, Halden, Norway
- Marten Bloch: Department of Humans and Automation, Institute for Energy Technology, Halden, Norway