1. Rittenberg BSP, Holland CW, Barnhart GE, Gaudreau SM, Neyedli HF. Trust with increasing and decreasing reliability. Human Factors. 2024;66:2569-2589. doi: 10.1177/00187208241228636. PMID: 38445652; PMCID: PMC11487872.
Abstract
OBJECTIVE The primary purpose was to determine how trust changes over time when automation reliability increases or decreases. A secondary purpose was to determine how task-specific self-confidence is associated with trust and reliability level. BACKGROUND Both overtrust and undertrust can be detrimental to system performance; therefore, the temporal dynamics of trust with changing reliability level need to be explored. METHOD Two experiments used a dominant-color identification task, where automation provided a recommendation to users, with the reliability of the recommendation changing over 300 trials. In Experiment 1, two groups of participants interacted with the system: one group started with a 50% reliable system which increased to 100%, while the other used a system that decreased from 100% to 50%. Experiment 2 included a group where automation reliability increased from 70% to 100%. RESULTS Trust was initially high in the decreasing group and then declined as reliability level decreased; however, trust also declined in the 50% increasing reliability group. Furthermore, when user self-confidence increased, automation reliability had a greater influence on trust. In Experiment 2, the 70% increasing reliability group showed increased trust in the system. CONCLUSION Trust does not always track the reliability of automated systems; in particular, it is difficult for trust to recover once the user has interacted with a low reliability system. APPLICATIONS This study provides initial evidence into the dynamics of trust for automation that gets better over time suggesting that users should only start interacting with automation when it is sufficiently reliable.
2. Carter OBJ, Loft S, Visser TAW. Meaningful Communication but not Superficial Anthropomorphism Facilitates Human-Automation Trust Calibration: The Human-Automation Trust Expectation Model (HATEM). Human Factors. 2024;66:2485-2502. doi: 10.1177/00187208231218156. PMID: 38041565; PMCID: PMC11457490.
Abstract
OBJECTIVE The objective was to demonstrate anthropomorphism needs to communicate contextually useful information to increase user confidence and accurately calibrate human trust in automation. BACKGROUND Anthropomorphism is believed to improve human-automation trust but supporting evidence remains equivocal. We test the Human-Automation Trust Expectation Model (HATEM) that predicts improvements to trust calibration and confidence in accepted advice arising from anthropomorphism will be weak unless it aids naturalistic communication of contextually useful information to facilitate prediction of automation failures. METHOD Ninety-eight undergraduates used a submarine periscope simulator to classify ships, aided by the Ship Automated Modelling (SAM) system that was 50% reliable. A between-subjects 2 × 3 design compared SAM appearance (anthropomorphic avatar vs. camera eye) and voice inflection (monotone vs. meaningless vs. meaningful), with the meaningful inflections communicating contextually useful information about automated advice regarding certainty and uncertainty. RESULTS Avatar SAM appearance was rated as more anthropomorphic than camera eye, and meaningless and meaningful inflections were both rated more anthropomorphic than monotone. However, for subjective trust, trust calibration, and confidence in accepting SAM advice, there was no evidence of anthropomorphic appearance having any impact, while there was decisive evidence that meaningful inflections yielded better outcomes on these trust measures than monotone and meaningless inflections. CONCLUSION Anthropomorphism had negligible impact on human-automation trust unless its execution enhanced communication of relevant information that allowed participants to better calibrate expectations of automation performance. APPLICATION Designers using anthropomorphism to calibrate trust need to consider what contextually useful information will be communicated via anthropomorphic features.
Affiliation(s)
- Shayne Loft
- The University of Western Australia, Australia
3. Rossignoli D, Manzi F, Gaggioli A, Marchetti A, Massaro D, Riva G, Maggioni MA. The Importance of Being Consistent: Attribution of Mental States in Strategic Human-Robot Interactions. Cyberpsychology, Behavior, and Social Networking. 2024;27:498-506. doi: 10.1089/cyber.2023.0353. PMID: 38770627.
Abstract
This article investigates the attribution of mental state (AMS) to an anthropomorphic robot by humans in a strategic interaction. We conducted an experiment in which human subjects are paired with either a human or an anthropomorphic robot to play an iterated Prisoner's Dilemma game, and we tested whether AMS is dependent on the robot "consistency," that is, the correspondence between the robot's verbal reaction and its behavior after a nonoptimal social outcome of the game is obtained. We find that human partners are attributed a higher mental state level than robotic partners, regardless of the partner's consistency between words and actions. Conversely, the level of AMS assigned to the robot is significantly higher when the robot is consistent in its words and actions. This finding is robust to the inclusion of psychological factors such as risk attitude and trust, and it holds regardless of subjects' initial beliefs about the adaptability of the robot. Finally, we find that when the robot apologizes for its behavior and defects in the following stage, the epistemic component of the AMS significantly increases.
Affiliation(s)
- Domenico Rossignoli
- DISEIS, Department of International Economics, Institutions and Development, Università Cattolica del Sacro Cuore, Milano, Italy
- CSCC, Cognitive Science and Communication research Center, Università Cattolica del Sacro Cuore, Milano, Italy
- HuRoLab, Università Cattolica del Sacro Cuore, Milano, Italy
- Federico Manzi
- HuRoLab, Università Cattolica del Sacro Cuore, Milano, Italy
- UniToM, Università Cattolica del Sacro Cuore, Milano, Italy
- Department of Psychology, Università Cattolica del Sacro Cuore, Milano, Italy
- Andrea Gaggioli
- Research Center of Communication Psychology, Università Cattolica del Sacro Cuore, Milano, Italy
- Humane Technology Lab, Università Cattolica del Sacro Cuore, Milano, Italy
- ATN-P Lab, IRCCS Istituto Auxologico Italiano, Milano, Italy
- Antonella Marchetti
- HuRoLab, Università Cattolica del Sacro Cuore, Milano, Italy
- UniToM, Università Cattolica del Sacro Cuore, Milano, Italy
- Humane Technology Lab, Università Cattolica del Sacro Cuore, Milano, Italy
- Department of Psychology, Università Cattolica del Sacro Cuore, Milano, Italy
- Davide Massaro
- HuRoLab, Università Cattolica del Sacro Cuore, Milano, Italy
- Humane Technology Lab, Università Cattolica del Sacro Cuore, Milano, Italy
- Department of Psychology, Università Cattolica del Sacro Cuore, Milano, Italy
- Giuseppe Riva
- Research Center of Communication Psychology, Università Cattolica del Sacro Cuore, Milano, Italy
- Humane Technology Lab, Università Cattolica del Sacro Cuore, Milano, Italy
- Department of Psychology, Università Cattolica del Sacro Cuore, Milano, Italy
- Mario A Maggioni
- DISEIS, Department of International Economics, Institutions and Development, Università Cattolica del Sacro Cuore, Milano, Italy
- CSCC, Cognitive Science and Communication research Center, Università Cattolica del Sacro Cuore, Milano, Italy
- HuRoLab, Università Cattolica del Sacro Cuore, Milano, Italy
- Humane Technology Lab, Università Cattolica del Sacro Cuore, Milano, Italy
4. Correia F, Melo FS, Paiva A. When a Robot Is Your Teammate. Top Cogn Sci. 2024;16:527-553. doi: 10.1111/tops.12634. PMID: 36573665.
Abstract
Creating effective teamwork between humans and robots involves not only addressing their performance as a team but also sustaining the quality and sense of unity among teammates, also known as cohesion. This paper explores the research problem of: how can we endow robotic teammates with social capabilities to improve the cohesive alliance with humans? By defining the concept of a human-robot cohesive alliance in the light of the multidimensional construct of cohesion from the social sciences, we propose to address this problem through the idea of multifaceted human-robot cohesion. We present our preliminary effort from previous works to examine each of the five dimensions of cohesion: social, collective, emotional, structural, and task. We finish the paper with a discussion on how human-robot cohesion contributes to the key questions and ongoing challenges of creating robotic teammates. Overall, cohesion in human-robot teams might be a key factor to propel team performance and it should be considered in the design, development, and evaluation of robotic teammates.
Affiliation(s)
- Filipa Correia
- INESC-ID, Instituto Superior Técnico, Universidade de Lisboa
- ITI, LARSyS, Instituto Superior Técnico, Universidade de Lisboa
- Ana Paiva
- INESC-ID, Instituto Superior Técnico, Universidade de Lisboa
5. Griffiths N, Bowden V, Wee S, Loft S. Return-to-Manual Performance can be Predicted Before Automation Fails. Human Factors. 2024;66:1333-1349. doi: 10.1177/00187208221147105. PMID: 36538745.
Abstract
OBJECTIVE This study aimed to examine operator state variables (workload, fatigue, and trust in automation) that may predict return-to-manual (RTM) performance when automation fails in simulated air traffic control. BACKGROUND Prior research has largely focused on triggering adaptive automation based on reactive indicators of performance degradation or operator strain. A more direct and effective approach may be to proactively engage/disengage automation based on predicted operator RTM performance (conflict detection accuracy and response time), which requires analyses of within-person effects. METHOD Participants accepted and handed-off aircraft from their sector and were assisted by imperfect conflict detection/resolution automation. To avoid aircraft conflicts, participants were required to intervene when automation failed to detect a conflict. Participants periodically rated their workload, fatigue and trust in automation. RESULTS For participants with the same or higher average trust than the sample average, an increase in their trust (relative to their own average) slowed their subsequent RTM response time. For participants with lower average fatigue than the sample average, an increase in their fatigue (relative to own average) improved their subsequent RTM response time. There was no effect of workload on RTM performance. CONCLUSIONS RTM performance degraded as trust in automation increased relative to participants' own average, but only for individuals with average or high levels of trust. APPLICATIONS Study outcomes indicate a potential for future adaptive automation systems to detect vulnerable operator states in order to predict subsequent RTM performance decrements.
Affiliation(s)
- Vanessa Bowden
- The University of Western Australia, Crawley, WA, Australia
- Serena Wee
- The University of Western Australia, Crawley, WA, Australia
- Shayne Loft
- The University of Western Australia, Crawley, WA, Australia
6. Li M, Guo F, Li Z, Ma H, Duffy VG. Interactive effects of users' openness and robot reliability on trust: evidence from psychological intentions, task performance, visual behaviours, and cerebral activations. Ergonomics. 2024:1-21. doi: 10.1080/00140139.2024.2343954. PMID: 38635303.
Abstract
Although trust plays a vital role in human-robot interaction, there is currently a dearth of literature examining the effect of users' openness personality on trust in actual interaction. This study aims to investigate the interaction effects of users' openness and robot reliability on trust. We designed a voice-based walking task and collected subjective trust ratings, task metrics, eye-tracking data, and fNIRS signals from users with different openness to unravel the psychological intentions, task performance, visual behaviours, and cerebral activations underlying trust. The results showed significant interaction effects. Users with low openness exhibited lower subjective trust, more fixations, and higher activation of rTPJ in the highly reliable condition than those with high openness. The results suggested that users with low openness might be more cautious and suspicious about the highly reliable robot and allocate more visual attention and neural processing to monitor and infer robot status than users with high openness.
Affiliation(s)
- Mingming Li
- Department of Industrial Engineering, College of Management Science and Engineering, Anhui University of Technology, Maanshan, China
- Department of Industrial Engineering, School of Business Administration, Northeastern University, Shenyang, China
- Fu Guo
- Department of Industrial Engineering, School of Business Administration, Northeastern University, Shenyang, China
- Zhixing Li
- Department of Industrial Engineering, School of Business Administration, Northeastern University, Shenyang, China
- Haiyang Ma
- Department of Industrial Engineering, School of Business Administration, Northeastern University, Shenyang, China
- Vincent G Duffy
- School of Industrial Engineering, Purdue University, West Lafayette, IN, USA
7. Schelble BG, Lopez J, Textor C, Zhang R, McNeese NJ, Pak R, Freeman G. Towards Ethical AI: Empirically Investigating Dimensions of AI Ethics, Trust Repair, and Performance in Human-AI Teaming. Human Factors. 2024;66:1037-1055. doi: 10.1177/00187208221116952. PMID: 35938319.
Abstract
OBJECTIVE Determining the efficacy of two trust repair strategies (apology and denial) for trust violations of an ethical nature by an autonomous teammate. BACKGROUND While ethics in human-AI interaction is extensively studied, little research has investigated how decisions with ethical implications impact trust and performance within human-AI teams and their subsequent repair. METHOD Forty teams of two participants and one autonomous teammate completed three team missions within a synthetic task environment. The autonomous teammate made an ethical or unethical action during each mission, followed by an apology or denial. Measures of individual team trust, autonomous teammate trust, human teammate trust, perceived autonomous teammate ethicality, and team performance were taken. RESULTS Teams with unethical autonomous teammates had significantly lower trust in the team and trust in the autonomous teammate. Unethical autonomous teammates were also perceived as substantially more unethical. Neither trust repair strategy effectively restored trust after an ethical violation, and autonomous teammate ethicality was not related to the team score, but unethical autonomous teammates did have shorter times. CONCLUSION Ethical violations significantly harm trust in the overall team and autonomous teammate but do not negatively impact team score. However, current trust repair strategies like apologies and denials appear ineffective in restoring trust after this type of violation. APPLICATION This research highlights the need to develop trust repair strategies specific to human-AI teams and trust violations of an ethical nature.
Affiliation(s)
- Beau G Schelble
- Human-Centered Computing, Clemson University, Clemson, SC, USA
- Jeremy Lopez
- Department of Psychology, Clemson University, Clemson, SC, USA
- Claire Textor
- Department of Psychology, Clemson University, Clemson, SC, USA
- Rui Zhang
- Human-Centered Computing, Clemson University, Clemson, SC, USA
- Richard Pak
- Department of Psychology, Clemson University, Clemson, SC, USA
- Guo Freeman
- Human-Centered Computing, Clemson University, Clemson, SC, USA
8. Conlon N, Ahmed N, Szafir D. Event-triggered robot self-assessment to aid in autonomy adjustment. Front Robot AI. 2024;10:1294533. doi: 10.3389/frobt.2023.1294533. PMID: 38239275; PMCID: PMC10794385.
Abstract
Introduction: Human-robot teams are being called upon to accomplish increasingly complex tasks. During execution, the robot may operate at different levels of autonomy (LOAs), ranging from full robotic autonomy to full human control. For any number of reasons, such as changes in the robot's surroundings due to the complexities of operating in dynamic and uncertain environments, degradation and damage to the robot platform, or changes in tasking, adjusting the LOA during operations may be necessary to achieve desired mission outcomes. Thus, a critical challenge is understanding when and how the autonomy should be adjusted. Methods: We frame this problem with respect to the robot's capabilities and limitations, known as robot competency. With this framing, a robot could be granted a level of autonomy in line with its ability to operate with a high degree of competence. First, we propose a Model Quality Assessment metric, which indicates how (un)expected an autonomous robot's observations are compared to its model predictions. Next, we present an Event-Triggered Generalized Outcome Assessment (ET-GOA) algorithm that uses changes in the Model Quality Assessment above a threshold to selectively execute and report a high-level assessment of the robot's competency. We validated the Model Quality Assessment metric and the ET-GOA algorithm in both simulated and live robot navigation scenarios. Results: Our experiments found that the Model Quality Assessment was able to respond to unexpected observations. Additionally, our validation of the full ET-GOA algorithm explored how the computational cost and accuracy of the algorithm were impacted across several Model Quality triggering thresholds and with differing amounts of state perturbations. Discussion: Our experimental results combined with a human-in-the-loop demonstration show that the Event-Triggered Generalized Outcome Assessment algorithm can facilitate informed autonomy-adjustment decisions based on a robot's task competency.
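For readers unfamiliar with the event-triggered pattern this abstract describes, a minimal sketch may help: an expensive competency (outcome) assessment is recomputed only when a cheaper model-quality signal, i.e., how surprising the latest observation is relative to the model's prediction, crosses a threshold. The Python sketch below illustrates only that general pattern; it is not the authors' implementation, and every name and formula in it (EventTriggeredAssessor, the absolute-error surprise metric, the 1 - surprise placeholder) is a hypothetical illustration.

```python
# Illustrative sketch (not the authors' code): event-triggered competency assessment.
# A full outcome assessment runs only when a cheap model-quality signal exceeds a threshold.

from dataclasses import dataclass


@dataclass
class EventTriggeredAssessor:
    threshold: float              # surprise level needed to trigger a reassessment (assumed)
    last_assessment: float = 1.0  # most recent competency estimate in [0, 1]

    def model_quality(self, predicted: float, observed: float) -> float:
        """Hypothetical surprise metric: absolute prediction error."""
        return abs(predicted - observed)

    def step(self, predicted: float, observed: float) -> float:
        """Return a competency estimate, recomputing it only when triggered."""
        surprise = self.model_quality(predicted, observed)
        if surprise > self.threshold:
            # Placeholder for a full generalized outcome assessment
            # (e.g., simulating task outcomes under the updated model).
            self.last_assessment = max(0.0, 1.0 - surprise)
        return self.last_assessment


# Usage example: the assessment only changes on the surprising second observation.
assessor = EventTriggeredAssessor(threshold=0.2)
for pred, obs in [(1.0, 1.05), (1.0, 1.6), (1.0, 1.02)]:
    print(assessor.step(pred, obs))
```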
Affiliation(s)
- Nicholas Conlon
- Cooperative Human-Robot Intelligence Laboratory, University of Colorado at Boulder, Boulder, CO, United States
- Nisar Ahmed
- Cooperative Human-Robot Intelligence Laboratory, University of Colorado at Boulder, Boulder, CO, United States
- Daniel Szafir
- Interactive Robotics and Novel Technologies Laboratory, University of North Carolina at Chapel Hill, Chapel Hill, NC, United States
9. Ismatullaev UVU, Kim SH. Review of the Factors Affecting Acceptance of AI-Infused Systems. Human Factors. 2024;66:126-144. doi: 10.1177/00187208211064707. PMID: 35344676.
Abstract
OBJECTIVE The study aimed to provide a comprehensive overview of the factors impacting technology adoption, to predict the acceptance of artificial intelligence (AI)-based technologies. BACKGROUND Although the acceptance of AI devices is usually defined by behavioural factors in theories of user acceptance, the effects of technical and human factors are often overlooked. However, research shows that user behaviour can vary depending on a system's technical characteristics and differences in users. METHOD A systematic review was conducted. A total of 85 peer-reviewed journal articles that met the inclusion criteria and provided information on the factors influencing the adoption of AI devices were selected for the analysis. RESULTS Research on the adoption of AI devices shows that users' attitudes, trust and perceptions about the technology can be improved by increasing transparency, compatibility, and reliability, and simplifying tasks. Moreover, technological factors are also important for reducing issues related to human factors (e.g. distrust, scepticism, inexperience) and supporting users with lower intention to use and lower trust in AI-infused systems. CONCLUSION As prior research has confirmed the interrelationship among factors with and without behaviour theories, this review suggests extending the technology acceptance model that integrates the factors studied in this review to define the acceptance of AI devices across different application areas. However, further research is needed to collect more data and validate the study's findings. APPLICATION A comprehensive overview of factors influencing the acceptance of AI devices could help researchers and practitioners evaluate user behaviour when adopting new technologies.
Affiliation(s)
- Sang-Ho Kim
- Department of Industrial Engineering, Kumoh National Institute of Technology, South Korea
10. Müller LS, Hertel G. Trusting information systems in everyday work events - effects on cognitive resources, performance, and well-being. Ergonomics. 2023:1-18. doi: 10.1080/00140139.2023.2286910. PMID: 38018481.
Abstract
In today's data-intensive work environments, information systems are crucial for supporting workers. However, workers often do not rely on these systems but resort to workarounds. We argue that trust is essential for workers' reliance on information systems, positively affecting workers' cognitive resources, performance, and well-being. Moreover, we argue that the organisational context (accountability, distractions) and user-related factors qualify trust-outcome associations by affecting workers' trust calibration. In a preregistered study, we asked N = 291 employed users of information systems to re-experience prior everyday usage events (event reconstruction method) and assess event-specific trust in the system, work outcomes, and context conditions. Results confirmed the assumed association between trust in the information system and workers' ratings of both performance and well-being. Moreover, workers' technology competence and need for cognition - but not contextual conditions - qualified trust-outcome associations. Our results offer specific suggestions for achieving successful use of information systems at work.
Affiliation(s)
- Lea S Müller
- Institute of Psychology, University of Münster, Münster, Germany
- Guido Hertel
- Institute of Psychology, University of Münster, Münster, Germany
11. Qu J, Zhou R, Zhang Y, Ma Q. Understanding trust calibration in automated driving: the effect of time, personality, and system warning design. Ergonomics. 2023;66:2165-2181. doi: 10.1080/00140139.2023.2191907. PMID: 36920361.
Abstract
Under the human-automation codriving future, dynamic trust should be considered. This paper explored how trust changes over time and how multiple factors (time, trust propensity, neuroticism, and takeover warning design) calibrate trust together. We launched two driving simulator experiments to measure drivers' trust before, during, and after the experiment under takeover scenarios. The results showed that trust in automation increased during short-term interactions and dropped after four months, which is still higher than pre-experiment trust. Initial trust and trust propensity had a stable impact on trust. Drivers trusted the system more with the two-stage (MR + TOR) warning design than the one-stage (TOR). Neuroticism had a significant effect on the countdown compared with the content warning. Practitioner summary: The results provide new data and knowledge for trust calibration in the takeover scenario. The findings can help design a more reasonable automated driving system in long-term human-automation interactions.
Affiliation(s)
- Jianhong Qu
- School of Economics and Management, Beihang University, Beijing, P. R. China
- Ronggang Zhou
- School of Economics and Management, Beihang University, Beijing, P. R. China
- Yaping Zhang
- School of Economics and Management, Beihang University, Beijing, P. R. China
- Qianli Ma
- School of Economics and Management, Beihang University, Beijing, P. R. China
12. Johnson CJ, Demir M, McNeese NJ, Gorman JC, Wolff AT, Cooke NJ. The Impact of Training on Human-Autonomy Team Communications and Trust Calibration. Human Factors. 2023;65:1554-1570. doi: 10.1177/00187208211047323. PMID: 34595958.
Abstract
OBJECTIVE This work examines two human-autonomy team (HAT) training approaches that target communication and trust calibration to improve team effectiveness under degraded conditions. BACKGROUND Human-autonomy teaming presents challenges to teamwork, some of which may be addressed through training. Factors vital to HAT performance include communication and calibrated trust. METHOD Thirty teams of three, including one confederate acting as an autonomous agent, received either entrainment-based coordination training, trust calibration training, or control training before executing a series of missions operating a simulated remotely piloted aircraft. Automation and autonomy failures simulating degraded conditions were injected during missions, and measures of team communication, trust, and task efficiency were collected. RESULTS Teams receiving coordination training had higher communication anticipation ratios, took photos of targets faster, and overcame more autonomy failures. Although autonomy failures were introduced in all conditions, teams receiving the calibration training reported that their overall trust in the agent was more robust over time. However, they did not perform better than the control condition. CONCLUSIONS Training based on entrainment of communications, wherein introduction of timely information exchange through one team member has lasting effects throughout the team, was positively associated with improvements in HAT communications and performance under degraded conditions. Training that emphasized the shortcomings of the autonomous agent appeared to calibrate expectations and maintain trust. APPLICATIONS Team training that includes an autonomous agent that models effective information exchange may positively impact team communication and coordination. Training that emphasizes the limitations of an autonomous agent may help calibrate trust.
13. Schreibelmayr S, Moradbakhti L, Mara M. First impressions of a financial AI assistant: differences between high trust and low trust users. Front Artif Intell. 2023;6:1241290. doi: 10.3389/frai.2023.1241290. PMID: 37854078; PMCID: PMC10579608.
Abstract
Calibrating appropriate trust of non-expert users in artificial intelligence (AI) systems is a challenging yet crucial task. To align subjective levels of trust with the objective trustworthiness of a system, users need information about its strengths and weaknesses. The specific explanations that help individuals avoid over- or under-trust may vary depending on their initial perceptions of the system. In an online study, 127 participants watched a video of a financial AI assistant with varying degrees of decision agency. They generated 358 spontaneous text descriptions of the system and completed standard questionnaires from the Trust in Automation and Technology Acceptance literature (including perceived system competence, understandability, human-likeness, uncanniness, intention of developers, intention to use, and trust). Comparisons between a high trust and a low trust user group revealed significant differences in both open-ended and closed-ended answers. While high trust users characterized the AI assistant as more useful, competent, understandable, and humanlike, low trust users highlighted the system's uncanniness and potential dangers. Manipulating the AI assistant's agency had no influence on trust or intention to use. These findings are relevant for effective communication about AI and trust calibration of users who differ in their initial levels of trust.
Affiliation(s)
- Martina Mara
- Robopsychology Lab, Linz Institute of Technology, Johannes Kepler University Linz, Linz, Austria
14. Momen A, de Visser EJ, Fraune MR, Madison A, Rueben M, Cooley K, Tossell CC. Group trust dynamics during a risky driving experience in a Tesla Model X. Front Psychol. 2023;14:1129369. doi: 10.3389/fpsyg.2023.1129369. PMID: 37408965; PMCID: PMC10319128.
Abstract
The growing concern about the risk and safety of autonomous vehicles (AVs) has made it vital to understand driver trust and behavior when operating AVs. While research has uncovered human factors and design issues based on individual driver performance, there remains a lack of insight into how trust in automation evolves in groups of people who face risk and uncertainty while traveling in AVs. To this end, we conducted a naturalistic experiment with groups of participants who were encouraged to engage in conversation while riding a Tesla Model X on campus roads. Our methodology was uniquely suited to uncover these issues through naturalistic interaction by groups in the face of a risky driving context. Conversations were analyzed, revealing several themes pertaining to trust in automation: (1) collective risk perception, (2) experimenting with automation, (3) group sense-making, (4) human-automation interaction issues, and (5) benefits of automation. Our findings highlight the untested and experimental nature of AVs and confirm serious concerns about the safety and readiness of this technology for on-road use. The process of determining appropriate trust and reliance in AVs will therefore be essential for drivers and passengers to ensure the safe use of this experimental and continuously changing technology. Revealing insights into social group-vehicle interaction, our results speak to the potential dangers and ethical challenges with AVs as well as provide theoretical insights on group trust processes with advanced technology.
Affiliation(s)
- Ali Momen
- United States Air Force Academy, Colorado Springs, CO, United States
- Marlena R. Fraune
- Department of Psychology, New Mexico State University, Las Cruces, NM, United States
- Anna Madison
- United States Air Force Academy, Colorado Springs, CO, United States
- United States Army Research Laboratory, Aberdeen Proving Ground, Aberdeen, MD, United States
- Matthew Rueben
- Department of Psychology, New Mexico State University, Las Cruces, NM, United States
- Katrina Cooley
- United States Air Force Academy, Colorado Springs, CO, United States
- Chad C. Tossell
- United States Air Force Academy, Colorado Springs, CO, United States
15. Sharp WH, Jackson KM, Shaw TH. The frequency of positive and negative interactions influences relationship equity and trust in automation. Applied Ergonomics. 2023;108:103961. doi: 10.1016/j.apergo.2022.103961. PMID: 36640742.
Abstract
The purpose of this study was to 1) examine whether frequency of positive and negative interactions (manipulated via reliability) with a computer agent had an impact on an individual's trust resilience after a major error occurs and 2) empirically test the notion of relationship equity, which encompasses the total accumulation of positive and negative interactions and experiences between two actors, on user trust on a separate transfer task. Participants were randomized into one of four groups, differing in agent positivity and frequency of interaction, and completed both a pattern recognition task and transfer task with the aid of the same computer agent. Subjective trust ratings, performance data, compliance, and agreement were collected and analyzed. Results demonstrated that frequency of positive and negative interactions did have an impact on user trust and trust resilience after a major error. Additionally, it was shown that relationship equity has an impact on user trust and trust resilience. This is the first empirical demonstration of relationship equity's impact on user trust in an automated teammate.
Affiliation(s)
- William H Sharp
- Department of Psychology, George Mason University, Fairfax, VA, USA.
- Tyler H Shaw
- Department of Psychology, George Mason University, Fairfax, VA, USA
16. Begerowski SR, Hedrick KN, Waldherr F, Mears L, Shuffler ML. The forgotten teammate: Considering the labor perspective in human-autonomy teams. Computers in Human Behavior. 2023. doi: 10.1016/j.chb.2023.107763.
17. Algorithmic Fairness in AI. Business & Information Systems Engineering. 2023. doi: 10.1007/s12599-023-00787-x.
18.
Abstract
OBJECTIVE This paper reviews recent articles related to human trust in automation to guide research and design for increasingly capable automation in complex work environments. BACKGROUND Two recent trends (the development of increasingly capable automation and the flattening of organizational hierarchies) suggest a reframing of trust in automation is needed. METHOD Many publications related to human trust and human-automation interaction were integrated in this narrative literature review. RESULTS Much research has focused on calibrating human trust to promote appropriate reliance on automation. This approach neglects relational aspects of increasingly capable automation and system-level outcomes, such as cooperation and resilience. To address these limitations, we adopt a relational framing of trust based on the decision situation, semiotics, interaction sequence, and strategy. This relational framework stresses that the goal is not to maximize trust, or to even calibrate trust, but to support a process of trusting through automation responsivity. CONCLUSION This framing clarifies why future work on trust in automation should consider not just individual characteristics and how automation influences people, but also how people can influence automation and how interdependent interactions affect trusting automation. In these new technological and organizational contexts that shift human operators to co-operators of automation, automation responsivity and the ability to resolve conflicting goals may be more relevant than reliability and reliance for advancing system design. APPLICATION A conceptual model comprising four concepts (situation, semiotics, strategy, and sequence) can guide future trust research and design for automation responsivity and more resilient human-automation systems.
Affiliation(s)
- John D Lee
- University of Wisconsin-Madison, USA
19. Walliser AC, de Visser EJ, Shaw TH. Exploring system wide trust prevalence and mitigation strategies with multiple autonomous agents. Computers in Human Behavior. 2023. doi: 10.1016/j.chb.2023.107671.
20. “I Believe AI Can Learn from the Error. Or Can It Not?”: The Effects of Implicit Theories on Trust Repair of the Intelligent Agent. Int J Soc Robot. 2022. doi: 10.1007/s12369-022-00951-5.
21. Clavel C, Labeau M, Cassell J. Socio-conversational systems: Three challenges at the crossroads of fields. Front Robot AI. 2022;9:937825. doi: 10.3389/frobt.2022.937825. PMID: 36591412; PMCID: PMC9797522.
Abstract
Socio-conversational systems are dialogue systems, including what are sometimes referred to as chatbots, vocal assistants, social robots, and embodied conversational agents, that are capable of interacting with humans in a way that treats both the specifically social nature of the interaction and the content of a task. The aim of this paper is twofold: 1) to uncover some places where the compartmentalized nature of research conducted around socio-conversational systems creates problems for the field as a whole, and 2) to propose a way to overcome this compartmentalization and thus strengthen the capabilities of socio-conversational systems by defining common challenges. Specifically, we examine research carried out by the signal processing, natural language processing and dialogue, machine/deep learning, social/affective computing and social sciences communities. We focus on three major challenges for the development of effective socio-conversational systems, and describe ways to tackle them.
Affiliation(s)
- Chloé Clavel
- LTCI, Telecom-Paris, Institut Polytechnique de Paris, Paris, France
- Matthieu Labeau
- LTCI, Telecom-Paris, Institut Polytechnique de Paris, Paris, France
- Justine Cassell
- School of Computer Science, Carnegie Mellon University, Pittsburgh, PA, United States
- Inria, Paris, France
22. Vero: An accessible method for studying human-AI teamwork. Computers in Human Behavior. 2022. doi: 10.1016/j.chb.2022.107606.
23. Lyons J, Highland P, Bos N, Lyons D, Skinner A, Schnell T, Hefron R. Measuring Perceived Agent Appropriateness in a Live-Flight Human-Autonomy Teaming Scenario. Ergonomics in Design. 2022. doi: 10.1177/10648046221129393.
Abstract
United States Air Force Test Pilot School students (N = 6) participated in a study involving an agent-directed human pilot (“Blue agent”) in dogfighting scenarios against an adversary (“Red agent”). The adversary operated at three levels of difficulty: low, medium, and high. An agent appropriateness scale was developed to gauge how appropriate the Blue agent’s behaviors were during each dogfight. Results demonstrated that agent appropriateness varied by Red agent difficulty. These results suggest that agent appropriateness is an essential element in human-autonomy teaming research. Practitioners should seek to develop agent appropriateness measures suitable for the particular context and technology in question.
24. Gualtieri L, Fraboni F, De Marchi M, Rauch E. Development and evaluation of design guidelines for cognitive ergonomics in human-robot collaborative assembly systems. Applied Ergonomics. 2022;104:103807. doi: 10.1016/j.apergo.2022.103807. PMID: 35763990.
Abstract
Industry 4.0 is the concept used to summarize the ongoing fourth industrial revolution, which is profoundly changing the manufacturing systems and business models all over the world. Collaborative robotics is one of the most promising technologies of Industry 4.0. Human-robot interaction and human-robot collaboration will be crucial for enhancing the operator's work conditions and production performance. In this regard, this enabling technology opens new possibilities but also new challenges. There is no doubt that safety is of primary importance when humans and robots interact in industrial settings. Nevertheless, human factors and cognitive ergonomics (i.e. cognitive workload, usability, trust, acceptance, stress, frustration, perceived enjoyment) are crucial, even if they are often underestimated or ignored. Therefore, this work refers to cognitive ergonomics in the design of human-robot collaborative assembly systems. A set of design guidelines has been developed according to the analysis of the scientific literature. Their effectiveness has been evaluated through multiple experiments based on a laboratory case study where different participants interacted with a low-payload collaborative robotic system for the joint assembly of a manufacturing product. The main assumption to be tested is that it is possible to improve the operator's experience and efficiency by manipulating the system features and interaction patterns according to the proposed design guidelines. Results confirmed that participants improved their cognitive response to human-robot interaction as well as the assembly performance with the enhancement of workstation features and interaction conditions by implementing an increasing number of guidelines.
Affiliation(s)
- Luca Gualtieri
- Industrial Engineering and Automation (IEA), Faculty of Science and Technology, Free University of Bozen-Bolzano, Piazza Università 5, 39100, Bolzano, Italy.
- Federico Fraboni
- Department of Psychology, Università di Bologna, Via Zamboni 33, 40126, Bologna, Italy
- Matteo De Marchi
- Industrial Engineering and Automation (IEA), Faculty of Science and Technology, Free University of Bozen-Bolzano, Piazza Università 5, 39100, Bolzano, Italy
- Erwin Rauch
- Industrial Engineering and Automation (IEA), Faculty of Science and Technology, Free University of Bozen-Bolzano, Piazza Università 5, 39100, Bolzano, Italy.
25. Verhagen RS, Neerincx MA, Tielman ML. The influence of interdependence and a transparent or explainable communication style on human-robot teamwork. Front Robot AI. 2022;9:993997. doi: 10.3389/frobt.2022.993997. PMID: 36158603; PMCID: PMC9493028.
Abstract
Humans and robots are increasingly working together in human-robot teams. Teamwork requires communication, especially when interdependence between team members is high. In previous work, we identified a conceptual difference between sharing what you are doing (i.e., being transparent) and why you are doing it (i.e., being explainable). Although the second might sound better, it is important to avoid information overload. Therefore, an online experiment (n = 72) was conducted to study the effect of communication style of a robot (silent, transparent, explainable, or adaptive based on time pressure and relevancy) on human-robot teamwork. We examined the effects of these communication styles on trust in the robot, workload during the task, situation awareness, reliance on the robot, human contribution during the task, human communication frequency, and team performance. Moreover, we included two levels of interdependence between human and robot (high vs. low), since mutual dependency might influence which communication style is best. Participants collaborated with a virtual robot during two simulated search and rescue tasks varying in their level of interdependence. Results confirm that in general robot communication results in more trust in and understanding of the robot, while showing no evidence of a higher workload when the robot communicates or adds explanations to being transparent. Providing explanations, however, did result in more reliance on RescueBot. Furthermore, compared to being silent, only being explainable results in higher situation awareness when interdependence is high. Results further show that being highly interdependent decreases trust, reliance, and team performance while increasing workload and situation awareness. High interdependence also increases human communication if the robot is not silent, human rescue contribution if the robot does not provide explanations, and the strength of the positive association between situation awareness and team performance. From these results, we can conclude that robot communication is crucial for human-robot teamwork, and that important differences exist between being transparent, explainable, or adaptive. Our findings also highlight the fundamental importance of interdependence in studies on explainability in robots.
Affiliation(s)
- Ruben S. Verhagen
- Interactive Intelligence, Intelligent Systems Department, Delft University of Technology, Delft, Netherlands
- Mark A. Neerincx
- Interactive Intelligence, Intelligent Systems Department, Delft University of Technology, Delft, Netherlands
- Human-Machine Teaming, Netherlands Organization for Applied Scientific Research (TNO), Amsterdam, Netherlands
- Myrthe L. Tielman
- Interactive Intelligence, Intelligent Systems Department, Delft University of Technology, Delft, Netherlands
26. Lyons JB, Hamdan IA, Vo TQ. Explanations and trust: What happens to trust when a robot partner does something unexpected? Computers in Human Behavior. 2022. doi: 10.1016/j.chb.2022.107473.
27. Murphy RR. Would you trust an intelligent robot? Sci Robot. 2022. doi: 10.1126/scirobotics.ade0862. PMID: 36044557.
Abstract
Providence delivers a science fiction primer on why assured autonomy and explainable AI are critical to the success of real-world robots.
28. Kraus J, Babel F, Hock P, Hauber K, Baumann M. The trustworthy and acceptable HRI checklist (TA-HRI): questions and design recommendations to support a trustworthy and acceptable design of human-robot interaction. Gruppe. Interaktion. Organisation. Zeitschrift für Angewandte Organisationspsychologie (GIO). 2022. doi: 10.1007/s11612-022-00643-8.
Abstract
This contribution to the journal Gruppe. Interaktion. Organisation. (GIO) presents a checklist of questions and design recommendations for designing acceptable and trustworthy human-robot interaction (HRI). In order to extend the application scope of robots towards more complex contexts in the public domain and in private households, robots have to fulfill requirements regarding social interaction between humans and robots in addition to safety and efficiency. In particular, this results in recommendations for the design of the appearance, behavior, and interaction strategies of robots that can contribute to acceptance and appropriate trust. The presented checklist was derived from existing guidelines of associated fields of application, the current state of research on HRI, and the results of the BMBF-funded project RobotKoop. The trustworthy and acceptable HRI checklist (TA-HRI) contains 60 design topics with questions and design recommendations for the development and design of acceptable and trustworthy robots. The TA-HRI Checklist provides a basis for discussion of the design of service robots for use in public and private environments and will be continuously refined based on feedback from the community.
29. Henry KE, Kornfield R, Sridharan A, Linton RC, Groh C, Wang T, Wu A, Mutlu B, Saria S. Human-machine teaming is key to AI adoption: clinicians' experiences with a deployed machine learning system. NPJ Digit Med. 2022;5:97. doi: 10.1038/s41746-022-00597-7. PMID: 35864312; PMCID: PMC9304371.
Abstract
While a growing number of machine learning (ML) systems have been deployed in clinical settings with the promise of improving patient care, many have struggled to gain adoption and realize this promise. Based on a qualitative analysis of coded interviews with clinicians who use an ML-based system for sepsis, we found that, rather than viewing the system as a surrogate for their clinical judgment, clinicians perceived themselves as partnering with the technology. Our findings suggest that, even without a deep understanding of machine learning, clinicians can build trust with an ML system through experience, expert endorsement and validation, and systems designed to accommodate clinicians’ autonomy and support them across their entire workflow.
Affiliation(s)
- Katharine E Henry
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
- Rachel Kornfield
- Department of Preventive Medicine, Feinberg School of Medicine, Northwestern University, Chicago, IL, USA
- Center for Behavioral Intervention Technologies, Northwestern University, Chicago, IL, USA
- Catherine Groh
- Department of Industrial Engineering, University of Wisconsin-Madison, Madison, WI, USA
- Tony Wang
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
- Albert Wu
- Department of Health Policy and Management, Johns Hopkins Bloomberg School of Public Health, Baltimore, MD, USA
- Bilge Mutlu
- Department of Industrial Engineering, University of Wisconsin-Madison, Madison, WI, USA
- Department of Computer Sciences, University of Wisconsin-Madison, Madison, WI, USA
- Suchi Saria
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
- Department of Health Policy and Management, Johns Hopkins Bloomberg School of Public Health, Baltimore, MD, USA
- Department of Applied Mathematics and Statistics, Johns Hopkins University, Baltimore, MD, USA
- Bayesian Health, New York, NY, 10005, USA
30. Wolf FD, Stock-Homburg RM. How and When Can Robots Be Team Members? Three Decades of Research on Human–Robot Teams. Group & Organization Management. 2022. doi: 10.1177/10596011221076636.
Abstract
Artificial intelligence and robotic technologies have grown in sophistication and reach. Accordingly, research into mixed human–robot teams that comprise both robots and humans has expanded as well, attracting the attention of researchers from different disciplines, such as organizational behavior, human–robot interaction, cognitive science, and robotics. With this systematic literature review, the authors seek to establish deeper insights into existing research and sharpen the definitions of relevant terms. With a close consideration of 150 studies published between 1990 and 2020 that investigate mixed human–robot teams, conceptually or empirically, this article provides both a systematic evaluation of extant research and propositions for further research.
Affiliation(s)
- Franziska Doris Wolf
- Chair for Marketing and Human Resource Management, Technical University of Darmstadt, Darmstadt, Germany
- Ruth Maria Stock-Homburg
- Chair for Marketing and Human Resource Management, Technical University of Darmstadt, Darmstadt, Germany
31. Hsieh SJ, Wang AR, Madison A, Tossell C, Visser ED. Adaptive Driving Assistant Model (ADAM) for Advising Drivers of Autonomous Vehicles. ACM Trans Interact Intell Syst. 2022. doi: 10.1145/3545994.
Abstract
Fully autonomous driving is on the horizon; vehicles with advanced driver assistance systems (ADAS) such as Tesla's Autopilot are already available to consumers. However, all currently available ADAS applications require a human driver to be alert and ready to take control if needed. Partially automated driving introduces new complexities to human interactions with cars and can even increase collision risk. A better understanding of drivers’ trust in automation may help reduce these complexities. Much of the existing research on trust in ADAS has relied on use of surveys and physiological measures to assess trust and has been conducted using driving simulators. There have been relatively few studies that use telemetry data from real automated vehicles to assess trust in ADAS. In addition, although some ADAS technologies provide alerts when, for example, drivers’ hands are not on the steering wheel, these systems are not personalized to individual drivers. Needed are adaptive technologies that can help drivers of autonomous vehicles avoid crashes based on multiple real-time data streams. In this paper, we propose an architecture for adaptive autonomous driving assistance. Two layers of multiple sensory fusion models are developed to provide appropriate voice reminders to increase driving safety based on predicted driving status. Results suggest that human trust in automation can be quantified and predicted with 80% accuracy based on vehicle data, and that adaptive speech-based advice can be provided to drivers with 90 to 95% accuracy. With more data, these models can be used to evaluate trust in driving assistance tools, which can ultimately lead to safer and appropriate use of these features.
Collapse
|
32
|
Artificial agents’ explainability to support trust: considerations on timing and context. AI & SOCIETY 2022. [DOI: 10.1007/s00146-022-01462-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/17/2022]
Abstract
Strategies for improving the explainability of artificial agents are a key approach to support the understandability of artificial agents’ decision-making processes and their trustworthiness. However, since explanations do not lend themselves to standardization, finding solutions that fit the algorithmic decision-making processes of artificial agents poses a compelling challenge. This paper addresses the concept of trust in relation to complementary aspects that play a role in interpersonal and human–agent relationships, such as users’ confidence and their perception of artificial agents’ reliability. Particularly, this paper focuses on non-expert users’ perspectives, since users with little technical knowledge are likely to benefit the most from “post-hoc”, everyday explanations. Drawing upon the explainable AI and social sciences literature, this paper investigates how artificial agents’ explainability and trust are interrelated at different stages of an interaction. Specifically, it investigates the possibility of implementing explainability as a strategy for trust building, trust maintenance, and trust restoration. To this end, the paper identifies and discusses the intrinsic limits and fundamental features of explanations, such as structural qualities and communication strategies. Accordingly, this paper contributes to the debate by providing recommendations on how to maximize the effectiveness of explanations for supporting non-expert users’ understanding and trust.
Collapse
|
33
|
Krausman A, Neubauer C, Forster D, Lakhmani S, Baker AL, Fitzhugh SM, Gremillion G, Wright JL, Metcalfe JS, Schaefer KE. Trust Measurement in Human-Autonomy Teams: Development of a Conceptual Toolkit. ACM TRANSACTIONS ON HUMAN-ROBOT INTERACTION 2022. [DOI: 10.1145/3530874] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/18/2022]
Abstract
The rise in artificial intelligence capabilities in autonomy-enabled systems and robotics has pushed research to address the unique nature of human-autonomy team collaboration. The goal of these advanced technologies is to enable rapid decision making, enhance situation awareness, promote shared understanding, and improve team dynamics. Simultaneously, use of these technologies is expected to reduce risk to those who collaborate with these systems. Yet, for appropriate human-autonomy teaming to take place, especially as we move beyond dyadic partnerships, proper calibration of team trust is needed to effectively coordinate interactions during high-risk operations. To meet this end, however, critical measures of team trust suited to this new dynamic of human-autonomy teams are required. This paper seeks to expand on trust measurement principles and the foundation of human-autonomy teaming to propose a “toolkit” of novel methods that support the development, maintenance, and calibration of trust in human-autonomy teams operating within uncertain, risky, and dynamic environments.
Collapse
Affiliation(s)
- Andrea Krausman
- US Army Combat Capabilities Development Command, Army Research Laboratory
- Catherine Neubauer
- US Army Combat Capabilities Development Command, Army Research Laboratory
- Daniel Forster
- US Army Combat Capabilities Development Command, Army Research Laboratory
- Shan Lakhmani
- US Army Combat Capabilities Development Command, Army Research Laboratory
- Anthony L Baker
- Oak Ridge Associated Universities, US Army Combat Capabilities Development Command, Army Research Laboratory
- Sean M. Fitzhugh
- US Army Combat Capabilities Development Command, Army Research Laboratory
- Gregory Gremillion
- US Army Combat Capabilities Development Command, Army Research Laboratory
- Julia L. Wright
- US Army Combat Capabilities Development Command, Army Research Laboratory
- Jason S. Metcalfe
- US Army Combat Capabilities Development Command, Army Research Laboratory
Collapse
|
34
|
Caldwell S, Sweetser P, O’Donnell N, Knight MJ, Aitchison M, Gedeon T, Johnson D, Brereton M, Gallagher M, Conroy D. An Agile New Research Framework for Hybrid Human-AI Teaming: Trust, Transparency, and Transferability. ACM T INTERACT INTEL 2022. [DOI: 10.1145/3514257] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/01/2022]
Abstract
We propose a new research framework by which the nascent discipline of human-AI teaming can be explored within experimental environments in preparation for transfer to real-world contexts. We examine the existing literature and unanswered research questions through the lens of an Agile approach to construct our proposed framework. Our framework aims to provide a structure for understanding the macro features of this research landscape, supporting holistic research into the acceptability of human-AI teaming to human team members and the affordances of AI team members. The framework has the potential to enhance decision-making and performance of hybrid human-AI teams. Further, our framework proposes the application of Agile methodology for research management and knowledge discovery. We propose a transferability pathway by which hybrid teaming is initially tested in a safe environment, such as a real-time strategy video game, so that lessons learned can be transferred to real-world situations.
Collapse
Affiliation(s)
- Tom Gedeon
- Australian National University, Australia
Collapse
|
35
|
Kox ES, Siegling LB, Kerstholt JH. Trust Development in Military and Civilian Human–Agent Teams: The Effect of Social-Cognitive Recovery Strategies. Int J Soc Robot 2022; 14:1323-1338. [PMID: 35432627 PMCID: PMC8994847 DOI: 10.1007/s12369-022-00871-4] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 02/21/2022] [Indexed: 11/06/2022]
Abstract
Autonomous agents (AA) will increasingly be deployed as teammates instead of tools. In many operational situations, flawless performance from AA cannot be guaranteed. This may lead to a breach in the human’s trust, which can compromise collaboration. This highlights the importance of thinking about how to deal with error and trust violations when designing AA. The aim of this study was to explore the influence of uncertainty communication and apology on the development of trust in a Human–Agent Team (HAT) when there is a trust violation. Two experimental studies following the same method were performed with (I) a civilian group and (II) a military group of participants. The online task environment resembled a house search in which the participant was accompanied and advised by an AA as their artificial team member. Halfway through the task, an incorrect piece of advice evoked a trust violation. Uncertainty communication was manipulated within-subjects, apology between-subjects. Our results showed that (a) communicating uncertainty led to higher levels of trust in both studies, (b) incorrect advice from the agent led to a less severe decline in trust when that advice included a measure of uncertainty, and (c) after a trust violation, trust recovered significantly more when the agent offered an apology. The latter two effects were only found in the civilian study. We conclude that tailored agent communication is key to minimizing trust reduction in the face of agent failure, and thus to maintaining effective long-term relationships in HATs. The difference in findings between participant groups emphasizes the importance of considering the (organizational) culture when designing artificial team members.
Collapse
|
36
|
Borsci S, Lehtola VV, Nex F, Yang MY, Augustijn EW, Bagheriye L, Brune C, Kounadi O, Li J, Moreira J, Van Der Nagel J, Veldkamp B, Le DV, Wang M, Wijnhoven F, Wolterink JM, Zurita-Milla R. Embedding artificial intelligence in society: looking beyond the EU AI master plan using the culture cycle. AI & SOCIETY 2022. [DOI: 10.1007/s00146-021-01383-x] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/23/2022]
Abstract
The European Union (EU) Commission’s whitepaper on Artificial Intelligence (AI) proposes shaping the emerging AI market so that it better reflects common European values. It is a master plan that builds upon the EU AI High-Level Expert Group guidelines. This article reviews the master plan, from a culture cycle perspective, to reflect on its potential clashes with current societal, technical, and methodological constraints. We identify two main obstacles in the implementation of this plan: (i) the lack of a coherent EU vision to drive future decision-making processes at state and local levels and (ii) the lack of methods to support a sustainable diffusion of AI in our society. The lack of a coherent vision stems from not considering societal differences across the EU member states. We suggest that these differences may lead to a fractured market and an AI crisis in which different members of the EU will adopt nation-centric strategies to exploit AI, thus preventing the development of a frictionless market as envisaged by the EU. Moreover, the Commission aims to change the AI development culture by proposing a human-centred and safety-first perspective that is not supported by methodological advancements, thus risking unforeseen social and societal impacts of AI. We discuss potential societal, technical, and methodological gaps that should be filled to avoid the risks of developing AI systems at the expense of society. Our analysis results in the recommendation that EU regulators and policymakers consider how to complement the EC programme with rules and compensatory mechanisms to avoid market fragmentation due to local and global ambitions. Moreover, regulators should go beyond the human-centred approach, establishing a research agenda that seeks answers to the technical and methodological open questions regarding the development and assessment of human-AI co-action, aiming for a sustainable diffusion of AI in society.
Collapse
|
37
|
Kohn SC, de Visser EJ, Wiese E, Lee YC, Shaw TH. Measurement of Trust in Automation: A Narrative Review and Reference Guide. Front Psychol 2021; 12:604977. [PMID: 34737716 PMCID: PMC8562383 DOI: 10.3389/fpsyg.2021.604977] [Citation(s) in RCA: 18] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/10/2020] [Accepted: 08/25/2021] [Indexed: 02/05/2023] Open
Abstract
With the rise of automated and autonomous agents, research examining Trust in Automation (TiA) has attracted considerable attention over the last few decades. Trust is a rich and complex construct which has sparked a multitude of measures and approaches to study and understand it. This comprehensive narrative review addresses known methods that have been used to capture TiA. We examined measurements deployed in existing empirical works, categorized those measures into self-report, behavioral, and physiological indices, and examined them within the context of an existing model of trust. The resulting work serves as a reference guide for researchers, offering a list of available TiA measurement methods along with the model-derived constructs they capture, including judgments of trustworthiness, trust attitudes, and trusting behaviors. The article concludes with recommendations on how to improve the current state of TiA measurement.
Collapse
Affiliation(s)
- Ewart J de Visser
- Warfighter Effectiveness Research Center, United States Air Force Academy, Colorado Springs, CO, United States
- Eva Wiese
- George Mason University, Fairfax, VA, United States
- Yi-Ching Lee
- George Mason University, Fairfax, VA, United States
- Tyler H Shaw
- George Mason University, Fairfax, VA, United States
Collapse
|
38
|
Human-Robot Interaction in Groups: Methodological and Research Practices. MULTIMODAL TECHNOLOGIES AND INTERACTION 2021. [DOI: 10.3390/mti5100059] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022] Open
Abstract
Understanding the behavioral dynamics that underlie human-robot interactions in groups remains one of the core challenges in social robotics research. However, despite a growing interest in this topic, there is still a lack of established and validated measures that allow researchers to analyze human-robot interactions in group scenarios, and very few have been developed and tested specifically for research conducted in-the-wild. This is a problem because it hinders the development of general models of human-robot interaction and makes it significantly more difficult to comprehend the inner workings of the relational dynamics between humans and robots in group contexts. In this paper, we aim to provide a reflection on the current state of research on human-robot interaction in small groups, as well as to outline directions for future research, with an emphasis on methodological and transversal issues.
Collapse
|
39
|
Yang XJ, Schemanske C, Searle C. Toward Quantifying Trust Dynamics: How People Adjust Their Trust After Moment-to-Moment Interaction With Automation. HUMAN FACTORS 2021:187208211034716. [PMID: 34459266 PMCID: PMC10374998 DOI: 10.1177/00187208211034716] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
OBJECTIVE We examine how human operators adjust their trust in automation as a result of their moment-to-moment interaction with automation. BACKGROUND Most existing studies measured trust by administering questionnaires at the end of an experiment. Only a limited number of studies viewed trust as a dynamic variable that can strengthen or decay over time. METHOD Seventy-five participants took part in an aided memory recognition task. In the task, participants viewed a series of images and later performed 40 trials of the recognition task to identify a target image when it was presented with a distractor. In each trial, participants performed the initial recognition by themselves, received a recommendation from an automated decision aid, and performed the final recognition. After each trial, participants reported their trust on a visual analog scale. RESULTS Outcome bias and the contrast effect significantly influence human operators' trust adjustments. An automation failure leads to a larger trust decrement if the final outcome is undesirable, and a marginally larger trust decrement if the human operator succeeds at the task on their own. An automation success engenders a greater trust increment if the human operator fails at the task. Additionally, automation failures have a larger effect on trust adjustment than automation successes. CONCLUSION Human operators adjust their trust in automation as a result of their moment-to-moment interaction with automation. Their trust adjustments are significantly influenced by decision-making heuristics/biases. APPLICATION Understanding the trust adjustment process enables accurate prediction of the operators' moment-to-moment trust in automation and informs the design of trust-aware adaptive automation.
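A toy update rule can illustrate the asymmetries reported above; the delta values below are illustrative choices, not parameters estimated in the study, and the three boolean inputs are a simplification of the trial structure described in the abstract.

```python
# A minimal sketch of an asymmetric trust-update rule consistent with the
# reported effects: failures move trust more than successes, a bad final
# outcome (outcome bias) and an operator who succeeded alone (contrast effect)
# enlarge failure decrements, and successes help most when they rescue an
# operator whose own answer was wrong. Deltas are illustrative, not fitted.
def update_trust(trust: float, automation_correct: bool,
                 operator_initial_correct: bool,
                 final_outcome_good: bool) -> float:
    """Return trust in [0, 1] after one trial of the aided recognition task."""
    if automation_correct:
        delta = 0.05 if not operator_initial_correct else 0.02
    else:
        delta = -0.06
        if not final_outcome_good:
            delta -= 0.04  # outcome bias: undesirable outcome hurts more
        if operator_initial_correct:
            delta -= 0.01  # contrast effect: "I did better than the aid"
    return max(0.0, min(1.0, trust + delta))

# Example: aid wrong, operator also wrong, final outcome bad.
trust = update_trust(0.70, automation_correct=False,
                     operator_initial_correct=False, final_outcome_good=False)
print(round(trust, 2))  # 0.60
```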
Collapse
|
40
|
Communication Models in Human–Robot Interaction: An Asymmetric MODel of ALterity in Human–Robot Interaction (AMODAL-HRI). Int J Soc Robot 2021. [DOI: 10.1007/s12369-021-00785-7] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
We argue for an interdisciplinary approach that connects existing models and theories in Human–Robot Interaction (HRI) to traditions in communication theory. In this article, we review existing models of interpersonal communication and interaction models that have been applied and developed in the contexts of HRI and social robotics. We observe that symmetric models are often proposed, in which the human and robot agents are depicted as having similar ways of functioning (similar capabilities, components, processes). However, we argue that models of human–robot interaction or communication should be asymmetric instead. We propose an asymmetric interaction model called AMODAL-HRI (an Asymmetric MODel of ALterity in Human–Robot Interaction). This model is based on theory on joint action, common robot architectures and cognitive architectures, and Kincaid’s model of communication. On the basis of this model, we discuss key differences between humans and robots that influence human expectations regarding interacting with robots, and identify design implications.
Collapse
|
41
|
|
42
|
Feil-Seifer D, Haring KS, Rossi S, Wagner AR, Williams T. Where to Next? The Impact of COVID-19 on Human-Robot Interaction Research. ACM TRANSACTIONS ON HUMAN-ROBOT INTERACTION 2021. [DOI: 10.1145/3405450] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/24/2022]
Abstract
The COVID-19 pandemic will have a profound and long-lasting impact on the entire scientific endeavor. Scientists are already adjusting research programs to changes in what is prioritized, and in what is possible; educators are changing the way the next generation of researchers is trained; and flagship conferences in many fields are being cancelled, postponed, or fundamentally transformed.
These broad-reaching changes are particularly impactful to human-oriented domains such as human-robot interaction (HRI). Because in-person human-subject experiments can take a year or more to conduct, the research we will see published in the field in the immediate future may appear to be “business as usual,” with accounts of laboratory studies with large numbers of in-person participants. The research currently being performed, however, is of course a different story entirely. Studies that were under way when the current crisis began will be truncated, resulting either in work that cannot be published or in work whose true impact is difficult to accurately assess. Yet HRI research performed in the coming years will be changed in fundamentally different ways; the inability to perform—or expect future performance of—in-person human subjects research, especially research involving tactile or multiparty interaction, will change both the dominant methodological techniques employed by HRI researchers and the very research questions that the field chooses to—and is able to—address.
These challenges demand that HRI researchers identify precisely how the field can maintain research quality and impact while the ability to conduct human-subject studies is severely impaired for an undetermined amount of time. A natural inclination may be simply to wait the crisis out in the hope of a speedy return to normalcy; however, in this article, we argue that the community can also take this opportunity to reevaluate and refocus how research in this field is conducted and how students are mentored in ways that will yield benefits for years to come after the current crisis has ended.
Collapse
|
43
|
|
44
|
Guo Y, Yang XJ. Modeling and Predicting Trust Dynamics in Human–Robot Teaming: A Bayesian Inference Approach. Int J Soc Robot 2020. [DOI: 10.1007/s12369-020-00703-3] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/17/2022]
Abstract
Trust in automation, or more recently trust in autonomy, has received extensive research attention in the past three decades. The majority of prior literature adopted a “snapshot” view of trust and typically evaluated trust through questionnaires administered at the end of an experiment. This “snapshot” view, however, does not acknowledge that trust is a dynamic variable that can strengthen or decay over time. To fill the research gap, the present study aims to model trust dynamics when a human interacts with a robotic agent over time. The underlying premise of the study is that by interacting with a robotic agent and observing its performance over time, a rational human agent will update his/her trust in the robotic agent accordingly. Based on this premise, we develop a personalized trust prediction model and learn its parameters using Bayesian inference. Our proposed model adheres to three properties of trust dynamics that characterize how human agents actually develop trust, and thus guarantees high model explicability and generalizability. We tested the proposed method using an existing dataset involving 39 human participants interacting with four drones in a simulated surveillance mission. The proposed method obtained a root mean square error of 0.072, significantly outperforming existing prediction methods. Moreover, we identified three distinct types of trust dynamics: the Bayesian decision maker, the oscillator, and the disbeliever. This prediction model can be used for the design of individualized and adaptive technologies.
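To picture the premise of trust being revised after each observed robot outcome, here is a minimal Beta-Bernoulli sketch; it illustrates the general Bayesian-updating idea only, not the authors' personalized model or its fitted parameters.

```python
# A minimal Beta-Bernoulli sketch of Bayesian trust updating: trust is read
# out as the posterior mean of the robot's success probability and revised
# after every observed success or failure. Prior pseudo-counts are arbitrary.
class BetaTrust:
    def __init__(self, alpha: float = 2.0, beta: float = 2.0):
        self.alpha = alpha  # prior pseudo-count of robot successes
        self.beta = beta    # prior pseudo-count of robot failures

    def observe(self, success: bool) -> None:
        """Update the posterior after one observed robot outcome."""
        if success:
            self.alpha += 1.0
        else:
            self.beta += 1.0

    @property
    def trust(self) -> float:
        """Posterior-mean estimate of the robot's success probability."""
        return self.alpha / (self.alpha + self.beta)


model = BetaTrust()
for outcome in [True, True, False, True]:  # outcomes from one simulated mission
    model.observe(outcome)
print(round(model.trust, 3))  # 0.625 after four interactions
```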
Collapse
|
45
|
Azevedo-Sa H, Jayaraman SK, Esterwood CT, Yang XJ, Robert LP, Tilbury DM. Real-Time Estimation of Drivers’ Trust in Automated Driving Systems. Int J Soc Robot 2020. [DOI: 10.1007/s12369-020-00694-1] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022]
Abstract
Trust miscalibration issues, represented by undertrust and overtrust, hinder the interaction between drivers and self-driving vehicles. A modern challenge for automotive engineers is to avoid these trust miscalibration issues through the development of techniques for measuring drivers’ trust in the automated driving system during real-time operation. One possible approach for measuring trust is through modeling its dynamics and subsequently applying classical state estimation methods. This paper proposes a framework for modeling the dynamics of drivers’ trust in automated driving systems and also for estimating these varying trust levels. The estimation method integrates sensed behaviors (from the driver) through a Kalman filter-based approach. The sensed behaviors include eye-tracking signals, the usage time of the system, and drivers’ performance on a non-driving-related task. We conducted a study (n = 80) with a simulated SAE level 3 automated driving system, and analyzed the factors that impacted drivers’ trust in the system. Data from the user study were also used for the identification of the trust model parameters. Results show that the proposed approach was successful in computing trust estimates over successive interactions between the driver and the automated driving system. These results encourage the use of strategies for modeling and estimating trust in automated driving systems. Such a trust measurement technique paves the way for the design of trust-aware automated driving systems capable of changing their behaviors to control drivers’ trust levels and mitigate both undertrust and overtrust.
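To show the Kalman-filter idea in its simplest form, the sketch below treats trust as a slowly drifting scalar state observed through a single noisy behavioral proxy (for example, a composite of gaze-on-road time and system usage). The random-walk model and the noise settings are assumptions for illustration, not the paper's identified model.

```python
# A minimal scalar Kalman filter for latent trust: predict with random-walk
# dynamics, then correct with a noisy behavioral measurement in [0, 1].
# q = process noise, r = measurement noise, x0/p0 = initial state/variance.
def kalman_trust(observations, q=0.01, r=0.1, x0=0.5, p0=1.0):
    """Return filtered trust estimates for a sequence of noisy trust proxies."""
    x, p = x0, p0
    estimates = []
    for z in observations:
        # Predict: trust drifts slowly, so only the variance grows.
        p = p + q
        # Update: blend prediction and measurement via the Kalman gain.
        k = p / (p + r)
        x = x + k * (z - x)
        p = (1.0 - k) * p
        estimates.append(x)
    return estimates

proxies = [0.6, 0.65, 0.4, 0.45, 0.7]  # toy behavioral trust proxies per interval
print([round(v, 3) for v in kalman_trust(proxies)])
```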
Collapse
|
46
|
Peeters MMM, van Diggelen J, van den Bosch K, Bronkhorst A, Neerincx MA, Schraagen JM, Raaijmakers S. Hybrid collective intelligence in a human–AI society. AI & SOCIETY 2020. [DOI: 10.1007/s00146-020-01005-y] [Citation(s) in RCA: 24] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/26/2022]
|