1.
Nare M, Jurewicz K. Assessing Patient Trust in Automation in Health Care Systems: Within-Subjects Experimental Study. JMIR Hum Factors 2024;11:e48584. PMID: 39106096; DOI: 10.2196/48584.
Abstract
BACKGROUND Health care technology can improve patient outcomes when designed appropriately. Automation is becoming smarter and is increasingly being integrated into health care work systems. OBJECTIVE This study investigates trust between patients and an automated cardiac risk assessment tool (CRAT) in a simulated emergency department setting. METHODS A within-subjects experimental study was performed to investigate differences across three automation modes for the CRAT: (1) no automation, (2) automation only, and (3) semiautomation. Participants entered their simulated symptoms for each scenario into the CRAT as instructed by the experimenter, and the tool automatically classified them as high, medium, or low risk depending on the symptoms entered. Participants provided trust ratings for each combination of risk classification and automation mode on a scale of 1 to 10 (1=absolutely no trust and 10=complete trust). RESULTS Participants trusted the semiautomation condition significantly more than the automation-only condition (P=.002) and trusted the no-automation condition significantly more than the automation-only condition (P=.03). Additionally, participants trusted the CRAT significantly more in the high-severity scenario than in the medium-severity scenario (P=.004). CONCLUSIONS These findings emphasize the importance of the human component of automation when designing automated technology in health care systems. Automation and artificially intelligent systems are becoming more prevalent in health care, and this work underscores the need to consider the human element when designing automation into care delivery.
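The abstract does not specify how the CRAT maps entered symptoms to a risk class. As a purely illustrative sketch, a rule-based classifier of the kind such a tool might use could look like the following; the symptom sets here are hypothetical placeholders, not the study's actual logic.

```python
# Hypothetical symptom-to-risk rules; the study's actual CRAT classification
# logic is not described in the abstract, so these sets are illustrative only.
HIGH_RISK = {"chest pain", "shortness of breath"}
MEDIUM_RISK = {"dizziness", "palpitations"}

def classify_risk(symptoms: set[str]) -> str:
    """Return 'high', 'medium', or 'low' for a set of reported symptoms."""
    if symptoms & HIGH_RISK:
        return "high"
    if symptoms & MEDIUM_RISK:
        return "medium"
    return "low"

print(classify_risk({"chest pain", "fatigue"}))  # high
print(classify_risk({"dizziness"}))              # medium
print(classify_risk({"fatigue"}))                # low
```

A real tool would of course weight symptom combinations and patient history rather than match sets, but the sketch shows the high/medium/low output space the participants rated.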
Affiliation(s)
- Matthew Nare
- School of Industrial Engineering and Management, Oklahoma State University, Stillwater, OK, United States
- Katherina Jurewicz
- School of Industrial Engineering and Management, Oklahoma State University, Stillwater, OK, United States
2.
Jessup SA, Alarcon GM, Willis SM, Lee MA. A closer look at how experience, task domain, and self-confidence influence reliance towards algorithms. Appl Ergon 2024;121:104363. PMID: 39096745; DOI: 10.1016/j.apergo.2024.104363.
Abstract
Prior research has demonstrated that experience with a forecasting algorithm decreases reliance behaviors (i.e., the action of relying on the algorithm). However, the influence of model experience on reliance intentions (i.e., an intention or willingness to rely on the algorithm) has not been explored. Additionally, other factors such as self-confidence and domain knowledge are posited to influence algorithm reliance. The objective of this research was to examine how experience with a statistical model, task domain (used car sales, college grade point average (GPA), GitHub pull requests), and self-confidence influence reliance intentions, reliance behaviors, and perceived accuracy of one's own estimates and the model's estimates. Participants (N = 347) were recruited online and completed a forecasting task. Results indicate a statistically significant effect of self-confidence and task domain on reliance intentions, reliance behaviors, and perceived accuracy. However, unlike previous findings, model experience did not significantly influence reliance behavior, nor did it lead to significant changes in reliance intentions or perceived accuracy of oneself or the model. Our data suggest that factors such as task domain and self-confidence influence algorithm use more than model experience does. Individual differences and situational factors should be considered important aspects that influence forecasters' decisions to rely on predictions from a model or to instead use their own estimates, which can lead to sub-optimal performance.
Affiliation(s)
- Sarah A Jessup
- Consortium of Universities, Wright-Patterson AFB, OH, United States
- Gene M Alarcon
- Air Force Research Laboratory, Wright-Patterson AFB, OH, United States
- Sasha M Willis
- General Dynamics Information Technology, Dayton, OH, United States
- Michael A Lee
- General Dynamics Information Technology, Dayton, OH, United States
3.
Candrian C. How Terminology Affects Users' Responses to System Failures. Hum Factors 2024;66:2082-2103. PMID: 37734726; PMCID: PMC11141081; DOI: 10.1177/00187208231202572.
Abstract
OBJECTIVE The objective of our research is to advance the understanding of behavioral responses to a system's error. By examining trust as a dynamic variable and drawing from attribution theory, we explain the underlying mechanism and suggest how terminology can be used to mitigate the so-called algorithm aversion. In this way, we show that the use of different terms may shape consumers' perceptions and provide guidance on how these differences can be mitigated. BACKGROUND Previous research has interchangeably used various terms to refer to a system and results regarding trust in systems have been ambiguous. METHODS Across three studies, we examine the effect of different system terminology on consumer behavior following a system failure. RESULTS Our results show that terminology crucially affects user behavior. Describing a system as "AI" (i.e., self-learning and perceived as more complex) instead of as "algorithmic" (i.e., a less complex rule-based system) leads to more favorable behavioral responses by users when a system error occurs. CONCLUSION We suggest that in cases when a system's characteristics do not allow for it to be called "AI," users should be provided with an explanation of why the system's error occurred, and task complexity should be pointed out. We highlight the importance of terminology, as this can unintentionally impact the robustness and replicability of research findings. APPLICATION This research offers insights for industries utilizing AI and algorithmic systems, highlighting how strategic terminology use can shape user trust and response to errors, thereby enhancing system acceptance.
4.
Alarcon GM, Capiola A, Lee MA, Willis S, Hamdan IA, Jessup SA, Harris KN. Development and Validation of the System Trustworthiness Scale. Hum Factors 2024;66:1893-1913. PMID: 37458319; DOI: 10.1177/00187208231189000.
Abstract
OBJECTIVE We created and validated a scale to measure perceptions of system trustworthiness. BACKGROUND Several scales exist in the literature that attempt to assess trustworthiness of system referents. However, existing measures suffer from limitations in their development and validation. The current study sought to develop a scale based on theory and methodological rigor. METHOD We conducted exploratory and confirmatory factor analyses on data from two online studies to develop the System Trustworthiness Scale (STS). Additional analyses explored the manipulation of the factors and assessed convergent and divergent validity. RESULTS The exploratory factor analyses resulted in a three-factor solution that represented the theoretical constructs of trustworthiness: performance, purpose, and process. Confirmatory factor analyses confirmed the three-factor solution. In addition, correlation and regression analyses demonstrated the scale's divergent and predictive validity. CONCLUSION The STS is a psychometrically valid and predictive scale for assessing trustworthiness perceptions of system referents. APPLICATIONS The STS assesses trustworthiness perceptions of systems. Importantly, the scale differentiates performance, purpose, and process constructs and is adaptable to a variety of system referents.
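As a rough illustration of how a three-factor scale like the STS is typically used in practice, the sketch below averages Likert responses within each factor. The item-to-factor assignments and the 9-item, 1-7 response format are hypothetical; the published STS items and their loadings are in the paper itself.

```python
import numpy as np

# Hypothetical item-to-factor mapping for a three-factor trustworthiness
# scale; the actual STS items are defined in the published paper.
FACTORS = {
    "performance": [0, 1, 2],   # indices of items loading on performance
    "purpose":     [3, 4, 5],   # indices of items loading on purpose
    "process":     [6, 7, 8],   # indices of items loading on process
}

def score_sts(responses: np.ndarray) -> dict:
    """Compute per-factor mean scores from a vector of 9 Likert responses."""
    return {f: float(np.mean(responses[idx])) for f, idx in FACTORS.items()}

resp = np.array([6, 7, 5, 4, 4, 5, 3, 2, 4])  # one participant, 1-7 Likert
print(score_sts(resp))  # per-factor means for performance, purpose, process
```

Keeping the three subscales separate, rather than collapsing to one trust score, is what lets the scale differentiate performance, purpose, and process perceptions for a given system referent.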
Affiliation(s)
- Gene M Alarcon
- Air Force Research Laboratory, Wright Patterson AFB, OH, USA
- August Capiola
- Air Force Research Laboratory, Wright Patterson AFB, OH, USA
- Michael A Lee
- General Dynamics Information Technology Inc, Falls Church, VA, USA
- Sasha Willis
- General Dynamics Information Technology Inc, Falls Church, VA, USA
- Izz Aldin Hamdan
- General Dynamics Information Technology Inc, Falls Church, VA, USA
- Sarah A Jessup
- Air Force Research Laboratory, Wright Patterson AFB, OH, USA
5.
Lyons JB, Jessup SA, Vo TQ. The Role of Decision Authority and Stated Social Intent as Predictors of Trust in Autonomous Robots. Top Cogn Sci 2024;16:430-449. PMID: 35084796; DOI: 10.1111/tops.12601.
Abstract
Prior research has demonstrated that trust in robots and performance of robots are two important factors that influence human-autonomy teaming. However, other factors may influence users' perceptions and use of autonomous systems, such as the robots' perceived intent and decision authority. The current study experimentally examined participants' trust in an autonomous security robot (ASR), perceived trustworthiness of the ASR, and desire to use an ASR that varied in levels of decision authority and benevolence. Participants (N = 340) were recruited from Amazon Mechanical Turk. Results revealed that participants had increased trust in the ASR when the robot was described as having benevolent intent rather than self-protective intent. There were several interactions between decision authority and intent when predicting the trust process, showing that intent may matter most when the robot has discretion in executing that intent. Participants expressed a greater desire to use the ASR in a military context than in a public context. This research demonstrates that as robots become more prevalent in jobs paired with humans, factors such as the transparency of a robot's intent and its decision authority will influence users' trust and perceptions of trustworthiness.
Affiliation(s)
- Joseph B Lyons
- Air Force Research Laboratory, Wright-Patterson Air Force Base
6.
Dunning RE, Fischhoff B, Davis AL. When Do Humans Heed AI Agents' Advice? When Should They? Hum Factors 2024;66:1914-1927. PMID: 37553098; PMCID: PMC11089830; DOI: 10.1177/00187208231190459.
Abstract
OBJECTIVE We manipulate the presence, skill, and display of artificial intelligence (AI) recommendations in a strategy game to measure their effect on users' performance. BACKGROUND Many applications of AI require humans and AI agents to make decisions collaboratively. Success depends on how appropriately humans rely on the AI agent. We demonstrate an evaluation method for a platform that uses neural network agents of varying skill levels for the simple strategic game of Connect Four. METHODS We report results from a 2 × 3 between-subjects factorial experiment that varies the format of AI recommendations (categorical or probabilistic) and the AI agent's amount of training (low, medium, or high). On each round of 10 games, participants proposed a move, saw the AI agent's recommendations, and then moved. RESULTS Participants' performance improved with a highly skilled agent, but quickly plateaued, as they relied uncritically on the agent. Participants relied too little on lower skilled agents. The display format had no effect on users' skill or choices. CONCLUSIONS The value of these AI agents depended on their skill level and users' ability to extract lessons from their advice. APPLICATION Organizations employing AI decision support systems must consider behavioral aspects of the human-agent team. We demonstrate an approach to evaluating competing designs and assessing their performance.
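The two display formats manipulated in the experiment can be sketched as below: a categorical "best move" recommendation versus probabilistic per-move evaluations. The win-probability values and the exact message wording are made up for illustration; the study's actual interface is described in the paper.

```python
# Sketch of the two recommendation formats varied in the experiment:
# categorical (single best move) vs. probabilistic (per-column evaluations).
# The probabilities below are illustrative, not from the study's agents.
def categorical(evals: dict[int, float]) -> str:
    """Show only the agent's single recommended column."""
    best = max(evals, key=evals.get)
    return f"Recommended move: column {best}"

def probabilistic(evals: dict[int, float]) -> str:
    """Show the agent's win-probability estimate for each candidate column."""
    return ", ".join(f"col {c}: {p:.0%}" for c, p in sorted(evals.items()))

evals = {2: 0.15, 3: 0.60, 4: 0.25}  # hypothetical agent estimates
print(categorical(evals))    # Recommended move: column 3
print(probabilistic(evals))  # col 2: 15%, col 3: 60%, col 4: 25%
```

The study's null result for display format suggests that, at least in Connect Four, exposing the underlying probabilities did not change how critically users weighed the advice.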
7.
Bostrom A, Demuth JL, Wirz CD, Cains MG, Schumacher A, Madlambayan D, Bansal AS, Bearth A, Chase R, Crosman KM, Ebert-Uphoff I, Gagne DJ, Guikema S, Hoffman R, Johnson BB, Kumler-Bonfanti C, Lee JD, Lowe A, McGovern A, Przybylo V, Radford JT, Roth E, Sutter C, Tissot P, Roebber P, Stewart JQ, White M, Williams JK. Trust and trustworthy artificial intelligence: A research agenda for AI in the environmental sciences. Risk Anal 2024;44:1498-1513. PMID: 37939398; DOI: 10.1111/risa.14245.
Abstract
Demands to manage the risks of artificial intelligence (AI) are growing. These demands and the government standards arising from them both call for trustworthy AI. In response, we adopt a convergent approach to review, evaluate, and synthesize research on the trust and trustworthiness of AI in the environmental sciences and propose a research agenda. Evidential and conceptual histories of research on trust and trustworthiness reveal persisting ambiguities and measurement shortcomings related to inconsistent attention to the contextual and social dependencies and dynamics of trust. Potentially underappreciated in the development of trustworthy AI for environmental sciences is the importance of engaging AI users and other stakeholders, which human-AI teaming perspectives on AI development similarly underscore. Co-development strategies may also help reconcile efforts to develop performance-based trustworthiness standards with dynamic and contextual notions of trust. We illustrate the importance of these themes with applied examples and show how insights from research on trust and the communication of risk and uncertainty can help advance the understanding of trust and trustworthiness of AI in the environmental sciences.
Affiliation(s)
- Ann Bostrom
- Evans School of Public Policy & Governance, University of Washington, Seattle, Washington, USA
- Julie L Demuth
- Mesoscale & Microscale Meteorology Lab, National Center for Atmospheric Research (NCAR), Boulder, Colorado, USA
- Christopher D Wirz
- Mesoscale & Microscale Meteorology Lab, National Center for Atmospheric Research (NCAR), Boulder, Colorado, USA
- Mariana G Cains
- Mesoscale & Microscale Meteorology Lab, National Center for Atmospheric Research (NCAR), Boulder, Colorado, USA
- Andrea Schumacher
- Mesoscale & Microscale Meteorology Lab, National Center for Atmospheric Research (NCAR), Boulder, Colorado, USA
- Deianna Madlambayan
- Evans School of Public Policy & Governance, University of Washington, Seattle, Washington, USA
- Akansha Singh Bansal
- Cooperative Institute for Research in the Atmosphere, Colorado State University, Fort Collins, Colorado, USA
- Angela Bearth
- Consumer Behavior, Institute for Environmental Decisions, ETH Zürich, Zürich, Switzerland
- Randy Chase
- School of Meteorology, University of Oklahoma, Norman, Oklahoma, USA
- Katherine M Crosman
- Department of Marine Technology, Faculty of Engineering, Norwegian University of Science and Technology, Trondheim, Norway
- Imme Ebert-Uphoff
- Cooperative Institute for Research in the Atmosphere, Colorado State University, Fort Collins, Colorado, USA
- David John Gagne
- Computational & Information Systems Lab, National Center for Atmospheric Research, Boulder, Colorado, USA
- Seth Guikema
- Industrial & Operations Engineering, University of Michigan, Ann Arbor, Michigan, USA
- Robert Hoffman
- Institute for Human & Machine Cognition, Pensacola, Florida, USA
- Christina Kumler-Bonfanti
- Cooperative Institute for Research in Environmental Sciences, University of Colorado Boulder, Boulder, Colorado, USA
- John D Lee
- Industrial and Systems Engineering, University of Wisconsin-Madison, Madison, Wisconsin, USA
- Anna Lowe
- Marine, Earth and Atmospheric Sciences, North Carolina State University, Raleigh, North Carolina, USA
- Amy McGovern
- School of Meteorology, University of Oklahoma, Norman, Oklahoma, USA
- School of Computer Science, University of Oklahoma, Norman, Oklahoma, USA
- Vanessa Przybylo
- Department of Atmospheric and Environmental Sciences, University at Albany, State University of New York, Albany, New York, USA
- Jacob T Radford
- Cooperative Institute for Research in the Atmosphere, Colorado State University, Fort Collins, Colorado, USA
- Emilie Roth
- Roth Cognitive Engineering, Brookline, Massachusetts, USA
- Carly Sutter
- Department of Atmospheric and Environmental Sciences, University at Albany, State University of New York, Albany, New York, USA
- Philippe Tissot
- Conrad Blucher Institute for Surveying and Science, Texas A&M University-Corpus Christi, Corpus Christi, Texas, USA
- Paul Roebber
- School of Freshwater Sciences, University of Wisconsin-Milwaukee, Milwaukee, Wisconsin, USA
- Jebb Q Stewart
- Global Systems Laboratory, Oceanic and Atmospheric Research, National Oceanic and Atmospheric Administration, Boulder, Colorado, USA
- Miranda White
- Conrad Blucher Institute for Surveying and Science, Texas A&M University-Corpus Christi, Corpus Christi, Texas, USA
- John K Williams
- The Weather Company, an IBM Business, Andover, Massachusetts, USA
8.
Flohr LA, Schuß M, Wallach DP, Krüger A, Riener A. Designing for passengers' information needs on fellow travelers: A comparison of day and night rides in shared automated vehicles. Appl Ergon 2024;116:104198. PMID: 38091694; DOI: 10.1016/j.apergo.2023.104198.
Abstract
Shared automated mobility-on-demand promises efficient, sustainable, and flexible transportation. Nevertheless, security concerns, resilience, and their mutual influence - especially at night - will likely be among the most critical barriers to public adoption, since passengers have to share rides with strangers without a human driver on board. Prior research suggests that information about fellow travelers could alleviate passengers' concerns, so we designed two user interface variants to investigate the role of this information in an exploratory within-subjects user study (N=24). Participants experienced four automated day and night rides with varying personal information about co-passengers in a simulated environment. The results of the mixed-method study indicate that information about other passengers (e.g., photo, gender, and name) positively affects user experience at night, whereas it is less necessary during the day. Given participants' simultaneously raised privacy concerns, balancing security and privacy demands poses a substantial challenge for resilient system design.
Affiliation(s)
- Lukas A Flohr
- Ergosign GmbH, Saarbrücken & Munich, Germany; Saarbrücken Graduate School of Computer Science, Saarland Informatics Campus, Saarbrücken, Germany
- Martina Schuß
- Technische Hochschule Ingolstadt (THI), Ingolstadt, Germany; Johannes Kepler Universität, Linz, Austria
- Dieter P Wallach
- Ergosign GmbH, Saarbrücken & Munich, Germany; University of Applied Sciences Kaiserslautern, Kaiserslautern, Germany
- Antonio Krüger
- German Research Center for Artificial Intelligence (DFKI), Saarland Informatics Campus, Saarbrücken, Germany
- Andreas Riener
- Technische Hochschule Ingolstadt (THI), Ingolstadt, Germany
9.
Hopko SK, Mehta RK. Trust in Shared-Space Collaborative Robots: Shedding Light on the Human Brain. Hum Factors 2024;66:490-509. PMID: 35707995; DOI: 10.1177/00187208221109039.
Abstract
BACKGROUND Industry 4.0 is currently underway allowing for improved manufacturing processes that leverage the collective advantages of human and robot agents. Consideration of trust can improve the quality and safety in such shared-space human-robot collaboration environments. OBJECTIVE The use of physiological response to monitor and understand trust is currently limited due to a lack of knowledge on physiological indicators of trust. This study examines neural responses to trust within a shared-workcell human-robot collaboration task and discusses the use of granular and multimodal perspectives to study trust. METHODS Sixteen sex-balanced participants completed a surface finishing task in collaboration with a UR10 collaborative robot. All participants underwent robot reliability conditions and robot assistance level conditions. Brain activation and connectivity using functional near infrared spectroscopy, subjective responses, and performance were measured throughout the study. RESULTS Significantly increased neural activation was observed in response to faulty robot behavior within the medial and right dorsolateral prefrontal cortex (PFC). A similar trend was observed for the anterior PFC, primary motor cortex, and primary visual cortex. Faulty robot behavior also resulted in reduced functional connectivity strengths throughout the brain. DISCUSSION These findings implicate regions in the prefrontal cortex along with specific connectivity patterns as signifiers of distrusting conditions. The neural response may be indicative of how trust is influenced, measured, and manifested for human-robot collaboration that requires active teaming. APPLICATION Neuroergonomic response metrics can reveal new perspectives on trust in automation that subjective responses alone are not able to provide.
10.
Qu J, Zhou R, Zhang Y, Ma Q. Understanding trust calibration in automated driving: the effect of time, personality, and system warning design. Ergonomics 2023;66:2165-2181. PMID: 36920361; DOI: 10.1080/00140139.2023.2191907.
Abstract
In a future of human-automation co-driving, trust must be understood as dynamic. This paper explored how trust changes over time and how multiple factors (time, trust propensity, neuroticism, and takeover warning design) jointly calibrate trust. We conducted two driving simulator experiments to measure drivers' trust before, during, and after the experiment under takeover scenarios. The results showed that trust in automation increased during short-term interactions and dropped after four months, though it remained higher than pre-experiment trust. Initial trust and trust propensity had a stable impact on trust. Drivers trusted the system more with the two-stage (MR + TOR) warning design than with the one-stage (TOR) design. Neuroticism had a significant effect for the countdown warning compared with the content warning. Practitioner summary: The results provide new data and knowledge for trust calibration in takeover scenarios. The findings can help design more reasonable automated driving systems for long-term human-automation interactions.
Affiliation(s)
- Jianhong Qu
- School of Economics and Management, Beihang University, Beijing, P. R. China
- Ronggang Zhou
- School of Economics and Management, Beihang University, Beijing, P. R. China
- Yaping Zhang
- School of Economics and Management, Beihang University, Beijing, P. R. China
- Qianli Ma
- School of Economics and Management, Beihang University, Beijing, P. R. China
11.
Alsaid A, Li M, Chiou EK, Lee JD. Measuring trust: a text analysis approach to compare, contrast, and select trust questionnaires. Front Psychol 2023;14:1192020. PMID: 38034296; PMCID: PMC10684734; DOI: 10.3389/fpsyg.2023.1192020.
Abstract
Introduction Trust has emerged as a prevalent construct to describe relationships between people and between people and technology in myriad domains. Across disciplines, researchers have relied on many different questionnaires to measure trust. The degree to which these questionnaires differ has not been systematically explored. In this paper, we use a word-embedding text analysis technique to identify the differences and common themes across the most used trust questionnaires and provide guidelines for questionnaire selection. Methods A review was conducted to identify existing trust questionnaires. In total, we included 46 trust questionnaires from three main domains (i.e., Automation, Humans, and E-commerce) with a total of 626 items measuring different trust layers (i.e., Dispositional, Learned, and Situational). Next, we encoded the words within each questionnaire using GloVe word embeddings, computed the embedding for each questionnaire item and for each questionnaire, and reduced the dimensionality of the resulting dataset using UMAP to visualize these embeddings in scatterplots. We implemented the visualization in a web app for interactive exploration of the questionnaires (https://areen.shinyapps.io/Trust_explorer/). Results At the word level, the semantic space serves to produce a lexicon of trust-related words. At the item and questionnaire level, the analysis provided recommendations on questionnaire selection based on the dispersion of questionnaire items and on the domain and layer composition of each questionnaire. Along with the web app, the results help explore the semantic space of trust questionnaires and guide the questionnaire selection process. Discussion The results provide a novel means to compare and select trust questionnaires and to glean insights about trust from spoken dialog or written comments.
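The core of the pipeline described above - average pretrained word vectors into an item embedding, then compare items in the resulting semantic space - can be sketched in a few lines. The four-dimensional toy vectors below stand in for real GloVe embeddings, and the UMAP dimensionality-reduction step is omitted for brevity.

```python
import numpy as np

# Toy word vectors standing in for pretrained GloVe embeddings
# (the paper uses real GloVe vectors; these 4-d vectors are illustrative).
glove = {
    "trust":    np.array([0.9, 0.1, 0.0, 0.2]),
    "reliable": np.array([0.8, 0.2, 0.1, 0.1]),
    "system":   np.array([0.1, 0.9, 0.3, 0.0]),
    "honest":   np.array([0.7, 0.0, 0.2, 0.4]),
    "buy":      np.array([0.0, 0.3, 0.9, 0.1]),
    "website":  np.array([0.1, 0.8, 0.6, 0.0]),
}

def embed_item(text: str) -> np.ndarray:
    """Embed a questionnaire item as the mean of its known word vectors."""
    vecs = [glove[w] for w in text.lower().split() if w in glove]
    return np.mean(vecs, axis=0)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

item_a = embed_item("the system is reliable")
item_b = embed_item("the system is honest")
item_c = embed_item("buy website")

# Trust-related items land closer together than a commerce item.
print(cosine(item_a, item_b) > cosine(item_a, item_c))  # True
```

A questionnaire-level embedding is then simply the mean of its item embeddings, which is what makes whole questionnaires comparable in one space.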
Affiliation(s)
- Areen Alsaid
- Department of Industrial and Manufacturing Systems Engineering, University of Michigan-Dearborn, Dearborn, MI, United States
- Mengyao Li
- Department of Industrial and Systems Engineering, University of Wisconsin-Madison, Madison, WI, United States
- Erin K. Chiou
- Department of Human Systems Engineering, Arizona State University, Mesa, AZ, United States
- John D. Lee
- Department of Industrial and Systems Engineering, University of Wisconsin-Madison, Madison, WI, United States
12.
Johnson CJ, Demir M, McNeese NJ, Gorman JC, Wolff AT, Cooke NJ. The Impact of Training on Human-Autonomy Team Communications and Trust Calibration. Hum Factors 2023;65:1554-1570. PMID: 34595958; DOI: 10.1177/00187208211047323.
Abstract
OBJECTIVE This work examines two human-autonomy team (HAT) training approaches that target communication and trust calibration to improve team effectiveness under degraded conditions. BACKGROUND Human-autonomy teaming presents challenges to teamwork, some of which may be addressed through training. Factors vital to HAT performance include communication and calibrated trust. METHOD Thirty teams of three, including one confederate acting as an autonomous agent, received either entrainment-based coordination training, trust calibration training, or control training before executing a series of missions operating a simulated remotely piloted aircraft. Automation and autonomy failures simulating degraded conditions were injected during missions, and measures of team communication, trust, and task efficiency were collected. RESULTS Teams receiving coordination training had higher communication anticipation ratios, took photos of targets faster, and overcame more autonomy failures. Although autonomy failures were introduced in all conditions, teams receiving the calibration training reported that their overall trust in the agent was more robust over time. However, they did not perform better than the control condition. CONCLUSIONS Training based on entrainment of communications, wherein introduction of timely information exchange through one team member has lasting effects throughout the team, was positively associated with improvements in HAT communications and performance under degraded conditions. Training that emphasized the shortcomings of the autonomous agent appeared to calibrate expectations and maintain trust. APPLICATIONS Team training that includes an autonomous agent that models effective information exchange may positively impact team communication and coordination. Training that emphasizes the limitations of an autonomous agent may help calibrate trust.
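In the team-communication literature, an anticipation ratio is commonly computed as information "pushed" (volunteered before being asked) divided by information "pulled" (explicitly requested). A toy tally over pre-coded messages, assuming such a push/pull coding scheme, might look like this; the labels are illustrative, not the study's coding manual.

```python
# Anticipation ratio = pushed messages / pulled messages, over a coded
# communication log; the "push"/"pull" labels here are assumed codes.
def anticipation_ratio(messages: list[str]) -> float:
    pushed = sum(1 for m in messages if m == "push")
    pulled = sum(1 for m in messages if m == "pull")
    return pushed / pulled if pulled else float("inf")

log = ["push", "push", "pull", "push", "pull"]
print(anticipation_ratio(log))  # 1.5
```

A ratio above 1 indicates a team that volunteers information more than it requests it, which is the pattern the coordination-trained teams showed.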
13.
Liefooghe B, Min E, Aarts H. The effects of social presence on cooperative trust with algorithms. Sci Rep 2023;13:17463. PMID: 37838816; PMCID: PMC10576745; DOI: 10.1038/s41598-023-44354-6.
Abstract
Algorithms support many processes in modern society. Research using trust games frequently reports that people are less inclined to cooperate when they believe they are playing against an algorithm. Trust is, however, malleable by contextual factors, and social presence can increase the willingness to collaborate. We investigated whether situating cooperation with an algorithm in the presence of another person increases cooperative trust. Three groups of participants played a trust game against a pre-programmed algorithm in an online web-hosted experiment. The first group was told they played against another person who was present online. The second group was told they played against an algorithm. The third group was told they played against an algorithm while another person was present online. More cooperative responses were observed in the first group than in the second group, a difference that replicates previous findings. In addition, cooperative trust dropped more over the course of the trust game when participants interacted with an algorithm in the absence of another person compared to the other two groups. This latter finding suggests that social presence can mitigate distrust in interacting with an algorithm. We discuss the cognitive mechanisms that can mediate this effect.
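For readers unfamiliar with the paradigm, a single round of a canonical trust game can be sketched as below. The endowment, multiplier, and return fraction are the conventional illustrative values, not necessarily those used in this study.

```python
# One round of a canonical trust game: the investor sends part of an
# endowment, the amount is multiplied in transit, and the trustee returns
# a share. Parameter values are conventional illustrations only.
def trust_round(endowment: float = 10, sent: float = 5,
                multiplier: float = 3, return_frac: float = 0.5):
    received = sent * multiplier          # amount the trustee receives
    returned = received * return_frac     # amount sent back to the investor
    investor_payoff = endowment - sent + returned
    trustee_payoff = received - returned
    return investor_payoff, trustee_payoff

print(trust_round())  # (12.5, 7.5)
```

The amount `sent` operationalizes cooperative trust: sending more is profitable only if the counterpart (human or algorithm) is expected to return a fair share.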
Affiliation(s)
- Ebelien Min
- Utrecht University, Utrecht, The Netherlands
- Henk Aarts
- Utrecht University, Utrecht, The Netherlands
14
Momen A, de Visser EJ, Fraune MR, Madison A, Rueben M, Cooley K, Tossell CC. Group trust dynamics during a risky driving experience in a Tesla Model X. Front Psychol 2023; 14:1129369. [PMID: 37408965 PMCID: PMC10319128 DOI: 10.3389/fpsyg.2023.1129369]
Abstract
The growing concern about the risk and safety of autonomous vehicles (AVs) has made it vital to understand driver trust and behavior when operating AVs. While research has uncovered human factors and design issues based on individual driver performance, there remains a lack of insight into how trust in automation evolves in groups of people who face risk and uncertainty while traveling in AVs. To this end, we conducted a naturalistic experiment with groups of participants who were encouraged to engage in conversation while riding a Tesla Model X on campus roads. Our methodology was uniquely suited to uncover these issues through naturalistic interaction by groups in the face of a risky driving context. Conversations were analyzed, revealing several themes pertaining to trust in automation: (1) collective risk perception, (2) experimenting with automation, (3) group sense-making, (4) human-automation interaction issues, and (5) benefits of automation. Our findings highlight the untested and experimental nature of AVs and confirm serious concerns about the safety and readiness of this technology for on-road use. The process of determining appropriate trust and reliance in AVs will therefore be essential for drivers and passengers to ensure the safe use of this experimental and continuously changing technology. Revealing insights into social group-vehicle interaction, our results speak to the potential dangers and ethical challenges with AVs as well as provide theoretical insights on group trust processes with advanced technology.
Affiliation(s)
- Ali Momen
- United States Air Force Academy, Colorado Springs, CO, United States
- Marlena R. Fraune
- Department of Psychology, New Mexico State University, Las Cruces, NM, United States
- Anna Madison
- United States Air Force Academy, Colorado Springs, CO, United States
- United States Army Research Laboratory, Aberdeen Proving Ground, Aberdeen, MD, United States
- Matthew Rueben
- Department of Psychology, New Mexico State University, Las Cruces, NM, United States
- Katrina Cooley
- United States Air Force Academy, Colorado Springs, CO, United States
- Chad C. Tossell
- United States Air Force Academy, Colorado Springs, CO, United States
15
Rodriguez Rodriguez L, Bustamante Orellana CE, Chiou EK, Huang L, Cooke N, Kang Y. A review of mathematical models of human trust in automation. Front Neuroergon 2023; 4:1171403. [PMID: 38234493 PMCID: PMC10790856 DOI: 10.3389/fnrgo.2023.1171403]
Abstract
Understanding how people trust autonomous systems is crucial to achieving better performance and safety in human-autonomy teaming. Trust in automation is a rich and complex process that has given rise to numerous measures and approaches aimed at comprehending and examining it. Although researchers have been developing models for understanding the dynamics of trust in automation for several decades, these models are primarily conceptual and often involve components that are difficult to measure. Mathematical models have emerged as powerful tools for gaining insightful knowledge about the dynamic processes of trust in automation. This paper provides an overview of various mathematical modeling approaches, their limitations, feasibility, and generalizability for trust dynamics in human-automation interaction contexts. Furthermore, this study proposes a novel and dynamic approach to model trust in automation, emphasizing the importance of incorporating different timescales into measurable components. Due to the complex nature of trust in automation, it is also suggested to combine machine learning and dynamic modeling approaches, as well as incorporating physiological data.
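One common minimal instance of such a dynamic model (our illustration, not a model proposed by this review) treats trust as a state variable that moves toward observed automation performance at each interaction, which captures both gradual calibration and the drop after a failure:

```python
def update_trust(trust, performance, rate):
    """One step of a first-order trust-dynamics model: trust moves toward
    observed performance (0 = failure, 1 = success) at a fixed learning rate."""
    return trust + rate * (performance - trust)

# Trust climbs with successes, drops sharply after a failure (performance = 0),
# then recovers only gradually. Initial trust and the rate are assumptions.
trust = 0.8
history = []
for perf in [1.0, 1.0, 0.0, 1.0, 1.0, 1.0]:
    trust = update_trust(trust, perf, rate=0.3)
    history.append(round(trust, 3))
```

Richer models in this literature add separate gain and loss rates (trust is typically lost faster than it is regained) or multiple timescales, which is the direction the review's proposed approach emphasizes.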
Affiliation(s)
- Lucero Rodriguez Rodriguez
- Simon A. Levin Mathematical and Computational Modeling Sciences Center, Arizona State University, Tempe, AZ, United States
- Carlos E. Bustamante Orellana
- Simon A. Levin Mathematical and Computational Modeling Sciences Center, Arizona State University, Tempe, AZ, United States
- Erin K. Chiou
- Human Systems Engineering, Arizona State University, Mesa, AZ, United States
- Lixiao Huang
- Center for Human, Artificial Intelligence, and Robot Teaming, Global Security Initiative, Arizona State University, Mesa, AZ, United States
- Nancy Cooke
- Human Systems Engineering, Arizona State University, Mesa, AZ, United States
- Center for Human, Artificial Intelligence, and Robot Teaming, Global Security Initiative, Arizona State University, Mesa, AZ, United States
- Yun Kang
- Sciences and Mathematics Faculty, College of Integrative Sciences and Arts, Arizona State University, Mesa, AZ, United States
16
Paliga M. The Relationships of Human-Cobot Interaction Fluency with Job Performance and Job Satisfaction among Cobot Operators-The Moderating Role of Workload. Int J Environ Res Public Health 2023; 20:5111. [PMID: 36982018 PMCID: PMC10048792 DOI: 10.3390/ijerph20065111]
Abstract
Modern factories are subject to rapid technological changes, including the advancement of robotics. A key manufacturing solution in the fourth industrial revolution is the introduction of collaborative robots (cobots), which cooperate directly with human operators while executing shared tasks. Although collaborative robotics has tangible benefits, cobots pose several challenges to human-robot interaction. Proximity, unpredictable robot behavior, and switching the operator's role from a co-operant to a supervisor can negatively affect the operator's cognitive, emotional, and behavioral responses, resulting in their lower well-being and decreased job performance. Therefore, proper actions are necessary to improve the interaction between the robot and its human counterpart. Specifically, exploring the concept of human-robot interaction (HRI) fluency shows promising perspectives. However, research on conditions affecting the relationships between HRI fluency and its outcomes is still in its infancy. Therefore, the aim of this cross-sectional survey study was twofold. First, the relationships of HRI fluency with job performance (i.e., task performance, organizational citizenship behavior, and creative performance) and job satisfaction were investigated. Second, the moderating role of the quantitative workload in these associations was verified. The analyses carried out on data from 200 male and female cobot operators working on the shop floor showed positive relationships between HRI fluency, job performance, and job satisfaction. Moreover, the study confirmed the moderating role of the quantitative workload in these relations. The results showed that the higher the workload, the lower the relationships between HRI fluency and its outcomes. The study findings are discussed within the theoretical framework of the Job Demands-Control-Support model.
Affiliation(s)
- Mateusz Paliga
- Institute of Psychology, Faculty of Social Sciences, University of Silesia in Katowice, 40-007 Katowice, Poland
17
Xie Y, Zhou R, Chan AHS, Jin M, Qu M. Motivation to interaction media: The impact of automation trust and self-determination theory on intention to use the new interaction technology in autonomous vehicles. Front Psychol 2023; 14:1078438. [PMID: 36844336 PMCID: PMC9945264 DOI: 10.3389/fpsyg.2023.1078438]
Abstract
Introduction This research investigated the effects of the three psychological needs (competence, autonomy, and relatedness) of self-determination theory (SDT) and of automation trust on users' intention to employ the new interaction technology brought by autonomous vehicles (AVs), specifically the interaction mode and the virtual image. Method The study approaches AV interaction technology from the perspective of psychological motivation theory. Using a structured questionnaire, participants completed self-report measures related to these two interaction technologies; responses from 155 drivers were analyzed. Results Users' intentions were directly predicted by their perceived competence, autonomy, and relatedness under SDT and by automation trust, which jointly explained at least 66% of the variance in behavioral intention. In addition, the contribution of each predictive component to behavioral intention depended on the type of interaction technology: relatedness and competence significantly affected the behavioral intention to use the interaction mode but not the virtual image. Discussion These findings support the necessity of distinguishing between types of AV interaction technology when predicting users' intention to use them.
Affiliation(s)
- Yubin Xie
- School of Economics and Management, Beihang University, Beijing, China
- Department of Advanced Design and Systems Engineering, City University of Hong Kong, Hong Kong, China
- Ronggang Zhou
- School of Economics and Management, Beihang University, Beijing, China
- Key Laboratory of Complex System Analysis, Management and Decision (Beihang University), Ministry of Education of the People's Republic of China, Beijing, China
- Alan Hoi Shou Chan
- Department of Advanced Design and Systems Engineering, City University of Hong Kong, Hong Kong, China
- Mingyu Jin
- School of Economics and Management, Beihang University, Beijing, China
- Miao Qu
- School of Economics and Management, Beihang University, Beijing, China
18
Niu K, Liu W, Zhang J, Liang M, Li H, Zhang Y, Du Y. A Task Complexity Analysis Method to Study the Emergency Situation under Automated Metro System. Int J Environ Res Public Health 2023; 20:2314. [PMID: 36767680 PMCID: PMC9916089 DOI: 10.3390/ijerph20032314]
Abstract
System upgrades and team member interactions lead to changes in task structure. To handle emergencies efficiently and safely, a comprehensive method for assessing traffic dispatching team task complexity (TDTTC) is proposed, based on team cognitive work analysis (Team-CWA) and network feature analysis, from the perspective of the socio-technical system. The method comprises two stages. In the first stage, four phases of Team-CWA, i.e., team work domain analysis, team control task analysis, team strategies analysis, and team worker competencies analysis, were applied to analyze TDTTC qualitatively. In the second stage, a mapping process was established based on events and information cues. After the team task network was established, the characteristic indexes of node degree/average degree, average shortest path length, agglomeration (clustering) coefficient, and overall network performance were extracted to analyze TDTTC quantitatively. Tasks for a screen-door fault under grades of automation GOA1-GOA4 were compared. The results revealed that the more nodes and the more communication between nodes, the larger the network scale and the more complex the TDTTC, regardless of the automation level. The method is both an application of cognitive engineering theory to the field of task complexity and a contribution to team task complexity analysis in the development of automated metro operation.
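The quantitative stage above rests on standard graph indexes. A minimal sketch of two of them, node degree and average shortest path length, computed on a toy task network; the node names and topology are our illustration, not the paper's case data:

```python
from collections import deque

def degrees(adj):
    """Node degree for an undirected graph given as {node: set(neighbors)}."""
    return {n: len(nbrs) for n, nbrs in adj.items()}

def average_shortest_path_length(adj):
    """Mean shortest-path length over all ordered node pairs (BFS per node)."""
    nodes = list(adj)
    total, pairs = 0, 0
    for src in nodes:
        dist = {src: 0}
        queue = deque([src])
        while queue:                      # breadth-first search from src
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        for dst in nodes:
            if dst != src and dst in dist:
                total += dist[dst]
                pairs += 1
    return total / pairs

# A hypothetical 5-node dispatching task network: a hub plus a short chain.
adj = {
    "dispatch": {"driver", "station", "control"},
    "driver": {"dispatch"},
    "station": {"dispatch", "platform"},
    "control": {"dispatch"},
    "platform": {"station"},
}
deg = degrees(adj)
aspl = average_shortest_path_length(adj)
```

As the abstract notes, adding nodes and edges (more communication between roles) grows these indexes, which is what drives the higher complexity scores at higher grades of automation.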
Affiliation(s)
- Ke Niu
- Collaborative Innovation Center for HSR Driver Health and Safety, Zhengzhou Railway Vocational & Technical College, Zhengzhou 451460, China
- Henan Engineering Research Center of Rail Transit Intelligent Security, Zhengzhou Railway Vocational & Technical College, Zhengzhou 451460, China
- Wenbo Liu
- Collaborative Innovation Center for HSR Driver Health and Safety, Zhengzhou Railway Vocational & Technical College, Zhengzhou 451460, China
- Henan Engineering Research Center of Rail Transit Intelligent Security, Zhengzhou Railway Vocational & Technical College, Zhengzhou 451460, China
- Jia Zhang
- Collaborative Innovation Center for HSR Driver Health and Safety, Zhengzhou Railway Vocational & Technical College, Zhengzhou 451460, China
- Henan Engineering Research Center of Rail Transit Intelligent Security, Zhengzhou Railway Vocational & Technical College, Zhengzhou 451460, China
- Mengxuan Liang
- Department of Construction Engineering and Management, North China University of Water Resources and Electric Power, Zhengzhou 450046, China
- Huimin Li
- Department of Construction Engineering and Management, North China University of Water Resources and Electric Power, Zhengzhou 450046, China
- Yaqiong Zhang
- Collaborative Innovation Center for HSR Driver Health and Safety, Zhengzhou Railway Vocational & Technical College, Zhengzhou 451460, China
- Henan Engineering Research Center of Rail Transit Intelligent Security, Zhengzhou Railway Vocational & Technical College, Zhengzhou 451460, China
- Yihang Du
- New Media Arts Department, National Academy of Chinese Theatre Arts, Beijing 100073, China
19
Lyons J, Highland P, Bos N, Lyons D, Skinner A, Schnell T, Hefron R. Measuring Perceived Agent Appropriateness in a Live-Flight Human-Autonomy Teaming Scenario. Ergon Des 2022. [DOI: 10.1177/10648046221129393]
Abstract
United States Air Force Test Pilot School students (N = 6) participated in a study involving an agent-directed human pilot ("Blue agent") in dogfighting scenarios against an adversary ("Red agent"). The adversary operated at three levels of difficulty: low, medium, and high. An agent appropriateness scale was developed to gauge how appropriate the Blue agent's behaviors were during each dogfight. Results demonstrated that agent appropriateness varied with Red agent difficulty. These results suggest that agent appropriateness is an essential element in human-autonomy teaming research. Practitioners should seek to develop agent appropriateness measures suitable for the particular context and technology in question.
20
Koren Y, Feingold Polak R, Levy-Tzedek S. Extended Interviews with Stroke Patients Over a Long-Term Rehabilitation Using Human-Robot or Human-Computer Interactions. Int J Soc Robot 2022; 14:1893-1911. [PMID: 36158255 PMCID: PMC9483483 DOI: 10.1007/s12369-022-00909-7]
Abstract
Socially assistive robots (SARs) have been proposed to assist post-stroke patients in performing their exercises during the rehabilitation process, with trust in the robot identified as an important factor in human-robot interaction. In the current study, we aimed to identify and characterize factors that influence post-stroke patients' trust in a robot-operated and a computer-operated rehabilitation platform during and after long-term experience with the platform. We conducted 29 interviews with 16 stroke patients who underwent a long-term rehabilitation process assisted by either a SAR or a computer interface. The intervention lasted 5-7 weeks per patient, for a total of 229 sessions over 18 months. By using a qualitative research method (extended interviews "in the wild" with stroke patients over a long-term rehabilitation process), our study reveals users' perspectives on the factors affecting trust in the SAR or in the computer interface during rehabilitation. The results support the assertion that SARs have added value in the rehabilitative care of stroke patients. Personal characteristics, such as age and gender, appear to affect users' acceptance of a non-human operator as a practice assistant, and our findings support the notion that SARs augment rehabilitative therapies beyond a standard computer. Importantly, patients appreciated different aspects of the non-human operator in the two groups: in the SAR group, users preferred its functional performance over its anthropomorphized social skills; in the Computer group, users highlighted its contribution to training their memory skills.
Affiliation(s)
- Yaacov Koren
- Department of Sociology and Anthropology, Tel-Aviv University, Tel-Aviv, Israel
- Ronit Feingold Polak
- Recanati School for Community Health Professions, Department of Physical Therapy, Ben-Gurion University of the Negev, Beer-Sheva, Israel
- Shelly Levy-Tzedek
- Recanati School for Community Health Professions, Department of Physical Therapy, Ben-Gurion University of the Negev, Beer-Sheva, Israel
- Zlotowski Center for Neuroscience, Ben-Gurion University of the Negev, Beer-Sheva, Israel
- Freiburg Institute for Advanced Studies (FRIAS), University of Freiburg, Freiburg, Germany
21
Lyons JB, Hamdan IA, Vo TQ. Explanations and trust: What happens to trust when a robot partner does something unexpected? Comput Human Behav 2022. [DOI: 10.1016/j.chb.2022.107473]
22
Bobko P, Hirshfield L, Eloy L, Spencer C, Doherty E, Driscoll J, Obolsky H. Human-agent teaming and trust calibration: a theoretical framework, configurable testbed, empirical illustration, and implications for the development of adaptive systems. Theor Issues Ergon Sci 2022. [DOI: 10.1080/1463922x.2022.2086644]
Affiliation(s)
- Lucca Eloy
- University of Colorado at Boulder, Boulder, CO, USA
- Cara Spencer
- University of Colorado at Boulder, Boulder, CO, USA
23
Eloy L, Doherty EJ, Spencer CA, Bobko P, Hirshfield L. Using fNIRS to Identify Transparency- and Reliability-Sensitive Markers of Trust Across Multiple Timescales in Collaborative Human-Human-Agent Triads. Front Neuroergon 2022; 3:838625. [PMID: 38235468 PMCID: PMC10790910 DOI: 10.3389/fnrgo.2022.838625]
Abstract
Intelligent agents are rapidly evolving from assistants into teammates as they perform increasingly complex tasks. Successful human-agent teams leverage the computational power and sensory capabilities of automated agents while keeping the human operator's expectations consistent with the agent's ability. This helps prevent over-reliance on and under-utilization of the agent and optimizes its effectiveness. Research at the intersection of human-computer interaction, social psychology, and neuroergonomics has identified trust as a governing factor of human-agent interactions that can be modulated to maintain appropriate expectations. To achieve this calibration, trust can be monitored continuously and unobtrusively using neurophysiological sensors. While prior studies have demonstrated the potential of functional near-infrared spectroscopy (fNIRS), a lightweight neuroimaging technology, for predicting social, cognitive, and affective states, few have successfully used it to measure complex social constructs like trust in artificial agents, and even fewer have examined the dynamics of hybrid teams of more than one human or one agent. We address this gap by developing a highly collaborative task that requires knowledge sharing within teams of two humans and one agent. Using brain data obtained with fNIRS sensors, we aim to identify brain regions sensitive to changes in agent behavior on long- and short-term scales. We manipulated agent reliability and transparency while measuring trust, mental demand, team processes, and affect. Transparency and reliability levels were found to significantly affect trust in the agent, while transparency explanations did not impact mental demand. Reducing agent communication disrupted interpersonal trust and team cohesion, suggesting dynamics similar to those of human-human teams. Contrasts from general linear model analyses identify dorsal medial prefrontal cortex activation specific to assessing the agent's transparency explanations and characterize increases in mental demand as signaled by dorsolateral prefrontal cortex and frontopolar activation. Event-level data on the short timescale show that predicting whether an individual will trust the agent, using fNIRS data from the 15 s before their decision, is feasible. Discussing our results, we identify targets and directions for future neuroergonomics research as a step toward building an intelligent trust-modulation system to optimize human-agent collaborations in real time.
Affiliation(s)
- Lucca Eloy
- Institute of Cognitive Science, University of Colorado Boulder, Boulder, CO, United States
- Emily J. Doherty
- Institute of Cognitive Science, University of Colorado Boulder, Boulder, CO, United States
- Cara A. Spencer
- Institute of Cognitive Science, University of Colorado Boulder, Boulder, CO, United States
- Philip Bobko
- Department of Management, Gettysburg College, Gettysburg, PA, United States
- Leanne Hirshfield
- Institute of Cognitive Science, University of Colorado Boulder, Boulder, CO, United States
24
Roscoe RD. Please Join Me/Us/Them on My/Our/Their Journey to Justice in STEM. Discourse Process 2022. [DOI: 10.1080/0163853x.2022.2050084]
Affiliation(s)
- Rod D. Roscoe
- Human Systems Engineering, The Polytechnic School, Ira A. Fulton Schools of Engineering, Arizona State University
25
Hopko S, Wang J, Mehta R. Human Factors Considerations and Metrics in Shared Space Human-Robot Collaboration: A Systematic Review. Front Robot AI 2022; 9:799522. [PMID: 35187093 PMCID: PMC8850717 DOI: 10.3389/frobt.2022.799522]
Abstract
The degree of successful human-robot collaboration depends on the joint consideration of robot factors (RFs) and human factors (HFs). Depending on the state of the operator, a change in a robot factor, such as behavior or level of autonomy, can be perceived differently and affect how the operator chooses to interact with and utilize the robot. This interaction can affect system performance and safety in dynamic ways. Human factors in human-automation interaction have long been studied; however, the formal investigation of these HFs in shared space human-robot collaboration (HRC), and the potential interactive effects between covariate HFs (HF-HF) and between HFs and RFs (HF-RF) in shared space collaborative robotics, requires further attention. Furthermore, methodological approaches to measuring or manipulating these factors can provide insights into contextual effects and opportunities for improved measurement techniques. A systematic literature review was therefore performed to evaluate the most frequently addressed operator HF states in shared space HRC, the methods used to quantify these states, and the implications of the states for HRC. The three most frequently measured states are trust, cognitive workload, and anxiety, with subjective questionnaires universally the most common quantification method, except for fatigue, where electromyography is more common. Furthermore, the majority of included studies evaluate the effect of manipulating RFs on HFs, but few explain the effect of the HFs on system attributes or performance. Where reported, HFs have been shown to impact system efficiency and response time, collaborative performance and quality of work, and operator utilization strategy.
26
Chiou EK, Demir M, Buchanan V, Corral CC, Endsley MR, Lematta GJ, Cooke NJ, McNeese NJ. Towards Human-Robot Teaming: Tradeoffs of Explanation-Based Communication Strategies in a Virtual Search and Rescue Task. Int J Soc Robot 2021. [DOI: 10.1007/s12369-021-00834-1]
27
Kohn SC, de Visser EJ, Wiese E, Lee YC, Shaw TH. Measurement of Trust in Automation: A Narrative Review and Reference Guide. Front Psychol 2021; 12:604977. [PMID: 34737716 PMCID: PMC8562383 DOI: 10.3389/fpsyg.2021.604977]
Abstract
With the rise of automated and autonomous agents, research examining Trust in Automation (TiA) has attracted considerable attention over the last few decades. Trust is a rich and complex construct which has sparked a multitude of measures and approaches to study and understand it. This comprehensive narrative review addresses known methods that have been used to capture TiA. We examined measurements deployed in existing empirical works, categorized those measures into self-report, behavioral, and physiological indices, and examined them within the context of an existing model of trust. The resulting work provides a reference guide for researchers, providing a list of available TiA measurement methods along with the model-derived constructs that they capture including judgments of trustworthiness, trust attitudes, and trusting behaviors. The article concludes with recommendations on how to improve the current state of TiA measurement.
Affiliation(s)
- Ewart J de Visser
- Warfighter Effectiveness Research Center, United States Air Force Academy, Colorado Springs, CO, United States
- Eva Wiese
- George Mason University, Fairfax, VA, United States
- Yi-Ching Lee
- George Mason University, Fairfax, VA, United States
- Tyler H Shaw
- George Mason University, Fairfax, VA, United States
28
Zhang Y, Ma J, Pan C, Chang R. Effects of automation trust in drivers' visual distraction during automation. PLoS One 2021; 16:e0257201. [PMID: 34520500 PMCID: PMC8439472 DOI: 10.1371/journal.pone.0257201]
Abstract
With ongoing improvements in vehicle automation, research on automation trust has attracted considerable attention. To explore the effects of automation trust on drivers' visual distraction, we designed a three-factor 2 (trust type: high trust group, low trust group) × 2 (video entertainment: variety-show videos, news videos) × 3 (measurement stage: 1-3) experiment. Forty-eight drivers were recruited in Dalian, China. Using a driving simulator, we measured each driver's performance with detection-response tasks (DRT); their eye movements were recorded, and an automation-trust scale was used to divide participants into high and low trust groups. The results show that (1) drivers in the high trust group had lower mental workload and paid more attention to visual non-driving-related tasks, and (2) video entertainment also influenced distraction behavior: variety-show videos attracted more attention than news videos. The findings of the present study indicate that drivers with high automation trust are more likely to engage in non-driving-related visual tasks.
Affiliation(s)
- Yijing Zhang
- School of Psychology, Liaoning Normal University, Dalian, China
- Jinfei Ma
- School of Psychology, Liaoning Normal University, Dalian, China
- Chunyang Pan
- School of Psychology, Liaoning Normal University, Dalian, China
- Ruosong Chang
- School of Psychology, Liaoning Normal University, Dalian, China
29
Hopko SK, Mehta RK. Neural Correlates of Trust in Automation: Considerations and Generalizability Between Technology Domains. Front Neuroergon 2021; 2:731327. [PMID: 38235218 PMCID: PMC10790920 DOI: 10.3389/fnrgo.2021.731327]
Abstract
Investigations into physiological and neurological correlates of trust have increased in popularity, driven by the need for a continuous measure of trust, including for trust-sensitive or adaptive systems, for measuring trustworthiness or pain points of a technology, and for human-in-the-loop cyber intrusion detection. Understanding the limitations and generalizability of physiological responses across technology domains is important, as the usefulness and relevance of results is shaped by fundamental characteristics of those domains, their corresponding use cases, and the socially acceptable behaviors of the technologies. Although this line of work has grown in popularity, understanding of the neural correlates of trust remains limited, with the vast majority of current investigations conducted in cyber or decision-aid technologies; the relevance of these correlates as a deployable measure for other domains, and the robustness of the measures to varying use cases, is therefore unknown. This manuscript discusses the current state of knowledge on trust perceptions, factors that influence trust, and the extent to which corresponding neural correlates of trust generalize between domains.
Affiliation(s)
- Sarah K. Hopko
- Neuroergonomics Lab, Department of Industrial and Systems Engineering, Texas A&M University, College Station, TX, United States