1
Li M, Guo F, Li Z, Ma H, Duffy VG. Interactive effects of users' openness and robot reliability on trust: evidence from psychological intentions, task performance, visual behaviours, and cerebral activations. Ergonomics 2024;67:1612-1632. [PMID: 38635303] [DOI: 10.1080/00140139.2024.2343954]
Abstract
Although trust plays a vital role in human-robot interaction, there is a dearth of literature examining the effect of users' openness personality on trust during actual interaction. This study aims to investigate the interaction effects of users' openness and robot reliability on trust. We designed a voice-based walking task and collected subjective trust ratings, task metrics, eye-tracking data, and functional near-infrared spectroscopy (fNIRS) signals from users differing in openness to unravel the psychological intentions, task performance, visual behaviours, and cerebral activations underlying trust. The results showed significant interaction effects. Users with low openness exhibited lower subjective trust, more fixations, and higher activation of the right temporoparietal junction (rTPJ) in the highly reliable condition than those with high openness. The results suggested that users with low openness might be more cautious and suspicious about the highly reliable robot and allocate more visual attention and neural processing to monitoring and inferring robot status than users with high openness.
Affiliation(s)
- Mingming Li
- Department of Industrial Engineering, College of Management Science and Engineering, Anhui University of Technology, Maanshan, China
- Department of Industrial Engineering, School of Business Administration, Northeastern University, Shenyang, China
- Fu Guo
- Department of Industrial Engineering, School of Business Administration, Northeastern University, Shenyang, China
- Zhixing Li
- Department of Industrial Engineering, School of Business Administration, Northeastern University, Shenyang, China
- Haiyang Ma
- Department of Industrial Engineering, School of Business Administration, Northeastern University, Shenyang, China
- Vincent G Duffy
- School of Industrial Engineering, Purdue University, West Lafayette, IN, USA
2
Driggs J, Vangsness L. Judgments of Difficulty (JODs) While Observing an Automated System Support the Media Equation and Unique Agent Hypotheses. Human Factors 2024:187208241273379. [PMID: 39155398] [DOI: 10.1177/00187208241273379]
Abstract
OBJECTIVE: We investigated how people used cues to make Judgments of Difficulty (JODs) while observing automation perform a task and when performing this task themselves.
BACKGROUND: Task difficulty is a factor affecting trust in automation; however, no research has explored how individuals make JODs when watching automation, or whether these judgments are similar to or different from those made while watching humans. It is also unclear how cue use when observing automation differs as a function of experience.
METHOD: The study involved a visual search task. Some participants performed the task first, then watched automation complete it. Others watched and then performed, and a third group alternated between performing and watching. After each trial, participants made a JOD by indicating whether the task was easier or harder than before. Task difficulty changed randomly every five trials.
RESULTS: A Bayesian regression suggested that cue use while observing automation is both similar to and different from cue use while observing humans. For central cues, support for the Unique Agent Hypothesis (UAH) was bounded by experience: those who performed the task first underweighted central cues when making JODs, relative to their counterparts in a previous study involving humans. For peripheral cues, support for the Media Equation Hypothesis (MEH) was unequivocal, and participants weighted cues similarly across observation sources.
CONCLUSION: People weighted cues both similarly to and differently from how they did when watching humans, supporting the Media Equation and Unique Agent Hypotheses.
APPLICATION: This study adds to a growing understanding of judgments in human-human and human-automation interactions.
3
Hopko SK, Mehta RK. Trust in Shared-Space Collaborative Robots: Shedding Light on the Human Brain. Human Factors 2024;66:490-509. [PMID: 35707995] [DOI: 10.1177/00187208221109039]
Abstract
BACKGROUND: Industry 4.0 is currently underway, allowing for improved manufacturing processes that leverage the collective advantages of human and robot agents. Consideration of trust can improve quality and safety in such shared-space human-robot collaboration environments.
OBJECTIVE: The use of physiological responses to monitor and understand trust is currently limited by a lack of knowledge of physiological indicators of trust. This study examines neural responses to trust within a shared-workcell human-robot collaboration task and discusses the use of granular and multimodal perspectives to study trust.
METHODS: Sixteen sex-balanced participants completed a surface finishing task in collaboration with a UR10 collaborative robot. All participants underwent robot reliability conditions and robot assistance level conditions. Brain activation and connectivity using functional near-infrared spectroscopy (fNIRS), subjective responses, and performance were measured throughout the study.
RESULTS: Significantly increased neural activation was observed in response to faulty robot behavior within the medial and right dorsolateral prefrontal cortex (PFC). A similar trend was observed for the anterior PFC, primary motor cortex, and primary visual cortex. Faulty robot behavior also resulted in reduced functional connectivity strengths throughout the brain.
DISCUSSION: These findings implicate regions in the prefrontal cortex, along with specific connectivity patterns, as signifiers of distrusting conditions. The neural response may be indicative of how trust is influenced, measured, and manifested in human-robot collaboration that requires active teaming.
APPLICATION: Neuroergonomic response metrics can reveal new perspectives on trust in automation that subjective responses alone are not able to provide.
4
Ehrlich SK, Dean-Leon E, Tacca N, Armleder S, Dimova-Edeleva V, Cheng G. Human-robot collaborative task planning using anticipatory brain responses. PLoS One 2023;18:e0287958. [PMID: 37432954] [DOI: 10.1371/journal.pone.0287958]
Abstract
Human-robot interaction (HRI) describes scenarios in which human and robot work as partners, sharing the same environment or complementing each other on a joint task. HRI is characterized by the need for high adaptability and flexibility of robotic systems toward their human interaction partners. One of the major challenges in HRI is task planning with dynamic subtask assignment, which is particularly challenging when the human's subtask choices are not readily accessible to the robot. In the present work, we explore the feasibility of using electroencephalogram (EEG)-based neuro-cognitive measures for online robot learning of dynamic subtask assignment. To this end, we demonstrate in an experimental human subject study, featuring a joint HRI task with a UR10 robotic manipulator, the presence of EEG measures indicative of a human partner anticipating a takeover situation from human to robot or vice versa. We further propose a reinforcement learning-based algorithm that employs these measures as a neuronal feedback signal from the human to the robot for dynamic learning of subtask assignment. The efficacy of this algorithm is validated in a simulation-based study. The simulation results reveal that even with relatively low decoding accuracies, successful robot learning of subtask assignment is feasible, with around 80% choice accuracy among four subtasks within 17 minutes of collaboration. The simulation results further reveal that scaling to more subtasks is feasible, mainly at the cost of longer robot learning times. These findings demonstrate the usability of EEG-based neuro-cognitive measures for mediating the complex and largely unsolved problem of human-robot collaborative task planning.
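The abstract's central claim, that a robot can learn subtask assignment from an imperfectly decoded feedback signal, can be illustrated with a small simulation. This is a generic sketch, not the authors' algorithm: the epsilon-greedy value update, the 0.8 decoding accuracy, the four subtasks, and the trial count are illustrative assumptions standing in for the EEG-derived feedback loop.

```python
import numpy as np

def simulate_learning(n_subtasks=4, true_best=2, decoding_acc=0.8,
                      n_trials=2000, epsilon=0.2, alpha=0.05, seed=0):
    """Epsilon-greedy value learning from a noisy binary feedback signal.

    The 'decoded feedback' is 1 when the chosen subtask matches the human's
    (hypothetical) preference and 0 otherwise, but each feedback bit is
    flipped with probability (1 - decoding_acc), mimicking decoder errors.
    """
    rng = np.random.default_rng(seed)
    q = np.zeros(n_subtasks)  # estimated value of each subtask choice
    for _ in range(n_trials):
        # Explore with probability epsilon, otherwise exploit the estimate.
        if rng.random() < epsilon:
            choice = int(rng.integers(n_subtasks))
        else:
            choice = int(np.argmax(q))
        feedback = float(choice == true_best)
        if rng.random() > decoding_acc:  # decoder error: flip the bit
            feedback = 1.0 - feedback
        q[choice] += alpha * (feedback - q[choice])  # incremental update
    return q

q = simulate_learning()
print(q)
```

With these assumed parameters the correct subtask should end up with the highest estimated value even though one feedback bit in five is flipped, mirroring the paper's point that modest decoding accuracy can still support successful learning.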
Affiliation(s)
- Stefan K Ehrlich
- Chair for Cognitive Systems, Department of Electrical Engineering, TUM School of Computation, Information and Technology, Technical University of Munich, Munich, Germany
- Emmanuel Dean-Leon
- Department of Electrical Engineering, Automation, Chalmers University of Technology, Göteborg, Sweden
- Nicholas Tacca
- Battelle Memorial Institute, Columbus, OH, United States of America
- Simon Armleder
- Chair for Cognitive Systems, Department of Electrical Engineering, TUM School of Computation, Information and Technology, Technical University of Munich, Munich, Germany
- Viktorija Dimova-Edeleva
- MIRMI - Munich Institute of Robotics and Machine Intelligence, formerly MSRM, Technical University of Munich, Munich, Germany
- Gordon Cheng
- Chair for Cognitive Systems, Department of Electrical Engineering, TUM School of Computation, Information and Technology, Technical University of Munich, Munich, Germany
- Center of Competence NeuroEngineering, Technical University of Munich, München, Germany
5
Perceptual confusion makes a significant contribution to the conflict effect: Insight from the flanker task and the majority function task. Current Psychology 2023. [DOI: 10.1007/s12144-023-04318-5]
6
Somon B, Campagne A, Delorme A, Berberian B. Brain mechanisms of automated conflict avoidance simulator supervision. Psychophysiology 2023;60:e14171. [PMID: 36106765] [PMCID: PMC10078105] [DOI: 10.1111/psyp.14171]
Abstract
Supervision of automated systems is a ubiquitous aspect of most of our everyday activities and is even more necessary in high-risk industries (aeronautics, power plants, etc.). Performance monitoring related to our own error making has been widely studied. Here we propose to assess the neurofunctional correlates of system error detection. We used an aviation-based conflict avoidance simulator with a 40% error rate and recorded the electroencephalographic activity of participants while they supervised it. Neural dynamics related to the supervision of the system's correct and erroneous responses were assessed in the time and time-frequency domains to characterize the dynamics of the error detection process in this environment. Two levels of perceptual difficulty were introduced to assess their effect on the evoked activity related to the detection of system errors. Using a robust cluster-based permutation test, we observed lower widespread evoked activity in the time domain for the detection of errors compared to correct responses, as well as higher theta-band activity in the time-frequency domain dissociating the detection of erroneous from that of correct system responses. We also showed a significant effect of difficulty on time-domain evoked activity, and of the phase of the experiment on spectral activity: a decrease in early theta and alpha at the end of the experiment, as well as interaction effects in the theta and alpha frequency bands. These results improve our understanding of the brain dynamics of performance monitoring in closer-to-real-life settings and are a promising avenue for the detection of error-related components in ecological and dynamic tasks.
Affiliation(s)
- Bertille Somon
- Département d'Ingénierie Cognitive et Neurosciences Appliquées, Office National d'Etudes et de Recherches Aérospatiales, Salon-de-Provence, France
- LPNC, Univ. Grenoble Alpes, Univ. Savoie Mont Blanc, CNRS, Grenoble, France
- Aurélie Campagne
- LPNC, Univ. Grenoble Alpes, Univ. Savoie Mont Blanc, CNRS, Grenoble, France
- Arnaud Delorme
- Swartz Center for Computational Neuroscience, University of California San Diego, La Jolla, California, USA
- Centre de recherche Cerveau et Cognition, Université de Toulouse, Toulouse, France
- Bruno Berberian
- Département d'Ingénierie Cognitive et Neurosciences Appliquées, Office National d'Etudes et de Recherches Aérospatiales, Salon-de-Provence, France
7
Hopko SK, Mehta RK, Pagilla PR. Physiological and perceptual consequences of trust in collaborative robots: An empirical investigation of human and robot factors. Applied Ergonomics 2023;106:103863. [PMID: 36055035] [DOI: 10.1016/j.apergo.2022.103863]
Abstract
Measuring trust is an important element of effective human-robot collaborations (HRCs). It has largely relied on subjective responses and thus cannot be readily used for adapting robots in shared operations, particularly in shared-space manufacturing applications. Additionally, whether trust in such HRCs differs under altered operator cognitive states or with sex remains unknown. This study examined the impacts of operator cognitive fatigue, robot reliability, and operator sex on trust symptoms in collaborative robots through both objective measures (i.e., performance, heart rate variability) and subjective measures (i.e., surveys). Male and female participants were recruited to perform a metal surface polishing task in partnership with a collaborative robot (UR10), in which they underwent reliability conditions (reliable, unreliable) and cognitive fatigue conditions (fatigued, not fatigued). Compared to the reliable conditions, unreliable robot manipulations resulted in reduced perceived trust, an increase in both sympathetic and parasympathetic activity, and an operator-induced reduction in task efficiency and accuracy, but not precision. Cognitive fatigue was shown to correlate with higher fatigue scores and reduced task efficiency, more severely impacting females. The results highlight key interplays between operator fatigue state, sex, and robot reliability on both subjective and objective responses of trust. These findings provide a strong foundation for future investigations to better understand the relationship between human factors and trust in HRC, and aid in developing more diagnostic and deployable measures of trust.
Affiliation(s)
- Sarah K Hopko
- The Industrial and Systems Engineering Department, Texas A&M University, College Station, TX, USA
- Ranjana K Mehta
- The Industrial and Systems Engineering Department, Texas A&M University, College Station, TX, USA; The Mechanical Engineering Department, Texas A&M University, College Station, TX, USA
- Prabhakar R Pagilla
- The Mechanical Engineering Department, Texas A&M University, College Station, TX, USA
8
Huang J, Choo S, Pugh ZH, Nam CS. Evaluating Effective Connectivity of Trust in Human-Automation Interaction: A Dynamic Causal Modeling (DCM) Study. Human Factors 2022;64:1051-1069. [PMID: 33657902] [DOI: 10.1177/0018720820987443]
Abstract
OBJECTIVE: Using dynamic causal modeling (DCM), we examined how credibility and reliability affected the way brain regions exert causal influence over each other (effective connectivity; EC) in the context of trust in automation.
BACKGROUND: Multiple brain regions of the central executive network (CEN) and default mode network (DMN) have been implicated in trust judgment. However, the neural correlates of trust judgment are still relatively unexplored in terms of the directed information flow between brain regions.
METHOD: Sixteen participants observed the performance of four computer algorithms, which differed in credibility and reliability, on the system monitoring subtask of the Air Force Multi-Attribute Task Battery (AF-MATB). Using six brain regions of the CEN and DMN commonly identified as activated in human trust, a total of 30 (forward, backward, and lateral) connection models were developed. Bayesian model averaging (BMA) was used to quantify the connectivity strength among the brain regions.
RESULTS: Relative to the high trust condition, low trust showed the unique presence of specific connections, greater connectivity strengths from the prefrontal cortex, and greater network complexity. The high trust condition showed no backward connections.
CONCLUSION: Results indicated that trust and distrust can be two distinctive neural processes in human-automation interaction, with distrust being a more complex network than trust, possibly due to increased cognitive load.
APPLICATION: The causal architecture of distributed brain regions inferred using DCM can help not only in the design of a balanced human-automation interface but also in the proper use of automation in real-life situations.
Affiliation(s)
- Jiali Huang
- North Carolina State University, Raleigh, NC, USA
- Chang S Nam
- North Carolina State University, Raleigh, NC, USA
9
Abubshait A, Parenti L, Perez-Osorio J, Wykowska A. Misleading Robot Signals in a Classification Task Induce Cognitive Load as Measured by Theta Synchronization Between Frontal and Temporo-parietal Brain Regions. Frontiers in Neuroergonomics 2022;3:838136. [PMID: 38235447] [PMCID: PMC10790903] [DOI: 10.3389/fnrgo.2022.838136]
Abstract
As technological advances progress, we increasingly find ourselves in situations where we need to collaborate with artificial agents (e.g., robots, autonomous machines, and virtual agents). For example, autonomous machines will be part of search and rescue missions, space exploration, and decision aids during monitoring tasks (e.g., baggage screening at the airport). Efficient communication in these scenarios is crucial for fluent interaction. While studies have examined the positive and engaging effect of social signals (i.e., gaze communication) on human-robot interaction, little is known about the effects of conflicting robot signals on the human actor's cognitive load. Moreover, it is unclear from a social neuroergonomics perspective how different brain regions synchronize or communicate with one another to deal with the cognitive load induced by conflicting signals in social situations with robots. The present study asked whether neural oscillations that correlate with conflict processing are observed between brain regions when participants view conflicting robot signals. Participants classified different objects based on their color after a robot (i.e., iCub), presented on a screen, simulated handing the object over to them. The robot then cued participants (with a head shift) to the correct or incorrect target location. Since prior work has shown that unexpected cues can interfere with oculomotor planning and induce conflict, we expected conflicting robot social signals to interfere with the execution of actions. Indeed, we found that conflicting social signals elicited neural correlates of cognitive conflict as measured by mid-frontal theta oscillations. More importantly, we found higher coherence values between mid-frontal and posterior occipital electrode locations in the theta-frequency band for incongruent vs. congruent cues, which suggests that theta-band synchronization between these two regions allows for communication between cognitive control systems and gaze-related attentional mechanisms. We also found correlations between coherence values and behavioral performance (reaction times), which were moderated by the congruency of the robot signal. In sum, the influence of irrelevant social signals during goal-oriented tasks can be indexed by behavioral, neural oscillation, and brain connectivity patterns. These data provide insights about a new measure of cognitive load, which can also be used in predicting human interaction with autonomous machines.
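The measure this study relies on, magnitude-squared coherence between two electrode sites averaged over the theta band, can be sketched with a minimal numpy-only Welch-style estimator. The sampling rate, segment length, and the 6 Hz synthetic test signal below are illustrative assumptions, not the study's recording parameters.

```python
import numpy as np

def band_coherence(x, y, fs, band, nperseg=256):
    """Magnitude-squared coherence between two signals, averaged over a band.

    Welch-style estimate: split both signals into 50%-overlapping,
    Hann-windowed segments, average the cross- and auto-spectra across
    segments, then form Cxy(f) = |Sxy|^2 / (Sxx * Syy).
    """
    step = nperseg // 2
    win = np.hanning(nperseg)
    sxx = syy = sxy = 0
    n_seg = 0
    for start in range(0, len(x) - nperseg + 1, step):
        fx = np.fft.rfft(win * x[start:start + nperseg])
        fy = np.fft.rfft(win * y[start:start + nperseg])
        sxx = sxx + (fx * np.conj(fx)).real
        syy = syy + (fy * np.conj(fy)).real
        sxy = sxy + fx * np.conj(fy)
        n_seg += 1
    freqs = np.fft.rfftfreq(nperseg, d=1.0 / fs)
    cxy = np.abs(sxy / n_seg) ** 2 / ((sxx / n_seg) * (syy / n_seg))
    lo, hi = band
    return float(cxy[(freqs >= lo) & (freqs <= hi)].mean())

# Two noisy channels sharing a 6 Hz (theta-band) component should cohere
# strongly in the 4-8 Hz band; two independent noise channels should not.
rng = np.random.default_rng(1)
fs = 250
t = np.arange(fs * 20) / fs                      # 20 s at 250 Hz
theta = np.sin(2 * np.pi * 6 * t)
ch_a = theta + 0.5 * rng.standard_normal(t.size)
ch_b = theta + 0.5 * rng.standard_normal(t.size)
shared = band_coherence(ch_a, ch_b, fs, (4, 8))
independent = band_coherence(rng.standard_normal(t.size),
                             rng.standard_normal(t.size), fs, (4, 8))
print(shared, independent)
```

Averaging spectra across segments before forming the ratio is essential: per-segment coherence is identically 1, so the segment averaging is what lets genuinely shared rhythms stand out from noise.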
Affiliation(s)
- Abdulaziz Abubshait
- Social Cognition in Human Robot Interaction (S4HRI), Italian Institute of Technology, Genova, Italy
- Lorenzo Parenti
- Social Cognition in Human Robot Interaction (S4HRI), Italian Institute of Technology, Genova, Italy
- Department of Psychology, University of Torino, Turin, Italy
- Jairo Perez-Osorio
- Social Cognition in Human Robot Interaction (S4HRI), Italian Institute of Technology, Genova, Italy
- Agnieszka Wykowska
- Social Cognition in Human Robot Interaction (S4HRI), Italian Institute of Technology, Genova, Italy
10
Kohn SC, de Visser EJ, Wiese E, Lee YC, Shaw TH. Measurement of Trust in Automation: A Narrative Review and Reference Guide. Front Psychol 2021;12:604977. [PMID: 34737716] [PMCID: PMC8562383] [DOI: 10.3389/fpsyg.2021.604977]
Abstract
With the rise of automated and autonomous agents, research examining Trust in Automation (TiA) has attracted considerable attention over the last few decades. Trust is a rich and complex construct which has sparked a multitude of measures and approaches to study and understand it. This comprehensive narrative review addresses known methods that have been used to capture TiA. We examined measurements deployed in existing empirical works, categorized those measures into self-report, behavioral, and physiological indices, and examined them within the context of an existing model of trust. The resulting work provides a reference guide for researchers, providing a list of available TiA measurement methods along with the model-derived constructs that they capture including judgments of trustworthiness, trust attitudes, and trusting behaviors. The article concludes with recommendations on how to improve the current state of TiA measurement.
Affiliation(s)
- Ewart J de Visser
- Warfighter Effectiveness Research Center, United States Air Force Academy, Colorado Springs, CO, United States
- Eva Wiese
- George Mason University, Fairfax, VA, United States
- Yi-Ching Lee
- George Mason University, Fairfax, VA, United States
- Tyler H Shaw
- George Mason University, Fairfax, VA, United States
11
Hopko SK, Mehta RK. Neural Correlates of Trust in Automation: Considerations and Generalizability Between Technology Domains. Frontiers in Neuroergonomics 2021;2:731327. [PMID: 38235218] [PMCID: PMC10790920] [DOI: 10.3389/fnrgo.2021.731327]
Abstract
Investigations into physiological and neurological correlates of trust have increased in popularity due to the need for a continuous measure of trust, including for trust-sensitive or adaptive systems, measurement of the trustworthiness or pain points of a technology, and human-in-the-loop cyber intrusion detection. Understanding the limitations and generalizability of physiological responses between technology domains is important, as the usefulness and relevance of results are shaped by fundamental characteristics of the technology domains, corresponding use cases, and socially acceptable behaviors of the technologies. While investigations into the neural correlates of trust in automation have grown in popularity, understanding remains limited, and the vast majority of current investigations are in cyber or decision-aid technologies. Thus, the relevance of these correlates as a deployable measure for other domains, and the robustness of the measures to varying use cases, is unknown. As such, this manuscript discusses the current state of knowledge on trust perceptions, factors that influence trust, and the extent to which corresponding neural correlates of trust generalize between domains.
Affiliation(s)
- Sarah K. Hopko
- Neuroergonomics Lab, Department of Industrial and Systems Engineering, Texas A&M University, College Station, TX, United States
12
Allan K, Oren N, Hutchison J, Martin D. In search of a Goldilocks zone for credible AI. Sci Rep 2021;11:13687. [PMID: 34211064] [PMCID: PMC8249604] [DOI: 10.1038/s41598-021-93109-8]
Abstract
If artificial intelligence (AI) is to help solve individual, societal and global problems, humans should neither underestimate nor overestimate its trustworthiness. Situated in between these two extremes is an ideal 'Goldilocks' zone of credibility. But what will keep trust in this zone? We hypothesise that this role ultimately falls to the social cognition mechanisms which adaptively regulate conformity between humans. This novel hypothesis predicts that human-like functional biases in conformity should occur during interactions with AI. We examined multiple tests of this prediction using a collaborative remembering paradigm, where participants viewed household scenes for 30 s vs. 2 min, then saw 2-alternative forced-choice decisions about scene content originating from either AI or human sources. We manipulated the credibility of different sources (Experiment 1) and, from a single source, the estimated likelihood (Experiment 2) and objective accuracy (Experiment 3) of specific decisions. As predicted, each manipulation produced functional biases for AI sources mirroring those found for human sources. Participants conformed more to higher-credibility sources, and to higher-likelihood or more objectively accurate decisions, becoming increasingly sensitive to source accuracy when their own capability was reduced. These findings support the hypothesised role of social cognition in regulating AI's influence, raising important implications and new directions for research on human-AI interaction.
Affiliation(s)
- Kevin Allan
- School of Psychology, University of Aberdeen, Aberdeen, AB24 2UB, UK
- Nir Oren
- School of Natural and Computing Sciences, University of Aberdeen, Aberdeen, AB24 2UB, UK
- Jacqui Hutchison
- School of Psychology, University of Aberdeen, Aberdeen, AB24 2UB, UK
- Douglas Martin
- School of Psychology, University of Aberdeen, Aberdeen, AB24 2UB, UK
13
Beatty PJ, Buzzell GA, Roberts DM, Voloshyna Y, McDonald CG. Subthreshold error corrections predict adaptive post-error compensations. Psychophysiology 2021;58:e13803. [PMID: 33709470] [DOI: 10.1111/psyp.13803]
Abstract
Relatively little is known about the relation between subthreshold error corrections and post-error behavioral compensations. The present study utilized lateralized beta power, which has been shown to index response preparation, to examine subthreshold error corrections in a task known to produce response conflict, the Simon task. We found that even when an overt correction is not made, greater activation of the corrective response, indexed by beta suppression ipsilateral to the initial responding hand, predicted post-error speeding, and enhanced post-error accuracy at the single-trial level. This provides support for the notion that response conflict associated with errors can be adaptive, and suggests that subthreshold corrections should be taken into account to fully understand error-monitoring processes. Furthermore, we expand on previous findings that demonstrate that post-error slowing and post-error accuracy can be dissociated, as well as findings that suggest that frontal midline theta oscillations and the error-related negativity (ERN) are dissociable neurocognitive processes.
Affiliation(s)
- Paul J Beatty
- Department of Psychology, George Mason University, Fairfax, VA, USA
- George A Buzzell
- Department of Psychology, Florida International University, Miami, FL, USA
- Daniel M Roberts
- Department of Psychology, George Mason University, Fairfax, VA, USA
- Craig G McDonald
- Department of Psychology, George Mason University, Fairfax, VA, USA
14
Fairclough SH, Lotte F. Grand Challenges in Neurotechnology and System Neuroergonomics. Frontiers in Neuroergonomics 2020;1:602504. [PMID: 38234311] [PMCID: PMC10790858] [DOI: 10.3389/fnrgo.2020.602504]
Affiliation(s)
- Fabien Lotte
- Inria Bordeaux Sud-Ouest, Talence, France
- LaBRI (CNRS/Univ. Bordeaux/Bordeaux INP), Bordeaux, France
16
Sanders N, Choo S, Kim N, Nam CS. Neural Correlates of Trust During an Automated System Monitoring Task: Preliminary Results of an Effective Connectivity Study. Proceedings of the Human Factors and Ergonomics Society Annual Meeting 2019. [DOI: 10.1177/1071181319631409]
Abstract
As autonomous systems become more prevalent and their inner workings become more opaque, we increasingly rely on trust to guide our interactions with them, especially in complex or rapidly evolving situations. When our expectations of what automation is capable of do not match reality, the consequences can be sub-optimal, to say the least. The degree to which our trust reflects actual capability is known as trust calibration. One approach to studying this is neuroergonomics. By understanding the neural mechanisms involved in human-machine trust, we can design systems that promote trust calibration and possibly measure trust in real time. Our study used the Multi-Attribute Task Battery (MATB) to investigate neural correlates of trust in automation. We used EEG to record the brain activity of participants as they watched four algorithms of varying reliability perform the SYSMON (system monitoring) subtask of the MATB. Subjects reported their subjective trust level after each round. We subsequently conducted an effective connectivity analysis and identified the cingulate cortex as a node, and its asymmetry ratio and incoming information flow as possible indices of trust calibration. We hope our study will inform future work involving decision-making and real-time cognitive state detection.
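The abstract names an "asymmetry ratio" as a candidate index but does not define it. One common EEG convention is a normalized left/right band-power contrast, (P_right - P_left) / (P_right + P_left); the sketch below illustrates that convention on synthetic channels. The sampling rate, the alpha band, and the two channels are assumptions for illustration, not the study's actual analysis.

```python
import numpy as np

def band_power(signal, fs, band):
    """Average periodogram power of `signal` within a frequency band."""
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / signal.size
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return float(psd[mask].mean())

def asymmetry_ratio(left, right, fs, band=(8.0, 13.0)):
    """Normalized left/right band-power asymmetry, bounded in [-1, 1]."""
    p_l = band_power(left, fs, band)
    p_r = band_power(right, fs, band)
    return (p_r - p_l) / (p_r + p_l)

rng = np.random.default_rng(7)
fs = 250
t = np.arange(fs * 10) / fs
# Hypothetical channels: the right channel carries a stronger 10 Hz rhythm,
# so the ratio should come out clearly positive.
left = 0.5 * np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(t.size)
right = 1.5 * np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(t.size)
ratio = asymmetry_ratio(left, right, fs)
print(ratio)
```

The normalization keeps the index bounded and unit-free, which is one reason ratios of this form are popular as candidate real-time indices.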
Affiliation(s)
- Nathan Sanders
- Department of Industrial & Systems Engineering, North Carolina State University, Raleigh, NC, USA
- Sanghyun Choo
- Department of Industrial & Systems Engineering, North Carolina State University, Raleigh, NC, USA
- Nayoung Kim
- Department of Industrial & Systems Engineering, North Carolina State University, Raleigh, NC, USA
- Chang S. Nam
- Department of Industrial & Systems Engineering, North Carolina State University, Raleigh, NC, USA
17
Choo S, Sanders N, Kim N, Kim W, Nam CS. Detecting Human Trust Calibration in Automation: A Deep Learning Approach. Proceedings of the Human Factors and Ergonomics Society Annual Meeting 2019. [DOI: 10.1177/1071181319631298]
Affiliation(s)
- Sanghyun Choo
- Department of Industrial & Systems Engineering, North Carolina State University, Raleigh, NC, USA
- Nathan Sanders
- Department of Industrial & Systems Engineering, North Carolina State University, Raleigh, NC, USA
- Nayoung Kim
- Department of Industrial & Systems Engineering, North Carolina State University, Raleigh, NC, USA
- Wonjoon Kim
- Department of Industrial & Systems Engineering, North Carolina State University, Raleigh, NC, USA
- Chang S. Nam
- Department of Industrial & Systems Engineering, North Carolina State University, Raleigh, NC, USA