1. Cansev ME, Miller AJ, Brown JD, Beckerle P. Implementing social and affective touch to enhance user experience in human-robot interaction. Front Robot AI 2024;11:1403679. [PMID: 39188572] [PMCID: PMC11345123] [DOI: 10.3389/frobt.2024.1403679]
Abstract
In this paper, we discuss the potential contribution of affective touch to the user experience and robot performance in human-robot interaction, with an in-depth look into upper-limb prosthesis use as a well-suited example. Research on providing haptic feedback in human-robot interaction has worked to relay discriminative information during functional activities of daily living, like grasping a cup of tea. However, this approach neglects to recognize the affective information our bodies give and receive during social activities of daily living, like shaking hands. The discussion covers the emotional dimensions of affective touch and its role in conveying distinct emotions. In this work, we provide a human needs-centered approach to human-robot interaction design and argue for an equal emphasis to be placed on providing affective haptic feedback channels to meet the social tactile needs and interactions of human agents. We suggest incorporating affective touch to enhance user experience when interacting with and through semi-autonomous systems such as prosthetic limbs, particularly in fostering trust. Real-time analysis of trust as a dynamic phenomenon can pave the way towards adaptive shared autonomy strategies and consequently enhance the acceptance of prosthetic limbs. Here we highlight certain feasibility considerations, emphasizing practical designs and multi-sensory approaches for the effective implementation of affective touch interfaces.
Affiliation(s)
- M. Ege Cansev: Chair of Autonomous Systems and Mechatronics, Department of Electrical Engineering, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Alexandra J. Miller: Haptics and Medical Robotics Laboratory, Department of Mechanical Engineering, Johns Hopkins University, Baltimore, MD, United States
- Jeremy D. Brown: Haptics and Medical Robotics Laboratory, Department of Mechanical Engineering, Johns Hopkins University, Baltimore, MD, United States
- Philipp Beckerle: Chair of Autonomous Systems and Mechatronics, Department of Electrical Engineering, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany; Department of Artificial Intelligence in Biomedical Engineering, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
2. Li M, Guo F, Li Z, Ma H, Duffy VG. Interactive effects of users' openness and robot reliability on trust: evidence from psychological intentions, task performance, visual behaviours, and cerebral activations. Ergonomics 2024:1-21. [PMID: 38635303] [DOI: 10.1080/00140139.2024.2343954]
Abstract
Although trust plays a vital role in human-robot interaction, there is currently a dearth of literature examining how users' openness (a personality trait) affects trust during actual interaction. This study investigates the interaction effects of users' openness and robot reliability on trust. We designed a voice-based walking task and collected subjective trust ratings, task metrics, eye-tracking data, and fNIRS signals from users with different levels of openness to unravel the psychological intentions, task performance, visual behaviours, and cerebral activations underlying trust. The results showed significant interaction effects. In the highly reliable condition, users with low openness exhibited lower subjective trust, more fixations, and higher activation of the right temporo-parietal junction (rTPJ) than those with high openness. These results suggest that users with low openness may be more cautious and suspicious of a highly reliable robot, and may allocate more visual attention and neural processing to monitoring and inferring the robot's status, than users with high openness.
Affiliation(s)
- Mingming Li: Department of Industrial Engineering, College of Management Science and Engineering, Anhui University of Technology, Maanshan, China; Department of Industrial Engineering, School of Business Administration, Northeastern University, Shenyang, China
- Fu Guo: Department of Industrial Engineering, School of Business Administration, Northeastern University, Shenyang, China
- Zhixing Li: Department of Industrial Engineering, School of Business Administration, Northeastern University, Shenyang, China
- Haiyang Ma: Department of Industrial Engineering, School of Business Administration, Northeastern University, Shenyang, China
- Vincent G Duffy: School of Industrial Engineering, Purdue University, West Lafayette, IN, USA
3. Deng M, Chen J, Wu Y, Ma S, Li H, Yang Z, Shen Y. Using voice recognition to measure trust during interactions with automated vehicles. Appl Ergon 2024;116:104184. [PMID: 38048717] [DOI: 10.1016/j.apergo.2023.104184]
Abstract
Trust in automated vehicles (AVs) can affect the experience and safety of drivers and passengers. This work investigates whether drivers' speech can be used to measure their trust in an AV. Seventy-five participants were randomly assigned to a high-trust group (an AV with 100% correctness, no crashes, and four system messages with visual-auditory take-over requests (TORs)) or a low-trust group (an AV with 60% correctness, a 40% crash rate, and two system messages with visual-only TORs). Voice interaction tasks were used to collect speech during driving. The results revealed that these settings successfully induced trust and distrust states. The speech features extracted for the two trust groups were used to train a back-propagation neural network, which was evaluated on how accurately it could predict the trust classification. The highest classification accuracy was 90.80%. This study proposes a method for measuring trust in automated vehicles using voice recognition.
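The classification step this abstract describes can be sketched in outline. The snippet below is a hedged illustration, not the study's code: it trains a small back-propagation neural network (scikit-learn's `MLPClassifier`) to separate two trust groups, using synthetic stand-ins for extracted speech features; the feature dimensionality, network size, and data are all illustrative assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic "speech features" (e.g., pitch, energy, jitter stand-ins)
# for the two induced trust states: shifted Gaussians, 8 features each.
n_per_group = 200
high_trust = rng.normal(loc=0.0, scale=1.0, size=(n_per_group, 8))
low_trust = rng.normal(loc=1.0, scale=1.0, size=(n_per_group, 8))
X = np.vstack([high_trust, low_trust])
y = np.array([1] * n_per_group + [0] * n_per_group)  # 1 = high trust

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

# Standardize features, then fit a small back-propagation network.
scaler = StandardScaler().fit(X_train)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(scaler.transform(X_train), y_train)

accuracy = clf.score(scaler.transform(X_test), y_test)
print(f"trust classification accuracy: {accuracy:.2f}")
```

With real speech-derived features the separation, and hence the attainable accuracy, would of course differ from this toy setup.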
Affiliation(s)
- Miaomiao Deng: Department of Psychology, Zhejiang Sci-Tech University, Hangzhou, China
- Jiaqi Chen: Department of Psychology, Zhejiang Sci-Tech University, Hangzhou, China
- Yue Wu: Department of Psychology, Zhejiang Sci-Tech University, Hangzhou, China
- Shu Ma: Department of Psychology, Zhejiang Sci-Tech University, Hangzhou, China
- Hongting Li: Institute of Applied Psychology, College of Education, Zhejiang University of Technology, Hangzhou, China
- Zhen Yang: Department of Psychology, Zhejiang Sci-Tech University, Hangzhou, China
- Yi Shen: Department of Mathematics, Zhejiang Sci-Tech University, Hangzhou, China
4. Kintz JR, Banerjee NT, Zhang JY, Anderson AP, Clark TK. Estimation of Subjectively Reported Trust, Mental Workload, and Situation Awareness Using Unobtrusive Measures. Hum Factors 2023;65:1142-1160. [PMID: 36321727] [DOI: 10.1177/00187208221129371]
Abstract
OBJECTIVE: We use a set of unobtrusive measures to estimate subjectively reported trust, mental workload, and situation awareness (henceforth "TWSA").
BACKGROUND: Subjective questionnaires are commonly used to assess human cognitive states. However, they are obtrusive and usually impractical to administer during operations. Measures derived from actions operators take while working (which we call "embedded measures") have been proposed as an unobtrusive way to obtain TWSA estimates, but they have not been systematically investigated for each of TWSA, which limits their operational utility.
METHODS: Fifteen participants completed twelve trials of spaceflight-relevant tasks while using a simulated autonomous system. Embedded measures of TWSA were obtained during each trial, and participants completed TWSA questionnaires after each trial. Statistical models incorporating the embedded measures were fit with various formulations, interaction effects, and levels of personalization to understand their benefits and improve model accuracy.
RESULTS: The stepwise model-building algorithm usually retained embedded measures, which frequently corresponded to an intuitive increase or decrease in reported TWSA. Embedded measures alone could not accurately capture an operator's cognitive state, but combining them with readily observable task information or information about participants' backgrounds enabled the models to achieve good descriptive fit and accurate prediction of TWSA.
CONCLUSION: Statistical models leveraging embedded measures of TWSA can accurately estimate responses on subjective questionnaires that measure TWSA.
APPLICATION: Our systematic approach to investigating embedded measures and fitting models allows cognitive state estimation without disrupting tasks when administering questionnaires would be impractical.
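As a rough illustration of the modeling approach described here (not the authors' models), the sketch below uses forward stepwise feature selection over synthetic "embedded measures" to fit a linear model of a subjective trust rating; the predictor count, coefficients, and selection settings are assumptions chosen only for demonstration.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.feature_selection import SequentialFeatureSelector

rng = np.random.default_rng(1)
n = 180  # e.g., 15 participants x 12 trials

# Hypothetical predictors: columns 0-2 are informative (two embedded
# measures plus one task variable); columns 3-4 are pure noise that
# the stepwise selector should discard.
X = rng.normal(size=(n, 5))
trust = (0.8 * X[:, 0] - 0.5 * X[:, 1] + 0.3 * X[:, 2]
         + rng.normal(scale=0.3, size=n))

# Forward stepwise selection, a simple stand-in for stepwise modeling.
selector = SequentialFeatureSelector(
    LinearRegression(), n_features_to_select=3, direction="forward"
).fit(X, trust)
selected = sorted(int(i) for i in np.flatnonzero(selector.get_support()))

model = LinearRegression().fit(X[:, selected], trust)
r2 = model.score(X[:, selected], trust)
print("selected predictors:", selected, f"R^2 = {r2:.2f}")
```

The same pattern extends to personalization by adding participant-level dummy variables or interaction terms as candidate predictors.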
Affiliation(s)
- Jacob R Kintz: Smead Department of Aerospace Engineering Sciences, University of Colorado-Boulder, Boulder, Colorado, USA
- Neil T Banerjee: Smead Department of Aerospace Engineering Sciences, University of Colorado-Boulder, Boulder, Colorado, USA
- Johnny Y Zhang: Smead Department of Aerospace Engineering Sciences, University of Colorado-Boulder, Boulder, Colorado, USA
- Allison P Anderson: Smead Department of Aerospace Engineering Sciences, University of Colorado-Boulder, Boulder, Colorado, USA
- Torin K Clark: Smead Department of Aerospace Engineering Sciences, University of Colorado-Boulder, Boulder, Colorado, USA
5. Ehrlich SK, Dean-Leon E, Tacca N, Armleder S, Dimova-Edeleva V, Cheng G. Human-robot collaborative task planning using anticipatory brain responses. PLoS One 2023;18:e0287958. [PMID: 37432954] [DOI: 10.1371/journal.pone.0287958]
Abstract
Human-robot interaction (HRI) describes scenarios in which human and robot work as partners, sharing the same environment or complementing each other on a joint task. HRI is characterized by the need for high adaptability and flexibility of robotic systems toward their human interaction partners. One of the major challenges in HRI is task planning with dynamic subtask assignment, which is particularly difficult when the human's subtask choices are not readily accessible to the robot. In the present work, we explore the feasibility of using electroencephalogram (EEG) based neuro-cognitive measures for online robot learning of dynamic subtask assignment. In a human-subject experiment featuring a joint HRI task with a UR10 robotic manipulator, we demonstrate the presence of EEG measures indicating that a human partner is anticipating a takeover situation, from human to robot or vice versa. We further propose a reinforcement-learning-based algorithm that employs these measures as a neuronal feedback signal from the human to the robot for dynamic learning of subtask assignment. The efficacy of this algorithm is validated in a simulation-based study. The simulation results reveal that even with relatively low decoding accuracies, successful robot learning of subtask assignment is feasible, with around 80% choice accuracy among four subtasks within 17 minutes of collaboration. The simulation results further reveal that scaling to more subtasks is feasible, mainly at the cost of longer robot learning times. These findings demonstrate the usability of EEG-based neuro-cognitive measures to mediate the complex and largely unsolved problem of human-robot collaborative task planning.
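The core idea, learning subtask assignment from a noisy binary neural feedback signal, can be sketched as a simple bandit-style learner. This is a hedged toy model, not the paper's algorithm: the 70% "decoding accuracy", the epsilon-greedy rule, and the single preferred subtask are illustrative assumptions.

```python
import random

random.seed(0)
N_SUBTASKS, DECODING_ACCURACY, PREFERRED = 4, 0.7, 2

def noisy_feedback(choice: int) -> int:
    """Binary approval signal, flipped 30% of the time by the decoder."""
    true_approval = 1 if choice == PREFERRED else 0
    if random.random() < DECODING_ACCURACY:
        return true_approval
    return 1 - true_approval

values = [0.0] * N_SUBTASKS   # running estimate of approval per subtask
counts = [0] * N_SUBTASKS
for t in range(2000):
    if random.random() < 0.1:                    # epsilon-greedy exploration
        choice = random.randrange(N_SUBTASKS)
    else:                                        # exploit current best
        choice = max(range(N_SUBTASKS), key=values.__getitem__)
    counts[choice] += 1
    r = noisy_feedback(choice)
    values[choice] += (r - values[choice]) / counts[choice]  # running mean

best = max(range(N_SUBTASKS), key=values.__getitem__)
print("learned preferred subtask:", best)
```

Even with 30% of feedback signals flipped, the approval estimates separate (roughly 0.7 vs. 0.3), so the learner converges to the human's true preference, which mirrors the paper's finding that low decoding accuracy still permits successful learning.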
Affiliation(s)
- Stefan K Ehrlich: Chair for Cognitive Systems, Department of Electrical Engineering, TUM School of Computation, Information and Technology, Technical University of Munich, Munich, Germany
- Emmanuel Dean-Leon: Department of Electrical Engineering, Automation, Chalmers University of Technology, Göteborg, Sweden
- Nicholas Tacca: Battelle Memorial Institute, Columbus, OH, United States of America
- Simon Armleder: Chair for Cognitive Systems, Department of Electrical Engineering, TUM School of Computation, Information and Technology, Technical University of Munich, Munich, Germany
- Viktorija Dimova-Edeleva: MIRMI - Munich Institute of Robotics and Machine Intelligence, formerly MSRM, Technical University of Munich, Munich, Germany
- Gordon Cheng: Chair for Cognitive Systems, Department of Electrical Engineering, TUM School of Computation, Information and Technology, Technical University of Munich, Munich, Germany; Center of Competence NeuroEngineering, Technical University of Munich, München, Germany
6. Naser MYM, Bhattacharya S. Empowering human-AI teams via Intentional Behavioral Synchrony. Front Neuroergonomics 2023;4:1181827. [PMID: 38234496] [PMCID: PMC10790871] [DOI: 10.3389/fnrgo.2023.1181827]
Abstract
As Artificial Intelligence (AI) proliferates across various sectors such as healthcare, transportation, energy, and military applications, the collaboration between human-AI teams is becoming increasingly critical. Understanding the interrelationships between system elements - humans and AI - is vital to achieving the best outcomes within individual team members' capabilities. This is also crucial in designing better AI algorithms and finding favored scenarios for joint AI-human missions that capitalize on the unique capabilities of both elements. In this conceptual study, we introduce Intentional Behavioral Synchrony (IBS) as a synchronization mechanism between humans and AI to set up a trusting relationship without compromising mission goals. IBS aims to create a sense of similarity between AI decisions and human expectations, drawing on psychological concepts that can be integrated into AI algorithms. We also discuss the potential of using multimodal fusion to set up a feedback loop between the two partners. Our aim with this work is to start a research trend centered on exploring innovative ways of deploying synchrony between teams of non-human members. Our goal is to foster a better sense of collaboration and trust between humans and AI, resulting in more effective joint missions.
Affiliation(s)
- Mohammad Y. M. Naser: The Neuro-Interaction Innovation Lab, Department of Electrical Engineering, Kennesaw State University, Marietta, GA, United States
- Sylvia Bhattacharya: The Neuro-Interaction Innovation Lab, Department of Engineering Technology, Kennesaw State University, Marietta, GA, United States
7. Kaklauskas A, Abraham A, Ubarte I, Kliukas R, Luksaite V, Binkyte-Veliene A, Vetloviene I, Kaklauskiene L. A Review of AI Cloud and Edge Sensors, Methods, and Applications for the Recognition of Emotional, Affective and Physiological States. Sensors (Basel) 2022;22:7824. [PMID: 36298176] [PMCID: PMC9611164] [DOI: 10.3390/s22207824]
Abstract
Detection and recognition of affective, emotional, and physiological states (AFFECT) from captured human signals is a fast-growing area that has been applied across numerous domains. The aim of this research is to review publications on how brain and biometric sensing techniques can be used for AFFECT recognition, consolidate the findings, provide a rationale for the current methods, compare their effectiveness, and quantify how likely they are to address the issues and challenges in the field. As efforts toward the key goals of Society 5.0, Industry 5.0, and human-centered design advance, the recognition of emotional, affective, and physiological states is becoming increasingly important and offers tremendous potential for progress in these and related fields. In this research, a review of brain and biometric sensors, methods, and applications for AFFECT recognition was performed, based on Plutchik's wheel of emotions. Given the immense variety of existing sensors and sensing systems, this study aimed to analyse the sensors available for measuring human AFFECT and to classify them by sensing area and by their efficiency in real implementations. Based on statistical and multiple-criteria analysis across 169 nations, our results reveal a connection between a nation's success, the number of Web of Science articles it publishes, and how frequently its AFFECT-recognition work is cited. The principal conclusions show how this research contributes to the big picture in the field and explore forthcoming research trends.
Affiliation(s)
- Arturas Kaklauskas: Department of Construction Management and Real Estate, Vilnius Gediminas Technical University, Sauletekio Ave. 11, LT-10223 Vilnius, Lithuania
- Ajith Abraham: Machine Intelligence Research Labs, Scientific Network for Innovation and Research Excellence, Auburn, WA 98071, USA
- Ieva Ubarte: Institute of Sustainable Construction, Vilnius Gediminas Technical University, Sauletekio Ave. 11, LT-10223 Vilnius, Lithuania
- Romualdas Kliukas: Department of Applied Mechanics, Vilnius Gediminas Technical University, Sauletekio Ave. 11, LT-10223 Vilnius, Lithuania
- Vaida Luksaite: Department of Construction Management and Real Estate, Vilnius Gediminas Technical University, Sauletekio Ave. 11, LT-10223 Vilnius, Lithuania
- Arune Binkyte-Veliene: Institute of Sustainable Construction, Vilnius Gediminas Technical University, Sauletekio Ave. 11, LT-10223 Vilnius, Lithuania
- Ingrida Vetloviene: Department of Construction Management and Real Estate, Vilnius Gediminas Technical University, Sauletekio Ave. 11, LT-10223 Vilnius, Lithuania
- Loreta Kaklauskiene: Department of Construction Management and Real Estate, Vilnius Gediminas Technical University, Sauletekio Ave. 11, LT-10223 Vilnius, Lithuania
8. Hsieh SJ, Wang AR, Madison A, Tossell C, Visser ED. Adaptive Driving Assistant Model (ADAM) for Advising Drivers of Autonomous Vehicles. ACM Trans Interact Intell Syst 2022. [DOI: 10.1145/3545994]
Abstract
Fully autonomous driving is on the horizon; vehicles with advanced driver assistance systems (ADAS), such as Tesla's Autopilot, are already available to consumers. However, all currently available ADAS applications require a human driver to be alert and ready to take control if needed. Partially automated driving introduces new complexities to human interactions with cars and can even increase collision risk. A better understanding of drivers' trust in automation may help reduce these complexities. Much of the existing research on trust in ADAS has relied on surveys and physiological measures to assess trust and has been conducted in driving simulators. Relatively few studies have used telemetry data from real automated vehicles to assess trust in ADAS. In addition, although some ADAS technologies provide alerts when, for example, the driver's hands are not on the steering wheel, these systems are not personalized to individual drivers. What is needed are adaptive technologies that can help drivers of autonomous vehicles avoid crashes based on multiple real-time data streams. In this paper, we propose an architecture for adaptive autonomous driving assistance. Two layers of multi-sensory fusion models are developed to provide appropriate voice reminders, based on predicted driving status, that increase driving safety. Results suggest that human trust in automation can be quantified and predicted with 80% accuracy from vehicle data, and that adaptive speech-based advice can be provided to drivers with 90-95% accuracy. With more data, these models can be used to evaluate trust in driving-assistance tools, ultimately leading to safer and more appropriate use of these features.
9. Abubshait A, Parenti L, Perez-Osorio J, Wykowska A. Misleading Robot Signals in a Classification Task Induce Cognitive Load as Measured by Theta Synchronization Between Frontal and Temporo-parietal Brain Regions. Front Neuroergonomics 2022;3:838136. [PMID: 38235447] [PMCID: PMC10790903] [DOI: 10.3389/fnrgo.2022.838136]
Abstract
As technological advances progress, we find ourselves in situations where we need to collaborate with artificial agents (e.g., robots, autonomous machines, and virtual agents). For example, autonomous machines will be part of search and rescue missions, space exploration, and decision aids during monitoring tasks (e.g., baggage screening at the airport). Efficient communication in these scenarios is crucial for fluent interaction. While studies have examined the positive and engaging effect of social signals (i.e., gaze communication) on human-robot interaction, little is known about the effects of conflicting robot signals on the human actor's cognitive load. Moreover, it is unclear from a social neuroergonomics perspective how different brain regions synchronize or communicate with one another to deal with the cognitive load induced by conflicting signals in social situations with robots. The present study asked whether neural oscillations that correlate with conflict processing are observed between brain regions when participants view conflicting robot signals. Participants classified different objects based on their color after a robot (i.e., iCub), presented on a screen, simulated handing the object over to them. The robot then cued participants (with a head shift) to the correct or incorrect target location. Since prior work has shown that unexpected cues can interfere with oculomotor planning and induce conflict, we expected that conflicting robot social signals would interfere with the execution of actions. Indeed, we found that conflicting social signals elicited neural correlates of cognitive conflict as measured by mid-brain theta oscillations. More importantly, we found higher coherence values between mid-frontal and posterior occipital electrode locations in the theta-frequency band for incongruent vs. congruent cues, which suggests that theta-band synchronization between these two regions allows communication between cognitive control systems and gaze-related attentional mechanisms. We also found correlations between coherence values and behavioral performance (reaction times), which are moderated by the congruency of the robot signal. In sum, the influence of irrelevant social signals during goal-oriented tasks can be indexed by behavioral, neural oscillation, and brain connectivity patterns. These data provide insights into a new measure of cognitive load, which can also be used to predict human interaction with autonomous machines.
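The coherence measure this abstract relies on can be illustrated with synthetic data. The sketch below is not the study's pipeline: it estimates theta-band (4-8 Hz) coherence between two simulated channels that share a common 6 Hz component, using `scipy.signal.coherence`; the channel roles, sampling rate, and noise level are assumptions.

```python
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(0)
fs = 250.0                      # sampling rate in Hz
t = np.arange(0, 20, 1 / fs)    # 20 s of data

# A shared 6 Hz "theta" driver plus independent noise on each channel
# stands in for a mid-frontal and a posterior occipital electrode.
shared_theta = np.sin(2 * np.pi * 6.0 * t)
frontal = shared_theta + 0.8 * rng.normal(size=t.size)
posterior = shared_theta + 0.8 * rng.normal(size=t.size)

# Magnitude-squared coherence via Welch-averaged cross-spectra.
f, cxy = coherence(frontal, posterior, fs=fs, nperseg=512)
theta = (f >= 4) & (f <= 8)
alpha = (f >= 8) & (f <= 12)
theta_coh = cxy[theta].mean()
print(f"mean theta coherence: {theta_coh:.2f}, "
      f"mean alpha coherence: {cxy[alpha].mean():.2f}")
```

Because only the theta component is shared between the channels, coherence peaks near 6 Hz while the alpha band stays near the noise floor, the same kind of band-specific contrast the study reports for incongruent vs. congruent cues.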
Affiliation(s)
- Abdulaziz Abubshait: Social Cognition in Human Robot Interaction (S4HRI), Italian Institute of Technology, Genova, Italy
- Lorenzo Parenti: Social Cognition in Human Robot Interaction (S4HRI), Italian Institute of Technology, Genova, Italy; Department of Psychology, University of Torino, Turin, Italy
- Jairo Perez-Osorio: Social Cognition in Human Robot Interaction (S4HRI), Italian Institute of Technology, Genova, Italy
- Agnieszka Wykowska: Social Cognition in Human Robot Interaction (S4HRI), Italian Institute of Technology, Genova, Italy
10. Matsumoto S, Washburn A, Riek LD. A Framework to Explore Proximate Human-Robot Coordination. ACM Trans Hum-Robot Interact 2022. [DOI: 10.1145/3526101]
Abstract
Proximate human-robot teaming (pxHRT) is a complex subspace within human-robot interaction. Studies in this space involve a range of equipment and methods, including the ability to sense people and robots precisely. Research in this area draws from a wide variety of other fields, from human-human interaction to control theory, making study design complex, particularly for those outside the field of HRI. In this paper, we introduce a framework that helps researchers consider tradeoffs across various task contexts, platforms, sensors, and analysis methods; metrics frequently used in the field; and common challenges researchers may face. We demonstrate the use of the framework via a case study which employs an autonomous mobile manipulator continuously engaging in shared workspace, handover, and co-manipulation tasks with people, and explores the effect of cognitive workload on pxHRT dynamics. We also demonstrate the utility of the framework in a case study with two groups of researchers new to pxHRT. With this framework, we hope to enable researchers, especially those outside HRI, to more thoroughly consider these complex components within their studies, more easily design experiments, and more fully explore research questions within the space of pxHRT.
11. Bales G, Kong Z. Neurophysiological and Behavioral Differences in Human-Multiagent Tasks: An EEG Network Perspective. ACM Trans Hum-Robot Interact 2022. [DOI: 10.1145/3527928]
Abstract
Effective human-multiagent teams will incorporate the cognitive skills of the human with the autonomous capabilities of the multiagent group to maximize task performance. However, producing a seamless fusion requires a greater understanding of the human's cognitive state as it reacts to uncertainties in both the task environment and agent dynamics. This study examines external behaviors in concert with neurophysiological measures acquired via electroencephalography (EEG) to probe the interactions between cognitive processes, behaviors, and performance in a human-multiagent team task. We show that changes in the α (8-12 Hz) and θ (4-8 Hz) bands of EEG indicate a higher burden on the cognitive resources associated with the visual-spatial reasoning required to estimate a more complex kinematic state of robotic agents. These results are reinforced by complementary behavioral shifts in gaze and pilot inputs. Additionally, higher-performing participants tend to engage more actively in the task by utilizing greater amounts of visual-spatial reasoning. Finally, we show that features based on EEG dynamic-network metrics provide discriminative information that distinguishes gaze behaviors associated with the attention process.
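The per-band EEG measures mentioned above can be computed with standard spectral estimation. The following sketch is a hedged illustration (not the study's analysis): it extracts alpha (8-12 Hz) and theta (4-8 Hz) band power from a synthetic signal using Welch's method; the rhythm amplitudes, noise level, and sampling rate are illustrative.

```python
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(0)
fs = 256.0
t = np.arange(0, 30, 1 / fs)  # 30 s synthetic recording

# Synthetic signal: strong 10 Hz alpha rhythm, weaker 6 Hz theta, noise.
eeg = (1.0 * np.sin(2 * np.pi * 10 * t)
       + 0.4 * np.sin(2 * np.pi * 6 * t)
       + 0.5 * rng.normal(size=t.size))

# Power spectral density via Welch's method.
f, psd = welch(eeg, fs=fs, nperseg=1024)

def band_power(lo: float, hi: float) -> float:
    """Integrate the PSD over [lo, hi) Hz (rectangle rule over bins)."""
    mask = (f >= lo) & (f < hi)
    return float(psd[mask].sum() * (f[1] - f[0]))

alpha_power = band_power(8, 12)
theta_power = band_power(4, 8)
print(f"alpha power: {alpha_power:.3f}, theta power: {theta_power:.3f}")
```

In practice such band powers would be computed per channel and per time window, then fed to the kind of network-metric analysis the abstract describes.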
Affiliation(s)
- Gregory Bales: Department of Mechanical and Aerospace Engineering, University of California, Davis, USA
- Zhaodan Kong: Department of Mechanical and Aerospace Engineering, University of California, Davis, USA
12. Eloy L, Doherty EJ, Spencer CA, Bobko P, Hirshfield L. Using fNIRS to Identify Transparency- and Reliability-Sensitive Markers of Trust Across Multiple Timescales in Collaborative Human-Human-Agent Triads. Front Neuroergonomics 2022;3:838625. [PMID: 38235468] [PMCID: PMC10790910] [DOI: 10.3389/fnrgo.2022.838625]
Abstract
Intelligent agents are rapidly evolving from assistants into teammates as they perform increasingly complex tasks. Successful human-agent teams leverage the computational power and sensory capabilities of automated agents while keeping the human operator's expectations consistent with the agent's ability. This helps prevent over-reliance on and under-utilization of the agent, optimizing its effectiveness. Research at the intersection of human-computer interaction, social psychology, and neuroergonomics has identified trust as a governing factor of human-agent interactions that can be modulated to maintain an appropriate expectation. To achieve this calibration, trust can be monitored continuously and unobtrusively using neurophysiological sensors. While prior studies have demonstrated the potential of functional near-infrared spectroscopy (fNIRS), a lightweight neuroimaging technology, for predicting social, cognitive, and affective states, few have successfully used it to measure complex social constructs like trust in artificial agents, and fewer still have examined the dynamics of hybrid teams of more than one human or one agent. We address this gap by developing a highly collaborative task that requires knowledge sharing within teams of two humans and one agent. Using brain data obtained with fNIRS sensors, we aim to identify brain regions sensitive to changes in agent behavior on long and short timescales. We manipulated agent reliability and transparency while measuring trust, mental demand, team processes, and affect. Transparency and reliability levels are found to significantly affect trust in the agent, while transparency explanations do not impact mental demand. Reducing agent communication is shown to disrupt interpersonal trust and team cohesion, suggesting dynamics similar to those of human-human teams. Contrasts of general linear model analyses identify dorsal medial prefrontal cortex activation specific to assessing the agent's transparency explanations and characterize increases in mental demand signaled by dorsal lateral prefrontal cortex and frontopolar activation. Short-timescale, event-level data are analyzed to show that predicting whether an individual will trust the agent, using fNIRS data from 15 s before their decision, is feasible. In discussing our results, we identify targets and directions for future neuroergonomics research as a step toward building an intelligent trust-modulation system that optimizes human-agent collaborations in real time.
Collapse
Affiliation(s)
- Lucca Eloy, Institute of Cognitive Science, University of Colorado Boulder, Boulder, CO, United States
- Emily J. Doherty, Institute of Cognitive Science, University of Colorado Boulder, Boulder, CO, United States
- Cara A. Spencer, Institute of Cognitive Science, University of Colorado Boulder, Boulder, CO, United States
- Philip Bobko, Department of Management, Gettysburg College, Gettysburg, PA, United States
- Leanne Hirshfield, Institute of Cognitive Science, University of Colorado Boulder, Boulder, CO, United States
13
Thakur SS, Poddar P, Roy RB. Real-time prediction of smoking activity using machine learning based multi-class classification model. MULTIMEDIA TOOLS AND APPLICATIONS 2022; 81:14529-14551. [PMID: 35233178 PMCID: PMC8874745 DOI: 10.1007/s11042-022-12349-6] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/12/2020] [Revised: 08/18/2021] [Accepted: 01/18/2022] [Indexed: 05/29/2023]
Abstract
Smoking cessation efforts can be greatly influenced by providing just-in-time interventions to individuals who are trying to quit smoking. Accurately detecting smoking activity among the confounding activities of daily living (ADLs) monitored by a wearable device is a challenging and intriguing research problem. This study aims to develop a machine learning-based modeling framework to identify smoking activity among confounding ADLs in real time using streaming data from a wrist-worn 6-axis inertial measurement unit (IMU) sensor. A low-cost wrist-wearable device was designed and developed to collect raw sensor data from subjects performing the activities. A sliding-window mechanism was used to process the streaming raw sensor data and extract several time-domain, frequency-domain, and descriptive features. Hyperparameter tuning and feature selection were performed to identify the best hyperparameters and features, respectively. Subsequently, multi-class classification models were developed and validated using in-sample and out-of-sample testing. The developed models obtained predictive accuracy (area under the receiver operating characteristic curve) of up to 98.7% for predicting smoking activity. The findings of this study enable a novel application of wearable devices to accurately detect smoking activity in real time. This can further help healthcare professionals monitor patients who smoke by providing just-in-time interventions to help them quit. The framework can be extended to other preventive healthcare use cases and to the detection of other activities of interest. SUPPLEMENTARY INFORMATION The online version contains supplementary material available at 10.1007/s11042-022-12349-6.
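The sliding-window feature extraction the authors describe can be sketched as follows; the window length, step size, sampling rate, and specific features here are illustrative assumptions rather than the paper's exact configuration.

```python
import numpy as np

def sliding_windows(signal, window_len, step):
    """Split a 1-D sensor stream into overlapping fixed-length windows."""
    return [signal[i:i + window_len]
            for i in range(0, len(signal) - window_len + 1, step)]

def window_features(window):
    """Simple time- and frequency-domain features for one window."""
    spectrum = np.abs(np.fft.rfft(window))
    return {
        "mean": float(np.mean(window)),
        "std": float(np.std(window)),
        "rms": float(np.sqrt(np.mean(np.square(window)))),
        "dominant_freq_bin": int(np.argmax(spectrum[1:]) + 1),  # skip the DC bin
    }

# Example: 2 s of one accelerometer axis from a 6-axis IMU sampled at 50 Hz
fs = 50
t = np.arange(0, 2, 1 / fs)
accel_x = np.sin(2 * np.pi * 3 * t)  # a periodic 3 Hz motion as a toy signal
windows = sliding_windows(accel_x, window_len=fs, step=fs // 2)  # 1 s windows, 50% overlap
feats = [window_features(w) for w in windows]
```

Feature vectors like these, computed per window over the streaming data, would then be passed to the trained multi-class classifier to label each window as smoking or a confounding ADL.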
Affiliation(s)
- Saurabh Singh Thakur, Rajendra Mishra School of Engineering Entrepreneurship, Indian Institute of Technology Kharagpur, Kharagpur, India
- Pradeep Poddar, Department of Metallurgical and Materials Engineering, Indian Institute of Technology Kharagpur, Kharagpur, India
- Ram Babu Roy, Rajendra Mishra School of Engineering Entrepreneurship, Indian Institute of Technology Kharagpur, Kharagpur, India
14
Kohn SC, de Visser EJ, Wiese E, Lee YC, Shaw TH. Measurement of Trust in Automation: A Narrative Review and Reference Guide. Front Psychol 2021; 12:604977. [PMID: 34737716 PMCID: PMC8562383 DOI: 10.3389/fpsyg.2021.604977] [Citation(s) in RCA: 18] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/10/2020] [Accepted: 08/25/2021] [Indexed: 02/05/2023] Open
Abstract
With the rise of automated and autonomous agents, research examining Trust in Automation (TiA) has attracted considerable attention over the last few decades. Trust is a rich and complex construct that has sparked a multitude of measures and approaches to studying and understanding it. This comprehensive narrative review addresses the known methods that have been used to capture TiA. We examined measurements deployed in existing empirical works, categorized those measures into self-report, behavioral, and physiological indices, and examined them within the context of an existing model of trust. The resulting work provides a reference guide for researchers: a list of available TiA measurement methods along with the model-derived constructs that they capture, including judgments of trustworthiness, trust attitudes, and trusting behaviors. The article concludes with recommendations on how to improve the current state of TiA measurement.
Affiliation(s)
- Ewart J de Visser, Warfighter Effectiveness Research Center, United States Air Force Academy, Colorado Springs, CO, United States
- Eva Wiese, George Mason University, Fairfax, VA, United States
- Yi-Ching Lee, George Mason University, Fairfax, VA, United States
- Tyler H Shaw, George Mason University, Fairfax, VA, United States
15
Hopko SK, Mehta RK. Neural Correlates of Trust in Automation: Considerations and Generalizability Between Technology Domains. FRONTIERS IN NEUROERGONOMICS 2021; 2:731327. [PMID: 38235218 PMCID: PMC10790920 DOI: 10.3389/fnrgo.2021.731327] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 06/26/2021] [Accepted: 08/10/2021] [Indexed: 01/19/2024]
Abstract
Investigations into physiological or neurological correlates of trust have increased in popularity due to the need for a continuous measure of trust, including for trust-sensitive or adaptive systems, measurements of the trustworthiness or pain points of technologies, and human-in-the-loop cyber intrusion detection. Understanding the limitations and generalizability of physiological responses between technology domains is important, as the usefulness and relevance of results are shaped by fundamental characteristics of the technology domains, their corresponding use cases, and the socially acceptable behaviors of the technologies. While investigations into the neural correlates of trust in automation have grown in popularity, understanding of these correlates remains limited, and the vast majority of current investigations concern cyber or decision-aid technologies. Thus, the relevance of these correlates as a deployable measure for other domains, and the robustness of the measures to varying use cases, is unknown. As such, this manuscript discusses the current state of knowledge on trust perceptions, the factors that influence trust, and the extent to which the corresponding neural correlates of trust generalize between domains.
Affiliation(s)
- Sarah K. Hopko, Neuroergonomics Lab, Department of Industrial and Systems Engineering, Texas A&M University, College Station, TX, United States
16
Hakim H, Bettinger JA, Chambers CT, Driedger SM, Dubé E, Gavaruzzi T, Giguere AMC, Kavanagh É, Leask J, MacDonald SE, Orji R, Parent E, Paquette JS, Roberge J, Sander B, Scherer AM, Tremblay-Breault M, Wilson K, Reinharz D, Witteman HO. A Web Application About Herd Immunity Using Personalized Avatars: Development Study. J Med Internet Res 2020; 22:e20113. [PMID: 33124994 PMCID: PMC7665952 DOI: 10.2196/20113] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/11/2020] [Revised: 07/03/2020] [Accepted: 07/26/2020] [Indexed: 01/30/2023] Open
Abstract
BACKGROUND Herd immunity or community immunity refers to the reduced risk of infection among susceptible individuals in a population through the presence and proximity of immune individuals. Recent studies suggest that improving the understanding of community immunity may increase intentions to get vaccinated. OBJECTIVE This study aims to design a web application about community immunity and optimize it based on users' cognitive and emotional responses. METHODS Our multidisciplinary team developed a web application about community immunity to communicate epidemiological evidence in a personalized way. In our application, people build their own community by creating an avatar representing themselves and 8 other avatars representing people around them, for example, their family or coworkers. The application integrates these avatars in a 2-min visualization showing how different parameters (eg, vaccine coverage and contact within communities) influence community immunity. We predefined communication goals, created prototype visualizations, and tested four iterative versions of our visualization in a university-based human-computer interaction laboratory and in community-based settings (a cafeteria, two shopping malls, and a public library). Data included psychophysiological measures (eye tracking, galvanic skin response, facial emotion recognition, and electroencephalogram) to assess participants' cognitive and affective responses to the visualization, and verbal feedback to assess their interpretations of the visualization's content and messaging. RESULTS Among 110 participants across all four cycles, 68 (61.8%) were women, 38 (34.5%) were men, and 4 (3.6%) did not report gender, with a mean age of 38 (SD 17) years. More than half (65/110, 59.1%) of participants reported having a university-level education. Iterative changes across the cycles included adding the ability for users to create their own avatars, adding specific signals about who was represented by the different avatars, using color and movement to indicate protection or lack of protection from infectious disease, and changing terminology to ensure clarity for people with varying educational backgrounds. Overall, we observed three generalizable findings. First, visualization does indeed appear to be a promising medium for conveying what community immunity is and how it works. Second, by involving multiple users in an iterative design process, it is possible to create a short and simple visualization that clearly conveys a complex topic. Finally, evaluating users' emotional responses during the design process, in addition to their cognitive responses, offers insights that help inform the final design of an intervention. CONCLUSIONS Visualization with personalized avatars may help people understand their individual roles in population health. Our application showed promise as a method of communicating the relationship between individual behavior and community health. The next steps will include assessing the effects of the application on risk perception, knowledge, and vaccination intentions in a randomized controlled trial. This study offers a potential road map for designing health communication materials for complex topics such as community immunity.
Affiliation(s)
- Hina Hakim, Department of Family and Emergency Medicine, Laval University, Quebec City, QC, Canada
- Julie A Bettinger, Vaccine Evaluation Center, BC Children's Hospital, University of British Columbia, Vancouver, BC, Canada
- Christine T Chambers, Department of Psychology and Neuroscience and Pediatrics, Dalhousie University, Halifax, NS, Canada
- S Michelle Driedger, Department of Community Health Sciences, University of Manitoba, Winnipeg, MB, Canada
- Eve Dubé, Institut national de santé publique du Québec, Quebec City, QC, Canada
- Teresa Gavaruzzi, Department of Developmental Psychology and Socialization, University of Padova, Padova, Italy
- Anik M C Giguere, Department of Family and Emergency Medicine, Laval University, Quebec City, QC, Canada
- Éric Kavanagh, École de design, Édifice La Fabrique, Laval University, Quebec City, QC, Canada
- Julie Leask, Faculty of Medicine and Health, Susan Wakil School of Nursing and Midwifery, University of Sydney, Sydney, Australia
- Rita Orji, Faculty of Computer Science, Dalhousie University, Halifax, NS, Canada
- Elizabeth Parent, Department of Family and Emergency Medicine, Laval University, Quebec City, QC, Canada
- Jacynthe Roberge, École de design, Édifice La Fabrique, Laval University, Quebec City, QC, Canada
- Beate Sander, University Health Network, Toronto General Hospital, Eaton Building, Toronto, ON, Canada
- Aaron M Scherer, Department of Internal Medicine, University of Iowa, Iowa, IA, United States
- Kumanan Wilson, Department of Medicine, Bruyere Research Institute and Ottawa Hospital Research Institute, University of Ottawa, Ottawa, ON, Canada
- Daniel Reinharz, Department of Social and Preventive Medicine, Laval University, Quebec City, QC, Canada
- Holly O Witteman, Department of Family and Emergency Medicine, Laval University, Quebec City, QC, Canada
17
Azevedo-Sa H, Jayaraman SK, Yang XJ, Robert LP, Tilbury DM. Context-Adaptive Management of Drivers’ Trust in Automated Vehicles. IEEE Robot Autom Lett 2020. [DOI: 10.1109/lra.2020.3025736] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022]
18
Azevedo-Sa H, Jayaraman SK, Esterwood CT, Yang XJ, Robert LP, Tilbury DM. Real-Time Estimation of Drivers’ Trust in Automated Driving Systems. Int J Soc Robot 2020. [DOI: 10.1007/s12369-020-00694-1] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022]
Abstract
Trust miscalibration issues, represented by undertrust and overtrust, hinder the interaction between drivers and self-driving vehicles. A modern challenge for automotive engineers is to avoid these trust miscalibration issues by developing techniques for measuring drivers' trust in the automated driving system during real-time operation. One possible approach for measuring trust is to model its dynamics and subsequently apply classical state estimation methods. This paper proposes a framework for modeling the dynamics of drivers' trust in automated driving systems and for estimating these varying trust levels. The estimation method integrates sensed behaviors from the driver through a Kalman filter-based approach. The sensed behaviors include eye-tracking signals, the usage time of the system, and drivers' performance on a non-driving-related task. We conducted a study (n = 80) with a simulated SAE Level 3 automated driving system and analyzed the factors that impacted drivers' trust in the system. Data from the user study were also used to identify the trust model parameters. Results show that the proposed approach successfully computed trust estimates over successive interactions between the driver and the automated driving system. These results encourage the use of strategies for modeling and estimating trust in automated driving systems. Such a trust measurement technique paves a path for the design of trust-aware automated driving systems capable of changing their behaviors to control drivers' trust levels and mitigate both undertrust and overtrust.
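A Kalman filter-based trust estimator of the kind proposed can be reduced to a scalar predict/update cycle. The sketch below is a minimal illustration under assumed dynamics and noise parameters; it is not the authors' identified model, and the measurement z stands in for a fused behavioral signal (gaze, usage time, task performance).

```python
def kalman_step(x, P, z, a=0.95, q=0.01, r=0.1):
    """One predict/update cycle for a scalar trust state.

    x: current trust estimate, P: estimate variance,
    z: new behavioral measurement mapped to a trust proxy,
    a: state transition (trust decays slowly without new evidence),
    q: process noise variance, r: measurement noise variance.
    All of a, q, r are assumed values for illustration.
    """
    # Predict step: propagate the state and its uncertainty
    x_pred = a * x
    P_pred = a * a * P + q
    # Update step: blend prediction and measurement by the Kalman gain
    K = P_pred / (P_pred + r)
    x_new = x_pred + K * (z - x_pred)
    P_new = (1 - K) * P_pred
    return x_new, P_new

# Feed a short sequence of noisy trust measurements from successive interactions
x, P = 0.5, 1.0  # uncertain initial trust estimate
for z in [0.7, 0.72, 0.68, 0.71]:
    x, P = kalman_step(x, P, z)
```

After a few interactions the estimate converges toward the measured trust level while its variance shrinks, which is what allows a trust-aware system to act on the estimate with known confidence.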
19
Abstract
PURPOSE OF REVIEW To assess the state-of-the-art in research on trust in robots and to examine whether recent methodological advances can aid in the development of trustworthy robots. RECENT FINDINGS While traditional work in trustworthy robotics has focused on studying the antecedents and consequences of trust in robots, recent work has gravitated towards developing strategies for robots to actively gain, calibrate, and maintain the human user's trust. Among these works, there is an emphasis on endowing robotic agents with reasoning capabilities (e.g., via probabilistic modeling). SUMMARY The state-of-the-art in trust research provides roboticists with a large trove of tools for developing trustworthy robots. However, challenges remain in real-world human-robot interaction (HRI) settings: there are outstanding issues in trust measurement, in guarantees on robot behavior (e.g., with respect to user privacy), and in handling rich multidimensional data. We examine how recent advances in psychometrics, trustworthy systems, robot ethics, and deep learning can provide resolution to each of these issues. In conclusion, we are of the opinion that these methodological advances could pave the way for the creation of truly autonomous, trustworthy social robots.
Affiliation(s)
- Bing Cai Kok, Dept. of Computer Science, School of Computing, National University of Singapore, 13 Computing Drive, Singapore, 119077 Singapore
- Harold Soh, Dept. of Computer Science, School of Computing, National University of Singapore, 13 Computing Drive, Singapore, 119077 Singapore
20
Measuring Trust with Psychophysiological Signals: A Systematic Mapping Study of Approaches Used. MULTIMODAL TECHNOLOGIES AND INTERACTION 2020. [DOI: 10.3390/mti4030063] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022] Open
Abstract
Trust plays an essential role in all human relationships. However, measuring trust remains a challenge for researchers exploring psychophysiological signals. This article therefore aims to systematically map the approaches used in studies assessing trust with psychophysiological signals. In particular, we examine the number and frequency of combined psychophysiological signals, the primary outcomes of previous studies, and the types and most commonly used data analysis techniques for analyzing psychophysiological data to infer a trust state. For this purpose, we employ a systematic mapping review method, through which we analyze 51 carefully selected articles (studies focused on trust using psychophysiology). Three significant findings are as follows: (1) Psychophysiological signals from EEG (electroencephalogram) and ECG (electrocardiogram), which monitor the central and peripheral nervous systems, are the most frequently used to measure trust, while audio and EOG (electro-oculography) signals are the least commonly used. (2) The maximum number of psychophysiological signals combined in any study so far is three, and most of the combined signals monitor the peripheral nervous system and are low in spatial resolution. (3) Regarding outcomes, only one tool has been proposed for assessing trust, and it addresses an interpersonal context rather than trust in a technology context. Moreover, no stable and accurate ensemble models have been developed to assess trust; all prior attempts led to unstable but fairly accurate models or did not satisfy the conditions for combining several algorithms into an ensemble. In conclusion, the extent to which trust can be assessed using psychophysiological measures during real-time user interactions remains unknown, as several issues, such as the lack of a stable and accurate ensemble trust classifier model, require urgent research attention. Although this topic is relatively new, much work has been done; however, more remains to be done to provide clarity on this topic.
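The ensemble gap the review identifies concerns combining signal-specific classifiers into one trust decision. A minimal majority-vote combiner, with toy stand-ins for EEG-, ECG-, and GSR-based models, might look like the following sketch; the signal names and thresholds are purely illustrative assumptions.

```python
from collections import Counter

def majority_vote(labels):
    """Combine per-classifier labels for one sample by majority vote.

    Ties go to the label that appears first in the input list.
    """
    return Counter(labels).most_common(1)[0][0]

def threshold_model(signal_name, threshold=0.5):
    """Toy stand-in for a classifier trained on one signal's features."""
    def predict(sample):
        return "trust" if sample[signal_name] > threshold else "distrust"
    return predict

# One base classifier per psychophysiological signal
classifiers = [threshold_model(s) for s in ("eeg", "ecg", "gsr")]

sample = {"eeg": 0.8, "ecg": 0.6, "gsr": 0.3}
label = majority_vote([clf(sample) for clf in classifiers])
```

A stable ensemble in the review's sense would replace the toy threshold models with trained classifiers and likely weight their votes; the combiner structure, however, stays this simple.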
21
Lima BN, Balducci P, Passos RP, Novelli C, Fileni CHP, Vieira F, Camargo LBD, Vilela Junior GDB. Artificial intelligence based on fuzzy logic for the analysis of human movement in healthy people: a systematic review. Artif Intell Rev 2020. [DOI: 10.1007/s10462-020-09885-8] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022]
22
Neural Correlates of Variations in Human Trust in Human-like Machines during Non-reciprocal Interactions. Sci Rep 2019; 9:9975. [PMID: 31292474 PMCID: PMC6620272 DOI: 10.1038/s41598-019-46098-8] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/16/2018] [Accepted: 05/31/2019] [Indexed: 11/08/2022] Open
Abstract
As intelligent machines have become widespread in various applications, it has become increasingly important to operate them efficiently. Monitoring human operators' trust is required for productive interactions between humans and machines. However, neurocognitive understanding of human trust in machines is limited. In this study, we analysed human behaviours and electroencephalograms (EEGs) obtained during non-reciprocal human-machine interactions. Human subjects supervised their partner agents by monitoring and intervening in the agents' actions in this non-reciprocal interaction, which reflected practical uses of autonomous or smart systems. Furthermore, we diversified the agents with external and internal human-like factors to understand the influence of the anthropomorphism of machine agents. Agents' internal human-likeness was manifested in the way they conducted a task and affected subjects' trust levels. From EEG analysis, we could define brain responses correlated with increases and decreases in trust. The effects of trust variations on brain responses were more pronounced with agents who were externally closer to humans and who elicited greater trust from the subjects. This research provides a theoretical basis for modelling human neural activities that indicate trust in partner machines and can thereby contribute to the design of machines that promote efficient interactions with humans.