1. Li M, Guo F, Li Z, Ma H, Duffy VG. Interactive effects of users' openness and robot reliability on trust: evidence from psychological intentions, task performance, visual behaviours, and cerebral activations. Ergonomics 2024:1-21. [PMID: 38635303] [DOI: 10.1080/00140139.2024.2343954]
Abstract
Although trust plays a vital role in human-robot interaction, there is currently a dearth of literature examining how users' openness (a personality trait) affects trust in actual interaction. This study investigates the interaction effects of users' openness and robot reliability on trust. We designed a voice-based walking task and collected subjective trust ratings, task metrics, eye-tracking data, and fNIRS signals from users with different levels of openness, to unravel the psychological intentions, task performance, visual behaviours, and cerebral activations underlying trust. The results showed significant interaction effects. In the highly reliable condition, users with low openness exhibited lower subjective trust, more fixations, and higher activation of the right temporoparietal junction (rTPJ) than those with high openness. These results suggest that users with low openness may be more cautious and suspicious about a highly reliable robot, and may allocate more visual attention and neural processing to monitoring and inferring robot status, than users with high openness.
Affiliation(s)
- Mingming Li
  - Department of Industrial Engineering, College of Management Science and Engineering, Anhui University of Technology, Maanshan, China
  - Department of Industrial Engineering, School of Business Administration, Northeastern University, Shenyang, China
- Fu Guo
  - Department of Industrial Engineering, School of Business Administration, Northeastern University, Shenyang, China
- Zhixing Li
  - Department of Industrial Engineering, School of Business Administration, Northeastern University, Shenyang, China
- Haiyang Ma
  - Department of Industrial Engineering, School of Business Administration, Northeastern University, Shenyang, China
- Vincent G Duffy
  - School of Industrial Engineering, Purdue University, West Lafayette, IN, USA
2. Carter OBJ, Loft S, Visser TAW. Meaningful Communication but not Superficial Anthropomorphism Facilitates Human-Automation Trust Calibration: The Human-Automation Trust Expectation Model (HATEM). Human Factors 2023:187208231218156. [PMID: 38041565] [DOI: 10.1177/00187208231218156]
Abstract
OBJECTIVE: To demonstrate that anthropomorphism needs to communicate contextually useful information in order to increase user confidence and accurately calibrate human trust in automation.
BACKGROUND: Anthropomorphism is believed to improve human-automation trust, but supporting evidence remains equivocal. We test the Human-Automation Trust Expectation Model (HATEM), which predicts that improvements to trust calibration and confidence in accepted advice arising from anthropomorphism will be weak unless anthropomorphism aids naturalistic communication of contextually useful information that facilitates prediction of automation failures.
METHOD: Ninety-eight undergraduates used a submarine periscope simulator to classify ships, aided by the Ship Automated Modelling (SAM) system, which was 50% reliable. A between-subjects 2 × 3 design compared SAM appearance (anthropomorphic avatar vs. camera eye) and voice inflection (monotone vs. meaningless vs. meaningful), with the meaningful inflections communicating contextually useful information about the certainty or uncertainty of the automated advice.
RESULTS: The avatar appearance was rated as more anthropomorphic than the camera eye, and both the meaningless and meaningful inflections were rated as more anthropomorphic than monotone. However, for subjective trust, trust calibration, and confidence in accepting SAM advice, there was no evidence that anthropomorphic appearance had any impact, while there was decisive evidence that meaningful inflections yielded better outcomes on these trust measures than monotone and meaningless inflections.
CONCLUSION: Anthropomorphism had negligible impact on human-automation trust unless its execution enhanced communication of relevant information that allowed participants to better calibrate their expectations of automation performance.
APPLICATION: Designers using anthropomorphism to calibrate trust need to consider what contextually useful information will be communicated via anthropomorphic features.
Affiliation(s)
- Shayne Loft
  - The University of Western Australia, Australia
3. Roesler E. Anthropomorphic framing and failure comprehensibility influence different facets of trust towards industrial robots. Front Robot AI 2023;10:1235017. [PMID: 37744186] [PMCID: PMC10512549] [DOI: 10.3389/frobt.2023.1235017] (Open Access)
Abstract
Introduction: Utilizing anthropomorphic features in industrial robots is a prevalent strategy aimed at enhancing their perception as collaborative team partners and promoting increased tolerance for failures. Nevertheless, recent research highlights potential drawbacks of this approach, and it is still largely unknown how anthropomorphic framing influences the dynamics of trust, especially in the context of different failure experiences. Method: The current laboratory study aimed to close this research gap. Fifty-one participants interacted with a robot that was either anthropomorphically or technically framed. In addition, each robot produced either a comprehensible or an incomprehensible failure. Results: The analysis revealed no differences in general trust towards the technically and anthropomorphically framed robots. Nevertheless, the anthropomorphic robot was perceived as more transparent than the technical robot. Furthermore, the robot's purpose was perceived more positively after a comprehensible failure. Discussion: The higher perceived transparency of anthropomorphically framed robots might be a double-edged sword, as actual transparency did not differ between the two conditions. In general, the results show that it is essential to consider trust multi-dimensionally: a uni-dimensional approach focused on performance might overshadow important facets of trust such as transparency and purpose.
Affiliation(s)
- Eileen Roesler
  - Department of Psychology, George Mason University, Fairfax, VA, United States

4. Pilacinski A, Pinto A, Oliveira S, Araújo E, Carvalho C, Silva PA, Matias R, Menezes P, Sousa S. The robot eyes don't have it. The presence of eyes on collaborative robots yields marginally higher user trust but lower performance. Heliyon 2023;9:e18164. [PMID: 37520993] [PMCID: PMC10382291] [DOI: 10.1016/j.heliyon.2023.e18164] (Open Access)
Abstract
Eye gaze is a prominent feature of human social life, but little is known about whether fitting eyes on machines makes humans trust them more. In this study we compared subjective and objective markers of human trust when collaborating with eyed and non-eyed robots of the same type. We used virtual reality scenes in which we manipulated the distance to the robot and the presence of eyes on its display during simple collaboration tasks. We found that while collaboration with eyed cobots resulted in slightly higher subjective trust ratings, objective markers such as pupil size and task completion time indicated that it was in fact less comfortable to collaborate with eyed robots. These findings are in line with recent suggestions that anthropomorphism may actually be a detrimental feature of collaborative robots, and they illustrate the complex relationship between objective and subjective markers of trust when humans collaborate with artificial agents.
Affiliation(s)
- Artur Pilacinski
  - Medical Faculty, Ruhr University Bochum, Bochum, Germany
  - CINEICC - Center for Research in Neuropsychology and Cognitive Behavioral Intervention, University of Coimbra, Coimbra, Portugal
  - Faculty of Psychology and Educational Sciences, University of Coimbra, Coimbra, Portugal
- Ana Pinto
  - Faculty of Psychology and Educational Sciences, University of Coimbra, Coimbra, Portugal
  - Faculty of Sciences and Technology, University of Coimbra, Coimbra, Portugal
  - CeBER - Centre for Business and Economics Research, University of Coimbra, Coimbra, Portugal
- Soraia Oliveira
  - Faculty of Psychology and Educational Sciences, University of Coimbra, Coimbra, Portugal
- Eduardo Araújo
  - Faculty of Sciences and Technology, University of Coimbra, Coimbra, Portugal
  - Department of Informatics Engineering, University of Coimbra, Coimbra, Portugal
- Carla Carvalho
  - CINEICC - Center for Research in Neuropsychology and Cognitive Behavioral Intervention, University of Coimbra, Coimbra, Portugal
  - Faculty of Psychology and Educational Sciences, University of Coimbra, Coimbra, Portugal
- Paula Alexandra Silva
  - Faculty of Sciences and Technology, University of Coimbra, Coimbra, Portugal
  - Department of Informatics Engineering, University of Coimbra, Coimbra, Portugal
  - CISUC - Centre for Informatics and Systems of the University of Coimbra, Coimbra, Portugal
- Ricardo Matias
  - Faculty of Sciences and Technology, University of Coimbra, Coimbra, Portugal
  - Electrical and Computer Engineering Department, University of Coimbra, Coimbra, Portugal
- Paulo Menezes
  - Faculty of Sciences and Technology, University of Coimbra, Coimbra, Portugal
  - Electrical and Computer Engineering Department, University of Coimbra, Coimbra, Portugal
- Sonia Sousa
  - University of Trás-os-Montes e Alto Douro, Vila Real, Portugal
  - School of Digital Technologies, Tallinn University, Tallinn, Estonia
5. Onnasch L, Schweidler P, Schmidt H. The potential of robot eyes as predictive cues in HRI - an eye-tracking study. Front Robot AI 2023;10:1178433. [PMID: 37575370] [PMCID: PMC10416260] [DOI: 10.3389/frobt.2023.1178433] (Open Access)
Abstract
Robots currently provide only a limited amount of information about their future movements to human collaborators. In human interaction, communication through gaze can be helpful by intuitively directing attention to specific targets. Whether and how this mechanism could benefit interaction with robots, and what a design of predictive robot eyes should look like, are not well understood. In a between-subjects design, four different types of eyes were therefore compared with regard to their attention-directing potential: a pair of arrows, human eyes, and two anthropomorphic robot eye designs. For this purpose, 39 subjects performed a novel, screen-based gaze-cueing task in the laboratory. Participants' attention was measured using manual responses and eye-tracking, and additional subjective measures captured how the tested cues were perceived. All eye models were overall easy to read and able to direct participants' attention. The anthropomorphic robot eyes were most efficient at shifting participants' attention, as revealed by faster manual and saccadic reaction times. In addition, a robot equipped with anthropomorphic eyes was perceived as more competent. Abstract anthropomorphic robot eyes therefore seem to trigger a reflexive reallocation of attention, which points to a social and automatic processing of such artificial stimuli.
6. Hostettler D, Mayer S, Hildebrand C. Human-Like Movements of Industrial Robots Positively Impact Observer Perception. Int J Soc Robot 2022;15:1-19. [PMID: 36570426] [PMCID: PMC9763088] [DOI: 10.1007/s12369-022-00954-2]
Abstract
The number of industrial robots and collaborative robots on manufacturing shopfloors has been rapidly increasing over the past decades. However, research on how industrial robots are perceived and what attributions people make toward them is scarce, as related work has predominantly explored the effect of robot appearance, movement patterns, or the human-likeness of humanoid robots. The current research examines attributions and perceptions of industrial robots (specifically, articulated collaborative robots) and how the type of movement of such robots impacts human perception and preference. We developed and empirically tested a novel model of robot movement behavior and demonstrate how altering the movement behavior of a robotic arm leads to differing attributions of the robot's human-likeness. These findings have important implications for emerging research on the impact of robot movement on worker perception, preferences, and behavior in industrial settings.
Affiliation(s)
- Damian Hostettler
  - Institute of Computer Science, University of St. Gallen, Rosenbergstrasse 30, 9000 St. Gallen, Switzerland
- Simon Mayer
  - Institute of Computer Science, University of St. Gallen, Rosenbergstrasse 30, 9000 St. Gallen, Switzerland
- Christian Hildebrand
  - Institute of Behavioral Science and Technology, University of St. Gallen, Torstrasse 25, 9000 St. Gallen, Switzerland
7. Onnasch L, Kostadinova E, Schweidler P. Humans Can't Resist Robot Eyes - Reflexive Cueing With Pseudo-Social Stimuli. Front Robot AI 2022;9:848295. [PMID: 37274454] [PMCID: PMC10236938] [DOI: 10.3389/frobt.2022.848295] (Open Access)
Abstract
Joint attention is a key mechanism by which humans coordinate their social behavior. Whether and how this mechanism can benefit the interaction with pseudo-social partners such as robots is not well understood. To investigate the potential use of robot eyes as pseudo-social cues that ease attentional shifts, we conducted an online study using a modified spatial cueing paradigm. The cue was either a non-social stimulus (an arrow), a pseudo-social stimulus (two versions of an abstract robot eye), or a social stimulus (photographed human eyes), presented either paired (e.g., two eyes) or single (e.g., one eye). The latter was varied to separate two assumed triggers of joint attention: the social nature of the stimulus, and the additional spatial information that is conveyed only by paired stimuli. Results support the assumption that pseudo-social stimuli, in our case abstract robot eyes, have the potential to facilitate human-robot interaction, as they trigger reflexive cueing. To our surprise, actual social cues did not evoke reflexive shifts in attention. We suspect that the robot eyes elicited the desired effects because they were human-like enough while at the same time being much easier to perceive than human eyes, owing to a design with strong contrasts and clean lines. Moreover, the results indicate that for reflexive cueing it does not seem to matter whether the stimulus is presented single or paired. This might be a first indicator that joint attention depends on the stimulus's social nature or familiarity rather than its spatial expressiveness. Overall, the study suggests that using paired abstract robot eyes might be a good design practice for fostering a positive perception of a robot and facilitating joint attention as a precursor for coordinated behavior.
Affiliation(s)
- Linda Onnasch
  - Engineering Psychology, Department of Psychology, Humboldt-Universität zu Berlin, Berlin, Germany
- Eleonora Kostadinova
  - Engineering Psychology, Department of Psychology, Humboldt-Universität zu Berlin, Berlin, Germany

8. An Animation Character Robot That Increases Sales. Applied Sciences (Basel) 2022. [DOI: 10.3390/app12031724]
Abstract
Performing the role of a shopping assistant is one promising application for social robots. Robot clerks can provide a richer experience for customers and increase sales; however, the scant opportunities for interaction with customers in real shopping environments are a typical drawback. We address this problem by developing a networked salesclerk system that consists of a virtual agent acting through the customer's smartphone and a physical agent performing as a robot salesclerk in an actual store environment. In cooperation with Production I.G. Inc., an animation production company, we adopted a character named Tachikoma from "Ghost in the Shell: Stand Alone Complex" (commonly known as the S.A.C. series) when designing the appearance and features of both agents. We conducted a field test to investigate how our system contributed to the sales of Ghost in the Shell anime-themed products, and the results showed the advantages of our system for increasing sales.