1. Li M, Guo F, Li Z, Ma H, Duffy VG. Interactive effects of users' openness and robot reliability on trust: evidence from psychological intentions, task performance, visual behaviours, and cerebral activations. Ergonomics 2024:1-21. [PMID: 38635303] [DOI: 10.1080/00140139.2024.2343954]
Abstract
Although trust plays a vital role in human-robot interaction, there is currently a dearth of literature examining the effect of users' openness, a personality trait, on trust in actual interaction. This study aims to investigate the interaction effects of users' openness and robot reliability on trust. We designed a voice-based walking task and collected subjective trust ratings, task metrics, eye-tracking data, and fNIRS signals from users with different levels of openness to unravel the psychological intentions, task performance, visual behaviours, and cerebral activations underlying trust. The results showed significant interaction effects. Users with low openness exhibited lower subjective trust, more fixations, and higher activation of the right temporoparietal junction (rTPJ) in the highly reliable condition than those with high openness. The results suggested that users with low openness might be more cautious and suspicious about the highly reliable robot and allocate more visual attention and neural processing to monitor and infer robot status than users with high openness.
Affiliation(s)
- Mingming Li: Department of Industrial Engineering, College of Management Science and Engineering, Anhui University of Technology, Maanshan, China; Department of Industrial Engineering, School of Business Administration, Northeastern University, Shenyang, China
- Fu Guo: Department of Industrial Engineering, School of Business Administration, Northeastern University, Shenyang, China
- Zhixing Li: Department of Industrial Engineering, School of Business Administration, Northeastern University, Shenyang, China
- Haiyang Ma: Department of Industrial Engineering, School of Business Administration, Northeastern University, Shenyang, China
- Vincent G Duffy: School of Industrial Engineering, Purdue University, West Lafayette, IN, USA
2. Hancock PA, Lee JD, Senders JW. Attribution Errors by People and Intelligent Machines. Human Factors 2023; 65:1293-1305. [PMID: 34387108] [DOI: 10.1177/00187208211036323]
Abstract
OBJECTIVE: To explore the ramifications of attribution errors (AEs), initially in the context of vehicle collisions, and then to extend this understanding into the broader and diverse realms of all forms of human-machine interaction. BACKGROUND: This work focuses upon a particular topic that John Senders was examining at the time of his death. He was using the lens of attribution, and its associated errors, to further understand and explore dyadic forms of driver collision. METHOD: We evaluated the utility of the set of Senders' final observations on conjoint AE in two-vehicle collisions. We extended this evaluation to errors of attribution generally, as applicable to all human-human, human-technology, and prospectively technology-technology interactions. RESULTS: As with Senders' many other contributions, we find evident value in this perspective on how humans react to each other and how they react to emerging forms of technology, such as autonomous systems. We illustrate this value through contemporary examples and prospective analyses. APPLICATIONS: The comprehension and mitigation of AEs can help improve all interactions between people, between intelligent machines, and between humans and the machines they work with.
3. Hancock PA, Kessler TT, Kaplan AD, Stowers K, Brill JC, Billings DR, Schaefer KE, Szalma JL. How and why humans trust: A meta-analysis and elaborated model. Front Psychol 2023; 14:1081086. [PMID: 37051611] [PMCID: PMC10083508] [DOI: 10.3389/fpsyg.2023.1081086]
Abstract
Trust exerts an impact on essentially all forms of social relationships. It affects individuals in deciding whether and how they will or will not interact with other people. Equally, trust also influences the stance of entire nations in their mutual dealings. In consequence, understanding the factors that influence the decision to trust, or not to trust, is crucial to the full spectrum of social dealings. Here, we report the most comprehensive extant meta-analysis of experimental findings relating to such human-to-human trust. Our analysis provides a quantitative evaluation of the factors that influence interpersonal trust and the initial propensity to trust, as well as an assessment of the general trusting of others. Over 2,000 relevant studies were initially identified for potential inclusion in the meta-analysis. Of these, 338 passed all screening criteria, providing a total of 2,185 effect sizes for analysis. The identified dependent variables were trustworthiness, propensity to trust, general trust, and the trust that supervisors and subordinates express in each other. Correlational results demonstrated that a large range of trustor, trustee, and shared contextual factors impact each of trustworthiness, the propensity to trust, and trust within working relationships. The present work's emphasis on contextual factors as one of several dimensions of trust originated here. Experimental results established that the reputation of the trustee and the shared closeness of trustor and trustee were the most predictive factors of trustworthiness outcome. From these collective findings, we propose an elaborated, overarching descriptive theory of trust in which special note is taken of the theory's application to the growing human need to trust in non-human entities. The latter include diverse forms of automation, robots, and artificially intelligent entities, as well as specific implementations such as driverless vehicles, to name but a few. Future directions as to the momentary dynamics of trust development, its sustenance, and its dissipation are also evaluated.
Affiliation(s)
- P. A. Hancock: Department of Psychology and Institute for Simulation and Training, University of Central Florida, Orlando, FL, United States (correspondence)
- Theresa T. Kessler: Department of Psychology, University of Central Florida, Orlando, FL, United States
- Alexandra D. Kaplan: Department of Psychology, University of Central Florida, Orlando, FL, United States
- Kimberly Stowers: Department of Management, University of Alabama, Tuscaloosa, AL, United States
- J. Christopher Brill: United States Air Force Research Laboratory, Wright-Patterson Air Force Base, Dayton, OH, United States
- Kristin E. Schaefer: DEVCOM Army Research Laboratory, Aberdeen Proving Ground, Adelphi, MD, United States
- James L. Szalma: Department of Psychology, University of Central Florida, Orlando, FL, United States
4. Kaplan AD, Kessler TT, Brill JC, Hancock PA. Trust in Artificial Intelligence: Meta-Analytic Findings. Human Factors 2023; 65:337-359. [PMID: 34048287] [DOI: 10.1177/00187208211013988]
Abstract
OBJECTIVE: The present meta-analysis sought to determine significant factors that predict trust in artificial intelligence (AI). Such factors were divided into those relating to (a) the human trustor, (b) the AI trustee, and (c) the shared context of their interaction. BACKGROUND: There are many factors influencing trust in robots, automation, and technology in general, and there have been several meta-analytic attempts to understand the antecedents of trust in these areas. However, no targeted meta-analysis has been performed examining the antecedents of trust in AI. METHOD: Data from 65 articles were examined across the three predicted categories, as well as the subcategories of human characteristics and abilities, AI performance and attributes, and contextual tasking. Lastly, four common uses for AI (i.e., chatbots, robots, automated vehicles, and nonembodied, plain algorithms) were examined as further potential moderating factors. RESULTS: Results showed that all of the examined categories were significant predictors of trust in AI, as were many individual antecedents such as AI reliability and anthropomorphism, among many others. CONCLUSION: Overall, the results of this meta-analysis identified several factors that influence trust, including some that have no bearing on AI performance. Additionally, we highlight the areas where there is currently no empirical research. APPLICATION: Findings from this analysis will allow designers to build systems that elicit higher or lower levels of trust, as they require.
Affiliation(s)
- P A Hancock: University of Central Florida, Orlando, Florida, USA
5. Croijmans I, van Erp L, Bakker A, Cramer L, Heezen S, Van Mourik D, Weaver S, Hortensius R. No Evidence for an Effect of the Smell of Hexanal on Trust in Human-Robot Interaction. Int J Soc Robot 2022; 15:1-10. [PMID: 36128582] [PMCID: PMC9477175] [DOI: 10.1007/s12369-022-00918-6]
Abstract
The level of interpersonal trust among people is partially determined through the sense of smell. Hexanal, a molecule whose smell resembles freshly cut grass, can increase trust in people. Here, we ask whether smell can be leveraged to facilitate human-robot interaction and test whether hexanal also increases the level of trust during collaboration with a social robot. In a preregistered double-blind, placebo-controlled study, we tested whether trial-by-trial and general trust during perceptual decision making in collaboration with a social robot is affected by hexanal, across two samples (n = 46 and n = 44). It was hypothesized that unmasked hexanal and hexanal masked by eugenol, a molecule with a smell resembling clove, would increase the level of trust in human-robot interaction, compared to eugenol alone or a control condition consisting of only the neutral-smelling solvent propylene glycol. Contrasting previous findings in human interaction, no significant effect of unmasked or eugenol-masked hexanal on trust in robots was observed. These findings indicate that the conscious or nonconscious impact of smell on trust might not generalise to interactions with social robots. One explanation could be the category- and context-dependency of smell, leading to a mismatch between the natural smell of hexanal, a smell also occurring in human sweat, and the mechanical physical or mental representation of the robot.
Affiliation(s)
- Ilja Croijmans, Laura van Erp, Annelie Bakker, Lara Cramer, Sophie Heezen, Dana Van Mourik, Sterre Weaver, Ruud Hortensius: Centre for Language Studies, Radboud University, Erasmusplein 1, 6525 HT Nijmegen, The Netherlands
6. Kox ES, Siegling LB, Kerstholt JH. Trust Development in Military and Civilian Human–Agent Teams: The Effect of Social-Cognitive Recovery Strategies. Int J Soc Robot 2022; 14:1323-1338. [PMID: 35432627] [PMCID: PMC8994847] [DOI: 10.1007/s12369-022-00871-4]
Abstract
Autonomous agents (AA) will increasingly be deployed as teammates instead of tools. In many operational situations, flawless performance from AA cannot be guaranteed. This may lead to a breach in the human's trust, which can compromise collaboration. This highlights the importance of thinking about how to deal with error and trust violations when designing AA. The aim of this study was to explore the influence of uncertainty communication and apology on the development of trust in a Human–Agent Team (HAT) when a trust violation occurs. Two experimental studies following the same method were performed with (I) a civilian group and (II) a military group of participants. The online task environment resembled a house search in which the participant was accompanied and advised by an AA as their artificial team member. Halfway through the task, an incorrect advice evoked a trust violation. Uncertainty communication was manipulated within-subjects, apology between-subjects. Our results showed that (a) communicating uncertainty led to higher levels of trust in both studies, (b) an incorrect advice by the agent led to a less severe decline in trust when that advice included a measure of uncertainty, and (c) after a trust violation, trust recovered significantly more when the agent offered an apology. The two latter effects were found only in the civilian study. We conclude that tailored agent communication is a key factor in minimizing trust reduction in the face of agent failure and in maintaining effective long-term relationships in HATs. The difference in findings between participant groups emphasizes the importance of considering the (organizational) culture when designing artificial team members.
7. Human, Organisational and Societal Factors in Robotic Rail Infrastructure Maintenance. Sustainability 2022. [DOI: 10.3390/su14042123]
Abstract
Robotics are set to play a significant role in the maintenance of rail infrastructure. However, the introduction of robotics in this environment requires new ways of working for individuals, teams and organisations, and needs to reflect societal attitudes if it is to achieve sustainable goals. The following paper presents a qualitative analysis of interviews with 25 experts from rail and robotics to outline the human and organisational issues of robotics in the rail infrastructure environment. Themes were structured around user, team, organisational and societal issues. While the results point to many of the expected issues of robotics (trust, acceptance, business change), a number of issues were identified that were specific to rail. Examples include the importance of considering the whole maintenance task lifecycle, conceptualising robotic teamworking within the structures of rail maintenance worksites, the complex upstream (robotics suppliers) and downstream (third-party maintenance contractors) supply chain implications of robotic deployment, and the public acceptance of robotics in an environment that often comes into direct contact with passengers and people around the railways. Recommendations are made in the paper for successful, human-centric rail robotics deployment.
8. Hancock PA, Kessler TT, Kaplan AD, Brill JC, Szalma JL. Evolving Trust in Robots: Specification Through Sequential and Comparative Meta-Analyses. Human Factors 2021; 63:1196-1229. [PMID: 32519902] [DOI: 10.1177/0018720820922080]
Abstract
OBJECTIVE: The objectives of this meta-analysis are to explore the presently available empirical findings on the antecedents of trust in robots and use this information to expand upon a previous meta-analytic review of the area. BACKGROUND: Human-robot interaction (HRI) represents an increasingly important dimension of our everyday existence. Currently, the most important element of these interactions is proposed to be whether the human trusts the robot or not. We have identified three overarching categories that exert effects on the expression of trust. These consist of factors associated with (a) the human, (b) the robot, and (c) the context in which any specific HRI event occurs. METHOD: The current body of literature was examined and all qualifying articles pertaining to trust in robots were included in the meta-analysis. A previous meta-analysis on HRI trust was used as the basis for this extended, updated, and evolving analysis. RESULTS: Multiple additional factors, which have now been demonstrated to significantly influence trust, were identified. The present results, expressed as points of difference and points of commonality between the current and previous analyses, are identified, explained, and cast in the setting of the emerging wave of HRI. CONCLUSION: The present meta-analysis expands upon previous work and validates the overarching categories of trust antecedent (human-related, robot-related, and contextual), as well as identifying the significant individual precursors to trust within each category. A new and updated model of these complex interactions is offered. APPLICATION: The identified trust factors can be used in order to promote appropriate levels of trust in robots.
Affiliation(s)
- P A Hancock: University of Central Florida, Orlando, USA; Institute for Simulation and Training, University of Central Florida, Orlando, USA
- John C Brill: United States Air Force Research Laboratory, Wright-Patterson Air Force Base, Dayton, Ohio, USA
9. Aroyo AM, Pasquali D, Kothig A, Rea F, Sandini G, Sciutti A. Expectations Vs. Reality: Unreliability and Transparency in a Treasure Hunt Game With iCub. IEEE Robot Autom Lett 2021. [DOI: 10.1109/lra.2021.3083465]
10. Furlough C, Stokes T, Gillan DJ. Attributing Blame to Robots: I. The Influence of Robot Autonomy. Human Factors 2021; 63:592-602. [PMID: 31613644] [DOI: 10.1177/0018720819880641]
Abstract
OBJECTIVE: The research examined how humans attribute blame to humans, nonautonomous robots, autonomous robots, or environmental factors for scenarios in which errors occur. BACKGROUND: When robots and humans serve on teams, human perception of their technological team members can be a critical component of successful cooperation, especially when task completion fails. METHODS: Participants read a set of scenarios that described human-robot team task failures. Separate scenarios were written to emphasize the role of the human, the robot, or environmental factors in producing the task failure. After reading each scenario, the participants allocated blame for the failure among the human, robot, and environmental factors. RESULTS: In general, the order of amount of blame was humans, robots, and environmental factors. If the scenario described the robot as nonautonomous, participants attributed almost as little blame to it as to the environmental factors; in contrast, if the scenario described the robot as autonomous, participants attributed almost as much blame to it as to the human. CONCLUSION: We suggest that humans use a hierarchy of blame in which robots are seen as partial social actors, with the degree to which people view them as social actors depending on their degree of autonomy. APPLICATION: The acceptance of robots by human co-workers will be a function of the attribution of blame when errors occur in the workplace. The present research suggests that greater autonomy for the robot will result in greater attribution of blame in work tasks.
Affiliation(s)
- Caleb Furlough: North Carolina State University, Raleigh, USA
- Thomas Stokes: North Carolina State University, Raleigh, USA
11. Jønholt L, Bundgaard CJ, Carlsen M, Sørensen DB. A Case Study on the Behavioural Effect of Positive Reinforcement Training in a Novel Task Participation Test in Göttingen Mini Pigs. Animals (Basel) 2021; 11:1610. [PMID: 34072458] [PMCID: PMC8229723] [DOI: 10.3390/ani11061610]
Abstract
In laboratory animal research, many procedures will be stressful for the animals, as they are forced to participate. Training animals to cooperate using clicker training (CT) or luring (LU) may reduce stress levels, and thereby increase animal welfare. In zoo animals, aquarium animals, and pets, CT is used to train animals to cooperate during medical procedures, whereas in experimental research, LU seems to be the preferred training method. This descriptive case study aims to present the behaviour of CT and LU pigs in a potentially fear-evoking behavioural test, the novel task participation test, in which the pigs walked a short runway on a novel walking surface. All eight pigs voluntarily participated, and only one LU pig showed body stretching combined with a lack of tail wagging, indicating reduced welfare. All CT pigs and one LU pig displayed tail wagging during the test, indicating a positive mental state. Hence, training pigs to cooperate during experimental procedures resulted in smooth completion of the task with no signs of fear or anxiety in seven out of eight animals. We suggest that laboratory pigs be trained prior to experimental procedures or tests to ensure low stress levels.
Affiliation(s)
- Lisa Jønholt: Department of Veterinary and Animal Sciences, Faculty of Health and Medical Sciences, University of Copenhagen, Gronnegaardsvej 15, 1870 Frederiksberg C, Denmark
- Martin Carlsen: Novo Nordisk A/S, Novo Nordisk Park 1, 2760 Maalov, Denmark
- Dorte Bratbo Sørensen: Department of Veterinary and Animal Sciences, Faculty of Health and Medical Sciences, University of Copenhagen, Gronnegaardsvej 15, 1870 Frederiksberg C, Denmark (correspondence)
12.
Abstract
Studying interactions of children with humanoid robots in familiar spaces in natural contexts has become a key issue for social robotics. To fill this need, we conducted several Child–Robot Interaction (CRI) events with the Pepper robot in Polish and Japanese kindergartens. In this paper, we explore the role of trust and expectations towards the robot in determining the success of CRI. We present several observations from the video recordings of our CRI events and the transcripts of free-format question-answering sessions with the robot using the Wizard-of-Oz (WOZ) methodology. From these observations, we identify children's behaviors that indicate trust (or lack thereof) towards the robot, e.g., challenging the robot or physically interacting with it. We also gather insights into children's expectations, e.g., verifying expectations as a causal process and an agency, or expectations concerning the robot's relationships, preferences, and physical and behavioral capabilities. Based on our experiences, we suggest some guidelines for designing more effective CRI scenarios. Finally, we argue for the effectiveness of in-the-wild methodologies for planning and executing qualitative CRI studies.
13.
Abstract
This work considers the future of driving in terms of both its short- and long-term horizons. It conjectures that human-controlled driving will follow in the footsteps of a wide swath of other, now either residual or abandoned, human occupations: pursuits that have preceded it into oblivion. In this way, driving will dwindle down to only a few niche locales wherein enthusiasts will still persist, much in the way that steam train hobbyists now continue their own aspirational inclinations. Of course, the value of any such prognostication is in direct proportion to the degree that information is conveyed and prospective uncertainty reduced. In more colloquial terms: the devil is in the details of these coming transitions. It is anticipated that we will see a progressive transformation of the composition of on-road traffic, registered most immediately in the realm of professional transportation, in which the imperative for optimization exceeds that in virtually all other user segments. The transition from manual control to full automation will be more punctate than gradualist in its evolutionary development. As performance optimization slowly exhausts the commercial sector, it will progressively transition more into the discretionary realm by dint of simple technology transfer alone. The hedonic dimension of everyday driving will be dispersed and pursued by progressively fewer individuals. The traveling window of generational expectation will soon mean that human driving is largely "forgotten," as each sequential generation matures without this still presently common experience. Indications of this stage of progress are beginning to be witnessed in the demographic profile of vehicle usage and ownership rates. The purpose of the exposition which follows is to consider and support each of these stated propositions.
Affiliation(s)
- P A Hancock: Department of Psychology and Institute of Simulation and Training, University of Central Florida, Orlando, FL, United States
14. Turja T, Saurio R, Katila J, Hennala L, Pekkarinen S, Melkas H. Intention to Use Exoskeletons in Geriatric Care Work: Need for Ergonomic and Social Design. Ergonomics in Design 2020. [DOI: 10.1177/1064804620961577]
Abstract
In this research, we investigate user experiences with the Laevo exoskeletons in geriatric work. We introduce two studies in which Finnish nurses used exoskeletons, and we identify the requirements and potential restrictions for using exoskeletons in the care context. Our results show that nurses' intentions to use the exoskeletons were mostly associated with perceived usefulness, ergonomics, and enjoyment of use. Social environment issues, such as other people's reactions, are also important considerations. Exoskeleton use has varying requirements depending on where it will be implemented. Thus, the end users' ideas for the design are crucial in enabling exoskeleton use in different sectors of work.
15. Volante WG, Sosna J, Kessler T, Sanders T, Hancock PA. Social Conformity Effects on Trust in Simulation-Based Human-Robot Interaction. Human Factors 2019; 61:805-815. [PMID: 30431337] [DOI: 10.1177/0018720818811190]
Abstract
OBJECTIVE: We investigated the co-acting influences of communication and social conformity on trust in human-robot interaction. BACKGROUND: Previous work has investigated aspects of the robot, the human, and the environment as influential factors in the human-robot relationship. Little work has examined the conjoint effects of social conformity and communication on this relationship. As social conformity and communication have been shown to affect human-human trust, there are a priori reasons to believe that they will also play an influential role in human-robot trust. METHOD: The experiment examined the influences of social conformity and robot communication on trust. A 2 × 2 (communication × social group) design was implemented, with each variable having two levels (communication, no communication; positive social group, negative social group). RESULTS: We created a communication manipulation which we then demonstrated to mediate the trust level between human and robot. However, this influence on trust was overcome by social information: the subsequent trust level attributed to the robot was dominated by expressed social group attitudes towards that robot. CONCLUSION: The results confirm the importance of human social assessments over direct robot communication in setting human-robot trust levels. When social opinions are expressed, observers appear to conform to the trust displayed by the group rather than relying on their own judgment. APPLICATION: In human-robot teams, the perceptions of the group may exert a greater impact than even robot communication. This may be especially important when new human members are introduced into such teams.
16. Sanders T, Kaplan A, Koch R, Schwartz M, Hancock PA. The Relationship Between Trust and Use Choice in Human-Robot Interaction. Human Factors 2019; 61:614-626. [PMID: 30601683] [DOI: 10.1177/0018720818816838]
Abstract
OBJECTIVE: To understand the influence of trust on use choice in human-robot interaction via experimental investigation. BACKGROUND: The general assumption that trusting a robot leads to using that robot has been previously identified, often by asking participants to choose between manually completing a task or using an automated aid. Our work further evaluates the relationship between trust and use choice and examines factors impacting that choice. METHOD: An experiment was conducted wherein participants rated a robot on a trust scale, then made decisions about whether to use that robotic agent or a human agent to complete a task. Participants provided explicit reasoning for their choices. RESULTS: While we found statistical support for the "trust leads to use" relationship, qualitative results indicate other factors are important as well. CONCLUSION: Results indicated that while trust leads to use, use is also heavily influenced by the specific task at hand. Users more often chose a robot for a dangerous task where loss of life is likely, citing safety as their primary concern. Conversely, users chose humans for the mundane warehouse task, mainly citing financial reasons, specifically fear of job and income loss for the human worker. APPLICATION: Understanding the factors driving use choice is key to appropriate interaction in the field of human-robot teaming.
Affiliation(s)
- Ryan Koch: Tufts University, Medford, Massachusetts, USA
17. Do You Trust Me? Investigating the Formation of Trust in Social Robots. Progress in Artificial Intelligence 2019. [DOI: 10.1007/978-3-030-30244-3_30]
18. Gibo TL, Plaisier MA, Mugge W, Abbink DA. Reliance on Haptic Assistance Reflected in Haptic Cue Weighting. IEEE Transactions on Haptics 2019; 12:68-77. [PMID: 30106693] [DOI: 10.1109/toh.2018.2864278]
Abstract
When using an automated system, user trust in the automation is an important factor influencing performance. Prior studies have analyzed trust during supervisory control of automation, and how trust influences reliance: the behavioral correlate of trust. Here, we investigated how reliance on haptic assistance affects performance during shared control with an automated system. Subjects made reaches towards a hidden target using a visual cue and a haptic cue (assistance from the automation). We sought to influence reliance by changing the variability of trial-by-trial random errors in the haptic assistance. Reliance was quantified in terms of the subject's position at the end of the reach relative to the two cues. Our results show that subjects aimed more towards the visual cue when the variability of the haptic cue errors increased, resembling cue weighting behavior. Similar behavior was observed both when subjects had explicit knowledge about the haptic cue error variability and when they had only implicit knowledge (from experience). However, the group with explicit knowledge was able to adapt their reliance on the haptic assistance more quickly. The method we introduce here provides a quantitative way to study user reliance on the information provided by automated systems with shared control.
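For context, the cue-weighting behavior referred to here is conventionally modeled as minimum-variance (maximum-likelihood) cue combination, in which each cue is weighted by the inverse of its error variance; this is the standard formulation from the cue-integration literature, not an equation reproduced from the article itself:

$$\hat{x} = w_v x_v + w_h x_h, \qquad w_v = \frac{1/\sigma_v^2}{1/\sigma_v^2 + 1/\sigma_h^2}, \qquad w_h = 1 - w_v,$$

where $x_v$ and $x_h$ are the target positions indicated by the visual and haptic cues and $\sigma_v^2$ and $\sigma_h^2$ are their respective error variances. Under this model, increasing the haptic cue's error variance shifts weight toward the visual cue, which matches the pattern of reliance the abstract reports.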
19. Baker AL, Phillips EK, Ullman D, Keebler JR. Toward an Understanding of Trust Repair in Human-Robot Interaction. ACM Trans Interact Intell Syst 2018. [DOI: 10.1145/3181671]
Abstract
Gone are the days of robots solely operating in isolation, without direct interaction with people. Rather, robots are increasingly being deployed in environments and roles that require complex social interaction with humans. The implementation of human-robot teams continues to increase as technology develops in tandem with the state of human-robot interaction (HRI) research. Trust, a major component of human interaction, is an important facet of HRI. However, the ideas of trust repair and trust violations are understudied in the HRI literature. Trust repair is the activity of rebuilding trust after one party breaks the trust of another. These trust breaks are referred to as trust violations. Just as with humans, trust violations with robots are inevitable; as a result, a clear understanding of the process of HRI trust repair must be developed in order to ensure that a human-robot team can continue to perform well after a trust violation. Previous research on human-automation trust and human-human trust can serve as starting places for exploring trust repair in HRI. Although existing models of human-automation and human-human trust are helpful, they do not account for some of the complexities of building and maintaining trust in unique relationships between humans and robots. The purpose of this article is to provide a foundation for exploring human-robot trust repair by drawing upon prior work in the human-robot, human-automation, and human-human trust literature, concluding with recommendations for advancing this body of work.
20. Aroyo AM, Rea F, Sandini G, Sciutti A. Trust and Social Engineering in Human Robot Interaction: Will a Robot Make You Disclose Sensitive Information, Conform to Its Recommendations or Gamble? IEEE Robot Autom Lett 2018. [DOI: 10.1109/lra.2018.2856272]
21.
Abstract
The 'intuitive' trust people feel when encountering robots in public spaces is a key determinant of their interactions with these systems. To study this trust, we presented subjects with static images of a robot performing an access-control task, interacting with younger and older male and female civilians, and applying polite or impolite behavior. Our results showed strong effects of the robot's behavior. The age and gender of the people interacting with the robot had no significant effect on participants' impressions of the robot's attributes. This preliminary study shows that politeness may be a crucial determinant of people's perception of peacekeeping robots.
Affiliation(s)
- Ohad Inbar: Dept. of Industrial Engineering, Tel Aviv University, Tel Aviv, Israel
- Joachim Meyer: Dept. of Industrial Engineering, Tel Aviv University, Tel Aviv, Israel
22. Smith MA, Allaham MM, Wiese E. Trust in Automated Agents is Modulated by the Combined Influence of Agent and Task Type. Proceedings of the Human Factors and Ergonomics Society Annual Meeting 2016. [DOI: 10.1177/1541931213601046]
Abstract
Trust in automation is an important topic in the field of human factors and has a substantial impact on both attitudes towards and performance with automated systems. One variable that has been shown to influence trust is the degree of human-likeness displayed by the automated system, with the main finding being that increased human-like appearance leads to increased ratings of trust. In the current study, we investigate whether humanness unanimously leads to higher trust or whether the degree to which an agent is trusted depends on context variables (i.e., task type). For that purpose, we created a task with a social component (i.e., judging emotional states from the eye region) and an analytical component (i.e., a mathematical task) and measured how strongly participants complied with human, avatar, or computer agents when performing the social versus the analytical version with them. We hypothesized that human-like agents are trusted more on social tasks, while machine-like agents are trusted more on analytical tasks. In line with our hypothesis, the results show that human agents are in general not trusted more than automated agents, but that the degree to which an agent is trusted depends on the anticipated expertise of that agent for a given task. The findings suggest that when designing automated systems that are supposed to interact with humans, the degree of humanness of the agent needs to match the degree to which the task requires social skills.
23. Schaefer KE, Chen JYC, Szalma JL, Hancock PA. A Meta-Analysis of Factors Influencing the Development of Trust in Automation: Implications for Understanding Autonomy in Future Systems. Human Factors 2016; 58:377-400. [PMID: 27005902] [DOI: 10.1177/0018720816634228]
Abstract
OBJECTIVE: We used meta-analysis to assess research concerning human trust in automation to understand the foundation upon which future autonomous systems can be built. BACKGROUND: Trust is increasingly important in the growing need for synergistic human-machine teaming. Thus, we expand on our previous meta-analytic foundation in the field of human-robot interaction to include all of automation interaction. METHOD: We used meta-analysis to assess trust in automation. Thirty studies provided 164 pairwise effect sizes, and 16 studies provided 63 correlational effect sizes. RESULTS: The overall effect size of all factors on trust development was ḡ = +0.48, and the correlational effect was r̄ = +0.34, each of which represents a medium effect. Moderator effects were observed for the human-related (ḡ = +0.49; r̄ = +0.16) and automation-related (ḡ = +0.53; r̄ = +0.41) factors. Moderator effects specific to environmental factors proved insufficient in number to calculate at this time. CONCLUSION: Findings provide a quantitative representation of factors influencing the development of trust in automation as well as identify additional areas of needed empirical research. APPLICATION: This work has important implications for the enhancement of current and future human-automation interaction, especially in high-risk or extreme performance environments.
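For reference, the standard meta-analytic definitions behind these summary statistics (background, not reproduced from the article): ḡ denotes the mean Hedges' g across studies, the bias-corrected standardized mean difference, and r̄ denotes the mean correlation. For a single two-group study,

$$g = \left(1 - \frac{3}{4(n_1 + n_2 - 2) - 1}\right)\frac{\bar{X}_1 - \bar{X}_2}{s_p}, \qquad s_p = \sqrt{\frac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{n_1 + n_2 - 2}},$$

where $\bar{X}_1, \bar{X}_2$ are group means, $s_1^2, s_2^2$ group variances, and $n_1, n_2$ group sizes. By conventional benchmarks, values of ḡ near 0.5, as reported here, correspond to medium effects.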
Affiliation(s)
- James L Szalma: U.S. Army Research Laboratory, Aberdeen Proving Ground, Maryland; U.S. Army Research Laboratory, Orlando, Florida; University of Central Florida, Orlando
24. Dragan A, Holladay R, Srinivasa S. Deceptive robot motion: synthesis, analysis and experiments. Auton Robots 2015. [DOI: 10.1007/s10514-015-9458-8]
25. Schaefer KE, Adams JK, Cook JG, Bardwell-Owens A, Hancock PA. The Future of Robotic Design. Ergonomics in Design 2015. [DOI: 10.1177/1064804614562214]
Abstract
The perception of what constitutes a robot depends on both technological advances and evolving social perceptions. Here, we explore how fictional media affect those societal perceptions and how they have influenced robot design to date. We then examine how these design trends can be applied to present circumstances to suggest future design directions.
26.
Abstract
We here provide our assessment of the growing intimacy that characterizes the relationships between humans and technology. With the development of ever more innovative and intimate systems, the line between human and machine is becoming increasingly blurred. The concepts of human qua human and machine qua machine are no longer situated at polar extremes of a human-versus-automation spectrum. Rather, human and machine represent a converging dyad that is evolving toward a hybrid commonality. Within this overarching context, we discuss the concept of intimacy using four basic dimensions: i) the internal perspective, ii) the external extension, iii) interpersonal interaction, and finally, iv) the societal reflection. Through the use of three case-study personae (a disabled individual, a military operative, and a student), these dimensions are shown to be flexible enough to characterize various classes of users and can thus be used to frame the multilevel and emerging impacts of physical and cognitive intimacy.