1. Szulc J, Fletcher K. Numerical versus graphical aids for decision-making in a multi-cue signal identification task. Applied Ergonomics 2024; 118:104260. [PMID: 38417229] [DOI: 10.1016/j.apergo.2024.104260]
Abstract
Decision aids are commonly used in tactical decision-making environments to help humans integrate base-rate and multi-cue information. However, it is important that users trust and rely on aids appropriately. Decision aids can be presented in many ways, but the literature lacks clarity about the conditions under which they are effective. This research aims to determine whether a numerical or a graphical aid more effectively supports human performance, and explores the relationships between aid presentation, trust, and workload. Participants (N = 30) completed a signal-identification task that required integration of readings from a set of three dynamic gauges. Participants experienced three conditions: unaided, using a numerical aid, and using a graphical aid. The aids combined gauge and base-rate information in a statistically optimal fashion. Participants also indicated how much they trusted the system and how hard they worked during the task. Analyses explored the impact of aid condition on sensitivity, response bias, response time, trust, and workload. Both the numerical and graphical aids produced significant increases in sensitivity and trust, and significant decreases in workload, compared with the unaided condition. The difference in response time between the graphical and unaided conditions approached significance, with participants responding faster using the graphical aid without decrements in sensitivity. Significant interactions between aid and signal type indicated that both aided conditions promoted faster responding to non-hostile signals, with larger mean differences in the graphical-aid condition. Practically, graphical aids in which suggestions are more salient to users may promote faster responding in tactical environments at negligible cost to accuracy.
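The "statistically optimal" fusion of base-rate and gauge information described here is most naturally read as Bayesian log-odds combination, and the analyses rest on standard signal-detection measures (sensitivity d', response bias c). The following is a minimal Python sketch of both; the Gaussian gauge parameters and all numeric values are assumptions for illustration, not values from the study.

```python
from statistics import NormalDist
import math

def posterior_hostile(readings, base_rate, mu_h=0.7, mu_n=0.3, sigma=0.15):
    """Bayes-optimal fusion of a hostile base rate with independent Gaussian
    gauge readings (distribution parameters here are hypothetical)."""
    log_odds = math.log(base_rate / (1 - base_rate))  # prior evidence
    for r in readings:
        # add each gauge's log-likelihood ratio for signal vs. noise
        log_odds += math.log(NormalDist(mu_h, sigma).pdf(r)
                             / NormalDist(mu_n, sigma).pdf(r))
    return 1 / (1 + math.exp(-log_odds))  # P(hostile | readings)

def sensitivity_and_bias(hit_rate, fa_rate):
    """Signal-detection sensitivity (d') and response bias (c)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate), -0.5 * (z(hit_rate) + z(fa_rate))

# Three gauge readings with a 20% hostile base rate (made-up numbers):
print(posterior_hostile([0.55, 0.72, 0.48], base_rate=0.2))
print(sensitivity_and_bias(hit_rate=0.85, fa_rate=0.20))
```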
2. Rittenberg BSP, Holland CW, Barnhart GE, Gaudreau SM, Neyedli HF. Trust with increasing and decreasing reliability. Human Factors 2024:187208241228636. [PMID: 38445652] [DOI: 10.1177/00187208241228636]
Abstract
OBJECTIVE The primary purpose was to determine how trust changes over time as automation reliability increases or decreases. A secondary purpose was to determine how task-specific self-confidence is associated with trust and reliability level. BACKGROUND Both overtrust and undertrust can be detrimental to system performance; therefore, the temporal dynamics of trust under changing reliability need to be explored. METHOD Two experiments used a dominant-color identification task in which automation provided a recommendation to users, with the reliability of the recommendation changing over 300 trials. In Experiment 1, two groups of participants interacted with the system: one group started with a 50% reliable system that increased to 100%, while the other used a system that decreased from 100% to 50%. Experiment 2 added a group for whom automation reliability increased from 70% to 100%. RESULTS Trust was initially high in the decreasing group and then declined as reliability decreased; however, trust also declined in the increasing-reliability group that started at 50%. Furthermore, as user self-confidence increased, automation reliability had a greater influence on trust. In Experiment 2, the group whose reliability increased from 70% showed increased trust in the system. CONCLUSION Trust does not always track the reliability of automated systems; in particular, it is difficult for trust to recover once the user has interacted with a low-reliability system. APPLICATIONS This study provides initial evidence on the dynamics of trust in automation that improves over time, suggesting that users should begin interacting with automation only once it is sufficiently reliable.
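The core manipulation, reliability that ramps up or down across 300 trials, can be sketched in a few lines of Python. The linear ramp shape and all values below are assumptions for illustration; the abstract does not specify the exact schedule.

```python
import random

def reliability_schedule(n_trials=300, start=0.5, end=1.0):
    """Per-trial probability that the automation's recommendation is correct,
    ramping linearly from `start` to `end` (ramp shape is an assumption)."""
    step = (end - start) / (n_trials - 1)
    return [start + i * step for i in range(n_trials)]

def recommendations(truth, schedule, rng):
    """For a binary dominant-color judgment (0 or 1), recommend the true
    answer with the scheduled reliability and the wrong answer otherwise."""
    return [t if rng.random() < p else 1 - t for t, p in zip(truth, schedule)]

rng = random.Random(42)
truth = [rng.randint(0, 1) for _ in range(300)]
increasing = recommendations(truth, reliability_schedule(start=0.5, end=1.0), rng)
decreasing = recommendations(truth, reliability_schedule(start=1.0, end=0.5), rng)
```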
3. Lacroux A, Martin-Lacroux C. Should I Trust the Artificial Intelligence to Recruit? Recruiters' Perceptions and Behavior When Faced With Algorithm-Based Recommendation Systems During Resume Screening. Front Psychol 2022; 13:895997. [PMID: 35874355] [PMCID: PMC9298741] [DOI: 10.3389/fpsyg.2022.895997]
Abstract
Resume screening assisted by decision support systems that incorporate artificial intelligence is developing rapidly in many organizations, raising technical, managerial, legal, and ethical issues. The purpose of the present paper is to better understand the reactions of recruiters when they are offered algorithm-based recommendations during resume screening. Two polarized attitudes have been identified in the literature on users' reactions to algorithm-based recommendations: algorithm aversion, which reflects a general distrust of algorithms and a preference for human recommendations; and automation bias, which corresponds to overconfidence in the decisions or recommendations made by algorithmic decision support systems (ADSS). Drawing on results from the field of automated decision support, we make the general hypothesis that recruiters trust human experts more than ADSS because they distrust algorithms for subjective decisions such as recruitment. An experiment on resume screening was conducted on a sample of professionals (N = 694) involved in the screening of job applications. They were asked to study a job offer, then evaluate two fictitious resumes in a factorial design that manipulated the type of recommendation (no recommendation/algorithmic recommendation/human expert recommendation) and the consistency of the recommendations (consistent vs. inconsistent). Our results support the general hypothesis of a preference for human recommendations: recruiters exhibited a higher level of trust toward human expert recommendations than toward algorithmic recommendations. However, we also found that recommendation consistency had a differential and unexpected impact on decisions: in the presence of an inconsistent algorithmic recommendation, recruiters favored the unsuitable over the suitable resume. Our results also show that specific personality traits (extraversion, neuroticism, and self-confidence) are associated with differential use of algorithmic recommendations. Finally, implications for research and HR policies are discussed.
Affiliation(s)
- Alain Lacroux, Univ. Polytechnique Hauts de France, IDH, CRISS, Valenciennes, France
4
|
Employees’ Trust in Artificial Intelligence in Companies: The Case of Energy and Chemical Industries in Poland. ENERGIES 2021. [DOI: 10.3390/en14071942] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/08/2023]
Abstract
The use of artificial intelligence (AI) in companies is advancing rapidly. Consequently, multidisciplinary research on AI in business has developed dramatically during the last decade, moving from a focus on technological objectives toward an interest in human users' perspective. In this article, we investigate the notion of employees' trust in AI at the workplace, following a human-centered approach that considers AI integration in business from the employees' perspective and takes into account the elements that facilitate human trust in AI. While employees' trust in AI at the workplace seems critical, few studies have so far systematically investigated its determinants; this study is an attempt to fill that research gap. The research objective is to examine links between employees' trust in AI in the company and three other latent variables (general trust in technology, intra-organizational trust, and individual competence trust). A quantitative study conducted on a sample of 428 employees from companies in the energy and chemical industries in Poland allowed the hypotheses to be tested using structural equation modeling (SEM). The results indicate a positive relationship between general trust in technology and employees' trust in AI in the company, as well as between intra-organizational trust and employees' trust in AI in the company, in the surveyed firms.
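The hypothesized links between the three latent determinants and employees' trust in AI can be expressed as a structural equation model. Below is a sketch using the Python semopy package; the latent-variable names, survey items, and data file are hypothetical stand-ins, not the study's actual instrument.

```python
import pandas as pd
import semopy

# Measurement model: each latent trust construct is indicated by survey items
# (tt*, ot*, ct*, ai* are hypothetical item names); structural model: paths
# from the three determinants to trust in AI in the company.
MODEL_DESC = """
TechTrust =~ tt1 + tt2 + tt3
OrgTrust =~ ot1 + ot2 + ot3
CompTrust =~ ct1 + ct2 + ct3
AITrust =~ ai1 + ai2 + ai3
AITrust ~ TechTrust + OrgTrust + CompTrust
"""

df = pd.read_csv("survey_responses.csv")  # hypothetical item-level data
model = semopy.Model(MODEL_DESC)
model.fit(df)
print(model.inspect())  # path estimates, standard errors, p-values
```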
5
|
Hussein A, Elsawah S, Abbass HA. The reliability and transparency bases of trust in human-swarm interaction: principles and implications. ERGONOMICS 2020; 63:1116-1132. [PMID: 32370651 DOI: 10.1080/00140139.2020.1764112] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/17/2019] [Accepted: 04/13/2020] [Indexed: 06/11/2023]
Abstract
Automation reliability and transparency are key factors for trust calibration and, as such, can have distinct effects on human reliance behaviour and mission performance. One question remains unexplored: what are the implications of reliability and transparency for trust calibration in human-swarm interaction? We investigate this research question in the context of human-swarm interaction, as swarm systems are becoming more popular for their robustness and versatility. Thirty-two participants performed swarm-based tasks under different reliability and transparency conditions. The results indicate that trust, whether reliability- or transparency-based, predicts high reliance rates and shorter response times. Reliability-based trust is negatively correlated with correct rejection rates, while transparency-based trust is positively correlated with these rates. We conclude that reliability and transparency have distinct effects on trust calibration. Practitioner Summary: Reliability and transparency have distinct effects on trust calibration. Findings from our human experiments suggest that transparency is a necessary design requirement if and when humans need to be involved in the decision loop of human-swarm systems, especially when swarm reliability is high. Abbreviations: HRI: human-robot interaction; IOS: inter-organisational systems; LMM: linear mixed models; MANOVA: multivariate analysis of variance; UxV: heterogeneous unmanned vehicles; UAV: unmanned aerial vehicle.
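The reported link between trust and correct rejection rate, the proportion of wrong swarm recommendations an operator declines to follow, can be illustrated with a short sketch (hypothetical data, not values from the study; statistics.correlation requires Python 3.10+).

```python
from statistics import correlation

# Hypothetical per-participant values: questionnaire trust scores and the
# number of wrong recommendations each operator correctly rejected.
transparency_trust = [3.2, 4.1, 2.8, 4.6, 3.9, 2.5]
correct_rejections = [4, 6, 3, 8, 5, 2]
wrong_advice_trials = 10  # wrong recommendations shown per participant

cr_rates = [cr / wrong_advice_trials for cr in correct_rejections]
print(correlation(transparency_trust, cr_rates))  # Pearson r (positive here)
```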
Affiliation(s)
- Aya Hussein, School of Engineering and Information Technology, University of New South Wales, Canberra, Australia
- Sondoss Elsawah, School of Engineering and Information Technology, University of New South Wales, Canberra, Australia
- Hussein A Abbass, School of Engineering and Information Technology, University of New South Wales, Canberra, Australia
6. An experience-based contrast effect when relying on a decision aid. SN Applied Sciences 2020. [DOI: 10.1007/s42452-020-03236-6]
7. Swanson LR, Bellanca JL, Helton J. Automated Systems and Trust: Mineworkers' Trust in Proximity Detection Systems for Mobile Machines. Saf Health Work 2020; 10:461-469. [PMID: 31890329] [PMCID: PMC6933178] [DOI: 10.1016/j.shaw.2019.09.003]
Abstract
Background Collisions involving workers and mobile machines continue to be a major concern in underground coal mines. Over the last 30 years, these collisions have resulted in numerous injuries and fatalities. Recently, the Mine Safety and Health Administration (MSHA) proposed a rule that would require mines to equip mobile machines with proximity detection systems (PDSs), systems designed for automated collision avoidance. Even though this regulation has not been enacted, some mines have installed PDSs on their scoops and hauling machines. However, early implementation of PDSs has introduced a variety of safety concerns. Past findings show that workers' trust can affect technology integration and influence unsafe use of automated technologies. Methods Using a mixed-methods approach, the present study explores the effects that factors such as mine of employment, age, experience, and system type have on workers' trust in PDSs for mobile machines. The study also explores how workers are trained on PDSs and how this training influences trust. Results The study produced three major findings. First, the mine of employment had a significant influence on workers' trust in mobile PDSs. Second, hands-on and classroom training were the most common types of training. Finally, over 70% of workers were trained on the system by the mine, compared with 36% trained by the system manufacturer. Conclusion The influence of workers' mine of employment on trust in PDSs suggests that practitioners and researchers may need to give the organizational and physical characteristics of each mine careful consideration to ensure safe integration of automated systems.
Affiliation(s)
- LaTasha R. Swanson, corresponding author, 626 Cochrans Mill Road, Pittsburgh, PA 15236, USA
8. Rovira E, McLaughlin AC, Pak R, High L. Looking for Age Differences in Self-Driving Vehicles: Examining the Effects of Automation Reliability, Driving Risk, and Physical Impairment on Trust. Front Psychol 2019; 10:800. [PMID: 31105610] [PMCID: PMC6498898] [DOI: 10.3389/fpsyg.2019.00800]
Abstract
PURPOSE Self-driving cars represent an extremely high level of autonomous technology and a promising technology that may help older adults safely maintain independence. However, human behavior with automation is complex and not straightforward (Parasuraman and Riley, 1997; Parasuraman, 2000; Rovira et al., 2007; Parasuraman and Wickens, 2008; Parasuraman and Manzey, 2010; Parasuraman et al., 2012). In addition, because no fully self-driving vehicles are yet available to the public, most research has been limited to subjective survey-based assessments that depend on respondents' limited, second-hand knowledge and do not reflect the complex situational and dispositional factors known to affect trust and technology adoption. METHODS To address these issues, the current study examined the specific factors that affect younger and older adults' trust in self-driving vehicles. RESULTS The results showed that trust in self-driving vehicles depended on multiple interacting variables, such as the age of the respondent, risk during travel, impairment level of the hypothesized driver, and whether the self-driving car was reliable. CONCLUSION The primary contribution of this work is that, contrary to existing opinion surveys suggesting broad distrust of self-driving cars, ratings of trust in self-driving cars varied with situational characteristics (reliability, driver impairment, risk level). Specifically, individuals reported less trust in the self-driving car when the car technology failed, and more trust in the technology in a low-risk driving situation with an unimpaired driver even when the automation was unreliable.
Affiliation(s)
- Ericka Rovira, Department of Behavioral Sciences and Leadership, US Military Academy, West Point, NY, United States
- Richard Pak, Department of Psychology, Clemson University, Clemson, SC, United States
- Luke High, Department of Behavioral Sciences and Leadership, US Military Academy, West Point, NY, United States
9. Lyons JB, Guznov SY. Individual differences in human–machine trust: A multi-study look at the perfect automation schema. Theoretical Issues in Ergonomics Science 2018. [DOI: 10.1080/1463922x.2018.1491071]
Affiliation(s)
- Joseph B. Lyons, Airman Systems Directorate, Air Force Research Laboratory, Dayton, OH, USA
10. Niu J, Geng H, Zhang Y, Du X. Relationship between automation trust and operator performance for the novice and expert in spacecraft rendezvous and docking (RVD). Applied Ergonomics 2018; 71:1-8. [PMID: 29764609] [DOI: 10.1016/j.apergo.2018.03.014]
Abstract
Operator trust in automation is a crucial factor influencing its use and operational performance. However, the relationship between automation trust and performance remains poorly understood and requires further investigation. The objective of this paper is to explore differences in trust and performance in automation-aided spacecraft rendezvous and docking (RVD) between novices and experts, and to investigate the relationship between automation trust and performance. We employed a two-factor mixed design, with training skill (novice and expert) and automation mode (manual RVD and automation-aided RVD) serving as the two factors. Twenty participants, 10 novices and 10 experts, were recruited to conduct six RVD tasks under the two automation levels. Operator performance was recorded by desktop hand-held docking training equipment, and operator trust was measured by a 12-item questionnaire at the beginning and end of each trial. Automation significantly narrowed the performance gap between novices and experts, and automation trust showed a marginally significant difference between the two groups. Furthermore, the results demonstrated that the experts' attitude-angle control error was related to the total trust score, whereas other automation performance indicators were not. Automation performance was, however, related to individual dimensions of trust, such as "entrust," "harmful," and "dependable."
Affiliation(s)
- Jianwei Niu, School of Mechanical Engineering, University of Science and Technology Beijing, Beijing, China
- He Geng, School of Mechanical Engineering, University of Science and Technology Beijing, Beijing, China
- Yijing Zhang, Department of Industrial Engineering, Tsinghua University, Beijing, China
11. Lyons JB, Ho NT, Van Abel AL, Hoffmann LC, Sadler GG, Fergueson WE, Grigsby MA, Wilkins M. Comparing Trust in Auto-GCAS Between Experienced and Novice Air Force Pilots. Ergonomics in Design 2017. [DOI: 10.1177/1064804617716612]
Abstract
We examined F-16 pilots' trust in the Automatic Ground Collision Avoidance System (Auto-GCAS), an automated system fielded on the F-16 to reduce the occurrence of controlled flight into terrain. We looked at the impact of experience (i.e., number of flight hours) as a predictor of trust perceptions and complacency potential among pilots. We expected novice pilots to report higher trust and greater potential for complacency in relation to Auto-GCAS, which proved partly true: novice pilots reported trust perceptions equivalent to those of experienced pilots, but greater complacency potential.
12. Wheatcroft JM, Jump M, Breckell AL, Adams-White J. Unmanned aerial systems (UAS) operators' accuracy and confidence of decisions: Professional pilots or video game players? Cogent Psychology 2017. [DOI: 10.1080/23311908.2017.1327628]
Affiliation(s)
- Jacqueline M. Wheatcroft, Department of Psychological Sciences, Eleanor Rathbone Building, University of Liverpool, Liverpool L69 7ZA, UK
- Mike Jump, Department of Engineering, University of Liverpool, Liverpool L69 7ZA, UK
- Amy L. Breckell, School of Psychology, University of Liverpool, Liverpool L69 7ZA, UK
- Jade Adams-White, Department of Psychological Sciences, Eleanor Rathbone Building, University of Liverpool, Liverpool L69 7ZA, UK
13. Hergeth S, Lorenz L, Krems JF. Prior Familiarization With Takeover Requests Affects Drivers' Takeover Performance and Automation Trust. Human Factors 2017; 59:457-470. [PMID: 27923886] [DOI: 10.1177/0018720816678714]
Abstract
OBJECTIVE The objective of this study was to investigate the effects of prior familiarization with takeover requests (TORs) during conditional automated driving on drivers' initial takeover performance and automation trust. BACKGROUND System-initiated TORs are one of the biggest concerns for conditional automated driving and have been studied extensively in the past. Most, but not all, of these studies have included training sessions to familiarize participants with TORs, which makes them hard to compare and might obscure first-failure-like effects on takeover performance and automation trust formation. METHOD A driving simulator study compared drivers' takeover performance in two takeover situations across four prior-familiarization groups (no familiarization, description, experience, description and experience), as well as automation trust before and after experiencing the system. RESULTS As hypothesized, prior familiarization with TORs had a more positive effect on takeover performance in the first than in a subsequent takeover situation. In all groups, automation trust increased after participants experienced the system. Participants given no prior familiarization with TORs reported the highest automation trust both before and after experiencing the system. CONCLUSION The current results extend earlier findings, suggesting that prior familiarization with TORs during conditional automated driving matters most for takeover performance in the first takeover situation and that it lowers drivers' automation trust. APPLICATION Potential applications of this research include different approaches to familiarizing users with automated driving systems, better integration of earlier findings, and more sophisticated experimental designs.
14. Pak R, McLaughlin AC, Leidheiser W, Rovira E. The effect of individual differences in working memory in older adults on performance with different degrees of automated technology. Ergonomics 2017; 60:518-532. [PMID: 27409279] [DOI: 10.1080/00140139.2016.1189599]
Abstract
A leading hypothesis to explain older adults' overdependence on automation is age-related decline in working memory; however, this hypothesis has not been empirically examined. The purpose of the current experiment was to examine how working memory affected older adults' performance with different degrees of automation. The well-supported expectation is that higher degrees of automation benefit performance when the automation is correct but increasingly harm performance when the automation fails. In contrast, older adults benefited from higher degrees of automation when the automation was correct but were not differentially harmed by automation failures. Surprisingly, working memory did not interact with degree of automation, but it did interact with automation correctness. When automation was correct, older adults with higher working memory ability performed better than those with lower ability; when automation was incorrect, all older adults performed poorly regardless of working memory ability. Practitioner Summary: The design of automation intended for older adults should focus on making the correctness of the automation apparent to the older user and on helping older users recover when the automation malfunctions.
Affiliation(s)
- Richard Pak, Department of Psychology, Clemson University, Clemson, SC, USA
- Ericka Rovira, Department of Behavioral Sciences & Leadership, U.S. Military Academy, West Point, NY, USA
15. Sauer J, Chavaillaz A. How operators make use of wide-choice adaptable automation: observations from a series of experimental studies. Theoretical Issues in Ergonomics Science 2017. [DOI: 10.1080/1463922x.2017.1297866]
Affiliation(s)
- Juergen Sauer, Department of Psychology, University of Fribourg, Fribourg, Switzerland
- Alain Chavaillaz, Department of Psychology, University of Fribourg, Fribourg, Switzerland
16. The Effects of Automation Error Types on Operators' Trust and Reliance. Lecture Notes in Computer Science 2016. [DOI: 10.1007/978-3-319-39907-2_11]
17. Hoff KA, Bashir M. Trust in automation: integrating empirical evidence on factors that influence trust. Human Factors 2015; 57:407-434. [PMID: 25875432] [DOI: 10.1177/0018720814547570]
Abstract
OBJECTIVE We systematically review recent empirical research on factors that influence trust in automation and present a three-layered trust model that synthesizes existing knowledge. BACKGROUND Much of the existing research on factors that guide human-automation interaction is centered on trust, a variable that often determines the willingness of human operators to rely on automation. Studies have utilized a variety of automated systems in diverse experimental paradigms to identify factors that impact operators' trust. METHOD We performed a systematic review of empirical research on trust in automation from January 2002 to June 2013. Papers were deemed eligible only if they reported the results of a human-subjects experiment in which humans interacted with an automated system in order to achieve a goal, and a relationship between trust (or a trust-related behavior) and another variable was measured. Altogether, 101 papers containing 127 eligible studies were included in the review. RESULTS Our analysis revealed three layers of variability in human-automation trust (dispositional trust, situational trust, and learned trust), which we organize into a model. We propose design recommendations for creating trustworthy automation and identify environmental conditions that can affect the strength of the relationship between trust and reliance. Future research directions are also discussed for each layer of trust. CONCLUSION Our three-layered trust model provides a new lens for conceptualizing the variability of trust in automation. Its structure can be applied to guide future research and to develop training interventions and design procedures that encourage appropriate trust.
18. McBride SE, Rogers WA, Fisk AD. Understanding human management of automation errors. Theoretical Issues in Ergonomics Science 2014; 15:545-577. [PMID: 25383042] [PMCID: PMC4221095] [DOI: 10.1080/1463922x.2013.817625]
Abstract
Automation has the potential to aid humans with a diverse set of tasks and to support overall system performance. However, automated systems are not always reliable, and when automation errs, humans must engage in error management: the process of detecting, understanding, and correcting errors. This process of error management in the context of human-automation interaction is not well understood. We therefore conducted a systematic review of the variables that contribute to error management, examining relevant research in human-automation interaction and human error to identify critical automation, person, task, and emergent variables. We propose a framework for the management of automation errors that incorporates and builds upon previous models. Further, our analysis highlights variables that may be addressed through design and training to positively influence error management. Additional efforts to understand the error management process will help ensure that automation is designed and implemented to support safe and effective system performance.
Affiliation(s)
- Sara E McBride, Georgia Institute of Technology, School of Psychology, 654 Cherry Street, Atlanta, GA 30332, USA
- Wendy A Rogers, Georgia Institute of Technology, School of Psychology, 654 Cherry Street, Atlanta, GA 30332, USA
- Arthur D Fisk, Georgia Institute of Technology, School of Psychology, 654 Cherry Street, Atlanta, GA 30332, USA