1. Candrian C. How Terminology Affects Users' Responses to System Failures. Human Factors 2024;66:2082-2103. doi:10.1177/00187208231202572. PMID: 37734726; PMCID: PMC11141081.
Abstract
OBJECTIVE: The objective of our research is to advance the understanding of behavioral responses to a system's error. By examining trust as a dynamic variable and drawing from attribution theory, we explain the underlying mechanism and suggest how terminology can be used to mitigate so-called algorithm aversion. In this way, we show that the use of different terms may shape consumers' perceptions, and we provide guidance on how these differences can be mitigated.
BACKGROUND: Previous research has used various terms interchangeably to refer to a system, and results regarding trust in systems have been ambiguous.
METHODS: Across three studies, we examine the effect of different system terminology on consumer behavior following a system failure.
RESULTS: Our results show that terminology crucially affects user behavior. Describing a system as "AI" (i.e., self-learning and perceived as more complex) instead of as "algorithmic" (i.e., a less complex rule-based system) leads to more favorable behavioral responses by users when a system error occurs.
CONCLUSION: We suggest that in cases where a system's characteristics do not allow for it to be called "AI," users should be given an explanation of why the system's error occurred, and task complexity should be pointed out. We highlight the importance of terminology, as it can unintentionally affect the robustness and replicability of research findings.
APPLICATION: This research offers insights for industries utilizing AI and algorithmic systems, highlighting how strategic terminology use can shape user trust and responses to errors, thereby enhancing system acceptance.
2. Kohn SC, de Visser EJ, Wiese E, Lee YC, Shaw TH. Measurement of Trust in Automation: A Narrative Review and Reference Guide. Front Psychol 2021;12:604977. doi:10.3389/fpsyg.2021.604977. PMID: 34737716; PMCID: PMC8562383.
Abstract
With the rise of automated and autonomous agents, research examining Trust in Automation (TiA) has attracted considerable attention over the last few decades. Trust is a rich and complex construct that has sparked a multitude of measures and approaches to studying and understanding it. This comprehensive narrative review addresses known methods that have been used to capture TiA. We examined measurements deployed in existing empirical works, categorized those measures into self-report, behavioral, and physiological indices, and examined them within the context of an existing model of trust. The resulting work provides a reference guide for researchers, listing the available TiA measurement methods along with the model-derived constructs they capture, including judgments of trustworthiness, trust attitudes, and trusting behaviors. The article concludes with recommendations on how to improve the current state of TiA measurement.
Affiliation(s)
- Ewart J de Visser: Warfighter Effectiveness Research Center, United States Air Force Academy, Colorado Springs, CO, United States
- Eva Wiese: George Mason University, Fairfax, VA, United States
- Yi-Ching Lee: George Mason University, Fairfax, VA, United States
- Tyler H Shaw: George Mason University, Fairfax, VA, United States
3. Stuck RE, Tomlinson BJ, Walker BN. The importance of incorporating risk into human-automation trust. Theoretical Issues in Ergonomics Science 2021. doi:10.1080/1463922x.2021.1975170.
Affiliation(s)
- Rachel E. Stuck: School of Psychology, Georgia Institute of Technology, Atlanta, GA, USA
- Brianna J. Tomlinson: School of Interactive Computing, Georgia Institute of Technology, Atlanta, GA, USA
- Bruce N. Walker: School of Psychology and School of Interactive Computing, Georgia Institute of Technology, Atlanta, GA, USA
4. Huang H, Rau PLP, Ma L. Will you listen to a robot? Effects of robot ability, task complexity, and risk on human decision-making. Adv Robot 2021. doi:10.1080/01691864.2021.1974940.
Affiliation(s)
- Hanjing Huang: School of Economics and Management, Fuzhou University, Fuzhou, People’s Republic of China; Department of Industrial Engineering, Tsinghua University, Beijing, People’s Republic of China
- Pei-Luen Patrick Rau: Department of Industrial Engineering, Tsinghua University, Beijing, People’s Republic of China
- Liang Ma: Department of Industrial Engineering, Tsinghua University, Beijing, People’s Republic of China
5. Loft S, Bhaskara A, Lock BA, Skinner M, Brooks J, Li R, Bell J. The Impact of Transparency and Decision Risk on Human-Automation Teaming Outcomes. Human Factors 2021. doi:10.1177/00187208211033445. PMID: 34340583.
Abstract
OBJECTIVE: Examine the effects of decision risk and automation transparency on the accuracy and timeliness of operator decisions, automation verification rates, and subjective workload.
BACKGROUND: Decision aids typically benefit performance but can provide incorrect advice due to contextual factors, creating the potential for automation disuse or misuse. Decision aids can reduce an operator's manual problem evaluation, and it can also be strategic for operators to minimize verification of automated advice in order to manage workload.
METHOD: Participants assigned the optimal unmanned vehicle to complete missions. A decision aid provided advice but was not always reliable. Two levels of decision aid transparency were manipulated between participants. The risk associated with each decision was manipulated using a financial incentive scheme. Participants could use a calculator to verify automated advice, but doing so incurred a financial penalty.
RESULTS: For high-risk compared with low-risk decisions, participants were more likely to reject incorrect automated advice, were more likely to verify automation, and reported higher workload. Increased transparency did not lead to more accurate decisions and did not impact workload, but it decreased automation verification and eliminated the increased decision time associated with high decision risk.
CONCLUSION: Increased automation transparency was beneficial in that it decreased automation verification and decision time. The increased workload and automation verification for high-risk missions are not necessarily problematic given the improved rate of correctly rejecting incorrect automated advice.
APPLICATION: The findings have potential application to the design of interfaces that improve human-automation teaming and to anticipating the impact of decision risk on operator behavior.
Affiliation(s)
- Shayne Loft: The University of Western Australia, Perth, Australia
- Adella Bhaskara: Defence Science and Technology Group, Melbourne, Australia; The University of Adelaide, Australia
- Michael Skinner: Defence Science and Technology Group, Melbourne, Australia
- James Brooks: Defence Science and Technology Group, Melbourne, Australia
- Ryan Li: Applied Games and Simulations, Perth, Australia
- Jason Bell: The University of Western Australia, Perth, Australia
6. Jin M, Lu G, Chen F, Shi X, Tan H, Zhai J. Modeling takeover behavior in level 3 automated driving via a structural equation model: Considering the mediating role of trust. Accident Analysis and Prevention 2021;157:106156. doi:10.1016/j.aap.2021.106156. PMID: 33957474.
Abstract
The takeover process in level 3 automated driving determines the controllability of automated vehicle functions and thereby traffic safety. In this study, we attempted to explain variation in drivers' takeover performance in a level 3 automated vehicle, considering the effects of trust, system characteristics, environmental characteristics, and driver characteristics, with a structural equation model. The model was built by incorporating drivers' takeover time and quality as endogenous variables. A theoretical framework for the model was hypothesized on the basis of the ACT-R cognitive architecture and relevant research results. The validity of the model was confirmed using data collected from 136 driving simulator samples under the condition of voluntary non-driving-related tasks. Results revealed that takeover time budget was the most critical factor in promoting the safety and stability of the takeover process; together with traffic density, driver age, and manual driving experience, it directly determined drivers' takeover quality. In addition, pre-existing experience with an automated system or a similar technology, the driver's self-confidence, and takeover time budget strongly and directly influenced takeover time. Beyond these direct effects, trust, as a mediating variable, explained a major portion of the variance in takeover time. Theoretically, these findings suggest that takeover behavior can be comprehensively evaluated along the two dimensions of takeover time and quality through the combination of trust, driver characteristics, environmental characteristics, and vehicle characteristics. The influence mechanism of these factors is complex and multidimensional: in addition to direct influence, trust, as a mediating variable, reflects the internal mechanism of takeover behavior variation. Practically, the findings emphasize the crucial role of trust in changes in takeover behavior through the dimensions of subjective trust level and monitoring strategy, which may provide new insights into the design of the takeover process.
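To make the mediation structure concrete, here is a minimal sketch of how such a model could be specified in Python with the semopy library. The variable names, the data file, and the particular paths are hypothetical simplifications of the relationships described in the abstract, not the authors' actual model.

    import pandas as pd
    from semopy import Model

    # Hypothetical paths: trust mediates driver characteristics, while time
    # budget and environment act directly on the endogenous variables.
    DESC = """
    trust ~ prior_experience + self_confidence
    takeover_time ~ trust + time_budget + prior_experience + self_confidence
    takeover_quality ~ time_budget + traffic_density + age + manual_experience
    """

    data = pd.read_csv("simulator_takeover_samples.csv")  # hypothetical dataset
    model = Model(DESC)
    model.fit(data)         # estimates the path coefficients
    print(model.inspect())  # estimates, standard errors, p-values

A specification like this makes both the direct effects and the indirect (trust-mediated) effects on takeover time testable in one model.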
Affiliation(s)
- Mengxia Jin, Guangquan Lu, Facheng Chen, Xi Shi, Haitian Tan, Junda Zhai: School of Transportation Science and Engineering, Beihang University, Beijing 100191, China; Beijing Advanced Innovation Center for Big Data and Brain Computing, Beijing Key Laboratory for Cooperative Vehicle Infrastructure Systems and Safety Control, Beijing 100191, China
7. Guo Y, Yang XJ. Modeling and Predicting Trust Dynamics in Human–Robot Teaming: A Bayesian Inference Approach. Int J Soc Robot 2020. doi:10.1007/s12369-020-00703-3.
Abstract
AbstractTrust in automation, or more recently trust in autonomy, has received extensive research attention in the past three decades. The majority of prior literature adopted a “snapshot” view of trust and typically evaluated trust through questionnaires administered at the end of an experiment. This “snapshot” view, however, does not acknowledge that trust is a dynamic variable that can strengthen or decay over time. To fill the research gap, the present study aims to model trust dynamics when a human interacts with a robotic agent over time. The underlying premise of the study is that by interacting with a robotic agent and observing its performance over time, a rational human agent will update his/her trust in the robotic agent accordingly. Based on this premise, we develop a personalized trust prediction model and learn its parameters using Bayesian inference. Our proposed model adheres to three properties of trust dynamics characterizing human agents’ trust development process de facto and thus guarantees high model explicability and generalizability. We tested the proposed method using an existing dataset involving 39 human participants interacting with four drones in a simulated surveillance mission. The proposed method obtained a root mean square error of 0.072, significantly outperforming existing prediction methods. Moreover, we identified three distinct types of trust dynamics, the Bayesian decision maker, the oscillator, and the disbeliever, respectively. This prediction model can be used for the design of individualized and adaptive technologies.
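As a rough illustration of the idea (an assumption-laden sketch, not the authors' implementation), trust after each interaction can be modeled as the mean of a Beta distribution whose parameters are incremented by observed agent successes and failures. The prior and per-person gain parameters below are hypothetical; in a personalized model they would be fit to each participant's reported trust, for example by minimizing a prediction error such as the RMSE quoted above.

    import numpy as np

    def predict_trust(outcomes, alpha0=2.0, beta0=2.0,
                      gain_success=1.0, gain_failure=1.0):
        """Return a trust estimate after each interaction.

        outcomes: iterable of 1 (agent success) or 0 (agent failure).
        The four parameters are per-person and hypothetical here.
        """
        alpha, beta = alpha0, beta0
        trust = []
        for outcome in outcomes:
            if outcome:
                alpha += gain_success   # successes push trust up
            else:
                beta += gain_failure    # failures pull trust down
            trust.append(alpha / (alpha + beta))  # posterior mean, in [0, 1]
        return np.array(trust)

    # Trust climbs over early successes and dips after the failure on trial 4.
    print(predict_trust([1, 1, 1, 0, 1, 1]))

Asymmetric gains (failures weighted more heavily than successes) would let the same machinery express the oscillator and disbeliever patterns mentioned in the abstract.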
8. Niu J, Geng H, Zhang Y, Du X. Relationship between automation trust and operator performance for the novice and expert in spacecraft rendezvous and docking (RVD). Applied Ergonomics 2018;71:1-8. doi:10.1016/j.apergo.2018.03.014. PMID: 29764609.
Abstract
Operator trust in automation is a crucial factor influencing its use and operational performance. However, the relationship between automation trust and performance remains poorly understood and requires further investigation. The objective of this paper is to explore differences in trust and performance on automation-aided spacecraft rendezvous and docking (RVD) between novices and experts, and to investigate the relationship between automation trust and performance. We employed a two-factor mixed design, with training skill (novice and expert) and automation mode (manual RVD and automation-aided RVD) serving as the two factors. Twenty participants, 10 novices and 10 experts, were recruited to conduct six RVD tasks under the two automation modes. Operator performance was recorded by desktop hand-held docking training equipment, and operator trust was measured with a 12-item questionnaire at the beginning and end of each trial. Automation significantly narrowed the performance gap between novices and experts, and automation trust showed a marginally significant difference between the two groups. Furthermore, the experts' attitude angle control error was related to the total trust score, whereas other automation performance indicators were not. Automation performance was, however, related to individual dimensions of trust, such as the entrust, harmful, and dependable items.
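For illustration, the kind of trust-performance relationship reported above could be tested with a simple correlation; the scores and errors below are invented placeholders, not the study's data.

    import numpy as np
    from scipy.stats import pearsonr

    trust_total = np.array([52, 60, 47, 55, 63, 49, 58, 51, 62, 57])       # 12-item sums (hypothetical)
    attitude_error = np.array([1.8, 1.2, 2.1, 1.6, 1.0, 2.0, 1.3, 1.9,
                               1.1, 1.4])                                  # degrees (hypothetical)

    r, p = pearsonr(trust_total, attitude_error)
    print(f"r = {r:.2f}, p = {p:.3f}")  # negative r: higher trust goes with lower error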
Affiliation(s)
- Jianwei Niu: School of Mechanical Engineering, University of Science and Technology Beijing, Beijing, China
- He Geng: School of Mechanical Engineering, University of Science and Technology Beijing, Beijing, China
- Yijing Zhang: Department of Industrial Engineering, Tsinghua University, Beijing, China
9. Lyons JB, Ho NT, Van Abel AL, Hoffmann LC, Sadler GG, Fergueson WE, Grigsby MA, Wilkins M. Comparing Trust in Auto-GCAS Between Experienced and Novice Air Force Pilots. Ergonomics in Design 2017. doi:10.1177/1064804617716612.
Abstract
We examined F-16 pilots’ trust in the Automatic Ground Collision Avoidance System (Auto-GCAS), an automated system fielded on the F-16 to reduce the occurrence of controlled flight into terrain. We looked at the impact of experience (i.e., number of flight hours) as a predictor of trust perceptions and complacency potential among pilots. We expected that novice pilots would report higher trust and greater potential for complacency in relation to Auto-GCAS, which proved partly true: novice pilots reported trust perceptions equivalent to those of experienced pilots but reported greater complacency potential.
10. Pak R, McLaughlin AC, Leidheiser W, Rovira E. The effect of individual differences in working memory in older adults on performance with different degrees of automated technology. Ergonomics 2017;60:518-532. doi:10.1080/00140139.2016.1189599. PMID: 27409279.
Abstract
A leading hypothesis to explain older adults' overdependence on automation is age-related decline in working memory; however, this hypothesis has not been empirically examined. The purpose of the current experiment was to examine how working memory affected older adults' performance with different degrees of automation. The well-supported expectation is that higher degrees of automation benefit performance when the automation is correct but increasingly harm performance when the automation fails. In contrast, older adults benefited from higher degrees of automation when the automation was correct but were not differentially harmed by automation failures. Surprisingly, working memory did not interact with degree of automation but did interact with automation correctness: when automation was correct, older adults with higher working memory ability performed better than those with lower ability, but when automation was incorrect, all older adults performed poorly regardless of working memory ability. Practitioner Summary: The design of automation intended for older adults should focus on ways of making the correctness of the automation apparent to the older user and on ways of helping them recover when it is malfunctioning.
Affiliation(s)
- Richard Pak: Department of Psychology, Clemson University, Clemson, SC, USA
- Ericka Rovira: Department of Behavioral Sciences & Leadership, U.S. Military Academy, West Point, NY, USA
11. Schaefer KE, Chen JYC, Szalma JL, Hancock PA. A Meta-Analysis of Factors Influencing the Development of Trust in Automation: Implications for Understanding Autonomy in Future Systems. Human Factors 2016;58:377-400. doi:10.1177/0018720816634228. PMID: 27005902.
Abstract
OBJECTIVE: We used meta-analysis to assess research concerning human trust in automation to understand the foundation upon which future autonomous systems can be built.
BACKGROUND: Trust is increasingly important given the growing need for synergistic human-machine teaming. Thus, we expand our previous meta-analytic foundation in the field of human-robot interaction to cover all of automation interaction.
METHOD: We used meta-analysis to assess trust in automation. Thirty studies provided 164 pairwise effect sizes, and 16 studies provided 63 correlational effect sizes.
RESULTS: The overall effect size of all factors on trust development was ḡ = +0.48, and the correlational effect was r̄ = +0.34, each representing a medium effect. Moderator effects were observed for the human-related (ḡ = +0.49; r̄ = +0.16) and automation-related (ḡ = +0.53; r̄ = +0.41) factors. Moderator effects specific to environmental factors were too few in number to calculate at this time.
CONCLUSION: The findings provide a quantitative representation of factors influencing the development of trust in automation and identify additional areas needing empirical research.
APPLICATION: This work has important implications for enhancing current and future human-automation interaction, especially in high-risk or extreme performance environments.
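For readers unfamiliar with the notation, ḡ is a weighted mean of study-level Hedges' g values and r̄ the analogous mean correlation. The textbook definition of Hedges' g (stated here for reference, not taken from the paper) is the small-sample-corrected standardized mean difference:

    g = J \cdot \frac{\bar{X}_1 - \bar{X}_2}{s_p},
    \qquad
    s_p = \sqrt{\frac{(n_1 - 1)s_1^2 + (n_2 - 1)s_2^2}{n_1 + n_2 - 2}},
    \qquad
    J = 1 - \frac{3}{4(n_1 + n_2 - 2) - 1}

where X̄₁ and X̄₂ are the group means, s_p is the pooled standard deviation, and J corrects the small-sample bias of Cohen's d.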
Affiliation(s)
- James L Szalma: U.S. Army Research Laboratory, Aberdeen Proving Ground, Maryland; U.S. Army Research Laboratory, Orlando, Florida; University of Central Florida, Orlando
12. Pak R, Rovira E, McLaughlin AC, Baldwin N. Does the domain of technology impact user trust? Investigating trust in automation across different consumer-oriented domains in young adults, military, and older adults. Theoretical Issues in Ergonomics Science 2016. doi:10.1080/1463922x.2016.1175523.
13. Barg-Walkow LH, Rogers WA. The Effect of Incorrect Reliability Information on Expectations, Perceptions, and Use of Automation. Human Factors 2016;58:242-260. doi:10.1177/0018720815610271. PMID: 26519483; PMCID: PMC10664720.
Abstract
OBJECTIVE: We examined how providing artificially high or low statements about automation reliability affected expectations, perceptions, and use of automation over time.
BACKGROUND: One common method of introducing automation is providing explicit statements about the automation's capabilities. Research is needed to understand how expectations set by such introductions affect perceptions and use of automation.
METHOD: Explicit-statement introductions were manipulated to set higher-than (90%), same-as (75%), or lower-than (60%) levels of expectation in a dual-task scenario with 75% reliable automation. Two experiments assessed expectations, perceptions, compliance, reliance, and task performance over (a) 2 days and (b) 4 days.
RESULTS: The baseline assessments showed that initial expectations of automation reliability matched the introduced levels. For the duration of each experiment, the lower-than groups' perceptions remained below the actual automation reliability, whereas the higher-than groups' perceptions were no different from actual automation reliability after Day 1 in either study. There were few differences between groups in automation use, which generally stayed the same or increased with experience using the system.
CONCLUSION: Introductory statements describing artificially low automation reliability have a long-lasting impact on perceptions of automation performance. Statements with incorrect automation reliability do not appear to affect use of the automation.
APPLICATION: Introductions should be designed according to the desired outcomes for expectations, perceptions, and use of the automation. Low expectations have long-lasting effects.
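A toy simulation of the anchoring idea (purely illustrative assumptions, not the study's model): perceived reliability starts at the stated level and is nudged toward the experienced 75% hit rate on each trial. A belief that survives extended experience, as the lower-than group's did, implies something stickier than this simple error-driven updating.

    import random

    def perceived_reliability(stated, actual=0.75, trials=200, learning_rate=0.02):
        """Delta-rule update of a reliability belief anchored at the stated level."""
        belief = stated
        for _ in range(trials):
            outcome = 1.0 if random.random() < actual else 0.0  # did the aid succeed?
            belief += learning_rate * (outcome - belief)        # nudge toward experience
        return belief

    random.seed(1)
    for stated in (0.90, 0.75, 0.60):
        print(f"stated {stated:.0%} -> perceived {perceived_reliability(stated):.2f}")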
14. Hoff KA, Bashir M. Trust in automation: integrating empirical evidence on factors that influence trust. Human Factors 2015;57:407-434. doi:10.1177/0018720814547570. PMID: 25875432.
Abstract
OBJECTIVE: We systematically review recent empirical research on factors that influence trust in automation and present a three-layered trust model that synthesizes existing knowledge.
BACKGROUND: Much of the existing research on factors that guide human-automation interaction centers on trust, a variable that often determines the willingness of human operators to rely on automation. Studies have utilized a variety of automated systems in diverse experimental paradigms to identify factors that impact operators' trust.
METHOD: We performed a systematic review of empirical research on trust in automation from January 2002 to June 2013. Papers were deemed eligible only if they reported the results of a human-subjects experiment in which humans interacted with an automated system in order to achieve a goal, and a relationship between trust (or a trust-related behavior) and another variable had to be measured. Altogether, 101 papers containing 127 eligible studies were included in the review.
RESULTS: Our analysis revealed three layers of variability in human-automation trust (dispositional trust, situational trust, and learned trust), which we organize into a model. We propose design recommendations for creating trustworthy automation and identify environmental conditions that can affect the strength of the relationship between trust and reliance. Future research directions are also discussed for each layer of trust.
CONCLUSION: Our three-layered trust model provides a new lens for conceptualizing the variability of trust in automation. Its structure can be applied to guide future research and to develop training interventions and design procedures that encourage appropriate trust.
15. McBride SE, Rogers WA, Fisk AD. Understanding human management of automation errors. Theoretical Issues in Ergonomics Science 2014;15:545-577. doi:10.1080/1463922x.2013.817625. PMID: 25383042; PMCID: PMC4221095.
Abstract
Automation has the potential to aid humans with a diverse set of tasks and to support overall system performance. Automated systems are not always reliable, however, and when automation errs, humans must engage in error management: the process of detecting, understanding, and correcting errors. This process in the context of human-automation interaction is not well understood, so we conducted a systematic review of the variables that contribute to error management. We examined relevant research in human-automation interaction and human error to identify critical automation, person, task, and emergent variables. We propose a framework for the management of automation errors that incorporates and builds upon previous models. Further, our analysis highlights variables that may be addressed through design and training to positively influence error management. Additional efforts to understand the error management process will contribute to automation designed and implemented to support safe and effective system performance.
Affiliation(s)
- Sara E McBride, Wendy A Rogers, Arthur D Fisk: School of Psychology, Georgia Institute of Technology, 654 Cherry Street, Atlanta, GA 30332, USA
16. Pak R, Fink N, Price M, Bass B, Sturre L. Decision support aids with anthropomorphic characteristics influence trust and performance in younger and older adults. Ergonomics 2012;55:1059-1072. doi:10.1080/00140139.2012.691554. PMID: 22799560.
Abstract
This study examined the effects of deliberately anthropomorphic automation on younger and older adults' trust, dependence, and performance in a diabetes decision-making task. Research with anthropomorphic interface agents has shown mixed effects on judgments of preference but has rarely examined effects on performance, while research in automation has shown that some forms of anthropomorphism (e.g., etiquette) affect trust in and dependence on automation. Participants answered diabetes questions with no aid, a non-anthropomorphic aid, or an anthropomorphised aid, and trust in and dependence on the aid were measured. A minimally anthropomorphic aid primarily affected younger adults' trust in the aid; dependence for both age groups, however, was influenced by the anthropomorphic aid. Automation that deliberately embodies person-like characteristics can thus influence trust in and dependence on reasonably reliable automation, although further research is necessary to better understand the specific aspects of the aid that affect different age groups. Automation that embodies human-like characteristics may be useful in situations where reasonably reliable aids are under-utilised, by enhancing trust and dependence in those aids. Practitioner Summary: The design of decision-support aids on consumer devices (e.g. smartphones) may influence the level of trust that users place in the system and their amount of use. This study is a first step in articulating how the design of aids may influence users' trust and use of such systems.
Affiliation(s)
- Richard Pak: Department of Psychology, Clemson University, Clemson, SC, USA
17.
Abstract
Consistent with technological advances, the role of the operator in many human factors domains has evolved from one characterized primarily by sensory and motor skills to one characterized primarily by cognitive skills and decision making. Decision making is a primary component in problem solving, human-automation interaction, response to alarms and warnings, and error mitigation. In this chapter we discuss decision making in terms of both front-end judgment processes (e.g., attending to and evaluating the significance of cues and information, formulating a diagnosis, or assessing the situation) and back-end decision processes (e.g., retrieving a course of action, weighing one's options, or mentally simulating a possible response). Two important metatheories—correspondence (empirical accuracy) and coherence (rationality and consistency)—provide ways to assess the goodness of each phase (e.g., Hammond, 1996, 2000; Mosier, 2009). We present several models of decision making, including Brunswik's lens model, naturalistic decision making, and decision ladders, and discuss them in terms of their point of focus and their primary strategies and goals. Next, we turn the discussion to layers in the decision context: individual variables, team decision making, technology, and organizational influences. Last, we focus on applications and lessons learned: investigating, enhancing, designing, and training for decision making. Drawing heavily on sources such as the Human Factors journal, we present recent human factors research exploring these issues, models, and applications.