1
Gegoff I, Tatasciore M, Bowden V. Transparent Automated Advice to Mitigate the Impact of Variation in Automation Reliability. HUMAN FACTORS 2024; 66:2008-2024. PMID: 37635389; PMCID: PMC11141097; DOI: 10.1177/00187208231196738
Abstract
OBJECTIVE To examine the extent to which increased automation transparency can mitigate the potential negative effects of low and high automation reliability on disuse and misuse of automated advice, and perceived trust in automation. BACKGROUND Automated decision aids that vary in the reliability of their advice are increasingly used in workplaces. Low-reliability automation can increase disuse of automated advice, while high-reliability automation can increase misuse. These effects could be reduced if the rationale underlying automated advice is made more transparent. METHODS Participants selected the optimal UV to complete missions. The Recommender (automated decision aid) assisted participants by providing advice; however, it was not always reliable. Participants determined whether the Recommender provided accurate information and whether to accept or reject advice. The level of automation transparency (medium, high) and reliability (low: 65%, high: 90%) were manipulated between-subjects. RESULTS With high- compared to low-reliability automation, participants made more accurate (correctly accepted advice and identified whether information was accurate/inaccurate) and faster decisions, and reported increased trust in automation. Increased transparency led to more accurate and faster decisions, lower subjective workload, and higher usability ratings. It also eliminated the increased automation disuse associated with low-reliability automation. However, transparency did not mitigate the misuse associated with high-reliability automation. CONCLUSION Transparency protected against low-reliability automation disuse, but not against the increased misuse potentially associated with the reduced monitoring and verification of high-reliability automation. APPLICATION These outcomes can inform the design of transparent automation to improve human-automation teaming under conditions of varied automation reliability.
Affiliation(s)
- Vanessa Bowden
- The University of Western Australia, Perth, WA, Australia
2
Li M, Guo F, Li Z, Ma H, Duffy VG. Interactive effects of users' openness and robot reliability on trust: evidence from psychological intentions, task performance, visual behaviours, and cerebral activations. ERGONOMICS 2024:1-21. PMID: 38635303; DOI: 10.1080/00140139.2024.2343954
Abstract
Although trust plays a vital role in human-robot interaction, there is currently a dearth of literature examining the effect of users' openness personality on trust in actual interaction. This study aims to investigate the interaction effects of users' openness and robot reliability on trust. We designed a voice-based walking task and collected subjective trust ratings, task metrics, eye-tracking data, and fNIRS signals from users with different openness to unravel the psychological intentions, task performance, visual behaviours, and cerebral activations underlying trust. The results showed significant interaction effects. Users with low openness exhibited lower subjective trust, more fixations, and higher activation of rTPJ in the highly reliable condition than those with high openness. The results suggested that users with low openness might be more cautious and suspicious about the highly reliable robot and allocate more visual attention and neural processing to monitor and infer robot status than users with high openness.
Affiliation(s)
- Mingming Li
- Department of Industrial Engineering, College of Management Science and Engineering, Anhui University of Technology, Maanshan, China
- Department of Industrial Engineering, School of Business Administration, Northeastern University, Shenyang, China
- Fu Guo
- Department of Industrial Engineering, School of Business Administration, Northeastern University, Shenyang, China
- Zhixing Li
- Department of Industrial Engineering, School of Business Administration, Northeastern University, Shenyang, China
- Haiyang Ma
- Department of Industrial Engineering, School of Business Administration, Northeastern University, Shenyang, China
- Vincent G Duffy
- School of Industrial Engineering, Purdue University, West Lafayette, IN, USA
3
Strickland L, Farrell S, Wilson MK, Hutchinson J, Loft S. How do humans learn about the reliability of automation? Cogn Res Princ Implic 2024; 9:8. PMID: 38361149; PMCID: PMC10869332; DOI: 10.1186/s41235-024-00533-1
Abstract
In a range of settings, human operators make decisions with the assistance of automation, the reliability of which can vary depending upon context. Currently, the processes by which humans track the level of reliability of automation are unclear. In the current study, we test cognitive models of learning that could potentially explain how humans track automation reliability. We fitted several alternative cognitive models to a series of participants' judgements of automation reliability observed in a maritime classification task in which participants were provided with automated advice. We examined three experiments including eight between-subjects conditions and 240 participants in total. Our results favoured a two-kernel delta-rule model of learning, which specifies that humans learn by prediction error, and respond according to a learning rate that is sensitive to environmental volatility. However, we found substantial heterogeneity in learning processes across participants. These outcomes speak to the learning processes underlying how humans estimate automation reliability and thus have implications for practice.
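The delta-rule learning favoured by this model comparison can be illustrated with a minimal sketch. This is an illustrative simplification with made-up parameter values, not the authors' fitted model: it blends a fast and a slow prediction-error kernel into a single reliability estimate, whereas the fitted model additionally adapts its weighting to environmental volatility.

```python
def delta_rule_estimates(outcomes, fast=0.4, slow=0.05, w_fast=0.5):
    """Blend a fast and a slow delta-rule kernel into one reliability estimate.

    outcomes: iterable of 1 (automation advice was correct) or 0 (incorrect).
    Each kernel updates by prediction error: est += rate * (outcome - est).
    """
    est_fast = est_slow = 0.5  # uninformative prior before any advice is seen
    trace = []
    for correct in outcomes:
        est_fast += fast * (correct - est_fast)   # reacts quickly to recent advice
        est_slow += slow * (correct - est_slow)   # tracks the long-run rate
        trace.append(w_fast * est_fast + (1 - w_fast) * est_slow)
    return trace
```

Because the fast kernel dominates immediately after an error, a learner like this reproduces the recency effects reported elsewhere in this list: a single incorrect piece of advice sharply lowers the next reliability judgement.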
Affiliation(s)
- Luke Strickland
- The Future of Work Institute, Curtin University, 78 Murray Street, Perth, 6000, Australia
- Simon Farrell
- The School of Psychological Science, The University of Western Australia, Crawley, Perth, Australia
- Micah K Wilson
- The Future of Work Institute, Curtin University, 78 Murray Street, Perth, 6000, Australia
- Jack Hutchinson
- The School of Psychological Science, The University of Western Australia, Crawley, Perth, Australia
- Shayne Loft
- The School of Psychological Science, The University of Western Australia, Crawley, Perth, Australia
4
Hutchinson J, Strickland L, Farrell S, Loft S. The Perception of Automation Reliability and Acceptance of Automated Advice. HUMAN FACTORS 2023; 65:1596-1612. PMID: 34979821; DOI: 10.1177/00187208211062985
Abstract
OBJECTIVE Examine (1) the extent to which humans can accurately estimate automation reliability and calibrate to changes in reliability, and how this is impacted by the recent accuracy of automation; and (2) factors that impact the acceptance of automated advice, including true automation reliability, reliability perception, and the difference between an operator's perception of automation reliability and perception of their own reliability. BACKGROUND Existing evidence suggests humans can adapt to changes in automation reliability but generally underestimate reliability. Cognitive science indicates that humans heavily weight evidence from more recent experiences. METHOD Participants monitored the behavior of maritime vessels (contacts) in order to classify them, and then received advice from automation regarding classification. Participants were assigned to either an initially high (90%) or low (60%) automation reliability condition. After some time, reliability switched to 75% in both conditions. RESULTS Participants initially underestimated automation reliability. After the change in true reliability, estimates in both conditions moved towards the common true reliability, but did not reach it. There were recency effects, with lower future reliability estimates immediately following incorrect automation advice. With lower initial reliability, automation acceptance rates tracked true reliability more closely than perceived reliability. A positive difference between participant assessments of the reliability of automation and their own reliability predicted greater automation acceptance. CONCLUSION Humans underestimate the reliability of automation, and we have demonstrated several critical factors that impact the perception of automation reliability and automation use. APPLICATION The findings have potential implications for training and adaptive human-automation teaming.
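The final predictor reported above (a positive difference between perceived automation reliability and perceived self-reliability predicting acceptance) can be expressed as a minimal decision rule. The threshold parameter is a hypothetical addition for illustration; the paper reports a graded statistical relationship, not a hard rule:

```python
def accepts_advice(perceived_automation, perceived_self, threshold=0.0):
    """Illustrative rule: accept automated advice when the operator judges
    the automation sufficiently more reliable than their own classifications."""
    return (perceived_automation - perceived_self) > threshold
```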
Affiliation(s)
- Simon Farrell
- The University of Western Australia, Perth, WA, Australia
- Shayne Loft
- The University of Western Australia, Perth, WA, Australia
5
Merriman SE, Revell KMA, Plant KL. Training for the safe activation of Automated Vehicles matters: Revealing the benefits of online training to creating better mental models and behaviour. APPLIED ERGONOMICS 2023; 112:104057. PMID: 37285640; DOI: 10.1016/j.apergo.2023.104057
Abstract
Automated Vehicle (AV) systems are expected to reduce the frequency and severity of on-road collisions. Unless drivers have an appropriate mental model for the capabilities and limitations of the automation, they may not activate the automation safely or appropriately on the road, potentially leading to a collision. As such, a training package (L4DTP) was developed to improve drivers' decisions and behaviour when activating an AV system and this was evaluated in a between-subjects simulator experiment. Drivers received no training (NT, control group), read an owner's manual (OM, experimental group 1: current training provision) or underwent the L4DTP (experimental group 2: new training programme). All drivers then experienced five scenarios in a driving simulator where they encountered road conditions which were safe and unsafe for activation. Their activation decisions, behaviour, trust in automation, workload and mental models were measured. This experiment found that drivers who read the OM or underwent the L4DTP made better activation decisions and showed better activation behaviour compared to drivers who received NT. Additionally, drivers who underwent the L4DTP found it easier, less demanding and felt under less time pressure when making their decisions, had to expend less effort to reach the same activation performance and had more appropriate and comprehensive mental models for when the automation can be activated compared to drivers who read the OM. This L4DTP can make roads safer by reducing collisions linked to poor activation decisions and behaviour. Therefore, there is the potential for a real benefit for society if this training programme is adopted into mandatory AV driver training.
Affiliation(s)
- Siobhan E Merriman
- Human Factors Engineering, Transportation Research Group, Faculty of Engineering and Physical Sciences, Boldrewood Innovation Campus, University of Southampton, Burgess Road, Southampton, SO16 7QF, UK
- Kirsten M A Revell
- Human Factors Engineering, Transportation Research Group, Faculty of Engineering and Physical Sciences, Boldrewood Innovation Campus, University of Southampton, Burgess Road, Southampton, SO16 7QF, UK
- Katherine L Plant
- Human Factors Engineering, Transportation Research Group, Faculty of Engineering and Physical Sciences, Boldrewood Innovation Campus, University of Southampton, Burgess Road, Southampton, SO16 7QF, UK
6
Avril E. Providing different levels of accuracy about the reliability of automation to a human operator: impact on human performance. ERGONOMICS 2023; 66:217-226. PMID: 35451925; DOI: 10.1080/00140139.2022.2069870
Abstract
Previous research has suggested that supervising automation can lead to a decrease in human performance, especially when automation is not totally reliable. Providing context-related information about reliability can help operators to better adjust their behaviour in a human-automation interaction context. However, previous studies have not specified the level of accuracy that this information should provide to the human operator. The objective of this study was to investigate the effects of different levels of information accuracy about an automation's reliability on human performance. Results showed that accurate information about reliability improved performance when specific reliability percentages were given to participants. Participants performed best in the high-accuracy information condition. A link between perceived reliability and trust was found: the more trust in automation increased, the more perceived reliability increased.
Practitioner summary: The experiment examined how accurate information about an automation's reliability influences people's performance when supervising an automated task. Overall, this research suggests that designing systems that provide accurate, useful information can reduce the frequency of automation bias. Trust and perceived reliability of automation are related.
Abbreviations: MATB: Multi-Attribute Task Battery.
Affiliation(s)
- Eugénie Avril
- Department of Psychology, Université Côte d'Azur, LAPCOS, Nice, France
7
Gain-loss separability in human- but not computer-based changes of mind. COMPUTERS IN HUMAN BEHAVIOR 2023. DOI: 10.1016/j.chb.2023.107712
8
Hutchinson J, Strickland L, Farrell S, Loft S. Human behavioral response to fluctuating automation reliability. APPLIED ERGONOMICS 2022; 105:103835. PMID: 35797914; DOI: 10.1016/j.apergo.2022.103835
Abstract
Human perception of automation reliability and automation acceptance behaviours are key to effective human-automation teaming. This study examined factors that impact perceptions of automation reliability over time and the acceptance of automated advice. Participants completed a maritime vessel classification task in which they classified vessels (contacts) with the assistance of automation. In Experiment 1 automation reliability successively switched from high to low (or vice versa). In Experiment 2 automation reliability decreased by varying magnitudes before returning to high. Participants did not initially calibrate to true reliability, and experiencing low automation reliability reduced reliability estimates even during subsequent periods of high reliability. Automation acceptance was predicted by positive differences between participant perception of automation reliability and confidence in their own manual classification reliability. Experiencing low automation reliability caused perceptions of reliability and automation acceptance rates to diverge. These findings have important implications for training and adaptive human-automation teaming in complex work environments.
Affiliation(s)
- Shayne Loft
- The University of Western Australia, Australia
9
Long SK, Lee J, Yamani Y, Unverricht J, Itoh M. Does automation trust evolve from a leap of faith? An analysis using a reprogrammed pasteurizer simulation task. APPLIED ERGONOMICS 2022; 100:103674. PMID: 35026680; DOI: 10.1016/j.apergo.2021.103674
Abstract
Trust is a critical factor that drives successful human-automation interaction in a myriad of modern professional environments. One seminal work on human-automation trust is Muir and Moray (1996), which showed that human-machine trust evolves from faith, then dependability, and finally predictability in a simulated supervisory control task. However, our recent work failed to replicate the findings of the original study, calling for further replication efforts. Experiment 1 aimed to fully replicate Muir and Moray (1996), with participants performing a simulated pasteurizer task. Experiment 2 attempted to replicate Experiment 1 using Engineering majors, as in the original study. Both experiments showed that dependability was the best initial predictor of trust, building later to predictability and faith. The two experiments consistently failed to support both the hypothesis proposed by Muir and Moray (1996), that trust develops from predictability to dependability to faith, and their original findings that trust develops initially from faith. The results of the current experiments challenge this widely cited view of how human-machine trust develops. Modern automation designers should be aware that dependability might control initial trust development for general users and incorporate dependability information into their designs.
10
Huang H, Rau PLP, Ma L. Will you listen to a robot? Effects of robot ability, task complexity, and risk on human decision-making. Adv Robot 2021. DOI: 10.1080/01691864.2021.1974940
Affiliation(s)
- Hanjing Huang
- School of Economics and Management, Fuzhou University, Fuzhou, People’s Republic of China
- Department of Industrial Engineering, Tsinghua University, Beijing, People’s Republic of China
- Pei-Luen Patrick Rau
- Department of Industrial Engineering, Tsinghua University, Beijing, People’s Republic of China
- Liang Ma
- Department of Industrial Engineering, Tsinghua University, Beijing, People’s Republic of China
11
Parmar S, Thomas RP. Effects of Probabilistic Risk Situation Awareness Tool (RSAT) on Aeronautical Weather-Hazard Decision Making. Front Psychol 2021; 11:566780. PMID: 33391082; PMCID: PMC7772149; DOI: 10.3389/fpsyg.2020.566780
Abstract
We argue that providing cumulative risk as an estimate of the uncertainty in dynamically changing risky environments can help decision-makers meet mission-critical goals. Specifically, we constructed a simplified aviation-like weather decision-making task incorporating Next-Generation Radar (NEXRAD) images of convective weather. NEXRAD radar images provide information about geographically referenced precipitation. NEXRAD radar images are used by both pilots and laypeople to support decision-making about the level of risk posed by future weather-hazard movements. Using NEXRAD, people and professionals have to infer the uncertainty in the meteorological information to understand current hazards and extrapolate future conditions. Recent advancements in meteorology modeling afford the possibility of providing uncertainty information concerning hazardous weather for the current flight. Although there are systematic biases that plague people’s use of uncertainty information, there is evidence that presenting forecast uncertainty can improve weather-related decision-making. The current study augments NEXRAD by providing flight-path risk, referred to as the Risk Situational Awareness Tool (RSAT). RSAT provides the probability that a route will come within 20 NMI radius (FAA recommended safety distance) of hazardous weather within the next 45 min of flight. The study evaluates four NEXRAD displays integrated with RSAT, providing varying levels of support. The “no” support condition has no RSAT (the NEXRAD only condition). The “baseline” support condition employs an RSAT whose accuracy is consistent with current capability in meteorological modeling. The “moderate” support condition applies an RSAT whose accuracy is likely at the top of what is achievable in meteorology in the near future. The “high” support condition provides a level of support that is likely unachievable in an aviation weather decision-making context without considerable technological innovation. 
The results indicate that the operators relied on the RSAT and improved their performance as a consequence. We discuss the implications of the findings for the safe introduction of probabilistic tools in future general aviation cockpits and other dynamic decision-making contexts. Moreover, we discuss how the results contribute to research in the fields of dynamic risk and uncertainty, risk situation awareness, cumulative risk, and risk communication.
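Cumulative risk of the kind RSAT communicates (the probability that a route comes within the safety distance of hazardous weather at some point in the flight window) can be sketched from per-interval hazard probabilities. Assuming independence across intervals is a simplification of this sketch, not necessarily a property of the actual tool:

```python
def cumulative_risk(interval_probs):
    """P(at least one hazard encounter over the window)
    = 1 - product of per-interval survival probabilities (independence assumed)."""
    survival = 1.0
    for p in interval_probs:
        survival *= (1.0 - p)
    return 1.0 - survival
```

For example, three consecutive intervals each carrying a 10% encounter probability accumulate to about a 27% risk over the window, which is the kind of aggregation that is hard for operators to do mentally from a radar image alone.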
Affiliation(s)
- Sweta Parmar
- Decision Processes Lab, School of Psychology, Georgia Institute of Technology, Atlanta, GA, United States
- Rickey P Thomas
- Decision Processes Lab, School of Psychology, Georgia Institute of Technology, Atlanta, GA, United States
12
Wohleber RW, Matthews G, Lin J, Szalma JL, Calhoun GL, Funke GJ, Chiu CYP, Ruff HA. Vigilance and Automation Dependence in Operation of Multiple Unmanned Aerial Systems (UAS): A Simulation Study. HUMAN FACTORS 2019; 61:488-505. PMID: 30265579; DOI: 10.1177/0018720818799468
Abstract
OBJECTIVE This simulation study investigated factors influencing sustained performance and fatigue during operation of multiple Unmanned Aerial Systems (UAS). The study tested effects of time-on-task and automation reliability on accuracy in surveillance tasks and dependence on automation. It also investigated the role of trait and state individual difference factors. BACKGROUND Warm's resource model of vigilance has been highly influential in human factors, but further tests of its applicability to complex, real-world tasks requiring sustained attention are necessary. Multi-UAS operation differs from standard vigilance paradigms in that the operator must switch attention between multiple subtasks, with support from automation. METHOD 131 participants performed surveillance tasks requiring signal discrimination and symbol counting with a multi-UAS simulation configured to impose low cognitive demands, for 2 hr. Automation reliability was manipulated between-groups. Five Factor Model personality traits were measured prior to performance. Subjective states were assessed with the Dundee Stress State Questionnaire. RESULTS Performance accuracy on the more demanding surveillance task showed a vigilance decrement, especially when automation reliability was low. Dependence on automation on this task declined over time. State but not trait factors predicted performance. High distress was associated with poorer performance in more demanding task conditions. CONCLUSIONS Vigilance decrement may be an operational issue for multi-UAS surveillance missions. Warm's resource theory may require modification to incorporate changes in information processing and task strategy associated with multitasking in low-workload, fatiguing environments. APPLICATION Interface design and operator evaluation for multi-UAS operations should address issues including motivation, stress, and sustaining attention to automation.
Affiliation(s)
- Gregory J Funke
- Air Force Research Laboratory, Wright-Patterson AFB, OH, USA
13
Nakayama S, Diner D, Holland JG, Bloch G, Porfiri M, Nov O. The Influence of Social Information and Self-expertise on Emergent Task Allocation in Virtual Groups. Front Ecol Evol 2018. DOI: 10.3389/fevo.2018.00016
14
Körber M, Baseler E, Bengler K. Introduction matters: Manipulating trust in automation and reliance in automated driving. APPLIED ERGONOMICS 2018; 66:18-31. PMID: 28958427; DOI: 10.1016/j.apergo.2017.07.006
Abstract
Trust in automation is a key determinant for the adoption of automated systems and their appropriate use. Therefore, it constitutes an essential research area for the introduction of automated vehicles to road traffic. In this study, we investigated the influence of trust promoting (Trust promoted group) and trust lowering (Trust lowered group) introductory information on reported trust, reliance behavior and take-over performance. Forty participants encountered three situations in a 17-min highway drive in a conditionally automated vehicle (SAE Level 3). Situation 1 and Situation 3 were non-critical situations where a take-over was optional. Situation 2 represented a critical situation where a take-over was necessary to avoid a collision. A non-driving-related task (NDRT) was presented between the situations to record the allocation of visual attention. Participants reporting a higher trust level spent less time looking at the road or instrument cluster and more time looking at the NDRT. The manipulation of introductory information resulted in medium differences in reported trust and influenced participants' reliance behavior. Participants in the Trust promoted group looked less at the road or instrument cluster and more at the NDRT. Participants in the Trust promoted group were 3.65 times (Situation 1) to 5 times (Situation 3) more likely to overrule the automated driving system in the non-critical situations. In Situation 2, the Trust promoted group's mean take-over time was extended by 1154 ms and the mean minimum time-to-collision was 933 ms shorter. Six participants from the Trust promoted group, compared with none from the Trust lowered group, collided with the obstacle. The results demonstrate that the individual trust level influences how much drivers monitor the environment while performing an NDRT. Introductory information influences this trust level, reliance on an automated driving system, and whether a critical take-over situation can be successfully resolved.
Affiliation(s)
- Moritz Körber
- Technical University of Munich, Boltzmannstraße 15, D-85747, Garching, Germany
- Eva Baseler
- Technical University of Munich, Boltzmannstraße 15, D-85747, Garching, Germany
- Klaus Bengler
- Technical University of Munich, Boltzmannstraße 15, D-85747, Garching, Germany
15
Pak R, McLaughlin AC, Leidheiser W, Rovira E. The effect of individual differences in working memory in older adults on performance with different degrees of automated technology. ERGONOMICS 2017; 60:518-532. PMID: 27409279; DOI: 10.1080/00140139.2016.1189599
Abstract
A leading hypothesis to explain older adults' overdependence on automation is age-related declines in working memory. However, this hypothesis has not been empirically examined. The purpose of the current experiment was to examine how working memory affected performance with different degrees of automation in older adults. In contrast to the well-supported idea that higher degrees of automation benefit performance when the automation is correct but increasingly harm performance when the automation fails, older adults benefited from higher degrees of automation when the automation was correct but were not differentially harmed by automation failures. Surprisingly, working memory did not interact with degree of automation but did interact with automation correctness or failure. When automation was correct, older adults with higher working memory ability had better performance than those with lower abilities. But when automation was incorrect, all older adults, regardless of working memory ability, performed poorly.
Practitioner Summary: The design of automation intended for older adults should focus on ways of making the correctness of the automation apparent to the older user and on ways of helping them recover when it is malfunctioning.
Affiliation(s)
- Richard Pak
- Department of Psychology, Clemson University, Clemson, SC, USA
- Ericka Rovira
- Department of Behavioral Sciences & Leadership, U.S. Military Academy, West Point, NY, USA
16
Reiner AJ, Hollands JG, Jamieson GA. Target Detection and Identification Performance Using an Automatic Target Detection System. HUMAN FACTORS 2017; 59:242-258. PMID: 27738280; DOI: 10.1177/0018720816670768
Abstract
OBJECTIVE We investigated the effects of automatic target detection (ATD) on the detection and identification performance of soldiers. BACKGROUND Prior studies have shown that highlighting targets can aid their detection. We provided soldiers with ATD that was more likely to detect one target identity than another, potentially acting as an implicit identification aid. METHOD Twenty-eight soldiers detected and identified simulated human targets in an immersive virtual environment with and without ATD. Task difficulty was manipulated by varying scene illumination (day, night). The ATD identification bias was also manipulated (hostile bias, no bias, and friendly bias). We used signal detection measures to treat the identification results. RESULTS ATD presence improved detection performance, especially under high task difficulty (night illumination). Identification sensitivity was greater for cued than uncued targets. The identification decision criterion for cued targets varied with the ATD identification bias but showed a "sluggish beta" effect. CONCLUSION ATD helps soldiers detect and identify targets. The effects of biased ATD on identification should be considered with respect to the operational context. APPLICATION Less-than-perfectly-reliable ATD is a useful detection aid for dismounted soldiers. Disclosure of known ATD identification bias to the operator may aid the identification process.
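The signal detection measures used to treat the identification results can be computed from hit and false-alarm rates under the standard equal-variance Gaussian model. This is the conventional formulation, not necessarily the paper's exact computation:

```python
from statistics import NormalDist

def sdt_measures(hit_rate, fa_rate):
    """Equal-variance Gaussian SDT:
    sensitivity d' = z(H) - z(FA); decision criterion c = -(z(H) + z(FA)) / 2.
    Rates must be strictly between 0 and 1 (apply a correction to 0/1 rates first)."""
    z = NormalDist().inv_cdf
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -(z(hit_rate) + z(fa_rate)) / 2
    return d_prime, criterion
```

A criterion that shifts in the direction of the ATD identification bias, but by less than the statistically optimal amount, is the "sluggish beta" pattern reported above.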
17
Navarro J, Deniel J, Yousfi E, Jallais C, Bueno M, Fort A. Influence of lane departure warnings onset and reliability on car drivers' behaviors. APPLIED ERGONOMICS 2017; 59:123-131. PMID: 27890120; DOI: 10.1016/j.apergo.2016.08.010
Abstract
Lane departures represent an important cause of road crashes. The objective of the present study was to assess the effects of an auditory Lane Departure Warning System (LDWS) for partial and full lane departures (onset manipulation) combined with missed warnings (reliability manipulation: 100% reliable, 83% reliable and 66% reliable) on drivers' performances and acceptance. Several studies indicate that LDWS improves drivers' performances during lane departure episodes. However, little is known about the effects of the warning onset and reliability of LDWS. Results of studies of forward collision warning systems show that early warnings tend to improve drivers' performances and receive better trust judgements from drivers when compared to later warnings. These studies also suggest that reliable assistance is more effective and trusted than unreliable assistance. In the present study, lane departures were brought about by means of a distraction task whilst drivers drove in a fixed-base simulator with or without an auditory LDWS. Results revealed steering behavior improvements with LDWS. More effective recovery maneuvers were found with partial lane departure warnings than with full lane departure warnings, and assistance unreliability did not significantly impair drivers' behaviors. Regarding missed lane departure episodes, drivers reacted later and spent more time out of the driving lane when compared to properly warned lane departures, as if driving without assistance. Subjectively, LDWS did not reduce mental workload, and partial lane departure warnings were judged more trustworthy than full lane departure warnings. The data support the use of partial lane departure warnings when designing LDWS and suggest that even an unreliable LDWS may provide benefits compared to no assistance.
Affiliation(s)
- J Navarro
- Laboratoire d'Etude des Mécanismes Cognitifs (EA 3082), University Lyon 2, France
- J Deniel
- Laboratoire d'Etude des Mécanismes Cognitifs (EA 3082), University Lyon 2, France
- E Yousfi
- Laboratoire d'Etude des Mécanismes Cognitifs (EA 3082), University Lyon 2, France
- C Jallais
- LESCOT-TS2-IFSTTAR (French Institute of Science and Technology for Transport, Development and Networks), Bron, France
- M Bueno
- LESCOT-TS2-IFSTTAR (French Institute of Science and Technology for Transport, Development and Networks), Bron, France
- A Fort
- LESCOT-TS2-IFSTTAR (French Institute of Science and Technology for Transport, Development and Networks), Bron, France