1. Ling S, Zhang Y, Du N. More Is Not Always Better: Impacts of AI-Generated Confidence and Explanations in Human-Automation Interaction. Hum Factors 2024; 66:2606-2620. PMID: 38437598. DOI: 10.1177/00187208241234810.
Abstract
OBJECTIVE: The study aimed to enhance transparency in autonomous systems by automatically generating and visualizing confidence and explanations, and by assessing their impacts on performance, trust, preference, and eye-tracking behaviors in human-automation interaction.
BACKGROUND: System transparency is vital to maintaining appropriate levels of trust and mission success. Previous studies presented mixed results regarding the impact of displaying likelihood information and explanations, and often relied on hand-created information, limiting scalability and failing to address real-world dynamics.
METHOD: We conducted a dual-task experiment involving 42 university students who operated a simulated surveillance testbed with assistance from intelligent detectors. The study used a 2 (confidence visualization: yes vs. no) × 3 (visual explanations: none, bounding boxes, bounding boxes and keypoints) mixed design. Task performance, human trust, preference for intelligent detectors, and eye-tracking behaviors were evaluated.
RESULTS: Visual explanations using bounding boxes and keypoints improved detection task performance when confidence was not displayed. Visual explanations also enhanced trust in and preference for the intelligent detector, regardless of the explanation type. Confidence visualization did not influence human trust in or preference for the intelligent detector. Moreover, both types of visual information slowed saccade velocities.
CONCLUSION: The study demonstrated that visual explanations can improve performance, trust, and preference in human-automation interaction without confidence visualization, partly by changing search strategies. However, excessive information might cause adverse effects.
APPLICATION: These findings provide guidance for the design of transparent automation, emphasizing the importance of context-appropriate and user-centered explanations to foster effective human-machine collaboration.
Affiliation(s)
- Na Du
- University of Pittsburgh, USA
2. Tatasciore M, Strickland L, Loft S. Transparency improves the accuracy of automation use, but automation confidence information does not. Cogn Res Princ Implic 2024; 9:67. PMID: 39379606. PMCID: PMC11461414. DOI: 10.1186/s41235-024-00599-x.
Abstract
Increased automation transparency can improve the accuracy of automation use but can lead to increased bias towards agreeing with advice. Information about the automation's confidence in its advice may also increase the predictability of automation errors. We examined the effects of providing automation transparency, automation confidence information, and their potential interacting effect on the accuracy of automation use and other outcomes. Participants completed an uninhabited vehicle (UV) management task in which they selected the optimal UV to complete missions. Low or high automation transparency was provided, and participants agreed/disagreed with automated advice on each mission. We manipulated between participants whether automated advice was accompanied by confidence information; this information indicated on each trial whether the automation was "somewhat" or "highly" confident in its advice. Higher transparency improved the accuracy of automation use, led to faster decisions and lower perceived workload, and increased trust and perceived usability. Providing participants with automation confidence information, compared with not providing it, had no overall impact on any outcome variable and did not interact with transparency. Despite this lack of benefit, participants who were provided confidence information did use it. On trials where lower rather than higher confidence information was presented, hit rates decreased, correct rejection rates increased, decision times slowed, and perceived workload increased, all suggestive of decreased reliance on automated advice. Such trial-by-trial shifts in automation use bias and other outcomes were not moderated by transparency. These findings can potentially inform the design of automated decision-support systems that are more understandable by humans in order to optimise human-automation interaction.
Affiliation(s)
- Monica Tatasciore
- The University of Western Australia, 35 Stirling Highway, Perth, WA, 6009, Australia.
- Luke Strickland
- The University of Western Australia, 35 Stirling Highway, Perth, WA, 6009, Australia
- Curtin University, Perth, Australia
- Shayne Loft
- The University of Western Australia, 35 Stirling Highway, Perth, WA, 6009, Australia.
3. Gegoff I, Tatasciore M, Bowden V. Transparent Automated Advice to Mitigate the Impact of Variation in Automation Reliability. Hum Factors 2024; 66:2008-2024. PMID: 37635389. PMCID: PMC11141097. DOI: 10.1177/00187208231196738.
Abstract
OBJECTIVE: To examine the extent to which increased automation transparency can mitigate the potential negative effects of low and high automation reliability on disuse and misuse of automated advice, and on perceived trust in automation.
BACKGROUND: Automated decision aids that vary in the reliability of their advice are increasingly used in workplaces. Low-reliability automation can increase disuse of automated advice, while high-reliability automation can increase misuse. These effects could be reduced if the rationale underlying automated advice is made more transparent.
METHODS: Participants selected the optimal uninhabited vehicle (UV) to complete missions. The Recommender (an automated decision aid) assisted participants by providing advice; however, it was not always reliable. Participants determined whether the Recommender provided accurate information and whether to accept or reject its advice. The level of automation transparency (medium, high) and reliability (low: 65%, high: 90%) were manipulated between subjects.
RESULTS: With high- compared to low-reliability automation, participants made more accurate (correctly accepted advice and identified whether information was accurate/inaccurate) and faster decisions, and reported increased trust in automation. Increased transparency led to more accurate and faster decisions, lower subjective workload, and higher usability ratings. It also eliminated the increased automation disuse associated with low-reliability automation. However, transparency did not mitigate the misuse associated with high-reliability automation.
CONCLUSION: Transparency protected against low-reliability automation disuse, but not against the increased misuse potentially associated with the reduced monitoring and verification of high-reliability automation.
APPLICATION: These outcomes can inform the design of transparent automation to improve human-automation teaming under conditions of varied automation reliability.
Affiliation(s)
- Vanessa Bowden
- The University of Western Australia, Perth, WA, Australia
4. Strickland L, Farrell S, Wilson MK, Hutchinson J, Loft S. How do humans learn about the reliability of automation? Cogn Res Princ Implic 2024; 9:8. PMID: 38361149. PMCID: PMC10869332. DOI: 10.1186/s41235-024-00533-1.
Abstract
In a range of settings, human operators make decisions with the assistance of automation, the reliability of which can vary depending upon context. Currently, the processes by which humans track the reliability of automation are unclear. In this study, we tested cognitive models of learning that could potentially explain how humans track automation reliability. We fitted several alternative cognitive models to a series of participants' judgements of automation reliability observed in a maritime classification task in which participants were provided with automated advice. We examined three experiments comprising eight between-subjects conditions and 240 participants in total. Our results favoured a two-kernel delta-rule model of learning, which specifies that humans learn by prediction error and respond according to a learning rate that is sensitive to environmental volatility. However, we found substantial heterogeneity in learning processes across participants. These outcomes speak to the learning processes underlying how humans estimate automation reliability and thus have implications for practice.
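The delta-rule idea favoured here can be made concrete with a brief sketch. The Python snippet below is an illustrative, generic two-kernel delta-rule learner only: the learning rates, the neutral 0.5 prior, and the fixed mixing weight are placeholder assumptions for demonstration, not the parameterization fitted by Strickland et al., whose model responds with a learning rate sensitive to environmental volatility.

```python
# Illustrative sketch of a two-kernel delta-rule reliability learner.
# NOT the authors' fitted model: learning rates, the 0.5 prior, and the
# fixed mixing weight are placeholder assumptions.

def delta_update(estimate: float, outcome: float, learning_rate: float) -> float:
    """Delta-rule update: move the estimate toward the observed outcome
    in proportion to the prediction error (outcome - estimate)."""
    return estimate + learning_rate * (outcome - estimate)

def two_kernel_reliability(outcomes, alpha_slow=0.05, alpha_fast=0.5, w_fast=0.5):
    """Track automation reliability with a slow and a fast kernel and
    report a weighted combination as the judged reliability."""
    slow = fast = 0.5  # assumed neutral prior: 50% reliability
    for outcome in outcomes:  # outcome: 1.0 = automation correct, 0.0 = incorrect
        slow = delta_update(slow, outcome, alpha_slow)
        fast = delta_update(fast, outcome, alpha_fast)
    return w_fast * fast + (1.0 - w_fast) * slow

# Example: a run of mostly correct advice pushes the judged reliability upward.
print(two_kernel_reliability([1, 1, 0, 1, 1, 1, 1, 0, 1, 1]))
```

Combining a fast and a slow kernel lets reliability estimates react quickly in volatile environments while retaining longer-run evidence; in the study itself, model parameters were fitted to participants' trial-by-trial judgements rather than fixed as above.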
Affiliation(s)
- Luke Strickland
- The Future of Work Institute, Curtin University, 78 Murray Street, Perth, 6000, Australia.
- Simon Farrell
- The School of Psychological Science, The University of Western Australia, Crawley, Perth, Australia
- Micah K Wilson
- The Future of Work Institute, Curtin University, 78 Murray Street, Perth, 6000, Australia
- Jack Hutchinson
- The School of Psychological Science, The University of Western Australia, Crawley, Perth, Australia
- Shayne Loft
- The School of Psychological Science, The University of Western Australia, Crawley, Perth, Australia
5. Tatasciore M, Loft S. Can increased automation transparency mitigate the effects of time pressure on automation use? Appl Ergon 2024; 114:104142. PMID: 37757606. DOI: 10.1016/j.apergo.2023.104142.
Abstract
Time pressure is a factor that can potentially reduce the accuracy of automated decision aid use and increase perceived workload. Increased automation transparency, in contrast, can increase the accuracy of automation use. We examined the extent to which increased transparency can mitigate the negative effects of time pressure on the accuracy of automation use and on perceived workload. Participants completed an uninhabited vehicle (UV) management task in which they assigned the best UV to complete missions by either accepting or rejecting automated advice. Participants made a decision after either 25 s (low time pressure) or 12 s (high time pressure). The accuracy of automation use decreased, and perceived workload increased, under higher time pressure. Higher transparency benefited the accuracy of automation use and increased perceived trust and usability. However, high transparency did not mitigate the negative impacts of high time pressure, indicating that increased time pressure can influence the processing of highly transparent information.
Affiliation(s)
- Shayne Loft
- The University of Western Australia, Australia
6. Patton CE, Wickens CD, Smith CAP, Noble KM, Clegg BA. Supporting detection of hostile intentions: automated assistance in a dynamic decision-making context. Cogn Res Princ Implic 2023; 8:69. PMID: 37980697. PMCID: PMC10657914. DOI: 10.1186/s41235-023-00519-5.
Abstract
In a dynamic decision-making task simulating basic ship movements, participants attempted, through a series of actions, to elicit and identify which one of six other ships was exhibiting either of two hostile behaviors. A high-performing, although imperfect, automated attention aid was introduced; it visually highlighted the ship categorized by an algorithm as the most likely to be hostile. Half of the participants also received automation transparency in the form of a statement about why the hostile ship was highlighted. Results indicated that although the aid's advice was often complied with, and hence led to higher accuracy and shorter response times, detection was still suboptimal. Additionally, transparency had limited impact on all aspects of performance. Implications for the detection of hostile intentions and the challenges of supporting dynamic decision making are discussed.
Affiliation(s)
- Colleen E Patton
- Department of Psychology, Colorado State University, Fort Collins, USA.
- C A P Smith
- Department of Psychology, Colorado State University, Fort Collins, USA
- Kayla M Noble
- Department of Psychology, Colorado State University, Fort Collins, USA
7. Schraagen JM. Responsible use of AI in military systems: prospects and challenges. Ergonomics 2023; 66:1719-1729. PMID: 37905780. DOI: 10.1080/00140139.2023.2278394.
Abstract
Artificial Intelligence (AI) holds great potential for the military domain but is also seen as prone to data bias and lacking transparency and explainability. In order to advance the trustworthiness of AI-enabled systems, a dynamic approach to the development, deployment, and use of AI systems is required. This approach, when incorporating ethical principles such as lawfulness, traceability, reliability, and bias mitigation, is called 'Responsible AI'. This article describes the challenges of using AI responsibly in the military domain from a human factors and ergonomics perspective. Many of the ironies of automation originally described by Bainbridge still apply in the field of AI, but there are also unique challenges and requirements to consider, such as a greater emphasis on up-front ethical risk analyses, validation, and verification, as well as moral situation awareness during the deployment and use of AI in military systems.
8. Endsley MR. Ironies of artificial intelligence. Ergonomics 2023; 66:1656-1668. PMID: 37534468. DOI: 10.1080/00140139.2023.2243404.
Abstract
Bainbridge's Ironies of Automation was a prescient description of automation-related challenges for human performance that have characterised much of the 40 years since its publication. Today a new wave of automation based on artificial intelligence (AI) is being introduced across a wide variety of domains and applications. Not only are Bainbridge's original warnings still pertinent for AI, but AI's very nature and focus on cognitive tasks have introduced many new challenges for people who interact with it. Five ironies of AI are presented, including difficulties with understanding AI and forming adaptations, opaqueness in AI limitations and biases that can drive human decision biases, and difficulties in understanding AI reliability, despite the fact that AI remains insufficiently intelligent for many of its intended applications. Future directions are provided to create more human-centered AI applications that can address these challenges.
9. Senathirajah Y, Solomonides A. Human Factors and Organizational Issues: Contributions from 2022. Yearb Med Inform 2023; 32:210-214. PMID: 38147862. PMCID: PMC10751143. DOI: 10.1055/s-0043-1768750.
Abstract
OBJECTIVES: To review publications in the field of Human Factors and Organisational Issues (HF&OI) in the year 2022 and to assess major contributions to the subject.
METHOD: A bibliographic search was conducted following refinement of the standardized queries used in previous years. Sources used were PubMed, Web of Science, and referral via references from other papers. The search was carried out in January 2023 and, using the PubMed article-type inclusion functionality, included clinical trials, meta-analyses, randomized controlled trials, reviews, case reports, classical articles, clinical studies, observational studies (including veterinary), comparative studies, and pragmatic clinical trials.
RESULTS: Among the 520 returned papers published in 2022 in the various areas of HF&OI, the full review process selected the two best papers from among 10 finalists. As in previous years, topics showed continued development, including increased use of Artificial Intelligence (AI) and digital health tools, advancement of methodological frameworks for implementation, evaluation, and design, and trials of specific digital tools.
CONCLUSIONS: Recent literature in HF&OI continues to focus on both theoretical advances and practical deployment, with emphasis on patient-facing digital health, methods for design and evaluation, and attention to implementation barriers.
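As a concrete illustration of the PubMed article-type filtering mentioned in the method, the sketch below issues a filtered search against NCBI's public E-utilities esearch endpoint. The topic terms, year restriction, and the specific publication-type filters shown are hypothetical placeholders, not the standardized HF&OI query the authors refined.

```python
# Illustrative sketch of a filtered PubMed search via the NCBI E-utilities
# esearch endpoint. Topic terms and filters are placeholders, not the
# standardized query used in the review.
import json
import urllib.parse
import urllib.request

BASE = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

term = (
    '("human factors"[Title/Abstract] OR "usability"[Title/Abstract]) '
    'AND 2022[dp] '
    'AND (clinical trial[pt] OR meta-analysis[pt] OR review[pt] '
    'OR observational study[pt] OR comparative study[pt])'
)

params = urllib.parse.urlencode({
    "db": "pubmed",
    "term": term,
    "retmode": "json",
    "retmax": 0,  # request only the hit count, not the ID list
})

with urllib.request.urlopen(f"{BASE}?{params}") as response:
    result = json.load(response)

print("Matching records:", result["esearchresult"]["count"])
```

Restricting by publication type at the query stage (the [pt] field tag) mirrors the article-type inclusion functionality described above and keeps the downstream screening set manageable.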
10. Tatasciore M, Bowden V, Loft S. Do concurrent task demands impact the benefit of automation transparency? Appl Ergon 2023; 110:104022. PMID: 37019048. DOI: 10.1016/j.apergo.2023.104022.
Abstract
Automated decision aids typically improve decision-making, but incorrect advice risks automation misuse or disuse. We examined the novel question of whether increased automation transparency improves the accuracy of automation use under conditions with and without concurrent (non-automation-assisted) task demands. Participants completed an uninhabited vehicle (UV) management task in which they assigned the best UV to complete missions. Automation advised the best UV but was not always correct. Concurrent non-automated task demands decreased the accuracy of automation use and increased decision time and perceived workload. With no concurrent task demands, increased transparency, which provided more information on how the automation made decisions, improved the accuracy of automation use. With concurrent task demands, increased transparency led to higher trust ratings, faster decisions, and a bias towards agreeing with automation. These outcomes indicate increased reliance on highly transparent automation under conditions with concurrent task demands and have potential implications for human-automation teaming design.
Affiliation(s)
- Shayne Loft
- The University of Western Australia, Australia
11. Taylor S, Wang M, Jeon M. Reliable and transparent in-vehicle agents lead to higher behavioral trust in conditionally automated driving systems. Front Psychol 2023; 14:1121622. PMID: 37275735. PMCID: PMC10232983. DOI: 10.3389/fpsyg.2023.1121622.
Abstract
Trust is critical for human-automation collaboration, especially in safety-critical tasks such as driving. Providing explainable information on how the automation system reaches decisions and predictions can improve system transparency, which is believed to further facilitate driver trust and user evaluation of automated vehicles. However, what the optimal level of transparency is, and how the system should communicate it to calibrate drivers' trust and improve their driving performance, remain uncertain. This uncertainty is compounded by the fact that system reliability remains dynamic due to current technological limitations. To address this issue in conditionally automated vehicles, a total of 30 participants were recruited in a driving simulator study and assigned to either a low or a high system reliability condition. They experienced two driving scenarios accompanied by two types of in-vehicle agents delivering information with different transparency types: "what"-then-wait (on-demand) and "what + why" (proactive). The on-demand agent provided some information about the upcoming event and delivered more information if prompted by the driver, whereas the proactive agent provided all information at once. Results indicated that the on-demand agent was more habitable, or naturalistic, to drivers and was perceived as having a faster system response speed than the proactive agent. Drivers under the high-reliability condition complied with the takeover request (TOR) more often (if the agent was on-demand) and had shorter takeover times (in both agent conditions) compared to those under the low-reliability condition. These findings suggest how automation systems can deliver information to improve system transparency while adapting to system reliability and user evaluation, which further contributes to driver trust calibration and performance correction in future automated vehicles.
Affiliation(s)
- Skye Taylor
- Mind Music Machine Lab, Grado Department of Industrial and Systems Engineering, Virginia Tech, Blacksburg, VA, United States
- Link Lab, Department of Systems and Information Engineering, University of Virginia, Charlottesville, VA, United States
- Manhua Wang
- Mind Music Machine Lab, Grado Department of Industrial and Systems Engineering, Virginia Tech, Blacksburg, VA, United States
- Myounghoon Jeon
- Mind Music Machine Lab, Grado Department of Industrial and Systems Engineering, Virginia Tech, Blacksburg, VA, United States