1
Biondi FN, McDonnell AS, Mahmoodzadeh M, Jajo N, Balakumar Balasingam, Strayer DL. Vigilance Decrement During On-Road Partially Automated Driving Across Four Systems. HUMAN FACTORS 2024; 66:2179-2190. [PMID: 37496464 PMCID: PMC11344368 DOI: 10.1177/00187208231189658] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/25/2022] [Accepted: 07/03/2023] [Indexed: 07/28/2023]
Abstract
OBJECTIVE This study uses a detection task to measure changes in driver vigilance when operating four different partially automated systems. BACKGROUND Research shows temporal declines in detection task performance during manual and fully automated driving, but the accuracy of this approach for measuring changes in driver vigilance during on-road partially automated driving remains unproven. METHOD Participants drove four different vehicles (Tesla Model 3, Cadillac CT6, Volvo XC90, and Nissan Rogue) equipped with level-2 systems in manual and partially automated modes. Response times to a detection task were recorded over eight consecutive time periods. RESULTS Bayesian analysis revealed a main effect of time period and an interaction between mode and time period. A main effect of vehicle and a time period × vehicle interaction were also found. CONCLUSION Results indicated that the decline in detection task performance over time was worse during partially automated driving. Vehicle-specific analysis also revealed that detection task performance differed across vehicles, with the slowest response times found for the Volvo. APPLICATION The greater decline in detection performance found in automated mode suggests that operating level-2 systems incurred a greater vigilance decrement, a phenomenon of interest for Human Factors practitioners and regulators. We also argue that the observed vehicle-related differences are attributable to the unique designs of their in-vehicle interfaces.
Affiliation(s)
- Francesco N Biondi
- Human Systems Lab, University of Windsor, Windsor, ON, Canada
- Applied Cognition Lab, University of Utah, Salt Lake City, UT, USA
- Amy S McDonnell
- Applied Cognition Lab, University of Utah, Salt Lake City, UT, USA
- Noor Jajo
- Human Systems Lab, University of Windsor, Windsor, ON, Canada
- David L Strayer
- Applied Cognition Lab, University of Utah, Salt Lake City, UT, USA
2
Patton CE, Wickens CD. The relationship of trust and dependence. ERGONOMICS 2024:1-17. [PMID: 38725397 DOI: 10.1080/00140139.2024.2342436] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/07/2024] [Accepted: 04/08/2024] [Indexed: 10/11/2024]
Abstract
The concepts of automation trust and dependence have often been viewed as closely related and on occasion, have been conflated in the research community. Yet, trust is a cognitive attitude and dependence is a behavioural measure, so it is unsurprising that different factors can affect the two. Here, we review the literature on the correlation between trust and dependence. On average, this correlation across people was quite low, suggesting that people who are more trusting of automation do not necessarily depend upon it more. Separately, we examined experiments that explicitly manipulated the reliability of automation, finding that higher automation reliability increased trust ratings twice as fast as dependence behaviours. This review provides novel quantitative evidence that the two constructs are not strongly correlated. Implications of this work, including potential moderating variables, contexts where trust is still relevant, and considerations of trust measurement, are discussed.
Affiliation(s)
- Colleen E Patton
- Department of Psychology, North Carolina State University, Raleigh, NC, USA
3
Chu Y, Liu P. Automation complacency on the road. ERGONOMICS 2023; 66:1730-1749. [PMID: 37139680 DOI: 10.1080/00140139.2023.2210793] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/26/2023] [Accepted: 05/02/2023] [Indexed: 05/05/2023]
Abstract
Given that automation complacency, a hitherto controversial concept, is already used to blame and punish human drivers in current accident investigations and courts, it is essential to map complacency research in driving automation and determine whether current research can support its legitimate usage in these practical fields. Here, we reviewed its status quo in the domain and conducted a thematic analysis. We then discussed five fundamental challenges that might undermine its scientific legitimacy: conceptual confusion over whether it is an individual or a systems problem; uncertainties in the current evidence of complacency; a lack of valid measures specific to complacency; short-term laboratory experiments that cannot address the long-term nature of complacency, so their findings may lack external validity; and the absence of effective interventions that directly target complacency prevention. The Human Factors/Ergonomics community has a responsibility to minimise its usage and defend human drivers who rely on automation that is far from perfect. Practitioner summary: Human drivers are accused of complacency and overreliance on driving automation in accident investigations and courts. Our review shows that current academic research in the driving automation domain cannot support its legitimate usage in these practical fields. Its misuse will create a new form of consumer harm.
Affiliation(s)
- Yueying Chu
- Center for Psychological Sciences, Zhejiang University, Hangzhou, PR China
- Peng Liu
- Center for Psychological Sciences, Zhejiang University, Hangzhou, PR China
4
Momen A, de Visser EJ, Fraune MR, Madison A, Rueben M, Cooley K, Tossell CC. Group trust dynamics during a risky driving experience in a Tesla Model X. Front Psychol 2023; 14:1129369. [PMID: 37408965 PMCID: PMC10319128 DOI: 10.3389/fpsyg.2023.1129369] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/21/2022] [Accepted: 05/23/2023] [Indexed: 07/07/2023] Open
Abstract
The growing concern about the risk and safety of autonomous vehicles (AVs) has made it vital to understand driver trust and behavior when operating AVs. While research has uncovered human factors and design issues based on individual driver performance, there remains a lack of insight into how trust in automation evolves in groups of people who face risk and uncertainty while traveling in AVs. To this end, we conducted a naturalistic experiment with groups of participants who were encouraged to engage in conversation while riding a Tesla Model X on campus roads. Our methodology was uniquely suited to uncover these issues through naturalistic interaction by groups in the face of a risky driving context. Conversations were analyzed, revealing several themes pertaining to trust in automation: (1) collective risk perception, (2) experimenting with automation, (3) group sense-making, (4) human-automation interaction issues, and (5) benefits of automation. Our findings highlight the untested and experimental nature of AVs and confirm serious concerns about the safety and readiness of this technology for on-road use. The process of determining appropriate trust and reliance in AVs will therefore be essential for drivers and passengers to ensure the safe use of this experimental and continuously changing technology. Revealing insights into social group-vehicle interaction, our results speak to the potential dangers and ethical challenges with AVs as well as provide theoretical insights on group trust processes with advanced technology.
Affiliation(s)
- Ali Momen
- United States Air Force Academy, Colorado Springs, CO, United States
- Marlena R. Fraune
- Department of Psychology, New Mexico State University, Las Cruces, NM, United States
- Anna Madison
- United States Air Force Academy, Colorado Springs, CO, United States
- United States Army Research Laboratory, Aberdeen Proving Ground, Aberdeen, MD, United States
- Matthew Rueben
- Department of Psychology, New Mexico State University, Las Cruces, NM, United States
- Katrina Cooley
- United States Air Force Academy, Colorado Springs, CO, United States
- Chad C. Tossell
- United States Air Force Academy, Colorado Springs, CO, United States
5
Edelmann A, Stümper S, Petzoldt T. The interaction between perceived safety and perceived usefulness in automated parking as a result of safety distance. APPLIED ERGONOMICS 2023; 108:103962. [PMID: 36634461 DOI: 10.1016/j.apergo.2022.103962] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/30/2021] [Revised: 12/23/2022] [Accepted: 12/31/2022] [Indexed: 06/17/2023]
Abstract
Improved safety and traffic efficiency are among the proclaimed benefits of automated driving functions. In many scenarios, traffic safety and efficiency can be somewhat contradictory, especially in the perception of a user. In order for potential users to accept the automated system, it is necessary to find the optimal system configuration. Therefore, it is important to understand how the factors underlying acceptance develop and interact. In this study, seven safety distances of an automated parking system were implemented resulting in parking manoeuvres of varying efficiency (in terms of required moves). Participants experienced each configuration twice and rated their perceived safety and perceived usefulness. The results show that maximizing safety distances results in high perceived safety, yet also a diminished perceived usefulness due to reduced efficiency. On the other hand, maximum efficiency leads to a lower perceived safety and thus, a reduced rating of perceived usefulness. Furthermore, in some participants, perceived safety increased gradually, while for others, a threshold effect could be observed. The results demonstrate that the specification of a sole system characteristic can have multiple effects. These have to be considered to maximize acceptance.
Affiliation(s)
- Aaron Edelmann
- AUDI AG, 85045 Ingolstadt, Germany
- Technische Universität Dresden, Chair of Traffic and Transportation Psychology, Hettnerstraße 1, 01069 Dresden, Germany
- Tibor Petzoldt
- Technische Universität Dresden, Chair of Traffic and Transportation Psychology, Hettnerstraße 1, 01069 Dresden, Germany.
6
Walliser AC, de Visser EJ, Shaw TH. Exploring system wide trust prevalence and mitigation strategies with multiple autonomous agents. COMPUTERS IN HUMAN BEHAVIOR 2023. [DOI: 10.1016/j.chb.2023.107671] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/24/2023]
7
Nordhoff S, Stapel J, He X, Gentner A, Happee R. Do driver's characteristics, system performance, perceived safety, and trust influence how drivers use partial automation? A structural equation modelling analysis. Front Psychol 2023; 14:1125031. [PMID: 37139004 PMCID: PMC10150639 DOI: 10.3389/fpsyg.2023.1125031] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/15/2022] [Accepted: 03/06/2023] [Indexed: 05/05/2023] Open
Abstract
The present study surveyed actual extensive users of SAE Level 2 partially automated cars to investigate how driver characteristics (i.e., socio-demographics, driving experience, personality), system performance, perceived safety, and trust in partial automation influence the use of partial automation. 81% of respondents stated that they use their automated car with speed (ACC) and steering assist (LKA) at least 1–2 times a week, and 84% and 92% activate LKA and ACC at least occasionally, respectively. Respondents positively rated the performance of Adaptive Cruise Control (ACC) and Lane Keeping Assistance (LKA). ACC was rated higher than LKA, and detection of lead vehicles and lane markings was rated higher than smooth control for ACC and LKA, respectively. Respondents reported primarily disengaging (i.e., turning off) partial automation due to a lack of trust in the system and when driving is fun. They rarely disengaged the system when they noticed they were becoming bored or sleepy. Structural equation modelling revealed that trust had a positive effect on drivers' propensity for secondary task engagement during partially automated driving, while the effect of perceived safety was not significant. Regarding driver characteristics, we did not find a significant effect of age on perceived safety and trust in partial automation. Neuroticism negatively correlated with perceived safety and trust, while extraversion did not impact perceived safety and trust. The remaining three personality dimensions 'openness', 'conscientiousness', and 'agreeableness' did not form valid and reliable scales in the confirmatory factor analysis and could thus not be subjected to the structural equation modelling analysis. Future research should re-assess the suitability of the short 10-item scale as a measure of the Big Five personality traits, and investigate its impact on perceived safety, trust, and use of automation.
Affiliation(s)
- Sina Nordhoff
- Department Transport and Planning, Delft University of Technology, Delft, Netherlands
- Jork Stapel
- Department Cognitive Robotics, Delft University of Technology, Delft, Netherlands
- Xiaolin He
- Department Cognitive Robotics, Delft University of Technology, Delft, Netherlands
- Riender Happee
- Department Cognitive Robotics, Delft University of Technology, Delft, Netherlands
8
Hsieh SJ, Wang AR, Madison A, Tossell C, Visser ED. Adaptive Driving Assistant Model (ADAM) for Advising Drivers of Autonomous Vehicles. ACM T INTERACT INTEL 2022. [DOI: 10.1145/3545994] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/01/2022]
Abstract
Fully autonomous driving is on the horizon; vehicles with advanced driver assistance systems (ADAS), such as Tesla's Autopilot, are already available to consumers. However, all currently available ADAS applications require a human driver to be alert and ready to take control if needed. Partially automated driving introduces new complexities to human interactions with cars and can even increase collision risk. A better understanding of drivers' trust in automation may help reduce these complexities. Much of the existing research on trust in ADAS has relied on surveys and physiological measures to assess trust and has been conducted using driving simulators. Relatively few studies have used telemetry data from real automated vehicles to assess trust in ADAS. In addition, although some ADAS technologies provide alerts when, for example, drivers' hands are not on the steering wheel, these systems are not personalized to individual drivers. Adaptive technologies are needed that can help drivers of autonomous vehicles avoid crashes based on multiple real-time data streams. In this paper, we propose an architecture for adaptive autonomous driving assistance. Two layers of multiple sensory fusion models are developed to provide appropriate voice reminders that increase driving safety based on predicted driving status. Results suggest that human trust in automation can be quantified and predicted with 80% accuracy from vehicle data, and that adaptive speech-based advice can be provided to drivers with 90–95% accuracy. With more data, these models can be used to evaluate trust in driving assistance tools, which can ultimately lead to safer and more appropriate use of these features.
9
Nordhoff S, Stapel J, He X, Gentner A, Happee R. Perceived safety and trust in SAE Level 2 partially automated cars: Results from an online questionnaire. PLoS One 2021; 16:e0260953. [PMID: 34932565 PMCID: PMC8691907 DOI: 10.1371/journal.pone.0260953] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/26/2021] [Accepted: 11/19/2021] [Indexed: 11/18/2022] Open
Abstract
The present online study surveyed drivers of SAE Level 2 partially automated cars on automation use and attitudes towards automation. Respondents reported high levels of trust in their partially automated cars to maintain speed and distance to the car ahead (M = 4.41), and to feel safe most of the time (M = 4.22) on a scale from 1 to 5. Respondents indicated that they always know when the car is in partially automated driving mode (M = 4.42) and monitor the performance of their car most of the time (M = 4.34). A low rating was obtained for engaging in other activities while driving the partially automated car (M = 2.27). Partial automation did, however, increase reported engagement in secondary tasks that are already performed during manual driving (i.e., the proportion of respondents reporting to observe the landscape, use the phone for texting, navigation, music selection and calls, and eat during partially automated driving was higher in comparison to manual driving). Unsafe behaviour was rare, with 1% of respondents indicating that they rarely monitor the road, and another 1% that they sleep during partially automated driving. Structural equation modeling revealed a strong, positive relationship between perceived safety and trust (β = 0.69, p = 0.001). Performance expectancy had the strongest effect on automation use, followed by driver engagement, trust, and non-driving related task engagement. Perceived safety interacted with automation use through trust. We recommend that future research evaluate the development of perceived safety and trust over time, and revisit the influence of driver engagement and non-driving related task engagement, which emerged as new constructs related to trust in partial automation.
Affiliation(s)
- Sina Nordhoff
- Department Transport & Planning, Delft University of Technology, Delft, The Netherlands
- Jork Stapel
- Department Cognitive Robotics, Delft University of Technology, Delft, The Netherlands
- Xiaolin He
- Department Cognitive Robotics, Delft University of Technology, Delft, The Netherlands
- Riender Happee
- Department Cognitive Robotics, Delft University of Technology, Delft, The Netherlands
10
Kohn SC, de Visser EJ, Wiese E, Lee YC, Shaw TH. Measurement of Trust in Automation: A Narrative Review and Reference Guide. Front Psychol 2021; 12:604977. [PMID: 34737716 PMCID: PMC8562383 DOI: 10.3389/fpsyg.2021.604977] [Citation(s) in RCA: 18] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/10/2020] [Accepted: 08/25/2021] [Indexed: 02/05/2023] Open
Abstract
With the rise of automated and autonomous agents, research examining Trust in Automation (TiA) has attracted considerable attention over the last few decades. Trust is a rich and complex construct which has sparked a multitude of measures and approaches to study and understand it. This comprehensive narrative review addresses known methods that have been used to capture TiA. We examined measurements deployed in existing empirical works, categorized those measures into self-report, behavioral, and physiological indices, and examined them within the context of an existing model of trust. The resulting work provides a reference guide for researchers, providing a list of available TiA measurement methods along with the model-derived constructs that they capture including judgments of trustworthiness, trust attitudes, and trusting behaviors. The article concludes with recommendations on how to improve the current state of TiA measurement.
Affiliation(s)
- Ewart J de Visser
- Warfighter Effectiveness Research Center, United States Air Force Academy, Colorado Springs, CO, United States
- Eva Wiese
- George Mason University, Fairfax, VA, United States
- Yi-Ching Lee
- George Mason University, Fairfax, VA, United States
- Tyler H Shaw
- George Mason University, Fairfax, VA, United States
11
Muslim H, Itoh M, Liang CK, Antona-Makoshi J, Uchida N. Effects of gender, age, experience, and practice on driver reaction and acceptance of traffic jam chauffeur systems. Sci Rep 2021; 11:17874. [PMID: 34504190 PMCID: PMC8429645 DOI: 10.1038/s41598-021-97374-5] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/21/2021] [Accepted: 08/17/2021] [Indexed: 11/17/2022] Open
Abstract
This study conducted a driving simulation experiment to compare four automated driving system (ADS) designs in highway traffic situations demanding a lane change, while accounting for drivers' gender, age, experience, and practice. A lane-change maneuver was required when the automated vehicle approached traffic congestion in the left-hand lane. ADS-1 can only reduce the speed to synchronize with the congestion. ADS-2 reduces the speed and issues an optional request to intervene, advising the driver to change lanes manually. ADS-3 offers to overtake the congestion autonomously if the driver approves. ADS-4 overtakes the congestion autonomously without the driver's approval. Results for drivers' reaction, acceptance, and trust indicated that differences between ADS designs increase when considering the combined effect of drivers' demographic factors rather than the individual effect of each factor. However, the more an ADS appears to have driver-like capacities, the greater the expected impact of demographic factors. While preliminary, these findings may help us understand how ADS users' behavior can differ based on the interaction between human demographic factors and system design.
Affiliation(s)
- Husam Muslim
- Japan Automobile Research Institution, 2530 Karima, Tsukuba, Ibaraki, 305-0822, Japan
- Faculty of Engineering, Information and Systems, University of Tsukuba, 1-1-1 Tennoudai, Tsukuba, Ibaraki, 305-8573, Japan
- Makoto Itoh
- Faculty of Engineering, Information and Systems, University of Tsukuba, 1-1-1 Tennoudai, Tsukuba, Ibaraki, 305-8573, Japan
- Cho Kiu Liang
- Graduate School of Systems and Information Engineering, University of Tsukuba, 1-1-1 Tennoudai, Tsukuba, Ibaraki, 305-8573, Japan
- Jacobo Antona-Makoshi
- Japan Automobile Research Institution, 2530 Karima, Tsukuba, Ibaraki, 305-0822, Japan
- Nobuyuki Uchida
- Japan Automobile Research Institution, 2530 Karima, Tsukuba, Ibaraki, 305-0822, Japan
12
Supporting User Onboarding in Automated Vehicles through Multimodal Augmented Reality Tutorials. MULTIMODAL TECHNOLOGIES AND INTERACTION 2021. [DOI: 10.3390/mti5050022] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022] Open
Abstract
Misconceptions of vehicle automation functionalities lead to either non-use or dangerous misuse of assistant systems, harming the user experience by reducing potential comfort or compromising safety. Thus, users must understand how and when to use an assistant system. In a preliminary online survey, we examined the use, trust, and perceived understanding of modern vehicle assistant systems. Despite remaining incomprehensibility (36–64%), experienced misunderstandings (up to 9%), and the need for training (around 30%), users reported high trust in the systems. In a follow-up study with first-time users, we examined the effect of different User Onboarding approaches for an automated parking assistant system in a Tesla, comparing the traditional text-based manual with a multimodal augmented reality (AR) smartphone application in terms of user acceptance, UX, trust, understanding, and task performance. While the User Onboarding experience for both approaches shows high pragmatic quality, the hedonic quality was perceived as significantly higher in AR. For the automated parking process, reported hedonic and pragmatic user experience, trust, automation understanding, and acceptance do not differ, yet the observed task performance was higher in the AR condition. Overall, AR might help motivate proper User Onboarding and better communicate how to operate the system for inexperienced users.
13
Horrey WJ, Lee JD. Preface to the Special Issue on Human Factors and Advanced Vehicle Automation: Of Benefits, Barriers, and Bridges to Safe and Effective Implementation. HUMAN FACTORS 2020; 62:189-193. [PMID: 32119576 DOI: 10.1177/0018720820901542] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
OBJECTIVE The aim of this special issue is to bring together the latest research related to driver interaction with various types of vehicle automation. BACKGROUND Vehicle technology has undergone significant progress over the past decade, bringing new support features that can assist the driver and take on more and more of the driving responsibilities. METHOD This issue comprises eight articles from international research teams, focusing on different types of automation and different user populations, ranging from driver support features through to highly automated driving systems. RESULTS The papers comprising this special issue are clustered into three categories: (a) experimental studies of driver interactions with advanced vehicle technologies; (b) analysis of existing data sources; and (c) emerging human factors issues. Studies of currently available and pending systems highlight some of the human factors challenges associated with the driver-system interaction that are likely to become more prominent in the near future. Moreover, studies of more nascent concepts (i.e., those that are still a long way from production vehicles) underscore many attitudes, perceptions, and concerns that will need to be considered as these technologies progress. CONCLUSIONS Collectively, the papers in this special issue help fill some gaps in our knowledge. More importantly, they continue to help us identify and articulate some of the important potential human factors barriers, design considerations, and research needs as these technologies become more ubiquitous.
Affiliation(s)
- John D Lee
- University of Wisconsin-Madison, USA