1
Chai C, Lei Y, Wei H, Wu C, Zhang W, Hansen P, Fan H, Shi J. The effects of various auditory takeover requests: A simulated driving study considering the modality of non-driving-related tasks. APPLIED ERGONOMICS 2024; 118:104252. [PMID: 38417230] [DOI: 10.1016/j.apergo.2024.104252]
Abstract
With the era of automated driving approaching, designing an effective auditory takeover request (TOR) is critical to ensure automated driving safety. The present study investigated the effects of speech-based (speech and spearcon) and non-speech-based (earcon and auditory icon) TORs on takeover performance and subjective preferences. The potential impact of the non-driving-related task (NDRT) modality on auditory TORs was considered. Thirty-two participants were recruited in the present study and assigned to two groups, with one group performing the visual N-back task and another performing the auditory N-back task during automated driving. They were required to complete four simulated driving blocks corresponding to four auditory TOR types. The earcon TOR was found to be the most suitable for alerting drivers to return to the control loop because of its advantageous takeover time, lane change time, and minimum time to collision. Although participants preferred the speech TOR, it led to relatively poor takeover performance. In addition, the auditory NDRT was found to have a detrimental impact on auditory TORs. When drivers were engaged in the auditory NDRT, the takeover time and lane change time advantages of earcon TORs no longer existed. These findings highlight the importance of considering the influence of auditory NDRTs when designing an auditory takeover interface. The present study also has some practical implications for researchers and designers when designing an auditory takeover system in automated vehicles.
Affiliation(s)
- Chunlei Chai: College of Computer Science and Technology, Zhejiang University, Hangzhou, China
- Yu Lei: School of Software Technology, Zhejiang University, Hangzhou, China
- Haoran Wei: School of Software Technology, Zhejiang University, Hangzhou, China
- Changxu Wu: Department of Industrial Engineering, Tsinghua University, Beijing, China
- Wei Zhang: Department of Industrial Engineering, Tsinghua University, Beijing, China
- Preben Hansen: Department of Computer and System Sciences, Stockholm University, Stockholm, Sweden
- Hao Fan: College of Computer Science and Technology, Zhejiang University, Hangzhou, China
- Jinlei Shi: College of Computer Science and Technology, Zhejiang University, Hangzhou, China
2
Nadri C, Kekal S, Li Y, Li X, Lee SC, Nelson D, Lautala P, Jeon M. "Slow down. Rail crossing ahead. Look left and right at the crossing": In-vehicle auditory alerts improve driver behavior at rail crossings. APPLIED ERGONOMICS 2023; 106:103912. [PMID: 36179543] [DOI: 10.1016/j.apergo.2022.103912]
Abstract
Even though the rail industry has made great strides in reducing accidents at crossings, train-vehicle collisions at Highway-Rail Grade Crossings (HRGCs) continue to be a major issue in the US and across the world. In this research, we conducted a driving simulator study (N = 35) to evaluate a hybrid in-vehicle auditory alert (IVAA), composed of both speech and non-speech components, that was selected after two rounds of subjective evaluation studies. Participants drove through a simulated scenario and reacted to HRGCs with and without the IVAA present and through different music conditions and crossing devices. Driver simulator testing results showed that the inclusion of the hybrid IVAA significantly improved driving behavior near HRGCs in terms of gaze behavior, braking reaction, and approach speed to the crossing. The driving simulator study also showed the effects of background music and warning device types on driving performance. The study contributes to the large-scale implementation of IVAAs at HRGCs, as well as the development of guidelines toward a more standardized approach for IVAAs at HRGCs.
Affiliation(s)
- Chihab Nadri: Mind Music Machine Lab, Department of Industrial and Systems Engineering, Virginia Tech, USA
- Siddhant Kekal: Mind Music Machine Lab, Department of Industrial and Systems Engineering, Virginia Tech, USA
- Yinjia Li: Mind Music Machine Lab, Department of Industrial and Systems Engineering, Virginia Tech, USA
- Xuan Li: Mind Music Machine Lab, Department of Industrial and Systems Engineering, Virginia Tech, USA
- Seul Chan Lee: Department of Industrial and Systems Engineering/Engineering Research Institute, Gyeongsang National University, Gyeongsangnam-do, Jinju, South Korea
- David Nelson: Department of Civil and Environmental Engineering, Michigan Technological University, Houghton, MI, USA
- Pasi Lautala: Department of Civil and Environmental Engineering, Michigan Technological University, Houghton, MI, USA
- Myounghoon Jeon: Mind Music Machine Lab, Department of Industrial and Systems Engineering, Virginia Tech, USA
3
Jing C, Dai H, Yao X, Du D, Yu K, Yu D, Zhi J. Influence of Multi-Modal Warning Interface on Takeover Efficiency of Autonomous High-Speed Train. INTERNATIONAL JOURNAL OF ENVIRONMENTAL RESEARCH AND PUBLIC HEALTH 2022; 20:322. [PMID: 36612647] [PMCID: PMC9819043] [DOI: 10.3390/ijerph20010322]
Abstract
As a large-scale public transport mode, the driving safety of high-speed rail has a profound impact on public health. In this study, we determined the most efficient multi-modal warning interface for automatic driving of a high-speed train and put forward suggestions for optimization and improvement. Forty-eight participants were recruited, and a simulated 350 km/h high-speed train driving experiment equipped with a multi-modal warning interface was carried out. Eye-movement and behavioral parameters were then analyzed with independent-samples Kruskal-Wallis tests and one-way analysis of variance. The results showed that the current level 3 warning visual interface of a high-speed train had the most abundant warning graphic information, but it failed to increase the takeover efficiency of the driver. The level 2 warning visual interface was more likely to attract drivers' attention than the level 1 warning visual interface, but it still needs to be optimized in terms of the relevance of, and guidance between, graphic-text elements. The multi-modal warning interface yielded faster responses than the single-modal warning interface. The auditory-visual multi-modal interface had the highest takeover efficiency and was suitable for the most urgent (level 3) high-speed train warning. Introducing an auditory interface increased the efficiency of a purely visual interface, but introducing a tactile interface did not. These findings can serve as a basis for the interface design of autonomous high-speed trains and help improve their active safety, which is of great significance for protecting public health and safety.
Affiliation(s)
- Chunhui Jing: Department of Industrial Design, School of Design, Southwest Jiaotong University, Chengdu 610031, China
- Haohong Dai: Department of Industrial Design, School of Design, Southwest Jiaotong University, Chengdu 610031, China
- Xing Yao: School of Mechanical and Aerospace Engineering, Nanyang Technological University, Singapore 639798, Singapore
- Dandan Du: Department of Industrial Design, School of Design, Southwest Jiaotong University, Chengdu 610031, China
- Kaidi Yu: Department of Industrial Design, School of Design, Southwest Jiaotong University, Chengdu 610031, China
- Dongyu Yu: Department of Industrial Design, School of Design, Southwest Jiaotong University, Chengdu 610031, China
- Jinyi Zhi: Department of Industrial Design, School of Design, Southwest Jiaotong University, Chengdu 610031, China
4
Song J, Wang Y, An X, Ma S, Wang D, Gan T, Shi H, Yang Z, Liu H. Novel sonification designs: Compressed, iconic, and pitch-dynamic auditory icons boost driving behavior. APPLIED ERGONOMICS 2022; 103:103797. [PMID: 35576785] [DOI: 10.1016/j.apergo.2022.103797]
Abstract
With the development of connected vehicles, in-vehicle auditory alerts enable drivers to effectively avoid hazards by quickly presenting critical information in advance. Auditory icons can be understood quickly, evoking a better user experience. However, as collision warnings, the design and application of auditory icons still need further exploration. Thus, this study investigated the effects of internal semantic mapping and external acoustic characteristics (compression and dynamics design) on driver performance and subjective experience. Thirty-two participants (17 females) experienced 15 warning types in a simulator, in a 3 (pitch dynamics: mapping 0 vs. 1 vs. 2) × 5 (warning type: original iconic vs. original metaphorical vs. compressed iconic vs. compressed metaphorical auditory icon vs. earcon) design. We found that the compression design supported rapid risk avoidance, and that it was more effective in iconic and highly pitch-dynamic sounds. This study provides additional ideas and principles for the design of auditory icon warnings.
Affiliation(s)
- Jiaqing Song: Department of Psychology, Zhejiang Sci-Tech University, Hangzhou, 310018, China
- Yuwei Wang: Department of Psychology, Zhejiang Sci-Tech University, Hangzhou, 310018, China
- Xiaojiang An: Department of Psychology, Zhejiang Sci-Tech University, Hangzhou, 310018, China
- Shu Ma: Department of Psychology, Zhejiang Sci-Tech University, Hangzhou, 310018, China
- Duming Wang: Department of Psychology, Zhejiang Sci-Tech University, Hangzhou, 310018, China
- Tian Gan: Department of Psychology, Zhejiang Sci-Tech University, Hangzhou, 310018, China
- Hongqi Shi: Wuhan Second Ship Design and Research Institute, Wuhan, 430064, China
- Zhen Yang: Department of Psychology, Zhejiang Sci-Tech University, Hangzhou, 310018, China
- Hongyan Liu: Department of Psychology, Zhejiang Sci-Tech University, Hangzhou, 310018, China
5
Nees MA, Sampsell NG. Simple auditory and visual interruptions of a continuous visual tracking task: modality effects and time course of interference. ERGONOMICS 2021; 64:879-890. [PMID: 33428536] [DOI: 10.1080/00140139.2021.1873424]
Abstract
Research has produced conflicting evidence on whether performance of an ongoing visual task is disrupted more by an interruption from a visual or an auditory alert. The tasks and alerts studied to date have been complex or idiosyncratic. This experiment examined how the modality of simple alerts (visual icons or auditory tones) affected performance of an ongoing visual task. Participants (58 females and 4 males) tracked a visual target while performing a choice reaction time task in response to alerts. Visual alerts were more harmful to tracking performance. Dual-task workload was lowest with an auditory alert, provided no background noise was present. Interruptions affected tracking performance for around 1500 ms. The results supported the predictions of Multiple Resources Theory and showed no evidence of auditory preemption. In practical applications in which an ongoing visual task is interrupted, auditory alerts may be less disruptive and may reduce perceived workload. Practitioner summary: Many practical scenarios involve ongoing visual tasks that are interrupted by simple alerts requiring a simple response. Auditory alerts may be less disruptive than visual alerts and may reduce perceived workload. A conservative estimate is that the effects of even simple interruptions will last a minimum of 1500 ms. Abbreviations: ANOVA: analysis of variance; LSD: least significant difference; TLX: task load index.
Affiliation(s)
- Michael A Nees: Department of Psychology, Lafayette College, Easton, PA, USA
6
Facilitating Workers' Task Proficiency with Subtle Decay of Contextual AR-Based Assistance Derived from Unconscious Memory Structures. INFORMATION 2021. [DOI: 10.3390/info12010017]
Abstract
Contemporary assistance systems support a broad variety of tasks. When they provide information or instruction, the way they do it has an implicit and often not directly graspable impact on the user. System design often forces static roles onto the user, which can have negative side effects when system errors occur or unique and previously unknown situations need to be tackled. We propose an adjustable augmented reality-based assistance infrastructure that adapts to the user’s individual cognitive task proficiency and dynamically reduces its active intervention in a subtle, not consciously noticeable way over time to spare attentional resources and facilitate independent task execution. We also introduce multi-modal mechanisms to provide context-sensitive assistance and argue why system architectures that provide explainability of concealed automated processes can improve user trust and acceptance.
7
Mase JM, Majid S, Mesgarpour M, Torres MT, Figueredo GP, Chapman P. Evaluating the impact of Heavy Goods Vehicle driver monitoring and coaching to reduce risky behaviour. ACCIDENT ANALYSIS AND PREVENTION 2020; 146:105754. [PMID: 32932020] [DOI: 10.1016/j.aap.2020.105754]
Abstract
Determining the impact of driver-monitoring technologies on risky driving behaviours allows stakeholders to understand which aspects of onboard sensors and feedback need enhancement to promote road safety and education. This study investigates the influence of camera monitoring on Heavy Goods Vehicle (HGV) drivers' risky behaviours. We also assess whether monitoring affects individual driving events further when coupled with coaching in safe driving practices. We evaluate the outcome of those practices on three telematics incident types heavily reliant on driving errors and violations: the number of harsh braking, harsh cornering, and over-speeding incidents. The objective is to understand how frequently individual incidents caused by risky driving behaviour occur (a) without camera monitoring and without any coaching; (b) after camera installation; and (c) after camera installation and coaching. We investigate two commercial HGV companies (Company 1 and Company 2) with 263 and 269 vehicles, respectively, over a 16-month period, of which the first 8 months contain data collected before the installation of cameras (baseline) and the remainder contains incident counts after the installation of cameras (intervention). Company 1 provides coaching during the intervention phase while Company 2 does not. Our analysis compares the baseline and intervention phases during the same seasons to eliminate possible bias from the influence of weather on driving behaviour. Results show an overall significant reduction in the mean frequency of harsh braking incidents from baseline to intervention of 16.82% in Company 1 and 4.62% in Company 2, and a significant reduction in the mean frequency of over-speeding incidents of 34.29% in Company 1 and 28.13% in Company 2. Furthermore, coaching had a significantly larger effect in reducing the frequency of harsh braking (p = .011) and harsh cornering (p < .001) than camera monitoring alone. These results suggest that coaching interventions are more effective in reducing driving errors, while monitoring reduces both driving errors and violations.
Affiliation(s)
- Shazmin Majid: School of Computer Science, The University of Nottingham, United Kingdom
- Peter Chapman: School of Psychology, The University of Nottingham, United Kingdom
8
Skrypchuk L, Langdon P, Sawyer BD, Clarkson PJ. Unconstrained design: improving multitasking with in-vehicle information systems through enhanced situation awareness. THEORETICAL ISSUES IN ERGONOMICS SCIENCE 2019. [DOI: 10.1080/1463922x.2019.1680763]
Affiliation(s)
- Lee Skrypchuk: Research and Technology Department, Jaguar Land Rover, Coventry, UK; Engineering Design Centre, Cambridge University, Cambridge, UK
- Pat Langdon: Engineering Design Centre, Cambridge University, Cambridge, UK
- Ben D. Sawyer: Department of Industrial Engineering and Management Systems, University of Central Florida, Orlando, FL, USA
9
Cabral JP, Remijn GB. Auditory icons: Design and physical characteristics. APPLIED ERGONOMICS 2019; 78:224-239. [PMID: 31046954] [DOI: 10.1016/j.apergo.2019.02.008]
Abstract
Auditory icons are short sound messages that convey information about an object, event, or situation. Auditory icons were originally used in computer interfaces but are nowadays found in many other fields. This review article gives an overview of the main theoretical ideas behind the use and design of auditory icons. We identified the most common fields in which auditory icons have been used and analyzed their acoustic characteristics. The review shows that few studies have provided a precise description of the physical characteristics of the sounds in auditory icons, e.g., their intensity level, duration, and frequency range. To improve the validity and replicability of research on auditory icons, and their universal design, precise descriptions of acoustic characteristics should therefore be provided.
Affiliation(s)
- João Paulo Cabral: Graduate School of Design, Department of Human Science, Kyushu University, 4-9-1 Shiobaru, Minamiku, Fukuoka, 815-8540, Japan
- Gerard Bastiaan Remijn: Department of Human Science, Research Center for Applied Perceptual Science, Kyushu University, 4-9-1 Shiobaru, Minamiku, Fukuoka, 815-8540, Japan
10
Hansen NE, Harel A, Iyer N, Simpson BD, Wisniewski MG. Pre-stimulus brain state predicts auditory pattern identification accuracy. Neuroimage 2019; 199:512-520. [PMID: 31129305] [DOI: 10.1016/j.neuroimage.2019.05.054]
Abstract
Recent studies show that pre-stimulus band-specific power and phase in the electroencephalogram (EEG) can predict accuracy on tasks involving the detection of near-threshold stimuli. However, results in the auditory modality have been mixed, and few works have examined pre-stimulus features when more complex decisions are made (e.g., identifying supra-threshold sounds). Further, most auditory studies have used background sounds known to induce oscillatory EEG states, leaving it unclear whether phase predicts accuracy without such background sounds. To address this gap in knowledge, the present study examined pre-stimulus EEG as it relates to accuracy in a tone pattern identification task. On each trial, participants heard a triad of 40-ms sinusoidal tones (separated by 40-ms intervals), one of which was at a different frequency than the other two. Participants' task was to indicate the tone pattern (low-low-high, low-high-low, etc.). No background sounds were employed. Using a phase opposition measure based on inter-trial phase consistencies, pre-stimulus 7-10 Hz phase was found to differ between correct and incorrect trials ∼200 to 100 ms prior to tone-pattern onset. After sorting trials into bins based on phase, accuracy was found to be lowest at phases around ±π relative to individuals' most accurate phase bin. No significant effects were found for pre-stimulus power. In the context of the literature, the findings suggest an important relationship between the complexity of task demands and pre-stimulus activity within the auditory domain. The results also raise interesting questions about the role of induced oscillatory states or rhythmic processing modes in obtaining pre-stimulus phase effects in auditory tasks.
Affiliation(s)
- Natalie E Hansen: U.S. Air Force Research Laboratory, 45433, USA; Wright State University, 45435, USA
11
Abstract
The search for the elusive “killer app” of sonification has been a recurring theme in sonification research. In this comment, I argue that the killer-app criterion of success stems from interdisciplinary tensions about how to evaluate sonifications. Using auditory graphs as an example, I argue that the auditory display community has produced successful examples of sonic information design that accomplish the human factors goal of improving human interactions with systems. Still, barriers to using sonifications in interfaces remain, and reducing those barriers could result in more widespread use of audio in systems.
12
van der Heiden RMA, Janssen CP, Donker SF, Hardeman LES, Mans K, Kenemans JL. Susceptibility to audio signals during autonomous driving. PLoS One 2018; 13:e0201963. [PMID: 30102723] [PMCID: PMC6089411] [DOI: 10.1371/journal.pone.0201963]
Abstract
We investigate how susceptible human drivers are to auditory signals in three situations: when stationary, when driving, and when being driven by an autonomous vehicle. Previous research has shown that human susceptibility is reduced when driving compared to when stationary. However, it is not known how susceptible humans are under autonomous driving conditions. At the same time, good susceptibility is crucial under autonomous driving conditions, as such systems might use auditory signals to communicate a transition of control from the automated vehicle to the human driver. We measured susceptibility using a three-stimulus auditory oddball paradigm while participants experienced three driving conditions: stationary, autonomous, or driving. We studied susceptibility through the frontal P3 (fP3) component of the electroencephalographic event-related potential (EEG ERP). Results show that the fP3 component is reduced in autonomous compared to stationary conditions, but not as strongly as when participants drove themselves. In addition, the fP3 component is further reduced when the oddball task does not require a response (i.e., in a passive condition, versus an active one). The implication is that, even in a relatively simple autonomous driving scenario, people's susceptibility to auditory signals is not as high as would be beneficial for responding to auditory stimuli.
Affiliation(s)
- Christian P. Janssen: Experimental Psychology & Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
- Stella F. Donker: Experimental Psychology & Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
- Lotte E. S. Hardeman: Experimental Psychology & Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
- Keri Mans: Experimental Psychology & Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
- J. Leon Kenemans: Experimental Psychology & Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
13
Gune A, De Amicis R, Simoes B, Sanchez CA, Demirel HO. Graphically Hearing: Enhancing Understanding of Geospatial Data through an Integrated Auditory and Visual Experience. IEEE COMPUTER GRAPHICS AND APPLICATIONS 2018; 38:18-26. [PMID: 29975187] [DOI: 10.1109/mcg.2018.042731655]
Abstract
Effective presentation of data is critical to a user's understanding of it. In this manuscript, we explore research challenges associated with presenting large geospatial datasets through a multimodal experience. We also suggest an interaction schema that enhances users' cognition of geographic information through a user-driven display that visualizes and sonifies geospatial data.
14
Houtenbos M, de Winter JCF, Hale AR, Wieringa PA, Hagenzieker MP. Concurrent audio-visual feedback for supporting drivers at intersections: A study using two linked driving simulators. APPLIED ERGONOMICS 2017; 60:30-42. [PMID: 28166889] [DOI: 10.1016/j.apergo.2016.10.010]
Abstract
A large portion of road traffic crashes occur at intersections because drivers lack the necessary visual information. This research examined the effects of an audio-visual display that provides real-time sonification and visualization of the speed and direction of another car approaching the crossroads on an intersecting road. The location of red blinking lights (left vs. right on the speedometer) and the lateral input direction of beeps (left vs. right ear in headphones) corresponded to the direction from which the other car approached, and the blink and beep rates were a function of the approaching car's speed. Two driving simulators were linked so that the participant and the experimenter drove in the same virtual world. Participants (N = 25) completed four sessions (two with the audio-visual display on, two with it off), each consisting of 22 intersections at which the experimenter approached from the left or right and either maintained speed or slowed down. Compared to driving with the display off, the audio-visual display resulted in enhanced traffic efficiency (i.e., greater mean speed, less coasting) while not compromising safety (i.e., the time gap between the two vehicles was equivalent). A post-experiment questionnaire showed that the beeps were regarded as more useful than the lights. It is argued that the audio-visual display is a promising means of supporting drivers until fully automated driving is technically feasible.
Affiliation(s)
- M Houtenbos: SWOV Institute for Road Safety Research, PO Box 93113, 2509 AC, The Hague, The Netherlands; Delft University of Technology, Safety Science Group, Jaffalaan 5, 2628 BX, Delft, The Netherlands
- J C F de Winter: Delft University of Technology, Department of Biomechanical Engineering, Mekelweg 2, 2628 CD, Delft, The Netherlands
- A R Hale: Delft University of Technology, Safety Science Group, Jaffalaan 5, 2628 BX, Delft, The Netherlands
- P A Wieringa: Delft University of Technology, Department of Biomechanical Engineering, Mekelweg 2, 2628 CD, Delft, The Netherlands
- M P Hagenzieker: SWOV Institute for Road Safety Research, PO Box 93113, 2509 AC, The Hague, The Netherlands; Delft University of Technology, Department of Transport & Planning, Stevinweg 1, 2628 CN, Delft, The Netherlands
15
Nees MA, Helbein B, Porter A. Speech Auditory Alerts Promote Memory for Alerted Events in a Video-Simulated Self-Driving Car Ride. HUMAN FACTORS 2016; 58:416-426. [PMID: 26884437] [DOI: 10.1177/0018720816629279]
Abstract
Objective: Auditory displays could be essential to helping drivers maintain situation awareness in autonomous vehicles, but to date, few or no studies have examined the effectiveness of different types of auditory displays for this application scenario. Background: Recent advances in the development of autonomous vehicles (i.e., self-driving cars) have suggested that widespread automation of driving may be tenable in the near future. Drivers may be required to monitor the status of automation programs and vehicle conditions as they engage in secondary leisure or work tasks (entertainment, communication, etc.) in autonomous vehicles. Method: An experiment compared memory for alerted events (a component of Level 1 situation awareness) using speech alerts, auditory icons, and a visual control condition during a video-simulated self-driving car ride with a visual secondary task. The alerts gave information about the vehicle's operating status and the driving scenario. Results: Speech alerts resulted in better memory for alerted events. Both auditory display types resulted in less perceived effort devoted toward the study tasks but also greater perceived annoyance with the alerts. Conclusion: Speech auditory displays promoted Level 1 situation awareness during a simulation of a ride in a self-driving vehicle under routine conditions, but annoyance remains a concern with auditory displays. Application: Speech auditory displays showed promise as a means of increasing Level 1 situation awareness of routine scenarios during an autonomous vehicle ride with an unrelated secondary task.
16
Kennedy KD, Stephens CL, Williams RA, Schutte PC. Automation and Inattentional Blindness in a Simulated Flight Task. PROCEEDINGS OF THE HUMAN FACTORS AND ERGONOMICS SOCIETY ANNUAL MEETING 2014. [DOI: 10.1177/1541931214581433]
Abstract
The study reported herein is a subset of a larger investigation of the role of automation in the context of single-pilot aviation operations. This portion of the study focused on the relationship between automation and inattentional blindness (IB) occurrences for a runway incursion. The runway incursion critical stimulus was directly relevant to primary task performance. Participants performed the final five minutes of a landing scenario in one of three automation conditions (autopilot, autothrottle, and manual). Sixty non-pilot participants completed this study, and 70% (42 of 60) failed to detect the runway incursion critical stimulus. Participants in the partial automation condition were significantly more likely to detect the runway incursion than those in the full automation condition. The odds of detection in the full automation condition did not significantly vary from the manual condition. Participants who detected the runway incursion did not have significantly higher scores on any component of the NASA-TLX than those who failed to detect it. The demonstrated relationship between automation condition and IB occurrence indicates that automation can contribute to attentional detriments in operational settings.
17
Gonzalez C, Lewis BA, Roberts DM, Pratt SM, Baldwin CL. Perceived Urgency and Annoyance of Auditory Alerts in a Driving Context. PROCEEDINGS OF THE HUMAN FACTORS AND ERGONOMICS SOCIETY ANNUAL MEETING 2012. [DOI: 10.1177/1071181312561337]
Abstract
Complex in-vehicle technology and safety systems are finding their way into many cars on the road today. These systems require alerts and warnings that appropriately convey multiple levels of urgency, but if these are deemed excessively annoying, then their implementation may be of little consequence. In this study we used a well-documented psychophysical approach to identify the relationship between specific auditory parameters, perceived urgency, and perceived annoyance. In agreement with the existing literature, increases in all parameters led to increases in both urgency and annoyance, although differentially. Of the parameters investigated, only pulse rate exhibited a stronger psychophysical relationship with urgency than with annoyance. The tradeoff between urgency and annoyance is of practical concern, and results from this study provide a potential guideline for determining the viability of future in-vehicle alerts based on this relationship.