1. Greenlee ET, Hess LJ, Simpson BD, Finomore VS. Vigilance to spatialized auditory displays: Initial assessment of performance and workload. Human Factors 2024; 66:987-1003. [PMID: 36455164] [DOI: 10.1177/00187208221139744]
Abstract
OBJECTIVE: The present study was designed to evaluate human performance and workload associated with an auditory vigilance task that required spatial discrimination of auditory stimuli.
BACKGROUND: Spatial auditory displays have increasingly been developed and implemented in settings that require vigilance toward auditory spatial discrimination and localization (e.g., collision avoidance warnings). Research has yet to determine whether a vigilance decrement could impede performance in such applications.
METHOD: Participants completed a 40-minute auditory vigilance task in either a spatial discrimination condition or a temporal discrimination condition. In the spatial discrimination condition, participants differentiated sounds based on differences in spatial location; in the temporal discrimination condition, they differentiated sounds based on differences in stimulus duration.
RESULTS: Correct detections and false alarms declined during the vigilance task, and each did so at a similar rate in both conditions. The overall level of correct detections did not differ significantly between conditions, but false alarms occurred more frequently in the spatial discrimination condition than in the temporal discrimination condition. NASA-TLX ratings and pupil diameter measurements indicated no differences in workload.
CONCLUSION: Tasks requiring auditory spatial discrimination can induce a vigilance decrement and may result in inferior vigilance performance compared to tasks requiring discrimination of auditory duration.
APPLICATION: Vigilance decrements may impede performance and safety in settings that depend on sustained attention to spatial auditory displays. Display designers should also be aware that auditory displays requiring users to discriminate differences in spatial location may produce poorer discrimination performance than non-spatial displays.
Affiliation(s)
- Brian D Simpson
- Air Force Research Laboratory, Wright-Patterson AFB, OH, USA
2. Nelson WT, Bolia RS, Ericson MA, McKinley RL. Spatial audio displays for speech communications: A comparison of free field and virtual acoustic environments. Proceedings of the Human Factors and Ergonomics Society Annual Meeting 1999; 43. [DOI: 10.1177/154193129904302207]
Abstract
The ability of listeners to detect, identify, and monitor multiple simultaneous speech signals was measured in free field and virtual acoustic environments. Factorial combinations of four variables (audio condition, spatial condition, number of speech signals, and sex of the talker) were employed in a within-subjects design. Participants were required to detect the presentation of a critical speech signal among a background of non-signal speech events. Results indicated that spatial separation increased the percentage of correctly identified critical speech signals as the number of competing messages increased. These outcomes are discussed in the context of designing binaural speech displays to enhance speech communication in aviation environments.
Affiliation(s)
- W. Todd Nelson
- Air Force Research Laboratory, Wright-Patterson AFB, OH 45433
- Robert S. Bolia
- Air Force Research Laboratory, Wright-Patterson AFB, OH 45433
- Mark A. Ericson
- Air Force Research Laboratory, Wright-Patterson AFB, OH 45433
3. Towers J, Burgess-Limerick R, Riek S. Concurrent 3-D sonifications enable the head-up monitoring of two interrelated aircraft navigation instruments. Human Factors 2014; 56:1414-1427. [PMID: 25509822] [DOI: 10.1177/0018720814536443]
Abstract
OBJECTIVE: The aim of this study was to enable head-up monitoring of two interrelated aircraft navigation instruments by developing a 3-D auditory display that encodes this navigation information within two spatially discrete sonifications.
BACKGROUND: Head-up monitoring of aircraft navigation information utilizing 3-D audio displays, particularly involving concurrently presented sonifications, requires additional research.
METHOD: A flight simulator's head-down waypoint bearing and course deviation instrument readouts were conveyed to participants via a 3-D auditory display. Each readout was represented by its own colocated pair of continuous sounds, one fixed and the other varying in pitch, which together encoded the instrument value's deviation from the norm. Each sound pair's position in the listening space indicated the left/right parameter of its instrument's readout. Participants' accuracy in navigating a predetermined flight plan was evaluated while they performed a head-up task involving the detection of visual flares in the out-of-cockpit scene.
RESULTS: The auditory display significantly improved aircraft heading and course deviation accuracy, head-up time, and flare detections. Head tracking did not improve performance by providing participants with the ability to orient toward potentially conflicting sounds, suggesting that the use of integrated localizing cues was successful.
CONCLUSION: A supplementary 3-D auditory display enabled effective head-up monitoring of interrelated navigation information normally attended to through a head-down display.
APPLICATION: Pilots of aircraft such as helicopters and unmanned aerial vehicles may benefit from a supplementary auditory display because they navigate in two dimensions while performing head-up, out-of-aircraft visual tasks.
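The encoding described in the METHOD section lends itself to a compact mapping. The sketch below is a minimal, hypothetical illustration (not the authors' implementation) of how one instrument readout might be rendered as a colocated sound pair: a fixed reference tone plus a variable tone whose pitch offset encodes deviation from the nominal value, with the pair's azimuth indicating the left/right parameter. Function names and scaling constants are assumptions.

```python
def sonify_readout(deviation, max_deviation, ref_freq=440.0, max_offset_semitones=12.0):
    """Map an instrument's deviation from its nominal value to a sound pair.

    A fixed reference tone and a variable tone are assumed to be colocated;
    the variable tone's pitch offset (in semitones) grows with the deviation,
    so zero deviation sounds as a unison. All constants are illustrative.
    """
    # Normalize deviation to [-1, 1] and clip.
    d = max(-1.0, min(1.0, deviation / max_deviation))
    # Pitch offset in semitones, converted to a frequency ratio.
    offset = max_offset_semitones * abs(d)
    var_freq = ref_freq * 2.0 ** (offset / 12.0)
    # Left/right placement: pan the pair toward the side of the deviation
    # (assumed range -90 deg, left, to +90 deg, right).
    azimuth_deg = 90.0 * d
    return ref_freq, var_freq, azimuth_deg

# Example: a course deviation 40% of full scale to the right.
print(sonify_readout(deviation=0.4, max_deviation=1.0))
```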
4. Lu SA, Wickens CD, Prinet JC, Hutchins SD, Sarter N, Sebok A. Supporting interruption management and multimodal interface design: Three meta-analyses of task performance as a function of interrupting task modality. Human Factors 2013; 55:697-724. [PMID: 23964412] [DOI: 10.1177/0018720813476298]
Abstract
OBJECTIVE: The aim of this study was to integrate empirical data showing the effects of interrupting task modality on the performance of an ongoing visual-manual task and of the interrupting task itself. The goal is to support interruption management and the design of multimodal interfaces.
BACKGROUND: Multimodal interfaces have been proposed as a promising means to support interruption management. To ensure the effectiveness of this approach, their design needs to be based on an analysis of empirical data concerning the effectiveness of individual and redundant channels of information presentation.
METHOD: Three meta-analyses were conducted to contrast performance on an ongoing visual task and on interrupting tasks as a function of interrupting task modality (auditory vs. tactile, auditory vs. visual, and single modality vs. redundant auditory-visual). In total, 68 studies were included and six moderator variables were considered.
RESULTS: Response times are faster for tactile interrupting tasks in the case of low-urgency messages. Accuracy is higher with tactile interrupting tasks for low-complexity signals but higher with auditory interrupting tasks for high-complexity signals. Redundant auditory-visual combinations are preferable for communication tasks during high workload and with a small visual angle of separation.
CONCLUSION: The three meta-analyses contribute to the knowledge base in multimodal information processing and design. They highlight the importance of moderator variables in predicting the effects of interrupting task modality on ongoing and interrupting task performance.
APPLICATION: The findings from this research will help inform the design of multimodal interfaces in data-rich, event-driven domains.
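Pooled comparisons like those above are conventionally obtained by inverse-variance weighting of per-study effect sizes. The sketch below shows a generic fixed-effect pooling step; it illustrates the technique only and is not the authors' analysis code, and the example numbers are invented.

```python
def pool_fixed_effect(effects, variances):
    """Inverse-variance (fixed-effect) pooled effect size and its variance.

    effects: per-study standardized effect sizes (e.g., Hedges' g)
    variances: their sampling variances
    """
    weights = [1.0 / v for v in variances]
    pooled = sum(w * g for w, g in zip(weights, effects)) / sum(weights)
    pooled_var = 1.0 / sum(weights)
    return pooled, pooled_var

# Invented example: three studies comparing auditory vs. tactile interruption RTs.
g = [0.35, 0.50, 0.20]   # positive = tactile faster (illustrative sign convention)
v = [0.02, 0.05, 0.03]
est, var = pool_fixed_effect(g, v)
print(f"pooled g = {est:.3f}, 95% CI half-width = {1.96 * var ** 0.5:.3f}")
```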
Affiliation(s)
- Sara A Lu
- Department of Industrial and Operations Engineering, Center for Ergonomics, University of Michigan, Ann Arbor, USA
5. Metzger U, Parasuraman R. Effects of automated conflict cuing and traffic density on air traffic controller performance and visual attention in a datalink environment. The International Journal of Aviation Psychology 2006; 16(4). [DOI: 10.1207/s15327108ijap1604_1]
6. Pierno AC, Caria A, Glover S, Castiello U. Effects of increasing visual load on aurally and visually guided target acquisition in a virtual environment. Applied Ergonomics 2005; 36:335-343. [PMID: 15854577] [DOI: 10.1016/j.apergo.2004.11.002]
Abstract
The aim of the present study was to investigate interactions between vision and audition during a target acquisition task performed in a virtual environment. We measured the time taken to locate a visual target (acquisition time) signalled by auditory and/or visual cues under conditions of variable visual load. Visual load was increased by introducing a secondary visual task. The auditory cue was constructed using virtual three-dimensional (3-D) sound techniques; the visual cue took the form of a continuously updated 3-D arrow. The results suggested that both auditory and visual cues reduced acquisition time compared to an uncued condition. Whereas the visual cue elicited faster acquisition times than the auditory cue, the combination of the two cues produced the fastest acquisition times. The introduction of the secondary visual task affected acquisition time differentially depending on cue modality: under high visual load, acquiring a target signalled by the auditory cue alone led to slower and more error-prone performance than acquiring a target signalled by either the visual cue alone or by both the visual and auditory cues.
Affiliation(s)
- Andrea C Pierno
- Department of Psychology, Royal Holloway, University of London, Egham, Surrey, UK
7. Oving AB, Veltman JA, Bronkhorst AW. Effectiveness of 3-D audio for warnings in the cockpit. The International Journal of Aviation Psychology 2004; 14(3). [DOI: 10.1207/s15327108ijap1403_3]
9. Veltman JA, Oving AB, Bronkhorst AW. 3-D audio in the fighter cockpit improves task performance. The International Journal of Aviation Psychology 2004; 14(3). [DOI: 10.1207/s15327108ijap1403_2]
10. Pierno AC, Caria A, Castiello U. Comparing effects of 2-D and 3-D visual cues during aurally aided target acquisition. Human Factors 2004; 46:728-737. [PMID: 15709333] [DOI: 10.1518/hfes.46.4.728.56815]
Abstract
The aim of the present study was to investigate interactions between vision and audition during a visual target acquisition task performed in a virtual environment. In two experiments, participants were required to perform an acquisition task guided by auditory and/or visual cues. In both experiments the auditory cues were constructed using virtual 3-D sound techniques based on nonindividualized head-related transfer functions. In Experiment 1 the visual cue was a continuously updated 2-D arrow; in Experiment 2 it was a nonstereoscopic, perspective-based 3-D arrow. The results suggested that virtual spatial auditory cues reduced acquisition time but were not as effective as the virtual visual cues. Experiencing the 3-D perspective-based arrow rather than the 2-D arrow produced faster acquisition times not only in the visually aided conditions but also when the auditory cues were presented in isolation. Suggested novel applications include providing 3-D nonstereoscopic, perspective-based visual information on radar displays, which may lead to better integration with spatial virtual auditory information.
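The "virtual 3-D sound techniques" referred to here are typically implemented by convolving a mono source with a left/right pair of head-related impulse responses (HRIRs) measured for the desired direction; "nonindividualized" means the HRIRs come from a generic head rather than the listener's own ears. A minimal sketch of that rendering step follows; the HRIR data here are random placeholders standing in for a measured set.

```python
import numpy as np

def spatialize(mono, hrir_left, hrir_right):
    """Render a mono signal at a virtual direction by HRIR convolution.

    mono: 1-D array of samples; hrir_left/right: impulse responses for the
    target direction (a generic, nonindividualized set is assumed).
    Returns a stereo (N, 2) array suitable for headphone playback.
    """
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    return np.stack([left, right], axis=1)

# Toy example: a 100 ms noise burst at an assumed 45-degree-azimuth HRIR pair.
rng = np.random.default_rng(0)
burst = rng.standard_normal(4410)           # 100 ms at 44.1 kHz
hrir_l = rng.standard_normal(200) * 0.01    # placeholder HRIRs; real ones come
hrir_r = rng.standard_normal(200) * 0.005   # from a measured HRTF database
stereo = spatialize(burst, hrir_l, hrir_r)
print(stereo.shape)
```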
Affiliation(s)
- Andrea C Pierno
- Royal Holloway, University of London, Egham, United Kingdom
11. Masalonis AJ, Parasuraman R. Fuzzy signal detection theory: Analysis of human and machine performance in air traffic control, and analytic considerations. Ergonomics 2003; 46:1045-1074. [PMID: 12850931] [DOI: 10.1080/0014013031000121986]
Abstract
This paper applies fuzzy SDT (signal detection theory) techniques, which combine fuzzy logic with conventional SDT, to empirical data. Two studies involving detection of aircraft conflicts in air traffic control (ATC) were analysed using both conventional and fuzzy SDT. Study 1 used data from a preliminary field evaluation of an automated conflict probe system, the User Request Evaluation Tool (URET). The second study used data from a laboratory controller-in-the-loop simulation of Free Flight conditions. Instead of assigning each potential conflict event as a signal (conflict) or non-signal, each event was defined as a signal to some fuzzy degree between 0 and 1 by mapping distance into the range [0, 1]. Each event was also given a fuzzy membership, [0, 1], in the set 'response', based on the perceived probability of a conflict or on the colour-coded alert severity. Fuzzy SDT generally reduced the computed false alarm rate for both the human and machine conflict detection systems, partly because conflicts just outside the conflict criterion used in conventional SDT were defined by fuzzy SDT as signals worthy of some attention. The results illustrate the potential of fuzzy SDT to provide, especially in exploratory data analysis, a more complete picture of performance in aircraft conflict detection and many other applications. Alternative analytic methods also using fuzzy SDT concepts are discussed.
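The mapping described here follows the standard fuzzy SDT formulation, in which each event has a degree of signalness s in [0, 1] and each response a degree r in [0, 1], so hits, misses, false alarms, and correct rejections become graded quantities. The sketch below implements those standard definitions; the distance-to-signalness mapping is an illustrative linear ramp with assumed thresholds, not the paper's exact function.

```python
def fuzzy_sdt_rates(signal_deg, response_deg):
    """Fuzzy hit and false-alarm rates from graded signal/response values.

    For each event with signalness s and response r (both in [0, 1]):
      hit = min(s, r), miss = max(s - r, 0),
      false alarm = max(r - s, 0), correct rejection = min(1 - s, 1 - r).
    HR = sum(hits) / sum(s); FAR = sum(false alarms) / sum(1 - s).
    """
    hits = sum(min(s, r) for s, r in zip(signal_deg, response_deg))
    fas = sum(max(r - s, 0.0) for s, r in zip(signal_deg, response_deg))
    total_signal = sum(signal_deg)
    total_nonsignal = sum(1.0 - s for s in signal_deg)
    return hits / total_signal, fas / total_nonsignal

def distance_to_signalness(sep_nmi, conflict=5.0, safe=20.0):
    """Illustrative linear ramp: full conflict at <= 5 nmi separation,
    definitely safe at >= 20 nmi (both thresholds are assumptions)."""
    if sep_nmi <= conflict:
        return 1.0
    if sep_nmi >= safe:
        return 0.0
    return (safe - sep_nmi) / (safe - conflict)

s = [distance_to_signalness(d) for d in (4.0, 6.5, 12.0, 25.0)]
r = [1.0, 0.8, 0.7, 0.1]  # e.g., perceived conflict probability per event
print(fuzzy_sdt_rates(s, r))
```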
Affiliation(s)
- Anthony J Masalonis
- Center for Advanced Aviation System Development, The MITRE Corporation, McLean, VA 22101-7508, USA
12. Wickens CD, Goh J, Helleberg J, Horrey WJ, Talleur DA. Attentional models of multitask pilot performance using advanced display technology. Human Factors 2003; 45:360-380. [PMID: 14702989] [DOI: 10.1518/hfes.45.3.360.27250]
Abstract
In the first part of the reported research, 12 instrument-rated pilots flew a high-fidelity simulation, in which air traffic control presentation of auditory (voice) information regarding traffic and flight parameters was compared with advanced display technology presentation of equivalent information regarding traffic (cockpit display of traffic information) and flight parameters (data link display). Redundant combinations were also examined while pilots flew the aircraft simulation, monitored for outside traffic, and read back communications messages. The data suggested a modest cost for visual presentation over auditory presentation, a cost mediated by head-down visual scanning, and no benefit for redundant presentation. The effects in Part 1 were modeled by multiple-resource and preemption models of divided attention. In the second part of the research, visual scanning in all conditions was fit by an expected value model of selective attention derived from a previous experiment. This model accounted for 94% of the variance in the scanning data and 90% of the variance in a second validation experiment. Actual or potential applications of this research include guidance on choosing the appropriate modality for presenting in-cockpit information and understanding task strategies induced by introducing new aviation technology.
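The abstract does not give the expected-value model's equation, but models in this tradition (e.g., Wickens' later SEEV model) typically predict the proportion of dwell time on each area of interest (AOI) from the product of the expectancy of new events there and the value of attending to them, normalized across AOIs. The sketch below is that generic formulation, offered as an assumption about the model's form, with invented coefficients.

```python
def predicted_dwell_shares(expectancy, value):
    """Expected-value model of selective attention (generic form).

    Each AOI's attractiveness is expectancy * value; its predicted dwell
    share is that attractiveness normalized over all AOIs.
    """
    attract = {aoi: expectancy[aoi] * value[aoi] for aoi in expectancy}
    total = sum(attract.values())
    return {aoi: a / total for aoi, a in attract.items()}

# Invented parameters for three cockpit AOIs.
expectancy = {"outside_world": 0.5, "instruments": 0.3, "datalink": 0.2}
value = {"outside_world": 0.9, "instruments": 0.8, "datalink": 0.4}
print(predicted_dwell_shares(expectancy, value))
```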
Affiliation(s)
- Christopher D Wickens
- Institute of Aviation, Human Factors Division, University of Illinois, 1 Airport Rd., Savoy, IL 61874, USA
13. Nelson WT, Bolia RS, Tripp LD. Auditory localization under sustained +Gz acceleration. Human Factors 2001; 43:299-309. [PMID: 11592670] [DOI: 10.1518/001872001775900896]
Abstract
The ability to localize a virtual sound source in the horizontal plane was evaluated under varying levels of sustained (+Gz) acceleration. Participants were required to judge the locations of spatialized noise bursts in the horizontal plane (elevation 0 degrees) during exposure to 1.0, 1.5, 2.5, 4.0, 5.5, and 7.0 +Gz. The experiment was conducted at the U.S. Air Force Research Laboratory's Dynamic Environment Simulator, a three-axis centrifuge. No significant increases in localization error were found between 1.0 and 5.5 +Gz; however, a significant increase did occur at the 7.0 +Gz level. In addition, the percentage of front/back confusions did not vary as a function of +Gz level. Collectively, these results indicate that the ability to localize virtual sound sources is well maintained at various levels of sustained acceleration. Actual or potential applications include the incorporation of spatial audio displays into the human-computer interface for vehicles that are operated in acceleration environments.
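Two of the dependent measures reported here, horizontal-plane localization error and front/back confusions, have standard computations: error is the smallest angular difference between target and response azimuths, and a front/back confusion is scored when the response roughly mirrors the target across the interaural (left-right) axis. A minimal sketch under those standard definitions follows; the tolerance value is an assumption.

```python
def angular_error(target_az, response_az):
    """Smallest absolute difference between two azimuths, in degrees."""
    diff = abs(target_az - response_az) % 360.0
    return min(diff, 360.0 - diff)

def is_front_back_confusion(target_az, response_az, tol=30.0):
    """Score a response that mirrors the target across the interaural axis.

    Azimuth convention assumed: 0 = front, 90 = right, 180 = back.
    The mirror of azimuth a is (180 - a) mod 360; a confusion is scored
    when the response is within tol degrees of that mirror and the target
    is not itself near the interaural axis. tol is an assumed criterion.
    """
    mirror = (180.0 - target_az) % 360.0
    near_axis = min(angular_error(target_az, 90.0),
                    angular_error(target_az, 270.0)) < tol / 2
    return (not near_axis) and angular_error(mirror, response_az) < tol

print(angular_error(10.0, 350.0))            # 20.0
print(is_front_back_confusion(30.0, 150.0))  # True: 150 mirrors 30
```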
Affiliation(s)
- W T Nelson
- Divine, Inc, Cincinnati, Ohio 45242, USA
14. Nelson WT, Hettinger LJ, Cunningham JA, Brickman BJ, Haas MW, McKinley RL. Effects of localized auditory information on visual target detection performance using a helmet-mounted display. Human Factors 1998; 40:452-460. [PMID: 9849103] [DOI: 10.1518/001872098779591304]
Abstract
An experiment was conducted to evaluate the effects of localized auditory information on visual target detection performance. Visual targets were presented on either a wide field-of-view dome display or a helmet-mounted display and were accompanied by either localized, nonlocalized, or no auditory information. The addition of localized auditory information resulted in significant increases in target detection performance and significant reductions in workload ratings as compared with conditions in which auditory information was either nonlocalized or absent. Qualitative and quantitative analyses of participants' head motions revealed that the addition of localized auditory information resulted in extremely efficient and consistent search strategies. Implications for the development and design of multisensory virtual environments are discussed. Actual or potential applications of this research include the use of spatial auditory displays to augment visual information presented in helmet-mounted displays, thereby leading to increases in performance efficiency, reductions in physical and mental workload, and enhanced spatial awareness of objects in the environment.
Affiliation(s)
- W T Nelson
- U.S. Air Force Research Laboratory, Wright-Patterson Air Force Base, AFRL/HECP, OH 45433-7022, USA
15. Flanagan P, McAnally KI, Martin RL, Meehan JW, Oldfield SR. Aurally and visually guided visual search in a virtual environment. Human Factors 1998; 40:461-468. [PMID: 9849104] [DOI: 10.1518/001872098779591331]
Abstract
We investigated the time participants took to perform a visual search task for targets outside the visual field of view using a helmet-mounted display. We also measured the effectiveness of visual and auditory cues to target location. The auditory stimuli used to cue location were noise bursts previously recorded from the ear canals of the participants and were either presented briefly at the beginning of a trial or continually updated to compensate for head movements. The visual cue was a dynamic arrow that indicated the direction and angular distance from the instantaneous head position to the target. Both visual and auditory spatial cues reduced search time dramatically, compared with unaided search. The updating audio cue was more effective than the transient audio cue and was as effective as the visual cue in reducing search time. These data show that both spatial auditory and visual cues can markedly improve visual search performance. Potential applications for this research include highly visual environments, such as aviation, where there is risk of overloading the visual modality with information.
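The "continually updated" cue described above works by re-spatializing the sound each frame so that it stays fixed in the world as the head turns: the rendered azimuth is the target's world azimuth minus the current head yaw. A minimal sketch of that update step is below; read_head_yaw and render_at are hypothetical callbacks standing in for a head tracker and a spatial-audio renderer.

```python
def world_to_head_azimuth(target_az_world, head_yaw):
    """Azimuth of a world-fixed target relative to the current head pose,
    wrapped to [-180, 180) degrees."""
    return (target_az_world - head_yaw + 180.0) % 360.0 - 180.0

def update_audio_cue(target_az_world, read_head_yaw, render_at):
    """One frame of a head-tracked auditory cue (hypothetical hooks)."""
    rel_az = world_to_head_azimuth(target_az_world, read_head_yaw())
    render_at(rel_az)

# Example: target fixed at 60 deg; the listener has turned 45 deg right,
# so the cue must now be rendered at 15 deg relative to the head.
update_audio_cue(60.0, lambda: 45.0, lambda az: print(f"render at {az:+.1f} deg"))
```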
Affiliation(s)
- P Flanagan
- School of Psychology, Deakin University, Geelong, Victoria, Australia
16.
Abstract
Sixty-four commercial airline pilots (ages 35-64 years, median 53) were surveyed regarding hearing loss and tinnitus. Within specific age groups, the proportions responding positively exceeded the corresponding proportions in the general population reported by the National Center for Health Statistics.
Affiliation(s)
- D R Begault
- Flight Management and Human Factors Division, NASA Ames Research Center, Moffett Field, CA 94035, USA