1. Yost WA. Randomizing spectral cues used to resolve front-back reversals in sound-source localization. J Acoust Soc Am 2023; 154:661-670. [PMID: 37540095] [PMCID: PMC10404140] [DOI: 10.1121/10.0020563]
Abstract
Front-back reversals (FBRs) in sound-source localization tasks, caused by cone-of-confusion errors on the azimuth plane, occur with some regularity, and their occurrence is listener-dependent. There are fewer FBRs for wideband, high-frequency sounds than for low-frequency sounds, presumably because the sources of low-frequency sounds are localized on the basis of interaural differences (interaural time and level differences), which can lead to ambiguous responses. Spectral cues can aid in determining sound-source locations for wideband, high-frequency sounds, and such spectral cues do not lead to ambiguous responses. However, the extent to which spectral features aid sound-source localization is still not known. This paper explores conditions in which the spectral profile of two-octave-wide noise bands, whose sources were localized on the azimuth plane, was randomly varied. The experiment demonstrated that such spectral-profile randomization increased FBRs for high-frequency noise bands, presumably because whatever spectral features are used for sound-source localization were no longer as useful for resolving FBRs, so listeners relied on interaural differences, which led to response ambiguities. Additionally, head rotation decreased FBRs in all cases, even when FBRs had increased due to spectral-profile randomization. In all cases, the occurrence of FBRs was listener-dependent.
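The interaural ambiguity behind front-back reversals can be made concrete with a toy model. The sketch below is our own illustration, not code from the paper: it uses the textbook sine approximation of the interaural time difference (ITD) for a far-field source and a spherical head. The head radius, speed of sound, and azimuth convention (degrees from straight ahead) are assumptions.

```python
import math

SPEED_OF_SOUND = 343.0   # m/s, approximate value in air
HEAD_RADIUS = 0.0875     # m, assumed average head radius

def itd_seconds(azimuth_deg: float) -> float:
    """Approximate ITD for a far-field source at the given azimuth.

    Textbook sine model: ITD = (2r/c) * sin(azimuth). The ITD depends
    only on the lateral angle from the median plane, which is exactly
    why it cannot distinguish front from back.
    """
    return (2 * HEAD_RADIUS / SPEED_OF_SOUND) * math.sin(math.radians(azimuth_deg))

# A source at 30 degrees (front) and its front-back mirror at 150 degrees
# (back) produce the same ITD, illustrating the cone of confusion:
front = itd_seconds(30.0)
back = itd_seconds(150.0)
print(abs(front - back) < 1e-9)  # True: ITD alone cannot resolve front/back
```

Under this model, only spectral (pinna) cues or head rotation, as the abstract describes, can break the tie between mirrored positions.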
Affiliation(s)
- William A Yost
- Spatial Hearing Lab, College of Health Solutions, Arizona State University, Tempe, Arizona 85004, USA
2. Long Y, Wang W, Liu J, Liu K, Gong S. The interference of tinnitus on sound localization was related to the type of stimulus. Front Neurosci 2023; 17:1077455. [PMID: 36824213] [PMCID: PMC9941629] [DOI: 10.3389/fnins.2023.1077455]
Abstract
Spatial processing is a major cognitive function of hearing, and sound source localization is an intuitive evaluation of spatial hearing. Current evidence on the effect of tinnitus on sound source localization remains limited. The present study aimed to investigate whether tinnitus affects the ability to localize sound in participants with normal hearing and whether the effect is related to the type of stimulus. Overall, 40 participants with tinnitus and 40 control participants without tinnitus were evaluated. Sound source discrimination tasks were performed on the horizontal plane, using pure tones (PT, single frequency) and monosyllables (MS, containing spectral information) as stimuli. The root-mean-square error (RMSE) score was calculated from the differences between target and response angles. When the stimuli were PTs, the RMSE scores of the control and tinnitus groups were 11.77 ± 2.57° and 13.97 ± 4.18°, respectively; the control group performed significantly better than the tinnitus group (t = 2.841, p = 0.006). When the stimuli were MS, the RMSE scores of the control and tinnitus groups were 7.12 ± 2.29° and 7.90 ± 2.33°, respectively, with no significant difference between the two groups (t = 1.501, p = 0.137). Neither an effect of unilateral versus bilateral tinnitus (PT: t = 0.763, p = 0.450; MS: t = 1.760, p = 0.086) nor an effect of tinnitus side (left/right; PT: t = 0.389, p = 0.703; MS: t = 1.407, p = 0.179) on sound localization ability was observed. Sound source localization ability gradually deteriorated with increasing age (PT: r2 = 0.153, p < 0.001; MS: r2 = 0.516, p = 0.043). In conclusion, tinnitus interfered with the ability to localize PTs, but the ability to localize MS was not affected. The interference of tinnitus with sound source localization is therefore related to the type of stimulus.
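As a concrete reading of the RMSE score used in this abstract, the sketch below shows one plausible way such a score could be computed from paired target and response azimuths. This is our own illustrative implementation, not the authors' analysis code.

```python
import math

def rmse_degrees(targets, responses):
    """Root-mean-square localization error, in degrees.

    One plausible reading of the abstract's RMSE score: the square root
    of the mean squared difference between each target azimuth and the
    listener's response azimuth.
    """
    if len(targets) != len(responses):
        raise ValueError("targets and responses must pair up one-to-one")
    squared = [(t - r) ** 2 for t, r in zip(targets, responses)]
    return math.sqrt(sum(squared) / len(squared))

# Example: three trials with errors of 5, 5, and 0 degrees
print(rmse_degrees([0, 30, -30], [5, 25, -30]))  # ≈ 4.08
```

A lower score means responses cluster more tightly around the true source positions, matching the direction of the group comparisons reported above.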
Affiliation(s)
- Yue Long
- Department of Otolaryngology-Head and Neck Surgery, Beijing Friendship Hospital, Capital Medical University, Beijing, China; Clinical Center for Hearing Loss, Capital Medical University, Beijing, China
- Wei Wang
- Department of Otolaryngology-Head and Neck Surgery, Beijing Friendship Hospital, Capital Medical University, Beijing, China
- Jiao Liu
- Department of Otolaryngology-Head and Neck Surgery, Beijing Friendship Hospital, Capital Medical University, Beijing, China
- Ke Liu
- Department of Otolaryngology-Head and Neck Surgery, Beijing Friendship Hospital, Capital Medical University, Beijing, China
- Shusheng Gong (corresponding author)
- Department of Otolaryngology-Head and Neck Surgery, Beijing Friendship Hospital, Capital Medical University, Beijing, China; Clinical Center for Hearing Loss, Capital Medical University, Beijing, China
3. Han JH, Lee J, Lee HJ. Ear-Specific Hemispheric Asymmetry in Unilateral Deafness Revealed by Auditory Cortical Activity. Front Neurosci 2021; 15:698718. [PMID: 34393711] [PMCID: PMC8363420] [DOI: 10.3389/fnins.2021.698718]
Abstract
Profound unilateral deafness reduces the sound localization ability that is normally achieved via binaural hearing. Furthermore, unilateral deafness promotes a substantial change in cortical processing of binaural stimulation, leading to reorganization across the whole brain. Although distinct patterns of hemispheric laterality depending on the side and duration of deafness have been suggested, the neurological mechanisms underlying this difference, in relation to behavioral performance when detecting spatially varied cues, remain unknown. To elucidate the mechanism, we compared N1/P2 auditory cortical activities and the pattern of hemispheric asymmetry in normal-hearing, unilaterally deaf (UD), and simulated acute unilateral hearing loss groups while they passively listened to speech sounds delivered from different locations under open free-field conditions. The participants' sound localization performance was measured by having them detect sound sources in the azimuth plane. The results reveal a delayed reaction time in the right-sided UD (RUD) group for the sound localization task and a prolonged P2 latency compared to the left-sided UD (LUD) group. Moreover, the RUD group showed adaptive cortical reorganization, evidenced by increased responses in the hemisphere ipsilateral to the intact ear for individuals with better sound localization, whereas left-sided unilateral deafness caused contralateral dominance in activity from the hearing ear. The brain dynamics of right-sided unilateral deafness thus indicate a greater capacity for adaptive change to compensate for impaired spatial hearing. In addition, cortical N1 responses to spatially varied speech sounds in unilaterally deaf people were inversely related to the duration of deafness in the area encompassing the right auditory cortex, indicating that early intervention would be needed to protect against maladaptation of the central auditory system following unilateral deafness.
Affiliation(s)
- Ji-Hye Han
- Laboratory of Brain & Cognitive Sciences for Convergence Medicine, Hallym University College of Medicine, Anyang-si, South Korea
- Jihyun Lee
- Laboratory of Brain & Cognitive Sciences for Convergence Medicine, Hallym University College of Medicine, Anyang-si, South Korea
- Hyo-Jeong Lee
- Laboratory of Brain & Cognitive Sciences for Convergence Medicine, Hallym University College of Medicine, Anyang-si, South Korea; Department of Otorhinolaryngology-Head and Neck Surgery, Hallym University College of Medicine, Chuncheon-si, South Korea
4. Valzolgher C, Verdelet G, Salemme R, Lombardi L, Gaveau V, Farné A, Pavani F. Reaching to sounds in virtual reality: A multisensory-motor approach to promote adaptation to altered auditory cues. Neuropsychologia 2020; 149:107665. [PMID: 33130161] [DOI: 10.1016/j.neuropsychologia.2020.107665]
Abstract
When localising sounds in space the brain relies on internal models that specify the correspondence between the auditory input reaching the ears, initial head position, and coordinates in external space. These models can be updated throughout life, setting the basis for re-learning spatial hearing abilities in adulthood. In addition, strategic behavioural adjustments allow people to quickly adapt to atypical listening situations. Until recently, the potential role of dynamic listening, involving head movements or reaching to sounds, has remained largely overlooked. Here, we exploited visual virtual reality (VR) and real-time kinematic tracking to study the role of active multisensory-motor interactions when hearing individuals adapt to altered binaural cues (one ear plugged and muffed). Participants were immersed in a VR scenario showing 17 virtual speakers at ear level. In each trial, they heard a sound delivered from a real speaker aligned with one of the virtual ones and were instructed either to reach and touch the perceived sound source (Reaching group) or to read the label associated with the speaker (Naming group). Participants were free to move their heads during the task and received audio-visual feedback on their performance. Most importantly, they performed the task under binaural or monaural listening. Results show that both groups adapted rapidly to monaural listening, improving sound localisation performance across trials and changing their head-movement behaviour. Reaching to the sounds induced faster and larger sound localisation improvements than just naming their positions. This benefit was linked to progressively wider head movements to explore auditory space, selectively in the Reaching group. In conclusion, reaching to sounds in an immersive visual VR context proved most effective for adapting to altered binaural listening. Head movements played an important role in adaptation, pointing to the importance of dynamic listening when implementing training protocols for improving spatial hearing.
Affiliation(s)
- Chiara Valzolgher
- IMPACT, Centre de Recherche en Neuroscience Lyon (CRNL), France; Centre for Mind/Brain Sciences (CIMeC), University of Trento, Italy.
- Romeo Salemme
- IMPACT, Centre de Recherche en Neuroscience Lyon (CRNL), France; Neuro-immersion, Centre de Recherche en Neuroscience Lyon (CRNL), France
- Luigi Lombardi
- Department of Psychology and Cognitive Sciences (DiPSCo), University of Trento, Italy
- Valerie Gaveau
- IMPACT, Centre de Recherche en Neuroscience Lyon (CRNL), France
- Alessandro Farné
- IMPACT, Centre de Recherche en Neuroscience Lyon (CRNL), France; Centre for Mind/Brain Sciences (CIMeC), University of Trento, Italy; Neuro-immersion, Centre de Recherche en Neuroscience Lyon (CRNL), France
- Francesco Pavani
- IMPACT, Centre de Recherche en Neuroscience Lyon (CRNL), France; Centre for Mind/Brain Sciences (CIMeC), University of Trento, Italy; Department of Psychology and Cognitive Sciences (DiPSCo), University of Trento, Italy
5. Aggius-Vella E, Gori M, Animali S, Campus C, Binda P. Non-spatial skills differ in the front and rear peri-personal space. Neuropsychologia 2020; 147:107619. [PMID: 32898519] [DOI: 10.1016/j.neuropsychologia.2020.107619]
Abstract
In measuring behavioural and pupillary responses to auditory oddball stimuli delivered in the front and rear peri-personal space, we find that pupils dilate in response to rare stimuli, both target and distracters. Dilation in response to targets is stronger than the response to distracters, implying a task relevance effect on pupil responses. Crucially, pupil dilation in response to targets is also selectively modulated by the location of sound sources: stronger in the front than in the rear peri-personal space, in spite of matching behavioural performance. This supports the concept that even non-spatial skills, such as the ability to alert in response to behaviourally relevant events, are differentially engaged across subregions of the peri-personal space.
Affiliation(s)
- Elena Aggius-Vella
- Unit for Visually Impaired People (U-VIP), Center for Human Technologies, Fondazione Istituto Italiano di Tecnologia, Genoa, Italy; Institute for Mind, Brain and Technology, Ivcher School of Psychology, Inter-Disciplinary Center (IDC), Herzeliya, Israel
- Monica Gori
- Unit for Visually Impaired People (U-VIP), Center for Human Technologies, Fondazione Istituto Italiano di Tecnologia, Genoa, Italy
- Silvia Animali
- University of Pisa, Dept. of Translational Research and New Technologies in Medicine and Surgery, Italy; Department of Surgical, Medical and Molecular Pathology and Critical Care Medicine, University of Pisa, Italy
- Claudio Campus
- Unit for Visually Impaired People (U-VIP), Center for Human Technologies, Fondazione Istituto Italiano di Tecnologia, Genoa, Italy
- Paola Binda
- University of Pisa, Dept. of Translational Research and New Technologies in Medicine and Surgery, Italy
6. Valzolgher C, Campus C, Rabini G, Gori M, Pavani F. Updating spatial hearing abilities through multisensory and motor cues. Cognition 2020; 204:104409. [PMID: 32717425] [DOI: 10.1016/j.cognition.2020.104409]
Abstract
Spatial hearing relies on a series of mechanisms for associating auditory cues with positions in space. When auditory cues are altered, humans, as well as other animals, can update the way they exploit auditory cues and partially compensate for their spatial hearing difficulties. In two experiments, we simulated monaural listening in hearing adults by temporarily plugging and muffing one ear, to assess the effects of active versus passive training conditions. During active training, participants moved an audio-bracelet attached to their wrist while continuously attending to the position of the sounds it produced. During passive training, participants received identical acoustic stimulation and performed exactly the same task, but the audio-bracelet was moved by the experimenter. Before and after training, we measured adaptation to monaural listening in three auditory tasks: single sound localization, minimum audible angle (MAA), and spatial and temporal bisection. We also performed the tests twice in an untrained group, which completed the same auditory tasks but received no training. Results showed that participants significantly improved in single sound localization across 3 consecutive days, but more in the active than in the passive training group. This reveals that the benefits of kinesthetic cues are additive with respect to those of paying attention to the position of sounds and/or seeing their positions when updating spatial hearing. The observed adaptation did not generalize to the other auditory spatial tasks (spatial bisection and MAA), suggesting that partial updating of sound-space correspondences does not extend to all aspects of spatial hearing.
Affiliation(s)
- Chiara Valzolgher
- Centro Interdipartimentale Mente/Cervello (CIMeC), University of Trento, Italy; IMPACT, Centre de Recherche en Neurosciences Lyon (CRNL), France.
- Giuseppe Rabini
- Centro Interdipartimentale Mente/Cervello (CIMeC), University of Trento, Italy
- Monica Gori
- Italian Institute of Technology (IIT), Italy
- Francesco Pavani
- Centro Interdipartimentale Mente/Cervello (CIMeC), University of Trento, Italy; IMPACT, Centre de Recherche en Neurosciences Lyon (CRNL), France; Department of Psychology and Cognitive Science, University of Trento, Italy
7. Rabini G, Lucin G, Pavani F. Certain, but incorrect: on the relation between subjective certainty and accuracy in sound localisation. Exp Brain Res 2020; 238:727-739. [PMID: 32080750] [DOI: 10.1007/s00221-020-05748-4]
Abstract
When asked to identify the position of a sound, listeners can report its perceived location as well as their subjective certainty about this spatial judgement. Yet, research to date has focused primarily on measures of perceived location (e.g., accuracy and precision of pointing responses), neglecting the phenomenological experience of subjective spatial certainty. The present study aimed to investigate: (1) changes in subjective certainty about sound position induced by listening with one ear plugged (simulated monaural listening), compared to typical binaural listening; and (2) the relation between subjective certainty about sound position and localisation accuracy. In two experiments (N = 20 each), participants localised single sounds delivered from one of 60 speakers hidden from view in front space. In each trial, they also provided a subjective rating of their spatial certainty about sound position. No feedback on responses was provided. Overall, participants were mostly accurate and certain about sound position in binaural listening, whereas their accuracy and subjective certainty decreased in monaural listening. Interestingly, accuracy and certainty dissociated within single trials during monaural listening: in some trials participants were certain but incorrect, in others uncertain but correct. Furthermore, unlike accuracy, subjective certainty rapidly increased as a function of time during the monaural listening block. Finally, subjective certainty changed as a function of the perceived location of the sound source. These novel findings reveal that listeners quickly update their subjective confidence about sound position when they experience an altered listening condition, even in the absence of feedback. Furthermore, they document a dissociation between accuracy and subjective certainty when mapping auditory input to space.
Affiliation(s)
- Giuseppe Rabini
- Centre for Mind/Brain Sciences (CIMeC), University of Trento, Via Angelo Bettini 31, 38068, Rovereto, TN, Italy.
- Giulia Lucin
- Department of Psychology and Cognitive Sciences (DiPSCo), University of Trento, Via Angelo Bettini 84, 38068, Rovereto, TN, Italy
- Francesco Pavani
- Centre for Mind/Brain Sciences (CIMeC), University of Trento, Via Angelo Bettini 31, 38068, Rovereto, TN, Italy; Department of Psychology and Cognitive Sciences (DiPSCo), University of Trento, Via Angelo Bettini 84, 38068, Rovereto, TN, Italy; IMPACT, Centre de Recherche en Neuroscience Lyon (CRNL), Lyon, France
8. Balachandar K, Carlile S. The monaural spectral cues identified by a reverse correlation analysis of free-field auditory localization data. J Acoust Soc Am 2019; 146:29. [PMID: 31370620] [DOI: 10.1121/1.5113577]
Abstract
The outer ear's location-dependent pattern of spectral filtering generates cues used to determine a sound source's elevation as well as its front-back location. The authors aim to identify these features using a reverse correlation analysis (RCA), combining free-field localization behaviour with the magnitude spectra of the associated head-related transfer functions (HRTFs) from a sample of 73 participants. Localization responses were collected before and immediately after introducing a pair of outer-ear inserts which modified the listeners' HRTFs to varying extents. The RCA identified several different features responsible for eliciting localization responses. The efficacy of these features was examined using two models of monaural localization. In general, the predicted performance was closely aligned with the free-field localization error for the bare-ear condition; however, both models tended to grossly over-estimate the localization error based on HRTFs modified by the outer-ear inserts. The RCA's feature selection notably had the effect of better aligning the predicted performance of both models with the actual localization performance. This suggests that the RCA revealed sufficient detail for both models to correctly predict localization performance, and also limited the influence of filtered-out elements in the distorted HRTFs that contributed to the degraded accuracy of both models.
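The general idea of reverse correlation can be illustrated with a textbook-style sketch: average the stimulus features (here standing in for per-band HRTF magnitudes) over trials that elicited a given response, then subtract the mean over all trials, so that bands carrying large weights are candidate cues. This is our own simplified construction, not the authors' analysis pipeline.

```python
def reverse_correlation(feature_vectors, responses):
    """Generic reverse-correlation kernel.

    feature_vectors: one list of per-band values (e.g. dB magnitudes)
    per trial. responses: one boolean per trial, True when the trial
    elicited the response of interest. Returns the response-triggered
    mean minus the overall mean, band by band.
    """
    n = len(feature_vectors)
    dim = len(feature_vectors[0])
    overall = [sum(v[i] for v in feature_vectors) / n for i in range(dim)]
    hits = [v for v, r in zip(feature_vectors, responses) if r]
    if not hits:
        raise ValueError("no positive responses to correlate against")
    hit_mean = [sum(v[i] for v in hits) / len(hits) for i in range(dim)]
    return [h - o for h, o in zip(hit_mean, overall)]

# Toy example: responses track the second spectral band only
feats = [[0.0, 1.0], [0.0, 0.0], [1.0, 1.0], [1.0, 0.0]]
resp = [True, False, True, False]
print(reverse_correlation(feats, resp))  # → [0.0, 0.5]
```

The zero weight on the first band and positive weight on the second reflect that only the second band co-varies with the response, which is the kind of feature selection the RCA performs on HRTF spectra.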
Affiliation(s)
- Kapilesh Balachandar
- Auditory Neuroscience Laboratory, University of Sydney, New South Wales 2006, Australia
- Simon Carlile
- Auditory Neuroscience Laboratory, University of Sydney, New South Wales 2006, Australia
9. Kumpik DP, King AJ. A review of the effects of unilateral hearing loss on spatial hearing. Hear Res 2018; 372:17-28. [PMID: 30143248] [PMCID: PMC6341410] [DOI: 10.1016/j.heares.2018.08.003]
Abstract
The capacity of the auditory system to extract spatial information relies principally on the detection and interpretation of binaural cues, i.e., differences in the time of arrival or level of the sound between the two ears. In this review, we consider the effects of unilateral or asymmetric hearing loss on spatial hearing, with a focus on the adaptive changes in the brain that may help to compensate for an imbalance in input between the ears. Unilateral hearing loss during development weakens the brain's representation of the deprived ear, and this may outlast the restoration of function in that ear and therefore impair performance on tasks that rely on binaural processing, such as sound localization and spatial release from masking. However, loss of hearing in one ear also triggers a reweighting of the cues used for sound localization, resulting in increased dependence on the spectral cues provided by the other ear for localization in azimuth, as well as adjustments in binaural sensitivity that help to offset the imbalance in inputs between the two ears. These adaptive strategies enable the developing auditory system to compensate to a large degree for asymmetric hearing loss, thereby maintaining accurate sound localization. They can also be leveraged by training following hearing loss in adulthood. Although further research is needed to determine whether this plasticity can generalize to more realistic listening conditions and to other tasks, such as spatial unmasking, the capacity of the auditory system to undergo these adaptive changes has important implications for rehabilitation strategies in the hearing impaired. In summary: unilateral hearing loss in infancy can disrupt spatial hearing, even after binaural inputs are restored; plasticity in the developing brain enables substantial recovery in sound localization accuracy; adaptation to unilateral hearing loss is based on reweighting of monaural spectral cues and binaural plasticity; and training on auditory tasks can partially compensate for unilateral hearing loss, highlighting potential therapies.
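The level cue central to this review, the interaural level difference (ILD), can be illustrated with a short sketch computing an ILD in dB from left- and right-ear samples. This uses the textbook RMS-ratio definition and is our own illustration, not code from the review.

```python
import math

def ild_db(left, right):
    """Interaural level difference in dB between two ear signals.

    Defined here as 20*log10 of the ratio of RMS amplitudes; positive
    values mean the left ear receives the more intense signal. A
    unilateral loss attenuating one ear shifts this cue directly.
    """
    def rms(samples):
        return math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20.0 * math.log10(rms(left) / rms(right))

# Halving the right-ear amplitude produces a ~6 dB left-favouring ILD:
left = [0.5, -0.5, 0.5, -0.5]
right = [0.25, -0.25, 0.25, -0.25]
print(round(ild_db(left, right), 2))  # ≈ 6.02
```

In the reweighting the review describes, a listener deprived of reliable ILDs of this kind comes to rely more on the spectral cues of the intact ear.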
Affiliation(s)
- Daniel P Kumpik
- Department of Physiology, Anatomy and Genetics, University of Oxford, Parks Road, Oxford, OX1 3PT, UK
- Andrew J King
- Department of Physiology, Anatomy and Genetics, University of Oxford, Parks Road, Oxford, OX1 3PT, UK
10. Zaunschirm M, Schörkhuber C, Höldrich R. Binaural rendering of Ambisonic signals by head-related impulse response time alignment and a diffuseness constraint. J Acoust Soc Am 2018; 143:3616. [PMID: 29960468] [DOI: 10.1121/1.5040489]
Abstract
Binaural rendering of Ambisonic signals is of great interest in the fields of virtual reality, immersive media, and virtual acoustics. Typically, the spatial order of head-related impulse responses (HRIRs) is considerably higher than the order of the Ambisonic signals. The resulting order reduction of the HRIRs has a detrimental effect on the binaurally rendered signals, and perceptual evaluations indicate limited externalization, localization accuracy, and altered timbre. In this contribution, a binaural renderer, which is computed using a frequency-dependent time alignment of HRIRs followed by a minimization of the squared error subject to a diffuse-field covariance matrix constraint, is presented. The frequency-dependent time alignment retains the interaural time difference (at low frequencies) and results in a HRIR set with lower spatial complexity, while the constrained optimization controls the diffuse-field behavior. Technical evaluations in terms of sound coloration, interaural level differences, diffuse-field response, and interaural coherence, as well as findings from formal listening experiments show a significant improvement of the proposed method compared to state-of-the-art methods.
Affiliation(s)
- Markus Zaunschirm
- Institute of Electronic Music and Acoustics, University of Music and Performing Arts, Graz, 8010, Austria
- Christian Schörkhuber
- Institute of Electronic Music and Acoustics, University of Music and Performing Arts, Graz, 8010, Austria
- Robert Höldrich
- Institute of Electronic Music and Acoustics, University of Music and Performing Arts, Graz, 8010, Austria
11. Berger CC, Gonzalez-Franco M, Tajadura-Jiménez A, Florencio D, Zhang Z. Generic HRTFs May be Good Enough in Virtual Reality. Improving Source Localization through Cross-Modal Plasticity. Front Neurosci 2018; 12:21. [PMID: 29456486] [PMCID: PMC5801410] [DOI: 10.3389/fnins.2018.00021]
Abstract
Auditory spatial localization in humans is performed using a combination of interaural time differences, interaural level differences, as well as spectral cues provided by the geometry of the ear. To render spatialized sounds within a virtual reality (VR) headset, either individualized or generic Head Related Transfer Functions (HRTFs) are usually employed. The former require arduous calibrations, but enable accurate auditory source localization, which may lead to a heightened sense of presence within VR. The latter obviate the need for individualized calibrations, but result in less accurate auditory source localization. Previous research on auditory source localization in the real world suggests that our representation of acoustic space is highly plastic. In light of these findings, we investigated whether auditory source localization could be improved for users of generic HRTFs via cross-modal learning. The results show that pairing a dynamic auditory stimulus, with a spatio-temporally aligned visual counterpart, enabled users of generic HRTFs to improve subsequent auditory source localization. Exposure to the auditory stimulus alone or to asynchronous audiovisual stimuli did not improve auditory source localization. These findings have important implications for human perception as well as the development of VR systems as they indicate that generic HRTFs may be enough to enable good auditory source localization in VR.
Affiliation(s)
- Christopher C. Berger
- Microsoft Research, Redmond, WA, United States
- Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, CA, United States
- Ana Tajadura-Jiménez
- UCL Interaction Centre, University College London, London, United Kingdom
- Interactive Systems DEI-Lab, Universidad Carlos III de Madrid, Madrid, Spain
- Zhengyou Zhang
- Microsoft Research, Redmond, WA, United States
- Department of Electrical Engineering, University of Washington, Seattle, WA, United States
12. Karim AM, Rumalla K, King LA, Hullar TE. The effect of spatial auditory landmarks on ambulation. Gait Posture 2018; 60:171-174. [PMID: 29241100] [PMCID: PMC5809182] [DOI: 10.1016/j.gaitpost.2017.12.003]
Abstract
The maintenance of balance and posture results from the collaborative efforts of vestibular, proprioceptive, and visual sensory inputs, but a fourth input, audition, may also improve balance. Here, we tested the hypothesis that auditory inputs function as environmental spatial landmarks whose effectiveness depends on sound localization ability during ambulation. Eight blindfolded normal young subjects performed the Fukuda-Unterberger test in three auditory conditions: silence, white noise played through headphones (head-referenced condition), and white noise played through a loudspeaker placed directly in front, 135 cm from the ear at ear height (earth-referenced condition). For the earth-referenced condition, an additional experiment tested the effect of moving the speaker's azimuthal position to 45, 90, 135, and 180°. Subjects performed significantly better in the earth-referenced condition than in the head-referenced or silent conditions. Performance progressively decreased over the range from 0° to 135°, but all subjects then improved slightly at 180° compared to 135°. These results suggest that the presence of sound dramatically improves the ability to ambulate when vision is limited, but that sound sources must be located in the external environment in order to improve balance. This supports the hypothesis that they act by providing spatial landmarks against which head and body movement and orientation may be compared and corrected. Balance improvement in the azimuthal plane mirrors sensitivity to sound movement at similar positions, indicating that similar auditory mechanisms may underlie both processes. These results may help optimize the use of auditory cues to improve balance in particular patient populations.
Affiliation(s)
- Laurie A King, Oregon Health and Science University, Portland, OR, USA

13
Watson CJG, Carlile S, Kelly H, Balachandar K. The Generalization of Auditory Accommodation to Altered Spectral Cues. Sci Rep 2017; 7:11588. [PMID: 28912440] [PMCID: PMC5599623] [DOI: 10.1038/s41598-017-11981-9] [Received: 03/09/2017] [Accepted: 08/30/2017] [Indexed: 11/23/2022]
Abstract
The capacity of healthy adult listeners to accommodate to altered spectral cues to the source locations of broadband sounds has now been well documented. In recent years we have demonstrated that the degree and speed of accommodation are improved by using an integrated sensory-motor training protocol under anechoic conditions. Here we demonstrate that the learning which underpins the localization performance gains during the accommodation process using anechoic broadband training stimuli generalizes to environmentally relevant scenarios. As before, alterations to monaural spectral cues were produced by fitting participants with custom-made outer ear molds, worn during waking hours. Following acute degradations in localization performance, participants then underwent daily sensory-motor training with broadband noise stimuli over ten days to improve localization accuracy. Participants not only demonstrated post-training improvements in localization accuracy for broadband noises presented in the same set of positions used during training, but also for stimuli presented in untrained locations, for monosyllabic speech sounds, and for stimuli presented in reverberant conditions. These findings shed further light on the neuroplastic capacity of healthy listeners, and represent the next step in the development of training programs for users of assistive listening devices that degrade localization acuity by distorting or bypassing monaural cues.
Affiliation(s)
- Christopher J G Watson, School of Medical Sciences, University of Sydney, Sydney, New South Wales, 2006, Australia
- Simon Carlile, School of Medical Sciences, University of Sydney, Sydney, New South Wales, 2006, Australia
- Heather Kelly, School of Medical Sciences, University of Sydney, Sydney, New South Wales, 2006, Australia
- Kapilesh Balachandar, School of Medical Sciences, University of Sydney, Sydney, New South Wales, 2006, Australia

14
Ehlers E, Goupell MJ, Zheng Y, Godar SP, Litovsky RY. Binaural sensitivity in children who use bilateral cochlear implants. J Acoust Soc Am 2017; 141:4264. [PMID: 28618809] [PMCID: PMC5464955] [DOI: 10.1121/1.4983824] [Received: 10/25/2016] [Revised: 05/04/2017] [Accepted: 05/08/2017] [Indexed: 05/29/2023]
Abstract
Children who are deaf and receive bilateral cochlear implants (BiCIs) perform better on spatial hearing tasks using bilateral rather than unilateral inputs; however, they underperform relative to normal-hearing (NH) peers. This gap in performance is multi-factorial, including the inability of speech processors to reliably deliver binaural cues. Although much is known regarding the binaural sensitivity of adults with BiCIs, less is known about how binaural sensitivity develops in children with BiCIs compared to NH children. Sixteen children (ages 9-17 years) were tested using synchronized research processors. Interaural time differences and interaural level differences (ITDs and ILDs, respectively) were presented to pairs of pitch-matched electrodes. Stimuli were 300-ms, 100-pulses-per-second, constant-amplitude pulse trains. In the first and second experiments, discrimination of interaural cues (either ITDs or ILDs) was measured using a two-interval left/right task. In the third experiment, subjects reported the perceived intracranial position of ITDs and ILDs in a lateralization task. All children demonstrated sensitivity to ILDs, possibly due to monaural level cues. Children who were born deaf had weak or absent sensitivity to ITDs; in contrast, ITD sensitivity was noted in children with previous exposure to acoustic hearing. Therefore, factors such as auditory deprivation, in particular a lack of early exposure to consistent timing differences between the ears, may delay the maturation of binaural circuits and cause insensitivity to binaural differences.
Affiliation(s)
- Erica Ehlers, University of Wisconsin-Madison, Waisman Center, 1500 Highland Avenue, Madison, Wisconsin 53705, USA
- Matthew J Goupell, Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland 20742, USA
- Yi Zheng, Beijing Advanced Innovation Center for Future Education, Beijing Normal University, Beijing 100875, China
- Shelly P Godar, University of Wisconsin-Madison, Waisman Center, 1500 Highland Avenue, Madison, Wisconsin 53705, USA
- Ruth Y Litovsky, University of Wisconsin-Madison, Waisman Center, 1500 Highland Avenue, Madison, Wisconsin 53705, USA

15
Joubaud T, Zimpfer V, Garcia A, Langrenne C. Sound localization models as evaluation tools for tactical communication and protective systems. J Acoust Soc Am 2017; 141:2637. [PMID: 28464634] [DOI: 10.1121/1.4979693] [Indexed: 05/08/2023]
Abstract
Tactical Communication and Protective Systems (TCAPS) are hearing protection devices intended to protect the listener's ears from hazardous sounds while preserving speech intelligibility. However, previous studies demonstrated that TCAPS still deteriorate the listener's situational awareness, in particular the ability to locate sound sources. On the horizontal plane, this is mainly explained by degradation of the acoustical cues that normally prevent the listener from making front-back confusions. As part of TCAPS development and assessment, a method predicting the TCAPS-induced degradation of sound localization capability from electroacoustic measurements would be more practical than time-consuming behavioral experiments. In this context, the present paper investigates two methods based on Head-Related Transfer Functions (HRTFs): a template-matching model and a three-layer neural network. They are optimized to fit human sound-source identification performance in the open-ear condition. The methods are applied to HRTFs measured with six TCAPS, providing identification probabilities. These are compared with the results of a behavioral experiment, conducted with the same protectors, which ranks the TCAPS by type. The neural network predicts realistic performance with earplugs, but overestimates errors with earmuffs. The template-matching model predicts human performance well, except for two particular TCAPS.
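The template-matching model described in this abstract can be pictured as choosing, for a given at-ear spectrum, the candidate direction whose stored HRTF template is spectrally closest. The sketch below is an illustrative reading only: the function names, the toy spectra, and the simple rms-dB spectral distance are assumptions, not the paper's actual model.

```python
import math
import random

def rms_db_distance(a, b):
    """Root-mean-square difference (dB) between two spectra."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

def predict_direction(observed, templates):
    """Template matching: return the candidate direction whose stored
    template spectrum is closest to the observed at-ear spectrum."""
    return min(templates, key=lambda d: rms_db_distance(observed, templates[d]))

# Toy templates: three candidate directions with made-up 64-point spectra
n = 64
templates = {
    0:   [math.sin(2 * math.pi * i / n) for i in range(n)],
    90:  [math.cos(2 * math.pi * i / n) for i in range(n)],
    180: [-math.sin(2 * math.pi * i / n) for i in range(n)],
}
# An "observed" spectrum: the 90-degree template plus a little noise
rng = random.Random(0)
observed = [x + 0.05 * rng.gauss(0, 1) for x in templates[90]]
print(predict_direction(observed, templates))  # → 90
```

On this reading, a TCAPS-induced degradation would show up as protected-ear HRTFs that no longer match the open-ear templates cleanly, lowering the predicted identification probabilities.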
Affiliation(s)
- Thomas Joubaud, Acoustics and Protection of the Soldier, French-German Research Institute of Saint-Louis, 5 rue du Général Cassagnou, BP 70034, 68301 Saint-Louis, France
- Véronique Zimpfer, Acoustics and Protection of the Soldier, French-German Research Institute of Saint-Louis, 5 rue du Général Cassagnou, BP 70034, 68301 Saint-Louis, France
- Alexandre Garcia, Laboratoire de Mécanique des Structures et des Systèmes Couplés, Conservatoire National des Arts et Métiers, 292 rue Saint-Martin, 75141 Paris Cedex 03, France
- Christophe Langrenne, Laboratoire de Mécanique des Structures et des Systèmes Couplés, Conservatoire National des Arts et Métiers, 292 rue Saint-Martin, 75141 Paris Cedex 03, France

16
Carlile S, Fox A, Orchard-Mills E, Leung J, Alais D. Six Degrees of Auditory Spatial Separation. J Assoc Res Otolaryngol 2016; 17:209-21. [PMID: 27033087] [PMCID: PMC4854823] [DOI: 10.1007/s10162-016-0560-1] [Received: 11/01/2014] [Accepted: 03/09/2016] [Indexed: 11/30/2022]
Abstract
The location of a sound is derived computationally from acoustical cues rather than being inherent in the topography of the input signal, as in vision. Since Lord Rayleigh, the descriptions of that representation have swung between "labeled line" and "opponent process" models. Employing a simple variant of a two-point separation judgment using concurrent speech sounds, we found that spatial discrimination thresholds changed nonmonotonically as a function of the overall separation. Rather than increasing with separation, spatial discrimination thresholds first declined as two-point separation increased before reaching a turning point and increasing thereafter with further separation. This "dipper" function, with a minimum at 6° of separation, was seen for regions around the midline as well as for more lateral regions (30° and 45°). The discrimination thresholds for the binaural localization cues were linear over the same range, so these cannot explain the shape of these functions. These data and a simple computational model indicate that the perception of auditory space involves a local code or multichannel mapping emerging subsequent to the binaural cue coding.
Affiliation(s)
- Simon Carlile, School of Medical Sciences, University of Sydney, Sydney, NSW, 2006, Australia; Bosch Institute, University of Sydney, Sydney, NSW, 2006, Australia
- Alex Fox, School of Medical Sciences, University of Sydney, Sydney, NSW, 2006, Australia
- Emily Orchard-Mills, School of Medical Sciences, University of Sydney, Sydney, NSW, 2006, Australia; School of Psychology, University of Sydney, Sydney, NSW, 2006, Australia
- Johahn Leung, School of Medical Sciences, University of Sydney, Sydney, NSW, 2006, Australia
- David Alais, School of Psychology, University of Sydney, Sydney, NSW, 2006, Australia

17
Carlile S, Leung J. The Perception of Auditory Motion. Trends Hear 2016; 20:2331216516644254. [PMID: 27094029] [PMCID: PMC4871213] [DOI: 10.1177/2331216516644254] [Received: 10/12/2015] [Revised: 03/22/2016] [Accepted: 03/22/2016] [Indexed: 11/16/2022]
Abstract
The growing availability of efficient and relatively inexpensive virtual auditory display technology has provided new research platforms to explore the perception of auditory motion. At the same time, deployment of these technologies in command and control as well as in entertainment roles is generating an increasing need to better understand the complex processes underlying auditory motion perception. This is a particularly challenging processing feat because it involves the rapid deconvolution of the relative change in the locations of sound sources produced by rotations and translations of the head in space (self-motion) to enable the perception of actual source motion. The fact that we perceive our auditory world to be stable despite almost continual movement of the head demonstrates the efficiency and effectiveness of this process. This review examines the acoustical basis of auditory motion perception and a wide range of psychophysical, electrophysiological, and cortical imaging studies that have probed the limits and possible mechanisms underlying this perception.
Affiliation(s)
- Simon Carlile, School of Medical Sciences, University of Sydney, NSW, Australia; Starkey Hearing Research Center, Berkeley, CA, USA
- Johahn Leung, School of Medical Sciences, University of Sydney, NSW, Australia

18
Keating P, Rosenior-Patten O, Dahmen JC, Bell O, King AJ. Behavioral training promotes multiple adaptive processes following acute hearing loss. eLife 2016; 5:e12264. [PMID: 27008181] [PMCID: PMC4841776] [DOI: 10.7554/eLife.12264] [Received: 10/12/2015] [Accepted: 03/23/2016] [Indexed: 11/13/2022]
Abstract
The brain possesses a remarkable capacity to compensate for changes in inputs resulting from a range of sensory impairments. Developmental studies of sound localization have shown that adaptation to asymmetric hearing loss can be achieved either by reinterpreting altered spatial cues or by relying more on those cues that remain intact. Adaptation to monaural deprivation in adulthood is also possible, but appears to lack such flexibility. Here we show, however, that appropriate behavioral training enables monaurally-deprived adult humans to exploit both of these adaptive processes. Moreover, cortical recordings in ferrets reared with asymmetric hearing loss suggest that these forms of plasticity have distinct neural substrates. An ability to adapt to asymmetric hearing loss using multiple adaptive processes is therefore shared by different species and may persist throughout the lifespan. This highlights the fundamental flexibility of neural systems, and may also point toward novel therapeutic strategies for treating sensory disorders.
The brain normally compares the timing and intensity of the sounds that reach each ear to work out a sound's origin. Hearing loss in one ear disrupts these between-ear comparisons, which causes listeners to make errors in this process. With time, however, the brain adapts to this hearing loss and once again learns to localize sounds accurately. Previous research has shown that young ferrets can adapt to hearing loss in one ear in two distinct ways. The ferrets either learn to remap the altered between-ear comparisons, caused by losing hearing in one ear, onto their new locations. Alternatively, the ferrets learn to locate sounds using only their good ear. Each strategy is suited to localizing different types of sound, but it was not known how this adaptive flexibility unfolds over time, whether it persists throughout the lifespan, or whether it is shared by other species.
Now, Keating et al. show that, with some coaching, adult humans also adapt to temporary loss of hearing in one ear using the same two strategies. In the experiments, adult humans were trained to localize different kinds of sounds while wearing an earplug in one ear. These sounds were presented from 12 loudspeakers arranged in a horizontal circle around the person being tested. The experiments showed that short periods of behavioral training enable adult humans to adapt to a hearing loss in one ear and recover their ability to localize sounds. Just like the ferrets, adult humans learned to correctly associate altered between-ear comparisons with their new locations and to rely more on the cues from the unplugged ear to locate sound. Which of these adaptive strategies the participants used depended on the frequencies present in the sounds. The cells in the ear and brain that detect and make sense of sound typically respond best to a limited range of frequencies, and so this suggests that each strategy relies on a distinct set of cells. Keating et al. confirmed in ferrets that different brain cells are indeed used to bring about adaptation to hearing loss in one ear using each strategy. These insights may aid the development of new therapies to treat hearing loss.
Affiliation(s)
- Peter Keating, Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
- Onayomi Rosenior-Patten, Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
- Johannes C Dahmen, Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
- Olivia Bell, Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
- Andrew J King, Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom

19
Trapeau R, Schönwiesner M. Adaptation to shifted interaural time differences changes encoding of sound location in human auditory cortex. Neuroimage 2015; 118:26-38. [PMID: 26054873] [DOI: 10.1016/j.neuroimage.2015.06.006] [Received: 02/18/2015] [Revised: 05/06/2015] [Accepted: 06/02/2015] [Indexed: 11/29/2022]
Abstract
The auditory system infers the location of sound sources from the processing of different acoustic cues. These cues change during development and when assistive hearing devices are worn. Previous studies have found behavioral recalibration to modified localization cues in human adults, but very little is known about the neural correlates and mechanisms of this plasticity. We equipped participants with digital devices worn in the ear canal that allowed us to delay the sound input to one ear and thus modify interaural time differences, a major cue for horizontal sound localization. Participants wore the digital earplugs continuously for nine days while engaged in day-to-day activities. Daily psychoacoustical testing showed rapid recalibration to the manipulation and confirmed that adults can adapt to shifted interaural time differences in their daily multisensory environment. High-resolution functional MRI scans performed before and after recalibration showed that recalibration was accompanied by changes in hemispheric lateralization of auditory cortex activity. These changes corresponded to a shift in spatial coding of sound direction comparable to the observed behavioral recalibration. Fitting the imaging results with a model of auditory spatial processing also revealed small shifts in voxel-wise spatial tuning within each hemisphere.
Affiliation(s)
- Régis Trapeau, International Laboratory for Brain, Music and Sound Research (BRAMS), Department of Psychology, Université de Montréal, Montreal, QC, Canada; Centre for Research on Brain, Language and Music (CRBLM), McGill University, Montreal, QC, Canada
- Marc Schönwiesner, International Laboratory for Brain, Music and Sound Research (BRAMS), Department of Psychology, Université de Montréal, Montreal, QC, Canada; Centre for Research on Brain, Language and Music (CRBLM), McGill University, Montreal, QC, Canada; Department of Neurology and Neurosurgery, Faculty of Medicine, McGill University, Montreal, QC, Canada

20
King AJ. Crossmodal plasticity and hearing capabilities following blindness. Cell Tissue Res 2015; 361:295-300. [PMID: 25893928] [PMCID: PMC4486786] [DOI: 10.1007/s00441-015-2175-y] [Received: 01/15/2015] [Accepted: 03/18/2015] [Indexed: 10/27/2022]
Abstract
Valuable insights into the role of experience in shaping perception can be obtained by studying the effects of blindness or other forms of sensory deprivation on the intact senses. Blind individuals are particularly dependent on their hearing and there is extensive evidence that they can develop superior auditory skills, either as a result of plasticity within the auditory system or through the recruitment of functionally relevant occipital cortical areas that lack their normal visual inputs. Because spatial processing normally relies on close interactions between vision and hearing, much of the research in this area has focused on the effects of blindness on auditory localization. Although enhanced auditory skills have been reported in many studies, some aspects of spatial hearing are impaired in the absence of vision. In this case, the effects of crossmodal plasticity may reflect a balance between adaptive changes that compensate for blindness and the role vision normally plays, particularly during development, in calibrating the brain's representation of auditory space.
Affiliation(s)
- Andrew J King, Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, UK

21
Voss P, Tabry V, Zatorre RJ. Trade-off in the sound localization abilities of early blind individuals between the horizontal and vertical planes. J Neurosci 2015; 35:6051-6. [PMID: 25878278] [PMCID: PMC6605175] [DOI: 10.1523/jneurosci.4544-14.2015] [Received: 11/03/2014] [Revised: 02/17/2015] [Accepted: 03/04/2015] [Indexed: 11/21/2022]
Abstract
There is substantial evidence that sensory deprivation leads to important cross-modal brain reorganization that is paralleled by enhanced perceptual abilities. However, it remains unclear how widespread these enhancements are, and whether they are intercorrelated or arise at the expense of other perceptual abilities. One specific area where such a trade-off might arise is that of spatial hearing, where blind individuals have been shown to possess superior monaural localization abilities in the horizontal plane, but inferior localization abilities in the vertical plane. While both of these tasks likely involve the use of monaural cues due to the absence of any relevant binaural signal, there is currently no proper explanation for this discrepancy, nor has any study investigated both sets of abilities in the same sample of blind individuals. Here, we assess whether the enhancements observed in the horizontal plane are related to the deficits observed in the vertical plane by testing sound localization in both planes in groups of blind and sighted persons. Our results show that the blind individuals who displayed the highest accuracy at localizing sounds monaurally in the horizontal plane are also the ones who exhibited the greatest deficit when localizing in the vertical plane. These findings appear to argue against the idea of generalized perceptual enhancements in the early blind, and instead suggest the possibility of a trade-off in localization proficiency between the two auditory spatial planes, such that learning to use monaural cues for the horizontal plane comes at the expense of using those cues to localize in the vertical plane.
Affiliation(s)
- Patrice Voss, Montreal Neurological Institute, McGill University, Montréal, Québec H3A 2B4, Canada; International Laboratory for Brain, Music and Sound Research (BRAMS), Montréal, Québec H2V 4P3, Canada
- Vanessa Tabry, Department of Psychology, Concordia University, Montréal, Québec H3A 2B4, Canada
- Robert J Zatorre, Montreal Neurological Institute, McGill University, Montréal, Québec H3A 2B4, Canada; International Laboratory for Brain, Music and Sound Research (BRAMS), Montréal, Québec H2V 4P3, Canada

22
Yost WA, Zhong X. Sound source localization identification accuracy: bandwidth dependencies. J Acoust Soc Am 2014; 136:2737-46. [PMID: 25373973] [DOI: 10.1121/1.4898045] [Indexed: 05/16/2023]
Abstract
Sound source localization accuracy using a sound source identification task was measured in the front, right quarter of the azimuth plane as rms (root-mean-square) error (degrees) for stimulus conditions in which the bandwidth (1/20 to 2 octaves wide) and center frequency (250, 2000, 4000 Hz) of 200-ms noise bursts were varied. Tones of different frequencies (250, 2000, 4000 Hz) were also used. As stimulus bandwidth increases, sound source localization identification accuracy increases (i.e., rms error decreases). Wideband stimuli (>1 octave wide) produce the best sound source localization accuracy (~6°-7° rms error), and localization accuracy for these wideband noise stimuli does not depend on center frequency. For narrow bandwidths (<1 octave) and tonal stimuli, accuracy does depend on center frequency: accuracy is highest for low-frequency stimuli (centered on 250 Hz), lowest for mid-frequency stimuli (centered on 2000 Hz), and intermediate for high-frequency stimuli (centered on 4000 Hz).
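The rms-error metric reported in this abstract follows directly from paired target and response azimuths. A minimal sketch (the function name and toy data are illustrative assumptions; within the study's front-right quadrant a simple signed difference suffices, so no wrap-around handling is included):

```python
import math

def rms_error_deg(targets, responses):
    """Root-mean-square localization error (degrees) over paired
    target/response azimuths."""
    diffs = [r - t for t, r in zip(targets, responses)]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# Toy data: responses scattered around true source azimuths (degrees)
targets = [0, 15, 30, 45, 60, 75, 90]
responses = [2, 12, 33, 45, 66, 70, 95]
print(round(rms_error_deg(targets, responses), 1))  # → 3.9
```

Because the errors are squared before averaging, occasional large mislocalizations weigh heavily, which is why a single rms figure can summarize how sharply bandwidth improves performance.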
Affiliation(s)
- William A Yost, Speech and Hearing Science, Arizona State University, P.O. Box 870102, Tempe, Arizona 85287
- Xuan Zhong, Speech and Hearing Science, Arizona State University, P.O. Box 870102, Tempe, Arizona 85287

23
McAnally KI, Martin RL. Sound localization with head movement: implications for 3-d audio displays. Front Neurosci 2014; 8:210. [PMID: 25161605] [PMCID: PMC4130110] [DOI: 10.3389/fnins.2014.00210] [Received: 12/19/2013] [Accepted: 07/01/2014] [Indexed: 11/13/2022]
Abstract
Previous studies have shown that the accuracy of sound localization is improved if listeners are allowed to move their heads during signal presentation. This study describes the function relating localization accuracy to the extent of head movement in azimuth. Sounds that are difficult to localize were presented in the free field from sources at a wide range of azimuths and elevations. Sounds remained active until the participants' heads had rotated through windows 2, 4, 8, 16, 32, or 64° of azimuth wide. Error in determining sound-source elevation and the rate of front/back confusion were found to decrease with increases in azimuth window width. Error in determining sound-source lateral angle was not found to vary with azimuth window width. The implication for 3-d audio displays is that the utility of a 3-d audio display for imparting spatial information is likely to be improved if operators are able to move their heads during signal presentation. Head movement may compensate in part for a paucity of spectral cues to sound-source location resulting from limitations in either the audio signals presented or the directional filters (i.e., head-related transfer functions) used to generate a display. However, head movements of a moderate size (i.e., through around 32° of azimuth) may be required to ensure that spatial information is conveyed with high accuracy.
Affiliation(s)
- Ken I McAnally, Aerospace Division, Defence Science and Technology Organisation, Melbourne, VIC, Australia
- Russell L Martin, Aerospace Division, Defence Science and Technology Organisation, Melbourne, VIC, Australia

24
Carlile S. The plastic ear and perceptual relearning in auditory spatial perception. Front Neurosci 2014; 8:237. [PMID: 25147497] [PMCID: PMC4123622] [DOI: 10.3389/fnins.2014.00237] [Received: 04/30/2014] [Accepted: 07/18/2014] [Indexed: 11/28/2022]
Abstract
The auditory system of adult listeners has been shown to accommodate to altered spectral cues to sound location, which presumably provides the basis for recalibration to changes in the shape of the ear over a lifetime. Here we review the role of auditory and non-auditory inputs to the perception of sound location and consider a range of recent experiments looking at the role of non-auditory inputs in the process of accommodation to these altered spectral cues. A number of studies have used small ear molds to modify the spectral cues, resulting in significant degradation in localization performance. Following chronic exposure (10-60 days), performance recovers to some extent, and recent work has demonstrated that this occurs for both audio-visual and audio-only regions of space. This raises the question of what the teacher signal is for this remarkable functional plasticity in the adult nervous system. Following a brief review of the influence of motor state in auditory localization, we consider the potential role of auditory-motor learning in the perceptual recalibration of the spectral cues. Several recent studies have considered how multi-modal and sensory-motor feedback might influence accommodation to altered spectral cues produced by ear molds or through virtual auditory space stimulation using non-individualized spectral cues. The work with ear molds demonstrates that a relatively short period of training involving audio-motor feedback (5-10 days) significantly improves both the rate and extent of accommodation to altered spectral cues. This has significant implications not only for the mechanisms by which this complex sensory information is encoded to provide spatial cues but also for adaptive training to altered auditory inputs. The review concludes by considering the implications for rehabilitative training with hearing aids and cochlear prostheses.
Affiliation(s)
- Simon Carlile, School of Medical Sciences and Bosch Institute, University of Sydney, Sydney, NSW, Australia

25
Durin V, Carlile S, Guillon P, Best V, Kalluri S. Acoustic analysis of the directional information captured by five different hearing aid styles. J Acoust Soc Am 2014; 136:818-828. [PMID: 25096115] [DOI: 10.1121/1.4883372] [Indexed: 06/03/2023]
Abstract
This study compared the head-related transfer functions (HRTFs) recorded from the bare ear of a mannequin for 393 spatial locations and for five different hearing aid styles: Invisible-in-the-canal (IIC), completely-in-the-canal (CIC), in-the-canal (ITC), in-the-ear (ITE), and behind-the-ear (BTE). The spectral distortions of each style compared to the bare ear were described qualitatively in terms of the gain and frequency characteristics of the prominent spectral notch and two peaks in the HRTFs. Two quantitative measures of the differences between the HRTF sets and a measure of the dissimilarity of the HRTFs within each set were also computed. In general, the IIC style was most similar and the BTE most dissimilar to the bare ear recordings. The relative similarities among the CIC, ITC, and ITE styles depended on the metric employed. The within-style spectral dissimilarities were comparable for the bare ear, IIC, CIC, and ITC with increasing ambiguity for the ITE and BTE styles. When the analysis bandwidth was limited to 8 kHz, the HRTFs within each set became much more similar.
Collapse
Affiliation(s)
- Virginie Durin
- VAST Audio Pty Ltd., 4 Cornwallis Street, Eveleigh, New South Wales 2015, Australia
- Simon Carlile
- Bosch Institute and School of Medical Sciences, Anderson Stuart Building (F13), University of Sydney, New South Wales 2006, Australia
- Pierre Guillon
- Computing and Audio Research Laboratory, School of Electrical and Information Engineering, University of Sydney, New South Wales 2006, Australia
- Virginia Best
- Bosch Institute and School of Medical Sciences, Anderson Stuart Building (F13), University of Sydney, New South Wales 2006, Australia
- Sridhar Kalluri
- Starkey Hearing Research Center, 2150 Shattuck Avenue, Suite 408, Berkeley, California 94704-1345, USA
26
Zimpfer V, Sarafian D. Impact of hearing protection devices on sound localization performance. Front Neurosci 2014; 8:135. [PMID: 24966807 PMCID: PMC4052631 DOI: 10.3389/fnins.2014.00135] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/18/2013] [Accepted: 05/14/2014] [Indexed: 11/13/2022] Open
Abstract
Hearing Protection Devices (HPDs) can protect the ear against loud, potentially damaging sounds while allowing lower-level sounds such as speech to be perceived. However, the impact of these devices on the ability to localize sound sources is not well known. To address this question, we used two different methods: one behavioral and one based on acoustical measurements. For the behavioral method, sound localization performance was measured with, and without, HPDs on 20 listeners. Five HPDs, comprising two passive (non-linear attenuation) and three active (talk-through) systems, were evaluated. The results showed a significant increase in localization errors, especially front-back and up-down confusions, relative to the "naked ear" test condition for all of the systems tested, particularly for the talk-through headphone system. For the acoustic measurement method, Head-Related Transfer Functions (HRTFs) were measured on an artificial head both without, and with, the HPDs in place. The effects of the HPDs on the spectral cues for the localization of different sound sources in the horizontal plane were analyzed. Alterations of the Interaural Spectral Difference (ISD) cues were identified, which could explain the observed increase in front-back confusions caused by the talk-through headphone protectors.
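The Interaural Spectral Difference (ISD) cue analyzed above is, in its simplest form, the difference between the left- and right-ear HRTF magnitude spectra in dB; an HPD alters the cue to the extent that this difference profile changes. A minimal sketch under that assumption (names are illustrative; the paper's exact formulation is not given here):

```python
import numpy as np

def isd_db(hrtf_left, hrtf_right, eps=1e-12):
    """Interaural spectral difference in dB between left- and
    right-ear HRTF magnitudes sampled on a common frequency grid."""
    return (20 * np.log10(np.abs(hrtf_left) + eps)
            - 20 * np.log10(np.abs(hrtf_right) + eps))

# With the left ear at 10x the right at every frequency, the ISD is
# a flat +20 dB profile.
left = np.full(128, 10.0)
right = np.ones(128)
print(isd_db(left, right)[:3])  # [20. 20. 20.]
```

Comparing `isd_db` profiles measured with and without a device in place gives one way to quantify the cue alterations the abstract describes.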
Affiliation(s)
- Véronique Zimpfer
- French-German Research Institute of Saint-Louis (ISL), Acoustics and Protection of the Soldier Group, Saint-Louis, France
- David Sarafian
- Institut de Recherche Biomédicale des Armées, Département Action et Cognition en Situation Opérationnelle, Brétigny-sur-Orge, France
27
Alves-Pinto A, Palmer AR, Lopez-Poveda EA. Perception and coding of high-frequency spectral notches: potential implications for sound localization. Front Neurosci 2014; 8:112. [PMID: 24904258 PMCID: PMC4034511 DOI: 10.3389/fnins.2014.00112] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/16/2013] [Accepted: 04/29/2014] [Indexed: 11/13/2022] Open
Abstract
The interaction of sound waves with the human pinna introduces high-frequency notches (5-10 kHz) in the stimulus spectrum that are thought to be useful for vertical sound localization. A common view is that these notches are encoded as rate profiles in the auditory nerve (AN). Here, we review previously published psychoacoustical evidence in humans and computer-model simulations of inner hair cell responses to noises with and without high-frequency spectral notches that dispute this view. We also present new recordings from guinea pig AN and "ideal observer" analyses of these recordings that suggest that discrimination between noises with and without high-frequency spectral notches is probably based on the information carried in the temporal pattern of AN discharges. The exact nature of the neural code involved nevertheless remains uncertain: computer-model simulations suggest that high-frequency spectral notches are encoded in spike timing patterns that may be operant in the 4-7 kHz frequency regime, while "ideal observer" analysis of experimental neural responses suggests that an effective cue for high-frequency spectral discrimination may be based on sampling the rates of spike arrivals of AN fibers using non-overlapping time binwidths of between 4 and 9 ms. Neural responses show that sensitivity to high-frequency notches is greater for fibers with low and medium spontaneous rates than for fibers with high spontaneous rates. Based on this evidence, we conjecture that inter-subject variability in high-frequency spectral notch detection and, consequently, in vertical sound localization may partly reflect individual differences in the available number of functional medium- and low-spontaneous-rate fibers.
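The "ideal observer" analyses summarized above compare distributions of AN spike counts for notched versus flat-spectrum noises; a common scalar summary of how separable two such count distributions are is d′. A minimal sketch of that summary statistic (the paper's actual analysis is not reproduced here; the pooled-variance form and all names are illustrative assumptions):

```python
import math

def d_prime(counts_a, counts_b):
    """d' between two sets of per-trial spike counts, using the
    common equal-weight pooled-standard-deviation form."""
    def mean(xs):
        return sum(xs) / len(xs)
    def var(xs):
        m = mean(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    pooled_sd = math.sqrt(0.5 * (var(counts_a) + var(counts_b)))
    return (mean(counts_a) - mean(counts_b)) / pooled_sd

# Counts from a 4-9 ms bin under two stimuli; clearly separated
# distributions give a large d'.
print(d_prime([10, 12, 11, 13], [6, 7, 5, 8]))  # ~3.87
```

Computing this per time bin, rather than on whole-stimulus counts, is one way to capture the temporal-pattern information the abstract argues for.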
Affiliation(s)
- Ana Alves-Pinto
- Klinikum rechts der Isar, Technische Universität München, Munich, Germany
- Alan R. Palmer
- Medical Research Council Institute of Hearing Research, University Park, Nottingham, UK
- Enrique A. Lopez-Poveda
- Departamento de Cirugía, Facultad de Medicina, Instituto de Neurociencias de Castilla y León, Instituto de Investigación Biomédica de Salamanca, Universidad de Salamanca, Salamanca, Spain
28
Carlile S, Balachandar K, Kelly H. Accommodating to new ears: the effects of sensory and sensory-motor feedback. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2014; 135:2002-2011. [PMID: 25234999 DOI: 10.1121/1.4868369] [Citation(s) in RCA: 28] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/03/2023]
Abstract
Changing the shape of the outer ear using small in-ear molds degrades sound localization performance, consistent with distortion of the monaural spectral cues to location. It has been shown recently that adult listeners re-calibrate to these new spectral cues for locations both inside and outside the visual field. This raises the question of what teacher signal drives this remarkable functional plasticity. Furthermore, large individual differences in the extent and rate of accommodation suggest that a number of factors may be driving this process. A training paradigm exploiting multi-modal and sensory-motor feedback during accommodation was examined to determine whether it might accelerate this process. To standardize the modification of the spectral cues, molds filling 40% of the volume of each outer ear were custom made for each subject. Daily training sessions of about an hour, involving repetitive auditory stimuli and exploratory behavior by the subject, significantly improved the extent of accommodation as measured by both front-back confusions and polar angle localization errors, with some improvement in the rate of accommodation demonstrated by front-back confusion errors. This work has implications both for the process by which a coherent representation of auditory space is maintained and for accommodative training for hearing aid wearers.
Affiliation(s)
- Simon Carlile
- School of Medical Sciences and The Bosch Institute, University of Sydney, Sydney, New South Wales 2006, Australia
- Kapilesh Balachandar
- School of Medical Sciences and The Bosch Institute, University of Sydney, Sydney, New South Wales 2006, Australia
- Heather Kelly
- School of Medical Sciences and The Bosch Institute, University of Sydney, Sydney, New South Wales 2006, Australia
29
Keating P, King AJ. Developmental plasticity of spatial hearing following asymmetric hearing loss: context-dependent cue integration and its clinical implications. Front Syst Neurosci 2013; 7:123. [PMID: 24409125 PMCID: PMC3873525 DOI: 10.3389/fnsys.2013.00123] [Citation(s) in RCA: 59] [Impact Index Per Article: 5.4] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/27/2013] [Accepted: 12/12/2013] [Indexed: 11/23/2022] Open
Abstract
Under normal hearing conditions, comparisons of the sounds reaching each ear are critical for accurate sound localization. Asymmetric hearing loss should therefore degrade spatial hearing and has become an important experimental tool for probing the plasticity of the auditory system, both during development and adulthood. In clinical populations, hearing loss affecting one ear more than the other is commonly associated with otitis media with effusion, a disorder experienced by approximately 80% of children before the age of two. Asymmetric hearing may also arise in other clinical situations, such as after unilateral cochlear implantation. Here, we consider the role played by spatial cue integration in sound localization under normal acoustical conditions. We then review evidence for adaptive changes in spatial hearing following a developmental hearing loss in one ear, and show that adaptation may be achieved either by learning a new relationship between the altered cues and directions in space or by changing the way different cues are integrated in the brain. We next consider developmental plasticity as a source of vulnerability, describing maladaptive effects of asymmetric hearing loss that persist even when normal hearing is provided. We also examine the extent to which the consequences of asymmetric hearing loss depend upon its timing and duration. Although much of the experimental literature has focused on the effects of a stable unilateral hearing loss, some of the most common hearing impairments experienced by children tend to fluctuate over time. We therefore propose that there is a need to bridge this gap by investigating the effects of recurring hearing loss during development, and outline recent steps in this direction. We conclude by arguing that this work points toward a more nuanced view of developmental plasticity, in which plasticity may be selectively expressed in response to specific sensory contexts, and consider the clinical implications of this.
Affiliation(s)
- Peter Keating
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, UK
- Andrew J. King
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, UK
30
Relearning auditory spectral cues for locations inside and outside the visual field. J Assoc Res Otolaryngol 2013; 15:249-63. [PMID: 24306277 DOI: 10.1007/s10162-013-0429-5] [Citation(s) in RCA: 22] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/02/2013] [Accepted: 11/17/2013] [Indexed: 11/27/2022] Open
Abstract
Previous research has demonstrated that, over a period of weeks, the auditory system accommodates to changes in the monaural spectral cues for sound locations within the frontal region of space. We were interested to determine if similar accommodation could occur for locations in the posterior regions of space, i.e. in the absence of contemporaneous visual information that indicates any mismatch between the perceived and actual location of a sound source. To distort the normal spectral cues to sound location, eight listeners wore small moulds in each ear. HRTF recordings confirmed that while the moulds substantially altered the monaural spectral cues, sufficient residual cues were retained to provide a basis for relearning. Compared to control measures, sound localization performance initially decreased significantly, with a sevenfold increase in front-back confusions and elevation errors more than doubled. Subjects wore the moulds continuously for a period of up to 60 days (median 38 days), over which time performance improved but remained significantly poorer than control levels. Sound localization performance for frontal locations (audio-visual field) was compared with that for posterior space (audio-only field), and there was no significant difference between regions in either the extent or rate of accommodation. This suggests a common mechanism for both regions of space that does not rely on contemporaneous visual information as a teacher signal for recalibration of the auditory system to modified spectral cues.
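Front-back confusions like those counted above are typically scored by checking whether a response falls in the opposite front/rear hemifield from the target, i.e., across the interaural axis. A minimal sketch of such a scoring rule for azimuths in degrees (0° = straight ahead; this is a generic convention and the naming is illustrative, not the paper's code):

```python
def is_front_back_confusion(target_az, response_az, margin=0.0):
    """True if the response lies in the opposite front/rear hemifield
    from the target. Azimuths in degrees: 0 = front, +/-90 = sides,
    180 = rear; inputs are wrapped to (-180, 180]. An optional margin
    excludes targets too close to the interaural axis."""
    def wrap(a):
        return (a + 180.0) % 360.0 - 180.0
    t, r = wrap(target_az), wrap(response_az)
    t_front = abs(t) < 90.0 - margin
    t_back = abs(t) > 90.0 + margin
    r_front, r_back = abs(r) < 90.0, abs(r) > 90.0
    return (t_front and r_back) or (t_back and r_front)

print(is_front_back_confusion(30, 150))   # True: front target, rear response
print(is_front_back_confusion(30, 40))    # False: both frontal
```

Dividing the count of such confusions by the number of trials gives the confusion rate whose sevenfold increase the abstract reports.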
31
Keating P, Dahmen JC, King AJ. Context-specific reweighting of auditory spatial cues following altered experience during development. Curr Biol 2013; 23:1291-9. [PMID: 23810532 PMCID: PMC3722484 DOI: 10.1016/j.cub.2013.05.045] [Citation(s) in RCA: 41] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/15/2013] [Revised: 05/13/2013] [Accepted: 05/24/2013] [Indexed: 11/30/2022]
Abstract
Background: Neural systems must weight and integrate different sensory cues in order to make decisions. However, environmental conditions often change over time, altering the reliability of different cues and therefore the optimal way of combining them. To explore how cue integration develops in dynamic environments, we examined the effects on auditory spatial processing of rearing ferrets with localization cues that were modified via a unilateral earplug, interspersed with brief periods of normal hearing.
Results: In contrast with control animals, which rely primarily on timing and intensity differences between their two ears to localize sound sources, the juvenile-plugged ferrets developed the ability to localize sounds accurately by relying more on the unchanged spectral localization cues provided by the single normal ear. This adaptive process was paralleled by changes in neuronal responses in the primary auditory cortex, which became relatively more sensitive to these monaural spatial cues. Our behavioral and physiological data demonstrated, however, that the reweighting of different spatial cues disappeared as soon as normal hearing was experienced, showing for the first time that this type of plasticity can be context specific.
Conclusions: These results show that developmental changes can be selectively expressed in response to specific acoustic conditions. In this way, the auditory system can develop and simultaneously maintain two distinct models of auditory space and switch between these models depending on the prevailing sensory context. This ability is likely to be critical for maintaining accurate perception in dynamic environments and may point toward novel therapeutic strategies for individuals who experience sensory deficits during development.
Highlights: Ferrets reared with a unilateral hearing loss are able to localize sounds accurately. Adaptation relies on cue reweighting that reverses when normal hearing is available. Auditory cortical neurons show corresponding context-specific plasticity. Contextual cue reweighting maintains perceptual stability in dynamic environments.
Affiliation(s)
- Peter Keating
- Department of Physiology, Anatomy and Genetics, University of Oxford, Parks Road, Oxford OX1 3PT, UK.
32
Catz N, Noreña AJ. Enhanced representation of spectral contrasts in the primary auditory cortex. Front Syst Neurosci 2013; 7:21. [PMID: 23801943 PMCID: PMC3686080 DOI: 10.3389/fnsys.2013.00021] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/17/2013] [Accepted: 05/23/2013] [Indexed: 11/15/2022] Open
Abstract
The role of early auditory processing may be to extract elementary features from an acoustic mixture in order to organize the auditory scene. To accomplish this task, the central auditory system may rely on the fact that sensory objects are often composed of spectral edges, i.e., regions where the stimulus energy changes abruptly over frequency. The processing of acoustic stimuli may therefore benefit from a mechanism enhancing the internal representation of spectral edges. While the visual system is thought to rely heavily on an analogous mechanism (enhancing spatial edges), it is still unclear whether a related process plays a significant role in audition. We investigated the cortical representation of spectral edges, using acoustic stimuli composed of multi-tone pips whose time-averaged spectral envelope contained suppressed or enhanced regions. Importantly, the stimuli were designed such that neural response properties could be assessed as a function of stimulus frequency during stimulus presentation. Our results suggest that the representation of acoustic spectral edges is enhanced in the auditory cortex, and that this enhancement is sensitive to the characteristics of the spectral contrast profile, such as its depth, sharpness, and width. Spectral edges are maximally enhanced for sharp contrasts and large depths. Cortical activity was also suppressed at frequencies within the suppressed region. Notably, the suppression of firing was larger at frequencies near the lower edge of the suppressed region than at the upper edge. Overall, the present study gives critical insights into the processing of spectral contrasts in the auditory system.
Affiliation(s)
- Nicolas Catz
- Laboratory of Adaptive and Integrative Neurobiology, Fédération de recherche 3C, UMR CNRS 7260, Université Aix-Marseille, Marseille, France
33
Behavioral sensitivity to broadband binaural localization cues in the ferret. J Assoc Res Otolaryngol 2013; 14:561-72. [PMID: 23615803 PMCID: PMC3705081 DOI: 10.1007/s10162-013-0390-3] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/12/2013] [Accepted: 04/05/2013] [Indexed: 11/29/2022] Open
Abstract
Although the ferret has become an important model species for studying both fundamental and clinical aspects of spatial hearing, previous behavioral work has focused on studies of sound localization and spatial release from masking in the free field. This makes it difficult to tease apart the role played by different spatial cues. In humans and other species, interaural time differences (ITDs) and interaural level differences (ILDs) play a critical role in sound localization in the azimuthal plane and also facilitate sound source separation in noisy environments. In this study, we used a range of broadband noise stimuli presented via customized earphones to measure ITD and ILD sensitivity in the ferret. Our behavioral data show that ferrets are extremely sensitive to changes in either binaural cue, with levels of performance approximating that found in humans. The measured thresholds were relatively stable despite extensive and prolonged (>16 weeks) testing on ITD and ILD tasks with broadband stimuli. For both cues, sensitivity was reduced at shorter durations. In addition, subtle effects of changing the stimulus envelope were observed on ITD, but not ILD, thresholds. Sensitivity to these cues also differed in other ways. Whereas ILD sensitivity was unaffected by changes in average binaural level or interaural correlation, the same manipulations produced much larger effects on ITD sensitivity, with thresholds declining when either of these parameters was reduced. The binaural sensitivity measured in this study can largely account for the ability of ferrets to localize broadband stimuli in the azimuthal plane. Our results are also broadly consistent with data from humans and confirm the ferret as an excellent experimental model for studying spatial hearing.
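The closing claim, that the measured ITD and ILD sensitivity can largely account for free-field azimuthal localization, is often checked against a spherical-head model mapping azimuth to ITD. A minimal sketch using Woodworth's formula (a generic human-sized head radius is assumed purely for illustration; a ferret's head is considerably smaller, and all names here are illustrative):

```python
import math

def woodworth_itd(azimuth_deg, head_radius_m=0.0875, c=343.0):
    """Woodworth spherical-head model: ITD in seconds for a source
    at the given azimuth (0 deg = front, 90 deg = directly lateral).
    ITD(theta) = (a / c) * (theta + sin(theta))."""
    theta = math.radians(azimuth_deg)
    return (head_radius_m / c) * (theta + math.sin(theta))

# Near the midline the model's slope converts a just-detectable ITD
# change into a just-detectable azimuth change; at 90 deg the ITD
# reaches its maximum.
print(round(woodworth_itd(90.0) * 1e6))  # ~656 microseconds
```

Dividing a behavioral ITD threshold by the model's slope at 0° gives the predicted minimum audible angle that such an account compares against free-field data.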
34
Abstract
Although ears capable of detecting airborne sound have arisen repeatedly and independently in different species, most animals that are capable of hearing have a pair of ears. We review the advantages that arise from having two ears and discuss recent research on the similarities and differences in the binaural processing strategies adopted by birds and mammals. We also ask how these different adaptations for binaural and spatial hearing might inform and inspire the development of techniques for future auditory prosthetic devices.
35
Bentvelzen A, Leung J, Alais D. Discriminating Audiovisual Speed: Optimal Integration of Speed Defaults to Probability Summation When Component Reliabilities Diverge. Perception 2009; 38:966-87. [DOI: 10.1068/p6261] [Citation(s) in RCA: 10] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/20/2022]
Abstract
We investigated audiovisual speed perception to test the maximum-likelihood-estimation (MLE) model of multisensory integration. According to MLE, audiovisual speed perception will be based on a weighted average of visual and auditory speed estimates, with each component weighted by its inverse variance, a statistically optimal combination that produces a fused estimate with minimised variance and thereby affords maximal discrimination. We used virtual auditory space to create ecologically valid auditory motion, together with visual apparent motion around an array of 63 LEDs. To degrade the usual dominance of vision over audition, we added positional jitter to the motion sequences, and also measured peripheral trajectories. Both factors degraded visual speed discrimination, while auditory speed perception was unaffected by trajectory location. In the bimodal conditions, a speed conflict was introduced (48° s−1 versus 60° s−1) and two measures were taken: perceived audiovisual speed, and the precision (variability) of audiovisual speed discrimination. These measures showed only a weak tendency to follow MLE predictions. However, splitting the data into two groups based on whether the unimodal component weights were similar or disparate revealed interesting findings: similarly weighted components were integrated in a manner closely matching MLE predictions, while dissimilarly weighted components (greater than a 3:1 difference in weights) were integrated according to probability-summation predictions. These results suggest that different multisensory integration strategies may be implemented depending on relative component reliabilities, with MLE integration vetoed when component weights are highly disparate.
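The MLE model referenced above combines unimodal estimates with inverse-variance weights, and the fused variance is always smaller than either component's. A minimal sketch of that combination rule (names are illustrative, not from the paper):

```python
def mle_fuse(est_a, var_a, est_v, var_v):
    """Inverse-variance-weighted fusion of auditory and visual
    estimates. Returns (fused_estimate, fused_variance)."""
    w_a = (1.0 / var_a) / (1.0 / var_a + 1.0 / var_v)
    w_v = 1.0 - w_a
    fused = w_a * est_a + w_v * est_v
    fused_var = 1.0 / (1.0 / var_a + 1.0 / var_v)
    return fused, fused_var

# Equal reliabilities: the fused estimate is the midpoint and the
# fused variance is half of each component's.
est, var = mle_fuse(48.0, 4.0, 60.0, 4.0)
print(est, var)  # 54.0 2.0
```

With a weight ratio above 3:1 (e.g., `var_a=1.0`, `var_v=9.0`), the fused estimate is pulled almost entirely toward the reliable component, which is the regime where the study found probability summation instead.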
Affiliation(s)
- Adam Bentvelzen
- School of Psychology, University of Sydney, Sydney 2006, Australia
- Johahn Leung
- School of Psychology, University of Sydney, Sydney 2006, Australia
- David Alais
- School of Psychology, University of Sydney, Sydney 2006, Australia
36
Cooper J, Carlile S, Alais D. Distortions of auditory space during rapid head turns. Exp Brain Res 2008; 191:209-19. [PMID: 18696058 DOI: 10.1007/s00221-008-1516-4] [Citation(s) in RCA: 14] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/15/2006] [Accepted: 07/21/2008] [Indexed: 10/21/2022]
Abstract
Auditory localisation was examined using brief broadband sounds presented during rapid head turns to visual targets in the peripheral field. Presenting sounds during a rapid head movement will "smear" the acoustic cues to the sound's location. During the early stages of a head turn, sound localisation accuracy was comparable to a no-turn control condition. However, significant localisation errors occurred when the probe sound was presented during the later part of a head turn. After correcting for head position, the estimate of lateral angle (horizontal position) in the front hemisphere was generally accurate. However, lateral angle estimates for positions in the rear hemisphere exhibited systematic errors that were especially large around the midline. Polar angle (elevation) perception remained robust, being comparable to no-turn controls whether tested early or late in the head turn. The results are interpreted in terms of a 'multiple look' strategy for calculating sound location, and the allocation of attention to the hemisphere containing the head-turn target.
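The "correcting for head position" step above amounts to re-expressing the world-frame source azimuth in head-centred coordinates at the moment the probe sound occurred. A minimal sketch of that conversion (assumed convention: azimuths in degrees, positive to the right; names illustrative, not the paper's code):

```python
def head_relative_azimuth(source_az_world, head_az_world):
    """World-frame source azimuth re-expressed relative to the
    current head direction, wrapped to (-180, 180]."""
    return (source_az_world - head_az_world + 180.0) % 360.0 - 180.0

# A source at +60 deg in the room, with the head turned +40 deg
# toward it, sits at +20 deg relative to the head.
print(head_relative_azimuth(60.0, 40.0))  # 20.0
```

Comparing responses against this head-relative angle, rather than the room-frame angle, separates genuine localisation errors from errors caused simply by the head having moved.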
Affiliation(s)
- Joel Cooper
- Auditory Neuroscience Laboratory, School of Medical Science and Bosch Institute, University of Sydney, Sydney, NSW 2006, Australia
37
Alves-Pinto A, Lopez-Poveda EA. Psychophysical assessment of the level-dependent representation of high-frequency spectral notches in the peripheral auditory system. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2008; 124:409-421. [PMID: 18646986 DOI: 10.1121/1.2920957] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/26/2023]
Abstract
To discriminate between broadband noises with and without a high-frequency spectral notch is more difficult at 70-80 dB sound pressure level than at lower or higher levels [Alves-Pinto, A. and Lopez-Poveda, E. A. (2005). "Detection of high-frequency spectral notches as a function of level," J. Acoust. Soc. Am. 118, 2458-2469]. One possible explanation is that the notch is less clearly represented internally at 70-80 dB SPL than at any other level. To test this hypothesis, forward-masking patterns were measured for flat-spectrum and notched noise maskers for masker levels of 50, 70, 80, and 90 dB SPL. Masking patterns were measured in two conditions: (1) fixing the masker-probe time interval at 2 ms and (2) varying the interval to achieve similar masked thresholds for different masker levels. The depth of the spectral notch remained approximately constant in the fixed-interval masking patterns and gradually decreased with increasing masker level in the variable-interval masking patterns. This difference probably reflects the effects of peripheral compression. These results are inconsistent with the nonmonotonic level-dependent performance in spectral discrimination. Assuming that a forward-masking pattern is a reasonable psychoacoustical correlate of the auditory-nerve rate-profile representation of the stimulus spectrum, these results undermine the common view that high-frequency spectral notches must be encoded in the rate-profile of auditory-nerve fibers.
Affiliation(s)
- Ana Alves-Pinto
- Unidad de Audición Computacional y Psicoacústica, Instituto de Neurociencias de Castilla y León, Universidad de Salamanca, Avenida Alfonso X "El Sabio" s/n, 37007 Salamanca, Spain.
38
Lopez-Poveda EA, Alves-Pinto A, Palmer AR, Eustaquio-Martín A. Rate versus time representation of high-frequency spectral notches in the peripheral auditory system: A computational modeling study. Neurocomputing 2008. [DOI: 10.1016/j.neucom.2007.07.030] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/22/2022]
39
Chiu C, Moss CF. The role of the external ear in vertical sound localization in the free flying bat, Eptesicus fuscus. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2007; 121:2227-35. [PMID: 17471736 DOI: 10.1121/1.2434760] [Citation(s) in RCA: 29] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/15/2023]
Abstract
The role of the external ear in sonar target localization for prey capture was studied by deflecting the tragus of six big brown bats, Eptesicus fuscus. The bats' prey capture performance dropped significantly in the tragus-deflection condition compared with the baseline, control, and recovery conditions. Target localization errors occurred in the tragus-deflected condition, mainly in elevation. Deflecting the tragus did not abolish prey capture, which suggests that other cues are available and used for prey localization. Adaptive vocal and motor behaviors were also investigated in this study. The bats did not show significant changes in vocal behavior but modified their flight trajectories in response to the tragus manipulation. Tragus-deflected bats tended to attack the prey item from above, and had lower tangential velocity and a larger bearing from the side, compared with the baseline and recovery conditions. These findings highlight the contribution of the tragus to vertical sound localization in the free-flying big brown bat and demonstrate the flight adaptations the bat makes to compensate for altered acoustic cues.
Affiliation(s)
- Chen Chiu
- Department of Psychology, Neuroscience and Cognitive Science Program, University of Maryland, College Park, Maryland 20742, USA