1
Dietze A, Sörös P, Pöntynen H, Witt K, Dietz M. Longitudinal observations of the effects of ischemic stroke on binaural perception. Front Neurosci 2024; 18:1322762. PMID: 38482140; PMCID: PMC10936579; DOI: 10.3389/fnins.2024.1322762. Received 2023-10-16; accepted 2024-02-08.
Abstract
Acute ischemic stroke, characterized by a localized reduction in blood flow to specific areas of the brain, has been shown to affect binaural auditory perception. In a previous study conducted during the acute phase of ischemic stroke, two tasks of binaural hearing were performed: binaural tone-in-noise detection, and lateralization of stimuli with interaural time- or level differences. Various lesion-specific, as well as individual, differences in binaural performance between patients in the acute phase of stroke and a control group were demonstrated. For the current study, we re-invited the same group of patients, whereupon a subgroup repeated the experiments during the subacute and chronic phases of stroke. Similar to the initial study, this subgroup consisted of patients with lesions in different locations, including cortical and subcortical areas. At the group level, the results from the tone-in-noise detection experiment remained consistent across the three measurement phases, as did the number of deviations from normal performance in the lateralization task. However, the performance in the lateralization task exhibited variations over time among individual patients. Some patients demonstrated improvements in their lateralization abilities, indicating recovery, whereas others' lateralization performance deteriorated during the later stages of stroke. Notably, our analyses did not reveal consistent patterns for patients with similar lesion locations. These findings suggest that recovery processes are more individual than the acute effects of stroke on binaural perception. Individual impairments in binaural hearing abilities after the acute phase of ischemic stroke have been demonstrated and should therefore also be targeted in rehabilitation programs.
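The lateralization task described above presents stimuli carrying interaural time or level differences. As a toy illustration of the time cue only (not the authors' code; the signal, sampling rate, and delay below are invented for the sketch), an interaural time difference can be estimated from a pair of ear signals by cross-correlation:

```python
import numpy as np

def estimate_itd(left, right, fs):
    """Estimate the interaural time difference in seconds by finding the
    lag that maximizes the cross-correlation of the two ear signals.
    Positive values mean the sound reached the left ear first."""
    corr = np.correlate(right, left, mode="full")
    lags = np.arange(-len(left) + 1, len(right))
    return lags[np.argmax(corr)] / fs

fs = 44100
t = np.arange(0, 0.05, 1 / fs)
burst = np.sin(2 * np.pi * 500 * t) * np.hanning(t.size)  # windowed tone burst
delay = 20                                   # samples, ~454 microseconds
left = np.concatenate([burst, np.zeros(delay)])   # leading ear
right = np.concatenate([np.zeros(delay), burst])  # lagging ear
itd = estimate_itd(left, right, fs)          # recovers delay / fs
```

Physiological ITDs are below about 700 microseconds, so the 20-sample delay here sits in a realistic range at 44.1 kHz.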
Affiliation(s)
- Anna Dietze
- Department of Medical Physics and Acoustics, University of Oldenburg, Oldenburg, Germany
- Cluster of Excellence “Hearing4all”, University of Oldenburg, Oldenburg, Germany
- Peter Sörös
- Department of Neurology, School of Medicine and Health Sciences, University of Oldenburg, Oldenburg, Germany
- Research Center Neurosensory Science, University of Oldenburg, Oldenburg, Germany
- Henri Pöntynen
- Department of Medical Physics and Acoustics, University of Oldenburg, Oldenburg, Germany
- Cluster of Excellence “Hearing4all”, University of Oldenburg, Oldenburg, Germany
- Karsten Witt
- Department of Neurology, School of Medicine and Health Sciences, University of Oldenburg, Oldenburg, Germany
- Research Center Neurosensory Science, University of Oldenburg, Oldenburg, Germany
- Department of Neurology, Evangelical Hospital, Oldenburg, Germany
- Mathias Dietz
- Department of Medical Physics and Acoustics, University of Oldenburg, Oldenburg, Germany
- Cluster of Excellence “Hearing4all”, University of Oldenburg, Oldenburg, Germany
- Research Center Neurosensory Science, University of Oldenburg, Oldenburg, Germany
2
Valzolgher C, Capra S, Sum K, Finos L, Pavani F, Picinali L. Spatial hearing training in virtual reality with simulated asymmetric hearing loss. Sci Rep 2024; 14:2469. PMID: 38291126; PMCID: PMC10827792; DOI: 10.1038/s41598-024-51892-0. Received 2023-03-29; accepted 2024-01-10.
Abstract
Sound localization is essential for perceiving the surrounding world and interacting with objects. This ability can be learned over time, and multisensory and motor cues play a crucial role in the learning process. A recent study demonstrated that, when training localization skills, reaching to the sound source to indicate its position reduced localization errors faster and to a greater extent than merely naming the source position, even though in both tasks participants received the same feedback about the correct position of the sound source after a wrong response. However, it remains to be established which features made reaching to sounds more effective than naming. In the present study, we introduced a further condition in which the hand is the effector providing the response, but without reaching toward the space occupied by the target source: the pointing condition. We tested three groups of participants (naming, pointing, and reaching groups), each performing a sound localization task in normal and altered listening situations (i.e., mild-moderate unilateral hearing loss) simulated through auditory virtual reality technology. The experiment comprised four blocks: during the first and last blocks, participants were tested in the normal listening condition, and during the second and third in the altered listening condition. We measured performance, subjective judgments (e.g., effort), and head-related behavior (through kinematic tracking). First, performance decreased when participants were exposed to asymmetrical mild-moderate hearing impairment, most markedly on the ipsilateral side and for the pointing group. Second, all groups reduced their localization errors across the altered listening blocks, but the reduction was larger for the reaching and pointing groups than for the naming group.
Crucially, the reaching group showed a greater error reduction on the side where the listening alteration was applied. Furthermore, across blocks, the reaching and pointing groups increased their use of head movements during the task (i.e., approaching head movements toward the region of the sound) more than the naming group. Third, while performance in the unaltered blocks (first and last) was comparable across groups, only the reaching group continued to exhibit head behavior similar to that developed during the altered blocks (second and third), corroborating the previously observed relationship between reaching to sounds and head movements. In conclusion, this study further demonstrates the effectiveness of reaching to sounds, as compared to pointing and naming, in the learning process. This effect could be related both to the process of implementing goal-directed motor actions and to the role of reaching actions in fostering head-related motor strategies.
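Localization error in azimuth, the metric behind the comparisons above, needs angular wrap-around when responses and targets span the full circle. A minimal sketch (the function name and angles are illustrative, not taken from the paper):

```python
import numpy as np

def azimuth_error_deg(response, target):
    """Signed azimuth error in degrees, wrapped into [-180, 180)."""
    return (np.asarray(response) - np.asarray(target) + 180.0) % 360.0 - 180.0

# Illustrative response/target pairs, including one straddling +/-180 degrees
responses = np.array([10.0, -170.0, 90.0])
targets = np.array([-5.0, 175.0, 100.0])
signed = azimuth_error_deg(responses, targets)   # [15., 15., -10.]
mean_abs_error = np.abs(signed).mean()           # ~13.3 degrees
```

Without the wrap, the second pair would yield a spurious 345-degree error instead of 15 degrees.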
Affiliation(s)
- Chiara Valzolgher
- Center for Mind/Brain Sciences (CIMeC), University of Trento, Corso Bettini 31, 38068, Rovereto, TN, Italy.
- Sara Capra
- Center for Mind/Brain Sciences (CIMeC), University of Trento, Corso Bettini 31, 38068, Rovereto, TN, Italy
- Kevin Sum
- Audio Experience Design (www.axdesign.co.uk), Imperial College London, London, UK
- Livio Finos
- Department of Statistical Sciences, University of Padova, Padova, Italy
- Francesco Pavani
- Center for Mind/Brain Sciences (CIMeC), University of Trento, Corso Bettini 31, 38068, Rovereto, TN, Italy
- Department of Psychology and Cognitive Sciences (DiPSCo), University of Trento, Rovereto, Italy
- Centro Interuniversitario di Ricerca "Cognizione, Linguaggio e Sordità" (CIRCLeS), Rovereto, Italy
- Lorenzo Picinali
- Audio Experience Design (www.axdesign.co.uk), Imperial College London, London, UK
3
Valzolgher C, Bouzaid S, Grenouillet S, Gatel J, Ratenet L, Murenu F, Verdelet G, Salemme R, Gaveau V, Coudert A, Hermann R, Truy E, Farnè A, Pavani F. Training spatial hearing in unilateral cochlear implant users through reaching to sounds in virtual reality. Eur Arch Otorhinolaryngol 2023; 280:3661-3672. PMID: 36905419; PMCID: PMC10313844; DOI: 10.1007/s00405-023-07886-1. Received 2022-10-07; accepted 2023-02-13.
Abstract
BACKGROUND AND PURPOSE Use of a unilateral cochlear implant (UCI) is associated with limited spatial hearing skills. Evidence that these abilities can be trained in UCI users remains limited. In this study, we assessed whether a Spatial training based on hand-reaching to sounds performed in virtual reality improves spatial hearing abilities in UCI users. METHODS Using a crossover randomized clinical trial, we compared the effects of a Spatial training protocol with those of a Non-Spatial control training. We tested 17 UCI users in a head-pointing to sound task and in an audio-visual attention orienting task, before and after each training. The study is registered at clinicaltrials.gov (NCT04183348). RESULTS During the Spatial VR training, sound localization errors in azimuth decreased. Moreover, when comparing head-pointing to sounds before vs. after training, localization errors decreased more after the Spatial than after the control training. No training effects emerged in the audio-visual attention orienting task. CONCLUSIONS Our results show that sound localization in UCI users improves during a Spatial training, with benefits that extend to a non-trained sound localization task (generalization). These findings have potential for novel rehabilitation procedures in clinical contexts.
Affiliation(s)
- Chiara Valzolgher
- Center for Mind/Brain Sciences (CIMeC), University of Trento, Corso Bettini, 31 Rovereto, Trento, Italy.
- Integrative, Multisensory, Perception, Action and Cognition Team (IMPACT), Lyon Neuroscience Research Center, University of Lyon 1, Lyon, France.
- Sabrina Bouzaid
- Integrative, Multisensory, Perception, Action and Cognition Team (IMPACT), Lyon Neuroscience Research Center, University of Lyon 1, Lyon, France
- Solene Grenouillet
- Integrative, Multisensory, Perception, Action and Cognition Team (IMPACT), Lyon Neuroscience Research Center, University of Lyon 1, Lyon, France
- Francesca Murenu
- Integrative, Multisensory, Perception, Action and Cognition Team (IMPACT), Lyon Neuroscience Research Center, University of Lyon 1, Lyon, France
- Grégoire Verdelet
- Integrative, Multisensory, Perception, Action and Cognition Team (IMPACT), Lyon Neuroscience Research Center, University of Lyon 1, Lyon, France
- Neuroimmersion, Lyon, France
- Romeo Salemme
- Integrative, Multisensory, Perception, Action and Cognition Team (IMPACT), Lyon Neuroscience Research Center, University of Lyon 1, Lyon, France
- Neuroimmersion, Lyon, France
- Valérie Gaveau
- Integrative, Multisensory, Perception, Action and Cognition Team (IMPACT), Lyon Neuroscience Research Center, University of Lyon 1, Lyon, France
- Eric Truy
- Hospices Civils de Lyon, Lyon, France
- Alessandro Farnè
- Integrative, Multisensory, Perception, Action and Cognition Team (IMPACT), Lyon Neuroscience Research Center, University of Lyon 1, Lyon, France
- Neuroimmersion, Lyon, France
- Francesco Pavani
- Center for Mind/Brain Sciences (CIMeC), University of Trento, Corso Bettini, 31 Rovereto, Trento, Italy
- Integrative, Multisensory, Perception, Action and Cognition Team (IMPACT), Lyon Neuroscience Research Center, University of Lyon 1, Lyon, France
- Centro Interuniversitario di Ricerca "Cognizione, Linguaggio e Sordità" (CIRCLeS), Trento, Italy
4
Sanchez Jimenez A, Willard KJ, Bajo VM, King AJ, Nodal FR. Persistence and generalization of adaptive changes in auditory localization behavior following unilateral conductive hearing loss. Front Neurosci 2023; 17:1067937. PMID: 36816127; PMCID: PMC9929551; DOI: 10.3389/fnins.2023.1067937. Received 2022-10-12; accepted 2023-01-10.
Abstract
Introduction Sound localization relies on the neural processing of binaural and monaural spatial cues generated by the physical properties of the head and body. Hearing loss in one ear compromises binaural computations, impairing the ability to localize sounds in the horizontal plane. With appropriate training, adult individuals can adapt to this binaural imbalance and largely recover their localization accuracy. However, it remains unclear how long this learning is retained or whether it generalizes to other stimuli. Methods We trained ferrets to localize broadband noise bursts in quiet conditions and measured their initial head orienting responses and approach-to-target behavior. To evaluate the persistence of auditory spatial learning, we tested the sound localization performance of the animals over repeated periods of monaural earplugging that were interleaved with short or long periods of normal binaural hearing. To explore learning generalization to other stimulus types, we measured the localization accuracy before and after adaptation using different bandwidth stimuli presented against constant or amplitude-modulated background noise. Results Retention of learning resulted in a smaller initial deficit when the same ear was occluded on subsequent occasions. Each time, the animals' performance recovered with training to near pre-plug levels of localization accuracy. By contrast, switching the earplug to the contralateral ear resulted in less adaptation, indicating that the capacity to learn a new strategy for localizing sound is more limited if the animals have previously adapted to conductive hearing loss in the opposite ear. Moreover, the degree of adaptation to the training stimulus for individual animals was significantly correlated with the extent to which learning extended to untrained octave band target sounds presented in silence and to broadband targets presented in background noise, suggesting that adaptation and generalization go hand in hand. 
Conclusions Together, these findings provide further evidence for plasticity in the weighting of monaural and binaural cues during adaptation to unilateral conductive hearing loss, and show that the training-dependent recovery in spatial hearing can generalize to more naturalistic listening conditions, so long as the target sounds provide sufficient spatial information.
5
Nisha KV, Uppunda AK, Kumar RT. Spatial rehabilitation using virtual auditory space training paradigm in individuals with sensorineural hearing impairment. Front Neurosci 2023; 16:1080398. PMID: 36733923; PMCID: PMC9887142; DOI: 10.3389/fnins.2022.1080398. Received 2022-10-26; accepted 2022-12-20.
Abstract
Purpose The present study aimed to quantify the effects of spatial training using virtual sources on a battery of spatial acuity measures in listeners with sensorineural hearing impairment (SNHI). Methods An intervention-based time-series comparison design involving 82 participants divided into three groups was adopted. Group I (n = 27, SNHI, spatially trained) and group II (n = 25, SNHI, untrained) consisted of SNHI listeners, while group III (n = 30) comprised listeners with normal hearing (NH). The study was conducted in three phases. In the pre-training phase, all participants underwent a comprehensive assessment of their spatial processing abilities using a battery of tests, including spatial acuity in free-field and closed-field scenarios, tests of binaural processing (interaural time difference [ITD] and interaural level difference [ILD] thresholds), and subjective ratings. While spatial acuity in the free field was assessed using a loudspeaker-based localization test, the closed-field source identification test was performed using virtual stimuli delivered through headphones. The ITD and ILD thresholds were obtained using a MATLAB psychoacoustic toolbox, and participant ratings on the spatial subsection of the Speech, Spatial and Qualities of Hearing questionnaire in Kannada served as the subjective measure. Group I listeners underwent virtual auditory spatial training (VAST) following the pre-evaluation assessments. All tests were re-administered to group I listeners halfway through training (mid-training evaluation phase) and after training completion (post-training evaluation phase), whereas group II underwent these tests without any training at the same time intervals. Results and discussion Statistical analysis showed a main effect of group on all tests in the pre-training evaluation phase, with post hoc comparisons revealing equivalent spatial performance in the two SNHI groups (groups I and II).
The effect of VAST in group I was evident on all tests, with the localization test showing the highest predictive power for capturing VAST-related changes in Fisher discriminant analysis (FDA). In contrast, group II showed no changes in spatial acuity across measurement timelines. FDA revealed increased errors in categorizing NH listeners as SNHI-trained at the post-training evaluation compared to the pre-training evaluation, as the spatial performance of the latter group improved with VAST in the post-training phase. Conclusion The study demonstrated positive outcomes of spatial training using VAST in listeners with SNHI. The utility of this training program could be extended to other clinical populations with spatial auditory processing deficits, such as auditory neuropathy spectrum disorder, cochlear implant use, and central auditory processing disorders.
6
Valzolgher C, Todeschini M, Verdelet G, Gatel J, Salemme R, Gaveau V, Truy E, Farnè A, Pavani F. Adapting to altered auditory cues: Generalization from manual reaching to head pointing. PLoS One 2022; 17:e0263509. PMID: 35421095; PMCID: PMC9009652; DOI: 10.1371/journal.pone.0263509. Received 2021-08-09; accepted 2022-01-21.
Abstract
Localising sounds means being able to process auditory cues deriving from the interplay among sound waves, the head, and the ears. When auditory cues change because of temporary or permanent hearing loss, sound localization becomes difficult and uncertain. The brain can adapt to altered auditory cues throughout life, and multisensory training can promote the relearning of spatial hearing skills. Here, we study the training potential of sound-oriented motor behaviour to test whether a training based on manual actions toward sounds can produce learning effects that generalize to different auditory spatial tasks. We assessed spatial hearing relearning in normal-hearing adults with a plugged ear by using visual virtual reality and body motion tracking. Participants performed two auditory tasks that entail explicit and implicit processing of sound position (head-pointing sound localization and audio-visual attention cueing, respectively), before and after receiving a spatial training session in which they identified sound position by reaching to auditory sources nearby. Using a crossover design, the effects of this spatial training were compared to a control condition involving the same physical stimuli but different task demands (i.e., a non-spatial discrimination of amplitude modulations in the sound). Spatial hearing in one-ear-plugged participants improved more after the reaching-to-sounds training than in the control condition. Training by reaching also modified head-movement behaviour during listening. Crucially, the improvements observed during training generalized to a different sound localization task, possibly as a consequence of newly acquired head-movement strategies.
Affiliation(s)
- Chiara Valzolgher
- Integrative, Multisensory, Perception, Action and Cognition Team (IMPACT), Lyon Neuroscience Research Center, Lyon, France
- Center for Mind/Brain Sciences—CIMeC, University of Trento, Trento, Italy
- Michela Todeschini
- Department of Psychology and Cognitive Sciences (DiPSCo), University of Trento, Trento, Italy
- Gregoire Verdelet
- Integrative, Multisensory, Perception, Action and Cognition Team (IMPACT), Lyon Neuroscience Research Center, Lyon, France
- Neuroimmersion, Lyon Neuroscience Research Center, Lyon, France
- Romeo Salemme
- Integrative, Multisensory, Perception, Action and Cognition Team (IMPACT), Lyon Neuroscience Research Center, Lyon, France
- Neuroimmersion, Lyon Neuroscience Research Center, Lyon, France
- Valerie Gaveau
- Integrative, Multisensory, Perception, Action and Cognition Team (IMPACT), Lyon Neuroscience Research Center, Lyon, France
- University of Lyon 1, Villeurbanne, France
- Eric Truy
- Hospices Civils de Lyon, Lyon, France
- Alessandro Farnè
- Integrative, Multisensory, Perception, Action and Cognition Team (IMPACT), Lyon Neuroscience Research Center, Lyon, France
- Center for Mind/Brain Sciences—CIMeC, University of Trento, Trento, Italy
- Neuroimmersion, Lyon Neuroscience Research Center, Lyon, France
- Francesco Pavani
- Integrative, Multisensory, Perception, Action and Cognition Team (IMPACT), Lyon Neuroscience Research Center, Lyon, France
- Center for Mind/Brain Sciences—CIMeC, University of Trento, Trento, Italy
7
Courtois G, Grimaldi V, Lissek H, Estoppey P, Georganti E. Perception of Auditory Distance in Normal-Hearing and Moderate-to-Profound Hearing-Impaired Listeners. Trends Hear 2020; 23:2331216519887615. PMID: 31774032; PMCID: PMC6887817; DOI: 10.1177/2331216519887615.
Abstract
The auditory system allows the estimation of the distance to sound-emitting objects using multiple spatial cues. In virtual acoustics over headphones, a prerequisite to render auditory distance impression is sound externalization, which denotes the perception of synthesized stimuli outside of the head. Prior studies have found that listeners with mild-to-moderate hearing loss are able to perceive auditory distance and are sensitive to externalization. However, this ability may be degraded by certain factors, such as non-linear amplification in hearing aids or the use of a remote wireless microphone. In this study, 10 normal-hearing and 20 moderate-to-profound hearing-impaired listeners were instructed to estimate the distance of stimuli processed with different methods yielding various perceived auditory distances in the vicinity of the listeners. Two different configurations of non-linear amplification were implemented, and a novel feature aiming to restore a sense of distance in wireless microphone systems was tested. The results showed that the hearing-impaired listeners, even those with a profound hearing loss, were able to discriminate nearby and far sounds that were equalized in level. Their perception of auditory distance was however more contracted than in normal-hearing listeners. Non-linear amplification was found to distort the original spatial cues, but no adverse effect on the ratings of auditory distance was evident. Finally, it was shown that the novel feature was successful in allowing the hearing-impaired participants to perceive externalized sounds with wireless microphone systems.
Affiliation(s)
- Gilles Courtois
- Swiss Federal Institute of Technology (EPFL), Signal Processing Laboratory (LTS2), Lausanne, Switzerland
- Sonova AG, Stäfa, Switzerland
- Vincent Grimaldi
- Swiss Federal Institute of Technology (EPFL), Signal Processing Laboratory (LTS2), Lausanne, Switzerland
- Hervé Lissek
- Swiss Federal Institute of Technology (EPFL), Signal Processing Laboratory (LTS2), Lausanne, Switzerland
8
Jenny C, Reuter C. Usability of Individualized Head-Related Transfer Functions in Virtual Reality: Empirical Study With Perceptual Attributes in Sagittal Plane Sound Localization. JMIR Serious Games 2020; 8:e17576. PMID: 32897232; PMCID: PMC7509635; DOI: 10.2196/17576. Received 2019-12-20; revised 2020-05-07; accepted 2020-07-26.
Abstract
BACKGROUND In order to present virtual sound sources via headphones spatially, head-related transfer functions (HRTFs) can be applied to audio signals. In this so-called binaural virtual acoustics, the spatial perception may be degraded if the HRTFs deviate from the true HRTFs of the listener. OBJECTIVE In this study, participants wearing virtual reality (VR) headsets performed a listening test on the 3D audio perception of virtual audiovisual scenes, thus enabling us to investigate the necessity and influence of the individualization of HRTFs. Two hypotheses were investigated: first, general HRTFs lead to limitations of 3D audio perception in VR and second, the localization model for stationary localization errors is transferable to nonindividualized HRTFs in more complex environments such as VR. METHODS For the evaluation, 39 subjects rated individualized and nonindividualized HRTFs in an audiovisual virtual scene on the basis of 5 perceptual qualities: localizability, front-back position, externalization, tone color, and realism. The VR listening experiment consisted of 2 tests: in the first test, subjects evaluated their own and the general HRTF from the Massachusetts Institute of Technology Knowles Electronics Manikin for Acoustic Research database and in the second test, their own and 2 other nonindividualized HRTFs from the Acoustics Research Institute HRTF database. For the experiment, 2 subject-specific, nonindividualized HRTFs with a minimal and maximal localization error deviation were selected according to the localization model in sagittal planes. RESULTS With the Wilcoxon signed-rank test for the first test, analysis of variance for the second test, and a sample size of 78, the results were significant in all perceptual qualities, except for the front-back position between own and minimal deviant nonindividualized HRTF (P=.06). CONCLUSIONS Both hypotheses have been accepted. 
Sounds filtered by individualized HRTFs are considered easier to localize, easier to externalize, more natural in timbre, and thus more realistic compared to sounds filtered by nonindividualized HRTFs.
Affiliation(s)
- Claudia Jenny
- Musicological Department, University of Vienna, Vienna, Austria
9
Rabini G, Lucin G, Pavani F. Certain, but incorrect: on the relation between subjective certainty and accuracy in sound localisation. Exp Brain Res 2020; 238:727-739. PMID: 32080750; DOI: 10.1007/s00221-020-05748-4. Received 2019-09-27; accepted 2020-02-05.
Abstract
When asked to identify the position of a sound, listeners can report its perceived location as well as their subjective certainty about this spatial judgement. Yet, research to date focused primarily on measures of perceived location (e.g., accuracy and precision of pointing responses), neglecting instead the phenomenological experience of subjective spatial certainty. The present study aimed to investigate: (1) changes in subjective certainty about sound position induced by listening with one ear plugged (simulated monaural listening), compared to typical binaural listening and (2) the relation between subjective certainty about sound position and localisation accuracy. In two experiments (N = 20 each), participants localised single sounds delivered from one of 60 speakers hidden from view in front space. In each trial, they also provided a subjective rating of their spatial certainty about sound position. No feedback on response was provided. Overall, participants were mostly accurate and certain about sound position in binaural listening, whereas their accuracy and subjective certainty decreased in monaural listening. Interestingly, accuracy and certainty dissociated within single trials during monaural listening: in some trials participants were certain but incorrect, in others they were uncertain but correct. Furthermore, unlike accuracy, subjective certainty rapidly increased as a function of time during the monaural listening block. Finally, subjective certainty changed as a function of perceived location of the sound source. These novel findings reveal that listeners quickly update their subjective confidence on sound position, when they experience an altered listening condition, even in the absence of feedback. Furthermore, they document a dissociation between accuracy and subjective certainty when mapping auditory input to space.
Affiliation(s)
- Giuseppe Rabini
- Centre for Mind/Brain Sciences (CIMeC), University of Trento, Via Angelo Bettini 31, 38068, Rovereto, TN, Italy.
- Giulia Lucin
- Department of Psychology and Cognitive Sciences (DiPSCo), University of Trento, Via Angelo Bettini 84, 38068, Rovereto, TN, Italy
- Francesco Pavani
- Centre for Mind/Brain Sciences (CIMeC), University of Trento, Via Angelo Bettini 31, 38068, Rovereto, TN, Italy
- Department of Psychology and Cognitive Sciences (DiPSCo), University of Trento, Via Angelo Bettini 84, 38068, Rovereto, TN, Italy
- IMPACT, Centre de Recherche en Neuroscience Lyon (CRNL), Lyon, France
10
Kramer A, Röder B, Bruns P. Feedback Modulates Audio-Visual Spatial Recalibration. Front Integr Neurosci 2020; 13:74. PMID: 32009913; PMCID: PMC6979315; DOI: 10.3389/fnint.2019.00074. Received 2019-09-20; accepted 2019-12-10.
Abstract
In an ever-changing environment, crossmodal recalibration is crucial to maintain precise and coherent spatial estimates across different sensory modalities. Accordingly, it has been found that perceived auditory space is recalibrated toward vision after consistent exposure to spatially misaligned audio-visual stimuli. While this so-called ventriloquism aftereffect (VAE) yields internal consistency between vision and audition, it does not necessarily lead to consistency between the perceptual representation of space and the actual environment. For this purpose, feedback about the true state of the external world might be necessary. Here, we tested whether the size of the VAE is modulated by external feedback and reward. During adaptation, audio-visual stimuli with a fixed spatial discrepancy were presented. Participants had to localize the sound and received feedback about the magnitude of their localization error. In half of the sessions the feedback was based on the position of the visual stimulus (VS) and in the other half it was based on the position of the auditory stimulus. An additional monetary reward was given if the localization error fell below a threshold based on participants' performance in the pretest. As expected, when error feedback was based on the position of the VS, auditory localization during adaptation trials shifted toward the position of the VS. Conversely, feedback based on the position of the auditory stimuli reduced the visual influence on auditory localization (i.e., the ventriloquism effect) and improved sound localization accuracy. After adaptation with error feedback based on the VS position, a typical auditory VAE (but no visual aftereffect) was observed in subsequent unimodal localization tests. By contrast, when feedback was based on the position of the auditory stimuli during adaptation, no auditory VAE was observed in subsequent unimodal auditory trials. Importantly, in this situation no visual aftereffect was found either.
As feedback did not change the physical attributes of the audio-visual stimulation during adaptation, the present findings suggest that crossmodal recalibration is subject to top–down influences. Such top–down influences might help prevent miscalibration of audition toward conflicting visual stimulation in situations in which external feedback indicates that visual information is inaccurate.
Affiliation(s)
- Alexander Kramer, Biological Psychology and Neuropsychology, University of Hamburg, Hamburg, Germany
- Brigitte Röder, Biological Psychology and Neuropsychology, University of Hamburg, Hamburg, Germany
- Patrick Bruns, Biological Psychology and Neuropsychology, University of Hamburg, Hamburg, Germany
11
Steadman MA, Kim C, Lestang JH, Goodman DFM, Picinali L. Short-term effects of sound localization training in virtual reality. Sci Rep 2019; 9:18284. [PMID: 31798004 PMCID: PMC6893038 DOI: 10.1038/s41598-019-54811-w] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/25/2019] [Accepted: 11/18/2019] [Indexed: 11/08/2022] Open
Abstract
Head-related transfer functions (HRTFs) capture the direction-dependent way that sound interacts with the head and torso. In virtual audio systems, which aim to emulate these effects, non-individualized, generic HRTFs are typically used, leading to an inaccurate perception of virtual sound location. Training has the potential to exploit the brain's ability to adapt to these unfamiliar cues. In this study, three virtual sound localization training paradigms were evaluated: one provided simple visual positional confirmation of sound source location, a second introduced game design elements ("gamification"), and a final version additionally utilized head-tracking to provide listeners with experience of relative sound source motion ("active listening"). The results demonstrate a significant effect of training after a small number of short (12-minute) training sessions, which is retained across multiple days. Gamification alone had no significant effect on the efficacy of the training, but active listening resulted in significantly greater improvements in localization accuracy. In general, improvements in virtual sound localization following training generalized to a second set of non-individualized HRTFs, although some HRTF-specific changes were observed in polar angle judgement for the active listening group. The implications of this for the putative mechanisms of the adaptation process are discussed.
Affiliation(s)
- Mark A Steadman, Dyson School of Design Engineering, Imperial College London, London, UK; Department of Bioengineering, Imperial College London, London, UK
- Chungeun Kim, Dyson School of Design Engineering, Imperial College London, London, UK
- Jean-Hugues Lestang, Department of Electrical and Electronic Engineering, Imperial College London, London, UK
- Dan F M Goodman, Department of Electrical and Electronic Engineering, Imperial College London, London, UK
- Lorenzo Picinali, Dyson School of Design Engineering, Imperial College London, London, UK
12
Differential Adaptation in Azimuth and Elevation to Acute Monaural Spatial Hearing after Training with Visual Feedback. eNeuro 2019; 6:ENEURO.0219-19.2019. [PMID: 31601632 PMCID: PMC6825955 DOI: 10.1523/eneuro.0219-19.2019] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/07/2019] [Revised: 08/31/2019] [Accepted: 09/04/2019] [Indexed: 11/21/2022] Open
Abstract
Sound localization in the horizontal plane (azimuth) relies mainly on binaural difference cues in sound level and arrival time. Blocking one ear will perturb these cues, and may strongly affect the listener's azimuth performance. However, single-sided deaf listeners, as well as acutely single-sided plugged normal-hearing subjects, often use a combination of (ambiguous) monaural head-shadow cues, impoverished binaural level-difference cues, and (veridical, but limited) pinna- and head-related spectral cues to estimate source azimuth. To what extent listeners can adjust the relative contributions of these different cues is unknown, as the mechanisms underlying adaptive processes to acute monauralization are still unclear. By providing visual feedback during a brief training session with a high-pass (HP) filtered sound at a fixed sound level, we investigated the ability of listeners to adapt to their erroneous sound-localization percepts. We show that acutely plugged listeners rapidly adjusted the relative contributions of perceived sound level, and the spectral and distorted binaural cues, to improve their localization performance in azimuth, also for sound levels and locations different from those experienced during training. Interestingly, our results also show that this acute cue-reweighting led to poorer localization performance in elevation, which was in line with the acoustic-spatial information provided during training. We conclude that the human auditory system rapidly readjusts the weighting of all relevant localization cues, to adequately respond to the demands of the current acoustic environment, even if the adjustments may hamper veridical localization performance in the real world.
13
Interactions between egocentric and allocentric spatial coding of sounds revealed by a multisensory learning paradigm. Sci Rep 2019; 9:7892. [PMID: 31133688 PMCID: PMC6536515 DOI: 10.1038/s41598-019-44267-3] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/23/2018] [Accepted: 05/08/2019] [Indexed: 11/09/2022] Open
Abstract
Although sound position is initially head-centred (egocentric coordinates), our brain can also represent sounds relative to one another (allocentric coordinates). Whether reference frames for spatial hearing are independent or interact remained largely unexplored. Here we developed a new allocentric spatial-hearing training and tested whether it can improve egocentric sound-localisation performance in normal-hearing adults listening with one ear plugged. Two groups of participants (N = 15 each) performed an egocentric sound-localisation task (point to a syllable), in monaural listening, before and after 4 days of multisensory training on triplets of white-noise bursts paired with occasional visual feedback. Critically, one group performed an allocentric task (auditory bisection task), whereas the other processed the same stimuli to perform an egocentric task (pointing to a designated sound of the triplet). Unlike most previous works, we also tested a no-training group (N = 15). Egocentric sound-localisation abilities in the horizontal plane improved for all groups in the space ipsilateral to the earplug. This unexpected finding highlights the importance of including a no-training group when studying sound-localisation re-learning. Yet, performance changes were qualitatively different in trained compared to untrained participants, providing initial evidence that allocentric and multisensory procedures may prove useful when aiming to promote sound-localisation re-learning.
14
Auditory Accommodation to Poorly Matched Non-Individual Spectral Localization Cues Through Active Learning. Sci Rep 2019; 9:1063. [PMID: 30705332 PMCID: PMC6355836 DOI: 10.1038/s41598-018-37873-0] [Citation(s) in RCA: 16] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/08/2018] [Accepted: 12/17/2018] [Indexed: 12/05/2022] Open
Abstract
This study examines the effect of adaptation to non-ideal auditory localization cues represented by the Head-Related Transfer Function (HRTF) and the retention of training for up to three months after the last session. Continuing from a previous study on rapid non-individual HRTF learning, subjects using non-individual HRTFs were tested alongside control subjects using their own measured HRTFs. Perceptually worst-rated non-individual HRTFs were chosen to represent the worst-case scenario in practice and to allow for maximum potential for improvement. The methodology consisted of a training game and a localization test to evaluate performance, carried out over 10 sessions. Sessions 1–4, performed by all subjects, occurred at 1-week intervals. During these initial sessions, subjects showed improvement in localization performance for polar error. Following this, half of the subjects stopped the training game element, continuing with only the localization task. The group that continued to train showed improvement, with 3 of 8 subjects achieving group mean polar errors comparable to the control group. The majority of the group that stopped the training game retained the performance attained at the end of session 4. In general, adaptation was found to be quite subject dependent, highlighting the limits of HRTF adaptation in the case of poor HRTF matches. No identifier to predict learning ability was observed.
15
Denk F, Ewert SD, Kollmeier B. Spectral directional cues captured by hearing device microphones in individual human ears. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2018; 144:2072. [PMID: 30404454 DOI: 10.1121/1.5056173] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/12/2018] [Accepted: 09/11/2018] [Indexed: 06/08/2023]
Abstract
Spatial hearing abilities with hearing devices ultimately depend on how well acoustic directional cues are captured by the microphone(s) of the device. A comprehensive objective evaluation of monaural spectral directional cues captured at 9 microphone locations integrated in 5 hearing device styles is presented, utilizing a recent database of head-related transfer functions (HRTFs) that includes data from 16 human and 3 artificial ear pairs. Differences between HRTFs to the eardrum and to hearing device microphones were assessed by descriptive analyses and quantitative metrics, and compared to differences between individual ears. Directional information exploited for vertical sound localization was evaluated by means of computational models. Directional information at microphone locations inside the pinna is significantly biased and qualitatively poorer compared to locations in the ear canal; behind-the-ear microphones capture almost no directional cues. These errors are expected to impair vertical sound localization, even if the new cues were optimally mapped to locations. Differences between HRTFs to the eardrum and to hearing device microphones are qualitatively different from between-subject differences and can be described as a partial destruction rather than an alteration of relevant cues, although spectral difference metrics produce similar results. Dummy heads do not fully reflect the results with individual subjects.
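The quantitative spectral difference metrics mentioned in this abstract can be illustrated with a simple log-spectral distortion measure. The function below is a generic sketch under assumed array shapes, not the exact metric or database interface used in the study:

```python
import numpy as np

def log_spectral_distortion(h_ref, h_dev, eps=1e-12):
    """RMS log-magnitude difference (in dB) between two sets of transfer
    functions, e.g. eardrum HRTFs vs. hearing-device-microphone HRTFs.

    h_ref, h_dev: complex or real arrays of identical shape, e.g.
    (n_directions, n_frequency_bins). Returns a single dB figure; larger
    values mean the device microphone deviates more from the eardrum cues.
    """
    mag_ref = 20.0 * np.log10(np.abs(h_ref) + eps)
    mag_dev = 20.0 * np.log10(np.abs(h_dev) + eps)
    return float(np.sqrt(np.mean((mag_ref - mag_dev) ** 2)))
```

Note that a uniform gain offset alone already produces a nonzero distortion (a factor of two yields about 6 dB), so meaningful comparisons of directional cues typically normalize overall level first.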
Affiliation(s)
- Florian Denk, Medizinische Physik and Cluster of Excellence "Hearing4all," University of Oldenburg, Küpkersweg 74, 26129 Oldenburg, Germany
- Stephan D Ewert, Medizinische Physik and Cluster of Excellence "Hearing4all," University of Oldenburg, Küpkersweg 74, 26129 Oldenburg, Germany
- Birger Kollmeier, Medizinische Physik and Cluster of Excellence "Hearing4all," University of Oldenburg, Küpkersweg 74, 26129 Oldenburg, Germany
16
Kumpik DP, King AJ. A review of the effects of unilateral hearing loss on spatial hearing. Hear Res 2018; 372:17-28. [PMID: 30143248 PMCID: PMC6341410 DOI: 10.1016/j.heares.2018.08.003] [Citation(s) in RCA: 71] [Impact Index Per Article: 11.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/26/2018] [Revised: 07/05/2018] [Accepted: 08/09/2018] [Indexed: 12/13/2022]
Abstract
The capacity of the auditory system to extract spatial information relies principally on the detection and interpretation of binaural cues, i.e., differences in the time of arrival or level of the sound between the two ears. In this review, we consider the effects of unilateral or asymmetric hearing loss on spatial hearing, with a focus on the adaptive changes in the brain that may help to compensate for an imbalance in input between the ears. Unilateral hearing loss during development weakens the brain's representation of the deprived ear, and this may outlast the restoration of function in that ear and therefore impair performance on tasks such as sound localization and spatial release from masking that rely on binaural processing. However, loss of hearing in one ear also triggers a reweighting of the cues used for sound localization, resulting in increased dependence on the spectral cues provided by the other ear for localization in azimuth, as well as adjustments in binaural sensitivity that help to offset the imbalance in inputs between the two ears. These adaptive strategies enable the developing auditory system to compensate to a large degree for asymmetric hearing loss, thereby maintaining accurate sound localization. They can also be leveraged by training following hearing loss in adulthood. Although further research is needed to determine whether this plasticity can generalize to more realistic listening conditions and to other tasks, such as spatial unmasking, the capacity of the auditory system to undergo these adaptive changes has important implications for rehabilitation strategies in the hearing impaired.
Unilateral hearing loss in infancy can disrupt spatial hearing, even after binaural inputs are restored. Plasticity in the developing brain enables substantial recovery in sound localization accuracy. Adaptation to unilateral hearing loss is based on reweighting of monaural spectral cues and binaural plasticity. Training on auditory tasks can partially compensate for unilateral hearing loss, highlighting potential therapies.
Affiliation(s)
- Daniel P Kumpik, Department of Physiology, Anatomy and Genetics, Parks Road, Oxford, OX1 3PT, UK
- Andrew J King, Department of Physiology, Anatomy and Genetics, Parks Road, Oxford, OX1 3PT, UK
17
Watson CJG, Carlile S, Kelly H, Balachandar K. The Generalization of Auditory Accommodation to Altered Spectral Cues. Sci Rep 2017; 7:11588. [PMID: 28912440 PMCID: PMC5599623 DOI: 10.1038/s41598-017-11981-9] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/09/2017] [Accepted: 08/30/2017] [Indexed: 11/23/2022] Open
Abstract
The capacity of healthy adult listeners to accommodate to altered spectral cues to the source locations of broadband sounds has now been well documented. In recent years we have demonstrated that the degree and speed of accommodation are improved by using an integrated sensory-motor training protocol under anechoic conditions. Here we demonstrate that the learning which underpins the localization performance gains during the accommodation process using anechoic broadband training stimuli generalizes to environmentally relevant scenarios. As previously, alterations to monaural spectral cues were produced by fitting participants with custom-made outer ear molds, worn during waking hours. Following acute degradations in localization performance, participants then underwent daily sensory-motor training to improve localization accuracy using broadband noise stimuli over ten days. Participants not only demonstrated post-training improvements in localization accuracy for broadband noises presented in the same set of positions used during training, but also for stimuli presented in untrained locations, for monosyllabic speech sounds, and for stimuli presented in reverberant conditions. These findings shed further light on the neuroplastic capacity of healthy listeners, and represent the next step in the development of training programs for users of assistive listening devices which degrade localization acuity by distorting or bypassing monaural cues.
Affiliation(s)
- Christopher J G Watson, School of Medical Sciences, University of Sydney, Sydney, New South Wales, 2006, Australia
- Simon Carlile, School of Medical Sciences, University of Sydney, Sydney, New South Wales, 2006, Australia
- Heather Kelly, School of Medical Sciences, University of Sydney, Sydney, New South Wales, 2006, Australia
- Kapilesh Balachandar, School of Medical Sciences, University of Sydney, Sydney, New South Wales, 2006, Australia
18
Hassager HG, Wiinberg A, Dau T. Effects of hearing-aid dynamic range compression on spatial perception in a reverberant environment. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2017; 141:2556. [PMID: 28464692 DOI: 10.1121/1.4979783] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/25/2023]
Abstract
This study investigated the effects of fast-acting hearing-aid compression on normal-hearing and hearing-impaired listeners' spatial perception in a reverberant environment. Three compression schemes were considered, using virtualized speech and noise bursts: independent compression at each ear, linked compression between the two ears, and "spatially ideal" compression operating solely on the dry source signal. Listeners indicated the location and extent of their perceived sound images on the horizontal plane. Linear processing was considered as the reference condition. The results showed that both independent and linked compression resulted in more diffuse and broader sound images as well as internalization and image splits, whereby more image splits were reported for the noise bursts than for speech. Only the spatially ideal compression provided the listeners with a spatial percept similar to that obtained with linear processing. The same general pattern was observed for both listener groups. An analysis of the interaural coherence and direct-to-reverberant ratio suggested that the spatial distortions associated with independent and linked compression resulted from enhanced reverberant energy. Thus, modifications of the relation between the direct and the reverberant sound should be avoided in amplification strategies that attempt to preserve the natural sound scene while restoring loudness cues.
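The contrast between independent and linked compression can be sketched with a minimal feedforward compressor. The parameter values, and the linking rule of sharing the louder ear's detector level, are illustrative assumptions and not the processing used in the study:

```python
import numpy as np

def compress(stereo, fs, threshold_db=-40.0, ratio=3.0,
             attack_ms=5.0, release_ms=50.0, linked=False):
    """Minimal feedforward dynamic range compressor for an (n, 2) signal.

    linked=False: each ear is compressed independently, so the louder ear
                  receives more gain reduction and interaural level
                  differences (ILDs) shrink.
    linked=True : both ears share the louder ear's detector level, so an
                  identical gain is applied to both and ILDs are preserved.
    """
    a_att = np.exp(-1.0 / (fs * attack_ms / 1000.0))   # attack smoothing
    a_rel = np.exp(-1.0 / (fs * release_ms / 1000.0))  # release smoothing
    slope = 1.0 - 1.0 / ratio
    eps = 1e-12
    env = np.zeros(2)                  # smoothed level estimate per channel
    out = np.zeros_like(stereo)
    for i in range(stereo.shape[0]):
        lvl = np.abs(stereo[i])
        for ch in range(2):            # one-pole attack/release envelope
            a = a_att if lvl[ch] > env[ch] else a_rel
            env[ch] = a * env[ch] + (1.0 - a) * lvl[ch]
        env_db = 20.0 * np.log10(env + eps)
        if linked:
            env_db[:] = env_db.max()   # shared detector: louder ear wins
        gain_db = -np.maximum(env_db - threshold_db, 0.0) * slope
        out[i] = stereo[i] * 10.0 ** (gain_db / 20.0)
    return out
```

Running a tone with a 20 dB ILD through both modes shows the effect directly: independent compression shrinks the ILD substantially, while linked compression leaves it intact, which is why linking is often proposed to preserve spatial cues.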
Affiliation(s)
- Henrik Gert Hassager, Hearing Systems Group, Department of Electrical Engineering, Technical University of Denmark, DK-2800 Kongens Lyngby, Denmark
- Alan Wiinberg, Hearing Systems Group, Department of Electrical Engineering, Technical University of Denmark, DK-2800 Kongens Lyngby, Denmark
- Torsten Dau, Hearing Systems Group, Department of Electrical Engineering, Technical University of Denmark, DK-2800 Kongens Lyngby, Denmark
19
Jóhannesson ÓI, Balan O, Unnthorsson R, Moldoveanu A, Kristjánsson Á. The Sound of Vision Project: On the Feasibility of an Audio-Haptic Representation of the Environment, for the Visually Impaired. Brain Sci 2016; 6:brainsci6030020. [PMID: 27355966 PMCID: PMC5039449 DOI: 10.3390/brainsci6030020] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/28/2016] [Revised: 06/18/2016] [Accepted: 06/23/2016] [Indexed: 11/16/2022] Open
Abstract
The Sound of Vision project involves developing a sensory substitution device that is aimed at creating and conveying a rich auditory representation of the surrounding environment to the visually impaired. However, the feasibility of such an approach is strongly constrained by neural flexibility, the possibilities of sensory substitution, and adaptation to changed sensory input. We review evidence for such flexibility from various perspectives. We discuss neuroplasticity of the adult brain with an emphasis on functional changes in the visually impaired compared to sighted people. We discuss effects of adaptation on brain activity, in particular short-term and long-term effects of repeated exposure to particular stimuli. We then discuss evidence for sensory substitution of the kind that Sound of Vision involves, before finally discussing evidence for adaptation to changes in the auditory environment. We conclude that sensory substitution enterprises such as Sound of Vision are quite feasible in light of the available evidence, which is encouraging regarding such projects.
Affiliation(s)
- Ómar I Jóhannesson, Laboratory of Visual Perception and Visuo-motor Control, Faculty of Psychology, School of Health Sciences, University of Iceland, Reykjavik 101, Iceland
- Oana Balan, Faculty of Automatic Control and Computers, Computer Science and Engineering Department, University Politehnica of Bucharest, Bucharest 060042, Romania
- Runar Unnthorsson, Faculty of Industrial Engineering, Mechanical Engineering and Computer Science, School of Engineering and Natural Sciences, University of Iceland, Reykjavik 101, Iceland
- Alin Moldoveanu, Faculty of Automatic Control and Computers, Computer Science and Engineering Department, University Politehnica of Bucharest, Bucharest 060042, Romania
- Árni Kristjánsson, Laboratory of Visual Perception and Visuo-motor Control, Faculty of Psychology, School of Health Sciences, University of Iceland, Reykjavik 101, Iceland
20
Keating P, Rosenior-Patten O, Dahmen JC, Bell O, King AJ. Behavioral training promotes multiple adaptive processes following acute hearing loss. eLife 2016; 5:e12264. [PMID: 27008181 PMCID: PMC4841776 DOI: 10.7554/elife.12264] [Citation(s) in RCA: 28] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/12/2015] [Accepted: 03/23/2016] [Indexed: 11/13/2022] Open
Abstract
The brain possesses a remarkable capacity to compensate for changes in inputs resulting from a range of sensory impairments. Developmental studies of sound localization have shown that adaptation to asymmetric hearing loss can be achieved either by reinterpreting altered spatial cues or by relying more on those cues that remain intact. Adaptation to monaural deprivation in adulthood is also possible, but appears to lack such flexibility. Here we show, however, that appropriate behavioral training enables monaurally-deprived adult humans to exploit both of these adaptive processes. Moreover, cortical recordings in ferrets reared with asymmetric hearing loss suggest that these forms of plasticity have distinct neural substrates. An ability to adapt to asymmetric hearing loss using multiple adaptive processes is therefore shared by different species and may persist throughout the lifespan. This highlights the fundamental flexibility of neural systems, and may also point toward novel therapeutic strategies for treating sensory disorders.
The brain normally compares the timing and intensity of the sounds that reach each ear to work out a sound's origin. Hearing loss in one ear disrupts these between-ear comparisons, which causes listeners to make errors in this process. With time, however, the brain adapts to this hearing loss and once again learns to localize sounds accurately. Previous research has shown that young ferrets can adapt to hearing loss in one ear in two distinct ways. The ferrets either learn to remap the altered between-ear comparisons, caused by losing hearing in one ear, onto their new locations, or they learn to locate sounds using only their good ear. Each strategy is suited to localizing different types of sound, but it was not known how this adaptive flexibility unfolds over time, whether it persists throughout the lifespan, or whether it is shared by other species. Now, Keating et al. show that, with some coaching, adult humans also adapt to temporary loss of hearing in one ear using the same two strategies. In the experiments, adult humans were trained to localize different kinds of sounds while wearing an earplug in one ear. These sounds were presented from 12 loudspeakers arranged in a horizontal circle around the person being tested. The experiments showed that short periods of behavioral training enable adult humans to adapt to a hearing loss in one ear and recover their ability to localize sounds. Just like the ferrets, adult humans learned to correctly associate altered between-ear comparisons with their new locations and to rely more on the cues from the unplugged ear to locate sound. Which of these adaptive strategies the participants used depended on the frequencies present in the sounds. The cells in the ear and brain that detect and make sense of sound typically respond best to a limited range of frequencies, and so this suggests that each strategy relies on a distinct set of cells. Keating et al. confirmed in ferrets that different brain cells are indeed used to bring about adaptation to hearing loss in one ear using each strategy. These insights may aid the development of new therapies to treat hearing loss.
Affiliation(s)
- Peter Keating, Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
- Onayomi Rosenior-Patten, Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
- Johannes C Dahmen, Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
- Olivia Bell, Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
- Andrew J King, Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
21
Mendonça C, Escher A, van de Par S, Colonius H. Predicting auditory space calibration from recent multisensory experience. Exp Brain Res 2015; 233:1983-91. [PMID: 25795081 PMCID: PMC4464732 DOI: 10.1007/s00221-015-4259-z] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/01/2014] [Accepted: 03/12/2015] [Indexed: 11/05/2022]
Abstract
Multisensory experience can lead to auditory space recalibration. After exposure to discrepant audiovisual stimulation, sound percepts are displaced in space, in the direction of the previous visual stimulation. This study focuses on identifying the factors in recent sensory experience leading to such auditory space shifts. Sequences of five audiovisual pairs were presented, each randomly congruent or discrepant in space. Each sequence was followed by a single auditory trial and two visual trials. In each trial, participants had to identify the perceived stimulus positions. We found that auditory localization is shifted during audiovisual discrepant trials and during subsequent auditory trials, suggesting a recalibration effect. Time did not lead to greater recalibration effects. The last audiovisual trial affects the subsequent auditory shift the most. The number of discrepant trials in a sequence, and the number of consecutive discrepant trials, also correlated with the subsequent auditory shift. To estimate the individual contribution of previously presented trials to the recalibration effect, a best-fitting model was developed to predict the shift as a linear weighted combination of stimulus features: (1) whether matching or discrepant trials occurred in the sequence, (2) the total number of discrepant trials, (3) the maximum number of consecutive discrepant trials, and (4) whether the last trial was discrepant or not. The selected model is a function of two properties: the type of stimulus in the last trial of the audiovisual sequence and the overall probability of mismatching trials in the sequence.
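The model search described in this abstract, predicting the auditory shift from weighted sequence features, can be sketched as an ordinary least-squares fit over the four candidate predictors. The feature extraction and fitting below are a generic reconstruction under assumed encodings, not the authors' code:

```python
import numpy as np

def sequence_features(seq):
    """Four candidate predictors for a sequence of audiovisual trials
    (1 = spatially discrepant, 0 = congruent): whether any discrepant
    trial occurred, the total number of discrepant trials, the longest
    run of consecutive discrepant trials, and whether the last trial
    was discrepant."""
    seq = np.asarray(seq)
    run = longest = 0
    for d in seq:
        run = run + 1 if d else 0   # track the current discrepant run
        longest = max(longest, run)
    return np.array([float(seq.any()), float(seq.sum()),
                     float(longest), float(seq[-1])])

def fit_shift_model(sequences, shifts):
    """Ordinary least-squares weights (intercept first) that predict the
    auditory shift from the sequence features."""
    X = np.array([sequence_features(s) for s in sequences])
    X = np.column_stack([np.ones(len(X)), X])   # prepend intercept column
    w, *_ = np.linalg.lstsq(X, np.asarray(shifts, dtype=float), rcond=None)
    return w
```

Model selection as in the study would then compare such fits across feature subsets; with correlated predictors (e.g. "any discrepant" vs. "total discrepant"), the retained subset matters more than the raw weights.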
Affiliation(s)
- Catarina Mendonça, Department of Signal Processing and Acoustics, Aalto University, Otakaari 5, 02150, Espoo, Finland