1. Ryu W, Lee S, Park E. The Effect of Training on Localizing HoloLens-Generated 3D Sound Sources. Sensors (Basel) 2024;24:3442. PMID: 38894232; PMCID: PMC11174390; DOI: 10.3390/s24113442.
Abstract
Sound localization is a crucial aspect of human auditory perception. Virtual reality (VR) technologies provide immersive audio platforms that allow listeners to experience natural sounds based on their ability to localize sound. However, the sound simulations generated by these platforms are based on a generic head-related transfer function (HRTF) and, because this function varies considerably across individuals, often lack accuracy in individual sound perception and localization. In this study, we investigated the disparities between the sound-source locations perceived by users and the locations generated by the platform, and asked whether users can be trained to adapt to the platform-generated sound sources. We used the Microsoft HoloLens 2 virtual platform and collected data from 12 subjects across six separate training sessions over 2 weeks. We employed three training modes to assess their effects on sound localization, in particular the impact of multimodal error guidance, combining visual and sound guidance with kinesthetic/postural guidance, on the effectiveness of training. We analyzed the collected data in terms of the training effect between pre- and post-sessions, as well as the retention effect between two separate sessions, using subject-wise paired statistics. Our findings indicate that the training effect between pre- and post-sessions was statistically significant, in particular when kinesthetic/postural guidance was combined with visual and sound guidance. Conversely, visual error guidance alone was largely ineffective. As for the retention effect between two separate sessions, we found no statistically significant retention for any of the three error-guidance modes over the 2-week training period. These findings can contribute to the improvement of VR technologies by ensuring they are designed to optimize human sound localization abilities.
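The subject-wise paired pre/post comparison this abstract describes can be sketched in a few lines. The error values below are hypothetical placeholders (not the study's data), and the hand-rolled statistic mirrors what `scipy.stats.ttest_rel` would compute:

```python
import math

# Hypothetical mean absolute localization errors (degrees) for 12 subjects,
# before and after a training session. These numbers are illustrative only.
pre  = [18.2, 22.5, 15.9, 20.1, 25.3, 17.8, 21.0, 19.4, 23.7, 16.5, 24.1, 20.8]
post = [14.1, 19.0, 13.2, 16.5, 21.8, 15.0, 17.4, 16.9, 20.2, 14.8, 21.5, 17.6]

# Subject-wise paired t statistic: each subject serves as their own control,
# so we test whether the per-subject pre-minus-post differences exceed zero.
diffs = [a - b for a, b in zip(pre, post)]
n = len(diffs)
mean_d = sum(diffs) / n
var_d = sum((d - mean_d) ** 2 for d in diffs) / (n - 1)  # sample variance
t_stat = mean_d / math.sqrt(var_d / n)                   # df = n - 1 = 11

print(f"mean improvement = {mean_d:.2f} deg, t({n - 1}) = {t_stat:.2f}")
```

With df = 11, a |t| above roughly 2.2 corresponds to p < 0.05 in a two-sided test, which is the kind of threshold a subject-wise paired analysis would apply.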
Affiliation(s)
- Wonyeol Ryu
- Department of Electrical and Computer Engineering, Sungkyunkwan University, Suwon 16419, Republic of Korea
- Sukhan Lee
- Artificial Intelligence Department, Sungkyunkwan University, Suwon 16419, Republic of Korea
- Eunil Park
- Department of Intelligent Software, Sungkyunkwan University, Suwon 16419, Republic of Korea
2. Valzolgher C, Capra S, Sum K, Finos L, Pavani F, Picinali L. Spatial hearing training in virtual reality with simulated asymmetric hearing loss. Sci Rep 2024;14:2469. PMID: 38291126; PMCID: PMC10827792; DOI: 10.1038/s41598-024-51892-0.
Abstract
Sound localization is essential to perceive the surrounding world and to interact with objects. This ability can be learned over time, and multisensory and motor cues play a crucial role in the learning process. A recent study demonstrated that, when training localization skills, reaching to the sound source to indicate its position reduced localization errors faster and to a greater extent than simply naming the source position, even though in both tasks participants received the same feedback about the correct position of the sound source after a wrong response. However, it remains to be established which features made reaching to sounds more effective than naming. In the present study, we introduced a further condition in which the hand provides the response but does not reach toward the space occupied by the target source: the pointing condition. We tested three groups of participants (naming, pointing, and reaching groups), each performing a sound localization task in normal and altered listening situations (i.e., mild-moderate unilateral hearing loss) simulated through auditory virtual reality technology. The experiment comprised four blocks: during the first and last blocks, participants were tested in the normal listening condition, and during the second and third in the altered listening condition. We measured their performance, their subjective judgments (e.g., effort), and their head-related behavior (through kinematic tracking). First, performance decreased when participants were exposed to asymmetrical mild-moderate hearing impairment, most markedly on the ipsilateral side and for the pointing group. Second, all groups reduced their localization errors across the altered listening blocks, but this reduction was greater for the reaching and pointing groups than for the naming group. Crucially, the reaching group showed the greatest error reduction on the side where the listening alteration was applied. Furthermore, across blocks, the reaching and pointing groups increased their head motor behavior during the task (i.e., approaching head movements toward the region of the sound) more than the naming group. Third, while performance in the unaltered blocks (first and last) was comparable, only the reaching group continued to exhibit head behavior similar to that developed during the altered blocks (second and third), corroborating the previously observed relationship between the reaching-to-sounds task and head movements. In conclusion, this study further demonstrates the effectiveness of reaching to sounds, compared to pointing and naming, in the learning process. This effect could be related both to the process of implementing goal-directed motor actions and to the role of reaching actions in fostering head-related motor strategies.
Affiliation(s)
- Chiara Valzolgher
- Center for Mind/Brain Sciences (CIMeC), University of Trento, Corso Bettini 31, 38068, Rovereto, TN, Italy
- Sara Capra
- Center for Mind/Brain Sciences (CIMeC), University of Trento, Corso Bettini 31, 38068, Rovereto, TN, Italy
- Kevin Sum
- Audio Experience Design (www.axdesign.co.uk), Imperial College London, London, UK
- Livio Finos
- Department of Statistical Sciences, University of Padova, Padova, Italy
- Francesco Pavani
- Center for Mind/Brain Sciences (CIMeC), University of Trento, Corso Bettini 31, 38068, Rovereto, TN, Italy
- Department of Psychology and Cognitive Sciences (DiPSCo), University of Trento, Rovereto, Italy
- Centro Interuniversitario di Ricerca "Cognizione, Linguaggio e Sordità" (CIRCLeS), Rovereto, Italy
- Lorenzo Picinali
- Audio Experience Design (www.axdesign.co.uk), Imperial College London, London, UK
3. Andrejková G, Best V, Kopčo N. Time scales of adaptation to context in horizontal sound localization. J Acoust Soc Am 2023;154:2191-2202. PMID: 37815410; PMCID: PMC10567122; DOI: 10.1121/10.0021304.
Abstract
Psychophysical experiments explored how the repeated presentation of a context, consisting of an adaptor and a target, induces plasticity in the localization of an identical target presented alone on interleaved trials. The plasticity, and its time course, was examined both in a classroom and in an anechoic chamber. Adaptors and targets were 2-ms noise clicks, and listeners were tasked with localizing the targets while ignoring the adaptors (when present). The context was either simple, consisting of a single-click adaptor and a target, or complex, containing either a single-click or an eight-click adaptor that varied from trial to trial. The adaptor was presented either from a frontal or a lateral location, fixed within a run. The presence of context caused responses to the isolated targets to be displaced by up to 14° away from the adaptor location. This effect was stronger and slower if the context was complex, growing over the 5-min duration of the runs. Additionally, the simple-context buildup had a slower onset in the classroom. Overall, the results illustrate that sound localization is subject to slow adaptive processes that depend on the spatial and temporal structure of the context and on the level of reverberation in the environment.
Affiliation(s)
- Gabriela Andrejková
- Institute of Computer Science, Faculty of Science, P. J. Šafárik University, Košice, 04001, Slovakia
- Virginia Best
- Department of Speech, Language, and Hearing Sciences, Boston University, Boston, Massachusetts 02215, USA
- Norbert Kopčo
- Institute of Computer Science, Faculty of Science, P. J. Šafárik University, Košice, 04001, Slovakia
4. Sanchez Jimenez A, Willard KJ, Bajo VM, King AJ, Nodal FR. Persistence and generalization of adaptive changes in auditory localization behavior following unilateral conductive hearing loss. Front Neurosci 2023;17:1067937. PMID: 36816127; PMCID: PMC9929551; DOI: 10.3389/fnins.2023.1067937.
Abstract
Introduction: Sound localization relies on the neural processing of binaural and monaural spatial cues generated by the physical properties of the head and body. Hearing loss in one ear compromises binaural computations, impairing the ability to localize sounds in the horizontal plane. With appropriate training, adult individuals can adapt to this binaural imbalance and largely recover their localization accuracy. However, it remains unclear how long this learning is retained or whether it generalizes to other stimuli.
Methods: We trained ferrets to localize broadband noise bursts in quiet conditions and measured their initial head orienting responses and approach-to-target behavior. To evaluate the persistence of auditory spatial learning, we tested the sound localization performance of the animals over repeated periods of monaural earplugging that were interleaved with short or long periods of normal binaural hearing. To explore learning generalization to other stimulus types, we measured the localization accuracy before and after adaptation using different bandwidth stimuli presented against constant or amplitude-modulated background noise.
Results: Retention of learning resulted in a smaller initial deficit when the same ear was occluded on subsequent occasions. Each time, the animals' performance recovered with training to near pre-plug levels of localization accuracy. By contrast, switching the earplug to the contralateral ear resulted in less adaptation, indicating that the capacity to learn a new strategy for localizing sound is more limited if the animals have previously adapted to conductive hearing loss in the opposite ear. Moreover, the degree of adaptation to the training stimulus for individual animals was significantly correlated with the extent to which learning extended to untrained octave-band target sounds presented in silence and to broadband targets presented in background noise, suggesting that adaptation and generalization go hand in hand.
Conclusions: Together, these findings provide further evidence for plasticity in the weighting of monaural and binaural cues during adaptation to unilateral conductive hearing loss, and show that the training-dependent recovery in spatial hearing can generalize to more naturalistic listening conditions, so long as the target sounds provide sufficient spatial information.
5. Dynamic speaker localization based on a novel lightweight R–CNN model. Neural Comput Appl 2023. DOI: 10.1007/s00521-023-08251-3.
6.
Abstract
Objectives: We assessed if spatial hearing training improves sound localization in bilateral cochlear implant (BCI) users and whether its benefits can generalize to untrained sound localization tasks.
Design: In 20 BCI users, we assessed the effects of two training procedures (spatial versus nonspatial control training) on two different tasks performed before and after training (head-pointing to sound and audiovisual attention orienting). In the spatial training, participants identified sound position by reaching toward the sound sources with their hand. In the nonspatial training, comparable reaching movements served to identify sound amplitude modulations. A crossover randomized design allowed comparison of training procedures within the same participants. Spontaneous head movements while listening to the sounds were allowed and tracked to correlate them with localization performance.
Results: During spatial training, BCI users reduced their sound localization errors in azimuth and adapted their spontaneous head movements as a function of sound eccentricity. These effects generalized to the head-pointing sound localization task, as revealed by greater reduction of sound localization error in azimuth and more accurate first head-orienting response, as compared to the control nonspatial training. BCI users benefited from auditory spatial cues for orienting visual attention, but the spatial training did not enhance this multisensory attention ability.
Conclusions: Sound localization in BCI users improves with spatial reaching-to-sound training, with benefits to a nontrained sound localization task. These findings pave the way to novel rehabilitation procedures in clinical contexts.
7. Hüg MX, Bermejo F, Tommasini FC, Di Paolo EA. Effects of guided exploration on reaching measures of auditory peripersonal space. Front Psychol 2022;13:983189. PMID: 36337523; PMCID: PMC9632294; DOI: 10.3389/fpsyg.2022.983189.
Abstract
Despite the recognized importance of bodily movements in spatial audition, few studies have integrated action-based protocols with spatial hearing in the peripersonal space. Recent work shows that tactile feedback and active exploration allow participants to improve performance in auditory distance perception tasks. However, the role of the different aspects involved in the learning phase, such as voluntary control of movement, proprioceptive cues, and the possibility of self-correcting errors, is still unclear. We studied the effect of guided reaching exploration on perceptual learning of auditory distance in the peripersonal space. We implemented a pretest-posttest experimental design in which blindfolded participants had to reach for a sound source located in this region. They were divided into three groups differentiated by the intermediate training phase: Guided, in which an experimenter guided the participant's arm to contact the sound source; Active, in which the participant freely explored the space until contacting the source; and Control, without tactile feedback. The effects of exploration feedback on auditory distance perception in the peripersonal space were heterogeneous. Both the Guided and Active groups changed their performance; however, participants in the Guided group tended to overestimate distances more than those in the Active group. The response error of the Guided group corresponds to a generalized calibration criterion over the entire range of reachable distances, whereas the Active group made different adjustments for proximal and distal positions. The results suggest that guided exploration can induce changes in the boundary of the auditory reachable space. We postulate that aspects of agency, such as initiation, control, and monitoring of movement, assume different degrees of involvement in guided and active tasks, reinforcing a non-binary approach to the question of activity-passivity in perceptual learning and supporting a complex view of the phenomena involved in action-based learning.
Affiliation(s)
- Mercedes X. Hüg
- Centro de Investigación y Transferencia en Acústica, CONICET, Universidad Tecnológica Nacional Facultad Regional Córdoba, Córdoba, Argentina
- Facultad de Psicología, Universidad Nacional de Córdoba, Córdoba, Argentina
- Fernando Bermejo
- Centro de Investigación y Transferencia en Acústica, CONICET, Universidad Tecnológica Nacional Facultad Regional Córdoba, Córdoba, Argentina
- Facultad de Psicología, Universidad Nacional de Córdoba, Córdoba, Argentina
- Fabián C. Tommasini
- Centro de Investigación y Transferencia en Acústica, CONICET, Universidad Tecnológica Nacional Facultad Regional Córdoba, Córdoba, Argentina
- Ezequiel A. Di Paolo
- Ikerbasque, Basque Foundation for Science, Bilbao, Spain
- IAS Research Center for Life, Mind and Society, University of the Basque Country, San Sebastián, Spain
- Department of Informatics, University of Sussex, Brighton, United Kingdom
8. Valzolgher C, Todeschini M, Verdelet G, Gatel J, Salemme R, Gaveau V, Truy E, Farnè A, Pavani F. Adapting to altered auditory cues: Generalization from manual reaching to head pointing. PLoS One 2022;17:e0263509. PMID: 35421095; PMCID: PMC9009652; DOI: 10.1371/journal.pone.0263509.
Abstract
Localising sounds means having the ability to process auditory cues deriving from the interplay among sound waves, the head, and the ears. When auditory cues change because of temporary or permanent hearing loss, sound localization becomes difficult and uncertain. The brain can adapt to altered auditory cues throughout life, and multisensory training can promote the relearning of spatial hearing skills. Here, we studied the training potential of sound-oriented motor behaviour, testing whether training based on manual actions toward sounds can produce learning effects that generalize to different auditory spatial tasks. We assessed spatial hearing relearning in normal-hearing adults with a plugged ear, using visual virtual reality and body motion tracking. Participants performed two auditory tasks that entail explicit and implicit processing of sound position (head-pointing sound localization and audio-visual attention cueing, respectively), before and after receiving a spatial training session in which they identified sound position by reaching to nearby auditory sources. Using a crossover design, the effects of this spatial training were compared to a control condition involving the same physical stimuli but different task demands (i.e., a non-spatial discrimination of amplitude modulations in the sound). Spatial hearing in one-ear-plugged participants improved more after the reaching-to-sounds training than after the control condition. Training by reaching also modified head-movement behaviour during listening. Crucially, the improvements observed during training generalized to a different sound localization task, possibly as a consequence of newly acquired head-movement strategies.
Affiliation(s)
- Chiara Valzolgher
- Integrative, Multisensory, Perception, Action and Cognition Team (IMPACT), Lyon Neuroscience Research Center, Lyon, France
- Center for Mind/Brain Sciences—CIMeC, University of Trento, Trento, Italy
- Michela Todeschini
- Department of Psychology and Cognitive Sciences (DiPSCo), University of Trento, Trento, Italy
- Gregoire Verdelet
- Integrative, Multisensory, Perception, Action and Cognition Team (IMPACT), Lyon Neuroscience Research Center, Lyon, France
- Neuroimmersion, Lyon Neuroscience Research Center, Lyon, France
- Romeo Salemme
- Integrative, Multisensory, Perception, Action and Cognition Team (IMPACT), Lyon Neuroscience Research Center, Lyon, France
- Neuroimmersion, Lyon Neuroscience Research Center, Lyon, France
- Valerie Gaveau
- Integrative, Multisensory, Perception, Action and Cognition Team (IMPACT), Lyon Neuroscience Research Center, Lyon, France
- University of Lyon 1, Villeurbanne, France
- Eric Truy
- Hospices Civils de Lyon, Lyon, France
- Alessandro Farnè
- Integrative, Multisensory, Perception, Action and Cognition Team (IMPACT), Lyon Neuroscience Research Center, Lyon, France
- Center for Mind/Brain Sciences—CIMeC, University of Trento, Trento, Italy
- Neuroimmersion, Lyon Neuroscience Research Center, Lyon, France
- Francesco Pavani
- Integrative, Multisensory, Perception, Action and Cognition Team (IMPACT), Lyon Neuroscience Research Center, Lyon, France
- Center for Mind/Brain Sciences—CIMeC, University of Trento, Trento, Italy
9. Nisha KV, Kumar AU. Effects of Spatial Training Paradigms on Auditory Spatial Refinement in Normal-Hearing Listeners: A Comparative Study. J Audiol Otol 2022;26:113-121. PMID: 35196448; PMCID: PMC9271736; DOI: 10.7874/jao.2021.00451.
Abstract
Background and Objectives: This study compared the effectiveness of two spatial training programs, using real and virtual sound sources, in refining spatial acuity in listeners with normal hearing.
Subjects and Methods: The study was conducted on two groups of 10 participants each; Groups I and II underwent spatial training using real and virtual sound sources, respectively. The study comprised three phases: pre-training, training, and post-training. At the pre- and post-training phases, the spatial acuity of the participants was measured using real sound sources through the localization test and virtual sound sources through the virtual acoustic space identification (VASI) test. Interaural time difference (ITD) and interaural level difference (ILD) thresholds were also measured. In the training phase, Group I participants underwent localization training using loudspeakers in the free field, while Group II participants underwent virtual acoustic space (VAS) training using virtual sound sources presented over headphones. Both training methods consisted of 5-8 sessions (20 min each) of systematically presented stimuli, graded according to duration and back attenuation (for real-source training) or number of VAS locations (for virtual-source training).
Results: Independent t-tests comparing the spatial learning scores (pre- vs. post-training) for each measure showed differences in performance between the two groups. Group II performed better than Group I on the VASI test, while Group I outperformed Group II on the ITD. Both groups improved equally on the localization test and the ILD.
Conclusions: Based on the present findings, we recommend the use of VAS training, which has practical advantages in terms of cost effectiveness, minimal equipment, and end-user usefulness.
Affiliation(s)
- Ajith Uppunda Kumar
- Department of Audiology, All India Institute of Speech and Hearing, Naimisham Campus, Mysore, India
10. Audet DJ, Gray WO, Brown AD. Audiovisual training rapidly reduces potentially hazardous perceptual errors caused by earplugs. Hear Res 2022;414:108394. PMID: 34911017; PMCID: PMC8761180; DOI: 10.1016/j.heares.2021.108394.
Abstract
Our ears capture sound from all directions but do not encode directional information explicitly. Instead, subtle acoustic features associated with unique sound source locations must be learned through experience. Surprisingly, aspects of this mapping process remain highly plastic throughout adulthood: Adult human listeners can accommodate acutely modified acoustic inputs ("new ears") over a period of a few weeks to recover near-normal sound localization, and this process can be accelerated with explicit training. Here we evaluated the extent of such plasticity given only transient exposure to distorted inputs. Distortions were produced via earplugs, which severely degrade sound localization performance, constraining their usability in real-world settings that require accurate directional hearing. Localization was measured over a period of ten weeks. Provision of feedback via simple paired auditory and visual stimuli led to a rapid decrease in the occurrence of large errors (responses >|±30°| from target) despite only once-weekly exposure to the altered inputs. Moreover, training effects generalized to untrained sound source locations. Lesser but qualitatively similar improvements were observed in a group of subjects that did not receive explicit feedback. In total, data demonstrate that even transient exposure to altered spatial acoustic information is sufficient for meaningful perceptual improvement (i.e., chronic exposure is not required), offering insight on the nature and time course of perceptual learning in the context of spatial hearing. Data also suggest that the large and potentially hazardous errors in localization caused by earplugs can be mitigated with appropriate training, offering a practical means to increase their usability.
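The large-error criterion described in this abstract (responses more than |±30°| from target) reduces to a simple count over trials. A minimal sketch with made-up azimuths, not the study's data:

```python
# Hypothetical target and response azimuths in degrees for one listener.
targets   = [-60, -30, 0, 30, 60, -45, 45, 15]
responses = [-20, -35, 5, 70, 55, -40, 10, 20]

# Signed errors, and the fraction of responses landing more than 30 degrees
# from the target -- the "large error" criterion described above.
errors = [r - t for t, r in zip(targets, responses)]
large_error_rate = sum(abs(e) > 30 for e in errors) / len(errors)

print(f"{sum(abs(e) > 30 for e in errors)} of {len(errors)} responses "
      f"were large errors (rate = {large_error_rate:.3f})")
```

Tracking this rate per session, rather than mean error alone, highlights exactly the hazardous misjudgments the feedback training is meant to suppress.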
Affiliation(s)
- David J Audet
- Department of Speech and Hearing Sciences, University of Washington, Seattle, WA 98105, United States
- William O Gray
- Department of Speech and Hearing Sciences, University of Washington, Seattle, WA 98105, United States
- Andrew D Brown
- Department of Speech and Hearing Sciences, University of Washington, Seattle, WA 98105, United States
- Virginia Merrill Bloedel Hearing Research Center, University of Washington, Seattle, WA 98195, United States
11. Klingel M, Laback B. Binaural-cue Weighting and Training-Induced Reweighting Across Frequencies. Trends Hear 2022;26:23312165221104872. PMID: 35791626; PMCID: PMC9272187; DOI: 10.1177/23312165221104872.
Abstract
During sound lateralization, the information provided by interaural differences in time (ITD) and level (ILD) is weighted, with ITDs and ILDs dominating for low and high frequencies, respectively. For mid frequencies, the weighting between these binaural cues can be changed via training. The present study investigated whether binaural-cue weights change gradually with increasing frequency region, whether they can be changed in various frequency regions, and whether such binaural-cue reweighting generalizes to untrained frequencies. In two experiments, a total of 39 participants lateralized 500-ms, 1/3-octave-wide noise bursts containing various ITD/ILD combinations in a virtual audio-visual environment. Binaural-cue weights were measured before and after a 2-session training in which, depending on the group, either ITDs or ILDs were visually reinforced. In experiment 1, four frequency bands (centered at 1000, 1587, 2520, and 4000 Hz) and a multiband stimulus comprising all four bands were presented during weight measurements. During training, only the 1000-, 2520-, and 4000-Hz bands were presented. In experiment 2, the weight measurements only included the two mid-frequency bands, while the training only included the 1587-Hz band. ILD weights increased gradually from low- to high-frequency bands. When ILDs were reinforced during training, ILD weights increased for the 4000-Hz band (experiment 1) and the 2520-Hz band (experiment 2). When ITDs were reinforced, ITD weights increased only for the 1587-Hz band (at specific azimuths). This suggests that ILD reweighting requires high frequencies, whereas ITD reweighting requires low frequencies that exclude regions providing fine-structure ITD cues. The changes in binaural-cue weights were independent of the trained bands, suggesting some generalization of binaural-cue reweighting.
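For readers unfamiliar with the two cues being reweighted, here is a minimal sketch of how ITD and ILD can be estimated from a stereo pair. The white-noise signal, delay, and attenuation are synthetic values chosen for illustration; this is not the study's stimulus-generation code:

```python
import numpy as np

fs = 44100                         # sample rate in Hz
rng = np.random.default_rng(0)
src = rng.standard_normal(2048)    # synthetic white-noise burst

# Simulate a source on the right: the left ear receives the waveform
# ~0.3 ms later and at half the amplitude (about -6 dB) of the right ear.
delay = int(0.0003 * fs)           # 13 samples at 44.1 kHz
right = src
left = np.concatenate([np.zeros(delay), src[:-delay]]) * 0.5

# ITD: lag of the peak of the interaural cross-correlation. With
# np.correlate(left, right, mode="full"), output index j maps to lag
# j - (len(right) - 1); a positive lag means the left signal is delayed.
lags = np.arange(-len(right) + 1, len(right))
itd_samples = lags[np.argmax(np.correlate(left, right, mode="full"))]
itd_ms = 1000.0 * itd_samples / fs

# ILD: interaural RMS level difference in dB (negative = left quieter).
def rms(x):
    return np.sqrt(np.mean(x ** 2))

ild_db = 20.0 * np.log10(rms(left) / rms(right))

print(f"ITD = {itd_ms:.2f} ms, ILD = {ild_db:.1f} dB")
```

Listeners weight these two estimates differently across frequency bands, and that weighting is precisely what the visual reinforcement in this study manipulates.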
Affiliation(s)
- Maike Klingel
- Department of Cognition, Emotion, and Methods in Psychology, Faculty of Psychology, 27258University of Vienna, Wien, Austria.,Acoustics Research Institute, Austrian Academy of Sciences, Wien, Austria
| | - Bernhard Laback
- Acoustics Research Institute, Austrian Academy of Sciences, Wien, Austria
| |
12. Vickers D, Salorio-Corbetto M, Driver S, Rocca C, Levtov Y, Sum K, Parmar B, Dritsakis G, Albanell Flores J, Jiang D, Mahon M, Early F, Van Zalk N, Picinali L. Involving Children and Teenagers With Bilateral Cochlear Implants in the Design of the BEARS (Both EARS) Virtual Reality Training Suite Improves Personalization. Front Digit Health 2021;3:759723. PMID: 34870270; PMCID: PMC8637804; DOI: 10.3389/fdgth.2021.759723.
Abstract
Older children and teenagers with bilateral cochlear implants often have poor spatial hearing because they cannot fuse sounds from the two ears. This deficit jeopardizes speech and language development, education, and social well-being. The lack of protocols for fitting bilateral cochlear implants and of resources for spatial-hearing training contributes to these difficulties. Spatial hearing develops with bilateral experience. A large body of research demonstrates that sound localisation can improve with training, underpinned by plasticity-driven changes in the auditory pathways. Generalizing training to non-trained auditory skills is best achieved by using a multi-modal (audio-visual) implementation and multi-domain training tasks (localisation, speech-in-noise, and spatial music). The goal of this work was to develop a package of virtual-reality games (BEARS, Both EARS) to train spatial hearing in young people (8–16 years) with bilateral cochlear implants using an action-research protocol. The protocol used formalized cycles for participants to trial aspects of the BEARS suite, reflect on their experiences, and in turn inform changes in the game implementations. This participatory design used the stakeholder participants as co-creators. The cycles for each of the three domains (localisation, spatial speech-in-noise, and spatial music) were customized to focus on the elements that the stakeholder participants considered important. The participants agreed that the final games were appropriate and ready to be used by patients. The main areas of modification were: the variety of immersive scenarios, to cover the age range and interests; the number of levels of complexity, to ensure that small improvements were measurable; feedback and reward schemes, to ensure positive reinforcement; and an additional implementation on an iPad for those who had difficulties with the headsets due to age or balance issues. The effectiveness of the BEARS training suite will be evaluated in a large-scale clinical trial to determine whether using the games leads to improvements in speech-in-noise performance, quality of life, perceived benefit, and cost utility. Such interventions allow patients to take control of their own management, reducing reliance on outpatient-based rehabilitation. For young people, a virtual-reality implementation is more engaging than traditional rehabilitation methods, and the participatory design used here has ensured that the BEARS games are relevant.
Affiliation(s)
- Deborah Vickers: Sound Laboratory, Cambridge Hearing Group, Clinical Neurosciences, University of Cambridge, Cambridge, United Kingdom
- Marina Salorio-Corbetto: Sound Laboratory, Cambridge Hearing Group, Clinical Neurosciences, University of Cambridge, Cambridge, United Kingdom
- Sandra Driver: St Thomas' Hearing Implant Centre, Guy's and St Thomas' NHS Foundation Trust, London, United Kingdom
- Christine Rocca: St Thomas' Hearing Implant Centre, Guy's and St Thomas' NHS Foundation Trust, London, United Kingdom
- Kevin Sum: Audio Experience Design, Dyson School of Design Engineering, Imperial College London, London, United Kingdom
- Bhavisha Parmar: Sound Laboratory, Cambridge Hearing Group, Clinical Neurosciences, University of Cambridge, Cambridge, United Kingdom
- Giorgos Dritsakis: Sound Laboratory, Cambridge Hearing Group, Clinical Neurosciences, University of Cambridge, Cambridge, United Kingdom
- Jordi Albanell Flores: Audio Experience Design, Dyson School of Design Engineering, Imperial College London, London, United Kingdom
- Dan Jiang: St Thomas' Hearing Implant Centre, Guy's and St Thomas' NHS Foundation Trust, London, United Kingdom
- Merle Mahon: Psychology and Language Sciences, Faculty of Brain Sciences, University College London, London, United Kingdom
- Frances Early: Department of Respiratory Medicine, Cambridge University Hospital NHS Foundation Trust, Cambridge, United Kingdom
- Nejra Van Zalk: Design Psychology Lab, Dyson School of Design Engineering, Imperial College London, London, United Kingdom
- Lorenzo Picinali: Audio Experience Design, Dyson School of Design Engineering, Imperial College London, London, United Kingdom
13
Klingel M, Kopčo N, Laback B. Reweighting of Binaural Localization Cues Induced by Lateralization Training. J Assoc Res Otolaryngol 2021;22:551-566. PMID: 33959826; PMCID: PMC8476684; DOI: 10.1007/s10162-021-00800-8.
Abstract
Normal-hearing listeners adapt to alterations in sound-localization cues. This adaptation can result from the establishment of a new spatial map of the altered cues or from a stronger relative weighting of unaltered compared to altered cues. Such reweighting has been shown for monaural vs. binaural cues; however, studies attempting to reweight the two binaural cues, interaural differences in time (ITD) and level (ILD), have yielded inconclusive results. This study investigated whether binaural-cue reweighting can be induced by lateralization training in a virtual audio-visual environment. Twenty normal-hearing participants, divided into two groups, completed an experiment consisting of 7 days of lateralization training, preceded and followed by a test measuring the binaural-cue weights. The participants' task was to lateralize 500-ms bandpass-filtered (2–4 kHz) noise bursts containing various combinations of spatially consistent and inconsistent binaural cues. During training, additional visual cues reinforced the azimuth corresponding to ITDs in one group and to ILDs in the other, and the azimuthal ranges of the binaural cues were manipulated group-specifically. Both groups showed a significant increase in the reinforced-cue weight from pre- to posttest, suggesting that participants reweighted the binaural cues in the expected direction. This reweighting occurred within the first training session. The results are relevant because binaural-cue reweighting likely occurs when normal-hearing listeners adapt to new acoustic environments. Reweighting might also be a factor underlying the low contribution of ITDs to sound localization in cochlear-implant listeners, as they typically do not experience reliable ITD cues with clinical devices.
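The cue-weighting idea in this abstract can be sketched numerically. This is a hypothetical illustration, not the study's fitted model: perceived azimuth is taken as a weighted average of the azimuths signalled by the ITD and the ILD, and training simply shifts the weights. All function names and numbers are invented for illustration.

```python
def perceived_azimuth(az_itd, az_ild, w_itd):
    """Perceived azimuth (degrees) as a weighted average of the azimuths
    signalled by the ITD and ILD cues; w_itd is the relative ITD weight."""
    return w_itd * az_itd + (1.0 - w_itd) * az_ild

# Spatially inconsistent cue pair: the ITD points to 30 deg, the ILD to 10 deg.
pre = perceived_azimuth(30.0, 10.0, w_itd=0.5)   # equal weighting before training
post = perceived_azimuth(30.0, 10.0, w_itd=0.8)  # ITD-reinforced training raised w_itd
print(pre, post)  # 20.0 26.0
```

Under this toy model, an increase in the reinforced-cue weight pulls the percept toward the azimuth signalled by that cue, which is the direction of the effect the study reports.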
Affiliation(s)
- Maike Klingel: Acoustics Research Institute, Austrian Academy of Sciences, 1040 Vienna, Austria; Department of Cognition, Emotion, and Methods in Psychology, Faculty of Psychology, University of Vienna, 1010 Vienna, Austria; Institute of Computer Science, Faculty of Science, P. J. Šafárik University in Košice, 04180 Košice, Slovakia
- Norbert Kopčo: Institute of Computer Science, Faculty of Science, P. J. Šafárik University in Košice, 04180 Košice, Slovakia
- Bernhard Laback: Acoustics Research Institute, Austrian Academy of Sciences, 1040 Vienna, Austria
14
Taub M, Yovel Y. Adaptive learning and recall of motor-sensory sequences in adult echolocating bats. BMC Biol 2021;19:164. PMID: 34412628; PMCID: PMC8377959; DOI: 10.1186/s12915-021-01099-w.
Abstract
BACKGROUND: Learning to adapt to changes in the environment is highly beneficial. This is especially true for echolocating bats, which forage in diverse environments, moving between open spaces and highly complex ones. Bats are known for their ability to rapidly adjust their sensing to auditory information gathered from the environment within milliseconds, but can they also benefit from longer adaptive processes? In this study, we examined adult bats' ability to slowly adapt their sensing strategy to a new type of environment they had never experienced for such long durations, and to then maintain this learned echolocation strategy over time. RESULTS: We show that over a period of weeks, Pipistrellus kuhlii bats gradually adapt their pre-takeoff echolocation sequence when moved to a constantly cluttered environment. After adopting this improved strategy, the bats retained the ability to use it instantaneously when placed back in a similarly cluttered environment, even after spending many months in a significantly less cluttered environment. CONCLUSIONS: We demonstrate long-term adaptive flexibility in sensory acquisition in adult animals. Our study also gives further insight into the importance of sensory planning in the initiation of precise sensorimotor behavior such as approaching for landing.
Affiliation(s)
- Mor Taub: Department of Zoology, Faculty of Life Sciences, Tel Aviv University, 6997801 Tel Aviv, Israel
- Yossi Yovel: Department of Zoology, Faculty of Life Sciences, Tel Aviv University, 6997801 Tel Aviv, Israel; Sagol School of Neuroscience, Tel Aviv University, 6997801 Tel Aviv, Israel
15
Potential of Augmented Reality Platforms to Improve Individual Hearing Aids and to Support More Ecologically Valid Research. Ear Hear 2021;41 Suppl 1:140S-146S. PMID: 33105268; PMCID: PMC7676615; DOI: 10.1097/aud.0000000000000961.
Abstract
An augmented reality (AR) platform combines several technologies in a system that can render individual “digital objects” that can be manipulated for a given purpose. In the audio domain, these may, for example, be generated by speaker separation, noise suppression, and signal enhancement. Access to such “digital objects” could be used to augment the auditory objects that the user wants to hear better. AR platforms used in conjunction with traditional hearing aids may help close the gap for people with hearing loss through multimodal sensor integration and by leveraging current artificial-intelligence research and machine-learning frameworks. This could take the form of an attention-driven signal-enhancement and noise-suppression platform with context awareness, which would improve the interpersonal communication experience in complex real-life situations. In that sense, an AR platform could serve as a front end to current and future hearing solutions: the AR device would enhance the signals to be attended, while hearing amplification would still be handled by the hearing aids. In this article, suggestions are made about why AR platforms may offer ideal affordances to compensate for hearing loss, and how research-focused AR platforms could support a better understanding of the role of hearing in everyday life.
16
Valzolgher C, Verdelet G, Salemme R, Lombardi L, Gaveau V, Farné A, Pavani F. Reaching to sounds in virtual reality: A multisensory-motor approach to promote adaptation to altered auditory cues. Neuropsychologia 2020;149:107665. PMID: 33130161; DOI: 10.1016/j.neuropsychologia.2020.107665.
Abstract
When localising sounds in space, the brain relies on internal models that specify the correspondence between the auditory input reaching the ears, the initial head position, and coordinates in external space. These models can be updated throughout life, setting the basis for re-learning spatial-hearing abilities in adulthood. In addition, strategic behavioural adjustments allow people to quickly adapt to atypical listening situations. Until recently, the potential role of dynamic listening, involving head movements or reaching to sounds, has remained largely overlooked. Here, we exploited visual virtual reality (VR) and real-time kinematic tracking to study the role of active multisensory-motor interactions when hearing individuals adapt to altered binaural cues (one ear plugged and muffed). Participants were immersed in a VR scenario showing 17 virtual speakers at ear level. In each trial, they heard a sound delivered from a real speaker aligned with one of the virtual ones and were instructed either to reach and touch the perceived sound source (Reaching group) or to read the label associated with the speaker (Naming group). Participants were free to move their heads during the task and received audio-visual feedback on their performance. Most importantly, they performed the task under binaural or monaural listening. Results show that both groups adapted rapidly to monaural listening, improving sound-localisation performance across trials and changing their head-movement behaviour. Reaching to the sounds induced faster and larger sound-localisation improvements than merely naming their positions. This benefit was linked to progressively wider head movements to explore auditory space, selectively in the Reaching group. In conclusion, reaching to sounds in an immersive visual VR context proved most effective for adapting to altered binaural listening. Head movements played an important role in adaptation, pointing to the importance of dynamic listening when implementing training protocols for improving spatial hearing.
Affiliation(s)
- Chiara Valzolgher: IMPACT, Centre de Recherche en Neuroscience Lyon (CRNL), France; Centre for Mind/Brain Sciences (CIMeC), University of Trento, Italy
- Romeo Salemme: IMPACT, Centre de Recherche en Neuroscience Lyon (CRNL), France; Neuro-immersion, Centre de Recherche en Neuroscience Lyon (CRNL), France
- Luigi Lombardi: Department of Psychology and Cognitive Sciences (DiPSCo), University of Trento, Italy
- Valerie Gaveau: IMPACT, Centre de Recherche en Neuroscience Lyon (CRNL), France
- Alessandro Farné: IMPACT, Centre de Recherche en Neuroscience Lyon (CRNL), France; Centre for Mind/Brain Sciences (CIMeC), University of Trento, Italy; Neuro-immersion, Centre de Recherche en Neuroscience Lyon (CRNL), France
- Francesco Pavani: IMPACT, Centre de Recherche en Neuroscience Lyon (CRNL), France; Centre for Mind/Brain Sciences (CIMeC), University of Trento, Italy; Department of Psychology and Cognitive Sciences (DiPSCo), University of Trento, Italy
17
Bermejo F, Di Paolo EA, Gilberto LG, Lunati V, Barrios MV. Learning to find spatially reversed sounds. Sci Rep 2020;10:4562. PMID: 32165690; PMCID: PMC7067813; DOI: 10.1038/s41598-020-61332-4.
Abstract
Adaptation to systematic visual distortions is well-documented but there is little evidence of similar adaptation to radical changes in audition. We use a pseudophone to transpose the sound streams arriving at the left and right ears, evaluating the perceptual effects it provokes and the possibility of learning to locate sounds in the reversed condition. Blindfolded participants remain seated at the center of a semicircular arrangement of 7 speakers and are asked to orient their head towards a sound source. We postulate that a key factor underlying adaptation is the self-generated activity that allows participants to learn new sensorimotor schemes. We investigate passive listening conditions (very short duration stimulus not permitting active exploration) and dynamic conditions (continuous stimulus allowing participants time to freely move their heads or remain still). We analyze head movement kinematics, localization errors, and qualitative reports. Results show movement-induced perceptual disruptions in the dynamic condition with static sound sources displaying apparent movement. This effect is reduced after a short training period and participants learn to find sounds in a left-right reversed field for all but the extreme lateral positions where motor patterns are more restricted. Strategies become less exploratory and more direct with training. Results support the hypothesis that self-generated movements underlie adaptation to radical sensorimotor distortions.
Affiliation(s)
- Fernando Bermejo: Centro de Investigación y Transferencia en Acústica, Universidad Tecnológica Nacional - Facultad Regional Córdoba, CONICET, CP 5016, Córdoba, Argentina; Facultad de Psicología, Universidad Nacional de Córdoba, CP 5016, Córdoba, Argentina
- Ezequiel A Di Paolo: Ikerbasque, Basque Foundation for Science, Bilbao, Spain; IAS-Research Center for Life, Mind, and Society, University of the Basque Country, San Sebastián, Spain; Centre for Computational Neuroscience and Robotics, University of Sussex, Brighton, UK
- L Guillermo Gilberto: Centro de Investigación y Transferencia en Acústica, Universidad Tecnológica Nacional - Facultad Regional Córdoba, CONICET, CP 5016, Córdoba, Argentina; Consejo Nacional de Investigaciones Científicas y Tecnológicas (CONICET), Ciudad Autónoma de Buenos Aires, Argentina
- Valentín Lunati: Centro de Investigación y Transferencia en Acústica, Universidad Tecnológica Nacional - Facultad Regional Córdoba, CONICET, CP 5016, Córdoba, Argentina; Consejo Nacional de Investigaciones Científicas y Tecnológicas (CONICET), Ciudad Autónoma de Buenos Aires, Argentina
- M Virginia Barrios: Centro de Investigación y Transferencia en Acústica, Universidad Tecnológica Nacional - Facultad Regional Córdoba, CONICET, CP 5016, Córdoba, Argentina; Facultad de Psicología, Universidad Nacional de Córdoba, CP 5016, Córdoba, Argentina
18
Rabini G, Lucin G, Pavani F. Certain, but incorrect: on the relation between subjective certainty and accuracy in sound localisation. Exp Brain Res 2020;238:727-739. PMID: 32080750; DOI: 10.1007/s00221-020-05748-4.
Abstract
When asked to identify the position of a sound, listeners can report its perceived location as well as their subjective certainty about this spatial judgement. Yet research to date has focused primarily on measures of perceived location (e.g., accuracy and precision of pointing responses), neglecting the phenomenological experience of subjective spatial certainty. The present study investigated: (1) changes in subjective certainty about sound position induced by listening with one ear plugged (simulated monaural listening), compared to typical binaural listening; and (2) the relation between subjective certainty about sound position and localisation accuracy. In two experiments (N = 20 each), participants localised single sounds delivered from one of 60 speakers hidden from view in front space. In each trial, they also provided a subjective rating of their certainty about the sound's position. No feedback on responses was provided. Overall, participants were mostly accurate and certain about sound position in binaural listening, whereas their accuracy and subjective certainty decreased in monaural listening. Interestingly, accuracy and certainty dissociated within single trials during monaural listening: in some trials participants were certain but incorrect, in others uncertain but correct. Furthermore, unlike accuracy, subjective certainty rapidly increased as a function of time during the monaural listening block. Finally, subjective certainty changed as a function of the perceived location of the sound source. These novel findings reveal that listeners quickly update their subjective confidence about sound position when they experience an altered listening condition, even in the absence of feedback. Furthermore, they document a dissociation between accuracy and subjective certainty when mapping auditory input to space.
Affiliation(s)
- Giuseppe Rabini: Centre for Mind/Brain Sciences (CIMeC), University of Trento, Via Angelo Bettini 31, 38068 Rovereto (TN), Italy
- Giulia Lucin: Department of Psychology and Cognitive Sciences (DiPSCo), University of Trento, Via Angelo Bettini 84, 38068 Rovereto (TN), Italy
- Francesco Pavani: Centre for Mind/Brain Sciences (CIMeC), University of Trento, Via Angelo Bettini 31, 38068 Rovereto (TN), Italy; Department of Psychology and Cognitive Sciences (DiPSCo), University of Trento, Via Angelo Bettini 84, 38068 Rovereto (TN), Italy; IMPACT, Centre de Recherche en Neuroscience Lyon (CRNL), Lyon, France
19
Differential Adaptation in Azimuth and Elevation to Acute Monaural Spatial Hearing after Training with Visual Feedback. eNeuro 2019;6:ENEURO.0219-19.2019. PMID: 31601632; PMCID: PMC6825955; DOI: 10.1523/eneuro.0219-19.2019.
Abstract
Sound localization in the horizontal plane (azimuth) relies mainly on binaural difference cues in sound level and arrival time. Blocking one ear perturbs these cues and may strongly affect the listener's azimuth performance. However, single-sided deaf listeners, as well as acutely single-sided plugged normal-hearing subjects, often use a combination of (ambiguous) monaural head-shadow cues, impoverished binaural level-difference cues, and (veridical, but limited) pinna- and head-related spectral cues to estimate source azimuth. To what extent listeners can adjust the relative contributions of these different cues is unknown, as the mechanisms underlying adaptive processes to acute monauralization are still unclear. By providing visual feedback during a brief training session with a high-pass (HP) filtered sound at a fixed sound level, we investigated the ability of listeners to adapt to their erroneous sound-localization percepts. We show that acutely plugged listeners rapidly adjusted the relative contributions of perceived sound level and the spectral and distorted binaural cues, improving their azimuth localization performance also for sound levels and locations different from those experienced during training. Interestingly, our results also show that this acute cue reweighting led to poorer localization performance in elevation, in line with the acoustic-spatial information provided during training. We conclude that the human auditory system rapidly readjusts the weighting of all relevant localization cues to respond adequately to the demands of the current acoustic environment, even if the adjustments hamper veridical localization performance in the real world.
20
Reaching measures and feedback effects in auditory peripersonal space. Sci Rep 2019;9:9476. PMID: 31263231; PMCID: PMC6603038; DOI: 10.1038/s41598-019-45755-2.
Abstract
We analyse the effects of exploration feedback on reaching measures of the perceived auditory peripersonal space (APS) boundary and on the auditory distance perception (ADP) of sound sources located within it. In our experiment, participants had to judge whether a sound source was reachable and to estimate its distance (40 to 150 cm, in 5-cm steps) by reaching towards a small loudspeaker. The stimulus consisted of a train of three bursts of Gaussian broadband noise. Participants were randomly assigned to two groups, Experimental (EG) and Control (CG), and completed three phases in the following order: Pretest, Test, Posttest. In all phases the listeners performed the same task, except in the EG Test phase, where participants reached out to touch the sound source. We applied models to characterise the participants' responses and provide evidence that feedback significantly reduces the response bias of both the perceived APS boundary and the ADP of sound sources located within reach. In the CG, repetition of the task did not affect APS and ADP accuracy, but it improved performance consistency: the reachable uncertainty zone in the APS was reduced, and there was a tendency towards decreased variability in ADP.
21
Interactions between egocentric and allocentric spatial coding of sounds revealed by a multisensory learning paradigm. Sci Rep 2019;9:7892. PMID: 31133688; PMCID: PMC6536515; DOI: 10.1038/s41598-019-44267-3.
Abstract
Although sound position is initially encoded in head-centred (egocentric) coordinates, our brain can also represent sounds relative to one another (allocentric coordinates). Whether reference frames for spatial hearing are independent or interact has remained largely unexplored. Here we developed a new allocentric spatial-hearing training and tested whether it can improve egocentric sound-localisation performance in normal-hearing adults listening with one ear plugged. Two groups of participants (N = 15 each) performed an egocentric sound-localisation task (pointing to a syllable), in monaural listening, before and after 4 days of multisensory training on triplets of white-noise bursts paired with occasional visual feedback. Critically, one group performed an allocentric task (an auditory bisection task), whereas the other processed the same stimuli to perform an egocentric task (pointing to a designated sound of the triplet). Unlike most previous work, we also tested a no-training group (N = 15). Egocentric sound-localisation abilities in the horizontal plane improved for all groups in the space ipsilateral to the ear plug. This unexpected finding highlights the importance of including a no-training group when studying sound-localisation re-learning. Yet performance changes were qualitatively different in trained compared to untrained participants, providing initial evidence that allocentric and multisensory procedures may prove useful when aiming to promote sound-localisation re-learning.
22
Zonooz B, Arani E, Körding KP, Aalbers PATR, Celikel T, Van Opstal AJ. Spectral Weighting Underlies Perceived Sound Elevation. Sci Rep 2019;9:1642. PMID: 30733476; PMCID: PMC6367479; DOI: 10.1038/s41598-018-37537-z.
Abstract
The brain estimates the two-dimensional direction of sounds from the pressure-induced displacements of the eardrums. Accurate localization along the horizontal plane (azimuth angle) is enabled by binaural difference cues in timing and intensity. Localization along the vertical plane (elevation angle), including frontal and rear directions, relies on spectral cues made possible by the elevation-dependent filtering in the idiosyncratic pinna cavities. However, the problem of extracting elevation from the sensory input is ill-posed, since the spectrum results from a convolution between the source spectrum and the particular head-related transfer function (HRTF) associated with the source elevation, both of which are unknown to the system. It is not clear how the auditory system deals with this problem, or which implicit assumptions it makes about source spectra. By varying the spectral contrast of broadband sounds around the 6–9 kHz band, which falls within the human pinna's most prominent elevation-related spectral notch, we here suggest that the auditory system performs a weighted spectral analysis across different frequency bands to estimate source elevation. We explain our results with a model in which the auditory system weighs the different spectral bands and compares the convolved, weighted sensory spectrum with stored information about its own HRTFs and with spatial prior assumptions.
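The template-comparison idea in this abstract can be illustrated with a toy sketch. This is hypothetical and not the authors' model: the listener compares a band-weighted incoming spectrum against stored HRTF spectra, one per candidate elevation, and reports the best-matching elevation. The templates, band weights, and elevation grid below are all invented.

```python
import numpy as np

# Toy inventory: one stored HRTF magnitude spectrum (in dB) per elevation.
rng = np.random.default_rng(0)
elevations = np.arange(-40, 50, 10)   # candidate elevations (degrees)
n_bands = 16                          # number of spectral bands
templates = rng.normal(size=(len(elevations), n_bands))

def estimate_elevation(sensory_spectrum, band_weights):
    """Pick the elevation whose stored template best matches the incoming
    spectrum under a per-band weighting (weighted squared spectral error)."""
    errors = [np.sum(band_weights * (sensory_spectrum - t) ** 2)
              for t in templates]
    return int(elevations[int(np.argmin(errors))])

# A flat-spectrum source filtered by the 30-degree template maps back to
# 30 degrees when every band receives equal weight.
spectrum = templates[list(elevations).index(30)]
print(estimate_elevation(spectrum, np.ones(n_bands)))  # 30
```

Down-weighting uninformative bands in `band_weights` is the toy analogue of the weighted spectral analysis proposed in the abstract; the full model additionally incorporates the unknown source spectrum and spatial priors.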
Affiliation(s)
- Bahram Zonooz: Biophysics Department, Donders Institute for Brain, Cognition, and Behaviour, Radboud University, 6525 AJ Nijmegen, The Netherlands
- Elahe Arani: Biophysics Department, Donders Institute for Brain, Cognition, and Behaviour, Radboud University, 6525 AJ Nijmegen, The Netherlands
- Konrad P Körding: Department of Bioengineering, University of Pennsylvania, Philadelphia, PA, USA; Department of Neuroscience, University of Pennsylvania, Philadelphia, PA, USA
- P A T Remco Aalbers: Biophysics Department, Donders Institute for Brain, Cognition, and Behaviour, Radboud University, 6525 AJ Nijmegen, The Netherlands
- Tansu Celikel: Neurophysiology Department, Donders Institute for Brain, Cognition, and Behaviour, Radboud University, 6525 AJ Nijmegen, The Netherlands
- A John Van Opstal: Biophysics Department, Donders Institute for Brain, Cognition, and Behaviour, Radboud University, 6525 AJ Nijmegen, The Netherlands
23
Zonooz B, Arani E, Van Opstal AJ. Learning to localise weakly-informative sound spectra with and without feedback. Sci Rep 2018;8:17933. PMID: 30560940; PMCID: PMC6298951; DOI: 10.1038/s41598-018-36422-z.
Abstract
How the human auditory system learns to map complex pinna-induced spectral-shape cues onto veridical estimates of sound-source elevation in the median plane is still unclear. Earlier studies demonstrated considerable sound-localisation plasticity after the application of pinna moulds and after altered vision. Several factors may contribute to auditory spatial learning, such as visual or motor feedback or updated priors. Here we induced perceptual learning for sounds with degraded spectral content, having weak but consistent elevation-dependent cues, as demonstrated by low-gain stimulus-response relations. During training, we provided visual feedback for only six targets in the midsagittal plane, for which listeners gradually improved their response accuracy. Interestingly, listeners' performance also improved without visual feedback, albeit less strongly. Post-training results showed generalised improvements in response behaviour, also for non-trained locations and acoustic spectra presented throughout the two-dimensional frontal hemifield. We argue that the auditory system learns to reweigh contributions from low-informative spectral bands to update its prior elevation estimates, and we explain our results with a neuro-computational model.
Affiliation(s)
- Bahram Zonooz: Biophysics Department, Donders Center for Neuroscience, Radboud University, Heyendaalseweg 135, 6525 AJ Nijmegen, The Netherlands
- Elahe Arani: Biophysics Department, Donders Center for Neuroscience, Radboud University, Heyendaalseweg 135, 6525 AJ Nijmegen, The Netherlands
- A John Van Opstal: Biophysics Department, Donders Center for Neuroscience, Radboud University, Heyendaalseweg 135, 6525 AJ Nijmegen, The Netherlands
24
Denk F, Ewert SD, Kollmeier B. Spectral directional cues captured by hearing device microphones in individual human ears. J Acoust Soc Am 2018;144:2072. PMID: 30404454; DOI: 10.1121/1.5056173.
Abstract
Spatial-hearing abilities with hearing devices ultimately depend on how well acoustic directional cues are captured by the microphone(s) of the device. A comprehensive objective evaluation of monaural spectral directional cues captured at 9 microphone locations integrated in 5 hearing-device styles is presented, utilizing a recent database of head-related transfer functions (HRTFs) that includes data from 16 human and 3 artificial ear pairs. Differences between HRTFs to the eardrum and to hearing-device microphones were assessed by descriptive analyses and quantitative metrics, and compared to differences between individual ears. Directional information exploited for vertical sound localization was evaluated by means of computational models. Directional information at microphone locations inside the pinna is significantly biased and qualitatively poorer compared to locations in the ear canal; behind-the-ear microphones capture almost no directional cues. These errors are expected to impair vertical sound localization, even if the new cues were optimally mapped to locations. Differences between HRTFs to the eardrum and to hearing-device microphones are qualitatively different from between-subject differences and can be described as a partial destruction rather than an alteration of the relevant cues, although spectral difference metrics produce similar results. Dummy heads do not fully reflect the results obtained with individual subjects.
Affiliation(s)
- Florian Denk
- Medizinische Physik and Cluster of Excellence "Hearing4all," University of Oldenburg, Küpkersweg 74, 26129 Oldenburg, Germany
- Stephan D Ewert
- Medizinische Physik and Cluster of Excellence "Hearing4all," University of Oldenburg, Küpkersweg 74, 26129 Oldenburg, Germany
- Birger Kollmeier
- Medizinische Physik and Cluster of Excellence "Hearing4all," University of Oldenburg, Küpkersweg 74, 26129 Oldenburg, Germany
25
Bosen AK, Fleming JT, Allen PD, O’Neill WE, Paige GD. Multiple time scales of the ventriloquism aftereffect. PLoS One 2018; 13:e0200930. [PMID: 30067790 PMCID: PMC6070234 DOI: 10.1371/journal.pone.0200930] [Citation(s) in RCA: 22] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/20/2017] [Accepted: 07/05/2018] [Indexed: 11/18/2022] Open
Abstract
The ventriloquism aftereffect (VAE) refers to a shift in auditory spatial perception following exposure to a spatial disparity between auditory and visual stimuli. The VAE has been previously measured on two distinct time scales. Hundreds or thousands of exposures to an audio-visual spatial disparity produce an enduring VAE that persists after exposure ceases. Exposure to a single audio-visual spatial disparity produces an immediate VAE that decays over seconds. To determine if these phenomena are two extremes of a continuum or represent distinct processes, we conducted an experiment with normal hearing listeners that measured VAE in response to a repeated, constant audio-visual disparity sequence, both immediately after exposure to each audio-visual disparity and after the end of the sequence. In each experimental session, subjects were exposed to sequences of auditory and visual targets that were constantly offset by +8° or −8° in azimuth from one another, then localized auditory targets presented in isolation following each sequence. Eye position was controlled throughout the experiment to avoid the effects of gaze on auditory localization. In contrast to other studies that did not control eye position, we found both a large shift in auditory perception that decayed rapidly after each AV disparity exposure and a gradual shift in auditory perception that grew over time and persisted after exposure to the AV disparity ceased. We modeled the temporal and spatial properties of the measured auditory shifts using grey box nonlinear system identification, and found that two models could explain the data equally well. In the power model, the temporal decay of the ventriloquism aftereffect was modeled with a power law relationship. This causes an initial rapid drop in auditory shift, followed by a long tail which accumulates with repeated exposure to audio-visual disparity.
In the double exponential model, two separate processes were required to explain the data, one which accumulated and decayed exponentially and the other which slowly integrated over time. Both models fit the data best when the spatial spread of the ventriloquism aftereffect was limited to a window around the location of the audio-visual disparity. We directly compare the predictions made by each model, and suggest additional measurements that could help distinguish which model best describes the mechanisms underlying the VAE.
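The two candidate models described above can be caricatured in a few lines of code. This is a hedged illustration only: the function names, gains, exponents, and time constants below are made-up placeholders, not the fitted grey-box parameters from the paper.

```python
import numpy as np

def vae_power(exposure_times, t, gain=1.0, a=0.5):
    """Power model: each past disparity exposure contributes a shift
    that drops quickly at first, then decays along a long power-law
    tail, so repeated exposures accumulate a persistent offset."""
    return gain * sum((t - te + 1.0) ** -a
                      for te in exposure_times if te <= t)

def vae_double_exp(exposure_times, t, g_fast=1.0, tau_fast=5.0, g_slow=0.02):
    """Double exponential model: a fast process that accumulates and
    decays exponentially, plus a slow process that integrates
    exposures over time (modeled here as a lossless integrator)."""
    past = [te for te in exposure_times if te <= t]
    fast = g_fast * sum(np.exp(-(t - te) / tau_fast) for te in past)
    slow = g_slow * len(past)  # slow integration; assumed form
    return fast + slow
```

In both sketches the shift immediately after an exposure is large and fades within seconds, while an accumulated component survives after the disparity sequence ends, matching the two time scales the abstract describes.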
Affiliation(s)
- Adam K. Bosen
- Department of Biomedical Engineering, University of Rochester, Rochester, NY, United States of America
- Justin T. Fleming
- Department of Neurobiology and Anatomy, University of Rochester, Rochester, NY, United States of America
- Paul D. Allen
- Department of Neurobiology and Anatomy, University of Rochester, Rochester, NY, United States of America
- William E. O’Neill
- Department of Neurobiology and Anatomy, University of Rochester, Rochester, NY, United States of America
- Gary D. Paige
- Department of Neurobiology and Anatomy, University of Rochester, Rochester, NY, United States of America
26
The Encoding of Sound Source Elevation in the Human Auditory Cortex. J Neurosci 2018; 38:3252-3264. [PMID: 29507148 DOI: 10.1523/jneurosci.2530-17.2018] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/04/2017] [Revised: 02/11/2018] [Accepted: 02/14/2018] [Indexed: 11/21/2022] Open
Abstract
Spatial hearing is a crucial capacity of the auditory system. While the encoding of horizontal sound direction has been extensively studied, very little is known about the representation of vertical sound direction in the auditory cortex. Using high-resolution fMRI, we measured voxelwise sound elevation tuning curves in human auditory cortex and show that sound elevation is represented by broad tuning functions preferring lower elevations as well as secondary narrow tuning functions preferring individual elevation directions. We changed the ear shape of participants (male and female) with silicone molds for several days. This manipulation reduced or abolished the ability to discriminate sound elevation and flattened cortical tuning curves. Tuning curves recovered their original shape as participants adapted to the modified ears and regained elevation perception over time. These findings suggest that the elevation tuning observed in low-level auditory cortex did not arise from the physical features of the stimuli but is contingent on experience with spectral cues and covaries with the change in perception. One explanation for this observation may be that the tuning in low-level auditory cortex underlies the subjective perception of sound elevation.

SIGNIFICANCE STATEMENT: This study addresses two fundamental questions about the brain representation of sensory stimuli: how the vertical spatial axis of auditory space is represented in the auditory cortex and whether low-level sensory cortex represents physical stimulus features or subjective perceptual attributes. Using high-resolution fMRI, we show that vertical sound direction is represented by broad tuning functions preferring lower elevations as well as secondary narrow tuning functions preferring individual elevation directions.
In addition, we demonstrate that the shape of these tuning functions is contingent on experience with spectral cues and covaries with the change in perception, which may indicate that the tuning functions in low-level auditory cortex underlie the perceived elevation of a sound source.
27
Berger CC, Gonzalez-Franco M, Tajadura-Jiménez A, Florencio D, Zhang Z. Generic HRTFs May be Good Enough in Virtual Reality. Improving Source Localization through Cross-Modal Plasticity. Front Neurosci 2018; 12:21. [PMID: 29456486 PMCID: PMC5801410 DOI: 10.3389/fnins.2018.00021] [Citation(s) in RCA: 32] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/22/2017] [Accepted: 01/11/2018] [Indexed: 11/13/2022] Open
Abstract
Auditory spatial localization in humans is performed using a combination of interaural time differences, interaural level differences, as well as spectral cues provided by the geometry of the ear. To render spatialized sounds within a virtual reality (VR) headset, either individualized or generic Head Related Transfer Functions (HRTFs) are usually employed. The former require arduous calibrations, but enable accurate auditory source localization, which may lead to a heightened sense of presence within VR. The latter obviate the need for individualized calibrations, but result in less accurate auditory source localization. Previous research on auditory source localization in the real world suggests that our representation of acoustic space is highly plastic. In light of these findings, we investigated whether auditory source localization could be improved for users of generic HRTFs via cross-modal learning. The results show that pairing a dynamic auditory stimulus, with a spatio-temporally aligned visual counterpart, enabled users of generic HRTFs to improve subsequent auditory source localization. Exposure to the auditory stimulus alone or to asynchronous audiovisual stimuli did not improve auditory source localization. These findings have important implications for human perception as well as the development of VR systems as they indicate that generic HRTFs may be enough to enable good auditory source localization in VR.
Affiliation(s)
- Christopher C. Berger
- Microsoft Research, Redmond, WA, United States
- Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, CA, United States
- Ana Tajadura-Jiménez
- UCL Interaction Centre, University College London, London, United Kingdom
- Interactive Systems DEI-Lab, Universidad Carlos III de Madrid, Madrid, Spain
- Zhengyou Zhang
- Microsoft Research, Redmond, WA, United States
- Department of Electrical Engineering, University of Washington, Seattle, WA, United States
28
Watson CJG, Carlile S, Kelly H, Balachandar K. The Generalization of Auditory Accommodation to Altered Spectral Cues. Sci Rep 2017; 7:11588. [PMID: 28912440 PMCID: PMC5599623 DOI: 10.1038/s41598-017-11981-9] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/09/2017] [Accepted: 08/30/2017] [Indexed: 11/23/2022] Open
Abstract
The capacity of healthy adult listeners to accommodate to altered spectral cues to the source locations of broadband sounds has now been well documented. In recent years we have demonstrated that the degree and speed of accommodation are improved by using an integrated sensory-motor training protocol under anechoic conditions. Here we demonstrate that the learning which underpins the localization performance gains during the accommodation process using anechoic broadband training stimuli generalizes to environmentally relevant scenarios. As previously, alterations to monaural spectral cues were produced by fitting participants with custom-made outer ear molds, worn during waking hours. Following acute degradations in localization performance, participants underwent daily sensory-motor training with broadband noise stimuli over ten days to improve localization accuracy. Participants not only demonstrated post-training improvements in localization accuracy for broadband noises presented in the same set of positions used during training, but also for stimuli presented in untrained locations, for monosyllabic speech sounds, and for stimuli presented in reverberant conditions. These findings shed further light on the neuroplastic capacity of healthy listeners and represent the next step in the development of training programs for users of assistive listening devices that degrade localization acuity by distorting or bypassing monaural cues.
Affiliation(s)
- Christopher J G Watson
- School of Medical Sciences, University of Sydney, Sydney, New South Wales, 2006, Australia
- Simon Carlile
- School of Medical Sciences, University of Sydney, Sydney, New South Wales, 2006, Australia
- Heather Kelly
- School of Medical Sciences, University of Sydney, Sydney, New South Wales, 2006, Australia
- Kapilesh Balachandar
- School of Medical Sciences, University of Sydney, Sydney, New South Wales, 2006, Australia
29
Jóhannesson ÓI, Balan O, Unnthorsson R, Moldoveanu A, Kristjánsson Á. The Sound of Vision Project: On the Feasibility of an Audio-Haptic Representation of the Environment, for the Visually Impaired. Brain Sci 2016; 6:brainsci6030020. [PMID: 27355966 PMCID: PMC5039449 DOI: 10.3390/brainsci6030020] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/28/2016] [Revised: 06/18/2016] [Accepted: 06/23/2016] [Indexed: 11/16/2022] Open
Abstract
The Sound of Vision project involves developing a sensory substitution device aimed at creating and conveying a rich auditory representation of the surrounding environment to the visually impaired. The feasibility of such an approach, however, is strongly constrained by neural flexibility, the possibilities of sensory substitution, and adaptation to changed sensory input. We review evidence for such flexibility from various perspectives. We discuss neuroplasticity of the adult brain, with an emphasis on functional changes in the visually impaired compared to sighted people, and the effects of adaptation on brain activity, in particular short-term and long-term effects of repeated exposure to particular stimuli. We then discuss evidence for sensory substitution of the kind that Sound of Vision involves, and finally evidence for adaptation to changes in the auditory environment. We conclude that sensory substitution enterprises such as Sound of Vision are quite feasible in light of the available evidence, which is encouraging for such projects.
Affiliation(s)
- Ómar I Jóhannesson
- Laboratory of Visual Perception and Visuo-motor control, Faculty of Psychology, School of Health Sciences, University of Iceland, Reykjavik 101, Iceland
- Oana Balan
- Faculty of Automatic Control and Computers, Computer Science and Engineering Department, University Politehnica of Bucharest, Bucharest 060042, Romania
- Runar Unnthorsson
- Faculty of Industrial Engineering, Mechanical Engineering and Computer Science, School of Engineering and Natural Sciences, University of Iceland, Reykjavik 101, Iceland
- Alin Moldoveanu
- Faculty of Automatic Control and Computers, Computer Science and Engineering Department, University Politehnica of Bucharest, Bucharest 060042, Romania
- Árni Kristjánsson
- Laboratory of Visual Perception and Visuo-motor control, Faculty of Psychology, School of Health Sciences, University of Iceland, Reykjavik 101, Iceland
30
Carlile S, Leung J. The Perception of Auditory Motion. Trends Hear 2016; 20:2331216516644254. [PMID: 27094029 PMCID: PMC4871213 DOI: 10.1177/2331216516644254] [Citation(s) in RCA: 38] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/12/2015] [Revised: 03/22/2016] [Accepted: 03/22/2016] [Indexed: 11/16/2022] Open
Abstract
The growing availability of efficient and relatively inexpensive virtual auditory display technology has provided new research platforms to explore the perception of auditory motion. At the same time, deployment of these technologies in command and control as well as in entertainment roles is generating an increasing need to better understand the complex processes underlying auditory motion perception. This is a particularly challenging processing feat because it involves the rapid deconvolution of the relative change in the locations of sound sources produced by rotations and translations of the head in space (self-motion) to enable the perception of actual source motion. The fact that we perceive our auditory world to be stable despite almost continual movement of the head demonstrates the efficiency and effectiveness of this process. This review examines the acoustical basis of auditory motion perception and a wide range of psychophysical, electrophysiological, and cortical imaging studies that have probed the limits and possible mechanisms underlying this perception.
Affiliation(s)
- Simon Carlile
- School of Medical Sciences, University of Sydney, NSW, Australia
- Starkey Hearing Research Center, Berkeley, CA, USA
- Johahn Leung
- School of Medical Sciences, University of Sydney, NSW, Australia
31
Keating P, Rosenior-Patten O, Dahmen JC, Bell O, King AJ. Behavioral training promotes multiple adaptive processes following acute hearing loss. eLife 2016; 5:e12264. [PMID: 27008181 PMCID: PMC4841776 DOI: 10.7554/elife.12264] [Citation(s) in RCA: 28] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/12/2015] [Accepted: 03/23/2016] [Indexed: 11/13/2022] Open
Abstract
The brain possesses a remarkable capacity to compensate for changes in inputs resulting from a range of sensory impairments. Developmental studies of sound localization have shown that adaptation to asymmetric hearing loss can be achieved either by reinterpreting altered spatial cues or by relying more on those cues that remain intact. Adaptation to monaural deprivation in adulthood is also possible, but appears to lack such flexibility. Here we show, however, that appropriate behavioral training enables monaurally-deprived adult humans to exploit both of these adaptive processes. Moreover, cortical recordings in ferrets reared with asymmetric hearing loss suggest that these forms of plasticity have distinct neural substrates. An ability to adapt to asymmetric hearing loss using multiple adaptive processes is therefore shared by different species and may persist throughout the lifespan. This highlights the fundamental flexibility of neural systems, and may also point toward novel therapeutic strategies for treating sensory disorders. DOI: http://dx.doi.org/10.7554/eLife.12264.001

The brain normally compares the timing and intensity of the sounds that reach each ear to work out a sound’s origin. Hearing loss in one ear disrupts these between-ear comparisons, which causes listeners to make errors in this process. With time, however, the brain adapts to this hearing loss and once again learns to localize sounds accurately. Previous research has shown that young ferrets can adapt to hearing loss in one ear in two distinct ways. The ferrets either learn to remap the altered between-ear comparisons, caused by losing hearing in one ear, onto their new locations. Alternatively, the ferrets learn to locate sounds using only their good ear. Each strategy is suited to localizing different types of sound, but it was not known how this adaptive flexibility unfolds over time, whether it persists throughout the lifespan, or whether it is shared by other species.
Now, Keating et al. show that, with some coaching, adult humans also adapt to temporary loss of hearing in one ear using the same two strategies. In the experiments, adult humans were trained to localize different kinds of sounds while wearing an earplug in one ear. These sounds were presented from 12 loudspeakers arranged in a horizontal circle around the person being tested. The experiments showed that short periods of behavioral training enable adult humans to adapt to a hearing loss in one ear and recover their ability to localize sounds. Just like the ferrets, adult humans learned to correctly associate altered between-ear comparisons with their new locations and to rely more on the cues from the unplugged ear to locate sound. Which of these adaptive strategies the participants used depended on the frequencies present in the sounds. The cells in the ear and brain that detect and make sense of sound typically respond best to a limited range of frequencies, and so this suggests that each strategy relies on a distinct set of cells. Keating et al. confirmed in ferrets that different brain cells are indeed used to bring about adaptation to hearing loss in one ear using each strategy. These insights may aid the development of new therapies to treat hearing loss. DOI: http://dx.doi.org/10.7554/eLife.12264.002
Affiliation(s)
- Peter Keating
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
- Onayomi Rosenior-Patten
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
- Johannes C Dahmen
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
- Olivia Bell
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
- Andrew J King
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
32
Trapeau R, Schönwiesner M. Adaptation to shifted interaural time differences changes encoding of sound location in human auditory cortex. Neuroimage 2015; 118:26-38. [PMID: 26054873 DOI: 10.1016/j.neuroimage.2015.06.006] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/18/2015] [Revised: 05/06/2015] [Accepted: 06/02/2015] [Indexed: 11/29/2022] Open
Abstract
The auditory system infers the location of sound sources from the processing of different acoustic cues. These cues change during development and when assistive hearing devices are worn. Previous studies have found behavioral recalibration to modified localization cues in human adults, but very little is known about the neural correlates and mechanisms of this plasticity. We equipped participants with digital devices, worn in the ear canal, that allowed us to delay sound input to one ear and thus modify interaural time differences, a major cue for horizontal sound localization. Participants wore the digital earplugs continuously for nine days while engaged in day-to-day activities. Daily psychoacoustical testing showed rapid recalibration to the manipulation and confirmed that adults can adapt to shifted interaural time differences in their daily multisensory environment. High-resolution functional MRI scans performed before and after recalibration showed that it was accompanied by changes in hemispheric lateralization of auditory cortex activity. These changes corresponded to a shift in spatial coding of sound direction comparable to the observed behavioral recalibration. Fitting the imaging results with a model of auditory spatial processing also revealed small shifts in voxel-wise spatial tuning within each hemisphere.
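The interaural time difference that such earplugs manipulate can be approximated with Woodworth's classic spherical-head formula. The sketch below is illustrative only: the head radius and the device delay are assumed values, not parameters from this study.

```python
import math

HEAD_RADIUS_M = 0.0875   # assumed average head radius
SPEED_OF_SOUND = 343.0   # m/s, air at roughly 20 degrees C

def itd_woodworth(azimuth_deg):
    """Woodworth spherical-head approximation of the interaural time
    difference (seconds) for a far-field source at a given azimuth."""
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS_M / SPEED_OF_SOUND) * (theta + math.sin(theta))

def shifted_itd(azimuth_deg, device_delay_s=500e-6):
    """ITD as heard with one ear's input delayed by the device; the
    extra delay biases apparent azimuth away from the delayed ear."""
    return itd_woodworth(azimuth_deg) + device_delay_s
```

For a source at 90° azimuth this yields roughly 0.66 ms, the familiar upper bound on human ITDs, which is why even a sub-millisecond device delay noticeably displaces perceived location.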
Affiliation(s)
- Régis Trapeau
- International Laboratory for Brain, Music and Sound Research (BRAMS), Department of Psychology, Université de Montréal, Montreal, QC, Canada; Centre for Research on Brain, Language and Music (CRBLM), McGill University, Montreal, QC, Canada
- Marc Schönwiesner
- International Laboratory for Brain, Music and Sound Research (BRAMS), Department of Psychology, Université de Montréal, Montreal, QC, Canada; Centre for Research on Brain, Language and Music (CRBLM), McGill University, Montreal, QC, Canada; Department of Neurology and Neurosurgery, Faculty of Medicine, McGill University, Montreal, QC, Canada