1. Valzolgher C. Motor Strategies: The Role of Active Behavior in Spatial Hearing Research. Psychol Rep 2024:332941241260246. PMID: 38857521. DOI: 10.1177/00332941241260246.
Abstract
When completing a task, the ability to implement behavioral strategies that solve it effectively and in a less cognitively demanding way is extremely adaptive for humans. This behavior makes it possible to accumulate evidence and test one's own predictions about the external world. In this work, starting from examples in the field of spatial hearing research, I analyze the importance of considering motor strategies in perceptual tasks, and I stress the urgent need to create ecological experimental settings, which are essential both for allowing such behaviors to be implemented and for measuring them. In particular, I consider head movements as an example of strategic behavior implemented to solve acoustic space-perception tasks.
Affiliation(s)
- Chiara Valzolgher
- Center for Mind/Brain Sciences, University of Trento, Rovereto, Italy
2. Valzolgher C, Capra S, Sum K, Finos L, Pavani F, Picinali L. Spatial hearing training in virtual reality with simulated asymmetric hearing loss. Sci Rep 2024; 14:2469. PMID: 38291126. PMCID: PMC10827792. DOI: 10.1038/s41598-024-51892-0.
Abstract
Sound localization is essential to perceive the surrounding world and to interact with objects. This ability can be learned across time, and multisensory and motor cues play a crucial role in the learning process. A recent study demonstrated that, when training localization skills, reaching to the sound source to determine its position reduced localization errors faster and to a greater extent than just naming sources' positions, even though in both tasks participants received the same feedback about the correct position of the sound source after a wrong response. However, it remains to be established which features made reaching to sounds more effective than naming. In the present study, we introduced a further condition in which the hand is the effector providing the response, but without reaching toward the space occupied by the target source: the pointing condition. We tested three groups of participants (naming, pointing, and reaching groups), each performing a sound localization task in normal and altered listening situations (i.e., mild-moderate unilateral hearing loss) simulated through auditory virtual reality technology. The experiment comprised four blocks: during the first and last blocks, participants were tested in the normal listening condition, and during the second and third in the altered listening condition. We measured their performance, their subjective judgments (e.g., effort), and their head-related behavior (through kinematic tracking). First, participants' performance decreased when exposed to asymmetrical mild-moderate hearing impairment, most markedly on the ipsilateral side and for the pointing group. Second, all groups reduced their localization errors across the altered listening blocks, but this reduction was larger for the reaching and pointing groups than for the naming group. Crucially, the reaching group showed a greater error reduction on the side where the listening alteration was applied. Furthermore, across blocks, the reaching and pointing groups increased their head motor behavior during the task (i.e., they increasingly approached the space of the sound with head movements) more than the naming group. Third, while performance in the unaltered blocks (first and last) was comparable across groups, only the reaching group continued to exhibit head behavior similar to that developed during the altered blocks (second and third), corroborating the previously observed relationship between reaching to sounds and head movements. In conclusion, this study further demonstrated the effectiveness of reaching to sounds, compared to pointing and naming, in the learning process. This effect could be related both to the process of implementing goal-directed motor actions and to the role of reaching actions in fostering head-related motor strategies.
Affiliation(s)
- Chiara Valzolgher
- Center for Mind/Brain Sciences (CIMeC), University of Trento, Corso Bettini 31, 38068, Rovereto, TN, Italy
- Sara Capra
- Center for Mind/Brain Sciences (CIMeC), University of Trento, Corso Bettini 31, 38068, Rovereto, TN, Italy
- Kevin Sum
- Audio Experience Design (www.axdesign.co.uk), Imperial College London, London, UK
- Livio Finos
- Department of Statistical Sciences, University of Padova, Padova, Italy
- Francesco Pavani
- Center for Mind/Brain Sciences (CIMeC), University of Trento, Corso Bettini 31, 38068, Rovereto, TN, Italy
- Department of Psychology and Cognitive Sciences (DiPSCo), University of Trento, Rovereto, Italy
- Centro Interuniversitario di Ricerca "Cognizione, Linguaggio e Sordità" (CIRCLeS), Rovereto, Italy
- Lorenzo Picinali
- Audio Experience Design (www.axdesign.co.uk), Imperial College London, London, UK
3. Willmore BDB, King AJ. Adaptation in auditory processing. Physiol Rev 2023; 103:1025-1058. PMID: 36049112. PMCID: PMC9829473. DOI: 10.1152/physrev.00011.2022.
Abstract
Adaptation is an essential feature of auditory neurons, which reduces their responses to unchanging and recurring sounds and allows their response properties to be matched to the constantly changing statistics of sounds that reach the ears. As a consequence, processing in the auditory system highlights novel or unpredictable sounds and produces an efficient representation of the vast range of sounds that animals can perceive by continually adjusting the sensitivity and, to a lesser extent, the tuning properties of neurons to the most commonly encountered stimulus values. Together with attentional modulation, adaptation to sound statistics also helps to generate neural representations of sound that are tolerant to background noise and therefore plays a vital role in auditory scene analysis. In this review, we consider the diverse forms of adaptation that are found in the auditory system in terms of the processing levels at which they arise, the underlying neural mechanisms, and their impact on neural coding and perception. We also ask what the dynamics of adaptation, which can occur over multiple timescales, reveal about the statistical properties of the environment. Finally, we examine how adaptation to sound statistics is influenced by learning and experience and changes as a result of aging and hearing loss.
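The core idea the review describes, continually adjusting sensitivity to the recent statistics of the input so that sustained sounds are de-emphasized and novel ones stand out, can be illustrated with a toy divisive gain-control model. This is a sketch for intuition only: the function name, time constant, and normalization rule are illustrative assumptions, not a mechanism taken from the review.

```python
import numpy as np

def adaptive_response(stimulus, tau=20.0):
    """Toy divisive gain-adaptation model (illustrative sketch only).

    The response gain is divided by (1 + leaky average of recent
    intensity), so a sustained sound drives progressively weaker
    responses while a sudden change elicits a transient, larger one.
    """
    alpha = 1.0 / tau   # leak rate of the intensity estimate
    avg = 0.0           # running mean of recent stimulus intensity
    out = np.empty(len(stimulus))
    for i, s in enumerate(stimulus):
        out[i] = s / (1.0 + avg)   # divisively normalized response
        avg += alpha * (s - avg)   # update the leaky average
    return out

# A sustained tone followed by a louder, novel one: the response to the
# first tone decays, then rebounds transiently at the change.
resp = adaptive_response(np.concatenate([np.ones(100), 2 * np.ones(100)]))
```

Under this rule the response to the constant tone roughly halves as the running average converges, and the step to the louder tone produces a transient rebound before the gain re-adapts, mirroring the novelty-highlighting behavior described above.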
Affiliation(s)
- Ben D. B. Willmore
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
- Andrew J. King
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
4. Valzolgher C, Alzaher M, Gaveau V, Coudert A, Marx M, Truy E, Barone P, Farnè A, Pavani F. Capturing Visual Attention With Perturbed Auditory Spatial Cues. Trends Hear 2023; 27:23312165231182289. PMID: 37611181. PMCID: PMC10467228. DOI: 10.1177/23312165231182289.
Abstract
Lateralized sounds can orient visual attention, with benefits for audio-visual processing. Here, we asked to what extent perturbed auditory spatial cues, resulting from cochlear implants (CI) or unilateral hearing loss (uHL), allow this automatic mechanism of information selection from the audio-visual environment. We used a classic paradigm from experimental psychology (capture of visual attention with sounds) to probe the integrity of audio-visual attentional orienting in 60 adults with hearing loss: bilateral CI users (N = 20), unilateral CI users (N = 20), and individuals with uHL (N = 20). For comparison, we also included a group of normal-hearing (NH, N = 20) participants, tested in binaural and monaural listening conditions (i.e., with one ear plugged). All participants also completed a sound localization task to assess spatial hearing skills. Comparable audio-visual orienting was observed in bilateral CI, uHL, and binaural NH participants. By contrast, audio-visual orienting was, on average, absent in unilateral CI users and reduced in NH participants listening with one ear plugged. Spatial hearing skills were better in bilateral CI, uHL, and binaural NH participants than in unilateral CI users and monaurally plugged NH listeners. In unilateral CI users, spatial hearing skills correlated with audio-visual-orienting abilities. These novel results show that audio-visual attentional orienting can be preserved in bilateral CI users and uHL patients to a greater extent than in unilateral CI users. This highlights the importance of assessing the impact of hearing loss beyond auditory difficulties alone, to capture the extent to which it may enable or impede typical interactions with the multisensory environment.
Affiliation(s)
- Chiara Valzolgher
- Center for Mind/Brain Sciences - CIMeC, University of Trento, Rovereto, Italy
- Integrative, Multisensory, Perception, Action and Cognition Team, Lyon Neuroscience Research Center, Lyon, France
- Mariam Alzaher
- Centre de Recherche Cerveau & Cognition, Toulouse, France
- Hospices Civils, Toulouse, France
- Valérie Gaveau
- Integrative, Multisensory, Perception, Action and Cognition Team, Lyon Neuroscience Research Center, Lyon, France
- Mathieu Marx
- Centre de Recherche Cerveau & Cognition, Toulouse, France
- Hospices Civils, Toulouse, France
- Eric Truy
- Integrative, Multisensory, Perception, Action and Cognition Team, Lyon Neuroscience Research Center, Lyon, France
- Hospices Civils de Lyon, Lyon, France
- Pascal Barone
- Centre de Recherche Cerveau & Cognition, Toulouse, France
- Alessandro Farnè
- Center for Mind/Brain Sciences - CIMeC, University of Trento, Rovereto, Italy
- Integrative, Multisensory, Perception, Action and Cognition Team, Lyon Neuroscience Research Center, Lyon, France
- Neuro-immersion, Lyon, France
- Francesco Pavani
- Center for Mind/Brain Sciences - CIMeC, University of Trento, Rovereto, Italy
- Integrative, Multisensory, Perception, Action and Cognition Team, Lyon Neuroscience Research Center, Lyon, France
- Centro Interuniversitario di Ricerca « Cognizione, Linguaggio e Sordità », Rovereto, Italy
5. Dietze A, Sörös P, Bröer M, Methner A, Pöntynen H, Sundermann B, Witt K, Dietz M. Effects of acute ischemic stroke on binaural perception. Front Neurosci 2022; 16:1022354. PMID: 36620448. PMCID: PMC9817147. DOI: 10.3389/fnins.2022.1022354.
Abstract
Stroke-induced lesions at different locations in the brain can affect various aspects of binaural hearing, including spatial perception. Previous studies found impairments in binaural hearing, especially in patients with temporal lobe tumors or lesions, but also after lesions all along the auditory pathway, from brainstem nuclei up to the auditory cortex. Currently, structural magnetic resonance imaging (MRI) is used in the clinical treatment routine of stroke patients. In combination with structural imaging, an analysis of binaural hearing enables a better understanding of hearing-related signaling pathways and of clinical disorders of binaural processing after a stroke. However, little data are currently available on binaural hearing in stroke patients, particularly for the acute phase of stroke. Here, we sought to address this gap in an exploratory study of patients in the acute phase of ischemic stroke. We conducted psychoacoustic measurements using two tasks of binaural hearing: binaural tone-in-noise detection, and lateralization of stimuli with interaural time or level differences. The location of the stroke lesion was established from previously acquired MRI data. An additional general assessment included three-frequency audiometry, cognitive assessments, and depression screening. Fifty-five patients participated in the experiments, on average 5 days after stroke onset. Patients with lesions in different locations were tested, including lesions in brainstem areas, basal ganglia, thalamus, temporal lobe, and other cortical and subcortical areas. Lateralization impairments were found in most patients with lesions within the auditory pathway. Lesions at brainstem levels led to distortions of lateralization in both hemifields; thalamus lesions were associated with a shift of the whole auditory space; and some cortical lesions predominantly affected the lateralization of stimuli contralateral to the lesion and resulted in more variable responses. Lateralization performance was also affected by lesions of the right, but not the left, basal ganglia, as well as by lesions in non-auditory cortical areas. In general, altered lateralization was common in the stroke group. In contrast, deficits in tone-in-noise detection were relatively scarce in our sample, although a significant number of patients with multiple lesion sites were not able to complete the task.
Affiliation(s)
- Anna Dietze
- Department of Medical Physics and Acoustics, University of Oldenburg, Oldenburg, Germany
- Cluster of Excellence “Hearing4all”, University of Oldenburg, Oldenburg, Germany
- Peter Sörös
- Department of Neurology, School of Medicine and Health Sciences, University of Oldenburg, Oldenburg, Germany
- Research Center Neurosensory Science, University of Oldenburg, Oldenburg, Germany
- Matthias Bröer
- Department of Neurology, School of Medicine and Health Sciences, University of Oldenburg, Oldenburg, Germany
- Anna Methner
- Department of Neurology, School of Medicine and Health Sciences, University of Oldenburg, Oldenburg, Germany
- Henri Pöntynen
- Department of Medical Physics and Acoustics, University of Oldenburg, Oldenburg, Germany
- Cluster of Excellence “Hearing4all”, University of Oldenburg, Oldenburg, Germany
- Benedikt Sundermann
- Research Center Neurosensory Science, University of Oldenburg, Oldenburg, Germany
- Institute of Radiology and Neuroradiology, Evangelisches Krankenhaus Oldenburg, Oldenburg, Germany
- Karsten Witt
- Department of Neurology, School of Medicine and Health Sciences, University of Oldenburg, Oldenburg, Germany
- Research Center Neurosensory Science, University of Oldenburg, Oldenburg, Germany
- Mathias Dietz
- Department of Medical Physics and Acoustics, University of Oldenburg, Oldenburg, Germany
- Cluster of Excellence “Hearing4all”, University of Oldenburg, Oldenburg, Germany
- Research Center Neurosensory Science, University of Oldenburg, Oldenburg, Germany
6. Klingel M, Kopčo N, Laback B. Reweighting of Binaural Localization Cues Induced by Lateralization Training. J Assoc Res Otolaryngol 2021; 22:551-566. PMID: 33959826. PMCID: PMC8476684. DOI: 10.1007/s10162-021-00800-8.
Abstract
Normal-hearing listeners adapt to alterations in sound localization cues. This adaptation can result from the establishment of a new spatial map of the altered cues or from a stronger relative weighting of unaltered compared to altered cues. Such reweighting has been shown for monaural vs. binaural cues. However, studies attempting to reweight the two binaural cues, interaural differences in time (ITD) and level (ILD), yielded inconclusive results. This study investigated whether binaural-cue reweighting can be induced by lateralization training in a virtual audio-visual environment. Twenty normal-hearing participants, divided into two groups, completed the experiment consisting of 7 days of lateralization training, preceded and followed by a test measuring the binaural-cue weights. Participants' task was to lateralize 500-ms bandpass-filtered (2-4 kHz) noise bursts containing various combinations of spatially consistent and inconsistent binaural cues. During training, additional visual cues reinforced the azimuth corresponding to ITDs in one group and ILDs in the other group and the azimuthal ranges of the binaural cues were manipulated group-specifically. Both groups showed a significant increase of the reinforced-cue weight from pre- to posttest, suggesting that participants reweighted the binaural cues in the expected direction. This reweighting occurred within the first training session. The results are relevant as binaural-cue reweighting likely occurs when normal-hearing listeners adapt to new acoustic environments. Reweighting might also be a factor underlying the low contribution of ITDs to sound localization of cochlear-implant listeners as they typically do not experience reliable ITD cues with clinical devices.
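The reweighting idea above can be made concrete with a toy model in which the perceived azimuth is a weight-normalized average of the azimuths signalled by each binaural cue, and visual reinforcement shifts weight toward one cue. This is a hedged sketch: the function names, starting weights, and update rule are illustrative assumptions, not the model used in the study.

```python
def lateralize(az_itd, az_ild, w_itd, w_ild):
    """Perceived azimuth as a weight-normalized average of the
    azimuths suggested by the ITD and ILD cues."""
    return (w_itd * az_itd + w_ild * az_ild) / (w_itd + w_ild)

def reinforce(w_itd, w_ild, cue="ild", rate=0.1):
    """Move a fraction of the weight toward the visually reinforced
    cue, keeping the total weight constant (hypothetical rule)."""
    if cue == "ild":
        w_ild, w_itd = w_ild + rate * w_itd, (1 - rate) * w_itd
    else:
        w_itd, w_ild = w_itd + rate * w_ild, (1 - rate) * w_ild
    return w_itd, w_ild

# Spatially inconsistent cues: ITD suggests -20 deg, ILD suggests +20 deg.
w_itd, w_ild = 0.5, 0.5
before = lateralize(-20.0, 20.0, w_itd, w_ild)   # balanced cues cancel out
for _ in range(7):                                # seven training sessions
    w_itd, w_ild = reinforce(w_itd, w_ild, cue="ild")
after = lateralize(-20.0, 20.0, w_itd, w_ild)     # shifted toward the ILD azimuth
```

With equal weights the inconsistent cues cancel; after repeated reinforcement of the ILD, the same stimulus is lateralized toward the ILD-specified side, which is the signature the pre/post weight measurements in the study are designed to detect.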
Affiliation(s)
- Maike Klingel
- Acoustics Research Institute, Austrian Academy of Sciences, 1040 Vienna, Austria
- Department of Cognition, Emotion, and Methods in Psychology, Faculty of Psychology, University of Vienna, 1010 Vienna, Austria
- Institute of Computer Science, Faculty of Science, P. J. Šafárik University in Košice, 04180 Košice, Slovakia
- Norbert Kopčo
- Institute of Computer Science, Faculty of Science, P. J. Šafárik University in Košice, 04180 Košice, Slovakia
- Bernhard Laback
- Acoustics Research Institute, Austrian Academy of Sciences, 1040 Vienna, Austria
7. Valzolgher C, Verdelet G, Salemme R, Lombardi L, Gaveau V, Farné A, Pavani F. Reaching to sounds in virtual reality: A multisensory-motor approach to promote adaptation to altered auditory cues. Neuropsychologia 2020; 149:107665. PMID: 33130161. DOI: 10.1016/j.neuropsychologia.2020.107665.
Abstract
When localising sounds in space the brain relies on internal models that specify the correspondence between the auditory input reaching the ears, initial head position and coordinates in external space. These models can be updated throughout life, setting the basis for re-learning spatial hearing abilities in adulthood. In addition, strategic behavioural adjustments allow people to quickly adapt to atypical listening situations. Until recently, the potential role of dynamic listening, involving head movements or reaching to sounds, has remained largely overlooked. Here, we exploited visual virtual reality (VR) and real-time kinematic tracking to study the role of active multisensory-motor interactions when hearing individuals adapt to altered binaural cues (one ear plugged and muffed). Participants were immersed in a VR scenario showing 17 virtual speakers at ear level. In each trial, they heard a sound delivered from a real speaker aligned with one of the virtual ones and were instructed either to reach and touch the perceived sound source (Reaching group) or to read the label associated with the speaker (Naming group). Participants were free to move their heads during the task and received audio-visual feedback on their performance. Most importantly, they performed the task under binaural or monaural listening. Results show that both groups adapted rapidly to monaural listening, improving sound localisation performance across trials and changing their head-movement behaviour. Reaching to the sounds induced faster and larger sound localisation improvements than just naming their positions. This benefit was linked to progressively wider head movements to explore auditory space, selectively in the Reaching group. In conclusion, reaching to sounds in an immersive visual VR context proved most effective for adapting to altered binaural listening. Head movements played an important role in adaptation, pointing to the importance of dynamic listening when implementing training protocols for improving spatial hearing.
Affiliation(s)
- Chiara Valzolgher
- IMPACT, Centre de Recherche en Neuroscience Lyon (CRNL), France
- Centre for Mind/Brain Sciences (CIMeC), University of Trento, Italy
- Romeo Salemme
- IMPACT, Centre de Recherche en Neuroscience Lyon (CRNL), France
- Neuro-immersion, Centre de Recherche en Neuroscience Lyon (CRNL), France
- Luigi Lombardi
- Department of Psychology and Cognitive Sciences (DiPSCo), University of Trento, Italy
- Valerie Gaveau
- IMPACT, Centre de Recherche en Neuroscience Lyon (CRNL), France
- Alessandro Farné
- IMPACT, Centre de Recherche en Neuroscience Lyon (CRNL), France
- Centre for Mind/Brain Sciences (CIMeC), University of Trento, Italy
- Neuro-immersion, Centre de Recherche en Neuroscience Lyon (CRNL), France
- Francesco Pavani
- IMPACT, Centre de Recherche en Neuroscience Lyon (CRNL), France
- Centre for Mind/Brain Sciences (CIMeC), University of Trento, Italy
- Department of Psychology and Cognitive Sciences (DiPSCo), University of Trento, Italy
8. Valzolgher C, Campus C, Rabini G, Gori M, Pavani F. Updating spatial hearing abilities through multisensory and motor cues. Cognition 2020; 204:104409. PMID: 32717425. DOI: 10.1016/j.cognition.2020.104409.
Abstract
Spatial hearing relies on a series of mechanisms for associating auditory cues with positions in space. When auditory cues are altered, humans, as well as other animals, can update the way they exploit auditory cues and partially compensate for their spatial hearing difficulties. In two experiments, we simulated monaural listening in hearing adults by temporarily plugging and muffing one ear, to assess the effects of active versus passive training conditions. During active training, participants moved an audio-bracelet attached to their wrist while continuously attending to the position of the sounds it produced. During passive training, participants received identical acoustic stimulation and performed exactly the same task, but the audio-bracelet was moved by the experimenter. Before and after training, we measured adaptation to monaural listening in three auditory tasks: single sound localization, minimum audible angle (MAA), and spatial and temporal bisection. We also performed the tests twice in an untrained group, which completed the same auditory tasks but received no training. Results showed that participants significantly improved in single sound localization across 3 consecutive days, more so in the active than in the passive training group. This reveals that the benefits of kinesthetic cues are additive with respect to those of paying attention to the position of sounds and/or seeing their positions when updating spatial hearing. The observed adaptation did not generalize to the other auditory spatial tasks (space bisection and MAA), suggesting that partial updating of sound-space correspondences does not extend to all aspects of spatial hearing.
Affiliation(s)
- Chiara Valzolgher
- Centro Interdipartimentale Mente/Cervello (CIMeC), University of Trento, Italy
- IMPACT, Centre de Recherche en Neurosciences Lyon (CRNL), France
- Giuseppe Rabini
- Centro Interdipartimentale Mente/Cervello (CIMeC), University of Trento, Italy
- Monica Gori
- Italian Institute of Technology (IIT), Italy
- Francesco Pavani
- Centro Interdipartimentale Mente/Cervello (CIMeC), University of Trento, Italy
- IMPACT, Centre de Recherche en Neurosciences Lyon (CRNL), France
- Department of Psychology and Cognitive Science, University of Trento, Italy
9. Kumpik DP, Campbell C, Schnupp JWH, King AJ. Re-weighting of Sound Localization Cues by Audiovisual Training. Front Neurosci 2019; 13:1164. PMID: 31802997. PMCID: PMC6873890. DOI: 10.3389/fnins.2019.01164.
Abstract
Sound localization requires the integration in the brain of auditory spatial cues generated by interactions with the external ears, head and body. Perceptual learning studies have shown that the relative weighting of these cues can change in a context-dependent fashion if their relative reliability is altered. One factor that may influence this process is vision, which tends to dominate localization judgments when both modalities are present and induces a recalibration of auditory space if they become misaligned. It is not known, however, whether vision can alter the weighting of individual auditory localization cues. Using virtual acoustic space stimuli, we measured changes in subjects’ sound localization biases and binaural localization cue weights after ∼50 min of training on audiovisual tasks in which visual stimuli were either informative or not about the location of broadband sounds. Four different spatial configurations were used in which we varied the relative reliability of the binaural cues: interaural time differences (ITDs) and frequency-dependent interaural level differences (ILDs). In most subjects and experiments, ILDs were weighted more highly than ITDs before training. When visual cues were spatially uninformative, some subjects showed a reduction in auditory localization bias, and the relative weighting of ILDs increased after training with congruent binaural cues. ILDs were also upweighted if they were paired with spatially congruent visual cues, and the largest group-level improvements in sound localization accuracy occurred when both binaural cues were matched to visual stimuli. These data suggest that binaural cue reweighting reflects baseline differences in the relative weights of ILDs and ITDs, but is also shaped by the availability of congruent visual stimuli.
Training subjects with consistently misaligned binaural and visual cues produced the ventriloquism aftereffect, i.e., a corresponding shift in auditory localization bias, without affecting the inter-subject variability in sound localization judgments or their binaural cue weights. Our results show that the relative weighting of different auditory localization cues can be changed by training in ways that depend on their reliability as well as the availability of visual spatial information, with the largest improvements in sound localization likely to result from training with fully congruent audiovisual information.
Affiliation(s)
- Daniel P Kumpik
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
- Connor Campbell
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
- Jan W H Schnupp
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
- Andrew J King
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
10. Zirn S, Angermeier J, Arndt S, Aschendorff A, Wesarg T. Reducing the Device Delay Mismatch Can Improve Sound Localization in Bimodal Cochlear Implant/Hearing-Aid Users. Trends Hear 2019; 23:2331216519843876. PMID: 31018790. PMCID: PMC6484236. DOI: 10.1177/2331216519843876.
Abstract
In users of a cochlear implant (CI) together with a contralateral hearing aid (HA), so-called bimodal listeners, differences in processing latency between the digital HA and the CI of up to 9 ms are constantly superimposed on interaural time differences. In the present study, the effect of this device delay mismatch on sound localization accuracy was investigated. For this purpose, localization accuracy in the frontal horizontal plane was measured with the original and with a minimized device delay mismatch. The reduction was achieved by delaying the CI stimulation according to the delay of the individually worn HA. For this, a portable, programmable, battery-powered delay line based on a ring buffer running on a microcontroller was designed and assembled. After an acclimatization period of 1 hr to the delayed CI stimulation, the nine bimodal study participants showed a highly significant improvement in localization accuracy of 11.6% compared with the everyday situation without the delay line (p < .01). In conclusion, delaying CI stimulation to minimize the device delay mismatch seems to be a promising method for increasing sound localization accuracy in bimodal listeners.
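The ring-buffer delay line described in this abstract delays a sample stream by a fixed number of samples. Its principle can be sketched as follows; this is an illustrative Python sketch with an assumed sampling rate, not the authors' microcontroller firmware.

```python
class RingBufferDelay:
    """Fixed delay line: each incoming sample is written into a
    circular buffer and the sample written `delay` steps earlier is
    read out, so the stream is delayed by exactly `delay` samples."""

    def __init__(self, delay):
        self.buf = [0.0] * delay  # pre-filled with silence
        self.pos = 0              # single read/write index

    def process(self, sample):
        delayed = self.buf[self.pos]   # sample from `delay` steps ago
        self.buf[self.pos] = sample    # overwrite with the new sample
        self.pos = (self.pos + 1) % len(self.buf)
        return delayed

# E.g., compensating a 9 ms mismatch at an assumed 16 kHz rate:
delay_line = RingBufferDelay(int(0.009 * 16000))   # 144 samples
out = [delay_line.process(s) for s in range(300)]
```

Because read and write share one index, memory use is exactly the delay length and each sample costs constant time, which is what makes this structure practical on a small battery-powered microcontroller.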
Affiliation(s)
- Stefan Zirn
- Department of Electrical Engineering, Medical Engineering and Computer Science, University of Applied Sciences Offenburg, Germany
- Julian Angermeier
- Department of Electrical Engineering, Medical Engineering and Computer Science, University of Applied Sciences Offenburg, Germany
- Susan Arndt
- Department of Otorhinolaryngology-Head and Neck Surgery, Medical Center, Faculty of Medicine-University of Freiburg, Germany
- Antje Aschendorff
- Department of Otorhinolaryngology-Head and Neck Surgery, Medical Center, Faculty of Medicine-University of Freiburg, Germany
- Thomas Wesarg
- Department of Otorhinolaryngology-Head and Neck Surgery, Medical Center, Faculty of Medicine-University of Freiburg, Germany
11. Interactions between egocentric and allocentric spatial coding of sounds revealed by a multisensory learning paradigm. Sci Rep 2019; 9:7892. PMID: 31133688. PMCID: PMC6536515. DOI: 10.1038/s41598-019-44267-3.
Abstract
Although sound position is initially head-centred (egocentric coordinates), our brain can also represent sounds relative to one another (allocentric coordinates). Whether reference frames for spatial hearing are independent or interact has remained largely unexplored. Here we developed a new allocentric spatial-hearing training and tested whether it can improve egocentric sound-localisation performance in normal-hearing adults listening with one ear plugged. Two groups of participants (N = 15 each) performed an egocentric sound-localisation task (point to a syllable), in monaural listening, before and after 4 days of multisensory training on triplets of white-noise bursts paired with occasional visual feedback. Critically, one group performed an allocentric task (auditory bisection task), whereas the other processed the same stimuli to perform an egocentric task (pointing to a designated sound of the triplet). Unlike most previous work, we also tested a no-training group (N = 15). Egocentric sound-localisation abilities in the horizontal plane improved for all groups in the space ipsilateral to the ear plug. This unexpected finding highlights the importance of including a no-training group when studying sound-localisation re-learning. Yet, performance changes were qualitatively different in trained compared to untrained participants, providing initial evidence that allocentric and multisensory procedures may prove useful when aiming to promote sound-localisation re-learning.
12
Venskytis EJ, Clayton C, Montagne C, Zhou Y. Audiovisual Interactions in Stereo Sound Localization for Individuals With Unilateral Hearing Loss. Trends Hear 2019; 23:2331216519846232. [PMID: 31035906] [PMCID: PMC6572873] [DOI: 10.1177/2331216519846232]
Abstract
This study investigated the effects of unilateral hearing loss (UHL), of either conductive or sensorineural origin, on stereo sound localization and related visual bias in listeners with normal hearing, short-term (acute) UHL, and chronic UHL. Time-delay-based stereophony was used to isolate interaural-time-difference cues for sound source localization in free field. Listeners with acute moderate (<40 dB for tens of minutes) and chronic severe (>50 dB for more than 10 years) UHL showed poor localization and compressed auditory space that favored the intact ear. Listeners with chronic moderate (<50 dB for more than 12 years) UHL performed near normal. These results show that the auditory spatial mechanisms that allow stereo localization become less sensitive to moderate UHL in the long term. Presenting LED flashes at either the same or a different location as the sound source elicited visual bias in all groups but to different degrees. Hearing loss led to increased visual bias, especially on the impaired side, for the severe and acute UHL listeners, suggesting that vision plays a compensatory role in restoring perceptual spatial symmetry.
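The time-delay-based stereophony used in this study creates a phantom source from an interaural time difference alone: the identical waveform is played from two loudspeakers, one delayed by a fraction of a millisecond. A minimal sketch of such a stimulus is below; the sample rate, click amplitude, and buffer length are illustrative assumptions, not the authors' stimulus code.

```python
def stereo_pair(click_times_s, delay_us, fs=44100, dur_s=0.5):
    """Build (left, right) sample buffers for a click train in which the
    right channel lags by `delay_us` microseconds, so the inter-channel
    time delay is the only localization cue (level and spectrum match)."""
    n = int(fs * dur_s)
    left = [0.0] * n
    right = [0.0] * n
    shift = int(round(delay_us * 1e-6 * fs))  # delay in whole samples
    for t in click_times_s:
        i = int(round(t * fs))
        if 0 <= i < n:
            left[i] = 1.0                      # unit-amplitude click
        if 0 <= i + shift < n:
            right[i + shift] = 1.0             # same click, delayed
    return left, right
```

Because both channels carry the same waveform at the same level, a delay of a few hundred microseconds shifts the phantom source toward the leading loudspeaker, which is how the paradigm isolates interaural-time-difference cues in free field.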
Affiliation(s)
- Emily J Venskytis
- Department of Speech and Hearing Science, College of Health Solutions, Arizona State University, Tempe, AZ, USA
- Colton Clayton
- Department of Speech and Hearing Science, College of Health Solutions, Arizona State University, Tempe, AZ, USA
- Christopher Montagne
- Department of Speech and Hearing Science, College of Health Solutions, Arizona State University, Tempe, AZ, USA
- Yi Zhou
- Department of Speech and Hearing Science, College of Health Solutions, Arizona State University, Tempe, AZ, USA
13
Interaural Time Difference Perception with a Cochlear Implant and a Normal Ear. J Assoc Res Otolaryngol 2018; 19:703-715. [PMID: 30264229] [DOI: 10.1007/s10162-018-00697-w]
Abstract
Currently there is a growing population of cochlear-implant (CI) users with (near) normal hearing in the non-implanted ear, a configuration often called SSD (single-sided deafness) CI. Because the goal of the CI is often to improve spatial perception, the question arises to what extent SSD CI listeners are sensitive to interaural time differences (ITDs). In a controlled lab setup, sensitivity to ITDs was investigated in 11 SSD CI listeners. The stimuli were 100-pps pulse trains on the CI side and band-limited click trains on the acoustic side. After determining the level balance and the delay needed to achieve synchronous stimulation of the two ears, the just noticeable difference in ITD was measured using an adaptive procedure. Seven of the 11 listeners were sensitive to ITDs, with a median just noticeable difference of 438 μs. Of the four listeners who were not sensitive to ITDs, one reported binaural fusion and three reported no binaural fusion. To enable ITD sensitivity, a frequency-dependent delay of the electrical stimulus was required to synchronize the electric and acoustic signals at the level of the auditory nerve. Using subjective fusion measures, refined by ITD sensitivity, it was possible to match a CI electrode to an acoustic frequency range. This shows the feasibility of these measures for allocating acoustic frequency ranges to electrodes when fitting a CI in a listener with (near) normal hearing in the contralateral ear.
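The just-noticeable-difference measurement described above relies on an adaptive procedure. A generic 2-down/1-up staircase, the standard choice for this kind of threshold estimate, can be sketched as follows; the starting value, step-size schedule, and stopping rule are illustrative assumptions, not the authors' exact protocol.

```python
def run_staircase(can_discriminate, start_itd_us=800.0, start_step=2.0,
                  min_step=1.05, n_reversals=10):
    """Generic 2-down/1-up adaptive staircase: shrink the ITD after two
    correct responses, raise it after one error. This rule converges on
    the ~70.7%-correct point of the psychometric function."""
    itd = start_itd_us
    step = start_step
    correct_streak = 0
    direction = -1  # -1: making the task harder, +1: easier
    reversals = []
    while len(reversals) < n_reversals:
        if can_discriminate(itd):
            correct_streak += 1
            if correct_streak < 2:
                continue
            correct_streak = 0
            if direction == +1:          # easier -> harder is a reversal
                reversals.append(itd)
                step = max(min_step, step ** 0.5)  # shrink the step
            direction = -1
            itd /= step
        else:
            correct_streak = 0
            if direction == -1:          # harder -> easier is a reversal
                reversals.append(itd)
                step = max(min_step, step ** 0.5)
            direction = +1
            itd *= step
    # Threshold estimate: geometric mean of the last six reversal points.
    last = reversals[-6:]
    prod = 1.0
    for r in last:
        prod *= r
    return prod ** (1.0 / len(last))
```

With a simulated listener that reliably discriminates ITDs above some true threshold, the staircase converges near that threshold; the median JND of 438 μs reported here would correspond to such a convergence point.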
14
The Encoding of Sound Source Elevation in the Human Auditory Cortex. J Neurosci 2018; 38:3252-3264. [PMID: 29507148] [DOI: 10.1523/jneurosci.2530-17.2018]
Abstract
Spatial hearing is a crucial capacity of the auditory system. While the encoding of horizontal sound direction has been extensively studied, very little is known about the representation of vertical sound direction in the auditory cortex. Using high-resolution fMRI, we measured voxelwise sound elevation tuning curves in human auditory cortex and show that sound elevation is represented by broad tuning functions preferring lower elevations as well as secondary narrow tuning functions preferring individual elevation directions. We changed the ear shape of participants (male and female) with silicone molds for several days. This manipulation reduced or abolished the ability to discriminate sound elevation and flattened cortical tuning curves. Tuning curves recovered their original shape as participants adapted to the modified ears and regained elevation perception over time. These findings suggest that the elevation tuning observed in low-level auditory cortex did not arise from the physical features of the stimuli but is contingent on experience with spectral cues and covaries with the change in perception. One explanation for this observation may be that the tuning in low-level auditory cortex underlies the subjective perception of sound elevation.

SIGNIFICANCE STATEMENT: This study addresses two fundamental questions about the brain representation of sensory stimuli: how the vertical spatial axis of auditory space is represented in the auditory cortex and whether low-level sensory cortex represents physical stimulus features or subjective perceptual attributes. Using high-resolution fMRI, we show that vertical sound direction is represented by broad tuning functions preferring lower elevations as well as secondary narrow tuning functions preferring individual elevation directions. In addition, we demonstrate that the shape of these tuning functions is contingent on experience with spectral cues and covaries with the change in perception, which may indicate that the tuning functions in low-level auditory cortex underlie the perceived elevation of a sound source.
15
Tissieres I, Fornari E, Clarke S, Crottaz-Herbette S. Supramodal effect of rightward prismatic adaptation on spatial representations within the ventral attentional system. Brain Struct Funct 2017; 223:1459-1471. [PMID: 29151115] [DOI: 10.1007/s00429-017-1572-2]
Abstract
Rightward prismatic adaptation (R-PA) was shown to alleviate not only visuo-spatial but also auditory symptoms in neglect. The neural mechanisms underlying the effect of R-PA have previously been investigated in visual tasks, demonstrating a shift of hemispheric dominance for visuo-spatial attention from the right to the left hemisphere both in normal subjects and in patients. We investigated whether the same neural mechanisms underlie the supramodal effect of R-PA on auditory attention. Normal subjects underwent a brief session of R-PA, which was preceded and followed by an fMRI evaluation during which subjects detected targets within the left, central and right space in the auditory or visual modality. R-PA-related changes in activation patterns were found bilaterally in the inferior parietal lobule (IPL). In either modality, the representation of the left, central and right space increased in the left IPL, whereas the representation of the right space decreased in the right IPL. Thus, a brief exposure to R-PA modulated the representation of auditory and visual space within the ventral attentional system. This shift in hemispheric dominance for auditory spatial attention offers a parsimonious explanation for the previously reported effects of R-PA on auditory symptoms in neglect.
Affiliation(s)
- Isabel Tissieres
- Neuropsychology and Neurorehabilitation Service, Centre Hospitalier Universitaire Vaudois (CHUV), University of Lausanne, Av. Pierre-Decker 5, 1011, Lausanne, Switzerland
| | - Eleonora Fornari
- CIBM (Centre d'Imagerie Biomédicale), Department of Radiology, Centre Hospitalier Universitaire Vaudois (CHUV), University of Lausanne, 1011, Lausanne, Switzerland
| | - Stephanie Clarke
- Neuropsychology and Neurorehabilitation Service, Centre Hospitalier Universitaire Vaudois (CHUV), University of Lausanne, Av. Pierre-Decker 5, 1011, Lausanne, Switzerland
| | - Sonia Crottaz-Herbette
- Neuropsychology and Neurorehabilitation Service, Centre Hospitalier Universitaire Vaudois (CHUV), University of Lausanne, Av. Pierre-Decker 5, 1011, Lausanne, Switzerland.
| |
16
Trapeau R, Aubrais V, Schönwiesner M. Fast and persistent adaptation to new spectral cues for sound localization suggests a many-to-one mapping mechanism. J Acoust Soc Am 2016; 140:879. [PMID: 27586720] [DOI: 10.1121/1.4960568]
Abstract
The adult human auditory system can adapt to changes in spectral cues for sound localization. This plasticity was demonstrated by changing the shape of the pinna with earmolds. Previous results indicate that participants regain localization accuracy after several weeks of adaptation and that the adapted state is retained for at least one week without earmolds. No aftereffect was observed after mold removal, but any aftereffect may be too short to be observed when responses are averaged over many trials. This work investigated the lack of aftereffect by analyzing single-trial responses and modifying visual, auditory, and tactile information during the localization task. Results showed that participants localized accurately immediately after mold removal, even at the first stimulus presentation. Knowledge of the stimulus spectrum, tactile information about the absence of the earmolds, and visual feedback were not necessary to localize accurately after adaptation. Part of the adaptation persisted for one month without molds. The results are consistent with the hypothesis of a many-to-one mapping of the spectral cues, in which several spectral profiles are simultaneously associated with one sound location. Additionally, participants with acoustically more informative spectral cues localized sounds more accurately, and larger acoustical disturbances by the molds reduced adaptation success.
Affiliation(s)
- Régis Trapeau
- International Laboratory for Brain, Music and Sound Research (BRAMS), Department of Psychology, Université de Montréal, Pavillon 1420 Boulevard Mont-Royal, Outremont, Quebec, H2V 4P3, Canada
- Valérie Aubrais
- International Laboratory for Brain, Music and Sound Research (BRAMS), Department of Psychology, Université de Montréal, Pavillon 1420 Boulevard Mont-Royal, Outremont, Quebec, H2V 4P3, Canada
- Marc Schönwiesner
- International Laboratory for Brain, Music and Sound Research (BRAMS), Department of Psychology, Université de Montréal, Pavillon 1420 Boulevard Mont-Royal, Outremont, Quebec, H2V 4P3, Canada
17
Physiological Evidence for a Midline Spatial Channel in Human Auditory Cortex. J Assoc Res Otolaryngol 2016; 17:331-40. [PMID: 27164943] [PMCID: PMC4940291] [DOI: 10.1007/s10162-016-0571-y]
Abstract
Studies with humans and other mammals have provided support for a two-channel representation of horizontal (“azimuthal”) space in the auditory system. In this representation, location-sensitive neurons contribute activity to one of two broadly tuned channels whose responses are compared to derive an estimate of sound-source location. One channel is maximally responsive to sounds towards the left and the other to sounds towards the right. However, recent psychophysical studies of humans, and physiological studies of other mammals, point to the presence of an additional channel, maximally responsive to the midline. In this study, we used electroencephalography to seek physiological evidence for such a midline channel in humans. We measured neural responses to probe stimuli presented from straight ahead (0°) or towards the right (+30° or +90°). Probes were preceded by adapter stimuli to temporarily suppress channel activity. Adapters came from 0° or alternated between left and right (−30° and +30°, or −90° and +90°). For the +90° probe, to which the right-tuned channel would respond most strongly, both accounts predict greatest adaptation when the adapters are at ±90°. For the 0° probe, the two-channel account predicts greatest adaptation from the ±90° adapters, while the three-channel account predicts greatest adaptation when the adapters are at 0°, because these adapters stimulate the midline-tuned channel, which responds most strongly to the 0° probe. The results were consistent with the three-channel account. In addition, a computational implementation of the three-channel account fitted the probe response sizes well, explaining 93% of the variance about the mean, whereas a two-channel implementation produced a poor fit and explained only 61% of the variance.
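The channel-comparison logic behind these predictions can be sketched as a toy population model: broadly tuned left- and right-preferring channels plus a midline channel, with azimuth decoded from their relative activity. All tuning centers, widths, and the readout rule below are illustrative assumptions, not the fitted implementation reported in the paper.

```python
import math

# Illustrative tuning: broad left/right channels centred at +/-90 degrees,
# plus a narrower midline channel (assumed parameters).
CHANNELS = {
    "left":    (-90.0, 60.0),   # (preferred azimuth, tuning width)
    "midline": (0.0,   30.0),
    "right":   (+90.0, 60.0),
}

def channel_response(azimuth_deg, center, width):
    """Gaussian tuning curve for one channel."""
    return math.exp(-0.5 * ((azimuth_deg - center) / width) ** 2)

def population(azimuth_deg, adaptation=None):
    """Responses of the three channels, optionally scaled by per-channel
    adaptation factors in [0, 1] (1 = unadapted)."""
    adaptation = adaptation or {}
    return {name: adaptation.get(name, 1.0) * channel_response(azimuth_deg, c, w)
            for name, (c, w) in CHANNELS.items()}

def decode(responses):
    """Weighted-sum readout: azimuth estimate from channel activity."""
    total = sum(responses.values())
    return sum(r * CHANNELS[name][0] for name, r in responses.items()) / total
```

In this toy model, suppressing one channel (as the adapter stimuli do) shifts the decoded location of a subsequent probe away from that channel's preferred side, which is exactly the comparison the probe-adapter design exploits to distinguish the two- and three-channel accounts.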
18
Keating P, Rosenior-Patten O, Dahmen JC, Bell O, King AJ. Behavioral training promotes multiple adaptive processes following acute hearing loss. eLife 2016; 5:e12264. [PMID: 27008181] [PMCID: PMC4841776] [DOI: 10.7554/elife.12264]
Abstract
The brain possesses a remarkable capacity to compensate for changes in inputs resulting from a range of sensory impairments. Developmental studies of sound localization have shown that adaptation to asymmetric hearing loss can be achieved either by reinterpreting altered spatial cues or by relying more on those cues that remain intact. Adaptation to monaural deprivation in adulthood is also possible, but appears to lack such flexibility. Here we show, however, that appropriate behavioral training enables monaurally-deprived adult humans to exploit both of these adaptive processes. Moreover, cortical recordings in ferrets reared with asymmetric hearing loss suggest that these forms of plasticity have distinct neural substrates. An ability to adapt to asymmetric hearing loss using multiple adaptive processes is therefore shared by different species and may persist throughout the lifespan. This highlights the fundamental flexibility of neural systems, and may also point toward novel therapeutic strategies for treating sensory disorders.

The brain normally compares the timing and intensity of the sounds that reach each ear to work out a sound’s origin. Hearing loss in one ear disrupts these between-ear comparisons, which causes listeners to make errors in this process. With time, however, the brain adapts to this hearing loss and once again learns to localize sounds accurately. Previous research has shown that young ferrets can adapt to hearing loss in one ear in two distinct ways: they either learn to remap the altered between-ear comparisons onto their new locations, or they learn to locate sounds using only their good ear. Each strategy is suited to localizing different types of sound, but it was not known how this adaptive flexibility unfolds over time, whether it persists throughout the lifespan, or whether it is shared by other species.

Now, Keating et al. show that, with some coaching, adult humans also adapt to temporary loss of hearing in one ear using the same two strategies. In the experiments, adult humans were trained to localize different kinds of sounds while wearing an earplug in one ear. These sounds were presented from 12 loudspeakers arranged in a horizontal circle around the person being tested. The experiments showed that short periods of behavioral training enable adult humans to adapt to hearing loss in one ear and recover their ability to localize sounds. Just like the ferrets, adult humans learned to correctly associate altered between-ear comparisons with their new locations and to rely more on cues from the unplugged ear to locate sounds. Which of these adaptive strategies the participants used depended on the frequencies present in the sounds. The cells in the ear and brain that detect and make sense of sound typically respond best to a limited range of frequencies, so this suggests that each strategy relies on a distinct set of cells. Keating et al. confirmed in ferrets that different brain cells are indeed used to bring about adaptation to hearing loss in one ear with each strategy. These insights may aid the development of new therapies to treat hearing loss.
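The two adaptive processes described above, reinterpreting the altered binaural cue versus relying more on the intact ear, can be caricatured in a few lines. The linear cue distortion and the weighting scheme below are illustrative assumptions for the sketch, not the study's model.

```python
def plugged_itd_cue(true_az_deg, shift_deg=25.0):
    """An earplug attenuates one ear, biasing the binaural cue toward the
    open ear (caricatured here as a fixed linear shift)."""
    return true_az_deg + shift_deg

def localize_remap(cue_deg, learned_shift_deg):
    """Strategy 1 (remapping): reinterpret the altered binaural cue by
    subtracting a learned correction."""
    return cue_deg - learned_shift_deg

def localize_reweight(cue_deg, spectral_est_deg, w_spectral):
    """Strategy 2 (reweighting): down-weight the corrupted binaural cue in
    favour of the intact ear's monaural spectral estimate."""
    return (1 - w_spectral) * cue_deg + w_spectral * spectral_est_deg
```

Either strategy can recover the true location in this caricature, but they fail differently when a cue is unavailable, which is one way to see why the participants' choice of strategy depended on the frequency content of the sounds.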
Affiliation(s)
- Peter Keating
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
- Onayomi Rosenior-Patten
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
- Johannes C Dahmen
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
- Olivia Bell
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
- Andrew J King
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom