1
Clarke S, Da Costa S, Crottaz-Herbette S. Dual Representation of the Auditory Space. Brain Sci 2024; 14:535. PMID: 38928534; PMCID: PMC11201621; DOI: 10.3390/brainsci14060535. Open access.
Abstract
Auditory spatial cues contribute to two distinct functions: one leads to the explicit localization of sound sources, while the other provides a location-linked representation of sound objects. Behavioral and imaging studies have demonstrated right-hemispheric dominance for explicit sound localization. An early clinical case study documented a dissociation between explicit sound localization, which was heavily impaired, and the fully preserved use of spatial cues for sound object segregation; the latter involves location-linked encoding of sound objects. We review here the evidence pertaining to the brain regions involved in the location-linked representation of sound objects. Auditory evoked potential (AEP) and functional magnetic resonance imaging (fMRI) studies investigated this aspect by comparing the encoding of individual sound objects which changed their locations with that of objects which remained stationary. A systematic search identified 1 AEP and 12 fMRI studies. Together with studies of the anatomical correlates of impaired spatial-cue-based sound object segregation after focal brain lesions, the present evidence indicates that the location-linked representation of sound objects strongly involves the left hemisphere and, to a lesser degree, the right hemisphere. Location-linked encoding of sound objects is present in several early-stage auditory areas and in the specialized temporal voice area. In these regions, the encoding of emotional valence benefits from location-linked representation as well.
Affiliation(s)
- Stephanie Clarke
- Neuropsychology and Neurorehabilitation Service, Centre Hospitalier Universitaire Vaudois (CHUV), University of Lausanne, Av. Pierre-Decker 5, 1011 Lausanne, Switzerland
2
Grisendi T, Clarke S, Da Costa S. Emotional sounds in space: asymmetrical representation within early-stage auditory areas. Front Neurosci 2023; 17:1164334. PMID: 37274197; PMCID: PMC10235458; DOI: 10.3389/fnins.2023.1164334. Open access.
Abstract
Evidence from behavioral studies suggests that the spatial origin of sounds may influence the perception of emotional valence. Using 7T fMRI, we investigated the impact of sound category (vocalizations; non-vocalizations), emotional valence (positive, neutral, negative), and spatial origin (left, center, right) on encoding in early-stage auditory areas and in the voice area. The combination of these characteristics yielded a total of 18 conditions (2 categories × 3 valences × 3 lateralizations), which were presented in pseudo-randomized order, in blocks of 11 different sounds (of the same condition), across 12 distinct runs of 6 min. In addition, two localizers (tonotopy mapping; human vocalizations) were used to define the regions of interest. A three-way repeated-measures ANOVA on the BOLD responses revealed significant bilateral effects and interactions in the primary auditory cortex, the lateral early-stage auditory areas, and the voice area. Positive vocalizations presented on the left side yielded greater activity in the ipsilateral and contralateral primary auditory cortex than did neutral or negative vocalizations or any other stimuli at any of the three positions. The right, but not the left, area L3 responded more strongly (i) to positive vocalizations presented ipsi- or contralaterally than to neutral or negative vocalizations presented at the same positions; and (ii) to neutral than to positive or negative non-vocalizations presented contralaterally. Furthermore, comparison with a previous study indicates that spatial cues may render emotional valence more salient within the early-stage auditory areas.
Affiliation(s)
- Tiffany Grisendi
- Service de Neuropsychologie et de Neuroréhabilitation, Centre Hospitalier Universitaire Vaudois (CHUV) and University of Lausanne, Lausanne, Switzerland
- Stephanie Clarke
- Service de Neuropsychologie et de Neuroréhabilitation, Centre Hospitalier Universitaire Vaudois (CHUV) and University of Lausanne, Lausanne, Switzerland
- Sandra Da Costa
- Centre d'Imagerie Biomédicale, Ecole Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
3
Farahbod H, Rogalsky C, Keator LM, Cai J, Pillay SB, Turner K, LaCroix A, Fridriksson J, Binder JR, Middlebrooks JC, Hickok G, Saberi K. Informational Masking in Aging and Brain-lesioned Individuals. J Assoc Res Otolaryngol 2023; 24:67-79. PMID: 36471207; PMCID: PMC9971540; DOI: 10.1007/s10162-022-00877-9. Open access.
Abstract
Auditory stream segregation and informational masking were investigated in brain-lesioned individuals, age-matched controls with no neurological disease, and young college-age students. A psychophysical paradigm known as rhythmic masking release (RMR) was used to examine the ability of participants to identify a change in the rhythmic sequence of 20-ms Gaussian noise bursts presented through headphones and filtered through generalized head-related transfer functions to produce the percept of an externalized auditory image (i.e., a 3D virtual-reality sound). The target rhythm was temporally interleaved with a masker sequence comprising similar noise bursts, in a manner that resulted in a uniform sequence with no information remaining about the target rhythm when the target and masker were presented from the same location (an impossible task). Spatially separating the target and masker sequences allowed participants to determine whether the target rhythm changed midway through its presentation. RMR thresholds were defined as the minimum spatial separation between the target and masker sequences that yielded a 70.7%-correct performance level in a single-interval, 2-alternative forced-choice adaptive tracking procedure. The main findings were (1) significantly higher RMR thresholds for individuals with brain lesions (especially those with damage to parietal areas) and (2) a left-right spatial asymmetry in performance for lesion (but not control) participants. These findings contribute to a better understanding of spatiotemporal relations in informational masking and of the neural bases of auditory scene analysis.
Affiliation(s)
- Haleh Farahbod
- Department of Cognitive Sciences, University of California, Irvine, USA
- Corianne Rogalsky
- College of Health Solutions, Arizona State University, Tempe, USA
- Lynsey M. Keator
- Department of Communication Sciences and Disorders, University of South Carolina, Columbia, USA
- Julia Cai
- College of Health Solutions, Arizona State University, Tempe, USA
- Sara B. Pillay
- Department of Neurology, Medical College of Wisconsin, Milwaukee, USA
- Katie Turner
- Department of Cognitive Sciences, University of California, Irvine, USA
- Arianna LaCroix
- College of Health Sciences, Midwestern University, Glendale, USA
- Julius Fridriksson
- Department of Communication Sciences and Disorders, University of South Carolina, Columbia, USA
- Jeffrey R. Binder
- Department of Neurology, Medical College of Wisconsin, Milwaukee, USA
- John C. Middlebrooks
- Departments of Cognitive Sciences, Otolaryngology, and Language Science, University of California, Irvine, USA
- Gregory Hickok
- Departments of Cognitive Sciences and Language Science, University of California, Irvine, USA
- Kourosh Saberi
- Department of Cognitive Sciences, University of California, Irvine, USA
4
Interaction of bottom-up and top-down neural mechanisms in spatial multi-talker speech perception. Curr Biol 2022; 32:3971-3986.e4. PMID: 35973430; DOI: 10.1016/j.cub.2022.07.047.
Abstract
How the human auditory cortex represents spatially separated simultaneous talkers, and how talkers' locations and voices modulate the neural representations of attended and unattended speech, are unclear. Here, we measured neural responses from electrodes implanted in neurosurgical patients as they performed single-talker and multi-talker speech perception tasks. We found that spatial separation between talkers caused a preferential encoding of the contralateral speech in Heschl's gyrus (HG), planum temporale (PT), and superior temporal gyrus (STG). Location and spectrotemporal features were encoded in different aspects of the neural response: the talker's location changed the mean response level, whereas the talker's spectrotemporal features altered the variation of the response around its baseline. These components were differentially modulated by the attended talker's voice or location, which improved the population decoding of attended speech features. Attentional modulation due to the talker's voice appeared only in auditory areas with longer latencies, whereas attentional modulation due to location was present throughout. Our results show that spatial multi-talker speech perception relies upon a separable pre-attentive neural representation, which can be further tuned by top-down attention to the location and voice of the talker.
5
Vannson N, Strelnikov K, James CJ, Deguine O, Barone P, Marx M. Evidence of a functional reorganization in the auditory dorsal stream following unilateral hearing loss. Neuropsychologia 2020; 149:107683. PMID: 33212140; DOI: 10.1016/j.neuropsychologia.2020.107683.
Abstract
Unilateral hearing loss (UHL) disrupts binaural hearing mechanisms, which impairs sound localization and speech understanding in noisy environments. We conducted an original study using fMRI and psychoacoustic assessments to investigate the relationships between the extent of cortical reorganization across the auditory areas in UHL patients, the severity of the unilateral hearing loss, and the deficit in binaural abilities. Twenty-eight volunteers (14 UHL patients) were recruited (twenty-two females and six males). The brain imaging analysis demonstrated that UHL induces a shift in aural dominance favoring the better ear, with a cortical reorganization located in the non-primary auditory areas ipsilateral (same side) to the better ear. This reorganization correlated not only with the severity of the hearing loss but also with spatial localization abilities. A regression analysis between brain activity and the patients' performance clearly showed that the spatial hearing deficit was linked to a functional alteration of the posterior auditory areas known to process spatial hearing. Altogether, our study reveals that UHL alters the dorsal auditory stream, which is deleterious to spatial hearing.
Affiliation(s)
- Nicolas Vannson
- Brain and Cognition Research Centre, University of Toulouse Paul Sabatier, Toulouse, France; Brain and Cognition Research Centre, CNRS-UMR 5549, Toulouse, France; Cochlear France SAS, Toulouse, France
- Olivier Deguine
- Brain and Cognition Research Centre, University of Toulouse Paul Sabatier, Toulouse, France; Brain and Cognition Research Centre, CNRS-UMR 5549, Toulouse, France; Service d'Otologie, Otoneurologie et ORL pédiatrique, Hôpital Pierre-Paul Riquet, CHU Toulouse Purpan, France
- Pascal Barone
- Brain and Cognition Research Centre, University of Toulouse Paul Sabatier, Toulouse, France; Brain and Cognition Research Centre, CNRS-UMR 5549, Toulouse, France
- Mathieu Marx
- Brain and Cognition Research Centre, University of Toulouse Paul Sabatier, Toulouse, France; Brain and Cognition Research Centre, CNRS-UMR 5549, Toulouse, France; Service d'Otologie, Otoneurologie et ORL pédiatrique, Hôpital Pierre-Paul Riquet, CHU Toulouse Purpan, France
6
Middlebrooks JC, Waters MF. Spatial Mechanisms for Segregation of Competing Sounds, and a Breakdown in Spatial Hearing. Front Neurosci 2020; 14:571095. PMID: 33041763; PMCID: PMC7525094; DOI: 10.3389/fnins.2020.571095. Open access.
Abstract
We live in complex auditory environments in which we are confronted with multiple competing sounds, including the cacophony of talkers in busy markets, classrooms, offices, etc. The purpose of this article is to synthesize observations from a series of experiments that focused on how spatial hearing might aid in disentangling interleaved sequences of sounds. The experiments were unified by a non-verbal task, "rhythmic masking release", which was applied to psychophysical studies in humans and cats and to cortical physiology in anesthetized cats. Human and feline listeners could segregate competing sequences of sounds from sources that were separated by as little as ∼10°. Similarly, single neurons in the cat primary auditory cortex tended to synchronize selectively to sound sequences from one of two competing sources, again with a spatial resolution of ∼10°. The acuity of spatial stream segregation varied widely depending on the binaural and monaural acoustical cues that were available in the various experimental conditions. This contrasts with a measure of basic sound-source localization, the minimum audible angle, which showed largely constant acuity across those conditions. The differential utilization of acoustical cues suggests that the central spatial mechanisms for stream segregation differ from those for sound localization. The highest-acuity spatial stream segregation was derived from interaural time and level differences. Brainstem processing of those cues is thought to rely heavily on the normal function of a voltage-gated potassium channel, Kv3.3. We studied a family carrying a dominant-negative mutation in the gene for that channel. Affected family members exhibited a severe loss of sensitivity to interaural time and level differences, which almost certainly degrades their ability to segregate competing sounds in real-world auditory scenes.
Affiliation(s)
- John C. Middlebrooks
- Departments of Otolaryngology, Neurobiology and Behavior, Cognitive Sciences, and Biomedical Engineering, University of California, Irvine, Irvine, CA, United States
- Michael F. Waters
- Department of Neurology, Barrow Neurological Institute, Phoenix, AZ, United States
7
Tissieres I, Crottaz-Herbette S, Clarke S. Implicit representation of the auditory space: contribution of the left and right hemispheres. Brain Struct Funct 2019; 224:1569-1582. PMID: 30848352; DOI: 10.1007/s00429-019-01853-5.
Abstract
Spatial cues contribute to the ability to segregate sound sources and thus facilitate their detection and recognition. This implicit use of spatial cues can be preserved in cases of cortical spatial deafness, suggesting that partially distinct neural networks underlie explicit sound localization and the implicit use of spatial cues. We addressed this issue by assessing 40 patients, 20 with left and 20 with right hemispheric damage, for their ability to use auditory spatial cues implicitly in a paradigm of spatial release from masking (SRM) and explicitly in sound localization. The anatomical correlates of their performance were determined with voxel-based lesion-symptom mapping (VLSM). During the SRM task, the target was always presented at the centre, whereas the masker was presented at the centre or at one of two lateral positions on the right or left side. The SRM effect was absent in some but not all patients; the inability to perceive the target when the masker was at one of the lateral positions correlated with lesions of the left temporo-parieto-frontal cortex or of the right inferior parietal lobule and the underlying white matter. As previously reported, sound localization depended critically on the right parietal and opercular cortex. Thus, the explicit and implicit use of spatial cues depends on at least partially distinct neural networks. Our results suggest that the implicit use may rely on the left-dominant position-linked representation of sound objects, which has been demonstrated in previous EEG and fMRI studies.
Affiliation(s)
- Isabel Tissieres
- Service de neuropsychologie et de neuroréhabilitation, Centre Hospitalier Universitaire Vaudois (CHUV), Université de Lausanne, Lausanne, Switzerland
- Sonia Crottaz-Herbette
- Service de neuropsychologie et de neuroréhabilitation, Centre Hospitalier Universitaire Vaudois (CHUV), Université de Lausanne, Lausanne, Switzerland
- Stephanie Clarke
- Service de neuropsychologie et de neuroréhabilitation, Centre Hospitalier Universitaire Vaudois (CHUV), Université de Lausanne, Lausanne, Switzerland.
8
Chillemi G, Calamuneri A, Quartarone A, Terranova C, Salatino A, Cacciola A, Milardi D, Ricci R. Endogenous orientation of visual attention in auditory space. J Adv Res 2019; 18:95-100. PMID: 30828479; PMCID: PMC6383076; DOI: 10.1016/j.jare.2019.01.010. Open access.
Abstract
Highlights:
- Facilitation was observed for right-sided auditory stimuli in a new visuo-audio task.
- Auditory space has a dynamic nature, which adapts to changes in visual space.
- Sound localization was enhanced by visual cues.
- Crossmodal links in spatial attention were found between audition and vision.
- These findings have theoretical and translational implications for future studies.
Visuospatial attention is asymmetrically distributed with a leftward bias (i.e., pseudoneglect), whereas evidence for asymmetries in auditory spatial attention is still controversial. In the present study, we investigated putative asymmetries in the distribution of auditory spatial attention and the influence that visual information might have on its deployment. A modified version of the Posner task, the visuo-audio spatial task (VAST), was used to investigate the spatial processing of auditory targets when the endogenous orientation of spatial attention was mediated by visual cues in healthy adults. A line bisection task (LBT) was also administered to assess the presence of a leftward bias in the deployment of visuospatial attention. Overall, participants showed a rightward bias in the VAST and a leftward bias in the LBT. In the VAST, sound localization was enhanced by visual cues. Altogether, these findings support the existence of a facilitation effect for auditory targets originating from the right side of space and provide new evidence for crossmodal links in endogenous spatial attention between vision and audition.
Affiliation(s)
- Gaetana Chillemi
- IRCCS Centro Neurolesi Bonino Pulejo, Contrada Casazza, SS113, 98124 Messina, Italy
- Angelo Quartarone
- IRCCS Centro Neurolesi Bonino Pulejo, Contrada Casazza, SS113, 98124 Messina, Italy; Department of Biomedical and Dental Sciences and Morphofunctional Imaging, University of Messina, Via Consolare Valeria 1, Gazzi, 98125 Messina, Italy
- Carmen Terranova
- Department of Clinical and Experimental Medicine, Endocrinology, University of Messina, Via Consolare Valeria 1, Gazzi, 98125 Messina, Italy
- Adriana Salatino
- Department of Psychology, University of Torino, Torino 10123, Italy
- Alberto Cacciola
- IRCCS Centro Neurolesi Bonino Pulejo, Contrada Casazza, SS113, 98124 Messina, Italy; Department of Biomedical and Dental Sciences and Morphofunctional Imaging, University of Messina, Via Consolare Valeria 1, Gazzi, 98125 Messina, Italy
- Demetrio Milardi
- IRCCS Centro Neurolesi Bonino Pulejo, Contrada Casazza, SS113, 98124 Messina, Italy; Department of Biomedical and Dental Sciences and Morphofunctional Imaging, University of Messina, Via Consolare Valeria 1, Gazzi, 98125 Messina, Italy
- Raffaella Ricci
- Department of Psychology, University of Torino, Torino 10123, Italy
9
Tissieres I, Crottaz-Herbette S, Clarke S. Exploring auditory neglect: Anatomo-clinical correlations of auditory extinction. Ann Phys Rehabil Med 2018; 61:386-394. DOI: 10.1016/j.rehab.2018.05.001.
10
Activity in Human Auditory Cortex Represents Spatial Separation Between Concurrent Sounds. J Neurosci 2018; 38:4977-4984. PMID: 29712782; DOI: 10.1523/jneurosci.3323-17.2018. Open access.
Abstract
The primary and posterior auditory cortex (AC) are known for their sensitivity to spatial information, but how this information is processed is not yet understood. AC that is sensitive to spatial manipulations is also modulated by the number of auditory streams present in a scene (Smith et al., 2010), suggesting that spatial and nonspatial cues are integrated for stream segregation. We reasoned that, if this is the case, then it is the distance between sounds rather than their absolute positions that is essential. To test this hypothesis, we measured human brain activity in response to spatially separated concurrent sounds with fMRI at 7 tesla in five men and five women. Stimuli were spatialized amplitude-modulated broadband noises recorded for each participant via in-ear microphones before scanning. Using a linear support vector machine classifier, we investigated whether sound location and/or location plus spatial separation between sounds could be decoded from the activity in Heschl's gyrus and the planum temporale. The classifier was successful only when comparing patterns associated with the conditions that had the largest difference in perceptual spatial separation. Our pattern of results suggests that the representation of spatial separation is not merely the combination of single locations, but rather is an independent feature of the auditory scene.

SIGNIFICANCE STATEMENT: Often, when we think of auditory spatial information, we think of where sounds are coming from, that is, the process of localization. However, this information can also be used in scene analysis, the process of grouping and segregating features of a soundwave into objects. Essentially, when sounds are further apart, they are more likely to be segregated into separate streams. Here, we provide evidence that activity in the human auditory cortex represents the spatial separation between sounds rather than their absolute locations, indicating that scene analysis and localization processes may be independent.
11
Da Costa S, Clarke S, Crottaz-Herbette S. Keeping track of sound objects in space: The contribution of early-stage auditory areas. Hear Res 2018; 366:17-31. PMID: 29643021; DOI: 10.1016/j.heares.2018.03.027.
Abstract
The influential dual-stream model of auditory processing stipulates that information pertaining to the meaning and to the position of a given sound object is processed in parallel along two distinct pathways, the ventral and dorsal auditory streams. The functional independence of the two processing pathways is well documented by the conscious experience of patients with focal hemispheric lesions. On the other hand, there is growing evidence that the meaning and the position of a sound are combined early in the processing pathway, possibly already at the level of early-stage auditory areas. Here, we investigated how early auditory areas integrate sound object meaning and space (simulated by interaural time differences) using a repetition suppression fMRI paradigm at 7 T. Subjects listened passively to environmental sounds presented in blocks of repetitions of the same sound object (same category) or of different sound objects (different categories), perceived either in the left or right space (no change within the block) or shifted left-to-right or right-to-left halfway through the block (change within the block). Environmental sounds activated bilaterally the superior temporal gyrus, middle temporal gyrus, inferior frontal gyrus, and the right precentral cortex. Repetition suppression effects were measured within bilateral early-stage auditory areas in the lateral portion of Heschl's gyrus and the posterior superior temporal plane. The left lateral early-stage areas showed significant effects of position and change, and the interactions Category × Initial Position and Category × Change in Position, while the right lateral areas showed a main effect of category and the interaction Category × Change in Position. The combined evidence from our study and from previous studies speaks in favour of a position-linked representation of sound objects, which is independent of semantic encoding within the ventral stream and of spatial encoding within the dorsal stream. We argue for a third auditory stream, which has its origin in the lateral belt areas and tracks sound objects across space.
Affiliation(s)
- Sandra Da Costa
- Centre d'Imagerie BioMédicale (CIBM), EPFL et Universités de Lausanne et de Genève, Bâtiment CH, Station 6, CH-1015 Lausanne, Switzerland.
- Stephanie Clarke
- Service de Neuropsychologie et de Neuroréhabilitation, CHUV, Université de Lausanne, Avenue Pierre Decker 5, CH-1011 Lausanne, Switzerland
- Sonia Crottaz-Herbette
- Service de Neuropsychologie et de Neuroréhabilitation, CHUV, Université de Lausanne, Avenue Pierre Decker 5, CH-1011 Lausanne, Switzerland
12
For Better or Worse: The Effect of Prismatic Adaptation on Auditory Neglect. Neural Plast 2017; 2017:8721240. PMID: 29138699; PMCID: PMC5613466; DOI: 10.1155/2017/8721240. Open access.
Abstract
Patients with auditory neglect attend less to auditory stimuli on their left and/or make systematic directional errors when indicating sound positions. Rightward prismatic adaptation (R-PA) has repeatedly been shown to alleviate symptoms of visuospatial neglect and, in one study, to partially restore the spatial bias in dichotic listening. It is currently unknown whether R-PA affects only this ear-related symptom or also other aspects of auditory neglect. We investigated the effect of R-PA on left-ear extinction in dichotic listening, on space-related inattention assessed by diotic listening, and on directional errors in auditory localization in patients with auditory neglect. The most striking effect of R-PA was the alleviation of left-ear extinction in dichotic listening, which occurred in half of the patients with an initial deficit. In contrast to nonresponders, their lesions spared the right dorsal attentional system and the posterior temporal cortex. The beneficial effect of R-PA on ear-related performance contrasted with its detrimental effects on diotic listening and auditory localization. The former can be parsimoniously explained by the SHD-VAS model (shift in hemispheric dominance within the ventral attentional system; Clarke and Crottaz-Herbette, 2016), which is based on the R-PA-induced shift of the right-dominant ventral attentional system to the left hemisphere. The negative effects in the space-related tasks may be due to the complex nature of auditory space encoding at the cortical level.
13
Evidence for cue-independent spatial representation in the human auditory cortex during active listening. Proc Natl Acad Sci U S A 2017; 114:E7602-E7611. PMID: 28827357; DOI: 10.1073/pnas.1707522114. Open access.
Abstract
Few auditory functions are as important or as universal as the capacity for auditory spatial awareness (e.g., sound localization). That ability relies on sensitivity to acoustical cues, particularly interaural time and level differences (ITD and ILD), that correlate with sound-source locations. Under nonspatial listening conditions, cortical sensitivity to ITD and ILD takes the form of broad contralaterally dominated response functions. It is unknown, however, whether that sensitivity reflects representations of the specific physical cues or a higher-order representation of auditory space (i.e., integrated cue processing), nor is it known whether responses to spatial cues are modulated by active spatial listening. To investigate, sensitivity to parametrically varied ITD or ILD cues was measured using fMRI during spatial and nonspatial listening tasks. Task type varied across blocks in which targets were presented in one of three dimensions: auditory location, pitch, or visual brightness. Task effects were localized primarily to the lateral posterior superior temporal gyrus (pSTG) and modulated binaural-cue response functions differently in the two hemispheres. Active spatial listening (location tasks) enhanced both contralateral and ipsilateral responses in the right hemisphere but maintained or enhanced contralateral dominance in the left hemisphere. Two observations suggest integrated processing of ITD and ILD. First, overlapping regions in the medial pSTG exhibited significant sensitivity to both cues. Second, successful classification of multivoxel patterns was observed for both cue types and, critically, for cross-cue classification. Together, these results suggest a higher-order representation of auditory space in the human auditory cortex that at least partly integrates the specific underlying cues.
Collapse
|
14
|
Cogné M, Knebel JF, Klinger E, Bindschaedler C, Rapin PA, Joseph PA, Clarke S. The effect of contextual auditory stimuli on virtual spatial navigation in patients with focal hemispheric lesions. Neuropsychol Rehabil 2016; 28:1-16. [PMID: 27653552 DOI: 10.1080/09602011.2015.1127260] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/22/2022]
Abstract
Topographical disorientation is a frequent deficit among patients suffering from brain injury. Spatial navigation can be explored in this population using virtual reality environments, even in the presence of motor or sensory disorders. Furthermore, the positive or negative impact of specific stimuli can be investigated. We studied how auditory stimuli influence the performance of brain-injured patients in a navigational task, using the Virtual Action Planning-Supermarket (VAP-S) with the addition of contextual ("sonar effect" and "name of product") and non-contextual ("periodic randomised noises") auditory stimuli. The study included 22 patients with a first unilateral hemispheric brain lesion and 17 healthy age-matched control subjects. After familiarisation with the software, all subjects were tested without auditory stimuli, with a sonar effect or periodic random sounds in a random order, and with the stimulus "name of product". Contextual auditory stimuli improved patient performance more than control group performance; the benefit was greatest for patients with severe executive dysfunction or severe unilateral neglect. These results indicate that contextual auditory stimuli are useful in the assessment of navigational abilities in brain-damaged patients and that they should be used in rehabilitation paradigms.
Collapse
Affiliation(s)
- Mélanie Cogné
- a Rehabilitation Medicine Unit and EA4136, University of Bordeaux, Bordeaux, France
| | - Jean-François Knebel
- b Neuropsychology and Neurorehabilitation Service, Centre Hospitalier Universitaire Vaudois (CHUV), Lausanne, Switzerland; c Laboratory for Investigative Neurophysiology (The LINE), Department of Radiology and Department of Clinical Neurosciences, University Hospital Center and University of Lausanne, Lausanne, Switzerland; d EEG Brain Mapping Core, Centre for Biomedical Imaging (CIBM), Lausanne, Switzerland
| | - Evelyne Klinger
- e Digital Interactions Health and Disability Lab, ESIEA, Laval, France; f French Institute for Research on Handicap, Paris, France
| | - Claire Bindschaedler
- b Neuropsychology and Neurorehabilitation Service, Centre Hospitalier Universitaire Vaudois (CHUV), Lausanne, Switzerland
| | | | - Pierre-Alain Joseph
- a Rehabilitation Medicine Unit and EA4136, University of Bordeaux, Bordeaux, France; f French Institute for Research on Handicap, Paris, France
| | - Stephanie Clarke
- b Neuropsychology and Neurorehabilitation Service, Centre Hospitalier Universitaire Vaudois (CHUV), Lausanne, Switzerland
| |
Collapse
|
15
|
Derey K, Valente G, de Gelder B, Formisano E. Opponent Coding of Sound Location (Azimuth) in Planum Temporale is Robust to Sound-Level Variations. Cereb Cortex 2015; 26:450-464. [PMID: 26545618 PMCID: PMC4677988 DOI: 10.1093/cercor/bhv269] [Citation(s) in RCA: 30] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/04/2022] Open
Abstract
Coding of sound location in auditory cortex (AC) is only partially understood. Recent electrophysiological research suggests that neurons in mammalian auditory cortex are characterized by broad spatial tuning and a preference for the contralateral hemifield, that is, a nonuniform sampling of sound azimuth. Additionally, spatial selectivity decreases with increasing sound intensity. To accommodate these findings, it has been proposed that sound location is encoded by the integrated activity of neuronal populations with opposite hemifield tuning (“opponent channel model”). In this study, we investigated the validity of such a model in human AC with functional magnetic resonance imaging (fMRI) and a phase-encoding paradigm employing binaural stimuli recorded individually for each participant. In all subjects, we observed preferential fMRI responses to contralateral azimuth positions. Additionally, in most AC locations, spatial tuning was broad and not level invariant. We derived an opponent channel model of the fMRI responses by subtracting the activity of contralaterally tuned regions in bilateral planum temporale. This resulted in accurate decoding of sound azimuth location, which was unaffected by changes in sound level. Our data thus support opponent channel coding as a neural mechanism for representing acoustic azimuth in human AC.
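The opponent-channel read-out described in this abstract can be sketched as follows. This is a toy model under stated assumptions (one broadly tuned channel per hemisphere, sigmoidal tuning, multiplicative level scaling), not the study's fitted model; it shows why a normalized channel difference is robust to sound-level variations.

```python
# Toy opponent-channel model: two broad channels, each preferring the
# contralateral hemifield; azimuth is read out from their difference.
import math

def channel_response(azimuth_deg, preferred_side, gain=1.0):
    """Broad sigmoidal tuning for one hemifield.
    preferred_side = +1 (right hemifield, left-hemisphere channel)
    or -1 (left hemifield, right-hemisphere channel).
    `gain` stands in for overall sound level (assumption: multiplicative)."""
    return gain / (1.0 + math.exp(-preferred_side * azimuth_deg / 20.0))

def opponent_decode(azimuth_deg, gain=1.0):
    """Normalized difference of the two channel outputs.
    A multiplicative gain cancels in (r - l) / (r + l), which is why the
    decoded azimuth is unaffected by changes in sound level."""
    r = channel_response(azimuth_deg, +1, gain)
    l = channel_response(azimuth_deg, -1, gain)
    return (r - l) / (r + l)

for az in (-60, 0, 60):
    print(az, round(opponent_decode(az, gain=1.0), 3),
          round(opponent_decode(az, gain=5.0), 3))
```

The printed read-out is identical at both gains, mirroring the level-invariant decoding reported in the abstract.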
Collapse
Affiliation(s)
- Kiki Derey
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht 6200 MD, The Netherlands
| | - Giancarlo Valente
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht 6200 MD, The Netherlands
| | - Beatrice de Gelder
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht 6200 MD, The Netherlands
| | - Elia Formisano
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht 6200 MD, The Netherlands
| |
Collapse
|
16
|
Roaring lions and chirruping lemurs: How the brain encodes sound objects in space. Neuropsychologia 2015; 75:304-13. [DOI: 10.1016/j.neuropsychologia.2015.06.012] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/20/2014] [Revised: 06/07/2015] [Accepted: 06/10/2015] [Indexed: 01/29/2023]
|
17
|
Abstract
The auditory system derives locations of sound sources from spatial cues provided by the interaction of sound with the head and external ears. Those cues are analyzed in specific brainstem pathways and then integrated as cortical representation of locations. The principal cues for horizontal localization are interaural time differences (ITDs) and interaural differences in sound level (ILDs). Vertical and front/back localization rely on spectral-shape cues derived from direction-dependent filtering properties of the external ears. The likely first sites of analysis of these cues are the medial superior olive (MSO) for ITDs, lateral superior olive (LSO) for ILDs, and dorsal cochlear nucleus (DCN) for spectral-shape cues. Localization in distance is much less accurate than that in horizontal and vertical dimensions, and interpretation of the basic cues is influenced by additional factors, including acoustics of the surroundings and familiarity of source spectra and levels. Listeners are quite sensitive to sound motion, but it remains unclear whether that reflects specific motion detection mechanisms or simply detection of changes in static location. Intact auditory cortex is essential for normal sound localization. Cortical representation of sound locations is highly distributed, with no evidence for point-to-point topography. Spatial representation is strictly contralateral in laboratory animals that have been studied, whereas humans show a prominent right-hemisphere dominance.
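The principal horizontal-localization cue mentioned above, the ITD, can be approximated from head geometry alone. A minimal sketch using Woodworth's classic spherical-head formula, with nominal values assumed for head radius and the speed of sound:

```python
# Woodworth's spherical-head approximation of the interaural time
# difference (ITD) for a distant source: ITD = (a/c) * (theta + sin theta).
import math

HEAD_RADIUS_M = 0.0875    # nominal adult head radius (assumption)
SPEED_OF_SOUND = 343.0    # m/s in air at ~20 degC

def itd_woodworth(azimuth_deg):
    """ITD in seconds for a far-field source at the given azimuth."""
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS_M / SPEED_OF_SOUND) * (theta + math.sin(theta))

# At 90 deg azimuth this yields the classic maximum ITD of roughly 0.66 ms.
print(f"ITD at 90 deg: {itd_woodworth(90) * 1e6:.0f} us")
```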
Collapse
Affiliation(s)
- John C Middlebrooks
- Departments of Otolaryngology, Neurobiology and Behavior, Cognitive Sciences, and Biomedical Engineering, University of California at Irvine, Irvine, CA, USA.
| |
Collapse
|
18
|
Bourquin NMP, Murray MM, Clarke S. Location-independent and location-linked representations of sound objects. Neuroimage 2013; 73:40-9. [DOI: 10.1016/j.neuroimage.2013.01.026] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/02/2012] [Revised: 01/14/2013] [Accepted: 01/16/2013] [Indexed: 11/24/2022] Open
|