1. Mazo C, Baeta M, Petreanu L. Auditory cortex conveys non-topographic sound localization signals to visual cortex. Nat Commun 2024;15:3116. PMID: 38600132; PMCID: PMC11006897; DOI: 10.1038/s41467-024-47546-4. Received 05/17/2023; accepted 04/02/2024.
Abstract
Spatiotemporally congruent sensory stimuli are fused into a unified percept. The auditory cortex (AC) sends projections to the primary visual cortex (V1), which could provide signals for binding spatially corresponding audio-visual stimuli. However, whether AC inputs in V1 encode sound location remains unknown. Using two-photon axonal calcium imaging and a speaker array, we measured the auditory spatial information transmitted from AC to layer 1 of V1. AC conveys information about the location of ipsilateral and contralateral sound sources to V1. Sound location could be accurately decoded by sampling AC axons in V1, providing a substrate for making location-specific audiovisual associations. However, AC inputs were not retinotopically arranged in V1, and audio-visual modulations of V1 neurons did not depend on the spatial congruency of the sound and light stimuli. The non-topographic sound localization signals provided by AC might allow the association of specific audiovisual spatial patterns in V1 neurons.
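The central decoding claim of this abstract (sound location can be read out by sampling AC axons even though they are not topographically arranged) can be illustrated with a toy simulation. Everything below is an illustrative assumption, not the authors' pipeline: the Gaussian tuning curves, the axon and speaker counts, the noise level, and the nearest-centroid decoder are all invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

n_axons, n_locations = 200, 8          # hypothetical axon count and speaker array
azimuths = np.linspace(-135, 135, n_locations)

# Each simulated axon gets a random preferred azimuth and tuning width,
# deliberately unordered -- mimicking a non-topographic arrangement in V1.
pref = rng.uniform(-135, 135, n_axons)
width = rng.uniform(30, 90, n_axons)
tuning = np.exp(-0.5 * ((azimuths[None, :] - pref[:, None]) / width[:, None]) ** 2)

def simulate_trials(n_trials, noise=0.3):
    """Noisy population responses (trials x axons) plus location labels."""
    labels = rng.integers(0, n_locations, n_trials)
    resp = tuning[:, labels].T + noise * rng.standard_normal((n_trials, n_axons))
    return resp, labels

def decode_accuracy(n_sampled):
    """Nearest-centroid decoding of location from a random axon subsample."""
    idx = rng.choice(n_axons, n_sampled, replace=False)
    train_r, train_l = simulate_trials(400)
    test_r, test_l = simulate_trials(200)
    centroids = np.array([train_r[train_l == k][:, idx].mean(0)
                          for k in range(n_locations)])
    dists = ((test_r[:, idx][:, None, :] - centroids[None]) ** 2).sum(-1)
    return (np.argmin(dists, axis=1) == test_l).mean()

for n in (5, 50, 200):
    print(n, round(decode_accuracy(n), 2))
```

In this toy model, accuracy rises well above chance (1/8) as more axons are sampled, even though no spatial map exists in the sampled population.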
Affiliation(s)
- Camille Mazo: Champalimaud Neuroscience Programme, Champalimaud Foundation, Lisbon, Portugal
- Margarida Baeta: Champalimaud Neuroscience Programme, Champalimaud Foundation, Lisbon, Portugal
- Leopoldo Petreanu: Champalimaud Neuroscience Programme, Champalimaud Foundation, Lisbon, Portugal
2. Mai J, Gargiullo R, Zheng M, Esho V, Hussein OE, Pollay E, Bowe C, Williamson LM, McElroy AF, Goolsby WN, Brooks KA, Rodgers CC. Sound-seeking before and after hearing loss in mice. bioRxiv [Preprint] 2024:2024.01.08.574475. PMID: 38260458; PMCID: PMC10802496; DOI: 10.1101/2024.01.08.574475.
Abstract
How we move our bodies affects how we perceive sound. For instance, we can explore an environment to seek out the source of a sound and we can use head movements to compensate for hearing loss. How we do this is not well understood because many auditory experiments are designed to limit head and body movements. To study the role of movement in hearing, we developed a behavioral task called sound-seeking that rewarded mice for tracking down an ongoing sound source. Over the course of learning, mice more efficiently navigated to the sound. We then asked how auditory behavior was affected by hearing loss induced by surgical removal of the malleus from the middle ear. An innate behavior, the auditory startle response, was abolished by bilateral hearing loss and unaffected by unilateral hearing loss. Similarly, performance on the sound-seeking task drastically declined after bilateral hearing loss and did not recover. In striking contrast, mice with unilateral hearing loss were only transiently impaired on sound-seeking; over a recovery period of about a week, they regained high levels of performance, increasingly reliant on a different spatial sampling strategy. Thus, even in the face of permanent unilateral damage to the peripheral auditory system, mice recover their ability to perform a naturalistic sound-seeking task. This paradigm provides an opportunity to examine how body movement enables better hearing and resilient adaptation to sensory deprivation.
Affiliation(s)
- Jessica Mai, Rowan Gargiullo, Megan Zheng, Valentina Esho, Osama E Hussein, Eliana Pollay: Department of Neurosurgery, Emory University School of Medicine, Atlanta, GA 30322
- Cedric Bowe: Neuroscience Graduate Program, Emory University, Atlanta, GA 30322
- William N Goolsby: Department of Cell Biology, Emory University School of Medicine, Atlanta, GA 30322
- Kaitlyn A Brooks: Department of Otolaryngology - Head and Neck Surgery, Emory University School of Medicine, Atlanta, GA 30308
- Chris C Rodgers: Department of Neurosurgery, Emory University School of Medicine, Atlanta, GA 30322; Department of Cell Biology, Emory University School of Medicine, Atlanta, GA 30322; Department of Biomedical Engineering, Georgia Tech and Emory University School of Medicine, Atlanta, GA 30322; Department of Biology, Emory College of Arts and Sciences, Atlanta, GA 30322
3. Hallett M. Medial-lateral organization of primary auditory cortex and the question of sound localization. J Comp Neurol 2023;531:1893-1896. PMID: 37357573; PMCID: PMC10749981; DOI: 10.1002/cne.25516. Received 10/04/2022; revised 05/14/2023; accepted 05/18/2023.
Abstract
Pandya made many important contributions to the understanding of the anatomy of the cortical auditory pathways beginning with his publication in 1969. This review focuses on the observation in that article on the transcallosal connections of the primary auditory cortex. The medial part of the cortex has such connections, but the lateral part does not. Pandya and colleagues speculated that this might have something to do with spatial localization of sound. Review of the subsequent literature shows that the primary auditory cortex anatomy is complex, but the original observation is likely correct. However, the physiological speculation was not.
Affiliation(s)
- Mark Hallett: National Institute of Neurological Disorders and Stroke, NIH, Bethesda
4. Vivaldo CA, Lee J, Shorkey M, Keerthy A, Rothschild G. Auditory cortex ensembles jointly encode sound and locomotion speed to support sound perception during movement. PLoS Biol 2023;21:e3002277. PMID: 37651461; PMCID: PMC10499203; DOI: 10.1371/journal.pbio.3002277. Received 01/19/2023; revised 09/13/2023; accepted 07/26/2023.
Abstract
The ability to process and act upon incoming sounds during locomotion is critical for survival and adaptive behavior. Despite the established role that the auditory cortex (AC) plays in behavior- and context-dependent sound processing, previous studies have found that auditory cortical activity is on average suppressed during locomotion as compared to immobility. While suppression of auditory cortical responses to self-generated sounds results from corollary discharge, which weakens responses to predictable sounds, the functional role of weaker responses to unpredictable external sounds during locomotion remains unclear. In particular, whether suppression of external sound-evoked responses during locomotion reflects reduced involvement of the AC in sound processing or whether it results from masking by an alternative neural computation in this state remains unresolved. Here, we tested the hypothesis that rather than simple inhibition, reduced sound-evoked responses during locomotion reflect a tradeoff with the emergence of explicit and reliable coding of locomotion velocity. To test this hypothesis, we first used neural inactivation in behaving mice and found that the AC plays a critical role in sound-guided behavior during locomotion. To investigate the nature of this processing, we used two-photon calcium imaging of local excitatory auditory cortical neural populations in awake mice. We found that locomotion had diverse influences on activity of different neurons, with a net suppression of baseline-subtracted sound-evoked responses and neural stimulus detection, consistent with previous studies. Importantly, we found that the net inhibitory effect of locomotion on baseline-subtracted sound-evoked responses was strongly shaped by elevated ongoing activity that compressed the response dynamic range, and that rather than reflecting enhanced "noise," this ongoing activity reliably encoded the animal's locomotion speed. 
Decoding analyses revealed that locomotion speed and sound are robustly co-encoded by auditory cortical ensemble activity. Finally, we found consistent patterns of joint coding of sound and locomotion speed in electrophysiologically recorded activity in freely moving rats. Together, our data suggest that rather than being suppressed by locomotion, auditory cortical ensembles explicitly encode it alongside sound information to support sound perception during locomotion.
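The joint-coding result described above (locomotion speed and sound identity co-decoded from the same population) can be sketched with a toy mixed-selectivity simulation. The population size, gain values, noise level, and the two simple readouts (least-squares regression for speed, nearest-centroid for sound) are assumptions made for illustration, not the study's actual analyses.

```python
import numpy as np

rng = np.random.default_rng(1)
n_neurons, n_trials = 100, 500

# Mixed selectivity: each neuron combines a sound response with a
# speed-dependent ongoing-activity term (all gains drawn at random).
sound_id = rng.integers(0, 4, n_trials)            # 4 hypothetical sounds
speed = rng.uniform(0, 30, n_trials)               # locomotion speed, cm/s
sound_gain = rng.standard_normal((n_neurons, 4))
speed_gain = 0.1 * rng.standard_normal(n_neurons)
rates = (sound_gain[:, sound_id] + speed_gain[:, None] * speed
         + 0.5 * rng.standard_normal((n_neurons, n_trials))).T

train, test = slice(0, 250), slice(250, 500)

# Decode locomotion speed with ordinary least squares.
X = np.c_[rates, np.ones(n_trials)]
w, *_ = np.linalg.lstsq(X[train], speed[train], rcond=None)
speed_r = np.corrcoef(X[test] @ w, speed[test])[0, 1]

# Decode sound identity from the very same responses (nearest centroid).
cent = np.array([rates[train][sound_id[train] == k].mean(0) for k in range(4)])
pred = np.argmin(((rates[test][:, None] - cent[None]) ** 2).sum(-1), 1)
sound_acc = (pred == sound_id[test]).mean()

print(f"speed r = {speed_r:.2f}, sound accuracy = {sound_acc:.2f}")
```

Both readouts succeed on held-out trials, showing how elevated "ongoing" activity need not be noise: it can carry a second, behaviorally relevant variable alongside the stimulus code.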
Affiliation(s)
- Carlos Arturo Vivaldo, Joonyeup Lee, MaryClaire Shorkey, Ajay Keerthy: Department of Psychology, University of Michigan, Ann Arbor, Michigan, USA
- Gideon Rothschild: Department of Psychology, University of Michigan, Ann Arbor, Michigan, USA; Kresge Hearing Research Institute and Department of Otolaryngology—Head and Neck Surgery, University of Michigan, Ann Arbor, Michigan, USA
5. Goal-driven, neurobiological-inspired convolutional neural network models of human spatial hearing. Neurocomputing 2022. DOI: 10.1016/j.neucom.2021.05.104.
6. Mackey C, Tarabillo A, Ramachandran R. Three psychophysical metrics of auditory temporal integration in macaques. J Acoust Soc Am 2021;150:3176. PMID: 34717465; PMCID: PMC8556002; DOI: 10.1121/10.0006658.
Abstract
The relationship between sound duration and detection threshold has long been thought to reflect temporal integration. Reports of species differences in this relationship are equivocal: some meta-analyses report no species differences, whereas others report substantial differences, particularly between humans and their close phylogenetic relatives, macaques. This renders translational work in macaques problematic. To reevaluate this difference, tone detection performance was measured in macaques using a go/no-go reaction time (RT) task at various tone durations and in the presence of broadband noise (BBN). Detection thresholds, RTs, and the dynamic range (DR) of the psychometric function decreased as the tone duration increased. The threshold by duration trends suggest macaques integrate at a similar rate to humans. The RT trends also resemble human data and are the first reported in animals. Whereas the BBN did not affect how the threshold or RT changed with the duration, it substantially reduced the DR at short durations. A probabilistic Poisson model replicated the effects of duration on threshold and DR and required integration from multiple simulated auditory nerve fibers to explain the performance at shorter durations. These data suggest that, contrary to previous studies, macaques are uniquely well-suited to model human temporal integration and form the baseline for future neurophysiological studies.
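The probabilistic Poisson account of temporal integration mentioned above can be sketched as follows. All numbers here are invented for illustration (spontaneous rate, rate-per-dB gain, fiber count, and the 76%-correct criterion); this is not the fitted model from the paper, only a demonstration that pooling Poisson counts over longer durations and multiple fibers lowers detection threshold.

```python
import numpy as np

rng = np.random.default_rng(2)

def detect_prob(level_db, dur_s, n_fibers=10, n_sims=2000):
    """P(signal count > noise count) for pooled Poisson fiber counts.

    Driven rate grows linearly with level; spontaneous rate is fixed.
    Ties count as half correct (2AFC-style scoring).
    """
    spont, gain = 50.0, 2.0                  # spikes/s and spikes/s per dB
    driven = spont + gain * max(level_db, 0.0)
    sig = rng.poisson(driven * dur_s * n_fibers, n_sims)
    noise = rng.poisson(spont * dur_s * n_fibers, n_sims)
    return (sig > noise).mean() + 0.5 * (sig == noise).mean()

def threshold(dur_s, target=0.76):
    """Lowest level (1-dB steps) whose detection probability meets target."""
    for level in range(0, 61):
        if detect_prob(level, dur_s) >= target:
            return level
    return 60

for dur in (0.025, 0.1, 0.4):
    print(f"{dur*1000:.0f} ms tone -> threshold {threshold(dur)} dB")
```

Longer tones accumulate more driven spikes relative to the Poisson variability of the spontaneous count, so the simulated threshold falls with duration, qualitatively reproducing the threshold-by-duration trend the abstract describes.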
Affiliation(s)
- Chase Mackey: Neuroscience Graduate Program, Vanderbilt University, Nashville, Tennessee 37240, USA
- Alejandro Tarabillo: Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, Tennessee 37232, USA
- Ramnarayan Ramachandran: Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, Tennessee 37232, USA
7. The Neurophysiological Basis of the Trial-Wise and Cumulative Ventriloquism Aftereffects. J Neurosci 2021;41:1068-1079. PMID: 33273069; PMCID: PMC7880291; DOI: 10.1523/jneurosci.2091-20.2020. Received 08/07/2020; revised 10/12/2020; accepted 11/08/2020.
Abstract
Our senses often receive conflicting multisensory information, which our brain reconciles by adaptive recalibration. A classic example is the ventriloquism aftereffect, which emerges following both cumulative (long-term) and trial-wise exposure to spatially discrepant multisensory stimuli. Despite the importance of such adaptive mechanisms for interacting with environments that change over multiple timescales, it remains debated whether the ventriloquism aftereffects observed following trial-wise and cumulative exposure arise from the same neurophysiological substrate. We address this question by probing electroencephalography recordings from healthy humans (both sexes) for processes predictive of the aftereffect biases following the exposure to spatially offset audiovisual stimuli. Our results support the hypothesis that discrepant multisensory evidence shapes aftereffects on distinct timescales via common neurophysiological processes reflecting sensory inference and memory in parietal-occipital regions, while the cumulative exposure to consistent discrepancies additionally recruits prefrontal processes. During the subsequent unisensory trial, both trial-wise and cumulative exposure bias the encoding of the acoustic information, but do so distinctly. Our results posit a central role of parietal regions in shaping multisensory spatial recalibration, suggest that frontal regions consolidate the behavioral bias for persistent multisensory discrepancies, but also show that the trial-wise and cumulative exposure bias sound position encoding via distinct neurophysiological processes.

SIGNIFICANCE STATEMENT
Our brain easily reconciles conflicting multisensory information, such as seeing an actress on screen while hearing her voice over headphones. These adaptive mechanisms exert a persistent influence on the perception of subsequent unisensory stimuli, known as the ventriloquism aftereffect. While this aftereffect emerges following trial-wise or cumulative exposure to multisensory discrepancies, it remained unclear whether both arise from a common neural substrate. We here test this hypothesis using human electroencephalography recordings. Our data suggest that parietal regions involved in multisensory and spatial memory mediate the aftereffect following both trial-wise and cumulative adaptation, but also show that additional and distinct processes are involved in consolidating and implementing the aftereffect following prolonged exposure.
8. Stankova EP, Kruchinina OV, Shepovalnikov AN, Galperina EI. Evolution of the Central Mechanisms of Oral Speech. J Evol Biochem Phys 2020. DOI: 10.1134/s0022093020030011.
9. Cheng Y, Zhang Y, Wang F, Jia G, Zhou J, Shan Y, Sun X, Yu L, Merzenich MM, Recanzone GH, Yang L, Zhou X. Reversal of Age-Related Changes in Cortical Sound-Azimuth Selectivity with Training. Cereb Cortex 2020;30:1768-1778. PMID: 31504260; DOI: 10.1093/cercor/bhz201. Received 12/28/2018; revised 07/11/2019; accepted 08/08/2019.
Abstract
The compromised abilities to understand speech and localize sounds are two hallmark deficits in aged individuals. Earlier studies have shown that age-related deficits in cortical neural timing, which is clearly associated with speech perception, can be partially reversed with auditory training. However, whether training can reverse age-related cortical changes in the domain of spatial processing has never been studied. In this study, we examined cortical spatial processing in ~21-month-old rats that were trained on a sound-azimuth discrimination task. We found that animals that experienced 1 month of training displayed sharper cortical sound-azimuth tuning when compared to the age-matched untrained controls. This training-induced remodeling in spatial tuning was paralleled by increases of cortical parvalbumin-labeled inhibitory interneurons. However, no measurable changes in cortical spatial processing were recorded in age-matched animals that were passively exposed to training sounds with no task demands. These results, which demonstrate the effects of training on cortical spatial-domain processing in the rodent model, further support the notion that age-related changes in central neural processes are, due to their plastic nature, reversible. Moreover, the results offer the encouraging possibility that behavioral training might be used to attenuate declines in auditory perception, which are commonly observed in older individuals.
Affiliation(s)
- Yuan Cheng, Yifan Zhang, Fang Wang, Guoqiang Jia, Jie Zhou, Xiaoming Zhou: Key Laboratory of Brain Functional Genomics of Ministry of Education, Shanghai Key Laboratory of Brain Functional Genomics, School of Life Sciences, East China Normal University, Shanghai 200062, China; New York University-East China Normal University Institute of Brain and Cognitive Science, New York University Shanghai, Shanghai 200062, China
- Ye Shan, Xinde Sun, Liping Yu: Key Laboratory of Brain Functional Genomics of Ministry of Education, Shanghai Key Laboratory of Brain Functional Genomics, School of Life Sciences, East China Normal University, Shanghai 200062, China
- Gregg H Recanzone: Center for Neuroscience and Department of Neurobiology, Physiology and Behavior, University of California at Davis, CA 95616, USA
- Lianfang Yang: Department of Physical Education, Zhejiang University of Finance & Economics, Hangzhou 310018, China
10. Ng CW, Recanzone GH. Age-Related Changes in Temporal Processing of Rapidly-Presented Sound Sequences in the Macaque Auditory Cortex. Cereb Cortex 2019;28:3775-3796. PMID: 29040403; DOI: 10.1093/cercor/bhx240. Received 03/24/2017; accepted 08/31/2017.
Abstract
The mammalian auditory cortex is necessary to resolve temporal features in rapidly-changing sound streams. This capability is crucial for speech comprehension in humans and declines with normal aging. Nonhuman primate studies have revealed detrimental effects of normal aging on the auditory nervous system, and yet the underlying influence on temporal processing remains less well-defined. Therefore, we recorded from the core and lateral belt areas of auditory cortex when awake young and old monkeys listened to tone-pip and noise-burst sound sequences. Elevated spontaneous and stimulus-driven activity were the hallmark characteristics in old monkeys. These old neurons showed isomorphic-like discharge patterns to stimulus envelopes, though their phase-locking was less precise. Functional preference in temporal coding between the core and belt existed in the young monkeys but was mostly absent in the old monkeys, in which old belt neurons showed core-like response profiles. Finally, the analysis of population activity patterns indicated that the aged auditory cortex demonstrated a homogenous, distributed coding strategy, compared to the selective, sparse coding strategy observed in the young monkeys. Degraded temporal fidelity and highly-responsive, broadly-tuned cortical responses could explain why aged humans have difficulty resolving and tracking dynamic sounds, leading to speech-processing deficits.
Affiliation(s)
- Chi-Wing Ng: Center for the Neurobiology of Learning and Memory, University of California, Irvine, CA, USA
- Gregg H Recanzone: Center for Neuroscience, University of California, Davis, CA, USA; Department of Neurobiology, Physiology and Behavior, University of California, Davis, CA, USA
11.
Abstract
Humans and other animals use spatial hearing to rapidly localize events in the environment. However, neural encoding of sound location is a complex process involving the computation and integration of multiple spatial cues that are not represented directly in the sensory organ (the cochlea). Our understanding of these mechanisms has increased enormously in the past few years. Current research is focused on the contribution of animal models for understanding human spatial audition, the effects of behavioural demands on neural sound location encoding, the emergence of a cue-independent location representation in the auditory cortex, and the relationship between single-source and concurrent location encoding in complex auditory scenes. Furthermore, computational modelling seeks to unravel how neural representations of sound source locations are derived from the complex binaural waveforms of real-life sounds. In this article, we review and integrate the latest insights from neurophysiological, neuroimaging and computational modelling studies of mammalian spatial hearing. We propose that the cortical representation of sound location emerges from recurrent processing taking place in a dynamic, adaptive network of early (primary) and higher-order (posterior-dorsal and dorsolateral prefrontal) auditory regions. This cortical network accommodates changing behavioural requirements and is especially relevant for processing the location of real-life, complex sounds and complex auditory scenes.
12. Neurons in primary auditory cortex represent sound source location in a cue-invariant manner. Nat Commun 2019;10:3019. PMID: 31289272; PMCID: PMC6616358; DOI: 10.1038/s41467-019-10868-9. Received 08/08/2018; accepted 06/07/2019.
Abstract
Auditory cortex is required for sound localisation, but how neural firing in auditory cortex underlies our perception of sound sources in space remains unclear. Specifically, whether neurons in auditory cortex represent spatial cues or an integrated representation of auditory space across cues is not known. Here, we measured the spatial receptive fields of neurons in primary auditory cortex (A1) while ferrets performed a relative localisation task. Manipulating the availability of binaural and spectral localisation cues had little impact on ferrets' performance, or on neural spatial tuning. A subpopulation of neurons encoded spatial position consistently across localisation cue type. Furthermore, neural firing pattern decoders outperformed two-channel model decoders using population activity. Together, these observations suggest that A1 encodes the location of sound sources, as opposed to spatial cue values.

The brain's auditory cortex is involved not just in detection of sounds, but also in localizing them. Here, the authors show that neurons in ferret primary auditory cortex (A1) encode the location of sound sources, as opposed to merely reflecting spatial cues.
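The decoder comparison in this abstract (full firing-pattern decoders outperforming two-channel model decoders) can be illustrated with a toy population. The sigmoidal hemifield tuning, the noise level, and the nearest-centroid readouts below are assumptions made for the sketch, not the study's decoders; the point is only that collapsing a diverse population into two opponent hemifield channels discards location information.

```python
import numpy as np

rng = np.random.default_rng(3)
n_neurons, n_locs = 80, 12
az = np.linspace(-165, 165, n_locs)

# Hemifield-biased sigmoidal tuning with neuron-specific slopes and midpoints,
# so the full population pattern carries more detail than two summed channels.
slope = rng.uniform(0.02, 0.08, n_neurons) * rng.choice([-1, 1], n_neurons)
mid = rng.uniform(-90, 90, n_neurons)
tuning = 1.0 / (1.0 + np.exp(-slope[:, None] * (az[None, :] - mid[:, None])))

def trials(n, noise=0.25):
    lab = rng.integers(0, n_locs, n)
    return tuning[:, lab].T + noise * rng.standard_normal((n, n_neurons)), lab

def centroid_acc(train_x, train_y, test_x, test_y):
    c = np.array([train_x[train_y == k].mean(0) for k in range(n_locs)])
    pred = np.argmin(((test_x[:, None] - c[None]) ** 2).sum(-1), 1)
    return (pred == test_y).mean()

tr_x, tr_y = trials(600)
te_x, te_y = trials(300)

# Two-channel readout: collapse into left- and right-preferring channel sums.
right = slope > 0
two_tr = np.c_[tr_x[:, right].mean(1), tr_x[:, ~right].mean(1)]
two_te = np.c_[te_x[:, right].mean(1), te_x[:, ~right].mean(1)]

pattern_acc = centroid_acc(tr_x, tr_y, te_x, te_y)
two_acc = centroid_acc(two_tr, tr_y, two_te, te_y)
print(f"pattern decoder: {pattern_acc:.2f}, two-channel decoder: {two_acc:.2f}")
```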
13. Remington ED, Wang X. Neural Representations of the Full Spatial Field in Auditory Cortex of Awake Marmoset (Callithrix jacchus). Cereb Cortex 2019;29:1199-1216. PMID: 29420692; PMCID: PMC6373678; DOI: 10.1093/cercor/bhy025. Received 08/01/2017; accepted 01/13/2018.
Abstract
Unlike visual signals, sound can reach the ears from any direction, and the ability to localize sounds from all directions is essential for survival in a natural environment. Previous studies have largely focused on the space in front of a subject that is also covered by vision and were often limited to measuring spatial tuning along the horizontal (azimuth) plane. As a result, we know relatively little about how the auditory cortex responds to sounds coming from spatial locations outside the frontal space where visual information is unavailable. By mapping single-neuron responses to the full spatial field in awake marmoset (Callithrix jacchus), an arboreal animal for which spatial processing is vital in its natural habitat, we show that spatial receptive fields in several auditory areas cover all spatial locations. Several complementary measures of spatial tuning showed that neurons were tuned to both frontal space and rear space (outside the coverage of vision), as well as the space above and below the horizontal plane. Together, these findings provide valuable new insights into the representation of all spatial locations by primate auditory cortex.
Affiliation(s)
- Evan D Remington: Department of Biomedical Engineering, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Xiaoqin Wang: Department of Biomedical Engineering, Johns Hopkins University School of Medicine, Baltimore, MD, USA
14. Chaplin TA, Rosa MGP, Lui LL. Auditory and Visual Motion Processing and Integration in the Primate Cerebral Cortex. Front Neural Circuits 2018;12:93. PMID: 30416431; PMCID: PMC6212655; DOI: 10.3389/fncir.2018.00093. Received 08/03/2018; accepted 10/08/2018.
Abstract
The ability of animals to detect motion is critical for survival, and errors or even delays in motion perception may prove costly. In the natural world, moving objects in the visual field often produce concurrent sounds. Thus, it can be highly advantageous to detect motion elicited from sensory signals of either modality, and to integrate them to produce more reliable motion perception. A great deal of progress has been made in understanding how visual motion perception is governed by the activity of single neurons in the primate cerebral cortex, but far less progress has been made in understanding both auditory motion and audiovisual motion integration. Here, we review the key cortical regions for motion processing, focusing on translational motion. We compare the representations of space and motion in the visual and auditory systems, and examine how single neurons in these two sensory systems encode the direction of motion. We also discuss the way in which humans integrate audio and visual motion cues, and the regions of the cortex that may mediate this process.
Affiliation(s)
- Tristan A Chaplin, Marcello G P Rosa, Leo L Lui: Neuroscience Program, Biomedicine Discovery Institute and Department of Physiology, Monash University, Clayton, VIC, Australia; Australian Research Council (ARC) Centre of Excellence for Integrative Brain Function, Monash University Node, Clayton, VIC, Australia
15. The Encoding of Sound Source Elevation in the Human Auditory Cortex. J Neurosci 2018;38:3252-3264. PMID: 29507148; DOI: 10.1523/jneurosci.2530-17.2018. Received 09/04/2017; revised 02/11/2018; accepted 02/14/2018.
Abstract
Spatial hearing is a crucial capacity of the auditory system. While the encoding of horizontal sound direction has been extensively studied, very little is known about the representation of vertical sound direction in the auditory cortex. Using high-resolution fMRI, we measured voxelwise sound elevation tuning curves in human auditory cortex and show that sound elevation is represented by broad tuning functions preferring lower elevations as well as secondary narrow tuning functions preferring individual elevation directions. We changed the ear shape of participants (male and female) with silicone molds for several days. This manipulation reduced or abolished the ability to discriminate sound elevation and flattened cortical tuning curves. Tuning curves recovered their original shape as participants adapted to the modified ears and regained elevation perception over time. These findings suggest that the elevation tuning observed in low-level auditory cortex did not arise from the physical features of the stimuli but is contingent on experience with spectral cues and covaries with the change in perception. One explanation for this observation may be that the tuning in low-level auditory cortex underlies the subjective perception of sound elevation.

SIGNIFICANCE STATEMENT
This study addresses two fundamental questions about the brain representation of sensory stimuli: how the vertical spatial axis of auditory space is represented in the auditory cortex and whether low-level sensory cortex represents physical stimulus features or subjective perceptual attributes. Using high-resolution fMRI, we show that vertical sound direction is represented by broad tuning functions preferring lower elevations as well as secondary narrow tuning functions preferring individual elevation directions.
In addition, we demonstrate that the shape of these tuning functions is contingent on experience with spectral cues and covaries with the change in perception, which may indicate that the tuning functions in low-level auditory cortex underlie the perceived elevation of a sound source.
Collapse
|
16
|
Toarmino CR, Yen CCC, Papoti D, Bock NA, Leopold DA, Miller CT, Silva AC. Functional magnetic resonance imaging of auditory cortical fields in awake marmosets. Neuroimage 2017; 162:86-92. [PMID: 28830766 DOI: 10.1016/j.neuroimage.2017.08.052] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/17/2017] [Revised: 08/14/2017] [Accepted: 08/18/2017] [Indexed: 11/25/2022] Open
Abstract
The primate auditory cortex is organized into a network of anatomically and functionally distinct processing fields. Because of its tonotopic properties, the auditory core has been the main target of neurophysiological studies ranging from sensory encoding to perceptual decision-making. By comparison, the auditory belt has been less extensively studied, in part because neurons in the belt areas prefer more complex stimuli and integrate over a wider frequency range than neurons in the core, which prefer pure tones of a single frequency. Complementary approaches, such as functional magnetic resonance imaging (fMRI), allow the anatomical identification of both the auditory core and belt and facilitate their functional characterization by rapidly testing a range of stimuli across multiple brain areas simultaneously, yielding results that can guide subsequent neural recordings. Bridging these technologies in primates will serve to further expand our understanding of primate audition. Here, we developed a novel preparation to test whether different areas of the auditory cortex could be identified using fMRI in common marmosets (Callithrix jacchus), a powerful model of the primate auditory system. We used two types of stimulation, band-pass noise and pure tones, to parse apart the auditory core from surrounding secondary belt fields. In contrast to most auditory fMRI experiments in primates, we employed a continuous sampling paradigm to rapidly collect data with few deleterious effects. Using this preparation, we found robust bilateral auditory cortex activation in two marmosets and unilateral activation in a third. Furthermore, we confirmed results previously reported in electrophysiology experiments, such as the tonotopic organization of the auditory core and regions activating preferentially to complex over simple stimuli.
Overall, these data establish a key preparation for future research to investigate various functional properties of marmoset auditory cortex.
Collapse
Affiliation(s)
- Camille R Toarmino
- Cortical Systems and Behavior Laboratory, Department of Psychology and Neurosciences Graduate Program, The University of California at San Diego, La Jolla, CA, 92093-0109, USA
| | - Cecil C C Yen
- Cerebral Microcirculation Section, Laboratory of Functional and Molecular Imaging, National Institute of Neurological Disorders and Stroke, Bethesda, MD, 20892-4478, USA
| | - Daniel Papoti
- Cerebral Microcirculation Section, Laboratory of Functional and Molecular Imaging, National Institute of Neurological Disorders and Stroke, Bethesda, MD, 20892-4478, USA
| | - Nicholas A Bock
- Department of Psychology, Neuroscience and Behaviour, McMaster University, Hamilton, Ontario, L8S 4K1, Canada
| | - David A Leopold
- Section on Cognitive Neurophysiology and Imaging, Laboratory of Neuropsychology, National Institute of Mental Health, Bethesda, MD, 20892-4400, USA
| | - Cory T Miller
- Cortical Systems and Behavior Laboratory, Department of Psychology and Neurosciences Graduate Program, The University of California at San Diego, La Jolla, CA, 92093-0109, USA
| | - Afonso C Silva
- Cerebral Microcirculation Section, Laboratory of Functional and Molecular Imaging, National Institute of Neurological Disorders and Stroke, Bethesda, MD, 20892-4478, USA.
| |
Collapse
|
17
|
Town SM, Brimijoin WO, Bizley JK. Egocentric and allocentric representations in auditory cortex. PLoS Biol 2017; 15:e2001878. [PMID: 28617796 PMCID: PMC5472254 DOI: 10.1371/journal.pbio.2001878] [Citation(s) in RCA: 34] [Impact Index Per Article: 4.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/22/2016] [Accepted: 05/08/2017] [Indexed: 11/18/2022] Open
Abstract
A key function of the brain is to provide a stable representation of an object's location in the world. In hearing, sound azimuth and elevation are encoded by neurons throughout the auditory system, and auditory cortex is necessary for sound localization. However, the coordinate frame in which neurons represent sound space remains undefined: classical spatial receptive fields in head-fixed subjects can be explained either by sensitivity to sound source location relative to the head (egocentric encoding) or relative to the world (allocentric encoding). This coordinate frame ambiguity can be resolved by studying freely moving subjects; here we recorded spatial receptive fields in the auditory cortex of freely moving ferrets. We found that most spatially tuned neurons represented sound source location relative to the head across changes in head position and direction. We also recorded a small number of neurons in which sound location was represented in a world-centered coordinate frame. We used measurements of spatial tuning across changes in head position and direction to explore the influence of sound source distance and speed of head movement on auditory cortical activity and spatial tuning. Modulation depth of spatial tuning increased with distance for egocentric but not allocentric units, whereas, for both populations, modulation was stronger at faster movement speeds. Our findings suggest that early auditory cortex primarily represents sound source location relative to ourselves but that a minority of cells can represent sound location in the world independent of our own position.
Collapse
Affiliation(s)
- Stephen M. Town
- Ear Institute, University College London, London, United Kingdom
| | - W. Owen Brimijoin
- MRC/CSO Institute of Hearing Research – Scottish Section, Glasgow, United Kingdom
| | | |
Collapse
|
18
|
Nourski KV, Banks MI, Steinschneider M, Rhone AE, Kawasaki H, Mueller RN, Todd MM, Howard MA. Electrocorticographic delineation of human auditory cortical fields based on effects of propofol anesthesia. Neuroimage 2017; 152:78-93. [PMID: 28254512 PMCID: PMC5432407 DOI: 10.1016/j.neuroimage.2017.02.061] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/01/2016] [Revised: 02/13/2017] [Accepted: 02/21/2017] [Indexed: 12/20/2022] Open
Abstract
The functional organization of human auditory cortex remains incompletely characterized. While the posteromedial two thirds of Heschl's gyrus (HG) is generally considered to be part of core auditory cortex, additional subdivisions of HG remain speculative. To further delineate the hierarchical organization of human auditory cortex, we investigated regional heterogeneity in the modulation of auditory cortical responses under varying depths of anesthesia induced by propofol. Non-invasive studies have shown that propofol differentially affects auditory cortical activity, with a greater impact on non-core areas. Subjects were neurosurgical patients undergoing removal of intracranial electrodes placed to identify epileptic foci. Stimuli were 50 Hz click trains, presented continuously during an awake baseline period and subsequently while propofol infusion was incrementally titrated to induce general anesthesia. Electrocorticographic recordings were made with depth electrodes implanted in HG and subdural grid electrodes implanted over superior temporal gyrus (STG). Depth of anesthesia was monitored using spectral entropy. Averaged evoked potentials (AEPs), frequency-following responses (FFRs) and high gamma (70-150 Hz) event-related band power were used to characterize auditory cortical activity. Based on the changes in AEPs and FFRs during the induction of anesthesia, posteromedial HG could be divided into two subdivisions. In the most posteromedial aspect of the gyrus, the earliest AEP deflections were preserved and FFRs increased during induction. In contrast, the remainder of the posteromedial HG exhibited attenuation of both the AEP and the FFR. The anterolateral HG exhibited weaker activation characterized by broad, low-voltage AEPs and the absence of FFRs. Lateral STG exhibited limited activation by click trains, and FFRs there diminished during induction.
Sustained high gamma activity was attenuated in the most posteromedial portion of HG, and was absent in all other regions. These differential patterns of auditory cortical activity during the induction of anesthesia may serve as useful physiological markers for field delineation. In this study, the posteromedial HG could be parcellated into at least two subdivisions. Preservation of the earliest AEP deflections and FFRs in the posteromedial HG likely reflects the persistence of feedforward synaptic activity generated by inputs from subcortical auditory pathways, including the medial geniculate nucleus.
Collapse
Affiliation(s)
- Kirill V Nourski
- Department of Neurosurgery, The University of Iowa, Iowa City, IA, USA.
| | - Matthew I Banks
- Department of Anesthesiology, University of Wisconsin - Madison, Madison, WI, USA
| | - Mitchell Steinschneider
- Departments of Neurology and Neuroscience, Albert Einstein College of Medicine, Bronx, NY, USA
| | - Ariane E Rhone
- Department of Neurosurgery, The University of Iowa, Iowa City, IA, USA
| | - Hiroto Kawasaki
- Department of Neurosurgery, The University of Iowa, Iowa City, IA, USA
| | - Rashmi N Mueller
- Department of Anesthesia, The University of Iowa, Iowa City, IA, USA
| | - Michael M Todd
- Department of Anesthesia, The University of Iowa, Iowa City, IA, USA; Department of Anesthesiology, University of Minnesota, Minneapolis, MN, USA
| | - Matthew A Howard
- Department of Neurosurgery, The University of Iowa, Iowa City, IA, USA; Pappajohn Biomedical Institute, The University of Iowa, Iowa City, IA, USA; Iowa Neuroscience Institute, The University of Iowa, Iowa City, IA, USA
| |
Collapse
|
19
|
Poirier C, Baumann S, Dheerendra P, Joly O, Hunter D, Balezeau F, Sun L, Rees A, Petkov CI, Thiele A, Griffiths TD. Auditory motion-specific mechanisms in the primate brain. PLoS Biol 2017; 15:e2001379. [PMID: 28472038 PMCID: PMC5417421 DOI: 10.1371/journal.pbio.2001379] [Citation(s) in RCA: 26] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/19/2016] [Accepted: 04/07/2017] [Indexed: 12/25/2022] Open
Abstract
This work examined the mechanisms underlying auditory motion processing in the auditory cortex of awake monkeys using functional magnetic resonance imaging (fMRI). We tested to what extent auditory motion analysis can be explained by the linear combination of static spatial mechanisms, spectrotemporal processes, and their interaction. We found that the posterior auditory cortex, including A1 and the surrounding caudal belt and parabelt, is involved in auditory motion analysis. Static spatial and spectrotemporal processes were able to fully explain motion-induced activation in most parts of the auditory cortex, including A1, but not in circumscribed regions of the posterior belt and parabelt cortex. We show that in these regions motion-specific processes contribute to the activation, providing the first demonstration that auditory motion is not simply deduced from changes in static spatial location. These results demonstrate that parallel mechanisms for motion and static spatial analysis coexist within the auditory dorsal stream.
Collapse
Affiliation(s)
- Colline Poirier
- Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, Tyne and Wear, United Kingdom
| | - Simon Baumann
- Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, Tyne and Wear, United Kingdom
| | - Pradeep Dheerendra
- Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, Tyne and Wear, United Kingdom
| | - Olivier Joly
- Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, Tyne and Wear, United Kingdom
| | - David Hunter
- Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, Tyne and Wear, United Kingdom
| | - Fabien Balezeau
- Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, Tyne and Wear, United Kingdom
| | - Li Sun
- Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, Tyne and Wear, United Kingdom
| | - Adrian Rees
- Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, Tyne and Wear, United Kingdom
| | - Christopher I. Petkov
- Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, Tyne and Wear, United Kingdom
| | - Alexander Thiele
- Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, Tyne and Wear, United Kingdom
| | - Timothy D. Griffiths
- Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, Tyne and Wear, United Kingdom
| |
Collapse
|
20
|
Bednar A, Boland FM, Lalor EC. Different spatio-temporal electroencephalography features drive the successful decoding of binaural and monaural cues for sound localization. Eur J Neurosci 2017; 45:679-689. [DOI: 10.1111/ejn.13524] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/14/2016] [Revised: 01/10/2017] [Accepted: 01/13/2017] [Indexed: 11/27/2022]
Affiliation(s)
- Adam Bednar
- School of Engineering; Trinity Centre for Bioengineering and Trinity College Institute of Neuroscience; Trinity College Dublin; University of Dublin; Dublin, Ireland
- Department of Biomedical Engineering and Department of Neuroscience; University of Rochester; 500 Joseph C. Wilson Blvd., Box 270168, Rochester, NY 14611, USA
| | - Francis M. Boland
- School of Engineering; Electronic & Electrical Engineering; Trinity College Dublin; Dublin, Ireland
| | - Edmund C. Lalor
- School of Engineering; Trinity Centre for Bioengineering and Trinity College Institute of Neuroscience; Trinity College Dublin; University of Dublin; Dublin, Ireland
- Department of Biomedical Engineering and Department of Neuroscience; University of Rochester; 500 Joseph C. Wilson Blvd., Box 270168, Rochester, NY 14611, USA
| |
Collapse
|
21
|
Tolnai S, Beutelmann R, Klump GM. Effect of preceding stimulation on sound localization and its representation in the auditory midbrain. Eur J Neurosci 2017; 45:460-471. [PMID: 27891687 DOI: 10.1111/ejn.13491] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/12/2016] [Revised: 10/27/2016] [Accepted: 11/21/2016] [Indexed: 11/29/2022]
Affiliation(s)
- Sandra Tolnai
- Cluster of Excellence Hearing4all; Animal Physiology and Behaviour Group; Department of Neuroscience; School of Medicine and Health Sciences; University of Oldenburg; Oldenburg D-26111 Germany
| | - Rainer Beutelmann
- Cluster of Excellence Hearing4all; Animal Physiology and Behaviour Group; Department of Neuroscience; School of Medicine and Health Sciences; University of Oldenburg; Oldenburg D-26111 Germany
| | - Georg M. Klump
- Cluster of Excellence Hearing4all; Animal Physiology and Behaviour Group; Department of Neuroscience; School of Medicine and Health Sciences; University of Oldenburg; Oldenburg D-26111 Germany
| |
Collapse
|
22
|
Ramamurthy DL, Recanzone GH. Spectral and spatial tuning of onset and offset response functions in auditory cortical fields A1 and CL of rhesus macaques. J Neurophysiol 2016; 117:966-986. [PMID: 27927783 DOI: 10.1152/jn.00534.2016] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/05/2016] [Accepted: 12/06/2016] [Indexed: 11/22/2022] Open
Abstract
The mammalian auditory cortex is necessary for spectral and spatial processing of acoustic stimuli. Most physiological studies of single neurons in the auditory cortex have focused on the onset and sustained portions of evoked responses, but there have been far fewer studies on the relationship between onset and offset responses. In the current study, we compared spectral and spatial tuning of onset and offset responses of neurons in primary auditory cortex (A1) and the caudolateral (CL) belt area of awake macaque monkeys. Several different metrics were used to determine the relationship between onset and offset response profiles in both frequency and space domains. In the frequency domain, a substantial proportion of neurons in A1 and CL displayed highly dissimilar best stimuli for onset- and offset-evoked responses, although even for these neurons, there was usually a large overlap in the range of frequencies that elicited onset and offset responses, and distributions of tuning overlap metrics were mostly unimodal. In the spatial domain, the vast majority of neurons displayed very similar best locations for onset- and offset-evoked responses, along with unimodal distributions of all tuning overlap metrics considered. Finally, for both spectral and spatial tuning, a slightly larger fraction of neurons in A1 displayed nonoverlapping onset and offset response profiles, relative to CL, which supports hierarchical differences in the processing of sounds in the two areas. However, these differences are small compared with differences in proportions of simple cells (low overlap) and complex cells (high overlap) in primary and secondary visual areas. NEW & NOTEWORTHY: In the current study, we examine the relationship between the tuning of neural responses evoked by the onset and offset of acoustic stimuli in the primary auditory cortex, as well as a higher-order auditory area, the caudolateral belt field, in awake rhesus macaques.
In these areas, the relationship between onset and offset response profiles in frequency and space domains formed a continuum, ranging from highly overlapping to highly nonoverlapping.
Collapse
Affiliation(s)
- Deepa L Ramamurthy
- Center for Neuroscience, University of California, Davis, California
| | - Gregg H Recanzone
- Center for Neuroscience, University of California, Davis, California; Department of Neurobiology, Physiology and Behavior, University of California, Davis, California
| |
Collapse
|
23
|
Intracortical depth analyses of frequency-sensitive regions of human auditory cortex using 7T fMRI. Neuroimage 2016; 143:116-127. [PMID: 27608603 DOI: 10.1016/j.neuroimage.2016.09.010] [Citation(s) in RCA: 32] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/28/2016] [Revised: 08/15/2016] [Accepted: 09/04/2016] [Indexed: 11/23/2022] Open
Abstract
Despite recent advances in auditory neuroscience, the exact functional organization of human auditory cortex (AC) has been difficult to investigate. Here, using reversals of tonotopic gradients as the test case, we examined whether human ACs can be more precisely mapped by avoiding signals caused by large draining vessels near the pial surface, which bias blood-oxygen-level-dependent (BOLD) signals away from the actual sites of neuronal activity. Using ultra-high field (7T) fMRI and cortical depth analysis techniques previously applied in visual cortices, we sampled 1 mm isotropic voxels from different depths of AC during narrow-band sound stimulation with biologically relevant temporal patterns. At the group level, analyses that considered voxels from all cortical depths, but excluded those intersecting the pial surface, showed (a) the greatest statistical sensitivity in contrasts between activations to high vs. low frequency sounds and (b) the highest inter-subject consistency of phase-encoded continuous tonotopy mapping. Analyses based solely on voxels intersecting the pial surface produced the least consistent group results, even when compared to analyses based solely on voxels intersecting the white-matter surface where both signal strength and within-subject statistical power are weakest. However, no evidence was found for reduced within-subject reliability in analyses considering the pial voxels only. Our group results could, thus, reflect improved inter-subject correspondence of high and low frequency gradients after the signals from voxels near the pial surface are excluded. Using tonotopy analyses as the test case, our results demonstrate that when the major physiological and anatomical biases imparted by the vasculature are controlled, functional mapping of human ACs becomes more consistent from subject to subject than previously thought.
Collapse
|
24
|
Renvall H, Staeren N, Barz CS, Ley A, Formisano E. Attention Modulates the Auditory Cortical Processing of Spatial and Category Cues in Naturalistic Auditory Scenes. Front Neurosci 2016; 10:254. [PMID: 27375416 PMCID: PMC4894904 DOI: 10.3389/fnins.2016.00254] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/29/2016] [Accepted: 05/23/2016] [Indexed: 11/13/2022] Open
Abstract
This combined fMRI and MEG study investigated brain activations during listening and attending to natural auditory scenes. We first recorded, using in-ear microphones, vocal non-speech sounds, and environmental sounds that were mixed to construct auditory scenes containing two concurrent sound streams. During the brain measurements, subjects attended to one of the streams while spatial acoustic information of the scene was either preserved (stereophonic sounds) or removed (monophonic sounds). Compared to monophonic sounds, stereophonic sounds evoked larger blood-oxygenation-level-dependent (BOLD) fMRI responses in the bilateral posterior superior temporal areas, independent of which stimulus attribute the subject was attending to. This finding is consistent with the functional role of these regions in the (automatic) processing of auditory spatial cues. Additionally, significant differences in the cortical activation patterns depending on the target of attention were observed. Bilateral planum temporale and inferior frontal gyrus were preferentially activated when attending to stereophonic environmental sounds, whereas when subjects attended to stereophonic voice sounds, the BOLD responses were larger at the bilateral middle superior temporal gyrus and sulcus, previously reported to show voice sensitivity. In contrast, the time-resolved MEG responses were stronger for mono- than stereophonic sounds in the bilateral auditory cortices at ~360 ms after the stimulus onset when attending to the voice excerpts within the combined sounds. The observed effects suggest that during the segregation of auditory objects from the auditory background, spatial sound cues together with other relevant temporal and spectral cues are processed in an attention-dependent manner at the cortical locations generally involved in sound recognition. 
More synchronous neuronal activation during monophonic than stereophonic sound processing, as well as (local) neuronal inhibitory mechanisms in the auditory cortex, may explain the simultaneous increase of BOLD responses and decrease of MEG responses. These findings highlight the complementary role of electrophysiological and hemodynamic measures in addressing brain processing of complex stimuli.
Collapse
Affiliation(s)
- Hanna Renvall
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands; Department of Neuroscience and Biomedical Engineering, Aalto University School of Science, Espoo, Finland; Aalto Neuroimaging, Magnetoencephalography (MEG) Core, Aalto University, Espoo, Finland
| | - Noël Staeren
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands
| | - Claudia S Barz
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands; Institute for Neuroscience and Medicine, Research Centre Juelich, Juelich, Germany; Department of Psychiatry, Psychotherapy and Psychosomatics, Medical School, RWTH Aachen University, Aachen, Germany
| | - Anke Ley
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands
| | - Elia Formisano
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands; Maastricht Center for Systems Biology (MaCSBio), Maastricht University, Maastricht, Netherlands
| |
Collapse
|
25
|
Derey K, Valente G, de Gelder B, Formisano E. Opponent Coding of Sound Location (Azimuth) in Planum Temporale is Robust to Sound-Level Variations. Cereb Cortex 2015; 26:450-464. [PMID: 26545618 PMCID: PMC4677988 DOI: 10.1093/cercor/bhv269] [Citation(s) in RCA: 30] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/04/2022] Open
Abstract
Coding of sound location in auditory cortex (AC) is only partially understood. Recent electrophysiological research suggests that neurons in mammalian auditory cortex are characterized by broad spatial tuning and a preference for the contralateral hemifield, that is, a nonuniform sampling of sound azimuth. Additionally, spatial selectivity decreases with increasing sound intensity. To accommodate these findings, it has been proposed that sound location is encoded by the integrated activity of neuronal populations with opposite hemifield tuning (“opponent channel model”). In this study, we investigated the validity of such a model in human AC with functional magnetic resonance imaging (fMRI) and a phase-encoding paradigm employing binaural stimuli recorded individually for each participant. In all subjects, we observed preferential fMRI responses to contralateral azimuth positions. Additionally, in most AC locations, spatial tuning was broad and not level invariant. We derived an opponent channel model of the fMRI responses by subtracting the activity of contralaterally tuned regions in bilateral planum temporale. This resulted in accurate decoding of sound azimuth location, which was unaffected by changes in sound level. Our data thus support opponent channel coding as a neural mechanism for representing acoustic azimuth in human AC.
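The opponent-channel scheme summarized in this abstract lends itself to a compact illustration. The sketch below is not the authors' fMRI analysis pipeline; the sigmoidal channel shape, slope, and azimuth range are invented for illustration only. It shows how the difference between two broadly tuned, oppositely preferring hemifield channels forms a monotonic code from which azimuth can be recovered:

```python
import math

def channel_response(azimuth_deg, preferred_side):
    # Broad sigmoidal hemifield tuning (hypothetical shape and slope):
    # each channel's response saturates toward its preferred hemifield.
    sign = 1.0 if preferred_side == "right" else -1.0
    return 1.0 / (1.0 + math.exp(-sign * azimuth_deg / 15.0))

def decode_azimuth(resp_left, resp_right):
    # Opponent readout: the right-minus-left difference is a monotonic
    # function of azimuth, so invert it by a brute-force search.
    target = resp_right - resp_left
    return min(range(-90, 91),
               key=lambda az: abs(channel_response(az, "right")
                                  - channel_response(az, "left")
                                  - target))

# A source at +30 degrees drives the right channel more than the left;
# the channel difference alone recovers the azimuth.
source = 30
estimate = decode_azimuth(channel_response(source, "left"),
                          channel_response(source, "right"))
print(estimate)  # 30
```

Because each channel is broad, neither alone pins down azimuth; the subtraction produces an unambiguous readout, mirroring the subtraction of contralaterally tuned planum temporale responses described above.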
Collapse
Affiliation(s)
- Kiki Derey
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht 6200 MD, The Netherlands
| | - Giancarlo Valente
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht 6200 MD, The Netherlands
| | - Beatrice de Gelder
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht 6200 MD, The Netherlands
| | - Elia Formisano
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht 6200 MD, The Netherlands
| |
Collapse
|
26
|
Friederici AD, Singer W. Grounding language processing on basic neurophysiological principles. Trends Cogn Sci 2015; 19:329-38. [DOI: 10.1016/j.tics.2015.03.012] [Citation(s) in RCA: 74] [Impact Index Per Article: 8.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/10/2015] [Revised: 03/19/2015] [Accepted: 03/24/2015] [Indexed: 01/02/2023]
|
27
|
Lui LL, Mokri Y, Reser DH, Rosa MGP, Rajan R. Responses of neurons in the marmoset primary auditory cortex to interaural level differences: comparison of pure tones and vocalizations. Front Neurosci 2015; 9:132. [PMID: 25941469 PMCID: PMC4403308 DOI: 10.3389/fnins.2015.00132] [Citation(s) in RCA: 19] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/19/2014] [Accepted: 04/01/2015] [Indexed: 11/13/2022] Open
Abstract
Interaural level differences (ILDs) are the dominant cue for localizing the sources of high-frequency sounds that differ in azimuth. Neurons in the primary auditory cortex (A1) respond differentially to ILDs of simple stimuli such as tones and noise bands, but the extent to which this applies to complex natural sounds, such as vocalizations, is not known. In sufentanil/N2O anesthetized marmosets, we compared the responses of 76 A1 neurons to three vocalizations (Ock, Tsik, and Twitter) and pure tones at each cell's characteristic frequency. Each stimulus was presented with ILDs ranging from 20 dB favoring the contralateral ear to 20 dB favoring the ipsilateral ear to cover most of the frontal azimuthal space. The response to each stimulus was tested at three average binaural levels (ABLs). Most neurons were sensitive to ILDs of vocalizations and pure tones. For all stimuli, the majority of cells had monotonic ILD sensitivity functions favoring the contralateral ear, but we also observed ILD sensitivity functions that peaked near the midline and functions favoring the ipsilateral ear. Representation of ILD in A1 was better for pure tones and the Ock vocalization in comparison to the Tsik and Twitter calls; this was reflected by higher discrimination indices and greater modulation ranges. ILD sensitivity was heavily dependent on ABL: changes in ABL by ±20 dB SPL from the optimal level for ILD sensitivity led to significant decreases in ILD sensitivity for all stimuli, although ILD sensitivity to pure tones and Ock calls was most robust to such ABL changes. Our results demonstrate differences in ILD coding for pure tones and vocalizations, showing that ILD sensitivity in A1 to complex sounds cannot be simply extrapolated from that to pure tones. They also show that A1 neurons lack a level-invariant representation of ILD, suggesting that such a representation of auditory space is likely to require population coding and further processing at subsequent hierarchical stages.
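As a toy illustration of the monotonic, contralateral-favoring ILD sensitivity functions and the modulation-range metric mentioned in this abstract (the sigmoid and its slope are hypothetical, not fitted to the marmoset data):

```python
import math

def ild_response(ild_db, slope=0.15):
    # Hypothetical monotonic ILD sensitivity function favoring the
    # contralateral ear (positive ILD = louder at the contralateral ear).
    return 1.0 / (1.0 + math.exp(-slope * ild_db))

def modulation_range(responses):
    # Depth of modulation across the tested ILDs, normalized to the
    # strongest response (one of the metrics named above).
    return (max(responses) - min(responses)) / max(responses)

# Sample the function over the +/-20 dB range used in the study.
ilds = range(-20, 21, 5)
responses = [ild_response(d) for d in ilds]
print(round(modulation_range(responses), 2))  # 0.95
```

A shallower slope or a change in overall level (the ABL dependence reported above) would compress this range, which is why modulation range is a useful summary of how informative a unit is about ILD.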
Collapse
Affiliation(s)
- Leo L Lui
- Department of Physiology, Monash University, Clayton, VIC, Australia; Australian Research Council Centre of Excellence for Integrative Brain Function, Monash University, Clayton, VIC, Australia
| | - Yasamin Mokri
- Department of Physiology, Monash University, Clayton, VIC, Australia
| | - David H Reser
- Department of Physiology, Monash University, Clayton, VIC, Australia
| | - Marcello G P Rosa
- Department of Physiology, Monash University, Clayton, VIC, Australia; Australian Research Council Centre of Excellence for Integrative Brain Function, Monash University, Clayton, VIC, Australia
| | - Ramesh Rajan
- Department of Physiology, Monash University, Clayton, VIC, Australia; Australian Research Council Centre of Excellence for Integrative Brain Function, Monash University, Clayton, VIC, Australia; Ear Sciences Institute of Australia, Subiaco, WA, Australia
| |
Collapse
|
28
|
Abstract
The auditory system derives locations of sound sources from spatial cues provided by the interaction of sound with the head and external ears. Those cues are analyzed in specific brainstem pathways and then integrated as cortical representation of locations. The principal cues for horizontal localization are interaural time differences (ITDs) and interaural differences in sound level (ILDs). Vertical and front/back localization rely on spectral-shape cues derived from direction-dependent filtering properties of the external ears. The likely first sites of analysis of these cues are the medial superior olive (MSO) for ITDs, lateral superior olive (LSO) for ILDs, and dorsal cochlear nucleus (DCN) for spectral-shape cues. Localization in distance is much less accurate than that in horizontal and vertical dimensions, and interpretation of the basic cues is influenced by additional factors, including acoustics of the surroundings and familiarity of source spectra and levels. Listeners are quite sensitive to sound motion, but it remains unclear whether that reflects specific motion detection mechanisms or simply detection of changes in static location. Intact auditory cortex is essential for normal sound localization. Cortical representation of sound locations is highly distributed, with no evidence for point-to-point topography. Spatial representation is strictly contralateral in laboratory animals that have been studied, whereas humans show a prominent right-hemisphere dominance.
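For the horizontal cues described above, the dependence of ITD on azimuth is often approximated with a spherical-head (Woodworth-style) model; the head radius and sound speed below are illustrative assumptions, not values from the chapter:

```python
import math

def itd_woodworth(azimuth_deg, head_radius_m=0.0875, c=343.0):
    """Spherical-head approximation of the interaural time difference for
    a distant source (0 deg = straight ahead, 90 deg = directly lateral).
    Head radius and speed of sound are illustrative assumptions."""
    theta = math.radians(azimuth_deg)
    return (head_radius_m / c) * (theta + math.sin(theta))  # seconds

# ITD grows from zero at the midline to roughly 650-660 microseconds at
# 90 degrees for a human-sized head.
print(round(itd_woodworth(0.0) * 1e6))
print(round(itd_woodworth(90.0) * 1e6))
```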
Collapse
Affiliation(s)
- John C Middlebrooks
- Departments of Otolaryngology, Neurobiology and Behavior, Cognitive Sciences, and Biomedical Engineering, University of California at Irvine, Irvine, CA, USA.
| |
Collapse
|
29
|
Abstract
The auditory cortex is a network of areas in the part of the brain that receives inputs from the subcortical auditory pathways in the brainstem and thalamus. Through an elaborate network of intrinsic and extrinsic connections, the auditory cortex is thought to bring about the conscious perception of sound and provide a basis for the comprehension and production of meaningful utterances. In this chapter, the organization of auditory cortex is described with an emphasis on its anatomic features and the flow of information within the network. These features are then used to introduce key neurophysiologic concepts that are being intensively studied in humans and animal models. The discussion is presented in the context of our working model of the primate auditory cortex and extensions to humans. The material is presented in the context of six underlying principles, which reflect distinct, but related, aspects of anatomic and physiologic organization: (1) the division of auditory cortex into regions; (2) the subdivision of regions into areas; (3) tonotopic organization of areas; (4) thalamocortical connections; (5) serial and parallel organization of connections; and (6) topographic relationships between auditory and auditory-related areas. Although the functional roles of the various components of this network remain poorly defined, a more complete understanding is emerging from ongoing studies that link auditory behavior to its anatomic and physiologic substrates.
Collapse
Affiliation(s)
- Troy A Hackett
- Department of Hearing and Speech Sciences, Vanderbilt University School of Medicine and Department of Psychology, Vanderbilt University, Nashville, TN, USA.
| |
Collapse
|
30
|
Niwa M, O'Connor KN, Engall E, Johnson JS, Sutter ML. Hierarchical effects of task engagement on amplitude modulation encoding in auditory cortex. J Neurophysiol 2014; 113:307-27. [PMID: 25298387 DOI: 10.1152/jn.00458.2013] [Citation(s) in RCA: 18] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
We recorded from middle lateral belt (ML) and primary (A1) auditory cortical neurons while animals discriminated amplitude-modulated (AM) sounds and also while they sat passively. Engagement in AM discrimination improved ML and A1 neurons' ability to discriminate AM with both firing rate and phase-locking; however, task engagement affected neural AM discrimination differently in the two fields. The results suggest that these two areas utilize different AM coding schemes: a "single mode" in A1 that relies on increased activity for AM relative to unmodulated sounds and a "dual-polar mode" in ML that uses both increases and decreases in neural activity to encode modulation. In the dual-polar ML code, nonsynchronized responses might play a special role. The results are consistent with findings in the primary and secondary somatosensory cortices during discrimination of vibrotactile modulation frequency, implicating a common scheme in the hierarchical processing of temporal information among different modalities. The time course of activity differences between behaving and passive conditions was also distinct in A1 and ML and may have implications for auditory attention. At modulation depths ≥ 16% (approximately behavioral threshold), A1 neurons' improvement in distinguishing AM from unmodulated noise is relatively constant or improves slightly with increasing modulation depth. In ML, improvement during engagement is most pronounced near threshold and disappears at highly suprathreshold depths. This ML effect is evident later in the stimulus, and mainly in nonsynchronized responses. This suggests that attention-related increases in activity are stronger or longer-lasting for more difficult stimuli in ML.
Collapse
Affiliation(s)
- Mamiko Niwa
- Center for Neuroscience and Department of Neurobiology, Physiology, and Behavior, University of California, Davis, California
| | - Kevin N O'Connor
- Center for Neuroscience and Department of Neurobiology, Physiology, and Behavior, University of California, Davis, California
| | - Elizabeth Engall
- Center for Neuroscience and Department of Neurobiology, Physiology, and Behavior, University of California, Davis, California
| | - Jeffrey S Johnson
- Center for Neuroscience and Department of Neurobiology, Physiology, and Behavior, University of California, Davis, California
| | - M L Sutter
- Center for Neuroscience and Department of Neurobiology, Physiology, and Behavior, University of California, Davis, California
| |
Collapse
|
31
|
Identifying and quantifying multisensory integration: a tutorial review. Brain Topogr 2014; 27:707-30. [PMID: 24722880 DOI: 10.1007/s10548-014-0365-7] [Citation(s) in RCA: 133] [Impact Index Per Article: 13.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/10/2013] [Accepted: 03/26/2014] [Indexed: 12/19/2022]
Abstract
We process information from the world through multiple senses, and the brain must decide what information belongs together and what information should be segregated. One challenge in studying such multisensory integration is how to quantify the multisensory interactions, a challenge that is amplified by the host of methods that are now used to measure neural, behavioral, and perceptual responses. Many of the measures developed to quantify multisensory integration (most derived from single-unit analyses) have been applied to these different measures without much consideration for the nature of the process being studied. Here, we provide a review focused on the means by which experimenters quantify multisensory processes and integration across a range of commonly used experimental methodologies. We emphasize the most commonly employed measures, including single- and multiunit responses, local field potentials, functional magnetic resonance imaging, and electroencephalography, along with behavioral measures of detection, accuracy, and response times. In each section, we will discuss the different metrics commonly used to quantify multisensory interactions, including the rationale for their use, their advantages, and the drawbacks and caveats associated with them. Also discussed are possible alternatives to the most commonly used metrics.
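As one concrete example of the single-unit-derived metrics the review discusses, the classic enhancement index and the additive criterion might be coded as follows (the spike counts are hypothetical):

```python
def multisensory_enhancement(av, a, v):
    """Percent change of the multisensory (audiovisual) response relative
    to the best unisensory response; positive values indicate enhancement.
    This is the classic single-unit index in the Meredith & Stein style."""
    best_unisensory = max(a, v)
    return 100.0 * (av - best_unisensory) / best_unisensory

def superadditive(av, a, v):
    """Additive-model criterion: multisensory response exceeds the sum of
    the unisensory responses."""
    return av > a + v

# Hypothetical mean spike counts per trial.
print(multisensory_enhancement(av=12.0, a=5.0, v=4.0))  # enhancement
print(superadditive(av=12.0, a=5.0, v=4.0))             # also superadditive
```

As the review cautions, the appropriate criterion depends on the measure: a superadditivity test that is natural for spike counts may be misleading for BOLD or ERP amplitudes.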
Collapse
|
32
|
Joachimsthaler B, Uhlmann M, Miller F, Ehret G, Kurt S. Quantitative analysis of neuronal response properties in primary and higher-order auditory cortical fields of awake house mice (Mus musculus). Eur J Neurosci 2014; 39:904-918. [PMID: 24506843 PMCID: PMC4264920 DOI: 10.1111/ejn.12478] [Citation(s) in RCA: 57] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/09/2013] [Revised: 12/10/2013] [Accepted: 12/11/2013] [Indexed: 12/01/2022]
Abstract
Because of its great genetic potential, the mouse (Mus musculus) has become a popular model species for studies on hearing and sound processing along the auditory pathways. Here, we present the first comparative study on the representation of neuronal response parameters to tones in primary and higher-order auditory cortical fields of awake mice. We quantified 12 neuronal properties of tone processing in order to estimate similarities and differences of function between the fields, and to discuss how far auditory cortex (AC) function in the mouse is comparable to that in awake monkeys and cats. Extracellular recordings were made from 1400 small clusters of neurons from cortical layers III/IV in the primary fields AI (primary auditory field) and AAF (anterior auditory field), and the higher-order fields AII (second auditory field) and DP (dorsoposterior field). Field specificity was shown with regard to spontaneous activity, correlation between spontaneous and evoked activity, tone response latency, sharpness of frequency tuning, temporal response patterns (occurrence of phasic responses, phasic-tonic responses, tonic responses, and off-responses), and degree of variation between the characteristic frequency (CF) and the best frequency (BF) (CF-BF relationship). Field similarities were noted as significant correlations between CFs and BFs, V-shaped frequency tuning curves, similar minimum response thresholds and non-monotonic rate-level functions in approximately two-thirds of the neurons. Comparative and quantitative analyses showed that the measured response characteristics were, to various degrees, susceptible to influences of anesthetics. Therefore, studies of neuronal responses in the awake AC are important in order to establish adequate relationships between neuronal data and auditory perception and acoustic response behavior.
Collapse
Affiliation(s)
- Bettina Joachimsthaler
- Institute of Neurobiology, University of Ulm, 89081 Ulm, Germany
- Systems Neurophysiology, Department of Cognitive Neurology, Werner Reichardt Centre for Integrative Neuroscience, Hertie Institute for Clinical Brain Research, University of Tübingen, Tübingen, Germany
| | - Michaela Uhlmann
- Institute of Neurobiology, University of Ulm, 89081 Ulm, Germany
| | - Frank Miller
- Institute of Neurobiology, University of Ulm, 89081 Ulm, Germany
| | - Günter Ehret
- Institute of Neurobiology, University of Ulm, 89081 Ulm, Germany
| | - Simone Kurt
- Institute of Neurobiology, University of Ulm, 89081 Ulm, Germany
- Cluster of Excellence “Hearing4all”, Institute of Audioneurotechnology and Hannover Medical School, Department of Experimental Otology, ENT Clinics, 30625 Hannover, Germany
| |
Collapse
|
33
|
Kusmierek P, Rauschecker JP. Selectivity for space and time in early areas of the auditory dorsal stream in the rhesus monkey. J Neurophysiol 2014; 111:1671-85. [PMID: 24501260 DOI: 10.1152/jn.00436.2013] [Citation(s) in RCA: 40] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
The respective roles of ventral and dorsal cortical processing streams are still under discussion in both vision and audition. We characterized neural responses in the caudal auditory belt cortex, an early dorsal stream region of the macaque. We found fast neural responses with elevated temporal precision as well as neurons selective to sound location. These populations were partly segregated: Neurons in a caudomedial area more precisely followed temporal stimulus structure but were less selective to spatial location. Response latencies in this area were even shorter than in primary auditory cortex. Neurons in a caudolateral area showed higher selectivity for sound source azimuth and elevation, but responses were slower and matching to temporal sound structure was poorer. In contrast to the primary area and other regions studied previously, latencies in the caudal belt neurons were not negatively correlated with best frequency. Our results suggest that two functional substreams may exist within the auditory dorsal stream.
Collapse
Affiliation(s)
- Pawel Kusmierek
- Department of Neuroscience, Georgetown University Medical Center, Washington, District of Columbia
| | | |
Collapse
|
34
|
Auditory-cortex short-term plasticity induced by selective attention. Neural Plast 2014; 2014:216731. [PMID: 24551458 PMCID: PMC3914570 DOI: 10.1155/2014/216731] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/25/2013] [Accepted: 12/15/2013] [Indexed: 11/23/2022] Open
Abstract
The ability to concentrate on relevant sounds in the acoustic environment is crucial for everyday function and communication. Converging lines of evidence suggest that transient functional changes in auditory-cortex neurons, “short-term plasticity”, might explain this fundamental function. Under conditions of strongly focused attention, enhanced processing of attended sounds can take place at very early latencies (~50 ms from sound onset) in primary auditory cortex and possibly even at earlier latencies in subcortical structures. More robust selective-attention short-term plasticity is manifested as modulation of responses peaking at ~100 ms from sound onset in functionally specialized nonprimary auditory-cortical areas by way of stimulus-specific reshaping of neuronal receptive fields that supports filtering of selectively attended sound features from task-irrelevant ones. Such effects have been shown to emerge within seconds of shifting the attentional focus. There are findings suggesting that the reshaping of neuronal receptive fields is even stronger at longer auditory-cortex response latencies (~300 ms from sound onset). These longer-latency short-term plasticity effects seem to build up more gradually, within tens of seconds after shifting the focus of attention. Importantly, some of the auditory-cortical short-term plasticity effects observed during selective attention predict enhancements in behaviorally measured sound discrimination performance.
Collapse
|
35
|
A neural network model can explain ventriloquism aftereffect and its generalization across sound frequencies. Biomed Res Int 2013; 2013:475427. [PMID: 24228250 PMCID: PMC3818813 DOI: 10.1155/2013/475427] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/29/2013] [Revised: 08/28/2013] [Accepted: 08/28/2013] [Indexed: 11/17/2022]
Abstract
Exposure to synchronous but spatially disparate auditory and visual stimuli produces a perceptual shift of sound location towards the visual stimulus (ventriloquism effect). After adaptation to a ventriloquism situation, an enduring sound shift is observed in the absence of the visual stimulus (ventriloquism aftereffect). Experimental studies report opposing results on how far the aftereffect generalizes across sound frequencies, ranging from it being confined to the frequency used during adaptation to it generalizing across several octaves. Here, we present an extension of a model of visual-auditory interaction we previously developed. The new model is able to simulate the ventriloquism effect and, via Hebbian learning rules, the ventriloquism aftereffect, and can be used to investigate aftereffect generalization across frequencies. The model includes auditory neurons coding both for the spatial and spectral features of the auditory stimuli and mimicking properties of biological auditory neurons. The model suggests that different extents of aftereffect generalization across frequencies can be obtained by changing the intensity of the auditory stimulus, which induces different amounts of activation in the auditory layer. The model provides a coherent theoretical framework to explain the apparently contradictory results found in the literature. Model mechanisms and hypotheses are discussed in relation to neurophysiological and psychophysical data.
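The core Hebbian mechanism can be sketched in a few lines: pairing an auditory input with a visual input displaced to one side strengthens cross-modal weights toward the visual location, so the auditory estimate remains shifted after the visual stimulus is removed. Everything below (spatial grid, tuning widths, learning rate, trial count) is illustrative, not the parameterization of the published model:

```python
import numpy as np

positions = np.arange(-40.0, 41.0, 5.0)  # azimuth grid, degrees

def gaussian_pop(center_deg, sigma=10.0):
    """Population activity of spatially tuned units (illustrative tuning)."""
    return np.exp(-0.5 * ((positions - center_deg) / sigma) ** 2)

w = np.eye(len(positions))  # auditory-input -> spatial-map weights
learning_rate = 0.02
for _ in range(200):                     # adaptation phase
    a = gaussian_pop(0.0)                # sound always at 0 deg
    v = gaussian_pop(10.0)               # light displaced to +10 deg
    w += learning_rate * np.outer(v, a)  # Hebbian: post (visual-driven) x pre

# Aftereffect probe: sound alone at 0 deg now reads out toward +10 deg.
decoded = float(positions[np.argmax(w @ gaussian_pop(0.0))])
print(decoded)
```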
Collapse
|
36
|
Yao JD, Bremen P, Middlebrooks JC. Rat primary auditory cortex is tuned exclusively to the contralateral hemifield. J Neurophysiol 2013; 110:2140-51. [PMID: 23945782 DOI: 10.1152/jn.00219.2013] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
The rat is a widely used species for study of the auditory system. Psychophysical results from rats have shown an inability to discriminate sound source locations within a lateral hemifield, despite showing fairly sharp near-midline acuity. We tested the hypothesis that those characteristics of the rat's sound localization psychophysics are evident in the characteristics of spatial sensitivity of its cortical neurons. In addition, we sought quantitative descriptions of in vivo spatial sensitivity of cortical neurons that would support development of an in vitro experimental model to study cortical mechanisms of spatial hearing. We assessed the spatial sensitivity of single- and multiple-neuron responses in the primary auditory cortex (A1) of urethane-anesthetized rats. Free-field noise bursts were varied throughout 360° of azimuth in the horizontal plane at sound levels from 10 to 40 dB above neural thresholds. All neurons encountered in A1 displayed contralateral-hemifield spatial tuning in that they responded strongly to contralateral sound source locations, their responses cut off sharply for locations near the frontal midline, and they showed weak or no responses to ipsilateral sources. Spatial tuning was quite stable across a 30-dB range of sound levels. Consistent with rat psychophysical results, a linear discriminator analysis of spike counts exhibited high spatial acuity for near-midline sounds and poor discrimination for off-midline locations. Hemifield spatial tuning is the most common pattern across all mammals tested previously. The homogeneous population of neurons in rat area A1 will make an excellent system for study of the mechanisms underlying that pattern.
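The psychophysical pattern mirrored by the discriminator analysis (sharp midline acuity, poor lateral discrimination) falls naturally out of hemifield tuning, because a sigmoidal rate function changes fastest at the midline. A minimal sketch with an idealized tuning curve (parameters are illustrative, not fits to the recorded data):

```python
import math

def hemifield_rate(azimuth_deg, max_rate=20.0, slope_deg=10.0):
    """Idealized contralateral-hemifield tuning: firing rises sigmoidally
    as the source crosses the frontal midline into the contralateral
    field. Parameters are illustrative."""
    return max_rate / (1.0 + math.exp(-azimuth_deg / slope_deg))

# Spike-count change for a 10-degree step at the midline vs far laterally:
midline_delta = hemifield_rate(5.0) - hemifield_rate(-5.0)
lateral_delta = hemifield_rate(85.0) - hemifield_rate(75.0)
print(round(midline_delta, 2), round(lateral_delta, 2))
```

A rate-based discriminator sees a far larger signal for the midline step, hence high acuity near the midline and poor discrimination between off-midline locations.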
Collapse
Affiliation(s)
- Justin D Yao
- Department of Neurobiology and Behavior, University of California at Irvine, Irvine, California
| | | | | |
Collapse
|
37
|
Behavioral sensitivity to broadband binaural localization cues in the ferret. J Assoc Res Otolaryngol 2013; 14:561-72. [PMID: 23615803 PMCID: PMC3705081 DOI: 10.1007/s10162-013-0390-3] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/12/2013] [Accepted: 04/05/2013] [Indexed: 11/29/2022] Open
Abstract
Although the ferret has become an important model species for studying both fundamental and clinical aspects of spatial hearing, previous behavioral work has focused on studies of sound localization and spatial release from masking in the free field. This makes it difficult to tease apart the role played by different spatial cues. In humans and other species, interaural time differences (ITDs) and interaural level differences (ILDs) play a critical role in sound localization in the azimuthal plane and also facilitate sound source separation in noisy environments. In this study, we used a range of broadband noise stimuli presented via customized earphones to measure ITD and ILD sensitivity in the ferret. Our behavioral data show that ferrets are extremely sensitive to changes in either binaural cue, with levels of performance approximating that found in humans. The measured thresholds were relatively stable despite extensive and prolonged (>16 weeks) testing on ITD and ILD tasks with broadband stimuli. For both cues, sensitivity was reduced at shorter durations. In addition, subtle effects of changing the stimulus envelope were observed on ITD, but not ILD, thresholds. Sensitivity to these cues also differed in other ways. Whereas ILD sensitivity was unaffected by changes in average binaural level or interaural correlation, the same manipulations produced much larger effects on ITD sensitivity, with thresholds declining when either of these parameters was reduced. The binaural sensitivity measured in this study can largely account for the ability of ferrets to localize broadband stimuli in the azimuthal plane. Our results are also broadly consistent with data from humans and confirm the ferret as an excellent experimental model for studying spatial hearing.
Collapse
|
38
|
Rauschecker JP. Processing Streams in Auditory Cortex. Neural Correlates of Auditory Cognition 2013. [DOI: 10.1007/978-1-4614-2350-8_2] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/15/2023]
|
39
|
Evidence for opponent process analysis of sound source location in humans. J Assoc Res Otolaryngol 2012; 14:83-101. [PMID: 23090057 DOI: 10.1007/s10162-012-0356-x] [Citation(s) in RCA: 38] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/30/2011] [Accepted: 10/10/2012] [Indexed: 10/27/2022] Open
Abstract
Research with barn owls suggested that sound source location is represented topographically in the brain by an array of neurons each tuned to a narrow range of locations. However, research with small-headed mammals has offered an alternative view in which location is represented by the balance of activity in two opponent channels broadly tuned to the left and right auditory space. Both channels may be present in each auditory cortex, although the channel representing contralateral space may be dominant. Recent studies have suggested that opponent channel coding of space may also apply in humans, although these studies have used a restricted set of spatial cues or probed a restricted set of spatial locations, and there have been contradictory reports as to the relative dominance of the ipsilateral and contralateral channels in each cortex. The current study used electroencephalography (EEG) in conjunction with sound field stimulus presentation to address these issues and to inform the development of an explicit computational model of human sound source localization. Neural responses were compatible with the opponent channel account of sound source localization and with contralateral channel dominance in the left, but not the right, auditory cortex. A computational opponent channel model reproduced every important aspect of the EEG data and allowed inferences about the width of tuning in the spatial channels. Moreover, the model predicted the oft-reported decrease in spatial acuity measured psychophysically with increasing reference azimuth. Predictions of spatial acuity closely matched those measured psychophysically by previous authors.
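The opponent-channel readout can be made concrete with two broadly tuned sigmoidal channels whose activity difference is inverted back to azimuth; the tuning slope below is an illustrative assumption, not a fitted parameter of the paper's computational model:

```python
import math

def channel(azimuth_deg, preferred_sign, slope_deg=30.0):
    """Broadly tuned opponent channel whose activity grows sigmoidally
    toward its preferred hemifield (+1 = right, -1 = left)."""
    return 1.0 / (1.0 + math.exp(-preferred_sign * azimuth_deg / slope_deg))

def decode_azimuth(azimuth_deg, slope_deg=30.0):
    """Read location from the balance of the two channels. For these
    symmetric sigmoids, right - left = tanh(az / (2 * slope)), so the
    activity difference can be inverted exactly."""
    diff = channel(azimuth_deg, +1, slope_deg) - channel(azimuth_deg, -1, slope_deg)
    return 2.0 * slope_deg * math.atanh(diff)

# The balance of just two broad channels suffices to recover azimuth.
print(round(decode_azimuth(20.0), 3), round(decode_azimuth(-35.0), 3))
```

The same balance computation also predicts coarser acuity at lateral reference azimuths, where both sigmoids are near saturation and the difference changes slowly.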
Collapse
|
40
|
Bishop CW, London S, Miller LM. Neural time course of visually enhanced echo suppression. J Neurophysiol 2012; 108:1869-83. [PMID: 22786953 PMCID: PMC3545000 DOI: 10.1152/jn.00175.2012] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/27/2012] [Accepted: 07/08/2012] [Indexed: 11/22/2022] Open
Abstract
Auditory spatial perception plays a critical role in day-to-day communication. For instance, listeners utilize acoustic spatial information to segregate individual talkers into distinct auditory "streams" to improve speech intelligibility. However, spatial localization is an exceedingly difficult task in everyday listening environments with numerous distracting echoes from nearby surfaces, such as walls. Listeners' brains overcome this unique challenge by relying on acoustic timing and, quite surprisingly, visual spatial information to suppress short-latency (1-10 ms) echoes through a process known as "the precedence effect" or "echo suppression." In the present study, we employed electroencephalography (EEG) to investigate the neural time course of echo suppression both with and without the aid of coincident visual stimulation in human listeners. We find that echo suppression is a multistage process initialized during the auditory N1 (70-100 ms) and followed by space-specific suppression mechanisms from 150 to 250 ms. Additionally, we find a robust correlate of listeners' spatial perception (i.e., suppressing or not suppressing the echo) over central electrode sites from 300 to 500 ms. Contrary to our hypothesis, vision's powerful contribution to echo suppression occurs late in processing (250-400 ms), suggesting that vision contributes primarily during late sensory or decision making processes. Together, our findings support growing evidence that echo suppression is a slow, progressive mechanism modifiable by visual influences during late sensory and decision making stages. Furthermore, our findings suggest that audiovisual interactions are not limited to early, sensory-level modulations but extend well into late stages of cortical processing.
Collapse
Affiliation(s)
- Christopher W Bishop
- Center for Mind and Brain, University of California, Davis, California 95618, USA.
| | | | | |
Collapse
|
41
|
Nodal FR, Bajo VM, King AJ. Plasticity of spatial hearing: behavioural effects of cortical inactivation. J Physiol 2012; 590:3965-86. [PMID: 22547635 PMCID: PMC3464400 DOI: 10.1113/jphysiol.2011.222828] [Citation(s) in RCA: 31] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022] Open
Abstract
The contribution of auditory cortex to spatial information processing was explored behaviourally in adult ferrets by reversibly deactivating different cortical areas by subdural placement of a polymer that released the GABAA agonist muscimol over a period of weeks. The spatial extent and time course of cortical inactivation were determined electrophysiologically. Muscimol-Elvax was placed bilaterally over the anterior (AEG), middle (MEG) or posterior ectosylvian gyrus (PEG), so that different regions of the auditory cortex could be deactivated in different cases. Sound localization accuracy in the horizontal plane was assessed by measuring both the initial head orienting and approach-to-target responses made by the animals. Head orienting behaviour was unaffected by silencing any region of the auditory cortex, whereas the accuracy of approach-to-target responses to brief sounds (40 ms noise bursts) was reduced by muscimol-Elvax but not by drug-free implants. Modest but significant localization impairments were observed after deactivating the MEG, AEG or PEG, although the largest deficits were produced in animals in which the MEG, where the primary auditory fields are located, was silenced. We also examined experience-induced spatial plasticity by reversibly plugging one ear. In control animals, localization accuracy for both approach-to-target and head orienting responses was initially impaired by monaural occlusion, but recovered with training over the next few days. Deactivating any part of the auditory cortex resulted in less complete recovery than in controls, with the largest deficits observed after silencing the higher-level cortical areas in the AEG and PEG. Although suggesting that each region of auditory cortex contributes to spatial learning, differences in the localization deficits and degree of adaptation between groups imply a regional specialization in the processing of spatial information across the auditory cortex.
Collapse
Affiliation(s)
- Fernando R Nodal
- Department of Physiology, Anatomy and Genetics, Sherrington Building, University of Oxford, Parks Road, Oxford OX1 3PT, UK.
| | | | | |
Collapse
|
42
|
de la Mothe LA, Blumell S, Kajikawa Y, Hackett TA. Cortical connections of auditory cortex in marmoset monkeys: lateral belt and parabelt regions. Anat Rec (Hoboken) 2012; 295:800-21. [PMID: 22461313 DOI: 10.1002/ar.22451] [Citation(s) in RCA: 29] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/10/2011] [Accepted: 03/01/2012] [Indexed: 11/12/2022]
Abstract
The current working model of primate auditory cortex is constructed from a number of studies of both new and old world monkeys. It includes three levels of processing. A primary level, the core region, is surrounded both medially and laterally by a secondary belt region. A third level of processing, the parabelt region, is located lateral to the belt. The marmoset monkey (Callithrix jacchus jacchus) has become an important model system to study auditory processing, but its anatomical organization has not been fully established. In previous studies, we focused on the architecture and connections of the core and medial belt areas (de la Mothe et al., 2006a, J Comp Neurol 496:27-71; de la Mothe et al., 2006b, J Comp Neurol 496:72-96). In this study, the corticocortical connections of the lateral belt and parabelt were examined in the marmoset. Tracers were injected into both rostral and caudal portions of the lateral belt and parabelt. Both regions revealed topographic connections along the rostrocaudal axis, where caudal areas of injection had stronger connections with caudal areas, and rostral areas of injection with rostral areas. The lateral belt had strong connections with the core, belt, and parabelt, whereas the parabelt had strong connections with the belt but not the core. Label in the core from injections in the parabelt was significantly reduced or absent, consistent with the idea that the parabelt relies mainly on the belt for its cortical input. In addition, the present and previous studies indicate hierarchical principles of anatomical organization in the marmoset that are consistent with those observed in other primates.
Collapse
Affiliation(s)
- Lisa A de la Mothe
- Department of Psychology, Tennessee State University, Nashville, Tennessee 37209, USA
| | | | | | | |
Collapse
|
43
|
Kuśmierek P, Ortiz M, Rauschecker JP. Sound-identity processing in early areas of the auditory ventral stream in the macaque. J Neurophysiol 2011; 107:1123-41. [PMID: 22131372 DOI: 10.1152/jn.00793.2011] [Citation(s) in RCA: 23] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Auditory cortical processing is thought to be accomplished along two processing streams. The existence of a posterior/dorsal stream dealing, among other things, with the processing of spatial aspects of sound has been corroborated by numerous studies in several species. An anterior/ventral stream for the processing of nonspatial sound qualities, including the identification of sounds such as species-specific vocalizations, has also received much support. Originally discovered in anterolateral belt cortex, most recent work on the anterior/ventral pathway has been performed on far anterior superior temporal (ST) areas and on ventrolateral prefrontal cortex (VLPFC). Regions of the anterior/ventral stream near its origin in early auditory areas have been less explored. In the present study, we examined three early auditory regions with different anteroposterior locations (caudal, middle, and rostral) in awake rhesus macaques. We analyzed how well classification based on sound-evoked activity patterns of neuronal populations replicates the original stimulus categories. Of the three regions, the rostral region (rR), which included core area R and medial belt area RM, yielded the greatest classification success across all stimulus classes or between classes of natural sounds. Starting from ∼80 ms past stimulus onset, clustering based on the population response in rR became clearly more successful than clustering based on responses from any other region. Our study demonstrates that specialization for sound-identity processing can be found very early in the auditory ventral stream. Furthermore, the fact that this processing develops over time can shed light on underlying mechanisms. Finally, we show that population analysis is a more sensitive method for revealing functional specialization than conventional types of analysis.
Collapse
Affiliation(s)
- Paweł Kuśmierek
- Department of Neuroscience, Georgetown University Medical Center, Washington, District of Columbia 20057, USA.
| | | | | |
Collapse
|
44
|
The auditory dorsal pathway: Orienting vision. Neurosci Biobehav Rev 2011; 35:2162-73. [PMID: 21530585 DOI: 10.1016/j.neubiorev.2011.04.005] [Citation(s) in RCA: 60] [Impact Index Per Article: 4.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/08/2010] [Revised: 03/16/2011] [Accepted: 04/10/2011] [Indexed: 11/24/2022]
|
45
|
Oscillatory alpha-band mechanisms and the deployment of spatial attention to anticipated auditory and visual target locations: supramodal or sensory-specific control mechanisms? J Neurosci 2011; 31:9923-32. [PMID: 21734284 DOI: 10.1523/jneurosci.4660-10.2011] [Citation(s) in RCA: 166] [Impact Index Per Article: 12.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022] Open
Abstract
Oscillatory alpha-band activity (8-15 Hz) over parieto-occipital cortex in humans plays an important role in suppression of processing for inputs at to-be-ignored regions of space, with increased alpha-band power observed over cortex contralateral to locations expected to contain distractors. It is unclear whether similar processes operate during deployment of spatial attention in other sensory modalities. Evidence from lesion patients suggests that parietal regions house supramodal representations of space. The parietal lobes are prominent generators of alpha oscillations, raising the possibility that alpha is a neural signature of supramodal spatial attention. Furthermore, when spatial attention is deployed within vision, processing of task-irrelevant auditory inputs at attended locations is also enhanced, pointing to automatic links between spatial deployments across senses. Here, we asked whether lateralized alpha-band activity is also evident in a purely auditory spatial-cueing task and whether it had the same underlying generator configuration as in a purely visuospatial task. If common to both sensory systems, this would provide strong support for "supramodal" attention theory. Alternatively, alpha-band differences between auditory and visual tasks would support a sensory-specific account. Lateralized shifts in alpha-band activity were indeed observed during a purely auditory spatial task. Crucially, there were clear differences in scalp topographies of this alpha activity depending on the sensory system within which spatial attention was deployed. Findings suggest that parietally generated alpha-band mechanisms are central to attentional deployments across modalities but that they are invoked in a sensory-specific manner. The data support an "interactivity account," whereby a supramodal system interacts with sensory-specific control systems during deployment of spatial attention.
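The lateralized alpha-power measure described here can be sketched in a few lines. The traces below are synthetic (a 10 Hz sinusoid plus noise standing in for EEG from ipsilateral and contralateral parieto-occipital sites), and the band limits, filter order, and lateralization index are generic choices, not the study's exact parameters:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 500.0  # sampling rate in Hz
t = np.arange(0, 2.0, 1 / fs)
rng = np.random.default_rng(1)

# Synthetic single-trial traces: stronger 10 Hz oscillation at the site
# contralateral to the to-be-ignored location, as in the alpha literature.
ipsi = 0.5 * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 1, t.size)
contra = 2.0 * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 1, t.size)

def alpha_power(x, fs, band=(8.0, 15.0)):
    """Mean alpha-band power via band-pass filtering and the Hilbert envelope."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    envelope = np.abs(hilbert(filtfilt(b, a, x)))
    return np.mean(envelope ** 2)

p_contra, p_ipsi = alpha_power(contra, fs), alpha_power(ipsi, fs)
# Lateralization index: positive when contralateral alpha power dominates.
ali = (p_contra - p_ipsi) / (p_contra + p_ipsi)
print(f"alpha lateralization index: {ali:.2f}")
```

Comparing such indices (and the scalp distribution of the underlying power) between auditory and visual cueing tasks is what distinguishes a supramodal from a sensory-specific account.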
Collapse
|
46
|
Kajikawa Y, Falchier A, Musacchia G, Lakatos P, Schroeder C. Audiovisual Integration in Nonhuman Primates. Front Neurosci 2011. [DOI: 10.1201/9781439812174-8] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022] Open
|
47
|
Kajikawa Y, Falchier A, Musacchia G, Lakatos P, Schroeder C. Audiovisual Integration in Nonhuman Primates. Front Neurosci 2011. [DOI: 10.1201/b11092-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022] Open
|
48
|
Columnar and layer-specific representation of spatial sensitivity in mouse primary auditory cortex. Neuroreport 2011; 22:530-4. [PMID: 21666517 DOI: 10.1097/wnr.0b013e328348aae5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
The primary auditory cortex (AI) is implicated in coding sound location, as revealed by behavior-lesion experiments, but our knowledge about the functional organization and laminar specificity of neural spatial sensitivity is still very limited. Using single-unit recordings in mouse AI, we show that (i) an inverse relationship between onset latency and spike count is consistently observed when responses across all azimuthal points are considered; (ii) a substantial proportion of penetrations perpendicular to the AI surface showed columnar organization of best azimuths; and (iii) the preferred azimuth range of AI neurons demonstrated a layer-specific distribution pattern. Our findings suggest that, like other response properties, the processing of sound-location information in the auditory cortex is also layer dependent.
Collapse
|
49
|
Sarro EC, Rosen MJ, Sanes DH. Taking advantage of behavioral changes during development and training to assess sensory coding mechanisms. Ann N Y Acad Sci 2011; 1225:142-54. [PMID: 21535001 DOI: 10.1111/j.1749-6632.2011.06023.x] [Citation(s) in RCA: 12] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022]
Abstract
The relationship between behavioral and neural performance has been explored in adult animals, but rarely during the developmental period when perceptual abilities emerge. We used these naturally occurring changes in auditory perception to evaluate underlying encoding mechanisms. Performance of juvenile and adult gerbils on an amplitude modulation (AM) detection task was compared with response properties from auditory cortex of age-matched animals. When tested with an identical behavioral procedure, juveniles display poorer AM detection thresholds than adults. Two neurometric analyses indicate that the most sensitive juvenile and adult neurons have equivalent AM thresholds. However, a pooling neurometric revealed that adult cortex encodes smaller AM depths. By each measure, neural sensitivity was superior to psychometric thresholds. However, training during the juvenile period improved adult behavioral thresholds, such that they approached the best sensitivity of adult neurons. Thus, periods of training may allow an animal to use the encoded information already present in cortex.
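A neurometric threshold of the kind compared against psychometric performance here can be estimated with a standard ROC analysis. The sketch below uses synthetic Poisson spike counts and an illustrative linear rate-depth function; the 0.75 AUC criterion is a conventional choice, not necessarily the one used in this study:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical spike counts: firing rate grows with AM depth (synthetic tuning).
depths = np.array([0.0, 0.05, 0.1, 0.2, 0.4, 0.8])  # modulation depths
n_trials = 200
rates = 10 + 15 * depths  # Hz; illustrative linear rate-depth function
counts = rng.poisson(rates, size=(n_trials, depths.size))

def roc_auc(signal, noise):
    """Probability that a random 'signal' count exceeds a random 'noise' count."""
    s, n = np.asarray(signal), np.asarray(noise)
    greater = (s[:, None] > n[None, :]).mean()
    ties = (s[:, None] == n[None, :]).mean()
    return greater + 0.5 * ties

# Neurometric function: discriminability of each depth from the unmodulated case.
auc = np.array([roc_auc(counts[:, i], counts[:, 0]) for i in range(depths.size)])
# Neurometric threshold: smallest depth whose AUC reaches the 0.75 criterion.
above = depths[auc >= 0.75]
threshold = above[0] if above.size else None
print(f"neurometric threshold: {threshold}")
```

The same procedure applied to juvenile and adult recordings yields the single-neuron thresholds the abstract compares, while a pooling neurometric aggregates counts across neurons before computing the ROC.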
Collapse
Affiliation(s)
- Emma C Sarro
- Center for Neural Science, New York University, New York, New York, USA.
| | | | | |
Collapse
|
50
|
Abstract
Auditory signals are decomposed into discrete frequency elements early in the transduction process, yet somehow these signals are recombined into the rich acoustic percepts that we readily identify and are familiar with. The cerebral cortex is necessary for the perception of these signals, and studies from several laboratories over the past decade have made significant advances in our understanding of the neuronal mechanisms underlying auditory perception. This review will concentrate on recent studies in the macaque monkey indicating that the activity of populations of neurons better accounts for perceptual abilities than the activity of single neurons does. The best examples address whether acoustic space is represented along the "where" pathway in the caudal regions of auditory cortex. Our current understanding of how such population activity could also underlie the perception of the nonspatial features of acoustic stimuli is reviewed, as is how multisensory interactions can influence our auditory perception.
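The population-versus-single-neuron comparison at the heart of this review can be made concrete with a toy simulation: many weakly tuned neurons, individually poor at discriminating two sound locations, support accurate discrimination when read out together. All names and parameters below are illustrative assumptions, not taken from the reviewed studies:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic example: 2 sound locations, 40 weakly tuned neurons, 100 trials each.
n_neurons, n_trials = 40, 100
tuning = rng.normal(0, 0.3, n_neurons)              # per-neuron location preference
X0 = rng.normal(-tuning, 1.0, (n_trials, n_neurons))  # responses to location 0
X1 = rng.normal(+tuning, 1.0, (n_trials, n_neurons))  # responses to location 1

def accuracy(score0, score1):
    """Fraction of trials classified correctly by thresholding at the midpoint."""
    thresh = (score0.mean() + score1.mean()) / 2
    return ((score0 < thresh).mean() + (score1 > thresh).mean()) / 2

# Best single neuron vs. a population readout (tuning-weighted sum across neurons).
single = max(accuracy(X0[:, i] * np.sign(tuning[i]), X1[:, i] * np.sign(tuning[i]))
             for i in range(n_neurons))
population = accuracy(X0 @ tuning, X1 @ tuning)
print(f"best single neuron: {single:.2f}  population: {population:.2f}")
```

Because the weighted sum averages out independent noise across neurons, the population readout reliably outperforms even the best single neuron, mirroring the review's argument for population coding of acoustic space.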
Collapse
Affiliation(s)
- Gregg H Recanzone
- Center for Neuroscience and Department of Neurobiology, Physiology and Behavior, University of California, Davis, California
| |
Collapse
|