1. Wang Q, Luo L, Xu N, Wang J, Yang R, Chen G, Ren J, Luan G, Fang F. Neural response properties predict perceived contents and locations elicited by intracranial electrical stimulation of human auditory cortex. Cereb Cortex 2024; 34:bhad517. PMID: 38185991. DOI: 10.1093/cercor/bhad517.
Abstract
Intracranial electrical stimulation (iES) of auditory cortex can elicit sound experiences with a variety of perceived contents (hallucination or illusion) and locations (contralateral or bilateral), independent of actual acoustic inputs. However, the neural mechanisms underlying this elicitation heterogeneity remain unknown. Here, we collected subjective reports following iES at 3062 intracranial sites in 28 patients (both sexes) and identified 113 auditory cortical sites with iES-elicited sound experiences. We then decomposed the sound-induced intracranial electroencephalogram (iEEG) signals recorded from all 113 sites into time-frequency features. We found that the iES-elicited perceived contents can be predicted by the early high-γ features extracted from sound-induced iEEG. In contrast, the perceived locations elicited by stimulating hallucination sites and illusion sites are determined by the late high-γ and long-lasting α features, respectively. Our study unveils crucial neural signatures of iES-elicited sound experiences in humans and presents a new strategy for hearing restoration in individuals with deafness.
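The high-γ features referred to above are typically obtained by band-pass filtering the iEEG and taking the analytic-signal (Hilbert) envelope. A minimal, generic sketch of that standard step (not the authors' actual pipeline; band edges, filter order, and signal below are illustrative):

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def high_gamma_envelope(x, fs, band=(70.0, 150.0), order=4):
    """Band-pass a signal in the high-gamma range and return its Hilbert envelope."""
    nyq = fs / 2.0
    b, a = butter(order, [band[0] / nyq, band[1] / nyq], btype="band")
    filtered = filtfilt(b, a, x)      # zero-phase band-pass
    return np.abs(hilbert(filtered))  # instantaneous amplitude

# Synthetic example: a 100 Hz burst in noise raises the high-gamma envelope.
fs = 1000.0
t = np.arange(0, 1.0, 1 / fs)
rng = np.random.default_rng(0)
x = rng.standard_normal(t.size) * 0.1
x[400:600] += 2.0 * np.sin(2 * np.pi * 100 * t[400:600])
env = high_gamma_envelope(x, fs)
```

The envelope time course, averaged over trials and windowed into early and late segments, is the kind of feature a decoder could then be trained on.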
Affiliations
- Qian Wang: School of Psychological and Cognitive Sciences and Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing 100871, China; IDG/McGovern Institute for Brain Research, Peking University, Beijing 100871, China; National Key Laboratory of General Artificial Intelligence, Peking University, Beijing 100871, China
- Lu Luo: School of Psychology, Beijing Sport University, Beijing 100084, China
- Na Xu: Division of Brain Sciences, Changping Laboratory, Beijing 102206, China
- Jing Wang: Department of Neurology, Sanbo Brain Hospital, Capital Medical University, Beijing 100093, China
- Ruolin Yang: School of Psychological and Cognitive Sciences and Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing 100871, China; IDG/McGovern Institute for Brain Research, Peking University, Beijing 100871, China; Peking-Tsinghua Center for Life Sciences, Peking University, Beijing 100871, China
- Guanpeng Chen: School of Psychological and Cognitive Sciences and Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing 100871, China; IDG/McGovern Institute for Brain Research, Peking University, Beijing 100871, China; Peking-Tsinghua Center for Life Sciences, Peking University, Beijing 100871, China
- Jie Ren: Department of Functional Neurosurgery, Beijing Key Laboratory of Epilepsy, Sanbo Brain Hospital, Capital Medical University, Beijing 100093, China; Epilepsy Center, Kunming Sanbo Brain Hospital, Kunming 650100, China
- Guoming Luan: Department of Functional Neurosurgery, Beijing Key Laboratory of Epilepsy, Sanbo Brain Hospital, Capital Medical University, Beijing 100093, China; Beijing Institute for Brain Disorders, Beijing 100069, China
- Fang Fang: School of Psychological and Cognitive Sciences and Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing 100871, China; IDG/McGovern Institute for Brain Research, Peking University, Beijing 100871, China; Peking-Tsinghua Center for Life Sciences, Peking University, Beijing 100871, China
2. Middlebrooks JC, Javier-Tolentino LK, Arneja A, Richardson ML. High Spectral and Temporal Acuity in Primary Auditory Cortex of Awake Cats. J Assoc Res Otolaryngol 2023; 24:197-215. PMID: 36795196. PMCID: PMC10121981. DOI: 10.1007/s10162-023-00890-6.
Abstract
Most accounts of single- and multi-unit responses in auditory cortex under anesthetized conditions have emphasized V-shaped frequency tuning curves and low-pass sensitivity to rates of repeated sounds. In contrast, single-unit recordings in awake marmosets also show I-shaped and O-shaped response areas having restricted tuning to frequency and (for O units) sound level. That preparation also demonstrates synchrony to moderate click rates and representation of higher click rates by spike rates of non-synchronized tonic responses, neither of which is commonly seen under anesthesia. The spectral and temporal representation observed in the marmoset might reflect special adaptations of that species, might be due to single- rather than multi-unit recording, or might indicate characteristics of awake-versus-anesthetized recording conditions. We studied spectral and temporal representation in the primary auditory cortex of alert cats. We observed V-, I-, and O-shaped response areas like those demonstrated in awake marmosets. Neurons could synchronize to click trains at rates about an octave higher than is usually seen with anesthesia. Representations of click rates by rates of non-synchronized tonic responses exhibited dynamic ranges that covered the entire range of tested click rates. The observation of these spectral and temporal representations in cats demonstrates that they are not unique to primates and, indeed, might be widespread among mammalian species. Moreover, we observed no significant difference in stimulus representation between single- and multi-unit recordings. It appears that the principal factor that has hindered observations of high spectral and temporal acuity in the auditory cortex has been the use of general anesthesia.
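Synchrony of spiking to click trains, of the kind measured here, is conventionally quantified with the Goldberg–Brown vector strength. A small sketch of that standard metric (not code from the study; the spike times below are synthetic):

```python
import numpy as np

def vector_strength(spike_times, click_rate):
    """Goldberg-Brown vector strength: 1 = perfect phase locking, ~0 = none."""
    phases = 2 * np.pi * click_rate * np.asarray(spike_times)
    return np.abs(np.mean(np.exp(1j * phases)))

# One spike per click of a 100 Hz train gives a vector strength near 1;
# uniformly random spike times over the same second give a value near 0.
locked = np.arange(0, 1.0, 0.01)
vs_locked = vector_strength(locked, 100.0)

rng = np.random.default_rng(0)
vs_random = vector_strength(rng.uniform(0, 1.0, 100), 100.0)
```

In practice the metric is paired with a Rayleigh test on the same phases to decide whether synchronization is statistically significant.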
Affiliations
- John C Middlebrooks: Department of Otolaryngology, University of California at Irvine, D404 Medical Science D, Irvine, CA 92697-5310, USA; Department of Neurobiology and Behavior, University of California at Irvine, Irvine, CA, USA; Department of Cognitive Sciences, University of California at Irvine, Irvine, CA, USA; Center for Hearing Research, University of California at Irvine, Irvine, CA, USA
- Lauren K Javier-Tolentino: Department of Neurobiology and Behavior, University of California at Irvine, Irvine, CA, USA; Center for Hearing Research, University of California at Irvine, Irvine, CA, USA
- Akshat Arneja: Department of Cognitive Sciences, University of California at Irvine, Irvine, CA, USA; Center for Hearing Research, University of California at Irvine, Irvine, CA, USA
- Matthew L Richardson: Department of Otolaryngology, University of California at Irvine, D404 Medical Science D, Irvine, CA 92697-5310, USA; Center for Hearing Research, University of California at Irvine, Irvine, CA, USA
3. Homma NY, Bajo VM. Lemniscal Corticothalamic Feedback in Auditory Scene Analysis. Front Neurosci 2021; 15:723893. PMID: 34489635. PMCID: PMC8417129. DOI: 10.3389/fnins.2021.723893.
Abstract
Sound information is transmitted from the ear to central auditory stations of the brain via several nuclei. In addition to these ascending pathways, there exist descending projections that can influence information processing at each of these nuclei. A major descending pathway in the auditory system is the feedback projection from layer VI of the primary auditory cortex (A1) to the ventral division of the medial geniculate body (MGBv) in the thalamus. The corticothalamic axons have small glutamatergic terminals that can modulate thalamic processing and thalamocortical information transmission. Corticothalamic neurons also provide input to GABAergic neurons of the thalamic reticular nucleus (TRN), which receives collaterals from the ascending thalamic axons. The balance of corticothalamic and TRN inputs has been shown to refine frequency tuning, firing patterns, and gating of MGBv neurons. Therefore, the thalamus is not merely a relay stage in the chain of auditory nuclei but participates in complex aspects of sound processing that include top-down modulation. In this review, we aim (i) to examine how lemniscal corticothalamic feedback modulates responses in MGBv neurons, and (ii) to explore how this feedback contributes to auditory scene analysis, particularly frequency and harmonic perception. Finally, we discuss potential implications of the role of corticothalamic feedback in music and speech perception, where precise spectral and temporal processing is essential.
Affiliations
- Natsumi Y. Homma: Center for Integrative Neuroscience, University of California, San Francisco, San Francisco, CA, United States; Coleman Memorial Laboratory, Department of Otolaryngology – Head and Neck Surgery, University of California, San Francisco, San Francisco, CA, United States
- Victoria M. Bajo: Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
4. Brinkmann P, Kotz SA, Smit JV, Janssen MLF, Schwartze M. Auditory thalamus dysfunction and pathophysiology in tinnitus: a predictive network hypothesis. Brain Struct Funct 2021; 226:1659-1676. PMID: 33934235. PMCID: PMC8203542. DOI: 10.1007/s00429-021-02284-x.
Abstract
Tinnitus is the perception of a 'ringing' sound without an acoustic source. It is generally accepted that tinnitus develops after peripheral hearing loss and is associated with altered auditory processing. The thalamus is a crucial relay in the underlying pathways that actively shapes processing of auditory signals before the respective information reaches the cerebral cortex. Here, we review animal and human evidence to define thalamic function in tinnitus. Overall, increased spontaneous firing and altered coherence between the thalamic medial geniculate body (MGB) and auditory cortices are observed in animal models of tinnitus. It is likely that the functional connectivity between the MGB and primary and secondary auditory cortices is reduced in humans. Conversely, there are indications for increased connectivity between the MGB and several areas in the cingulate cortex and posterior cerebellar regions, as well as variability in connectivity between the MGB and frontal areas regarding laterality and orientation in the inferior, medial, and superior frontal gyrus. We suggest that these changes affect adaptive sensory gating of temporal and spectral sound features along the auditory pathway, reflecting dysfunction in an extensive thalamo-cortical network implicated in predictive temporal adaptation to the auditory environment. Modulation of temporal characteristics of input signals might hence factor into a thalamo-cortical dysrhythmia profile of tinnitus, but could ultimately also establish new directions for treatment options for persons with tinnitus.
Affiliations
- Pia Brinkmann: Department of Neuropsychology and Psychopharmacology, University of Maastricht, Universiteitssingel 40, 6229, Maastricht, The Netherlands
- Sonja A Kotz: Department of Neuropsychology and Psychopharmacology, University of Maastricht, Universiteitssingel 40, 6229, Maastricht, The Netherlands; Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Jasper V Smit: Department of Ear Nose and Throat/Head and Neck Surgery, Zuyderland Medical Center, Sittard/Heerlen, The Netherlands
- Marcus L F Janssen: Department of Clinical Neurophysiology, Maastricht University Medical Center, Maastricht, The Netherlands; School for Mental Health and Neuroscience, Faculty of Health, Medicine and Life Sciences, Maastricht University, Maastricht, The Netherlands
- Michael Schwartze: Department of Neuropsychology and Psychopharmacology, University of Maastricht, Universiteitssingel 40, 6229, Maastricht, The Netherlands
5. Herrmann B, Butler BE. Hearing loss and brain plasticity: the hyperactivity phenomenon. Brain Struct Funct 2021; 226:2019-2039. PMID: 34100151. DOI: 10.1007/s00429-021-02313-9.
Abstract
Many aging adults experience some form of hearing problems that may arise from auditory peripheral damage. However, it has been increasingly acknowledged that hearing loss is not only a dysfunction of the auditory periphery but also results from changes within the entire auditory system, from periphery to cortex. Damage to the auditory periphery is associated with an increase in neural activity at various stages throughout the auditory pathway. Here, we review neurophysiological evidence of hyperactivity, auditory perceptual difficulties that may result from hyperactivity, and outline open conceptual and methodological questions related to the study of hyperactivity. We suggest that hyperactivity alters all aspects of hearing (spectral, temporal, and spatial) and, in turn, impairs speech comprehension when background sound is present. By focusing on the perceptual consequences of hyperactivity and the potential challenges of investigating hyperactivity in humans, we hope to bring animal and human electrophysiologists closer together to better understand hearing problems in older adulthood.
Affiliations
- Björn Herrmann: Rotman Research Institute, Baycrest, Toronto, ON M6A 2E1, Canada; Department of Psychology, University of Toronto, Toronto, ON, Canada
- Blake E Butler: Department of Psychology & The Brain and Mind Institute, University of Western Ontario, London, ON, Canada; National Centre for Audiology, University of Western Ontario, London, ON, Canada
6. Diversity of Receptive Fields and Sideband Inhibition with Complex Thalamocortical and Intracortical Origin in L2/3 of Mouse Primary Auditory Cortex. J Neurosci 2021; 41:3142-3162. PMID: 33593857. DOI: 10.1523/jneurosci.1732-20.2021.
Abstract
Receptive fields of primary auditory cortex (A1) neurons show excitatory neuronal frequency preference and diverse inhibitory sidebands. While the frequency preferences of excitatory neurons in local A1 areas can be heterogeneous, those of inhibitory neurons are more homogeneous. To date, the diversity and origin of inhibitory sidebands in local neuronal populations, and the relation between local cellular frequency preference and inhibitory sidebands, are unknown. To reveal both excitatory and inhibitory subfields, we presented two-tone and pure tone stimuli while imaging excitatory neurons (Thy1) and two types of inhibitory neurons (parvalbumin and somatostatin) in L2/3 of mouse A1. We classified neurons into six classes based on frequency response area (FRA) shapes; sideband inhibition depended on both FRA shape and cell type. Sideband inhibition showed higher local heterogeneity than frequency tuning, suggesting that it originates from diverse sources of local and distant neurons. Two-tone interactions depended on neuron subclass, with excitatory neurons showing the most nonlinearity. Onset and offset neurons showed dissimilar spectral integration, suggesting that differing circuits process sound onset and offset. These results suggest that excitatory neurons integrate complex and nonuniform inhibitory input. Thalamocortical terminals also exhibited sideband inhibition, but with properties different from those of cortical neurons. Thus, some components of sideband inhibition are inherited from thalamocortical inputs and are further modified by converging intracortical circuits. The combined heterogeneity of frequency tuning and diverse sideband inhibition facilitates complex spectral shape encoding and allows for rapid and extensive plasticity.

Significance statement: Sensory systems recognize and differentiate between stimuli through selectivity for different features. Sideband inhibition serves as an important mechanism to sharpen stimulus selectivity, but its cortical mechanisms are not entirely resolved. We imaged pyramidal neurons and two common classes of interneurons suggested to mediate sideband inhibition (parvalbumin- and somatostatin-positive) in the auditory cortex and inferred their inhibitory sidebands. We observed a higher degree of variability in the inhibitory sidebands than in the local frequency tuning, which cannot be predicted from the relatively high homogeneity of responses by inhibitory interneurons. This suggests that cortical sideband inhibition is nonuniform and likely results from a complex interplay between existing functional inhibition in the feedforward input and cortical refinement.
7. Cooke JE, Lee JJ, Bartlett EL, Wang X, Bendor D. Post-stimulatory activity in primate auditory cortex evoked by sensory stimulation during passive listening. Sci Rep 2020; 10:13885. PMID: 32807854. PMCID: PMC7431571. DOI: 10.1038/s41598-020-70397-0.
Abstract
Under certain circumstances, cortical neurons are capable of elevating their firing for long durations in the absence of a stimulus. Such activity has typically been observed and interpreted in the context of performance of a behavioural task. Here we investigated whether post-stimulatory activity is observed in auditory cortex and the medial geniculate body of the thalamus in the absence of any explicit behavioural task. We recorded spiking activity from single units in the auditory cortex (fields A1, R and RT) and auditory thalamus of awake, passively listening marmosets. We observed post-stimulatory activity that lasted for hundreds of milliseconds following the termination of the acoustic stimulus. Post-stimulatory activity was observed following adapting, sustained, and suppressed response profiles during the stimulus. These response types were observed across all cortical fields tested, but were largely absent from the auditory thalamus. As well as being of shorter duration, thalamic post-stimulatory activity emerged following a longer latency than in cortex, indicating that post-stimulatory activity may be generated within auditory cortex during passive listening. Given that these responses were observed in the absence of an explicit behavioural task, post-stimulatory activity in sensory cortex may play a functional role in processes such as echoic memory and temporal integration that occur during passive listening.
Affiliations
- James E Cooke: Institute of Behavioural Neuroscience (IBN), University College London (UCL), London, WC1H 0AP, UK
- Julie J Lee: Institute of Behavioural Neuroscience (IBN), University College London (UCL), London, WC1H 0AP, UK; Institute of Ophthalmology, University College London (UCL), London, WC1H 0AP, UK
- Edward L Bartlett: Departments of Biological Sciences and Biomedical Engineering, Purdue University, West Lafayette, 47907, USA
- Xiaoqin Wang: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, 21205, USA
- Daniel Bendor: Institute of Behavioural Neuroscience (IBN), University College London (UCL), London, WC1H 0AP, UK
8. Gao L, Wang X. Subthreshold Activity Underlying the Diversity and Selectivity of the Primary Auditory Cortex Studied by Intracellular Recordings in Awake Marmosets. Cereb Cortex 2020; 29:994-1005. PMID: 29377991. DOI: 10.1093/cercor/bhy006.
Abstract
Extracellular recording studies have revealed diverse and selective neural responses in the primary auditory cortex (A1) of awake animals. However, we have limited knowledge of the subthreshold events that give rise to these responses, especially in non-human primates, as intracellular recordings in awake animals pose substantial technical challenges. We developed a novel intracellular recording technique in awake marmosets to systematically study subthreshold activity of A1 neurons that underlies their diverse and selective spiking responses. Our findings showed that, in contrast to the predominantly transient depolarization observed in A1 of anesthetized animals, both transient and sustained depolarization (during or beyond the stimulus period) were observed. Compared with spiking responses, subthreshold responses were often longer lasting in duration and more broadly tuned in frequency, and showed narrower intensity tuning in non-monotonic neurons and lower response thresholds in monotonic neurons. These observations demonstrated the enhancement of stimulus selectivity from subthreshold to spiking responses in individual A1 neurons. Furthermore, A1 neurons classified as regular- or fast-spiking subpopulations based on their spike shapes exhibited distinct response properties in the frequency and intensity domains. These findings provide valuable insights into cortical integration and transformation of auditory information at the cellular level in auditory cortex of awake non-human primates.
Affiliations
- Lixia Gao: Laboratory of Auditory Neurophysiology, Department of Biomedical Engineering, Johns Hopkins University School of Medicine, Baltimore, MD, USA; Interdisciplinary Institute of Neuroscience and Technology, Qiushi Academy for Advanced Studies, Zhejiang University, Hangzhou, People's Republic of China
- Xiaoqin Wang: Laboratory of Auditory Neurophysiology, Department of Biomedical Engineering, Johns Hopkins University School of Medicine, Baltimore, MD, USA
9. Zulfiqar I, Moerel M, Formisano E. Spectro-Temporal Processing in a Two-Stream Computational Model of Auditory Cortex. Front Comput Neurosci 2020; 13:95. PMID: 32038212. PMCID: PMC6987265. DOI: 10.3389/fncom.2019.00095.
Abstract
Neural processing of sounds in the dorsal and ventral streams of the (human) auditory cortex is optimized for analyzing fine-grained temporal and spectral information, respectively. Here we use a Wilson and Cowan firing-rate modeling framework to simulate spectro-temporal processing of sounds in these auditory streams and to investigate the link between neural population activity and behavioral results of psychoacoustic experiments. The proposed model consisted of two core (A1 and R, representing primary areas) and two belt (Slow and Fast, representing rostral and caudal processing, respectively) areas, differing in terms of their spectral and temporal response properties. First, we simulated the responses to amplitude modulated (AM) noise and tones. In agreement with electrophysiological results, we observed an area-dependent transition from a temporal (synchronization) to a rate code when moving from low to high modulation rates. Simulated neural responses in a task of amplitude modulation detection suggested that thresholds derived from population responses in core areas closely resembled those of psychoacoustic experiments in human listeners. For tones, simulated modulation threshold functions were found to be dependent on the carrier frequency. Second, we simulated the responses to complex tones with missing fundamental stimuli and found that synchronization of responses in the Fast area accurately encoded pitch, with the strength of synchronization depending on the number and order of harmonic components. Finally, using speech stimuli, we showed that the spectral and temporal structure of the speech was reflected in parallel by the modeled areas. The analyses highlighted that the Slow stream coded with high spectral precision the aspects of the speech signal characterized by slow temporal changes (e.g., prosody), while the Fast stream encoded primarily the faster changes (e.g., phonemes, consonants, temporal pitch). Interestingly, the pitch of a speaker was encoded both spatially (i.e., tonotopically) in the Slow area and temporally in the Fast area. Overall, the simulations showed that the model is valuable for generating hypotheses on how the different cortical areas/streams may contribute toward behaviorally relevant aspects of auditory processing. The model can be used in combination with physiological models of neurovascular coupling to generate predictions for human functional MRI experiments.
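The Wilson and Cowan firing-rate framework used by this model reduces each area to coupled excitatory and inhibitory populations with sigmoidal gain. A minimal single-pair sketch (parameters are illustrative textbook-style values, not those of the four-area model):

```python
import numpy as np

def sigmoid(x, a=1.2, theta=2.8):
    # Logistic gain function, shifted so that sigmoid(0) = 0
    return 1 / (1 + np.exp(-a * (x - theta))) - 1 / (1 + np.exp(a * theta))

def wilson_cowan(P, T=1.0, dt=1e-3, tauE=0.01, tauI=0.02,
                 wEE=12.0, wEI=10.0, wIE=9.0, wII=3.0):
    """Euler-integrate one excitatory/inhibitory Wilson-Cowan pair driven by
    constant external input P; returns the excitatory population trace."""
    E = I = 0.0
    trace = []
    for _ in range(int(T / dt)):
        dE = (-E + sigmoid(wEE * E - wEI * I + P)) / tauE
        dI = (-I + sigmoid(wIE * E - wII * I)) / tauI
        E += dt * dE
        I += dt * dI
        trace.append(E)
    return np.array(trace)

# Without input the pair stays silent; with input the E population activates.
silent = wilson_cowan(0.0)
driven = wilson_cowan(3.0)
```

In the two-stream model each area would be one such pair (with area-specific time constants setting its temporal acuity), coupled across a tonotopic axis.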
Affiliations
- Isma Zulfiqar: Maastricht Centre for Systems Biology, Maastricht University, Maastricht, Netherlands
- Michelle Moerel: Maastricht Centre for Systems Biology, Maastricht University, Maastricht, Netherlands; Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands; Maastricht Brain Imaging Center, Maastricht, Netherlands
- Elia Formisano: Maastricht Centre for Systems Biology, Maastricht University, Maastricht, Netherlands; Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands; Maastricht Brain Imaging Center, Maastricht, Netherlands
10. Mihai PG, Moerel M, de Martino F, Trampel R, Kiebel S, von Kriegstein K. Modulation of tonotopic ventral medial geniculate body is behaviorally relevant for speech recognition. eLife 2019; 8:e44837. PMID: 31453811. PMCID: PMC6711666. DOI: 10.7554/elife.44837.
Abstract
Sensory thalami are central sensory pathway stations for information processing. Their role for human cognition and perception, however, remains unclear. Recent evidence suggests an involvement of the sensory thalami in speech recognition. In particular, the auditory thalamus (medial geniculate body, MGB) response is modulated by speech recognition tasks and the amount of this task-dependent modulation is associated with speech recognition abilities. Here, we tested the specific hypothesis that this behaviorally relevant modulation is present in the MGB subsection that corresponds to the primary auditory pathway (i.e., the ventral MGB [vMGB]). We used ultra-high field 7T fMRI to identify the vMGB, and found a significant positive correlation between the amount of task-dependent modulation and the speech recognition performance across participants within left vMGB, but not within the other MGB subsections. These results imply that modulation of thalamic driving input to the auditory cortex facilitates speech recognition.
Affiliations
- Paul Glad Mihai: Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; Chair of Cognitive and Clinical Neuroscience, Faculty of Psychology, Technische Universität Dresden, Dresden, Germany
- Michelle Moerel: Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands; Maastricht Brain Imaging Center (MBIC), Maastricht, Netherlands; Maastricht Centre for Systems Biology (MaCSBio), Maastricht University, Maastricht, Netherlands
- Federico de Martino: Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands; Maastricht Brain Imaging Center (MBIC), Maastricht, Netherlands; Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, United States
- Robert Trampel: Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Stefan Kiebel: Chair of Cognitive and Clinical Neuroscience, Faculty of Psychology, Technische Universität Dresden, Dresden, Germany
- Katharina von Kriegstein: Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; Chair of Cognitive and Clinical Neuroscience, Faculty of Psychology, Technische Universität Dresden, Dresden, Germany
11. Sumner CJ, Wells TT, Bergevin C, Sollini J, Kreft HA, Palmer AR, Oxenham AJ, Shera CA. Mammalian behavior and physiology converge to confirm sharper cochlear tuning in humans. Proc Natl Acad Sci U S A 2018; 115:11322-11326. PMID: 30322908. PMCID: PMC6217411. DOI: 10.1073/pnas.1810766115.
Abstract
Frequency analysis of sound by the cochlea is the most fundamental property of the auditory system. Despite its importance, the resolution of this frequency analysis in humans remains controversial. The controversy persists because the methods used to estimate tuning in humans are indirect and have not all been independently validated in other species. Some data suggest that human cochlear tuning is considerably sharper than that of laboratory animals, while others suggest little or no difference between species. We show here in a single species (ferret) that behavioral estimates of tuning bandwidths obtained using perceptual masking methods, and objective estimates obtained using otoacoustic emissions, both also employed in humans, agree closely with direct physiological measurements from single auditory-nerve fibers. Combined with human behavioral data, this outcome indicates that the frequency analysis performed by the human cochlea is of significantly higher resolution than found in common laboratory animals. This finding raises important questions about the evolutionary origins of human cochlear tuning, its role in the emergence of speech communication, and the mechanisms underlying our ability to separate and process natural sounds in complex acoustic environments.
Collapse
Affiliation(s)
- Christian J Sumner: Medical Research Council Institute of Hearing Research, School of Medicine, The University of Nottingham, NG7 2RD Nottingham, United Kingdom
- Toby T Wells: Medical Research Council Institute of Hearing Research, School of Medicine, The University of Nottingham, NG7 2RD Nottingham, United Kingdom
- Christopher Bergevin: Department of Physics & Astronomy, York University, Toronto, ON M3J 1P3, Canada; Centre for Vision Research, York University, Toronto, ON M3J 1P3, Canada
- Joseph Sollini: Medical Research Council Institute of Hearing Research, School of Medicine, The University of Nottingham, NG7 2RD Nottingham, United Kingdom
- Heather A Kreft: Department of Psychology, University of Minnesota, Minneapolis, MN 55455; Department of Otolaryngology, University of Minnesota, Minneapolis, MN 55455
- Alan R Palmer: Medical Research Council Institute of Hearing Research, School of Medicine, The University of Nottingham, NG7 2RD Nottingham, United Kingdom
- Andrew J Oxenham: Department of Psychology, University of Minnesota, Minneapolis, MN 55455; Department of Otolaryngology, University of Minnesota, Minneapolis, MN 55455
- Christopher A Shera: Caruso Department of Otolaryngology, University of Southern California, Los Angeles, CA 90033; Department of Physics and Astronomy, University of Southern California, Los Angeles, CA 90089
12
Li G, Henriquez CS, Fröhlich F. Unified thalamic model generates multiple distinct oscillations with state-dependent entrainment by stimulation. PLoS Comput Biol 2017; 13:e1005797. [PMID: 29073146] [PMCID: PMC5675460] [DOI: 10.1371/journal.pcbi.1005797]
Abstract
The thalamus plays a critical role in the genesis of thalamocortical oscillations, yet the underlying mechanisms remain elusive. To understand whether the isolated thalamus can generate multiple distinct oscillations, we developed a biophysical thalamic model to test the hypothesis that generation of and transition between distinct thalamic oscillations can be explained as a function of neuromodulation by acetylcholine (ACh) and norepinephrine (NE) and afferent synaptic excitation. Indeed, the model exhibited four distinct thalamic rhythms (delta, sleep spindle, alpha and gamma oscillations) that span the physiological states corresponding to different arousal levels from deep sleep to focused attention. Our simulation results indicate that generation of these distinct thalamic oscillations is a result of both intrinsic oscillatory cellular properties and specific network connectivity patterns. We then systematically varied the ACh/NE and input levels to generate a complete map of the different oscillatory states and their transitions. Lastly, we applied periodic stimulation to the thalamic network and found that entrainment of thalamic oscillations is highly state-dependent. Our results support the hypothesis that ACh/NE modulation and afferent excitation define thalamic oscillatory states and their response to brain stimulation. Our model proposes a broader and more central role of the thalamus in the genesis of multiple distinct thalamo-cortical rhythms than previously assumed. Computational modeling has served as an important tool to understand the cellular and circuit mechanisms of thalamocortical oscillations. However, most of the existing thalamic models focus on only one particular oscillatory pattern such as alpha or spindle oscillations. Thus, it remains unclear whether the same thalamic circuitry on its own could generate all major oscillatory patterns and if so what mechanisms underlie the transition among these distinct states. 
Here we present a unified model of the thalamus that is capable of independently generating multiple distinct oscillations corresponding to different physiological conditions. We then mapped out the different thalamic oscillations by systematically varying the ACh/NE modulatory level and the input level. Our simulation results offer a mechanistic understanding of thalamic oscillations and support the long-standing notion of a thalamic “pacemaker”. They also suggest that pathological oscillations associated with neurological and psychiatric disorders may stem from malfunction of the thalamic circuitry.
Affiliation(s)
- Guoshi Li: Department of Psychiatry, University of North Carolina at Chapel Hill, Chapel Hill, NC, United States of America
- Craig S. Henriquez: Department of Biomedical Engineering, Duke University, Durham, NC, United States of America
- Flavio Fröhlich: Department of Psychiatry, University of North Carolina at Chapel Hill, Chapel Hill, NC, United States of America; Department of Biomedical Engineering, University of North Carolina at Chapel Hill, Chapel Hill, NC, United States of America; Department of Cell Biology and Physiology, University of North Carolina at Chapel Hill, Chapel Hill, NC, United States of America; Department of Neurology, University of North Carolina at Chapel Hill, Chapel Hill, NC, United States of America; Neuroscience Center, University of North Carolina at Chapel Hill, Chapel Hill, NC, United States of America
13
Cluster-based analysis improves predictive validity of spike-triggered receptive field estimates. PLoS One 2017; 12:e0183914. [PMID: 28877194] [PMCID: PMC5587334] [DOI: 10.1371/journal.pone.0183914]
Abstract
Spectrotemporal receptive field (STRF) characterization is a central goal of auditory physiology. STRFs are often approximated by the spike-triggered average (STA), which reflects the average stimulus preceding a spike. In many cases, the raw STA is subjected to a threshold defined by gain values expected by chance. However, such correction methods have not been universally adopted, and the consequences of specific gain-thresholding approaches have not been investigated systematically. Here, we evaluate two classes of statistical correction techniques, using the resulting STRF estimates to predict responses to a novel validation stimulus. The first, more traditional technique eliminated STRF pixels (time-frequency bins) with gain values expected by chance. This correction method yielded significant increases in prediction accuracy, including when the threshold setting was optimized for each unit. The second technique was a two-step thresholding procedure wherein clusters of contiguous pixels surviving an initial gain threshold were then subjected to a cluster mass threshold based on summed pixel values. This approach significantly improved upon even the best gain-thresholding techniques. Additional analyses suggested that allowing threshold settings to vary independently for excitatory and inhibitory subfields of the STRF resulted in only marginal additional gains, at best. In summary, augmenting reverse correlation techniques with principled statistical correction choices increased prediction accuracy by over 80% for multi-unit STRFs and by over 40% for single-unit STRFs, furthering the interpretational relevance of the recovered spectrotemporal filters for auditory systems analysis.
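The two-step correction described here can be sketched as: compute the raw STA, zero out pixels below a chance-level gain threshold, then keep only contiguous clusters whose summed mass exceeds a shuffle-derived cluster-mass threshold. A minimal illustration on synthetic arrays (function names and the quantile-based thresholds are assumptions; the paper's exact shuffle procedure may differ):

```python
import numpy as np
from scipy import ndimage

def spike_triggered_average(stim, spike_bins, n_lags):
    """Average stimulus window (n_freq x n_lags) preceding each spike.
    stim: (n_freq, n_time) spectrogram; spike_bins: spike time-bin indices."""
    windows = [stim[:, t - n_lags:t] for t in spike_bins if t >= n_lags]
    return np.mean(windows, axis=0)

def cluster_mass_threshold(sta, null_stas, pixel_alpha=0.05, cluster_alpha=0.05):
    """Two-step correction: per-pixel gain threshold, then cluster-mass
    threshold. null_stas: (n_shuffles, n_freq, n_lags) STAs from shuffled spikes."""
    # per-pixel threshold from the null distribution of absolute gains
    pix_thresh = np.quantile(np.abs(null_stas), 1 - pixel_alpha)
    mask = np.abs(sta) > pix_thresh
    labels, n_clusters = ndimage.label(mask)   # contiguous suprathreshold clusters
    # null distribution of the largest cluster mass per shuffle
    null_masses = []
    for null in null_stas:
        nlab, nn = ndimage.label(np.abs(null) > pix_thresh)
        masses = ndimage.sum(np.abs(null), nlab, range(1, nn + 1)) if nn else [0.0]
        null_masses.append(np.max(masses))
    mass_thresh = np.quantile(null_masses, 1 - cluster_alpha)
    out = np.zeros_like(sta)
    for k in range(1, n_clusters + 1):
        cluster = labels == k
        if np.abs(sta[cluster]).sum() > mass_thresh:
            out[cluster] = sta[cluster]        # keep surviving clusters only
    return out
```

A strong, spatially contiguous STRF subfield survives both thresholds, while isolated pixels that pass the gain threshold by chance are discarded.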
14
Yamada Y, Matsumoto Y, Okahara N, Mikoshiba K. Chronic multiscale imaging of neuronal activity in the awake common marmoset. Sci Rep 2016; 6:35722. [PMID: 27786241] [PMCID: PMC5082371] [DOI: 10.1038/srep35722]
Abstract
We report a methodology to chronically record in vivo brain activity in the awake common marmoset. Over a month, stable imaging revealed macroscopic sensory maps in the somatosensory cortex and their underlying cellular activity with a high signal-to-noise ratio in the awake but not anesthetized state. This methodology is applicable to other brain regions, and will be useful for studying cortical activity and plasticity in marmosets during learning, development, and in neurological disorders.
Affiliation(s)
- Yoshiyuki Yamada: Laboratory for Developmental Neurobiology, Brain Science Institute (BSI), RIKEN, Wako, Saitama, Japan; Central Institute for Experimental Animals, Kawasaki, Kanagawa, Japan; Japan Science and Technology Agency, International Cooperative Research Project and Solution-Oriented Research for Science and Technology, Calcium Oscillation Project, Wako, Saitama, Japan
- Yoshifumi Matsumoto: Laboratory for Developmental Neurobiology, Brain Science Institute (BSI), RIKEN, Wako, Saitama, Japan; Central Institute for Experimental Animals, Kawasaki, Kanagawa, Japan
- Norio Okahara: Central Institute for Experimental Animals, Kawasaki, Kanagawa, Japan
- Katsuhiko Mikoshiba: Laboratory for Developmental Neurobiology, Brain Science Institute (BSI), RIKEN, Wako, Saitama, Japan; Central Institute for Experimental Animals, Kawasaki, Kanagawa, Japan; Japan Science and Technology Agency, International Cooperative Research Project and Solution-Oriented Research for Science and Technology, Calcium Oscillation Project, Wako, Saitama, Japan
15
Keating P, Rosenior-Patten O, Dahmen JC, Bell O, King AJ. Behavioral training promotes multiple adaptive processes following acute hearing loss. eLife 2016; 5:e12264. [PMID: 27008181] [PMCID: PMC4841776] [DOI: 10.7554/elife.12264]
Abstract
The brain possesses a remarkable capacity to compensate for changes in inputs resulting from a range of sensory impairments. Developmental studies of sound localization have shown that adaptation to asymmetric hearing loss can be achieved either by reinterpreting altered spatial cues or by relying more on those cues that remain intact. Adaptation to monaural deprivation in adulthood is also possible, but appears to lack such flexibility. Here we show, however, that appropriate behavioral training enables monaurally-deprived adult humans to exploit both of these adaptive processes. Moreover, cortical recordings in ferrets reared with asymmetric hearing loss suggest that these forms of plasticity have distinct neural substrates. An ability to adapt to asymmetric hearing loss using multiple adaptive processes is therefore shared by different species and may persist throughout the lifespan. This highlights the fundamental flexibility of neural systems, and may also point toward novel therapeutic strategies for treating sensory disorders. DOI:http://dx.doi.org/10.7554/eLife.12264.001 The brain normally compares the timing and intensity of the sounds that reach each ear to work out a sound’s origin. Hearing loss in one ear disrupts these between-ear comparisons, which causes listeners to make errors in this process. With time, however, the brain adapts to this hearing loss and once again learns to localize sounds accurately. Previous research has shown that young ferrets can adapt to hearing loss in one ear in two distinct ways. The ferrets either learn to remap the altered between-ear comparisons, caused by losing hearing in one ear, onto their new locations. Alternatively, the ferrets learn to locate sounds using only their good ear. Each strategy is suited to localizing different types of sound, but it was not known how this adaptive flexibility unfolds over time, whether it persists throughout the lifespan, or whether it is shared by other species. 
Now, Keating et al. show that, with some coaching, adult humans also adapt to temporary loss of hearing in one ear using the same two strategies. In the experiments, adult humans were trained to localize different kinds of sounds while wearing an earplug in one ear. These sounds were presented from 12 loudspeakers arranged in a horizontal circle around the person being tested. The experiments showed that short periods of behavioral training enable adult humans to adapt to a hearing loss in one ear and recover their ability to localize sounds. Just like the ferrets, adult humans learned to correctly associate altered between-ear comparisons with their new locations and to rely more on the cues from the unplugged ear to locate sound. Which of these adaptive strategies the participants used depended on the frequencies present in the sounds. The cells in the ear and brain that detect and make sense of sound typically respond best to a limited range of frequencies, and so this suggests that each strategy relies on a distinct set of cells. Keating et al. confirmed in ferrets that different brain cells are indeed used to bring about adaptation to hearing loss in one ear using each strategy. These insights may aid the development of new therapies to treat hearing loss. DOI:http://dx.doi.org/10.7554/eLife.12264.002
Affiliation(s)
- Peter Keating: Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
- Onayomi Rosenior-Patten: Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
- Johannes C Dahmen: Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
- Olivia Bell: Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
- Andrew J King: Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
16
Complex pitch perception mechanisms are shared by humans and a New World monkey. Proc Natl Acad Sci U S A 2015; 113:781-6. [PMID: 26712015] [DOI: 10.1073/pnas.1516120113]
Abstract
The perception of the pitch of harmonic complex sounds is a crucial function of human audition, especially in music and speech processing. Whether the underlying mechanisms of pitch perception are unique to humans, however, is unknown. Based on estimates of frequency resolution at the level of the auditory periphery, psychoacoustic studies in humans have revealed several primary features of central pitch mechanisms. It has been shown that (i) pitch strength of a harmonic tone is dominated by resolved harmonics; (ii) pitch of resolved harmonics is sensitive to the quality of spectral harmonicity; and (iii) pitch of unresolved harmonics is sensitive to the salience of temporal envelope cues. Here we show, for a standard musical tuning fundamental frequency of 440 Hz, that the common marmoset (Callithrix jacchus), a New World monkey with a hearing range similar to that of humans, exhibits all of the primary features of central pitch mechanisms demonstrated in humans. Thus, marmosets and humans may share similar pitch perception mechanisms, suggesting that these mechanisms may have emerged early in primate evolution.
17

18
Resnik J, Paz R. Fear generalization in the primate amygdala. Nat Neurosci 2014; 18:188-90. [PMID: 25531573] [DOI: 10.1038/nn.3900]
Abstract
Broad generalization of negative memories is a potential etiology for anxiety disorders, yet the underlying mechanisms remain unknown. We developed a non-human primate model that replicates behavioral observations in humans and identifies specific changes in tuning properties of amygdala neurons: the width of auditory tuning increases with the distance of its center from the conditioned stimulus. This center-width relationship can account for better detection and at the same time explain the wide stimulus generalization.
Affiliation(s)
- Jennifer Resnik: Department of Neurobiology, Weizmann Institute of Science, Rehovot, Israel
- Rony Paz: Department of Neurobiology, Weizmann Institute of Science, Rehovot, Israel
19
Akram S, Englitz B, Elhilali M, Simon JZ, Shamma SA. Investigating the neural correlates of a streaming percept in an informational-masking paradigm. PLoS One 2014; 9:e114427. [PMID: 25490720] [PMCID: PMC4260833] [DOI: 10.1371/journal.pone.0114427]
Abstract
Humans routinely segregate a complex acoustic scene into different auditory streams, through the extraction of bottom-up perceptual cues and the use of top-down selective attention. To determine the neural mechanisms underlying this process, neural responses obtained through magnetoencephalography (MEG) were correlated with behavioral performance in the context of an informational masking paradigm. In half the trials, subjects were asked to detect frequency deviants in a target stream, consisting of a rhythmic tone sequence, embedded in a separate masker stream composed of a random cloud of tones. In the other half of the trials, subjects were exposed to identical stimuli but asked to perform a different task—to detect tone-length changes in the random cloud of tones. In order to verify that the normalized neural response to the target sequence served as an indicator of streaming, we correlated neural responses with behavioral performance under a variety of stimulus parameters (target tone rate, target tone frequency, and the “protection zone”, that is, the spectral area with no tones around the target frequency) and attentional states (changing task objective while maintaining the same stimuli). In all conditions that facilitated target/masker streaming behaviorally, MEG normalized neural responses also changed in a manner consistent with the behavior. Thus, attending to the target stream caused a significant increase in power and phase coherence of the responses in recording channels correlated with an increase in the behavioral performance of the listeners. Normalized neural target responses also increased as the protection zone widened and as the frequency of the target tones increased. Finally, when the target sequence rate increased, the buildup of the normalized neural responses was significantly faster, mirroring the accelerated buildup of the streaming percepts. 
Our data thus support close links between the perceptual and neural consequences of auditory stream segregation.
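Phase coherence of the kind reported here is often quantified as inter-trial coherence (ITC): the magnitude of the mean unit-length phase vector across trials at a given frequency. A minimal sketch (the single-frequency projection below is an illustrative stand-in for the authors' MEG analysis pipeline, not their code):

```python
import numpy as np

def intertrial_phase_coherence(trials, fs, freq):
    """ITC at one frequency: project each trial onto a complex exponential,
    normalize to unit phase vectors, and take the magnitude of their mean.
    Values range from ~0 (random phase) to 1 (perfectly phase-locked)."""
    n_samples = trials.shape[1]
    t = np.arange(n_samples) / fs
    basis = np.exp(-2j * np.pi * freq * t)
    coeffs = trials @ basis                  # per-trial Fourier coefficient
    return np.abs(np.mean(coeffs / np.abs(coeffs)))
```

Stronger entrainment to the rhythmic target stream shows up as higher ITC at the target rate; attention-driven increases in streaming would raise this value.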
Affiliation(s)
- Sahar Akram: The Institute for Systems Research, University of Maryland, College Park, Maryland, United States of America; Department of Electrical and Computer Engineering, University of Maryland, College Park, Maryland, United States of America
- Bernhard Englitz: The Institute for Systems Research, University of Maryland, College Park, Maryland, United States of America; Department of Electrical and Computer Engineering, University of Maryland, College Park, Maryland, United States of America; Département d'Etudes Cognitives, Ecole normale supérieure, Paris, France; Department of Neurophysiology, Donders Institute for Brain, Cognition and Behaviour, Nijmegen, The Netherlands
- Mounya Elhilali: Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, Maryland, United States of America
- Jonathan Z. Simon: Department of Electrical and Computer Engineering, University of Maryland, College Park, Maryland, United States of America; Department of Biology, University of Maryland, College Park, Maryland, United States of America
- Shihab A. Shamma: The Institute for Systems Research, University of Maryland, College Park, Maryland, United States of America; Department of Electrical and Computer Engineering, University of Maryland, College Park, Maryland, United States of America; Département d'Etudes Cognitives, Ecole normale supérieure, Paris, France
20
Orton LD, Rees A. Intercollicular commissural connections refine the representation of sound frequency and level in the auditory midbrain. eLife 2014; 3. [PMID: 25406067] [PMCID: PMC4235006] [DOI: 10.7554/elife.03764]
Abstract
Connections unifying hemispheric sensory representations of vision and touch occur in cortex, but for hearing, commissural connections earlier in the pathway may be important. The brainstem auditory pathways course bilaterally to the inferior colliculi (ICs). Each IC represents one side of auditory space but they are interconnected by a commissure. By deactivating one IC in guinea pig with cooling or microdialysis of procaine, and recording neural activity to sound in the other, we found that commissural input influences fundamental aspects of auditory processing. The areas of nonV frequency response areas (FRAs) were modulated, but the areas of almost all V-shaped FRAs were not. The supra-threshold sensitivity of rate level functions decreased during deactivation and the ability to signal changes in sound level was decremented. This commissural enhancement suggests the ICs should be viewed as a single entity in which the representation of sound in each is governed by the other. DOI:http://dx.doi.org/10.7554/eLife.03764.001 The bilateral arrangement of our eyes and ears enables us to receive information from both sides of our body. This information is conveyed via various sensory pathways that take different routes through the brain to culminate in the cerebral hemispheres. The information is then processed in the brain's outer layer, which is called the cortex. In the visual system, information from both eyes is kept separate until it reaches the cortex. A similar arrangement exists for touch. However, hearing is unusual among our senses in that sounds undergo much more processing in the brainstem, which is located at the base of the brain, than other types of stimuli. Orton and Rees now show that, in contrast to vision and touch, information about sounds occurring to our left or right is refined by interactions between the two sides of the midbrain. 
To test for sideward interactions between the two limbs of the auditory pathway, electrodes were lowered into the brains of anesthetized guinea pigs so that neuronal responses to tones could be recorded. The electrodes were placed in the region of the midbrain that contains two structures called the inferior colliculi (meaning ‘lower hills’ in Latin). Each inferior colliculus predominantly receives inputs from the opposite ear. However, recordings made in one colliculus when the other was deactivated revealed that one colliculus normally alters the response of the other. This shows that there is an important sideward interaction between the two halves of the auditory pathway in the midbrain that refines how fundamental aspects of sound, such as its frequency and intensity, are processed. This represents a marked departure from our previous understanding of auditory processing in the mammalian brain, and opens up new lines of investigation into the functioning of the auditory system in health and disease. DOI:http://dx.doi.org/10.7554/eLife.03764.002
Affiliation(s)
- Llwyd David Orton: Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, United Kingdom
- Adrian Rees: Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, United Kingdom
21
Online stimulus optimization rapidly reveals multidimensional selectivity in auditory cortical neurons. J Neurosci 2014; 34:8963-75. [PMID: 24990917] [DOI: 10.1523/jneurosci.0260-14.2014]
Abstract
Neurons in sensory brain regions shape our perception of the surrounding environment through two parallel operations: decomposition and integration. For example, auditory neurons decompose sounds by separately encoding their frequency, temporal modulation, intensity, and spatial location. Neurons also integrate across these various features to support a unified perceptual gestalt of an auditory object. At higher levels of a sensory pathway, neurons may select for a restricted region of feature space defined by the intersection of multiple, independent stimulus dimensions. To further characterize how auditory cortical neurons decompose and integrate multiple facets of an isolated sound, we developed an automated procedure that manipulated five fundamental acoustic properties in real time based on single-unit feedback in awake mice. Within several minutes, the online approach converged on regions of the multidimensional stimulus manifold that reliably drove neurons at significantly higher rates than predefined stimuli. Optimized stimuli were cross-validated against pure tone receptive fields and spectrotemporal receptive field estimates in the inferior colliculus and primary auditory cortex. We observed, from midbrain to cortex, increases in both level invariance and frequency selectivity, which may underlie equivalent sparseness of responses in the two areas. We found that onset and steady-state spike rates increased proportionately as the stimulus was tailored to the multidimensional receptive field. By separately evaluating the amount of leverage each sound feature exerted on the overall firing rate, these findings reveal interdependencies between stimulus features as well as hierarchical shifts in selectivity and invariance that may go unnoticed with traditional approaches.
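A closed-loop search of this kind can be illustrated with greedy coordinate ascent over a small stimulus-parameter vector, using measured firing rate as the objective. This is a deliberately simplified sketch, not the authors' procedure; `rate_fn` stands in for stimulus presentation plus spike counting, and the grid search per dimension is an assumption:

```python
import numpy as np

def optimize_stimulus(rate_fn, init_params, param_grids, n_sweeps=3):
    """Greedy coordinate ascent over stimulus dimensions.
    rate_fn(params) -> measured firing rate (any callable);
    param_grids: candidate values to try for each dimension."""
    params = list(init_params)
    for _ in range(n_sweeps):
        for d, grid in enumerate(param_grids):
            rates = []
            for v in grid:
                trial = params.copy()
                trial[d] = v
                rates.append(rate_fn(trial))   # "present" stimulus, record rate
            params[d] = grid[int(np.argmax(rates))]
    return params, rate_fn(params)
```

Sweeping one dimension at a time converges quickly when the response surface is smooth, which matches the few-minute convergence the abstract describes; interactions between dimensions are what the repeated sweeps are for.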
22
Imaizumi K, Lee CC. Frequency transformation in the auditory lemniscal thalamocortical system. Front Neural Circuits 2014; 8:75. [PMID: 25071456] [PMCID: PMC4086294] [DOI: 10.3389/fncir.2014.00075]
Abstract
The auditory lemniscal thalamocortical (TC) pathway conveys information from the ventral division of the medial geniculate body to the primary auditory cortex (A1). Although their general topographic organization has been well characterized, functional transformations at the lemniscal TC synapse still remain incompletely codified, largely due to the need to integrate functional anatomical results with the variability observed across animal models and experimental techniques. In this review, we discuss these issues with classical approaches, such as in vivo extracellular recordings and tracer injections to physiologically identified areas in A1, and then compare these studies with modern approaches, such as in vivo two-photon calcium imaging, in vivo whole-cell recordings, optogenetic methods, and in vitro methods using slice preparations. A surprising finding from a comparison of classical and modern approaches is the similar degree of convergence from thalamic neurons to single A1 neurons and clusters of A1 neurons, although thalamic convergence onto single A1 neurons is more restricted, arising from areas within putative thalamic frequency laminae. These comparisons suggest that frequency convergence from thalamic input to A1 is functionally limited. Finally, we consider synaptic organization of TC projections and future directions for research.
Affiliation(s)
- Kazuo Imaizumi: Department of Comparative Biomedical Sciences, Louisiana State University School of Veterinary Medicine, Baton Rouge, LA, USA
- Charles C Lee: Department of Comparative Biomedical Sciences, Louisiana State University School of Veterinary Medicine, Baton Rouge, LA, USA
23
Micheyl C, Schrater PR, Oxenham AJ. Auditory frequency and intensity discrimination explained using a cortical population rate code. PLoS Comput Biol 2013; 9:e1003336. [PMID: 24244142] [PMCID: PMC3828126] [DOI: 10.1371/journal.pcbi.1003336]
Abstract
The nature of the neural codes for pitch and loudness, two basic auditory attributes, has been a key question in neuroscience for over a century. A currently widespread view is that sound intensity (subjectively, loudness) is encoded in spike rates, whereas sound frequency (subjectively, pitch) is encoded in precise spike timing. Here, using information-theoretic analyses, we show that the spike rates of a population of virtual neural units with frequency-tuning and spike-count correlation characteristics similar to those measured in the primary auditory cortex of primates, contain sufficient statistical information to account for the smallest frequency-discrimination thresholds measured in human listeners. The same population, and the same spike-rate code, can also account for the intensity-discrimination thresholds of humans. These results demonstrate the viability of a unified rate-based cortical population code for both sound frequency (pitch) and sound intensity (loudness), and thus suggest a resolution to a long-standing puzzle in auditory neuroscience.
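The information-theoretic logic can be illustrated with the standard Fisher-information bound for independent Poisson-spiking units with Gaussian log-frequency tuning: the smallest discriminable step scales as d'/√I. A hedged toy sketch (parameter choices are arbitrary; the paper's virtual units additionally model spike-count correlations, which this independent-unit toy ignores):

```python
import numpy as np

def fisher_information(freq, centers, width, gain):
    """Fisher information about log2-frequency from independent Poisson units
    with Gaussian tuning curves: I(x) = sum_i f_i'(x)^2 / f_i(x)."""
    x = np.log2(freq)
    rates = gain * np.exp(-0.5 * ((x - centers) / width) ** 2)
    drates = rates * (centers - x) / width ** 2     # derivative of rate w.r.t. x
    return np.sum(drates ** 2 / np.maximum(rates, 1e-12))

def discrimination_threshold(freq, centers, width, gain, d_prime=1.0):
    """Smallest discriminable log2-frequency step at criterion d'."""
    return d_prime / np.sqrt(fisher_information(freq, centers, width, gain))
```

Because Fisher information scales linearly with firing rate in this toy, doubling the gain shrinks the predicted threshold by √2, the kind of scaling argument used to test whether a rate code carries enough information for human thresholds.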
Affiliation(s)
- Christophe Micheyl: Department of Psychology, University of Minnesota, Minneapolis, Minnesota, United States of America
- Paul R. Schrater: Department of Psychology, University of Minnesota, Minneapolis, Minnesota, United States of America; Department of Computer Science, University of Minnesota, Minneapolis, Minnesota, United States of America
- Andrew J. Oxenham: Department of Psychology, University of Minnesota, Minneapolis, Minnesota, United States of America; Department of Otolaryngology, University of Minnesota, Minneapolis, Minnesota, United States of America
24
Single neuron and population coding of natural sounds in auditory cortex. Curr Opin Neurobiol 2013; 24:103-10. [PMID: 24492086] [DOI: 10.1016/j.conb.2013.09.007]
Abstract
The auditory system drives behavior using information extracted from sounds. Early in the auditory hierarchy, circuits are highly specialized for detecting basic sound features. However, already at the level of the auditory cortex the functional organization of the circuits and the underlying coding principles become different. Here, we review some recent progress in our understanding of single neuron and population coding in primary auditory cortex, focusing on natural sounds. We discuss possible mechanisms explaining why single neuron responses to simple sounds cannot predict responses to natural stimuli. We describe recent work suggesting that structural features like local subnetworks rather than smoothly mapped tonotopy are essential components of population coding. Finally, we suggest a synthesis of how single neurons and subnetworks may be involved in coding natural sounds.
25
Neural representation of harmonic complex tones in primary auditory cortex of the awake monkey. J Neurosci 2013; 33:10312-23. [PMID: 23785145] [DOI: 10.1523/jneurosci.0020-13.2013]
Abstract
Many natural sounds are periodic and consist of frequencies (harmonics) that are integer multiples of a common fundamental frequency (F0). Such harmonic complex tones (HCTs) evoke a pitch corresponding to their F0, which plays a key role in the perception of speech and music. "Pitch-selective" neurons have been identified in non-primary auditory cortex of marmoset monkeys. Noninvasive studies point to a putative "pitch center" located in a homologous cortical region in humans. It remains unclear whether there is sufficient spectral and temporal information available at the level of primary auditory cortex (A1) to enable reliable pitch extraction in non-primary auditory cortex. Here we evaluated multiunit responses to HCTs in A1 of awake macaques using a stimulus design employed in auditory nerve studies of pitch encoding. The F0 of the HCTs was varied in small increments, such that harmonics of the HCTs fell either on the peak or on the sides of the neuronal pure tone tuning functions. Resultant response-amplitude-versus-harmonic-number functions ("rate-place profiles") displayed a periodic pattern reflecting the neuronal representation of individual HCT harmonics. Consistent with psychoacoustic findings in humans, lower harmonics were better resolved in rate-place profiles than higher harmonics. Lower F0s were also temporally represented by neuronal phase-locking to the periodic waveform of the HCTs. Findings indicate that population responses in A1 contain sufficient spectral and temporal information for extracting the pitch of HCTs by neurons in downstream cortical areas that receive their input from A1.
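The rate-place logic described here can be sketched in a toy simulation (hypothetical Gaussian tuning, not the recorded multiunit data): a unit responds to an HCT with the summed tuning-curve weights of the complex's harmonics, and stepping F0 so that harmonics fall on the peak (integer harmonic number) or the flanks (half-integer) of the tuning curve produces a periodic response profile whose oscillation fades where neighboring harmonics are no longer resolved.

```python
import numpy as np

def rate_place_profile(bf, bw_oct, harm_numbers):
    """Toy rate-place profile: for each 'harmonic number' n, set F0 = bf/n
    so that the n-th harmonic sits at the unit's best frequency (BF), then
    sum Gaussian (log-frequency) tuning weights over all HCT harmonics."""
    resp = []
    for n in harm_numbers:
        f0 = bf / n
        harmonics = f0 * np.arange(1, 41)    # 40-harmonic complex tone
        d_oct = np.log2(harmonics / bf)      # distance from BF in octaves
        resp.append(np.exp(-0.5 * (d_oct / bw_oct) ** 2).sum())
    return np.array(resp)

# Fine F0 steps: fractional harmonic numbers place harmonics on the peak
# (integer n) or between peaks (half-integer n) of the tuning function.
ns = np.arange(1.0, 12.01, 0.25)
profile = rate_place_profile(bf=4000.0, bw_oct=0.2, harm_numbers=ns)

# Low harmonics are resolved (strong peak-vs-flank modulation); the
# modulation shrinks for high harmonic numbers, as in the recorded profiles.
low_mod = profile[ns == 2.0][0] - profile[ns == 2.5][0]
high_mod = abs(profile[ns == 10.0][0] - profile[ns == 10.5][0])
print(low_mod, high_mod)
```

The fading modulation falls out of the arithmetic: at harmonic number n the spacing between neighboring harmonics is 1/n of BF, so once that spacing is small relative to the tuning bandwidth, several harmonics fall under the tuning curve at every F0 and the profile flattens.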
26
The role of harmonic resolvability in pitch perception in a vocal nonhuman primate, the common marmoset (Callithrix jacchus). J Neurosci 2013; 33:9161-8. [PMID: 23699526] [DOI: 10.1523/jneurosci.0066-13.2013]
Abstract
Pitch is one of the most fundamental percepts in the auditory system and can be extracted using either spectral or temporal information in an acoustic signal. Although pitch perception has been extensively studied in human subjects, it is far less clear how nonhuman primates perceive pitch. We have addressed this question in a series of behavioral studies in which marmosets, a vocal nonhuman primate species, were trained to discriminate complex harmonic tones differing in either spectral (fundamental frequency [f0]) or temporal envelope (repetition rate) cues. We found that marmosets used temporal envelope information to discriminate pitch for acoustic stimuli with higher-order harmonics and lower f0 values and spectral information for acoustic stimuli with lower-order harmonics and higher f0 values. We further measured frequency resolution in marmosets using a psychophysical task in which pure tone thresholds were measured as a function of notched noise masker bandwidth. Results show that only the first four harmonics are resolved at low f0 values and up to 16 harmonics are resolved at higher f0 values. Resolvability in marmosets is different from that in humans, where the first five to nine harmonics are consistently resolved across most f0 values, and is likely the result of a smaller marmoset cochlea. In sum, these results show that marmosets use two mechanisms to extract pitch (harmonic templates [spectral] for resolved harmonics, and envelope extraction [temporal] for unresolved harmonics) and that species differences in stimulus resolvability need to be taken into account when investigating and comparing mechanisms of pitch perception across animals.
27
Bartlett EL. The organization and physiology of the auditory thalamus and its role in processing acoustic features important for speech perception. Brain Lang 2013; 126:29-48. [PMID: 23725661] [PMCID: PMC3707394] [DOI: 10.1016/j.bandl.2013.03.003]
Abstract
The auditory thalamus, or medial geniculate body (MGB), is the primary sensory input to auditory cortex. Therefore, it plays a critical role in the complex auditory processing necessary for robust speech perception. This review will describe the functional organization of the thalamus as it relates to processing acoustic features important for speech perception, focusing on thalamic nuclei that relate to auditory representations of language sounds. The MGB can be divided into three main subdivisions, the ventral, dorsal, and medial subdivisions, each with different connectivity, auditory response properties, neuronal properties, and synaptic properties. Together, the MGB subdivisions actively and dynamically shape complex auditory processing and form ongoing communication loops with auditory cortex and subcortical structures.
28
De Martino F, Moerel M, van de Moortele PF, Ugurbil K, Goebel R, Yacoub E, Formisano E. Spatial organization of frequency preference and selectivity in the human inferior colliculus. Nat Commun 2013; 4:1386. [PMID: 23340426] [PMCID: PMC3556928] [DOI: 10.1038/ncomms2379]
Abstract
To date, the functional organization of human auditory sub-cortical structures can only be inferred from animal models. Here we use high-resolution functional MRI at ultra-high magnetic fields (7 Tesla) to map the organization of spectral responses in the human inferior colliculus (hIC), a sub-cortical structure fundamental for sound processing. We reveal a tonotopic map with a spatial gradient of preferred frequencies approximately oriented from dorso-lateral (low frequencies) to ventro-medial (high frequencies) locations. Furthermore, we observe a spatial organization of spectral selectivity (tuning) of fMRI responses in the hIC. Along isofrequency contours, fMRI-tuning is narrowest in central locations and broadest in the surrounding regions. Finally, by comparing sub-cortical and cortical auditory areas we show that fMRI-tuning is narrower in hIC than on the cortical surface. Our findings pave the way to non-invasive investigations of sound processing in human sub-cortical nuclei and to studying the interplay between sub-cortical and cortical neuronal populations.
Affiliation(s)
- Federico De Martino
- Faculty of Psychology and Neuroscience, Department of Cognitive Neurosciences, Maastricht University, Universiteitssingel 40, Maastricht 6229ER, The Netherlands.
29
Straka MM, Schendel D, Lim HH. Neural integration and enhancement from the inferior colliculus up to different layers of auditory cortex. J Neurophysiol 2013; 110:1009-20. [PMID: 23719210] [DOI: 10.1152/jn.00022.2013]
Abstract
While the cochlear implant has successfully restored hearing to many deaf patients, it cannot benefit those without a functional auditory nerve or an implantable cochlea. As an alternative, the auditory midbrain implant (AMI) has been developed and implanted into deaf patients. Consisting of a single-shank array, the AMI is designed for stimulation along the tonotopic gradient of the inferior colliculus (ICC). Although the AMI can provide frequency cues, it appears to insufficiently transmit temporal cues for speech understanding because repeated stimulation of a single site causes strong suppressive and refractory effects. Applying the electrical stimulation to at least two sites within an isofrequency lamina can circumvent these refractory processes. Moreover, coactivation with short intersite delays (<5 ms) can elicit cortical activation which is enhanced beyond the summation of activity induced by the individual sites. The goal of our study was to further investigate the role of the auditory cortex in this enhancement effect. In guinea pigs, we electrically stimulated two locations within an ICC lamina or along different laminae with varying interpulse intervals (0-10 ms) and recorded activity in different locations and layers of primary auditory cortex (A1). Our findings reveal a neural mechanism that integrates activity only from neurons located within the same ICC lamina for short spiking intervals (<6 ms). This mechanism leads to enhanced activity into layers III-V of A1 that is further magnified in supragranular layers. This integration mechanism may contribute to perceptual coding of different sound features that are relevant for improving AMI performance.
Affiliation(s)
- Malgorzata M Straka
- Department of Biomedical Engineering, University of Minnesota, Twin Cities, Minneapolis, Minnesota, USA.
30
Herrmann B, Henry MJ, Obleser J. Frequency-specific adaptation in human auditory cortex depends on the spectral variance in the acoustic stimulation. J Neurophysiol 2013; 109:2086-96. [DOI: 10.1152/jn.00907.2012]
Abstract
In auditory cortex, activation and subsequent adaptation are strongest for regions responding best to a stimulated tone frequency and weaker for regions responding best to other frequencies. Previous attempts to characterize the spread of neural adaptation in humans investigated the auditory cortex N1 component of the event-related potentials. Importantly, however, more recent studies in animals show that neural response properties are not independent of the stimulation context. To link these findings in animals to human scalp potentials, we investigated whether contextual factors of the acoustic stimulation, namely, spectral variance, affect the spread of neural adaptation. Electroencephalograms were recorded while human participants listened to random tone sequences varying in spectral variance (narrow vs. wide). Spread of adaptation was investigated by modeling single-trial neural adaptation and subsequent recovery based on the spectro-temporal stimulation history. Frequency-specific neural responses were largest on the N1 component, and the modeled neural adaptation indices were strongly predictive of trial-by-trial amplitude variations. Yet the spread of adaptation varied depending on the spectral variance in the stimulation, such that adaptation spread was broadened for tone sequences with wide spectral variance. Thus the present findings reveal context-dependent auditory cortex adaptation and point toward a flexibly adjusting auditory system that changes its response properties with the spectral requirements of the acoustic environment.
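The single-trial modeling idea can be sketched as follows (parameters and functional forms are illustrative assumptions, not the authors' fitted model): each presented tone leaves an adaptation trace that recovers exponentially in time and spreads as a Gaussian over log-frequency distance, so the predicted response to the current tone shrinks with the summed traces of preceding tones; widening the spread parameter mimics the broadened adaptation found for wide-variance sequences.

```python
import numpy as np

def adaptation_indices(freqs, times, tau=2.0, spread_oct=0.5):
    """Single-trial adaptation index for each tone in a sequence:
    sum over preceding tones of an exponential recovery term in time
    (time constant tau, in s) times a Gaussian in log2-frequency
    distance (co-tuned populations adapt the current response most)."""
    logf = np.log2(freqs)
    idx = np.zeros(len(freqs))
    for i in range(1, len(freqs)):
        dt = times[i] - times[:i]
        df = logf[i] - logf[:i]
        idx[i] = np.sum(np.exp(-dt / tau) *
                        np.exp(-0.5 * (df / spread_oct) ** 2))
    return idx

# Random tone sequence with wide spectral variance, one tone every 0.5 s
rng = np.random.default_rng(1)
freqs = 1000.0 * 2 ** rng.uniform(-1, 1, 200)
times = 0.5 * np.arange(200)

adapt = adaptation_indices(freqs, times)
predicted_n1 = 1.0 / (1.0 + adapt)   # more adaptation -> smaller N1 amplitude
```

In a fitting setting, `spread_oct` would be estimated separately for narrow- and wide-variance sequences by regressing `predicted_n1` against trial-by-trial N1 amplitudes; the paper's result corresponds to a larger fitted spread for the wide-variance condition.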
Affiliation(s)
- Björn Herrmann
- Max Planck Research Group “Auditory Cognition,” Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Molly J. Henry
- Max Planck Research Group “Auditory Cognition,” Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Jonas Obleser
- Max Planck Research Group “Auditory Cognition,” Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
31
Tuning in to sound: frequency-selective attentional filter in human primary auditory cortex. J Neurosci 2013; 33:1858-63. [PMID: 23365225] [DOI: 10.1523/jneurosci.4405-12.2013]
Abstract
Cocktail parties, busy streets, and other noisy environments pose a difficult challenge to the auditory system: how to focus attention on selected sounds while ignoring others? Neurons of primary auditory cortex, many of which are sharply tuned to sound frequency, could help solve this problem by filtering selected sound information based on frequency-content. To investigate whether this occurs, we used high-resolution fMRI at 7 tesla to map the fine-scale frequency-tuning (1.5 mm isotropic resolution) of primary auditory areas A1 and R in six human participants. Then, in a selective attention experiment, participants heard low (250 Hz)- and high (4000 Hz)-frequency streams of tones presented at the same time (dual-stream) and were instructed to focus attention onto one stream versus the other, switching back and forth every 30 s. Attention to low-frequency tones enhanced neural responses within low-frequency-tuned voxels relative to high, and when attention switched the pattern quickly reversed. Thus, like a radio, human primary auditory cortex is able to tune into attended frequency channels and can switch channels on demand.
32
Venkataraman Y, Bartlett EL. Postnatal development of synaptic properties of the GABAergic projection from the inferior colliculus to the auditory thalamus. J Neurophysiol 2013; 109:2866-82. [PMID: 23536710] [DOI: 10.1152/jn.00021.2013]
Abstract
The development of auditory temporal processing is important for processing complex sounds as well as for acquiring reading and language skills. Neuronal properties and sound processing change dramatically in auditory cortex neurons after the onset of hearing. However, the development of the auditory thalamus or medial geniculate body (MGB) has not been well studied over this critical time window. Since synaptic inhibition has been shown to be crucial for auditory temporal processing, this study examined the development of a feedforward, GABAergic connection to the MGB from the inferior colliculus (IC), which is also the source of sensory glutamatergic inputs to the MGB. IC-MGB inhibition was studied using whole cell patch-clamp recordings from rat brain slices in current-clamp and voltage-clamp modes at three age groups: a prehearing group [postnatal day (P)7-P9], an immediate posthearing group (P15-P17), and a juvenile group (P22-P32) whose neuronal properties are largely mature. Membrane properties matured substantially across the ages studied. GABAA and GABAB inhibitory postsynaptic potentials were present at all ages and were similar in amplitude. Inhibitory postsynaptic potentials became faster to single shocks, showed less depression to train stimuli at 5 and 10 Hz, and were overall more efficacious in controlling excitability with age. Overall, IC-MGB inhibition becomes faster and more precise during a time period of rapid changes across the auditory system due to the codevelopment of membrane properties and synaptic properties.
Affiliation(s)
- Yamini Venkataraman
- Weldon School of Biomedical Engineering, Purdue University, West Lafayette, IN, USA
33
Abstract
Pitch, our perception of how high or low a sound is on a musical scale, is a fundamental perceptual attribute of sounds and is important for both music and speech. After more than a century of research, the exact mechanisms used by the auditory system to extract pitch are still being debated. Theoretically, pitch can be computed using either spectral or temporal acoustic features of a sound. We have investigated how cues derived from the temporal envelope and spectrum of an acoustic signal are used for pitch extraction in the common marmoset (Callithrix jacchus), a vocal primate species, by measuring pitch discrimination behaviorally and examining pitch-selective neuronal responses in auditory cortex. We find that pitch is extracted by marmosets using temporal envelope cues for lower pitch sounds composed of higher-order harmonics, whereas spectral cues are used for higher pitch sounds with lower-order harmonics. Our data support dual-pitch processing mechanisms, originally proposed by psychophysicists based on human studies, whereby pitch is extracted using a combination of temporal envelope and spectral cues.
34
Johnson LA, Della Santina CC, Wang X. Temporal bone characterization and cochlear implant feasibility in the common marmoset (Callithrix jacchus). Hear Res 2012; 290:37-44. [PMID: 22583919] [DOI: 10.1016/j.heares.2012.05.002]
Abstract
The marmoset (Callithrix jacchus) is a valuable non-human primate model for studying behavioral and neural mechanisms related to vocal communication. It is also well suited for investigating neural mechanisms related to cochlear implants. The purpose of this study was to characterize marmoset temporal bone anatomy and investigate the feasibility of implanting a multi-channel intracochlear electrode into the marmoset scala tympani. Micro computed tomography (microCT) was used to create high-resolution images of marmoset temporal bones. Cochlear fluid spaces, middle ear ossicles, semicircular canals and the surrounding temporal bone were reconstructed in three-dimensional space. Our results show that the marmoset cochlea is ∼16.5 mm in length and has ∼2.8 turns. The cross-sectional area of the scala tympani is greatest (∼0.8 mm(2)) at ∼1.75 mm from the base of the scala, reduces to ∼0.4 mm(2) at 5 mm from the base, and decreases at a constant rate for the remaining length. Interestingly, this length-area profile, when scaled 2.5 times, is similar to the scala tympani of the human cochlea. Given these dimensions, a compatible multi-channel implant electrode was identified. In a cadaveric specimen, this electrode was inserted ¾ turn into the scala tympani through a cochleostomy at ∼1 mm apical to the round window. The depth of the most apical electrode band was ∼8 mm. Our study provides detailed structural anatomy data for the middle and inner ear of the marmoset, and suggests the potential of the marmoset as a new non-human primate model for cochlear implant research.
Affiliation(s)
- Luke A Johnson
- Biomedical Engineering Dept., Johns Hopkins University, 412 Traylor Research Building, 720 Rutland Avenue, Baltimore, MD 21205, USA.
35
Abstract
The primary auditory cortex (PAC) is central to human auditory abilities, yet its location in the brain remains unclear. We measured the two largest tonotopic subfields of PAC (hA1 and hR) using high-resolution functional MRI at 7 T relative to the underlying anatomy of Heschl's gyrus (HG) in 10 individual human subjects. The data reveals a clear anatomical-functional relationship that, for the first time, indicates the location of PAC across the range of common morphological variants of HG (single gyri, partial duplications, and complete duplications). In 20/20 individual hemispheres, two primary mirror-symmetric tonotopic maps were clearly observed with gradients perpendicular to HG. PAC spanned both divisions of HG in cases of partial and complete duplications (11/20 hemispheres), not only the anterior division as commonly assumed. Specifically, the central union of the two primary maps (the hA1-R border) was consistently centered on the full Heschl's structure: on the gyral crown of single HGs and within the sulcal divide of duplicated HGs. The anatomical-functional variants of PAC appear to be part of a continuum, rather than distinct subtypes. These findings significantly revise HG as a marker for human PAC and suggest that tonotopic maps may have shaped HG during human evolution. Tonotopic mappings were based on only 16 min of fMRI data acquisition, so these methods can be used as an initial mapping step in future experiments designed to probe the function of specific auditory fields.
36
Frequency selectivity in Old-World monkeys corroborates sharp cochlear tuning in humans. Proc Natl Acad Sci U S A 2011; 108:17516-20. [PMID: 21987783] [DOI: 10.1073/pnas.1105867108]
Abstract
Frequency selectivity in the inner ear is fundamental to hearing and is traditionally thought to be similar across mammals. Although direct measurements are not possible in humans, estimates of frequency tuning based on noninvasive recordings of sound evoked from the cochlea (otoacoustic emissions) have suggested substantially sharper tuning in humans but remain controversial. We report measurements of frequency tuning in macaque monkeys, Old-World primates phylogenetically closer to humans than the laboratory animals often taken as models of human hearing (e.g., cats, guinea pigs, chinchillas). We find that measurements of tuning obtained directly from individual auditory-nerve fibers and indirectly using otoacoustic emissions both indicate that at characteristic frequencies above about 500 Hz, peripheral frequency selectivity in macaques is significantly sharper than in these common laboratory animals, matching that inferred for humans above 4-5 kHz. Compared with the macaque, the human otoacoustic estimates thus appear neither prohibitively sharp nor exceptional. Our results validate the use of otoacoustic emissions for noninvasive measurement of cochlear tuning and corroborate the finding of sharp tuning in humans. The results have important implications for understanding the mechanical and neural coding of sound in the human cochlea, and thus for developing strategies to compensate for the degradation of tuning in the hearing-impaired.