1
Shi K, Quass GL, Rogalla MM, Ford AN, Czarny JE, Apostolides PF. Population coding of time-varying sounds in the nonlemniscal inferior colliculus. J Neurophysiol 2024;131:842-864. PMID: 38505907. DOI: 10.1152/jn.00013.2024. Received 01/10/2024; revised 02/29/2024; accepted 03/15/2024.
Abstract
The inferior colliculus (IC) of the midbrain is important for complex sound processing, such as discriminating conspecific vocalizations and human speech. The IC's nonlemniscal, dorsal "shell" region is likely important for this process, as neurons in these layers project to higher-order thalamic nuclei that subsequently funnel acoustic signals to the amygdala and nonprimary auditory cortices, forebrain circuits important for vocalization coding in a variety of mammals, including humans. However, the extent to which shell IC neurons transmit acoustic features necessary to discern vocalizations is less clear, owing to the technical difficulty of recording from neurons in the IC's superficial layers via traditional approaches. Here, we use two-photon Ca2+ imaging in mice of either sex to test how shell IC neuron populations encode the rate and depth of amplitude modulation, important sound cues for speech perception. Most shell IC neurons were broadly tuned, with low neurometric discrimination of amplitude modulation rate; only a subset was highly selective to specific modulation rates. Nevertheless, a neural network classifier trained on fluorescence data from shell IC neuron populations accurately classified amplitude modulation rate, and decoding accuracy was only marginally reduced when highly tuned neurons were omitted from the training data. Rather, classifier accuracy increased monotonically with the modulation depth of the training data, such that classifiers trained on full-depth modulated sounds had median decoding errors of ∼0.2 octaves. Thus, shell IC neurons may transmit time-varying signals via a population code, with perhaps limited reliance on the discriminative capacity of any individual neuron.
NEW & NOTEWORTHY The IC's shell layers originate a "nonlemniscal" pathway important for perceiving vocalization sounds. However, prior studies suggest that individual shell IC neurons are broadly tuned and have high response thresholds, implying a limited reliability of efferent signals. Using Ca2+ imaging, we show that amplitude modulation is accurately represented in the population activity of shell IC neurons. Thus, downstream targets can read out sounds' temporal envelopes from distributed rate codes transmitted by populations of broadly tuned neurons.
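The population-code idea in this abstract — broadly tuned units that are individually poor discriminators but jointly support accurate readout — can be illustrated with a toy sketch. All tuning widths, neuron counts, and noise levels below are invented for illustration and are not taken from the study:

```python
import numpy as np

rng = np.random.default_rng(0)
rates = np.array([4, 8, 16, 32, 64])   # candidate AM rates (Hz), 1 octave apart
n_neurons = 200

# Broadly tuned units: wide Gaussian tuning over log2(AM rate), random centers.
centers = rng.uniform(np.log2(rates.min()), np.log2(rates.max()), n_neurons)
width = 2.0                            # tuning width in octaves (deliberately broad)

def pop_response(rate, noise=0.3):
    """One noisy population response vector to a given AM rate."""
    mean = np.exp(-0.5 * ((np.log2(rate) - centers) / width) ** 2)
    return mean + noise * rng.standard_normal(n_neurons)

# Templates: mean response per rate; decode test trials by nearest template.
templates = np.stack([np.mean([pop_response(r) for _ in range(50)], axis=0)
                      for r in rates])

def decode(resp):
    return rates[np.argmin(((templates - resp) ** 2).sum(axis=1))]

errs = [abs(np.log2(decode(pop_response(r))) - np.log2(r))
        for r in rates for _ in range(40)]
print(f"median decoding error: {np.median(errs):.2f} octaves")
```

Even though each simulated unit spans about two octaves of tuning, the pooled nearest-template readout recovers modulation rate with a small median error, in the spirit of the distributed-code interpretation above.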
Affiliation(s)
- Kaiwen Shi
- Department of Otolaryngology-Head & Neck Surgery, Kresge Hearing Research Institute, University of Michigan Medical School, Ann Arbor, Michigan, United States
- Gunnar L Quass
- Department of Otolaryngology-Head & Neck Surgery, Kresge Hearing Research Institute, University of Michigan Medical School, Ann Arbor, Michigan, United States
- Meike M Rogalla
- Department of Otolaryngology-Head & Neck Surgery, Kresge Hearing Research Institute, University of Michigan Medical School, Ann Arbor, Michigan, United States
- Alexander N Ford
- Department of Otolaryngology-Head & Neck Surgery, Kresge Hearing Research Institute, University of Michigan Medical School, Ann Arbor, Michigan, United States
- Jordyn E Czarny
- Department of Otolaryngology-Head & Neck Surgery, Kresge Hearing Research Institute, University of Michigan Medical School, Ann Arbor, Michigan, United States
- Pierre F Apostolides
- Department of Otolaryngology-Head & Neck Surgery, Kresge Hearing Research Institute, University of Michigan Medical School, Ann Arbor, Michigan, United States
- Department of Molecular & Integrative Physiology, University of Michigan Medical School, Ann Arbor, Michigan, United States
2
Mittelstadt JK, Shilling-Scrivo KV, Kanold PO. Long-term training alters response dynamics in the aging auditory cortex. Hear Res 2024;444:108965. PMID: 38364511. PMCID: PMC11186583. DOI: 10.1016/j.heares.2024.108965. Received 11/27/2023; revised 01/16/2024; accepted 01/20/2024.
Abstract
Age-related auditory dysfunction, presbycusis, is caused in part by functional changes in the auditory cortex (ACtx), such as altered response dynamics and increased population correlations. Given that cortical function can be altered by training, we tested whether performing auditory tasks might benefit auditory function in old age. We examined this by training adult mice on a low-effort tone-detection task for at least six months and then investigating functional responses in ACtx at an older age (∼18 months). Task performance remained stable well into old age. Comparing sound-evoked responses of thousands of ACtx neurons using in vivo two-photon Ca2+ imaging, we found that many aspects of youthful neuronal activity, including low activity correlations, lower neural excitability, and a greater proportion of suppressed responses, were preserved in trained old animals compared with passively exposed old animals. Thus, consistent training on a low-effort task can counteract age-related functional changes in ACtx and may preserve many aspects of auditory function.
Affiliation(s)
- Jonah K Mittelstadt
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21205, USA; Solomon H. Snyder Department of Neuroscience, Johns Hopkins University, Baltimore, MD 21205, USA; Department of Biology, University of Maryland, College Park, MD 20742, USA
- Kelson V Shilling-Scrivo
- Department of Biology, University of Maryland, College Park, MD 20742, USA; Department of Anatomy and Neurobiology, University of Maryland School of Medicine, Baltimore, MD 21230, USA
- Patrick O Kanold
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21205, USA; Solomon H. Snyder Department of Neuroscience, Johns Hopkins University, Baltimore, MD 21205, USA; Department of Biology, University of Maryland, College Park, MD 20742, USA; Kavli Neuroscience Discovery Institute, Johns Hopkins University, Baltimore, MD 21205, USA
3
Arribas DM, Marin-Burgin A, Morelli LG. Adult-born granule cells improve stimulus encoding and discrimination in the dentate gyrus. eLife 2023;12:e80250. PMID: 37584478. PMCID: PMC10476965. DOI: 10.7554/eLife.80250. Received 05/13/2022; accepted 08/15/2023.
Abstract
Heterogeneity plays an important role in diversifying neural responses to support brain function. Adult neurogenesis provides the dentate gyrus with a heterogeneous population of granule cells (GCs) that were born and developed their properties at different times. Immature GCs have distinct intrinsic and synaptic properties from mature GCs and are needed for correct encoding and discrimination in spatial tasks. How immature GCs enhance the encoding of information to support these functions is not well understood. Here, we record the responses of GCs of different ages to fluctuating current injections in mouse hippocampal slices to study how they encode stimuli. Immature GCs produce unreliable responses compared with mature GCs, exhibiting imprecise spike timing across repeated stimulation. We use a statistical model to describe the stimulus-response transformation performed by GCs of different ages. We fit this model to the data and obtain parameters that capture GCs' encoding properties. Parameter values from this fit reflect the maturational differences of the population and indicate that immature GCs perform a differential encoding of stimuli. To study how this age heterogeneity influences encoding by a population, we perform stimulus decoding using populations that contain GCs of different ages. We find that, despite their individual unreliability, immature GCs enhance the fidelity of the signal encoded by the population and improve the discrimination of similar time-dependent stimuli. Thus, the observed heterogeneity confers enhanced encoding capabilities on the population.
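The trial-to-trial (un)reliability contrasted in this abstract can be made concrete with a minimal sketch: quantify reliability as the mean pairwise correlation of binned responses across repeated presentations of the same stimulus. The jitter values, spike counts, and bin size here are hypothetical, not fit to the recordings:

```python
import numpy as np

rng = np.random.default_rng(1)

def spike_trains(jitter_ms, n_trials=20, n_spikes=30, T=1000.0):
    """Repeated responses to one stimulus: fixed preferred times plus jitter."""
    base = np.sort(rng.uniform(0, T, n_spikes))
    return [np.clip(base + rng.normal(0, jitter_ms, n_spikes), 0, T)
            for _ in range(n_trials)]

def reliability(trials, T=1000.0, bin_ms=10.0):
    """Mean pairwise correlation of binned spike counts across trials."""
    edges = np.arange(0, T + bin_ms, bin_ms)
    counts = np.array([np.histogram(t, edges)[0] for t in trials])
    c = np.corrcoef(counts)
    return c[np.triu_indices_from(c, k=1)].mean()

mature = reliability(spike_trains(jitter_ms=2.0))     # precise spike timing
immature = reliability(spike_trains(jitter_ms=30.0))  # imprecise spike timing
print(f"reliability  mature: {mature:.2f}  immature: {immature:.2f}")
```

Larger timing jitter relative to the analysis bin drives the pairwise correlation toward zero, which is one simple way to operationalize "imprecise spike timing across repeated stimulation."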
Affiliation(s)
- Diego M Arribas
- Instituto de Investigacion en Biomedicina de Buenos Aires (IBioBA) – CONICET/Partner Institute of the Max Planck Society, Polo Cientifico Tecnologico, Buenos Aires, Argentina
- Antonia Marin-Burgin
- Instituto de Investigacion en Biomedicina de Buenos Aires (IBioBA) – CONICET/Partner Institute of the Max Planck Society, Polo Cientifico Tecnologico, Buenos Aires, Argentina
- Luis G Morelli
- Instituto de Investigacion en Biomedicina de Buenos Aires (IBioBA) – CONICET/Partner Institute of the Max Planck Society, Polo Cientifico Tecnologico, Buenos Aires, Argentina
- Departamento de Fisica, FCEyN UBA, Ciudad Universitaria, Buenos Aires, Argentina
- Max Planck Institute for Molecular Physiology, Department of Systemic Cell Biology, Dortmund, Germany
4
Shi K, Quass GL, Rogalla MM, Ford AN, Czarny JE, Apostolides PF. Population coding of time-varying sounds in the non-lemniscal inferior colliculus. bioRxiv 2023:2023.08.14.553263 [preprint]. PMID: 37645904. PMCID: PMC10461978. DOI: 10.1101/2023.08.14.553263.
Abstract
The inferior colliculus (IC) of the midbrain is important for complex sound processing, such as discriminating conspecific vocalizations and human speech. The IC's non-lemniscal, dorsal "shell" region is likely important for this process, as neurons in these layers project to higher-order thalamic nuclei that subsequently funnel acoustic signals to the amygdala and non-primary auditory cortices, forebrain circuits important for vocalization coding in a variety of mammals, including humans. However, the extent to which shell IC neurons transmit acoustic features necessary to discern vocalizations is less clear, owing to the technical difficulty of recording from neurons in the IC's superficial layers via traditional approaches. Here, we use 2-photon Ca2+ imaging in mice of either sex to test how shell IC neuron populations encode the rate and depth of amplitude modulation, important sound cues for speech perception. Most shell IC neurons were broadly tuned, with low neurometric discrimination of amplitude modulation rate; only a subset was highly selective to specific modulation rates. Nevertheless, a neural network classifier trained on fluorescence data from shell IC neuron populations accurately classified amplitude modulation rate, and decoding accuracy was only marginally reduced when highly tuned neurons were omitted from the training data. Rather, classifier accuracy increased monotonically with the modulation depth of the training data, such that classifiers trained on full-depth modulated sounds had median decoding errors of ~0.2 octaves. Thus, shell IC neurons may transmit time-varying signals via a population code, with perhaps limited reliance on the discriminative capacity of any individual neuron.
Affiliation(s)
- Kaiwen Shi
- Kresge Hearing Research Institute, Department of Otolaryngology-Head & Neck Surgery, University of Michigan Medical School, Ann Arbor, MI 48109
- Gunnar L. Quass
- Kresge Hearing Research Institute, Department of Otolaryngology-Head & Neck Surgery, University of Michigan Medical School, Ann Arbor, MI 48109
- Meike M. Rogalla
- Kresge Hearing Research Institute, Department of Otolaryngology-Head & Neck Surgery, University of Michigan Medical School, Ann Arbor, MI 48109
- Alexander N. Ford
- Kresge Hearing Research Institute, Department of Otolaryngology-Head & Neck Surgery, University of Michigan Medical School, Ann Arbor, MI 48109
- Jordyn E. Czarny
- Kresge Hearing Research Institute, Department of Otolaryngology-Head & Neck Surgery, University of Michigan Medical School, Ann Arbor, MI 48109
- Pierre F. Apostolides
- Kresge Hearing Research Institute, Department of Otolaryngology-Head & Neck Surgery, University of Michigan Medical School, Ann Arbor, MI 48109
- Department of Molecular & Integrative Physiology, University of Michigan Medical School, Ann Arbor, MI 48109
5
Bálint A, Szabó Á, Andics A, Gácsi M. Dog and human neural sensitivity to voicelikeness: a comparative fMRI study. Neuroimage 2023;265:119791. PMID: 36476565. DOI: 10.1016/j.neuroimage.2022.119791. Received 08/02/2022; revised 12/01/2022; accepted 12/03/2022.
Abstract
Voice sensitivity in the auditory cortex of a range of mammals has been proposed to be determined primarily by tuning to conspecific auditory stimuli, but recent human findings indicate a role for a more general tuning to voicelikeness. Vocal emotional valence, a central characteristic of vocalisations, has been linked to the same basic acoustic parameters across species. Comparative neuroimaging revealed that during voice perception, such acoustic parameters modulate emotional valence-sensitivity in auditory cortical regions in both family dogs and humans. To explore the role of voicelikeness in auditory emotional valence-sensitivity across species, here we constructed artificial emotional sounds in two sound categories, voice-like vs. sine-wave sounds, parametrically modulating two main acoustic parameters: f0 and call length. We hypothesised that if mammalian auditory systems are characterised by a general tuning to voicelikeness, voice-like sounds will be processed preferentially, and acoustic parameters for voice-like sounds will be processed differently than for sine-wave sounds, both in dogs and humans. We found cortical areas in both species that responded more strongly to voice-like than to sine-wave stimuli, while no regions in either species responded more strongly to sine-wave sounds. Additionally, we found that in bilateral primary and emotional valence-sensitive auditory regions of both species, the processing of voice-like and sine-wave sounds is modulated by f0 in opposite ways. These results reveal functional similarities between evolutionarily distant mammals in processing voicelikeness and its effect on processing basic acoustic cues of vocal emotions.
Affiliation(s)
- Anna Bálint
- ELKH-ELTE Comparative Ethology Research Group, H-1117 Budapest, Pázmány Péter sétány 1/C, Hungary
- Ádám Szabó
- Department of Neuroradiology, Medical Imaging Centre, Semmelweis University, H-1082 Budapest, Üllői út 78a, Hungary
- Attila Andics
- Department of Ethology, Eötvös Loránd University, H-1117 Budapest, Pázmány Péter sétány 1/C, Hungary; MTA-ELTE 'Lendület' Neuroethology of Communication Research Group, Hungarian Academy of Sciences - Eötvös Loránd University, H-1117 Budapest, Pázmány Péter sétány 1/C, Hungary; ELTE NAP Canine Brain Research Group, H-1117 Budapest, Pázmány Péter sétány 1/C, Hungary
- Márta Gácsi
- ELKH-ELTE Comparative Ethology Research Group, H-1117 Budapest, Pázmány Péter sétány 1/C, Hungary; Department of Ethology, Eötvös Loránd University, H-1117 Budapest, Pázmány Péter sétány 1/C, Hungary
6
Seenivasan P, Narayanan R. Efficient information coding and degeneracy in the nervous system. Curr Opin Neurobiol 2022;76:102620. PMID: 35985074. PMCID: PMC7613645. DOI: 10.1016/j.conb.2022.102620. Received 02/16/2022; revised 07/01/2022; accepted 07/07/2022.
Abstract
Efficient information coding (EIC) is a universal biological framework rooted in the fundamental principle that system responses should match their natural stimulus statistics for maximizing environmental information. Quantitatively assessed through information theory, such adaptation to the environment occurs at all biological levels and timescales. The context dependence of environmental stimuli and the need for stable adaptations make EIC a daunting task. We argue that biological complexity is the principal architect that subserves deft execution of stable EIC. Complexity in a system is characterized by several functionally segregated subsystems that show a high degree of functional integration when they interact with each other. Complex biological systems manifest heterogeneities and degeneracy, wherein structurally different subsystems could interact to yield the same functional outcome. We argue that complex systems offer several choices that effectively implement EIC and homeostasis for each of the different contexts encountered by the system.
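The core premise of efficient information coding — that response statistics should match natural stimulus statistics to maximize transmitted information — can be sketched with a classic toy example. The exponential stimulus distribution and the 16 response levels below are assumptions chosen for illustration: an encoder matched to the stimulus cumulative distribution equalizes its output histogram, and a uniform histogram maximizes response entropy for a fixed number of levels.

```python
import numpy as np

rng = np.random.default_rng(2)
stim = rng.exponential(1.0, 100_000)   # skewed "natural" stimulus statistics

def response_entropy(responses, n_levels=16):
    """Entropy (bits) of the response histogram over its own range."""
    p, _ = np.histogram(responses, bins=n_levels)
    p = p / p.sum()
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

# Linear encoder vs. an encoder matched to the stimulus CDF (histogram equalization).
linear = stim / stim.max()
matched = 1.0 - np.exp(-stim)          # CDF of Exp(1): makes the output uniform

h_lin, h_match = response_entropy(linear), response_entropy(matched)
print(f"linear: {h_lin:.2f} bits  matched: {h_match:.2f} bits  (max {np.log2(16):.0f})")
```

The matched encoder approaches the 4-bit ceiling while the linear encoder wastes most of its output range on rare large stimuli, which is the stimulus-matching intuition behind the framework reviewed above.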
Affiliation(s)
- Pavithraa Seenivasan
- Cellular Neurophysiology Laboratory, Molecular Biophysics Unit, Indian Institute of Science, Bangalore, 560012, India
- Rishikesh Narayanan
- Cellular Neurophysiology Laboratory, Molecular Biophysics Unit, Indian Institute of Science, Bangalore, 560012, India
7
Shilling-Scrivo K, Mittelstadt J, Kanold PO. Altered response dynamics and increased population correlation to tonal stimuli embedded in noise in aging auditory cortex. J Neurosci 2021;41:9650-9668. PMID: 34611028. PMCID: PMC8612470. DOI: 10.1523/JNEUROSCI.0839-21.2021. Received 04/18/2021; revised 09/25/2021; accepted 09/29/2021.
Abstract
Age-related hearing loss (presbycusis) is a chronic health condition that affects one-third of the world population. One hallmark of presbycusis is difficulty hearing in noisy environments. Presbycusis can be separated into two components: alterations of peripheral mechanotransduction of sound in the cochlea and central alterations of auditory processing areas of the brain. Although the effects of the aging cochlea in hearing loss have been well studied, the role of the aging brain in hearing loss is less well understood. Therefore, to examine how age-related central processing changes affect hearing in noisy environments, we used a mouse model (Thy1-GCaMP6s X CBA) that has excellent peripheral hearing in old age. We used in vivo two-photon Ca2+ imaging to measure the responses of neuronal populations in auditory cortex (ACtx) of adult (2-6 months, nine male, six female, 4180 neurons) and aging mice (15-17 months, six male, three female, 1055 neurons) while they listened to tones in noisy backgrounds. We found that ACtx neurons in aging mice showed larger responses to tones and fewer suppressed responses, consistent with reduced inhibition. Aging neurons also showed less sensitivity to temporal changes. Population analysis showed that neurons in aging mice had higher pairwise activity correlations and a reduced diversity of responses to sound stimuli. Using neural decoding techniques, we show a loss of information in neuronal populations in the aging brain. Thus, aging not only affects the responses of single neurons but also affects how these neurons jointly represent stimuli.
SIGNIFICANCE STATEMENT Aging results in hearing deficits, particularly under challenging listening conditions. We show that auditory cortex contains distinct subpopulations of excitatory neurons that preferentially encode different stimulus features and that aging selectively reduces certain subpopulations. We also show that aging increases correlated activity between neurons and thereby reduces the response diversity in auditory cortex. The loss of population response diversity leads to a decrease of stimulus information and deficits in sound encoding, especially in noisy backgrounds. Future work determining the identities of circuits affected by aging could provide new targets for therapeutic strategies.
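The link between pairwise correlations and population information invoked here can be sketched in a few lines. The neuron counts, trial counts, and shared-noise fractions below are invented for illustration: shared variability survives averaging across neurons, so higher correlations cap what pooling a population can recover.

```python
import numpy as np

rng = np.random.default_rng(3)
n_trials, n_neurons = 2000, 50

def population(shared_frac):
    """Trial-by-neuron noise: a private part plus a shared (correlated) part."""
    private = rng.standard_normal((n_trials, n_neurons))
    shared = rng.standard_normal((n_trials, 1))
    return np.sqrt(1 - shared_frac) * private + np.sqrt(shared_frac) * shared

def mean_pairwise_corr(x):
    c = np.corrcoef(x.T)
    return c[np.triu_indices_from(c, k=1)].mean()

young = population(0.05)   # low pairwise correlations
aging = population(0.40)   # elevated pairwise correlations

# Averaging across neurons cancels private noise but not the shared component,
# so correlated variability limits the benefit of pooling.
for name, pop in [("young", young), ("aging", aging)]:
    print(f"{name}: corr {mean_pairwise_corr(pop):.2f}, "
          f"pooled-noise SD {pop.mean(axis=1).std():.2f}")
```

With independent noise the pooled standard deviation shrinks roughly as 1/sqrt(N); the shared component does not shrink at all, which is one standard intuition for why increased correlations reduce decodable stimulus information.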
Affiliation(s)
- Kelson Shilling-Scrivo
- Department of Anatomy and Neurobiology, University of Maryland School of Medicine, Baltimore, Maryland 21230
- Jonah Mittelstadt
- Department of Biology, University of Maryland, College Park, Maryland 20742
- Patrick O Kanold
- Department of Biology, University of Maryland, College Park, Maryland 20742
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland 21205
- Kavli Neuroscience Discovery Institute, Johns Hopkins University, Baltimore, Maryland 21205
8
Mishra AP, Peng F, Li K, Harper NS, Schnupp JWH. Sensitivity of neural responses in the inferior colliculus to statistical features of sound textures. Hear Res 2021;412:108357. PMID: 34739889. DOI: 10.1016/j.heares.2021.108357. Received 05/11/2021; revised 09/04/2021; accepted 09/21/2021.
Abstract
Previous psychophysical studies have identified a hierarchy of time-averaged statistics that determine the identity of natural sound textures. However, it is unclear whether neurons in the inferior colliculus (IC) are sensitive to each of these statistical features of natural sound textures. We used 13 representative sound textures spanning the space of 3 statistics extracted from over 200 natural textures. The synthetic textures were generated by incorporating the statistical features in a step-by-step manner, in which a particular statistical feature was changed while the other statistical features remained unchanged. Extracellular activity in response to the synthetic texture stimuli was recorded in the IC of anesthetized rats. Analysis of the transient and sustained multiunit activity after each transition of statistical feature showed that the IC units were sensitive to changes in all types of statistics, although to a varying extent. For example, more neurons were sensitive to changes in variance than to changes in the modulation correlations. Our results suggest that sensitivity to these statistical features at subcortical levels contributes to the identification and discrimination of natural sound textures.
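For a concrete handle on "time-averaged statistics," here is a rough sketch in the spirit of texture-statistics models: decompose a sound into frequency bands, extract each band's envelope, and summarize the envelopes by their marginal moments and cross-band correlations. The band-split method, band count, and smoothing window are simplifications invented for illustration, not the cochlear model used in texture synthesis.

```python
import numpy as np

rng = np.random.default_rng(4)
fs = 16_000
x = rng.standard_normal(2 * fs)        # 2 s of noise as a stand-in "texture"

def band_envelopes(x, n_bands=4, win=160):
    """Crude band decomposition: split the spectrum, then smooth |band signal|."""
    X = np.fft.rfft(x)
    edges = np.linspace(0, X.size, n_bands + 1, dtype=int)
    envs = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        Y = np.zeros_like(X)
        Y[lo:hi] = X[lo:hi]
        env = np.abs(np.fft.irfft(Y, n=x.size))
        envs.append(np.convolve(env, np.ones(win) / win, mode="same"))
    return np.array(envs)

def texture_stats(x):
    """Time-averaged statistics of the kind texture synthesis manipulates."""
    E = band_envelopes(x)
    return E.mean(axis=1), E.var(axis=1), np.corrcoef(E)  # marginals + cross-band corr

means, variances, corr = texture_stats(x)
print("band envelope means:", np.round(means, 3))
```

Changing one of these summary statistics while holding the others fixed is, in miniature, the step-by-step manipulation the stimuli above were built around.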
Affiliation(s)
- Ambika P Mishra
- Department of Neuroscience, City University of Hong Kong, Hong Kong SAR
- Fei Peng
- Department of Neuroscience, City University of Hong Kong, Hong Kong SAR
- Kongyan Li
- Department of Neuroscience, City University of Hong Kong, Hong Kong SAR
- Nicol S Harper
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, UK
- Jan W H Schnupp
- Department of Neuroscience, City University of Hong Kong, Hong Kong SAR
9
Gentile Polese A, Nigam S, Hurley LM. 5-HT1A receptors alter temporal responses to broadband vocalizations in the mouse inferior colliculus through response suppression. Front Neural Circuits 2021;15:718348. PMID: 34512276. PMCID: PMC8430226. DOI: 10.3389/fncir.2021.718348. Received 05/31/2021; accepted 07/19/2021.
Abstract
Neuromodulatory systems may provide information on social context to auditory brain regions, but relatively few studies have assessed the effects of neuromodulation on auditory responses to acoustic social signals. To address this issue, we measured the influence of the serotonergic system on the responses of neurons in a mouse auditory midbrain nucleus, the inferior colliculus (IC), to vocal signals. Broadband vocalizations (BBVs) are human-audible signals produced by mice in distress as well as by female mice in opposite-sex interactions. The production of BBVs is context-dependent in that they are produced both at early stages of interactions as females physically reject males and at later stages as males mount females. Serotonin in the IC of males corresponds to these events, and is elevated more in males that experience less female rejection. We measured the responses of single IC neurons to five recorded examples of BBVs in anesthetized mice. We then locally activated the 5-HT1A receptor through iontophoretic application of 8-OH-DPAT. IC neurons showed little selectivity for different BBVs, but spike trains were characterized by local regions of high spike probability, which we called "response features." Response features varied across neurons and also across calls for individual neurons, ranging from 1 to 7 response features for responses of single neurons to single calls. 8-OH-DPAT suppressed spikes and also reduced the numbers of response features. The weakest response features were the most likely to disappear, suggestive of an "iceberg"-like effect in which activation of the 5-HT1A receptor suppressed weakly suprathreshold response features below the spiking threshold. Because serotonin in the IC is more likely to be elevated for mounting-associated BBVs than for rejection-associated BBVs, these effects of the 5-HT1A receptor could contribute to the differential auditory processing of BBVs in different behavioral subcontexts.
Affiliation(s)
- Arianna Gentile Polese
- Department of Cell and Developmental Biology, University of Colorado Anschutz Medical Campus, Aurora, CO, United States
- Department of Biology, Program in Neuroscience, Indiana University Bloomington, Bloomington, IN, United States
- Sunny Nigam
- Department of Neurobiology and Anatomy, McGovern Medical School, The University of Texas Health Science Center at Houston, Houston, TX, United States
- Department of Physics, Indiana University Bloomington, Bloomington, IN, United States
- Laura M. Hurley
- Department of Neurobiology and Anatomy, McGovern Medical School, The University of Texas Health Science Center at Houston, Houston, TX, United States
10
Logerot P, Smith PF, Wild M, Kubke MF. Auditory processing in the zebra finch midbrain: single unit responses and effect of rearing experience. PeerJ 2020;8:e9363. PMID: 32775046. PMCID: PMC7384439. DOI: 10.7717/peerj.9363. Received 09/05/2019; accepted 05/26/2020.
Abstract
In birds the auditory system plays a key role in providing the sensory input used to discriminate between conspecific and heterospecific vocal signals. In those species that are known to learn their vocalizations, for example, songbirds, it is generally considered that this ability arises and is manifest in the forebrain, although there is no a priori reason why brainstem components of the auditory system could not also play an important part. To test this assumption, we used groups of normally reared and cross-fostered zebra finches that had previously been shown in behavioural experiments to reduce their preference for conspecific songs subsequent to cross-fostering experience with Bengalese finches, a related species with a distinctly different song. The question we asked, therefore, is whether this experiential change also changes the bias in favour of conspecific song displayed by auditory midbrain units of normally reared zebra finches. By recording the responses of single units in MLd to a variety of zebra finch and Bengalese finch songs in both normally reared and cross-fostered zebra finches, we provide a positive answer to this question. That is, the difference in response to conspecific and heterospecific songs seen in normally reared zebra finches is reduced following cross-fostering. In birds the virtual absence of mammalian-like cortical projections upon auditory brainstem nuclei argues against the interpretation that MLd units change, as observed in the present experiments, as a result of top-down influences on sensory processing. Instead, it appears that MLd units can be influenced significantly by sensory inputs arising directly from a change in auditory experience during development.
Affiliation(s)
- Priscilla Logerot
- Anatomy and Medical Imaging, University of Auckland, Auckland, New Zealand
- Paul F. Smith
- Department of Pharmacology and Toxicology, School of Biomedical Sciences, Brain Health Research Centre, Brain Research New Zealand, and Eisdell Moore Centre, University of Otago, Dunedin, New Zealand
- Martin Wild
- Anatomy and Medical Imaging and Eisdell Moore Centre, University of Auckland, Auckland, New Zealand
- M. Fabiana Kubke
- Anatomy and Medical Imaging, Centre for Brain Research and Eisdell Moore Centre, University of Auckland, Auckland, New Zealand
11
Cai H, Dent ML. Best sensitivity of temporal modulation transfer functions in laboratory mice matches the amplitude modulation embedded in vocalizations. J Acoust Soc Am 2020;147:337. PMID: 32006990. PMCID: PMC7043865. DOI: 10.1121/10.0000583. Received 07/30/2019; revised 12/18/2019; accepted 12/22/2019.
Abstract
The perception of spectrotemporal changes is crucial for distinguishing between acoustic signals, including vocalizations. Temporal modulation transfer functions (TMTFs) have been measured in many species and reveal that the discrimination of amplitude modulation suffers at rapid modulation frequencies. TMTFs were measured in six CBA/CaJ mice in an operant conditioning procedure, in which mice were trained to discriminate an 800 ms amplitude-modulated white noise target from a continuous noise background. The TMTFs of mice show a bandpass characteristic, with an upper cutoff frequency of around 567 Hz. Within the measured modulation frequencies, ranging from 5 Hz to 1280 Hz, the mice showed best sensitivity to amplitude modulation at around 160 Hz. To look for a possible parallel evolution between sound perception and production, we also analyzed the components of amplitude modulation embedded in natural ultrasonic vocalizations (USVs) emitted by this strain. We found that the cutoff frequency of amplitude modulation in most individual USVs falls within the most sensitive range obtained from the psychoacoustic experiments. Further analyses of the durations and modulation frequency ranges of USVs indicated that the broader the frequency range of amplitude modulation in a natural USV, the shorter its duration.
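A sinusoidally amplitude-modulated noise of the sort used for TMTF measurements can be generated and sanity-checked in a few lines. The sample rate and the crude rectification-based envelope estimate are illustrative choices, not the study's stimulus-generation code:

```python
import numpy as np

fs = 44_100                        # sample rate (Hz); illustrative choice
t = np.arange(int(fs * 0.8)) / fs  # 800 ms target, as in the task described above

def am_noise(fm, depth=1.0, seed=0):
    """White noise with sinusoidal amplitude modulation at rate fm (Hz)."""
    carrier = np.random.default_rng(seed).standard_normal(t.size)
    return (1.0 + depth * np.sin(2 * np.pi * fm * t)) * carrier

# Sanity check: the modulation rate is recoverable from the envelope spectrum.
x = am_noise(fm=160.0)
env = np.abs(x)                    # crude envelope estimate by rectification
spec = np.abs(np.fft.rfft(env - env.mean()))
freqs = np.fft.rfftfreq(env.size, 1 / fs)
print(f"envelope spectrum peaks near {freqs[np.argmax(spec)]:.1f} Hz")
```

The `depth` parameter scales the envelope excursion, which is the quantity a TMTF threshold tracks as modulation frequency varies.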
Affiliation(s)
- Huaizhen Cai: Department of Psychology, University at Buffalo-SUNY, Buffalo, New York 14260, USA
- Micheal L Dent: Department of Psychology, University at Buffalo-SUNY, Buffalo, New York 14260, USA
12
Gourévitch B, Mahrt EJ, Bakay W, Elde C, Portfors CV. GABAA receptors contribute more to rate than temporal coding in the IC of awake mice. J Neurophysiol 2020; 123:134-148. [PMID: 31721644] [DOI: 10.1152/jn.00377.2019] [Citation(s) in RCA: 3]
Abstract
Speech is our most important form of communication, yet we have a poor understanding of how communication sounds are processed by the brain. Mice are excellent model organisms for studying the neural processing of communication sounds because of their rich repertoire of social vocalizations and because they have brain structures analogous to those of humans, such as the inferior colliculus (IC), the auditory midbrain nucleus. Although the combined roles of GABAergic and glycinergic inhibition in vocalization selectivity in the IC have been studied to a limited degree, the discrete contributions of GABAergic inhibition have only rarely been examined. In this study, we examined how GABAergic inhibition contributes to shaping responses to pure tones as well as selectivity to complex sounds in the IC of awake mice. In our set of long-latency neurons, we found that GABAergic inhibition extends the evoked firing rate range of IC neurons by lowering the baseline firing rate while maintaining the most probable firing rate. GABAergic inhibition also prevented IC neurons from bursting in the spontaneous state. Finally, we found that although GABAergic inhibition shaped the spectrotemporal response to vocalizations in a nonlinear fashion, it did not affect the neural code needed to discriminate vocalizations, based either on spiking patterns or on firing rate. Overall, our results emphasize that even though GABAergic inhibition generally decreases the firing rate, it maintains or extends the ability of IC neurons to code the wide variety of sounds that mammals are exposed to in their daily lives.
NEW & NOTEWORTHY GABAergic inhibition adds nonlinearity to neuronal response curves. This increases the neuronal range of evoked firing rates by reducing baseline firing. GABAergic inhibition also prevents neurons from bursting in the spontaneous state, reducing noise in their temporal code. This could result in improved signal transmission to the cortex.
Affiliation(s)
- Boris Gourévitch: Institut de l'Audition, Institut Pasteur, INSERM, Sorbonne Université, F-75012 Paris, France; CNRS, France
- Elena J Mahrt: School of Biological Sciences, Washington State University, Vancouver, Washington
- Warren Bakay: Institut de l'Audition, Institut Pasteur, INSERM, Sorbonne Université, F-75012 Paris, France
- Cameron Elde: School of Biological Sciences, Washington State University, Vancouver, Washington
- Christine V Portfors: School of Biological Sciences, Washington State University, Vancouver, Washington
13
Sadeghi M, Zhai X, Stevenson IH, Escabí MA. A neural ensemble correlation code for sound category identification. PLoS Biol 2019; 17:e3000449. [PMID: 31574079] [PMCID: PMC6788721] [DOI: 10.1371/journal.pbio.3000449] [Citation(s) in RCA: 8]
Abstract
Humans and other animals effortlessly identify natural sounds and categorize them into behaviorally relevant categories. Yet, the acoustic features and neural transformations that enable sound recognition and the formation of perceptual categories are largely unknown. Here, using multichannel neural recordings in the auditory midbrain of unanesthetized female rabbits, we first demonstrate that neural ensemble activity in the auditory midbrain displays highly structured correlations that vary with distinct natural sound stimuli. These stimulus-driven correlations can be used to accurately identify individual sounds using single-response trials, even when the sounds do not differ in their spectral content. Combining neural recordings and an auditory model, we then show how correlations between frequency-organized auditory channels can contribute to discrimination of not just individual sounds but sound categories. For both the model and neural data, spectral and temporal correlations achieved similar categorization performance and appear to contribute equally. Moreover, both the neural and model classifiers achieve their best task performance when they accumulate evidence over a time frame of approximately 1-2 seconds, mirroring human perceptual trends. These results together suggest that time-frequency correlations in sounds may be reflected in the correlations between auditory midbrain ensembles and that these correlations may play an important role in the identification and categorization of natural sounds.
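The core idea, identifying single trials from the correlation structure of frequency-organized ensemble responses, can be sketched with a toy nearest-template decoder. This is a simplified stand-in for the paper's analysis; the function names and the Euclidean nearest-template rule are assumptions:

```python
import numpy as np

def corr_signature(resp):
    """Upper triangle of the across-channel correlation matrix for one trial.

    resp: (n_channels, n_timebins) array of firing rates or model outputs.
    """
    c = np.corrcoef(resp)
    iu = np.triu_indices_from(c, k=1)
    return c[iu]

def classify_by_correlation(test_resp, templates):
    """Assign a single trial to the sound whose template correlation
    signature is nearest (Euclidean distance).

    templates: dict mapping sound label -> reference signature vector.
    """
    sig = corr_signature(test_resp)
    return min(templates, key=lambda lbl: np.linalg.norm(sig - templates[lbl]))
```

In practice the templates would be trial-averaged signatures per sound, and separate time-lagged correlations would capture the temporal component; the sketch only shows the zero-lag spectral case.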
Affiliation(s)
- Mina Sadeghi: Department of Electrical and Computer Engineering, University of Connecticut, Storrs, Connecticut, USA
- Xiu Zhai: Departments of Electrical and Computer Engineering and Biomedical Engineering, University of Connecticut, Storrs, Connecticut, USA
- Ian H. Stevenson: Departments of Biomedical Engineering and Psychological Sciences, University of Connecticut, Storrs, Connecticut, USA
- Monty A. Escabí: Departments of Electrical and Computer Engineering, Biomedical Engineering, and Psychological Sciences, University of Connecticut, Storrs, Connecticut, USA
14
Kobrina A, Dent ML. The effects of age and sex on the detection of pure tones by adult CBA/CaJ mice (Mus musculus). J Neurosci Res 2019; 98:1731-1744. [PMID: 31304616] [DOI: 10.1002/jnr.24496] [Citation(s) in RCA: 12]
Abstract
Age-related hearing loss (ARHL) is a neurodegenerative disorder characterized by a gradual decrease in hearing sensitivity. Previous electrophysiological and behavioral studies have demonstrated that the CBA/CaJ mouse strain is an appropriate model for the late-onset hearing loss found in humans. However, few studies have characterized hearing in these mice behaviorally using longitudinal methodologies. The goal of this research was to utilize a longitudinal design and operant conditioning procedures with positive reinforcement to construct audiograms and temporal integration functions in aging CBA/CaJ mice. In the first experiment, thresholds were collected for 8, 16, 24, 42, and 64 kHz pure tones in 30 male and 35 female CBA/CaJ mice. Similar to humans, mice had higher thresholds for high frequency tones than for low frequency pure tones across the lifespan. Female mice had better hearing acuity than males after 645 days of age. In the second experiment, temporal integration functions were constructed for 18 male and 18 female mice for 16 and 64 kHz tones varying in duration. Mice showed an increase in thresholds for tones shorter than 200 ms, reaching peak performance at shorter durations than other rodent species. Overall, CBA/CaJ mice experience ARHL for pure tones of different frequencies and durations, making them a good model for studies on hearing loss. These findings highlight the importance of using a wide range of stimuli and a longitudinal design when comparing presbycusis across different species.
Affiliation(s)
- Micheal L Dent: Department of Psychology, University at Buffalo SUNY, Buffalo, New York
15
A broad filter between call frequency and peripheral auditory sensitivity in northern grasshopper mice (Onychomys leucogaster). J Comp Physiol A Neuroethol Sens Neural Behav Physiol 2019; 205:481-489. [DOI: 10.1007/s00359-019-01338-0] [Citation(s) in RCA: 4]
16
Liu ST, Montes-Lourido P, Wang X, Sadagopan S. Optimal features for auditory categorization. Nat Commun 2019; 10:1302. [PMID: 30899018] [PMCID: PMC6428858] [DOI: 10.1038/s41467-019-09115-y] [Citation(s) in RCA: 17]
Abstract
Humans and vocal animals use vocalizations to communicate with members of their species. A necessary function of auditory perception is to generalize across the high variability inherent in vocalization production and classify them into behaviorally distinct categories ('words' or 'call types'). Here, we demonstrate that detecting mid-level features in calls achieves production-invariant classification. Starting from randomly chosen marmoset call features, we use a greedy search algorithm to determine the most informative and least redundant features necessary for call classification. High classification performance is achieved using only 10-20 features per call type. Predictions of tuning properties of putative feature-selective neurons accurately match some observed auditory cortical responses. This feature-based approach also succeeds for call categorization in other species, and for other complex classification tasks such as caller identification. Our results suggest that high-level neural representations of sounds are based on task-dependent features optimized for specific computational goals.
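The greedy search idea, iteratively adding the single feature that most improves classification while earlier picks absorb redundant ones, can be illustrated with a generic forward-selection loop. This is not the authors' algorithm; the nearest-centroid scorer and all names are assumptions:

```python
import numpy as np

def greedy_select(X, y, n_keep=5):
    """Greedy forward feature selection (illustrative sketch).

    At each step, add the remaining feature that yields the highest
    training accuracy of a simple nearest-centroid classifier.
    X: (n_samples, n_features) feature-detector outputs, y: integer labels.
    """
    chosen, remaining = [], list(range(X.shape[1]))

    def accuracy(feats):
        Xs = X[:, feats]
        centroids = {c: Xs[y == c].mean(axis=0) for c in np.unique(y)}
        pred = [min(centroids, key=lambda c: np.linalg.norm(row - centroids[c]))
                for row in Xs]
        return float(np.mean(np.array(pred) == y))

    while remaining and len(chosen) < n_keep:
        best = max(remaining, key=lambda f: accuracy(chosen + [f]))
        chosen.append(best)
        remaining.remove(best)
    return chosen
```

Because each feature is scored in the context of those already chosen, a feature redundant with an earlier pick adds little accuracy and tends not to be selected, which is the informative-and-non-redundant behavior the abstract describes.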
Affiliation(s)
- Shi Tong Liu: Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA 15213, USA
- Pilar Montes-Lourido: Department of Neurobiology, University of Pittsburgh, Pittsburgh, PA 15213, USA
- Xiaoqin Wang: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21205, USA
- Srivatsun Sadagopan: Departments of Bioengineering, Neurobiology, and Otolaryngology, University of Pittsburgh, Pittsburgh, PA 15213, USA
17
Neural processes of vocal social perception: Dog-human comparative fMRI studies. Neurosci Biobehav Rev 2019; 85:54-64. [PMID: 29287629] [DOI: 10.1016/j.neubiorev.2017.11.017] [Citation(s) in RCA: 23]
Abstract
In this review we focus on the exciting new opportunities in comparative neuroscience to study neural processes of vocal social perception by comparing dog and human neural activity using fMRI methods. The dog is a relatively new addition to this research area but has great potential to become a standard species in such investigations. Although there has been great interest in the emergence of human language abilities, most fMRI research to date has focused on homologue comparisons within primates. Because dogs belong to a very different clade of mammalian evolution, they could give such research agendas a more general mammalian foundation. In addition, broadening the scope of investigations into vocal communication in general can also deepen our understanding of human vocal skills. Because dogs were selected for and live in an anthropogenic environment, research with them may also be informative about how human non-linguistic and linguistic signals are represented in a mammalian brain that lacks the capacity for language production.
18
Nomoto K, Ikumi M, Otsuka M, Asaba A, Kato M, Koshida N, Mogi K, Kikusui T. Female mice exhibit both sexual and social partner preferences for vocalizing males. Integr Zool 2019; 13:735-744. [PMID: 30019858] [DOI: 10.1111/1749-4877.12357] [Citation(s) in RCA: 17]
Abstract
Acoustic signals are widely used as courtship signals in the animal kingdom. It has long been known that male mice emit ultrasonic vocalizations (USVs) in the presence of female mice or in response to female secretions, an observation that led to the hypothesis that male USVs play a role in courtship behavior. Although previous studies showed that female mice have a social partner preference for vocalizing males, it was not known whether they also exhibit a sexual partner preference when given a choice. To address this issue, we examined the copulatory behaviors of female mice with either devocalized males (with or without playback of USVs) or sham-operated males in two behavioral paradigms: a no-choice paradigm in the home cage of a male mouse (no choice of mating partner) and a mate-choice paradigm in a three-chambered apparatus (a choice of mating partners). In the no-choice paradigm, female mice exhibited comparable sexual receptivity with sham-operated and devocalized males. In addition, female mice showed more approach behavior toward devocalized males when male USVs were played back. In the mate-choice paradigm, female mice visited sham-operated males more frequently and stayed with them longer than with devocalized males, and they received more intromissions from sham-operated than from devocalized males. In summary, our results suggest that although female mice copulate equally with devocalized and vocalizing males when given no choice of mating partner, they exhibit both sexual and social partner preferences for vocalizing males in the mate-choice paradigm.
Affiliation(s)
- Kensaku Nomoto: Companion Animal Research Laboratory, School of Veterinary Medicine, Azabu University, Kanagawa, Japan
- Mayu Ikumi: Companion Animal Research Laboratory, School of Veterinary Medicine, Azabu University, Kanagawa, Japan
- Monami Otsuka: Companion Animal Research Laboratory, School of Veterinary Medicine, Azabu University, Kanagawa, Japan
- Akari Asaba: Companion Animal Research Laboratory, School of Veterinary Medicine, Azabu University, Kanagawa, Japan
- Nobuyoshi Koshida: Graduate School of Engineering, Tokyo University of Agriculture and Technology, Tokyo, Japan
- Kazutaka Mogi: Companion Animal Research Laboratory, School of Veterinary Medicine, Azabu University, Kanagawa, Japan
- Takefumi Kikusui: Companion Animal Research Laboratory, School of Veterinary Medicine, Azabu University, Kanagawa, Japan
19
Gervain J, Geffen MN. Efficient Neural Coding in Auditory and Speech Perception. Trends Neurosci 2019; 42:56-65. [PMID: 30297085] [PMCID: PMC6542557] [DOI: 10.1016/j.tins.2018.09.004] [Citation(s) in RCA: 24]
Abstract
Speech has long been recognized as 'special'. Here, we suggest that one of the reasons for speech being special is that our auditory system has evolved to encode it in an efficient, optimal way. The theory of efficient neural coding argues that our perceptual systems have evolved to encode environmental stimuli in the most efficient way. Mathematically, this can be achieved if the optimally efficient codes match the statistics of the signals they represent. Experimental evidence suggests that the auditory code is optimal in this mathematical sense: statistical properties of speech closely match response properties of the cochlea, the auditory nerve, and the auditory cortex. Even more interestingly, these results may be linked to phenomena in auditory and speech perception.
Affiliation(s)
- Judit Gervain: Laboratoire Psychologie de la Perception, Université Paris Descartes, Paris, France; Laboratoire Psychologie de la Perception, CNRS, Paris, France
- Maria N Geffen: Departments of Otorhinolaryngology, Neuroscience, and Neurology, University of Pennsylvania, Philadelphia, PA, USA
20
Berry MJ II, Lebois F, Ziskind A, da Silveira RA. Functional Diversity in the Retina Improves the Population Code. Neural Comput 2018; 31:270-311. [PMID: 30576618] [DOI: 10.1162/neco_a_01158] [Citation(s) in RCA: 5]
Abstract
Within a given brain region, individual neurons exhibit a wide variety of different feature selectivities. Here, we investigated the impact of this extensive functional diversity on the population neural code. Our approach was to build optimal decoders to discriminate among stimuli using the spiking output of a real, measured neural population and compare its performance against a matched, homogeneous neural population with the same number of cells and spikes. Analyzing large populations of retinal ganglion cells, we found that the real, heterogeneous population can yield a discrimination error lower than the homogeneous population by several orders of magnitude and consequently can encode much more visual information. This effect increases with population size and with graded degrees of heterogeneity. We complemented these results with an analysis of coding based on the Chernoff distance, as well as derivations of inequalities on coding in certain limits, from which we can conclude that the beneficial effect of heterogeneity occurs over a broad set of conditions. Together, our results indicate that the presence of functional diversity in neural populations can enhance their coding fidelity appreciably. A noteworthy outcome of our study is that this effect can be extremely strong and should be taken into account when investigating design principles for neural circuits.
Affiliation(s)
- Michael J Berry II: Princeton Neuroscience Institute and Department of Molecular Biology, Princeton University, Princeton, NJ 08544, USA
- Felix Lebois: Department of Physics, École Normale Supérieure, 75005 Paris, France
- Avi Ziskind: Department of Physics, Princeton University, Princeton, NJ 08544, USA
- Rava Azeredo da Silveira: Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08544, USA; Department of Physics, École Normale Supérieure, 75005 Paris, France; Laboratoire de Physique Statistique, École Normale Supérieure, PSL Research University, 75231 Paris, France; Université Paris Diderot Sorbonne Paris Cité, 75031 Paris, France; Sorbonne Universités UPMC Université Paris 6, 75005 Paris, France; CNRS, France
21
Felix RA, Gourévitch B, Portfors CV. Subcortical pathways: Towards a better understanding of auditory disorders. Hear Res 2018; 362:48-60. [PMID: 29395615] [PMCID: PMC5911198] [DOI: 10.1016/j.heares.2018.01.008] [Citation(s) in RCA: 37]
Abstract
Hearing loss is a significant problem that affects at least 15% of the population. This percentage, however, is likely significantly higher because of a variety of auditory disorders that are not identifiable through traditional tests of peripheral hearing ability. In these disorders, individuals have difficulty understanding speech, particularly in noisy environments, even though the sounds are loud enough to hear. The underlying mechanisms leading to such deficits are not well understood. To enable the development of suitable treatments to alleviate or prevent such disorders, the affected processing pathways must be identified. Historically, mechanisms underlying speech processing have been thought to be a property of the auditory cortex and thus the study of auditory disorders has largely focused on cortical impairments and/or cognitive processes. As we review here, however, there is strong evidence to suggest that, in fact, deficits in subcortical pathways play a significant role in auditory disorders. In this review, we highlight the role of the auditory brainstem and midbrain in processing complex sounds and discuss how deficits in these regions may contribute to auditory dysfunction. We discuss current research with animal models of human hearing and then consider human studies that implicate impairments in subcortical processing that may contribute to auditory disorders.
Affiliation(s)
- Richard A Felix: School of Biological Sciences and Integrative Physiology and Neuroscience, Washington State University, Vancouver, WA, USA
- Boris Gourévitch: Unité de Génétique et Physiologie de l'Audition, UMRS 1120 INSERM, Institut Pasteur, Université Pierre et Marie Curie, F-75015 Paris, France; CNRS, France
- Christine V Portfors: School of Biological Sciences and Integrative Physiology and Neuroscience, Washington State University, Vancouver, WA, USA
22
Kobrina A, Toal KL, Dent ML. Intensity difference limens in adult CBA/CaJ mice (Mus musculus). Behav Processes 2018; 148:46-48. [PMID: 29341905] [PMCID: PMC5807135] [DOI: 10.1016/j.beproc.2018.01.009] [Citation(s) in RCA: 4]
Abstract
Mice have emerged as important models of auditory perception and acoustic communication. To study and model complex sound perception and communication, basic hearing abilities have to be established, yet intensity difference limens have not been measured in CBA/CaJ mice. Nine mice were trained using operant conditioning procedures with positive reinforcement to discriminate sound intensity across frequencies. Intensity difference limens (IDLs) were measured for 12, 16, 24, and 42 kHz tones at 10 and 30 dB sensation levels. Mice are capable of discriminating intensities across frequencies and sensation levels, but have higher IDLs than other mammals.
Affiliation(s)
- Anastasiya Kobrina: B76 Park Hall, Department of Psychology, University at Buffalo SUNY, Buffalo, NY 14260, United States
- Katrina L Toal: B76 Park Hall, Department of Psychology, University at Buffalo SUNY, Buffalo, NY 14260, United States
- Micheal L Dent: B76 Park Hall, Department of Psychology, University at Buffalo SUNY, Buffalo, NY 14260, United States
23

24

25
Beiran M, Kruscha A, Benda J, Lindner B. Coding of time-dependent stimuli in homogeneous and heterogeneous neural populations. J Comput Neurosci 2017; 44:189-202. [PMID: 29222729] [DOI: 10.1007/s10827-017-0674-4] [Citation(s) in RCA: 11]
Abstract
We compare the information transmission of a time-dependent signal by two types of uncoupled neuron populations that differ in their sources of variability: (i) a homogeneous population whose units receive independent noise and (ii) a deterministic heterogeneous population in which each unit exhibits a different baseline firing rate ('disorder'). Our criterion for making both sources of variability quantitatively comparable is that the interspike-interval distributions are identical for the two systems. Numerical simulations using leaky integrate-and-fire neurons reveal that a nonzero amount of noise or disorder maximizes the encoding efficiency of the homogeneous and heterogeneous systems, respectively, as a particular case of suprathreshold stochastic resonance. Our findings thus illustrate that heterogeneity can be as beneficial for neuronal populations as dynamic noise. The optimal noise/disorder depends on the system size and on properties of the stimulus such as its intensity or cutoff frequency. We find that weak stimuli are better encoded by a noiseless heterogeneous population, whereas for strong stimuli a homogeneous population outperforms an equivalent heterogeneous system up to a moderate noise level. Furthermore, we derive analytical expressions for the coherence function in the cases of very strong noise and of vanishing intrinsic noise or heterogeneity, which predict the existence of an optimal noise intensity. Our results show that, depending on the type of signal, noise as well as heterogeneity can enhance the encoding performance of neuronal populations.
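The two population types compared here are easy to mock up: a homogeneous population with independent dynamic noise versus a noiseless population with dispersed ('disordered') bias currents. A minimal leaky integrate-and-fire sketch under those assumptions (forward-Euler integration; all parameter values and names are illustrative, not the paper's):

```python
import numpy as np

def lif_population(signal, bias, dt=1e-4, tau=0.01, v_th=1.0,
                   noise_sd=0.0, seed=0):
    """Uncoupled LIF neurons driven by a common time-dependent signal.

    Homogeneous case:   identical `bias` entries, noise_sd > 0.
    Heterogeneous case: dispersed `bias` entries, noise_sd = 0.
    Returns the summed population spike count per time step.
    """
    rng = np.random.default_rng(seed)
    n = len(bias)
    v = np.zeros(n)
    counts = np.empty(len(signal), dtype=int)
    for t, s in enumerate(signal):
        # Independent white-noise current (Euler-Maruyama scaling)
        noise = noise_sd * np.sqrt(dt) * rng.standard_normal(n)
        v += dt / tau * (s + bias - v) + noise
        spiked = v >= v_th
        v[spiked] = 0.0          # reset to rest after a spike
        counts[t] = spiked.sum()
    return counts

# Heterogeneous ('disordered') population: spread of baseline drives, no noise
het = lif_population(np.full(2000, 0.6), bias=np.linspace(0.2, 1.0, 32))
# Homogeneous population: identical drive, independent noise
hom = lif_population(np.full(2000, 0.6), bias=np.full(32, 0.6), noise_sd=0.5)
```

Comparing how well `counts` tracks a fluctuating `signal` (e.g., via a coherence estimate) as a function of `noise_sd` or of the bias spread reproduces the qualitative homogeneous-versus-heterogeneous comparison the abstract describes.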
Affiliation(s)
- Manuel Beiran: Bernstein Center for Computational Neuroscience Berlin, Berlin, Germany; Group for Neural Theory, Laboratoire de Neurosciences Cognitives, Département Études Cognitives, École Normale Supérieure, INSERM, PSL Research University, Paris, France
- Alexandra Kruscha: Bernstein Center for Computational Neuroscience Berlin, Berlin, Germany; Physics Department, Humboldt-Universität zu Berlin, Berlin, Germany
- Jan Benda: Institute for Neurobiology, Eberhard Karls Universität, Tübingen, Germany
- Benjamin Lindner: Bernstein Center for Computational Neuroscience Berlin, Berlin, Germany; Physics Department, Humboldt-Universität zu Berlin, Berlin, Germany
26
MUPET-Mouse Ultrasonic Profile ExTraction: A Signal Processing Tool for Rapid and Unsupervised Analysis of Ultrasonic Vocalizations. Neuron 2017; 94:465-485.e5. [PMID: 28472651] [DOI: 10.1016/j.neuron.2017.04.005] [Citation(s) in RCA: 88]
Abstract
Vocalizations play a significant role in social communication across species. Analyses in rodents have used a limited number of spectro-temporal measures to compare ultrasonic vocalizations (USVs), which limits the ability to address repertoire complexity in the context of behavioral states. Using an automated and unsupervised signal processing approach, we report the development of MUPET (Mouse Ultrasonic Profile ExTraction) software, an open-access MATLAB tool that provides data-driven, high-throughput analyses of USVs. MUPET measures, learns, and compares syllable types and provides an automated time stamp of syllable events. Using USV data from a large mouse genetic reference panel and open-source datasets produced in different social contexts, MUPET analyzes the fine details of syllable production and repertoire use. MUPET thus serves as a new tool for USV repertoire analyses, with the capability to be adapted for use with other species.
27
Behavioral and Single-Neuron Sensitivity to Millisecond Variations in Temporally Patterned Communication Signals. J Neurosci 2017; 36:8985-9000. [PMID: 27559179] [DOI: 10.1523/jneurosci.0648-16.2016] [Citation(s) in RCA: 7]
Abstract
In many sensory pathways, central neurons serve as temporal filters for timing patterns in communication signals. However, how a population of neurons with diverse temporal filtering properties codes for natural variation in communication signals is unknown. Here we addressed this question in the weakly electric fish Brienomyrus brachyistius, which varies the time intervals between successive electric organ discharges to communicate. These fish produce an individually stereotyped signal called a scallop, which consists of a distinctive temporal pattern of ∼8-12 electric pulses. We manipulated the temporal structure of natural scallops during behavioral playback and in vivo electrophysiology experiments to probe the temporal sensitivity of scallop encoding and recognition. We found that presenting time-reversed, randomized, or jittered scallops increased behavioral response thresholds, demonstrating that fish's electric signaling behavior was sensitive to the precise temporal structure of scallops. Next, using in vivo intracellular recordings and discriminant function analysis, we found that the responses of interval-selective midbrain neurons were also sensitive to the precise temporal structure of scallops. Subthreshold changes in membrane potential recorded from single neurons discriminated natural scallops from time-reversed, randomized, and jittered sequences. Pooling the responses of multiple neurons improved the discriminability of natural sequences from temporally manipulated sequences. Finally, we found that single-neuron responses were sensitive to interindividual variation in scallop sequences, raising the question of whether fish may analyze scallop structure to gain information about the sender. Collectively, these results demonstrate that a population of interval-selective neurons can encode behaviorally relevant temporal patterns with millisecond precision.
SIGNIFICANCE STATEMENT The timing patterns of action potentials, or spikes, play important roles in representing information in the nervous system. However, how these temporal patterns are recognized by downstream neurons is not well understood. Here we use the electrosensory system of mormyrid weakly electric fish to investigate how a population of neurons with diverse temporal filtering properties encodes behaviorally relevant input timing patterns, and how this relates to behavioral sensitivity. We show that fish are behaviorally sensitive to millisecond variations in natural, temporally patterned communication signals, and that the responses of individual midbrain neurons are also sensitive to variation in these patterns. In fact, the output of single neurons contains enough information to discriminate stereotyped communication signals produced by different individuals.
28
Eliades SJ, Wang X. Contributions of sensory tuning to auditory-vocal interactions in marmoset auditory cortex. Hear Res 2017; 348:98-111. [PMID: 28284736] [DOI: 10.1016/j.heares.2017.03.001] [Citation(s) in RCA: 23]
Abstract
During speech, humans continuously listen to their own vocal output to ensure accurate communication. Such self-monitoring is thought to require the integration of information about the feedback of vocal acoustics with internal motor control signals. The neural mechanism of this auditory-vocal interaction remains largely unknown at the cellular level. Previous studies in naturally vocalizing marmosets have demonstrated diverse neural activities in auditory cortex during vocalization, dominated by a vocalization-induced suppression of neural firing. How underlying auditory tuning properties of these neurons might contribute to this sensory-motor processing is unknown. In the present study, we quantitatively compared marmoset auditory cortex neural activities during vocal production with those during passive listening. We found that neurons excited during vocalization were readily driven by passive playback of vocalizations and other acoustic stimuli. In contrast, neurons suppressed during vocalization exhibited more diverse playback responses, including responses that were not predictable by auditory tuning properties. These results suggest that vocalization-related excitation in auditory cortex is largely a sensory-driven response. In contrast, vocalization-induced suppression is not well predicted by a neuron's auditory responses, supporting the prevailing theory that internal motor-related signals contribute to the auditory-vocal interaction observed in auditory cortex.
Affiliation(s)
- Steven J Eliades
- Department of Otorhinolaryngology: Head and Neck Surgery, University of Pennsylvania Perelman School of Medicine, Philadelphia, PA, USA.
- Xiaoqin Wang
- Laboratory of Auditory Neurophysiology, Department of Biomedical Engineering, Johns Hopkins University School of Medicine, Baltimore, MD, USA.
29
Akimov AG, Egorova MA, Ehret G. Spectral summation and facilitation in on- and off-responses for optimized representation of communication calls in mouse inferior colliculus. Eur J Neurosci 2017; 45:440-459. [PMID: 27891665] [DOI: 10.1111/ejn.13488]
Abstract
Selectivity for the processing of species-specific vocalizations and communication sounds has often been associated with the auditory cortex. The midbrain inferior colliculus, however, is the first center in the auditory pathways of mammals to integrate acoustic information processed in separate nuclei and channels in the brainstem and, therefore, could contribute significantly to enhancing the perception of a species' communication sounds. Here, we used natural wriggling calls of mouse pups, which communicate a need for maternal care to adult females, and a further 15 synthesized sounds to test the hypothesis that neurons in the central nucleus of the inferior colliculus of adult females optimize their response rates for reproduction of the three main harmonics (formants) of wriggling calls. The results confirmed the hypothesis, showing that average response rates, as recorded extracellularly from single units, were highest and spectral facilitation most effective for both onset and offset responses to the call and to call models with three frequencies resolved according to critical bands in perception. In addition, the general on- and/or off-response enhancement in almost half of the 122 investigated neurons favors not only the perception of single calls but also of vocalization rhythm. In summary, our study provides strong evidence that critical-band-resolved frequency components within a communication sound increase the probability of its perception by boosting the signal-to-noise ratio of neural response rates within the inferior colliculus by at least 20% (our criterion for facilitation). These mechanisms, including enhancement of rhythm coding, are generally favorable to the processing of other animal and human vocalizations, including the formants of speech sounds.
Affiliation(s)
- Alexander G Akimov
- Sechenov Institute of Evolutionary Physiology and Biochemistry, Russian Academy of Sciences, St. Petersburg, Russia
- Marina A Egorova
- Sechenov Institute of Evolutionary Physiology and Biochemistry, Russian Academy of Sciences, St. Petersburg, Russia
- Günter Ehret
- Institute of Neurobiology, University of Ulm, D-89069 Ulm, Germany
30
Felix RA, Elde CJ, Nevue AA, Portfors CV. Serotonin modulates response properties of neurons in the dorsal cochlear nucleus of the mouse. Hear Res 2016; 344:13-23. [PMID: 27838373] [DOI: 10.1016/j.heares.2016.10.017]
Abstract
The neurochemical serotonin (5-hydroxytryptamine, 5-HT) is involved in a variety of behavioral functions including arousal, reward, and attention, and has a role in several complex disorders of the brain. In the auditory system, 5-HT fibers innervate a number of subcortical nuclei, yet the modulatory role of 5-HT in nearly all of these areas remains poorly understood. In this study, we examined spiking activity of neurons in the dorsal cochlear nucleus (DCN) following iontophoretic application of 5-HT. The DCN is an early site in the auditory pathway that receives dense 5-HT fiber input from the raphe nuclei and has been implicated in the generation of auditory disorders marked by neuronal hyperexcitability. Recordings from the DCN in awake mice demonstrated that iontophoretic application of 5-HT had heterogeneous effects on spiking rate, spike timing, and evoked spiking threshold. We found that 56% of neurons exhibited increases in spiking rate during 5-HT delivery, while 22% had decreases in rate and the remaining neurons had no change. These changes were similar for spontaneous and evoked spiking and were typically accompanied by changes in spike timing. Spiking increases were associated with lower first spike latencies and jitter, while decreases in spiking generally had opposing effects on spike timing. Cases in which 5-HT application resulted in increased spiking also exhibited lower thresholds compared to the control condition, while cases of decreased spiking had no threshold change. We also found that the 5-HT2 receptor subtype likely has a role in mediating increased excitability. Our results demonstrate that 5-HT can modulate activity in the DCN of awake animals and that it primarily acts to increase neuronal excitability, in contrast to other auditory regions where it largely has a suppressive role. Modulation of DCN function by 5-HT has implications for auditory processing in both normal hearing and disordered states.
Affiliation(s)
- Richard A Felix
- School of Biological Sciences and Integrative Physiology and Neuroscience, Washington State University, Vancouver, WA, USA.
- Cameron J Elde
- School of Biological Sciences and Integrative Physiology and Neuroscience, Washington State University, Vancouver, WA, USA.
- Alexander A Nevue
- School of Biological Sciences and Integrative Physiology and Neuroscience, Washington State University, Vancouver, WA, USA.
- Christine V Portfors
- School of Biological Sciences and Integrative Physiology and Neuroscience, Washington State University, Vancouver, WA, USA.
31
Kobrina A, Dent ML. The effects of aging and sex on detection of ultrasonic vocalizations by adult CBA/CaJ mice (Mus musculus). Hear Res 2016; 341:119-129. [PMID: 27579993] [DOI: 10.1016/j.heares.2016.08.014]
Abstract
Mice are frequently used as animal models for human hearing research, yet their auditory capabilities have not been fully explored. Previous studies have established auditory threshold sensitivities for pure tone stimuli in CBA/CaJ mice using ABR and behavioral methodologies. Little is known about how they perceive their own ultrasonic vocalizations (USVs), and nothing is known about how aging influences this perception. The aim of the present study was to establish auditory threshold sensitivity for several USV types, as well as to track these thresholds across the mouse's lifespan. In order to determine how well mice detect these complex communication stimuli, several CBA/CaJ mice were trained and tested at various ages on a detection task using operant conditioning procedures. Results showed that mice were able to detect USVs into old age. Not surprisingly, thresholds differed for the different USV types. Male mice suffered greater hearing loss than females for all calls but not for 42 kHz tones. In conclusion, the results highlight the importance of studying complex signals across the lifespan.
Affiliation(s)
- Anastasiya Kobrina
- Department of Psychology, University at Buffalo-SUNY, Buffalo, NY 14260, USA.
- Micheal L Dent
- Department of Psychology, University at Buffalo-SUNY, Buffalo, NY 14260, USA.
32
Lyzwa D, Herrmann JM, Wörgötter F. Natural Vocalizations in the Mammalian Inferior Colliculus are Broadly Encoded by a Small Number of Independent Multi-Units. Front Neural Circuits 2016; 9:91. [PMID: 26869890] [PMCID: PMC4740783] [DOI: 10.3389/fncir.2015.00091]
Abstract
How complex natural sounds are represented by the main converging center of the auditory midbrain, the central inferior colliculus, is an open question. We applied neural discrimination to determine the variation of detailed encoding of individual vocalizations across the best frequency gradient of the central inferior colliculus. The analysis was based on collective responses from several neurons. These multi-unit spike trains were recorded from guinea pigs exposed to a spectrotemporally rich set of eleven species-specific vocalizations. Spike trains of disparate units from the same recording were combined in order to investigate whether groups of multi-unit clusters represent the whole set of vocalizations more reliably than only one unit, and whether temporal response correlations between them facilitate an unambiguous neural representation of the vocalizations. We found a spatial distribution of the capability to accurately encode groups of vocalizations across the best frequency gradient. Different vocalizations are optimally discriminated at different locations of the best frequency gradient. Furthermore, groups of a few multi-unit clusters yield improved discrimination over only one multi-unit cluster between all tested vocalizations. However, temporal response correlations between units do not yield better discrimination. Our study is based on a large set of units of simultaneously recorded responses from several guinea pigs and electrode insertion positions. Our findings suggest a broadly distributed code for behaviorally relevant vocalizations in the mammalian inferior colliculus. Responses from a few non-interacting units are sufficient to faithfully represent the whole set of studied vocalizations with diverse spectrotemporal properties.
Affiliation(s)
- Dominika Lyzwa
- Max Planck Institute for Dynamics and Self-Organization, Göttingen, Germany
- Institute for Nonlinear Dynamics, Physics Department, Georg-August-University, Göttingen, Germany
- Bernstein Focus Neurotechnology, Göttingen, Germany
- J. Michael Herrmann
- Bernstein Focus Neurotechnology, Göttingen, Germany
- Institute of Perception, Action and Behavior, School of Informatics, University of Edinburgh, Edinburgh, UK
- Florentin Wörgötter
- Bernstein Focus Neurotechnology, Göttingen, Germany
- Institute for Physics - Biophysics, Georg-August-University, Göttingen, Germany
33
Nevue AA, Elde CJ, Perkel DJ, Portfors CV. Dopaminergic Input to the Inferior Colliculus in Mice. Front Neuroanat 2016; 9:168. [PMID: 26834578] [PMCID: PMC4720752] [DOI: 10.3389/fnana.2015.00168]
Abstract
The response of sensory neurons to stimuli can be modulated by a variety of factors including attention, emotion, behavioral context, and disorders involving neuromodulatory systems. For example, patients with Parkinson’s disease (PD) have disordered speech processing, suggesting that dopamine alters normal representation of these salient sounds. Understanding the mechanisms by which dopamine modulates auditory processing is thus an important goal. The principal auditory midbrain nucleus, the inferior colliculus (IC), is a likely location for dopaminergic modulation of auditory processing because it contains dopamine receptors and nerve terminals immunoreactive for tyrosine hydroxylase (TH), the rate-limiting enzyme in dopamine synthesis. However, the sources of dopaminergic input to the IC are unknown. In this study, we iontophoretically injected a retrograde tracer into the IC of mice and then stained the tissue for TH. We also immunostained for dopamine beta-hydroxylase (DBH), an enzyme critical for the conversion of dopamine to norepinephrine, to differentiate between dopaminergic and noradrenergic inputs. Retrogradely labeled neurons that were positive for TH were seen bilaterally, with strong ipsilateral dominance, in the subparafascicular thalamic nucleus (SPF). All retrogradely labeled neurons that we observed in other brain regions were TH-negative. Projections from the SPF were confirmed using an anterograde tracer, revealing TH-positive and DBH-negative anterogradely labeled fibers and terminals in the IC. While the functional role of this dopaminergic input to the IC is not yet known, it provides a potential mechanism for context dependent modulation of auditory processing.
Affiliation(s)
- Alexander A Nevue
- School of Biological Sciences, Washington State University Vancouver, Vancouver, WA, USA
- Cameron J Elde
- School of Biological Sciences, Washington State University Vancouver, Vancouver, WA, USA
- David J Perkel
- Department of Biology, University of Washington, Seattle, WA, USA; Department of Otolaryngology-Head and Neck Surgery, University of Washington, Seattle, WA, USA; The Virginia Merrill Bloedel Hearing Research Center, University of Washington, Seattle, WA, USA
- Christine V Portfors
- School of Biological Sciences, Washington State University Vancouver, Vancouver, WA, USA
34
Roberts PD, Portfors CV. Responses to Social Vocalizations in the Dorsal Cochlear Nucleus of Mice. Front Syst Neurosci 2015; 9:172. [PMID: 26733824] [PMCID: PMC4680083] [DOI: 10.3389/fnsys.2015.00172]
Abstract
Identifying sounds is critical for an animal to make appropriate behavioral responses to environmental stimuli, including vocalizations from conspecifics. Identification of vocalizations may be supported by neuronal selectivity in the auditory pathway. The first place in the ascending auditory pathway where neuronal selectivity to vocalizations has been found is in the inferior colliculus (IC), but very few brainstem nuclei have been evaluated. Here, we tested whether selectivity to vocalizations is present in the dorsal cochlear nucleus (DCN). We recorded extracellular neural responses in the DCN of mice and found that fusiform cells responded in a heterogeneous and selective manner to mouse ultrasonic vocalizations. Most fusiform cells responded to vocalizations that contained spectral energy at much higher frequencies than the characteristic frequencies of the cells. To understand this mismatch of stimulus properties and frequency tuning of the cells, we developed a dynamic, nonlinear model of the cochlea that simulates cochlear distortion products on the basilar membrane. We preprocessed the vocalization stimuli through this model and compared responses to these distorted vocalizations with responses to the original vocalizations. We found that fusiform cells in the DCN respond in a heterogeneous manner to vocalizations, and that these neurons can use distortion products as a mechanism for encoding ultrasonic vocalizations. In addition, the selective neuronal responses were dependent on the presence of inhibitory sidebands that modulated the response depending on the temporal structure of the distortion product. These findings suggest that important processing of complex sounds occurs at a very early stage of central auditory processing and is not strictly a function of the cortex.
Affiliation(s)
- Patrick D Roberts
- School of Biological Sciences and Integrative Physiology and Neuroscience, Washington State University, Vancouver, WA, USA
- Christine V Portfors
- School of Biological Sciences and Integrative Physiology and Neuroscience, Washington State University, Vancouver, WA, USA
35
Malinina ES, Egorova MA, Akimov AG. Neurophysiological approaches to studying the functional role of auditory critical bands. J Evol Biochem Physiol 2015. [DOI: 10.1134/s0022093015050063]
36
Carruthers IM, Laplagne DA, Jaegle A, Briguglio JJ, Mwilambwe-Tshilobo L, Natan RG, Geffen MN. Emergence of invariant representation of vocalizations in the auditory cortex. J Neurophysiol 2015; 114:2726-40. [PMID: 26311178] [DOI: 10.1152/jn.00095.2015]
Abstract
An essential task of the auditory system is to discriminate between different communication signals, such as vocalizations. In everyday acoustic environments, the auditory system needs to be capable of performing the discrimination under different acoustic distortions of vocalizations. To achieve this, the auditory system is thought to build a representation of vocalizations that is invariant to their basic acoustic transformations. The mechanism by which neuronal populations create such an invariant representation within the auditory cortex is only beginning to be understood. We recorded the responses of populations of neurons in the primary and nonprimary auditory cortex of rats to original and acoustically distorted vocalizations. We found that populations of neurons in the nonprimary auditory cortex exhibited greater invariance in encoding vocalizations over acoustic transformations than neuronal populations in the primary auditory cortex. These findings are consistent with the hypothesis that invariant representations are created gradually through hierarchical transformation within the auditory pathway.
Affiliation(s)
- Isaac M Carruthers
- Department of Otorhinolaryngology and Head and Neck Surgery, University of Pennsylvania, Philadelphia, Pennsylvania; Graduate Group in Physics, University of Pennsylvania, Philadelphia, Pennsylvania
- Diego A Laplagne
- Brain Institute, Federal University of Rio Grande do Norte, Natal, Brazil
- Andrew Jaegle
- Department of Otorhinolaryngology and Head and Neck Surgery, University of Pennsylvania, Philadelphia, Pennsylvania; Graduate Group in Neuroscience, University of Pennsylvania, Philadelphia, Pennsylvania
- John J Briguglio
- Department of Otorhinolaryngology and Head and Neck Surgery, University of Pennsylvania, Philadelphia, Pennsylvania; Graduate Group in Physics, University of Pennsylvania, Philadelphia, Pennsylvania
- Laetitia Mwilambwe-Tshilobo
- Department of Otorhinolaryngology and Head and Neck Surgery, University of Pennsylvania, Philadelphia, Pennsylvania
- Ryan G Natan
- Department of Otorhinolaryngology and Head and Neck Surgery, University of Pennsylvania, Philadelphia, Pennsylvania; Brain Institute, Federal University of Rio Grande do Norte, Natal, Brazil
- Maria N Geffen
- Department of Otorhinolaryngology and Head and Neck Surgery, University of Pennsylvania, Philadelphia, Pennsylvania; Graduate Group in Physics, University of Pennsylvania, Philadelphia, Pennsylvania; Graduate Group in Neuroscience, University of Pennsylvania, Philadelphia, Pennsylvania
37
Gao PP, Zhang JW, Fan SJ, Sanes DH, Wu EX. Auditory midbrain processing is differentially modulated by auditory and visual cortices: An auditory fMRI study. Neuroimage 2015; 123:22-32. [PMID: 26306991] [DOI: 10.1016/j.neuroimage.2015.08.040]
Abstract
The cortex contains extensive descending projections, yet the impact of cortical input on brainstem processing remains poorly understood. In the central auditory system, the auditory cortex contains direct and indirect pathways (via brainstem cholinergic cells) to nuclei of the auditory midbrain, called the inferior colliculus (IC). While these projections modulate auditory processing throughout the IC, single-neuron recordings have sampled only a small fraction of cells during stimulation of the corticofugal pathway. Furthermore, assessments of cortical feedback have not been extended to sensory modalities other than audition. To address these issues, we devised blood-oxygen-level-dependent (BOLD) functional magnetic resonance imaging (fMRI) paradigms to measure the sound-evoked responses throughout the rat IC and investigated the effects of bilateral ablation of either auditory or visual cortices. Auditory cortex ablation increased the gain of IC responses to noise stimuli (primarily in the central nucleus of the IC) and decreased response selectivity to forward species-specific vocalizations (versus temporally reversed ones, most prominently in the external cortex of the IC). In contrast, visual cortex ablation decreased the gain and induced a much smaller effect on response selectivity. The results suggest that auditory cortical projections normally exert a large-scale and net suppressive influence on specific IC subnuclei, while visual cortical projections provide a facilitatory influence. Meanwhile, auditory cortical projections enhance the midbrain response selectivity to species-specific vocalizations. We also probed the role of the auditory system's indirect cholinergic projections in this descending modulation by pharmacologically blocking muscarinic cholinergic receptors. This manipulation did not affect the gain of IC responses but significantly reduced the response selectivity to vocalizations. The results imply that auditory cortical gain modulation is mediated primarily through direct projections, and they point to future investigations of the differential roles of the direct and indirect projections in corticofugal modulation. In summary, our imaging findings demonstrate large-scale descending influences, from both the auditory and visual cortices, on sound processing in different IC subdivisions, and they can guide future studies of coordinated activity across multiple regions of the auditory network and its dysfunctions.
Affiliation(s)
- Patrick P Gao
- Laboratory of Biomedical Imaging and Signal Processing, The University of Hong Kong, Pokfulam, Hong Kong SAR, China; Department of Electrical and Electronic Engineering, The University of Hong Kong, Pokfulam, Hong Kong SAR, China
- Jevin W Zhang
- Laboratory of Biomedical Imaging and Signal Processing, The University of Hong Kong, Pokfulam, Hong Kong SAR, China; Department of Electrical and Electronic Engineering, The University of Hong Kong, Pokfulam, Hong Kong SAR, China
- Shu-Juan Fan
- Laboratory of Biomedical Imaging and Signal Processing, The University of Hong Kong, Pokfulam, Hong Kong SAR, China; Department of Electrical and Electronic Engineering, The University of Hong Kong, Pokfulam, Hong Kong SAR, China
- Dan H Sanes
- Center for Neural Science, New York University, New York, NY 10003, United States
- Ed X Wu
- Laboratory of Biomedical Imaging and Signal Processing, The University of Hong Kong, Pokfulam, Hong Kong SAR, China; Department of Electrical and Electronic Engineering, The University of Hong Kong, Pokfulam, Hong Kong SAR, China; Department of Anatomy, The University of Hong Kong, Pokfulam, Hong Kong SAR, China; Department of Medicine, The University of Hong Kong, Pokfulam, Hong Kong SAR, China.
38
Abstract
Vertebrate audition is a dynamic process, capable of exhibiting both short- and long-term adaptations to varying listening conditions. Precise spike timing has long been known to play an important role in auditory encoding, but its role in sensory plasticity remains largely unexplored. We addressed this issue in Gambel's white-crowned sparrow (Zonotrichia leucophrys gambelii), a songbird that shows pronounced seasonal fluctuations in circulating levels of sex-steroid hormones, which are known to be potent neuromodulators of auditory function. We recorded extracellular single-unit activity in the auditory forebrain of males and females under different breeding conditions and used a computational approach to explore two potential strategies for the neural discrimination of sound level: one based on spike counts and one based on spike timing reliability. We report that breeding condition has robust sex-specific effects on spike timing. Specifically, in females, breeding condition increases the proportion of cells that rely solely on spike timing information and increases the temporal resolution required for optimal intensity encoding. Furthermore, in a functionally distinct subset of cells that are particularly well suited for amplitude encoding, female breeding condition enhances spike timing-based discrimination accuracy. No effects of breeding condition were observed in males. Our results suggest that high-resolution temporal discharge patterns may provide a plastic neural substrate for sensory coding.
39
High-field functional magnetic resonance imaging of vocalization processing in marmosets. Sci Rep 2015; 5:10950. [PMID: 26091254] [PMCID: PMC4473644] [DOI: 10.1038/srep10950]
Abstract
Vocalizations are behaviorally critical sounds, and this behavioral importance is reflected in the ascending auditory system, where conspecific vocalizations are increasingly over-represented at higher processing stages. Recent evidence suggests that, in macaques, this increasing selectivity for vocalizations might culminate in a cortical region that is densely populated by vocalization-preferring neurons. Such a region might be a critical node in the representation of vocal communication sounds, underlying the recognition of vocalization type, caller and social context. These results raise the questions of whether cortical specializations for vocalization processing exist in other species, their cortical location, and their relationship to the auditory processing hierarchy. To explore cortical specializations for vocalizations in another species, we performed high-field fMRI of the auditory cortex of a vocal New World primate, the common marmoset (Callithrix jacchus). Using a sparse imaging paradigm, we discovered a caudal-rostral gradient for the processing of conspecific vocalizations in marmoset auditory cortex, with regions of the anterior temporal lobe close to the temporal pole exhibiting the highest preference for vocalizations. These results demonstrate similar cortical specializations for vocalization processing in macaques and marmosets, suggesting that cortical specializations for vocal processing might have evolved before the lineages of these species diverged.
40
Gao PP, Zhang JW, Chan RW, Leong ATL, Wu EX. BOLD fMRI study of ultrahigh frequency encoding in the inferior colliculus. Neuroimage 2015; 114:427-37. [PMID: 25869860] [DOI: 10.1016/j.neuroimage.2015.04.007]
Abstract
Many vertebrates communicate with ultrahigh frequency (UHF) vocalizations to limit auditory detection by predators. The mechanisms underlying the neural encoding of such UHF sounds may provide important insights for understanding the neural processing of other complex sounds (e.g., human speech). In the auditory system, sound frequency is normally encoded topographically as tonotopy, which, however, contains very limited representation of UHFs in many species. Instead, electrophysiological studies suggest that two neural mechanisms, both exploiting interactions between frequencies, may contribute to UHF processing. Neurons can exhibit excitatory or inhibitory responses to a tone when another UHF tone is presented simultaneously (combination sensitivity). They can also respond to such stimulation if they are tuned to the frequency of the cochlear-generated distortion products of the two tones, e.g., their difference frequency (cochlear distortion). Both mechanisms are present in an early station of the auditory pathway, the midbrain inferior colliculus (IC). Currently, it is unclear how prevalent the two mechanisms are and how they are functionally integrated in encoding UHFs. This study investigated these issues with large-view BOLD fMRI in the rat auditory system, particularly the IC. UHF vocalizations (above 40 kHz), but not pure tones at similar frequencies (45, 55, 65, and 75 kHz), evoked robust BOLD responses in multiple auditory nuclei, including the IC, reinforcing the sensitivity of the auditory system to UHFs despite their limited representation in tonotopy. Furthermore, BOLD responses were detected in the IC when a pair of UHF pure tones was presented simultaneously (45 & 55 kHz, 55 & 65 kHz, 45 & 65 kHz, 45 & 75 kHz). For all four pairs, a cluster of voxels on the ventromedial side always showed the strongest responses, displaying combination sensitivity. Meanwhile, voxels on the dorsolateral side that showed the strongest secondary responses to each pair of UHF pure tones also showed the strongest responses to a pure tone at their difference frequency, suggesting that they are sensitive to cochlear distortion. These BOLD fMRI results indicate that combination sensitivity and cochlear distortion are employed by large but spatially distinct neuron populations in the IC to represent UHFs. Our imaging findings provide insights for understanding sound-feature encoding in the early stages of the auditory pathway.
Affiliation(s)
- Patrick P Gao
- Laboratory of Biomedical Imaging and Signal Processing, The University of Hong Kong, Pokfulam, Hong Kong SAR, China; Department of Electrical and Electronic Engineering, The University of Hong Kong, Pokfulam, Hong Kong SAR, China
- Jevin W Zhang
- Laboratory of Biomedical Imaging and Signal Processing, The University of Hong Kong, Pokfulam, Hong Kong SAR, China; Department of Electrical and Electronic Engineering, The University of Hong Kong, Pokfulam, Hong Kong SAR, China
- Russell W Chan
- Laboratory of Biomedical Imaging and Signal Processing, The University of Hong Kong, Pokfulam, Hong Kong SAR, China; Department of Electrical and Electronic Engineering, The University of Hong Kong, Pokfulam, Hong Kong SAR, China
- Alex T L Leong
- Laboratory of Biomedical Imaging and Signal Processing, The University of Hong Kong, Pokfulam, Hong Kong SAR, China; Department of Electrical and Electronic Engineering, The University of Hong Kong, Pokfulam, Hong Kong SAR, China
- Ed X Wu
- Laboratory of Biomedical Imaging and Signal Processing, The University of Hong Kong, Pokfulam, Hong Kong SAR, China; Department of Electrical and Electronic Engineering, The University of Hong Kong, Pokfulam, Hong Kong SAR, China; Department of Anatomy, The University of Hong Kong, Pokfulam, Hong Kong SAR, China; Department of Medicine, The University of Hong Kong, Pokfulam, Hong Kong SAR, China.
41
Yang M, Mahrt EJ, Lewis F, Foley G, Portmann T, Dolmetsch RE, Portfors CV, Crawley JN. 16p11.2 Deletion Syndrome Mice Display Sensory and Ultrasonic Vocalization Deficits During Social Interactions. Autism Res 2015; 8:507-21. [PMID: 25663600 DOI: 10.1002/aur.1465] [Citation(s) in RCA: 65] [Impact Index Per Article: 7.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/07/2014] [Accepted: 12/24/2014] [Indexed: 11/08/2022]
Abstract
Recurrent deletions and duplications at chromosomal region 16p11.2 are variably associated with speech delay, autism spectrum disorder, developmental delay, schizophrenia, and cognitive impairments. Social communication deficits are a primary diagnostic symptom of autism. Here we investigated ultrasonic vocalizations (USVs) in young adult male 16p11.2 deletion mice during a novel three-phase male-female social interaction test that detects vocalizations emitted by a male in the presence of an estrous female, how the male changes its calling when the female is suddenly absent, and the extent to which calls resume when the female returns. Strikingly fewer vocalizations were detected in two independent cohorts of 16p11.2 heterozygous deletion males (+/-) during the first exposure to an unfamiliar estrous female, as compared to wildtype littermates (+/+). When the female was removed, +/+ males emitted calls, but at a much lower level, whereas +/- males called minimally. Sensory and motor abnormalities were detected in +/-, including higher nociceptive thresholds, a complete absence of acoustic startle responses, and hearing loss in all +/-, as confirmed by a lack of auditory brainstem responses to frequencies between 8 and 100 kHz. Stereotyped circling and backflipping appeared in a small percentage of individuals, as previously reported. However, these sensory and motor phenotypes could not directly explain the low vocalizations in 16p11.2 deletion mice, since (a) +/- males displayed normal abilities to emit vocalizations when the female was subsequently reintroduced, and (b) +/- vocalized less than +/+ to social odor cues delivered on an inanimate cotton swab. Our findings support the concept that mouse USVs in social settings represent a response to social cues, and that 16p11.2 deletion mice are deficient in their initial USV responses to novel social cues.
Affiliation(s)
- Mu Yang
- Department of Psychiatry and Behavioral Sciences, University of California Davis School of Medicine, Sacramento, CA, 95817
- Elena J Mahrt
- School of Biological Sciences, College of Arts and Sciences, Washington State University Vancouver, Vancouver, WA, 98686
- Freeman Lewis
- Department of Psychiatry and Behavioral Sciences, University of California Davis School of Medicine, Sacramento, CA, 95817
- Gillian Foley
- Department of Psychiatry and Behavioral Sciences, University of California Davis School of Medicine, Sacramento, CA, 95817
- Thomas Portmann
- Department of Neurobiology, Stanford University School of Medicine, Stanford, CA, 94305; Drug Discovery Program, Circuit Therapeutics Inc., Menlo Park, CA, 94025
- Ricardo E Dolmetsch
- Department of Neurobiology, Stanford University School of Medicine, Stanford, CA, 94305; Novartis Institutes for Biomedical Research, Cambridge, MA, 02139
- Christine V Portfors
- School of Biological Sciences, College of Arts and Sciences, Washington State University Vancouver, Vancouver, WA, 98686
- Jacqueline N Crawley
- Department of Psychiatry and Behavioral Sciences, University of California Davis School of Medicine, Sacramento, CA, 95817
42
Holfoth DP, Neilans EG, Dent ML. Discrimination of partial from whole ultrasonic vocalizations using a go/no-go task in mice. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2014; 136:3401. [PMID: 25480084 PMCID: PMC4257972 DOI: 10.1121/1.4900564] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/24/2014] [Revised: 10/03/2014] [Accepted: 10/14/2014] [Indexed: 06/04/2023]
Abstract
Mice are a commonly used model in hearing research, yet little is known about how they perceive conspecific ultrasonic vocalizations (USVs). Humans and birds can distinguish partial versions of a communication signal, and discrimination is superior when the beginning of the signal is present compared to the end of the signal. Since these effects occur in both humans and birds, it was hypothesized that mice would display similar facilitative effects with the initial portions of their USVs. Laboratory mice were tested on a discrimination task using operant conditioning procedures. The mice were required to discriminate incomplete versions of a USV target from a repeating background containing the whole USV. The results showed that the mice had difficulty discriminating incomplete USVs from whole USVs, especially when the beginnings of the USVs were presented. This finding suggests that the mice perceive the initial portions of a USV as more similar to the whole USV than the latter parts of the USV, similar to results from humans and birds.
Affiliation(s)
- David P Holfoth
- Department of Psychology, University at Buffalo, The State University of New York, Buffalo, New York 14260
- Erikson G Neilans
- Department of Psychology, University at Buffalo, The State University of New York, Buffalo, New York 14260
- Micheal L Dent
- Department of Psychology, University at Buffalo, The State University of New York, Buffalo, New York 14260
43
Duque D, Malmierca MS. Stimulus-specific adaptation in the inferior colliculus of the mouse: anesthesia and spontaneous activity effects. Brain Struct Funct 2014; 220:3385-98. [PMID: 25115620 DOI: 10.1007/s00429-014-0862-1] [Citation(s) in RCA: 48] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/03/2014] [Accepted: 07/29/2014] [Indexed: 12/19/2022]
Abstract
Rapid behavioral responses to unexpected events in the acoustic environment are critical for survival. Stimulus-specific adaptation (SSA) is the process whereby some auditory neurons respond better to rare stimuli than to repetitive stimuli. Most experiments on SSA have been performed under anesthesia, and it is unknown if SSA sensitivity is altered by the anesthetic agent. Only a direct comparison can answer this question. Here, we recorded extracellular single units in the inferior colliculus of awake and anesthetized mice under an oddball paradigm that elicits SSA. Our results demonstrate that SSA is similar, but not identical, in the awake and anesthetized preparations. The differences are mostly due to the higher spontaneous activity observed in the awake animals, which also revealed a high incidence of inhibitory receptive fields. We conclude that SSA is not an artifact of anesthesia and that spontaneous activity modulates neuronal SSA differentially, depending on the state of arousal. Our results suggest that SSA may be especially important when nervous system activity is suppressed during sleep-like states. This may be a useful survival mechanism that allows the organism to respond to danger when sleeping.
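The abstract does not state how SSA was quantified, but in the oddball-paradigm literature it is conventionally summarized with an index contrasting deviant and standard responses. A sketch under that assumption (the formula is the field's common convention, not taken from this paper):

```python
def ssa_index(deviant_spikes, standard_spikes):
    """Common SSA index: (d - s) / (d + s), ranging from -1 to 1.

    Values near 1 mean the neuron responds almost exclusively to the
    rare (deviant) tone; 0 means no difference between deviant and
    standard responses.
    """
    total = deviant_spikes + standard_spikes
    if total == 0:
        return 0.0  # neuron did not respond to either tone
    return (deviant_spikes - standard_spikes) / total
```

With spontaneous activity as high as reported in the awake animals, a baseline-subtraction step before computing the index would be a natural extension.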
Affiliation(s)
- Daniel Duque
- Auditory Neurophysiology Unit, Laboratory for the Neurobiology of Hearing, Institute of Neuroscience of Castilla Y León, University of Salamanca, C/Pintor Fernando Gallego, 1, 37007, Salamanca, Spain
- Manuel S Malmierca
- Auditory Neurophysiology Unit, Laboratory for the Neurobiology of Hearing, Institute of Neuroscience of Castilla Y León, University of Salamanca, C/Pintor Fernando Gallego, 1, 37007, Salamanca, Spain.
- Department of Cell Biology and Pathology, Faculty of Medicine, University of Salamanca, Campus Miguel de Unamuno, 37007, Salamanca, Spain.
44
Ahn J, Kreeger LJ, Lubejko ST, Butts DA, MacLeod KM. Heterogeneity of intrinsic biophysical properties among cochlear nucleus neurons improves the population coding of temporal information. J Neurophysiol 2014; 111:2320-31. [PMID: 24623512 DOI: 10.1152/jn.00836.2013] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Reliable representation of the spectrotemporal features of an acoustic stimulus is critical for sound recognition. However, if all neurons respond with identical firing to the same stimulus, redundancy in the activity patterns would reduce the information capacity of the population. We thus investigated spike reliability and temporal fluctuation coding in an ensemble of neurons recorded in vitro from the avian auditory brain stem. Sequential patch-clamp recordings were made from neurons of the cochlear nucleus angularis while injecting identical filtered Gaussian white noise currents, simulating synaptic drive. The spiking activity in neurons receiving these identically fluctuating stimuli was highly correlated, measured pairwise across neurons and as a pseudo-population. Two distinct uncorrelated noise stimuli could be discriminated using the temporal patterning, but not firing rate, of the spike trains in the neural ensemble, with best discrimination using information at time scales of 5-20 ms. Despite high cross-correlation values, the spike patterns observed in individual neurons were idiosyncratic, with notable heterogeneity across neurons. To investigate how temporal information is being encoded, we used optimal linear reconstruction to produce an estimate of the original current stimulus from the spike trains. Ensembles of trains sampled across the neural population could be used to predict >50% of the stimulus variation using optimal linear decoding, compared with ∼20% using the same number of spike trains recorded from single neurons. We conclude that heterogeneity in the intrinsic biophysical properties of cochlear nucleus neurons reduces firing pattern redundancy while enhancing representation of temporal information.
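The "optimal linear reconstruction" step can be sketched as regularized least squares on time-lagged spike trains: stack lagged copies of each neuron's binned spikes into a design matrix and solve the normal equations for a decoding filter. The lag window, ridge penalty, and shapes below are illustrative, not the paper's values:

```python
import numpy as np

def linear_decode(spikes, stimulus, n_lags=20, ridge=1e-3):
    """Reconstruct a stimulus from binned spike trains with an optimal
    linear filter (ridge-regularized least squares).

    spikes   : (n_neurons, n_bins) array of binned spike counts
    stimulus : (n_bins,) signal to reconstruct (e.g., injected current)
    Returns (reconstruction, fraction of stimulus variance explained).
    """
    n_neurons, n_bins = spikes.shape
    # Design matrix: one column per (neuron, lag); np.roll wraps at the
    # edges, which is a harmless approximation for long recordings.
    cols = [np.roll(spikes[i], lag)
            for i in range(n_neurons) for lag in range(n_lags)]
    X = np.stack(cols, axis=1)
    # Normal equations with a small ridge penalty for stability.
    w = np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ stimulus)
    est = X @ w
    var_explained = 1.0 - (stimulus - est).var() / stimulus.var()
    return est, var_explained
```

Comparing `var_explained` for ensembles drawn across neurons versus repeated trains from a single neuron is the kind of contrast the abstract reports (>50% vs. ∼20%).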
Affiliation(s)
- J Ahn
- Department of Biology, University of Maryland, College Park, Maryland
- L J Kreeger
- Department of Biology, University of Maryland, College Park, Maryland
- S T Lubejko
- Department of Biology, University of Maryland, College Park, Maryland
- D A Butts
- Department of Biology, University of Maryland, College Park, Maryland; Neuroscience and Cognitive Science Program, University of Maryland, College Park, Maryland
- K M MacLeod
- Department of Biology, University of Maryland, College Park, Maryland; Neuroscience and Cognitive Science Program, University of Maryland, College Park, Maryland; and Center for the Comparative and Evolutionary Biology of Hearing, University of Maryland, College Park, Maryland
45
Neilans EG, Holfoth DP, Radziwon KE, Portfors CV, Dent ML. Discrimination of ultrasonic vocalizations by CBA/CaJ mice (Mus musculus) is related to spectrotemporal dissimilarity of vocalizations. PLoS One 2014; 9:e85405. [PMID: 24416405 PMCID: PMC3887032 DOI: 10.1371/journal.pone.0085405] [Citation(s) in RCA: 47] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/10/2013] [Accepted: 12/04/2013] [Indexed: 11/20/2022] Open
Abstract
The function of ultrasonic vocalizations (USVs) produced by mice (Mus musculus) is a topic of broad interest to many researchers. These USVs differ widely in spectrotemporal characteristics, suggesting different categories of vocalizations, although this has never been behaviorally demonstrated. Although electrophysiological studies indicate that neurons can discriminate among vocalizations at the level of the auditory midbrain, perceptual acuity for vocalizations has yet to be determined. Here, we trained CBA/CaJ mice using operant conditioning to discriminate between different vocalizations and between a spectrotemporally modified vocalization and its original version. Mice were able to discriminate between vocalization types and between manipulated vocalizations, with performance negatively correlating with spectrotemporal similarity. That is, discrimination performance was higher for dissimilar vocalizations and much lower for similar vocalizations. The behavioral data match previous neurophysiological results in the inferior colliculus (IC), using the same stimuli. These findings suggest that the different vocalizations could carry different meanings for the mice. Furthermore, the finding that behavioral discrimination matched neural discrimination in the IC suggests that the IC plays an important role in the perceptual discrimination of vocalizations.
Affiliation(s)
- Erikson G. Neilans
- Department of Psychology, University at Buffalo, the State University of New York, Buffalo, New York, United States of America
- David P. Holfoth
- Department of Psychology, University at Buffalo, the State University of New York, Buffalo, New York, United States of America
- Kelly E. Radziwon
- Department of Psychology, University at Buffalo, the State University of New York, Buffalo, New York, United States of America
- Christine V. Portfors
- School of Biological Sciences, Washington State University-Vancouver, Vancouver, Washington, United States of America
- Micheal L. Dent
- Department of Psychology, University at Buffalo, the State University of New York, Buffalo, New York, United States of America
46
Portfors CV, Roberts PD. Mismatch of structural and functional tonotopy for natural sounds in the auditory midbrain. Neuroscience 2013; 258:192-203. [PMID: 24252321 DOI: 10.1016/j.neuroscience.2013.11.012] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/14/2013] [Revised: 11/06/2013] [Accepted: 11/06/2013] [Indexed: 11/24/2022]
Abstract
Neurons in the auditory system are spatially organized in their responses to pure tones, and this tonotopy is expected to predict neuronal responses to more complex sounds such as vocalizations. We presented vocalizations with low-, medium- and high-frequency content to determine if selectivity of neurons in the inferior colliculus (IC) of mice respects the tonotopic spatial structure. Tonotopy in the IC predicts that neurons located in dorsal regions should only respond to low-frequency vocalizations and only neurons located in ventral regions should respond to high-frequency vocalizations. We found that responses to vocalizations were independent of location, and many neurons in the dorsal, low-frequency region of IC responded to high-frequency vocalizations. To test whether this was due to dorsal neurons having broad frequency tuning curves, we convolved each neuron's frequency tuning curve with each vocalization, and found that the tuning curves were not good predictors of the actual neural responses to the vocalizations. We then used a nonlinear model of signal transduction in the cochlea that generates distortion products to predict neural responses to the vocalizations. We found that these predictions more closely matched the actual neural responses. Our findings suggest that the cochlea distorts the frequency representation in vocalizations and some neurons use this distorted representation to encode the vocalizations.
Affiliation(s)
- C V Portfors
- School of Biological Sciences, Washington State University, Vancouver, WA 98686, USA.
- P D Roberts
- Oregon Health & Science University, Portland, OR 97239, USA
47
Rode T, Hartmann T, Hubka P, Scheper V, Lenarz M, Lenarz T, Kral A, Lim HH. Neural representation in the auditory midbrain of the envelope of vocalizations based on a peripheral ear model. Front Neural Circuits 2013; 7:166. [PMID: 24155694 PMCID: PMC3800787 DOI: 10.3389/fncir.2013.00166] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/30/2013] [Accepted: 09/24/2013] [Indexed: 11/24/2022] Open
Abstract
The auditory midbrain implant (AMI) consists of a single-shank array (20 sites) for stimulation along the tonotopic axis of the central nucleus of the inferior colliculus (ICC) and has been safely implanted in deaf patients who cannot benefit from a cochlear implant (CI). The AMI improves lip-reading abilities and environmental awareness in the implanted patients. However, the AMI cannot achieve the high levels of speech perception possible with the CI. It appears the AMI can transmit sufficient spectral cues but only limited temporal cues required for speech understanding. Currently, the AMI uses a CI-based strategy, which was originally designed to stimulate each frequency region along the cochlea with amplitude-modulated pulse trains matching the envelope of the bandpass-filtered sound components. However, it is unclear if this type of stimulation with only a single site within each frequency lamina of the ICC can elicit sufficient temporal cues for speech perception. Speech understanding in quiet, at least, is still possible with envelope cues as low as 50 Hz. Therefore, we investigated how ICC neurons follow the bandpass-filtered envelope structure of natural stimuli in ketamine-anesthetized guinea pigs. We identified a subset of ICC neurons that could closely follow the envelope structure (up to ~100 Hz) of a diverse set of species-specific calls, which was revealed by using a peripheral ear model to estimate the true bandpass-filtered envelopes observed by the brain. Although previous studies have suggested a complex neural transformation from the auditory nerve to the ICC, our data suggest that the brain maintains a robust temporal code in a subset of ICC neurons matching the envelope structure of natural stimuli. Clinically, these findings suggest that a CI-based strategy may still be effective for the AMI if the appropriate neurons are entrained to the envelope of the acoustic stimulus and can transmit sufficient temporal cues to higher centers.
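The bandpass-envelope extraction at the heart of this analysis can be approximated very loosely (real peripheral-ear models use gammatone filterbanks and hair-cell nonlinearities) by bandpass filtering, rectifying, and smoothing. All parameters below are illustrative:

```python
import numpy as np

def bandpass_envelope(signal, fs, lo_hz, hi_hz, smooth_ms=2.0):
    """Crude stand-in for one channel of a peripheral ear model.

    Bandpass the signal in the frequency domain, full-wave rectify,
    then smooth with a short moving average to obtain the envelope.
    """
    n = len(signal)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    spec = np.fft.rfft(signal)
    spec[(freqs < lo_hz) | (freqs > hi_hz)] = 0.0  # brick-wall bandpass
    band = np.fft.irfft(spec, n)
    rect = np.abs(band)                            # full-wave rectification
    k = max(1, int(fs * smooth_ms / 1000.0))       # smoothing window, samples
    return np.convolve(rect, np.ones(k) / k, mode="same")
```

Correlating such envelopes with ICC spike trains is the kind of envelope-following comparison the abstract describes, though the paper's actual ear model is far more detailed.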
Affiliation(s)
- Thilo Rode
- Department of Otorhinolaryngology, Hannover Medical University Hannover, Germany
48
Single neuron and population coding of natural sounds in auditory cortex. Curr Opin Neurobiol 2013; 24:103-10. [PMID: 24492086 DOI: 10.1016/j.conb.2013.09.007] [Citation(s) in RCA: 50] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/02/2013] [Revised: 08/29/2013] [Accepted: 09/09/2013] [Indexed: 11/22/2022]
Abstract
The auditory system drives behavior using information extracted from sounds. Early in the auditory hierarchy, circuits are highly specialized for detecting basic sound features. By the level of the auditory cortex, however, the functional organization of the circuits and the underlying coding principles become quite different. Here, we review some recent progress in our understanding of single-neuron and population coding in primary auditory cortex, focusing on natural sounds. We discuss possible mechanisms explaining why single-neuron responses to simple sounds cannot predict responses to natural stimuli. We describe recent work suggesting that structural features like local subnetworks, rather than smoothly mapped tonotopy, are essential components of population coding. Finally, we suggest a synthesis of how single neurons and subnetworks may be involved in coding natural sounds.
49
Akimov AG. Encoding of Pups’ wriggling call models by neuronal population of midbrain inferior colliculus central nucleus in house mouse (Mus musculus). J EVOL BIOCHEM PHYS+ 2013. [DOI: 10.1134/s0022093013030122] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022]
50
Gittelman JX, Perkel DJ, Portfors CV. Dopamine modulates auditory responses in the inferior colliculus in a heterogeneous manner. J Assoc Res Otolaryngol 2013; 14:719-29. [PMID: 23835945 DOI: 10.1007/s10162-013-0405-0] [Citation(s) in RCA: 40] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/07/2013] [Accepted: 06/21/2013] [Indexed: 02/02/2023] Open
Abstract
Perception of complex sounds such as speech is affected by a variety of factors, including attention, expectation of reward, physiological state, and/or disorders, yet the mechanisms underlying this modulation are not well understood. Although dopamine is commonly studied for its role in reward-based learning and in disorders, multiple lines of evidence suggest that dopamine is also involved in modulating auditory processing. In this study, we examined the effects of dopamine application on neuronal response properties in the inferior colliculus (IC) of awake mice. Because the IC contains dopamine receptors and nerve terminals immunoreactive for tyrosine hydroxylase, we predicted that dopamine would modulate auditory responses in the IC. We recorded single-unit responses before, during, and after the iontophoretic application of dopamine using piggyback electrodes. We examined the effects of dopamine on firing rate, timing, and probability of bursting. We found that application of dopamine affected neural responses in a heterogeneous manner. In more than 80 % of the neurons, dopamine either increased (32 %) or decreased (50 %) firing rate, and the effects were similar on spontaneous and sound-evoked activity. Dopamine also either increased or decreased first spike latency and jitter in almost half of the neurons. In 3/28 neurons (11 %), dopamine significantly altered the probability of bursting. The heterogeneous effects of dopamine observed in the IC of awake mice were similar to effects observed in other brain areas. Our findings indicate that dopamine differentially modulates neural activity in the IC and thus may play an important role in auditory processing.
Affiliation(s)
- Joshua X Gittelman
- School of Biological Sciences, Washington State University, 14204 NE Salmon Creek Ave., Vancouver, WA, USA