1. Tehrani M, Shanbhag S, Huyck JJ, Patel R, Kazimierski D, Wenstrup JJ. The Mouse Inferior Colliculus Responds Preferentially to Non-Ultrasonic Vocalizations. eNeuro 2024; 11:ENEURO.0097-24.2024. PMID: 38514192; PMCID: PMC11015948; DOI: 10.1523/eneuro.0097-24.2024.
Abstract
The inferior colliculus (IC), the midbrain auditory integration center, analyzes information about social vocalizations and provides substrates for higher level processing of vocal signals. We used multichannel recordings to characterize and localize responses to social vocalizations and synthetic stimuli within the IC of female and male mice, both urethane anesthetized and unanesthetized. We compared responses to ultrasonic vocalizations (USVs) with other vocalizations in the mouse repertoire and related vocal responses to frequency tuning, IC subdivisions, and sex. Responses to lower frequency, broadband social vocalizations were widespread in IC, well represented throughout the tonotopic axis, across subdivisions, and in both sexes. Responses to USVs were much more limited. Although we observed some differences in tonal and vocal responses by sex and subdivision, representations of vocal responses by sex and subdivision were largely the same. For most units, responses to vocal signals occurred only when frequency response areas overlapped with spectra of the vocal signals. Since tuning to frequencies contained within the highest frequency USVs is limited (<15% of IC units), responses to these vocalizations are correspondingly limited (<5% of sound-responsive units). These results highlight a paradox of USV processing in some rodents: although USVs are the most abundant social vocalization, their representation and the representation of corresponding frequencies are less than those of lower frequency social vocalizations. We interpret this paradox in light of observations suggesting that USVs with lower frequency elements (<50 kHz) are associated with increased emotional intensity and engage a larger population of neurons in the mouse auditory system.
Affiliation(s)
- Mahtab Tehrani: Department of Anatomy and Neurobiology and Hearing Research Group, Northeast Ohio Medical University, Rootstown, Ohio 44272; Brain Health Research Institute, Kent State University, Kent, Ohio 44242
- Sharad Shanbhag: Department of Anatomy and Neurobiology and Hearing Research Group, Northeast Ohio Medical University, Rootstown, Ohio 44272; Brain Health Research Institute, Kent State University, Kent, Ohio 44242
- Julia J Huyck: Brain Health Research Institute, Kent State University, Kent, Ohio 44242; Speech Pathology and Audiology Program, Kent State University, Kent, Ohio 44242
- Rahi Patel: Department of Anatomy and Neurobiology and Hearing Research Group, Northeast Ohio Medical University, Rootstown, Ohio 44272
- Diana Kazimierski: Department of Anatomy and Neurobiology and Hearing Research Group, Northeast Ohio Medical University, Rootstown, Ohio 44272
- Jeffrey J Wenstrup: Department of Anatomy and Neurobiology and Hearing Research Group, Northeast Ohio Medical University, Rootstown, Ohio 44272; Brain Health Research Institute, Kent State University, Kent, Ohio 44242
2. Tehrani M, Shanbhag S, Huyck JJ, Patel R, Kazimiersky D, Wenstrup JJ. The Mouse Inferior Colliculus Responds Preferentially to Non-Ultrasonic Vocalizations. bioRxiv: The Preprint Server for Biology 2024:2024.02.09.579664. PMID: 38370776; PMCID: PMC10871332; DOI: 10.1101/2024.02.09.579664.
Abstract
The inferior colliculus (IC), the midbrain auditory integration center, analyzes information about social vocalizations and provides substrates for higher level processing of vocal signals. We used multi-channel recordings to characterize and localize responses to social vocalizations and synthetic stimuli within the IC of female and male mice, both urethane-anesthetized and unanesthetized. We compared responses to ultrasonic vocalizations (USVs) with other vocalizations in the mouse repertoire and related vocal responses to frequency tuning, IC subdivisions, and sex. Responses to lower frequency, broadband social vocalizations were widespread in IC, well represented throughout the tonotopic axis, across subdivisions, and in both sexes. Responses to USVs were much more limited. Although we observed some differences in tonal and vocal responses by sex and subdivision, representations of vocal responses by sex and subdivision were largely the same. For most units, responses to vocal signals occurred only when frequency response areas overlapped with spectra of the vocal signals. Since tuning to frequencies contained within the highest frequency USVs is limited (< 15% of IC units), responses to these vocalizations are correspondingly limited (< 5% of sound-responsive units). These results highlight a paradox of USV processing in some rodents: although USVs are the most abundant social vocalization, their representation and the representation of corresponding frequencies are less than those of lower frequency social vocalizations. We interpret this paradox in light of observations suggesting that USVs with lower frequency elements (<50 kHz) are associated with increased emotional intensity and engage a larger population of neurons in the mouse auditory system. SIGNIFICANCE STATEMENT The inferior colliculus (IC) integrates multiple inputs to analyze information about social vocalizations. In mice, we show that the most common type of social vocalization, the ultrasonic vocalization or USV, was poorly represented in IC compared to lower frequency vocalizations. For most neurons, responses to vocal signals occurred only when frequency response areas overlapped with vocalization spectra. These results highlight a paradox of USV processing in some rodent auditory systems: although USVs are the most abundant social vocalization, their representation and the representation of corresponding frequencies are less than those of lower frequency social vocalizations. These results suggest that USVs with lower frequency elements (<50 kHz), which are associated with increased emotional intensity, will engage a larger population of neurons in the mouse auditory system.
3. Souffi S, Varnet L, Zaidi M, Bathellier B, Huetz C, Edeline JM. Reduction in sound discrimination in noise is related to envelope similarity and not to a decrease in envelope tracking abilities. J Physiol 2023; 601:123-149. PMID: 36373184; DOI: 10.1113/jp283526.
Abstract
Humans and animals constantly face challenging acoustic environments, such as various background noises, that impair the detection, discrimination and identification of behaviourally relevant sounds. Here, we disentangled the role of temporal envelope tracking in the reduction in neuronal and behavioural discrimination between communication sounds in situations of acoustic degradations. By collecting neuronal activity from six different levels of the auditory system, from the auditory nerve up to the secondary auditory cortex, in anaesthetized guinea-pigs, we found that tracking of slow changes of the temporal envelope is a general functional property of auditory neurons for encoding communication sounds in quiet conditions and in adverse, challenging conditions. Results from a go/no-go sound discrimination task in mice support the idea that the loss of distinct slow envelope cues in noisy conditions impacted the discrimination performance. Together, these results suggest that envelope tracking is potentially a universal mechanism operating in the central auditory system, which allows the detection of any between-stimulus difference in the slow envelope and thus copes with degraded conditions. KEY POINTS: In quiet conditions, envelope tracking in the low amplitude modulation range (<20 Hz) is correlated with the neuronal discrimination between communication sounds as quantified by mutual information from the cochlear nucleus up to the auditory cortex. At each level of the auditory system, auditory neurons retain their abilities to track the communication sound envelopes in situations of acoustic degradation, such as vocoding and the addition of masking noises up to a signal-to-noise ratio of -10 dB. In noisy conditions, the increase in between-stimulus envelope similarity explains the reduction in both behavioural and neuronal discrimination in the auditory system. Envelope tracking can be viewed as a universal mechanism that allows neural and behavioural discrimination as long as the temporal envelope of communication sounds displays some differences.
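To make the envelope analysis above concrete, the sketch below extracts the slow (<20 Hz) temporal envelope of two sounds with a Hilbert transform plus low-pass filter and scores between-stimulus envelope similarity as a Pearson correlation. It is a minimal illustration with synthetic signals and assumed parameters (sampling rate, filter order, cutoff), not the authors' code.

```python
# Minimal sketch: slow-envelope extraction and between-stimulus envelope similarity.
# Assumes two mono waveforms sampled at `fs`; parameter choices are illustrative only.
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

def slow_envelope(x, fs, cutoff_hz=20.0):
    """Return the low-pass-filtered amplitude envelope of waveform x."""
    env = np.abs(hilbert(x))                      # analytic-signal amplitude
    b, a = butter(4, cutoff_hz / (fs / 2), btype="low")
    return filtfilt(b, a, env)                    # zero-phase low-pass below 20 Hz

def envelope_similarity(x, y, fs):
    """Pearson correlation between the slow envelopes of two equally long sounds."""
    ex, ey = slow_envelope(x, fs), slow_envelope(y, fs)
    return np.corrcoef(ex, ey)[0, 1]

if __name__ == "__main__":
    fs = 16000
    t = np.arange(0, 1.0, 1 / fs)
    call_a = np.sin(2 * np.pi * 3 * t) * np.random.randn(t.size)   # 3 Hz modulated noise
    call_b = np.sin(2 * np.pi * 7 * t) * np.random.randn(t.size)   # 7 Hz modulated noise
    print("envelope similarity:", envelope_similarity(call_a, call_b, fs))
```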
Affiliation(s)
- Samira Souffi: Paris-Saclay Institute of Neuroscience (Neuro-PSI, UMR 9197), CNRS - Université Paris-Saclay, Saclay, France
- Léo Varnet: Laboratoire des systèmes perceptifs, UMR CNRS 8248, Département d'Etudes Cognitives, Ecole Normale Supérieure, Université Paris Sciences & Lettres, Paris, France
- Meryem Zaidi: Paris-Saclay Institute of Neuroscience (Neuro-PSI, UMR 9197), CNRS - Université Paris-Saclay, Saclay, France
- Brice Bathellier: Institut de l'Audition, Institut Pasteur, Université de Paris, INSERM, Paris, France
- Chloé Huetz: Paris-Saclay Institute of Neuroscience (Neuro-PSI, UMR 9197), CNRS - Université Paris-Saclay, Saclay, France
- Jean-Marc Edeline: Paris-Saclay Institute of Neuroscience (Neuro-PSI, UMR 9197), CNRS - Université Paris-Saclay, Saclay, France
4. Gentile Polese A, Nigam S, Hurley LM. 5-HT1A Receptors Alter Temporal Responses to Broadband Vocalizations in the Mouse Inferior Colliculus Through Response Suppression. Front Neural Circuits 2021; 15:718348. PMID: 34512276; PMCID: PMC8430226; DOI: 10.3389/fncir.2021.718348.
Abstract
Neuromodulatory systems may provide information on social context to auditory brain regions, but relatively few studies have assessed the effects of neuromodulation on auditory responses to acoustic social signals. To address this issue, we measured the influence of the serotonergic system on the responses of neurons in a mouse auditory midbrain nucleus, the inferior colliculus (IC), to vocal signals. Broadband vocalizations (BBVs) are human-audible signals produced by mice in distress as well as by female mice in opposite-sex interactions. The production of BBVs is context-dependent in that they are produced both at early stages of interactions as females physically reject males and at later stages as males mount females. Serotonin in the IC of males corresponds to these events, and is elevated more in males that experience less female rejection. We measured the responses of single IC neurons to five recorded examples of BBVs in anesthetized mice. We then locally activated the 5-HT1A receptor through iontophoretic application of 8-OH-DPAT. IC neurons showed little selectivity for different BBVs, but spike trains were characterized by local regions of high spike probability, which we called "response features." Response features varied across neurons and also across calls for individual neurons, ranging from 1 to 7 response features for responses of single neurons to single calls. 8-OH-DPAT suppressed spikes and also reduced the numbers of response features. The weakest response features were the most likely to disappear, suggestive of an "iceberg"-like effect in which activation of the 5-HT1A receptor suppressed weakly suprathreshold response features below the spiking threshold. Because serotonin in the IC is more likely to be elevated for mounting-associated BBVs than for rejection-associated BBVs, these effects of the 5-HT1A receptor could contribute to the differential auditory processing of BBVs in different behavioral subcontexts.
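The "response feature" idea above can be illustrated with a small sketch that bins spike times into a PSTH and groups contiguous bins of high spike probability. The bin width, threshold, and toy spike trains are assumptions for illustration, not the study's actual criteria.

```python
# Sketch: find "response features" as contiguous PSTH bins whose per-trial spike
# probability exceeds a threshold. Bin width, threshold, and data are illustrative.
import numpy as np

def response_features(spike_times_per_trial, t_start, t_stop, bin_ms=5.0, thresh=0.3):
    """Return (start_s, stop_s) spans where per-trial spike probability > thresh."""
    edges = np.arange(t_start, t_stop + bin_ms / 1000, bin_ms / 1000)
    n_trials = len(spike_times_per_trial)
    counts = np.zeros(len(edges) - 1)
    for trial in spike_times_per_trial:
        counts += np.histogram(trial, bins=edges)[0]
    prob = counts / n_trials                      # spikes per bin per trial
    above = prob > thresh
    features, i = [], 0
    while i < len(above):                         # group consecutive supra-threshold bins
        if above[i]:
            j = i
            while j < len(above) and above[j]:
                j += 1
            features.append((edges[i], edges[j]))
            i = j
        else:
            i += 1
    return features

# toy usage: 20 trials with reliable spiking near 50 ms
trials = [list(np.random.normal(0.05, 0.003, 3)) for _ in range(20)]
print(response_features(trials, 0.0, 0.2))
```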
Affiliation(s)
- Arianna Gentile Polese: Department of Cell and Developmental Biology, University of Colorado Anschutz Medical Campus, Aurora, CO, United States; Department of Biology, Program in Neuroscience, Indiana University Bloomington, Bloomington, IN, United States
- Sunny Nigam: Department of Neurobiology and Anatomy, McGovern Medical School, The University of Texas Health Science Center at Houston, Houston, TX, United States; Department of Physics, Indiana University Bloomington, Bloomington, IN, United States
- Laura M. Hurley: Department of Neurobiology and Anatomy, McGovern Medical School, The University of Texas Health Science Center at Houston, Houston, TX, United States
5. Natural Statistics as Inference Principles of Auditory Tuning in Biological and Artificial Midbrain Networks. eNeuro 2021; 8:ENEURO.0525-20.2021. PMID: 33947687; PMCID: PMC8211468; DOI: 10.1523/eneuro.0525-20.2021.
Abstract
Bats provide a powerful mammalian model to explore the neural representation of complex sounds, as they rely on hearing to survive in their environment. The inferior colliculus (IC) is a central hub of the auditory system that receives converging projections from the ascending pathway and descending inputs from auditory cortex. In this work, we build an artificial neural network to replicate auditory characteristics in IC neurons of the big brown bat. We first test the hypothesis that spectro-temporal tuning of IC neurons is optimized to represent the natural statistics of conspecific vocalizations. We estimate spectro-temporal receptive fields (STRFs) of IC neurons and compare tuning characteristics to statistics of bat calls. The results indicate that the FM tuning of IC neurons is matched with the statistics. Then, we investigate this hypothesis on the network optimized to represent natural sound statistics and to compare its output with biological responses. We also estimate biomimetic STRFs from the artificial network and correlate their characteristics to those of biological neurons. Tuning properties of both biological and artificial neurons reveal strong agreement along both spectral and temporal dimensions, and suggest the presence of nonlinearity, sparsity, and complexity constraints that underlie the neural representation in the auditory midbrain. Additionally, the artificial neurons replicate IC neural activities in discrimination of social calls, and provide simulated results for a noise robust discrimination. In this way, the biomimetic network allows us to infer the neural mechanisms by which the bat’s IC processes natural sounds used to construct the auditory scene.
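A minimal sketch of STRF estimation by generic reverse correlation (spike-triggered averaging of a stimulus spectrogram) is shown below; the stimulus, spike train, and window settings are synthetic stand-ins, and this is not necessarily the estimator used in the study.

```python
# Sketch: estimate a spectro-temporal receptive field (STRF) as the spike-triggered
# average of a stimulus spectrogram. Generic reverse correlation with synthetic data;
# stimulus, binning, and history-window length are illustrative assumptions.
import numpy as np
from scipy.signal import spectrogram

fs = 100000                                       # 100 kHz sampling rate (illustrative)
stim = np.random.randn(fs * 10)                   # 10 s of white noise as stand-in stimulus
f, t, S = spectrogram(stim, fs=fs, nperseg=256, noverlap=0)
S = np.log(S + 1e-12)                             # log power, time bins of ~2.56 ms

rate = 20.0                                       # fake Poisson spike train aligned to S's bins
spikes = np.random.poisson(rate * (t[1] - t[0]), size=t.size)

n_hist = 40                                       # ~100 ms of stimulus history per spike
strf = np.zeros((f.size, n_hist))
n_spk = 0
for i in range(n_hist, t.size):
    if spikes[i]:
        strf += spikes[i] * S[:, i - n_hist:i]    # accumulate preceding spectrogram slices
        n_spk += spikes[i]
strf /= max(n_spk, 1)                             # spike-triggered average (freq x lag)
print("STRF shape (frequency bins x time lags):", strf.shape)
```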
6. Montes-Lourido P, Kar M, David SV, Sadagopan S. Neuronal selectivity to complex vocalization features emerges in the superficial layers of primary auditory cortex. PLoS Biol 2021; 19:e3001299. PMID: 34133413; PMCID: PMC8238193; DOI: 10.1371/journal.pbio.3001299.
Abstract
Early in auditory processing, neural responses faithfully reflect acoustic input. At higher stages of auditory processing, however, neurons become selective for particular call types, eventually leading to specialized regions of cortex that preferentially process calls at the highest auditory processing stages. We previously proposed that an intermediate step in how nonselective responses are transformed into call-selective responses is the detection of informative call features. But how neural selectivity for informative call features emerges from nonselective inputs, whether feature selectivity gradually emerges over the processing hierarchy, and how stimulus information is represented in nonselective and feature-selective populations remain open questions. In this study, using unanesthetized guinea pigs (GPs), a highly vocal and social rodent, as an animal model, we characterized the neural representation of calls in 3 auditory processing stages: the thalamus (ventral medial geniculate body (vMGB)), and thalamorecipient (L4) and superficial layers (L2/3) of primary auditory cortex (A1). We found that neurons in vMGB and A1 L4 did not exhibit call-selective responses and responded throughout the call durations. However, A1 L2/3 neurons showed high call selectivity with about a third of neurons responding to only 1 or 2 call types. These A1 L2/3 neurons only responded to restricted portions of calls, suggesting that they were highly selective for call features. Receptive fields of these A1 L2/3 neurons showed complex spectrotemporal structures that could underlie their high call feature selectivity. Information theoretic analysis revealed that in A1 L4, stimulus information was distributed over the population and was spread out over the call durations. In contrast, in A1 L2/3, individual neurons showed brief bursts of high stimulus-specific information and conveyed high levels of information per spike. These data demonstrate that a transformation in the neural representation of calls occurs between A1 L4 and A1 L2/3, leading to the emergence of a feature-based representation of calls in A1 L2/3. Our data thus suggest that observed cortical specializations for call processing emerge in A1 and set the stage for further mechanistic studies.
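One simple way to summarize call selectivity of the kind described above is to count how many call types drive a unit above its spontaneous rate. The sketch below does this on fake spike counts with an assumed 2-SD criterion; it is only a stand-in for the study's analysis, not its actual metric.

```python
# Sketch: a simple call-selectivity summary -- count how many call types drive a unit
# above its spontaneous rate by some criterion. The 2-SD criterion and the fake data
# are illustrative assumptions, not the metric used in the study.
import numpy as np

rng = np.random.default_rng(0)
n_calls, n_trials = 8, 20
spont = rng.poisson(5, size=(n_trials,))                 # spontaneous spike counts
evoked = rng.poisson([4, 5, 5, 18, 6, 5, 22, 5], size=(n_trials, n_calls))

criterion = spont.mean() + 2 * spont.std()
responsive = evoked.mean(axis=0) > criterion             # which call types drive the unit
n_effective = responsive.sum()
print(f"responds to {n_effective} of {n_calls} call types")
print("selective (<= 2 call types):", n_effective <= 2)
```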
Affiliation(s)
- Pilar Montes-Lourido: Department of Neurobiology, University of Pittsburgh, Pittsburgh, Pennsylvania, United States of America
- Manaswini Kar: Department of Neurobiology, University of Pittsburgh, Pittsburgh, Pennsylvania, United States of America; Center for Neuroscience, University of Pittsburgh, Pittsburgh, Pennsylvania, United States of America
- Stephen V. David: Department of Otolaryngology, Oregon Health and Science University, Portland, Oregon, United States of America
- Srivatsun Sadagopan: Department of Neurobiology, University of Pittsburgh, Pittsburgh, Pennsylvania, United States of America; Center for Neuroscience, University of Pittsburgh, Pittsburgh, Pennsylvania, United States of America; Department of Bioengineering, University of Pittsburgh, Pittsburgh, Pennsylvania, United States of America; Center for the Neural Basis of Cognition, University of Pittsburgh, Pittsburgh, Pennsylvania, United States of America
7. Heard M, Li X, Lee YS. Hybrid auditory fMRI: In pursuit of increasing data acquisition while decreasing the impact of scanner noise. J Neurosci Methods 2021; 358:109198. PMID: 33901568; DOI: 10.1016/j.jneumeth.2021.109198.
Abstract
BACKGROUND Two challenges in auditory fMRI include the loud scanner noise during sound presentation and slow data acquisition. Here, we introduce a new auditory imaging protocol, termed "hybrid", that alleviates these obstacles. NEW METHOD We designed a within-subject experiment (N = 14) wherein language-driven activity was measured by hybrid, interleaved silent (ISSS), and continuous multiband acquisition. To determine the advantage of noise attenuation during sound presentation, hybrid was compared to multiband. To identify the benefits of increased temporal resolution, hybrid was compared to ISSS. Data were evaluated by whole-brain univariate general linear modeling (GLM) and multivariate pattern analysis (MVPA). CONCLUSIONS Our data revealed that hybrid imaging restored neural activity in the canonical language network that was absent due to the loud noise or slow sampling in the conventional imaging protocols. With its noise-attenuated sound presentation windows and increased acquisition speed, the hybrid protocol is well-suited for auditory fMRI research tracking neural activity pertaining to fast, time-varying acoustic events.
Affiliation(s)
- Matthew Heard: School of Behavioral and Brain Sciences, University of Texas at Dallas, United States
- Xiangrui Li: Center for Cognitive and Behavioral Brain Imaging, The Ohio State University, United States
- Yune S Lee: School of Behavioral and Brain Sciences, University of Texas at Dallas, United States; Center for BrainHealth, University of Texas at Dallas, United States
8. Hosseini M, Rodriguez G, Guo H, Lim HH, Plourde E. The effect of input noises on the activity of auditory neurons using GLM-based metrics. J Neural Eng 2021; 18. PMID: 33626516; DOI: 10.1088/1741-2552/abe979.
Abstract
CONTEXT The auditory system is extremely efficient in extracting auditory information in the presence of background noise. However, people with auditory implants have a hard time understanding speech in noisy conditions. Understanding the mechanisms of perception in noise could lead to better stimulation or preprocessing strategies for such implants. OBJECTIVE The neural mechanisms related to the processing of background noise, especially in the inferior colliculus (IC) where the auditory midbrain implant is located, are still not well understood. We thus wish to investigate if there is a difference in the activity of neurons in the IC when presenting noisy vocalizations with different types of noise (stationary vs. non-stationary), input signal-to-noise ratios (SNR) and signal levels. APPROACH We developed novel metrics based on a generalized linear model (GLM) to investigate the effect of a given input noise on neural activity. We used these metrics to analyze neural data recorded from the IC in ketamine-anesthetized female Hartley guinea pigs while presenting noisy vocalizations. MAIN RESULTS We found that non-stationary noise clearly contributes to the multi-unit neural activity in the IC by causing excitation, regardless of the SNR, input level or vocalization type. However, when presenting white or natural stationary noises, a great diversity of responses was observed for the different conditions, where the multi-unit activity of some sites was affected by the presence of noise and the activity of others was not. SIGNIFICANCE The GLM-based metrics allowed the identification of a clear distinction between the effect of white or natural stationary noises and that of non-stationary noise on the multi-unit activity in the IC. This had not been observed before and indicates that the so-called noise invariance in the IC is dependent on the input noisy conditions. This could suggest different preprocessing or stimulation approaches for auditory midbrain implants depending on the noisy conditions.
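The GLM idea can be illustrated with a generic Poisson regression in which spike counts are modeled from a stimulus-envelope regressor and a noise-envelope regressor; the fitted noise weight then serves as a crude stand-in for a "noise contribution" metric. This is a hypothetical sketch on synthetic data, not the paper's exact metrics.

```python
# Sketch: a generic Poisson GLM (log link) with a stimulus-envelope regressor and a
# noise-envelope regressor, fit by Newton's method. The fitted noise weight is used
# as a crude stand-in for a "noise contribution" metric; this is not the paper's
# exact formulation, and all signals below are synthetic.
import numpy as np

rng = np.random.default_rng(1)
n = 2000
stim_env = np.abs(rng.normal(size=n))             # stand-in vocalization envelope
noise_env = np.abs(rng.normal(size=n))            # stand-in background-noise envelope
X = np.column_stack([np.ones(n), stim_env, noise_env])
true_beta = np.array([0.2, 0.8, 0.5])             # intercept, stimulus, noise weights
y = rng.poisson(np.exp(X @ true_beta))            # simulated spike counts per time bin

beta = np.zeros(3)
for _ in range(25):                               # Newton-Raphson on the Poisson likelihood
    mu = np.exp(np.clip(X @ beta, -20, 20))
    grad = X.T @ (y - mu)
    hess = X.T @ (X * mu[:, None])
    beta += np.linalg.solve(hess, grad)

print("estimated weights (intercept, stimulus, noise):", np.round(beta, 2))
print("assumed 'noise contribution' metric = noise weight:", round(beta[2], 2))
```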
Affiliation(s)
- Maryam Hosseini: Electrical Engineering, Université de Sherbrooke, 2500 Boulevard de l'Université, Sherbrooke, Quebec, J1K 2R1, Canada
- Gerardo Rodriguez: Biomedical Engineering, University of Minnesota, 312 Church St SE, Minneapolis, Minnesota, 55455, United States
- Hongsun Guo: Biomedical Engineering, University of Minnesota, 312 Church St SE, Minneapolis, Minnesota, 55455, United States
- Hubert H Lim: Department of Biomedical Engineering, University of Minnesota, 7-105 Hasselmo Hall, 312 Church Street SE, Minneapolis, Minnesota, 55455, United States
- Eric Plourde: Electrical Engineering, Université de Sherbrooke, 2500 Boulevard de l'Université, Sherbrooke, Quebec, J1K 2R1, Canada
9. Logerot P, Smith PF, Wild M, Kubke MF. Auditory processing in the zebra finch midbrain: single unit responses and effect of rearing experience. PeerJ 2020; 8:e9363. PMID: 32775046; PMCID: PMC7384439; DOI: 10.7717/peerj.9363.
Abstract
In birds the auditory system plays a key role in providing the sensory input used to discriminate between conspecific and heterospecific vocal signals. In those species that are known to learn their vocalizations, for example, songbirds, it is generally considered that this ability arises and is manifest in the forebrain, although there is no a priori reason why brainstem components of the auditory system could not also play an important part. To test this assumption, we used groups of normal reared and cross-fostered zebra finches that had previously been shown in behavioural experiments to reduce their preference for conspecific songs subsequent to cross fostering experience with Bengalese finches, a related species with a distinctly different song. The question we asked, therefore, is whether this experiential change also changes the bias in favour of conspecific song displayed by auditory midbrain units of normally raised zebra finches. By recording the responses of single units in MLd to a variety of zebra finch and Bengalese finch songs in both normally reared and cross-fostered zebra finches, we provide a positive answer to this question. That is, the difference in response to conspecific and heterospecific songs seen in normal reared zebra finches is reduced following cross-fostering. In birds the virtual absence of mammalian-like cortical projections upon auditory brainstem nuclei argues against the interpretation that MLd units change, as observed in the present experiments, as a result of top-down influences on sensory processing. Instead, it appears that MLd units can be influenced significantly by sensory inputs arising directly from a change in auditory experience during development.
Affiliation(s)
- Priscilla Logerot: Anatomy and Medical Imaging, University of Auckland, Auckland, New Zealand
- Paul F. Smith: Dept. of Pharmacology and Toxicology, School of Biomedical Sciences, Brain Health Research Centre, Brain Research New Zealand, and Eisdell Moore Centre, University of Otago, Dunedin, New Zealand
- Martin Wild: Anatomy and Medical Imaging and Eisdell Moore Centre, University of Auckland, Auckland, New Zealand
- M. Fabiana Kubke: Anatomy and Medical Imaging, Centre for Brain Research and Eisdell Moore Centre, University of Auckland, Auckland, New Zealand
10. Hosseini M, Rodriguez G, Guo H, Lim H, Plourde E. Novel metrics to measure the effect of additive inputs on the activity of sensory system neurons. Annu Int Conf IEEE Eng Med Biol Soc 2020; 2019:5141-5145. PMID: 31947016; DOI: 10.1109/embc.2019.8857622.
Abstract
Sensory systems, such as the visual or auditory system, are highly nonlinear. It is therefore not easy to predict the effect of additive inputs on the spiking activity of related brain structures. Here, we propose two metrics to study the effect of additive covariates on the spiking activity of neurons. These metrics are directly obtained from a generalized linear model. We apply these metrics to the study of the effect of additive input audio noise on the spiking activity of neurons in the auditory system. To do so, we combine clean vocalisations with natural stationary or non-stationary noises and record activity in the auditory system while presenting the noisy vocalisations. We found that non-stationary noise has a greater effect on the neural activity than stationary noise. We observe that the results obtained using the proposed metrics are more consistent with current knowledge in auditory neuroscience than the results obtained when using a common metric from the literature, the extraction index.
11. Gourévitch B, Mahrt EJ, Bakay W, Elde C, Portfors CV. GABAA receptors contribute more to rate than temporal coding in the IC of awake mice. J Neurophysiol 2020; 123:134-148. PMID: 31721644; DOI: 10.1152/jn.00377.2019.
Abstract
Speech is our most important form of communication, yet we have a poor understanding of how communication sounds are processed by the brain. Mice make great model organisms to study neural processing of communication sounds because of their rich repertoire of social vocalizations and because they have brain structures analogous to humans, such as the auditory midbrain nucleus inferior colliculus (IC). Although the combined roles of GABAergic and glycinergic inhibition on vocalization selectivity in the IC have been studied to a limited degree, the discrete contributions of GABAergic inhibition have only rarely been examined. In this study, we examined how GABAergic inhibition contributes to shaping responses to pure tones as well as selectivity to complex sounds in the IC of awake mice. In our set of long-latency neurons, we found that GABAergic inhibition extends the evoked firing rate range of IC neurons by lowering the baseline firing rate but maintaining the highest probability of firing rate. GABAergic inhibition also prevented IC neurons from bursting in a spontaneous state. Finally, we found that although GABAergic inhibition shaped the spectrotemporal response to vocalizations in a nonlinear fashion, it did not affect the neural code needed to discriminate vocalizations, based either on spiking patterns or on firing rate. Overall, our results emphasize that even if GABAergic inhibition generally decreases the firing rate, it does so while maintaining or extending the abilities of neurons in the IC to code the wide variety of sounds that mammals are exposed to in their daily lives. NEW & NOTEWORTHY GABAergic inhibition adds nonlinearity to neuronal response curves. This increases the neuronal range of evoked firing rate by reducing baseline firing. GABAergic inhibition prevents bursting responses from neurons in a spontaneous state, reducing noise in the temporal coding of the neuron. This could result in improved signal transmission to the cortex.
Affiliation(s)
- Boris Gourévitch: Institut de l'Audition, Institut Pasteur, INSERM, Sorbonne Université, F-75012 Paris, France; CNRS, France
- Elena J Mahrt: School of Biological Sciences, Washington State University, Vancouver, Washington
- Warren Bakay: Institut de l'Audition, Institut Pasteur, INSERM, Sorbonne Université, F-75012 Paris, France
- Cameron Elde: School of Biological Sciences, Washington State University, Vancouver, Washington
- Christine V Portfors: School of Biological Sciences, Washington State University, Vancouver, Washington
12. Naert G, Pasdelou MP, Le Prell CG. Use of the guinea pig in studies on the development and prevention of acquired sensorineural hearing loss, with an emphasis on noise. J Acoust Soc Am 2019; 146:3743. PMID: 31795705; PMCID: PMC7195866; DOI: 10.1121/1.5132711.
Abstract
Guinea pigs have been used in diverse studies to better understand acquired hearing loss induced by noise and ototoxic drugs. The guinea pig has its best hearing at slightly higher frequencies relative to humans, but its hearing is more similar to humans than the rat or mouse. Like other rodents, it is more vulnerable to noise injury than the human or nonhuman primate models. There is a wealth of information on auditory function and vulnerability of the inner ear to diverse insults in the guinea pig. With respect to the assessment of potential otoprotective agents, guinea pigs are also docile animals that are relatively easy to dose via systemic injections or gavage. Of interest, the cochlea and the round window are easily accessible, notably for direct cochlear therapy, as in the chinchilla, making the guinea pig a most relevant and suitable model for hearing. This article reviews the use of the guinea pig in basic auditory research, provides detailed discussion of its use in studies on noise injury and other injuries leading to acquired sensorineural hearing loss, and lists some therapeutics assessed in these laboratory animal models to prevent acquired sensorineural hearing loss.
Affiliation(s)
- Colleen G Le Prell: School of Behavioral and Brain Sciences, University of Texas at Dallas, Dallas, Texas 75080, USA
13. Liu ST, Montes-Lourido P, Wang X, Sadagopan S. Optimal features for auditory categorization. Nat Commun 2019; 10:1302. PMID: 30899018; PMCID: PMC6428858; DOI: 10.1038/s41467-019-09115-y.
Abstract
Humans and vocal animals use vocalizations to communicate with members of their species. A necessary function of auditory perception is to generalize across the high variability inherent in vocalization production and classify them into behaviorally distinct categories ('words' or 'call types'). Here, we demonstrate that detecting mid-level features in calls achieves production-invariant classification. Starting from randomly chosen marmoset call features, we use a greedy search algorithm to determine the most informative and least redundant features necessary for call classification. High classification performance is achieved using only 10-20 features per call type. Predictions of tuning properties of putative feature-selective neurons accurately match some observed auditory cortical responses. This feature-based approach also succeeds for call categorization in other species, and for other complex classification tasks such as caller identification. Our results suggest that high-level neural representations of sounds are based on task-dependent features optimized for specific computational goals.
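A simplified version of the greedy search can be sketched as forward selection of the feature subset that maximizes the training accuracy of a nearest-centroid decoder. The binary "feature detections", the decoder, and all data below are illustrative assumptions rather than the paper's actual call features or selection criterion.

```python
# Sketch: greedy forward selection of "most informative" features for call
# classification. Candidate features are random binary detectors and the classifier
# is a nearest-centroid rule -- a simplified stand-in with entirely synthetic data.
import numpy as np

rng = np.random.default_rng(2)
n_samples, n_candidates = 400, 60
labels = rng.integers(0, 4, n_samples)                   # 4 call types
# each candidate feature fires with a label-dependent probability (some are informative)
fire_p = rng.uniform(0.1, 0.9, size=(n_candidates, 4))
F = (rng.random((n_samples, n_candidates)) < fire_p[:, labels].T).astype(float)

def accuracy(feature_idx):
    """Training-set nearest-centroid accuracy using only the selected features."""
    X = F[:, feature_idx]
    centroids = np.stack([X[labels == c].mean(axis=0) for c in range(4)])
    d = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    return (d.argmin(axis=1) == labels).mean()

selected, remaining = [], list(range(n_candidates))
for _ in range(10):                                      # pick up to 10 features greedily
    best = max(remaining, key=lambda j: accuracy(selected + [j]))
    selected.append(best)
    remaining.remove(best)
    print(f"added feature {best:2d}, accuracy = {accuracy(selected):.2f}")
```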
Affiliation(s)
- Shi Tong Liu: Department of Bioengineering, University of Pittsburgh, Pittsburgh, 15213, PA, USA
- Pilar Montes-Lourido: Department of Neurobiology, University of Pittsburgh, Pittsburgh, 15213, PA, USA
- Xiaoqin Wang: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, 21205, MD, USA
- Srivatsun Sadagopan: Department of Bioengineering, University of Pittsburgh, Pittsburgh, 15213, PA, USA; Department of Neurobiology, University of Pittsburgh, Pittsburgh, 15213, PA, USA; Department of Otolaryngology, University of Pittsburgh, Pittsburgh, 15213, PA, USA
14. Neural processes of vocal social perception: Dog-human comparative fMRI studies. Neurosci Biobehav Rev 2019; 85:54-64. PMID: 29287629; DOI: 10.1016/j.neubiorev.2017.11.017.
Abstract
In this review we focus on the exciting new opportunities in comparative neuroscience to study neural processes of vocal social perception by comparing dog and human neural activity using fMRI methods. The dog is a relatively new addition to this research area; however, it has a large potential to become a standard species in such investigations. Although there has been great interest in the emergence of human language abilities, in the case of fMRI methods most research to date has focused on homologue comparisons within Primates. By belonging to a very different clade of mammalian evolution, dogs could give such research agendas a more general mammalian foundation. In addition, broadening the scope of investigations into vocal communication in general can also deepen our understanding of human vocal skills. Because dogs have been selected for and live in an anthropogenic environment, research with dogs may also be informative about the way in which human non-linguistic and linguistic signals are represented in a mammalian brain without skills for language production.
15. Peng F, Innes-Brown H, McKay CM, Fallon JB, Zhou Y, Wang X, Hu N, Hou W. Temporal Coding of Voice Pitch Contours in Mandarin Tones. Front Neural Circuits 2018; 12:55. PMID: 30087597; PMCID: PMC6066958; DOI: 10.3389/fncir.2018.00055.
Abstract
Accurate perception of time-variant pitch is important for speech recognition, particularly for tonal languages with different lexical tones such as Mandarin, in which different tones convey different semantic information. Previous studies reported that the auditory nerve and cochlear nucleus can encode different pitches through phase-locked neural activities. However, little is known about how the inferior colliculus (IC) encodes the time-variant periodicity pitch of natural speech. In this study, the Mandarin syllable /ba/ pronounced with four lexical tones (flat, rising, falling then rising, and falling) was used as the stimulus set. Local field potentials (LFPs) and single neuron activity were simultaneously recorded from 90 sites within the contralateral IC of six urethane-anesthetized and decerebrate guinea pigs in response to the four stimuli. Analysis of the temporal information of LFPs showed that 93% of the LFPs exhibited robust encoding of periodicity pitch. Pitch strength of LFPs derived from the autocorrelogram was significantly (p < 0.001) stronger for rising tones than flat and falling tones. Pitch strength also increased significantly (p < 0.05) with the characteristic frequency (CF). On the other hand, only 47% (42 of 90) of single neuron activities were significantly synchronized to the fundamental frequency of the stimulus, suggesting that the temporal spiking pattern of a single IC neuron could encode the time-variant periodicity pitch of speech robustly. The difference between the number of LFPs and single neurons that encode the time-variant F0 voice pitch supports the notion of a transition at the level of the IC from direct temporal coding in the spike trains of individual neurons to other forms of neural representation.
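The autocorrelogram-based pitch measure can be illustrated as follows: compute the normalized autocorrelation of an LFP-like signal, locate the peak within a plausible F0 range, and take the peak height as pitch strength. The signal, sampling rate, and search range below are assumptions; this is a textbook estimator, not necessarily the paper's exact pipeline.

```python
# Sketch: periodicity pitch and "pitch strength" from the normalized autocorrelation
# of an LFP-like signal. Synthetic signal and parameter choices are illustrative only.
import numpy as np

fs = 2000                                          # LFP sampling rate (Hz), assumed
t = np.arange(0, 1.0, 1 / fs)
f0 = 120.0                                         # a speech-like F0 around 120 Hz
lfp = np.sin(2 * np.pi * f0 * t) + 0.5 * np.random.randn(t.size)

x = lfp - lfp.mean()
ac = np.correlate(x, x, mode="full")[x.size - 1:]  # autocorrelation, non-negative lags
ac /= ac[0]                                        # normalize so lag 0 equals 1

lo, hi = int(fs / 300), int(fs / 60)               # search for F0 between 60 and 300 Hz
peak_lag = lo + np.argmax(ac[lo:hi])
print("estimated F0 (Hz):", fs / peak_lag)
print("pitch strength (peak autocorrelation):", round(ac[peak_lag], 2))
```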
Affiliation(s)
- Fei Peng: Key Laboratory of Biorheological Science and Technology of Ministry of Education, Bioengineering College of Chongqing University, Chongqing, China; Collaborative Innovation Center for Brain Science, Chongqing University, Chongqing, China
- Hamish Innes-Brown: Bionics Institute, East Melbourne, VIC, Australia; Department of Medical Bionics, University of Melbourne, Melbourne, VIC, Australia
- Colette M. McKay: Bionics Institute, East Melbourne, VIC, Australia; Department of Medical Bionics, University of Melbourne, Melbourne, VIC, Australia
- James B. Fallon: Bionics Institute, East Melbourne, VIC, Australia; Department of Medical Bionics, University of Melbourne, Melbourne, VIC, Australia; Department of Otolaryngology, University of Melbourne, Melbourne, VIC, Australia
- Yi Zhou: Chongqing Key Laboratory of Neurobiology, Department of Neurobiology, Third Military Medical University, Chongqing, China
- Xing Wang: Key Laboratory of Biorheological Science and Technology of Ministry of Education, Bioengineering College of Chongqing University, Chongqing, China; Chongqing Medical Electronics Engineering Technology Research Center, Chongqing University, Chongqing, China
- Ning Hu: Key Laboratory of Biorheological Science and Technology of Ministry of Education, Bioengineering College of Chongqing University, Chongqing, China; Collaborative Innovation Center for Brain Science, Chongqing University, Chongqing, China
- Wensheng Hou: Key Laboratory of Biorheological Science and Technology of Ministry of Education, Bioengineering College of Chongqing University, Chongqing, China; Collaborative Innovation Center for Brain Science, Chongqing University, Chongqing, China; Chongqing Medical Electronics Engineering Technology Research Center, Chongqing University, Chongqing, China
16. Mohr RA, Chang Y, Bhandiwad AA, Forlano PM, Sisneros JA. Brain Activation Patterns in Response to Conspecific and Heterospecific Social Acoustic Signals in Female Plainfin Midshipman Fish, Porichthys notatus. Brain Behav Evol 2018; 91:31-44. PMID: 29597197; DOI: 10.1159/000487122.
Abstract
While the peripheral auditory system of fish has been well studied, less is known about how the fish's brain and central auditory system process complex social acoustic signals. The plainfin midshipman fish, Porichthys notatus, has become a good species for investigating the neural basis of acoustic communication because the production and reception of acoustic signals is paramount for this species' reproductive success. Nesting males produce long-duration advertisement calls that females detect and localize among the noise in the intertidal zone to successfully find mates and spawn. How female midshipman are able to discriminate male advertisement calls from environmental noise and other acoustic stimuli is unknown. Using the immediate early gene product cFos as a marker for neural activity, we quantified neural activation of the ascending auditory pathway in female midshipman exposed to conspecific advertisement calls, heterospecific white seabass calls, or ambient environment noise. We hypothesized that auditory hindbrain nuclei would be activated by general acoustic stimuli (ambient noise and other biotic acoustic stimuli) whereas auditory neurons in the midbrain and forebrain would be selectively activated by conspecific advertisement calls. We show that neural activation in two regions of the auditory hindbrain, i.e., the rostral intermediate division of the descending octaval nucleus and the ventral division of the secondary octaval nucleus, did not differ via cFos immunoreactive (cFos-ir) activity when exposed to different acoustic stimuli. In contrast, female midshipman exposed to conspecific advertisement calls showed greater cFos-ir in the nucleus centralis of the midbrain torus semicircularis compared to fish exposed only to ambient noise. No difference in cFos-ir was observed in the torus semicircularis of animals exposed to conspecific versus heterospecific calls. However, cFos-ir was greater in two forebrain structures that receive auditory input, i.e., the central posterior nucleus of the thalamus and the anterior tuberal hypothalamus, when exposed to conspecific calls versus either ambient noise or heterospecific calls. Our results suggest that higher-order neurons in the female midshipman midbrain torus semicircularis, thalamic central posterior nucleus, and hypothalamic anterior tuberal nucleus may be necessary for the discrimination of complex social acoustic signals. Furthermore, neurons in the central posterior and anterior tuberal nuclei are differentially activated by exposure to conspecific versus other acoustic stimuli.
Affiliation(s)
- Robert A Mohr: Department of Psychology, University of Washington, Seattle, Washington, USA
- Yiran Chang: Department of Biology, University of Washington, Seattle, Washington, USA
- Ashwin A Bhandiwad: Department of Psychology, University of Washington, Seattle, Washington, USA
- Paul M Forlano: Department of Biology, Brooklyn College, City University of New York, Brooklyn, New York, USA; Program in Ecology, Evolution, and Behavior, The Graduate Center, City University of New York, New York, New York, USA; Program in Neuroscience, The Graduate Center, City University of New York, New York, New York, USA; Program in Behavioral and Cognitive Neuroscience, The Graduate Center, City University of New York, New York, New York, USA
- Joseph A Sisneros: Department of Psychology, University of Washington, Seattle, Washington, USA; Department of Biology, University of Washington, Seattle, Washington, USA; Virginia Merrill Bloedel Hearing Research Center, Seattle, Washington, USA
18. Zheng XL, Fan XY, Hou WS. Neural representation of different Mandarin tones in the inferior colliculus of the guinea pig. Annu Int Conf IEEE Eng Med Biol Soc 2017; 2016:1608-1611. PMID: 28268636; DOI: 10.1109/embc.2016.7591020.
Abstract
Mandarin speech has four different tones, and the coding mechanism underlying tone identification still remains unclear. Here, in the inferior colliculus (IC) of anesthetized guinea pigs, we recorded single neuron activities in response to one word spoken with four tones using a tungsten electrode. Peri-stimulus time histograms (PSTHs) and inter-spike intervals (ISIs) were used to evaluate the neural response. The results showed that PSTHs grouped by frequency band reflected the spectrotemporal patterns of the different tones; average population PSTHs matched the envelopes of the different tones; and the peaks of the ISI histograms in three time segments exhibited a displacement that reflected the profile of the fundamental frequency (F0). These preliminary results suggested that IC neurons could encode the spectrotemporal acoustic features of different Mandarin tones.
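For concreteness, the sketch below computes the two measures named above, a PSTH and an ISI histogram, from toy spike trains; the bin widths and synthetic data are illustrative assumptions.

```python
# Sketch: compute a PSTH and an inter-spike-interval (ISI) histogram from spike
# times. Bin widths and the toy spike trains are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)
n_trials, dur = 30, 0.6                           # 30 trials of a 600-ms syllable
trials = [np.sort(rng.uniform(0, dur, rng.poisson(25))) for _ in range(n_trials)]

bin_w = 0.005                                     # 5-ms PSTH bins
edges = np.arange(0, dur + bin_w, bin_w)
psth = sum(np.histogram(tr, bins=edges)[0] for tr in trials) / (n_trials * bin_w)

isis = np.concatenate([np.diff(tr) for tr in trials])
isi_hist, isi_edges = np.histogram(isis, bins=np.arange(0, 0.05, 0.001))

print("peak PSTH rate (spikes/s):", psth.max())
print("modal ISI (ms):", 1000 * isi_edges[isi_hist.argmax()])
```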
19. Finton CJ, Keesom SM, Hood KE, Hurley LM. What's in a squeak? Female vocal signals predict the sexual behaviour of male house mice during courtship. Anim Behav 2017. DOI: 10.1016/j.anbehav.2017.01.021.
20. Akimov AG, Egorova MA, Ehret G. Spectral summation and facilitation in on- and off-responses for optimized representation of communication calls in mouse inferior colliculus. Eur J Neurosci 2017; 45:440-459. PMID: 27891665; DOI: 10.1111/ejn.13488.
Abstract
Selectivity for processing of species-specific vocalizations and communication sounds has often been associated with the auditory cortex. The midbrain inferior colliculus, however, is the first center in the auditory pathways of mammals integrating acoustic information processed in separate nuclei and channels in the brainstem and, therefore, could contribute significantly to enhancing the perception of species' communication sounds. Here, we used natural wriggling calls of mouse pups, which communicate need for maternal care to adult females, and a further 15 synthesized sounds to test the hypothesis that neurons in the central nucleus of the inferior colliculus of adult females optimize their response rates for reproduction of the three main harmonics (formants) of wriggling calls. The results confirmed the hypothesis, showing that average response rates, as recorded extracellularly from single units, were highest and spectral facilitation most effective for both onset and offset responses to the call and call models with three resolved frequencies according to critical bands in perception. In addition, the general on- and/or off-response enhancement in almost half the investigated 122 neurons favors perception not only of single calls but also of vocalization rhythm. In summary, our study provides strong evidence that critical-band resolved frequency components within a communication sound increase the probability of its perception by boosting the signal-to-noise ratio of neural response rates within the inferior colliculus by at least 20% (our criterion for facilitation). These mechanisms, including enhancement of rhythm coding, are generally favorable to processing of other animal and human vocalizations, including formants of speech sounds.
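The 20% facilitation criterion can be expressed as simple arithmetic. The sketch below compares the response to the three-harmonic call model against the strongest single-component response, using hypothetical firing rates and assuming the strongest single component as the reference (the paper may define the reference differently).

```python
# Sketch: a >=20% facilitation criterion, expressed as the relative gain of the
# response to the full three-harmonic call model over the strongest response to any
# single component. Firing rates below are hypothetical, and the choice of reference
# (strongest single component) is an assumption.
import numpy as np

single_component_rates = np.array([12.0, 9.5, 7.0])   # spikes/s for each harmonic alone
combined_rate = 16.5                                   # spikes/s for the three-harmonic call

best_single = single_component_rates.max()
facilitation = (combined_rate - best_single) / best_single
print(f"facilitation = {100 * facilitation:.0f}%  ->  facilitated:", facilitation >= 0.20)
```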
Affiliation(s)
- Alexander G Akimov: Sechenov Institute of Evolutionary Physiology and Biochemistry, Russian Academy of Sciences, St. Petersburg, Russia
- Marina A Egorova: Sechenov Institute of Evolutionary Physiology and Biochemistry, Russian Academy of Sciences, St. Petersburg, Russia
- Günter Ehret: Institute of Neurobiology, University of Ulm, D-89069, Ulm, Germany
21. Lyzwa D, Wörgötter F. Neural and Response Correlations to Complex Natural Sounds in the Auditory Midbrain. Front Neural Circuits 2016; 10:89. PMID: 27891078; PMCID: PMC5102906; DOI: 10.3389/fncir.2016.00089.
Abstract
How natural communication sounds are spatially represented across the inferior colliculus, the main center of convergence for auditory information in the midbrain, is not known. The neural representation of the acoustic stimuli results from the interplay of locally differing input and the organization of spectral and temporal neural preferences that change gradually across the nucleus. This raises the question of how similar the neural representation of the communication sounds is across these gradients of neural preferences, and whether it also changes gradually. Analyzed neural recordings were multi-unit cluster spike trains from guinea pigs presented with a spectrotemporally rich set of eleven species-specific communication sounds. Using cross-correlation, we analyzed the response similarity of spiking activity across a broad frequency range for neurons of similar and different frequency tuning. Furthermore, we separated the contribution of the stimulus to the correlations to investigate whether similarity is only attributable to the stimulus, or, whether interactions exist between the multi-unit clusters that lead to neural correlations and whether these follow the same representation as the response correlations. We found that similarity of responses is dependent on the neurons' spatial distance for similarly and differently frequency-tuned neurons, and that similarity decreases gradually with spatial distance. Significant neural correlations exist, and contribute to the total response similarity. Our findings suggest that for multi-unit clusters in the mammalian inferior colliculus, the gradual response similarity with spatial distance to natural complex sounds is shaped by neural interactions and the gradual organization of neural preferences.
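Separating the stimulus contribution from the total response correlation is commonly done with a trial-shuffle (shift predictor); the sketch below applies that standard approach to synthetic binned responses and is not necessarily the paper's exact procedure.

```python
# Sketch: separating stimulus-driven ("signal") correlation from residual neural
# ("noise") correlation between two multi-unit sites with a trial-shuffle (shift
# predictor). Standard approach, synthetic binned spike counts (trials x time bins).
import numpy as np

rng = np.random.default_rng(4)
n_trials, n_bins = 50, 200
stim_drive = rng.gamma(2.0, 2.0, n_bins)          # shared stimulus-locked rate profile
shared_noise = rng.normal(0, 1, (n_trials, n_bins))
r1 = rng.poisson(stim_drive + np.clip(shared_noise, 0, None))
r2 = rng.poisson(stim_drive + np.clip(shared_noise, 0, None))

def corr(a, b):
    return np.corrcoef(a.ravel(), b.ravel())[0, 1]

total = corr(r1, r2)                              # same trials: stimulus + neural correlation
shift = corr(r1, np.roll(r2, 1, axis=0))          # trial-shuffled: stimulus part only
print(f"total correlation: {total:.2f}")
print(f"shift predictor (stimulus contribution): {shift:.2f}")
print(f"residual neural correlation: {total - shift:.2f}")
```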
Affiliation(s)
- Dominika Lyzwa: Department of Nonlinear Dynamics, Max Planck Institute for Dynamics and Self-Organization, Göttingen, Germany; Physics Department, Institute for Nonlinear Dynamics, Georg-August-University, Göttingen, Germany; Bernstein Focus Neurotechnology, Göttingen, Germany
- Florentin Wörgötter: Bernstein Focus Neurotechnology, Göttingen, Germany; Institute for Physics-Biophysics, Georg-August University, Göttingen, Germany
22. Ojima H, Horikawa J. Recognition of Modified Conditioning Sounds by Competitively Trained Guinea Pigs. Front Behav Neurosci 2016; 9:373. PMID: 26858617; PMCID: PMC4726754; DOI: 10.3389/fnbeh.2015.00373.
Abstract
The guinea pig (GP) is an often-used species in hearing research. However, behavioral studies are rare, especially in the context of sound recognition, because of difficulties in training these animals. We examined sound recognition in a social competitive setting in order to examine whether this setting could be used as an easy model. Two starved GPs were placed in the same training arena and compelled to compete for food after hearing a conditioning sound (CS), which was a repeat of almost identical sound segments. Through a 2-week intensive training, animals were trained to demonstrate a set of distinct behaviors solely to the CS. Then, each of them was subjected to generalization tests for recognition of sounds that had been modified from the CS in spectral, fine temporal and tempo (i.e., intersegment interval, ISI) dimensions. Results showed that they discriminated between the CS and band-rejected test sounds but had no preference for a particular frequency range for the recognition. In contrast, sounds modified in the fine temporal domain were largely perceived to be in the same category as the CS, except for the test sound generated by fully reversing the CS in time. Animals also discriminated sounds played at different tempos. Test sounds with ISIs shorter than that of the multi-segment CS were discriminated from the CS, while test sounds with ISIs longer than that of the CS segments were not. For the shorter ISIs, most animals initiated apparently positive food-access behavior as they did in response to the CS, but discontinued it during the sound-on period probably because of later recognition of tempo. Interestingly, the population range and mean of the delay time before animals initiated the food-access behavior were very similar among different ISI test sounds. This study, for the first time, demonstrates a wide aspect of sound discrimination abilities of the GP and will provide a way to examine tempo perception mechanisms using this animal species.
Affiliation(s)
- Hisayuki Ojima: Cognitive Neurobiology and The Center for Brain Integration Research (CBIR), Graduate School of Medical and Dental Sciences, Tokyo Medical and Dental University, Tokyo, Japan
- Junsei Horikawa: Computer Science and Engineering, Graduate School of Engineering, Toyohashi University of Technology, Toyohashi, Japan
23. Lyzwa D, Herrmann JM, Wörgötter F. Natural Vocalizations in the Mammalian Inferior Colliculus are Broadly Encoded by a Small Number of Independent Multi-Units. Front Neural Circuits 2016; 9:91. PMID: 26869890; PMCID: PMC4740783; DOI: 10.3389/fncir.2015.00091.
Abstract
How complex natural sounds are represented by the main converging center of the auditory midbrain, the central inferior colliculus, is an open question. We applied neural discrimination to determine the variation of detailed encoding of individual vocalizations across the best frequency gradient of the central inferior colliculus. The analysis was based on collective responses from several neurons. These multi-unit spike trains were recorded from guinea pigs exposed to a spectrotemporally rich set of eleven species-specific vocalizations. Spike trains of disparate units from the same recording were combined in order to investigate whether groups of multi-unit clusters represent the whole set of vocalizations more reliably than only one unit, and whether temporal response correlations between them facilitate an unambiguous neural representation of the vocalizations. We found a spatial distribution of the capability to accurately encode groups of vocalizations across the best frequency gradient. Different vocalizations are optimally discriminated at different locations of the best frequency gradient. Furthermore, groups of a few multi-unit clusters yield improved discrimination over only one multi-unit cluster between all tested vocalizations. However, temporal response correlations between units do not yield better discrimination. Our study is based on a large set of units of simultaneously recorded responses from several guinea pigs and electrode insertion positions. Our findings suggest a broadly distributed code for behaviorally relevant vocalizations in the mammalian inferior colliculus. Responses from a few non-interacting units are sufficient to faithfully represent the whole set of studied vocalizations with diverse spectrotemporal properties.
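To make the discrimination analysis above concrete, the sketch below (Python; the array layout, binning, and Euclidean nearest-centroid rule are illustrative assumptions, not the authors' implementation) classifies which vocalization was presented from binned multi-unit spike trains by leave-one-out template matching, the general flavor of neural discrimination used to compare single units with groups of units.

```python
import numpy as np

def discriminate(responses):
    """Leave-one-out nearest-centroid discrimination of vocalizations.

    responses: (n_calls, n_trials, n_bins) binned spike counts for each
    vocalization and stimulus repetition (hypothetical layout).
    Returns the fraction of held-out trials assigned to the correct call.
    """
    n_calls, n_trials, _ = responses.shape
    correct = 0
    for c in range(n_calls):
        for t in range(n_trials):
            test = responses[c, t]
            templates = []
            for c2 in range(n_calls):
                trials = responses[c2]
                if c2 == c:                       # exclude the held-out trial
                    trials = np.delete(trials, t, axis=0)
                templates.append(trials.mean(axis=0))
            dists = [np.linalg.norm(test - tpl) for tpl in templates]
            correct += int(np.argmin(dists) == c)
    return correct / (n_calls * n_trials)

# Toy example: 11 vocalizations, 20 trials each, 50 time bins of spike counts.
rng = np.random.default_rng(0)
acc = discriminate(rng.poisson(2.0, size=(11, 20, 50)).astype(float))
print(f"leave-one-out accuracy: {acc:.2f}")
```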
Collapse
Affiliation(s)
- Dominika Lyzwa
- Max Planck Institute for Dynamics and Self-Organization, Göttingen, Germany
- Institute for Nonlinear Dynamics, Physics Department, Georg-August-University, Göttingen, Germany
- Bernstein Focus Neurotechnology, Göttingen, Germany
| | - J. Michael Herrmann
- Bernstein Focus Neurotechnology, Göttingen, Germany
- Institute of Perception, Action and Behavior, School of Informatics, University of Edinburgh, Edinburgh, UK
| | - Florentin Wörgötter
- Bernstein Focus Neurotechnology, Göttingen, Germany
- Institute for Physics - Biophysics, Georg-August-University, Göttingen, Germany
| |
Collapse
|
24
|
Tomková M, Tomek J, Novák O, Zelenka O, Syka J, Brom C. Formation and disruption of tonotopy in a large-scale model of the auditory cortex. J Comput Neurosci 2015; 39:131-53. [PMID: 26344164 DOI: 10.1007/s10827-015-0568-2] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/30/2014] [Revised: 05/15/2015] [Accepted: 05/19/2015] [Indexed: 12/19/2022]
Abstract
There is ample experimental evidence describing changes of tonotopic organisation in the auditory cortex due to environmental factors. In order to uncover the underlying mechanisms, we designed a large-scale computational model of the auditory cortex. The model has up to 100,000 Izhikevich spiking neurons of 17 different types and almost 21 million synapses, which evolve according to Spike-Timing-Dependent Plasticity (STDP), with an architecture consistent with existing observations. Validation of the model revealed alternating synchronised/desynchronised states and different modes of oscillatory activity. We provide insight into these phenomena by analysing the activity of neuronal subtypes and testing different causal interventions in the simulation. Our model is able to produce experimental predictions on a cell-type basis. To study the influence of environmental factors on tonotopy, different types of auditory stimulation during the evolution of the network were modelled and compared. We found that strong white noise resulted in completely disrupted tonotopy, which is consistent with in vivo experimental observations. Stimulation with pure tones or spontaneous activity led to a similar degree of tonotopy as in the initial state of the network. Interestingly, weak white noise led to a substantial increase in tonotopy. As STDP was the only mechanism of plasticity in our model, our results suggest that STDP is a sufficient condition for the emergence and disruption of tonotopy under various types of stimuli. The presented large-scale model of the auditory cortex and the core simulator, SUSNOIMAC, have been made publicly available.
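The two model ingredients named above, Izhikevich spiking neurons and STDP, can be illustrated with a brief sketch. The Python snippet below uses Izhikevich's standard regular-spiking parameters and a generic pairwise additive STDP rule; it is a toy illustration of these components, not the authors' SUSNOIMAC simulator, and all parameter values are textbook defaults rather than values from the paper.

```python
import numpy as np

# Izhikevich regular-spiking neuron (standard published parameters) driven
# by a constant current -- a toy stand-in for a single model cell.
a, b, c, d = 0.02, 0.2, -65.0, 8.0
v, u = -65.0, b * -65.0
I = 10.0                                   # constant input current
spike_times = []
for t in range(1000):                      # 1 ms steps, 1 s of simulated time
    for _ in range(2):                     # two 0.5 ms sub-steps for stability
        v += 0.5 * (0.04 * v**2 + 5 * v + 140 - u + I)
    u += a * (b * v - u)
    if v >= 30.0:                          # spike: record and reset
        spike_times.append(t)
        v, u = c, u + d

# Pairwise additive STDP rule: potentiation when the presynaptic spike
# precedes the postsynaptic one, depression otherwise.
def stdp(dt_ms, a_plus=0.01, a_minus=0.012, tau_ms=20.0):
    """Weight change for a spike-time difference dt_ms = t_post - t_pre."""
    if dt_ms > 0:
        return a_plus * np.exp(-dt_ms / tau_ms)
    return -a_minus * np.exp(dt_ms / tau_ms)

print(len(spike_times), "spikes;", "dw at +10 ms =", round(stdp(10.0), 4))
```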
Collapse
Affiliation(s)
- Markéta Tomková
- Faculty of Mathematics and Physics, Charles University in Prague, Prague, Czech Republic; Life Sciences Interface Doctoral Training Centre, University of Oxford, Oxford, UK.
| | - Jakub Tomek
- Faculty of Mathematics and Physics, Charles University in Prague, Prague, Czech Republic; Life Sciences Interface Doctoral Training Centre, University of Oxford, Oxford, UK
| | - Ondřej Novák
- Department of Auditory Neuroscience, Institute of Experimental Medicine, Academy of Sciences of the Czech Republic, Prague, Czech Republic.
| | - Ondřej Zelenka
- Department of Auditory Neuroscience, Institute of Experimental Medicine, Academy of Sciences of the Czech Republic, Prague, Czech Republic
| | - Josef Syka
- Department of Auditory Neuroscience, Institute of Experimental Medicine, Academy of Sciences of the Czech Republic, Prague, Czech Republic
| | - Cyril Brom
- Faculty of Mathematics and Physics, Charles University in Prague, Prague, Czech Republic
| |
Collapse
|
25
|
High-field functional magnetic resonance imaging of vocalization processing in marmosets. Sci Rep 2015; 5:10950. [PMID: 26091254 PMCID: PMC4473644 DOI: 10.1038/srep10950] [Citation(s) in RCA: 40] [Impact Index Per Article: 4.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/28/2014] [Accepted: 04/29/2015] [Indexed: 11/17/2022] Open
Abstract
Vocalizations are behaviorally critical sounds, and this behavioral importance is reflected in the ascending auditory system, where conspecific vocalizations are increasingly over-represented at higher processing stages. Recent evidence suggests that, in macaques, this increasing selectivity for vocalizations might culminate in a cortical region that is densely populated by vocalization-preferring neurons. Such a region might be a critical node in the representation of vocal communication sounds, underlying the recognition of vocalization type, caller and social context. These results raise the questions of whether cortical specializations for vocalization processing exist in other species, their cortical location, and their relationship to the auditory processing hierarchy. To explore cortical specializations for vocalizations in another species, we performed high-field fMRI of the auditory cortex of a vocal New World primate, the common marmoset (Callithrix jacchus). Using a sparse imaging paradigm, we discovered a caudal-rostral gradient for the processing of conspecific vocalizations in marmoset auditory cortex, with regions of the anterior temporal lobe close to the temporal pole exhibiting the highest preference for vocalizations. These results demonstrate similar cortical specializations for vocalization processing in macaques and marmosets, suggesting that cortical specializations for vocal processing might have evolved before the lineages of these species diverged.
Collapse
|
26
|
Markovitz CD, Hogan PS, Wesen KA, Lim HH. Pairing broadband noise with cortical stimulation induces extensive suppression of ascending sensory activity. J Neural Eng 2015; 12:026006. [PMID: 25686163 PMCID: PMC4359690 DOI: 10.1088/1741-2560/12/2/026006] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/19/2022]
Abstract
OBJECTIVE: The corticofugal system can alter coding along the ascending sensory pathway. Within the auditory system, electrical stimulation of the auditory cortex (AC) paired with a pure tone can cause egocentric shifts in the tuning of auditory neurons, making them more sensitive to the pure tone frequency. Since tinnitus has been linked with hyperactivity across auditory neurons, we sought to develop a new neuromodulation approach that could suppress a wide range of neurons rather than enhance specific frequency-tuned neurons. APPROACH: We performed experiments in the guinea pig to assess the effects of cortical stimulation paired with broadband noise (PN-Stim) on ascending auditory activity within the central nucleus of the inferior colliculus (CNIC), a widely studied region for AC stimulation paradigms. MAIN RESULTS: All eight stimulated AC subregions induced extensive suppression of activity across the CNIC that was not possible with noise stimulation alone. This suppression built up over time and remained after the PN-Stim paradigm. SIGNIFICANCE: We propose that the corticofugal system is designed to decrease the brain's input gain to irrelevant stimuli and PN-Stim is able to artificially amplify this effect to suppress neural firing across the auditory system. The PN-Stim concept may have potential for treating tinnitus and other neurological disorders.
Collapse
Affiliation(s)
- Craig D. Markovitz
- University of Minnesota, Department of Biomedical Engineering, Minneapolis, MN USA
| | - Patrick S. Hogan
- University of Minnesota, Department of Biomedical Engineering, Minneapolis, MN USA
| | - Kyle A. Wesen
- University of Minnesota, Department of Biomedical Engineering, Minneapolis, MN USA
| | - Hubert H. Lim
- University of Minnesota, Department of Biomedical Engineering, Minneapolis, MN USA
- University of Minnesota, Department of Otolaryngology-Head and Neck Surgery, Minneapolis, MN USA
- University of Minnesota, Institute for Translational Neuroscience, Minneapolis, MN USA
| |
Collapse
|
27
|
Lim HH, Lenarz T. Auditory midbrain implant: research and development towards a second clinical trial. Hear Res 2015; 322:212-23. [PMID: 25613994 DOI: 10.1016/j.heares.2015.01.006] [Citation(s) in RCA: 28] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 08/01/2014] [Revised: 12/04/2014] [Accepted: 01/08/2015] [Indexed: 11/30/2022]
Abstract
The cochlear implant is considered one of the most successful neural prostheses to date, which was made possible by visionaries who continued to develop the cochlear implant through multiple technological and clinical challenges. However, patients without a functional auditory nerve or implantable cochlea cannot benefit from a cochlear implant. The focus of the paper is to review the development and translation of a new type of central auditory prosthesis for this group of patients that is known as the auditory midbrain implant (AMI) and is designed for electrical stimulation within the inferior colliculus. The rationale and results for the first AMI clinical study using a multi-site single-shank array will be presented initially. Although the AMI has achieved encouraging results in terms of safety and improvements in lip-reading capabilities and environmental awareness, it has not yet provided sufficient speech perception. Animal and human data will then be presented to show that a two-shank AMI array can potentially improve hearing performance by targeting specific neurons of the inferior colliculus. A new two-shank array, stimulation strategy, and surgical approach are planned for the AMI that are expected to improve hearing performance in the patients who will be implanted in an upcoming clinical trial funded by the National Institutes of Health. Positive outcomes from this clinical trial will motivate new efforts and developments toward improving central auditory prostheses for those who cannot sufficiently benefit from cochlear implants. This article is part of a Special Issue entitled "Lasker Award".
Collapse
Affiliation(s)
- Hubert H Lim
- Department of Biomedical Engineering, Department of Otolaryngology, and Institute for Translational Neuroscience, University of Minnesota, 312 Church Street S.E., Minneapolis, MN, 55455, USA.
| | - Thomas Lenarz
- Department of Otolaryngology, Hannover Medical School, Carl-Neuberg-Str.1, Hannover, 30625, Germany.
| |
Collapse
|
28
|
Straka MM, Schmitz S, Lim HH. Response features across the auditory midbrain reveal an organization consistent with a dual lemniscal pathway. J Neurophysiol 2014; 112:981-98. [PMID: 25128560 DOI: 10.1152/jn.00008.2014] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
The central auditory system has traditionally been divided into lemniscal and nonlemniscal pathways leading from the midbrain through the thalamus to the cortex. This view has served as an organizing principle for studying, modeling, and understanding the encoding of sound within the brain. However, there is evidence that the lemniscal pathway could be further divided into at least two subpathways, each potentially coding for sound in different ways. We investigated whether such an interpretation is supported by the spatial distribution of response features in the central nucleus of the inferior colliculus (ICC), the part of the auditory midbrain assigned to the lemniscal pathway. We recorded responses to pure tone stimuli in the ICC of ketamine-xylazine-anesthetized guinea pigs and used three-dimensional brain reconstruction techniques to map the location of the recording sites. Compared with neurons in caudal-and-medial regions within an isofrequency lamina of the ICC, neurons in rostral-and-lateral regions responded with shorter first-spike latencies with less spiking jitter, shorter durations of spiking responses, a higher proportion of spikes occurring near the onset of the stimulus, lower thresholds, and larger local field potentials with shorter latencies. Further analysis revealed two distinct clusters of response features located in either the caudal-and-medial or the rostral-and-lateral parts of the isofrequency laminae of the ICC. Thus we report substantial differences in coding properties in two regions of the ICC that are consistent with the hypothesis that the lemniscal pathway is made up of at least two distinct subpathways from the midbrain up to the cortex.
Collapse
Affiliation(s)
- Małgorzata M Straka
- Department of Biomedical Engineering, University of Minnesota, Twin Cities, Minneapolis, Minnesota;
| | - Samuel Schmitz
- Department of Biomedical Engineering, University of Minnesota, Twin Cities, Minneapolis, Minnesota
| | - Hubert H Lim
- Department of Biomedical Engineering, University of Minnesota, Twin Cities, Minneapolis, Minnesota; Institute for Translational Neuroscience, University of Minnesota, Twin Cities, Minneapolis, Minnesota; and Department of Otolaryngology, University of Minnesota, Twin Cities, Minneapolis, Minnesota
| |
Collapse
|
29
|
Dimitrov AG, Cummins GI, Mayko ZM, Portfors CV. Inhibition does not affect the timing code for vocalizations in the mouse auditory midbrain. Front Physiol 2014; 5:140. [PMID: 24795640 PMCID: PMC3997027 DOI: 10.3389/fphys.2014.00140] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/29/2013] [Accepted: 03/23/2014] [Indexed: 11/13/2022] Open
Abstract
Many animals use a diverse repertoire of complex acoustic signals to convey different types of information to other animals. The information in each vocalization therefore must be coded by neurons in the auditory system. One way in which the auditory system may discriminate among different vocalizations is by having highly selective neurons, where only one or two different vocalizations evoke a strong response from a single neuron. Another strategy is to have specific spike timing patterns for particular vocalizations such that each neural response can be matched to a specific vocalization. Both of these strategies seem to occur in the auditory midbrain of mice. The neural mechanisms underlying rate and time coding are unclear; however, it is likely that inhibition plays a role. Here, we examined whether inhibition is involved in shaping neural selectivity to vocalizations via rate and/or time coding in the mouse inferior colliculus (IC). We examined extracellular single unit responses to vocalizations before and after iontophoretically blocking GABAA and glycine receptors in the IC of awake mice. We then applied a number of neurometrics to examine the rate and timing information of individual neurons. We initially evaluated the neuronal responses using inspection of the raster plots, spike-counting measures of response rate and stimulus preference, and a measure of maximum available stimulus-response mutual information. Subsequently, we used two different event sequence distance measures, one based on vector space embedding, and one derived from the Victor/Purpura Dq metric, to direct hierarchical clustering of responses. In general, we found that the most salient feature of pharmacologically blocking inhibitory receptors in the IC was the lack of major effects on the functional properties of IC neurons. Blocking inhibition did increase response rate to vocalizations, as expected. However, it did not significantly affect spike timing or stimulus selectivity of the studied neurons. We observed two main effects when inhibition was locally blocked: (1) Highly selective neurons maintained their selectivity and the information about the stimuli did not change, but response rate increased slightly. (2) Neurons that responded to multiple vocalizations in the control condition also responded to the same stimuli in the test condition, with similar timing and pattern, but with a greater number of spikes. For some neurons, the information rate increased, but the information per spike decreased. In many of these neurons, vocalizations that generated no responses in the control condition generated some response in the test condition. Overall, we found that inhibition in the IC does not play a substantial role in creating the distinguishable and reliable neuronal temporal spike patterns in response to different vocalizations.
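For reference, the Victor/Purpura metric mentioned above assigns a distance to a pair of spike trains via the cheapest sequence of spike insertions, deletions and time shifts. The sketch below is a standard dynamic-programming implementation in Python; the cost parameter q and the example spike times are illustrative and are not taken from the study.

```python
import numpy as np

def victor_purpura(s1, s2, q):
    """Victor-Purpura spike-train distance.

    s1, s2: spike times (seconds); q: cost per second of shifting a spike.
    Allowed edits: insert/delete a spike (cost 1) or shift one (cost q*|dt|).
    """
    n, m = len(s1), len(s2)
    D = np.zeros((n + 1, m + 1))
    D[:, 0] = np.arange(n + 1)          # delete all spikes of s1
    D[0, :] = np.arange(m + 1)          # insert all spikes of s2
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            shift = D[i - 1, j - 1] + q * abs(s1[i - 1] - s2[j - 1])
            D[i, j] = min(D[i - 1, j] + 1, D[i, j - 1] + 1, shift)
    return D[n, m]

a = [0.010, 0.052, 0.120]               # example spike trains (s)
b = [0.012, 0.049, 0.200]
print(victor_purpura(a, b, q=200.0))    # q = 200/s corresponds to ~5 ms precision
```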
Collapse
Affiliation(s)
- Alexander G Dimitrov
- Department of Mathematics, Washington State University Vancouver, Vancouver, WA, USA
| | - Graham I Cummins
- Department of Mathematics, Washington State University Vancouver, Vancouver, WA, USA
| | - Zachary M Mayko
- School of Biological Sciences, Washington State University Vancouver, Vancouver, WA, USA
| | - Christine V Portfors
- School of Biological Sciences, Washington State University Vancouver, Vancouver, WA, USA
| |
Collapse
|
30
|
Wakefulness-promoting role of the inferior colliculus. Behav Brain Res 2013; 256:82-94. [DOI: 10.1016/j.bbr.2013.07.049] [Citation(s) in RCA: 10] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/21/2012] [Revised: 07/23/2013] [Accepted: 07/27/2013] [Indexed: 11/16/2022]
|
31
|
Rode T, Hartmann T, Hubka P, Scheper V, Lenarz M, Lenarz T, Kral A, Lim HH. Neural representation in the auditory midbrain of the envelope of vocalizations based on a peripheral ear model. Front Neural Circuits 2013; 7:166. [PMID: 24155694 PMCID: PMC3800787 DOI: 10.3389/fncir.2013.00166] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/30/2013] [Accepted: 09/24/2013] [Indexed: 11/24/2022] Open
Abstract
The auditory midbrain implant (AMI) consists of a single shank array (20 sites) for stimulation along the tonotopic axis of the central nucleus of the inferior colliculus (ICC) and has been safely implanted in deaf patients who cannot benefit from a cochlear implant (CI). The AMI improves lip-reading abilities and environmental awareness in the implanted patients. However, the AMI cannot achieve the high levels of speech perception possible with the CI. It appears the AMI can transmit sufficient spectral cues but with limited temporal cues required for speech understanding. Currently, the AMI uses a CI-based strategy, which was originally designed to stimulate each frequency region along the cochlea with amplitude-modulated pulse trains matching the envelope of the bandpass-filtered sound components. However, it is unclear if this type of stimulation with only a single site within each frequency lamina of the ICC can elicit sufficient temporal cues for speech perception. At least speech understanding in quiet is still possible with envelope cues as low as 50 Hz. Therefore, we investigated how ICC neurons follow the bandpass-filtered envelope structure of natural stimuli in ketamine-anesthetized guinea pigs. We identified a subset of ICC neurons that could closely follow the envelope structure (up to ~100 Hz) of a diverse set of species-specific calls, which was revealed by using a peripheral ear model to estimate the true bandpass-filtered envelopes observed by the brain. Although previous studies have suggested a complex neural transformation from the auditory nerve to the ICC, our data suggest that the brain maintains a robust temporal code in a subset of ICC neurons matching the envelope structure of natural stimuli. Clinically, these findings suggest that a CI-based strategy may still be effective for the AMI if the appropriate neurons are entrained to the envelope of the acoustic stimulus and can transmit sufficient temporal cues to higher centers.
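A minimal sketch of the kind of envelope-following measure described above is given below (Python, using SciPy). It bandpass filters a stimulus as a crude stand-in for one peripheral frequency channel, extracts the low-frequency (≤100 Hz) Hilbert envelope, and correlates it with a PSTH; the filter orders, band edges and toy signals are assumptions, not the peripheral ear model or the analysis used in the paper.

```python
import numpy as np
from scipy.signal import butter, hilbert, sosfiltfilt

def envelope_following(stimulus, psth, fs, band=(2000.0, 4000.0), cutoff=100.0):
    """Correlate a neuron's PSTH with the band-limited stimulus envelope.

    The stimulus is bandpass filtered (one crude 'peripheral channel'), its
    Hilbert envelope is lowpass filtered to <= cutoff, and the result is
    correlated with the PSTH (assumed to share the same sampling rate).
    Band edges and cutoff are illustrative values only.
    """
    sos_bp = butter(4, band, btype="band", fs=fs, output="sos")
    env = np.abs(hilbert(sosfiltfilt(sos_bp, stimulus)))
    sos_lp = butter(4, cutoff, btype="low", fs=fs, output="sos")
    env = sosfiltfilt(sos_lp, env)
    return np.corrcoef(env, psth)[0, 1]

fs = 20000                                                             # Hz
t = np.arange(0, 1.0, 1 / fs)
stim = np.sin(2 * np.pi * 3000 * t) * (1 + np.sin(2 * np.pi * 8 * t))  # 8 Hz AM tone
psth = 1 + np.sin(2 * np.pi * 8 * t - 0.5)                             # toy response
print(round(envelope_following(stim, psth, fs), 2))
```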
Collapse
Affiliation(s)
- Thilo Rode
- Department of Otorhinolaryngology, Hannover Medical University, Hannover, Germany
| | | | | | | | | | | | | | | |
Collapse
|
32
|
Suta D, Popelář J, Burianová J, Syka J. Cortical representation of species-specific vocalizations in Guinea pig. PLoS One 2013; 8:e65432. [PMID: 23785425 PMCID: PMC3681779 DOI: 10.1371/journal.pone.0065432] [Citation(s) in RCA: 10] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/31/2012] [Accepted: 04/30/2013] [Indexed: 11/18/2022] Open
Abstract
We investigated the representation of four typical guinea pig vocalizations in the auditory cortex (AI) in anesthetized guinea pigs with the aim of comparing cortical data to the data already published for identical calls in subcortical structures - the inferior colliculus (IC) and medial geniculate body (MGB). Like subcortical neurons, cortical neurons typically responded to many calls with a time-locked response to one or more temporal elements of the calls. The neuronal response patterns in the AI correlated well with the sound temporal envelope of chirp (an isolated short phrase), but correlated less well in the case of chutter and whistle (longer calls) or purr (a call with a fast repetition rate of phrases). Neuronal rate vs. characteristic frequency profiles provided only a coarse representation of the calls' frequency spectra. A comparison between the activity in the AI and that in subcortical structures showed a different transformation of the neuronal response patterns from the IC to the AI for individual calls: i) while the temporal representation of chirp remained unchanged, the representations of whistle and chutter were transformed at the thalamic level and the response to purr at the cortical level; ii) for the wideband calls (whistle, chirp) the rate representation of the call spectra was preserved in the AI and MGB at the level present in the IC, while in the case of low-frequency calls (chutter, purr), the representation was less precise in the AI and MGB than in the IC; iii) the difference in the response strength to natural and time-reversed whistle was found to be smaller in the AI than in the IC or MGB.
Collapse
Affiliation(s)
- Daniel Suta
- Department of Auditory Neuroscience, Institute of Experimental Medicine, Academy of Sciences of the Czech Republic, Prague, Czech Republic.
| | | | | | | |
Collapse
|
33
|
Ter-Mikaelian M, Semple MN, Sanes DH. Effects of spectral and temporal disruption on cortical encoding of gerbil vocalizations. J Neurophysiol 2013; 110:1190-204. [PMID: 23761696 DOI: 10.1152/jn.00645.2012] [Citation(s) in RCA: 10] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/06/2023] Open
Abstract
Animal communication sounds contain spectrotemporal fluctuations that provide powerful cues for detection and discrimination. Human perception of speech is influenced both by spectral and temporal acoustic features but is most critically dependent on envelope information. To investigate the neural coding principles underlying the perception of communication sounds, we explored the effect of disrupting the spectral or temporal content of five different gerbil call types on neural responses in the awake gerbil's primary auditory cortex (AI). The vocalizations were impoverished spectrally by reduction to 4 or 16 channels of band-passed noise. For this acoustic manipulation, the average firing rate of a neuron did not carry sufficient information to distinguish between call types. In contrast, the discharge patterns of individual AI neurons reliably categorized vocalizations composed of only four spectral bands with the appropriate natural token. The pooled responses of small populations of AI cells classified spectrally disrupted and natural calls with an accuracy that paralleled human performance on an analogous speech task. To assess whether the discharge pattern was robust to temporal perturbations of an individual call, vocalizations were disrupted by time-reversing segments of variable duration. For this acoustic manipulation, cortical neurons were relatively insensitive to short reversal lengths. Consistent with human perception of speech, these results indicate that the stable representation of communication sounds in AI is more dependent on sensitivity to slow temporal envelopes than on spectral detail.
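The spectral impoverishment described above (reducing a call to a few channels of band-passed noise) is essentially noise vocoding. The sketch below is a generic noise vocoder in Python/SciPy; the band edges, filter order and toy input are illustrative assumptions and do not reproduce the exact manipulation used in the study.

```python
import numpy as np
from scipy.signal import butter, hilbert, sosfiltfilt

def noise_vocode(x, fs, n_bands=4, lo=200.0, hi=8000.0):
    """Reduce a signal to n_bands of envelope-modulated band-passed noise.

    Each band's Hilbert envelope modulates band-limited noise, discarding
    spectral fine structure -- a generic vocoder sketch only.
    """
    edges = np.geomspace(lo, hi, n_bands + 1)       # log-spaced band edges
    rng = np.random.default_rng(0)
    out = np.zeros_like(x)
    for f1, f2 in zip(edges[:-1], edges[1:]):
        sos = butter(4, [f1, f2], btype="band", fs=fs, output="sos")
        env = np.abs(hilbert(sosfiltfilt(sos, x)))  # band envelope
        noise = sosfiltfilt(sos, rng.standard_normal(len(x)))
        out += env * noise
    return out / np.max(np.abs(out))

fs = 44100
t = np.arange(0, 0.5, 1 / fs)
call = np.sin(2 * np.pi * 1500 * t) * np.hanning(len(t))  # toy "call"
vocoded = noise_vocode(call, fs, n_bands=4)
print(vocoded.shape)
```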
Collapse
Affiliation(s)
- Maria Ter-Mikaelian
- Center for Neural Science, New York University, New York, New York 10003, USA
| | | | | |
Collapse
|
34
|
Conserved mechanisms of vocalization coding in mammalian and songbird auditory midbrain. Hear Res 2013; 305:45-56. [PMID: 23726970 DOI: 10.1016/j.heares.2013.05.005] [Citation(s) in RCA: 42] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/17/2012] [Revised: 03/23/2013] [Accepted: 05/11/2013] [Indexed: 11/23/2022]
Abstract
The ubiquity of social vocalizations among animals provides the opportunity to identify conserved mechanisms of auditory processing that subserve communication. Identifying auditory coding properties that are shared across vocal communicators will provide insight into how human auditory processing leads to speech perception. Here, we compare auditory response properties and neural coding of social vocalizations in auditory midbrain neurons of mammalian and avian vocal communicators. The auditory midbrain is a nexus of auditory processing because it receives and integrates information from multiple parallel pathways and provides the ascending auditory input to the thalamus. The auditory midbrain is also the first region in the ascending auditory system where neurons show complex tuning properties that are correlated with the acoustics of social vocalizations. Single unit studies in mice, bats and zebra finches reveal shared principles of auditory coding including tonotopy, excitatory and inhibitory interactions that shape responses to vocal signals, nonlinear response properties that are important for auditory coding of social vocalizations and modulation tuning. Additionally, single neuron responses in the mouse and songbird midbrain are reliable, selective for specific syllables, and rely on spike timing for neural discrimination of distinct vocalizations. We propose that future research on auditory coding of vocalizations in mouse and songbird midbrain neurons adopt similar experimental and analytical approaches so that conserved principles of vocalization coding may be distinguished from those that are specialized for each species. This article is part of a Special Issue entitled "Communication Sounds and the Brain: New Directions and Perspectives".
Collapse
|
35
|
Parsons CE, Young KS, Joensson M, Brattico E, Hyam JA, Stein A, Green AL, Aziz TZ, Kringelbach ML. Ready for action: a role for the human midbrain in responding to infant vocalizations. Soc Cogn Affect Neurosci 2013; 9:977-84. [PMID: 23720574 PMCID: PMC4090964 DOI: 10.1093/scan/nst076] [Citation(s) in RCA: 26] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/22/2023] Open
Abstract
Infant vocalizations are among the most biologically salient sounds in the environment and can draw the listener to the infant rapidly in both times of distress and joy. A region of the midbrain, the periaqueductal gray (PAG), has long been implicated in the control of urgent, survival-related behaviours. To test for PAG involvement in the processing of infant vocalizations, we recorded local field potentials from macroelectrodes implanted in this region in four adults who had undergone deep brain stimulation. We found a significant difference occurring as early as 49 ms after hearing a sound in activity recorded from the PAG in response to infant vocalizations compared with constructed control sounds and adult and animal affective vocalizations. This difference was not present in recordings from thalamic electrodes implanted in three of the patients. Time-frequency analyses revealed distinct patterns of activity in the PAG for infant vocalizations, constructed control sounds, and adult and animal vocalizations. These results suggest that human infant vocalizations can be discriminated from other emotional or acoustically similar sounds early in the auditory pathway. We propose that this specific, rapid activity in response to infant vocalizations may reflect the initiation of a state of heightened alertness necessary to instigate protective caregiving.
Collapse
Affiliation(s)
- Christine E Parsons
- University Department of Psychiatry, University of Oxford, Oxford, OX3 7JX, UK, Department of Clinical Medicine, Center of Functionally Integrative Neuroscience, Aarhus University, 8000 Aarhus C, Denmark, Cognitive Brain Research Unit, Institute of Behavioral Sciences, University of Helsinki and Center of Excellence in Interdisciplinary Music Research, University of Jyväskylä, Finland, and Department of Neurosurgery, John Radcliffe Hospital, Oxford, OX3 9DU, UK
| | - Katherine S Young
- University Department of Psychiatry, University of Oxford, Oxford, OX3 7JX, UK, Department of Clinical Medicine, Center of Functionally Integrative Neuroscience, Aarhus University, 8000 Aarhus C, Denmark, Cognitive Brain Research Unit, Institute of Behavioral Sciences, University of Helsinki and Center of Excellence in Interdisciplinary Music Research, University of Jyväskylä, Finland, and Department of Neurosurgery, John Radcliffe Hospital, Oxford, OX3 9DU, UK
| | - Morten Joensson
- University Department of Psychiatry, University of Oxford, Oxford, OX3 7JX, UK, Department of Clinical Medicine, Center of Functionally Integrative Neuroscience, Aarhus University, 8000 Aarhus C, Denmark, Cognitive Brain Research Unit, Institute of Behavioral Sciences, University of Helsinki and Center of Excellence in Interdisciplinary Music Research, University of Jyväskylä, Finland, and Department of Neurosurgery, John Radcliffe Hospital, Oxford, OX3 9DU, UK
| | - Elvira Brattico
- University Department of Psychiatry, University of Oxford, Oxford, OX3 7JX, UK, Department of Clinical Medicine, Center of Functionally Integrative Neuroscience, Aarhus University, 8000 Aarhus C, Denmark, Cognitive Brain Research Unit, Institute of Behavioral Sciences, University of Helsinki and Center of Excellence in Interdisciplinary Music Research, University of Jyväskylä, Finland, and Department of Neurosurgery, John Radcliffe Hospital, Oxford, OX3 9DU, UK
| | - Jonathan A Hyam
- University Department of Psychiatry, University of Oxford, Oxford, OX3 7JX, UK, Department of Clinical Medicine, Center of Functionally Integrative Neuroscience, Aarhus University, 8000 Aarhus C, Denmark, Cognitive Brain Research Unit, Institute of Behavioral Sciences, University of Helsinki and Center of Excellence in Interdisciplinary Music Research, University of Jyväskylä, Finland, and Department of Neurosurgery, John Radcliffe Hospital, Oxford, OX3 9DU, UK
| | - Alan Stein
- University Department of Psychiatry, University of Oxford, Oxford, OX3 7JX, UK, Department of Clinical Medicine, Center of Functionally Integrative Neuroscience, Aarhus University, 8000 Aarhus C, Denmark, Cognitive Brain Research Unit, Institute of Behavioral Sciences, University of Helsinki and Center of Excellence in Interdisciplinary Music Research, University of Jyväskylä, Finland, and Department of Neurosurgery, John Radcliffe Hospital, Oxford, OX3 9DU, UK
| | - Alexander L Green
- University Department of Psychiatry, University of Oxford, Oxford, OX3 7JX, UK, Department of Clinical Medicine, Center of Functionally Integrative Neuroscience, Aarhus University, 8000 Aarhus C, Denmark, Cognitive Brain Research Unit, Institute of Behavioral Sciences, University of Helsinki and Center of Excellence in Interdisciplinary Music Research, University of Jyväskylä, Finland, and Department of Neurosurgery, John Radcliffe Hospital, Oxford, OX3 9DU, UK
| | - Tipu Z Aziz
- University Department of Psychiatry, University of Oxford, Oxford, OX3 7JX, UK, Department of Clinical Medicine, Center of Functionally Integrative Neuroscience, Aarhus University, 8000 Aarhus C, Denmark, Cognitive Brain Research Unit, Institute of Behavioral Sciences, University of Helsinki and Center of Excellence in Interdisciplinary Music Research, University of Jyväskylä, Finland, and Department of Neurosurgery, John Radcliffe Hospital, Oxford, OX3 9DU, UK
| | - Morten L Kringelbach
- University Department of Psychiatry, University of Oxford, Oxford, OX3 7JX, UK, Department of Clinical Medicine, Center of Functionally Integrative Neuroscience, Aarhus University, 8000 Aarhus C, Denmark, Cognitive Brain Research Unit, Institute of Behavioral Sciences, University of Helsinki and Center of Excellence in Interdisciplinary Music Research, University of Jyväskylä, Finland, and Department of Neurosurgery, John Radcliffe Hospital, Oxford, OX3 9DU, UK
| |
Collapse
|
36
|
Straka MM, Schendel D, Lim HH. Neural integration and enhancement from the inferior colliculus up to different layers of auditory cortex. J Neurophysiol 2013; 110:1009-20. [PMID: 23719210 DOI: 10.1152/jn.00022.2013] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
While the cochlear implant has successfully restored hearing to many deaf patients, it cannot benefit those without a functional auditory nerve or an implantable cochlea. As an alternative, the auditory midbrain implant (AMI) has been developed and implanted into deaf patients. Consisting of a single-shank array, the AMI is designed for stimulation along the tonotopic gradient of the inferior colliculus (ICC). Although the AMI can provide frequency cues, it appears to insufficiently transmit temporal cues for speech understanding because repeated stimulation of a single site causes strong suppressive and refractory effects. Applying the electrical stimulation to at least two sites within an isofrequency lamina can circumvent these refractory processes. Moreover, coactivation with short intersite delays (<5 ms) can elicit cortical activation which is enhanced beyond the summation of activity induced by the individual sites. The goal of our study was to further investigate the role of the auditory cortex in this enhancement effect. In guinea pigs, we electrically stimulated two locations within an ICC lamina or along different laminae with varying interpulse intervals (0-10 ms) and recorded activity in different locations and layers of primary auditory cortex (A1). Our findings reveal a neural mechanism that integrates activity only from neurons located within the same ICC lamina for short spiking intervals (<6 ms). This mechanism leads to enhanced activity into layers III-V of A1 that is further magnified in supragranular layers. This integration mechanism may contribute to perceptual coding of different sound features that are relevant for improving AMI performance.
Collapse
Affiliation(s)
- Malgorzata M Straka
- Department of Biomedical Engineering, University of Minnesota, Twin Cities, Minneapolis, Minnesota, USA.
| | | | | |
Collapse
|
37
|
Pollak GD. The dominant role of inhibition in creating response selectivities for communication calls in the brainstem auditory system. Hear Res 2013; 305:86-101. [PMID: 23545427 DOI: 10.1016/j.heares.2013.03.001] [Citation(s) in RCA: 25] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/12/2012] [Revised: 02/20/2013] [Accepted: 03/06/2013] [Indexed: 10/27/2022]
Abstract
This review is concerned with how communication calls are processed and represented by populations of neurons in both the inferior colliculus (IC), the auditory midbrain nucleus, and the dorsal nucleus of the lateral lemniscus (DNLL), the nucleus just caudal to the IC. The review has five sections, each focusing on inhibition and its role in shaping response selectivity for communication calls. The first section presents the lack of response selectivity for calls in DNLL neurons and discusses why inhibition plays virtually no role in shaping selectivity. In the second section, the lack of selectivity in the DNLL is contrasted with the high degree of response selectivity in the IC. The third section then reviews how inhibition in the IC shapes response selectivities for calls, and how those selectivities can create a population response with a distinctive response profile to a particular call, which differs from the population profile evoked by any other call. The fourth section is concerned with the specifics of inhibition in the IC, and how the interaction of excitation and inhibition creates directional selectivities for frequency modulations, one of the principal acoustic features of communication signals. The two major hypotheses for directional selectivity are presented. One is the timing hypothesis, which holds that the precise timing of excitation relative to inhibition is the feature that shapes directionality. The other hypothesis is that the relative magnitudes of excitation and inhibition are the dominant features that shape directionality, where timing is relatively unimportant. The final section then turns to the role of serotonin, a neuromodulator that can markedly change responses to calls in the IC. Serotonin provides a linkage between behavioral states and processing. This linkage is discussed in the final section together with the hypothesis that serotonin acts to enhance the contrast in the population responses to various calls over and above the distinctive population responses that were created by inhibition. This article is part of a Special Issue entitled "Communication Sounds and the Brain: New Directions and Perspectives".
Collapse
Affiliation(s)
- George D Pollak
- Section of Neurobiology and Center for Perceptual Systems, 337 Patterson Laboratory Building, The University of Texas at Austin, Austin, TX 78712, USA.
| |
Collapse
|
38
|
Profant O, Burianová J, Syka J. The response properties of neurons in different fields of the auditory cortex in the rat. Hear Res 2013; 296:51-9. [DOI: 10.1016/j.heares.2012.11.021] [Citation(s) in RCA: 23] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 07/04/2012] [Revised: 10/19/2012] [Accepted: 11/18/2012] [Indexed: 10/27/2022]
|
39
|
Noto CT, Mahzar S, Gnadt J, Kanwal JS. A flexible user-interface for audiovisual presentation and interactive control in neurobehavioral experiments. F1000Res 2013; 2:20. [PMID: 24627768 PMCID: PMC3907162 DOI: 10.12688/f1000research.2-20.v2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Accepted: 05/16/2013] [Indexed: 11/23/2022] Open
Abstract
A major problem facing behavioral neuroscientists is a lack of unified, vendor-distributed data acquisition systems that allow stimulus presentation and behavioral monitoring while recording neural activity. Numerous systems perform one of these tasks well independently, but to our knowledge, a useful package with a straightforward user interface does not exist. Here we describe the development of a flexible, script-based user interface that enables customization for real-time stimulus presentation, behavioral monitoring and data acquisition. The experimental design can also incorporate neural microstimulation paradigms. We used this interface to deliver multimodal, auditory and visual (images or video) stimuli to a nonhuman primate and acquire single-unit data. Our design is cost-effective and works well with commercially available hardware and software. Our design incorporates a script, providing high-level control of data acquisition via a sequencer running on a digital signal processor to enable behaviorally triggered control of the presentation of visual and auditory stimuli. Our experiments were conducted in combination with eye-tracking hardware. The script, however, is designed to be broadly useful to neuroscientists who may want to deliver stimuli of different modalities using any animal model.
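As a rough illustration of the behaviorally triggered control flow described above, the sketch below shows a minimal, vendor-neutral trial loop in Python: each trial waits for a behavioral trigger (here, a stubbed eye-tracker fixation check) before presenting a paired visual and auditory stimulus. All function names and parameters are hypothetical placeholders, not the authors' script interface or hardware API.

```python
import random
import time

# Hypothetical stand-ins for the rig's eye tracker and stimulus playback;
# a real setup would replace these with vendor-specific calls.
def gaze_in_window():
    """Return True if the subject is fixating the target window (stubbed)."""
    return random.random() < 0.8

def present(stimulus, duration=0.1):
    print("presenting", stimulus)
    time.sleep(duration)                 # stimulus duration placeholder

def wait_for_fixation(timeout=2.0, poll=0.005):
    """Poll the (stubbed) eye tracker until fixation or timeout."""
    start = time.time()
    while time.time() - start < timeout:
        if gaze_in_window():
            return True
        time.sleep(poll)
    return False

TRIALS = [{"image": "face_01.png", "audio": "call_01.wav"},
          {"image": "face_02.png", "audio": "call_02.wav"}]

# Behaviorally triggered sequence: each trial waits for the behavioral
# trigger (fixation), then presents the paired audiovisual stimulus.
for i, trial in enumerate(TRIALS):
    if wait_for_fixation():
        present(trial["image"])
        present(trial["audio"])
        print("trial", i, "completed")
    else:
        print("trial", i, "aborted (no fixation)")
```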
Collapse
Affiliation(s)
- Christopher T Noto
- Department of Neurology, Georgetown University, Washington DC, 20057, USA ; Department of Physiology and Biophysics, Georgetown University, Washington DC, 20057, USA
| | - Suleman Mahzar
- Department of Neurology, Georgetown University, Washington DC, 20057, USA ; Department of Physiology and Biophysics, Georgetown University, Washington DC, 20057, USA ; Current address: Faculty of Computer Science and Engineering, GIK Institute, Topi, 23640, Pakistan
| | - James Gnadt
- Department of Physiology and Biophysics, Georgetown University, Washington DC, 20057, USA ; Current address: NINDS/NIH, Systems and Cognitive Neuroscience, Neuroscience Center, Bethesda MD, 20892, USA
| | - Jagmeet S Kanwal
- Department of Neurology, Georgetown University, Washington DC, 20057, USA ; Department of Physiology and Biophysics, Georgetown University, Washington DC, 20057, USA
| |
Collapse
|
40
|
Grimsley JMS, Shanbhag SJ, Palmer AR, Wallace MN. Processing of communication calls in Guinea pig auditory cortex. PLoS One 2012; 7:e51646. [PMID: 23251604 PMCID: PMC3520958 DOI: 10.1371/journal.pone.0051646] [Citation(s) in RCA: 37] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/23/2011] [Accepted: 11/08/2012] [Indexed: 11/25/2022] Open
Abstract
Vocal communication is an important aspect of guinea pig behaviour and a large contributor to their acoustic environment. We postulated that some cortical areas have distinctive roles in processing conspecific calls. In order to test this hypothesis we presented exemplars from all ten of their main adult vocalizations to urethane anesthetised animals while recording from each of the eight areas of the auditory cortex. We demonstrate that the primary area (AI) and three adjacent auditory belt areas contain many units that give isomorphic responses to vocalizations. These are the ventrorostral belt (VRB), the transitional belt area (T) that is ventral to AI and the small area (area S) that is rostral to AI. Area VRB has a denser representation of cells that are better at discriminating among calls by using either a rate code or a temporal code than any other area. Furthermore, 10% of VRB cells responded to communication calls but did not respond to stimuli such as clicks, broadband noise or pure tones. Area S has a sparse distribution of call responsive cells that showed excellent temporal locking, 31% of which selectively responded to a single call. AI responded well to all vocalizations and was much more responsive to vocalizations than the adjacent dorsocaudal core area. Areas VRB, AI and S contained units with the highest levels of mutual information about call stimuli. Area T also responded well to some calls but seems to be specialized for low sound levels. The two dorsal belt areas are comparatively unresponsive to vocalizations and contain little information about the calls. AI projects to areas S, VRB and T, so there may be both rostral and ventral pathways for processing vocalizations in the guinea pig.
Collapse
Affiliation(s)
- Jasmine M. S. Grimsley
- Institute of Hearing Research, Medical Research Council, Nottingham, United Kingdom
- Department of Anatomy and Neurobiology, Northeast Ohio Medical University, Rootstown, Ohio, United States of America
| | - Sharad J. Shanbhag
- Department of Anatomy and Neurobiology, Northeast Ohio Medical University, Rootstown, Ohio, United States of America
| | - Alan R. Palmer
- Institute of Hearing Research, Medical Research Council, Nottingham, United Kingdom
| | - Mark N. Wallace
- Institute of Hearing Research, Medical Research Council, Nottingham, United Kingdom
| |
Collapse
|
41
|
Gruters KG, Groh JM. Sounds and beyond: multisensory and other non-auditory signals in the inferior colliculus. Front Neural Circuits 2012; 6:96. [PMID: 23248584 PMCID: PMC3518932 DOI: 10.3389/fncir.2012.00096] [Citation(s) in RCA: 74] [Impact Index Per Article: 6.2] [Reference Citation Analysis] [Abstract] [Key Words] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/18/2012] [Accepted: 11/15/2012] [Indexed: 11/20/2022] Open
Abstract
The inferior colliculus (IC) is a major processing center situated mid-way along both the ascending and descending auditory pathways of the brain stem. Although it is fundamentally an auditory area, the IC also receives anatomical input from non-auditory sources. Neurophysiological studies corroborate that non-auditory stimuli can modulate auditory processing in the IC and even elicit responses independent of coincident auditory stimulation. In this article, we review anatomical and physiological evidence for multisensory and other non-auditory processing in the IC. Specifically, the contributions of signals related to vision, eye movements and position, somatosensation, and behavioral context to neural activity in the IC will be described. These signals are potentially important for localizing sound sources, attending to salient stimuli, distinguishing environmental from self-generated sounds, and perceiving and generating communication sounds. They suggest that the IC should be thought of as a node in a highly interconnected sensory, motor, and cognitive network dedicated to synthesizing a higher-order auditory percept rather than simply reporting patterns of air pressure detected by the cochlea. We highlight some of the potential pitfalls that can arise from experimental manipulations that may disrupt the normal function of this network, such as the use of anesthesia or the severing of connections from cortical structures that project to the IC. Finally, we note that the presence of these signals in the IC has implications for our understanding not just of the IC but also of the multitude of other regions within and beyond the auditory system that are dependent on signals that pass through the IC. Whatever the IC “hears” would seem to be passed both “upward” to thalamus and thence to auditory cortex and beyond, as well as “downward” via centrifugal connections to earlier areas of the auditory pathway such as the cochlear nucleus.
Collapse
Affiliation(s)
- Kurtis G Gruters
- Department of Psychology and Neuroscience, Duke University, Durham, NC, USA
| | | |
Collapse
|
42
|
Ouda L, Syka J. Immunocytochemical profiles of inferior colliculus neurons in the rat and their changes with aging. Front Neural Circuits 2012; 6:68. [PMID: 23049499 PMCID: PMC3448074 DOI: 10.3389/fncir.2012.00068] [Citation(s) in RCA: 29] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/18/2012] [Accepted: 09/04/2012] [Indexed: 12/04/2022] Open
Abstract
The inferior colliculus (IC) plays a strategic role in the central auditory system in relaying and processing acoustical information, and therefore its age-related changes may significantly influence the quality of the auditory function. A very complex processing of acoustical stimuli occurs in the IC, as supported also by the fact that the rat IC contains more neurons than all other subcortical auditory structures combined. GABAergic neurons, which predominantly co-express parvalbumin (PV), are present in the central nucleus of the IC in large numbers and to a lesser extent in the dorsal and external/lateral cortices of the IC. On the other hand, calbindin (CB) and calretinin (CR) are prevalent in the dorsal and external cortices of the IC, with only a few positive neurons in the central nucleus. The relationship between CB and CR expression in the IC and any neurotransmitter system has not yet been well established, but the distribution and morphology of the immunoreactive neurons suggest that they are at least partially non-GABAergic cells. The expression of glutamate decarboxylase (GAD) (a key enzyme for GABA synthesis) and calcium binding proteins (CBPs) in the IC of rats undergoes pronounced changes with aging that involve mostly a decline in protein expression and a decline in the number of immunoreactive neurons. Similar age-related changes in GAD, CB, and CR expression are present in the IC of two rat strains with differently preserved inner ear function up to late senescence (Long-Evans and Fischer 344), which suggests that these changes do not depend exclusively on peripheral deafferentation but are, at least partially, of central origin. These changes may be associated with the age-related deterioration in the processing of the temporal parameters of acoustical stimuli, which is not correlated with hearing threshold shifts, and therefore may contribute to central presbycusis.
Collapse
Affiliation(s)
- Ladislav Ouda
- Institute of Experimental Medicine, Academy of Sciences of the Czech Republic, Prague, Czech Republic
| | | |
Collapse
|
43
|
|
44
|
Grimsley J, Palmer A, Wallace M. Age differences in the purr call distinguished by units in the adult guinea pig primary auditory cortex. Hear Res 2011; 277:134-42. [PMID: 21296136 PMCID: PMC4548717 DOI: 10.1016/j.heares.2011.01.018] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 10/21/2010] [Revised: 01/27/2011] [Accepted: 01/28/2011] [Indexed: 11/19/2022]
Abstract
Many communication calls contain information about the physical characteristics of the calling animal. During maturation of the guinea pig purr call the pitch becomes lower as the fundamental frequency progressively decreases from 476 to 261 Hz on average. Neurons in the primary auditory cortex (AI) often respond strongly to the purr and we postulated that some of them are capable of distinguishing between purr calls of different pitch. Consequently four pitch-shifted versions of a single call were used as stimuli. Many units in AI (79/182) responded to the purr call either with an onset response or with multiple bursts of firing that were time-locked to the phrases of the call. All had a characteristic frequency ≤5 kHz. Both types of unit altered their firing rate in response to pitch-shifted versions of the call. Of the responsive units, 41% (32/79) had a firing rate locked to the stimulus envelope that was at least 50% higher for one version of the call than any other. Some (14/32) had a preference that could be predicted from their frequency response area while others (18/32) were not predictable. We conclude that about 18% of stimulus-driven cells at the low-frequency end of AI are very sensitive to age-related changes in the purr call.
Collapse
Affiliation(s)
- J.M.S. Grimsley
- MRC Institute of Hearing Research, University Park, Nottingham NG7 2RD, UK
| | - A.R. Palmer
- MRC Institute of Hearing Research, University Park, Nottingham NG7 2RD, UK
| | - M.N. Wallace
- MRC Institute of Hearing Research, University Park, Nottingham NG7 2RD, UK
| |
Collapse
|
45
|
Difference in response reliability predicted by spectrotemporal tuning in the cochlear nuclei of barn owls. J Neurosci 2011; 31:3234-42. [PMID: 21368035 DOI: 10.1523/jneurosci.5422-10.2011] [Citation(s) in RCA: 19] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022] Open
Abstract
The brainstem auditory pathway is obligatory for all aural information. Brainstem auditory neurons must encode the level and timing of sounds, as well as their time-dependent spectral properties (fine structure and envelope), which are essential for sound discrimination. This study focused on envelope coding in the two cochlear nuclei of the barn owl, nucleus angularis (NA) and nucleus magnocellularis (NM). NA and NM receive input from bifurcating auditory nerve fibers and initiate processing pathways specialized in encoding interaural level (ILD) and interaural time (ITD) differences, respectively. We found that NA neurons, although unable to encode stimulus phase accurately, lock more strongly to the stimulus envelope than NM neurons. The spectrotemporal receptive fields (STRFs) of NA neurons exhibit a pre-excitatory suppressive field. Using multilinear regression analysis and computational modeling, we show that this feature of the STRFs can account for the enhanced across-trial response reliability by locking spikes to the stimulus envelope. Our findings indicate a dichotomy in envelope coding between the time and intensity processing pathways as early as the level of the cochlear nuclei, which allows the ILD pathway to encode envelope information with greater fidelity than the ITD pathway. Furthermore, we demonstrate that the properties of the neurons' STRFs can be quantitatively related to spike timing reliability.
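The across-trial response reliability that this study relates to STRF structure can be quantified in several ways; a common one is the mean pairwise correlation of smoothed single-trial spike trains. The sketch below illustrates that generic measure with simulated spike trains; the smoothing width and the data are assumptions, not the authors' method or recordings.

```python
import numpy as np

def trial_reliability(spike_matrix, sigma_bins=5):
    """Mean pairwise Pearson correlation of Gaussian-smoothed single-trial
    spike trains (rows = trials, columns = time bins)."""
    t = np.arange(-3 * sigma_bins, 3 * sigma_bins + 1)
    kernel = np.exp(-t**2 / (2.0 * sigma_bins**2))
    kernel /= kernel.sum()
    smoothed = np.array([np.convolve(trial, kernel, mode="same")
                         for trial in spike_matrix])
    corr = np.corrcoef(smoothed)
    return corr[np.triu_indices_from(corr, k=1)].mean()

# Hypothetical spike trains: 10 trials x 200 time bins of a repeated noise stimulus
rng = np.random.default_rng(0)
envelope = 0.2 + 0.8 * np.abs(np.sin(np.linspace(0, 6 * np.pi, 200)))
trials = rng.binomial(1, 0.1 * envelope, size=(10, 200))
print(trial_reliability(trials))  # higher values = more reliable across trials
```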
46
A possible role for a paralemniscal auditory pathway in the coding of slow temporal information. Hear Res 2010; 272:125-34. [PMID: 21094680 DOI: 10.1016/j.heares.2010.10.009] [Citation(s) in RCA: 13] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 08/17/2010] [Revised: 09/27/2010] [Accepted: 10/19/2010] [Indexed: 11/20/2022]
Abstract
Low-frequency temporal information present in speech is critical for normal perception; however, the neural mechanism underlying the differentiation of slow rates in acoustic signals is not known. Data from the rat trigeminal system suggest that the paralemniscal pathway may be specifically tuned to code low-frequency temporal information. We tested whether the same holds in the auditory system by measuring the representation of temporal rate in the lemniscal and paralemniscal auditory thalamus and cortex of the guinea pig. As in the trigeminal system, responses measured in the auditory thalamus indicate that slow rates are differentially represented in a paralemniscal pathway. In cortex, both lemniscal and paralemniscal neurons showed sensitivity to slow rates. We speculate that a paralemniscal pathway in the auditory system may be specifically tuned to code the low-frequency temporal information present in acoustic signals. These data suggest that the somatosensory and auditory modalities have parallel subcortical pathways that separately process slow rates and the spatial representation of the sensory periphery.
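One standard way to quantify how well a neuron represents a given temporal rate, in the spirit of the rate comparisons described above, is vector strength, which measures how tightly spikes lock to the stimulus period. The sketch below is a generic illustration with invented spike times; it is not necessarily the analysis used in the study.

```python
import numpy as np

def vector_strength(spike_times, rate_hz):
    """Vector strength of spike locking to a periodic stimulus at rate_hz
    (1 = perfect phase locking, ~0 = no locking)."""
    phases = 2.0 * np.pi * rate_hz * np.asarray(spike_times, dtype=float)
    return np.hypot(np.cos(phases).sum(), np.sin(phases).sum()) / phases.size

# Hypothetical spikes loosely locked to a slow 4 Hz repetition rate over 2 s
rng = np.random.default_rng(1)
spikes = np.concatenate([k / 4.0 + rng.normal(0.0, 0.02, 3) for k in range(8)])
print(vector_strength(spikes, 4.0))   # high: spikes follow the slow rate
print(vector_strength(spikes, 40.0))  # low: the same jitter destroys locking at a fast rate
```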
47
Pollak GD. Discriminating among complex signals: the roles of inhibition for creating response selectivities. J Comp Physiol A Neuroethol Sens Neural Behav Physiol 2010; 197:625-40. [DOI: 10.1007/s00359-010-0602-9] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/04/2010] [Revised: 10/11/2010] [Accepted: 10/17/2010] [Indexed: 12/18/2022]
48
Razak KA, Fuzessery ZM. Experience-dependent development of vocalization selectivity in the auditory cortex. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2010; 128:1446-1451. [PMID: 20815478 PMCID: PMC2945755 DOI: 10.1121/1.3377057] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/14/2010] [Revised: 03/05/2010] [Accepted: 03/10/2010] [Indexed: 05/29/2023]
Abstract
Vocalization-selective neurons are present in the auditory systems of several vertebrate groups. Vocalization selectivity is influenced by developmental experience, but the underlying mechanisms are only beginning to be understood. This review presents evidence for the hypothesis that plasticity in the timing and strength of inhibition is a mechanism underlying plasticity of vocalization selectivity. The pallid bat echolocates using downward frequency-modulated (FM) sweeps. Nearly 70% of neurons in its auditory cortex that are tuned to the echolocation frequency range respond selectively to the direction and rate of frequency change in the echolocation call. During development, FM rate selectivity matures early, whereas direction selectivity emerges later. Based on this developmental time course, it was hypothesized that FM direction selectivity, but not rate selectivity, is experience-dependent. This hypothesis was tested by altering echolocation experience during development. The results show that normal echolocation experience is required for both the refinement and the maintenance of direction selectivity. Interestingly, experience is required for the maintenance of rate selectivity but not for its initial development. Across all ages and experimental groups, the timing relationship between inhibitory and excitatory inputs explains sweep selectivity. These experiments suggest that inhibitory plasticity is a substrate for experience-dependent changes in vocalization selectivity.
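Direction selectivity of the kind discussed here is often summarized with a direction selectivity index computed from responses to downward versus upward sweeps. The following sketch shows one common form of that index with hypothetical spike counts; the underlying studies may have used a different exact metric.

```python
def direction_selectivity_index(rate_down, rate_up):
    """DSI in [-1, 1]: positive values indicate a preference for downward sweeps."""
    return (rate_down - rate_up) / (rate_down + rate_up)

# Hypothetical spike counts (spikes per sweep) for downward vs upward FM sweeps
print(direction_selectivity_index(rate_down=9.0, rate_up=3.0))  # 0.5 -> downward-selective
print(direction_selectivity_index(rate_down=5.0, rate_up=5.0))  # 0.0 -> non-selective
```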
Affiliation(s)
- Khaleel A Razak
- Department of Psychology, University of California, 900 University Avenue, Riverside, California 92521, USA
49
Effects of pulse phase duration and location of stimulation within the inferior colliculus on auditory cortical evoked potentials in a guinea pig model. J Assoc Res Otolaryngol 2010; 11:689-708. [PMID: 20717834 DOI: 10.1007/s10162-010-0229-0] [Citation(s) in RCA: 11] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/27/2009] [Accepted: 07/23/2010] [Indexed: 12/19/2022] Open
Abstract
The auditory midbrain implant (AMI), which consists of a single-shank array designed for stimulation within the central nucleus of the inferior colliculus (ICC), has been developed for deaf patients who cannot benefit from a cochlear implant. Currently, performance levels in clinical trials of the AMI are far below those achieved with the cochlear implant and vary dramatically across patients, in part because of stimulation-location effects. As an initial step towards improving the AMI, we investigated how stimulation of different regions along the isofrequency domain of the ICC, as well as varying pulse phase durations and levels, affected auditory cortical activity in anesthetized guinea pigs. The study was motivated by the need to determine in which region of the three-dimensional ICC structure to implant the single-shank array and which stimulus parameters to use in patients. Our findings indicate that stimulation of caudal-dorsal ICC regions with the AMI array elicits complex and unfavorable cortical activation properties. Our results also confirm the existence of different functional regions along the isofrequency domain of the ICC (i.e., a caudal-dorsal and a rostral-ventral region), a distinction that has not traditionally been made. Based on our study and on previous animal and human AMI findings, we may need to deliver more complex stimuli than those currently used in AMI patients to effectively activate the caudal ICC, or ensure that the single-shank AMI is implanted only into a rostral-ventral ICC region in future patients.
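For charge-balanced rectangular pulses of the kind used in such stimulation studies, the charge delivered per phase is simply current times phase duration, which is one reason pulse phase duration and current level trade off against each other as stimulus parameters. The sketch below uses hypothetical values and is only a unit-keeping illustration, not the study's stimulation protocol.

```python
def charge_per_phase_nc(current_ua, phase_duration_us):
    """Charge per phase in nanocoulombs for a rectangular pulse:
    Q = I * t, with microamps * microseconds = picocoulombs, / 1000 -> nC."""
    return current_ua * phase_duration_us / 1000.0

# Hypothetical settings: the same 10 nC per phase can be reached with
# a short high-current phase or a longer low-current phase
print(charge_per_phase_nc(current_ua=100.0, phase_duration_us=100.0))  # 10.0 nC
print(charge_per_phase_nc(current_ua=50.0, phase_duration_us=200.0))   # 10.0 nC
```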
50
Pollak GD, Xie R, Gittelman JX, Andoni S, Li N. The dominance of inhibition in the inferior colliculus. Hear Res 2010; 274:27-39. [PMID: 20685288 DOI: 10.1016/j.heares.2010.05.010] [Citation(s) in RCA: 46] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 01/25/2010] [Revised: 05/19/2010] [Accepted: 05/19/2010] [Indexed: 11/16/2022]
Abstract
Almost all of the processing that occurs in the various lower auditory nuclei converges upon a common target, the central nucleus of the inferior colliculus (ICc), making the ICc the nexus of the auditory system. A variety of new response properties are formed in the ICc through interactions among the excitatory and inhibitory inputs that converge upon it. Here we review studies that illustrate the dominant role inhibition plays in the ICc. We begin with studies of tuning curves and show how sideband inhibition shapes the variety of tuning curves in the ICc. We then show how inhibition shapes selective response properties for complex signals, focusing on selectivity for the sweep direction of frequency modulations (FM). In the final section, we consider results from in vivo whole-cell recordings that show how the parameters of the incoming excitation and inhibition interact to shape directional selectivity. We show that the post-synaptic potentials (PSPs) evoked by different signals can be similar yet evoke markedly different spike counts. In these cases, spike threshold acts as a non-linear amplifier that converts small differences in PSPs into large differences in spike output. Such differences between a cell's inputs and its outputs suggest that highly selective discharge properties can be created by only minor adjustments in the synaptic strengths evoked by one or both signals. These findings also suggest that plasticity of response features may be achieved with far fewer modifications of circuitry than previously supposed.
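The closing point, that spike threshold can convert small PSP differences into large differences in spike output, can be illustrated with a toy threshold-crossing model. All waveforms, amplitudes, and the threshold below are invented for illustration and are not the authors' recordings or model.

```python
import numpy as np

def spike_count(psp_trace, threshold):
    """Count threshold crossings from below (a toy stand-in for spike generation)."""
    above = psp_trace >= threshold
    return int(np.sum(above[1:] & ~above[:-1]))

# Two hypothetical PSP waveforms that differ by only ~10% in peak amplitude
t = np.linspace(0.0, 0.2, 2000)                 # 200 ms of membrane potential
envelope = np.sin(2 * np.pi * 25 * t) ** 2      # periodic depolarizations
psp_preferred = -65 + 16.0 * envelope           # mV, peaks at -49 mV
psp_null = -65 + 14.0 * envelope                # mV, peaks at -51 mV
threshold_mv = -50.0

print(spike_count(psp_preferred, threshold_mv))  # many crossings: peaks just exceed threshold
print(spike_count(psp_null, threshold_mv))       # zero: peaks never reach threshold
```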
Affiliation(s)
- George D Pollak
- Section of Neurobiology, The University of Texas at Austin, Austin, TX 78712, USA.