1. Parameshwarappa V, Norena AJ. The effects of acute and chronic noise trauma on stimulus-evoked activity across primary auditory cortex layers. J Neurophysiol 2024; 131:225-240. PMID: 38198658. DOI: 10.1152/jn.00427.2022.
Abstract
Exposure to intense noise is a major cause of sensorineural hearing loss and of auditory perception disorders, such as tinnitus and hyperacusis, which may have a central origin. The effects of noise-induced hearing loss on the auditory cortex have been documented in many studies. One limitation of these studies, however, is that the effects of noise trauma have mostly been studied at the granular layer (i.e., the main cortical recipient of thalamic input), whereas the cortex is a complex structure of six layers, each with its own pattern of connectivity and role in sensory processing. The present study investigates the effects of acute and chronic noise trauma on the laminar pattern of stimulus-evoked activity in the primary auditory cortex of the anesthetized guinea pig. We show that acute and chronic noise trauma are both followed by an increase in stimulus-evoked cortical responses, mostly in the granular and supragranular layers. Cortical responses also grow more monotonically with intensity level after noise trauma. There was minimal change, if any, in local field potential (LFP) amplitude after acute noise trauma, while LFP amplitude was enhanced after chronic noise trauma. Finally, the LFP and current source density analyses suggest that acute, and more markedly chronic, noise trauma is associated with the emergence of a new sink in the supragranular layer, suggesting that the supragranular layers become a major input recipient. We discuss the possible mechanisms and functional implications of these changes.
NEW & NOTEWORTHY Our study shows that cortical activity is enhanced after noise trauma and that the sequence of cortical column activation during stimulus-evoked responses is altered: the supragranular layer becomes a major input recipient. We speculate that these large cortical changes may play a key role in the auditory hypersensitivity (hyperacusis) that can be triggered by noise trauma in human subjects.
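A minimal sketch of the current source density (CSD) analysis the abstract refers to: CSD is estimated as the negative second spatial derivative of the laminar LFP, so a localized sink appears as a negative deflection at the corresponding depth. All data and the channel geometry below are synthetic and purely illustrative.

```python
import numpy as np

# Synthetic laminar LFP: 16 channels x 200 time samples; a sink should
# appear where the depth profile of the potential is most concave.
rng = np.random.default_rng(0)
n_ch, n_t, spacing_um = 16, 200, 100.0
depth = np.arange(n_ch) * spacing_um

# Toy LFP: a Gaussian-in-depth negative potential centered on channel 5
# (standing in for a supragranular site), oscillating at 20 Hz, plus noise.
t = np.linspace(0.0, 0.1, n_t)
profile = np.exp(-((depth - depth[5]) ** 2) / (2 * 150.0 ** 2))
lfp = -profile[:, None] * np.sin(2 * np.pi * 20.0 * t)[None, :]
lfp += 0.01 * rng.standard_normal((n_ch, n_t))

# Standard second-difference CSD estimator (conductivity folded into units).
h = spacing_um * 1e-6  # electrode spacing in metres
csd = -(lfp[2:, :] - 2.0 * lfp[1:-1, :] + lfp[:-2, :]) / h ** 2

# Strongest sink = most negative CSD value; +1 maps interior rows back
# to the original (0-indexed) channel numbering.
sink_channel = np.unravel_index(np.argmin(csd), csd.shape)[0] + 1
```

With this toy geometry the strongest sink lands on the channel carrying the potential bump, which is how a newly emerging supragranular sink would be read off a real CSD map.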
Affiliation(s)
- Vinay Parameshwarappa
- Centre National de la Recherche Scientifique, Aix-Marseille University, Marseille, France
- Arnaud J Norena
- Centre National de la Recherche Scientifique, Aix-Marseille University, Marseille, France
2. Agarwalla S, De A, Bandyopadhyay S. Predictive Mouse Ultrasonic Vocalization Sequences: Uncovering Behavioral Significance, Auditory Cortex Neuronal Preferences, and Social-Experience-Driven Plasticity. J Neurosci 2023; 43:6141-6163. PMID: 37541836. PMCID: PMC10476644. DOI: 10.1523/jneurosci.2353-22.2023.
Abstract
Mouse ultrasonic vocalizations (USVs) contain predictable sequential structures, like bird songs and speech. Neural representation of USVs in the mouse primary auditory cortex (Au1), and its plasticity with experience, has largely been studied with single syllables or dyads, without using the predictability in USV sequences; studies using playback of USV sequences have relied on randomly selected sequences from numerous possibilities. The current study uses mutual information to obtain context-specific natural sequences (NSeqs) of USV syllables, capturing the observed predictability in male USVs in different contexts of social interaction with females. The behavioral and physiological significance of NSeqs over random sequences (RSeqs) lacking predictability was examined. Female mice without prior social experience of exposure to males showed higher selectivity for NSeqs both behaviorally and at the cellular level, probed by expression of the immediate early gene c-fos in Au1. Au1 supragranular single units also showed higher selectivity for NSeqs over RSeqs. Social-experience-driven plasticity in encoding NSeqs and RSeqs in adult females was probed by examining neural selectivities to the same sequences before and after the above social experience. Single units showed enhanced selectivity for NSeqs over RSeqs after the social experience. Further, using two-photon Ca2+ imaging, we observed social-experience-dependent changes in the sequence selectivity of excitatory and somatostatin-positive inhibitory neurons, but not parvalbumin-positive inhibitory neurons, of Au1. Using optogenetics, somatostatin-positive neurons were identified as a possible mediator of the observed social-experience-driven plasticity. Our study uncovers the importance of predictive sequences and introduces mouse USVs as a promising model for studying context-dependent, speech-like communication.
SIGNIFICANCE STATEMENT Humans need to detect patterns in the sensory world. For instance, speech consists of meaningful sequences of acoustic tokens that are easily differentiated from randomly ordered tokens; the structure derives from the predictability of the tokens. Similarly, mouse vocalization sequences have predictability and undergo context-dependent modulation. Our work investigated whether mice differentiate such informative, predictable sequences (NSeqs) of communicative significance from RSeqs at the behavioral, molecular, and neuronal levels. Following a social experience in which NSeqs occur as a crucial component, mouse auditory cortical neurons become more sensitive to differences between NSeqs and RSeqs, although preference for individual tokens is unchanged. Thus, speech-like communication and its dysfunction may be studied at the circuit, cellular, and molecular levels in mice.
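The mutual-information criterion used to define NSeqs can be illustrated with a toy syllable stream: MI between successive syllables is high when the ordering is predictable and near zero when it is not. The sequences below are invented for illustration, not taken from the study.

```python
import math
import random
from collections import Counter

def pairwise_mi(seq):
    """Mutual information (bits) between syllables at positions t and t+1."""
    pairs = list(zip(seq, seq[1:]))
    n = len(pairs)
    p_xy = {k: c / n for k, c in Counter(pairs).items()}
    p_x = {k: c / n for k, c in Counter(x for x, _ in pairs).items()}
    p_y = {k: c / n for k, c in Counter(y for _, y in pairs).items()}
    return sum(p * math.log2(p / (p_x[x] * p_y[y]))
               for (x, y), p in p_xy.items())

nseq = list("AB" * 50)                          # strictly alternating: predictable
rng = random.Random(0)
rseq = [rng.choice("AB") for _ in range(100)]   # same tokens, no structure

mi_nseq = pairwise_mi(nseq)   # ~1 bit for strict alternation
mi_rseq = pairwise_mi(rseq)   # near 0, up to sampling bias
```

Real USV analyses use longer syllable alphabets and longer-range dependencies, but the same estimator separates predictable from shuffled orderings.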
Affiliation(s)
- Swapna Agarwalla
- Information Processing Laboratory, Department of Electronics and Electrical Communication Engineering, Indian Institute of Technology Kharagpur, Kharagpur 721302, India
- Amiyangshu De
- Information Processing Laboratory, Department of Electronics and Electrical Communication Engineering, Indian Institute of Technology Kharagpur, Kharagpur 721302, India
- Advanced Technology Development Centre, Indian Institute of Technology Kharagpur, Kharagpur 721302, India
- Sharba Bandyopadhyay
- Information Processing Laboratory, Department of Electronics and Electrical Communication Engineering, Indian Institute of Technology Kharagpur, Kharagpur 721302, India
- Advanced Technology Development Centre, Indian Institute of Technology Kharagpur, Kharagpur 721302, India
3. Sadagopan S, Kar M, Parida S. Quantitative models of auditory cortical processing. Hear Res 2023; 429:108697. PMID: 36696724. PMCID: PMC9928778. DOI: 10.1016/j.heares.2023.108697.
Abstract
To generate insight from experimental data, it is critical to understand the inter-relationships between individual data points and place them in context within a structured framework. Quantitative modeling can provide the scaffolding for such an endeavor. Our main objective in this review is to provide a primer on the range of quantitative tools available to experimental auditory neuroscientists. Quantitative modeling is advantageous because it can provide a compact summary of observed data, make underlying assumptions explicit, and generate predictions for future experiments. Quantitative models may be developed to characterize or fit observed data, to test theories of how a task may be solved by neural circuits, to determine how observed biophysical details might contribute to measured activity patterns, or to predict how an experimental manipulation would affect neural activity. In complexity, quantitative models can range from those that are highly biophysically realistic and that include detailed simulations at the level of individual synapses, to those that use abstract and simplified neuron models to simulate entire networks. Here, we survey the landscape of recently developed models of auditory cortical processing, highlighting a small selection of models to demonstrate how they help generate insight into the mechanisms of auditory processing. We discuss examples ranging from models that use details of synaptic properties to explain the temporal pattern of cortical responses to those that use modern deep neural networks to gain insight into human fMRI data. We conclude by discussing a biologically realistic and interpretable model that our laboratory has developed to explore aspects of vocalization categorization in the auditory pathway.
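As a concrete example of the "characterization" class of models the review surveys, here is a hedged sketch of a linear-nonlinear (LN) neuron driven by white noise, whose linear stage is recovered by a spike-triggered average; the filter shape and all parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
n_t, n_lag = 5000, 20
stim = rng.standard_normal(n_t)   # Gaussian white-noise stimulus

# Ground-truth linear stage (damped oscillation) and softplus nonlinearity.
true_filter = np.exp(-np.arange(n_lag) / 5.0) * np.sin(np.arange(n_lag) / 2.0)
drive = np.convolve(stim, true_filter)[:n_t]
rate = np.log1p(np.exp(drive))            # softplus firing rate
spikes = rng.poisson(rate)                # Poisson spike counts per bin

# For Gaussian white noise, the spike-triggered average recovers the
# linear filter up to a scale factor (Bussgang's theorem).
sta = np.array([np.dot(spikes[n_lag:], stim[n_lag - k:n_t - k])
                for k in range(n_lag)]) / spikes[n_lag:].sum()

similarity = np.corrcoef(sta, true_filter)[0, 1]   # should be close to 1
```

This is the simplest end of the complexity spectrum the review describes; biophysical and deep-network models trade this transparency for expressiveness.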
Affiliation(s)
- Srivatsun Sadagopan
- Department of Neurobiology, University of Pittsburgh, Pittsburgh, PA, USA; Center for Neuroscience, University of Pittsburgh, Pittsburgh, PA, USA; Center for the Neural Basis of Cognition, University of Pittsburgh, Pittsburgh, PA, USA; Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA, USA; Department of Communication Science and Disorders, University of Pittsburgh, Pittsburgh, PA, USA.
- Manaswini Kar
- Department of Neurobiology, University of Pittsburgh, Pittsburgh, PA, USA; Center for Neuroscience, University of Pittsburgh, Pittsburgh, PA, USA; Center for the Neural Basis of Cognition, University of Pittsburgh, Pittsburgh, PA, USA
- Satyabrata Parida
- Department of Neurobiology, University of Pittsburgh, Pittsburgh, PA, USA; Center for Neuroscience, University of Pittsburgh, Pittsburgh, PA, USA
4. Robotka H, Thomas L, Yu K, Wood W, Elie JE, Gahr M, Theunissen FE. Sparse ensemble neural code for a complete vocal repertoire. Cell Rep 2023; 42:112034. PMID: 36696266. PMCID: PMC10363576. DOI: 10.1016/j.celrep.2023.112034.
Abstract
The categorization of animal vocalizations into distinct, behaviorally relevant groups is an essential operation that the auditory system must perform for communication. This auditory object recognition is a difficult task that requires selectivity for the group-identifying acoustic features and invariance to renditions within each group. We find that small ensembles of auditory neurons in the forebrain of a social songbird can code the bird's entire vocal repertoire (∼10 call types). Ensemble neural discrimination is not, however, correlated with single-unit selectivity, but instead with how well the joint single-unit tunings to characteristic spectro-temporal modulations span the acoustic subspace optimized for the discrimination of call types. Thus, akin to face recognition in the visual system, call type recognition in the auditory system is based on a sparse code representing a small number of high-level features, and not on highly selective grandmother neurons.
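The ensemble-versus-single-unit point can be sketched with simulated data: individually weakly tuned neurons jointly support good call-type discrimination. The decoder below is a plain leave-one-out nearest-centroid classifier; all rates are made up.

```python
import numpy as np

rng = np.random.default_rng(2)
n_types, n_neurons, n_trials = 4, 8, 60

# Each call type evokes a distinct but weak mean-rate pattern, so no
# single neuron is strongly call-selective on its own.
type_means = rng.normal(10.0, 1.0, size=(n_types, n_neurons))
X = np.vstack([rng.normal(type_means[c], 1.5, size=(n_trials, n_neurons))
               for c in range(n_types)])
y = np.repeat(np.arange(n_types), n_trials)

# Leave-one-out nearest-centroid decoding of call type from the ensemble.
correct = 0
for i in range(len(y)):
    mask = np.arange(len(y)) != i
    cents = np.array([X[mask & (y == c)].mean(axis=0)
                      for c in range(n_types)])
    correct += np.argmin(((cents - X[i]) ** 2).sum(axis=1)) == y[i]
ensemble_acc = correct / len(y)   # well above the 0.25 chance level
```

A linear readout of the joint population is the simplest stand-in for the subspace-spanning argument the abstract makes.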
Affiliation(s)
- H Robotka
- Max Planck Institute for Ornithology, Seewiesen, Germany
- L Thomas
- University of California, Berkeley, Helen Wills Neuroscience Institute, Berkeley, CA, USA
- K Yu
- University of California, Berkeley, Helen Wills Neuroscience Institute, Berkeley, CA, USA
- W Wood
- University of California, Berkeley, Helen Wills Neuroscience Institute, Berkeley, CA, USA
- J E Elie
- University of California, Berkeley, Helen Wills Neuroscience Institute, Berkeley, CA, USA
- M Gahr
- Max Planck Institute for Ornithology, Seewiesen, Germany
- F E Theunissen
- Max Planck Institute for Ornithology, Seewiesen, Germany; University of California, Berkeley, Helen Wills Neuroscience Institute, Berkeley, CA, USA; Department of Psychology and Integrative Biology, University of California, Berkeley, Berkeley, CA, USA.
5. López-Jury L, García-Rosales F, González-Palomares E, Wetekam J, Pasek M, Hechavarria JC. A neuron model with unbalanced synaptic weights explains the asymmetric effects of anaesthesia on the auditory cortex. PLoS Biol 2023; 21:e3002013. PMID: 36802356. PMCID: PMC10013928. DOI: 10.1371/journal.pbio.3002013.
Abstract
Substantial progress in neuroscience has been made using anaesthetized preparations. Ketamine is one of the most widely used drugs in electrophysiology studies, but how ketamine affects neuronal responses is poorly understood. Here, we used in vivo electrophysiology and computational modelling to study how the auditory cortex of bats responds to vocalisations under anaesthesia and in wakefulness. In wakefulness, acoustic context increases neuronal discrimination of natural sounds. Neuron models predicted that ketamine affects the contextual discrimination of sounds regardless of the type of context heard by the animals (echolocation or communication sounds). However, empirical evidence showed that the predicted effect of ketamine occurs only if the acoustic context consists of low-pitched sounds (e.g., communication calls in bats). Using the empirical data, we updated the naïve models to show that the differential effects of ketamine on cortical responses can be mediated by unbalanced changes in the firing rate of feedforward inputs to cortex and by changes in the depression of thalamocortical synaptic receptors. Combined, our findings obtained in vivo and in silico reveal the effects, and the mechanisms, by which ketamine shapes cortical responses to vocalisations.
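One ingredient of the updated models, depression of thalamocortical synapses, can be sketched with a Tsodyks-Markram-style resource variable; the time constants and release fraction below are illustrative, not fitted values from the paper.

```python
import numpy as np

def depressing_synapse(spike_times, tau_rec=0.5, use=0.5):
    """Per-spike synaptic efficacy with resource depletion and recovery."""
    r = 1.0              # fraction of releasable resources available
    last_t = None
    efficacies = []
    for t in spike_times:
        if last_t is not None:
            # resources recover exponentially between spikes
            r = 1.0 - (1.0 - r) * np.exp(-(t - last_t) / tau_rec)
        eff = use * r    # released fraction scales the postsynaptic response
        r -= eff         # released resources are depleted
        efficacies.append(eff)
        last_t = t
    return np.array(efficacies)

# A fast presynaptic train depresses strongly; a slow one barely does.
fast = depressing_synapse(np.arange(0.0, 1.0, 0.02))   # 50 Hz for 1 s
slow = depressing_synapse(np.arange(0.0, 10.0, 2.0))   # 0.5 Hz for 10 s
```

Changing `tau_rec` or `use` under "anaesthesia" versus "wakefulness" is one simple way such a model can produce input-type-specific effects.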
Affiliation(s)
- Luciana López-Jury
- Institute for Cell Biology and Neuroscience, Goethe University, Frankfurt am Main, Germany
- Francisco García-Rosales
- Institute for Cell Biology and Neuroscience, Goethe University, Frankfurt am Main, Germany
- Ernst Strüngmann Institute (ESI) for Neuroscience in Cooperation with Max Planck Society, Frankfurt am Main, Germany
- Johannes Wetekam
- Institute for Cell Biology and Neuroscience, Goethe University, Frankfurt am Main, Germany
- Michael Pasek
- Institut für Theoretische Physik, Goethe University, Frankfurt am Main, Germany
- Julio C. Hechavarria
- Institute for Cell Biology and Neuroscience, Goethe University, Frankfurt am Main, Germany
6. Bálint A, Szabó Á, Andics A, Gácsi M. Dog and human neural sensitivity to voicelikeness: A comparative fMRI study. Neuroimage 2023; 265:119791. PMID: 36476565. DOI: 10.1016/j.neuroimage.2022.119791.
Abstract
Voice-sensitivity in the auditory cortex of a range of mammals has been proposed to be determined primarily by tuning to conspecific auditory stimuli, but recent human findings indicate a role for a more general tuning to voicelikeness. Vocal emotional valence, a central characteristic of vocalisations, has been linked to the same basic acoustic parameters across species. Comparative neuroimaging revealed that during voice perception, such acoustic parameters modulate emotional valence-sensitivity in auditory cortical regions in both family dogs and humans. To explore the role of voicelikeness in auditory emotional valence-sensitivity across species, here we constructed artificial emotional sounds in two sound categories, voice-like vs. sine-wave sounds, parametrically modulating two main acoustic parameters: f0 and call length. We hypothesised that if mammalian auditory systems are characterised by a general tuning to voicelikeness, voice-like sounds will be processed preferentially, and acoustic parameters of voice-like sounds will be processed differently than those of sine-wave sounds, in both dogs and humans. We found cortical areas in both species that responded more strongly to voice-like than to sine-wave stimuli, while no regions in either species responded more strongly to sine-wave sounds. Additionally, we found that in bilateral primary and emotional valence-sensitive auditory regions of both species, the processing of voice-like and sine-wave sounds is modulated by f0 in opposite ways. These results reveal functional similarities between evolutionarily distant mammals in processing voicelikeness and its effect on the processing of basic acoustic cues of vocal emotions.
Affiliation(s)
- Anna Bálint
- ELKH-ELTE Comparative Ethology Research Group, H-1117 Budapest, Pázmány Péter sétány 1/C, Hungary.
- Ádám Szabó
- Department of Neuroradiology at the Medical Imaging Centre of the Semmelweis University, H-1082 Budapest, Üllői út 78a, Hungary
- Attila Andics
- Department of Ethology, Eötvös Loránd University, H-1117 Budapest, Pázmány Péter sétány 1/C, Hungary; MTA-ELTE 'Lendület' Neuroethology of Communication Research Group, Hungarian Academy of Sciences - Eötvös Loránd University, H-1117 Budapest, Pázmány Péter sétány 1/C, Hungary; ELTE NAP Canine Brain Research Group, H-1117 Budapest, Pázmány Péter sétány 1/C, Hungary
- Márta Gácsi
- ELKH-ELTE Comparative Ethology Research Group, H-1117 Budapest, Pázmány Péter sétány 1/C, Hungary; Department of Ethology, Eötvös Loránd University, H-1117 Budapest, Pázmány Péter sétány 1/C, Hungary
7. Souffi S, Varnet L, Zaidi M, Bathellier B, Huetz C, Edeline JM. Reduction in sound discrimination in noise is related to envelope similarity and not to a decrease in envelope tracking abilities. J Physiol 2023; 601:123-149. PMID: 36373184. DOI: 10.1113/jp283526.
Abstract
Humans and animals constantly face challenging acoustic environments, such as various background noises, that impair the detection, discrimination and identification of behaviourally relevant sounds. Here, we disentangled the role of temporal envelope tracking in the reduction in neuronal and behavioural discrimination between communication sounds under acoustic degradation. By collecting neuronal activity from six different levels of the auditory system, from the auditory nerve up to the secondary auditory cortex, in anaesthetized guinea pigs, we found that tracking of slow changes in the temporal envelope is a general functional property of auditory neurons for encoding communication sounds, both in quiet and in adverse, challenging conditions. Results from a go/no-go sound discrimination task in mice support the idea that the loss of distinct slow envelope cues in noisy conditions impairs discrimination performance. Together, these results suggest that envelope tracking is potentially a universal mechanism operating in the central auditory system, which allows the detection of any between-stimulus difference in the slow envelope and thus copes with degraded conditions.
KEY POINTS:
- In quiet conditions, envelope tracking in the low amplitude-modulation range (<20 Hz) is correlated with the neuronal discrimination between communication sounds, as quantified by mutual information, from the cochlear nucleus up to the auditory cortex.
- At each level of the auditory system, auditory neurons retain their ability to track communication sound envelopes under acoustic degradation, such as vocoding and the addition of masking noise down to a signal-to-noise ratio of -10 dB.
- In noisy conditions, the increase in between-stimulus envelope similarity explains the reduction in both behavioural and neuronal discrimination in the auditory system.
- Envelope tracking can be viewed as a universal mechanism that allows neural and behavioural discrimination as long as the temporal envelopes of communication sounds display some differences.
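The envelope-similarity idea above can be sketched as follows: extract the slow (<20 Hz) temporal envelope by rectification and smoothing, then correlate envelopes across stimuli. The two synthetic "calls" below share a carrier but differ in their slow modulation; everything here is illustrative, not the paper's pipeline.

```python
import numpy as np

fs = 2000
t = np.arange(0, 1.0, 1.0 / fs)

def slow_envelope(x, fs, cutoff_hz=20.0):
    """Rectify, then smooth with a boxcar about one cutoff period long."""
    win = int(fs / cutoff_hz)
    return np.convolve(np.abs(x), np.ones(win) / win, mode="same")

# Two synthetic "calls": same 400 Hz carrier, different slow envelopes.
carrier = np.sin(2 * np.pi * 400.0 * t)
call_a = (1 + np.sin(2 * np.pi * 3.0 * t)) * carrier   # 3 Hz modulation
call_b = (1 + np.sin(2 * np.pi * 8.0 * t)) * carrier   # 8 Hz modulation

env_a = slow_envelope(call_a, fs)
env_b = slow_envelope(call_b, fs)

# Envelope similarity (correlation) is low because the slow modulations
# differ; adding the same masking noise to both calls would raise it.
similarity = np.corrcoef(env_a, env_b)[0, 1]
```

On this logic, discrimination fails in noise not because envelope tracking degrades but because the envelopes themselves become more alike.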
Affiliation(s)
- Samira Souffi
- Paris-Saclay Institute of Neuroscience (Neuro-PSI, UMR 9197), CNRS - Université Paris-Saclay, Saclay, France
- Léo Varnet
- Laboratoire des systèmes perceptifs, UMR CNRS 8248, Département d'Etudes Cognitives, Ecole Normale Supérieure, Université Paris Sciences & Lettres, Paris, France
- Meryem Zaidi
- Paris-Saclay Institute of Neuroscience (Neuro-PSI, UMR 9197), CNRS - Université Paris-Saclay, Saclay, France
- Brice Bathellier
- Institut de l'Audition, Institut Pasteur, Université de Paris, INSERM, Paris, France
- Chloé Huetz
- Paris-Saclay Institute of Neuroscience (Neuro-PSI, UMR 9197), CNRS - Université Paris-Saclay, Saclay, France
- Jean-Marc Edeline
- Paris-Saclay Institute of Neuroscience (Neuro-PSI, UMR 9197), CNRS - Université Paris-Saclay, Saclay, France
8. Montes-Lourido P, Kar M, Pernia M, Parida S, Sadagopan S. Updates to the guinea pig animal model for in-vivo auditory neuroscience in the low-frequency hearing range. Hear Res 2022; 424:108603. PMID: 36099806. PMCID: PMC9922531. DOI: 10.1016/j.heares.2022.108603.
Abstract
For gaining insight into general principles of auditory processing, it is critical to choose model organisms whose set of natural behaviors encompasses the processes being investigated. This reasoning has led to the development of a variety of animal models for auditory neuroscience research, such as guinea pigs, gerbils, chinchillas, rabbits, and ferrets; but in recent years, the availability of cutting-edge molecular tools and other methodologies in the mouse model have led to waning interest in these unique model species. As laboratories increasingly look to include in-vivo components in their research programs, a comprehensive description of procedures and techniques for applying some of these modern neuroscience tools to a non-mouse small animal model would enable researchers to leverage unique model species that may be best suited for testing their specific hypotheses. In this manuscript, we describe in detail the methods we have developed to apply these tools to the guinea pig animal model to answer questions regarding the neural processing of complex sounds, such as vocalizations. We describe techniques for vocalization acquisition, behavioral testing, recording of auditory brainstem responses and frequency-following responses, intracranial neural signals including local field potential and single unit activity, and the expression of transgenes allowing for optogenetic manipulation of neural activity, all in awake and head-fixed guinea pigs. We demonstrate the rich datasets at the behavioral and electrophysiological levels that can be obtained using these techniques, underscoring the guinea pig as a versatile animal model for studying complex auditory processing. More generally, the methods described here are applicable to a broad range of small mammals, enabling investigators to address specific auditory processing questions in model organisms that are best suited for answering them.
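The auditory brainstem response and frequency-following response recordings described here all rest on stimulus-locked epoch averaging, which can be sketched on synthetic data: averaging N epochs shrinks the noise by roughly sqrt(N), so a small evoked waveform emerges. The waveform, sampling rate, and noise level below are invented.

```python
import numpy as np

rng = np.random.default_rng(5)
fs, epoch_len, n_epochs = 10000, 300, 2500   # 30 ms epochs at 10 kHz

# Hypothetical evoked waveform (a damped 500 Hz wavelet) buried in noise
# five times its peak amplitude on any single trial.
t = np.arange(epoch_len) / fs
evoked = np.sin(2 * np.pi * 500.0 * t) * np.exp(-t / 0.005)
epochs = evoked[None, :] + 5.0 * rng.standard_normal((n_epochs, epoch_len))

# Stimulus-locked averaging: residual noise shrinks ~ 1/sqrt(n_epochs),
# here to about 5 / sqrt(2500) = 0.1 of a single-trial noise sd.
average = epochs.mean(axis=0)
residual = average - evoked
```

The same averaging step underlies both ABR wave extraction and FFR estimation; only the stimulus and epoch timing differ.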
Affiliation(s)
- Pilar Montes-Lourido
- Department of Neurobiology, University of Pittsburgh, Pittsburgh, PA, USA; Center for Neuroscience, University of Pittsburgh, Pittsburgh, PA, USA
- Manaswini Kar
- Department of Neurobiology, University of Pittsburgh, Pittsburgh, PA, USA; Center for Neuroscience, University of Pittsburgh, Pittsburgh, PA, USA; Center for the Neural Basis of Cognition, University of Pittsburgh, Pittsburgh, PA, USA
- Marianny Pernia
- Department of Neurobiology, University of Pittsburgh, Pittsburgh, PA, USA; Center for Neuroscience, University of Pittsburgh, Pittsburgh, PA, USA
- Satyabrata Parida
- Department of Neurobiology, University of Pittsburgh, Pittsburgh, PA, USA; Center for Neuroscience, University of Pittsburgh, Pittsburgh, PA, USA
- Srivatsun Sadagopan
- Department of Neurobiology, University of Pittsburgh, Pittsburgh, PA, USA; Center for Neuroscience, University of Pittsburgh, Pittsburgh, PA, USA; Center for the Neural Basis of Cognition, University of Pittsburgh, Pittsburgh, PA, USA; Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA, USA; Department of Communication Science and Disorders, University of Pittsburgh, Pittsburgh, PA, USA.
9. Parameshwarappa V, Pezard L, Norena AJ. Changes in the spatiotemporal pattern of spontaneous activity across a cortical column after noise trauma. J Neurophysiol 2021; 127:239-254. PMID: 34936500. DOI: 10.1152/jn.00262.2021.
Abstract
In the auditory modality, noise trauma has often been used to investigate cortical plasticity, as it causes cochlear hearing loss. One limitation of these past studies, however, is that the effects of noise trauma have mostly been documented at the granular layer, the main cortical recipient of thalamic inputs. Importantly, the cortex is composed of six layers, each with its own pattern of connectivity and specific role in sensory processing. The present study investigates the effects of acute and chronic noise trauma on the laminar pattern of spontaneous activity in the primary auditory cortex of the anesthetized guinea pig. We show that spontaneous activity is dramatically altered across cortical layers after acute and chronic noise-induced hearing loss. First, spontaneous activity was globally enhanced across cortical layers, both in firing rate and in the amplitude of the spike-triggered average of local field potentials. Second, current source density analysis of the (spontaneous) spike-triggered averages of local field potentials indicates that current sinks develop in the supra- and infragranular layers. These results suggest that the supragranular layers become a major input recipient and that the propagation of spontaneous activity over a cortical column is greatly enhanced after acute and chronic noise-induced hearing loss. We discuss the possible mechanisms and functional implications of these changes.
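A sketch of the spike-triggered average (STA) of the LFP used in this study: LFP snippets are averaged around each spike time, so consistent spike-LFP coupling survives while unrelated activity averages out. The coupling waveform, rates, and noise level below are invented.

```python
import numpy as np

rng = np.random.default_rng(3)
fs, dur = 1000, 60                 # 1 kHz LFP sampled for 60 s
n = fs * dur
lfp = 5.0 * rng.standard_normal(n)

# Embed a stereotyped negative deflection after each "spike" to mimic
# spike-LFP coupling; spike times are random.
spike_samples = rng.choice(np.arange(100, n - 100), size=800, replace=False)
wave = -np.exp(-np.arange(50) / 10.0)   # 50 ms decaying trough
for s in spike_samples:
    lfp[s:s + 50] += 3.0 * wave

# STA: average LFP snippets in a +/-100 ms window around each spike.
win = 100
sta = np.mean([lfp[s - win:s + win] for s in spike_samples], axis=0)
# Index win marks the spike time: the coupling shows up as a deep trough
# there, while the pre-spike segment stays comparatively flat.
```

An enhanced STA-LFP amplitude, as reported after trauma, would correspond to a deeper and more reliable post-spike deflection in this average.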
Affiliation(s)
- Vinay Parameshwarappa
- Centre National de la Recherche Scientifique, Aix-Marseille University, Marseille, France
- Laurent Pezard
- Centre National de la Recherche Scientifique, Aix-Marseille University, Marseille, France
- Arnaud Jean Norena
- Centre National de la Recherche Scientifique, Aix-Marseille University, Marseille, France
10. Gnanateja GN, Rupp K, Llanos F, Remick M, Pernia M, Sadagopan S, Teichert T, Abel TJ, Chandrasekaran B. Frequency-Following Responses to Speech Sounds Are Highly Conserved across Species and Contain Cortical Contributions. eNeuro 2021; 8:ENEURO.0451-21.2021. PMID: 34799409. PMCID: PMC8704423. DOI: 10.1523/eneuro.0451-21.2021.
Abstract
Time-varying pitch is a vital cue for human speech perception. Neural processing of time-varying pitch has been extensively assayed using scalp-recorded frequency-following responses (FFRs), an electrophysiological signal thought to reflect integrated phase-locked neural ensemble activity from subcortical auditory areas. Emerging evidence increasingly points to a putative contribution of auditory cortical ensembles to the scalp-recorded FFR. However, the properties of cortical FFRs and the precise characterization of their laminar sources remain unclear. Here we used direct human intracortical recordings as well as extracranial and intracranial recordings from macaques and guinea pigs to characterize the properties of cortical sources of FFRs to time-varying pitch patterns. We found robust FFRs in the auditory cortex of all species. We leveraged representational similarity analysis as a translational bridge to characterize similarities between the human and animal models. Laminar recordings in the animal models showed FFRs emerging primarily from the thalamorecipient layers of the auditory cortex. FFRs arising from these cortical sources contributed significantly to the scalp-recorded FFRs via volume conduction. Our research paves the way for a wide array of studies investigating the role of cortical FFRs in auditory perception and plasticity.
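Representational similarity analysis, the translational bridge used here, reduces to comparing stimulus-by-stimulus dissimilarity matrices across recording modalities or species. A hedged sketch on synthetic response patterns, where one "modality" is a mildly distorted copy of the other:

```python
import numpy as np

rng = np.random.default_rng(4)
n_stim, n_feat = 8, 40

# Hypothetical response patterns: "modality B" is a mildly distorted,
# noisy copy of "modality A", so their representational geometries match.
resp_a = rng.standard_normal((n_stim, n_feat))
mixing = np.eye(n_feat) + 0.05 * rng.standard_normal((n_feat, n_feat))
resp_b = resp_a @ mixing + 0.05 * rng.standard_normal((n_stim, n_feat))

def rdm(responses):
    """Representational dissimilarity matrix: 1 - pattern correlation."""
    return 1.0 - np.corrcoef(responses)

def rsa_score(r1, r2):
    """Correlation between the upper triangles of two RDMs."""
    iu = np.triu_indices(r1.shape[0], k=1)
    return np.corrcoef(r1[iu], r2[iu])[0, 1]

score = rsa_score(rdm(resp_a), rdm(resp_b))   # high: shared geometry
unrelated = rsa_score(rdm(resp_a),
                      rdm(rng.standard_normal((n_stim, n_feat))))
```

Because RDMs abstract away the measurement units, the same comparison works between human intracortical recordings and animal laminar data.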
Affiliation(s)
- G Nike Gnanateja
- Department of Communication Sciences and Disorders, University of Pittsburgh, Pittsburgh, Pennsylvania 15260
- Kyle Rupp
- Department of Neurological Surgery, UPMC Children's Hospital of Pittsburgh, Pittsburgh, Pennsylvania 15213
- Fernando Llanos
- Department of Linguistics, The University of Texas at Austin, Austin, Texas 78712
- Madison Remick
- Department of Neurological Surgery, UPMC Children's Hospital of Pittsburgh, Pittsburgh, Pennsylvania 15213
- Marianny Pernia
- Center for Neuroscience, University of Pittsburgh, Pittsburgh, Pennsylvania 15261
- Department of Neurobiology, University of Pittsburgh, Pittsburgh, Pennsylvania 15260
- Srivatsun Sadagopan
- Department of Communication Sciences and Disorders, University of Pittsburgh, Pittsburgh, Pennsylvania 15260
- Center for Neuroscience, University of Pittsburgh, Pittsburgh, Pennsylvania 15261
- Department of Bioengineering, University of Pittsburgh, Pittsburgh, Pennsylvania 15260
- Department of Neurobiology, University of Pittsburgh, Pittsburgh, Pennsylvania 15260
- Center for the Neural Basis of Cognition, University of Pittsburgh, Pittsburgh, Pennsylvania 15261
- Tobias Teichert
- Center for Neuroscience, University of Pittsburgh, Pittsburgh, Pennsylvania 15261
- Department of Bioengineering, University of Pittsburgh, Pittsburgh, Pennsylvania 15260
- Department of Psychiatry, University of Pittsburgh, Pittsburgh, Pennsylvania 15213
- Taylor J Abel
- Department of Neurological Surgery, UPMC Children's Hospital of Pittsburgh, Pittsburgh, Pennsylvania 15213
- Department of Bioengineering, University of Pittsburgh, Pittsburgh, Pennsylvania 15260
- Bharath Chandrasekaran
- Department of Communication Sciences and Disorders, University of Pittsburgh, Pittsburgh, Pennsylvania 15260
- Center for Neuroscience, University of Pittsburgh, Pittsburgh, Pennsylvania 15261
11. Montes-Lourido P, Kar M, David SV, Sadagopan S. Neuronal selectivity to complex vocalization features emerges in the superficial layers of primary auditory cortex. PLoS Biol 2021; 19:e3001299. PMID: 34133413. PMCID: PMC8238193. DOI: 10.1371/journal.pbio.3001299.
Abstract
Early in auditory processing, neural responses faithfully reflect acoustic input. At higher stages of auditory processing, however, neurons become selective for particular call types, eventually leading to specialized regions of cortex that preferentially process calls at the highest auditory processing stages. We previously proposed that an intermediate step in how nonselective responses are transformed into call-selective responses is the detection of informative call features. But how neural selectivity for informative call features emerges from nonselective inputs, whether feature selectivity gradually emerges over the processing hierarchy, and how stimulus information is represented in nonselective and feature-selective populations remain open questions. In this study, using unanesthetized guinea pigs (GPs), a highly vocal and social rodent, as an animal model, we characterized the neural representation of calls in 3 auditory processing stages: the thalamus (ventral medial geniculate body, vMGB) and the thalamorecipient (L4) and superficial layers (L2/3) of primary auditory cortex (A1). We found that neurons in vMGB and A1 L4 did not exhibit call-selective responses and responded throughout the call durations. However, A1 L2/3 neurons showed high call selectivity, with about a third of neurons responding to only 1 or 2 call types. These A1 L2/3 neurons responded only to restricted portions of calls, suggesting that they were highly selective for call features. Receptive fields of these A1 L2/3 neurons showed complex spectrotemporal structures that could underlie their high call feature selectivity. Information theoretic analysis revealed that in A1 L4, stimulus information was distributed over the population and spread out over the call durations. In contrast, in A1 L2/3, individual neurons showed brief bursts of high stimulus-specific information and conveyed high levels of information per spike.
These data demonstrate that a transformation in the neural representation of calls occurs between A1 L4 and A1 L2/3, leading to the emergence of a feature-based representation of calls in A1 L2/3. Our data thus suggest that the observed cortical specializations for call processing emerge in A1 and set the stage for further mechanistic studies.
Affiliation(s)
- Pilar Montes-Lourido
  - Department of Neurobiology, University of Pittsburgh, Pittsburgh, Pennsylvania, United States of America
- Manaswini Kar
  - Department of Neurobiology, University of Pittsburgh, Pittsburgh, Pennsylvania, United States of America
  - Center for Neuroscience, University of Pittsburgh, Pittsburgh, Pennsylvania, United States of America
- Stephen V. David
  - Department of Otolaryngology, Oregon Health and Science University, Portland, Oregon, United States of America
- Srivatsun Sadagopan
  - Department of Neurobiology, University of Pittsburgh, Pittsburgh, Pennsylvania, United States of America
  - Center for Neuroscience, University of Pittsburgh, Pittsburgh, Pennsylvania, United States of America
  - Department of Bioengineering, University of Pittsburgh, Pittsburgh, Pennsylvania, United States of America
  - Center for the Neural Basis of Cognition, University of Pittsburgh, Pittsburgh, Pennsylvania, United States of America
12
Robustness to Noise in the Auditory System: A Distributed and Predictable Property. eNeuro 2021; 8:ENEURO.0043-21.2021. PMID: 33632813; PMCID: PMC7986545; DOI: 10.1523/eneuro.0043-21.2021.
Abstract
Background noise strongly penalizes auditory perception of speech in humans and of vocalizations in animals. Despite this, auditory neurons are still able to detect communication sounds against considerable levels of background noise. We collected neuronal recordings in the cochlear nucleus (CN), inferior colliculus (IC), auditory thalamus, and primary and secondary auditory cortex in response to vocalizations presented against either a stationary or a chorus noise in anesthetized guinea pigs at three signal-to-noise ratios (SNRs; −10, 0, and 10 dB). We provide evidence that, at each level of the auditory system, five behaviors in noise exist along a continuum, from neurons with high-fidelity representations of the signal, mostly found in IC and thalamus, to neurons with high-fidelity representations of the noise, mostly found in CN for the stationary noise and in similar proportions in each structure for the chorus noise. The two cortical areas displayed fewer robust responses than the IC and thalamus. Furthermore, between 21% and 72% of the neurons (depending on the structure) switched categories from one background noise to the other, even when the initial assignment of these neurons to a category was confirmed by a stringent bootstrap procedure. Importantly, supervised learning showed that the category of a recording can be predicted with up to 70% accuracy from its responses to the signal alone and to the noise alone.
13
Pupillometry as a reliable metric of auditory detection and discrimination across diverse stimulus paradigms in animal models. Sci Rep 2021; 11:3108. PMID: 33542266; PMCID: PMC7862232; DOI: 10.1038/s41598-021-82340-y.
Abstract
Estimates of detection and discrimination thresholds are often used to explore broad perceptual similarities between human subjects and animal models. Pupillometry shows great promise as a non-invasive, easily deployable method of comparing human and animal thresholds. Using pupillometry, previous studies in animal models have obtained threshold estimates for simple stimuli such as pure tones, but have not explored whether similar pupil responses can be evoked by complex stimuli, what other stimulus contingencies might affect stimulus-evoked pupil responses, or whether pupil responses can be modulated by experience or short-term training. In this study, we used an auditory oddball paradigm to estimate detection and discrimination thresholds across a wide range of stimuli in guinea pigs. We demonstrate that pupillometry yields reliable detection and discrimination thresholds across a range of simple (tones) and complex (conspecific vocalizations) stimuli; that pupil responses can be robustly evoked using different stimulus contingencies (low-level acoustic changes, or higher-level categorical changes); and that pupil responses are modulated by short-term training. These results lay the foundation for using pupillometry as a reliable method of estimating thresholds in large experimental cohorts, and unveil the full potential of using pupillometry to explore broad similarities between humans and animal models.
14
Noise-Sensitive But More Precise Subcortical Representations Coexist with Robust Cortical Encoding of Natural Vocalizations. J Neurosci 2020; 40:5228-5246. PMID: 32444386; DOI: 10.1523/jneurosci.2731-19.2020.
Abstract
Humans and animals maintain accurate sound discrimination in the presence of loud sources of background noise. It is commonly assumed that this ability relies on the robustness of auditory cortex responses. However, only a few attempts have been made to characterize neural discrimination of communication sounds masked by noise at each stage of the auditory system and to quantify the noise effects on neuronal discrimination in terms of alterations in amplitude modulations. Here, we measured neural discrimination between communication sounds masked by a vocalization-shaped stationary noise from multiunit responses recorded in the cochlear nucleus, inferior colliculus, auditory thalamus, and primary and secondary auditory cortex at several signal-to-noise ratios (SNRs) in anesthetized male or female guinea pigs. Masking noise decreased sound discrimination of neuronal populations in each auditory structure, but collicular and thalamic populations showed better performance than cortical populations at each SNR. In contrast, in each auditory structure, discrimination by neuronal populations was only slightly decreased when tone-vocoded vocalizations were tested. These results shed new light on the specific contributions of subcortical structures to robust sound encoding, and suggest that the distortion of slow amplitude modulation cues conveyed by communication sounds is one of the factors constraining neuronal discrimination at subcortical and cortical levels.

Significance statement: Dissecting how auditory neurons discriminate communication sounds in noise is a major goal in auditory neuroscience. Robust sound coding in noise is often viewed as a specific property of cortical networks, although this remains to be demonstrated. Here, we tested the discrimination performance of neuronal populations at five levels of the auditory system in response to conspecific vocalizations masked by noise. In each acoustic condition, subcortical neurons discriminated target vocalizations better than cortical neurons, and in each structure the reduction in discrimination performance was related to the reduction in slow amplitude modulation cues.
15
Elie JE, Theunissen FE. Invariant neural responses for sensory categories revealed by the time-varying information for communication calls. PLoS Comput Biol 2019; 15:e1006698. PMID: 31557151; PMCID: PMC6762074; DOI: 10.1371/journal.pcbi.1006698.
Abstract
Although information theoretic approaches have been used extensively in the analysis of the neural code, they have yet to be used to describe how information is accumulated in time while sensory systems are categorizing dynamic sensory stimuli such as speech sounds or visual objects. Here, we present a novel method to estimate the cumulative information for stimuli or categories. We further define a time-varying categorical information index that, by comparing the information obtained for stimuli versus categories of these same stimuli, quantifies invariant neural representations. We use these methods to investigate the dynamic properties of avian cortical auditory neurons recorded in zebra finches that were listening to a large set of call stimuli sampled from the complete vocal repertoire of this species. We found that the time-varying rates carry 5 times more information than the mean firing rates, even in the first 100 ms. We also found that cumulative information has slow time constants (100–600 ms) relative to the typical integration time of single neurons, reflecting the fact that the behaviorally informative features of auditory objects are time-varying sound patterns. When we correlated firing rates and information values, we found that average information correlates with average firing rate, but that the higher rates found at the onset response yielded information values similar to those of the lower rates found in the sustained response: the onset and sustained responses of avian cortical auditory neurons provide similar levels of independent information about call identity and call type. Finally, our information measures allowed us to rigorously define categorical neurons; these categorical neurons show a high degree of invariance for vocalizations within a call type. Peak invariance is found around 150 ms after stimulus onset. Surprisingly, call-type invariant neurons were found in both primary and secondary avian auditory areas.
Just as the recognition of faces requires neural representations that are invariant to scale and rotation, the recognition of behaviorally relevant auditory objects, such as spoken words, requires neural representations that are invariant to the speaker uttering the word and to his or her location. Here, we used information theory to investigate the time course of the neural representation of bird communication calls and of behaviorally relevant categories of these same calls: the call-types of the bird’s repertoire. We found that neurons in both the primary and secondary avian auditory cortex exhibit invariant responses to call renditions within a call-type, suggestive of a potential role for extracting the meaning of these communication calls. We also found that time plays an important role: first, neural responses carry significantly more information when represented by temporal patterns calculated at the small time scale of 10 ms than when measured as average rates and, second, this information accumulates in a non-redundant fashion up to long integration times of 600 ms. This rich temporal neural representation is matched to the temporal richness found in the communication calls of this species.
Affiliation(s)
- Julie E. Elie
  - Helen Wills Neuroscience Institute, University of California Berkeley, Berkeley, California, United States of America
  - Department of Bioengineering, University of California Berkeley, Berkeley, California, United States of America
- Frédéric E. Theunissen
  - Helen Wills Neuroscience Institute, University of California Berkeley, Berkeley, California, United States of America
  - Department of Psychology, University of California Berkeley, Berkeley, California, United States of America
16
Liu ST, Montes-Lourido P, Wang X, Sadagopan S. Optimal features for auditory categorization. Nat Commun 2019; 10:1302. PMID: 30899018; PMCID: PMC6428858; DOI: 10.1038/s41467-019-09115-y.
Abstract
Humans and vocal animals use vocalizations to communicate with members of their species. A necessary function of auditory perception is to generalize across the high variability inherent in vocalization production and classify them into behaviorally distinct categories ('words' or 'call types'). Here, we demonstrate that detecting mid-level features in calls achieves production-invariant classification. Starting from randomly chosen marmoset call features, we use a greedy search algorithm to determine the most informative and least redundant features necessary for call classification. High classification performance is achieved using only 10-20 features per call type. Predictions of tuning properties of putative feature-selective neurons accurately match some observed auditory cortical responses. This feature-based approach also succeeds for call categorization in other species, and for other complex classification tasks such as caller identification. Our results suggest that high-level neural representations of sounds are based on task-dependent features optimized for specific computational goals.
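The greedy search this abstract describes (repeatedly adding the most informative, least redundant feature until classification is achieved) can be sketched in a few lines. The following is an illustrative reconstruction only, not the authors' implementation: the names (`greedy_select`, `feature_score`), the precision-times-recall informativeness score, the Jaccard redundancy penalty, and the toy data are all assumptions chosen for clarity.

```python
# Toy sketch (illustrative only) of greedy selection of informative,
# non-redundant call features. Each candidate feature is represented by
# the set of call indices it "detects"; each call carries a call-type label.

def jaccard(a, b):
    """Set overlap used as a crude redundancy measure between two features."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

def feature_score(detects, labels, target):
    """Precision x recall of a feature's detections for the target call type."""
    target_calls = {i for i, lab in enumerate(labels) if lab == target}
    if not detects or not target_calls:
        return 0.0
    hits = detects & target_calls
    return (len(hits) / len(detects)) * (len(hits) / len(target_calls))

def greedy_select(candidates, labels, target, k=10, lam=0.5):
    """Greedily add the feature with the best informativeness-minus-redundancy gain."""
    selected = []
    remaining = dict(candidates)
    while remaining and len(selected) < k:
        def gain(name):
            info = feature_score(remaining[name], labels, target)
            redundancy = max((jaccard(remaining[name], candidates[s])
                              for s in selected), default=0.0)
            return info - lam * redundancy
        best = max(remaining, key=gain)
        if gain(best) <= 0:  # stop when no remaining feature still adds value
            break
        selected.append(best)
        del remaining[best]
    return selected

# Example: six calls (three "chirp", three "purr") and three candidate features.
labels = ["chirp", "chirp", "chirp", "purr", "purr", "purr"]
candidates = {
    "f1": {0, 1, 2},     # detects exactly the chirps: highly informative
    "f2": {0, 1, 2, 3},  # informative but largely redundant with f1
    "f3": {0, 4, 5},     # mostly detects purrs: weak evidence for "chirp"
}
print(greedy_select(candidates, labels, "chirp", k=2))  # ['f1', 'f2']
```

With the redundancy weight `lam` raised, the sketch would reject `f2` outright; the abstract's finding that 10-20 features per call type suffice corresponds to the loop terminating once additional features stop improving the score.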
Affiliation(s)
- Shi Tong Liu
  - Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA 15213, USA
- Pilar Montes-Lourido
  - Department of Neurobiology, University of Pittsburgh, Pittsburgh, PA 15213, USA
- Xiaoqin Wang
  - Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21205, USA
- Srivatsun Sadagopan
  - Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA 15213, USA
  - Department of Neurobiology, University of Pittsburgh, Pittsburgh, PA 15213, USA
  - Department of Otolaryngology, University of Pittsburgh, Pittsburgh, PA 15213, USA
17
García-Rosales F, Beetz MJ, Cabral-Calderin Y, Kössl M, Hechavarria JC. Neuronal coding of multiscale temporal features in communication sequences within the bat auditory cortex. Commun Biol 2018; 1:200. PMID: 30480101; PMCID: PMC6244232; DOI: 10.1038/s42003-018-0205-5.
Abstract
Experimental evidence supports the idea that cortical oscillations represent the multiscale temporal modulations present in natural stimuli, yet little is known about the processing of these multiple timescales at the neuronal level. Here, using extracellular recordings from the auditory cortex (AC) of awake bats (Carollia perspicillata), we show the existence of three neuronal types that represent different levels of the temporal structure of conspecific vocalizations, and that therefore constitute direct evidence of multiscale temporal processing of naturalistic stimuli by neurons in the AC. These neuronal subpopulations synchronize differently to local field potentials, particularly in the theta and high-frequency bands, and are informative to different degrees in terms of their spike rate. Interestingly, we also observed that both low- and high-frequency cortical oscillations can be highly informative about the calls heard. Our results suggest that multiscale neuronal processing allows for a precise and non-redundant representation of natural vocalizations in the AC.
Affiliation(s)
- Francisco García-Rosales
  - Institut für Zellbiologie und Neurowissenschaft, Goethe-Universität, 60438 Frankfurt/M., Germany
- M Jerome Beetz
  - Institut für Zellbiologie und Neurowissenschaft, Goethe-Universität, 60438 Frankfurt/M., Germany
  - Department of Zoology II, University of Würzburg, Am Hubland, 97074 Würzburg, Germany
- Yuranny Cabral-Calderin
  - MEG Labor, Brain Imaging Center, Goethe-Universität, 60528 Frankfurt/M., Germany
  - German Resilience Center, University Medical Center Mainz, 55131 Mainz, Germany
- Manfred Kössl
  - Institut für Zellbiologie und Neurowissenschaft, Goethe-Universität, 60438 Frankfurt/M., Germany
- Julio C Hechavarria
  - Institut für Zellbiologie und Neurowissenschaft, Goethe-Universität, 60438 Frankfurt/M., Germany
18
Greene NT, Anbuhl KL, Ferber AT, DeGuzman M, Allen PD, Tollin DJ. Spatial hearing ability of the pigmented guinea pig (Cavia porcellus): minimum audible angle and spatial release from masking in azimuth. Hear Res 2018; 365:62-76. PMID: 29778290; DOI: 10.1016/j.heares.2018.04.011.
Abstract
Despite the common use of guinea pigs in investigations of the neural mechanisms of binaural and spatial hearing, their behavioral capabilities in spatial hearing tasks have, surprisingly, not been thoroughly investigated. To begin to fill this void, we tested the spatial hearing of adult male guinea pigs in several experiments using a paradigm based on prepulse inhibition (PPI) of the acoustic startle response. In the first experiment, we presented continuous broadband noise from one speaker location and switched to a second speaker location (the "prepulse") along the azimuth prior to presenting a brief, ∼110 dB SPL startle-eliciting stimulus. We found that the startle response amplitude was systematically reduced for larger changes in speaker swap angle (i.e., greater PPI), indicating that the speaker "swap" paradigm is sufficient to assess detection of spatially separated sound sources. In a second set of experiments, we swapped low- and high-pass noise across the midline to estimate the animals' ability to utilize interaural time- and level-difference cues, respectively. The results reveal that guinea pigs can utilize both binaural cues to discriminate azimuthal sound sources. A third set of experiments examined spatial release from masking using a continuous broadband noise masker and a broadband chirp signal, both presented concurrently at various speaker locations. In general, animals displayed an increase in startle amplitude (i.e., lower PPI) when the masker was presented at speaker locations near that of the chirp signal, and reduced startle amplitudes (increased PPI), indicating lower detection thresholds, when the noise was presented from more distant speaker locations.
In summary, these results indicate that guinea pigs can: 1) discriminate changes in source location within a hemifield as well as across the midline, 2) discriminate sources of low- and high-pass sounds, demonstrating that they can effectively utilize both low-frequency interaural time and high-frequency level difference sound localization cues, and 3) utilize spatial release from masking to discriminate sound sources. This report confirms the guinea pig as a suitable spatial hearing model and reinforces prior estimates of guinea pig hearing ability from acoustical and physiological measurements.
Affiliation(s)
- Nathaniel T Greene
  - Department of Physiology & Biophysics, University of Colorado Anschutz Medical Campus, Aurora, CO 80045, USA
  - Department of Otolaryngology, University of Colorado School of Medicine, Aurora, CO 80045, USA
- Kelsey L Anbuhl
  - Department of Physiology & Biophysics, University of Colorado Anschutz Medical Campus, Aurora, CO 80045, USA
  - Neuroscience Training Program, University of Colorado Anschutz Medical Campus, Aurora, CO 80045, USA
- Alexander T Ferber
  - Department of Physiology & Biophysics, University of Colorado Anschutz Medical Campus, Aurora, CO 80045, USA
  - Neuroscience Training Program, University of Colorado Anschutz Medical Campus, Aurora, CO 80045, USA
  - Medical Scientist Training Program, University of Colorado Anschutz Medical Campus, Aurora, CO 80045, USA
- Marisa DeGuzman
  - Neuroscience Training Program, University of Colorado Anschutz Medical Campus, Aurora, CO 80045, USA
- Paul D Allen
  - Department of Otolaryngology, University of Rochester, Rochester, NY 14642, USA
- Daniel J Tollin
  - Department of Physiology & Biophysics, University of Colorado Anschutz Medical Campus, Aurora, CO 80045, USA
  - Department of Otolaryngology, University of Colorado School of Medicine, Aurora, CO 80045, USA
  - Neuroscience Training Program, University of Colorado Anschutz Medical Campus, Aurora, CO 80045, USA
19
Green DB, Shackleton TM, Grimsley JMS, Zobay O, Palmer AR, Wallace MN. Communication calls produced by electrical stimulation of four structures in the guinea pig brain. PLoS One 2018; 13:e0194091. PMID: 29584746; PMCID: PMC5870961; DOI: 10.1371/journal.pone.0194091.
Abstract
One of the main central processes affecting the cortical representation of conspecific vocalizations is the collateral output from the extended motor system for call generation. Before starting to study this interaction, we sought to compare the characteristics of calls produced by stimulating four different parts of the brain in guinea pigs (Cavia porcellus). Using anaesthetised animals allowed us to reposition electrodes without distressing the animals. Trains of 100 electrical pulses were used to stimulate the midbrain periaqueductal grey (PAG), hypothalamus, amygdala, and anterior cingulate cortex (ACC). Each structure produced a similar range of calls, but in significantly different proportions. Two of the spontaneous calls (chirrup and purr) were never produced by electrical stimulation, and although we identified versions of chutter, durr and tooth chatter, they differed significantly from our natural call templates. However, we were routinely able to elicit seven other identifiable calls. All seven calls were produced both during the 1.6 s period of stimulation and subsequently, in a period that could last for more than a minute. A single stimulation site could produce four or five different calls, but the amygdala was much less likely to produce a scream, whistle or rising whistle than any of the other structures. These three high-frequency calls were more likely to be produced by females than by males. There were also differences in the timing of call production, with the amygdala primarily producing calls during the electrical stimulation and the hypothalamus mainly producing calls after it. For all four structures, a significantly higher stimulation current was required in males than in females. We conclude that all four structures can be stimulated to produce fictive vocalizations that should be useful in studying the relationship between the vocal motor system and cortical sensory representation.
Affiliation(s)
- David B. Green
  - Medical Research Council Institute of Hearing Research, School of Medicine, The University of Nottingham, Nottingham, United Kingdom
- Trevor M. Shackleton
  - Medical Research Council Institute of Hearing Research, School of Medicine, The University of Nottingham, Nottingham, United Kingdom
- Jasmine M. S. Grimsley
  - Department of Anatomy and Neurobiology, Northeast Ohio Medical University, Rootstown, Ohio, United States of America
- Oliver Zobay
  - Medical Research Council Institute of Hearing Research, School of Medicine, The University of Nottingham, Nottingham, United Kingdom
- Alan R. Palmer
  - Medical Research Council Institute of Hearing Research, School of Medicine, The University of Nottingham, Nottingham, United Kingdom
- Mark N. Wallace
  - Medical Research Council Institute of Hearing Research, School of Medicine, The University of Nottingham, Nottingham, United Kingdom
20
Berger JI, Coomber B, Hill S, Alexander SPH, Owen W, Palmer AR, Wallace MN. Effects of the cannabinoid CB1 agonist ACEA on salicylate ototoxicity, hyperacusis and tinnitus in guinea pigs. Hear Res 2017; 356:51-62. PMID: 29108871; PMCID: PMC5714060; DOI: 10.1016/j.heares.2017.10.012.
Abstract
Cannabinoids have been suggested as a therapeutic target for a variety of brain disorders. Despite the presence of their receptors throughout the auditory system, little is known about how cannabinoids affect auditory function. We sought to determine whether administration of arachidonyl-2′-chloroethylamide (ACEA), a highly selective CB1 agonist, could attenuate a variety of auditory effects caused by prior administration of salicylate, and potentially treat tinnitus. We recorded cortical resting-state activity, auditory-evoked cortical activity and auditory brainstem responses (ABRs) from chronically implanted awake guinea pigs, before and after salicylate + ACEA. Salicylate-induced reductions in click-evoked ABR amplitudes were smaller in the presence of ACEA, suggesting that the ototoxic effects of salicylate were less severe. ACEA also abolished salicylate-induced changes in cortical alpha band (6–10 Hz) oscillatory activity. However, salicylate-induced increases in cortical evoked activity (suggestive of the presence of hyperacusis) were still present with salicylate + ACEA. ACEA administered alone did not induce significant changes in either ABR amplitudes or oscillatory activity, but did increase cortical evoked potentials. Furthermore, in two separate groups of non-implanted animals, we found no evidence that ACEA could reverse behavioural identification of salicylate- or noise-induced tinnitus. Together, these data suggest that while ACEA may be potentially otoprotective, selective CB1 agonists are not effective in diminishing the presence of tinnitus or hyperacusis.

Highlights:
- CB1 agonist (ACEA) effects were assessed in awake guinea pigs following salicylate.
- Salicylate-induced decreases in brainstem response amplitudes were tempered by ACEA.
- Decreases in alpha band oscillations were not evident following salicylate + ACEA.
- ACEA did not eliminate salicylate-induced increases in cortical evoked potentials.
- ACEA failed to prevent or reverse salicylate- or noise-induced tinnitus behaviour.
Affiliation(s)
- Joel I Berger
  - Medical Research Council Institute of Hearing Research, School of Medicine, The University of Nottingham, University Park, Nottingham, NG7 2RD, United Kingdom
- Ben Coomber
  - Medical Research Council Institute of Hearing Research, School of Medicine, The University of Nottingham, University Park, Nottingham, NG7 2RD, United Kingdom
- Samantha Hill
  - Medical Research Council Institute of Hearing Research, School of Medicine, The University of Nottingham, University Park, Nottingham, NG7 2RD, United Kingdom
- Steve P H Alexander
  - School of Life Sciences, Medical School, The University of Nottingham, Nottingham, NG7 2UH, United Kingdom
- William Owen
  - Medical Research Council Institute of Hearing Research, School of Medicine, The University of Nottingham, University Park, Nottingham, NG7 2RD, United Kingdom
- Alan R Palmer
  - Medical Research Council Institute of Hearing Research, School of Medicine, The University of Nottingham, University Park, Nottingham, NG7 2RD, United Kingdom
- Mark N Wallace
  - Medical Research Council Institute of Hearing Research, School of Medicine, The University of Nottingham, University Park, Nottingham, NG7 2RD, United Kingdom
21
Zoefel B, Costa-Faidella J, Lakatos P, Schroeder CE, VanRullen R. Characterization of neural entrainment to speech with and without slow spectral energy fluctuations in laminar recordings in monkey A1. Neuroimage 2017; 150:344-357. PMID: 28188912; DOI: 10.1016/j.neuroimage.2017.02.014.
Abstract
Neural entrainment, the alignment between neural oscillations and rhythmic stimulation, is omnipresent in current theories of speech processing; nevertheless, the underlying neural mechanisms are still largely unknown. Here, we hypothesized that laminar recordings in non-human primates would provide important insight into these mechanisms, in particular with respect to processing across cortical layers. We presented one monkey with everyday human speech sounds and recorded neural oscillations (as current-source density, CSD) in primary auditory cortex (A1). We observed that the high-excitability phase of neural oscillations was aligned only with those spectral components of speech that the recording site was tuned to; the opposite, low-excitability phase was aligned with other spectral components. As low- and high-frequency components in speech alternate, this finding might reflect a particularly efficient mode of stimulus processing that includes preparing the relevant neuronal populations for the upcoming input. Moreover, when we presented speech/noise sounds without systematic fluctuations in amplitude and spectral content, along with their time-reversed versions, we found significant entrainment in all conditions and cortical layers. Compared with everyday speech, entrainment in the speech/noise conditions was characterized by a change in the phase relation between the neural signal and the stimulus, and the low-frequency neural phase was dominantly coupled to activity in a lower gamma band. These results show that neural entrainment in response to speech without slow fluctuations in spectral energy involves a process with specific characteristics that is presumably preserved across species.
Affiliation(s)
- Benedikt Zoefel
- Université Paul Sabatier, Toulouse, France; Centre de Recherche Cerveau et Cognition (CerCo), CNRS, UMR5549, Pavillon Baudot CHU Purpan, BP 25202, 31052 Toulouse Cedex, France; Nathan Kline Institute for Psychiatric Research, Orangeburg, NY, United States.
- Jordi Costa-Faidella
- Nathan Kline Institute for Psychiatric Research, Orangeburg, NY, United States; Institute of Neurosciences, University of Barcelona, Barcelona, Catalonia 08035, Spain; Brainlab - Cognitive Neuroscience Research Group, Department of Clinical Psychology and Psychobiology, University of Barcelona, Barcelona, Catalonia 08035, Spain
- Peter Lakatos
- Nathan Kline Institute for Psychiatric Research, Orangeburg, NY, United States; Department of Psychiatry, New York University School of Medicine, New York, NY, United States
- Charles E Schroeder
- Nathan Kline Institute for Psychiatric Research, Orangeburg, NY, United States; Departments of Neurosurgery and Psychiatry, Columbia University College of Physicians and Surgeons, New York, NY, United States
- Rufin VanRullen
- Université Paul Sabatier, Toulouse, France; Centre de Recherche Cerveau et Cognition (CerCo), CNRS, UMR5549, Pavillon Baudot CHU Purpan, BP 25202, 31052 Toulouse Cedex, France
22
Vocal sequences suppress spiking in the bat auditory cortex while evoking concomitant steady-state local field potentials. Sci Rep 2016; 6:39226. [PMID: 27976691] [PMCID: PMC5156950] [DOI: 10.1038/srep39226]
Abstract
The mechanisms by which the mammalian brain copes with information from natural vocalization streams remain poorly understood. This article shows that in highly vocal animals, such as the bat species Carollia perspicillata, the spike activity of auditory cortex neurons does not track the temporal information flow enclosed in fast time-varying vocalization streams emitted by conspecifics. For example, leading syllables of so-called distress sequences (produced by bats subjected to duress) suppress cortical spiking to lagging syllables. Local field potentials (LFPs) recorded simultaneously with cortical spiking evoked by distress sequences carry multiplexed information, with response suppression occurring in low-frequency LFPs (i.e. 2–15 Hz) and steady-state LFPs occurring at frequencies that match the rate of energy fluctuations in the incoming sound streams (i.e. >50 Hz). Such steady-state LFPs could reflect underlying synaptic activity that does not necessarily lead to cortical spiking in response to natural fast time-varying vocal sequences.
23
Berger JI, Coomber B, Wallace MN, Palmer AR. Reductions in cortical alpha activity, enhancements in neural responses and impaired gap detection caused by sodium salicylate in awake guinea pigs. Eur J Neurosci 2016; 45:398-409. [PMID: 27862478] [PMCID: PMC5763375] [DOI: 10.1111/ejn.13474]
Abstract
Tinnitus chronically affects between 10 and 15% of the population but, despite its prevalence, the underlying mechanisms are still not properly understood. One experimental model involves administration of high doses of sodium salicylate, as this is known to reliably induce tinnitus in both humans and animals. Guinea pigs were implanted with chronic electrocorticography (ECoG) electrode arrays, with silver-ball electrodes placed on the dura over left and right auditory cortex. Two more electrodes were positioned over the cerebellum to monitor auditory brainstem responses (ABRs). We recorded resting-state and auditory evoked neural activity from awake animals before and 2 h following salicylate administration (350 mg/kg; i.p.). Large increases in click-evoked responses (> 100%) were evident across the whole auditory cortex, despite significant reductions in wave I ABR amplitudes (in response to 20 kHz tones), which are indicative of auditory nerve activity. In the same animals, significant decreases in 6-10 Hz spontaneous oscillations (alpha waves) were evident over dorsocaudal auditory cortex. We were also able to demonstrate for the first time that cortical evoked potentials can be inhibited by a preceding gap in background noise [gap-induced pre-pulse inhibition (PPI)], in a similar fashion to the gap-induced inhibition of the acoustic startle reflex that is used as a behavioural test for tinnitus. Furthermore, 2 h following salicylate administration, we observed significant deficits in PPI of cortical responses that were closely aligned with significant deficits in behavioural responses to the same stimuli. Together, these data are suggestive of neural correlates of tinnitus and oversensitivity to sound (hyperacusis).
Affiliation(s)
- Joel I Berger
- MRC Institute of Hearing Research, University Park, Nottingham, NG7 2RD, UK; School of Medicine, University of Nottingham, Nottingham, UK
- Ben Coomber
- MRC Institute of Hearing Research, University Park, Nottingham, NG7 2RD, UK; School of Medicine, University of Nottingham, Nottingham, UK
- Mark N Wallace
- MRC Institute of Hearing Research, University Park, Nottingham, NG7 2RD, UK; School of Medicine, University of Nottingham, Nottingham, UK
- Alan R Palmer
- MRC Institute of Hearing Research, University Park, Nottingham, NG7 2RD, UK; School of Medicine, University of Nottingham, Nottingham, UK
24
Ni R, Bender DA, Shanechi AM, Gamble JR, Barbour DL. Contextual effects of noise on vocalization encoding in primary auditory cortex. J Neurophysiol 2016; 117:713-727. [PMID: 27881720] [DOI: 10.1152/jn.00476.2016]
Abstract
Robust auditory perception plays a pivotal role in processing behaviorally relevant sounds, particularly with distractions from the environment. The neuronal coding enabling this ability, however, is still not well understood. In this study, we recorded single-unit activity from the primary auditory cortex (A1) of awake marmoset monkeys (Callithrix jacchus) while delivering conspecific vocalizations degraded by two different background noises: broadband white noise and vocalization babble. Noise effects on neural representation of target vocalizations were quantified by measuring the responses' similarity to those elicited by natural vocalizations as a function of signal-to-noise ratio. A clustering approach was used to describe the range of response profiles by reducing the population responses to a summary of four response classes (robust, balanced, insensitive, and brittle) under both noise conditions. This clustering approach revealed that, on average, approximately two-thirds of the neurons change their response class when encountering different noises. Therefore, the distortion induced by one particular masking background in single-unit responses is not necessarily predictable from that induced by another, suggesting the low likelihood of a unique group of noise-invariant neurons across different background conditions in A1. Regarding noise influence on neural activities, the brittle response group showed addition of spiking activity both within and between phrases of vocalizations relative to clean vocalizations, whereas the other groups generally showed spiking activity suppression within phrases, and the alteration between phrases was noise dependent. Overall, the variable single-unit responses, yet consistent response types, imply that primate A1 performs scene analysis through the collective activity of multiple neurons.
NEW & NOTEWORTHY The understanding of where and how auditory scene analysis is accomplished is of broad interest to neuroscientists. In this paper, we systematically investigated neuronal coding of multiple vocalizations degraded by two distinct noises at various signal-to-noise ratios in nonhuman primates. In the process, we uncovered heterogeneity of single-unit representations for different auditory scenes yet homogeneity of responses across the population.
Affiliation(s)
- Ruiye Ni
- Laboratory of Sensory Neuroscience and Neuroengineering, Department of Biomedical Engineering, Washington University in St. Louis, St. Louis, Missouri
- David A Bender
- Laboratory of Sensory Neuroscience and Neuroengineering, Department of Biomedical Engineering, Washington University in St. Louis, St. Louis, Missouri
- Amirali M Shanechi
- Laboratory of Sensory Neuroscience and Neuroengineering, Department of Biomedical Engineering, Washington University in St. Louis, St. Louis, Missouri
- Jeffrey R Gamble
- Laboratory of Sensory Neuroscience and Neuroengineering, Department of Biomedical Engineering, Washington University in St. Louis, St. Louis, Missouri
- Dennis L Barbour
- Laboratory of Sensory Neuroscience and Neuroengineering, Department of Biomedical Engineering, Washington University in St. Louis, St. Louis, Missouri
25
Lyzwa D, Herrmann JM, Wörgötter F. Natural Vocalizations in the Mammalian Inferior Colliculus are Broadly Encoded by a Small Number of Independent Multi-Units. Front Neural Circuits 2016; 9:91. [PMID: 26869890] [PMCID: PMC4740783] [DOI: 10.3389/fncir.2015.00091]
Abstract
How complex natural sounds are represented by the main converging center of the auditory midbrain, the central inferior colliculus, is an open question. We applied neural discrimination to determine the variation of detailed encoding of individual vocalizations across the best frequency gradient of the central inferior colliculus. The analysis was based on collective responses from several neurons. These multi-unit spike trains were recorded from guinea pigs exposed to a spectrotemporally rich set of eleven species-specific vocalizations. Spike trains of disparate units from the same recording were combined in order to investigate whether groups of multi-unit clusters represent the whole set of vocalizations more reliably than only one unit, and whether temporal response correlations between them facilitate an unambiguous neural representation of the vocalizations. We found a spatial distribution of the capability to accurately encode groups of vocalizations across the best frequency gradient. Different vocalizations are optimally discriminated at different locations of the best frequency gradient. Furthermore, groups of a few multi-unit clusters yield improved discrimination over only one multi-unit cluster between all tested vocalizations. However, temporal response correlations between units do not yield better discrimination. Our study is based on a large set of units of simultaneously recorded responses from several guinea pigs and electrode insertion positions. Our findings suggest a broadly distributed code for behaviorally relevant vocalizations in the mammalian inferior colliculus. Responses from a few non-interacting units are sufficient to faithfully represent the whole set of studied vocalizations with diverse spectrotemporal properties.
Affiliation(s)
- Dominika Lyzwa
- Max Planck Institute for Dynamics and Self-Organization, Göttingen, Germany
- Institute for Nonlinear Dynamics, Physics Department, Georg-August-University, Göttingen, Germany
- Bernstein Focus Neurotechnology, Göttingen, Germany
- J. Michael Herrmann
- Bernstein Focus Neurotechnology, Göttingen, Germany
- Institute of Perception, Action and Behavior, School of Informatics, University of Edinburgh, Edinburgh, UK
- Florentin Wörgötter
- Bernstein Focus Neurotechnology, Göttingen, Germany
- Institute for Physics - Biophysics, Georg-August-University, Göttingen, Germany
26
Honey C, Schnupp J. Neural Resolution of Formant Frequencies in the Primary Auditory Cortex of Rats. PLoS One 2015; 10:e0134078. [PMID: 26252382] [PMCID: PMC4529216] [DOI: 10.1371/journal.pone.0134078]
Abstract
Pulse-resonance sounds play an important role in animal communication and auditory object recognition, yet very little is known about the cortical representation of this class of sounds. In this study we shine light on one simple aspect: how well the firing rate of cortical neurons resolves the resonant ("formant") frequencies of vowel-like pulse-resonance sounds. We recorded neural responses in the primary auditory cortex (A1) of anesthetized rats to two-formant pulse-resonance sounds, and estimated their formant-resolving power using a statistical kernel smoothing method which takes into account the natural variability of cortical responses. While formant-tuning functions were diverse in structure across different penetrations, most were sensitive to changes in formant frequency, with a frequency resolution comparable to that reported for rat cochlear filters.
Affiliation(s)
- Jan Schnupp
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
27
Descending and tonotopic projection patterns from the auditory cortex to the inferior colliculus. Neuroscience 2015; 300:325-37. [DOI: 10.1016/j.neuroscience.2015.05.032]
28
High-field functional magnetic resonance imaging of vocalization processing in marmosets. Sci Rep 2015; 5:10950. [PMID: 26091254] [PMCID: PMC4473644] [DOI: 10.1038/srep10950]
Abstract
Vocalizations are behaviorally critical sounds, and this behavioral importance is reflected in the ascending auditory system, where conspecific vocalizations are increasingly over-represented at higher processing stages. Recent evidence suggests that, in macaques, this increasing selectivity for vocalizations might culminate in a cortical region that is densely populated by vocalization-preferring neurons. Such a region might be a critical node in the representation of vocal communication sounds, underlying the recognition of vocalization type, caller and social context. These results raise the questions of whether cortical specializations for vocalization processing exist in other species, their cortical location, and their relationship to the auditory processing hierarchy. To explore cortical specializations for vocalizations in another species, we performed high-field fMRI of the auditory cortex of a vocal New World primate, the common marmoset (Callithrix jacchus). Using a sparse imaging paradigm, we discovered a caudal-rostral gradient for the processing of conspecific vocalizations in marmoset auditory cortex, with regions of the anterior temporal lobe close to the temporal pole exhibiting the highest preference for vocalizations. These results demonstrate similar cortical specializations for vocalization processing in macaques and marmosets, suggesting that cortical specializations for vocal processing might have evolved before the lineages of these species diverged.
29
Montejo N, Noreña AJ. Dynamic representation of spectral edges in guinea pig primary auditory cortex. J Neurophysiol 2015; 113:2998-3012. [PMID: 25744885] [PMCID: PMC4416612] [DOI: 10.1152/jn.00785.2014]
Abstract
The central representation of a given acoustic motif is thought to be strongly context dependent, i.e., to rely on the spectrotemporal past and present of the acoustic mixture in which it is embedded. The present study investigated the cortical representation of spectral edges (i.e., where stimulus energy changes abruptly over frequency) and its dependence on stimulus duration and depth of the spectral contrast in guinea pig. We devised a stimulus ensemble composed of random tone pips with or without an attenuated frequency band (AFB) of variable depth. Additionally, the multitone ensemble with AFB was interleaved with periods of silence or with multitone ensembles without AFB. We have shown that the representation of the frequencies near but outside the AFB is greatly enhanced, whereas the representation of frequencies near and inside the AFB is strongly suppressed. These cortical changes depend on the depth of the AFB: although they are maximal for the largest depth of the AFB, they are also statistically significant for depths as small as 10 dB. Finally, the cortical changes are quick, occurring within a few seconds of stimulus ensemble presentation with AFB, and are very labile, disappearing within a few seconds after the presentation without AFB. Overall, this study demonstrates that the representation of spectral edges is dynamically enhanced in the auditory centers. These central changes may have important functional implications, particularly in noisy environments where they could contribute to preserving the central representation of spectral edges.
Affiliation(s)
- Noelia Montejo
- Laboratoire de Neurosciences Intégratives et Adaptatives, Aix Marseille Université, CNRS UMR 7260, Marseille, France
- Arnaud J Noreña
- Laboratoire de Neurosciences Intégratives et Adaptatives, Aix Marseille Université, CNRS UMR 7260, Marseille, France
30
Nakamoto KT, Shackleton TM, Magezi DA, Palmer AR. A function for binaural integration in auditory grouping and segregation in the inferior colliculus. J Neurophysiol 2014; 113:1819-30. [PMID: 25540219] [DOI: 10.1152/jn.00472.2014]
Abstract
Responses of neurons to binaural, harmonic complex stimuli in urethane-anesthetized guinea pig inferior colliculus (IC) are reported. To assess the binaural integration of harmonicity cues for sound segregation and grouping, responses were measured to harmonic complexes with different fundamental frequencies presented to each ear. Simultaneously gated harmonic stimuli with fundamental frequencies of 125 Hz and 145 Hz were presented to the left and right ears, respectively, and recordings made from 96 neurons with characteristic frequencies >2 kHz in the central nucleus of the IC. Of these units, 70 responded continuously throughout the stimulus and were excited by the stimulus at the contralateral ear. The stimulus at the ipsilateral ear excited (EE: 14%; 10/70), inhibited (EI: 33%; 23/70), or had no significant effect (EO: 53%; 37/70), defined by the effect on firing rate. The neurons phase locked to the temporal envelope at each ear to varying degrees depending on signal level. Many of the cells (predominantly EO) were dominated by the response to the contralateral stimulus. Another group (predominantly EI) synchronized to the contralateral stimulus and were suppressed by the ipsilateral stimulus in a phasic manner. A third group synchronized to the stimuli at both ears (predominantly EE). Finally, a group only responded when the waveform peaks from each ear coincided. We conclude that these groups of neurons represent different "streams" of information but exhibit modifications of the response rather than encoding a feature of the stimulus, like pitch.
Affiliation(s)
- Kyle T Nakamoto
- Medical Research Council Institute of Hearing Research, University Park, Nottingham, United Kingdom; Department of Anatomy and Neurobiology, Northeast Ohio Medical University, Rootstown, Ohio
- Trevor M Shackleton
- Medical Research Council Institute of Hearing Research, University Park, Nottingham, United Kingdom
- David A Magezi
- Laboratory for Cognitive and Neurological Sciences, Neurology Unit, Department of Medicine, Faculty of Science, University of Fribourg, Fribourg, Switzerland
- Alan R Palmer
- Medical Research Council Institute of Hearing Research, University Park, Nottingham, United Kingdom
31
Orton LD, Rees A. Intercollicular commissural connections refine the representation of sound frequency and level in the auditory midbrain. eLife 2014; 3. [PMID: 25406067] [PMCID: PMC4235006] [DOI: 10.7554/elife.03764]
Abstract
Connections unifying hemispheric sensory representations of vision and touch occur in cortex, but for hearing, commissural connections earlier in the pathway may be important. The brainstem auditory pathways course bilaterally to the inferior colliculi (ICs). Each IC represents one side of auditory space, but they are interconnected by a commissure. By deactivating one IC in guinea pig with cooling or microdialysis of procaine, and recording neural activity to sound in the other, we found that commissural input influences fundamental aspects of auditory processing. The areas of non-V frequency response areas (FRAs) were modulated, but the areas of almost all V-shaped FRAs were not. The supra-threshold sensitivity of rate-level functions decreased during deactivation, and the ability to signal changes in sound level was decremented. This commissural enhancement suggests the ICs should be viewed as a single entity in which the representation of sound in each is governed by the other.

The bilateral arrangement of our eyes and ears enables us to receive information from both sides of our body. This information is conveyed via various sensory pathways that take different routes through the brain to culminate in the cerebral hemispheres. The information is then processed in the brain's outer layer, which is called the cortex. In the visual system, information from both eyes is kept separate until it reaches the cortex. A similar arrangement exists for touch. However, hearing is unusual among our senses in that sounds undergo much more processing in the brainstem, which is located at the base of the brain, than other types of stimuli. Orton and Rees now show that, in contrast to vision and touch, information about sounds occurring to our left or right is refined by interactions between the two sides of the midbrain. To test for sideward interactions between the two limbs of the auditory pathway, electrodes were lowered into the brains of anesthetized guinea pigs so that neuronal responses to tones could be recorded. The electrodes were placed in the region of the midbrain that contains two structures called the inferior colliculi (meaning 'lower hills' in Latin). Each inferior colliculus predominantly receives inputs from the opposite ear. However, recordings made in one colliculus when the other was deactivated revealed that one colliculus normally alters the response of the other. This shows that there is an important sideward interaction between the two halves of the auditory pathway in the midbrain that refines how fundamental aspects of sound, such as its frequency and intensity, are processed. This represents a marked departure from our previous understanding of auditory processing in the mammalian brain, and opens up new lines of investigation into the functioning of the auditory system in health and disease.
Affiliation(s)
- Llwyd David Orton
- Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, United Kingdom
- Adrian Rees
- Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, United Kingdom
32

33
Abstract
The fundamental perceptual unit in hearing is the 'auditory object'. Similar to visual objects, auditory objects are the computational result of the auditory system's capacity to detect, extract, segregate and group spectrotemporal regularities in the acoustic environment; the multitude of acoustic stimuli around us together form the auditory scene. However, unlike the visual scene, resolving the component objects within the auditory scene crucially depends on their temporal structure. Neural correlates of auditory objects are found throughout the auditory system. However, neural responses do not become correlated with a listener's perceptual reports until the level of the cortex. The roles of different neural structures and the contribution of different cognitive states to the perception of auditory objects are not yet fully understood.
34
Steinschneider M, Nourski KV, Fishman YI. Representation of speech in human auditory cortex: is it special? Hear Res 2013; 305:57-73. [PMID: 23792076] [PMCID: PMC3818517] [DOI: 10.1016/j.heares.2013.05.013]
Abstract
Successful categorization of phonemes in speech requires that the brain analyze the acoustic signal along both spectral and temporal dimensions. Neural encoding of the stimulus amplitude envelope is critical for parsing the speech stream into syllabic units. Encoding of voice onset time (VOT) and place of articulation (POA), cues necessary for determining phonemic identity, occurs within shorter time frames. An unresolved question is whether the neural representation of speech is based on processing mechanisms that are unique to humans and shaped by learning and experience, or is based on rules governing general auditory processing that are also present in non-human animals. This question was examined by comparing the neural activity elicited by speech and other complex vocalizations in primary auditory cortex of macaques, who are limited vocal learners, with that in Heschl's gyrus, the putative location of primary auditory cortex in humans. Entrainment to the amplitude envelope is neither specific to humans nor to human speech. VOT is represented by responses time-locked to consonant release and voicing onset in both humans and monkeys. Temporal representation of VOT is observed both for isolated syllables and for syllables embedded in the more naturalistic context of running speech. The fundamental frequency of male speakers is represented by more rapid neural activity phase-locked to the glottal pulsation rate in both humans and monkeys. In both species, the differential representation of stop consonants varying in their POA can be predicted by the relationship between the frequency selectivity of neurons and the onset spectra of the speech sounds. These findings indicate that the neurophysiology of primary auditory cortex is similar in monkeys and humans despite their vastly different experience with human speech, and that Heschl's gyrus is engaged in general auditory, and not language-specific, processing. 
This article is part of a Special Issue entitled "Communication Sounds and the Brain: New Directions and Perspectives".
Affiliation(s)
- Mitchell Steinschneider
- Department of Neurology, Rose F. Kennedy Center, Room 322, 1300 Morris Park Avenue, Albert Einstein College of Medicine, Bronx, NY 10461, USA
- Department of Neuroscience, Rose F. Kennedy Center, Room 322, 1300 Morris Park Avenue, Albert Einstein College of Medicine, Bronx, NY 10461, USA
- Kirill V. Nourski
- Department of Neurosurgery, The University of Iowa, Iowa City, Iowa 52242, USA
- Yonatan I. Fishman
- Department of Neurology, Rose F. Kennedy Center, Room 322, 1300 Morris Park Avenue, Albert Einstein College of Medicine, Bronx, NY 10461, USA
35
Rode T, Hartmann T, Hubka P, Scheper V, Lenarz M, Lenarz T, Kral A, Lim HH. Neural representation in the auditory midbrain of the envelope of vocalizations based on a peripheral ear model. Front Neural Circuits 2013; 7:166. [PMID: 24155694] [PMCID: PMC3800787] [DOI: 10.3389/fncir.2013.00166]
Abstract
The auditory midbrain implant (AMI) consists of a single shank array (20 sites) for stimulation along the tonotopic axis of the central nucleus of the inferior colliculus (ICC) and has been safely implanted in deaf patients who cannot benefit from a cochlear implant (CI). The AMI improves lip-reading abilities and environmental awareness in the implanted patients. However, the AMI cannot achieve the high levels of speech perception possible with the CI. It appears the AMI can transmit sufficient spectral cues but with limited temporal cues required for speech understanding. Currently, the AMI uses a CI-based strategy, which was originally designed to stimulate each frequency region along the cochlea with amplitude-modulated pulse trains matching the envelope of the bandpass-filtered sound components. However, it is unclear if this type of stimulation with only a single site within each frequency lamina of the ICC can elicit sufficient temporal cues for speech perception. At least speech understanding in quiet is still possible with envelope cues as low as 50 Hz. Therefore, we investigated how ICC neurons follow the bandpass-filtered envelope structure of natural stimuli in ketamine-anesthetized guinea pigs. We identified a subset of ICC neurons that could closely follow the envelope structure (up to ~100 Hz) of a diverse set of species-specific calls, which was revealed by using a peripheral ear model to estimate the true bandpass-filtered envelopes observed by the brain. Although previous studies have suggested a complex neural transformation from the auditory nerve to the ICC, our data suggest that the brain maintains a robust temporal code in a subset of ICC neurons matching the envelope structure of natural stimuli. Clinically, these findings suggest that a CI-based strategy may still be effective for the AMI if the appropriate neurons are entrained to the envelope of the acoustic stimulus and can transmit sufficient temporal cues to higher centers.
Affiliation(s)
- Thilo Rode
- Department of Otorhinolaryngology, Hannover Medical University, Hannover, Germany
36
Suta D, Popelář J, Burianová J, Syka J. Cortical representation of species-specific vocalizations in Guinea pig. PLoS One 2013; 8:e65432. [PMID: 23785425 PMCID: PMC3681779 DOI: 10.1371/journal.pone.0065432] [Citation(s) in RCA: 10] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/31/2012] [Accepted: 04/30/2013] [Indexed: 11/18/2022] Open
Abstract
We investigated the representation of four typical guinea pig vocalizations in the auditory cortex (AI) of anesthetized guinea pigs, with the aim of comparing cortical data to data already published for identical calls in subcortical structures - the inferior colliculus (IC) and medial geniculate body (MGB). Like subcortical neurons, cortical neurons typically responded to many calls with a response time-locked to one or more temporal elements of the calls. The neuronal response patterns in the AI correlated well with the sound temporal envelope of chirp (an isolated short phrase), but correlated less well in the case of chutter and whistle (longer calls) or purr (a call with a fast repetition rate of phrases). Neuronal rate vs. characteristic frequency profiles provided only a coarse representation of the calls' frequency spectra. A comparison between the activity in the AI and that of subcortical structures showed a different transformation of the neuronal response patterns from the IC to the AI for individual calls: i) while the temporal representation of chirp remained unchanged, the representations of whistle and chutter were transformed at the thalamic level and the response to purr at the cortical level; ii) for the wideband calls (whistle, chirp) the rate representation of the call spectra was preserved in the AI and MGB at the level present in the IC, while in the case of low-frequency calls (chutter, purr), the representation was less precise in the AI and MGB than in the IC; iii) the difference in the response strength to natural and time-reversed whistle was found to be smaller in the AI than in the IC or MGB.
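The comparison above between neuronal response patterns and the sound temporal envelope can be quantified as a correlation between a PSTH and the call envelope. A minimal sketch using Pearson correlation and a simulated inhomogeneous-Poisson neuron; the metric choice, envelope, and firing-rate numbers are illustrative assumptions, not the study's actual analysis parameters:

```python
import numpy as np

def psth_envelope_correlation(spike_times, envelope, fs_env, duration):
    """Pearson correlation between a PSTH (binned at the envelope's
    sampling rate) and the stimulus amplitude envelope."""
    n_bins = int(round(duration * fs_env))
    psth, _ = np.histogram(spike_times, bins=n_bins, range=(0.0, duration))
    return float(np.corrcoef(psth, envelope[:n_bins])[0, 1])

# demo: a Poisson "neuron" whose firing rate follows a 4 Hz envelope
rng = np.random.default_rng(0)
fs_env, duration = 200.0, 2.0
t = np.arange(int(fs_env * duration)) / fs_env
envelope = 0.5 * (1.0 + np.sin(2 * np.pi * 4.0 * t))   # in [0, 1]
rate = 200.0 * envelope                                # spikes/s
counts = rng.poisson(rate / fs_env)                    # spikes per bin
spike_times = np.repeat(t, counts) + rng.uniform(0, 1.0 / fs_env,
                                                 counts.sum())
r = psth_envelope_correlation(spike_times, envelope, fs_env, duration)
```

A neuron that is time-locked to the envelope (as for chirp in this study) yields a clearly positive `r`, while a neuron firing independently of the envelope yields `r` near zero.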
Affiliation(s)
- Daniel Suta
- Department of Auditory Neuroscience, Institute of Experimental Medicine, Academy of Sciences of the Czech Republic, Prague, Czech Republic.
37
Gaucher Q, Huetz C, Gourévitch B, Laudanski J, Occelli F, Edeline JM. How do auditory cortex neurons represent communication sounds? Hear Res 2013; 305:102-12. [PMID: 23603138 DOI: 10.1016/j.heares.2013.03.011] [Citation(s) in RCA: 33] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/03/2012] [Revised: 03/18/2013] [Accepted: 03/26/2013] [Indexed: 11/30/2022]
Abstract
A major goal in auditory neuroscience is to characterize how communication sounds are represented at the cortical level. The present review aims at investigating the role of auditory cortex in the processing of speech, bird songs and other vocalizations, which all are spectrally and temporally highly structured sounds. Whereas earlier studies simply looked for neurons exhibiting higher firing rates to particular conspecific vocalizations over their modified, artificially synthesized versions, more recent studies determined the coding capacity of temporal spike patterns, which are prominent in primary and non-primary areas (and also in non-auditory cortical areas). In several cases, this information seems to be correlated with the behavioral performance of human or animal subjects, suggesting that spike-timing-based coding strategies might set the foundations of our perceptual abilities. Also, it is now clear that the responses of auditory cortex neurons are highly nonlinear and that their responses to natural stimuli cannot be predicted from their responses to artificial stimuli such as moving ripples and broadband noises. Since auditory cortex neurons cannot follow rapid fluctuations of the vocalization envelope, they only respond at specific time points during communication sounds, which can serve as temporal markers for integrating the temporal and spectral processing taking place at subcortical relays. Thus, the temporally sparse code of auditory cortex neurons can be considered a first step for generating high-level representations of communication sounds independent of the acoustic characteristics of these sounds. This article is part of a Special Issue entitled "Communication Sounds and the Brain: New Directions and Perspectives".
Affiliation(s)
- Quentin Gaucher
- Centre de Neurosciences Paris-Sud (CNPS), CNRS UMR 8195, Université Paris-Sud, Bâtiment 446, 91405 Orsay cedex, France