1. Wadle SL, Ritter TC, Wadle TTX, Hirtz JJ. Topography and Ensemble Activity in the Auditory Cortex of a Mouse Model of Fragile X Syndrome. eNeuro 2024; 11:ENEURO.0396-23.2024. PMID: 38627066; PMCID: PMC11097631; DOI: 10.1523/eneuro.0396-23.2024.
Abstract
Autism spectrum disorder (ASD) is often associated with social communication impairments and specific sound processing deficits, for example, problems in following speech in noisy environments. To investigate underlying neuronal processing defects located in the auditory cortex (AC), we performed two-photon Ca2+ imaging in FMR1 (fragile X messenger ribonucleoprotein 1) knock-out (KO) mice, a model for fragile X syndrome (FXS), the most common cause of hereditary ASD in humans. For primary AC (A1) and the anterior auditory field (AAF), topographic frequency representation was less ordered compared with control animals. We additionally analyzed ensemble AC activity in response to various sounds and found subfield-specific differences. In A1, ensemble correlations were lower in general, while in secondary AC (A2), correlations were higher in response to complex sounds, but not to pure tones. Furthermore, sound specificity of ensemble activity was decreased in AAF. Repeating these experiments 1 week later revealed no major differences regarding representational drift. Nevertheless, we found subfield- and genotype-specific changes in ensemble correlation values between the two time points, hinting at alterations in network stability in FMR1 KO mice. These detailed insights into AC network activity and topography in FMR1 KO mice add to the understanding of auditory processing defects in FXS.
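The ensemble-correlation comparisons described in this abstract reduce, at their core, to correlating population response vectors across trials or sounds. A minimal stdlib-only Python sketch of that core computation (function names and toy data are illustrative, not the authors' analysis pipeline):

```python
import math

def pearson(a, b):
    """Pearson correlation between two equal-length response vectors."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    var_a = sum((x - ma) ** 2 for x in a)
    var_b = sum((y - mb) ** 2 for y in b)
    return cov / math.sqrt(var_a * var_b)

def mean_ensemble_correlation(trials):
    """Mean pairwise correlation of population response vectors across trials."""
    pairs = [(i, j) for i in range(len(trials)) for j in range(i + 1, len(trials))]
    return sum(pearson(trials[i], trials[j]) for i, j in pairs) / len(pairs)

# Three repeats of a similar 4-neuron population response correlate highly
trials = [[0.0, 1.0, 2.0, 3.0], [0.1, 1.1, 2.0, 2.9], [0.0, 0.9, 2.1, 3.1]]
print(round(mean_ensemble_correlation(trials), 2))
```

Lower values of such a measure for FMR1 KO ensembles would correspond to the reduced A1 correlations the abstract reports.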
Affiliation(s)
- Simon L Wadle
- Physiology of Neuronal Networks, Department of Biology, RPTU University of Kaiserslautern-Landau, Kaiserslautern D-67663, Germany
- Tamara C Ritter
- Physiology of Neuronal Networks, Department of Biology, RPTU University of Kaiserslautern-Landau, Kaiserslautern D-67663, Germany
- Tatjana T X Wadle
- Physiology of Neuronal Networks, Department of Biology, RPTU University of Kaiserslautern-Landau, Kaiserslautern D-67663, Germany
- Jan J Hirtz
- Physiology of Neuronal Networks, Department of Biology, RPTU University of Kaiserslautern-Landau, Kaiserslautern D-67663, Germany
2. Martin A, Souffi S, Huetz C, Edeline JM. Can Extensive Training Transform a Mouse into a Guinea Pig? An Evaluation Based on the Discriminative Abilities of Inferior Colliculus Neurons. Biology 2024; 13:92. PMID: 38392310; PMCID: PMC10886615; DOI: 10.3390/biology13020092.
Abstract
Humans and animals maintain accurate discrimination between communication sounds in the presence of loud sources of background noise. In previous studies performed in anesthetized guinea pigs, we showed that, in the auditory pathway, the highest discriminative abilities between conspecific vocalizations were found in the inferior colliculus. Here, we trained CBA/J mice in a Go/No-Go task to discriminate between two similar guinea pig whistles, first in quiet conditions, then in two types of noise, a stationary noise and a chorus noise, at three SNRs. Control mice were passively exposed to the same number of whistles as trained mice. After three months of extensive training, inferior colliculus (IC) neurons were recorded under anesthesia and the responses were quantified as in our previous studies. In quiet, the mean values of the firing rate, temporal reliability, and mutual information obtained from trained mice were higher than those from the exposed mice and the guinea pigs. In stationary and chorus noise, there were only a few differences between the trained mice and the guinea pigs, and the lowest mean values of these parameters were found in the exposed mice. These results suggest that behavioral training can trigger plasticity in the IC that allows mouse neurons to reach guinea-pig-like discrimination abilities.
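Mutual information, one of the response metrics named above, quantifies in bits how well discrete responses (e.g., binned spike counts) distinguish the stimuli. A minimal plug-in estimator as a hedged illustration, not the authors' exact method (binning and variable names are mine):

```python
import math
from collections import Counter

def mutual_information(stimuli, responses):
    """Plug-in estimate of I(S;R) in bits from paired discrete observations."""
    n = len(stimuli)
    p_s = Counter(stimuli)            # stimulus occurrence counts
    p_r = Counter(responses)          # response occurrence counts
    p_sr = Counter(zip(stimuli, responses))  # joint counts
    mi = 0.0
    for (s, r), c in p_sr.items():
        # p(s,r) * log2( p(s,r) / (p(s) p(r)) ), with counts rescaled by n
        mi += (c / n) * math.log2(c * n / (p_s[s] * p_r[r]))
    return mi

# Perfectly discriminative responses carry 1 bit for two equiprobable whistles
stims = ["whistle_A"] * 10 + ["whistle_B"] * 10
resps = [0] * 10 + [5] * 10
print(round(mutual_information(stims, resps), 3))  # → 1.0
```

Plug-in estimates like this are upward-biased for small trial counts, which is why neurophysiology studies typically apply bias corrections.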
Affiliation(s)
- Alexandra Martin
- Paris-Saclay Institute of Neuroscience (Neuro-PSI, UMR 9197), CNRS & Université Paris-Saclay, 91400 Saclay, France
- Samira Souffi
- Paris-Saclay Institute of Neuroscience (Neuro-PSI, UMR 9197), CNRS & Université Paris-Saclay, 91400 Saclay, France
- Chloé Huetz
- Paris-Saclay Institute of Neuroscience (Neuro-PSI, UMR 9197), CNRS & Université Paris-Saclay, 91400 Saclay, France
- Jean-Marc Edeline
- Paris-Saclay Institute of Neuroscience (Neuro-PSI, UMR 9197), CNRS & Université Paris-Saclay, 91400 Saclay, France
3. Grijseels DM, Prendergast BJ, Gorman JC, Miller CT. The neurobiology of vocal communication in marmosets. Ann N Y Acad Sci 2023; 1528:13-28. PMID: 37615212; PMCID: PMC10592205; DOI: 10.1111/nyas.15057.
Abstract
An increasingly popular animal model for studying the neural basis of social behavior, cognition, and communication is the common marmoset (Callithrix jacchus). Interest in this New World primate across neuroscience is now being driven by their proclivity for prosociality across their repertoire, high volubility, and rapid development, as well as their amenability to naturalistic testing paradigms and freely moving neural recording and imaging technologies. The complement of these characteristics sets marmosets up to be a powerful model of the primate social brain in the years to come. Here, we focus on vocal communication because it is the area that has both made the most progress and illustrates the prodigious potential of this species. We review the current state of the field with a focus on the various brain areas and networks involved in vocal perception and production, comparing the findings from marmosets to other animals, including humans.
Affiliation(s)
- Dori M Grijseels
- Cortical Systems and Behavior Laboratory, University of California, San Diego, La Jolla, California, USA
- Brendan J Prendergast
- Cortical Systems and Behavior Laboratory, University of California, San Diego, La Jolla, California, USA
- Julia C Gorman
- Cortical Systems and Behavior Laboratory, University of California, San Diego, La Jolla, California, USA
- Neurosciences Graduate Program, University of California, San Diego, La Jolla, California, USA
- Cory T Miller
- Cortical Systems and Behavior Laboratory, University of California, San Diego, La Jolla, California, USA
- Neurosciences Graduate Program, University of California, San Diego, La Jolla, California, USA
4. Angeloni CF, Młynarski W, Piasini E, Williams AM, Wood KC, Garami L, Hermundstad AM, Geffen MN. Dynamics of cortical contrast adaptation predict perception of signals in noise. Nat Commun 2023; 14:4817. PMID: 37558677; PMCID: PMC10412650; DOI: 10.1038/s41467-023-40477-6.
Abstract
Neurons throughout the sensory pathway adapt their responses depending on the statistical structure of the sensory environment. Contrast gain control is a form of adaptation in the auditory cortex, but it is unclear whether the dynamics of gain control reflect efficient adaptation, and whether they shape behavioral perception. Here, we trained mice to detect a target presented in background noise shortly after a change in the contrast of the background. The observed changes in cortical gain and behavioral detection followed the dynamics of a normative model of efficient contrast gain control; specifically, target detection and sensitivity improved slowly in low contrast, but degraded rapidly in high contrast. Auditory cortex was required for this task, and cortical responses were not only similarly affected by contrast but also predicted variability in behavioral performance. Combined, our results demonstrate that dynamic gain adaptation supports efficient coding in auditory cortex and predicts the perception of sounds in noise.
Affiliation(s)
- Christopher F Angeloni
- Psychology Graduate Group, University of Pennsylvania, Philadelphia, PA, USA
- Department of Otorhinolaryngology, University of Pennsylvania, Philadelphia, PA, USA
- Wiktor Młynarski
- Faculty of Biology, Ludwig Maximilian University of Munich, Munich, Germany
- Bernstein Center for Computational Neuroscience, Munich, Germany
- Eugenio Piasini
- International School for Advanced Studies (SISSA), Trieste, Italy
- Aaron M Williams
- Department of Otorhinolaryngology, University of Pennsylvania, Philadelphia, PA, USA
- Neuroscience Graduate Group, University of Pennsylvania, Philadelphia, PA, USA
- Katherine C Wood
- Department of Otorhinolaryngology, University of Pennsylvania, Philadelphia, PA, USA
- Linda Garami
- Department of Otorhinolaryngology, University of Pennsylvania, Philadelphia, PA, USA
- Ann M Hermundstad
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
- Maria N Geffen
- Department of Otorhinolaryngology, University of Pennsylvania, Philadelphia, PA, USA
- Neuroscience Graduate Group, University of Pennsylvania, Philadelphia, PA, USA
- Department of Neuroscience, Department of Neurology, University of Pennsylvania, Philadelphia, PA, USA
5. Schmitt TTX, Andrea KMA, Wadle SL, Hirtz JJ. Distinct topographic organization and network activity patterns of corticocollicular neurons within layer 5 auditory cortex. Front Neural Circuits 2023; 17:1210057. PMID: 37521334; PMCID: PMC10372447; DOI: 10.3389/fncir.2023.1210057.
Abstract
The auditory cortex (AC) modulates the activity of upstream pathways in the auditory brainstem via descending (corticofugal) projections. This feedback system plays an important role in the plasticity of the auditory system by shaping response properties of neurons in many subcortical nuclei. The majority of layer (L) 5 corticofugal neurons project to the inferior colliculus (IC). This corticocollicular (CC) pathway is involved in processing of complex sounds, auditory-related learning, and defense behavior. Partly due to their location in deep cortical layers, CC neuron population activity patterns within neuronal AC ensembles remain poorly understood. We employed two-photon imaging to record the activity of hundreds of L5 neurons in anesthetized as well as awake animals. CC neurons are more broadly tuned than other L5 pyramidal neurons and display weaker topographic order in core AC subfields. Network activity analyses revealed stronger clusters of CC neurons compared to non-CC neurons, which respond more reliably and integrate information over larger distances. However, results obtained from secondary auditory cortex (A2) differed considerably. Here, CC neurons displayed similar or higher topography, depending on the subset of neurons analyzed. Furthermore, specifically in A2, CC activity clusters formed in response to complex sounds were spatially more restricted compared to other L5 neurons. Our findings indicate distinct network mechanisms of CC neurons in analyzing sound properties with pronounced subfield differences, demonstrating that the topography of sound-evoked responses within AC is neuron-type dependent.
6. Liu W, Vicario DS. Dynamic encoding of phonetic categories in zebra finch auditory forebrain. Sci Rep 2023; 13:11172. PMID: 37430030; DOI: 10.1038/s41598-023-37982-5.
Abstract
Vocal communication requires the formation of acoustic categories to enable invariant representations of sounds despite superficial variations. Humans form acoustic categories for speech phonemes, enabling the listener to recognize words independent of speakers; animals can also discriminate speech phonemes. We investigated the neural mechanisms of this process using electrophysiological recordings from the zebra finch secondary auditory area, caudomedial nidopallium (NCM), during passive exposure to human speech stimuli consisting of two naturally spoken words produced by multiple speakers. Analysis of neural distance and decoding accuracy showed improvements in neural discrimination between word categories over the course of exposure, and this improved representation transferred to the same words by novel speakers. We conclude that NCM neurons formed generalized representations of word categories independent of speaker-specific variations that became more refined over the course of passive exposure. The discovery of this dynamic encoding process in NCM suggests a general processing mechanism for forming categorical representations of complex acoustic signals that humans share with other animals.
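Decoding accuracy of the kind reported in this abstract is typically estimated by cross-validating a simple classifier on trial-wise population responses. A stdlib-only sketch using a nearest-centroid decoder with leave-one-out cross-validation (the toy data and function names are mine, not the authors' analysis):

```python
import math

def nearest_centroid_accuracy(X, y):
    """Leave-one-out decoding accuracy of a nearest-centroid classifier.
    X: list of neural response vectors (one per trial); y: category labels."""
    correct = 0
    for i, (xi, yi) in enumerate(zip(X, y)):
        # Compute each category's centroid without the held-out trial
        cents = {}
        for label in set(y):
            rows = [x for j, (x, yy) in enumerate(zip(X, y)) if yy == label and j != i]
            cents[label] = [sum(col) / len(rows) for col in zip(*rows)]
        # Assign the held-out trial to the nearest centroid
        pred = min(cents, key=lambda lab: math.dist(xi, cents[lab]))
        correct += pred == yi
    return correct / len(y)

# Two well-separated "word category" response clusters decode perfectly
X = [[1.0, 0.1], [0.9, 0.2], [1.1, 0.0], [0.1, 1.0], [0.2, 0.9], [0.0, 1.1]]
y = ["word_A", "word_A", "word_A", "word_B", "word_B", "word_B"]
print(nearest_centroid_accuracy(X, y))  # → 1.0
```

Improvement of such an accuracy score over exposure, and its transfer to responses evoked by novel speakers, is the signature of category formation the study describes.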
Affiliation(s)
- Wanyi Liu
- Department of Psychology, Rutgers, The State University of New Jersey, Piscataway, NJ, 08854, USA.
- David S Vicario
- Department of Psychology, Rutgers, The State University of New Jersey, Piscataway, NJ, 08854, USA.
7. van der Meer MA. Behavioral and neural generalization: Hitting the right notes. Neuron 2023; 111:1849-1851. PMID: 37348457; DOI: 10.1016/j.neuron.2023.05.025.
Abstract
Successful generalization to novel stimuli is a core goal of learning and memory systems, but how do we do it? In this issue of Neuron, Miller et al.1 use a novel auditory structure learning task to reveal neural and behavioral signatures of generalization.
8. McPherson MJ, McDermott JH. Relative pitch representations and invariance to timbre. Cognition 2023; 232:105327. PMID: 36495710; PMCID: PMC10016107; DOI: 10.1016/j.cognition.2022.105327.
Abstract
Information in speech and music is often conveyed through changes in fundamental frequency (f0), perceived by humans as "relative pitch". Relative pitch judgments are complicated by two facts. First, sounds can simultaneously vary in timbre due to filtering imposed by a vocal tract or instrument body. Second, relative pitch can be extracted in two ways: by measuring changes in constituent frequency components from one sound to another, or by estimating the f0 of each sound and comparing the estimates. We examined the effects of timbral differences on relative pitch judgments, and whether any invariance to timbre depends on whether judgments are based on constituent frequencies or their f0. Listeners performed up/down and interval discrimination tasks with pairs of spoken vowels, instrument notes, or synthetic tones, synthesized to be either harmonic or inharmonic. Inharmonic sounds lack a well-defined f0, such that relative pitch must be extracted from changes in individual frequencies. Pitch judgments were less accurate when vowels/instruments were different compared to when they were the same, and were biased by the associated timbre differences. However, this bias was similar for harmonic and inharmonic sounds, and was observed even in conditions where judgments of harmonic sounds were based on f0 representations. Relative pitch judgments are thus not invariant to timbre, even when timbral variation is naturalistic, and when such judgments are based on representations of f0.
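Relative pitch extracted from f0 estimates is conventionally expressed on a log-frequency scale, in semitones. A one-line illustration of that convention (not drawn from the paper's materials):

```python
import math

def relative_pitch_semitones(f0_a, f0_b):
    """Signed pitch change from sound A to sound B, in semitones (12 per octave)."""
    return 12 * math.log2(f0_b / f0_a)

# An octave up (f0 doubling) is +12 semitones
print(relative_pitch_semitones(220.0, 440.0))  # → 12.0
```

On this scale, up/down judgments reduce to the sign of the value, and interval judgments to its magnitude; for inharmonic sounds, where no f0 is defined, the same formula can only be applied to individual frequency components.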
Affiliation(s)
- Malinda J McPherson
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, United States of America; Program in Speech and Hearing Biosciences and Technology, Harvard University, Boston, MA 02115, United States of America; McGovern Institute for Brain Research, MIT, Cambridge, MA 02139, United States of America.
- Josh H McDermott
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, United States of America; Program in Speech and Hearing Biosciences and Technology, Harvard University, Boston, MA 02115, United States of America; McGovern Institute for Brain Research, MIT, Cambridge, MA 02139, United States of America; Center for Brains Minds and Machines, MIT, Cambridge, MA 02139, United States of America
9. Robotka H, Thomas L, Yu K, Wood W, Elie JE, Gahr M, Theunissen FE. Sparse ensemble neural code for a complete vocal repertoire. Cell Rep 2023; 42:112034. PMID: 36696266; PMCID: PMC10363576; DOI: 10.1016/j.celrep.2023.112034.
Abstract
The categorization of animal vocalizations into distinct behaviorally relevant groups for communication is an essential operation that must be performed by the auditory system. This auditory object recognition is a difficult task that requires selectivity to the group-identifying acoustic features and invariance to renditions within each group. We find that small ensembles of auditory neurons in the forebrain of a social songbird can code the bird's entire vocal repertoire (∼10 call types). Ensemble neural discrimination is not, however, correlated with single-unit selectivity, but instead with how well the joint single-unit tunings to characteristic spectro-temporal modulations span the acoustic subspace optimized for the discrimination of call types. Thus, akin to face recognition in the visual system, call type recognition in the auditory system is based on a sparse code representing a small number of high-level features and not on highly selective grandmother neurons.
Affiliation(s)
- H Robotka
- Max Planck Institute for Ornithology, Seewiesen, Germany
- L Thomas
- University of California, Berkeley, Helen Wills Neuroscience Institute, Berkeley, CA, USA
- K Yu
- University of California, Berkeley, Helen Wills Neuroscience Institute, Berkeley, CA, USA
- W Wood
- University of California, Berkeley, Helen Wills Neuroscience Institute, Berkeley, CA, USA
- J E Elie
- University of California, Berkeley, Helen Wills Neuroscience Institute, Berkeley, CA, USA
- M Gahr
- Max Planck Institute for Ornithology, Seewiesen, Germany
- F E Theunissen
- Max Planck Institute for Ornithology, Seewiesen, Germany; University of California, Berkeley, Helen Wills Neuroscience Institute, Berkeley, CA, USA; Department of Psychology and Integrative Biology, University of California, Berkeley, Berkeley, CA, USA
10. Suri H, Rothschild G. Enhanced stability of complex sound representations relative to simple sounds in the auditory cortex. eNeuro 2022; 9:ENEURO.0031-22.2022. PMID: 35868858; PMCID: PMC9347310; DOI: 10.1523/eneuro.0031-22.2022.
Abstract
Typical everyday sounds, such as those of speech or running water, are spectrotemporally complex. The ability to recognize complex sounds (CxS) and their associated meaning is presumed to rely on their stable neural representations across time. The auditory cortex is critical for processing of CxS, yet little is known of the degree of stability of auditory cortical representations of CxS across days. Previous studies have shown that the auditory cortex represents CxS identity with a substantial degree of invariance to basic sound attributes such as frequency. We therefore hypothesized that auditory cortical representations of CxS are more stable across days than those of sounds that lack spectrotemporal structure, such as pure tones (PTs). To test this hypothesis, we recorded responses of identified L2/3 auditory cortical excitatory neurons to both PTs and CxS across days using two-photon calcium imaging in awake mice. Auditory cortical neurons showed significant daily changes of responses to both types of sounds, yet responses to CxS exhibited significantly lower rates of daily change than those of PTs. Furthermore, daily changes in response profiles to PTs tended to be more stimulus-specific, reflecting changes in sound selectivity, as compared to changes of CxS responses. Lastly, the enhanced stability of responses to CxS was evident across longer time intervals as well. Together, these results suggest that spectrotemporally complex sounds are more stably represented in the auditory cortex across time than PTs. These findings support a role of the auditory cortex in representing CxS identity across time.
Significance Statement: The ability to recognize everyday complex sounds such as those of speech or running water is presumed to rely on their stable neural representations. Yet, little is known of the degree of stability of single-neuron sound responses across days. As the auditory cortex is critical for complex sound perception, we hypothesized that the auditory cortical representations of complex sounds are relatively stable across days. To test this, we recorded sound responses of identified auditory cortical neurons across days in awake mice. We found that auditory cortical responses to complex sounds are significantly more stable across days as compared to those of simple pure tones. These findings support a role of the auditory cortex in representing complex sound identity across time.
Affiliation(s)
- Harini Suri
- Department of Psychology, University of Michigan, Ann Arbor, MI, 48109, USA
- Gideon Rothschild
- Department of Psychology, University of Michigan, Ann Arbor, MI, 48109, USA
- Kresge Hearing Research Institute and Department of Otolaryngology - Head and Neck Surgery, University of Michigan, Ann Arbor, MI 48109, USA
11. Lakunina AA, Menashe N, Jaramillo S. Contributions of Distinct Auditory Cortical Inhibitory Neuron Types to the Detection of Sounds in Background Noise. eNeuro 2022; 9:ENEURO.0264-21.2021. PMID: 35168950; PMCID: PMC8906447; DOI: 10.1523/eneuro.0264-21.2021.
Abstract
The ability to separate background noise from relevant acoustic signals is essential for appropriate sound-driven behavior in natural environments. Examples of this separation are apparent in the auditory system, where neural responses to behaviorally relevant stimuli become increasingly noise invariant along the ascending auditory pathway. However, the mechanisms that underlie this reduction in responses to background noise are not well understood. To address this gap in knowledge, we first evaluated the effects of auditory cortical inactivation on mice of both sexes trained to perform a simple auditory signal-in-noise detection task and found that outputs from the auditory cortex are important for the detection of auditory stimuli in noisy environments. Next, we evaluated the contributions of the two most common cortical inhibitory cell types, parvalbumin-expressing (PV+) and somatostatin-expressing (SOM+) interneurons, to the perception of masked auditory stimuli. We found that inactivation of either PV+ or SOM+ cells resulted in a reduction in the ability of mice to determine the presence of auditory stimuli masked by noise. These results indicate that a disruption of auditory cortical network dynamics by either of these two types of inhibitory cells is sufficient to impair the ability to separate acoustic signals from noise.
Affiliation(s)
- Anna A Lakunina
- Institute of Neuroscience and Department of Biology, University of Oregon, Eugene, Oregon 97403
- Nadav Menashe
- Institute of Neuroscience and Department of Biology, University of Oregon, Eugene, Oregon 97403
- Santiago Jaramillo
- Institute of Neuroscience and Department of Biology, University of Oregon, Eugene, Oregon 97403
12. Auerbach BD, Gritton HJ. Hearing in Complex Environments: Auditory Gain Control, Attention, and Hearing Loss. Front Neurosci 2022; 16:799787. PMID: 35221899; PMCID: PMC8866963; DOI: 10.3389/fnins.2022.799787.
Abstract
Listening in noisy or complex sound environments is difficult for individuals with normal hearing and can be a debilitating impairment for those with hearing loss. Extracting meaningful information from a complex acoustic environment requires the ability to accurately encode specific sound features under highly variable listening conditions and segregate distinct sound streams from multiple overlapping sources. The auditory system employs a variety of mechanisms to achieve this auditory scene analysis. First, neurons across levels of the auditory system exhibit compensatory adaptations to their gain and dynamic range in response to prevailing sound stimulus statistics in the environment. These adaptations allow for robust representations of sound features that are to a large degree invariant to the level of background noise. Second, listeners can selectively attend to a desired sound target in an environment with multiple sound sources. This selective auditory attention is another form of sensory gain control, enhancing the representation of an attended sound source while suppressing responses to unattended sounds. This review will examine both “bottom-up” gain alterations in response to changes in environmental sound statistics as well as “top-down” mechanisms that allow for selective extraction of specific sound features in a complex auditory scene. Finally, we will discuss how hearing loss interacts with these gain control mechanisms, and the adaptive and/or maladaptive perceptual consequences of this plasticity.
Affiliation(s)
- Benjamin D. Auerbach
- Department of Molecular and Integrative Physiology, Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana-Champaign, Urbana, IL, United States
- Neuroscience Program, University of Illinois at Urbana-Champaign, Urbana, IL, United States
- Howard J. Gritton
- Neuroscience Program, University of Illinois at Urbana-Champaign, Urbana, IL, United States
- Department of Comparative Biosciences, University of Illinois at Urbana-Champaign, Urbana, IL, United States
- Department of Bioengineering, University of Illinois at Urbana-Champaign, Urbana, IL, United States
13. Ruthig P, Schönwiesner M. Common principles in the lateralisation of auditory cortex structure and function for vocal communication in primates and rodents. Eur J Neurosci 2022; 55:827-845. PMID: 34984748; DOI: 10.1111/ejn.15590.
Abstract
This review summarises recent findings on the lateralisation of communicative sound processing in the auditory cortex (AC) of humans, non-human primates, and rodents. Functional imaging in humans has demonstrated a left hemispheric preference for some acoustic features of speech, but it is unclear to which degree this is caused by bottom-up acoustic feature selectivity or top-down modulation from language areas. Although non-human primates show a less pronounced functional lateralisation in AC, the properties of AC fields and behavioral asymmetries are qualitatively similar. Rodent studies demonstrate microstructural circuits that might underlie bottom-up acoustic feature selectivity in both hemispheres. Functionally, the left AC in the mouse appears to be specifically tuned to communication calls, whereas the right AC may have a more 'generalist' role. Rodents also show anatomical AC lateralisation, such as differences in size and connectivity. Several of these functional and anatomical characteristics are also lateralized in human AC. Thus, complex vocal communication processing shares common features among rodents and primates. We argue that a synthesis of results from humans, non-human primates, and rodents is necessary to identify the neural circuitry of vocal communication processing. However, data from different species and methods are often difficult to compare. Recent advances may enable better integration of methods across species. Efforts to standardise data formats and analysis tools would benefit comparative research and enable synergies between psychological and biological research in the area of vocal communication processing.
Affiliation(s)
- Philip Ruthig
- Faculty of Life Sciences, Leipzig University, Leipzig, Sachsen
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig
14. Souffi S, Nodal FR, Bajo VM, Edeline JM. When and How Does the Auditory Cortex Influence Subcortical Auditory Structures? New Insights About the Roles of Descending Cortical Projections. Front Neurosci 2021; 15:690223. PMID: 34413722; PMCID: PMC8369261; DOI: 10.3389/fnins.2021.690223.
Abstract
For decades, the corticofugal descending projections have been anatomically well described, but their functional role remains a puzzling question. In this review, we will first describe the contributions of neuronal networks in representing communication sounds in various types of degraded acoustic conditions, from the cochlear nucleus to the primary and secondary auditory cortex. In such situations, the discrimination abilities of collicular and thalamic neurons are clearly better than those of cortical neurons, although the latter remain very little affected by degraded acoustic conditions. Second, we will report the functional effects resulting from activating or inactivating corticofugal projections on functional properties of subcortical neurons. In general, modest effects have been observed in anesthetized and in awake, passively listening animals. In contrast, in behavioral tasks including challenging conditions, behavioral performance was severely reduced by removing or transiently silencing the corticofugal descending projections. This suggests that the discriminative abilities of subcortical neurons may be sufficient in many acoustic situations. It is only in particularly challenging situations, either due to the task difficulties and/or to the degraded acoustic conditions, that the corticofugal descending connections bring additional abilities. Here, we propose that it is both the top-down influences from the prefrontal cortex and those from the neuromodulatory systems that allow the cortical descending projections to impact behavioral performance by reshaping the functional circuitry of subcortical structures. We aim to propose potential scenarios to explain how, and under which circumstances, these projections impact subcortical processing and behavioral responses.
Affiliation(s)
- Samira Souffi
- Department of Integrative and Computational Neurosciences, Paris-Saclay Institute of Neuroscience (NeuroPSI), UMR CNRS 9197, Paris-Saclay University, Orsay, France
- Fernando R Nodal
- Department of Physiology, Anatomy and Genetics, Medical Sciences Division, University of Oxford, Oxford, United Kingdom
- Victoria M Bajo
- Department of Physiology, Anatomy and Genetics, Medical Sciences Division, University of Oxford, Oxford, United Kingdom
- Jean-Marc Edeline
- Department of Integrative and Computational Neurosciences, Paris-Saclay Institute of Neuroscience (NeuroPSI), UMR CNRS 9197, Paris-Saclay University, Orsay, France
15
Sparse Coding in Temporal Association Cortex Improves Complex Sound Discriminability. J Neurosci 2021; 41:7048-7064. [PMID: 34244361 DOI: 10.1523/jneurosci.3167-20.2021]
Abstract
The mouse auditory cortex comprises several auditory fields spanning the dorsoventral axis of the temporal lobe. The ventralmost auditory field is the temporal association cortex (TeA), which remains largely unstudied. Using Neuropixels probes, we simultaneously recorded from the primary auditory cortex (AUDp), secondary auditory cortex (AUDv), and TeA, characterizing neuronal responses to pure tones and frequency-modulated (FM) sweeps in awake, head-restrained female mice. Compared with AUDp and AUDv, single-unit (SU) responses to pure tones in TeA were sparser, delayed, and prolonged. Responses to FMs were also sparser. Population analysis showed that the sparser responses in TeA render it less sensitive to pure tones, yet more sensitive to FMs. When responses to pure tones were characterized under anesthesia, the distinct signature of TeA changed considerably compared with that in awake mice, implying that responses in TeA are strongly modulated by non-feedforward connections. Together, these findings provide a basic electrophysiological description of TeA as an integral part of sound processing along the cortical hierarchy.

SIGNIFICANCE STATEMENT: This is the first comprehensive characterization of auditory responses in the awake mouse temporal association cortex (TeA). The study provides the foundation for further investigation of TeA and its involvement in auditory learning, plasticity, and auditory-driven behaviors. It was conducted using state-of-the-art data collection tools, allowing simultaneous recording from multiple cortical regions and numerous neurons.
16
Causal inference in environmental sound recognition. Cognition 2021; 214:104627. [PMID: 34044231 DOI: 10.1016/j.cognition.2021.104627]
Abstract
Sound is caused by physical events in the world. Do humans infer these causes when recognizing sound sources? We tested whether the recognition of common environmental sounds depends on the inference of a basic physical variable - the source intensity (i.e., the power that produces a sound). A source's intensity can be inferred from the intensity it produces at the ear and its distance, which is normally conveyed by reverberation. Listeners could thus use intensity at the ear and reverberation to constrain recognition by inferring the underlying source intensity. Alternatively, listeners might separate these acoustic cues from their representation of a sound's identity in the interest of invariant recognition. We compared these two hypotheses by measuring recognition accuracy for sounds with typically low or high source intensity (e.g., pepper grinders vs. trucks) that were presented across a range of intensities at the ear or with reverberation cues to distance. The recognition of low-intensity sources (e.g., pepper grinders) was impaired by high presentation intensities or reverberation that conveyed distance, either of which imply high source intensity. Neither effect occurred for high-intensity sources. The results suggest that listeners implicitly use the intensity at the ear along with distance cues to infer a source's power and constrain its identity. The recognition of real-world sounds thus appears to depend upon the inference of their physical generative parameters, even generative parameters whose cues might otherwise be separated from the representation of a sound's identity.
17
Chiang CH, Lee J, Wang C, Williams AJ, Lucas TH, Cohen YE, Viventi J. A modular high-density μECoG system on macaque vlPFC for auditory cognitive decoding. J Neural Eng 2020; 17:046008. [PMID: 32498058 DOI: 10.1088/1741-2552/ab9986]
Abstract
OBJECTIVE: A fundamental goal of the auditory system is to parse the auditory environment into distinct perceptual representations. Auditory perception is mediated by the ventral auditory pathway, which includes the ventrolateral prefrontal cortex (vlPFC). Because large-scale recordings of auditory signals are quite rare, the spatiotemporal resolution of the neuronal code that underlies vlPFC's contribution to auditory perception has not been fully elucidated. We therefore developed a modular, chronic, high-resolution, multi-electrode array system with long-term viability, in order to identify the information that can be decoded from μECoG vlPFC signals. APPROACH: We molded three separate μECoG arrays into one and implanted this system in a non-human primate. A custom 3D-printed titanium chamber was mounted on the left hemisphere, and the molded 294-contact μECoG array was implanted subdurally over the vlPFC. μECoG activity was recorded while the monkey participated in a 'hearing-in-noise' task in which it reported hearing a 'target' vocalization against a background 'chorus' of vocalizations. We titrated task difficulty by varying the sound level of the target vocalization relative to the chorus (target-to-chorus ratio, TCr). MAIN RESULTS: We decoded the TCr and the monkey's behavioral choices from the μECoG signal, analyzing decoding accuracy as a function of the number of electrodes, spatial resolution, and time from implantation. Over a one-year period, we found significant decoding from individual electrodes, and accuracy increased significantly as more electrodes were decoded simultaneously. Decoding of behavioral choice was better than decoding of TCr. Finally, because the decoding accuracy of individual electrodes varied on a day-by-day basis, electrode arrays with high channel counts ensure robust decoding in the long term. SIGNIFICANCE: Our results demonstrate the utility of high-resolution, high-channel-count, chronic μECoG recording. We developed a surface electrode array that can be scaled to cover larger cortical areas without increasing the chamber footprint.
Affiliation(s)
- Chia-Han Chiang
- Department of Biomedical Engineering, Duke University, Durham, NC, United States of America. These authors contributed equally to this work
18
Tasaka GI, Feigin L, Maor I, Groysman M, DeNardo LA, Schiavo JK, Froemke RC, Luo L, Mizrahi A. The Temporal Association Cortex Plays a Key Role in Auditory-Driven Maternal Plasticity. Neuron 2020; 107:566-579.e7. [PMID: 32473095 DOI: 10.1016/j.neuron.2020.05.004]
Abstract
Mother-infant bonding develops rapidly following parturition and is accompanied by changes in sensory perception and behavior. Here, we study how ultrasonic vocalizations (USVs) are represented in the brain of mothers. Using a mouse line that allows temporally controlled genetic access to active neurons, we find that the temporal association cortex (TeA) in mothers exhibits robust USV responses. Rabies tracing from USV-responsive neurons reveals extensive subcortical and cortical inputs into TeA. A particularly dominant cortical source of inputs is the primary auditory cortex (A1), suggesting strong A1-to-TeA connectivity. Chemogenetic silencing of USV-responsive neurons in TeA impairs auditory-driven maternal preference in a pup-retrieval assay. Furthermore, dense extracellular recordings from awake mice reveal changes of both single-neuron and population responses to USVs in TeA, improving discriminability of pup calls in mothers compared with naive females. These data indicate that TeA plays a key role in encoding and perceiving pup cries during motherhood.
Affiliation(s)
- Gen-Ichi Tasaka
- Department of Neurobiology, The Edmond and Lily Safra Center for Brain Sciences, The Hebrew University of Jerusalem, Jerusalem 91904, Israel
- Libi Feigin
- Department of Neurobiology, The Edmond and Lily Safra Center for Brain Sciences, The Hebrew University of Jerusalem, Jerusalem 91904, Israel
- Ido Maor
- Department of Neurobiology, The Edmond and Lily Safra Center for Brain Sciences, The Hebrew University of Jerusalem, Jerusalem 91904, Israel
- Maya Groysman
- Department of Neurobiology, The Edmond and Lily Safra Center for Brain Sciences, The Hebrew University of Jerusalem, Jerusalem 91904, Israel
- Laura A DeNardo
- Department of Biology, Howard Hughes Medical Institute, Stanford University, Stanford, CA 94305, USA
- Jennifer K Schiavo
- Skirball Institute for Biomolecular Medicine, Neuroscience Institute, and Department of Otolaryngology, New York University School of Medicine, New York, NY 10016, USA
- Robert C Froemke
- Skirball Institute for Biomolecular Medicine, Neuroscience Institute, and Department of Otolaryngology, New York University School of Medicine, New York, NY 10016, USA
- Liqun Luo
- Department of Biology, Howard Hughes Medical Institute, Stanford University, Stanford, CA 94305, USA
- Adi Mizrahi
- Department of Neurobiology, The Edmond and Lily Safra Center for Brain Sciences, The Hebrew University of Jerusalem, Jerusalem 91904, Israel
19
Noise-Sensitive But More Precise Subcortical Representations Coexist with Robust Cortical Encoding of Natural Vocalizations. J Neurosci 2020; 40:5228-5246. [PMID: 32444386 DOI: 10.1523/jneurosci.2731-19.2020]
Abstract
Humans and animals maintain accurate sound discrimination in the presence of loud background noise. It is commonly assumed that this ability relies on the robustness of auditory cortex responses. However, only a few attempts have been made to characterize neural discrimination of communication sounds masked by noise at each stage of the auditory system, and to quantify the noise effects on neuronal discrimination in terms of alterations in amplitude modulations. Here, we measured neural discrimination between communication sounds masked by a vocalization-shaped stationary noise from multiunit responses recorded in the cochlear nucleus, inferior colliculus, auditory thalamus, and primary and secondary auditory cortex at several signal-to-noise ratios (SNRs) in anesthetized male or female guinea pigs. Masking noise decreased sound discrimination by neuronal populations in each auditory structure, but collicular and thalamic populations showed better performance than cortical populations at each SNR. In contrast, in each auditory structure, discrimination by neuronal populations was only slightly decreased when tone-vocoded vocalizations were tested. These results shed new light on the specific contributions of subcortical structures to robust sound encoding, and suggest that distortion of the slow amplitude-modulation cues conveyed by communication sounds is one of the factors constraining neuronal discrimination at subcortical and cortical levels.

SIGNIFICANCE STATEMENT: Dissecting how auditory neurons discriminate communication sounds in noise is a major goal in auditory neuroscience. Robust sound coding in noise is often viewed as a specific property of cortical networks, although this remains to be demonstrated. Here, we tested the discrimination performance of neuronal populations at five levels of the auditory system in response to conspecific vocalizations masked by noise. In each acoustic condition, subcortical neurons discriminated target vocalizations better than cortical ones, and in each structure, the reduction in discrimination performance was related to the reduction in slow amplitude-modulation cues.
20
Experience-Dependent Coding of Time-Dependent Frequency Trajectories by Off Responses in Secondary Auditory Cortex. J Neurosci 2020; 40:4469-4482. [PMID: 32327533 PMCID: PMC7275866 DOI: 10.1523/jneurosci.2665-19.2020]
Abstract
Time-dependent frequency trajectories are an inherent feature of many behaviorally relevant sounds, such as species-specific vocalizations. Dynamic frequency trajectories, even in short sounds, often convey meaningful information, which may be used to differentiate sound categories. However, it is not clear which neural responses in the auditory cortical pathway are critical for conveying information about behaviorally relevant frequency trajectories, where they arise, and how they change with experience. Here, we uncover tuning to subtle variations in frequency trajectories in the auditory cortex of female mice. We found that auditory cortical responses could be modulated by variations in a pure-tone trajectory as small as 1/24th of an octave, comparable to what has been reported in primates. In particular, late spiking after the end of a sound stimulus was more often sensitive to the sound's subtle frequency variation than spiking during the sound. Such "Off" responses in the adult secondary auditory cortex (A2), but not those in core auditory cortex, were plastic in a way that may enhance the representation of a newly acquired, behaviorally relevant sound category. We illustrate this with the maternal mouse paradigm for natural vocalization learning. By using an ethologically inspired paradigm to drive auditory responses in higher-order neurons, our results demonstrate that mouse auditory cortex can track fine frequency changes, which allows A2 Off responses in particular to better respond to pitch trajectories that distinguish behaviorally relevant, natural sound categories.

SIGNIFICANCE STATEMENT: A whistle's pitch conveys meaning to its listener, as when dogs learn that distinct pitch trajectories whistled by their owner signify specific commands. Many species use pitch trajectories in their own vocalizations to distinguish sound categories, as in tonal human languages such as Mandarin. How and where auditory neural activity encodes these pitch trajectories as their meaning is learned is not well understood, especially for short-duration sounds. We studied this in mice, where infants use ultrasonic whistles to communicate with adults. We found that late neural firing after a sound ends can be tuned to how the pitch changes in time, and that this response in a secondary auditory cortical field changes with experience as a pitch change's meaning is acquired.
21
Neophytou D, Oviedo HV. Using Neural Circuit Interrogation in Rodents to Unravel Human Speech Decoding. Front Neural Circuits 2020; 14:2. [PMID: 32116569 PMCID: PMC7009302 DOI: 10.3389/fncir.2020.00002]
Abstract
The neural circuits responsible for social communication are among the least understood in the brain. Human studies have made great progress in advancing our understanding of the global computations required for processing speech, and animal models offer the opportunity to discover evolutionarily conserved mechanisms for decoding these signals. In this review article, we describe some of the most well-established speech-decoding computations from human studies and describe animal research designed to reveal potential circuit mechanisms underlying these processes. Human and animal brains must perform the challenging tasks of rapidly recognizing, categorizing, and assigning communicative importance to sounds in a noisy environment. The instructions for these functions lie in the precise connections neurons make with one another. Identifying circuit motifs in the auditory cortices and linking them to communicative functions is therefore pivotal. We review recent advances in human recordings showing that the most basic unit of speech decoded by neurons is the phoneme, and consider circuit-mapping studies in rodents that have suggested connectivity schemes capable of achieving this. Finally, we discuss other potentially important processing features in humans, such as lateralization, sensitivity to fine temporal features, and hierarchical processing. The goal is for animal studies to investigate the neurophysiological and anatomical pathways responsible for establishing behavioral phenotypes that are shared between humans and animals. This can be accomplished by establishing the cell types, connectivity patterns, genetic pathways, and critical periods that are relevant to the development and function of social communication.
Affiliation(s)
- Demetrios Neophytou
- Biology Department, The City College of New York, New York, NY, United States
- Hysell V Oviedo
- Biology Department, The City College of New York, New York, NY, United States
- CUNY Graduate Center, New York, NY, United States
22
Elie JE, Theunissen FE. Invariant neural responses for sensory categories revealed by the time-varying information for communication calls. PLoS Comput Biol 2019; 15:e1006698. [PMID: 31557151 PMCID: PMC6762074 DOI: 10.1371/journal.pcbi.1006698]
Abstract
Although information-theoretic approaches have been used extensively in the analysis of the neural code, they have not yet been used to describe how information accumulates over time while sensory systems categorize dynamic sensory stimuli such as speech sounds or visual objects. Here, we present a novel method to estimate the cumulative information for stimuli or categories. We further define a time-varying categorical information index that, by comparing the information obtained for stimuli versus categories of these same stimuli, quantifies invariant neural representations. We use these methods to investigate the dynamic properties of avian cortical auditory neurons recorded in zebra finches that were listening to a large set of call stimuli sampled from the complete vocal repertoire of this species. We found that the time-varying rates carry 5 times more information than the mean firing rates, even in the first 100 ms. We also found that cumulative information has slow time constants (100-600 ms) relative to the typical integration time of single neurons, reflecting the fact that the behaviorally informative features of auditory objects are time-varying sound patterns. When we correlated firing rates and information values, we found that average information correlates with average firing rate, but that the higher rates found in the onset response yielded information values similar to those of the lower rates found in the sustained response: the onset and sustained responses of avian cortical auditory neurons provide similar levels of independent information about call identity and call-type. Finally, our information measures allowed us to rigorously define categorical neurons; these categorical neurons show a high degree of invariance for vocalizations within a call-type. Peak invariance is found around 150 ms after stimulus onset. Surprisingly, call-type invariant neurons were found in both primary and secondary avian auditory areas.
Just as the recognition of faces requires neural representations that are invariant to scale and rotation, the recognition of behaviorally relevant auditory objects, such as spoken words, requires neural representations that are invariant to the speaker uttering the word and to his or her location. Here, we used information theory to investigate the time course of the neural representation of bird communication calls and of behaviorally relevant categories of these same calls: the call-types of the bird’s repertoire. We found that neurons in both the primary and secondary avian auditory cortex exhibit invariant responses to call renditions within a call-type, suggestive of a potential role for extracting the meaning of these communication calls. We also found that time plays an important role: first, neural responses carry significantly more information when represented by temporal patterns calculated at the small time scale of 10 ms than when measured as average rates and, second, this information accumulates in a non-redundant fashion up to long integration times of 600 ms. This rich temporal neural representation is matched to the temporal richness found in the communication calls of this species.
Affiliation(s)
- Julie E. Elie
- Helen Wills Neuroscience Institute, University of California Berkeley, Berkeley, California, United States of America
- Department of Bioengineering, University of California Berkeley, Berkeley, California, United States of America
- Frédéric E. Theunissen
- Helen Wills Neuroscience Institute, University of California Berkeley, Berkeley, California, United States of America
- Department of Psychology, University of California Berkeley, Berkeley, California, United States of America
23
Kell AJE, McDermott JH. Invariance to background noise as a signature of non-primary auditory cortex. Nat Commun 2019; 10:3958. [PMID: 31477711 PMCID: PMC6718388 DOI: 10.1038/s41467-019-11710-y]
Abstract
Despite well-established anatomical differences between primary and non-primary auditory cortex, the associated representational transformations have remained elusive. Here we show that primary and non-primary auditory cortex are differentiated by their invariance to real-world background noise. We measured fMRI responses to natural sounds presented in isolation and in real-world noise, quantifying invariance as the correlation between the two responses for individual voxels. Non-primary areas were substantially more noise-invariant than primary areas. This difference occurred for both speech and non-speech sounds and was unaffected by a concurrent demanding visual task, suggesting that the observed invariance is not specific to speech processing and is robust to inattention. The difference was most pronounced for real-world background noise: both primary and non-primary areas were relatively robust to simple types of synthetic noise. Our results suggest a general representational transformation between auditory cortical stages, illustrating a representational consequence of hierarchical organization in the auditory system.
Affiliation(s)
- Alexander J E Kell
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA, 02139, USA.
- McGovern Institute for Brain Research, MIT, Cambridge, MA, 02139, USA.
- Center for Brains, Minds, and Machines, MIT, Cambridge, MA, 02139, USA.
- Zuckerman Institute of Mind, Brain, and Behavior, Columbia University, New York, NY, 10027, USA.
- Josh H McDermott
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA, 02139, USA.
- McGovern Institute for Brain Research, MIT, Cambridge, MA, 02139, USA.
- Center for Brains, Minds, and Machines, MIT, Cambridge, MA, 02139, USA.
- Program in Speech and Hearing Biosciences and Technology, Harvard University, Boston, MA, USA.
24
Xu N, Luo L, Wang Q, Li L. Binaural unmasking of the accuracy of envelope-signal representation in rat auditory cortex but not auditory midbrain. Hear Res 2019; 377:224-233. [PMID: 30991272 DOI: 10.1016/j.heares.2019.04.003]
Abstract
Accurate neural representations of acoustic signals under noisy conditions are critical for animals' survival. Detecting a signal against background noise can be improved by binaural hearing, particularly when an interaural-time-difference (ITD) disparity is introduced between the signal and the noise, a phenomenon known as binaural unmasking. Previous studies have mainly focused on the binaural unmasking effect on response magnitudes, and it is not clear whether binaural unmasking affects the accuracy of central representations of target acoustic signals, or what the relative contributions of different central auditory structures to this accuracy are. Frequency-following responses (FFRs), which are sustained phase-locked neural activities, can be used to measure the accuracy of signal representation. Using intracranial recordings of local field potentials, this study assessed whether the binaural unmasking effects include an improvement in the accuracy of neural representations of sound-envelope signals in the rat inferior colliculus (IC) and/or auditory cortex (AC). The results showed that (1) when a narrow-band noise was presented binaurally, the stimulus-response (S-R) coherence of the FFRs to the envelope (FFRenvelope) of the narrow-band noise recorded in the IC was higher than that recorded in the AC; (2) presenting a broad-band masking noise caused a larger reduction of the S-R coherence for FFRenvelope in the IC than in the AC; and (3) introducing an ITD disparity between the narrow-band signal noise and the broad-band masking noise did not affect the IC S-R coherence, but enhanced both the AC S-R coherence and the coherence between the IC FFRenvelope and the AC FFRenvelope. Thus, although the accuracy of envelope-signal representation in the AC is lower than that in the IC, it can be binaurally unmasked, indicating a binaural-unmasking mechanism that is formed during signal transmission from the IC to the AC.
Affiliation(s)
- Na Xu
- School of Psychological and Cognitive Sciences, Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, 100080, China
- Lu Luo
- School of Psychological and Cognitive Sciences, Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, 100080, China
- Qian Wang
- School of Psychological and Cognitive Sciences, Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, 100080, China; Beijing Key Laboratory of Epilepsy, Epilepsy Center, Department of Functional Neurosurgery, Sanbo Brain Hospital, Capital Medical University, Beijing, 100093, China
- Liang Li
- School of Psychological and Cognitive Sciences, Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, 100080, China; Speech and Hearing Research Center, Key Laboratory on Machine Perception (Ministry of Education), Peking University, Beijing, 100871, China; Beijing Institute for Brain Disorders, Beijing, 100096, China
25
Gervain J, Geffen MN. Efficient Neural Coding in Auditory and Speech Perception. Trends Neurosci 2019; 42:56-65. [PMID: 30297085 PMCID: PMC6542557 DOI: 10.1016/j.tins.2018.09.004]
Abstract
Speech has long been recognized as 'special'. Here, we suggest that one of the reasons for speech being special is that our auditory system has evolved to encode it in an efficient, optimal way. The theory of efficient neural coding argues that our perceptual systems have evolved to encode environmental stimuli in the most efficient way. Mathematically, this can be achieved if the optimally efficient codes match the statistics of the signals they represent. Experimental evidence suggests that the auditory code is optimal in this mathematical sense: statistical properties of speech closely match response properties of the cochlea, the auditory nerve, and the auditory cortex. Even more interestingly, these results may be linked to phenomena in auditory and speech perception.
Affiliation(s)
- Judit Gervain
- Laboratoire Psychologie de la Perception, Université Paris Descartes, Paris, France; Laboratoire Psychologie de la Perception, CNRS, Paris, France
- Maria N Geffen
- Departments of Otorhinolaryngology, Neuroscience and Neurology, University of Pennsylvania, Philadelphia, PA, USA
26
Sound identity is represented robustly in auditory cortex during perceptual constancy. Nat Commun 2018; 9:4786. [PMID: 30429465 PMCID: PMC6235866 DOI: 10.1038/s41467-018-07237-3]
Abstract
Perceptual constancy requires neural representations that are selective for object identity, but also tolerant across identity-preserving transformations. How such representations arise in the brain and support perception remains unclear. Here, we study tolerant representation of sound identity in the auditory system by recording neural activity in the auditory cortex of ferrets during perceptual constancy. Ferrets generalize vowel identity across variations in fundamental frequency, sound level and location, while neurons represent sound identity robustly across acoustic variations. Stimulus features are encoded with distinct time-courses in all conditions; however, encoding of sound identity is delayed when animals fail to generalize and during passive listening. Neurons also encode information about task-irrelevant sound features, as well as animals' choices and accuracy, while population decoding out-performs animals' behavior. Our results show that during perceptual constancy, sound identity is represented robustly in auditory cortex across widely varying conditions, and behavioral generalization requires conserved timing of identity information.

Perceptual constancy requires neural representations selective for object identity, yet tolerant of identity-preserving transformations. Here, the authors show that sound identity is represented robustly in auditory cortex and that behavioral generalization requires precise timing of identity information.
Collapse
|
27
|
A Hierarchy of Time Scales for Discriminating and Classifying the Temporal Shape of Sound in Three Auditory Cortical Fields. J Neurosci 2018; 38:6967-6982. [PMID: 29954851 DOI: 10.1523/jneurosci.2871-17.2018] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/04/2017] [Revised: 05/29/2018] [Accepted: 06/17/2018] [Indexed: 11/21/2022] Open
Abstract
Auditory cortex is essential for mammals, including rodents, to detect temporal "shape" cues in the sound envelope, but it remains unclear how different cortical fields may contribute to this ability (Lomber and Malhotra, 2008; Threlkeld et al., 2008). Previously, we found that precise spiking patterns provide a potential neural code for temporal shape cues in the sound envelope in the primary auditory field (A1), ventral auditory field (VAF), and caudal suprarhinal auditory field (cSRAF) of the rat (Lee et al., 2016). Here, we extend these findings and characterize the time course of the temporally precise output of auditory cortical neurons in male rats. A pairwise sound discrimination index and a Naive Bayesian classifier are used to determine how these spiking patterns could provide brain signals for behavioral discrimination and classification of sounds. We find that response durations and optimal time constants for discriminating sound envelope shape increase in rank order: A1 < VAF < cSRAF. Accordingly, sustained spiking is more prominent and results in more robust sound discrimination in non-primary cortex than in A1. Spike-timing patterns classify 10 different sound envelope shape sequences, and there is a twofold increase in maximal performance when pooling output across the neuron population, indicating a robust distributed neural code in all three cortical fields. Together, these results support the idea that temporally precise spiking patterns from primary and non-primary auditory cortical fields provide the necessary signals for animals to discriminate and classify a large range of temporal shapes in the sound envelope.

SIGNIFICANCE STATEMENT Functional hierarchies in the visual cortices support the concept that classification of visual objects requires successive cortical stages of processing, including a progressive increase in classical receptive field size. The present study is significant as it supports the idea that a similar progression exists in auditory cortices in the time domain. We demonstrate for the first time that three cortices provide temporal spiking patterns for robust temporal envelope shape discrimination, but only the ventral non-primary cortices do so on long time scales. This study raises the possibility that primary and non-primary cortices provide unique temporal spiking patterns and time scales for perception of sound envelope shape.
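The classification approach this abstract describes can be sketched in a few lines. The following is an illustrative toy example (not the paper's code): a Poisson Naive Bayes classifier over binned spike counts that assigns a response to one of several sound-envelope shapes. The bin counts, labels, and rate floor are invented for the illustration.

```python
# Toy sketch of Naive Bayes classification of spike-count patterns.
# All rates and trial data below are hypothetical.
import math

def fit_poisson_nb(training):
    """training: {label: list of spike-count vectors} -> mean rate per bin."""
    model = {}
    for label, trials in training.items():
        nbins = len(trials[0])
        # Mean count per bin, floored to avoid log(0) in the likelihood.
        model[label] = [max(1e-3, sum(t[b] for t in trials) / len(trials))
                        for b in range(nbins)]
    return model

def classify(model, counts):
    """Pick the label maximizing the Poisson log-likelihood of the counts."""
    def loglik(rates):
        return sum(k * math.log(r) - r - math.lgamma(k + 1)
                   for k, r in zip(counts, rates))
    return max(model, key=lambda label: loglik(model[label]))

# Two toy "envelope shapes": a sustained response vs. an onset-dominated one.
training = {
    "sustained": [[5, 5, 5, 5], [4, 6, 5, 5], [5, 4, 6, 5]],
    "onset":     [[9, 2, 1, 1], [8, 3, 1, 0], [10, 2, 0, 1]],
}
model = fit_poisson_nb(training)
print(classify(model, [5, 5, 4, 6]))  # -> sustained
print(classify(model, [9, 1, 1, 0]))  # -> onset
```

Pooling across a population, as the abstract describes, would amount to concatenating each neuron's count vector into one longer vector before fitting.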
Collapse
|
28
|
Natan RG, Rao W, Geffen MN. Cortical Interneurons Differentially Shape Frequency Tuning following Adaptation. Cell Rep 2018; 21:878-890. [PMID: 29069595 DOI: 10.1016/j.celrep.2017.10.012] [Citation(s) in RCA: 68] [Impact Index Per Article: 11.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/23/2017] [Revised: 08/07/2017] [Accepted: 10/03/2017] [Indexed: 01/16/2023] Open
Abstract
Neuronal stimulus selectivity is shaped by feedforward and recurrent excitatory-inhibitory interactions. In the auditory cortex (AC), parvalbumin- (PV) and somatostatin-positive (SOM) inhibitory interneurons differentially modulate frequency-dependent responses of excitatory neurons. Responsiveness of neurons in the AC to sound is also dependent on stimulus history. We found that the inhibitory effects of SOMs and PVs diverged as a function of adaptation to temporal repetition of tones. Prior to adaptation, suppressing either SOM or PV inhibition drove both increases and decreases in excitatory spiking activity. After adaptation, suppressing SOM activity caused predominantly disinhibitory effects, whereas suppressing PV activity still evoked bi-directional changes. SOM, but not PV-driven inhibition, dynamically modulated frequency tuning with adaptation. Unlike PV-driven inhibition, SOM-driven inhibition elicited gain-like increases in frequency tuning reflective of adaptation. Our findings suggest that distinct cortical interneurons differentially shape tuning to sensory stimuli across the neuronal receptive field, altering frequency selectivity of excitatory neurons during adaptation.
Collapse
Affiliation(s)
- Ryan G Natan
- Department of Otorhinolaryngology: HNS and Department of Neuroscience, University of Pennsylvania, Philadelphia, PA, USA
| | - Winnie Rao
- Department of Otorhinolaryngology: HNS and Department of Neuroscience, University of Pennsylvania, Philadelphia, PA, USA
| | - Maria N Geffen
- Department of Otorhinolaryngology: HNS and Department of Neuroscience, University of Pennsylvania, Philadelphia, PA, USA.
| |
Collapse
|
29
|
Yao JD, Sanes DH. Developmental deprivation-induced perceptual and cortical processing deficits in awake-behaving animals. eLife 2018; 7:33891. [PMID: 29873632 PMCID: PMC6005681 DOI: 10.7554/elife.33891] [Citation(s) in RCA: 23] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/28/2017] [Accepted: 06/04/2018] [Indexed: 01/02/2023] Open
Abstract
Sensory deprivation during development induces lifelong changes to central nervous system function that are associated with perceptual impairments. However, the relationship between neural and behavioral deficits is uncertain due to a lack of simultaneous measurements during task performance. Therefore, we telemetrically recorded from auditory cortex neurons in gerbils reared with developmental conductive hearing loss as they performed an auditory task in which rapid fluctuations in amplitude are detected. These data were compared to a measure of auditory brainstem temporal processing from each animal. We found that developmental hearing loss diminished behavioral performance, but did not alter brainstem temporal processing. However, the simultaneous assessment of neural and behavioral processing revealed that perceptual deficits were associated with a degraded cortical population code that could be explained by greater trial-to-trial response variability. Our findings suggest that the perceptual limitations that attend early hearing loss are best explained by an encoding deficit in auditory cortex.
Collapse
Affiliation(s)
- Justin D Yao
- Center for Neural Science, New York University, New York, United States
| | - Dan H Sanes
- Center for Neural Science, New York University, New York, United States; Department of Psychology, New York University, New York, United States; Department of Biology, New York University, New York, United States; Neuroscience Institute, NYU Langone Medical Center, New York, United States
| |
Collapse
|
30
|
Kuchibhotla K, Bathellier B. Neural encoding of sensory and behavioral complexity in the auditory cortex. Curr Opin Neurobiol 2018; 52:65-71. [PMID: 29709885 DOI: 10.1016/j.conb.2018.04.002] [Citation(s) in RCA: 28] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/12/2018] [Revised: 03/01/2018] [Accepted: 04/07/2018] [Indexed: 01/07/2023]
Abstract
Converging evidence now supports the idea that auditory cortex is an important step for the emergence of auditory percepts. Recent studies have extended the list of complex, nonlinear sound features coded by cortical neurons. Moreover, we are beginning to uncover general properties of cortical representations, such as invariance and discreteness, which reflect the structure of auditory perception. Complexity, however, emerges not only through nonlinear shaping of auditory information into perceptual bricks. Behavioral context and task-related information strongly influence cortical encoding of sounds via ascending neuromodulation and descending top-down frontal control. These effects appear to be mediated through local inhibitory networks. Thus, auditory cortex can be seen as a hub linking structured sensory representations with behavioral variables.
Collapse
Affiliation(s)
- Kishore Kuchibhotla
- Department of Psychological and Brain Sciences, Department of Neuroscience, Johns Hopkins University, Baltimore, MD 21218, United States; Laboratoire de Neurosciences Cognitives, INSERM U960, École Normale Supérieure - PSL Research University, Paris, France
| | - Brice Bathellier
- Unité de Neuroscience, Information et Complexité (UNIC), FRE 3693, Centre National de la Recherche Scientifique and Paris-Saclay University, Gif-sur-Yvette, 91198, France.
| |
Collapse
|
31
|
Angeloni C, Geffen MN. Contextual modulation of sound processing in the auditory cortex. Curr Opin Neurobiol 2018; 49:8-15. [PMID: 29125987 PMCID: PMC6037899 DOI: 10.1016/j.conb.2017.10.012] [Citation(s) in RCA: 21] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/01/2017] [Revised: 10/11/2017] [Accepted: 10/13/2017] [Indexed: 12/26/2022]
Abstract
In everyday acoustic environments, we navigate through a maze of sounds that possess a complex spectrotemporal structure, spanning many frequencies and exhibiting temporal modulations that differ within frequency bands. Our auditory system needs to efficiently encode the same sounds in a variety of different contexts, while preserving the ability to separate complex sounds within an acoustic scene. Recent work in auditory neuroscience has made substantial progress in studying how sounds are represented in the auditory system under different contexts, demonstrating that auditory processing of seemingly simple acoustic features, such as frequency and time, is highly dependent on co-occurring acoustic and behavioral stimuli. Through a combination of electrophysiological recordings, computational analysis and behavioral techniques, recent research has identified interactions between the external spectral and temporal context of stimuli and the internal behavioral state.
Collapse
Affiliation(s)
- C Angeloni
- Department of Otorhinolaryngology: HNS, Department of Neuroscience, Psychology Graduate Group, Computational Neuroscience Initiative, University of Pennsylvania, Philadelphia, PA, United States
| | - M N Geffen
- Department of Otorhinolaryngology: HNS, Department of Neuroscience, Psychology Graduate Group, Computational Neuroscience Initiative, University of Pennsylvania, Philadelphia, PA, United States.
| |
Collapse
|
32
|
Albert JT, Kozlov AS. Comparative Aspects of Hearing in Vertebrates and Insects with Antennal Ears. Curr Biol 2017; 26:R1050-R1061. [PMID: 27780047 DOI: 10.1016/j.cub.2016.09.017] [Citation(s) in RCA: 32] [Impact Index Per Article: 4.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/13/2022]
Abstract
The evolution of hearing in terrestrial animals has resulted in remarkable adaptations enabling exquisitely sensitive sound detection by the ear and sophisticated sound analysis by the brain. In this review, we examine several such characteristics, using examples from insects and vertebrates. We focus on two strong and interdependent forces that have been shaping the auditory systems across taxa: the physical environment of auditory transducers on the small, subcellular scale, and the sensory-ecological environment within which hearing happens, on a larger, evolutionary scale. We briefly discuss acoustical feature selectivity and invariance in the central auditory system, highlighting a major difference between insects and vertebrates as well as a major similarity. Through such comparisons within a sensory ecological framework, we aim to emphasize general principles underlying acute sensitivity to airborne sounds.
Collapse
Affiliation(s)
- Joerg T Albert
- UCL Ear Institute, 332 Gray's Inn Road, London WC1X 8EE, UK.
| | - Andrei S Kozlov
- Department of Bioengineering, Imperial College London, London SW7 2AZ, UK.
| |
Collapse
|
33
|
Młynarski W, McDermott JH. Learning Midlevel Auditory Codes from Natural Sound Statistics. Neural Comput 2017; 30:631-669. [PMID: 29220308 DOI: 10.1162/neco_a_01048] [Citation(s) in RCA: 27] [Impact Index Per Article: 3.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Interaction with the world requires an organism to transform sensory signals into representations in which behaviorally meaningful properties of the environment are made explicit. These representations are derived through cascades of neuronal processing stages in which neurons at each stage recode the output of preceding stages. Explanations of sensory coding may thus involve understanding how low-level patterns are combined into more complex structures. To gain insight into such midlevel representations for sound, we designed a hierarchical generative model of natural sounds that learns combinations of spectrotemporal features from natural stimulus statistics. In the first layer, the model forms a sparse convolutional code of spectrograms using a dictionary of learned spectrotemporal kernels. To generalize from specific kernel activation patterns, the second layer encodes patterns of time-varying magnitude of multiple first-layer coefficients. When trained on corpora of speech and environmental sounds, some second-layer units learned to group similar spectrotemporal features. Others instantiate opponency between distinct sets of features. Such groupings might be instantiated by neurons in the auditory cortex, providing a hypothesis for midlevel neuronal computation.
Collapse
|
34
|
Juavinett AL, Erlich JC, Churchland AK. Decision-making behaviors: weighing ethology, complexity, and sensorimotor compatibility. Curr Opin Neurobiol 2017; 49:42-50. [PMID: 29179005 DOI: 10.1016/j.conb.2017.11.001] [Citation(s) in RCA: 36] [Impact Index Per Article: 5.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/02/2017] [Revised: 10/31/2017] [Accepted: 11/01/2017] [Indexed: 01/15/2023]
Abstract
Rodent decision-making research aims to uncover the neural circuitry underlying the ability to evaluate alternatives and select appropriate actions. Designing behavioral paradigms that provide a solid foundation to ask questions about decision-making computations and mechanisms is a difficult and often underestimated challenge. Here, we propose three dimensions on which we can consider rodent decision-making tasks: ethological validity, task complexity, and stimulus-response compatibility. We review recent research through this lens, and provide practical guidance for researchers in the decision-making field.
Collapse
Affiliation(s)
| | - Jeffrey C Erlich
- NYU-ECNU Institute of Brain and Cognitive Science, New York University Shanghai, Shanghai, China
| | - Anne K Churchland
- Cold Spring Harbor Laboratory, Cold Spring Harbor, NY, United States.
| |
Collapse
|
35
|
Christison-Lagay KL, Bennur S, Cohen YE. Contribution of spiking activity in the primary auditory cortex to detection in noise. J Neurophysiol 2017; 118:3118-3131. [PMID: 28855294 DOI: 10.1152/jn.00521.2017] [Citation(s) in RCA: 21] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/10/2017] [Revised: 08/25/2017] [Accepted: 08/27/2017] [Indexed: 01/08/2023] Open
Abstract
A fundamental problem in hearing is detecting a "target" stimulus (e.g., a friend's voice) that is presented with a noisy background (e.g., the din of a crowded restaurant). Despite its importance to hearing, a relationship between spiking activity and behavioral performance during such a "detection-in-noise" task has yet to be fully elucidated. In this study, we recorded spiking activity in primary auditory cortex (A1) while rhesus monkeys detected a target stimulus that was presented with a noise background. Although some neurons were modulated, the response of the typical A1 neuron was not modulated by the stimulus- and task-related parameters of our task. In contrast, we found more robust representations of these parameters in population-level activity: small populations of neurons matched the monkeys' behavioral sensitivity. Overall, these findings are consistent with the hypothesis that the sensory evidence, which is needed to solve such detection-in-noise tasks, is represented in population-level A1 activity and may be available to be read out by downstream neurons that are involved in mediating this task.

NEW & NOTEWORTHY This study examines the contribution of A1 to detecting a sound that is presented with a noisy background. We found that population-level A1 activity, but not single neurons, could provide the evidence needed to make this perceptual decision.
Collapse
Affiliation(s)
| | - Sharath Bennur
- Department of Otorhinolaryngology, University of Pennsylvania, Philadelphia, Pennsylvania
| | - Yale E Cohen
- Department of Otorhinolaryngology, University of Pennsylvania, Philadelphia, Pennsylvania; Department of Neuroscience, University of Pennsylvania, Philadelphia, Pennsylvania; and Department of Bioengineering, University of Pennsylvania, Philadelphia, Pennsylvania
| |
Collapse
|
36
|
Natan RG, Carruthers IM, Mwilambwe-Tshilobo L, Geffen MN. Gain Control in the Auditory Cortex Evoked by Changing Temporal Correlation of Sounds. Cereb Cortex 2017; 27:2385-2402. [PMID: 27095823 DOI: 10.1093/cercor/bhw083] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022] Open
Abstract
Natural sounds exhibit statistical variation in their spectrotemporal structure. This variation is central to identification of unique environmental sounds and to vocal communication. Using limited resources, the auditory system must create a faithful representation of sounds across the full range of variation in temporal statistics. Imaging studies in humans demonstrated that the auditory cortex is sensitive to temporal correlations. However, the mechanisms by which the auditory cortex represents the spectrotemporal structure of sounds and how neuronal activity adjusts to vastly different statistics remain poorly understood. In this study, we recorded responses of neurons in the primary auditory cortex of awake rats to sounds with systematically varied temporal correlation, to determine whether and how this feature alters sound encoding. Neuronal responses adapted to changing stimulus temporal correlation. This adaptation was mediated by a change in the firing rate gain of neuronal responses rather than their spectrotemporal properties. This gain adaptation allowed neurons to maintain similar firing rates across stimuli with different statistics, preserving their ability to efficiently encode temporal modulation. This dynamic gain control mechanism may underlie comprehension of vocalizations and other natural sounds under different contexts, subject to distortions in temporal correlation structure via stretching or compression.
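The gain-adaptation mechanism this abstract describes can be caricatured in a few lines. The following is our own minimal sketch of divisive gain control, not the paper's model: scaling responses by the mean stimulus drive keeps the average firing rate near a fixed target across stimuli with very different statistics. The drive values and target rate are hypothetical.

```python
# Minimal sketch of divisive gain control (illustrative values only):
# responses are rescaled by recent mean drive so that average firing
# rate stays near a target across stimulus statistics.
def gain_adapted_rates(drive, target_rate=10.0):
    mean_drive = sum(drive) / len(drive)
    gain = target_rate / mean_drive       # gain set by stimulus statistics
    return [gain * d for d in drive]

weakly_driving = [1, 2, 1, 3, 2]          # low-drive stimulus
strongly_driving = [10, 14, 12, 8, 16]    # high-drive stimulus
for drive in (weakly_driving, strongly_driving):
    rates = gain_adapted_rates(drive)
    print(sum(rates) / len(rates))        # mean rate stays near the target
```

Note that the relative modulation within each stimulus is preserved; only the overall gain changes, matching the abstract's finding that adaptation altered firing rate gain rather than spectrotemporal tuning.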
Collapse
Affiliation(s)
- Ryan G Natan
- Department of Otorhinolaryngology and Head and Neck Surgery; Graduate Group in Neuroscience
| | - Isaac M Carruthers
- Department of Otorhinolaryngology and Head and Neck Surgery; Graduate Group in Physics
| | | | - Maria N Geffen
- Department of Otorhinolaryngology and Head and Neck Surgery; Graduate Group in Neuroscience; Graduate Group in Physics; Department of Neuroscience, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104, USA
| |
Collapse
|
37
|
Downer JD, Niwa M, Sutter ML. Hierarchical differences in population coding within auditory cortex. J Neurophysiol 2017; 118:717-731. [PMID: 28446588 PMCID: PMC5539454 DOI: 10.1152/jn.00899.2016] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/24/2016] [Revised: 04/21/2017] [Accepted: 04/21/2017] [Indexed: 01/04/2023] Open
Abstract
Most models of auditory cortical (AC) population coding have focused on primary auditory cortex (A1). Thus our understanding of how neural coding for sounds progresses along the cortical hierarchy remains obscure. To illuminate this, we recorded from two AC fields: A1 and middle lateral belt (ML) of rhesus macaques. We presented amplitude-modulated (AM) noise during both passive listening and while the animals performed an AM detection task ("active" condition). In both fields, neurons exhibit monotonic AM-depth tuning, with A1 neurons mostly exhibiting increasing rate-depth functions and ML neurons approximately evenly distributed between increasing and decreasing functions. We measured noise correlation (rnoise) between simultaneously recorded neurons and found that whereas engagement decreased average rnoise in A1, engagement increased average rnoise in ML. This finding surprised us, because attentive states are commonly reported to decrease average rnoise. We analyzed the effect of rnoise on AM coding in both A1 and ML and found that whereas engagement-related shifts in rnoise in A1 enhance AM coding, rnoise shifts in ML have little effect. These results imply that the effect of rnoise differs between sensory areas, based on the distribution of tuning properties among the neurons within each population. A possible explanation is that higher areas need to encode nonsensory variables (e.g., attention, choice, and motor preparation), which impart common noise, thus increasing rnoise. Therefore, the hierarchical emergence of rnoise-robust population coding (e.g., as we observed in ML) enhances the ability of sensory cortex to integrate cognitive and sensory information without a loss of sensory fidelity.

NEW & NOTEWORTHY Prevailing models of population coding of sensory information are based on a limited subset of neural structures. An important and under-explored question in neuroscience is how distinct areas of sensory cortex differ in their population coding strategies. In this study, we compared population coding between primary and secondary auditory cortex. Our findings demonstrate striking differences between the two areas and highlight the importance of considering the diversity of neural structures as we develop models of population coding.
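The noise-correlation measure central to this abstract is simply the trial-by-trial Pearson correlation of two neurons' responses to repeats of the same stimulus. The sketch below illustrates the computation with made-up spike counts (all values are invented, not data from the study).

```python
# Sketch of computing noise correlation (rnoise): the Pearson correlation of
# two neurons' spike counts across repeats of one stimulus. Toy data only.
import math

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Spike counts across six repeated presentations of one AM-noise stimulus.
neuron_a = [12, 15, 11, 14, 13, 16]
neuron_b = [22, 25, 20, 24, 23, 27]   # shares trial-to-trial fluctuations with a
neuron_c = [9, 3, 8, 2, 7, 1]         # fluctuates oppositely to a
print(round(pearson(neuron_a, neuron_b), 2))  # high positive rnoise
print(round(pearson(neuron_a, neuron_c), 2))  # negative rnoise
```

Because the stimulus is identical across repeats, any correlation reflects shared trial-to-trial variability ("noise") rather than shared stimulus tuning, which is what distinguishes rnoise from signal correlation.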
Collapse
Affiliation(s)
- Joshua D Downer
- Center for Neuroscience and Department of Neurobiology, Physiology and Behavior, University of California, Davis, California
| | - Mamiko Niwa
- Center for Neuroscience and Department of Neurobiology, Physiology and Behavior, University of California, Davis, California
| | - Mitchell L Sutter
- Center for Neuroscience and Department of Neurobiology, Physiology and Behavior, University of California, Davis, California
| |
Collapse
|
38
|
Blackwell JM, Taillefumier TO, Natan RG, Carruthers IM, Magnasco MO, Geffen MN. Stable encoding of sounds over a broad range of statistical parameters in the auditory cortex. Eur J Neurosci 2016; 43:751-64. [PMID: 26663571 PMCID: PMC5021175 DOI: 10.1111/ejn.13144] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/10/2015] [Revised: 11/22/2015] [Accepted: 12/01/2015] [Indexed: 11/29/2022]
Abstract
Natural auditory scenes possess highly structured statistical regularities, which are dictated by the physics of sound production in nature, such as scale‐invariance. We recently identified that natural water sounds exhibit a particular type of scale invariance, in which the temporal modulation within spectral bands scales with the centre frequency of the band. Here, we tested how neurons in the mammalian primary auditory cortex encode sounds that exhibit this property, but differ in their statistical parameters. The stimuli varied in spectro‐temporal density and cyclo‐temporal statistics over several orders of magnitude, corresponding to a range of water‐like percepts, from pattering of rain to a slow stream. We recorded neuronal activity in the primary auditory cortex of awake rats presented with these stimuli. The responses of the majority of individual neurons were selective for a subset of stimuli with specific statistics. However, as a neuronal population, the responses were remarkably stable over large changes in stimulus statistics, exhibiting a similar range in firing rate, response strength, variability and information rate, and only minor variation in receptive field parameters. This pattern of neuronal responses suggests a potentially general principle for cortical encoding of complex acoustic scenes: while individual cortical neurons exhibit selectivity for specific statistical features, a neuronal population preserves a constant response structure across a broad range of statistical parameters.
Collapse
Affiliation(s)
- Jennifer M Blackwell
- Department of Otorhinolaryngology and Head and Neck Surgery, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, 19104, USA
| | - Thibaud O Taillefumier
- Center for Physics and Biology, Rockefeller University, New York, NY, USA; Lewis-Sigler Institute for Integrative Genomics, Princeton University, Princeton, NJ, USA
| | - Ryan G Natan
- Department of Otorhinolaryngology and Head and Neck Surgery, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, 19104, USA
| | - Isaac M Carruthers
- Department of Otorhinolaryngology and Head and Neck Surgery, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, 19104, USA
| | - Marcelo O Magnasco
- Center for Physics and Biology, Rockefeller University, New York, NY, USA
| | - Maria N Geffen
- Department of Otorhinolaryngology and Head and Neck Surgery, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, 19104, USA; Center for Physics and Biology, Rockefeller University, New York, NY, USA
| |
Collapse
|
39
|
Egnor SR, Seagraves KM. The contribution of ultrasonic vocalizations to mouse courtship. Curr Opin Neurobiol 2016; 38:1-5. [PMID: 26789140 DOI: 10.1016/j.conb.2015.12.009] [Citation(s) in RCA: 54] [Impact Index Per Article: 6.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/01/2015] [Revised: 12/21/2015] [Accepted: 12/23/2015] [Indexed: 12/17/2022]
Abstract
Vocalizations transmit information to social partners, and mice use these signals to exchange information during courtship. Ultrasonic vocalizations from adult males are tightly associated with their interactions with females, and vary as a function of male quality. Work in the last decade has established that the spectrotemporal features of male vocalizations are not learned, but that female attention toward specific vocal features is modified by social experience. Additionally, progress has been made on elucidating how mouse vocalizations are encoded in the auditory system, and on the olfactory circuits that trigger their production. Together these findings provide us with important insights into how vocal communication can contribute to social interactions.
Collapse
Affiliation(s)
- Se Roian Egnor
- Janelia Research Campus, HHMI, 19700 Helix Drive, Ashburn, VA 20147, USA.
| | - Kelly M Seagraves
- Janelia Research Campus, HHMI, 19700 Helix Drive, Ashburn, VA 20147, USA
| |
Collapse
|
40
|
Natan RG, Briguglio JJ, Mwilambwe-Tshilobo L, Jones SI, Aizenberg M, Goldberg EM, Geffen MN. Complementary control of sensory adaptation by two types of cortical interneurons. eLife 2015; 4. [PMID: 26460542 PMCID: PMC4641469 DOI: 10.7554/elife.09868] [Citation(s) in RCA: 128] [Impact Index Per Article: 14.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/05/2015] [Accepted: 10/01/2015] [Indexed: 01/14/2023] Open
Abstract
Reliably detecting unexpected sounds is important for environmental awareness and survival. By selectively reducing responses to frequently, but not rarely, occurring sounds, auditory cortical neurons are thought to enhance the brain's ability to detect unexpected events through stimulus-specific adaptation (SSA). The majority of neurons in the primary auditory cortex exhibit SSA, yet little is known about the underlying cortical circuits. We found that two types of cortical interneurons differentially amplify SSA in putative excitatory neurons. Parvalbumin-positive interneurons (PVs) amplify SSA by providing non-specific inhibition: optogenetic suppression of PVs led to an equal increase in responses to frequent and rare tones. In contrast, somatostatin-positive interneurons (SOMs) selectively reduce excitatory responses to frequent tones: suppression of SOMs led to an increase in responses to frequent, but not to rare tones. A mutually coupled excitatory-inhibitory network model accounts for distinct mechanisms by which cortical inhibitory neurons enhance the brain's sensitivity to unexpected sounds. DOI:http://dx.doi.org/10.7554/eLife.09868.001 In everyday life, we are often exposed to a mix of different sounds. An essential task for our brain is to separate the important sounds from the unimportant ones. For example, stepping out onto a busy street, you may at first be very aware of the noise of traffic. Later, you may start to ignore the din and instead only notice sounds that break the monotony: a honking car horn or maybe a stranger's voice. This is because the neurons in the auditory pathway respond differently to common and rare sounds. In particular, excitatory neurons in the region termed the ‘auditory cortex’ send fewer nerve impulses in response to frequent sounds, but respond vigorously to rare sounds. This phenomenon is called ‘stimulus-specific adaptation’, but it is not known exactly which neurons in this brain region enable this process to occur. 
Now, Natan et al. have combined different cutting-edge neuroscience techniques to identify the circuit of brain cells that drives this stimulus-specific adaptation. A technique called optogenetics was used to effectively ‘turn off’ each of two kinds of inhibitory neuron in the auditory cortex of mice, by exposing the brain to colored light from a laser. Natan et al. found that both kinds of inhibitory neuron amplified stimulus-specific adaptation, but via different mechanisms. One of these neuron types, called ‘parvalbumin-positive interneurons’, exerted a general effect on excitatory neurons and suppressed responses to both frequent and rare sounds. As the responses to rare sounds started off greater than the responses to frequent sounds, suppressing both by an equal amount actually led to an increase in the relative difference between them. On the other hand, the second kind of inhibitory neuron, called ‘somatostatin-positive interneurons’, only reduced the excitatory neurons' responses to frequent sounds; these neurons had no effect on responses to rare sounds. Future studies will test how specific adaptation in different contexts can help us to behaviorally detect rare sounds while ignoring common ones, and search for the circuits beyond the auditory cortex that support hearing in complex sound environments. DOI:http://dx.doi.org/10.7554/eLife.09868.002
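The two suppression effects described above can be made concrete with a standard SSA index, the normalized difference between responses to a tone when it is rare versus frequent. The response values below are invented for illustration; they are not measurements from the study.

```python
# Illustrative SSA index: (rare - frequent) / (rare + frequent).
# Both manipulations below reduce the index relative to control, consistent
# with both interneuron types amplifying SSA. All numbers are hypothetical.
def ssa_index(rare, frequent):
    return (rare - frequent) / (rare + frequent)

control = ssa_index(rare=20.0, frequent=10.0)
# PV suppression: non-specific disinhibition raises both responses equally.
pv_suppressed = ssa_index(rare=26.0, frequent=16.0)
# SOM suppression: only the response to the frequent tone rises.
som_suppressed = ssa_index(rare=20.0, frequent=16.0)
print(control, pv_suppressed, som_suppressed)  # control index is the largest
```

Note how an equal additive increase to both responses (the PV case) still lowers the ratio-based index, capturing the abstract's point that non-specific inhibition can nonetheless amplify SSA.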
Collapse
Affiliation(s)
- Ryan G Natan
- Department of Otorhinolaryngology Head and Neck Surgery, Perelman School of Medicine, University of Pennsylvania, Philadelphia, United States
| | - John J Briguglio
- Department of Otorhinolaryngology Head and Neck Surgery, Perelman School of Medicine, University of Pennsylvania, Philadelphia, United States
| | - Laetitia Mwilambwe-Tshilobo
- Department of Otorhinolaryngology Head and Neck Surgery, Perelman School of Medicine, University of Pennsylvania, Philadelphia, United States
| | - Sara I Jones
- Department of Otorhinolaryngology Head and Neck Surgery, Perelman School of Medicine, University of Pennsylvania, Philadelphia, United States
| | - Mark Aizenberg
- Department of Otorhinolaryngology Head and Neck Surgery, Perelman School of Medicine, University of Pennsylvania, Philadelphia, United States
| | - Ethan M Goldberg
- Department of Neurology, University of Pennsylvania, Philadelphia, United States; Division of Neurology, The Children's Hospital of Philadelphia, Philadelphia, United States
| | - Maria Neimark Geffen
- Department of Otorhinolaryngology Head and Neck Surgery, Perelman School of Medicine, University of Pennsylvania, Philadelphia, United States
| |
Collapse
|