1
Macedo-Lima M, Hamlette LS, Caras ML. Orbitofrontal cortex modulates auditory cortical sensitivity and sound perception in Mongolian gerbils. Curr Biol 2024:S0960-9822(24)00820-0. PMID: 38996534. DOI: 10.1016/j.cub.2024.06.036.
Abstract
Sensory perception is dynamic, quickly adapting to sudden shifts in environmental or behavioral context. Although decades of work have established that these dynamics are mediated by rapid fluctuations in sensory cortical activity, we have a limited understanding of the brain regions and pathways that orchestrate these changes. Neurons in the orbitofrontal cortex (OFC) encode contextual information, and recent data suggest that some of these signals are transmitted to sensory cortices. Whether and how these signals shape sensory encoding and perceptual sensitivity remain uncertain. Here, we asked whether the OFC mediates context-dependent changes in auditory cortical sensitivity and sound perception by monitoring and manipulating OFC activity in freely moving Mongolian gerbils of both sexes under two behavioral contexts: passive sound exposure and engagement in an amplitude modulation (AM) detection task. We found that the majority of OFC neurons, including the specific subset that innervates the auditory cortex, were strongly modulated by task engagement. Pharmacological inactivation of the OFC prevented rapid context-dependent changes in auditory cortical firing and significantly impaired behavioral AM detection. Our findings suggest that contextual information from the OFC mediates rapid plasticity in the auditory cortex and facilitates the perception of behaviorally relevant sounds.
Affiliation(s)
- Melissa L Caras: Department of Biology, University of Maryland, College Park, MD 20742, USA.
2
Shi K, Quass GL, Rogalla MM, Ford AN, Czarny JE, Apostolides PF. Population coding of time-varying sounds in the nonlemniscal inferior colliculus. J Neurophysiol 2024;131:842-864. PMID: 38505907. DOI: 10.1152/jn.00013.2024.
Abstract
The inferior colliculus (IC) of the midbrain is important for complex sound processing, such as discriminating conspecific vocalizations and human speech. The IC's nonlemniscal, dorsal "shell" region is likely important for this process, as neurons in these layers project to higher-order thalamic nuclei that subsequently funnel acoustic signals to the amygdala and nonprimary auditory cortices, forebrain circuits important for vocalization coding in a variety of mammals, including humans. However, the extent to which shell IC neurons transmit the acoustic features necessary to discern vocalizations is less clear, owing to the technical difficulty of recording from neurons in the IC's superficial layers via traditional approaches. Here, we use two-photon Ca2+ imaging in mice of either sex to test how shell IC neuron populations encode the rate and depth of amplitude modulation, important sound cues for speech perception. Most shell IC neurons were broadly tuned, with low neurometric discrimination of amplitude modulation rate; only a subset was highly selective to specific modulation rates. Nevertheless, a neural network classifier trained on fluorescence data from shell IC neuron populations accurately classified amplitude modulation rate, and decoding accuracy was only marginally reduced when highly tuned neurons were omitted from the training data. Rather, classifier accuracy increased monotonically with the modulation depth of the training data, such that classifiers trained on full-depth modulated sounds had median decoding errors of ∼0.2 octaves. Thus, shell IC neurons may transmit time-varying signals via a population code, with perhaps limited reliance on the discriminative capacity of any individual neuron.
NEW & NOTEWORTHY: The IC's shell layers originate a "nonlemniscal" pathway important for perceiving vocalization sounds.
However, prior studies suggest that individual shell IC neurons are broadly tuned and have high response thresholds, implying a limited reliability of efferent signals. Using Ca2+ imaging, we show that amplitude modulation is accurately represented in the population activity of shell IC neurons. Thus, downstream targets can read out sounds' temporal envelopes from distributed rate codes transmitted by populations of broadly tuned neurons.
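The population read-out described in this abstract can be illustrated with synthetic data: many weakly tuned neurons that individually discriminate modulation rate poorly, but whose joint activity supports accurate decoding. The nearest-centroid classifier below is a simple stand-in for the paper's neural network classifier, and the neuron count, tuning strength, and noise level are illustrative assumptions, not the authors' parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population: 50 neurons, 4 AM rates, 40 trials per rate.
# Tuning is deliberately weak relative to trial noise, mimicking broadly
# tuned single neurons with low individual neurometric discrimination.
n_neurons, n_rates, n_trials = 50, 4, 40
tuning = rng.normal(0.0, 0.3, size=(n_rates, n_neurons))  # weak rate preference

X, y = [], []
for rate in range(n_rates):
    trials = tuning[rate] + rng.normal(0.0, 1.0, size=(n_trials, n_neurons))
    X.append(trials)
    y.append(np.full(n_trials, rate))
X, y = np.vstack(X), np.concatenate(y)

# Random train/test split, then a nearest-centroid population read-out.
idx = rng.permutation(len(y))
train, test = idx[: len(y) // 2], idx[len(y) // 2 :]
centroids = np.stack(
    [X[train][y[train] == r].mean(axis=0) for r in range(n_rates)]
)
pred = np.argmin(((X[test][:, None, :] - centroids) ** 2).sum(-1), axis=1)
accuracy = (pred == y[test]).mean()
print(accuracy)  # decoding accuracy; chance level is 0.25
```

Despite each neuron's weak tuning, pooling across the population yields decoding accuracy well above the 0.25 chance level, which is the intuition behind the paper's population-code conclusion.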
Affiliation(s)
- Kaiwen Shi, Gunnar L Quass, Meike M Rogalla, Alexander N Ford, Jordyn E Czarny, Pierre F Apostolides: Department of Otolaryngology-Head & Neck Surgery, Kresge Hearing Research Institute, University of Michigan Medical School, Ann Arbor, Michigan, United States
- Pierre F Apostolides (also): Department of Molecular & Integrative Physiology, University of Michigan Medical School, Ann Arbor, Michigan, United States
3
Ying R, Stolzberg DJ, Caras ML. Neural correlates of flexible sound perception in the auditory midbrain and thalamus. bioRxiv 2024:2024.04.12.589266. PMID: 38645241. PMCID: PMC11030403. DOI: 10.1101/2024.04.12.589266.
Abstract
Hearing is an active process in which listeners must detect and identify sounds, segregate and discriminate stimulus features, and extract their behavioral relevance. Adaptive changes in sound detection can emerge rapidly, during sudden shifts in acoustic or environmental context, or more slowly, as a result of practice. Although we know that context- and learning-dependent changes in the spectral and temporal sensitivity of auditory cortical neurons support many aspects of flexible listening, the contribution of subcortical auditory regions to this process is less well understood. Here, we recorded single- and multi-unit activity from the central nucleus of the inferior colliculus (ICC) and the ventral subdivision of the medial geniculate nucleus (MGV) of Mongolian gerbils under two behavioral contexts: as animals performed an amplitude modulation (AM) detection task and as they were passively exposed to AM sounds. Using a signal detection framework to estimate neurometric sensitivity, we found that neural thresholds in both regions improved during task performance, and this improvement was driven by changes in firing rate rather than phase locking. We also found that ICC and MGV neurometric thresholds improved, and correlated with behavioral performance, as animals learned to detect small AM depths during a multi-day perceptual training paradigm. Finally, we found that in the MGV, but not the ICC, context-dependent enhancements in AM sensitivity grew stronger over perceptual training, mirroring prior observations in the auditory cortex. Together, our results suggest that the auditory midbrain and thalamus contribute to flexible sound processing and perception over both rapid and slow timescales.
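One common way to turn trial firing rates into a neurometric threshold within a signal-detection framework is to compute d′ between the AM-trial and unmodulated-trial rate distributions at each modulation depth, then interpolate the depth where d′ crosses a criterion (often 1). The sketch below illustrates that idea; the function name, the d′ = 1 criterion, and the data layout are assumptions for illustration, not the authors' exact analysis (which also examined phase locking).

```python
import numpy as np

def neurometric_threshold(depths_db, am_rates, unmod_rates, criterion=1.0):
    """Interpolate the AM depth (dB re: 100% depth) at which firing-rate d'
    between AM trials and unmodulated trials crosses `criterion`.

    depths_db   : depths ordered from hardest (most negative) to easiest (0 dB)
    am_rates    : dict mapping depth -> array of trial firing rates
    unmod_rates : array of trial firing rates for unmodulated sound
    """
    unmod = np.asarray(unmod_rates, dtype=float)
    dprimes = []
    for d in depths_db:
        am = np.asarray(am_rates[d], dtype=float)
        pooled_sd = np.sqrt((am.var(ddof=1) + unmod.var(ddof=1)) / 2.0)
        dprimes.append((am.mean() - unmod.mean()) / pooled_sd)
    # np.interp requires d' to increase with depth (easier stimuli = larger d')
    return float(np.interp(criterion, dprimes, depths_db))

# Toy example: rates grow with depth, so the d' = 1 crossing (the threshold)
# falls between the -6 and -3 dB depths.
unmod = [8.0, 10.0, 12.0, 10.0]
am = {-6: [9.0, 11.0, 13.0, 11.0], -3: [10.0, 12.0, 14.0, 12.0]}
print(neurometric_threshold([-6, -3], am, unmod))
```

A context-dependent improvement in sensitivity would show up here as the threshold moving toward more negative (smaller) depths during task performance.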
Affiliation(s)
- Rose Ying: Neuroscience and Cognitive Science Program; Department of Biology; Center for Comparative and Evolutionary Biology of Hearing, University of Maryland, College Park, Maryland, 20742
- Daniel J. Stolzberg: Department of Biology, University of Maryland, College Park, Maryland, 20742
- Melissa L. Caras: Neuroscience and Cognitive Science Program; Department of Biology; Center for Comparative and Evolutionary Biology of Hearing; Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland, 20742
4
Fernández-Vargas M, Macedo-Lima M, Remage-Healey L. Acute Aromatase Inhibition Impairs Neural and Behavioral Auditory Scene Analysis in Zebra Finches. eNeuro 2024;11:ENEURO.0423-23.2024. PMID: 38467426. PMCID: PMC10960633. DOI: 10.1523/eneuro.0423-23.2024.
Abstract
Auditory perception can be significantly disrupted by noise. To discriminate sounds from noise, auditory scene analysis (ASA) extracts the functionally relevant sounds from the acoustic input. The zebra finch communicates in noisy environments. Neurons in its secondary auditory pallial cortex (caudomedial nidopallium, NCM) can encode song embedded in background chorus, or scenes, and this capacity may aid behavioral ASA. Furthermore, song processing is modulated by the rapid synthesis of neuroestrogens when hearing conspecific song. To examine whether neuroestrogens support neural and behavioral ASA in both sexes, we retrodialyzed fadrozole (an aromatase inhibitor, FAD) and recorded in vivo awake extracellular NCM responses to songs and scenes. We found that FAD affected the neural encoding of songs by decreasing responsiveness and timing reliability in inhibitory (narrow-spiking), but not excitatory (broad-spiking), neurons. Congruently, FAD decreased the neural encoding of songs in scenes for both cell types, particularly in females. Behaviorally, we trained birds using operant conditioning and tested their ability to detect songs in scenes after administering FAD orally or injecting it bilaterally into NCM. Oral FAD increased response bias and decreased correct rejections in females, but not in males. FAD in NCM did not affect performance. Thus, although FAD in the NCM impaired neuronal ASA, it did not disrupt behavior, suggesting resilience or compensatory responses. Moreover, the impaired performance after systemic FAD suggests the involvement of other aromatase-rich networks outside the auditory pathway in ASA. This work highlights how transient disruption of estrogen synthesis can modulate higher-order processing in an animal model of vocal communication.
Affiliation(s)
- Marcela Fernández-Vargas, Matheus Macedo-Lima, Luke Remage-Healey: Neuroscience and Behavior Program, Center for Neuroendocrine Studies, University of Massachusetts Amherst, Amherst, Massachusetts 01003
5
Martin A, Souffi S, Huetz C, Edeline JM. Can Extensive Training Transform a Mouse into a Guinea Pig? An Evaluation Based on the Discriminative Abilities of Inferior Colliculus Neurons. Biology 2024;13:92. PMID: 38392310. PMCID: PMC10886615. DOI: 10.3390/biology13020092.
Abstract
Humans and animals maintain accurate discrimination between communication sounds in the presence of loud sources of background noise. In previous studies performed in anesthetized guinea pigs, we showed that, in the auditory pathway, the highest discriminative abilities between conspecific vocalizations were found in the inferior colliculus. Here, we trained CBA/J mice in a Go/No-Go task to discriminate between two similar guinea pig whistles, first in quiet conditions and then in two types of noise, a stationary noise and a chorus noise, at three SNRs. Control mice were passively exposed to the same number of whistles as the trained mice. After three months of extensive training, inferior colliculus (IC) neurons were recorded under anesthesia, and the responses were quantified as in our previous studies. In quiet, the mean values of firing rate, temporal reliability, and mutual information obtained from trained mice were higher than those from the exposed mice and the guinea pigs. In stationary and chorus noise, there were only a few differences between the trained mice and the guinea pigs, and the lowest mean parameter values were found in the exposed mice. These results suggest that behavioral training can trigger plasticity in the IC that allows mouse neurons to reach guinea pig-like discrimination abilities.
Affiliation(s)
- Alexandra Martin, Samira Souffi, Chloé Huetz, Jean-Marc Edeline: Paris-Saclay Institute of Neuroscience (Neuro-PSI, UMR 9197), CNRS & Université Paris-Saclay, 91400 Saclay, France
6
Macedo-Lima M, Hamlette LS, Caras ML. Orbitofrontal Cortex Modulates Auditory Cortical Sensitivity and Sound Perception. bioRxiv 2023:2023.12.18.570797. PMID: 38187685. PMCID: PMC10769262. DOI: 10.1101/2023.12.18.570797.
Abstract
Sensory perception is dynamic, quickly adapting to sudden shifts in environmental or behavioral context. Though decades of work have established that these dynamics are mediated by rapid fluctuations in sensory cortical activity, we have a limited understanding of the brain regions and pathways that orchestrate these changes. Neurons in the orbitofrontal cortex (OFC) encode contextual information, and recent data suggest that some of these signals are transmitted to sensory cortices. Whether and how these signals shape sensory encoding and perceptual sensitivity remains uncertain. Here, we asked whether the OFC mediates context-dependent changes in auditory cortical sensitivity and sound perception by monitoring and manipulating OFC activity in freely moving animals under two behavioral contexts: passive sound exposure and engagement in an amplitude modulation (AM) detection task. We found that the majority of OFC neurons, including the specific subset that innervates the auditory cortex, were strongly modulated by task engagement. Pharmacological inactivation of the OFC prevented rapid context-dependent changes in auditory cortical firing and significantly impaired behavioral AM detection. Our findings suggest that contextual information from the OFC mediates rapid plasticity in the auditory cortex and facilitates the perception of behaviorally relevant sounds.
Significance Statement: Sensory perception depends on the context in which stimuli are presented. For example, perception is enhanced when stimuli are informative, such as when they are important for solving a task. Perceptual enhancements result from an increase in the sensitivity of sensory cortical neurons; however, we do not fully understand how such changes are initiated in the brain. Here, we tested the role of the orbitofrontal cortex (OFC) in controlling auditory cortical sensitivity and sound perception. We found that OFC neurons change their activity when animals perform a sound detection task.
Inactivating OFC impairs sound detection and prevents task-dependent increases in auditory cortical sensitivity. Our findings suggest that the OFC controls contextual modulations of the auditory cortex and sound perception.
7
Anbuhl KL, Diez Castro M, Lee NA, Lee VS, Sanes DH. Cingulate cortex facilitates auditory perception under challenging listening conditions. bioRxiv 2023:2023.11.10.566668. PMID: 38014324. PMCID: PMC10680599. DOI: 10.1101/2023.11.10.566668.
Abstract
We often exert greater cognitive resources (i.e., listening effort) to understand speech under challenging acoustic conditions. This mechanism can be overwhelmed in those with hearing loss, resulting in cognitive fatigue in adults and potentially impeding language acquisition in children. However, the neural mechanisms that support listening effort are uncertain. Evidence from human studies suggests that the cingulate cortex is engaged under difficult listening conditions and may exert top-down modulation of the auditory cortex (AC). Here, we asked whether the gerbil cingulate cortex (Cg) sends anatomical projections to the AC that facilitate perceptual performance. To model challenging listening conditions, we used a sound discrimination task in which stimulus parameters were presented in either 'Easy' or 'Hard' blocks (i.e., long or short stimulus duration, respectively). Gerbils achieved statistically identical psychometric performance in Easy and Hard blocks. Anatomical tracing experiments revealed a strong descending projection from layer 2/3 of the Cg1 subregion of the cingulate cortex to the superficial and deep layers of primary and dorsal AC. To determine whether Cg improves task performance under challenging conditions, we bilaterally infused muscimol to inactivate Cg1 and found that psychometric thresholds were degraded only for Hard blocks. To test whether the Cg-to-AC projection facilitates task performance, we chemogenetically inactivated these inputs and found that performance was degraded only during Hard blocks. Taken together, the results reveal a descending cortical pathway that facilitates perceptual performance during challenging listening conditions.
Significance Statement: Sensory perception often occurs under challenging conditions, such as a noisy background or a dim environment, yet stimulus sensitivity can remain unaffected. One hypothesis is that cognitive resources are recruited to the task, thereby facilitating perceptual performance.
Here, we identify a top-down cortical circuit, from the cingulate to the auditory cortex of the gerbil, that supports auditory perceptual performance under challenging listening conditions. This pathway is a plausible circuit supporting effortful listening and may be degraded by hearing loss.
8
Mowery TM, Wackym PA, Nacipucha J, Dangcil E, Stadler RD, Tucker A, Carayannopoulos NL, Beshy MA, Hong SS, Yao JD. Superior semicircular canal dehiscence and subsequent closure induces reversible impaired decision-making. Front Neurol 2023;14:1259030. PMID: 37905188. PMCID: PMC10613502. DOI: 10.3389/fneur.2023.1259030.
Abstract
Background: Vestibular loss and dysfunction have been associated with cognitive deficits, including decreased spatial navigation, spatial memory, visuospatial ability, attention, executive function, and processing speed, among others. Superior semicircular canal dehiscence (SSCD) is a vestibular-cochlear disorder in humans in which a pathological third mobile window of the otic capsule changes the flow of sound pressure energy through the perilymph/endolymph. The primary symptoms include sound-induced dizziness/vertigo, inner ear conductive hearing loss, autophony, headaches, and visual problems; however, individuals also experience measurable deficits in basic decision-making, short-term memory, concentration, and spatial cognition, as well as depression. These findings suggest that central mechanisms of impairment are associated with vestibular disorders; we therefore tested this hypothesis directly using auditory and visual decision-making tasks of varying difficulty in our model of SSCD. Methods: Adult Mongolian gerbils (n = 33) were trained on one of four versions of a Go-NoGo stimulus presentation rate discrimination task that included standard ("easy") or more difficult ("hard") auditory and visual stimuli. After 10 days of training, preoperative ABR and c+VEMP testing was followed by surgical fenestration of the left superior semicircular canal. Animals with persistent circling or head tilt were excluded to minimize effects of acute vestibular injury. Testing recommenced at postoperative day 5 and continued through postoperative day 15, at which point final ABR and c+VEMP testing was carried out. Results: Behavioral data (d-primes) were compared between preoperative performance (training days 8-10) and postoperative days 6-8 and 13-15. Behavioral performance was measured during the peak of SSCD-induced ABR and c+VEMP impairment and during the return toward baseline as the dehiscence began to resurface by osteoneogenesis.
There were significant differences in behavioral performance (d-prime) and in its components (hits, misses, false alarms, and correct rejections). These changes were highly correlated with persistent deficits in c+VEMPs at the end of training (postoperative day 15). The controls demonstrated additional learning post-procedure that was absent in the SSCD group. Conclusion: These results suggest that aberrant, asymmetric vestibular output produces decision-making impairments in these discrimination tasks and could be related to the other cognitive impairments that result from vestibular dysfunction.
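The d-prime measure used here combines hit and false-alarm rates into a single sensitivity index, z(hit rate) − z(false-alarm rate). A minimal sketch follows; the log-linear correction for extreme rates is a common convention assumed for illustration, since the abstract does not state which correction (if any) the authors applied.

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Signal-detection sensitivity index d' for a Go/No-Go task.

    Applies the log-linear correction (add 0.5 to each count) so that
    perfect hit or false-alarm rates do not yield infinite z-scores.
    """
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    return z(hit_rate) - z(fa_rate)

# Example session: 45 hits / 5 misses on Go trials,
# 10 false alarms / 40 correct rejections on No-Go trials.
print(d_prime(45, 5, 10, 40))
```

A gerbil responding at chance (equal hit and false-alarm rates) scores d′ = 0, so a postoperative drop in d′ directly quantifies the decision-making impairment reported above.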
Affiliation(s)
- All authors: Department of Otolaryngology – Head and Neck Surgery, Rutgers Robert Wood Johnson Medical School, New Brunswick, NJ, United States
- Todd M. Mowery, P. Ashley Wackym, Justin D. Yao (also): Rutgers Brain Health Institute, New Brunswick, NJ, United States
9
Ying R, Hamlette L, Nikoobakht L, Balaji R, Miko N, Caras ML. Organization of orbitofrontal-auditory pathways in the Mongolian gerbil. J Comp Neurol 2023;531:1459-1481. PMID: 37477903. PMCID: PMC10529810. DOI: 10.1002/cne.25525.
Abstract
Sound perception is highly malleable, rapidly adjusting to the acoustic environment and behavioral demands. This flexibility is the result of ongoing changes in auditory cortical activity driven by fluctuations in attention, arousal, or prior expectations. Recent work suggests that the orbitofrontal cortex (OFC) may mediate some of these rapid changes, but the anatomical connections between the OFC and the auditory system are not well characterized. Here, we used virally mediated fluorescent tracers to map the projections from the OFC to the auditory midbrain, thalamus, and cortex in a classic animal model for auditory research, the Mongolian gerbil (Meriones unguiculatus). We observed no connectivity between the OFC and the auditory midbrain, and an extremely sparse connection between the dorsolateral OFC and higher-order auditory thalamic regions. In contrast, we observed a robust connection between the ventral and medial subdivisions of the OFC and the auditory cortex, with a clear bias for secondary auditory cortical regions. OFC axon terminals were found in all auditory cortical laminae but were significantly more concentrated in the infragranular layers. Tissue clearing and light-sheet microscopy further revealed that auditory cortical-projecting OFC neurons send extensive axon collaterals throughout the brain, targeting both sensory and non-sensory regions involved in learning, decision-making, and memory. These findings provide a more detailed map of orbitofrontal-auditory connections and shed light on the possible role of the OFC in supporting auditory cognition.
Affiliation(s)
- Rose Ying: Neuroscience and Cognitive Science Program; Department of Biology; Center for Comparative and Evolutionary Biology of Hearing, University of Maryland, College Park, Maryland, 20742
- Lashaka Hamlette, Laudan Nikoobakht, Rakshita Balaji, Nicole Miko: Department of Biology, University of Maryland, College Park, Maryland, 20742
- Melissa L. Caras: Neuroscience and Cognitive Science Program; Department of Biology; Center for Comparative and Evolutionary Biology of Hearing, University of Maryland, College Park, Maryland, 20742
10
Paraouty N, Yao JD, Varnet L, Chou CN, Chung S, Sanes DH. Sensory cortex plasticity supports auditory social learning. Nat Commun 2023;14:5828. PMID: 37730696. PMCID: PMC10511464. DOI: 10.1038/s41467-023-41641-8.
Abstract
Social learning (SL) through experience with conspecifics can facilitate the acquisition of many behaviors. Thus, when Mongolian gerbils are exposed to a demonstrator performing an auditory discrimination task, their subsequent task acquisition is facilitated, even in the absence of visual cues. Here, we show that transient inactivation of auditory cortex (AC) during exposure caused a significant delay in task acquisition during the subsequent practice phase, suggesting that AC activity is necessary for SL. Moreover, social exposure induced an improvement in AC neuron sensitivity to auditory task cues. The magnitude of neural change during exposure correlated with task acquisition during practice. In contrast, exposure to only auditory task cues led to poorer neurometric and behavioral outcomes. Finally, social information during exposure was encoded in the AC of observer animals. Together, our results suggest that auditory SL is supported by AC neuron plasticity occurring during social exposure and prior to behavioral performance.
Affiliation(s)
- Nihaad Paraouty: Center for Neural Science, New York University, New York, NY, 10003, USA
- Justin D Yao: Department of Otolaryngology, Rutgers University, New Brunswick, NJ, 08901, USA
- Léo Varnet: Laboratoire des Systèmes Perceptifs, UMR 8248, Ecole Normale Supérieure, PSL University, Paris, 75005, France
- Chi-Ning Chou: Center for Computational Neuroscience, Flatiron Institute, Simons Foundation, New York, NY, USA; School of Engineering & Applied Sciences, Harvard University, Cambridge, MA, 02138, USA
- SueYeon Chung: Center for Neural Science, New York University, New York, NY, 10003, USA; Center for Computational Neuroscience, Flatiron Institute, Simons Foundation, New York, NY, USA
- Dan H Sanes: Center for Neural Science; Department of Psychology; Department of Biology, New York University, New York, NY, 10003, USA; Neuroscience Institute, NYU Langone Medical Center, New York, NY, 10003, USA
11
van den Berg MM, Busscher E, Borst JGG, Wong AB. Neuronal responses in mouse inferior colliculus correlate with behavioral detection of amplitude-modulated sound. J Neurophysiol 2023;130:524-546. PMID: 37465872. DOI: 10.1152/jn.00048.2023.
Abstract
Amplitude modulation (AM) is a common feature of natural sounds, including speech and animal vocalizations. Here, we used operant conditioning and in vivo electrophysiology to determine the AM detection threshold of mice as well as its underlying neuronal encoding. Mice were trained in a Go-NoGo task to detect the transition to AM within a noise stimulus designed to prevent the use of spectral sidebands or a change in intensity as alternative cues. Our results indicate that mice, compared with other species, detect high modulation frequencies up to 512 Hz well, but show much poorer performance at low frequencies. Our in vivo multielectrode recordings in the inferior colliculus (IC) of both anesthetized and awake mice revealed a few single units with remarkable phase-locking ability to 512 Hz modulation, but this was not sufficient to explain the good behavioral detection at that frequency. Using a model of the population response that combined dimensionality reduction with threshold detection, we reproduced the general band-pass characteristics of behavioral detection based on a subset of neurons showing the largest firing rate change (both increase and decrease) in response to AM, suggesting that these neurons are instrumental in the behavioral detection of AM stimuli by the mice.
NEW & NOTEWORTHY: The amplitude of natural sounds, including speech and animal vocalizations, often shows characteristic modulations. We examined the relationship between neuronal responses in the mouse inferior colliculus and the behavioral detection of amplitude modulation (AM) in sound and modeled how the former can give rise to the latter. Our model suggests that behavioral detection can be well explained by the activity of a subset of neurons showing the largest firing rate changes in response to AM.
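The transition-to-AM stimulus described above can be sketched as a frozen noise carrier whose second half acquires a sinusoidal envelope, with each segment normalized to the same RMS so the transition carries no overall intensity cue. The sampling rate, durations, and normalization scheme here are illustrative assumptions; the authors' actual stimulus additionally controlled for spectral sidebands.

```python
import numpy as np

def noise_then_am(fs=48_000, pre_dur=0.5, am_dur=0.5,
                  mod_rate=32.0, mod_depth=1.0, seed=0):
    """Unmodulated noise followed by a transition to sinusoidal AM.

    Each segment is scaled to unit RMS so that detecting the transition
    requires sensitivity to the envelope, not to overall level
    (illustrative sketch, not the authors' exact stimulus design).
    """
    rng = np.random.default_rng(seed)

    def segment(dur, modulated):
        t = np.arange(int(fs * dur)) / fs
        x = rng.standard_normal(t.size)  # Gaussian noise carrier
        if modulated:
            # Sinusoidal envelope: (1 + m * sin(2*pi*fm*t))
            x = x * (1.0 + mod_depth * np.sin(2 * np.pi * mod_rate * t))
        return x / np.sqrt((x ** 2).mean())  # normalize segment RMS to 1

    return np.concatenate([segment(pre_dur, False), segment(am_dur, True)])

stim = noise_then_am()
print(stim.size)  # 0.5 s of noise + 0.5 s of 32 Hz AM at fs = 48 kHz
```

Sweeping `mod_rate` from low values up to 512 Hz with this kind of stimulus is how a band-pass behavioral detection curve like the one reported here would be measured.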
Affiliation(s)
- Maurits M van den Berg
- Department of Neuroscience, Erasmus MC, University Medical Center Rotterdam, Rotterdam, The Netherlands
- Esmée Busscher
- Department of Neuroscience, Erasmus MC, University Medical Center Rotterdam, Rotterdam, The Netherlands
- J Gerard G Borst
- Department of Neuroscience, Erasmus MC, University Medical Center Rotterdam, Rotterdam, The Netherlands
- Aaron B Wong
- Department of Neuroscience, Erasmus MC, University Medical Center Rotterdam, Rotterdam, The Netherlands
12
Shi K, Quass GL, Rogalla MM, Ford AN, Czarny JE, Apostolides PF. Population coding of time-varying sounds in the non-lemniscal inferior colliculus. bioRxiv 2023:2023.08.14.553263. [PMID: 37645904] [PMCID: PMC10461978] [DOI: 10.1101/2023.08.14.553263]
Abstract
The inferior colliculus (IC) of the midbrain is important for complex sound processing, such as discriminating conspecific vocalizations and human speech. The IC's non-lemniscal, dorsal "shell" region is likely important for this process, as neurons in these layers project to higher-order thalamic nuclei that subsequently funnel acoustic signals to the amygdala and non-primary auditory cortices; forebrain circuits important for vocalization coding in a variety of mammals, including humans. However, the extent to which shell IC neurons transmit acoustic features necessary to discern vocalizations is less clear, owing to the technical difficulty of recording from neurons in the IC's superficial layers via traditional approaches. Here we use 2-photon Ca2+ imaging in mice of either sex to test how shell IC neuron populations encode the rate and depth of amplitude modulation, important sound cues for speech perception. Most shell IC neurons were broadly tuned, with a low neurometric discrimination of amplitude modulation rate; only a subset were highly selective to specific modulation rates. Nevertheless, a neural network classifier trained on fluorescence data from shell IC neuron populations accurately classified amplitude modulation rate, and decoding accuracy was only marginally reduced when highly tuned neurons were omitted from training data. Rather, classifier accuracy increased monotonically with the modulation depth of the training data, such that classifiers trained on full-depth modulated sounds had median decoding errors of ~0.2 octaves. Thus, shell IC neurons may transmit time-varying signals via a population code, with perhaps limited reliance on the discriminative capacity of any individual neuron.
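A minimal stand-in for this kind of population readout: simulated broadly tuned neurons and a nearest-centroid rule replace the paper's fluorescence data and neural network classifier, and the decoding error is expressed in octaves as in the abstract. The tuning widths, noise level, and AM rates are assumptions, not the recorded data:

```python
import numpy as np

rng = np.random.default_rng(1)
rates = np.array([4, 8, 16, 32, 64, 128])   # octave-spaced AM rates (Hz)
n_neurons, n_train, n_test = 60, 30, 20
centers = rng.uniform(2, 7, n_neurons)      # tuning centers in log2(Hz)

def pop_response(am_rate, n_trials):
    """Trials x neurons response of a broadly tuned population (sigma = 1.5 oct)."""
    tuning = np.exp(-0.5 * ((np.log2(am_rate) - centers) / 1.5) ** 2)
    return tuning + 0.3 * rng.normal(size=(n_trials, n_neurons))

# Class centroids from training trials, then nearest-centroid classification
centroids = {r: pop_response(r, n_train).mean(axis=0) for r in rates}

errors = []  # decoding error in octaves, |log2(decoded / true)|
for r in rates:
    for trial in pop_response(r, n_test):
        decoded = min(rates, key=lambda c: np.linalg.norm(trial - centroids[c]))
        errors.append(abs(float(np.log2(decoded / r))))
median_err = float(np.median(errors))
```

Even though each simulated neuron is broadly tuned (low single-neuron discriminability), the population readout classifies AM rate with a small median error, the same dissociation the abstract reports.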
Affiliation(s)
- Kaiwen Shi
- Kresge Hearing Research Institute, Department of Otolaryngology — Head & Neck Surgery, University of Michigan Medical School, Ann Arbor, MI, 48109
- Gunnar L. Quass
- Kresge Hearing Research Institute, Department of Otolaryngology — Head & Neck Surgery, University of Michigan Medical School, Ann Arbor, MI, 48109
- Meike M. Rogalla
- Kresge Hearing Research Institute, Department of Otolaryngology — Head & Neck Surgery, University of Michigan Medical School, Ann Arbor, MI, 48109
- Alexander N. Ford
- Kresge Hearing Research Institute, Department of Otolaryngology — Head & Neck Surgery, University of Michigan Medical School, Ann Arbor, MI, 48109
- Jordyn E. Czarny
- Kresge Hearing Research Institute, Department of Otolaryngology — Head & Neck Surgery, University of Michigan Medical School, Ann Arbor, MI, 48109
- Pierre F. Apostolides
- Kresge Hearing Research Institute, Department of Otolaryngology — Head & Neck Surgery, University of Michigan Medical School, Ann Arbor, MI, 48109
- Department of Molecular & Integrative Physiology, University of Michigan Medical School, Ann Arbor, MI, 48109
13
Williams AM, Angeloni CF, Geffen MN. Sound Improves Neuronal Encoding of Visual Stimuli in Mouse Primary Visual Cortex. J Neurosci 2023; 43:2885-2906. [PMID: 36944489] [PMCID: PMC10124961] [DOI: 10.1523/jneurosci.2444-21.2023]
Abstract
In everyday life, we integrate visual and auditory information in routine tasks such as navigation and communication. While concurrent sound can improve visual perception, the neuronal correlates of audiovisual integration are not fully understood. Specifically, it remains unclear whether neuronal firing patterns in the primary visual cortex (V1) of awake animals demonstrate similar sound-induced improvement in visual discriminability. Furthermore, presentation of sound is associated with movement in the subjects, but little is understood about whether and how sound-associated movement affects audiovisual integration in V1. Here, we investigated how sound and movement interact to modulate V1 visual responses in awake, head-fixed mice and whether this interaction improves neuronal encoding of the visual stimulus. We presented visual drifting gratings with and without simultaneous auditory white noise to awake mice while recording mouse movement and V1 neuronal activity. Sound modulated activity of 80% of light-responsive neurons, with 95% of neurons increasing activity when the auditory stimulus was present. A generalized linear model (GLM) revealed that sound and movement had distinct and complementary effects on the neuronal visual responses. Furthermore, decoding of the visual stimulus from the neuronal activity was improved with sound, an effect that persisted even when controlling for movement. These results demonstrate that sound and movement modulate visual responses in complementary ways, improving neuronal representation of the visual stimulus. This study clarifies the role of movement as a potential confound in neuronal audiovisual responses and expands our knowledge of how multimodal processing is mediated at a neuronal level in the awake brain.
SIGNIFICANCE STATEMENT Sound and movement are both known to modulate visual responses in the primary visual cortex; however, sound-induced movement has largely remained unaccounted for as a potential confound in audiovisual studies in awake animals. Here, the authors found that sound and movement both modulate visual responses in an important visual brain area, the primary visual cortex, in distinct, yet complementary ways. Furthermore, sound improved encoding of the visual stimulus even when accounting for movement. This study reconciles contrasting theories on the mechanism underlying audiovisual integration and asserts the primary visual cortex as a key brain region participating in tripartite sensory interactions.
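The modeling logic here (separating correlated sound and movement contributions to a visual response) can be illustrated with an ordinary least-squares stand-in for the paper's GLM. All regressors, coefficients, and noise levels below are hypothetical simulated values, not the published fit:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1000
sound = rng.integers(0, 2, n).astype(float)    # white-noise burst on/off per trial
move = 0.8 * sound + rng.normal(0.0, 1.0, n)   # movement partly evoked by sound

# Simulated visual response: additive sound and movement effects plus noise
rate = 5.0 + 2.0 * sound + 1.0 * move + rng.normal(0.0, 0.5, n)

X = np.column_stack([np.ones(n), sound, move])  # design: intercept, sound, movement
beta, *_ = np.linalg.lstsq(X, rate, rcond=None)
# Because both regressors are in the model, the sound effect is recovered
# even though sound and movement are correlated ("controlling for movement").
```

Fitting sound alone would inflate its coefficient by absorbing the movement it evokes; including both regressors is what lets the analysis attribute distinct, complementary effects to each, as in the abstract.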
Affiliation(s)
- Aaron M Williams
- Department of Otorhinolaryngology, University of Pennsylvania, Philadelphia, Pennsylvania, 19104
- Department of Neuroscience, University of Pennsylvania, Philadelphia, Pennsylvania, 19104
- Department of Neurology, University of Pennsylvania, Philadelphia, Pennsylvania, 19104
- Christopher F Angeloni
- Department of Otorhinolaryngology, University of Pennsylvania, Philadelphia, Pennsylvania, 19104
- Department of Psychology, University of Pennsylvania, Philadelphia, Pennsylvania, 19104
- Maria N Geffen
- Department of Otorhinolaryngology, University of Pennsylvania, Philadelphia, Pennsylvania, 19104
- Department of Neuroscience, University of Pennsylvania, Philadelphia, Pennsylvania, 19104
- Department of Neurology, University of Pennsylvania, Philadelphia, Pennsylvania, 19104
14
Anbuhl KL, Yao JD, Hotz RA, Mowery TM, Sanes DH. Auditory processing remains sensitive to environmental experience during adolescence in a rodent model. Nat Commun 2022; 13:2872. [PMID: 35610222] [PMCID: PMC9130260] [DOI: 10.1038/s41467-022-30455-9]
Abstract
Elevated neural plasticity during development contributes to dramatic improvements in perceptual, motor, and cognitive skills. However, malleable neural circuits are vulnerable to environmental influences that may disrupt behavioral maturation. While these risks are well-established prior to sexual maturity (i.e., critical periods), the degree of neural vulnerability during adolescence remains uncertain. Here, we induce transient hearing loss (HL) spanning adolescence in gerbils, and ask whether behavioral and neural maturation are disrupted. We find that adolescent HL causes a significant perceptual deficit that can be attributed to degraded auditory cortex processing, as assessed with wireless single neuron recordings and within-session population-level analyses. Finally, auditory cortex brain slices from adolescent HL animals reveal synaptic deficits that are distinct from those typically observed after critical period deprivation. Taken together, these results show that diminished adolescent sensory experience can cause long-lasting behavioral deficits that originate, in part, from a dysfunctional cortical circuit.
Affiliation(s)
- Kelsey L Anbuhl
- Center for Neural Science, New York University, 4 Washington Place, New York, NY, 10003, USA.
- Justin D Yao
- Center for Neural Science, New York University, 4 Washington Place, New York, NY, 10003, USA
- Robert A Hotz
- Center for Neural Science, New York University, 4 Washington Place, New York, NY, 10003, USA
- Todd M Mowery
- Center for Neural Science, New York University, 4 Washington Place, New York, NY, 10003, USA
- Department of Otolaryngology, Rutgers University, New Brunswick, NJ, USA
- Dan H Sanes
- Center for Neural Science, New York University, 4 Washington Place, New York, NY, 10003, USA
- Department of Psychology, New York University, New York, NY, USA
- Department of Biology, New York University, New York, NY, USA
- Neuroscience Institute at NYU Langone School of Medicine, New York, NY, USA
15
Lakunina AA, Menashe N, Jaramillo S. Contributions of Distinct Auditory Cortical Inhibitory Neuron Types to the Detection of Sounds in Background Noise. eNeuro 2022; 9:ENEURO.0264-21.2021. [PMID: 35168950] [PMCID: PMC8906447] [DOI: 10.1523/eneuro.0264-21.2021]
Abstract
The ability to separate background noise from relevant acoustic signals is essential for appropriate sound-driven behavior in natural environments. Examples of this separation are apparent in the auditory system, where neural responses to behaviorally relevant stimuli become increasingly noise invariant along the ascending auditory pathway. However, the mechanisms that underlie this reduction in responses to background noise are not well understood. To address this gap in knowledge, we first evaluated the effects of auditory cortical inactivation on mice of both sexes trained to perform a simple auditory signal-in-noise detection task and found that outputs from the auditory cortex are important for the detection of auditory stimuli in noisy environments. Next, we evaluated the contributions of the two most common cortical inhibitory cell types, parvalbumin-expressing (PV+) and somatostatin-expressing (SOM+) interneurons, to the perception of masked auditory stimuli. We found that inactivation of either PV+ or SOM+ cells resulted in a reduction in the ability of mice to determine the presence of auditory stimuli masked by noise. These results indicate that a disruption of auditory cortical network dynamics by either of these two types of inhibitory cells is sufficient to impair the ability to separate acoustic signals from noise.
Affiliation(s)
- Anna A Lakunina
- Institute of Neuroscience and Department of Biology, University of Oregon, Eugene, Oregon 97403
- Nadav Menashe
- Institute of Neuroscience and Department of Biology, University of Oregon, Eugene, Oregon 97403
- Santiago Jaramillo
- Institute of Neuroscience and Department of Biology, University of Oregon, Eugene, Oregon 97403
16
Memory Specific to Temporal Features of Sound Is Formed by Cue-Selective Enhancements in Temporal Coding Enabled by Inhibition of an Epigenetic Regulator. J Neurosci 2021; 41:9192-9209. [PMID: 34544835] [DOI: 10.1523/jneurosci.0691-21.2021]
Abstract
Recent investigations of memory-related functions in the auditory system have capitalized on the use of memory-modulating molecules to probe the relationship between memory and substrates of memory in auditory system coding. For example, epigenetic mechanisms, which regulate gene expression necessary for memory consolidation, are powerful modulators of learning-induced neuroplasticity and long-term memory (LTM) formation. Inhibition of the epigenetic regulator histone deacetylase 3 (HDAC3) promotes LTM, which is highly specific for spectral features of sound. The present work demonstrates for the first time that HDAC3 inhibition also enables memory for temporal features of sound. Adult male rats trained in an amplitude modulation (AM) rate discrimination task and treated with a selective inhibitor of HDAC3 formed memory that was highly specific to the AM rate paired with reward. Sound-specific memory revealed behaviorally was associated with a signal-specific enhancement in temporal coding in the auditory system; stronger phase locking that was specific to the rewarded AM rate was revealed in both the surface-recorded frequency following response and auditory cortical multiunit activity in rats treated with the HDAC3 inhibitor. Furthermore, HDAC3 inhibition increased trial-to-trial cortical response consistency (relative to naive and trained vehicle-treated rats), which generalized across different AM rates. Stronger signal-specific phase locking correlated with individual behavioral differences in memory specificity for the AM signal. These findings support that epigenetic mechanisms regulate activity-dependent processes that enhance discriminability of sensory cues encoded into LTM in both spectral and temporal domains, which may be important for remembering spectrotemporal features of sounds, for example, as in human voices and speech.
SIGNIFICANCE STATEMENT Epigenetic mechanisms have recently been implicated in memory and information processing.
Here, we use a pharmacological inhibitor of HDAC3 in a sensory model of learning to reveal the ability of HDAC3 to enable precise memory for amplitude-modulated sound cues. In so doing, we uncover neural substrates for memory's specificity for temporal sound cues. Memory specificity was supported by auditory cortical changes in temporal coding, including greater response consistency and stronger phase locking. HDAC3 appears to regulate effects across domains that determine specific cue saliency for behavior. Thus, epigenetic players may gate how sensory information is stored in long-term memory and can be leveraged to reveal the neural substrates of sensory details stored in memory.
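Phase locking of the kind quantified in this study is conventionally measured with vector strength, the length of the mean resultant of spike phases relative to the modulation cycle. A minimal sketch on simulated spike times; the 15 Hz AM rate, spike counts, and jitter are arbitrary illustration values:

```python
import numpy as np

def vector_strength(spike_times, am_rate):
    """Phase locking to the AM cycle: 1 = perfectly locked, ~0 = unlocked."""
    phases = 2 * np.pi * am_rate * np.asarray(spike_times)
    return float(np.abs(np.mean(np.exp(1j * phases))))

rng = np.random.default_rng(3)
am_rate = 15.0                     # hypothetical rewarded AM rate (Hz)
n_cycles = 150
# Locked unit: one spike per cycle at a fixed phase, with 2-ms jitter
locked = np.arange(n_cycles) / am_rate + rng.normal(0.0, 0.002, n_cycles)
# Unlocked unit: the same number of spikes at random times in the same window
unlocked = rng.uniform(0.0, n_cycles / am_rate, n_cycles)

vs_locked = vector_strength(locked, am_rate)
vs_unlocked = vector_strength(unlocked, am_rate)
```

"Stronger phase locking specific to the rewarded AM rate" corresponds, in this metric, to a higher vector strength at the rewarded rate than at neighboring rates.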
17
Amaro D, Ferreiro DN, Grothe B, Pecka M. Source identity shapes spatial preference in primary auditory cortex during active navigation. Curr Biol 2021; 31:3875-3883.e5. [PMID: 34192513] [DOI: 10.1016/j.cub.2021.06.025]
Abstract
Information about the position of sensory objects and identifying their concurrent behavioral relevance is vital to navigate the environment. In the auditory system, spatial information is computed in the brain based on the position of the sound source relative to the observer and thus assumed to be egocentric throughout the auditory pathway. This assumption is largely based on studies conducted in either anesthetized or head-fixed and passively listening animals, thus lacking self-motion and selective listening. Yet these factors are fundamental components of natural sensing [1] that may crucially impact the nature of spatial coding and sensory object representation [2]. How individual objects are neuronally represented during unrestricted self-motion and active sensing remains mostly unexplored. Here, we trained gerbils on a behavioral foraging paradigm that required localization and identification of sound sources during free navigation. Chronic tetrode recordings in primary auditory cortex during task performance revealed previously unreported sensory object representations. Strikingly, the egocentric angle preference of the majority of spatially sensitive neurons changed significantly depending on the task-specific identity (outcome association) of the sound source. Spatial tuning also exhibited large temporal complexity. Moreover, we encountered egocentrically untuned neurons whose response magnitude differed between source identities. Using a neural network decoder, we show that, together, these neuronal response ensembles provide spatiotemporally co-existent information about both the egocentric location and the identity of individual sensory objects during self-motion, revealing a novel cortical computation principle for naturalistic sensing.
Affiliation(s)
- Diana Amaro
- Division of Neurobiology, Department Biology II, Ludwig-Maximilians-Universität München, Planegg-Martinsried, Germany; Graduate School of Systemic Neurosciences, Ludwig-Maximilians-Universität München, Planegg-Martinsried, Germany
- Dardo N Ferreiro
- Division of Neurobiology, Department Biology II, Ludwig-Maximilians-Universität München, Planegg-Martinsried, Germany; Department of General Psychology and Education, Ludwig-Maximilians-Universität München, Germany
- Benedikt Grothe
- Division of Neurobiology, Department Biology II, Ludwig-Maximilians-Universität München, Planegg-Martinsried, Germany; Graduate School of Systemic Neurosciences, Ludwig-Maximilians-Universität München, Planegg-Martinsried, Germany; Max Planck Institute of Neurobiology, Planegg-Martinsried, Germany
- Michael Pecka
- Division of Neurobiology, Department Biology II, Ludwig-Maximilians-Universität München, Planegg-Martinsried, Germany
18
Downer JD, Verhein JR, Rapone BC, O'Connor KN, Sutter ML. An Emergent Population Code in Primary Auditory Cortex Supports Selective Attention to Spectral and Temporal Sound Features. J Neurosci 2021; 41:7561-7577. [PMID: 34210783] [PMCID: PMC8425978] [DOI: 10.1523/jneurosci.0693-20.2021]
Abstract
Textbook descriptions of primary sensory cortex (PSC) revolve around single neurons' representation of low-dimensional sensory features, such as visual object orientation in primary visual cortex (V1), location of somatic touch in primary somatosensory cortex (S1), and sound frequency in primary auditory cortex (A1). Typically, studies of PSC measure neurons' responses along few (one or two) stimulus and/or behavioral dimensions. However, real-world stimuli usually vary along many feature dimensions and behavioral demands change constantly. In order to illuminate how A1 supports flexible perception in rich acoustic environments, we recorded from A1 neurons while rhesus macaques (one male, one female) performed a feature-selective attention task. We presented sounds that varied along spectral and temporal feature dimensions (carrier bandwidth and temporal envelope, respectively). Within a block, subjects attended to one feature of the sound in a selective change detection task. We found that single neurons tend to be high-dimensional, in that they exhibit substantial mixed selectivity for both sound features, as well as task context. We found no overall enhancement of single-neuron coding of the attended feature, as attention could either diminish or enhance this coding. However, a population-level analysis reveals that ensembles of neurons exhibit enhanced encoding of attended sound features, and this population code tracks subjects' performance. Importantly, surrogate neural populations with intact single-neuron tuning but shuffled higher-order correlations among neurons fail to yield attention-related effects observed in the intact data. These results suggest that an emergent population code not measurable at the single-neuron level might constitute the functional unit of sensory representation in PSC.
SIGNIFICANCE STATEMENT The ability to adapt to a dynamic sensory environment promotes a range of important natural behaviors.
We recorded from single neurons in monkey primary auditory cortex (A1), while subjects attended to either the spectral or temporal features of complex sounds. Surprisingly, we found no average increase in responsiveness to, or encoding of, the attended feature across single neurons. However, when we pooled the activity of the sampled neurons via targeted dimensionality reduction (TDR), we found enhanced population-level representation of the attended feature and suppression of the distractor feature. This dissociation of the effects of attention at the level of single neurons versus the population highlights the synergistic nature of cortical sound encoding and enriches our understanding of sensory cortical function.
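The surrogate-population control described above (intact single-neuron tuning, shuffled higher-order correlations) is typically built by permuting trial order independently for each neuron. A toy sketch, with a shared noise source standing in for inter-neuronal correlation; the population of three units and the noise magnitudes are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(4)
n_trials = 500
shared = rng.normal(0.0, 1.0, n_trials)   # shared trial-to-trial fluctuation
pop = np.stack([shared + rng.normal(0.0, 1.0, n_trials) for _ in range(3)], axis=1)

def shuffle_trials(data, rng):
    """Permute trial order independently per neuron: each neuron's marginal
    response distribution (its tuning) is untouched, but across-neuron
    correlations are destroyed."""
    out = data.copy()
    for j in range(out.shape[1]):
        out[:, j] = out[rng.permutation(out.shape[0]), j]
    return out

shuffled = shuffle_trials(pop, rng)
r_intact = float(np.corrcoef(pop.T)[0, 1])      # substantial (shared noise)
r_shuffled = float(np.corrcoef(shuffled.T)[0, 1])  # near zero
```

Any population-level effect that survives in the intact data but vanishes in the shuffled surrogate, as in the abstract, must depend on those across-neuron correlations rather than on single-neuron tuning.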
Affiliation(s)
- Joshua D Downer
- Center for Neuroscience, University of California, Davis, Davis, California 95618
- Department of Otolaryngology, Head and Neck Surgery, University of California, San Francisco, California 94143
- Jessica R Verhein
- Center for Neuroscience, University of California, Davis, Davis, California 95618
- School of Medicine, Stanford University, Stanford, California 94305
- Brittany C Rapone
- Center for Neuroscience, University of California, Davis, Davis, California 95618
- School of Social Sciences, Oxford Brookes University, Oxford, OX4 0BP, United Kingdom
- Kevin N O'Connor
- Center for Neuroscience, University of California, Davis, Davis, California 95618
- Department of Neurobiology, Physiology and Behavior, University of California, Davis, Davis, California 95618
- Mitchell L Sutter
- Center for Neuroscience, University of California, Davis, Davis, California 95618
- Department of Neurobiology, Physiology and Behavior, University of California, Davis, Davis, California 95618
19
Yao JD, Sanes DH. Temporal Encoding is Required for Categorization, But Not Discrimination. Cereb Cortex 2021; 31:2886-2897. [PMID: 33429423] [DOI: 10.1093/cercor/bhaa396]
Abstract
Core auditory cortex (AC) neurons encode slow fluctuations of acoustic stimuli with temporally patterned activity. However, whether temporal encoding is necessary to explain auditory perceptual skills remains uncertain. Here, we recorded from gerbil AC neurons while the animals discriminated between a 4-Hz amplitude modulation (AM) broadband noise and AM rates >4 Hz. We found a proportion of neurons possessed neural thresholds based on spike pattern or spike count that were better than the recorded session's behavioral threshold, suggesting that spike count could provide sufficient information for this perceptual task. A population decoder that relied on temporal information outperformed a decoder that relied on spike count alone, but the spike count decoder still remained sufficient to explain average behavioral performance. This leaves open the possibility that more demanding perceptual judgments require temporal information. Thus, we asked whether accurate classification of different AM rates between 4 and 12 Hz required the information contained in AC temporal discharge patterns. Indeed, accurate classification of these AM stimuli depended on the inclusion of temporal information rather than spike count alone. Overall, our results compare two different representations of time-varying acoustic features that can be accessed by downstream circuits required for perceptual judgments.
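The spike-count versus spike-pattern comparison at the heart of this study can be sketched with simulated Poisson spike trains whose mean rate is identical across AM rates, so only spike timing is informative. The bin size, rates, and trial counts are illustrative assumptions, not the recorded data:

```python
import numpy as np

rng = np.random.default_rng(5)
dt = 0.01
t = np.arange(0.0, 1.0, dt)                 # 1-s trials, 10-ms bins

def am_trials(am_rate, n_trials):
    """Poisson spike counts; the rate is sinusoidally modulated at am_rate
    around 20 sp/s, so the expected total count is the same for every rate."""
    lam = 20.0 * (1.0 + np.sin(2 * np.pi * am_rate * t)) * dt
    return rng.poisson(lam, size=(n_trials, t.size))

a, b = am_trials(4, 60), am_trials(12, 60)
templ_a, templ_b = a[:30].mean(axis=0), b[:30].mean(axis=0)  # PSTH templates

def says_b(trial, use_pattern):
    if use_pattern:   # temporal decoder: nearest PSTH template, bin by bin
        return np.sum((trial - templ_a) ** 2) > np.sum((trial - templ_b) ** 2)
    # spike-count decoder: nearest template by total spike count only
    return abs(trial.sum() - templ_a.sum()) > abs(trial.sum() - templ_b.sum())

held_out = [(x, False) for x in a[30:]] + [(x, True) for x in b[30:]]
acc_pattern = float(np.mean([says_b(x, True) == label for x, label in held_out]))
acc_count = float(np.mean([says_b(x, False) == label for x, label in held_out]))
```

With matched mean rates, the count decoder sits near chance while the temporal decoder classifies the two AM rates reliably, echoing the abstract's conclusion that rate classification (unlike detection) requires temporal information.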
Affiliation(s)
- Justin D Yao
- Center for Neural Science, New York University, New York, NY 10003, USA
- Dan H Sanes
- Center for Neural Science, New York University, New York, NY 10003, USA
- Department of Psychology, New York University, New York, NY 10003, USA
- Department of Biology, New York University, New York, NY 10003, USA
- Neuroscience Institute, NYU Langone Medical Center, New York University, New York, NY 10016, USA
20
Mohn JL, Downer JD, O'Connor KN, Johnson JS, Sutter ML. Choice-related activity and neural encoding in primary auditory cortex and lateral belt during feature-selective attention. J Neurophysiol 2021; 125:1920-1937. [PMID: 33788616] [DOI: 10.1152/jn.00406.2020]
Abstract
Selective attention is necessary to sift through, form a coherent percept of, and make behavioral decisions on the vast amount of information present in most sensory environments. How and where selective attention is employed in cortex and how this perceptual information then informs the relevant behavioral decisions are still not well understood. Studies probing selective attention and decision-making in visual cortex have been enlightening as to how sensory attention might work in that modality; whether or not similar mechanisms are employed in auditory attention is not yet clear. Therefore, we trained rhesus macaques on a feature-selective attention task, where they switched between reporting changes in temporal (amplitude modulation, AM) and spectral (carrier bandwidth) features of a broadband noise stimulus. We investigated how the encoding of these features by single neurons in primary (A1) and secondary (middle lateral belt, ML) auditory cortex was affected by the different attention conditions. We found that neurons in A1 and ML showed mixed selectivity to the sound and task features. We found no difference in AM encoding between the attention conditions. We found that choice-related activity in both A1 and ML neurons shifts between attentional conditions. This finding suggests that choice-related activity in auditory cortex does not simply reflect motor preparation or action and supports the relationship between reported choice-related activity and the decision and perceptual process.
NEW & NOTEWORTHY We recorded from primary and secondary auditory cortex while monkeys performed a nonspatial feature attention task. Both areas exhibited rate-based choice-related activity. The manifestation of choice-related activity was attention dependent, suggesting that choice-related activity in auditory cortex does not simply reflect arousal or motor influences but relates to the specific perceptual choice.
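Rate-based choice-related activity of this kind is commonly quantified as a choice probability: the ROC area between firing-rate distributions grouped by the animal's report. A generic sketch (the rate distributions below are simulated stand-ins, not the paper's data):

```python
import numpy as np

def choice_probability(rates_a, rates_b):
    """ROC area (Mann-Whitney formulation) between firing-rate distributions
    grouped by the animal's choice; 0.5 means no choice-related activity."""
    a = np.asarray(rates_a, float)[:, None]
    b = np.asarray(rates_b, float)[None, :]
    return float(np.mean(a > b) + 0.5 * np.mean(a == b))

rng = np.random.default_rng(6)
reported = rng.normal(12.0, 2.0, 80)       # rates on "change reported" trials
not_reported = rng.normal(10.0, 2.0, 80)   # rates on "no report" trials

cp = choice_probability(reported, not_reported)
cp_null = choice_probability(not_reported, not_reported)  # self-comparison: 0.5
```

An attention-dependent shift in choice-related activity, as reported here, would appear as a choice probability that differs from 0.5 in one attention condition but not the other.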
Affiliation(s)
- Jennifer L Mohn
- Center for Neuroscience, University of California, Davis, California
- Department of Neurobiology, Physiology and Behavior, University of California, Davis, California
- Joshua D Downer
- Center for Neuroscience, University of California, Davis, California
- Department of Otolaryngology-Head and Neck Surgery, University of California, San Francisco, California
- Kevin N O'Connor
- Center for Neuroscience, University of California, Davis, California
- Department of Neurobiology, Physiology and Behavior, University of California, Davis, California
- Jeffrey S Johnson
- Center for Neuroscience, University of California, Davis, California
- Department of Neurobiology, Physiology and Behavior, University of California, Davis, California
- Mitchell L Sutter
- Center for Neuroscience, University of California, Davis, California
- Department of Neurobiology, Physiology and Behavior, University of California, Davis, California
21
Kim C, Chacron MJ. Lower Baseline Variability Gives Rise to Lower Detection Thresholds in Midbrain than Hindbrain Electrosensory Neurons. Neuroscience 2020; 448:43-54. [PMID: 32926952] [DOI: 10.1016/j.neuroscience.2020.09.011]
Abstract
Understanding how the brain decodes sensory information to give rise to behaviour remains an important problem in systems neuroscience. Across various sensory modalities (e.g. auditory, visual), the time-varying contrast of natural stimuli has been shown to carry behaviourally relevant information. However, it is unclear how such information is actually decoded by the brain to evoke perception and behaviour. Here we investigated how midbrain electrosensory neurons respond to weak contrasts in the electrosensory system of the weakly electric fish Apteronotus leptorhynchus. We found that these neurons displayed lower detection thresholds than their afferent hindbrain electrosensory neurons. Further analysis revealed that the lower detection thresholds of midbrain neurons were not due to increased sensitivity to the stimulus. Rather, these were due to the fact that midbrain neurons displayed lower variability in their firing activities in the absence of stimulation, which is due to lower firing rates. Our results suggest that midbrain neurons play an active role towards enabling the detection of weak stimulus contrasts, which in turn leads to perception and behavioral responses.
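The abstract's argument, that detection thresholds can fall because baseline variability falls rather than because stimulus sensitivity rises, reduces to simple signal-detection arithmetic: if d' = sensitivity x contrast / baseline SD, the threshold contrast is proportional to the baseline SD. A toy illustration with hypothetical numbers:

```python
def detection_threshold(sensitivity, baseline_sd, criterion=1.0):
    """Smallest contrast at which d' = sensitivity * contrast / baseline_sd
    reaches the criterion; the threshold scales with baseline variability."""
    return criterion * baseline_sd / sensitivity

# Equal stimulus sensitivity, but the midbrain-like unit fires less at rest,
# so its baseline variability is lower (hypothetical values)
hindbrain_threshold = detection_threshold(sensitivity=2.0, baseline_sd=4.0)
midbrain_threshold = detection_threshold(sensitivity=2.0, baseline_sd=1.0)
```

With identical sensitivity, the quieter-baseline unit reaches the d' criterion at a quarter of the contrast, which is the dissociation the abstract reports between midbrain and hindbrain neurons.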
Affiliation(s)
- Chelsea Kim
- Department of Physiology, McGill University, Montreal, QC, Canada
22
Bakst L, McGuire JT. Eye movements reflect adaptive predictions and predictive precision. J Exp Psychol Gen 2020; 150:915-929. [PMID: 33048566] [DOI: 10.1037/xge0000977]
Abstract
Successful decision-making depends on the ability to form predictions about uncertain future events. Existing evidence suggests predictive representations are not limited to point estimates but also include information about the associated level of predictive uncertainty. Estimates of predictive uncertainty have an important role in governing the rate at which beliefs are updated in response to new observations. It is not yet known, however, whether the same form of uncertainty-modulated learning occurs naturally and spontaneously when there is no task requirement to express predictions explicitly. Here, we used a gaze-based predictive inference paradigm to show that (a) predictive inference manifested in spontaneous gaze dynamics, (b) feedback-driven updating of spontaneous gaze-based predictions reflected adaptation to environmental statistics, and (c) anticipatory gaze variability tracked predictive uncertainty in an event-by-event manner. Our results demonstrate that sophisticated predictive inference can occur spontaneously and that oculomotor behavior can provide a multidimensional readout of internal predictive beliefs. (PsycInfo Database Record (c) 2021 APA, all rights reserved).
Affiliation(s)
- Leah Bakst
- Department of Psychological and Brain Sciences, Boston University
- Joseph T McGuire
- Department of Psychological and Brain Sciences, Boston University
23
Early Visual Motion Experience Improves Retinal Encoding of Motion Directions. J Neurosci 2020; 40:5431-5442. [PMID: 32532886 DOI: 10.1523/jneurosci.0569-20.2020] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/10/2020] [Revised: 05/14/2020] [Accepted: 05/16/2020] [Indexed: 11/21/2022] Open
Abstract
Altered sensory experience in early life often leads to altered response properties of sensory neurons. This process is mostly thought to happen in the brain, not in the sensory organs. We show that in the mouse retina of both sexes, exposed to a motion-dominated visual environment from eye-opening, the ON-OFF direction-selective ganglion cells (ooDSGCs) develop significantly stronger direction-encoding ability for motion in all directions. This improvement occurs independently of the motion direction used for training. We demonstrate that this enhanced ability to encode motion direction is mainly attributable to increased response reliability of ooDSGCs. Closer examination revealed that the excitatory inputs from the ON bipolar pathway showed enhanced response reliability after the motion-experience training, while other synaptic inputs remained relatively unchanged. Our results demonstrate that the retina adapts to the visual environment during neonatal development. SIGNIFICANCE STATEMENT We found that the retina, as the first stage of visual sensation, can also be affected by experience-dependent plasticity during development. Exposure to a motion-enriched visual environment immediately after eye-opening greatly improves motion-direction encoding by direction-selective retinal ganglion cells (RGCs). These results motivate future studies aimed at understanding how visual experience shapes the retinal circuits and the response properties of retinal neurons.
24
Gay JD, Rosen MJ, Huyck JJ. Effects of Gap Position on Perceptual Gap Detection Across Late Childhood and Adolescence. J Assoc Res Otolaryngol 2020; 21:243-258. [PMID: 32488537 DOI: 10.1007/s10162-020-00756-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/18/2019] [Accepted: 04/28/2020] [Indexed: 11/27/2022] Open
Abstract
The ability to detect a silent gap within a sound is critical for accurate speech perception, and gap detection has been shown to have an extended developmental trajectory. In certain conditions, the detectability of the gap decreases as the gap is placed closer to the beginning of the signal. Early in development, the detection of gaps shortly after signal onset may be especially difficult due to immaturities in the encoding and perception of rapidly changing sounds. The present study explored the development of gap detection from age 8 to 19 years, specifically when the temporal placement of the gap varied. Performance improved with age for all temporal placements of the gap, demonstrating a gradual maturation of gap detection abilities throughout adolescence. Younger adolescents did not benefit from increasing gap onset times, while older adolescents' thresholds gradually improved as gap onset time lengthened. Regardless of age, listeners learned between the two testing days but did not improve within days. Younger adolescents had poorer thresholds for the last block of testing on the second day, returning to baseline performance despite learning between days. These data support earlier studies showing that gaps are harder to detect near stimulus onset and confirm that gap detection abilities continue to mature into adolescence. The data also suggest that younger adolescents do not receive the same benefit of increasing gap onset time and respond differently to repeated testing than older adolescents and young adults.
Affiliation(s)
- Jennifer D Gay
- Department of Anatomy & Neurobiology, Northeast Ohio Medical University, 4209 State Route 44, Rootstown, OH, 44272, USA; Biomedical Sciences Program, Kent State University, 800 East Summit St, Kent, OH, 44242, USA
- Merri J Rosen
- Department of Anatomy & Neurobiology, Northeast Ohio Medical University, 4209 State Route 44, Rootstown, OH, 44272, USA; Kent State Brain Health Research Institute, Kent State University, 251M Integrated Sciences Building, 1175 Lefton Esplanade, Kent, OH, 44242, USA
- Julia Jones Huyck
- Kent State Brain Health Research Institute, Kent State University, 251M Integrated Sciences Building, 1175 Lefton Esplanade, Kent, OH, 44242, USA; Speech Pathology and Audiology Program, Kent State University, 1325 Theatre Drive, Kent, OH, 44242, USA
25
Ghanbari A, Lee CM, Read HL, Stevenson IH. Modeling stimulus-dependent variability improves decoding of population neural responses. J Neural Eng 2019; 16:066018. [PMID: 31404915 DOI: 10.1088/1741-2552/ab3a68] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022]
Abstract
OBJECTIVE Neural responses to repeated presentations of an identical stimulus often show substantial trial-to-trial variability. How the mean firing rate varies in response to different stimuli or during different movements (tuning curves) has been extensively modeled in a wide variety of neural systems. However, the variability of neural responses can also have clear tuning independent of the tuning in the mean firing rate. This suggests that the variability could contain information regarding the stimulus/movement beyond what is encoded in the mean firing rate. Here we demonstrate how taking variability into account can improve neural decoding. APPROACH In a typical neural coding model, spike counts are assumed to be Poisson with the mean response depending on an external variable, such as a stimulus or movement. Bayesian decoding methods then use the probabilities under these Poisson tuning models (the likelihood) to estimate the probability of each stimulus given the spikes on a given trial (the posterior). However, under the Poisson model, spike count variability is always exactly equal to the mean (Fano factor = 1). Here we use two alternative models, the Conway-Maxwell-Poisson (CMP) model and the negative binomial (NB) model, to more flexibly characterize how neural variability depends on external stimuli. These models both contain the Poisson distribution as a special case but have an additional parameter that allows the variance to be greater than the mean (Fano factor > 1) or, for the CMP model, less than the mean (Fano factor < 1). MAIN RESULTS We find that neural responses in primary motor (M1), visual (V1), and auditory (A1) cortices have diverse tuning in both their mean firing rates and response variability. Across cortical areas, we find that Bayesian decoders using the CMP or NB models improve stimulus/movement estimation accuracy by 4%-12% compared to the Poisson model. SIGNIFICANCE Moreover, the uncertainty of the non-Poisson decoders more accurately reflects the magnitude of estimation errors. In addition to tuning curves that reflect average neural responses, stimulus-dependent response variability may be an important aspect of the neural code. Modeling this structure could, potentially, lead to improvements in brain machine interfaces.
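A minimal sketch of the modeling idea (not the authors' code): overdispersed spike counts with Fano factor > 1 can be simulated as a gamma-Poisson (negative binomial) mixture, and a negative binomial likelihood with matched mean then fits such counts better than a Poisson likelihood with the same mean. The mean and dispersion below are invented for illustration.

```python
import math
import random

def nb_sample(mean, dispersion, rng):
    """Negative binomial count via a gamma-Poisson mixture (Fano = 1 + mean/dispersion)."""
    lam = rng.gammavariate(dispersion, mean / dispersion)
    limit, k, p = math.exp(-lam), 0, 1.0   # Poisson draw at rate lam (Knuth's method)
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def poisson_logpmf(k, mean):
    return k * math.log(mean) - mean - math.lgamma(k + 1)

def nb_logpmf(k, mean, r):
    """NB log pmf parameterized by mean and dispersion r."""
    return (math.lgamma(k + r) - math.lgamma(r) - math.lgamma(k + 1)
            + r * math.log(r / (r + mean)) + k * math.log(mean / (r + mean)))

rng = random.Random(7)
mean, dispersion = 10.0, 2.0               # hypothetical tuning mean and dispersion
counts = [nb_sample(mean, dispersion, rng) for _ in range(2000)]

sample_mean = sum(counts) / len(counts)
fano = sum((c - sample_mean) ** 2 for c in counts) / len(counts) / sample_mean

# Average log likelihood of the counts under each model (same mean for both)
poisson_ll = sum(poisson_logpmf(c, mean) for c in counts) / len(counts)
nb_ll = sum(nb_logpmf(c, mean, dispersion) for c in counts) / len(counts)
```

In a Bayesian decoder of the kind the abstract describes, likelihoods like these replace the Poisson term; the extra dispersion parameter is what lets the model capture Fano factors above (NB) or below (CMP) one.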
Affiliation(s)
- Abed Ghanbari
- Department of Biomedical Engineering, University of Connecticut, Storrs, CT, United States of America
26
Zuk NJ, Delgutte B. Neural coding and perception of auditory motion direction based on interaural time differences. J Neurophysiol 2019; 122:1821-1842. [PMID: 31461376 DOI: 10.1152/jn.00081.2019] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/05/2023] Open
Abstract
While motion is important for parsing a complex auditory scene into perceptual objects, how it is encoded in the auditory system is unclear. Perceptual studies suggest that the ability to identify the direction of motion is limited by the duration of the moving sound, yet we can detect changes in interaural differences at even shorter durations. To understand the source of these distinct temporal limits, we recorded from single units in the inferior colliculus (IC) of unanesthetized rabbits in response to noise stimuli containing a brief segment with linearly time-varying interaural time difference ("ITD sweep") temporally embedded in interaurally uncorrelated noise. We also tested the ability of human listeners to either detect the ITD sweeps or identify the motion direction. Using a point-process model to separate the contributions of stimulus dependence and spiking history to single-neuron responses, we found that the neurons respond primarily by following the instantaneous ITD rather than exhibiting true direction selectivity. Furthermore, using an optimal classifier to decode the single-neuron responses, we found that neural threshold durations of ITD sweeps for both direction identification and detection overlapped with human threshold durations even though the average response of the neurons could track the instantaneous ITD beyond psychophysical limits. Our results suggest that the IC does not explicitly encode motion direction, but internal neural noise may limit the speed at which we can identify the direction of motion. NEW & NOTEWORTHY Recognizing motion and identifying an object's trajectory are important for parsing a complex auditory scene, but how we do so is unclear. We show that neurons in the auditory midbrain do not exhibit direction selectivity as found in the visual system but instead follow the trajectory of the motion in their temporal firing patterns. Our results suggest that the inherent variability in neural firing may limit our ability to identify motion direction at short durations.
Affiliation(s)
- Nathaniel J Zuk
- Eaton-Peabody Laboratories, Massachusetts Eye and Ear, Boston, Massachusetts
- Bertrand Delgutte
- Eaton-Peabody Laboratories, Massachusetts Eye and Ear, Boston, Massachusetts; Department of Otolaryngology, Harvard Medical School, Boston, Massachusetts
27
Zorio DAR, Monsma S, Sanes DH, Golding NL, Rubel EW, Wang Y. De novo sequencing and initial annotation of the Mongolian gerbil (Meriones unguiculatus) genome. Genomics 2019; 111:441-449. [PMID: 29526484 PMCID: PMC6129228 DOI: 10.1016/j.ygeno.2018.03.001] [Citation(s) in RCA: 20] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/15/2017] [Revised: 02/26/2018] [Accepted: 03/01/2018] [Indexed: 12/28/2022]
Abstract
The Mongolian gerbil (Meriones unguiculatus) is a member of the rodent family that displays several features not found in mice or rats, including sensory specializations and social patterns more similar to those in humans. These features have made gerbils a valuable animal for research studies of auditory and visual processing, brain development, learning and memory, and neurological disorders. Here, we report the whole annotated gerbil genome sequence, and identify important similarities and differences to the human and mouse genomes. We further analyze the chromosomal structure of eight genes with high relevance for controlling neural signaling and demonstrate a high degree of homology between these genes in mouse and gerbil. This homology increases the likelihood that individual genes can be rapidly identified in gerbil and used for genetic manipulations. The availability of the gerbil genome provides a foundation for advancing our knowledge of evolution, behavior, and neural function in mammals. ACCESSION NUMBER: The Whole Genome Shotgun sequence data from this project have been deposited at DDBJ/ENA/GenBank under the accession NHTI00000000. The version described in this paper is version NHTI01000000. The fragment reads and mate-pair reads have been deposited in the Sequence Read Archive under BioSample accession SAMN06897401.
Affiliation(s)
- Diego A R Zorio
- Department of Biomedical Sciences, College of Medicine, Florida State University, Tallahassee, FL, USA
- Dan H Sanes
- Center for Neural Science, New York University, New York, NY, USA
- Nace L Golding
- University of Texas at Austin, Department of Neuroscience, Center for Learning and Memory, Austin, TX, USA
- Edwin W Rubel
- Virginia Merrill Bloedel Hearing Research Center, Department of Otolaryngology-Head and Neck Surgery, University of Washington, Seattle, WA, USA
- Yuan Wang
- Department of Biomedical Sciences, College of Medicine, Florida State University, Tallahassee, FL, USA; Program in Neuroscience, Florida State University, Tallahassee, FL, USA
28
Neural Variability Limits Adolescent Skill Learning. J Neurosci 2019; 39:2889-2902. [PMID: 30755494 DOI: 10.1523/jneurosci.2878-18.2019] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/08/2018] [Revised: 01/24/2019] [Accepted: 01/26/2019] [Indexed: 12/31/2022] Open
Abstract
Skill learning is fundamental to the acquisition of many complex behaviors that emerge during development. For example, years of practice give rise to perceptual improvements that contribute to mature speech and language skills. While fully honed learning skills might be thought to offer an advantage during the juvenile period, the ability to learn actually continues to develop through childhood and adolescence, suggesting that the neural mechanisms that support skill learning are slow to mature. To address this issue, we asked whether the rate and magnitude of perceptual learning varies as a function of age as male and female gerbils trained on an auditory task. Adolescents displayed a slower rate of perceptual learning compared with their young and mature counterparts. We recorded auditory cortical neuron activity from a subset of adolescent and adult gerbils as they underwent perceptual training. While training enhanced the sensitivity of most adult units, the sensitivity of many adolescent units remained unchanged, or even declined across training days. Therefore, the average rate of cortical improvement was significantly slower in adolescents compared with adults. Both smaller differences between sound-evoked response magnitudes and greater trial-to-trial response fluctuations contributed to the poorer sensitivity of individual adolescent neurons. Together, these findings suggest that elevated sensory neural variability limits adolescent skill learning. SIGNIFICANCE STATEMENT The ability to learn new skills emerges gradually as children age. This prolonged development, often lasting well into adolescence, suggests that children, teens, and adults may rely on distinct neural strategies to improve their sensory and motor capabilities. Here, we found that practice-based improvement on a sound detection task is slower in adolescent gerbils than in younger or older animals. Neural recordings made during training revealed that practice enhanced the sound sensitivity of adult cortical neurons, but had a weaker effect in adolescents. This latter finding was partially explained by the fact that adolescent neural responses were more variable than in adults. Our results suggest that one mechanistic basis of adult-like skill learning is a reduction in neural response variability.
29
Motor output, neural states and auditory perception. Neurosci Biobehav Rev 2019; 96:116-126. [DOI: 10.1016/j.neubiorev.2018.10.021] [Citation(s) in RCA: 20] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/05/2018] [Revised: 10/26/2018] [Accepted: 10/29/2018] [Indexed: 12/12/2022]
30
Mehta K, Kliewer J, Ihlefeld A. Quantifying Neuronal Information Flow in Response to Frequency and Intensity Changes in the Auditory Cortex. CONFERENCE RECORD. ASILOMAR CONFERENCE ON SIGNALS, SYSTEMS & COMPUTERS 2018; 2018:1367-1371. [PMID: 31595139 PMCID: PMC6782062 DOI: 10.1109/acssc.2018.8645091] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
Studies increasingly show that behavioral relevance alters the population representation of sensory stimuli in the sensory cortices. However, the mechanisms underlying this modulation are incompletely understood. Here, we record neuronal responses in the auditory cortex while a highly trained, awake, normal-hearing gerbil listens passively to target tones of high versus low behavioral relevance. Using an information-theoretic framework, we model the overall transmission chain from acoustic input stimulus to recorded cortical response as a communication channel. To quantify how much information core auditory cortex carries about high- versus low-relevance sound, we then compute the mutual information of the multi-unit neuronal responses. Results show that the output over the stimulus-to-response channel can be modeled as a Poisson mixture. We derive a closed-form fast approximation for the entropy of a mixture of univariate Poisson random variables. A purely rate-code-based model reveals reduced information transfer for high-relevance compared to low-relevance tones, hinting that changes in temporal discharge pattern may encode behavioral relevance.
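The information-theoretic framing can be sketched by treating the stimulus-to-spike-count map as a discrete channel and computing mutual information exactly for Poisson responses; the marginal over counts is then precisely a Poisson mixture. The two rates below are arbitrary stand-ins for responses to high- vs low-relevance tones, not values from the recordings.

```python
import math

def poisson_pmf(k, rate):
    return math.exp(k * math.log(rate) - rate - math.lgamma(k + 1))

def mutual_information_bits(rate_a, rate_b, max_count=200):
    """MI between an equiprobable binary stimulus and a Poisson spike count."""
    mi = 0.0
    for k in range(max_count + 1):
        pa, pb = poisson_pmf(k, rate_a), poisson_pmf(k, rate_b)
        marginal = 0.5 * (pa + pb)   # the Poisson-mixture output distribution
        for p in (pa, pb):
            if p > 0.0:
                mi += 0.5 * p * math.log2(p / marginal)
    return mi

# Well-separated rates are far more informative than near-identical ones.
mi_separated = mutual_information_bits(5.0, 15.0)
mi_overlapping = mutual_information_bits(5.0, 6.0)
```

This brute-force sum only illustrates the quantity; the paper's contribution is a fast closed-form approximation to the entropy of such Poisson mixtures, which this sketch does not reproduce.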
Affiliation(s)
- Ketan Mehta
- Krasnow Institute for Advanced Study, George Mason University, Fairfax, VA 22030
- Jörg Kliewer
- Helen and John C. Hartmann Dept. of Electrical and Computer Engineering, New Jersey Institute of Technology, Newark, NJ 07102
- Antje Ihlefeld
- Dept. of Biomedical Engineering, New Jersey Institute of Technology, Newark, NJ 07102
31
Abstract
Our ability to make sense of the auditory world results from neural processing that begins in the ear, goes through multiple subcortical areas, and continues in the cortex. The specific contribution of the auditory cortex to this chain of processing is far from understood. Although many of the properties of neurons in the auditory cortex resemble those of subcortical neurons, they show somewhat more complex selectivity for sound features, which is likely to be important for the analysis of natural sounds, such as speech, in real-life listening conditions. Furthermore, recent work has shown that auditory cortical processing is highly context-dependent, integrates auditory inputs with other sensory and motor signals, depends on experience, and is shaped by cognitive demands, such as attention. Thus, in addition to being the locus for more complex sound selectivity, the auditory cortex is increasingly understood to be an integral part of the network of brain regions responsible for prediction, auditory perceptual decision-making, and learning. In this review, we focus on three key areas that are contributing to this understanding: the sound features that are preferentially represented by cortical neurons, the spatial organization of those preferences, and the cognitive roles of the auditory cortex.
Affiliation(s)
- Andrew J King
- Department of Physiology, Anatomy & Genetics, University of Oxford, Oxford, OX1 3PT, UK
- Sundeep Teki
- Department of Physiology, Anatomy & Genetics, University of Oxford, Oxford, OX1 3PT, UK
- Ben D B Willmore
- Department of Physiology, Anatomy & Genetics, University of Oxford, Oxford, OX1 3PT, UK
32
A Hierarchy of Time Scales for Discriminating and Classifying the Temporal Shape of Sound in Three Auditory Cortical Fields. J Neurosci 2018; 38:6967-6982. [PMID: 29954851 DOI: 10.1523/jneurosci.2871-17.2018] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/04/2017] [Revised: 05/29/2018] [Accepted: 06/17/2018] [Indexed: 11/21/2022] Open
Abstract
Auditory cortex is essential for mammals, including rodents, to detect temporal "shape" cues in the sound envelope, but it remains unclear how different cortical fields may contribute to this ability (Lomber and Malhotra, 2008; Threlkeld et al., 2008). Previously, we found that precise spiking patterns provide a potential neural code for temporal shape cues in the sound envelope in the primary auditory field (A1), the ventral auditory field (VAF), and the caudal suprarhinal auditory field (cSRAF) of the rat (Lee et al., 2016). Here, we extend these findings and characterize the time course of the temporally precise output of auditory cortical neurons in male rats. A pairwise sound discrimination index and a Naive Bayesian classifier are used to determine how these spiking patterns could provide brain signals for behavioral discrimination and classification of sounds. We find response durations and optimal time constants for discriminating sound envelope shape increase in rank order with: A1 < VAF < cSRAF. Accordingly, sustained spiking is more prominent and results in more robust sound discrimination in non-primary cortex versus A1. Spike-timing patterns classify 10 different sound envelope shape sequences, and there is a twofold increase in maximal performance when pooling output across the neuron population, indicating a robust distributed neural code in all three cortical fields. Together, these results support the idea that temporally precise spiking patterns from primary and non-primary auditory cortical fields provide the necessary signals for animals to discriminate and classify a large range of temporal shapes in the sound envelope. SIGNIFICANCE STATEMENT Functional hierarchies in the visual cortices support the concept that classification of visual objects requires successive cortical stages of processing including a progressive increase in classical receptive field size. The present study is significant as it supports the idea that a similar progression exists in auditory cortices in the time domain. We demonstrate for the first time that three cortices provide temporal spiking patterns for robust temporal envelope shape discrimination but only the ventral non-primary cortices do so on long time scales. This study raises the possibility that primary and non-primary cortices provide unique temporal spiking patterns and time scales for perception of sound envelope shape.
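The classifier idea can be sketched as a per-bin Poisson naive Bayes readout of binned spike trains. The two envelope "shapes" and their firing-rate profiles below are invented for illustration and are not the stimuli or rates used in the study.

```python
import math
import random

def poisson_sample(rate, rng):
    """Draw one Poisson count via Knuth's multiplication method."""
    limit, k, p = math.exp(-rate), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def log_likelihood(pattern, rate_profile):
    """Sum of per-bin Poisson log likelihoods for one spike-count pattern."""
    return sum(k * math.log(r) - r - math.lgamma(k + 1)
               for k, r in zip(pattern, rate_profile))

def classify(pattern, profiles):
    """Naive Bayes with equal priors: pick the profile with highest likelihood."""
    return max(profiles, key=lambda label: log_likelihood(pattern, profiles[label]))

# Hypothetical rate profiles (spikes/bin) for a rising vs falling envelope shape.
profiles = {"rising": [2.0, 4.0, 8.0, 12.0], "falling": [12.0, 8.0, 4.0, 2.0]}

rng = random.Random(3)
n_trials, correct = 500, 0
for _ in range(n_trials):
    label = rng.choice(["rising", "falling"])
    pattern = [poisson_sample(r, rng) for r in profiles[label]]
    correct += classify(pattern, profiles) == label
accuracy = correct / n_trials
```

Pooling log likelihoods across a population of such model neurons is what produces the performance gain the abstract describes; a single model neuron already separates these strongly different profiles well.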
33
Yao JD, Sanes DH. Developmental deprivation-induced perceptual and cortical processing deficits in awake-behaving animals. eLife 2018; 7:33891. [PMID: 29873632 PMCID: PMC6005681 DOI: 10.7554/elife.33891] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/28/2017] [Accepted: 06/04/2018] [Indexed: 01/02/2023] Open
Abstract
Sensory deprivation during development induces lifelong changes to central nervous system function that are associated with perceptual impairments. However, the relationship between neural and behavioral deficits is uncertain due to a lack of simultaneous measurements during task performance. Therefore, we telemetrically recorded from auditory cortex neurons in gerbils reared with developmental conductive hearing loss as they performed an auditory task that required detecting rapid fluctuations in amplitude. These data were compared to a measure of auditory brainstem temporal processing from each animal. We found that developmental hearing loss diminished behavioral performance, but did not alter brainstem temporal processing. However, the simultaneous assessment of neural and behavioral processing revealed that perceptual deficits were associated with a degraded cortical population code that could be explained by greater trial-to-trial response variability. Our findings suggest that the perceptual limitations that attend early hearing loss are best explained by an encoding deficit in auditory cortex.
Affiliation(s)
- Justin D Yao
- Center for Neural Science, New York University, New York, United States
- Dan H Sanes
- Center for Neural Science, New York University, New York, United States; Department of Psychology, New York University, New York, United States; Department of Biology, New York University, New York, United States; Neuroscience Institute, NYU Langone Medical Center, New York, United States
34
Irvine DRF. Auditory perceptual learning and changes in the conceptualization of auditory cortex. Hear Res 2018; 366:3-16. [PMID: 29551308 DOI: 10.1016/j.heares.2018.03.011] [Citation(s) in RCA: 33] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/13/2017] [Revised: 03/06/2018] [Accepted: 03/09/2018] [Indexed: 12/11/2022]
Abstract
Perceptual learning, improvement in discriminative ability as a consequence of training, is one of the forms of sensory system plasticity that has driven profound changes in our conceptualization of sensory cortical function. Psychophysical and neurophysiological studies of auditory perceptual learning have indicated that the characteristics of the learning, and by implication the nature of the underlying neural changes, are highly task specific. Some studies in animals have indicated that recruitment of neurons to the population responding to the training stimuli, and hence an increase in the so-called cortical "area of representation" of those stimuli, is the substrate of improved performance, but such changes have not been observed in other studies. A possible reconciliation of these conflicting results is provided by evidence that changes in area of representation constitute a transient stage in the processes underlying perceptual learning. This expansion-renormalization hypothesis is supported by evidence from studies of the learning of motor skills, another form of procedural learning, but leaves open the nature of the permanent neural substrate of improved performance. Other studies have suggested that the substrate might be reduced response variability, a decrease in internal noise. Neuroimaging studies in humans have also provided compelling evidence that training results in long-term changes in auditory cortical function and in the auditory brainstem frequency-following response. Musical training provides a valuable model, but the evidence it provides is qualified by the fact that most such training is multimodal and sensorimotor, and that few of the studies are experimental and allow control over confounding variables. More generally, the overwhelming majority of experimental studies of the various forms of auditory perceptual learning have established the co-occurrence of neural and perceptual changes, but have not established that the former are causally related to the latter. Important forms of perceptual learning in humans are those involved in language acquisition and in the improvement in speech perception performance of post-lingually deaf cochlear implantees over the months following implantation. The development of a range of auditory training programs has focused interest on the factors determining the extent to which perceptual learning is specific or generalises to tasks other than those used in training. The context specificity demonstrated in a number of studies of perceptual learning suggests a multiplexing model, in which learning relating to a particular stimulus attribute depends on a subset of the diverse inputs to a given cortical neuron being strengthened, and different subsets being gated by top-down influences. This hypothesis avoids the difficulty of balancing system stability with plasticity, which is a problem for recruitment hypotheses. The characteristics of auditory perceptual learning reflect the fact that auditory cortex forms part of distributed networks that integrate the representation of auditory stimuli with attention, decision, and reward processes.
Affiliation(s)
- Dexter R F Irvine
- Bionics Institute, East Melbourne, Victoria 3002, Australia; School of Psychological Sciences, Monash University, Victoria 3800, Australia.
35
Aoki R, Tsubota T, Goya Y, Benucci A. An automated platform for high-throughput mouse behavior and physiology with voluntary head-fixation. Nat Commun 2017; 8:1196. [PMID: 29084948 PMCID: PMC5662625 DOI: 10.1038/s41467-017-01371-0] [Citation(s) in RCA: 41] [Impact Index Per Article: 5.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/12/2017] [Accepted: 09/10/2017] [Indexed: 11/18/2022] Open
Abstract
Recording neural activity during animal behavior is a cornerstone of modern brain research. However, integration of cutting-edge technologies for neural circuit analysis with complex behavioral measurements poses a severe experimental bottleneck for researchers. Critical problems include a lack of standardization for psychometric and neurometric integration, and lack of tools that can generate large, sharable data sets for the research community in a time- and cost-effective way. Here, we introduce a novel mouse behavioral learning platform featuring voluntary head fixation and automated high-throughput data collection for integrating complex behavioral assays with virtually any physiological device. We provide experimental validation by demonstrating behavioral training of mice in visual discrimination and auditory detection tasks. To examine facile integration with physiology systems, we coupled the platform to a two-photon microscope for imaging of cortical networks at single-cell resolution. Our behavioral learning and recording platform is a prototype for the next generation of mouse cognitive studies. Transgenic approaches and improvements in functional imaging have necessitated an advance in the behavioral toolkit. Here the authors describe an automated high-throughput voluntary head-fixation system for training mice on complex psychophysical decision tasks compatible with concurrent two-photon microscopy.
Affiliation(s)
- Ryo Aoki
- RIKEN Brain Science Institute, Wako-shi, Saitama, 351-0198, Japan
- Tadashi Tsubota
- RIKEN Brain Science Institute, Wako-shi, Saitama, 351-0198, Japan
- Yuki Goya
- RIKEN Brain Science Institute, Wako-shi, Saitama, 351-0198, Japan
- Andrea Benucci
- RIKEN Brain Science Institute, Wako-shi, Saitama, 351-0198, Japan
36
Abstract
Practice sharpens our perceptual judgments, a process known as perceptual learning. Although several brain regions and neural mechanisms have been proposed to support perceptual learning, formal tests of causality are lacking. Furthermore, the temporal relationship between neural and behavioral plasticity remains uncertain. To address these issues, we recorded the activity of auditory cortical neurons as gerbils trained on a sound detection task. Training led to improvements in cortical and behavioral sensitivity that were closely matched in terms of magnitude and time course. Surprisingly, the degree of neural improvement was behaviorally gated. During task performance, cortical improvements were large and predicted behavioral outcomes. In contrast, during nontask listening sessions, cortical improvements were weak and uncorrelated with perceptual performance. Targeted reduction of auditory cortical activity during training diminished perceptual learning while leaving psychometric performance largely unaffected. Collectively, our findings suggest that training facilitates perceptual learning by strengthening both bottom-up sensory encoding and top-down modulation of auditory cortex.
Affiliation(s)
- Melissa L Caras
- Center for Neural Science, New York University, New York, NY 10003
- Dan H Sanes
- Center for Neural Science, New York University, New York, NY 10003
- Department of Psychology, New York University, New York, NY 10003
- Department of Biology, New York University, New York, NY 10003
- Neuroscience Institute, New York University Langone Medical Center, New York, NY 10016
37
Distinct Correlation Structure Supporting a Rate-Code for Sound Localization in the Owl's Auditory Forebrain. eNeuro 2017; 4:eN-NWR-0144-17. [PMID: 28674698 PMCID: PMC5492684 DOI: 10.1523/eneuro.0144-17.2017] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/26/2017] [Revised: 05/31/2017] [Accepted: 06/07/2017] [Indexed: 11/21/2022] Open
Abstract
While a topographic map of auditory space exists in the vertebrate midbrain, it is absent in the forebrain. Yet both brain regions are implicated in sound localization. The heterogeneous spatial tuning of adjacent sites in the forebrain, compared to the midbrain, reflects different underlying circuitries, which is expected to affect the correlation structure, i.e., signal (similarity of tuning) and noise (trial-by-trial variability) correlations. Recent studies have drawn attention to the impact of response correlations on the information readout from a neural population. We thus analyzed the correlation structure in midbrain and forebrain regions of the barn owl's auditory system. Tetrodes were used to record in the midbrain and two forebrain regions, Field L and the downstream auditory arcopallium (AAr), in anesthetized owls. Nearby neurons in the midbrain showed high signal and noise correlations, consistent with shared inputs. As previously reported, Field L was arranged in random clusters of similarly tuned neurons. Interestingly, AAr neurons displayed homogeneous monotonic azimuth tuning, while the response variability of nearby neurons was significantly less correlated than in the midbrain. Using a decoding approach, we demonstrate that the low noise correlations in AAr limit their potentially detrimental effect on information, assuming the rate code proposed for mammalian sound localization. This study harnesses the power of correlation structure analysis to investigate the coding of auditory space. Our findings demonstrate distinct correlation structures in the auditory midbrain and forebrain, which would be beneficial for a rate-code framework for sound localization in the nontopographic forebrain representation of auditory space.