1
Macedo-Lima M, Hamlette LS, Caras ML. Orbitofrontal cortex modulates auditory cortical sensitivity and sound perception in Mongolian gerbils. Curr Biol 2024:S0960-9822(24)00820-0. [PMID: 38996534 DOI: 10.1016/j.cub.2024.06.036] [Received: 01/05/2024] [Revised: 04/25/2024] [Accepted: 06/12/2024] [Indexed: 07/14/2024]
Abstract
Sensory perception is dynamic, quickly adapting to sudden shifts in environmental or behavioral context. Although decades of work have established that these dynamics are mediated by rapid fluctuations in sensory cortical activity, we have a limited understanding of the brain regions and pathways that orchestrate these changes. Neurons in the orbitofrontal cortex (OFC) encode contextual information, and recent data suggest that some of these signals are transmitted to sensory cortices. Whether and how these signals shape sensory encoding and perceptual sensitivity remain uncertain. Here, we asked whether the OFC mediates context-dependent changes in auditory cortical sensitivity and sound perception by monitoring and manipulating OFC activity in freely moving Mongolian gerbils of both sexes under two behavioral contexts: passive sound exposure and engagement in an amplitude modulation (AM) detection task. We found that the majority of OFC neurons, including the specific subset that innervates the auditory cortex, were strongly modulated by task engagement. Pharmacological inactivation of the OFC prevented rapid context-dependent changes in auditory cortical firing and significantly impaired behavioral AM detection. Our findings suggest that contextual information from the OFC mediates rapid plasticity in the auditory cortex and facilitates the perception of behaviorally relevant sounds.
Affiliation(s)
- Melissa L Caras
- Department of Biology, University of Maryland, College Park, MD 20742, USA.
2
Ross G, Radtke-Schuller S, Frohlich F. Ferret as a model system for studying the anatomy and function of the prefrontal cortex: A systematic review. Neurosci Biobehav Rev 2024; 162:105701. [PMID: 38718987 PMCID: PMC11162921 DOI: 10.1016/j.neubiorev.2024.105701] [Received: 10/30/2023] [Revised: 04/12/2024] [Accepted: 05/01/2024] [Indexed: 05/19/2024]
Abstract
There is a lack of consensus on anatomical nomenclature, standards of documentation, and functional equivalence of the frontal cortex between species. There remains a major gap between human prefrontal function and the interpretation of findings in the mouse brain, which appears to lack several key prefrontal areas involved in cognition and psychiatric illness. The ferret is an emerging model organism that has gained traction as an intermediate model species for the study of top-down cognitive control and other higher-order brain functions. However, this research has yet to benefit from synthesis. Here, we provide a summary of all published research pertaining to the frontal and/or prefrontal cortex of the ferret across research scales. The targeted location within the ferret brain is summarized visually for each experiment, and the anatomical terminology used at the time of publication is compared to what would be the appropriate term to use presently. By doing so, we hope to improve clarity in the interpretation of both previous and future publications on the comparative study of frontal cortex.
Affiliation(s)
- Grace Ross
- Department of Psychiatry, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA; Carolina Center for Neurostimulation, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA; Neuroscience Center, University of North Carolina, Chapel Hill, NC, USA
- Susanne Radtke-Schuller
- Department of Psychiatry, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA; Carolina Center for Neurostimulation, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Flavio Frohlich
- Department of Psychiatry, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA; Carolina Center for Neurostimulation, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA; Neuroscience Center, University of North Carolina, Chapel Hill, NC, USA; Department of Cell Biology and Physiology, University of North Carolina, Chapel Hill, NC, USA; Department of Biomedical Engineering, University of North Carolina, Chapel Hill, NC, USA; Department of Neurology, University of North Carolina, Chapel Hill, NC, USA.
3
Wang M, Jendrichovsky P, Kanold PO. Auditory discrimination learning differentially modulates neural representation in auditory cortex subregions and inter-areal connectivity. Cell Rep 2024; 43:114172. [PMID: 38703366 DOI: 10.1016/j.celrep.2024.114172] [Received: 08/30/2023] [Revised: 02/06/2024] [Accepted: 04/16/2024] [Indexed: 05/06/2024]
Abstract
Changes in sound-evoked responses in the auditory cortex (ACtx) occur during learning, but how learning alters neural responses in different ACtx subregions and changes their interactions is unclear. To address these questions, we developed an automated training and widefield imaging system to longitudinally track the neural activity of all mouse ACtx subregions during a tone discrimination task. We find that responses in primary ACtx are highly informative of learned stimuli and behavioral outcomes throughout training. In contrast, representations of behavioral outcomes in the dorsal posterior auditory field, learned stimuli in the dorsal anterior auditory field, and inter-regional correlations between primary and higher-order areas are enhanced with training. Moreover, ACtx response changes vary between stimuli, and such differences display lag synchronization with the learning rate. These results indicate that learning alters functional connections between ACtx subregions, inducing region-specific modulations by propagating behavioral information from primary to higher-order areas.
Affiliation(s)
- Mingxuan Wang
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21205, USA
- Peter Jendrichovsky
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21205, USA
- Patrick O Kanold
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21205, USA; Kavli Neuroscience Discovery Institute, Johns Hopkins University, Baltimore, MD 21205, USA.
4
Viswanathan V, Rupp KM, Hect JL, Harford EE, Holt LL, Abel TJ. Intracranial Mapping of Response Latencies and Task Effects for Spoken Syllable Processing in the Human Brain. bioRxiv 2024:2024.04.05.588349. [PMID: 38617227 PMCID: PMC11014624 DOI: 10.1101/2024.04.05.588349] [Indexed: 04/16/2024]
Abstract
Prior lesion, noninvasive-imaging, and intracranial-electroencephalography (iEEG) studies have documented hierarchical, parallel, and distributed characteristics of human speech processing. Yet, there have not been direct, intracranial observations of the latency with which regions outside the temporal lobe respond to speech, or how these responses are impacted by task demands. We leveraged human intracranial recordings via stereo-EEG to measure responses from diverse forebrain sites during (i) passive listening to /bi/ and /pi/ syllables, and (ii) active listening requiring /bi/-versus-/pi/ categorization. We find that neural response latency increases from a few tens of ms in Heschl's gyrus (HG) to several tens of ms in superior temporal gyrus (STG), superior temporal sulcus (STS), and early parietal areas, and hundreds of ms in later parietal areas, insula, frontal cortex, hippocampus, and amygdala. These data also suggest parallel flow of speech information dorsally and ventrally, from HG to parietal areas and from HG to STG and STS, respectively. Latency data also reveal areas in parietal cortex, frontal cortex, hippocampus, and amygdala that are not responsive to the stimuli during passive listening but are responsive during categorization. Furthermore, multiple regions-spanning auditory, parietal, frontal, and insular cortices, and hippocampus and amygdala-show greater neural response amplitudes during active versus passive listening (a task-related effect). Overall, these results are consistent with hierarchical processing of speech at a macro level and parallel streams of information flow in temporal and parietal regions. These data also reveal regions where the speech code is stimulus-faithful and those that encode task-relevant representations.
Affiliation(s)
- Vibha Viswanathan
- Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA 15213
- Department of Neurological Surgery, University of Pittsburgh, Pittsburgh, PA 15260
- Kyle M. Rupp
- Department of Neurological Surgery, University of Pittsburgh, Pittsburgh, PA 15260
- Jasmine L. Hect
- Department of Neurological Surgery, University of Pittsburgh, Pittsburgh, PA 15260
- Emily E. Harford
- Department of Neurological Surgery, University of Pittsburgh, Pittsburgh, PA 15260
- Lori L. Holt
- Department of Psychology, The University of Texas at Austin, Austin, TX 78712
- Taylor J. Abel
- Department of Neurological Surgery, University of Pittsburgh, Pittsburgh, PA 15260
- Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA 15238
5
Beerendonk L, Mejías JF, Nuiten SA, de Gee JW, Fahrenfort JJ, van Gaal S. A disinhibitory circuit mechanism explains a general principle of peak performance during mid-level arousal. Proc Natl Acad Sci U S A 2024; 121:e2312898121. [PMID: 38277436 PMCID: PMC10835062 DOI: 10.1073/pnas.2312898121] [Received: 07/30/2023] [Accepted: 12/04/2023] [Indexed: 01/28/2024]
Abstract
Perceptual decision-making is highly dependent on the momentary arousal state of the brain, which fluctuates over time on a scale of hours, minutes, and even seconds. The textbook relationship between momentary arousal and task performance is captured by an inverted U-shape, as put forward in the Yerkes-Dodson law. This law suggests optimal performance at moderate levels of arousal and impaired performance at low or high arousal levels. However, despite its popularity, the evidence for this relationship in humans is mixed at best. Here, we use pupil-indexed arousal and performance data from various perceptual decision-making tasks to provide converging evidence for the inverted U-shaped relationship between spontaneous arousal fluctuations and performance across different decision types (discrimination, detection) and sensory modalities (visual, auditory). To further understand this relationship, we built a neurobiologically plausible mechanistic model and show that it is possible to reproduce our findings by incorporating two types of interneurons that are both modulated by an arousal signal. The model architecture produces two dynamical regimes under the influence of arousal: one regime in which performance increases with arousal and another regime in which performance decreases with arousal, together forming an inverted U-shaped arousal-performance relationship. We conclude that the inverted U-shaped arousal-performance relationship is a general and robust property of sensory processing. It might be brought about by the influence of arousal on two types of interneurons that together act as a disinhibitory pathway for the neural populations that encode the available sensory evidence used for the decision.
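The inverted-U relationship described above can be illustrated with a toy calculation. This is not the authors' published model; it is a minimal sketch, assuming only that two opposing arousal-driven influences (loosely analogous to the two interneuron types) combine additively, with the facilitating one saturating earlier than the suppressing one. All function names and parameter values here are illustrative choices, not taken from the paper.

```python
# Toy sketch of an inverted-U arousal-performance curve: a facilitating drive
# that rises at low-to-mid arousal minus a suppressing drive that rises at
# higher arousal. Parameters are arbitrary illustrative values.
import math

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def performance(arousal: float,
                gain_up: float = 6.0, gain_down: float = 6.0,
                mid_up: float = 0.3, mid_down: float = 0.7) -> float:
    """Net performance = facilitation - suppression, both arousal-driven.

    Facilitation saturates around mid_up; suppression kicks in around
    mid_down, so the difference peaks at intermediate arousal.
    """
    facilitation = sigmoid(gain_up * (arousal - mid_up))
    suppression = sigmoid(gain_down * (arousal - mid_down))
    return facilitation - suppression

# Performance peaks at mid-level arousal and falls off at both extremes:
levels = [i / 100 for i in range(101)]
best = max(levels, key=performance)  # best is near 0.5 for these parameters
```

With the symmetric parameters chosen here the peak sits at arousal 0.5; shifting `mid_up` or `mid_down` skews the curve, which is one way such a model could accommodate task- or modality-specific differences.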
Affiliation(s)
- Lola Beerendonk
- Research Priority Area Brain and Cognition, University of Amsterdam, Amsterdam 1001 NK, The Netherlands
- Department of Psychology, University of Amsterdam, Amsterdam 1001 NK, The Netherlands
- Jorge F. Mejías
- Research Priority Area Brain and Cognition, University of Amsterdam, Amsterdam 1001 NK, The Netherlands
- Cognitive and Systems Neuroscience, Swammerdam Institute for Life Sciences, University of Amsterdam, Amsterdam 1098 XH, The Netherlands
- Stijn A. Nuiten
- Research Priority Area Brain and Cognition, University of Amsterdam, Amsterdam 1001 NK, The Netherlands
- Department of Psychology, University of Amsterdam, Amsterdam 1001 NK, The Netherlands
- Universitäre Psychiatrische Kliniken Basel, Wilhelm Klein-Strasse 27, Basel 4002, Switzerland
- Jan Willem de Gee
- Research Priority Area Brain and Cognition, University of Amsterdam, Amsterdam 1001 NK, The Netherlands
- Cognitive and Systems Neuroscience, Swammerdam Institute for Life Sciences, University of Amsterdam, Amsterdam 1098 XH, The Netherlands
- Johannes J. Fahrenfort
- Institute for Brain and Behavior Amsterdam, Vrije Universiteit Amsterdam, Amsterdam 1081 HV, The Netherlands
- Department of Applied and Experimental Psychology, Vrije Universiteit Amsterdam, Amsterdam 1081 HV, The Netherlands
- Simon van Gaal
- Research Priority Area Brain and Cognition, University of Amsterdam, Amsterdam 1001 NK, The Netherlands
- Department of Psychology, University of Amsterdam, Amsterdam 1001 NK, The Netherlands
6
Macedo-Lima M, Hamlette LS, Caras ML. Orbitofrontal Cortex Modulates Auditory Cortical Sensitivity and Sound Perception. bioRxiv 2023:2023.12.18.570797. [PMID: 38187685 PMCID: PMC10769262 DOI: 10.1101/2023.12.18.570797] [Indexed: 01/09/2024]
Abstract
Sensory perception is dynamic, quickly adapting to sudden shifts in environmental or behavioral context. Though decades of work have established that these dynamics are mediated by rapid fluctuations in sensory cortical activity, we have a limited understanding of the brain regions and pathways that orchestrate these changes. Neurons in the orbitofrontal cortex (OFC) encode contextual information, and recent data suggest that some of these signals are transmitted to sensory cortices. Whether and how these signals shape sensory encoding and perceptual sensitivity remains uncertain. Here, we asked whether the OFC mediates context-dependent changes in auditory cortical sensitivity and sound perception by monitoring and manipulating OFC activity in freely moving animals under two behavioral contexts: passive sound exposure and engagement in an amplitude modulation (AM) detection task. We found that the majority of OFC neurons, including the specific subset that innervates the auditory cortex, were strongly modulated by task engagement. Pharmacological inactivation of the OFC prevented rapid context-dependent changes in auditory cortical firing and significantly impaired behavioral AM detection. Our findings suggest that contextual information from the OFC mediates rapid plasticity in the auditory cortex and facilitates the perception of behaviorally relevant sounds.
Significance Statement
Sensory perception depends on the context in which stimuli are presented. For example, perception is enhanced when stimuli are informative, such as when they are important to solve a task. Perceptual enhancements result from an increase in the sensitivity of sensory cortical neurons; however, we do not fully understand how such changes are initiated in the brain. Here, we tested the role of the orbitofrontal cortex (OFC) in controlling auditory cortical sensitivity and sound perception. We found that OFC neurons change their activity when animals perform a sound detection task. Inactivating OFC impairs sound detection and prevents task-dependent increases in auditory cortical sensitivity. Our findings suggest that the OFC controls contextual modulations of the auditory cortex and sound perception.
7
Anbuhl KL, Diez Castro M, Lee NA, Lee VS, Sanes DH. Cingulate cortex facilitates auditory perception under challenging listening conditions. bioRxiv 2023:2023.11.10.566668. [PMID: 38014324 PMCID: PMC10680599 DOI: 10.1101/2023.11.10.566668] [Indexed: 11/29/2023]
Abstract
We often exert greater cognitive resources (i.e., listening effort) to understand speech under challenging acoustic conditions. This mechanism can be overwhelmed in those with hearing loss, resulting in cognitive fatigue in adults, and potentially impeding language acquisition in children. However, the neural mechanisms that support listening effort are uncertain. Evidence from human studies suggests that the cingulate cortex is engaged under difficult listening conditions, and may exert top-down modulation of the auditory cortex (AC). Here, we asked whether the gerbil cingulate cortex (Cg) sends anatomical projections to the AC that facilitate perceptual performance. To model challenging listening conditions, we used a sound discrimination task in which stimulus parameters were presented in either 'Easy' or 'Hard' blocks (i.e., long or short stimulus duration, respectively). Gerbils achieved statistically identical psychometric performance in Easy and Hard blocks. Anatomical tracing experiments revealed a strong, descending projection from layer 2/3 of the Cg1 subregion of the cingulate cortex to superficial and deep layers of primary and dorsal AC. To determine whether Cg improves task performance under challenging conditions, we bilaterally infused muscimol to inactivate Cg1, and found that psychometric thresholds were degraded for only Hard blocks. To test whether the Cg-to-AC projection facilitates task performance, we chemogenetically inactivated these inputs and found that performance was only degraded during Hard blocks. Taken together, the results reveal a descending cortical pathway that facilitates perceptual performance during challenging listening conditions.
Significance Statement
Sensory perception often occurs under challenging conditions, such as a noisy background or dim environment, yet stimulus sensitivity can remain unaffected. One hypothesis is that cognitive resources are recruited to the task, thereby facilitating perceptual performance. Here, we identify a top-down cortical circuit, from the cingulate to the auditory cortex in the gerbil, that supports auditory perceptual performance under challenging listening conditions. This pathway is a plausible circuit that supports effortful listening and may be degraded by hearing loss.
8
Schmidt F, Chen Y, Keitel A, Rösch S, Hannemann R, Serman M, Hauswald A, Weisz N. Neural speech tracking shifts from the syllabic to the modulation rate of speech as intelligibility decreases. Psychophysiology 2023; 60:e14362. [PMID: 37350379 PMCID: PMC10909526 DOI: 10.1111/psyp.14362] [Received: 10/13/2022] [Revised: 04/24/2023] [Accepted: 05/10/2023] [Indexed: 06/24/2023]
Abstract
The most prominent acoustic features in speech are intensity modulations, represented by the amplitude envelope of speech. Synchronization of neural activity with these modulations supports speech comprehension. As the acoustic modulation of speech is related to the production of syllables, investigations of neural speech tracking commonly do not distinguish between lower-level acoustic (envelope modulation) and higher-level linguistic (syllable rate) information. Here we manipulated speech intelligibility using noise-vocoded speech and investigated the spectral dynamics of neural speech processing, across two studies at cortical and subcortical levels of the auditory hierarchy, using magnetoencephalography. Overall, cortical regions mostly track the syllable rate, whereas subcortical regions track the acoustic envelope. Furthermore, with less intelligible speech, tracking of the modulation rate becomes more dominant. Our study highlights the importance of distinguishing between envelope modulation and syllable rate and provides novel possibilities to better understand differences between auditory processing and speech/language processing disorders.
Affiliation(s)
- Fabian Schmidt
- Center for Cognitive Neuroscience, University of Salzburg, Salzburg, Austria
- Department of Psychology, University of Salzburg, Salzburg, Austria
- Ya-Ping Chen
- Center for Cognitive Neuroscience, University of Salzburg, Salzburg, Austria
- Department of Psychology, University of Salzburg, Salzburg, Austria
- Anne Keitel
- Psychology, School of Social Sciences, University of Dundee, Dundee, UK
- Sebastian Rösch
- Department of Otorhinolaryngology, Paracelsus Medical University, Salzburg, Austria
- Maja Serman
- Audiological Research Unit, Sivantos GmbH, Erlangen, Germany
- Anne Hauswald
- Center for Cognitive Neuroscience, University of Salzburg, Salzburg, Austria
- Department of Psychology, University of Salzburg, Salzburg, Austria
- Nathan Weisz
- Center for Cognitive Neuroscience, University of Salzburg, Salzburg, Austria
- Department of Psychology, University of Salzburg, Salzburg, Austria
- Neuroscience Institute, Christian Doppler University Hospital, Paracelsus Medical University, Salzburg, Austria
9
Mittelstadt JK, Kanold PO. Orbitofrontal cortex conveys stimulus and task information to the auditory cortex. Curr Biol 2023; 33:4160-4173.e4. [PMID: 37716349 PMCID: PMC10602585 DOI: 10.1016/j.cub.2023.08.059] [Received: 04/27/2023] [Revised: 06/29/2023] [Accepted: 08/21/2023] [Indexed: 09/18/2023]
Abstract
Auditory cortical neurons modify their response profiles in response to numerous external factors. During task performance, changes in primary auditory cortex (A1) responses are thought to be driven by top-down inputs from the orbitofrontal cortex (OFC), which may lead to response modification on a trial-by-trial basis. While OFC neurons respond to auditory stimuli and project to A1, the function of OFC projections to A1 during auditory tasks is unknown. Here, we observed the activity of putative OFC terminals in A1 in mice by using in vivo two-photon calcium imaging of OFC terminals under passive conditions and during a tone detection task. We found that behavioral activity modulates but is not necessary to evoke OFC terminal responses in A1. OFC terminals in A1 form distinct populations that exclusively respond to either the tone, reward, or error. Using tones against a background of white noise, we found that OFC terminal activity was modulated by the signal-to-noise ratio (SNR) in both the passive and active conditions, and thus varied with task difficulty in the active condition. Therefore, OFC projections in A1 are heterogeneous in their modulation of auditory encoding and likely contribute to auditory processing under various auditory conditions.
Affiliation(s)
- Jonah K Mittelstadt
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21205, USA; Solomon H. Snyder Department of Neuroscience, Johns Hopkins University, Baltimore, MD 21205, USA
- Patrick O Kanold
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21205, USA; Solomon H. Snyder Department of Neuroscience, Johns Hopkins University, Baltimore, MD 21205, USA; Kavli Neuroscience Discovery Institute, Johns Hopkins University, Baltimore, MD 21205, USA.
10
Ying R, Hamlette L, Nikoobakht L, Balaji R, Miko N, Caras ML. Organization of orbitofrontal-auditory pathways in the Mongolian gerbil. J Comp Neurol 2023; 531:1459-1481. [PMID: 37477903 PMCID: PMC10529810 DOI: 10.1002/cne.25525] [Received: 04/25/2023] [Revised: 06/11/2023] [Accepted: 06/26/2023] [Indexed: 07/22/2023]
Abstract
Sound perception is highly malleable, rapidly adjusting to the acoustic environment and behavioral demands. This flexibility is the result of ongoing changes in auditory cortical activity driven by fluctuations in attention, arousal, or prior expectations. Recent work suggests that the orbitofrontal cortex (OFC) may mediate some of these rapid changes, but the anatomical connections between the OFC and the auditory system are not well characterized. Here, we used virally mediated fluorescent tracers to map the projection from OFC to the auditory midbrain, thalamus, and cortex in a classic animal model for auditory research, the Mongolian gerbil (Meriones unguiculatus). We observed no connectivity between the OFC and the auditory midbrain, and an extremely sparse connection between the dorsolateral OFC and higher order auditory thalamic regions. In contrast, we observed a robust connection between the ventral and medial subdivisions of the OFC and the auditory cortex, with a clear bias for secondary auditory cortical regions. OFC axon terminals were found in all auditory cortical laminae but were significantly more concentrated in the infragranular layers. Tissue clearing and light-sheet microscopy further revealed that auditory cortical-projecting OFC neurons send extensive axon collaterals throughout the brain, targeting both sensory and non-sensory regions involved in learning, decision-making, and memory. These findings provide a more detailed map of orbitofrontal-auditory connections and shed light on the possible role of the OFC in supporting auditory cognition.
Affiliation(s)
- Rose Ying
- Neuroscience and Cognitive Science Program, University of Maryland, College Park, Maryland, 20742
- Department of Biology, University of Maryland, College Park, Maryland, 20742
- Center for Comparative and Evolutionary Biology of Hearing, University of Maryland, College Park, Maryland, 20742
- Lashaka Hamlette
- Department of Biology, University of Maryland, College Park, Maryland, 20742
- Laudan Nikoobakht
- Department of Biology, University of Maryland, College Park, Maryland, 20742
- Rakshita Balaji
- Department of Biology, University of Maryland, College Park, Maryland, 20742
- Nicole Miko
- Department of Biology, University of Maryland, College Park, Maryland, 20742
- Melissa L. Caras
- Neuroscience and Cognitive Science Program, University of Maryland, College Park, Maryland, 20742
- Department of Biology, University of Maryland, College Park, Maryland, 20742
- Center for Comparative and Evolutionary Biology of Hearing, University of Maryland, College Park, Maryland, 20742
11
Wybo WAM, Tsai MC, Tran VAK, Illing B, Jordan J, Morrison A, Senn W. NMDA-driven dendritic modulation enables multitask representation learning in hierarchical sensory processing pathways. Proc Natl Acad Sci U S A 2023; 120:e2300558120. [PMID: 37523562 PMCID: PMC10410730 DOI: 10.1073/pnas.2300558120] [Received: 01/13/2023] [Accepted: 06/14/2023] [Indexed: 08/02/2023]
Abstract
While sensory representations in the brain depend on context, it remains unclear how such modulations are implemented at the biophysical level, and how processing layers further in the hierarchy can extract useful features for each possible contextual state. Here, we demonstrate that dendritic N-methyl-D-aspartate (NMDA) spikes can, within physiological constraints, implement contextual modulation of feedforward processing. Such neuron-specific modulations exploit prior knowledge, encoded in stable feedforward weights, to achieve transfer learning across contexts. In a network of biophysically realistic neuron models with context-independent feedforward weights, we show that modulatory inputs to dendritic branches can solve linearly nonseparable learning problems with a Hebbian, error-modulated learning rule. We also demonstrate that local prediction of whether representations originate either from different inputs, or from different contextual modulations of the same input, results in representation learning of hierarchical feedforward weights across processing layers that accommodate a multitude of contexts.
Affiliation(s)
- Willem A. M. Wybo
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institute Brain Structure–Function Relationships (INM-10), Jülich Research Center, DE-52428 Jülich, Germany
- Matthias C. Tsai
- Department of Physiology, University of Bern, CH-3012 Bern, Switzerland
- Viet Anh Khoa Tran
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institute Brain Structure–Function Relationships (INM-10), Jülich Research Center, DE-52428 Jülich, Germany
- Department of Computer Science - 3, Faculty 1, RWTH Aachen University, DE-52074 Aachen, Germany
- Bernd Illing
- Laboratory of Computational Neuroscience, École Polytechnique Fédérale de Lausanne, CH-1015 Lausanne, Switzerland
- Jakob Jordan
- Department of Physiology, University of Bern, CH-3012 Bern, Switzerland
- Abigail Morrison
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institute Brain Structure–Function Relationships (INM-10), Jülich Research Center, DE-52428 Jülich, Germany
- Department of Computer Science - 3, Faculty 1, RWTH Aachen University, DE-52074 Aachen, Germany
- Walter Senn
- Department of Physiology, University of Bern, CH-3012 Bern, Switzerland
12
Marmelshtein A, Eckerling A, Hadad B, Ben-Eliyahu S, Nir Y. Sleep-like changes in neural processing emerge during sleep deprivation in early auditory cortex. Curr Biol 2023:S0960-9822(23)00773-X. [PMID: 37385257 DOI: 10.1016/j.cub.2023.06.022] [Received: 04/05/2022] [Revised: 03/30/2023] [Accepted: 06/07/2023] [Indexed: 07/01/2023]
Abstract
Insufficient sleep is commonplace in modern lifestyle and can lead to grave outcomes, yet the changes in neuronal activity accumulating over hours of extended wakefulness remain poorly understood. Specifically, which aspects of cortical processing are affected by sleep deprivation (SD), and whether they also affect early sensory regions, remain unclear. Here, we recorded spiking activity in the rat auditory cortex along with polysomnography while presenting sounds during SD followed by recovery sleep. We found that frequency tuning, onset responses, and spontaneous firing rates were largely unaffected by SD. By contrast, SD decreased entrainment to rapid (≥20 Hz) click trains, increased population synchrony, and increased the prevalence of sleep-like stimulus-induced silent periods, even when ongoing activity was similar. Recovery NREM sleep was associated with similar effects as SD with even greater magnitude, while auditory processing during REM sleep was similar to vigilant wakefulness. Our results show that processes akin to those in NREM sleep invade the activity of cortical circuits during SD, even in the early sensory cortex.
Affiliation(s)
- Amit Marmelshtein
- Sagol School of Neuroscience, Tel Aviv University, Tel Aviv 6997801, Israel; Department of Physiology and Pharmacology, Sackler Faculty of Medicine, Tel Aviv University, Tel Aviv 6997801, Israel
- Anabel Eckerling
- Sagol School of Neuroscience, Tel Aviv University, Tel Aviv 6997801, Israel; School of Psychological Sciences, Tel Aviv University, Tel Aviv 6997801, Israel
- Barak Hadad
- School of Electrical Engineering, Faculty of Engineering, Tel Aviv University, Tel Aviv 6997801, Israel
- Shamgar Ben-Eliyahu
- Sagol School of Neuroscience, Tel Aviv University, Tel Aviv 6997801, Israel; School of Psychological Sciences, Tel Aviv University, Tel Aviv 6997801, Israel
- Yuval Nir
- Sagol School of Neuroscience, Tel Aviv University, Tel Aviv 6997801, Israel; Department of Physiology and Pharmacology, Sackler Faculty of Medicine, Tel Aviv University, Tel Aviv 6997801, Israel; Department of Biomedical Engineering, Faculty of Engineering, Tel Aviv University, Tel Aviv 6997801, Israel; The Sieratzki-Sagol Center for Sleep Medicine, Tel Aviv Sourasky Medical Center, Tel Aviv 6423906, Israel.

13
Bellur A, Thakkar K, Elhilali M. Explicit-memory multiresolution adaptive framework for speech and music separation. EURASIP J Audio Speech Music Process 2023; 2023:20. [PMID: 37181589 PMCID: PMC10169896 DOI: 10.1186/s13636-023-00286-7] [Received: 12/23/2022] [Accepted: 04/21/2023] [Indexed: 05/16/2023]
Abstract
The human auditory system employs a number of principles to facilitate the selection of perceptually separated streams from a complex sound mixture. The brain leverages multi-scale redundant representations of the input and uses memory (or priors) to guide the selection of a target sound from the input mixture. Moreover, feedback mechanisms refine the memory constructs resulting in further improvement of selectivity of a particular sound object amidst dynamic backgrounds. The present study proposes a unified end-to-end computational framework that mimics these principles for sound source separation applied to both speech and music mixtures. While the problems of speech enhancement and music separation have often been tackled separately due to constraints and specificities of each signal domain, the current work posits that common principles for sound source separation are domain-agnostic. In the proposed scheme, parallel and hierarchical convolutional paths map input mixtures onto redundant but distributed higher-dimensional subspaces and utilize the concept of temporal coherence to gate the selection of embeddings belonging to a target stream abstracted in memory. These explicit memories are further refined through self-feedback from incoming observations in order to improve the system's selectivity when faced with unknown backgrounds. The model yields stable outcomes of source separation for both speech and music mixtures and demonstrates benefits of explicit memory as a powerful representation of priors that guide information selection from complex inputs.
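The temporal-coherence selection principle described in this abstract can be illustrated with a minimal, self-contained sketch (a generic illustration, not the paper's model; the envelopes, threshold, and channel layout are invented): channels whose magnitude envelopes correlate over time with a remembered target envelope are kept, and the rest are gated off.

```python
import numpy as np

def coherence_mask(mixture, anchor, threshold=0.5):
    """Gate channels by temporal coherence with an anchor envelope.

    mixture : (n_channels, n_time) magnitude envelopes of the mixture
    anchor  : (n_time,) envelope of the remembered target feature
    Channels whose Pearson correlation with the anchor exceeds
    `threshold` are kept (1.0); the rest are suppressed (0.0).
    """
    a = (anchor - anchor.mean()) / anchor.std()
    z = (mixture - mixture.mean(axis=1, keepdims=True)) / mixture.std(axis=1, keepdims=True)
    corr = (z * a).mean(axis=1)  # per-channel Pearson correlation
    return (corr > threshold).astype(float)

# Demo: two channels follow the target envelope, two follow a distractor
rng = np.random.default_rng(0)
t = np.arange(200)
target = 1 + np.sin(2 * np.pi * t / 50)      # 4 cycles over the window
distractor = 1 + np.sin(2 * np.pi * t / 31)  # incoherent rhythm
mix = np.stack([target, target, distractor, distractor]) + 0.1 * rng.standard_normal((4, 200))
mask = coherence_mask(mix, target)
```

The gating here is binary for clarity; a soft (sigmoid) gate on the correlation would be the smoother analogue of the selection mechanism the abstract describes.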
Affiliation(s)
- Ashwin Bellur
- Electrical and Computer Engineering, Johns Hopkins University, Baltimore, USA
- Karan Thakkar
- Electrical and Computer Engineering, Johns Hopkins University, Baltimore, USA
- Mounya Elhilali
- Electrical and Computer Engineering, Johns Hopkins University, Baltimore, USA

14
Willmore BDB, King AJ. Adaptation in auditory processing. Physiol Rev 2023; 103:1025-1058. [PMID: 36049112 PMCID: PMC9829473 DOI: 10.1152/physrev.00011.2022] [Indexed: 01/21/2023]
Abstract
Adaptation is an essential feature of auditory neurons, which reduces their responses to unchanging and recurring sounds and allows their response properties to be matched to the constantly changing statistics of sounds that reach the ears. As a consequence, processing in the auditory system highlights novel or unpredictable sounds and produces an efficient representation of the vast range of sounds that animals can perceive by continually adjusting the sensitivity and, to a lesser extent, the tuning properties of neurons to the most commonly encountered stimulus values. Together with attentional modulation, adaptation to sound statistics also helps to generate neural representations of sound that are tolerant to background noise and therefore plays a vital role in auditory scene analysis. In this review, we consider the diverse forms of adaptation that are found in the auditory system in terms of the processing levels at which they arise, the underlying neural mechanisms, and their impact on neural coding and perception. We also ask what the dynamics of adaptation, which can occur over multiple timescales, reveal about the statistical properties of the environment. Finally, we examine how adaptation to sound statistics is influenced by learning and experience and changes as a result of aging and hearing loss.
Affiliation(s)
- Ben D. B. Willmore
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
- Andrew J. King
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom

15
Mischler G, Keshishian M, Bickel S, Mehta AD, Mesgarani N. Deep neural networks effectively model neural adaptation to changing background noise and suggest nonlinear noise filtering methods in auditory cortex. Neuroimage 2023; 266:119819. [PMID: 36529203 PMCID: PMC10510744 DOI: 10.1016/j.neuroimage.2022.119819] [Received: 09/13/2022] [Revised: 11/28/2022] [Accepted: 12/15/2022] [Indexed: 12/23/2022]
Abstract
The human auditory system displays a robust capacity to adapt to sudden changes in background noise, allowing for continuous speech comprehension despite changes in background environments. However, despite comprehensive studies characterizing this ability, the computations that underlie this process are not well understood. The first step towards understanding a complex system is to propose a suitable model, but the classical and easily interpreted model for the auditory system, the spectro-temporal receptive field (STRF), cannot match the nonlinear neural dynamics involved in noise adaptation. Here, we utilize a deep neural network (DNN) to model neural adaptation to noise, illustrating its effectiveness at reproducing the complex dynamics at the levels of both individual electrodes and the cortical population. By closely inspecting the model's STRF-like computations over time, we find that the model alters both the gain and shape of its receptive field when adapting to a sudden noise change. We show that the DNN model's gain changes allow it to perform adaptive gain control, while the spectro-temporal change creates noise filtering by altering the inhibitory region of the model's receptive field. Further, we find that models of electrodes in nonprimary auditory cortex also exhibit noise filtering changes in their excitatory regions, suggesting differences in noise filtering mechanisms along the cortical hierarchy. These findings demonstrate the capability of deep neural networks to model complex neural adaptation and offer new hypotheses about the computations the auditory cortex performs to enable noise-robust speech perception in real-world, dynamic environments.
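For contrast with the DNN model discussed above, the classical STRF baseline the abstract mentions can be sketched as ridge regression on time-lagged spectrogram features (a generic illustration on synthetic data, not the authors' code; the dimensions and regularization value are arbitrary):

```python
import numpy as np

def fit_strf(spectrogram, response, n_lags, alpha=1.0):
    """Estimate a spectro-temporal receptive field by ridge regression.

    spectrogram : (n_freq, n_time) stimulus power
    response    : (n_time,) neural response
    Returns an (n_freq, n_lags) linear filter.
    """
    n_freq, n_time = spectrogram.shape
    # Lagged design matrix: row t holds the stimulus at t, t-1, ..., t-n_lags+1
    X = np.zeros((n_time, n_freq * n_lags))
    for lag in range(n_lags):
        X[lag:, lag * n_freq:(lag + 1) * n_freq] = spectrogram[:, :n_time - lag].T
    # Ridge solution: (X'X + alpha*I)^-1 X'y
    XtX = X.T @ X + alpha * np.eye(X.shape[1])
    w = np.linalg.solve(XtX, X.T @ response)
    return w.reshape(n_lags, n_freq).T

# Demo: recover a known ground-truth STRF from a simulated linear response
rng = np.random.default_rng(0)
n_freq, n_lags, n_time = 8, 5, 4000
true_strf = rng.standard_normal((n_freq, n_lags))
stim = rng.standard_normal((n_freq, n_time))
resp = np.zeros(n_time)
for lag in range(n_lags):
    resp[lag:] += true_strf[:, lag] @ stim[:, :n_time - lag]
est = fit_strf(stim, resp, n_lags, alpha=1e-3)
corr = np.corrcoef(true_strf.ravel(), est.ravel())[0, 1]
```

Because the STRF is a single fixed linear filter, it cannot change its gain or inhibitory region after a noise switch; that limitation is exactly what motivates the DNN model in the paper.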
Affiliation(s)
- Gavin Mischler
- Mortimer B. Zuckerman Mind Brain Behavior, Columbia University, New York, United States; Department of Electrical Engineering, Columbia University, New York, United States
- Menoua Keshishian
- Mortimer B. Zuckerman Mind Brain Behavior, Columbia University, New York, United States; Department of Electrical Engineering, Columbia University, New York, United States
- Stephan Bickel
- Hofstra Northwell School of Medicine, Manhasset, New York, United States
- Ashesh D Mehta
- Hofstra Northwell School of Medicine, Manhasset, New York, United States
- Nima Mesgarani
- Mortimer B. Zuckerman Mind Brain Behavior, Columbia University, New York, United States; Department of Electrical Engineering, Columbia University, New York, United States.

16
Plasticity Changes in Central Auditory Systems of School-Age Children Following a Brief Training With a Remote Microphone System. Ear Hear 2023:00003446-990000000-00109. [PMID: 36706057 DOI: 10.1097/aud.0000000000001329] [Indexed: 01/28/2023]
Abstract
OBJECTIVES The objective of this study was to investigate whether a brief speech-in-noise training with a remote microphone (RM) system (favorable listening condition) would contribute to enhanced post-training plasticity changes in the auditory system of school-age children. DESIGN Before training, event-related potentials (ERPs) were recorded from 49 typically developing children, who actively identified two syllables in quiet and in noise (+5 dB signal-to-noise ratio [SNR]). During training, children completed the same syllable identification task as in the pre-training noise condition, but received feedback on their performance. Following random assignment, half of the sample used an RM system during training (experimental group), while the other half did not (control group). That is, during training, children in the experimental group listened to a more favorable speech signal (+15 dB SNR) than children from the control group (+5 dB SNR). ERPs were collected after training at +5 dB SNR to evaluate the effects of training with and without the RM system. Electrical neuroimaging analyses quantified the effects of training in each group on ERP global field power (GFP) and topography, indexing response strength and network changes, respectively. Behavioral speech-perception-in-noise skills of children were also evaluated and compared before and after training. We hypothesized that training with the RM system (experimental group) would lead to greater enhancement of GFP and greater topographical changes post-training than training without the RM system (control group). We also expected greater behavioral improvement on the speech-perception-in-noise task when training with than without the RM system. RESULTS GFP was enhanced after training only in the experimental group. These effects were observed in early time windows corresponding to traditional P1-N1 (100 to 200 msec) and P2-N2 (200 to 400 msec) ERP components. No training effects were observed on response topography. Finally, both groups increased their speech-perception-in-noise skills post-training. CONCLUSIONS Enhanced GFP after training with the RM system indicates plasticity changes in the neural representation of sound resulting from listening to an enriched auditory signal. Further investigation of longer training or auditory experiences with favorable listening conditions is needed to determine whether they yield long-term speech-perception-in-noise benefits.
17
Shilling-Scrivo K, Mittelstadt J, Kanold PO. Decreased Modulation of Population Correlations in Auditory Cortex Is Associated with Decreased Auditory Detection Performance in Old Mice. J Neurosci 2022; 42:9278-9292. [PMID: 36302637 PMCID: PMC9761686 DOI: 10.1523/jneurosci.0955-22.2022] [Received: 05/18/2022] [Revised: 09/17/2022] [Accepted: 10/24/2022] [Indexed: 02/02/2023]
Abstract
Age-related hearing loss (presbycusis) affects one-third of the world's population. One hallmark of presbycusis is difficulty hearing in noisy environments. Presbycusis can be separated into two components: the aging ear and the aging brain. To date, the role of the aging brain in presbycusis is not well understood. Activity in the primary auditory cortex (A1) during a behavioral task reflects a combination of responses representing the acoustic stimuli, attentional gain, and behavioral choice. Disruptions in any of these aspects can lead to decreased auditory processing. To investigate how these distinct components are disrupted in aging, we performed in vivo 2-photon Ca2+ imaging in both male and female mice (Thy1-GCaMP6s × CBA/CaJ mice) that retain peripheral hearing into old age. We imaged A1 neurons of young adult (2-6 months) and old mice (16-24 months) during a tone detection task in broadband noise. While young mice performed well, old mice performed worse at low signal-to-noise ratios. Calcium imaging showed that old animals have increased prestimulus activity, reduced attentional gain, and increased noise correlations. Increased correlations in old animals exist regardless of cell tuning and behavioral outcome, and these correlated networks exist over a much larger portion of cortical space. Neural decoding techniques suggest that this prestimulus activity is predictive of old animals making early responses. Together, our results suggest a model in which old animals have higher and more correlated prestimulus activity and cannot fully suppress this activity, leading to a decreased representation of targets among distracting stimuli. SIGNIFICANCE STATEMENT: Aging inhibits the ability to hear clearly in noisy environments. We show that the aging auditory cortex is unable to fully suppress its responses to background noise. During an auditory behavior, fewer neurons were suppressed in old relative to young animals, leading to higher prestimulus activity and more false alarms. We show that this excess activity additionally increases correlations between neurons, reducing the amount of relevant stimulus information in the auditory cortex. Future work identifying the lost circuits responsible for proper background suppression could provide new targets for therapeutic strategies to preserve auditory processing ability into old age.
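The noise correlations referenced in this abstract are conventionally computed by removing each neuron's mean response per stimulus and correlating the trial-to-trial residuals across neuron pairs. A generic sketch on synthetic data (not the study's analysis code; trial and neuron counts are invented):

```python
import numpy as np

def noise_correlations(responses, stimulus_ids):
    """Mean pairwise noise correlation from a (n_trials, n_neurons) matrix.

    Subtract each neuron's mean response to the presented stimulus,
    then correlate the residuals across neuron pairs.
    """
    resid = responses.astype(float)
    for s in np.unique(stimulus_ids):
        rows = stimulus_ids == s
        resid[rows] -= resid[rows].mean(axis=0)
    c = np.corrcoef(resid.T)               # (n_neurons, n_neurons)
    iu = np.triu_indices_from(c, k=1)      # unique pairs only
    return c[iu].mean()

# Demo: a shared trial-to-trial gain fluctuation produces positive
# noise correlations; independent noise does not.
rng = np.random.default_rng(3)
n_trials, n_neurons = 200, 10
stim = rng.integers(0, 4, n_trials)
shared = rng.standard_normal((n_trials, 1))  # common fluctuation
correlated = stim[:, None] + shared + 0.5 * rng.standard_normal((n_trials, n_neurons))
independent = stim[:, None] + rng.standard_normal((n_trials, n_neurons))
rc_c = noise_correlations(correlated, stim)
rc_i = noise_correlations(independent, stim)
```

The per-stimulus mean subtraction is what distinguishes noise correlations from signal correlations: any correlation driven by shared stimulus tuning is removed before the residuals are compared.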
Affiliation(s)
- Kelson Shilling-Scrivo
- Department of Anatomy and Neurobiology, University of Maryland School of Medicine, Baltimore, Maryland 21230
- Jonah Mittelstadt
- Department of Biology, University of Maryland, College Park, Maryland 20742
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland 20215
- Patrick O Kanold
- Department of Biology, University of Maryland, College Park, Maryland 20742
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland 20215

18
Lage-Castellanos A, De Martino F, Ghose GM, Gulban OF, Moerel M. Selective attention sharpens population receptive fields in human auditory cortex. Cereb Cortex 2022; 33:5395-5408. [PMID: 36336333 PMCID: PMC10152083 DOI: 10.1093/cercor/bhac427] [Received: 07/07/2022] [Revised: 10/03/2022] [Accepted: 10/04/2022] [Indexed: 11/09/2022]
Abstract
Selective attention enables the preferential processing of relevant stimulus aspects. Invasive animal studies have shown that attending to a sound feature rapidly modifies neuronal tuning throughout the auditory cortex. Human neuroimaging studies have reported enhanced auditory cortical responses with selective attention. To date, it remains unclear how the results obtained with functional magnetic resonance imaging (fMRI) in humans relate to the electrophysiological findings in animal models. Here we aim to narrow the gap between animal and human research by combining a selective attention task, similar in design to those used in animal electrophysiology, with high-spatial-resolution ultra-high-field fMRI at 7 Tesla. Specifically, human participants perform a detection task in which the probability of target occurrence varies with sound frequency. Contrary to previous fMRI studies, we show that selective attention resulted in population receptive field sharpening, and consequently reduced responses, at the attended sound frequencies. The difference between our results and those of previous fMRI studies supports the notion that the influence of selective attention on auditory cortex is diverse and may depend on context, stimulus, and task.
Affiliation(s)
- Agustin Lage-Castellanos
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, 6200 MD Maastricht, The Netherlands
- Maastricht Brain Imaging Center (MBIC), 6200 MD Maastricht, The Netherlands
- Department of NeuroInformatics, Cuban Neuroscience Center, Havana City 11600, Cuba
- Federico De Martino
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, 6200 MD Maastricht, The Netherlands
- Maastricht Brain Imaging Center (MBIC), 6200 MD Maastricht, The Netherlands
- Center for Magnetic Resonance Research, Department of Radiology, University of Minnesota, Minneapolis, MN 55455, United States
- Geoffrey M Ghose
- Center for Magnetic Resonance Research, Department of Radiology, University of Minnesota, Minneapolis, MN 55455, United States
- Michelle Moerel
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, 6200 MD Maastricht, The Netherlands
- Maastricht Brain Imaging Center (MBIC), 6200 MD Maastricht, The Netherlands
- Maastricht Centre for Systems Biology, Maastricht University, 6200 MD Maastricht, The Netherlands

19
Lai J, Price CN, Bidelman GM. Brainstem speech encoding is dynamically shaped online by fluctuations in cortical α state. Neuroimage 2022; 263:119627. [PMID: 36122686 PMCID: PMC10017375 DOI: 10.1016/j.neuroimage.2022.119627] [Received: 08/15/2022] [Accepted: 09/12/2022] [Indexed: 11/25/2022]
Abstract
Experimental evidence in animals demonstrates cortical neurons innervate subcortex bilaterally to tune brainstem auditory coding. Yet, the role of the descending (corticofugal) auditory system in modulating earlier sound processing in humans during speech perception remains unclear. Here, we measured EEG activity as listeners performed speech identification tasks in different noise backgrounds designed to tax perceptual and attentional processing. We hypothesized brainstem speech coding might be tied to attention and arousal states (indexed by cortical α power) that actively modulate the interplay of brainstem-cortical signal processing. When speech-evoked brainstem frequency-following responses (FFRs) were categorized according to cortical α states, we found low α FFRs in noise were weaker, correlated positively with behavioral response times, and were more "decodable" via neural classifiers. Our data provide new evidence for online corticofugal interplay in humans and establish that brainstem sensory representations are continuously yoked to (i.e., modulated by) the ebb and flow of cortical states to dynamically update perceptual processing.
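Sorting trials by pre-stimulus cortical α power, as described above, can be sketched generically: estimate per-trial α-band (8-12 Hz) power from the pre-stimulus window, then median-split trials into high- and low-α states. This is a minimal illustration on synthetic data; the sampling rate, band edges, and split rule are assumptions, not the authors' exact pipeline.

```python
import numpy as np

def alpha_power(trials, fs, band=(8.0, 12.0)):
    """Per-trial alpha-band power from a pre-stimulus window (FFT method).

    trials : (n_trials, n_samples) EEG segments
    fs     : sampling rate in Hz
    """
    n = trials.shape[1]
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    spectra = np.abs(np.fft.rfft(trials, axis=1)) ** 2
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return spectra[:, mask].mean(axis=1)

# Demo: trials carrying a strong 10 Hz rhythm score far higher than
# broadband-noise trials, so a median split recovers the two states.
fs, n = 500, 500  # 1 s pre-stimulus window
t = np.arange(n) / fs
rng = np.random.default_rng(1)
high = 5 * np.sin(2 * np.pi * 10 * t) + rng.standard_normal((20, n))
low = rng.standard_normal((20, n))
p_high = alpha_power(high, fs)
p_low = alpha_power(low, fs)
power = np.concatenate([p_high, p_low])
state = power > np.median(power)  # True = high-alpha trial
```

In the study's framework, the FFRs of the two trial groups defined this way would then be averaged and compared; here only the trial-sorting step is sketched.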
Affiliation(s)
- Jesyin Lai
- Institute for Intelligent Systems, University of Memphis, Memphis, TN, USA; School of Communication Sciences and Disorders, University of Memphis, Memphis, TN, USA; Diagnostic Imaging Department, St. Jude Children's Research Hospital, Memphis, TN, USA.
- Caitlin N Price
- Institute for Intelligent Systems, University of Memphis, Memphis, TN, USA; School of Communication Sciences and Disorders, University of Memphis, Memphis, TN, USA; Department of Audiology and Speech Pathology, University of Arkansas for Medical Sciences, Little Rock, AR, USA
- Gavin M Bidelman
- Institute for Intelligent Systems, University of Memphis, Memphis, TN, USA; School of Communication Sciences and Disorders, University of Memphis, Memphis, TN, USA; Department of Speech, Language and Hearing Sciences, Indiana University, 2631 East Discovery Parkway, Bloomington, IN 47408, USA; Program in Neuroscience, Indiana University, 1101 E 10th St, Bloomington, IN 47405, USA.

20
Morrill RJ, Bigelow J, DeKloe J, Hasenstaub AR. Audiovisual task switching rapidly modulates sound encoding in mouse auditory cortex. eLife 2022; 11:e75839. [PMID: 35980027 PMCID: PMC9427107 DOI: 10.7554/elife.75839] [Received: 11/24/2021] [Accepted: 08/17/2022] [Indexed: 11/13/2022]
Abstract
In everyday behavior, sensory systems are in constant competition for attentional resources, but the cellular and circuit-level mechanisms of modality-selective attention remain largely uninvestigated. We conducted translaminar recordings in mouse auditory cortex (AC) during an audiovisual (AV) attention shifting task. Attending to sound elements in an AV stream reduced both pre-stimulus and stimulus-evoked spiking activity, primarily in deep-layer neurons and neurons without spectrotemporal tuning. Despite reduced spiking, stimulus decoder accuracy was preserved, suggesting improved sound encoding efficiency. Similarly, task-irrelevant mapping stimuli during inter-trial intervals evoked fewer spikes without impairing stimulus encoding, indicating that attentional modulation generalized beyond training stimuli. Importantly, spiking reductions predicted trial-to-trial behavioral accuracy during auditory attention, but not visual attention. Together, these findings suggest auditory attention facilitates sound discrimination by filtering sound-irrelevant background activity in AC, and that the deepest cortical layers serve as a hub for integrating extramodal contextual information.
Affiliation(s)
- Ryan J Morrill
- Coleman Memorial Laboratory, University of California, San Francisco, San Francisco, United States
- Neuroscience Graduate Program, University of California, San Francisco, San Francisco, United States
- Department of Otolaryngology–Head and Neck Surgery, University of California, San Francisco, San Francisco, United States
- James Bigelow
- Coleman Memorial Laboratory, University of California, San Francisco, San Francisco, United States
- Department of Otolaryngology–Head and Neck Surgery, University of California, San Francisco, San Francisco, United States
- Jefferson DeKloe
- Coleman Memorial Laboratory, University of California, San Francisco, San Francisco, United States
- Department of Otolaryngology–Head and Neck Surgery, University of California, San Francisco, San Francisco, United States
- Andrea R Hasenstaub
- Coleman Memorial Laboratory, University of California, San Francisco, San Francisco, United States
- Neuroscience Graduate Program, University of California, San Francisco, San Francisco, United States
- Department of Otolaryngology–Head and Neck Surgery, University of California, San Francisco, San Francisco, United States

21
Interaction of bottom-up and top-down neural mechanisms in spatial multi-talker speech perception. Curr Biol 2022; 32:3971-3986.e4. [PMID: 35973430 DOI: 10.1016/j.cub.2022.07.047] [Received: 03/11/2022] [Revised: 06/08/2022] [Accepted: 07/19/2022] [Indexed: 11/20/2022]
Abstract
How the human auditory cortex represents spatially separated simultaneous talkers, and how talkers' locations and voices modulate the neural representations of attended and unattended speech, are unclear. Here, we measured the neural responses from electrodes implanted in neurosurgical patients as they performed single-talker and multi-talker speech perception tasks. We found that spatial separation between talkers caused a preferential encoding of the contralateral speech in Heschl's gyrus (HG), planum temporale (PT), and superior temporal gyrus (STG). Location and spectrotemporal features were encoded in different aspects of the neural response. Specifically, the talker's location changed the mean response level, whereas the talker's spectrotemporal features altered the variation of the response around its baseline. These components were differentially modulated by the attended talker's voice or location, which improved the population decoding of attended speech features. Attentional modulation due to the talker's voice appeared only in auditory areas with longer latencies, but attentional modulation due to location was present throughout. Our results show that spatial multi-talker speech perception relies upon a separable pre-attentive neural representation, which can be further tuned by top-down attention to the location and voice of the talker.
22
Nakanishi M, Nemoto M, Kawai HD. Cortical nicotinic enhancement of tone-evoked heightened activities and subcortical nicotinic enlargement of activated areas in mouse auditory cortex. Neurosci Res 2022; 181:55-65. [DOI: 10.1016/j.neures.2022.04.001] [Received: 11/30/2021] [Revised: 03/19/2022] [Accepted: 04/01/2022] [Indexed: 10/18/2022]
23
Lakunina AA, Menashe N, Jaramillo S. Contributions of Distinct Auditory Cortical Inhibitory Neuron Types to the Detection of Sounds in Background Noise. eNeuro 2022; 9:ENEURO.0264-21.2021. [PMID: 35168950 PMCID: PMC8906447 DOI: 10.1523/eneuro.0264-21.2021] [Received: 06/08/2021] [Revised: 10/17/2021] [Accepted: 12/28/2021] [Indexed: 12/01/2022]
Abstract
The ability to separate background noise from relevant acoustic signals is essential for appropriate sound-driven behavior in natural environments. Examples of this separation are apparent in the auditory system, where neural responses to behaviorally relevant stimuli become increasingly noise invariant along the ascending auditory pathway. However, the mechanisms that underlie this reduction in responses to background noise are not well understood. To address this gap in knowledge, we first evaluated the effects of auditory cortical inactivation on mice of both sexes trained to perform a simple auditory signal-in-noise detection task and found that outputs from the auditory cortex are important for the detection of auditory stimuli in noisy environments. Next, we evaluated the contributions of the two most common cortical inhibitory cell types, parvalbumin-expressing (PV+) and somatostatin-expressing (SOM+) interneurons, to the perception of masked auditory stimuli. We found that inactivation of either PV+ or SOM+ cells resulted in a reduction in the ability of mice to determine the presence of auditory stimuli masked by noise. These results indicate that a disruption of auditory cortical network dynamics by either of these two types of inhibitory cells is sufficient to impair the ability to separate acoustic signals from noise.
Affiliation(s)
- Anna A Lakunina
- Institute of Neuroscience and Department of Biology, University of Oregon, Eugene, Oregon 97403
- Nadav Menashe
- Institute of Neuroscience and Department of Biology, University of Oregon, Eugene, Oregon 97403
- Santiago Jaramillo
- Institute of Neuroscience and Department of Biology, University of Oregon, Eugene, Oregon 97403

24
Beach SD, Lim SJ, Cardenas-Iniguez C, Eddy MD, Gabrieli JDE, Perrachione TK. Electrophysiological correlates of perceptual prediction error are attenuated in dyslexia. Neuropsychologia 2022; 165:108091. [PMID: 34801517 PMCID: PMC8807066 DOI: 10.1016/j.neuropsychologia.2021.108091] [Received: 06/21/2021] [Revised: 10/09/2021] [Accepted: 11/17/2021] [Indexed: 01/30/2023]
Abstract
A perceptual adaptation deficit often accompanies reading difficulty in dyslexia, manifesting in poor perceptual learning of consistent stimuli and reduced neurophysiological adaptation to stimulus repetition. However, it is not known how adaptation deficits relate to differences in feedforward or feedback processes in the brain. Here we used electroencephalography (EEG) to interrogate the feedforward and feedback contributions to neural adaptation as adults with and without dyslexia viewed pairs of faces and words in a paradigm that manipulated whether there was a high probability of stimulus repetition versus a high probability of stimulus change. We measured three neural dependent variables: expectation (the difference between prestimulus EEG power with and without the expectation of stimulus repetition), feedforward repetition (the difference between event-related potentials (ERPs) evoked by an expected change and an unexpected repetition), and feedback-mediated prediction error (the difference between ERPs evoked by an unexpected change and an expected repetition). Expectation significantly modulated prestimulus theta- and alpha-band EEG in both groups. Unexpected repetitions of words, but not faces, also led to significant feedforward repetition effects in the ERPs of both groups. However, neural prediction error when an unexpected change occurred instead of an expected repetition was significantly weaker in the dyslexia group than in the control group for both faces and words. These results suggest that the neural and perceptual adaptation deficits observed in dyslexia reflect a failure to effectively integrate perceptual predictions with feedforward sensory processing. In addition to reducing perceptual efficiency, the attenuation of neural prediction error signals would also be deleterious to the wide range of perceptual and procedural learning abilities that are critical for developing accurate and fluent reading skills.
Collapse
Affiliation(s)
- Sara D. Beach
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, MA 02139 U.S.A.,Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, MA 02139 U.S.A.,Program in Speech and Hearing Bioscience and Technology, Harvard University, 260 Longwood Avenue, Boston, MA 02115 U.S.A
| | - Sung-Joo Lim
- Department of Speech, Language, and Hearing Sciences, Boston University, 635 Commonwealth Avenue, Boston, MA 02215 U.S.A
| | - Carlos Cardenas-Iniguez
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, MA 02139 U.S.A
| | - Marianna D. Eddy
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, MA 02139 U.S.A
| | - John D. E. Gabrieli
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, MA 02139, USA; Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, MA 02139, USA
| | - Tyler K. Perrachione
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, MA 02139, USA; Department of Speech, Language, and Hearing Sciences, Boston University, 635 Commonwealth Avenue, Boston, MA 02215, USA; Correspondence: Tyler K. Perrachione, Ph.D., Department of Speech, Language, and Hearing Sciences, Boston University, 635 Commonwealth Ave., Boston, MA 02215, Phone: +1.617.358.7410
| |
Collapse
|
25
|
Tian X, Liu Y, Guo Z, Cai J, Tang J, Chen F, Zhang H. Cerebral Representation of Sound Localization Using Functional Near-Infrared Spectroscopy. Front Neurosci 2022; 15:739706. [PMID: 34970110 PMCID: PMC8712652 DOI: 10.3389/fnins.2021.739706] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/11/2021] [Accepted: 11/09/2021] [Indexed: 11/30/2022] Open
Abstract
Sound localization is an essential part of auditory processing. However, the cortical representation of identifying the direction of sound sources presented in the sound field using functional near-infrared spectroscopy (fNIRS) is currently unknown. Therefore, in this study, we used fNIRS to investigate the cerebral representation of different sound sources. Twenty-five normal-hearing subjects (aged 26 ± 2.7 years; 11 male, 14 female) were included and actively took part in a block design task. The test setup for sound localization was composed of a seven-speaker array spanning a horizontal arc of 180° in front of the participants. Pink noise bursts with two intensity levels (48 dB/58 dB) were randomly applied via five loudspeakers (–90°/–30°/0°/+30°/+90°). Sound localization task performances were collected, and simultaneous signals from auditory processing cortical fields were recorded for analysis using a support vector machine (SVM). The results showed a classification accuracy of 73.60, 75.60, and 77.40% on average at –90°/0°, 0°/+90°, and –90°/+90° with high intensity, and 70.60, 73.60, and 78.60% with low intensity. An increase in oxyhemoglobin was observed in the bilateral non-primary auditory cortex (AC) and dorsolateral prefrontal cortex (dlPFC). In conclusion, the oxyhemoglobin (oxy-Hb) response showed different neural activity patterns between the lateral and front sources in the AC and dlPFC. Our results may serve as a basic contribution for further research on the use of fNIRS in spatial auditory studies.
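The decoding step (classifying source direction from cortical oxy-Hb signals with an SVM) can be sketched in a few lines. Everything below is synthetic and illustrative; the feature counts, linear kernel, and cross-validation scheme are our assumptions, not details from the paper:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-in for block-averaged oxy-Hb features:
# 50 trials x 20 channels per source location (-90 deg vs. +90 deg),
# with a small mean shift standing in for a decodable spatial signal.
left = rng.normal(0.0, 1.0, size=(50, 20))
right = rng.normal(0.5, 1.0, size=(50, 20))
X = np.vstack([left, right])
y = np.array([0] * 50 + [1] * 50)

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
scores = cross_val_score(clf, X, y, cv=5)  # 5-fold cross-validated accuracy
accuracy = scores.mean()
```

Reporting the cross-validated mean (rather than training accuracy) is what makes the 70–78% figures above comparable to chance level (50% for a pairwise contrast).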
Collapse
Affiliation(s)
- Xuexin Tian
- Department of Otolaryngology Head & Neck Surgery, Zhujiang Hospital, Southern Medical University, Guangzhou, China
| | - Yimeng Liu
- Department of Otolaryngology Head & Neck Surgery, Zhujiang Hospital, Southern Medical University, Guangzhou, China
| | - Zengzhi Guo
- Department of Electrical and Electronic Engineering, Southern University of Science and Technology, Shenzhen, China
| | - Jieqing Cai
- Department of Otolaryngology Head & Neck Surgery, Zhujiang Hospital, Southern Medical University, Guangzhou, China
| | - Jie Tang
- Department of Otolaryngology Head & Neck Surgery, Zhujiang Hospital, Southern Medical University, Guangzhou, China; Department of Physiology, School of Basic Medical Sciences, Southern Medical University, Guangzhou, China; Hearing Research Center, Southern Medical University, Guangzhou, China; Key Laboratory of Mental Health of the Ministry of Education, Southern Medical University, Guangzhou, China
| | - Fei Chen
- Department of Electrical and Electronic Engineering, Southern University of Science and Technology, Shenzhen, China
| | - Hongzheng Zhang
- Department of Otolaryngology Head & Neck Surgery, Zhujiang Hospital, Southern Medical University, Guangzhou, China; Hearing Research Center, Southern Medical University, Guangzhou, China
| |
Collapse
|
26
|
Veugen LCE, van Opstal AJ, van Wanrooij MM. Reaction Time Sensitivity to Spectrotemporal Modulations of Sound. Trends Hear 2022; 26:23312165221127589. [PMID: 36172759 PMCID: PMC9523861 DOI: 10.1177/23312165221127589] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
We tested whether sensitivity to acoustic spectrotemporal modulations can be observed from reaction times for normal-hearing and impaired-hearing conditions. In a manual reaction-time task, normal-hearing listeners had to detect the onset of a ripple (with a density between 0 and 8 cycles/octave and a fixed modulation depth of 50%) that moved up or down the log-frequency axis at constant velocity (between 0 and 64 Hz) in otherwise unmodulated broadband white noise. Spectral and temporal modulations elicited band-pass filtered sensitivity characteristics, with the fastest detection rates around 1 cycle/octave and 32 Hz for normal-hearing conditions. These results closely resemble data from other studies that typically used the modulation-depth threshold as a sensitivity criterion. To simulate hearing impairment, stimuli were processed with a 6-channel cochlear-implant vocoder, and a hearing-aid simulation that introduced separate spectral smearing and low-pass filtering. Reaction times were always much slower compared to normal hearing, especially for the highest spectral densities. Binaural performance was predicted well by the benchmark race model of binaural independence, which models statistical facilitation of independent monaural channels. For the impaired-hearing simulations this implied a “best-of-both-worlds” principle in which the listeners relied on the hearing-aid ear to detect spectral modulations, and on the cochlear-implant ear for temporal-modulation detection. Although singular-value decomposition indicated that the joint spectrotemporal sensitivity matrix could be largely reconstructed from independent temporal and spectral sensitivity functions, in line with time-spectrum separability, a substantial inseparable spectral-temporal interaction was present in all hearing conditions. These results suggest that the reaction-time task yields a valid and effective objective measure of acoustic spectrotemporal-modulation sensitivity.
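The separability test mentioned at the end has a compact formulation: a fully separable spectrotemporal sensitivity matrix is the outer product of a spectral and a temporal sensitivity function, so its first singular value captures all of the variance. A sketch (the example vectors are invented):

```python
import numpy as np

def separability_index(M):
    """Fraction of variance captured by the first singular vector pair
    of a spectrotemporal sensitivity matrix (1.0 = fully separable)."""
    s = np.linalg.svd(M, compute_uv=False)
    return s[0] ** 2 / np.sum(s ** 2)

spectral = np.array([0.2, 1.0, 0.4])       # sensitivity vs. ripple density
temporal = np.array([0.3, 1.0, 0.8, 0.2])  # sensitivity vs. ripple velocity
sep_outer = separability_index(np.outer(spectral, temporal))  # separable case

# A diagonal matrix is maximally inseparable: all singular values are equal,
# so the first pair explains only 1/3 of the variance for a 3 x 3 matrix.
sep_diag = separability_index(np.eye(3))
```

An index well below 1.0 on measured data is the kind of "substantial inseparable spectral-temporal interaction" the abstract reports.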
Collapse
Affiliation(s)
- Lidwien C E Veugen
- Department of Biophysics, Donders Institute for Brain, Cognition and Behavior, Radboud University, Nijmegen, the Netherlands
| | - A John van Opstal
- Department of Biophysics, Donders Institute for Brain, Cognition and Behavior, Radboud University, Nijmegen, the Netherlands
| | - Marc M van Wanrooij
- Department of Biophysics, Donders Institute for Brain, Cognition and Behavior, Radboud University, Nijmegen, the Netherlands
| |
Collapse
|
27
|
Task-induced modulations of neuronal activity along the auditory pathway. Cell Rep 2021; 37:110115. [PMID: 34910908 DOI: 10.1016/j.celrep.2021.110115] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/18/2020] [Revised: 01/29/2021] [Accepted: 11/19/2021] [Indexed: 11/23/2022] Open
Abstract
Sensory processing varies depending on behavioral context. Here, we ask how task engagement modulates neurons in the auditory system. We train mice in a simple tone-detection task and compare their neuronal activity during passive hearing and active listening. Electrophysiological extracellular recordings in the inferior colliculus, medial geniculate body, primary auditory cortex, and anterior auditory field reveal widespread modulations across all regions and cortical layers and in both putative regular- and fast-spiking cortical neurons. Clustering analysis unveils ten distinct modulation patterns that can either enhance or suppress neuronal activity. Task engagement changes the tone-onset response in most neurons. Such modulations first emerge in subcortical areas, ruling out cortical feedback as the only mechanism underlying subcortical modulations. Half the neurons additionally display late modulations associated with licking, arousal, or reward. Our results reveal the presence of functionally distinct subclasses of neurons, differentially sensitive to specific task-related variables but anatomically distributed along the auditory pathway.
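The clustering step can be illustrated by grouping per-neuron "active minus passive" response profiles. The two-cluster synthetic data and the use of k-means below are our illustrative choices; they do not reproduce the paper's clustering method or its ten modulation patterns:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

# Synthetic modulation profiles (active minus passive firing rate, 40 bins):
# an onset-enhanced group and an onset-suppressed group, 30 neurons each.
t = np.linspace(0.0, 1.0, 40)
onset = np.exp(-((t - 0.2) ** 2) / 0.005)  # transient bump after tone onset
enhanced = onset + rng.normal(0.0, 0.1, (30, 40))
suppressed = -onset + rng.normal(0.0, 0.1, (30, 40))
profiles = np.vstack([enhanced, suppressed])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(profiles)
```

In practice the number of clusters would be chosen from the data (e.g. by a stability or silhouette criterion) rather than fixed in advance as here.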
Collapse
|
28
|
AIM: A network model of attention in auditory cortex. PLoS Comput Biol 2021; 17:e1009356. [PMID: 34449761 PMCID: PMC8462696 DOI: 10.1371/journal.pcbi.1009356] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/22/2020] [Revised: 09/24/2021] [Accepted: 08/18/2021] [Indexed: 11/19/2022] Open
Abstract
Attentional modulation of cortical networks is critical for the cognitive flexibility required to process complex scenes. Current theoretical frameworks for attention are based almost exclusively on studies in visual cortex, where attentional effects are typically modest and excitatory. In contrast, attentional effects in auditory cortex can be large and suppressive. A theoretical framework for explaining attentional effects in auditory cortex is lacking, preventing a broader understanding of cortical mechanisms underlying attention. Here, we present a cortical network model of attention in primary auditory cortex (A1). A key mechanism in our network is attentional inhibitory modulation (AIM) of cortical inhibitory neurons. In this mechanism, top-down inhibitory neurons disinhibit bottom-up cortical circuits, a prominent circuit motif observed in sensory cortex. Our results reveal that the same underlying mechanisms in the AIM network can explain diverse attentional effects on both spatial and frequency tuning in A1. We find that a dominant effect of disinhibition on cortical tuning is suppressive, consistent with experimental observations. Functionally, the AIM network may play a key role in solving the cocktail party problem. We demonstrate how attention can guide the AIM network to monitor an acoustic scene, select a specific target, or switch to a different target, providing flexible outputs for solving the cocktail party problem. Selective attention plays a key role in how we navigate our everyday lives. For example, at a cocktail party, we can attend to a friend’s speech amidst other speakers, music, and background noise. In stark contrast, hundreds of millions of people with hearing impairment and other disorders find such environments overwhelming and debilitating. Understanding the mechanisms underlying selective attention may lead to breakthroughs in improving the quality of life for those negatively affected.
Here, we propose a mechanistic network model of attention in primary auditory cortex based on attentional inhibitory modulation (AIM). In the AIM model, attention targets specific cortical inhibitory neurons, which then modulate local cortical circuits to emphasize a particular feature of sounds and suppress competing features. We show that the AIM model can account for experimental observations across different species and stimulus domains. We also demonstrate that the same mechanisms can enable listeners to flexibly switch between attending to specific target sounds and monitoring the environment in complex acoustic scenes, such as a cocktail party. The AIM network provides a theoretical framework which can work in tandem with new experiments to help unravel cortical circuits underlying attention.
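The disinhibition motif at the heart of AIM can be caricatured with a three-unit rate model: a top-down inhibitory unit silences a local interneuron, releasing the excitatory unit that the interneuron inhibits. All weights, drives, and time constants below are illustrative stand-ins, not parameters of the published model:

```python
def relu(x):
    """Rectification: firing rates cannot go negative."""
    return max(x, 0.0)

def steady_state(sensory_drive, attention, steps=400, dt=0.1):
    """Euler-integrate a minimal AIM-style circuit to steady state:
    top-down unit (TD) inhibits interneuron (I), which inhibits
    the excitatory unit (E). All units have unit time constants."""
    E = I = TD = 0.0
    for _ in range(steps):
        TD += dt * (-TD + relu(attention))
        I += dt * (-I + relu(1.0 - 2.0 * TD))           # tonic drive, inhibited by TD
        E += dt * (-E + relu(sensory_drive - 1.5 * I))  # disinhibited when I drops
    return E

passive = steady_state(sensory_drive=2.0, attention=0.0)   # interneuron active
attended = steady_state(sensory_drive=2.0, attention=1.0)  # interneuron silenced
```

In this toy parameterization, attention silences the interneuron and the excitatory unit's rate rises toward the full sensory drive; the suppressive effects described in the abstract arise when the released inhibition instead lands on competing feature channels.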
Collapse
|
29
|
Homma NY, Bajo VM. Lemniscal Corticothalamic Feedback in Auditory Scene Analysis. Front Neurosci 2021; 15:723893. [PMID: 34489635 PMCID: PMC8417129 DOI: 10.3389/fnins.2021.723893] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/11/2021] [Accepted: 07/30/2021] [Indexed: 12/15/2022] Open
Abstract
Sound information is transmitted from the ear to central auditory stations of the brain via several nuclei. In addition to these ascending pathways, there are descending projections that can influence information processing at each of these nuclei. A major descending pathway in the auditory system is the feedback projection from layer VI of the primary auditory cortex (A1) to the ventral division of the medial geniculate body (MGBv) in the thalamus. The corticothalamic axons have small glutamatergic terminals that can modulate thalamic processing and thalamocortical information transmission. Corticothalamic neurons also provide input to GABAergic neurons of the thalamic reticular nucleus (TRN), which receives collaterals from the ascending thalamic axons. The balance of corticothalamic and TRN inputs has been shown to refine frequency tuning, firing patterns, and gating of MGBv neurons. Therefore, the thalamus is not merely a relay stage in the chain of auditory nuclei but participates in complex aspects of sound processing that include top-down modulations. In this review, we aim (i) to examine how lemniscal corticothalamic feedback modulates responses in MGBv neurons, and (ii) to explore how this feedback contributes to auditory scene analysis, particularly to frequency and harmonic perception. Finally, we discuss potential implications of the role of corticothalamic feedback in music and speech perception, where precise spectral and temporal processing is essential.
Collapse
Affiliation(s)
- Natsumi Y. Homma
- Center for Integrative Neuroscience, University of California, San Francisco, San Francisco, CA, United States
- Coleman Memorial Laboratory, Department of Otolaryngology – Head and Neck Surgery, University of California, San Francisco, San Francisco, CA, United States
| | - Victoria M. Bajo
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
| |
Collapse
|
30
|
Clayton KK, Asokan MM, Watanabe Y, Hancock KE, Polley DB. Behavioral Approaches to Study Top-Down Influences on Active Listening. Front Neurosci 2021; 15:666627. [PMID: 34305516 PMCID: PMC8299106 DOI: 10.3389/fnins.2021.666627] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/10/2021] [Accepted: 06/09/2021] [Indexed: 11/21/2022] Open
Abstract
The massive network of descending corticofugal projections has long been recognized by anatomists, but the functional contributions of these projections to sound processing and auditory-guided behaviors remain a mystery. Most efforts to characterize the auditory corticofugal system have been inductive, wherein function is inferred from a few studies employing a wide range of methods to manipulate varying limbs of the descending system in a variety of species and preparations. An alternative approach, which we focus on here, is to first establish auditory-guided behaviors that reflect the contribution of top-down influences on auditory perception. To this end, we postulate that auditory corticofugal systems may contribute to active listening behaviors in which the timing of bottom-up sound cues can be predicted from top-down signals arising from cross-modal cues, temporal integration, or self-initiated movements. Here, we describe a behavioral framework for investigating how auditory perceptual performance is enhanced when subjects can anticipate the timing of upcoming target sounds. Our first paradigm, studied both in human subjects and mice, reports species-specific differences in visually cued expectation of sound onset in a signal-in-noise detection task. A second paradigm performed in mice reveals the benefits of temporal regularity as a perceptual grouping cue when detecting repeating target tones in complex background noise. A final behavioral approach demonstrates significant improvements in frequency discrimination threshold and perceptual sensitivity when auditory targets are presented at a predictable temporal interval following motor self-initiation of the trial.
Collectively, these three behavioral approaches identify paradigms to study top-down influences on sound perception that are amenable to head-fixed preparations in genetically tractable animals, where it is possible to monitor and manipulate particular nodes of the descending auditory pathway with unparalleled precision.
Collapse
Affiliation(s)
- Kameron K. Clayton
- Eaton-Peabody Laboratories, Massachusetts Eye and Ear, Boston, MA, United States
| | - Meenakshi M. Asokan
- Eaton-Peabody Laboratories, Massachusetts Eye and Ear, Boston, MA, United States
| | - Yurika Watanabe
- Eaton-Peabody Laboratories, Massachusetts Eye and Ear, Boston, MA, United States
| | - Kenneth E. Hancock
- Eaton-Peabody Laboratories, Massachusetts Eye and Ear, Boston, MA, United States
- Department of Otolaryngology – Head and Neck Surgery, Harvard Medical School, Boston, MA, United States
| | - Daniel B. Polley
- Eaton-Peabody Laboratories, Massachusetts Eye and Ear, Boston, MA, United States
- Department of Otolaryngology – Head and Neck Surgery, Harvard Medical School, Boston, MA, United States
| |
Collapse
|
31
|
Neuronal figure-ground responses in primate primary auditory cortex. Cell Rep 2021; 35:109242. [PMID: 34133935 PMCID: PMC8220257 DOI: 10.1016/j.celrep.2021.109242] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/23/2020] [Revised: 12/09/2020] [Accepted: 05/20/2021] [Indexed: 11/22/2022] Open
Abstract
Figure-ground segregation, the brain’s ability to group related features into stable perceptual entities, is crucial for auditory perception in noisy environments. The neuronal mechanisms for this process are poorly understood in the auditory system. Here, we report figure-ground modulation of multi-unit activity (MUA) in the primary and non-primary auditory cortex of rhesus macaques. Across both regions, MUA increases upon presentation of auditory figures, which consist of coherent chord sequences. We show increased activity even in the absence of any perceptual decision, suggesting that neural mechanisms for perceptual grouping are, to some extent, independent of behavioral demands. Furthermore, we demonstrate differences in figure encoding between more anterior and more posterior regions; perceptual saliency is represented in anterior cortical fields only. Our results suggest an encoding of auditory figures from the earliest cortical stages by a rate code.
Highlights:
- Neuronal figure-ground modulation in primary auditory cortex
- A rate code is used to signal the presence of auditory figures
- Anteriorly located recording sites encode perceptual saliency
- Figure-ground modulation is present without perceptual detection
Collapse
|
32
|
Zhang M, Riecke L, Bonte M. Neurophysiological tracking of speech-structure learning in typical and dyslexic readers. Neuropsychologia 2021; 158:107889. [PMID: 33991561 DOI: 10.1016/j.neuropsychologia.2021.107889] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/16/2020] [Revised: 05/03/2021] [Accepted: 05/10/2021] [Indexed: 10/21/2022]
Abstract
Statistical learning, or the ability to extract statistical regularities from the sensory environment, plays a critical role in language acquisition and reading development. Here we employed electroencephalography (EEG) with frequency-tagging measures to track the temporal evolution of speech-structure learning in individuals with reading difficulties due to developmental dyslexia and in typical readers. We measured EEG while participants listened to (a) a structured stream of repeated tri-syllabic pseudowords, (b) a random stream of the same isochronous syllables, and (c) a series of tri-syllabic real Dutch words. Participants' behavioral learning outcome (pseudoword recognition) was measured after training. We found that syllable-rate tracking was comparable between the two groups and stable across both the random and structured streams of syllables. More importantly, we observed a gradual emergence of the tracking of tri-syllabic pseudoword structures in both groups. Compared to the typical readers, however, in the dyslexic readers this implicit speech structure learning seemed to build up at a slower pace. A brain-behavioral correlation analysis showed that slower learners (i.e., participants who were slower in establishing the neural tracking of pseudowords) were less skilled in phonological awareness. Moreover, those who showed stronger neural tracking of real words tended to be less fluent in the visual-verbal conversion of linguistic symbols. Taken together, our study provides an online neurophysiological approach to track the progression of implicit learning processes and gives insights into the learning difficulties associated with dyslexia from a dynamic perspective.
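Frequency tagging of this kind looks for spectral peaks at the syllable rate and at the tri-syllabic pseudoword rate (one third of it). A toy sketch of the measurement; the 4 Hz syllable rate, signal durations, and amplitudes are hypothetical, not the study's parameters:

```python
import numpy as np

fs, dur = 250.0, 30.0        # sampling rate (Hz) and duration (s), hypothetical
t = np.arange(0.0, dur, 1.0 / fs)
syll_rate = 4.0              # hypothetical syllable presentation rate
word_rate = syll_rate / 3.0  # tri-syllabic pseudoword rate

rng = np.random.default_rng(2)
eeg = (1.0 * np.sin(2 * np.pi * syll_rate * t)    # syllable-rate tracking
       + 0.5 * np.sin(2 * np.pi * word_rate * t)  # word-rate tracking (learning)
       + rng.normal(0.0, 1.0, t.size))            # background noise

amp = np.abs(np.fft.rfft(eeg)) / t.size
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)

def peak_snr(f, guard=0.1, half_bw=1.0):
    """Amplitude at frequency f relative to the mean of neighboring bins."""
    target = np.argmin(np.abs(freqs - f))
    neighbors = (np.abs(freqs - f) > guard) & (np.abs(freqs - f) < half_bw)
    return amp[target] / amp[neighbors].mean()
```

The gradual emergence of learning reported here corresponds to the word-rate peak growing over successive analysis windows while the syllable-rate peak stays flat.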
Collapse
Affiliation(s)
- Manli Zhang
- Maastricht Brain Imaging Center, Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, the Netherlands.
| | - Lars Riecke
- Maastricht Brain Imaging Center, Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, the Netherlands
| | - Milene Bonte
- Maastricht Brain Imaging Center, Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, the Netherlands
| |
Collapse
|
33
|
Obleser J, Kreitewolf J, Vielhauer R, Lindner F, David C, Oster H, Tune S. Circadian fluctuations in glucocorticoid level predict perceptual discrimination sensitivity. iScience 2021; 24:102345. [PMID: 33870139 PMCID: PMC8047178 DOI: 10.1016/j.isci.2021.102345] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/26/2020] [Revised: 03/01/2021] [Accepted: 03/18/2021] [Indexed: 01/17/2023] Open
Abstract
Slow neurobiological rhythms, such as the circadian secretion of glucocorticoid (GC) hormones, modulate a variety of body functions. Whether and how endocrine fluctuations also exert an influence on perceptual abilities is largely uncharted. Here, we show that phasic increases in GC availability prove beneficial to auditory discrimination. In an age-varying sample of N = 68 healthy human participants, we characterize the covariation of saliva cortisol with perceptual sensitivity in an auditory pitch discrimination task at five time points across the sleep-wake cycle. First, momentary saliva cortisol levels were captured well by the time relative to wake-up and overall sleep duration. Second, within individuals, higher cortisol levels just prior to behavioral testing predicted better pitch discrimination ability, expressed as a steepened psychometric curve. This effect of GCs held under a set of statistical controls. Our results pave the way for more in-depth studies on neuroendocrinological determinants of sensory encoding and perception.
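The "steepened psychometric curve" refers to the slope parameter of a fitted psychometric function. A sketch of such a fit; the logistic form, the 0.5 guessing rate (as in a two-alternative task), and the data points are our illustrative assumptions, not the study's model or data:

```python
import numpy as np
from scipy.optimize import curve_fit

def psychometric(x, threshold, slope):
    """Logistic psychometric function rising from 0.5 (chance) to 1.0;
    a higher `slope` means sharper discrimination around `threshold`."""
    return 0.5 + 0.5 / (1.0 + np.exp(-slope * (x - threshold)))

# Hypothetical proportion correct vs. pitch difference (semitones)
x = np.array([0.05, 0.1, 0.2, 0.4, 0.8, 1.6])
p = np.array([0.52, 0.55, 0.68, 0.85, 0.96, 0.99])

(threshold, slope), _ = curve_fit(psychometric, x, p, p0=[0.3, 5.0])
```

Comparing `slope` between high- and low-cortisol sessions within a participant would express the within-person effect reported here.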
Collapse
Affiliation(s)
- Jonas Obleser
- Department of Psychology, University of Lübeck, 23562 Lübeck, Germany
- Center for Brain, Behavior, and Metabolism, University of Lübeck, 23562 Lübeck, Germany
| | - Jens Kreitewolf
- Department of Psychology, University of Lübeck, 23562 Lübeck, Germany
- Department of Psychology, McGill University, Montréal, QC, Canada
- Department of Mathematics and Statistics, McGill University, Montréal, QC, Canada
| | - Ricarda Vielhauer
- Department of Psychology, University of Lübeck, 23562 Lübeck, Germany
| | - Fanny Lindner
- Department of Psychology, University of Lübeck, 23562 Lübeck, Germany
| | - Carolin David
- Department of Psychology, University of Lübeck, 23562 Lübeck, Germany
| | - Henrik Oster
- Institute of Neurobiology, University of Lübeck, 23562 Lübeck, Germany
- Center for Brain, Behavior, and Metabolism, University of Lübeck, 23562 Lübeck, Germany
| | - Sarah Tune
- Department of Psychology, University of Lübeck, 23562 Lübeck, Germany
- Center for Brain, Behavior, and Metabolism, University of Lübeck, 23562 Lübeck, Germany
| |
Collapse
|
34
|
Price CN, Bidelman GM. Attention reinforces human corticofugal system to aid speech perception in noise. Neuroimage 2021; 235:118014. [PMID: 33794356 PMCID: PMC8274701 DOI: 10.1016/j.neuroimage.2021.118014] [Citation(s) in RCA: 23] [Impact Index Per Article: 7.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/24/2020] [Revised: 03/09/2021] [Accepted: 03/25/2021] [Indexed: 12/13/2022] Open
Abstract
Perceiving speech-in-noise (SIN) demands precise neural coding between brainstem and cortical levels of the hearing system. Attentional processes can then select and prioritize task-relevant cues over competing background noise for successful speech perception. In animal models, brainstem-cortical interplay is achieved via descending corticofugal projections from cortex that shape midbrain responses to behaviorally-relevant sounds. Attentional engagement of corticofugal feedback may assist SIN understanding but has never been confirmed and remains highly controversial in humans. To resolve these issues, we recorded source-level, anatomically constrained brainstem frequency-following responses (FFRs) and cortical event-related potentials (ERPs) to speech via high-density EEG while listeners performed rapid SIN identification tasks. We varied attention with active vs. passive listening scenarios whereas task difficulty was manipulated with additive noise interference. Active listening (but not arousal-control tasks) exaggerated both ERPs and FFRs, confirming attentional gain extends to lower subcortical levels of speech processing. We used functional connectivity to measure the directed strength of coupling between levels and characterize "bottom-up" vs. "top-down" (corticofugal) signaling within the auditory brainstem-cortical pathway. While attention strengthened connectivity bidirectionally, corticofugal transmission disengaged under passive (but not active) SIN listening. Our findings (i) show attention enhances the brain's transcription of speech even prior to cortex and (ii) establish a direct role of the human corticofugal feedback system as an aid to cocktail party speech perception.
Collapse
Affiliation(s)
- Caitlin N Price
- Institute for Intelligent Systems, University of Memphis, Memphis, TN, USA; School of Communication Sciences and Disorders, University of Memphis, 4055 North Park Loop, Memphis, TN 38152, USA.
| | - Gavin M Bidelman
- Institute for Intelligent Systems, University of Memphis, Memphis, TN, USA; School of Communication Sciences and Disorders, University of Memphis, 4055 North Park Loop, Memphis, TN 38152, USA; Department of Anatomy and Neurobiology, University of Tennessee Health Sciences Center, Memphis, TN, USA.
| |
Collapse
|
35
|
Alickovic E, Ng EHN, Fiedler L, Santurette S, Innes-Brown H, Graversen C. Effects of Hearing Aid Noise Reduction on Early and Late Cortical Representations of Competing Talkers in Noise. Front Neurosci 2021; 15:636060. [PMID: 33841081 PMCID: PMC8032942 DOI: 10.3389/fnins.2021.636060] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/30/2020] [Accepted: 02/26/2021] [Indexed: 11/13/2022] Open
Abstract
OBJECTIVES Previous research using non-invasive (magnetoencephalography, MEG) and invasive (electrocorticography, ECoG) neural recordings has demonstrated the progressive and hierarchical representation and processing of complex multi-talker auditory scenes in the auditory cortex. Early responses (<85 ms) in primary-like areas appear to represent the individual talkers with almost equal fidelity and are independent of attention in normal-hearing (NH) listeners. However, late responses (>85 ms) in higher-order non-primary areas selectively represent the attended talker with significantly higher fidelity than unattended talkers in NH and hearing-impaired (HI) listeners. Motivated by these findings, the objective of this study was to investigate the effect of a noise reduction scheme (NR) in a commercial hearing aid (HA) on the representation of complex multi-talker auditory scenes in distinct hierarchical stages of the auditory cortex by using high-density electroencephalography (EEG). DESIGN We addressed this issue by investigating early (<85 ms) and late (>85 ms) EEG responses recorded in 34 HI subjects fitted with HAs. The HA noise reduction (NR) was either on or off while the participants listened to a complex auditory scene. Participants were instructed to attend to one of two simultaneous talkers in the foreground while multi-talker babble noise played in the background (+3 dB SNR). After each trial, a two-choice question about the content of the attended speech was presented. RESULTS Using a stimulus reconstruction approach, our results suggest that the attention-related enhancement of neural representations of target and masker talkers located in the foreground, as well as suppression of the background noise in distinct hierarchical stages is significantly affected by the NR scheme. 
We found that the NR scheme enhanced the representation of the foreground, and of the entire acoustic scene, in the early responses, an enhancement driven by a better representation of the target speech. In the late responses, the target talker was selectively represented in HI listeners; with the NR scheme on, the representations of the target and masker speech in the foreground were enhanced and the representation of the background noise was suppressed. The strength of the cortical representation of the target and masker also depended significantly on the EEG time window. CONCLUSION Together, our analyses of the early and late responses obtained from HI listeners support the existing view of hierarchical processing in the auditory cortex. Our findings demonstrate the benefits of an NR scheme on the representation of complex multi-talker auditory scenes in different areas of the auditory cortex in HI listeners.
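The "stimulus reconstruction approach" is typically a backward model that maps the multichannel EEG back onto the attended speech envelope and scores the reconstruction by correlation. A toy sketch with synthetic data; the instantaneous (lag-free) mixing, channel count, and ridge parameter are our simplifications, not the study's pipeline:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic setup: the attended-speech envelope leaks into each EEG channel
# with a random weight, plus additive noise.
n_samples, n_channels = 2000, 16
envelope = rng.normal(size=n_samples)
mixing = rng.normal(size=n_channels)
eeg = np.outer(envelope, mixing) + rng.normal(0.0, 2.0, (n_samples, n_channels))

# Backward model: ridge regression from channels to envelope,
# W = (X'X + lam*I)^-1 X'y, trained on the first half of the data.
train, test = slice(0, 1000), slice(1000, 2000)
X, y = eeg[train], envelope[train]
lam = 1.0
W = np.linalg.solve(X.T @ X + lam * np.eye(n_channels), X.T @ y)

reconstruction = eeg[test] @ W
r = np.corrcoef(reconstruction, envelope[test])[0, 1]  # reconstruction fidelity
```

Fitting separate decoders on early and late response windows, and separately for the target and masker envelopes, yields the window-by-talker comparisons discussed above.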
Collapse
Affiliation(s)
- Emina Alickovic
- Eriksholm Research Centre, Oticon A/S, Snekkersten, Denmark
- Department of Electrical Engineering, Linkoping University, Linkoping, Sweden
| | - Elaine Hoi Ning Ng
- Centre for Applied Audiology Research, Oticon A/S, Smørum, Denmark
- Department of Behavioral Sciences and Learning, Linkoping University, Linkoping, Sweden
| | - Lorenz Fiedler
- Eriksholm Research Centre, Oticon A/S, Snekkersten, Denmark
| | - Sébastien Santurette
- Centre for Applied Audiology Research, Oticon A/S, Smørum, Denmark
- Department of Health Technology, Technical University of Denmark, Lyngby, Denmark
36
Homma NY, Hullett PW, Atencio CA, Schreiner CE. Auditory Cortical Plasticity Dependent on Environmental Noise Statistics. Cell Rep 2020; 30:4445-4458.e5. [PMID: 32234479] [PMCID: PMC7326484] [DOI: 10.1016/j.celrep.2020.03.014]
Abstract
During critical periods, neural circuits develop to form receptive fields that adapt to the sensory environment and enable optimal performance of relevant tasks. We hypothesized that early exposure to background noise can improve signal-in-noise processing and that the resulting receptive field plasticity in the primary auditory cortex can reveal functional principles guiding that important task. We raised rat pups in different spectro-temporal noise statistics during their auditory critical period. As adults, they showed enhanced behavioral performance in detecting vocalizations in noise. Concomitantly, encoding of vocalizations in noise in the primary auditory cortex improved with noise-rearing. Significantly, spectro-temporal modulation plasticity shifted cortical preferences away from the exposed noise statistics, thus reducing noise interference with the foreground sound representation. Auditory cortical plasticity during noise-rearing thus shapes receptive field preferences to optimally extract foreground information in noisy environments, and early noise exposure induces cortical circuits to implement efficient coding in the joint spectral and temporal modulation domain. In brief: after rearing rats in moderately loud spectro-temporally modulated background noise, Homma et al. investigated signal-in-noise processing in the primary auditory cortex. Noise-rearing improved vocalization-in-noise performance in both behavioral testing and neural decoding, and cortical plasticity shifted neuronal spectro-temporal modulation preferences away from the exposed noise statistics.
Affiliation(s)
- Natsumi Y Homma
- Coleman Memorial Laboratory, Department of Otolaryngology - Head and Neck Surgery, University of California, San Francisco, San Francisco, CA 94143, USA; Center for Integrative Neuroscience, University of California, San Francisco, San Francisco, CA 94143, USA
- Patrick W Hullett
- Coleman Memorial Laboratory, Department of Otolaryngology - Head and Neck Surgery, University of California, San Francisco, San Francisco, CA 94143, USA; Center for Integrative Neuroscience, University of California, San Francisco, San Francisco, CA 94143, USA
- Craig A Atencio
- Coleman Memorial Laboratory, Department of Otolaryngology - Head and Neck Surgery, University of California, San Francisco, San Francisco, CA 94143, USA; Center for Integrative Neuroscience, University of California, San Francisco, San Francisco, CA 94143, USA
- Christoph E Schreiner
- Coleman Memorial Laboratory, Department of Otolaryngology - Head and Neck Surgery, University of California, San Francisco, San Francisco, CA 94143, USA; Center for Integrative Neuroscience, University of California, San Francisco, San Francisco, CA 94143, USA
37
Resnik J, Polley DB. Cochlear neural degeneration disrupts hearing in background noise by increasing auditory cortex internal noise. Neuron 2021; 109:984-996.e4. [PMID: 33561398] [PMCID: PMC7979519] [DOI: 10.1016/j.neuron.2021.01.015]
Abstract
Correlational evidence in humans suggests that selective difficulties hearing in noisy, social settings may reflect premature auditory nerve degeneration. Here, we induced primary cochlear neural degeneration (CND) in adult mice and found direct behavioral evidence for selective detection deficits in background noise. To identify central determinants for this perceptual disorder, we tracked daily changes in ensembles of layer 2/3 auditory cortex parvalbumin-expressing inhibitory neurons and excitatory pyramidal neurons with chronic two-photon calcium imaging. CND induced distinct forms of plasticity in cortical excitatory and inhibitory neurons that culminated in net hyperactivity, increased neural gain, and reduced adaptation to background noise. Ensemble activity measured while mice detected targets in noise could accurately decode whether individual behavioral trials were hits or misses. After CND, random surges of hypercorrelated cortical activity occurring just before target onset reliably predicted impending detection failures, revealing a source of internal cortical noise underlying perceptual difficulties in external noise.
Affiliation(s)
- Jennifer Resnik
- Eaton-Peabody Laboratories, Massachusetts Eye and Ear Infirmary, Boston, MA 02114, USA; Department of Otolaryngology-Head and Neck Surgery, Harvard Medical School, Boston, MA 02114, USA
- Daniel B Polley
- Eaton-Peabody Laboratories, Massachusetts Eye and Ear Infirmary, Boston, MA 02114, USA; Department of Otolaryngology-Head and Neck Surgery, Harvard Medical School, Boston, MA 02114, USA
38
Defining the Role of Attention in Hierarchical Auditory Processing. Audiol Res 2021; 11:112-128. [PMID: 33805600] [PMCID: PMC8006147] [DOI: 10.3390/audiolres11010012]
Abstract
Communication in noise is a complex process requiring efficient neural encoding throughout the entire auditory pathway as well as contributions from higher-order cognitive processes (i.e., attention) to extract speech cues for perception. Thus, identifying effective clinical interventions for individuals with speech-in-noise deficits relies on the disentanglement of bottom-up (sensory) and top-down (cognitive) factors to appropriately determine the area of deficit; yet, how attention may interact with early encoding of sensory inputs remains unclear. For decades, attentional theorists have attempted to address this question with cleverly designed behavioral studies, but the neural processes and interactions underlying attention's role in speech perception remain unresolved. While anatomical and electrophysiological studies have investigated the neurological structures contributing to attentional processes and revealed relevant brain-behavior relationships, recent electrophysiological techniques (i.e., simultaneous recording of brainstem and cortical responses) may provide novel insight regarding the relationship between early sensory processing and top-down attentional influences. In this article, we review relevant theories that guide our present understanding of attentional processes, discuss current electrophysiological evidence of attentional involvement in auditory processing across subcortical and cortical levels, and propose areas for future study that will inform the development of more targeted and effective clinical interventions for individuals with speech-in-noise deficits.
39
Saderi D, Schwartz ZP, Heller CR, Pennington JR, David SV. Dissociation of task engagement and arousal effects in auditory cortex and midbrain. eLife 2021; 10:e60153. [PMID: 33570493] [PMCID: PMC7909948] [DOI: 10.7554/eLife.60153]
Abstract
Both generalized arousal and engagement in a specific task influence sensory neural processing. To isolate the effects of these state variables in the auditory system, we recorded single-unit activity from primary auditory cortex (A1) and inferior colliculus (IC) of ferrets during a tone detection task, while monitoring arousal via changes in pupil size. We used a generalized linear model to assess the influence of task engagement and pupil size on sound-evoked activity. In both areas, these two variables affected independent neural populations. Pupil size effects were more prominent in IC, while pupil and task engagement effects were equally likely in A1. Task engagement was correlated with larger pupil size; thus, some apparent effects of task engagement should in fact be attributed to fluctuations in pupil size. These results indicate a hierarchy of auditory processing, where generalized arousal enhances activity in the midbrain, and effects specific to task engagement become more prominent in cortex.
Affiliation(s)
- Daniela Saderi
- Oregon Hearing Research Center, Oregon Health and Science University, Portland, United States
- Neuroscience Graduate Program, Oregon Health and Science University, Portland, United States
- Zachary P Schwartz
- Oregon Hearing Research Center, Oregon Health and Science University, Portland, United States
- Neuroscience Graduate Program, Oregon Health and Science University, Portland, United States
- Charles R Heller
- Oregon Hearing Research Center, Oregon Health and Science University, Portland, United States
- Neuroscience Graduate Program, Oregon Health and Science University, Portland, United States
- Jacob R Pennington
- Department of Mathematics and Statistics, Washington State University, Vancouver, United States
- Stephen V David
- Oregon Hearing Research Center, Oregon Health and Science University, Portland, United States
40
Pupillometry as a reliable metric of auditory detection and discrimination across diverse stimulus paradigms in animal models. Sci Rep 2021; 11:3108. [PMID: 33542266] [PMCID: PMC7862232] [DOI: 10.1038/s41598-021-82340-y]
Abstract
Estimates of detection and discrimination thresholds are often used to explore broad perceptual similarities between human subjects and animal models. Pupillometry shows great promise as a non-invasive, easily deployable method of comparing human and animal thresholds. Using pupillometry, previous studies in animal models have obtained threshold estimates for simple stimuli such as pure tones, but have not explored whether similar pupil responses can be evoked by complex stimuli, what other stimulus contingencies might affect stimulus-evoked pupil responses, and whether pupil responses can be modulated by experience or short-term training. In this study, we used an auditory oddball paradigm to estimate detection and discrimination thresholds across a wide range of stimuli in guinea pigs. We demonstrate that pupillometry yields reliable detection and discrimination thresholds across a range of simple (tones) and complex (conspecific vocalizations) stimuli; that pupil responses can be robustly evoked using different stimulus contingencies (low-level acoustic changes, or higher-level categorical changes); and that pupil responses are modulated by short-term training. These results lay the foundation for using pupillometry as a reliable method of estimating thresholds in large experimental cohorts, and unveil the full potential of using pupillometry to explore broad similarities between humans and animal models.
41
Correlates of Auditory Decision-Making in Prefrontal, Auditory, and Basal Lateral Amygdala Cortical Areas. J Neurosci 2020; 41:1301-1316. [PMID: 33303679] [DOI: 10.1523/jneurosci.2217-20.2020]
Abstract
Spatial selective listening and auditory choice underlie important processes including attending to a speaker at a cocktail party and knowing how (or whether) to respond. To examine task encoding and the relative timing of potential neural substrates underlying these behaviors, we developed a spatial selective detection paradigm for monkeys, and recorded activity in primary auditory cortex (AC), dorsolateral prefrontal cortex (dlPFC), and the basolateral amygdala (BLA). A comparison of neural responses among these three areas showed that, as expected, AC encoded the side of the cue and target characteristics before dlPFC and BLA. Interestingly, AC also encoded the choice of the monkey before dlPFC and around the time of BLA. Generally, BLA showed weak responses to all task features except the choice. Decoding analyses suggested that errors followed from a failure to encode the target stimulus in both AC and dlPFC, but again, these differences arose earlier in AC. The similarities between AC and dlPFC responses were abolished during passive sensory stimulation with identical trial conditions, suggesting that the robust sensory encoding in dlPFC is contextually gated. Thus, counter to a strictly PFC-driven decision process, in this spatial selective listening task AC neural activity represents the sensory and decision information before dlPFC. Unlike in the visual domain, in this auditory task, the BLA does not appear to be robustly involved in selective spatial processing. SIGNIFICANCE STATEMENT We examined neural correlates of an auditory spatial selective listening task by recording single-neuron activity in behaving monkeys from the amygdala, dorsolateral prefrontal cortex, and auditory cortex. We found that auditory cortex coded spatial cues and choice-related activity before dorsolateral prefrontal cortex or the amygdala. Auditory cortex also had robust delay period activity. Therefore, we found that auditory cortex could support the neural computations that underlie the behavioral processes in the task.
42
Task Engagement Improves Neural Discriminability in the Auditory Midbrain of the Marmoset Monkey. J Neurosci 2020; 41:284-297. [PMID: 33208469] [DOI: 10.1523/jneurosci.1112-20.2020]
Abstract
While task-dependent changes have been demonstrated in auditory cortex for a number of behavioral paradigms and mammalian species, less is known about how behavioral state can influence neural coding in the midbrain areas that provide auditory information to cortex. We measured single-unit activity in the inferior colliculus (IC) of common marmosets of both sexes while they performed a tone-in-noise detection task and during passive presentation of identical task stimuli. In contrast to our previous study in the ferret IC, task engagement had little effect on sound-evoked activity in central (lemniscal) IC of the marmoset. However, activity was significantly modulated in noncentral fields, where responses were selectively enhanced for the target tone relative to the distractor noise. This led to an increase in neural discriminability between target and distractors. The results confirm that task engagement can modulate sound coding in the auditory midbrain, and support a hypothesis that subcortical pathways can mediate highly trained auditory behaviors. SIGNIFICANCE STATEMENT While the cerebral cortex is widely viewed as playing an essential role in the learning and performance of complex auditory behaviors, relatively little attention has been paid to the role of brainstem and midbrain areas that process sound information before it reaches cortex. This study demonstrates that the auditory midbrain is also modulated during behavior. These modulations amplify task-relevant sensory information, a process that is traditionally attributed to cortex.
43
Brodbeck C, Jiao A, Hong LE, Simon JZ. Neural speech restoration at the cocktail party: Auditory cortex recovers masked speech of both attended and ignored speakers. PLoS Biol 2020; 18:e3000883. [PMID: 33091003] [PMCID: PMC7644085] [DOI: 10.1371/journal.pbio.3000883]
Abstract
Humans are remarkably skilled at listening to one speaker out of an acoustic mixture of several speech sources. Two speakers are easily segregated, even without binaural cues, but the neural mechanisms underlying this ability are not well understood. One possibility is that early cortical processing performs a spectrotemporal decomposition of the acoustic mixture, allowing the attended speech to be reconstructed via optimally weighted recombinations that discount spectrotemporal regions where sources heavily overlap. Using human magnetoencephalography (MEG) responses to a 2-talker mixture, we show evidence for an alternative possibility, in which early, active segregation occurs even for strongly spectrotemporally overlapping regions. Early (approximately 70-millisecond) responses to nonoverlapping spectrotemporal features are seen for both talkers. When competing talkers’ spectrotemporal features mask each other, the individual representations persist, but they occur with an approximately 20-millisecond delay. This suggests that the auditory cortex recovers acoustic features that are masked in the mixture, even if they occurred in the ignored speech. The existence of such noise-robust cortical representations, of features present in attended as well as ignored speech, suggests an active cortical stream segregation process, which could explain a range of behavioral effects of ignored background speech. How do humans focus on one speaker when several are talking? MEG responses to a continuous two-talker mixture suggest that, even though listeners attend only to one of the talkers, their auditory cortex tracks acoustic features from both speakers. This occurs even when those features are locally masked by the other speaker.
Affiliation(s)
- Christian Brodbeck
- Institute for Systems Research, University of Maryland, College Park, Maryland, United States of America
- Alex Jiao
- Department of Electrical and Computer Engineering, University of Maryland, College Park, Maryland, United States of America
- L. Elliot Hong
- Maryland Psychiatric Research Center, Department of Psychiatry, University of Maryland School of Medicine, Baltimore, Maryland, United States of America
- Jonathan Z. Simon
- Institute for Systems Research, University of Maryland, College Park, Maryland, United States of America
- Department of Electrical and Computer Engineering, University of Maryland, College Park, Maryland, United States of America
- Department of Biology, University of Maryland, College Park, Maryland, United States of America
44
Ferreiro DN, Amaro D, Schmidtke D, Sobolev A, Gundi P, Belliveau L, Sirota A, Grothe B, Pecka M. Sensory Island Task (SIT): A New Behavioral Paradigm to Study Sensory Perception and Neural Processing in Freely Moving Animals. Front Behav Neurosci 2020; 14:576154. [PMID: 33100981] [PMCID: PMC7546252] [DOI: 10.3389/fnbeh.2020.576154]
Abstract
A central function of sensory systems is the gathering of information about dynamic interactions with the environment during self-motion. To determine whether modulation of a sensory cue was externally caused or a result of self-motion is fundamental to perceptual invariance and requires the continuous update of sensory processing about recent movements. This process is highly context-dependent and crucial for perceptual performances such as decision-making and sensory object formation. Yet despite its fundamental ecological role, voluntary self-motion is rarely incorporated in perceptual or neurophysiological investigations of sensory processing in animals. Here, we present the Sensory Island Task (SIT), a new freely moving search paradigm to study sensory processing and perception. In SIT, animals explore an open-field arena to find a sensory target relying solely on changes in the presented stimulus, which is controlled by closed-loop position tracking in real-time. Within a few sessions, animals are trained via positive reinforcement to search for a particular area in the arena (“target island”), which triggers the presentation of the target stimulus. The location of the target island is randomized across trials, making the modulated stimulus feature the only informative cue for task completion. Animals report detection of the target stimulus by remaining within the island for a defined time (“sit-time”). Multiple “non-target” islands can be incorporated to test psychometric discrimination and identification performance. We exemplify the suitability of SIT for rodents (Mongolian gerbil, Meriones unguiculatus) and small primates (mouse lemur, Microcebus murinus) and for studying various sensory perceptual performances (auditory frequency discrimination, sound source localization, visual orientation discrimination). 
Furthermore, we show that pairing SIT with chronic electrophysiological recordings allows revealing neuronal signatures of sensory processing under ecologically relevant conditions during goal-oriented behavior. In conclusion, SIT represents a flexible and easily implementable behavioral paradigm for mammals that combines self-motion and natural exploratory behavior to study sensory sensitivity and decision-making and their underlying neuronal processing.
Affiliation(s)
- Dardo N Ferreiro
- Division of Neurobiology, Department Biology II, Ludwig-Maximilians-Universität München, Munich, Germany; Department of General Psychology and Education, Ludwig-Maximilians-Universität München, Munich, Germany
- Diana Amaro
- Division of Neurobiology, Department Biology II, Ludwig-Maximilians-Universität München, Munich, Germany; Graduate School of Systemic Neurosciences, Ludwig-Maximilians-Universität München, Munich, Germany
- Daniel Schmidtke
- Institute of Zoology, University of Veterinary Medicine Hannover, Hanover, Germany
- Andrey Sobolev
- Graduate School of Systemic Neurosciences, Ludwig-Maximilians-Universität München, Munich, Germany
- Paula Gundi
- Division of Neurobiology, Department Biology II, Ludwig-Maximilians-Universität München, Munich, Germany; Graduate School of Systemic Neurosciences, Ludwig-Maximilians-Universität München, Munich, Germany
- Lucile Belliveau
- Division of Neurobiology, Department Biology II, Ludwig-Maximilians-Universität München, Munich, Germany
- Anton Sirota
- Faculty of Medicine, Bernstein Center for Computational Neuroscience Munich, Munich Cluster of Systems Neurology (SyNergy), Ludwig-Maximilians-Universität München, Munich, Germany
- Benedikt Grothe
- Division of Neurobiology, Department Biology II, Ludwig-Maximilians-Universität München, Munich, Germany; Graduate School of Systemic Neurosciences, Ludwig-Maximilians-Universität München, Munich, Germany
- Michael Pecka
- Division of Neurobiology, Department Biology II, Ludwig-Maximilians-Universität München, Munich, Germany
45
Christensen RK, Lindén H, Nakamura M, Barkat TR. White Noise Background Improves Tone Discrimination by Suppressing Cortical Tuning Curves. Cell Rep 2019; 29:2041-2053.e4. [PMID: 31722216] [DOI: 10.1016/j.celrep.2019.10.049]
Abstract
The brain faces the difficult task of maintaining a stable representation of key features of the outside world in noisy sensory surroundings. How does the sensory representation change with noise, and how does the brain make sense of it? We investigated the effect of background white noise (WN) on tuning properties of neurons in mouse A1 and its impact on discrimination performance in a go/no-go task. We find that WN suppresses the activity of A1 neurons, which surprisingly increases the discriminability of tones spectrally close to each other. To confirm the involvement of A1, we optogenetically excited parvalbumin-positive (PV+) neurons in A1, which have similar effects as WN on both tuning properties and frequency discrimination. A population model suggests that the suppression of A1 tuning curves increases frequency selectivity and thereby improves discrimination. Our findings demonstrate that the cortical representation of pure tones adapts during noise to improve sensory acuity.
Affiliation(s)
- Rasmus Kordt Christensen
- Department of Biomedicine, Basel University, 4056 Basel, Switzerland; Department of Neuroscience, University of Copenhagen, 2200 Copenhagen, Denmark
- Henrik Lindén
- Department of Neuroscience, University of Copenhagen, 2200 Copenhagen, Denmark
- Mari Nakamura
- Department of Biomedicine, Basel University, 4056 Basel, Switzerland
46
Knyazeva S, Selezneva E, Gorkin A, Ohl FW, Brosch M. Representation of Auditory Task Components and of Their Relationships in Primate Auditory Cortex. Front Neurosci 2020; 14:306. [PMID: 32372903] [PMCID: PMC7186436] [DOI: 10.3389/fnins.2020.00306]
Abstract
The current study aimed to resolve some of the inconsistencies in the literature on which mental processes affect auditory cortical activity. To this end, we studied auditory cortical firing in four monkeys with different training experience while they were engaged in six conditions with different arrangements of the task components sound, motor action, and water reward. Firing rates changed most strongly when a sound-only condition was compared to a condition in which sound was paired with water. Additional smaller changes occurred in more complex conditions in which the monkeys received water for motor actions before or after sounds. Our findings suggest that auditory cortex is most strongly modulated by the subjects' level of arousal, that is, by a psychological concept related to motor activity triggered by reinforcers and to readiness for operant behavior. Our findings also suggest that auditory cortex is involved in associative and emotional functions, but not in agency and cognitive effort.
Affiliation(s)
- Alexander Gorkin
- Institute of Psychology, Russian Academy of Sciences, Moscow, Russia
- Frank W Ohl
- Leibniz Institut für Neurobiologie, Magdeburg, Germany; Institute of Biology, Otto-von-Guericke University, Magdeburg, Germany; Center for Behavioral Brain Sciences, Otto-von-Guericke University, Magdeburg, Germany
- Michael Brosch
- Leibniz Institut für Neurobiologie, Magdeburg, Germany; Center for Behavioral Brain Sciences, Otto-von-Guericke University, Magdeburg, Germany
47
Associations between sounds and actions in primate prefrontal cortex. Brain Res 2020; 1738:146775. [PMID: 32194079] [DOI: 10.1016/j.brainres.2020.146775]
Abstract
Behavioral flexibility allows animals to cope with changing situations, for example, to execute different actions to the same stimulus to achieve specific goals in different situations. The selection of the appropriate action in a given situation hinges on the previously learned associations between stimuli, actions, and outcomes. We showed in our recent study that early auditory cortex of nonhuman primates contributes to the selection of the actions to sounds by representing the associations between sounds and actions. That is, neurons in auditory cortex respond differently to a given sound when it signals different actions that are required to obtain a reward. Here, using the same monkey and the same tasks, we investigated whether the ventrolateral part of prefrontal cortex also represents such audiomotor associations as well as whether and how these representations differ from those in auditory cortex. Mirroring auditory cortex, neuronal responses to a given sound in prefrontal cortex changed with audiomotor associations, and the neuronal responses were largest when the sound signaled a no-go response. These findings suggest that prefrontal cortex also represents audiomotor associations and thus contributes to the selection of the actions to sounds during goal-directed behavior. The neuronal activity related to audiomotor associations started later in prefrontal cortex than in auditory cortex, suggesting that the representations in prefrontal cortex may originate in auditory cortex or in earlier stages of the auditory system.
48
Roach JP, Eniwaye B, Booth V, Sander LM, Zochowski MR. Acetylcholine Mediates Dynamic Switching Between Information Coding Schemes in Neuronal Networks. Front Syst Neurosci 2019; 13:64. [PMID: 31780905] [PMCID: PMC6861375] [DOI: 10.3389/fnsys.2019.00064]
Abstract
Rate coding and phase coding are the two major coding modes seen in the brain. For these two modes, network dynamics must either have a wide distribution of frequencies for rate coding, or a narrow one to achieve stability in phase dynamics for phase coding. Acetylcholine (ACh) is a potent regulator of neural excitability. Acting through the muscarinic receptor, ACh reduces the magnitude of the potassium M-current, a hyperpolarizing current that builds up as neurons fire. The M-current contributes to several excitability features of neurons, becoming a major player in facilitating the transition between Type 1 (integrator) and Type 2 (resonator) excitability. In this paper we argue that this transition enables a dynamic switch between rate coding and phase coding as levels of ACh release change. When a network is in a high-ACh state, variations in synaptic inputs lead to a wider distribution of firing rates across the network, and this distribution reflects the network structure or the pattern of external input to the network. When ACh is low, network frequencies become narrowly distributed, and the structure of a network or pattern of external inputs is represented through phase relationships between firing neurons. This work provides insights into how modulation of neuronal features influences network dynamics and information processing across brain states.
Affiliation(s)
- James P Roach
- Neuroscience Graduate Program, University of Michigan, Ann Arbor, MI, United States
- Bolaji Eniwaye
- Department of Physics, University of Michigan, Ann Arbor, MI, United States
- Victoria Booth
- Neuroscience Graduate Program, Department of Mathematics, and Department of Anesthesiology, University of Michigan, Ann Arbor, MI, United States
- Leonard M Sander
- Department of Physics and Center for the Study of Complex Systems, University of Michigan, Ann Arbor, MI, United States
- Michal R Zochowski
- Neuroscience Graduate Program, Department of Physics, Center for the Study of Complex Systems, and Biophysics Program, University of Michigan, Ann Arbor, MI, United States
49
Martin S, Mikutta C, Leonard MK, Hungate D, Koelsch S, Shamma S, Chang EF, Millán JDR, Knight RT, Pasley BN. Neural Encoding of Auditory Features during Music Perception and Imagery. Cereb Cortex 2019; 28:4222-4233. [PMID: 29088345 DOI: 10.1093/cercor/bhx277] [Citation(s) in RCA: 27] [Impact Index Per Article: 5.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/24/2017] [Indexed: 11/12/2022] Open
Abstract
Despite many behavioral and neuroimaging investigations, it remains unclear how the human cortex represents spectrotemporal sound features during auditory imagery, and how this representation compares to auditory perception. To assess this, we recorded electrocorticographic signals from an epilepsy patient with proficient musical ability in 2 conditions. First, the participant played 2 piano pieces on an electronic piano with the keyboard's sound output audible. Second, the participant replayed the same piano pieces without auditory feedback and was asked to imagine hearing the music in his mind. In both conditions, the sound output of the keyboard was recorded, allowing precise time-locking between the neural activity and the spectrotemporal content of the imagined music. This novel task design provided a unique opportunity to apply receptive field modeling techniques to quantitatively study neural encoding during auditory mental imagery. In both conditions, we built encoding models to predict high gamma neural activity (70-150 Hz) from the spectrogram representation of the recorded sound. We found robust spectrotemporal receptive fields during auditory imagery with substantial, but not complete, overlap in frequency tuning and cortical location compared to receptive fields measured during auditory perception.
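The encoding-model approach described here, predicting high-gamma activity from the sound spectrogram with a time-lagged linear (receptive field) model, can be sketched on synthetic data. All shapes, variable names, and the closed-form ridge solver below are illustrative assumptions, not the paper's actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-ins for the data: a spectrogram (time x frequency) and a
# high-gamma trace generated from it by a known "true" receptive field
# plus noise, so the fit can be checked against ground truth.
n_t, n_f, n_lags = 2000, 16, 5
spec = rng.standard_normal((n_t, n_f))

def lagged_design(spec, n_lags):
    """Stack time-lagged copies of the spectrogram into a design matrix."""
    n_t, n_f = spec.shape
    X = np.zeros((n_t, n_f * n_lags))
    for lag in range(n_lags):
        X[lag:, lag * n_f:(lag + 1) * n_f] = spec[:n_t - lag]
    return X

true_strf = rng.standard_normal(n_f * n_lags)
X = lagged_design(spec, n_lags)
y = X @ true_strf + 0.1 * rng.standard_normal(n_t)  # synthetic high-gamma

# Ridge regression, closed form: w = (X'X + aI)^(-1) X'y
alpha = 1.0
w = np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ y)

# The fitted STRF should closely match the generating one.
corr = np.corrcoef(w, true_strf)[0, 1]
assert corr > 0.9
```

Reshaping `w` to `(n_lags, n_f)` gives the familiar spectrotemporal receptive field picture, with one weight per frequency bin and time lag.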
Affiliation(s)
- Stephanie Martin
- Defitech Chair in Brain-Machine Interface, Center for Neuroprosthetics, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland; Helen Wills Neuroscience Institute, University of California, Berkeley, CA, USA
- Christian Mikutta
- Helen Wills Neuroscience Institute, University of California, Berkeley, CA, USA; Translational Research Center and Division of Clinical Research Support, Psychiatric Services University of Bern (UPD), University Hospital of Psychiatry, Bern, Switzerland; Department of Neurology, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Matthew K Leonard
- Department of Neurological Surgery, Department of Physiology, and Center for Integrative Neuroscience, University of California, San Francisco, CA, USA
- Dylan Hungate
- Department of Neurological Surgery, Department of Physiology, and Center for Integrative Neuroscience, University of California, San Francisco, CA, USA
- Shihab Shamma
- Département d'études cognitives, École normale supérieure, PSL Research University, Paris, France; Electrical and Computer Engineering and Institute for Systems Research, University of Maryland, College Park, MD, USA
- Edward F Chang
- Department of Neurological Surgery, Department of Physiology, and Center for Integrative Neuroscience, University of California, San Francisco, CA, USA
- José Del R Millán
- Defitech Chair in Brain-Machine Interface, Center for Neuroprosthetics, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
- Robert T Knight
- Helen Wills Neuroscience Institute and Department of Psychology, University of California, Berkeley, CA, USA
- Brian N Pasley
- Helen Wills Neuroscience Institute, University of California, Berkeley, CA, USA
50
O'Sullivan J, Herrero J, Smith E, Schevon C, McKhann GM, Sheth SA, Mehta AD, Mesgarani N. Hierarchical Encoding of Attended Auditory Objects in Multi-talker Speech Perception. Neuron 2019; 104:1195-1209.e3. [PMID: 31648900 DOI: 10.1016/j.neuron.2019.09.007] [Citation(s) in RCA: 54] [Impact Index Per Article: 10.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/28/2019] [Revised: 07/11/2019] [Accepted: 09/06/2019] [Indexed: 11/15/2022]
Abstract
Humans can easily focus on one speaker in a multi-talker acoustic environment, but how different areas of the human auditory cortex (AC) represent the acoustic components of mixed speech is unknown. We obtained invasive recordings from the primary and nonprimary AC in neurosurgical patients as they listened to multi-talker speech. We found that neural sites in the primary AC responded to individual speakers in the mixture and were relatively unchanged by attention. In contrast, neural sites in the nonprimary AC were less discerning of individual speakers but selectively represented the attended speaker. Moreover, the encoding of the attended speaker in the nonprimary AC was invariant to the degree of acoustic overlap with the unattended speaker. Finally, this emergent representation of attended speech in the nonprimary AC was linearly predictable from the primary AC responses. Our results reveal the neural computations underlying the hierarchical formation of auditory objects in human AC during multi-talker speech perception.
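The final analysis, testing whether the attended-speech representation in nonprimary AC is linearly predictable from primary AC responses, amounts to fitting a linear mapping between two sets of response time series. A minimal least-squares sketch on synthetic data (site counts, noise level, and names are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic responses: "nonprimary" sites are generated as a linear
# readout of "primary" sites plus noise, standing in for the recorded data.
n_t, n_primary, n_nonprimary = 1000, 20, 5
primary = rng.standard_normal((n_t, n_primary))
readout = rng.standard_normal((n_primary, n_nonprimary))
nonprimary = primary @ readout + 0.05 * rng.standard_normal((n_t, n_nonprimary))

# Least-squares fit of the linear mapping from primary to nonprimary.
W, *_ = np.linalg.lstsq(primary, nonprimary, rcond=None)
pred = primary @ W

# Prediction accuracy per nonprimary site (correlation over time).
r = [np.corrcoef(pred[:, i], nonprimary[:, i])[0, 1] for i in range(n_nonprimary)]
assert min(r) > 0.9
```

In practice one would fit `W` on held-out training data and report the test-set correlations; high values would support the claim that the nonprimary representation is a linear transformation of primary responses.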
Affiliation(s)
- James O'Sullivan
- Department of Electrical Engineering, Columbia University, New York, NY, USA
- Jose Herrero
- Department of Neurosurgery, Hofstra-Northwell School of Medicine and Feinstein Institute for Medical Research, Manhasset, NY, USA
- Elliot Smith
- Department of Neurological Surgery, The Neurological Institute, New York, NY, USA; Department of Neurosurgery, University of Utah, Salt Lake City, UT, USA
- Catherine Schevon
- Department of Neurological Surgery, The Neurological Institute, New York, NY, USA
- Guy M McKhann
- Department of Neurological Surgery, The Neurological Institute, New York, NY, USA
- Sameer A Sheth
- Department of Neurological Surgery, The Neurological Institute, New York, NY, USA; Department of Neurosurgery, Baylor College of Medicine, Houston, TX, USA
- Ashesh D Mehta
- Department of Neurosurgery, Hofstra-Northwell School of Medicine and Feinstein Institute for Medical Research, Manhasset, NY, USA
- Nima Mesgarani
- Department of Electrical Engineering, Columbia University, New York, NY, USA