1. van der Heijden K, Patel P, Bickel S, Herrero JL, Mehta AD, Mesgarani N. Joint population coding and temporal coherence link an attended talker's voice and location features in naturalistic multi-talker scenes. bioRxiv 2024:2024.05.13.593814. PMID: 38798551; PMCID: PMC11118436; DOI: 10.1101/2024.05.13.593814.
Abstract
Listeners readily extract multi-dimensional auditory objects such as a 'localized talker' from complex acoustic scenes with multiple talkers. Yet, the neural mechanisms underlying simultaneous encoding and linking of different sound features - for example, a talker's voice and location - are poorly understood. We analyzed invasive intracranial recordings in neurosurgical patients attending to a localized talker in real-life cocktail party scenarios. We found that sensitivity to an individual talker's voice and location features was distributed throughout auditory cortex and that neural sites exhibited a gradient from sensitivity to a single feature to joint sensitivity to both features. On a population level, cortical response patterns of both dual-feature sensitive sites and single-feature sensitive sites revealed simultaneous encoding of an attended talker's voice and location features. However, for single-feature sensitive sites, the representation of the primary feature was more precise. Further, sites that selectively tracked an attended speech stream concurrently encoded the attended talker's voice and location features, indicating that such sites combine selective tracking of an attended auditory object with encoding of the object's features. Finally, we found that attending to a localized talker selectively enhanced temporal coherence between single-feature voice sensitive sites and single-feature location sensitive sites, providing an additional mechanism for linking voice and location in multi-talker scenes. These results demonstrate that a talker's voice and location features are linked during multi-dimensional object formation in naturalistic multi-talker scenes by joint population coding as well as by temporal coherence between neural sites. SIGNIFICANCE STATEMENT Listeners effortlessly extract auditory objects from complex, naturalistic spatial scenes containing multiple sound sources.
Yet, how the brain links different sound features to form a multi-dimensional auditory object is poorly understood. We investigated how neural responses encode and integrate an attended talker's voice and location features in spatial multi-talker sound scenes to elucidate which neural mechanisms underlie simultaneous encoding and linking of different auditory features. Our results show that joint population coding as well as temporal coherence mechanisms contribute to distributed multi-dimensional auditory object encoding. These findings shed new light on cortical functional specialization and multi-dimensional auditory object formation in complex, naturalistic listening scenes. HIGHLIGHTS
- Cortical responses to a single talker exhibit a distributed gradient, ranging from sites that are sensitive to both a talker's voice and location (dual-feature sensitive sites) to sites that are sensitive to either voice or location (single-feature sensitive sites).
- Population response patterns of dual-feature sensitive sites encode voice and location features of the attended talker in multi-talker scenes jointly and with equal precision.
- Despite their sensitivity to a single feature at the level of individual cortical sites, population response patterns of single-feature sensitive sites also encode location and voice features of a talker jointly, but with higher precision for the feature they are primarily sensitive to.
- Neural sites that selectively track an attended speech stream concurrently encode the attended talker's voice and location features.
- Attention selectively enhances temporal coherence between voice and location selective sites over time.
- Joint population coding as well as temporal coherence mechanisms underlie distributed multi-dimensional auditory object encoding in auditory cortex.
2. Puschmann S, Regev M, Fakhar K, Zatorre RJ, Thiel CM. Attention-Driven Modulation of Auditory Cortex Activity during Selective Listening in a Multispeaker Setting. J Neurosci 2024; 44:e1157232023. PMID: 38388426; PMCID: PMC11007309; DOI: 10.1523/jneurosci.1157-23.2023.
Abstract
Real-world listening settings often consist of multiple concurrent sound streams. To limit perceptual interference during selective listening, the auditory system segregates and filters the relevant sensory input. Previous work provided evidence that the auditory cortex is critically involved in this process and selectively gates attended input toward subsequent processing stages. We studied at which level of auditory cortex processing this filtering of attended information occurs using functional magnetic resonance imaging (fMRI) and a naturalistic selective listening task. Forty-five human listeners (of either sex) attended to one of two continuous speech streams, presented either concurrently or in isolation. Functional data were analyzed using an inter-subject analysis to assess stimulus-specific components of ongoing auditory cortex activity. Our results suggest that stimulus-related activity in the primary auditory cortex and the adjacent planum temporale is hardly affected by attention, whereas brain responses at higher stages of the auditory cortex processing hierarchy become progressively more selective for the attended input. Consistent with these findings, a complementary analysis of stimulus-driven functional connectivity further demonstrated that information on the to-be-ignored speech stream is shared between the primary auditory cortex and the planum temporale but largely fails to reach higher processing stages. Our findings suggest that the neural processing of ignored speech cannot be effectively suppressed at the level of early cortical processing of acoustic features but is gradually attenuated once the competing speech streams are fully segregated.
Affiliation(s)
- Sebastian Puschmann
- Biological Psychology Lab, Department of Psychology, Carl von Ossietzky Universität Oldenburg, 26129 Oldenburg, Germany
- Cluster of Excellence "Hearing4all", Carl von Ossietzky Universität Oldenburg, 26129 Oldenburg, Germany
- Mor Regev
- Montreal Neurological Institute, McGill University, Montreal, Quebec H3A 2B4, Canada
- Kayson Fakhar
- Institute of Computational Neuroscience, University Medical Center Eppendorf, Hamburg University, Hamburg Center of Neuroscience, Hamburg 20246, Germany
- Robert J Zatorre
- Montreal Neurological Institute, McGill University, Montreal, Quebec H3A 2B4, Canada
- International Laboratory for Brain, Music and Sound Research (BRAMS), Montreal, Quebec H2V 2S9, Canada
- Christiane M Thiel
- Biological Psychology Lab, Department of Psychology, Carl von Ossietzky Universität Oldenburg, 26129 Oldenburg, Germany
- Cluster of Excellence "Hearing4all", Carl von Ossietzky Universität Oldenburg, 26129 Oldenburg, Germany
3. Mittelstadt JK, Kanold PO. Orbitofrontal cortex conveys stimulus and task information to the auditory cortex. Curr Biol 2023; 33:4160-4173.e4. PMID: 37716349; PMCID: PMC10602585; DOI: 10.1016/j.cub.2023.08.059.
Abstract
Auditory cortical neurons modify their response profiles in response to numerous external factors. During task performance, changes in primary auditory cortex (A1) responses are thought to be driven by top-down inputs from the orbitofrontal cortex (OFC), which may lead to response modification on a trial-by-trial basis. While OFC neurons respond to auditory stimuli and project to A1, the function of OFC projections to A1 during auditory tasks is unknown. Here, we observed the activity of putative OFC terminals in A1 in mice using in vivo two-photon calcium imaging under passive conditions and during a tone detection task. We found that behavioral activity modulates but is not necessary to evoke OFC terminal responses in A1. OFC terminals in A1 form distinct populations that exclusively respond to either the tone, reward, or error. Using tones against a background of white noise, we found that OFC terminal activity was modulated by the signal-to-noise ratio (SNR) in both passive and active conditions and, in the active condition, varied with SNR and thus task difficulty. Therefore, OFC projections in A1 are heterogeneous in their modulation of auditory encoding and likely contribute to auditory processing under various auditory conditions.
Affiliation(s)
- Jonah K Mittelstadt
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21205, USA; Solomon H. Snyder Department of Neuroscience, Johns Hopkins University, Baltimore, MD 21205, USA
- Patrick O Kanold
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21205, USA; Solomon H. Snyder Department of Neuroscience, Johns Hopkins University, Baltimore, MD 21205, USA; Kavli Neuroscience Discovery Institute, Johns Hopkins University, Baltimore, MD 21205, USA.
4. Graham G, Chimenti MS, Knudtson KL, Grenard DN, Co L, Sumner M, Tchou T, Bieszczad KM. Learning induces unique transcriptional landscapes in the auditory cortex. Hear Res 2023; 438:108878. PMID: 37659220; PMCID: PMC10529106; DOI: 10.1016/j.heares.2023.108878.
Abstract
Learning can induce neurophysiological plasticity in the auditory cortex at multiple timescales. Lasting changes to auditory cortical function that persist over days, weeks, or even a lifetime require learning to induce de novo gene expression. Indeed, transcription is the molecular determinant for long-term memories to form with a lasting impact on sound-related behavior. However, auditory cortical genes that support auditory learning, memory, and acquired sound-specific behavior are largely unknown. Using adult male Sprague-Dawley rats, this report is the first to identify genome-wide changes in learning-induced gene expression within the auditory cortex that may underlie long-lasting discriminative memory formation of acoustic frequency cues. Auditory cortical samples were collected from animals in the initial learning phase of a two-tone discrimination sound-reward task known to induce sound-specific neurophysiological and behavioral effects. Bioinformatic analyses on gene enrichment profiles from bulk RNA sequencing identified cholinergic synapse (KEGG rno04725), extracellular matrix receptor interaction (KEGG rno04512), and neuroactive ligand-receptor interaction (KEGG rno04080) among the top biological pathways likely to be important for auditory discrimination learning. The findings characterize candidate effectors underlying the early stages of changes in cortical and behavioral function that ultimately support the formation of long-term discriminative auditory memory in the adult brain. The molecules and mechanisms identified are potential therapeutic targets to facilitate experiences that induce long-lasting changes to sound-specific auditory function in adulthood and prime for future gene-targeted investigations.
Affiliation(s)
- G Graham
- Neuroscience Graduate Program, Rutgers Univ., Piscataway, NJ, USA; Behavioral and Systems Neuroscience, Dept. of Psychology, Rutgers Univ., Piscataway, NJ, USA
- M S Chimenti
- Iowa Institute of Human Genetics, University of Iowa Carver College of Medicine, Iowa City, IA, USA
- K L Knudtson
- Iowa Institute of Human Genetics, University of Iowa Carver College of Medicine, Iowa City, IA, USA
- D N Grenard
- Behavioral and Systems Neuroscience, Dept. of Psychology, Rutgers Univ., Piscataway, NJ, USA
- L Co
- Behavioral and Systems Neuroscience, Dept. of Psychology, Rutgers Univ., Piscataway, NJ, USA
- M Sumner
- Behavioral and Systems Neuroscience, Dept. of Psychology, Rutgers Univ., Piscataway, NJ, USA
- T Tchou
- Behavioral and Systems Neuroscience, Dept. of Psychology, Rutgers Univ., Piscataway, NJ, USA
- K M Bieszczad
- Neuroscience Graduate Program, Rutgers Univ., Piscataway, NJ, USA; Behavioral and Systems Neuroscience, Dept. of Psychology, Rutgers Univ., Piscataway, NJ, USA; Rutgers Center for Cognitive Science, Rutgers Univ., Piscataway, NJ, USA; Dept. of Otolaryngology-Head and Neck Surgery, Rutgers Robert Wood Johnson Medical School, New Brunswick, NJ, USA.
5. Choi I, Demir I, Oh S, Lee SH. Multisensory integration in the mammalian brain: diversity and flexibility in health and disease. Philos Trans R Soc Lond B Biol Sci 2023; 378:20220338. PMID: 37545309; PMCID: PMC10404930; DOI: 10.1098/rstb.2022.0338.
Abstract
Multisensory integration (MSI) occurs in a variety of brain areas, spanning cortical and subcortical regions. In traditional studies of sensory processing, the sensory cortices have been considered to process sensory information in a modality-specific manner. The sensory cortices, however, send the information to other cortical and subcortical areas, including the higher association cortices and the other sensory cortices, where the multiple modality inputs converge and integrate to generate a meaningful percept. This integration process is neither simple nor fixed because these brain areas interact with each other via complicated circuits, which can be modulated by numerous internal and external conditions. As a result, dynamic MSI makes multisensory decisions flexible and adaptive in behaving animals. Impairments in MSI occur in many psychiatric disorders, which may result in an altered perception of multisensory stimuli and an abnormal reaction to them. This review discusses the diversity and flexibility of MSI in mammals, including humans, primates and rodents, as well as the brain areas involved. It further explains how such flexibility influences perceptual experiences in behaving animals in both health and disease. This article is part of the theme issue 'Decision and control processes in multisensory perception'.
Affiliation(s)
- Ilsong Choi
- Center for Synaptic Brain Dysfunctions, Institute for Basic Science (IBS), Daejeon 34141, Republic of Korea
- Ilayda Demir
- Department of Biological Sciences, KAIST, Daejeon 34141, Republic of Korea
- Seungmi Oh
- Department of Biological Sciences, KAIST, Daejeon 34141, Republic of Korea
- Seung-Hee Lee
- Center for Synaptic Brain Dysfunctions, Institute for Basic Science (IBS), Daejeon 34141, Republic of Korea
- Department of Biological Sciences, KAIST, Daejeon 34141, Republic of Korea
6. Graham G, Chimenti MS, Knudtson KL, Grenard DN, Co L, Sumner M, Tchou T, Bieszczad KM. Learning induces unique transcriptional landscapes in the auditory cortex. bioRxiv 2023:2023.04.15.536914. PMID: 37090563; PMCID: PMC10120736; DOI: 10.1101/2023.04.15.536914.
Abstract
Learning can induce neurophysiological plasticity in the auditory cortex at multiple timescales. Lasting changes to auditory cortical function that persist over days, weeks, or even a lifetime require learning to induce de novo gene expression. Indeed, transcription is the molecular determinant for long-term memories to form with a lasting impact on sound-related behavior. However, auditory cortical genes that support auditory learning, memory, and acquired sound-specific behavior are largely unknown. This report is the first to identify in young adult male rats (Sprague-Dawley) genome-wide changes in learning-induced gene expression within the auditory cortex that may underlie the formation of long-lasting discriminative memory for acoustic frequency cues. Auditory cortical samples were collected from animals in the initial learning phase of a two-tone discrimination sound-reward task known to induce sound-specific neurophysiological and behavioral effects (e.g., Shang et al., 2019). Bioinformatic analyses on gene enrichment profiles from bulk RNA sequencing identified cholinergic synapse (KEGG 04725), extracellular matrix receptor interaction (KEGG 04512), and neuroactive ligand-receptor interaction (KEGG 04080) as top biological pathways for auditory discrimination learning. The findings characterize key candidate effectors underlying changes in cortical function that support the initial formation of long-term discriminative auditory memory in the adult brain. The molecules and mechanisms identified are potential therapeutic targets to facilitate lasting changes to sound-specific auditory function in adulthood and prime for future gene-targeted investigations.
Affiliation(s)
- G Graham
- Neuroscience Graduate Program, Rutgers Univ., Piscataway, NJ
- Behavioral and Systems Neuroscience, Dept. of Psychology, Rutgers Univ., Piscataway, NJ
- M S Chimenti
- Iowa Institute of Human Genetics, Univ. of Iowa Carver College of Medicine, Iowa City, IA
- K L Knudtson
- Iowa Institute of Human Genetics, Univ. of Iowa Carver College of Medicine, Iowa City, IA
- D N Grenard
- Behavioral and Systems Neuroscience, Dept. of Psychology, Rutgers Univ., Piscataway, NJ
- L Co
- Behavioral and Systems Neuroscience, Dept. of Psychology, Rutgers Univ., Piscataway, NJ
- M Sumner
- Behavioral and Systems Neuroscience, Dept. of Psychology, Rutgers Univ., Piscataway, NJ
- T Tchou
- Behavioral and Systems Neuroscience, Dept. of Psychology, Rutgers Univ., Piscataway, NJ
- K M Bieszczad
- Neuroscience Graduate Program, Rutgers Univ., Piscataway, NJ
- Behavioral and Systems Neuroscience, Dept. of Psychology, Rutgers Univ., Piscataway, NJ
- Rutgers Center for Cognitive Science, Rutgers Univ., Piscataway, NJ
- Dept. of Otolaryngology-Head and Neck Surgery, Rutgers Robert Wood Johnson Medical School, New Brunswick, NJ
7. Loeb GE. Remembrance of things perceived: Adding thalamocortical function to artificial neural networks. Front Integr Neurosci 2023; 17:1108271. PMID: 36959924; PMCID: PMC10027940; DOI: 10.3389/fnint.2023.1108271.
Abstract
Recent research has illuminated the complexity and importance of the thalamocortical system but it has been difficult to identify what computational functions it performs. Meanwhile, deep-learning artificial neural networks (ANNs) based on bio-inspired models of purely cortical circuits have achieved surprising success solving sophisticated cognitive problems associated historically with human intelligence. Nevertheless, the limitations and shortcomings of artificial intelligence (AI) based on such ANNs are becoming increasingly clear. This review considers how the addition of thalamocortical connectivity and its putative functions related to cortical attention might address some of those shortcomings. Such bio-inspired models are now providing both testable theories of biological cognition and improved AI technology, much of which is happening outside the usual academic venues.
8. Nakanishi M, Nemoto M, Kawai HD. Cortical nicotinic enhancement of tone-evoked heightened activities and subcortical nicotinic enlargement of activated areas in mouse auditory cortex. Neurosci Res 2022; 181:55-65. DOI: 10.1016/j.neures.2022.04.001.
9. Auerbach BD, Gritton HJ. Hearing in Complex Environments: Auditory Gain Control, Attention, and Hearing Loss. Front Neurosci 2022; 16:799787. PMID: 35221899; PMCID: PMC8866963; DOI: 10.3389/fnins.2022.799787.
Abstract
Listening in noisy or complex sound environments is difficult for individuals with normal hearing and can be a debilitating impairment for those with hearing loss. Extracting meaningful information from a complex acoustic environment requires the ability to accurately encode specific sound features under highly variable listening conditions and segregate distinct sound streams from multiple overlapping sources. The auditory system employs a variety of mechanisms to achieve this auditory scene analysis. First, neurons across levels of the auditory system exhibit compensatory adaptations to their gain and dynamic range in response to prevailing sound stimulus statistics in the environment. These adaptations allow for robust representations of sound features that are to a large degree invariant to the level of background noise. Second, listeners can selectively attend to a desired sound target in an environment with multiple sound sources. This selective auditory attention is another form of sensory gain control, enhancing the representation of an attended sound source while suppressing responses to unattended sounds. This review will examine both “bottom-up” gain alterations in response to changes in environmental sound statistics as well as “top-down” mechanisms that allow for selective extraction of specific sound features in a complex auditory scene. Finally, we will discuss how hearing loss interacts with these gain control mechanisms, and the adaptive and/or maladaptive perceptual consequences of this plasticity.
Affiliation(s)
- Benjamin D. Auerbach
- Department of Molecular and Integrative Physiology, Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana-Champaign, Urbana, IL, United States
- Neuroscience Program, University of Illinois at Urbana-Champaign, Urbana, IL, United States
- Howard J. Gritton
- Neuroscience Program, University of Illinois at Urbana-Champaign, Urbana, IL, United States
- Department of Comparative Biosciences, University of Illinois at Urbana-Champaign, Urbana, IL, United States
- Department of Bioengineering, University of Illinois at Urbana-Champaign, Urbana, IL, United States
10. Cai H, Dent ML. Dimensionally Specific Attention Capture in Birds Performing Auditory Streaming Task. J Assoc Res Otolaryngol 2022; 23:241-252. PMID: 34988866; DOI: 10.1007/s10162-021-00825-z.
Abstract
Previous studies in budgerigars (Melopsittacus undulatus) have indicated that they experience attention capture in a qualitatively similar way to humans. Here, we apply a similar objective auditory streaming paradigm, using modified budgerigar vocalizations instead of ABAB-patterned pure tones in the sound sequences. The birds were trained to respond to deviants in the target stream while ignoring the distractors in the background stream. The background distractor could vary among five different categories and two different sequential positions, while the target deviants could randomly appear at five different sequential positions and vary among two different categories. We found that unpredictable background distractors deteriorated birds' sensitivity to the target deviants. Compared to conditions where the background distractor appeared right before the target deviant, the attention capture effect decayed in conditions where the background distractor appeared earlier. In contrast to results from the same paradigm using pure tones, the results here are evidence for a faster recovery from attention capture using modified vocalization segments. We found that the temporally modulated background distractor captured birds' attention more and deteriorated birds' performance more than other categories of background distractor, as the temporally modulated target deviant enabled the birds to focus their attention toward the temporal modulation dimension. However, unlike humans, birds have a lower tolerance for suppressing distractors from the same feature dimension as the targets, as evidenced by higher false alarm rates for the temporally modulated distractor than for distractors from other feature dimensions.
Affiliation(s)
- Huaizhen Cai
- Department of Psychology, University at Buffalo, The State University of New York, Buffalo, NY, USA
- Micheal L Dent
- Department of Psychology, University at Buffalo, The State University of New York, Buffalo, NY, USA.
11. AIM: A network model of attention in auditory cortex. PLoS Comput Biol 2021; 17:e1009356. PMID: 34449761; PMCID: PMC8462696; DOI: 10.1371/journal.pcbi.1009356.
Abstract
Attentional modulation of cortical networks is critical for the cognitive flexibility required to process complex scenes. Current theoretical frameworks for attention are based almost exclusively on studies in visual cortex, where attentional effects are typically modest and excitatory. In contrast, attentional effects in auditory cortex can be large and suppressive. A theoretical framework for explaining attentional effects in auditory cortex is lacking, preventing a broader understanding of cortical mechanisms underlying attention. Here, we present a cortical network model of attention in primary auditory cortex (A1). A key mechanism in our network is attentional inhibitory modulation (AIM) of cortical inhibitory neurons. In this mechanism, top-down inhibitory neurons disinhibit bottom-up cortical circuits, a prominent circuit motif observed in sensory cortex. Our results reveal that the same underlying mechanisms in the AIM network can explain diverse attentional effects on both spatial and frequency tuning in A1. We find that a dominant effect of disinhibition on cortical tuning is suppressive, consistent with experimental observations. Functionally, the AIM network may play a key role in solving the cocktail party problem. We demonstrate how attention can guide the AIM network to monitor an acoustic scene, select a specific target, or switch to a different target, providing flexible outputs for solving the cocktail party problem. Selective attention plays a key role in how we navigate our everyday lives. For example, at a cocktail party, we can attend to a friend's speech amidst other speakers, music, and background noise. In stark contrast, hundreds of millions of people with hearing impairment and other disorders find such environments overwhelming and debilitating. Understanding the mechanisms underlying selective attention may lead to breakthroughs in improving the quality of life for those negatively affected.
Here, we propose a mechanistic network model of attention in primary auditory cortex based on attentional inhibitory modulation (AIM). In the AIM model, attention targets specific cortical inhibitory neurons, which then modulate local cortical circuits to emphasize a particular feature of sounds and suppress competing features. We show that the AIM model can account for experimental observations across different species and stimulus domains. We also demonstrate that the same mechanisms can enable listeners to flexibly switch between attending to specific target sounds and monitoring the environment in complex acoustic scenes, such as a cocktail party. The AIM network provides a theoretical framework which can work in tandem with new experiments to help unravel cortical circuits underlying attention.
12. Souffi S, Nodal FR, Bajo VM, Edeline JM. When and How Does the Auditory Cortex Influence Subcortical Auditory Structures? New Insights About the Roles of Descending Cortical Projections. Front Neurosci 2021; 15:690223. PMID: 34413722; PMCID: PMC8369261; DOI: 10.3389/fnins.2021.690223.
Abstract
For decades, the corticofugal descending projections have been anatomically well described, but their functional role remains a puzzling question. In this review, we will first describe the contributions of neuronal networks in representing communication sounds in various types of degraded acoustic conditions from the cochlear nucleus to the primary and secondary auditory cortex. In such situations, the discrimination abilities of collicular and thalamic neurons are clearly better than those of cortical neurons, although the latter remain very little affected by degraded acoustic conditions. Second, we will report the functional effects resulting from activating or inactivating corticofugal projections on functional properties of subcortical neurons. In general, modest effects have been observed in anesthetized and in awake, passively listening animals. In contrast, in behavioral tasks including challenging conditions, behavioral performance was severely reduced by removing or transiently silencing the corticofugal descending projections. This suggests that the discriminative abilities of subcortical neurons may be sufficient in many acoustic situations. It is only in particularly challenging situations, whether due to task difficulty and/or degraded acoustic conditions, that the corticofugal descending connections bring additional abilities. Here, we propose that it is both the top-down influences from the prefrontal cortex and those from the neuromodulatory systems that allow the cortical descending projections to impact behavioral performance by reshaping the functional circuitry of subcortical structures. We propose potential scenarios to explain how, and under which circumstances, these projections impact subcortical processing and behavioral responses.
Affiliation(s)
- Samira Souffi
- Department of Integrative and Computational Neurosciences, Paris-Saclay Institute of Neuroscience (NeuroPSI), UMR CNRS 9197, Paris-Saclay University, Orsay, France
- Fernando R Nodal
- Department of Physiology, Anatomy and Genetics, Medical Sciences Division, University of Oxford, Oxford, United Kingdom
- Victoria M Bajo
- Department of Physiology, Anatomy and Genetics, Medical Sciences Division, University of Oxford, Oxford, United Kingdom
- Jean-Marc Edeline
- Department of Integrative and Computational Neurosciences, Paris-Saclay Institute of Neuroscience (NeuroPSI), UMR CNRS 9197, Paris-Saclay University, Orsay, France

13
Homma NY, Atencio CA, Schreiner CE. Plasticity of Multidimensional Receptive Fields in Core Rat Auditory Cortex Directed by Sound Statistics. Neuroscience 2021; 467:150-170. [PMID: 33951506 DOI: 10.1016/j.neuroscience.2021.04.028]
Abstract
Sensory cortical neurons can nonlinearly integrate a wide range of inputs. The outcome of this nonlinear process can be approximated by more than one receptive field component, or filter, to characterize the ensuing stimulus preference. The functional properties of multidimensional filters are, however, not well understood. Here we estimated two spectrotemporal receptive fields (STRFs) per neuron using maximally informative dimension analysis. We compared their temporal and spectral modulation properties and determined the stimulus information captured by the two STRFs in core rat auditory cortical fields, primary auditory cortex (A1) and ventral auditory field (VAF). The first STRF is the dominant filter and acts as a sound feature detector in both fields. The second STRF is less feature-specific, prefers lower modulations, and carries less spike information than the first STRF. The information jointly captured by the two STRFs was larger than the sum of the information captured by the individual STRFs, reflecting nonlinear interactions of the two filters. This information gain was larger in A1. We next determined how the acoustic environment affects the structure and relationship of these two STRFs. Rats were exposed to moderate levels of spectrotemporally modulated noise during development. Noise exposure strongly altered the spectrotemporal preference of the first STRF in both cortical fields. The interaction between the two STRFs was reduced by noise exposure in A1 but not in VAF. The results reveal new functional distinctions between A1 and VAF, indicating that (i) A1 has stronger interactions of the two STRFs than VAF, (ii) noise exposure diminishes the representation of the modulation parameters contained in the noise more strongly for the first STRF in both fields, and (iii) plasticity induced by noise exposure can affect the strength of filter interactions in A1.
Taken together, ascertaining two STRFs per neuron enhances the understanding of cortical information processing and plasticity effects in core auditory cortex.
Affiliation(s)
- Natsumi Y Homma
- Coleman Memorial Laboratory, Department of Otolaryngology - Head and Neck Surgery, University of California San Francisco, San Francisco, USA; Center for Integrative Neuroscience, University of California San Francisco, San Francisco, USA
- Craig A Atencio
- Coleman Memorial Laboratory, Department of Otolaryngology - Head and Neck Surgery, University of California San Francisco, San Francisco, USA
- Christoph E Schreiner
- Coleman Memorial Laboratory, Department of Otolaryngology - Head and Neck Surgery, University of California San Francisco, San Francisco, USA; Center for Integrative Neuroscience, University of California San Francisco, San Francisco, USA

14
Saderi D, Schwartz ZP, Heller CR, Pennington JR, David SV. Dissociation of task engagement and arousal effects in auditory cortex and midbrain. eLife 2021; 10:e60153. [PMID: 33570493 PMCID: PMC7909948 DOI: 10.7554/elife.60153]
Abstract
Both generalized arousal and engagement in a specific task influence sensory neural processing. To isolate effects of these state variables in the auditory system, we recorded single-unit activity from primary auditory cortex (A1) and inferior colliculus (IC) of ferrets during a tone detection task, while monitoring arousal via changes in pupil size. We used a generalized linear model to assess the influence of task engagement and pupil size on sound-evoked activity. In both areas, these two variables affected independent neural populations. Pupil size effects were more prominent in IC, while pupil and task engagement effects were equally likely in A1. Task engagement was correlated with larger pupil size; thus, some apparent effects of task engagement should in fact be attributed to fluctuations in pupil size. These results indicate a hierarchy of auditory processing, where generalized arousal enhances activity in midbrain, and effects specific to task engagement become more prominent in cortex.
Affiliation(s)
- Daniela Saderi
- Oregon Hearing Research Center, Oregon Health and Science University, Portland, United States
- Neuroscience Graduate Program, Oregon Health and Science University, Portland, United States
- Zachary P Schwartz
- Oregon Hearing Research Center, Oregon Health and Science University, Portland, United States
- Neuroscience Graduate Program, Oregon Health and Science University, Portland, United States
- Charles R Heller
- Oregon Hearing Research Center, Oregon Health and Science University, Portland, United States
- Neuroscience Graduate Program, Oregon Health and Science University, Portland, United States
- Jacob R Pennington
- Department of Mathematics and Statistics, Washington State University, Vancouver, United States
- Stephen V David
- Oregon Hearing Research Center, Oregon Health and Science University, Portland, United States

15
Auditory attentional filter in the absence of masking noise. Atten Percept Psychophys 2021; 83:1737-1751. [PMID: 33389676 DOI: 10.3758/s13414-020-02210-z]
Abstract
Signals containing attended frequencies are facilitated while those with unexpected frequencies are suppressed by an auditory filtering process. The neurocognitive mechanism underlying the auditory attentional filter is, however, poorly understood. The olivocochlear bundle (OCB), a brainstem neural circuit that is part of the efferent system, has been suggested to be partly responsible for the filtering via its noise-dependent antimasking effect. The current study examined the role of the OCB in attentional filtering, particularly the validity of the antimasking hypothesis, by comparing attentional filters measured in quiet and in the presence of background noise in a group of normal-hearing listeners. Filters obtained in both conditions were comparable, suggesting that the presence of background noise is not crucial for attentional filter generation. In addition, comparison of frequency-specific changes of the cue-evoked enhancement component of filters in quiet and noise also did not reveal any major contribution of background noise to the cue effect. These findings argue against the involvement of an antimasking effect in the attentional process. Instead of the antimasking effect mediated via medial olivocochlear fibers, results from current and earlier studies can be explained by frequency-specific modulation of afferent spontaneous activity by lateral olivocochlear fibers. It is proposed that the activity of these lateral fibers could be driven by top-down cortical control via a noise-independent mechanism. SIGNIFICANCE: The neural basis for auditory attentional filter remains a fundamental but poorly understood area in auditory neuroscience. The efferent olivocochlear pathway that projects from the brainstem back to the cochlea has been suggested to mediate the attentional effect via its noise-dependent antimasking effect. 
The current study demonstrates that filter generation is largely independent of background noise and is therefore unlikely to be mediated by the olivocochlear brainstem reflex. It is proposed that the entire cortico-olivocochlear system might instead be used to alter hearing sensitivity during focused attention via frequency-specific modulation of afferent spontaneous activity.
16
Rocchi F, Ramachandran R. Foreground stimuli and task engagement enhance neuronal adaptation to background noise in the inferior colliculus of macaques. J Neurophysiol 2020; 124:1315-1326. [PMID: 32937088 DOI: 10.1152/jn.00153.2020]
Abstract
Auditory neuronal responses are modified by background noise. Inferior colliculus (IC) neuronal responses adapt to the most frequent sound level within an acoustic scene (adaptation to stimulus statistics), a mechanism that may preserve neuronal and behavioral thresholds for signal detection. However, it is still unclear whether the presence of foreground stimuli and/or task involvement can modify neuronal adaptation. To investigate how task engagement interacts with this mechanism, we compared the response of IC neurons to background noise, which caused adaptation to stimulus statistics, while macaque monkeys performed a masked tone detection task (task-driven condition) with responses recorded when the same background noise was presented alone (passive listening condition). In the task-driven condition, monkeys performed a Go/No-Go task while 50-ms tones were embedded within an adaptation-inducing continuous background noise whose levels changed every 50 ms and were drawn from a probability distribution. The adaptation to noise stimulus statistics in IC neuronal responses was significantly enhanced in the task-driven condition compared with the passive listening condition, showing that foreground stimuli and/or task engagement can modify IC neuronal responses. Additionally, the response of IC neurons to noise was significantly affected by the preceding sensory information (history effect) regardless of task involvement. These studies show that dynamic range adaptation in the IC preserves behavioral and neurometric thresholds irrespective of noise type, and that neuronal activity depends on task-related factors at subcortical levels of processing. NEW & NOTEWORTHY Auditory neuronal responses are influenced by maskers and distractors. However, it is still unclear whether the neuronal sensitivity to the masker stimulus is influenced by task-dependent factors.
Our study represents one of the first attempts to investigate how task involvement influences the neural representation of background sounds in the subcortical, midbrain auditory neurons of behaving animals.
Affiliation(s)
- Francesca Rocchi
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, Tennessee
- Ramnarayan Ramachandran
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, Tennessee

17
Ferreiro DN, Amaro D, Schmidtke D, Sobolev A, Gundi P, Belliveau L, Sirota A, Grothe B, Pecka M. Sensory Island Task (SIT): A New Behavioral Paradigm to Study Sensory Perception and Neural Processing in Freely Moving Animals. Front Behav Neurosci 2020; 14:576154. [PMID: 33100981 PMCID: PMC7546252 DOI: 10.3389/fnbeh.2020.576154]
Abstract
A central function of sensory systems is the gathering of information about dynamic interactions with the environment during self-motion. To determine whether modulation of a sensory cue was externally caused or a result of self-motion is fundamental to perceptual invariance and requires the continuous update of sensory processing about recent movements. This process is highly context-dependent and crucial for perceptual performances such as decision-making and sensory object formation. Yet despite its fundamental ecological role, voluntary self-motion is rarely incorporated in perceptual or neurophysiological investigations of sensory processing in animals. Here, we present the Sensory Island Task (SIT), a new freely moving search paradigm to study sensory processing and perception. In SIT, animals explore an open-field arena to find a sensory target relying solely on changes in the presented stimulus, which is controlled by closed-loop position tracking in real-time. Within a few sessions, animals are trained via positive reinforcement to search for a particular area in the arena (“target island”), which triggers the presentation of the target stimulus. The location of the target island is randomized across trials, making the modulated stimulus feature the only informative cue for task completion. Animals report detection of the target stimulus by remaining within the island for a defined time (“sit-time”). Multiple “non-target” islands can be incorporated to test psychometric discrimination and identification performance. We exemplify the suitability of SIT for rodents (Mongolian gerbil, Meriones unguiculatus) and small primates (mouse lemur, Microcebus murinus) and for studying various sensory perceptual performances (auditory frequency discrimination, sound source localization, visual orientation discrimination). 
Furthermore, we show that pairing SIT with chronic electrophysiological recordings allows revealing neuronal signatures of sensory processing under ecologically relevant conditions during goal-oriented behavior. In conclusion, SIT represents a flexible and easily implementable behavioral paradigm for mammals that combines self-motion and natural exploratory behavior to study sensory sensitivity and decision-making and their underlying neuronal processing.
Affiliation(s)
- Dardo N Ferreiro
- Division of Neurobiology, Department Biology II, Ludwig-Maximilians-Universität München, Munich, Germany; Department of General Psychology and Education, Ludwig-Maximilians-Universität München, Munich, Germany
- Diana Amaro
- Division of Neurobiology, Department Biology II, Ludwig-Maximilians-Universität München, Munich, Germany; Graduate School of Systemic Neurosciences, Ludwig-Maximilians-Universität München, Munich, Germany
- Daniel Schmidtke
- Institute of Zoology, University of Veterinary Medicine Hannover, Hanover, Germany
- Andrey Sobolev
- Graduate School of Systemic Neurosciences, Ludwig-Maximilians-Universität München, Munich, Germany
- Paula Gundi
- Division of Neurobiology, Department Biology II, Ludwig-Maximilians-Universität München, Munich, Germany; Graduate School of Systemic Neurosciences, Ludwig-Maximilians-Universität München, Munich, Germany
- Lucile Belliveau
- Division of Neurobiology, Department Biology II, Ludwig-Maximilians-Universität München, Munich, Germany
- Anton Sirota
- Faculty of Medicine, Bernstein Center for Computational Neuroscience Munich, Munich Cluster of Systems Neurology (SyNergy), Ludwig-Maximilians-Universität München, Munich, Germany
- Benedikt Grothe
- Division of Neurobiology, Department Biology II, Ludwig-Maximilians-Universität München, Munich, Germany; Graduate School of Systemic Neurosciences, Ludwig-Maximilians-Universität München, Munich, Germany
- Michael Pecka
- Division of Neurobiology, Department Biology II, Ludwig-Maximilians-Universität München, Munich, Germany

18
A mathematical model of the interaction between bottom-up and top-down attention controllers in response to a target and a distractor in human beings. Cogn Syst Res 2019. [DOI: 10.1016/j.cogsys.2019.07.007]
19
Lopez Espejo M, Schwartz ZP, David SV. Spectral tuning of adaptation supports coding of sensory context in auditory cortex. PLoS Comput Biol 2019; 15:e1007430. [PMID: 31626624 PMCID: PMC6821137 DOI: 10.1371/journal.pcbi.1007430]
Abstract
Perception of vocalizations and other behaviorally relevant sounds requires integrating acoustic information over hundreds of milliseconds. Sound-evoked activity in auditory cortex typically has much shorter latency, but the acoustic context, i.e., sound history, can modulate sound-evoked activity over longer periods. Contextual effects are attributed to modulatory phenomena, such as stimulus-specific adaptation and contrast gain control. However, an encoding model that links context to natural sound processing has yet to be established. We tested whether a model in which spectrally tuned inputs undergo adaptation mimicking short-term synaptic plasticity (STP) can account for contextual effects during natural sound processing. Single-unit activity was recorded from primary auditory cortex of awake ferrets during presentation of noise with natural temporal dynamics and fully natural sounds. Encoding properties were characterized by a standard linear-nonlinear spectro-temporal receptive field (LN) model and variants that incorporated STP-like adaptation. In the adapting models, STP was applied either globally across all input spectral channels or locally to subsets of channels. For most neurons, models incorporating local STP predicted neural activity as well as or better than LN and global STP models. The strength of nonlinear adaptation varied across neurons. Within neurons, adaptation was generally stronger for spectral channels with excitatory than inhibitory gain. Neurons showing improved STP model performance also tended to undergo stimulus-specific adaptation, suggesting a common mechanism for these phenomena. When STP models were compared between passive and active behavior conditions, response gain often changed, but average STP parameters were stable.
Thus, spectrally and temporally heterogeneous adaptation, subserved by a mechanism with STP-like dynamics, may support representation of the complex spectro-temporal patterns that comprise natural sounds across wide-ranging sensory contexts.
Affiliation(s)
- Mateo Lopez Espejo
- Neuroscience Graduate Program, Oregon Health and Science University, Portland, OR, United States of America
- Zachary P. Schwartz
- Neuroscience Graduate Program, Oregon Health and Science University, Portland, OR, United States of America
- Stephen V. David
- Oregon Hearing Research Center, Oregon Health and Science University, Portland, OR, United States of America

20
Demarchi G, Sanchez G, Weisz N. Automatic and feature-specific prediction-related neural activity in the human auditory system. Nat Commun 2019; 10:3440. [PMID: 31371713 PMCID: PMC6672009 DOI: 10.1038/s41467-019-11440-1]
Abstract
Prior experience enables the formation of expectations of upcoming sensory events. However, in the auditory modality, it is not known whether prediction-related neural signals carry feature-specific information. Here, using magnetoencephalography (MEG), we examined whether predictions of future auditory stimuli carry tonotopically specific information. Participants passively listened to sound sequences of four carrier frequencies (tones) with a fixed presentation rate, ensuring strong temporal expectations of when the next stimulus would occur. Expectation of which frequency would occur was parametrically modulated across the sequences, and sounds were occasionally omitted. We show that increasing the regularity of the sequence boosts carrier-frequency-specific neural activity patterns during both the anticipatory and omission periods, indicating that prediction-related neural activity is indeed feature-specific. Our results illustrate that even without bottom-up input, auditory predictions can activate tonotopically specific templates. After listening to a predictable sequence of sounds, we can anticipate and predict the next sound in the sequence. Here, the authors show that during expectation of a sound, the brain generates neural activity matching that which is produced by actually hearing the same sound.
Affiliation(s)
- Gianpaolo Demarchi
- Centre for Cognitive Neuroscience and Division of Physiological Psychology, University of Salzburg, Hellbrunnerstraße 34, 5020 Salzburg, Austria
- Gaëtan Sanchez
- Centre for Cognitive Neuroscience and Division of Physiological Psychology, University of Salzburg, Hellbrunnerstraße 34, 5020 Salzburg, Austria; Lyon Neuroscience Research Center, Brain Dynamics and Cognition Team, INSERM UMRS 1028, CNRS UMR 5292, Université Claude Bernard Lyon 1, Université de Lyon, F-69000 Lyon, France
- Nathan Weisz
- Centre for Cognitive Neuroscience and Division of Physiological Psychology, University of Salzburg, Hellbrunnerstraße 34, 5020 Salzburg, Austria

21
Conde T, Gonçalves ÓF, Pinheiro AP. Stimulus complexity matters when you hear your own voice: Attention effects on self-generated voice processing. Int J Psychophysiol 2018; 133:66-78. [PMID: 30114437 DOI: 10.1016/j.ijpsycho.2018.08.007]
Abstract
The ability to discriminate self- and non-self voice cues is a fundamental aspect of self-awareness and subserves self-monitoring during verbal communication. Nonetheless, the neurofunctional underpinnings of self-voice perception and recognition are still poorly understood. Moreover, how attention and stimulus complexity influence the processing and recognition of one's own voice remains to be clarified. Using an oddball task, the current study investigated how self-relevance and stimulus type interact during selective attention to voices, and how they affect the representation of regularity during voice perception. Event-related potentials (ERPs) were recorded from 18 right-handed males. Pre-recorded self-generated (SGV) and non-self (NSV) voices, consisting of a nonverbal vocalization (vocalization condition) or disyllabic word (word condition), were presented as either standard or target stimuli in different experimental blocks. The results showed increased N2 amplitude to SGV relative to NSV stimuli. Stimulus type modulated later processing stages only: P3 amplitude was increased for SGV relative to NSV words, whereas no differences between SGV and NSV were observed in the case of vocalizations. Moreover, SGV standards elicited reduced N1 and P2 amplitude relative to NSV standards. These findings revealed that the self-voice grabs more attention when listeners are exposed to words but not vocalizations. Further, they indicate that detection of regularity in an auditory stream is facilitated for one's own voice at early processing stages. Together, they demonstrate that self-relevance affects attention to voices differently as a function of stimulus type.
Affiliation(s)
- Tatiana Conde
- Faculdade de Psicologia, Universidade de Lisboa, Lisbon, Portugal; Neuropsychophysiology Lab, CIPsi, School of Psychology, University of Minho, Braga, Portugal
- Óscar F Gonçalves
- Neuropsychophysiology Lab, CIPsi, School of Psychology, University of Minho, Braga, Portugal; Spaulding Center of Neuromodulation, Department of Physical Medicine & Rehabilitation, Spaulding Rehabilitation Hospital & Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA; Bouvé College of Health Sciences, Northeastern University, Boston, MA, USA
- Ana P Pinheiro
- Faculdade de Psicologia, Universidade de Lisboa, Lisbon, Portugal; Neuropsychophysiology Lab, CIPsi, School of Psychology, University of Minho, Braga, Portugal; Cognitive Neuroscience Lab, Department of Psychiatry, Harvard Medical School, Boston, MA, USA

22
Lohse M, Bajo VM, King AJ. Development, organization and plasticity of auditory circuits: Lessons from a cherished colleague. Eur J Neurosci 2018; 49:990-1004. [PMID: 29804304 PMCID: PMC6519211 DOI: 10.1111/ejn.13979]
Abstract
Ray Guillery was a neuroscientist known primarily for his ground-breaking studies on the development of the visual pathways and subsequently on the nature of thalamocortical processing loops. The legacy of his work, however, extends well beyond the visual system. Thanks to Ray Guillery's pioneering anatomical studies, the ferret has become a widely used animal model for investigating the development and plasticity of sensory processing. This includes our own work on the auditory system, where experiments in ferrets have revealed the role of sensory experience during development in shaping the neural circuits responsible for sound localization, as well as the capacity of the mature brain to adapt to changes in inputs resulting from hearing loss. Our research has also built on Ray Guillery's ideas about the possible functions of the massive descending projections that link sensory areas of the cerebral cortex to the thalamus and other subcortical targets, by demonstrating a role for corticothalamic feedback in the perception of complex sounds and for corticollicular projection neurons in learning to accommodate altered auditory spatial cues. Finally, his insights into the organization and functions of transthalamic corticocortical connections have inspired a raft of research, including by our own laboratory, which has attempted to identify how information flows through the thalamus.
Affiliation(s)
- Michael Lohse
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, UK
- Victoria M Bajo
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, UK
- Andrew J King
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, UK

23
Riecke L, Peters JC, Valente G, Poser BA, Kemper VG, Formisano E, Sorger B. Frequency-specific attentional modulation in human primary auditory cortex and midbrain. Neuroimage 2018; 174:274-287. [DOI: 10.1016/j.neuroimage.2018.03.038]
24
Malek S, Sperschneider K. Aftereffects of Spectrally Similar and Dissimilar Spectral Motion Adaptors in the Tritone Paradox. Front Psychol 2018; 9:677. [PMID: 29867653 PMCID: PMC5953344 DOI: 10.3389/fpsyg.2018.00677]
Affiliation(s)
- Stephanie Malek
- Psychology Department, Martin Luther University Halle-Wittenberg, Halle, Germany
25
Irvine DRF. Auditory perceptual learning and changes in the conceptualization of auditory cortex. Hear Res 2018; 366:3-16. [PMID: 29551308 DOI: 10.1016/j.heares.2018.03.011]
Abstract
Perceptual learning, improvement in discriminative ability as a consequence of training, is one of the forms of sensory system plasticity that has driven profound changes in our conceptualization of sensory cortical function. Psychophysical and neurophysiological studies of auditory perceptual learning have indicated that the characteristics of the learning, and by implication the nature of the underlying neural changes, are highly task specific. Some studies in animals have indicated that recruitment of neurons to the population responding to the training stimuli, and hence an increase in the so-called cortical "area of representation" of those stimuli, is the substrate of improved performance, but such changes have not been observed in other studies. A possible reconciliation of these conflicting results is provided by evidence that changes in area of representation constitute a transient stage in the processes underlying perceptual learning. This expansion-renormalization hypothesis is supported by evidence from studies of the learning of motor skills, another form of procedural learning, but leaves open the nature of the permanent neural substrate of improved performance. Other studies have suggested that the substrate might be reduced response variability - a decrease in internal noise. Neuroimaging studies in humans have also provided compelling evidence that training results in long-term changes in auditory cortical function and in the auditory brainstem frequency-following response. Musical training provides a valuable model, but the evidence it provides is qualified by the fact that most such training is multimodal and sensorimotor, and that few of the studies are experimental and allow control over confounding variables.
More generally, the overwhelming majority of experimental studies of the various forms of auditory perceptual learning have established the co-occurrence of neural and perceptual changes, but have not established that the former are causally related to the latter. Important forms of perceptual learning in humans are those involved in language acquisition and in the improvement in speech perception performance of post-lingually deaf cochlear implantees over the months following implantation. The development of a range of auditory training programs has focused interest on the factors determining the extent to which perceptual learning is specific or generalises to tasks other than those used in training. The context specificity demonstrated in a number of studies of perceptual learning suggests a multiplexing model, in which learning relating to a particular stimulus attribute depends on a subset of the diverse inputs to a given cortical neuron being strengthened, and different subsets being gated by top-down influences. This hypothesis avoids the difficulty of balancing system stability with plasticity, which is a problem for recruitment hypotheses. The characteristics of auditory perceptual learning reflect the fact that auditory cortex forms part of distributed networks that integrate the representation of auditory stimuli with attention, decision, and reward processes.
Affiliation(s)
- Dexter R F Irvine
- Bionics Institute, East Melbourne, Victoria 3002, Australia; School of Psychological Sciences, Monash University, Victoria 3800, Australia.
26
Riecke L, Peters JC, Valente G, Kemper VG, Formisano E, Sorger B. Frequency-Selective Attention in Auditory Scenes Recruits Frequency Representations Throughout Human Superior Temporal Cortex. Cereb Cortex 2018; 27:3002-3014. [PMID: 27230215 DOI: 10.1093/cercor/bhw160]
Abstract
A sound of interest may be tracked amid other salient sounds by focusing attention on its characteristic features including its frequency. Functional magnetic resonance imaging findings have indicated that frequency representations in human primary auditory cortex (AC) contribute to this feat. However, attentional modulations were examined at relatively low spatial and spectral resolutions, and frequency-selective contributions outside the primary AC could not be established. To address these issues, we compared blood oxygenation level-dependent (BOLD) responses in the superior temporal cortex of human listeners while they identified single frequencies versus listened selectively for various frequencies within a multifrequency scene. Using best-frequency mapping, we observed that the detailed spatial layout of attention-induced BOLD response enhancements in primary AC follows the tonotopy of stimulus-driven frequency representations-analogous to the "spotlight" of attention enhancing visuospatial representations in retinotopic visual cortex. Moreover, using an algorithm trained to discriminate stimulus-driven frequency representations, we could successfully decode the focus of frequency-selective attention from listeners' BOLD response patterns in nonprimary AC. Our results indicate that the human brain facilitates selective listening to a frequency of interest in a scene by reinforcing the fine-grained activity pattern throughout the entire superior temporal cortex that would be evoked if that frequency was present alone.
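The decoding step described above can be illustrated with a minimal sketch, assuming a simple nearest-template scheme: correlate a response pattern with stimulus-driven frequency templates and pick the best match. The frequency labels, pattern size, and noise level below are invented for illustration, not the study's data or classifier.

```python
import numpy as np

def decode_attended_frequency(templates, pattern):
    """Nearest-template decoder: return the frequency whose stimulus-driven
    template correlates best with the observed response pattern."""
    scores = {f: np.corrcoef(t, pattern)[0, 1] for f, t in templates.items()}
    return max(scores, key=scores.get)

rng = np.random.default_rng(0)
# Hypothetical stimulus-driven patterns (50 "voxels") for three frequencies
templates = {f: rng.normal(size=50) for f in (250, 1000, 4000)}
# Attention-modulated pattern: a noisy copy of the 1000 Hz template
attended = templates[1000] + 0.3 * rng.normal(size=50)
print(decode_attended_frequency(templates, attended))
```

With real data the templates would come from single-frequency mapping runs and the decoder would be cross-validated across independent scans.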
Affiliation(s)
- Lars Riecke
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, 6229 EV Maastricht, The Netherlands
- Judith C Peters
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, 6229 EV Maastricht, The Netherlands; Netherlands Institute for Neuroscience, Institute of the Royal Netherlands Academy of Arts and Sciences (KNAW), 1105 BA Amsterdam, The Netherlands
- Giancarlo Valente
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, 6229 EV Maastricht, The Netherlands
- Valentin G Kemper
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, 6229 EV Maastricht, The Netherlands
- Elia Formisano
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, 6229 EV Maastricht, The Netherlands
- Bettina Sorger
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, 6229 EV Maastricht, The Netherlands
27
Neural Decoding of Bistable Sounds Reveals an Effect of Intention on Perceptual Organization. J Neurosci 2018; 38:2844-2853. [PMID: 29440556 PMCID: PMC5852662 DOI: 10.1523/jneurosci.3022-17.2018]
Abstract
Auditory signals arrive at the ear as a mixture that the brain must decompose into distinct sources based to a large extent on acoustic properties of the sounds. An important question concerns whether listeners have voluntary control over how many sources they perceive. This has been studied using pure high (H) and low (L) tones presented in the repeating pattern HLH-HLH-, which can form a bistable percept heard either as an integrated whole (HLH-) or as segregated into high (H-H-) and low (-L-) sequences. Although instructing listeners to try to integrate or segregate sounds affects reports of what they hear, this could reflect a response bias rather than a perceptual effect. We had human listeners (15 males, 12 females) continuously report their perception of such sequences and recorded neural activity using MEG. During neutral listening, a classifier trained on patterns of neural activity distinguished between periods of integrated and segregated perception. In other conditions, participants tried to influence their perception by allocating attention either to the whole sequence or to a subset of the sounds. They reported hearing the desired percept for a greater proportion of time than when listening neutrally. Critically, neural activity supported these reports; stimulus-locked brain responses in auditory cortex were more likely to resemble the signature of segregation when participants tried to hear segregation than when attempting to perceive integration. These results indicate that listeners can influence how many sound sources they perceive, as reflected in neural responses that track both the input and its perceptual organization. SIGNIFICANCE STATEMENT Can we consciously influence our perception of the external world? We address this question using sound sequences that can be heard either as coming from a single source or as two distinct auditory streams. 
Listeners reported spontaneous changes in their perception between these two interpretations while we recorded neural activity to identify signatures of such integration and segregation. They also indicated that they could, to some extent, choose between these alternatives. This claim was supported by corresponding changes in responses in auditory cortex. By linking neural and behavioral correlates of perception, we demonstrate that the number of objects that we perceive can depend not only on the physical attributes of our environment, but also on how we intend to experience it.
28
Attention Is Required for Knowledge-Based Sequential Grouping: Insights from the Integration of Syllables into Words. J Neurosci 2017; 38:1178-1188. [PMID: 29255005 DOI: 10.1523/jneurosci.2606-17.2017]
Abstract
How the brain groups sequential sensory events into chunks is a fundamental question in cognitive neuroscience. This study investigates whether top-down attention or specific tasks are required for the brain to apply lexical knowledge to group syllables into words. Neural responses tracking the syllabic and word rhythms of a rhythmic speech sequence were concurrently monitored using electroencephalography (EEG). The participants performed different tasks, attending to either the rhythmic speech sequence or a distractor, which was another speech stream or a nonlinguistic auditory/visual stimulus. Attention to speech, but not a lexical-meaning-related task, was required for reliable neural tracking of words, even when the distractor was a nonlinguistic stimulus presented cross-modally. Neural tracking of syllables, however, was reliably observed in all tested conditions. These results strongly suggest that neural encoding of individual auditory events (i.e., syllables) is automatic, while knowledge-based construction of temporal chunks (i.e., words) crucially relies on top-down attention. SIGNIFICANCE STATEMENT Why we cannot understand speech when not paying attention is an old question in psychology and cognitive neuroscience. Speech processing is a complex process that involves multiple stages, e.g., hearing and analyzing the speech sound, recognizing words, and combining words into phrases and sentences. The current study investigates which speech-processing stage is blocked when we do not listen carefully. We show that the brain can reliably encode syllables, basic units of speech sounds, even when we do not pay attention. Nevertheless, when distracted, the brain cannot group syllables into multisyllabic words, which are basic units for speech meaning. Therefore, the process of converting speech sound into meaning crucially relies on attention.
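Neural tracking of this kind is typically read out as spectral peaks at the syllable and word rates. The sketch below assumes, purely for illustration, syllables at 4 Hz and disyllabic words at 2 Hz, with simulated rather than recorded responses: a word-rate peak appears only when the 2 Hz component is tracked.

```python
import numpy as np

def spectral_peaks(signal, fs):
    """Return the frequencies (Hz) whose spectral amplitude stands far
    above the rest of the spectrum (crude frequency-tagging readout)."""
    spec = np.abs(np.fft.rfft(signal)) / len(signal)
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)
    thresh = spec.mean() + 4 * spec.std()
    return {float(f) for f in np.round(freqs[spec > thresh], 1)}

fs, dur = 100, 10                    # hypothetical sampling rate (Hz), duration (s)
t = np.arange(fs * dur) / fs
syllable_rate, word_rate = 4.0, 2.0  # assumed rates for the illustration
attended = np.sin(2 * np.pi * syllable_rate * t) + 0.8 * np.sin(2 * np.pi * word_rate * t)
ignored = np.sin(2 * np.pi * syllable_rate * t)
print(spectral_peaks(attended, fs))  # syllable and word rates
print(spectral_peaks(ignored, fs))   # syllable rate only
```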
29
Extensive Tonotopic Mapping across Auditory Cortex Is Recapitulated by Spectrally Directed Attention and Systematically Related to Cortical Myeloarchitecture. J Neurosci 2017; 37:12187-12201. [PMID: 29109238 PMCID: PMC5729191 DOI: 10.1523/jneurosci.1436-17.2017]
Abstract
Auditory selective attention is vital in natural soundscapes. But it is unclear how attentional focus on the primary dimension of auditory representation—acoustic frequency—might modulate basic auditory functional topography during active listening. In contrast to visual selective attention, which is supported by motor-mediated optimization of input across saccades and pupil dilation, the primate auditory system has fewer means of differentially sampling the world. This makes spectrally-directed endogenous attention a particularly crucial aspect of auditory attention. Using a novel functional paradigm combined with quantitative MRI, we establish in male and female listeners that human frequency-band-selective attention drives activation in both myeloarchitectonically estimated auditory core, and across the majority of tonotopically mapped nonprimary auditory cortex. The attentionally driven best-frequency maps show strong concordance with sensory-driven maps in the same subjects across much of the temporal plane, with poor concordance in areas outside traditional auditory cortex. There is significantly greater activation across most of auditory cortex when best frequency is attended, versus ignored; the same regions do not show this enhancement when attending to the least-preferred frequency band. Finally, the results demonstrate that there is spatial correspondence between the degree of myelination and the strength of the tonotopic signal across a number of regions in auditory cortex. Strong frequency preferences across tonotopically mapped auditory cortex spatially correlate with R1-estimated myeloarchitecture, indicating shared functional and anatomical organization that may underlie intrinsic auditory regionalization. SIGNIFICANCE STATEMENT Perception is an active process, especially sensitive to attentional state. Listeners direct auditory attention to track a violin's melody within an ensemble performance, or to follow a voice in a crowded cafe. 
Although diverse pathologies reduce quality of life by impacting such spectrally directed auditory attention, its neurobiological bases are unclear. We demonstrate that human primary and nonprimary auditory cortical activation is modulated by spectrally directed attention in a manner that recapitulates its tonotopic sensory organization. Further, the graded activation profiles evoked by single-frequency bands are correlated with attentionally driven activation when these bands are presented in complex soundscapes. Finally, we observe a strong concordance in the degree of cortical myelination and the strength of tonotopic activation across several auditory cortical regions.
30
Abstract
Over the last 30 years a wide range of manipulations of auditory input and experience have been shown to result in plasticity in auditory cortical and subcortical structures. The time course of plasticity ranges from very rapid stimulus-specific adaptation to longer-term changes associated with, for example, partial hearing loss or perceptual learning. Evidence for plasticity as a consequence of these and a range of other manipulations of auditory input and/or its significance is reviewed, with an emphasis on plasticity in adults and in the auditory cortex. The nature of the changes in auditory cortex associated with attention, memory and perceptual learning depends critically on task structure, reward contingencies, and learning strategy. Most forms of auditory system plasticity are adaptive, in that they serve to optimize auditory performance, prompting attempts to harness this plasticity for therapeutic purposes. However, plasticity associated with cochlear trauma and partial hearing loss appears to be maladaptive, and has been linked to tinnitus. Three important forms of human learning-related auditory system plasticity are those associated with language development, musical training, and improvement in performance with a cochlear implant. Almost all forms of plasticity involve changes in synaptic excitatory-inhibitory balance within existing patterns of connectivity. An attractive model applicable to a number of forms of learning-related plasticity is dynamic multiplexing by individual neurons, such that learning involving a particular stimulus attribute reflects a particular subset of the diverse inputs to a given neuron being gated by top-down influences. The plasticity evidence indicates that auditory cortex is a component of complex distributed networks that integrate the representation of auditory stimuli with attention, decision and reward processes.
Affiliation(s)
- Dexter R F Irvine
- Bionics Institute, East Melbourne, Victoria 3002, Australia; School of Psychological Sciences, Monash University, Victoria 3800, Australia.
31
Forte AE, Etard O, Reichenbach T. The human auditory brainstem response to running speech reveals a subcortical mechanism for selective attention. eLife 2017; 6. [PMID: 28992445 PMCID: PMC5634786 DOI: 10.7554/elife.27203]
Abstract
Humans excel at selectively listening to a target speaker in background noise such as competing voices. While the encoding of speech in the auditory cortex is modulated by selective attention, it remains debated whether such modulation occurs already in subcortical auditory structures. Investigating the contribution of the human brainstem to attention has, in particular, been hindered by the tiny amplitude of the brainstem response. Its measurement normally requires a large number of repetitions of the same short sound stimuli, which may lead to a loss of attention and to neural adaptation. Here we develop a mathematical method to measure the auditory brainstem response to running speech, an acoustic stimulus that does not repeat and that has a high ecological validity. We employ this method to assess the brainstem's activity when a subject listens to one of two competing speakers, and show that the brainstem response is consistently modulated by attention.
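As a toy picture of the measurement (not the authors' actual algorithm), one can cross-correlate the recording with the stimulus waveform and look for a peak at a short positive lag; the frequency-modulated "fundamental", the sampling rate, and the 9 ms latency below are all invented for illustration.

```python
import numpy as np

def xcorr_latency_ms(stim, rec, fs, max_lag_ms=20):
    """Lag (in ms) at which the recording best matches the stimulus,
    scanning only the short positive lags expected of brainstem responses."""
    lags = range(int(max_lag_ms * fs / 1000) + 1)
    scores = [np.dot(stim[:len(stim) - lag], rec[lag:]) for lag in lags]
    return 1000.0 * int(np.argmax(scores)) / fs

fs = 10_000                              # hypothetical sampling rate (Hz)
t = np.arange(fs) / fs                   # 1 s of signal
# Frequency-modulated tone standing in for a running-speech fundamental (~100 Hz)
stim = np.sin(2 * np.pi * 100 * t + 5 * np.sin(2 * np.pi * 3 * t))
delay = int(0.009 * fs)                  # simulated 9 ms response latency
rng = np.random.default_rng(0)
rec = np.concatenate([np.zeros(delay), stim])[:len(stim)]
rec += 0.02 * rng.normal(size=len(rec))  # a little measurement noise
print(xcorr_latency_ms(stim, rec, fs))   # recovers the simulated latency
```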
Affiliation(s)
- Antonio Elia Forte
- Department of Bioengineering, Centre for Neurotechnology, Imperial College London, London, United Kingdom
- Octave Etard
- Department of Bioengineering, Centre for Neurotechnology, Imperial College London, London, United Kingdom
- Tobias Reichenbach
- Department of Bioengineering, Centre for Neurotechnology, Imperial College London, London, United Kingdom
32
Brefczynski-Lewis JA, Lewis JW. Auditory object perception: A neurobiological model and prospective review. Neuropsychologia 2017; 105:223-242. [PMID: 28467888 PMCID: PMC5662485 DOI: 10.1016/j.neuropsychologia.2017.04.034]
Abstract
Interaction with the world is a multisensory experience, but most of what is known about the neural correlates of perception comes from studying vision. Auditory inputs enter the cortex with their own set of unique qualities and support oral communication, speech, music, and the understanding of the emotional and intentional states of others, all of which are central to the human experience. To better understand how the auditory system develops, recovers after injury, and how it may have transitioned in its functions over the course of hominin evolution, advances are needed in models of how the human brain is organized to process real-world natural sounds and "auditory objects". This review presents a simple fundamental neurobiological model of hearing perception at a category level that incorporates principles of bottom-up signal processing together with top-down constraints of grounded cognition theories of knowledge representation. Though mostly derived from the human neuroimaging literature, this theoretical framework highlights rudimentary principles of real-world sound processing that may apply to most if not all mammalian species with hearing and acoustic communication abilities. The model encompasses three basic categories of sound-source: (1) action sounds (non-vocalizations) produced by 'living things', with human (conspecific) and non-human animal sources representing two subcategories; (2) action sounds produced by 'non-living things', including environmental sources and human-made machinery; and (3) vocalizations ('living things'), with human versus non-human animals as two subcategories therein. The model is presented in the context of cognitive architectures relating to multisensory, sensory-motor, and spoken language organizations. The model's predictive value is further discussed in the context of anthropological theories of oral communication evolution and the neurodevelopment of spoken language proto-networks in infants/toddlers.
These phylogenetic and ontogenetic frameworks both entail cortical network maturations that are proposed to at least in part be organized around a number of universal acoustic-semantic signal attributes of natural sounds, which are addressed herein.
Affiliation(s)
- Julie A Brefczynski-Lewis
- Blanchette Rockefeller Neuroscience Institute, West Virginia University, Morgantown, WV 26506, USA; Department of Physiology, Pharmacology, & Neuroscience, West Virginia University, PO Box 9229, Morgantown, WV 26506, USA
- James W Lewis
- Blanchette Rockefeller Neuroscience Institute, West Virginia University, Morgantown, WV 26506, USA; Department of Physiology, Pharmacology, & Neuroscience, West Virginia University, PO Box 9229, Morgantown, WV 26506, USA.
33
The Hierarchical Cortical Organization of Human Speech Processing. J Neurosci 2017; 37:6539-6557. [PMID: 28588065 DOI: 10.1523/jneurosci.3267-16.2017]
Abstract
Speech comprehension requires that the brain extract semantic meaning from the spectral features represented at the cochlea. To investigate this process, we performed an fMRI experiment in which five men and two women passively listened to several hours of natural narrative speech. We then used voxelwise modeling to predict BOLD responses based on three different feature spaces that represent the spectral, articulatory, and semantic properties of speech. The amount of variance explained by each feature space was then assessed using a separate validation dataset. Because some responses might be explained equally well by more than one feature space, we used a variance partitioning analysis to determine the fraction of the variance that was uniquely explained by each feature space. Consistent with previous studies, we found that speech comprehension involves hierarchical representations starting in primary auditory areas and moving laterally on the temporal lobe: spectral features are found in the core of A1, mixtures of spectral and articulatory in STG, mixtures of articulatory and semantic in STS, and semantic in STS and beyond. Our data also show that both hemispheres are equally and actively involved in speech perception and interpretation. Further, responses as early in the auditory hierarchy as in STS are more correlated with semantic than spectral representations. These results illustrate the importance of using natural speech in neurolinguistic research. 
Our methodology also provides an efficient way to simultaneously test multiple specific hypotheses about the representations of speech without using block designs and segmented or synthetic speech. SIGNIFICANCE STATEMENT To investigate the processing steps performed by the human brain to transform natural speech sound into meaningful language, we used models based on a hierarchical set of speech features to predict BOLD responses of individual voxels recorded in an fMRI experiment while subjects listened to natural speech. Both cerebral hemispheres were actively involved in speech processing in large and equal amounts. Also, the transformation from spectral features to semantic elements occurs early in the cortical speech-processing stream. Our experimental and analytical approaches are important alternatives and complements to standard approaches that use segmented speech and block designs, which report more lateralized speech processing and attribute semantic processing to higher levels of cortex than reported here.
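The variance-partitioning logic can be sketched in a few lines: fit the joint model, then measure how much explained variance is lost when one feature space is left out. The feature matrices and simulated voxel response below are invented, and plain in-sample OLS stands in for the regularized, cross-validated modeling used in such studies.

```python
import numpy as np

def r_squared(X, y):
    """In-sample explained variance of an ordinary least-squares fit."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return 1 - (y - X @ beta).var() / y.var()

def unique_variance(spaces, y):
    """Variance uniquely explained by each feature space: the drop in
    R^2 when that space is removed from the joint model."""
    full = r_squared(np.hstack(list(spaces.values())), y)
    return {name: full - r_squared(
                np.hstack([m for n, m in spaces.items() if n != name]), y)
            for name in spaces}

rng = np.random.default_rng(1)
n = 500
spaces = {"spectral": rng.normal(size=(n, 3)), "semantic": rng.normal(size=(n, 3))}
# Simulated voxel response driven almost entirely by the "semantic" space
y = spaces["semantic"] @ np.array([1.0, -1.0, 0.5]) + 0.1 * rng.normal(size=n)
uv = unique_variance(spaces, y)
print(uv["semantic"] > uv["spectral"])  # the semantic space dominates
```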
34
Baghdadi G, Towhidkhah F, Rostami R. Left and right reaction time differences to the sound intensity in normal and AD/HD children. Int J Pediatr Otorhinolaryngol 2017; 97:240-244. [PMID: 28483244 DOI: 10.1016/j.ijporl.2017.04.025]
Abstract
OBJECTIVES The right hemisphere, which is implicated in sound intensity discrimination, is abnormal in people with attention deficit/hyperactivity disorder (AD/HD). However, whether this right-hemisphere defect influences the intensity sensation of AD/HD subjects has not been studied. In this study, the sensitivity of normal and AD/HD children to sound intensity was investigated. METHODS Nineteen normal and fourteen AD/HD children participated in the study and performed a simple auditory reaction time task. Using regression analysis, the sensitivity of the right and left ears to various sound intensity levels was examined. RESULTS The statistical results showed that the sensitivity of AD/HD subjects to intensity was lower than that of the normal group (p < 0.0001). The left and right pathways of the auditory system showed the same pattern of response in AD/HD subjects (p > 0.05). In the control group, however, the left pathway was more sensitive to sound intensity level than the right one (p = 0.0156). CONCLUSIONS It is plausible that the right-hemisphere deficit influences the auditory sensitivity of AD/HD children. Possible deficits in other components of the auditory system, such as the middle ear, inner ear, or the brainstem nuclei involved, may also contribute to the observed results. Developing new biomarkers based on the sensitivity of the brain hemispheres to sound intensity is suggested as a way to estimate the risk of AD/HD, and designing new techniques to correct auditory feedback is proposed for behavioral treatment sessions.
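The regression analysis amounts to fitting a line to reaction time as a function of sound level for each ear and comparing slopes; a steeper negative slope indicates greater sensitivity to intensity. The levels and reaction times below are made-up numbers, not the study's data.

```python
import numpy as np

def sensitivity_slope(intensity_db, rt_ms):
    """Least-squares slope of RT versus intensity (ms per dB); more
    negative means reaction time falls faster as sounds get louder."""
    slope, _intercept = np.polyfit(intensity_db, rt_ms, 1)
    return slope

levels = np.array([40, 50, 60, 70, 80], dtype=float)
rt_left = np.array([320, 300, 280, 260, 240], dtype=float)   # hypothetical means
rt_right = np.array([310, 300, 290, 280, 270], dtype=float)
print(sensitivity_slope(levels, rt_left))    # about -2 ms/dB
print(sensitivity_slope(levels, rt_right))   # about -1 ms/dB
```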
Affiliation(s)
- Golnaz Baghdadi
- Department of Biomedical Engineering, Amirkabir University of Technology, Tehran, Iran
- Farzad Towhidkhah
- Department of Biomedical Engineering, Amirkabir University of Technology, Tehran, Iran.
- Reza Rostami
- Department of Psychology and Educational Sciences, University of Tehran, Tehran, Iran
35
Bolders AC, Band GPH, Stallen PJM. Inconsistent Effect of Arousal on Early Auditory Perception. Front Psychol 2017; 8:447. [PMID: 28424639 PMCID: PMC5372791 DOI: 10.3389/fpsyg.2017.00447]
Abstract
Mood has been shown to influence cognitive performance. However, little is known about the influence of mood on sensory processing, specifically in the auditory domain. With the current study, we sought to investigate how auditory processing of neutral sounds is affected by the mood state of the listener. This was tested in two experiments by measuring masked-auditory detection thresholds before and after a standard mood-induction procedure. In the first experiment (N = 76), mood was induced by imagining a mood-appropriate event combined with listening to mood-inducing music. In the second experiment (N = 80), imagining was combined with affective picture viewing to exclude any possibility of confounding the results by acoustic properties of the music. In both experiments, the thresholds were determined by means of an adaptive staircase tracking method in a two-interval forced-choice task. Masked detection thresholds were compared between participants in four different moods (calm, happy, sad, and anxious), which enabled differentiation of mood effects along the dimensions arousal and pleasure. Results of the two experiments were analyzed both in separate analyses and in a combined analysis. The first experiment showed that, while there was no impact of pleasure level on the masked threshold, lower arousal was associated with lower threshold (higher masked sensitivity). However, as indicated by an interaction effect between experiment and arousal, arousal did have a different effect on the threshold in Experiment 2, which showed a trend in the opposite direction. These results show that the effect of arousal on auditory-masked sensitivity may depend on the modality of the mood-inducing stimuli. As clear conclusions regarding the genuineness of the arousal effect on the masked threshold cannot be drawn, suggestions for further research that could clarify this issue are provided.
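The abstract specifies only "an adaptive staircase tracking method in a two-interval forced-choice task"; a common variant is the two-down/one-up rule, sketched here under that assumption with an invented simulated listener.

```python
import random

def staircase_2down1up(respond, start, step, n_trials=200):
    """Two-down/one-up staircase: lower the level after two consecutive
    correct responses, raise it after any error; the track converges on
    the 70.7%-correct point. Threshold = mean of the last reversals."""
    level, run, last_dir, reversals = start, 0, 0, []
    for _ in range(n_trials):
        if respond(level):
            run += 1
            if run < 2:
                continue
            run, direction = 0, -1
        else:
            run, direction = 0, +1
        if last_dir and direction != last_dir:
            reversals.append(level)   # direction change = reversal
        last_dir = direction
        level += direction * step
    tail = reversals[-6:] or [level]
    return sum(tail) / len(tail)

random.seed(3)
# Simulated 2IFC listener: always correct above a true 30 dB threshold,
# at chance (50% correct) below it
listener = lambda lv: lv > 30 or random.random() < 0.5
estimate = staircase_2down1up(listener, start=60, step=2)
print(estimate)  # settles near the simulated 30 dB threshold
```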
Affiliation(s)
- Anna C Bolders
- Cognitive Psychology Unit, Institute of Psychology, Leiden University, Leiden, Netherlands
- Guido P H Band
- Cognitive Psychology Unit, Institute of Psychology, Leiden University, Leiden, Netherlands; Leiden Institute for Brain and Cognition, Leiden University, Leiden, Netherlands
- Pieter Jan M Stallen
- Cognitive Psychology Unit, Institute of Psychology, Leiden University, Leiden, Netherlands
36
Eliades SJ, Wang X. Contributions of sensory tuning to auditory-vocal interactions in marmoset auditory cortex. Hear Res 2017; 348:98-111. [PMID: 28284736 DOI: 10.1016/j.heares.2017.03.001]
Abstract
During speech, humans continuously listen to their own vocal output to ensure accurate communication. Such self-monitoring is thought to require the integration of information about the feedback of vocal acoustics with internal motor control signals. The neural mechanism of this auditory-vocal interaction remains largely unknown at the cellular level. Previous studies in naturally vocalizing marmosets have demonstrated diverse neural activities in auditory cortex during vocalization, dominated by a vocalization-induced suppression of neural firing. How underlying auditory tuning properties of these neurons might contribute to this sensory-motor processing is unknown. In the present study, we quantitatively compared marmoset auditory cortex neural activities during vocal production with those during passive listening. We found that neurons excited during vocalization were readily driven by passive playback of vocalizations and other acoustic stimuli. In contrast, neurons suppressed during vocalization exhibited more diverse playback responses, including responses that were not predictable by auditory tuning properties. These results suggest that vocalization-related excitation in auditory cortex is largely a sensory-driven response. In contrast, vocalization-induced suppression is not well predicted by a neuron's auditory responses, supporting the prevailing theory that internal motor-related signals contribute to the auditory-vocal interaction observed in auditory cortex.
Affiliation(s)
- Steven J Eliades
- Department of Otorhinolaryngology: Head and Neck Surgery, University of Pennsylvania Perelman School of Medicine, Philadelphia, PA, USA.
- Xiaoqin Wang
- Laboratory of Auditory Neurophysiology, Department of Biomedical Engineering, Johns Hopkins University School of Medicine, Baltimore, MD, USA
37
Abstract
Sounds in everyday life seldom appear in isolation. Both humans and machines are constantly flooded with a cacophony of sounds that need to be sorted through and scoured for relevant information, a phenomenon referred to as the 'cocktail party problem'. A key component in parsing acoustic scenes is the role of attention, which mediates perception and behaviour by focusing both sensory and cognitive resources on pertinent information in the stimulus space. The current article provides a review of modelling studies of auditory attention. The review highlights how the term attention refers to a multitude of behavioural and cognitive processes that can shape sensory processing. Attention can be modulated by 'bottom-up' sensory-driven factors, as well as 'top-down' task-specific goals, expectations and learned schemas. Essentially, it acts as a selection process or processes that focus both sensory and cognitive resources on the most relevant events in the soundscape, with relevance dictated by the stimulus itself (e.g. a loud explosion) or by a task at hand (e.g. listen to announcements in a busy airport). Recent computational models of auditory attention provide key insights into its role in facilitating perception in cluttered auditory scenes. This article is part of the themed issue 'Auditory and visual scene analysis'.
Affiliation(s)
- Emine Merve Kaya
- Laboratory for Computational Audio Perception, Department of Electrical and Computer Engineering, The Johns Hopkins University, 3400 N Charles Street, Barton Hall, Baltimore, MD 21218, USA
- Mounya Elhilali
- Laboratory for Computational Audio Perception, Department of Electrical and Computer Engineering, The Johns Hopkins University, 3400 N Charles Street, Barton Hall, Baltimore, MD 21218, USA
38
Yufik YM, Friston K. Life and Understanding: The Origins of "Understanding" in Self-Organizing Nervous Systems. Front Syst Neurosci 2016; 10:98. [PMID: 28018185 PMCID: PMC5145877 DOI: 10.3389/fnsys.2016.00098]
Abstract
This article is motivated by a formulation of biotic self-organization in Friston (2013), where the emergence of "life" in coupled material entities (e.g., macromolecules) was predicated on bounded subsets that maintain a degree of statistical independence from the rest of the network. Boundary elements in such systems constitute a Markov blanket, separating the internal states of a system from its surrounding states. In this article, we ask whether Markov blankets operate in the nervous system and underlie the development of intelligence, enabling a progression from the ability to sense the environment to the ability to understand it. Markov blankets have been previously hypothesized to form in neuronal networks as a result of phase transitions that cause network subsets to fold into bounded assemblies, or packets (Yufik and Sheridan, 1997; Yufik, 1998a). The ensuing neuronal packets hypothesis builds on the notion of neuronal assemblies (Hebb, 1949, 1980), treating such assemblies as flexible but stable biophysical structures capable of withstanding entropic erosion; in other words, structures that maintain their integrity under changing conditions. In this treatment, neuronal packets give rise to perception of "objects"; i.e., quasi-stable (stimulus-bound) feature groupings that are conserved over multiple presentations (e.g., the experience of perceiving "apple" can be interrupted and resumed many times). Monitoring the variations in such groups enables the apprehension of behavior; i.e., attributing to objects the ability to undergo changes without loss of self-identity. Ultimately, "understanding" involves self-directed composition and manipulation of the ensuing "mental models" that are constituted by neuronal packets, whose dynamics capture relationships among objects: that is, dependencies in the behavior of objects under varying conditions. For example, movement is known to involve rotation of population vectors in the motor cortex (Georgopoulos et al., 1988, 1993). The neuronal packet hypothesis associates "understanding" with the ability to detect and generate coordinated rotation of population vectors, in neuronal packets, in associative cortex and other regions of the brain. The ability to coordinate vector representations in this way is assumed to have developed in conjunction with the ability to postpone overt motor expression of implicit movement, thus creating a mechanism for prediction and behavioral optimization via mental modeling that is unique to higher species. This article advances the notion that Markov blankets, necessary for the emergence of life, have subsequently been exploited by evolution and thus ground the ways that living organisms adapt to their environment, culminating in their ability to understand it.
Affiliation(s)
- Yan M. Yufik
- Virtual Structures Research, Inc., Potomac, MD, USA
- Karl Friston
- Wellcome Trust Centre for Neuroimaging at UCL, London, UK
39
Ayala YA, Malmierca MS. Cholinergic Modulation of Stimulus-Specific Adaptation in the Inferior Colliculus. J Neurosci 2015; 35:12261-72. [PMID: 26338336 PMCID: PMC6605313 DOI: 10.1523/jneurosci.0909-15.2015] [Citation(s) in RCA: 36] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/08/2015] [Revised: 07/13/2015] [Accepted: 07/28/2015] [Indexed: 01/28/2023] Open
Abstract
Neural encoding of an ever-changing acoustic environment is a complex and demanding process that depends on modulation by neuroactive substances. Some neurons of the inferior colliculus (IC) exhibit "stimulus-specific adaptation" (SSA), i.e., a decrease in their response to a repetitive sound, but not to a rare one. Previous studies have demonstrated that acetylcholine (ACh) alters the frequency response areas of auditory neurons and therefore is important in the encoding of spectral information. Here, we address how microiontophoretic application of ACh modulates SSA in the IC of the anesthetized rat. We found that ACh decreased SSA in IC neurons by increasing the response to the repetitive tone. This effect was mainly mediated by muscarinic receptors. The strength of the cholinergic modulation depended on the baseline SSA level, exerting its greatest effect on neurons with intermediate SSA responses across IC subdivisions. Our data demonstrate that the increased availability of ACh exerts transient functional changes in partially adapting IC neurons, enhancing the sensory encoding of the ongoing stimulation. This effect potentially contributes to the propagation of ascending sensory-evoked afferent activity through the thalamus en route to the cortex. SIGNIFICANCE STATEMENT Neural encoding of an ever-changing acoustic environment is a complex and demanding task that may depend on the available levels of neuroactive substances. We explored how the cholinergic inputs affect the responses of neurons in the auditory midbrain that exhibit different degrees of stimulus-specific adaptation (SSA), i.e., a specific decrease in their response to a repeated sound that does not generalize to other, rare sounds. This work addresses the role of cholinergic synaptic inputs as well as the contribution of the muscarinic and nicotinic receptors on SSA. 
This is the first report on the role of neuromodulation in SSA, and the results contribute to our understanding of the cellular bases of processing low- and high-probability sounds.
Affiliation(s)
- Yaneri A Ayala
- Auditory Neuroscience Laboratory, Institute of Neuroscience of Castilla y León and
- Manuel S Malmierca
- Auditory Neuroscience Laboratory, Institute of Neuroscience of Castilla y León and Department of Cell Biology and Pathology, Faculty of Medicine, University of Salamanca, 37007 Salamanca, Spain
40
Carlin MA, Elhilali M. Modeling attention-driven plasticity in auditory cortical receptive fields. Front Comput Neurosci 2015; 9:106. [PMID: 26347643 PMCID: PMC4541291 DOI: 10.3389/fncom.2015.00106] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/02/2014] [Accepted: 07/30/2015] [Indexed: 11/24/2022] Open
Abstract
To navigate complex acoustic environments, listeners adapt neural processes to focus on behaviorally relevant sounds in the acoustic foreground while minimizing the impact of distractors in the background, an ability referred to as top-down selective attention. Particularly striking examples of attention-driven plasticity have been reported in primary auditory cortex via dynamic reshaping of spectro-temporal receptive fields (STRFs). By enhancing the neural response to features of the foreground while suppressing those to the background, STRFs can act as adaptive contrast matched filters that directly contribute to an improved cognitive segregation between behaviorally relevant and irrelevant sounds. In this study, we propose a novel discriminative framework for modeling attention-driven plasticity of STRFs in primary auditory cortex. The model describes a general strategy for cortical plasticity via an optimization that maximizes discriminability between the foreground and distractors while maintaining a degree of stability in the cortical representation. The first instantiation of the model describes a form of feature-based attention and yields STRF adaptation patterns consistent with a contrast matched filter previously reported in neurophysiological studies. An extension of the model captures a form of object-based attention, where top-down signals act on an abstracted representation of the sensory input characterized in the modulation domain. The object-based model makes explicit predictions in line with limited neurophysiological data currently available but can be readily evaluated experimentally. Finally, we draw parallels between the model and anatomical circuits reported to be engaged during active attention. The proposed model strongly suggests an interpretation of attention-driven plasticity as a discriminative adaptation operating at the level of sensory cortex, in line with similar strategies previously described across different sensory modalities.
Affiliation(s)
- Michael A Carlin
- Laboratory for Computational Audio Perception, Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD, USA
- Mounya Elhilali
- Laboratory for Computational Audio Perception, Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD, USA
41
Evidence against attentional state modulating scalp-recorded auditory brainstem steady-state responses. Brain Res 2015; 1626:146-64. [PMID: 26187756 DOI: 10.1016/j.brainres.2015.06.038] [Citation(s) in RCA: 61] [Impact Index Per Article: 6.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/06/2015] [Revised: 06/18/2015] [Accepted: 06/24/2015] [Indexed: 11/20/2022]
Abstract
Auditory brainstem responses (ABRs) and their steady-state counterpart (subcortical steady-state responses, SSSRs) are generally thought to be insensitive to cognitive demands. However, a handful of studies report that SSSRs are modulated depending on the subject's focus of attention, either towards or away from an auditory stimulus. Here, we explored whether attentional focus affects the envelope-following response (EFR), which is a particular kind of SSSR, and if so, whether the effects are specific to which sound elements in a sound mixture a subject is attending (selective auditory attentional modulation), specific to attended sensory input (inter-modal attentional modulation), or insensitive to attentional focus. We compared the strength of EFR-stimulus phase locking in human listeners under various tasks: listening to a monaural stimulus, selectively attending to a particular ear during dichotic stimulus presentation, and attending to visual stimuli while ignoring dichotic auditory inputs. We observed no systematic changes in the EFR across experimental manipulations, even though cortical EEG revealed attention-related modulations of alpha activity during the task. We conclude that attentional effects, if any, on human subcortical representation of sounds cannot be observed robustly using EFRs. This article is part of a Special Issue entitled SI: Prediction and Attention.
42
Layer specific sharpening of frequency tuning by selective attention in primary auditory cortex. J Neurosci 2015; 34:16496-508. [PMID: 25471586 DOI: 10.1523/jneurosci.2055-14.2014] [Citation(s) in RCA: 70] [Impact Index Per Article: 7.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022] Open
Abstract
Recent electrophysiological and neuroimaging studies provide converging evidence that attending to sounds increases the response selectivity of neuronal ensembles even at the first cortical stage of auditory stimulus processing in primary auditory cortex (A1). This is achieved by enhancement of responses in the regions that process attended frequency content, and by suppression of responses in the surrounding regions. The goals of our study were to define the extent to which A1 neuronal ensembles are involved in this process, determine its effect on the frequency tuning of A1 neuronal ensembles, and examine the involvement of the different cortical layers. To accomplish these, we analyzed laminar profiles of synaptic activity and action potentials recorded in A1 of macaques performing a rhythmic intermodal selective attention task. We found that the frequency tuning of neuronal ensembles was sharpened due to both increased gain at the preferentially processed or best frequency and increased response suppression at all other frequencies when auditory stimuli were attended. Our results suggest that these effects are due to a frequency-specific counterphase entrainment of ongoing delta oscillations, which predictively orchestrates opposite sign excitability changes across all of A1. This results in a net suppressive effect due to the large proportion of neuronal ensembles that do not specifically process the attended frequency content. Furthermore, analysis of laminar activation profiles revealed that although attention-related suppressive effects predominate the responses of supragranular neuronal ensembles, response enhancement is dominant in the granular and infragranular layers, providing evidence for layer-specific cortical operations in attentive stimulus processing.
43
Statistical learning of recurring sound patterns encodes auditory objects in songbird forebrain. Proc Natl Acad Sci U S A 2014; 111:14553-8. [PMID: 25246563 DOI: 10.1073/pnas.1412109111] [Citation(s) in RCA: 40] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
Abstract
Auditory neurophysiology has demonstrated how basic acoustic features are mapped in the brain, but it is still not clear how multiple sound components are integrated over time and recognized as an object. We investigated the role of statistical learning in encoding the sequential features of complex sounds by recording neuronal responses bilaterally in the auditory forebrain of awake songbirds that were passively exposed to long sound streams. These streams contained sequential regularities, and were similar to streams used in human infants to demonstrate statistical learning for speech sounds. For stimulus patterns with contiguous transitions and with nonadjacent elements, single and multiunit responses reflected neuronal discrimination of the familiar patterns from novel patterns. In addition, discrimination of nonadjacent patterns was stronger in the right hemisphere than in the left, and may reflect an effect of top-down modulation that is lateralized. Responses to recurring patterns showed stimulus-specific adaptation, a sparsening of neural activity that may contribute to encoding invariants in the sound stream and that appears to increase coding efficiency for the familiar stimuli across the population of neurons recorded. As auditory information about the world must be received serially over time, recognition of complex auditory objects may depend on this type of mnemonic process to create and differentiate representations of recently heard sounds.
44
Detorakis GI, Rougier NP. Structure of receptive fields in a computational model of area 3b of primary sensory cortex. Front Comput Neurosci 2014; 8:76. [PMID: 25120461 PMCID: PMC4112916 DOI: 10.3389/fncom.2014.00076] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/06/2014] [Accepted: 06/29/2014] [Indexed: 11/24/2022] Open
Abstract
In a previous work, we introduced a computational model of area 3b which is built upon the neural field theory and receives input from a simplified model of the index distal finger pad populated by a random set of touch receptors (Merkel cells). This model has been shown to be able to self-organize following the random stimulation of the finger pad model and to cope, to some extent, with cortical or skin lesions. The main hypothesis of the model is that learning of skin representations occurs at the thalamo-cortical level while cortico-cortical connections serve a stereotyped competition mechanism that shapes the receptive fields. To further assess this hypothesis and the validity of the model, we reproduced in this article the exact experimental protocol of DiCarlo et al. that has been used to examine the structure of receptive fields in area 3b of the primary somatosensory cortex. Using the same analysis toolset, the model yields consistent results, with most receptive fields containing a single region of excitation and one to several regions of inhibition. We then extended our study using a dynamic competition that deeply influences the formation of the receptive fields. We hypothesized this dynamic competition to correspond to some form of somatosensory attention that may help to precisely shape the receptive fields. To test this hypothesis, we designed a protocol where an arbitrary region of interest is delineated on the index distal finger pad and we either (1) instructed explicitly the model to attend to this region (simulating an attentional signal), (2) preferentially trained the model on this region, or (3) combined the two aforementioned protocols simultaneously. Results tend to confirm that dynamic competition leads to shrunken receptive fields and that its joint interaction with intensive training promotes massive receptive field migration and shrinkage.
Affiliation(s)
- Nicolas P Rougier
- INRIA Bordeaux Sud-Ouest, Bordeaux, France; Institut des Maladies Neurodégénératives, Université de Bordeaux, Centre National de la Recherche Scientifique, UMR 5293, Bordeaux, France; LaBRI, Université de Bordeaux, Institut Polytechnique de Bordeaux, Centre National de la Recherche Scientifique, UMR 5800, Talence, France
45
Shuai L, Elhilali M. Task-dependent neural representations of salient events in dynamic auditory scenes. Front Neurosci 2014; 8:203. [PMID: 25100934 PMCID: PMC4104552 DOI: 10.3389/fnins.2014.00203] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/02/2014] [Accepted: 06/27/2014] [Indexed: 11/13/2022] Open
Abstract
Selecting pertinent events in the cacophony of sounds that impinge on our ears every day is regulated by the acoustic salience of sounds in the scene as well as their behavioral relevance as dictated by top-down task-dependent demands. The current study aims to explore the neural signature of both facets of attention, as well as their possible interactions in the context of auditory scenes. Using a paradigm with dynamic auditory streams with occasional salient events, we recorded neurophysiological responses of human listeners using EEG while manipulating the subjects' attentional state as well as the presence or absence of a competing auditory stream. Our results showed that salient events caused an increase in the auditory steady-state response (ASSR) irrespective of attentional state or complexity of the scene. Such increase supplemented ASSR increases due to task-driven attention. Salient events also evoked a strong N1 peak in the ERP response when listeners were attending to the target sound stream, accompanied by an MMN-like component in some cases and changes in the P1 and P300 components under all listening conditions. Overall, bottom-up attention induced by a salient change in the auditory stream appears to mostly modulate the amplitude of the steady-state response and certain event-related potentials to salient sound events; though this modulation is affected by top-down attentional processes and the prominence of these events in the auditory scene as well.
Affiliation(s)
- Mounya Elhilali
- Laboratory of Computational Audio Perception, Department of Electrical and Computer Engineering, Center for Speech and Language Processing, Johns Hopkins University, Baltimore, MD, USA
46
Excitatory synaptic feedback from the motor layer to the sensory layers of the superior colliculus. J Neurosci 2014; 34:6822-33. [PMID: 24828636 DOI: 10.1523/jneurosci.3137-13.2014] [Citation(s) in RCA: 29] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022] Open
Abstract
Neural circuits that translate sensory information into motor commands are organized in a feedforward manner converting sensory information into motor output. The superior colliculus (SC) follows this pattern as it plays a role in converting visual information from the retina and visual cortex into motor commands for rapid eye movements (saccades). Feedback from movement to sensory regions is hypothesized to play critical roles in attention, visual image stability, and saccadic suppression, but in contrast to feedforward pathways, motor feedback to sensory regions has received much less attention. The present study used voltage imaging and patch-clamp recording in slices of rat SC to test the hypothesis of an excitatory synaptic pathway from the motor layers of the SC back to the sensory superficial layers. Voltage imaging revealed an extensive depolarization of the superficial layers evoked by electrical stimulation of the motor layers. A pharmacologically isolated excitatory synaptic potential in the superficial layers depended on stimulus strength in the motor layers in a manner consistent with orthodromic excitation. Patch-clamp recording from neurons in the sensory layers revealed excitatory synaptic potentials in response to glutamate application in the motor layers. The location, size, and morphology of responsive neurons indicated they were likely to be narrow-field vertical cells. This excitatory projection from motor to sensory layers adds an important element to the circuitry of the SC and reveals a novel feedback pathway that could play a role in enhancing sensory responses to attended targets as well as visual image stabilization.
47
Miranda JA, Shepard KN, McClintock SK, Liu RC. Adult plasticity in the subcortical auditory pathway of the maternal mouse. PLoS One 2014; 9:e101630. [PMID: 24992362 PMCID: PMC4081580 DOI: 10.1371/journal.pone.0101630] [Citation(s) in RCA: 18] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/25/2014] [Accepted: 06/09/2014] [Indexed: 11/18/2022] Open
Abstract
Subcortical auditory nuclei were traditionally viewed as non-plastic in adulthood so that acoustic information could be stably conveyed to higher auditory areas. Studies in a variety of species, including humans, now suggest that prolonged acoustic training can drive long-lasting brainstem plasticity. The neurobiological mechanisms for such changes are not well understood in natural behavioral contexts due to a relative dearth of in vivo animal models in which to study this. Here, we demonstrate in a mouse model that a natural life experience with increased demands on the auditory system, motherhood, is associated with improved temporal processing in the subcortical auditory pathway. We measured the auditory brainstem response to test whether mothers and pup-naïve virgin mice differed in temporal responses to both broadband and tone stimuli, including ultrasonic frequencies found in mouse pup vocalizations. Mothers had shorter latencies for early ABR peaks, indicating plasticity in the auditory nerve and the cochlear nucleus. Shorter interpeak latency between waves IV and V also suggests plasticity in the inferior colliculus. Hormone manipulations revealed that these changes cannot be explained solely by estrogen levels experienced during pregnancy and parturition in mothers. In contrast, we found that pup-care experience, independent of pregnancy and parturition, contributes to shortening auditory brainstem response latencies. These results suggest that acoustic experience in the maternal context imparts plasticity on early auditory processing that lasts beyond pup weaning. In addition to establishing an animal model for exploring adult auditory brainstem plasticity in a neuroethological context, our results have broader implications for models of perceptual, behavioral and neural changes that arise during maternity, where subcortical sensorineural plasticity has not previously been considered.
Affiliation(s)
- Jason A. Miranda
- Department of Biology, Emory University, Atlanta, Georgia, United States of America
- Center for Behavioral Neuroscience, Atlanta, Georgia, United States of America
- Kathryn N. Shepard
- Department of Biology, Emory University, Atlanta, Georgia, United States of America
- Center for Behavioral Neuroscience, Atlanta, Georgia, United States of America
- Graduate Program in Neuroscience, Emory University, Atlanta, Georgia, United States of America
- Shannon K. McClintock
- Institute for Quantitative Theory and Methods, Emory University, Atlanta, Georgia, United States of America
- Robert C. Liu
- Department of Biology, Emory University, Atlanta, Georgia, United States of America
- Center for Behavioral Neuroscience, Atlanta, Georgia, United States of America
- Center for Translational Social Neuroscience, Atlanta, Georgia, United States of America
48
Hsu YF, Hämäläinen JA, Waszak F. Both attention and prediction are necessary for adaptive neuronal tuning in sensory processing. Front Hum Neurosci 2014; 8:152. [PMID: 24723871 PMCID: PMC3972470 DOI: 10.3389/fnhum.2014.00152] [Citation(s) in RCA: 41] [Impact Index Per Article: 4.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/22/2014] [Accepted: 02/28/2014] [Indexed: 11/13/2022] Open
Abstract
The brain as a proactive system processes sensory information under the top-down influence of attention and prediction. However, the relation between attention and prediction remains undetermined given the conflation of these two mechanisms in the literature. To evaluate whether attention and prediction are dependent on each other, and if so, how these two top-down mechanisms may interact in sensory processing, we orthogonally manipulated attention and prediction in a target detection task. Participants were instructed to pay attention to one of two interleaved stimulus streams of predictable/unpredictable tone frequency. We found that attention and prediction interacted on the amplitude of the N1 ERP component. The N1 amplitude in the attended/predictable condition was larger than that in any of the other conditions. Dipole source localization analysis showed that the effect came from activation in bilateral auditory areas. No significant effect was found in the P2 time window. Our results suggest that attention and prediction are dependent on each other. While attention might determine the overall cortical responsiveness to stimuli when prediction is involved, prediction might provide an anchor for the modulation of synaptic input strengths, which must operate on the basis of attention.
Affiliation(s)
- Yi-Fang Hsu
- Université Paris Descartes, Sorbonne Paris Cité, Paris, France; CNRS, Laboratoire Psychologie de la Perception, UMR 8242, Paris, France
- Florian Waszak
- Université Paris Descartes, Sorbonne Paris Cité, Paris, France; CNRS, Laboratoire Psychologie de la Perception, UMR 8242, Paris, France
49
Mullens D, Woodley J, Whitson L, Provost A, Heathcote A, Winkler I, Todd J. Altering the primacy bias: How does a prior task affect mismatch negativity? Psychophysiology 2014; 51:437-45. [DOI: 10.1111/psyp.12190] [Citation(s) in RCA: 24] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/15/2013] [Accepted: 12/03/2013] [Indexed: 11/30/2022]
Affiliation(s)
- Daniel Mullens
- School of Psychology; University of Newcastle; Callaghan, Australia
- Priority Research Centre for Translational Neuroscience and Mental Health Research; University of Newcastle; Callaghan, Australia
- Jessica Woodley
- School of Psychology; University of Newcastle; Callaghan, Australia
- Lisa Whitson
- School of Psychology; University of Newcastle; Callaghan, Australia
- Priority Research Centre for Translational Neuroscience and Mental Health Research; University of Newcastle; Callaghan, Australia
- Alexander Provost
- School of Psychology; University of Newcastle; Callaghan, Australia
- Priority Research Centre for Translational Neuroscience and Mental Health Research; University of Newcastle; Callaghan, Australia
- Andrew Heathcote
- School of Psychology; University of Newcastle; Callaghan, Australia
- Priority Research Centre for Translational Neuroscience and Mental Health Research; University of Newcastle; Callaghan, Australia
- István Winkler
- Schizophrenia Research Institute; Darlinghurst, Australia
- Institute of Cognitive Neuroscience and Psychology, Research Centre for Natural Sciences; MTA; Budapest, Hungary
- Juanita Todd
- School of Psychology; University of Newcastle; Callaghan, Australia
- Priority Research Centre for Translational Neuroscience and Mental Health Research; University of Newcastle; Callaghan, Australia
- Institute of Psychology; University of Szeged; Szeged, Hungary
50
Auditory-cortex short-term plasticity induced by selective attention. Neural Plast 2014; 2014:216731. [PMID: 24551458 PMCID: PMC3914570 DOI: 10.1155/2014/216731] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/25/2013] [Accepted: 12/15/2013] [Indexed: 11/23/2022] Open
Abstract
The ability to concentrate on relevant sounds in the acoustic environment is crucial for everyday function and communication. Converging lines of evidence suggest that transient functional changes in auditory-cortex neurons, "short-term plasticity", might explain this fundamental function. Under conditions of strongly focused attention, enhanced processing of attended sounds can take place at very early latencies (~50 ms from sound onset) in primary auditory cortex and possibly even at earlier latencies in subcortical structures. More robust selective-attention short-term plasticity is manifested as modulation of responses peaking at ~100 ms from sound onset in functionally specialized nonprimary auditory-cortical areas by way of stimulus-specific reshaping of neuronal receptive fields that supports filtering of selectively attended sound features from task-irrelevant ones. Such effects have been shown to take hold within seconds of shifting the attentional focus. There are findings suggesting that the reshaping of neuronal receptive fields is even stronger at longer auditory-cortex response latencies (~300 ms from sound onset). These longer-latency short-term plasticity effects seem to build up more gradually, within tens of seconds after shifting the focus of attention. Importantly, some of the auditory-cortical short-term plasticity effects observed during selective attention predict enhancements in behaviorally measured sound discrimination performance.