1
Luthra S, Razin RN, Tierney AT, Holt LL, Dick F. Systematic changes in neural selectivity reflect the acquired salience of category-diagnostic dimensions. bioRxiv 2024:2024.09.21.614258. PMID: 39386708; PMCID: PMC11463673; DOI: 10.1101/2024.09.21.614258.
Abstract
Humans and other animals develop remarkable behavioral specializations for identifying, differentiating, and acting on classes of ecologically important signals. Ultimately, this expertise is flexible enough to support diverse perceptual judgments: a voice, for example, simultaneously conveys what a talker says as well as myriad cues about her identity and state. Mature perception across complex signals thus involves both discovering and learning regularities that best inform diverse perceptual judgments, and weighting this information flexibly as task demands change. Here, we test whether this flexibility may involve endogenous attentional gain to task-relevant dimensions. We use two prospective auditory category learning tasks to relate a complex, entirely novel soundscape to four classes of "alien identity" and two classes of "alien size." Identity, but not size, categorization requires discovery and learning of patterned acoustic input situated in one of two simultaneous, frequency-delimited bands. This allows us to capitalize on the coarsely segregated frequency-band-specific channels of auditory tonotopic maps using fMRI to ask whether category-relevant perceptual information is prioritized relative to simultaneous, uninformative information. Among participants expert at alien identity categorization, we observe prioritization of the diagnostic frequency band that persists even when the diagnostic information becomes irrelevant in the size categorization task. Tellingly, the neural selectivity evoked implicitly in categorization aligns closely with activation driven by explicit, sustained selective attention to other sounds presented in the same frequency band. Additionally, we observe fingerprints of individual differences in the learning trajectories taken to achieve expert-level categorization in patterns of neural activity associated with the diagnostic dimension. In all, this indicates that acquiring categories can drive the emergence of acquired attentional salience to dimensions of acoustic input.
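The core contrast described here (responses carried by the category-diagnostic frequency band versus the simultaneous, uninformative band) can be illustrated with a toy computation. The sketch below uses synthetic voxel data and an arbitrary gain value; it is not the authors' analysis pipeline, only a minimal illustration of a band prioritization index.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: 200 voxels, each labeled by its preferred frequency band
# from an independent tonotopy localizer (0 = low band, 1 = high band).
band_pref = rng.integers(0, 2, size=200)

# Simulated task responses (% signal change) during identity categorization,
# with a small gain added to voxels tuned to the (hypothetical) diagnostic band.
DIAGNOSTIC_BAND = 1
resp = rng.normal(loc=1.0, scale=0.3, size=200)
resp[band_pref == DIAGNOSTIC_BAND] += 0.2  # assumed acquired attentional gain

# Prioritization index: mean response in diagnostic-band voxels minus
# mean response in voxels preferring the uninformative band.
prioritization = (resp[band_pref == DIAGNOSTIC_BAND].mean()
                  - resp[band_pref != DIAGNOSTIC_BAND].mean())
print(f"band prioritization index: {prioritization:.3f}")
```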
2
Dai B, Zhai Y, Long Y, Lu C. How the Listener's Attention Dynamically Switches Between Different Speakers During a Natural Conversation. Psychol Sci 2024; 35:635-652. PMID: 38657276; DOI: 10.1177/09567976241243367.
Abstract
The neural mechanisms underpinning the dynamic switching of a listener's attention between speakers are not well understood. Here we addressed this issue in a natural conversation involving 21 triadic adult groups. Results showed that when the listener's attention dynamically switched between speakers, neural synchronization with the to-be-attended speaker was significantly enhanced, whereas that with the to-be-ignored speaker was significantly suppressed. Along with attention switching, semantic distances between sentences significantly increased in the to-be-ignored speech. Moreover, neural synchronization negatively correlated with the increase in semantic distance but not with acoustic change of the to-be-ignored speech. However, no difference in neural synchronization was found between the listener and the two speakers during the phase of sustained attention. These findings support the attenuation model of attention, indicating that both speech signals are processed beyond the basic physical level. Additionally, shifting attention imposes a cognitive burden, as demonstrated by the opposite fluctuations of interpersonal neural synchronization.
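As a rough illustration of one analysis the abstract describes (relating the increase in semantic distance between successive sentences of the ignored speech to neural synchronization), here is a minimal sketch. The embeddings and synchronization values are random stand-ins; in the actual study these would come from a language model and from interpersonal neural recordings.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)

# Assume one embedding vector per sentence of the to-be-ignored speech
# (random stand-ins here; in practice these would be precomputed embeddings).
embeddings = rng.normal(size=(40, 300))

def cosine_distance(a, b):
    # Semantic distance between two sentence vectors = 1 - cosine similarity.
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

sem_dist = np.array([cosine_distance(embeddings[i], embeddings[i + 1])
                     for i in range(len(embeddings) - 1)])

# Hypothetical listener-speaker neural synchronization per sentence transition.
sync = rng.normal(size=sem_dist.size)

# Test the association between synchronization and the semantic-distance change
# in the ignored speech (reported as negative in the study).
r, p = pearsonr(sem_dist, sync)
print(f"r = {r:.2f}, p = {p:.3f}")
```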
Affiliation(s)
- Bohan Dai: Max Planck Institute for Psycholinguistics; Donders Institute for Brain, Cognition and Behaviour, Radboud University
- Yu Zhai: State Key Laboratory of Cognitive Neuroscience and Learning and IDG/McGovern Institute for Brain Research, Beijing Normal University
- Yuhang Long: Institute of Developmental Psychology, Faculty of Psychology, Beijing Normal University
- Chunming Lu: State Key Laboratory of Cognitive Neuroscience and Learning and IDG/McGovern Institute for Brain Research, Beijing Normal University
3
DeYoe EA, Huddleston W, Greenberg AS. Are neuronal mechanisms of attention universal across human sensory and motor brain maps? Psychon Bull Rev 2024. PMID: 38587756; DOI: 10.3758/s13423-024-02495-3.
Abstract
One's experience of shifting attention from the color to the smell to the act of picking a flower seems like a unitary process applied, at will, to one modality after another. Yet, the unique and separable experiences of sight versus smell versus movement might suggest that the neural mechanisms of attention have been separately optimized to employ each modality to its greatest advantage. Moreover, addressing the issue of universality can be particularly difficult due to a paucity of existing cross-modal comparisons and a dearth of neurophysiological methods that can be applied equally well across disparate modalities. Here we outline some of the conceptual and methodological issues related to this problem and present an instructive example of an experimental approach that can be applied widely throughout the human brain to permit detailed, quantitative comparison of attentional mechanisms across modalities. The ultimate goal is to spur efforts across disciplines to provide a large and varied database of empirical observations that will either support the notion of a universal neural substrate for attention or more clearly identify the degree to which attentional mechanisms are specialized for each modality.
Affiliation(s)
- Edgar A DeYoe: Department of Radiology, Medical College of Wisconsin, 8701 Watertown Plank Rd, Milwaukee, WI, 53226, USA; Signal Mountain, USA
- Wendy Huddleston: School of Rehabilitation Sciences and Technology, College of Health Professions and Sciences, University of Wisconsin - Milwaukee, 3409 N. Downer Ave, Milwaukee, WI, 53211, USA
- Adam S Greenberg: Department of Biomedical Engineering, Medical College of Wisconsin and Marquette University, Milwaukee, WI, 53226, USA
4
Mori F, Sugino M, Kabashima K, Nara T, Jimbo Y, Kotani K. Limiting parameter range for cortical-spherical mapping improves activated domain estimation for attention modulated auditory response. J Neurosci Methods 2024; 402:110032. PMID: 38043853; DOI: 10.1016/j.jneumeth.2023.110032.
Abstract
BACKGROUND: Attention is one of the factors involved in selecting input information for the brain. We applied a method for estimating domains with clear boundaries using magnetoencephalography (the domain estimation method) to auditory-evoked responses (N100m) to evaluate the effects of attention on a millisecond timescale. However, because the surface around the auditory cortex is folded in a complicated manner, it was unknown whether activity in the auditory cortex could be estimated. NEW METHOD: The parameter range used to express current sources was set to include the auditory cortex. The search region was expressed as a direct product of the parameter ranges used in the adaptive diagonal curves. RESULTS: Without limiting the range, activity was estimated in regions other than the auditory cortex in all cases. With the limitation of the range, activity was estimated in the primary or higher auditory cortex. Further analysis showed that, for participants whose N100m amplitudes were higher during attention, the domains activated during attention included the regions activated without attention. COMPARISON WITH EXISTING METHODS: We propose a method for effectively limiting the search region so that the extent of the activated domain can be evaluated in regions with complex folded structures. CONCLUSION: To evaluate the extent of activated domains in regions with complex folded structures, it is necessary to limit the parameter search range. The area of the activated domains in the auditory cortex may increase with attention on a millisecond timescale.
Affiliation(s)
- Fumina Mori: School of Engineering, The University of Tokyo, Tokyo, Japan
- Masato Sugino: School of Engineering, The University of Tokyo, Tokyo, Japan
- Kenta Kabashima: Graduate School of Information Science and Technology, The University of Tokyo, Tokyo, Japan
- Takaaki Nara: Graduate School of Information Science and Technology, The University of Tokyo, Tokyo, Japan
- Yasuhiko Jimbo: School of Engineering, The University of Tokyo, Tokyo, Japan
- Kiyoshi Kotani: The Graduate School of Frontier Science, The University of Tokyo, Chiba, Japan
5
MacLean J, Stirn J, Sisson A, Bidelman GM. Short- and long-term neuroplasticity interact during the perceptual learning of concurrent speech. Cereb Cortex 2024; 34:bhad543. PMID: 38212291; PMCID: PMC10839853; DOI: 10.1093/cercor/bhad543.
Abstract
Plasticity from auditory experience shapes the brain's encoding and perception of sound. However, whether such long-term plasticity alters the trajectory of short-term plasticity during speech processing has yet to be investigated. Here, we explored the neural mechanisms and interplay between short- and long-term neuroplasticity for rapid auditory perceptual learning of concurrent speech sounds in young, normal-hearing musicians and nonmusicians. Participants learned to identify double-vowel mixtures during ~ 45 min training sessions recorded simultaneously with high-density electroencephalography (EEG). We analyzed frequency-following responses (FFRs) and event-related potentials (ERPs) to investigate neural correlates of learning at subcortical and cortical levels, respectively. Although both groups showed rapid perceptual learning, musicians showed faster behavioral decisions than nonmusicians overall. Learning-related changes were not apparent in brainstem FFRs. However, plasticity was highly evident in cortex, where ERPs revealed unique hemispheric asymmetries between groups suggestive of different neural strategies (musicians: right hemisphere bias; nonmusicians: left hemisphere). Source reconstruction and the early (150-200 ms) time course of these effects localized learning-induced cortical plasticity to auditory-sensory brain areas. Our findings reinforce the domain-general benefits of musicianship but reveal that successful speech sound learning is driven by a critical interplay between long- and short-term mechanisms of auditory plasticity, which first emerge at a cortical level.
Affiliation(s)
- Jessica MacLean: Department of Speech, Language and Hearing Sciences, Indiana University, Bloomington, IN, USA; Program in Neuroscience, Indiana University, Bloomington, IN, USA
- Jack Stirn: Department of Speech, Language and Hearing Sciences, Indiana University, Bloomington, IN, USA
- Alexandria Sisson: Department of Speech, Language and Hearing Sciences, Indiana University, Bloomington, IN, USA
- Gavin M Bidelman: Department of Speech, Language and Hearing Sciences, Indiana University, Bloomington, IN, USA; Program in Neuroscience, Indiana University, Bloomington, IN, USA; Cognitive Science Program, Indiana University, Bloomington, IN, USA
6
MacLean J, Stirn J, Sisson A, Bidelman GM. Short- and long-term experience-dependent neuroplasticity interact during the perceptual learning of concurrent speech. bioRxiv 2023:2023.09.26.559640. PMID: 37808665; PMCID: PMC10557636; DOI: 10.1101/2023.09.26.559640.
Abstract
Plasticity from auditory experiences shapes brain encoding and perception of sound. However, whether such long-term plasticity alters the trajectory of short-term plasticity during speech processing has yet to be investigated. Here, we explored the neural mechanisms and interplay between short- and long-term neuroplasticity for rapid auditory perceptual learning of concurrent speech sounds in young, normal-hearing musicians and nonmusicians. Participants learned to identify double-vowel mixtures during ∼45 minute training sessions recorded simultaneously with high-density EEG. We analyzed frequency-following responses (FFRs) and event-related potentials (ERPs) to investigate neural correlates of learning at subcortical and cortical levels, respectively. While both groups showed rapid perceptual learning, musicians showed faster behavioral decisions than nonmusicians overall. Learning-related changes were not apparent in brainstem FFRs. However, plasticity was highly evident in cortex, where ERPs revealed unique hemispheric asymmetries between groups suggestive of different neural strategies (musicians: right hemisphere bias; nonmusicians: left hemisphere). Source reconstruction and the early (150-200 ms) time course of these effects localized learning-induced cortical plasticity to auditory-sensory brain areas. Our findings confirm domain-general benefits for musicianship but reveal successful speech sound learning is driven by a critical interplay between long- and short-term mechanisms of auditory plasticity that first emerge at a cortical level.
7
Grisendi T, Clarke S, Da Costa S. Emotional sounds in space: asymmetrical representation within early-stage auditory areas. Front Neurosci 2023; 17:1164334. PMID: 37274197; PMCID: PMC10235458; DOI: 10.3389/fnins.2023.1164334.
Abstract
Evidence from behavioral studies suggests that the spatial origin of sounds may influence the perception of emotional valence. Using 7T fMRI, we investigated the impact of sound category (vocalizations; non-vocalizations), emotional valence (positive, neutral, negative), and spatial origin (left, center, right) on encoding in early-stage auditory areas and in the voice area. The combination of these characteristics yielded a total of 18 conditions (2 categories x 3 valences x 3 lateralizations), which were presented in pseudo-randomized order in blocks of 11 different sounds (of the same condition) across 12 distinct runs of 6 min. In addition, two localizers (tonotopy mapping and human vocalizations) were used to define regions of interest. A three-way repeated-measures ANOVA on the BOLD responses revealed significant bilateral effects and interactions in the primary auditory cortex, the lateral early-stage auditory areas, and the voice area. Positive vocalizations presented on the left side yielded greater activity in the ipsilateral and contralateral primary auditory cortex than did neutral or negative vocalizations or any other stimuli at any of the three positions. Right, but not left, area L3 responded more strongly (i) to positive vocalizations presented ipsi- or contralaterally than to neutral or negative vocalizations presented at the same positions, and (ii) to neutral than to positive or negative non-vocalizations presented contralaterally. Furthermore, comparison with a previous study indicates that spatial cues may render emotional valence more salient within the early-stage auditory areas.
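A minimal sketch of the 2 x 3 x 3 within-subject design and the three-way repeated-measures ANOVA described above, using synthetic BOLD estimates for one region of interest; the factor labels, subject count, and effect sizes are assumptions, not the study's data.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(2)

# Balanced 2 (category) x 3 (valence) x 3 (side) within-subject design,
# one synthetic BOLD estimate per subject and condition.
subjects = range(10)
categories = ["vocal", "nonvocal"]
valences = ["positive", "neutral", "negative"]
sides = ["left", "center", "right"]

rows = [{"subj": s, "category": c, "valence": v, "side": p,
         "bold": rng.normal(loc=1.0, scale=0.2)}
        for s in subjects for c in categories for v in valences for p in sides]
df = pd.DataFrame(rows)

# Three-way repeated-measures ANOVA, analogous in structure to the analysis
# described for each auditory region of interest.
res = AnovaRM(df, depvar="bold", subject="subj",
              within=["category", "valence", "side"]).fit()
print(res)
```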
Affiliation(s)
- Tiffany Grisendi: Service de Neuropsychologie et de Neuroréhabilitation, Centre Hospitalier Universitaire Vaudois (CHUV) and University of Lausanne, Lausanne, Switzerland
- Stephanie Clarke: Service de Neuropsychologie et de Neuroréhabilitation, Centre Hospitalier Universitaire Vaudois (CHUV) and University of Lausanne, Lausanne, Switzerland
- Sandra Da Costa: Centre d’Imagerie Biomédicale, Ecole Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
8
Heynckes M, Lage-Castellanos A, De Weerd P, Formisano E, De Martino F. Layer-specific correlates of detected and undetected auditory targets during attention. Curr Res Neurobiol 2023; 4:100075. PMID: 36755988; PMCID: PMC9900365; DOI: 10.1016/j.crneur.2023.100075.
Abstract
In everyday life, the processing of acoustic information allows us to react to subtle changes in the auditory scene. Yet even when closely attending to sounds in the context of a task, we occasionally miss task-relevant features. The neural computations that underlie our ability to detect behaviorally relevant sound changes are thought to be grounded in both feedforward and feedback processes within the auditory hierarchy. Here, we assessed the role of feedforward and feedback contributions in primary and non-primary auditory areas during behavioral detection of target sounds using submillimeter spatial resolution functional magnetic resonance imaging (fMRI) at high field (7 T) in humans. We demonstrate that the successful detection of subtle temporal shifts in target sounds leads to a selective increase of activation in superficial layers of primary auditory cortex (PAC). These results indicate that feedback signals reaching as far back as PAC may be relevant to the detection of targets in the auditory scene.
Affiliation(s)
- Miriam Heynckes: Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, 6229 ER, Maastricht, the Netherlands
- Agustin Lage-Castellanos: Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, 6229 ER, Maastricht, the Netherlands
- Peter De Weerd: Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, 6229 ER, Maastricht, the Netherlands
- Elia Formisano: Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, 6229 ER, Maastricht, the Netherlands; Maastricht Centre for Systems Biology, Maastricht University, Universiteitssingel 60, 6229 ER, Maastricht, the Netherlands
- Federico De Martino: Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, 6229 ER, Maastricht, the Netherlands (corresponding author; Oxfordlaan 55, 6229 EV, Maastricht, the Netherlands)
9
Lage-Castellanos A, De Martino F, Ghose GM, Gulban OF, Moerel M. Selective attention sharpens population receptive fields in human auditory cortex. Cereb Cortex 2022; 33:5395-5408. PMID: 36336333; PMCID: PMC10152083; DOI: 10.1093/cercor/bhac427.
Abstract
Selective attention enables the preferential processing of relevant stimulus aspects. Invasive animal studies have shown that attending to a sound feature rapidly modifies neuronal tuning throughout the auditory cortex. Human neuroimaging studies have reported enhanced auditory cortical responses with selective attention. To date, it remains unclear how the results obtained with functional magnetic resonance imaging (fMRI) in humans relate to the electrophysiological findings in animal models. Here we aim to narrow the gap between animal and human research by combining a selective attention task similar in design to those used in animal electrophysiology with high spatial resolution ultra-high field fMRI at 7 Tesla. Specifically, human participants perform a detection task in which the probability of target occurrence varies with sound frequency. Contrary to previous fMRI studies, we show that selective attention resulted in population receptive field sharpening, and consequently reduced responses, at the attended sound frequencies. The difference between our results and those of previous fMRI studies supports the notion that the influence of selective attention on auditory cortex is diverse and may depend on context, stimulus, and task.
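The population receptive field (pRF) sharpening described here amounts to estimating a tuning curve per voxel and comparing its width across attention conditions. A minimal sketch with a synthetic single voxel and an assumed sharpening effect follows; it is illustrative only, not the authors' fitting procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(3)

# Stimulus frequencies (log-spaced), as in a tonotopy / pRF mapping design.
freqs = np.log2(np.logspace(np.log10(200), np.log10(8000), 20))

def gaussian_prf(f, center, width, gain, baseline):
    """Gaussian population receptive field over log-frequency."""
    return baseline + gain * np.exp(-0.5 * ((f - center) / width) ** 2)

# Synthetic single-voxel responses: the "attend" condition is given a
# narrower tuning width to mimic the reported pRF sharpening (assumption).
true_center = np.log2(1000)
resp_passive = gaussian_prf(freqs, true_center, 1.2, 1.0, 0.1) + rng.normal(0, 0.05, freqs.size)
resp_attend = gaussian_prf(freqs, true_center, 0.8, 0.9, 0.1) + rng.normal(0, 0.05, freqs.size)

p0 = [true_center, 1.0, 1.0, 0.0]
fit_passive, _ = curve_fit(gaussian_prf, freqs, resp_passive, p0=p0)
fit_attend, _ = curve_fit(gaussian_prf, freqs, resp_attend, p0=p0)

# Tuning width is in log2-frequency units, i.e. octaves.
print(f"pRF width passive: {abs(fit_passive[1]):.2f} octaves")
print(f"pRF width attend:  {abs(fit_attend[1]):.2f} octaves")
```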
Affiliation(s)
- Agustin Lage-Castellanos: Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, 6200 MD, Maastricht, The Netherlands; Maastricht Brain Imaging Center (MBIC), 6200 MD, Maastricht, The Netherlands; Department of NeuroInformatics, Cuban Neuroscience Center, Havana City 11600, Cuba
- Federico De Martino: Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, 6200 MD, Maastricht, The Netherlands; Maastricht Brain Imaging Center (MBIC), 6200 MD, Maastricht, The Netherlands; Center for Magnetic Resonance Research, Department of Radiology, University of Minnesota, Minneapolis, MN 55455, United States
- Geoffrey M Ghose: Center for Magnetic Resonance Research, Department of Radiology, University of Minnesota, Minneapolis, MN 55455, United States
- Michelle Moerel: Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, 6200 MD, Maastricht, The Netherlands; Maastricht Brain Imaging Center (MBIC), 6200 MD, Maastricht, The Netherlands; Maastricht Centre for Systems Biology, Maastricht University, 6200 MD, Maastricht, The Netherlands
10
Morrill RJ, Bigelow J, DeKloe J, Hasenstaub AR. Audiovisual task switching rapidly modulates sound encoding in mouse auditory cortex. eLife 2022; 11:e75839. PMID: 35980027; PMCID: PMC9427107; DOI: 10.7554/eLife.75839.
Abstract
In everyday behavior, sensory systems are in constant competition for attentional resources, but the cellular and circuit-level mechanisms of modality-selective attention remain largely uninvestigated. We conducted translaminar recordings in mouse auditory cortex (AC) during an audiovisual (AV) attention shifting task. Attending to sound elements in an AV stream reduced both pre-stimulus and stimulus-evoked spiking activity, primarily in deep-layer neurons and neurons without spectrotemporal tuning. Despite reduced spiking, stimulus decoder accuracy was preserved, suggesting improved sound encoding efficiency. Similarly, task-irrelevant mapping stimuli during inter-trial intervals evoked fewer spikes without impairing stimulus encoding, indicating that attentional modulation generalized beyond training stimuli. Importantly, spiking reductions predicted trial-to-trial behavioral accuracy during auditory attention, but not visual attention. Together, these findings suggest auditory attention facilitates sound discrimination by filtering sound-irrelevant background activity in AC, and that the deepest cortical layers serve as a hub for integrating extramodal contextual information.
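The finding that decoder accuracy is preserved despite reduced spiking can be illustrated with a toy decoding analysis. The sketch below simulates spike counts with an assumed attentional gain change and runs a cross-validated classifier; all effect sizes and parameters are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)

def make_trials(n_trials, gain):
    # Synthetic trial-by-neuron spike counts for two stimulus classes.
    labels = rng.integers(0, 2, n_trials)                    # stimulus class per trial
    tuning = rng.normal(0, 1, size=(2, 30))                  # class-specific rates, 30 neurons
    rates = gain * np.exp(tuning[labels] + rng.normal(0, 0.3, (n_trials, 30)))
    counts = rng.poisson(rates)
    return counts, labels

# In the "attend auditory" condition overall counts are scaled down, but class
# separation is kept, mimicking fewer spikes with preserved decoding (assumption).
for name, gain in [("attend visual", 1.0), ("attend auditory", 0.7)]:
    X, y = make_trials(200, gain)
    acc = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean()
    print(f"{name}: mean spike count {X.mean():.1f}, decoder accuracy {acc:.2f}")
```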
Affiliation(s)
- Ryan J Morrill: Coleman Memorial Laboratory, University of California, San Francisco, San Francisco, United States; Neuroscience Graduate Program, University of California, San Francisco, San Francisco, United States; Department of Otolaryngology–Head and Neck Surgery, University of California, San Francisco, San Francisco, United States
- James Bigelow: Coleman Memorial Laboratory, University of California, San Francisco, San Francisco, United States; Department of Otolaryngology–Head and Neck Surgery, University of California, San Francisco, San Francisco, United States
- Jefferson DeKloe: Coleman Memorial Laboratory, University of California, San Francisco, San Francisco, United States; Department of Otolaryngology–Head and Neck Surgery, University of California, San Francisco, San Francisco, United States
- Andrea R Hasenstaub: Coleman Memorial Laboratory, University of California, San Francisco, San Francisco, United States; Neuroscience Graduate Program, University of California, San Francisco, San Francisco, United States; Department of Otolaryngology–Head and Neck Surgery, University of California, San Francisco, San Francisco, United States
11
Kachlicka M, Laffere A, Dick F, Tierney A. Slow phase-locked modulations support selective attention to sound. Neuroimage 2022; 252:119024. PMID: 35231629; PMCID: PMC9133470; DOI: 10.1016/j.neuroimage.2022.119024.
Abstract
To make sense of complex soundscapes, listeners must select and attend to task-relevant streams while ignoring uninformative sounds. One possible neural mechanism underlying this process is alignment of endogenous oscillations with the temporal structure of the target sound stream. Such a mechanism has been suggested to mediate attentional modulation of neural phase-locking to the rhythms of attended sounds. However, such modulations are compatible with an alternate framework, where attention acts as a filter that enhances exogenously-driven neural auditory responses. Here we attempted to test several predictions arising from the oscillatory account by playing two tone streams varying across conditions in tone duration and presentation rate; participants attended to one stream or listened passively. Attentional modulation of the evoked waveform was roughly sinusoidal and scaled with presentation rate, whereas the passive response did not scale with rate. However, there was only limited evidence for continuation of modulations through the silence between sequences. These results suggest that attentionally-driven changes in phase alignment reflect synchronization of slow endogenous activity with the temporal structure of attended stimuli.
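One of the predictions tested here, that attentional modulation of the evoked waveform is roughly sinusoidal at the stimulus presentation rate, can be checked by fitting a rate-locked sinusoid to the attended-minus-passive difference waveform. A minimal sketch with synthetic data follows; the sampling rate, presentation rate, and amplitudes are assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(5)

# Hypothetical attended-minus-passive EEG difference waveform sampled at 100 Hz,
# for a tone stream presented at 2 Hz (all values are synthetic stand-ins).
fs, rate, dur = 100.0, 2.0, 10.0
t = np.arange(0, dur, 1 / fs)
diff_wave = 0.8 * np.sin(2 * np.pi * rate * t + 0.4) + rng.normal(0, 0.3, t.size)

def sinusoid(t, amp, phase, offset):
    # Sinusoid fixed at the stimulus presentation rate: a simple test of whether
    # the attentional modulation is "roughly sinusoidal and scaled with rate".
    return amp * np.sin(2 * np.pi * rate * t + phase) + offset

params, _ = curve_fit(sinusoid, t, diff_wave, p0=[1.0, 0.0, 0.0])
amp, phase, offset = params
print(f"fitted modulation amplitude: {amp:.2f} (phase {phase:.2f} rad)")
```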
Affiliation(s)
- Magdalena Kachlicka: Department of Psychological Sciences, Birkbeck, University of London, Malet Street, London WC1E 7HX, England
- Aeron Laffere: Department of Psychological Sciences, Birkbeck, University of London, Malet Street, London WC1E 7HX, England
- Fred Dick: Department of Psychological Sciences, Birkbeck, University of London, Malet Street, London WC1E 7HX, England; Division of Psychology & Language Sciences, UCL, Gower Street, London WC1E 6BT, England
- Adam Tierney: Department of Psychological Sciences, Birkbeck, University of London, Malet Street, London WC1E 7HX, England
12
Lee JH, Shim H, Gantz B, Choi I. Strength of Attentional Modulation on Cortical Auditory Evoked Responses Correlates with Speech-in-Noise Performance in Bimodal Cochlear Implant Users. Trends Hear 2022; 26:23312165221141143. PMID: 36464791; PMCID: PMC9726851; DOI: 10.1177/23312165221141143.
Abstract
Auditory selective attention is a crucial top-down cognitive mechanism for understanding speech in noise. Cochlear implant (CI) users display great variability in speech-in-noise performance that is not easily explained by peripheral auditory profile or demographic factors. Thus, it is imperative to understand whether auditory cognitive processes such as selective attention explain such variability. The present study directly addressed this question by quantifying attentional modulation of cortical auditory responses during an attention task and comparing its individual differences with speech-in-noise performance. In our attention experiment, participants with CI were given a pre-stimulus visual cue that directed their attention to either of two speech streams and were asked to select a deviant syllable in the target stream. The two speech streams consisted of a female voice saying "Up" five times every 800 ms and a male voice saying "Down" four times every 1 s. The onset of each syllable elicited distinct event-related potentials (ERPs). At each syllable onset, the difference in the amplitudes of ERPs between the two attentional conditions (attended - ignored) was computed. This ERP amplitude difference served as a proxy for attentional modulation strength. Our group-level analysis showed that the amplitude of ERPs was greater when the syllable was attended than ignored, showing that attention modulated cortical auditory responses. Moreover, the strength of attentional modulation showed a significant correlation with speech-in-noise performance. These results suggest that the attentional modulation of cortical auditory responses may provide a neural marker for predicting CI users' success in clinical tests of speech-in-noise listening.
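The analysis described (the ERP amplitude difference between attended and ignored conditions as a proxy for attentional modulation, correlated with speech-in-noise scores) can be sketched in a few lines. The values below are synthetic, and the built-in association is an assumption made so the example runs end to end.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(6)

n_subj = 20
# Hypothetical ERP amplitudes (microvolts), averaged over syllable onsets,
# for attended and ignored streams in each CI user.
erp_attended = rng.normal(2.0, 0.5, n_subj)
erp_ignored = rng.normal(1.5, 0.5, n_subj)

# Attentional modulation strength = attended minus ignored ERP amplitude.
attn_modulation = erp_attended - erp_ignored

# Synthetic speech-in-noise scores, loosely tied to modulation strength so the
# example reproduces the kind of positive correlation reported (assumption).
speech_in_noise = 50 + 10 * attn_modulation + rng.normal(0, 3, n_subj)

r, p = pearsonr(attn_modulation, speech_in_noise)
print(f"attention modulation vs. speech-in-noise: r = {r:.2f}, p = {p:.3f}")
```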
Affiliation(s)
- Jae-Hee Lee: Dept. Communication Sciences and Disorders, University of Iowa, Iowa City, IA, 52242, USA; Dept. Otolaryngology – Head and Neck Surgery, University of Iowa Hospitals and Clinics, Iowa City, IA, 52242, USA
- Hwan Shim: Dept. Electrical and Computer Engineering Technology, Rochester Institute of Technology, Rochester, NY, 14623, USA
- Bruce Gantz: Dept. Otolaryngology – Head and Neck Surgery, University of Iowa Hospitals and Clinics, Iowa City, IA, 52242, USA
- Inyong Choi: Dept. Communication Sciences and Disorders, University of Iowa, Iowa City, IA, 52242, USA; Dept. Otolaryngology – Head and Neck Surgery, University of Iowa Hospitals and Clinics, Iowa City, IA, 52242, USA
13
Kiremitçi I, Yilmaz Ö, Çelik E, Shahdloo M, Huth AG, Çukur T. Attentional Modulation of Hierarchical Speech Representations in a Multitalker Environment. Cereb Cortex 2021; 31:4986-5005. PMID: 34115102; PMCID: PMC8491717; DOI: 10.1093/cercor/bhab136.
Abstract
Humans are remarkably adept at listening to a desired speaker in a crowded environment, while filtering out nontarget speakers in the background. Attention is key to solving this difficult cocktail-party task, yet a detailed characterization of attentional effects on speech representations is lacking. It remains unclear at which levels of speech features, and to what extent, attentional modulation occurs in each brain area during the cocktail-party task. To address these questions, we recorded whole-brain blood-oxygen-level-dependent (BOLD) responses while subjects either passively listened to single-speaker stories, or selectively attended to a male or a female speaker in temporally overlaid stories in separate experiments. Spectral, articulatory, and semantic models of the natural stories were constructed. Intrinsic selectivity profiles were identified via voxelwise models fit to passive listening responses. Attentional modulations were then quantified based on model predictions for attended and unattended stories in the cocktail-party task. We find that attention causes broad modulations at multiple levels of speech representations while growing stronger toward later stages of processing, and that unattended speech is represented up to the semantic level in parabelt auditory cortex. These results provide insight into attentional mechanisms that underlie the ability to selectively listen to a desired speaker in noisy multispeaker environments.
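A highly simplified sketch of the voxelwise encoding-model logic described above: fit a regression model on passive-listening data, then compare how well feature predictions for the attended versus unattended story explain responses in the cocktail-party condition. Ridge regression, the feature dimensions, and the effect sizes are assumptions, not the study's implementation.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(7)

# Synthetic stand-ins: stimulus features (e.g., spectral or semantic) and the
# BOLD response of one voxel during passive listening (600 TRs, 50 features).
X_passive = rng.normal(size=(600, 50))
w_true = rng.normal(size=50)
y_passive = X_passive @ w_true + rng.normal(0, 1.0, 600)

# Fit the voxelwise encoding model on passive-listening data (ridge regression
# is typical for such models; the regularization value here is arbitrary).
model = Ridge(alpha=10.0).fit(X_passive, y_passive)

# Cocktail-party run: features of the attended and unattended stories, with the
# measured response assumed to be dominated by the attended story.
X_attended = rng.normal(size=(300, 50))
X_unattended = rng.normal(size=(300, 50))
y_cocktail = X_attended @ w_true + 0.3 * (X_unattended @ w_true) + rng.normal(0, 1.0, 300)

# Attentional modulation index: how much better the attended-story prediction
# matches the measured response than the unattended-story prediction.
r_att = np.corrcoef(model.predict(X_attended), y_cocktail)[0, 1]
r_unatt = np.corrcoef(model.predict(X_unattended), y_cocktail)[0, 1]
print(f"attended r = {r_att:.2f}, unattended r = {r_unatt:.2f}, modulation = {r_att - r_unatt:.2f}")
```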
Affiliation(s)
- Ibrahim Kiremitçi: Neuroscience Program, Sabuncu Brain Research Center, Bilkent University, Ankara TR-06800, Turkey; National Magnetic Resonance Research Center (UMRAM), Bilkent University, Ankara TR-06800, Turkey
- Özgür Yilmaz: National Magnetic Resonance Research Center (UMRAM), Bilkent University, Ankara TR-06800, Turkey; Department of Electrical and Electronics Engineering, Bilkent University, Ankara TR-06800, Turkey
- Emin Çelik: Neuroscience Program, Sabuncu Brain Research Center, Bilkent University, Ankara TR-06800, Turkey; National Magnetic Resonance Research Center (UMRAM), Bilkent University, Ankara TR-06800, Turkey
- Mo Shahdloo: National Magnetic Resonance Research Center (UMRAM), Bilkent University, Ankara TR-06800, Turkey; Department of Experimental Psychology, Wellcome Centre for Integrative Neuroimaging, University of Oxford, Oxford OX3 9DU, UK
- Alexander G Huth: Department of Neuroscience, The University of Texas at Austin, Austin, TX 78712, USA; Department of Computer Science, The University of Texas at Austin, Austin, TX 78712, USA; Helen Wills Neuroscience Institute, University of California, Berkeley, CA 94702, USA
- Tolga Çukur: Neuroscience Program, Sabuncu Brain Research Center, Bilkent University, Ankara TR-06800, Turkey; National Magnetic Resonance Research Center (UMRAM), Bilkent University, Ankara TR-06800, Turkey; Department of Electrical and Electronics Engineering, Bilkent University, Ankara TR-06800, Turkey; Helen Wills Neuroscience Institute, University of California, Berkeley, CA 94702, USA
14
Homma NY, Bajo VM. Lemniscal Corticothalamic Feedback in Auditory Scene Analysis. Front Neurosci 2021; 15:723893. PMID: 34489635; PMCID: PMC8417129; DOI: 10.3389/fnins.2021.723893.
Abstract
Sound information is transmitted from the ear to central auditory stations of the brain via several nuclei. In addition to these ascending pathways there exist descending projections that can influence the information processing at each of these nuclei. A major descending pathway in the auditory system is the feedback projection from layer VI of the primary auditory cortex (A1) to the ventral division of medial geniculate body (MGBv) in the thalamus. The corticothalamic axons have small glutamatergic terminals that can modulate thalamic processing and thalamocortical information transmission. Corticothalamic neurons also provide input to GABAergic neurons of the thalamic reticular nucleus (TRN) that receives collaterals from the ascending thalamic axons. The balance of corticothalamic and TRN inputs has been shown to refine frequency tuning, firing patterns, and gating of MGBv neurons. Therefore, the thalamus is not merely a relay stage in the chain of auditory nuclei but does participate in complex aspects of sound processing that include top-down modulations. In this review, we aim (i) to examine how lemniscal corticothalamic feedback modulates responses in MGBv neurons, and (ii) to explore how the feedback contributes to auditory scene analysis, particularly on frequency and harmonic perception. Finally, we will discuss potential implications of the role of corticothalamic feedback in music and speech perception, where precise spectral and temporal processing is essential.
Affiliation(s)
- Natsumi Y. Homma: Center for Integrative Neuroscience, University of California, San Francisco, San Francisco, CA, United States; Coleman Memorial Laboratory, Department of Otolaryngology – Head and Neck Surgery, University of California, San Francisco, San Francisco, CA, United States
- Victoria M. Bajo: Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
15
Auditory attentional filter in the absence of masking noise. Atten Percept Psychophys 2021; 83:1737-1751. PMID: 33389676; DOI: 10.3758/s13414-020-02210-z.
Abstract
Signals containing attended frequencies are facilitated while those with unexpected frequencies are suppressed by an auditory filtering process. The neurocognitive mechanism underlying the auditory attentional filter is, however, poorly understood. The olivocochlear bundle (OCB), a brainstem neural circuit that is part of the efferent system, has been suggested to be partly responsible for the filtering via its noise-dependent antimasking effect. The current study examined the role of the OCB in attentional filtering, particularly the validity of the antimasking hypothesis, by comparing attentional filters measured in quiet and in the presence of background noise in a group of normal-hearing listeners. Filters obtained in both conditions were comparable, suggesting that the presence of background noise is not crucial for attentional filter generation. In addition, comparison of frequency-specific changes of the cue-evoked enhancement component of filters in quiet and noise also did not reveal any major contribution of background noise to the cue effect. These findings argue against the involvement of an antimasking effect in the attentional process. Instead of the antimasking effect mediated via medial olivocochlear fibers, results from the current and earlier studies can be explained by frequency-specific modulation of afferent spontaneous activity by lateral olivocochlear fibers. It is proposed that the activity of these lateral fibers could be driven by top-down cortical control via a noise-independent mechanism. SIGNIFICANCE: The neural basis of the auditory attentional filter remains a fundamental but poorly understood area in auditory neuroscience. The efferent olivocochlear pathway that projects from the brainstem back to the cochlea has been suggested to mediate the attentional effect via its noise-dependent antimasking effect. The current study demonstrates that filter generation is mostly independent of the background noise, and therefore is unlikely to be mediated by the olivocochlear brainstem reflex. It is proposed that the entire cortico-olivocochlear system might instead be used to alter hearing sensitivity during focused attention via frequency-specific modulation of afferent spontaneous activity.
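An attentional filter of the kind measured here is essentially detection performance as a function of the probe's frequency distance from the cued frequency, estimated separately in quiet and in noise. The sketch below builds such filters from synthetic hit rates (the Gaussian filter shape and all values are assumptions) and compares their depth across conditions.

```python
import numpy as np

rng = np.random.default_rng(8)

# Probe frequencies relative to the cued frequency (in octaves).
probe_offsets = np.array([-1.0, -0.5, -0.25, 0.0, 0.25, 0.5, 1.0])

def simulated_hit_rate(offsets, peak=0.9, floor=0.55, width=0.35):
    # Gaussian-shaped attentional filter centered on the cued frequency (assumption).
    return floor + (peak - floor) * np.exp(-0.5 * (offsets / width) ** 2)

hits_quiet = simulated_hit_rate(probe_offsets) + rng.normal(0, 0.02, probe_offsets.size)
hits_noise = simulated_hit_rate(probe_offsets) + rng.normal(0, 0.02, probe_offsets.size)

# Filter "depth": detection advantage for the cued frequency over the most
# remote probes, computed for each listening condition.
for label, hits in [("quiet", hits_quiet), ("noise", hits_noise)]:
    depth = hits[probe_offsets == 0.0][0] - hits[np.abs(probe_offsets) == 1.0].mean()
    print(f"{label}: filter depth = {depth:.2f}")
```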
16
Fogerty D, Sevich VA, Healy EW. Spectro-temporal glimpsing of speech in noise: Regularity and coherence of masking patterns reduces uncertainty and increases intelligibility. J Acoust Soc Am 2020; 148:1552. PMID: 33003879; PMCID: PMC7500957; DOI: 10.1121/10.0001971.
Abstract
Adverse listening conditions involve glimpses of spectro-temporal speech information. This study investigated whether the acoustic organization of the spectro-temporal masking pattern affects speech glimpsing in "checkerboard" noise. The regularity and coherence of the masking pattern were varied. Regularity was reduced by randomizing the spectral or temporal gating of the masking noise. Coherence involved the spectral alignment of frequency bands across time or the temporal alignment of gated onsets/offsets across frequency bands. Experiment 1 investigated the effect of spectral or temporal coherence. Experiment 2 investigated independent and combined factors of regularity and coherence. Performance was best in spectro-temporally modulated noise having larger glimpses. Generally, performance also improved as the regularity and coherence of masker fluctuations increased, with regularity having a stronger effect than coherence. An acoustic glimpsing model suggested that the effect of regularity (but not coherence) could be partially attributed to the availability of glimpses retained after energetic masking. Performance tended to be better with maskers that were spectrally coherent than with those that were temporally coherent. Overall, performance was best when the spectro-temporal masking pattern imposed even spectral sampling and minimal temporal uncertainty, indicating that listeners use reliable masking patterns to aid in spectro-temporal speech glimpsing.
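A minimal sketch of the masker construction described above: a regular spectro-temporal "checkerboard" gating pattern, and a reduced-regularity version obtained by randomizing the temporal gating within each band. The band and frame counts are arbitrary, and this is only an illustration of the manipulation, not the study's stimulus code.

```python
import numpy as np

rng = np.random.default_rng(9)

n_bands, n_frames = 8, 40   # frequency bands x time frames of the masker

# Regular, coherent "checkerboard" mask: alternating on/off cells in
# frequency and time (1 = masker on, 0 = glimpse of speech).
bands = np.arange(n_bands)[:, None]
frames = np.arange(n_frames)[None, :]
checkerboard = (bands + frames) % 2

# Reduced regularity: randomize the temporal gating independently per band,
# keeping the overall proportion of masked cells the same (assumption).
random_gating = np.array([rng.permutation(row) for row in checkerboard])

for name, mask in [("regular checkerboard", checkerboard),
                   ("randomized gating", random_gating)]:
    print(f"{name}: {mask.mean():.2f} of spectro-temporal cells masked")
```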
Affiliation(s)
- Daniel Fogerty: Department of Communication Sciences and Disorders, University of South Carolina, 1705 College Street, Columbia, South Carolina 29208, USA
- Victoria A Sevich: Department of Speech and Hearing Science, The Ohio State University, 1070 Carmack Road, Columbus, Ohio 43210, USA
- Eric W Healy: Department of Speech and Hearing Science, The Ohio State University, 1070 Carmack Road, Columbus, Ohio 43210, USA
17
Choi JY, Perrachione TK. Time and information in perceptual adaptation to speech. Cognition 2019; 192:103982. PMID: 31229740; PMCID: PMC6732236; DOI: 10.1016/j.cognition.2019.05.019.
Abstract
Perceptual adaptation to a talker enables listeners to efficiently resolve the many-to-many mapping between variable speech acoustics and abstract linguistic representations. However, models of speech perception have not delved into the variety or the quantity of information necessary for successful adaptation, nor how adaptation unfolds over time. In three experiments using speeded classification of spoken words, we explored how the quantity (duration), quality (phonetic detail), and temporal continuity of talker-specific context contribute to facilitating perceptual adaptation to speech. In single- and mixed-talker conditions, listeners identified phonetically-confusable target words in isolation or preceded by carrier phrases of varying lengths and phonetic content, spoken by the same talker as the target word. Word identification was always slower in mixed-talker conditions than single-talker ones. However, interference from talker variability decreased as the duration of preceding speech increased but was not affected by the amount of preceding talker-specific phonetic information. Furthermore, efficiency gains from adaptation depended on temporal continuity between preceding speech and the target word. These results suggest that perceptual adaptation to speech may be understood via models of auditory streaming, where perceptual continuity of an auditory object (e.g., a talker) facilitates allocation of attentional resources, resulting in more efficient perceptual processing.
Affiliation(s)
- Ja Young Choi: Department of Speech, Language, and Hearing Sciences, Boston University, Boston, MA, United States; Program in Speech and Hearing Bioscience and Technology, Harvard University, Cambridge, MA, United States
- Tyler K Perrachione: Department of Speech, Language, and Hearing Sciences, Boston University, Boston, MA, United States
18
Doucet GE, Luber MJ, Balchandani P, Sommer IE, Frangou S. Abnormal auditory tonotopy in patients with schizophrenia. npj Schizophrenia 2019; 5:16. PMID: 31578332; PMCID: PMC6775081; DOI: 10.1038/s41537-019-0084-x.
Abstract
Auditory hallucinations are among the most prevalent and most distressing symptoms of schizophrenia. Despite significant progress, it is still unclear whether auditory hallucinations arise from abnormalities in primary sensory processing or whether they represent failures of higher-order functions. To address this knowledge gap, we capitalized on the increased spatial resolution afforded by ultra-high field imaging at 7 Tesla to investigate the tonotopic organization of the auditory cortex in patients with schizophrenia with a history of recurrent hallucinations. Tonotopy is a fundamental feature of the functional organization of the auditory cortex that is established very early in development and predates the onset of symptoms by decades. Compared to healthy participants, patients showed abnormally increased activation and altered tonotopic organization of the auditory cortex during a purely perceptual task, which involved passive listening to tones across a range of frequencies (88–8000 Hz). These findings suggest that the predisposition to auditory hallucinations is likely to be predicated on abnormalities in the functional organization of the auditory cortex, which may serve as a biomarker for the early identification of vulnerable individuals.
Affiliation(s)
- Gaelle E Doucet: Department of Psychiatry, Icahn School of Medicine at Mount Sinai, New York, NY, 10029, USA
- Maxwell J Luber: Department of Psychiatry, Icahn School of Medicine at Mount Sinai, New York, NY, 10029, USA
- Priti Balchandani: Translational and Molecular Imaging Institute, Icahn School of Medicine at Mount Sinai, New York, NY, 10029, USA
- Iris E Sommer: University Medical Center Groningen, 9713AW, Groningen, Netherlands
- Sophia Frangou: Department of Psychiatry, Icahn School of Medicine at Mount Sinai, New York, NY, 10029, USA
19
Rutten S, Santoro R, Hervais-Adelman A, Formisano E, Golestani N. Cortical encoding of speech enhances task-relevant acoustic information. Nat Hum Behav 2019; 3:974-987. DOI: 10.1038/s41562-019-0648-9.
20
Grisendi T, Reynaud O, Clarke S, Da Costa S. Processing pathways for emotional vocalizations. Brain Struct Funct 2019; 224:2487-2504. DOI: 10.1007/s00429-019-01912-x.
21
Object-based attention in complex, naturalistic auditory streams. Sci Rep 2019; 9:2854. PMID: 30814547; PMCID: PMC6393668; DOI: 10.1038/s41598-019-39166-6.
Abstract
In vision, objects have been described as the 'units' on which non-spatial attention operates in many natural settings. Here, we test the idea of object-based attention in the auditory domain within ecologically valid auditory scenes, composed of two spatially and temporally overlapping sound streams (speech signal vs. environmental soundscapes in Experiment 1 and two speech signals in Experiment 2). Top-down attention was directed to one or the other auditory stream by a non-spatial cue. To test for high-level, object-based attention effects we introduce an auditory repetition detection task in which participants have to detect brief repetitions of auditory objects, ruling out any possible confounds with spatial or feature-based attention. The participants' responses were significantly faster and more accurate in the valid cue condition compared to the invalid cue condition, indicating a robust cue-validity effect of high-level, object-based auditory attention.
22
Holt LL, Tierney AT, Guerra G, Laffere A, Dick F. Dimension-selective attention as a possible driver of dynamic, context-dependent re-weighting in speech processing. Hear Res 2018; 366:50-64. PMID: 30131109; PMCID: PMC6107307; DOI: 10.1016/j.heares.2018.06.014.
Abstract
The contribution of acoustic dimensions to an auditory percept is dynamically adjusted and reweighted based on prior experience about how informative these dimensions are across the long-term and short-term environment. This is especially evident in speech perception, where listeners differentially weight information across multiple acoustic dimensions, and use this information selectively to update expectations about future sounds. The dynamic and selective adjustment of how acoustic input dimensions contribute to perception has made it tempting to conceive of this as a form of non-spatial auditory selective attention. Here, we review several human speech perception phenomena that might be consistent with auditory selective attention although, as of yet, the literature does not definitively support a mechanistic tie. We relate these human perceptual phenomena to illustrative nonhuman animal neurobiological findings that offer informative guideposts in how to test mechanistic connections. We next present a novel empirical approach that can serve as a methodological bridge from human research to animal neurobiological studies. Finally, we describe four preliminary results that demonstrate its utility in advancing understanding of human non-spatial dimension-based auditory selective attention.
Affiliation(s)
- Lori L Holt: Department of Psychology, Carnegie Mellon University, Pittsburgh, PA, 15213, USA; Center for the Neural Basis of Cognition, Carnegie Mellon University, Pittsburgh, PA, 15213, USA
- Adam T Tierney: Department of Psychological Sciences, Birkbeck College, University of London, London, WC1E 7HX, UK; Centre for Brain and Cognitive Development, Birkbeck College, London, WC1E 7HX, UK
- Giada Guerra: Department of Psychological Sciences, Birkbeck College, University of London, London, WC1E 7HX, UK; Centre for Brain and Cognitive Development, Birkbeck College, London, WC1E 7HX, UK
- Aeron Laffere: Department of Psychological Sciences, Birkbeck College, University of London, London, WC1E 7HX, UK
- Frederic Dick: Department of Psychological Sciences, Birkbeck College, University of London, London, WC1E 7HX, UK; Centre for Brain and Cognitive Development, Birkbeck College, London, WC1E 7HX, UK; Department of Experimental Psychology, University College London, London, WC1H 0AP, UK
23
Hjortkjær J, Kassuba T, Madsen KH, Skov M, Siebner HR. Task-Modulated Cortical Representations of Natural Sound Source Categories. Cereb Cortex 2018; 28:295-306. PMID: 29069292; DOI: 10.1093/cercor/bhx263.
Abstract
In everyday sound environments, we recognize sound sources and events by attending to relevant aspects of an acoustic input. Evidence about the cortical mechanisms involved in extracting relevant category information from natural sounds is, however, limited to speech. Here, we used functional MRI to measure cortical response patterns while human listeners categorized real-world sounds created by objects of different solid materials (glass, metal, wood) manipulated by different sound-producing actions (striking, rattling, dropping). In different sessions, subjects had to identify either material or action categories in the same sound stimuli. The sound-producing action and the material of the sound source could be decoded from multivoxel activity patterns in auditory cortex, including Heschl's gyrus and planum temporale. Importantly, decoding success depended on task relevance and category discriminability. Action categories were more accurately decoded in auditory cortex when subjects identified action information. Conversely, the material of the same sound sources was decoded with higher accuracy in the inferior frontal cortex during material identification. Representational similarity analyses indicated that both early and higher-order auditory cortex selectively enhanced spectrotemporal features relevant to the target category. Together, the results indicate a cortical selection mechanism that favors task-relevant information in the processing of nonvocal sound categories.
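The multivoxel decoding logic described here can be sketched with a toy example: the same simulated activity patterns are used to decode either the sound-producing action or the material category with cross-validation. The pattern dimensions and signal strengths are assumptions.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(10)

# Synthetic multivoxel patterns: 90 trials x 100 voxels, each trial labeled by
# sound-producing action (0-2) and material (0-2). Signal strengths are assumed.
n_trials, n_vox = 90, 100
action = rng.integers(0, 3, n_trials)
material = rng.integers(0, 3, n_trials)
action_patterns = rng.normal(size=(3, n_vox))
material_patterns = rng.normal(size=(3, n_vox))
X = (0.8 * action_patterns[action] + 0.4 * material_patterns[material]
     + rng.normal(0, 1.0, (n_trials, n_vox)))

# Cross-validated decoding of each category dimension from the same patterns,
# analogous to asking which information can be read out of auditory cortex.
for name, labels in [("action", action), ("material", material)]:
    acc = cross_val_score(LinearSVC(max_iter=5000), X, labels, cv=5).mean()
    print(f"decoding {name}: accuracy = {acc:.2f} (chance 0.33)")
```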
Affiliation(s)
- Jens Hjortkjær
- Danish Research Centre for Magnetic Resonance, Centre for Functional and Diagnostic Imaging and Research, Copenhagen University Hospital Hvidovre, 2650 Hvidovre, Denmark; Hearing Systems Group, Department of Electrical Engineering, Technical University of Denmark, 2800 Kgs. Lyngby, Denmark
- Tanja Kassuba
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08544, USA
- Kristoffer H Madsen
- Danish Research Centre for Magnetic Resonance, Centre for Functional and Diagnostic Imaging and Research, Copenhagen University Hospital Hvidovre, 2650 Hvidovre, Denmark; Cognitive Systems, Department of Applied Mathematics and Computer Science, Technical University of Denmark, 2800 Kgs. Lyngby, Denmark
- Martin Skov
- Danish Research Centre for Magnetic Resonance, Centre for Functional and Diagnostic Imaging and Research, Copenhagen University Hospital Hvidovre, 2650 Hvidovre, Denmark; Decision Neuroscience Research Group, Copenhagen Business School, 2000 Frederiksberg, Denmark
- Hartwig R Siebner
- Danish Research Centre for Magnetic Resonance, Centre for Functional and Diagnostic Imaging and Research, Copenhagen University Hospital Hvidovre, 2650 Hvidovre, Denmark; Department of Neurology, Copenhagen University Hospital Bispebjerg, Copenhagen, 2400 København NV, Denmark
|
25
|
Schwartz ZP, David SV. Focal Suppression of Distractor Sounds by Selective Attention in Auditory Cortex. Cereb Cortex 2018; 28:323-339. [PMID: 29136104 PMCID: PMC6057511 DOI: 10.1093/cercor/bhx288] [Citation(s) in RCA: 26] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/11/2017] [Indexed: 11/15/2022] Open
Abstract
Auditory selective attention is required for parsing crowded acoustic environments, but cortical systems mediating the influence of behavioral state on auditory perception are not well characterized. Previous neurophysiological studies suggest that attention produces a general enhancement of neural responses to important target sounds versus irrelevant distractors. However, behavioral studies suggest that in the presence of masking noise, attention provides a focal suppression of distractors that compete with targets. Here, we compared effects of attention on cortical responses to masking versus non-masking distractors, controlling for effects of listening effort and general task engagement. We recorded single-unit activity from primary auditory cortex (A1) of ferrets during behavior and found that selective attention decreased responses to distractors masking targets in the same spectral band, compared with spectrally distinct distractors. This suppression enhanced neural target detection thresholds, suggesting that limited attention resources serve to focally suppress responses to distractors that interfere with target detection. Changing effort by manipulating target salience consistently modulated spontaneous but not evoked activity. Task engagement and changing effort tended to affect the same neurons, while attention affected an independent population, suggesting that distinct feedback circuits mediate effects of attention and effort in A1.
Affiliation(s)
- Zachary P Schwartz
- Neuroscience Graduate Program, Oregon Health and Science University, OR, USA
- Stephen V David
- Oregon Hearing Research Center, Oregon Health and Science University, OR, USA
- Address correspondence to Stephen V. David, Oregon Hearing Research Center, Oregon Health and Science University, 3181 SW Sam Jackson Park Road, MC L335A, Portland, OR 97239, USA.
|
26
|
Riecke L, Peters JC, Valente G, Poser BA, Kemper VG, Formisano E, Sorger B. Frequency-specific attentional modulation in human primary auditory cortex and midbrain. Neuroimage 2018; 174:274-287. [DOI: 10.1016/j.neuroimage.2018.03.038] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/23/2017] [Revised: 03/15/2018] [Accepted: 03/17/2018] [Indexed: 12/24/2022] Open
|
27
|
Fisher JM, Dick FK, Levy DF, Wilson SM. Neural representation of vowel formants in tonotopic auditory cortex. Neuroimage 2018; 178:574-582. [PMID: 29860083 DOI: 10.1016/j.neuroimage.2018.05.072] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/13/2018] [Revised: 05/29/2018] [Accepted: 05/30/2018] [Indexed: 11/25/2022] Open
Abstract
Speech sounds are encoded by distributed patterns of activity in bilateral superior temporal cortex. However, it is unclear whether speech sounds are topographically represented in cortex, or which acoustic or phonetic dimensions might be spatially mapped. Here, using functional MRI, we investigated the potential spatial representation of vowels, which are largely distinguished from one another by the frequencies of their first and second formants, i.e. peaks in their frequency spectra. This allowed us to generate clear hypotheses about the representation of specific vowels in tonotopic regions of auditory cortex. We scanned participants as they listened to multiple natural tokens of the vowels [ɑ] and [i], which we selected because their first and second formants overlap minimally. Formant-based regions of interest were defined for each vowel based on spectral analysis of the vowel stimuli and independently acquired tonotopic maps for each participant. We found that perception of [ɑ] and [i] yielded differential activation of tonotopic regions corresponding to formants of [ɑ] and [i], such that each vowel was associated with increased signal in tonotopic regions corresponding to its own formants. This pattern was observed in Heschl's gyrus and the superior temporal gyrus, in both hemispheres, and for both the first and second formants. Using linear discriminant analysis of mean signal change in formant-based regions of interest, the identity of untrained vowels was predicted with ∼73% accuracy. Our findings show that cortical encoding of vowels is scaffolded on tonotopy, a fundamental organizing principle of auditory cortex that is not language-specific.
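The ROI-based classification idea summarized above (linear discriminant analysis on mean signal change in formant-defined regions of interest) can be sketched briefly. The feature values and labels below are simulated placeholders, not the study's data.

```python
# Sketch of LDA vowel classification from mean signal change in two
# formant-defined ROIs. All values are simulated for illustration.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(1)
n = 20
# columns: mean % signal change in the ROI tuned to [a]-range formants
# and in the ROI tuned to [i]-range formants
tokens_a = np.column_stack([rng.normal(0.6, 0.2, n), rng.normal(0.2, 0.2, n)])
tokens_i = np.column_stack([rng.normal(0.2, 0.2, n), rng.normal(0.6, 0.2, n)])
X = np.vstack([tokens_a, tokens_i])
y = np.array(["a"] * n + ["i"] * n)

acc = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=LeaveOneOut())
print(f"leave-one-out classification accuracy: {acc.mean():.2f}")
```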
Affiliation(s)
- Julia M Fisher
- Department of Linguistics, University of Arizona, Tucson, AZ, USA; Statistics Consulting Laboratory, BIO5 Institute, University of Arizona, Tucson, AZ, USA
- Frederic K Dick
- Department of Psychological Sciences, Birkbeck College, University of London, UK; Birkbeck-UCL Center for Neuroimaging, London, UK; Department of Experimental Psychology, University College London, UK
- Deborah F Levy
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
- Stephen M Wilson
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
|
28
|
Hamilton LS, Edwards E, Chang EF. A Spatial Map of Onset and Sustained Responses to Speech in the Human Superior Temporal Gyrus. Curr Biol 2018; 28:1860-1871.e4. [DOI: 10.1016/j.cub.2018.04.033] [Citation(s) in RCA: 98] [Impact Index Per Article: 16.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/14/2017] [Revised: 03/04/2018] [Accepted: 04/10/2018] [Indexed: 01/05/2023]
|
29
|
Hausfeld L, Riecke L, Formisano E. Acoustic and higher-level representations of naturalistic auditory scenes in human auditory and frontal cortex. Neuroimage 2018. [DOI: 10.1016/j.neuroimage.2018.02.065] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/17/2022] Open
|
30
|
Mapping Frequency-Specific Tone Predictions in the Human Auditory Cortex at High Spatial Resolution. J Neurosci 2018; 38:4934-4942. [PMID: 29712781 DOI: 10.1523/jneurosci.2205-17.2018] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/03/2017] [Revised: 02/26/2018] [Accepted: 03/01/2018] [Indexed: 11/21/2022] Open
Abstract
Auditory inputs reaching our ears are often incomplete, but our brains nevertheless transform them into rich and complete perceptual phenomena such as meaningful conversations or pleasurable music. It has been hypothesized that our brains extract regularities in inputs, which enables us to predict the upcoming stimuli, leading to efficient sensory processing. However, it is unclear whether tone predictions are encoded with similar specificity as perceived signals. Here, we used high-field fMRI to investigate whether human auditory regions encode one of the most defining characteristics of auditory perception: the frequency of predicted tones. Two pairs of tone sequences were presented in ascending or descending directions, with the last tone omitted in half of the trials. Every pair of incomplete sequences contained identical sounds, but was associated with different expectations about the last tone (a high- or low-frequency target). This allowed us to disambiguate predictive signaling from sensory-driven processing. We recorded fMRI responses from eight female participants during passive listening to complete and incomplete sequences. Inspection of specificity and spatial patterns of responses revealed that target frequencies were encoded similarly during their presentations, as well as during omissions, suggesting frequency-specific encoding of predicted tones in the auditory cortex (AC). Importantly, frequency specificity of predictive signaling was observed already at the earliest levels of auditory cortical hierarchy: in the primary AC. Our findings provide evidence for content-specific predictive processing starting at the earliest cortical levels.SIGNIFICANCE STATEMENT Given the abundance of sensory information around us in any given moment, it has been proposed that our brain uses contextual information to prioritize and form predictions about incoming signals. However, there remains a surprising lack of understanding of the specificity and content of such prediction signaling; for example, whether a predicted tone is encoded with similar specificity as a perceived tone. Here, we show that early auditory regions encode the frequency of a tone that is predicted yet omitted. Our findings contribute to the understanding of how expectations shape sound processing in the human auditory cortex and provide further insights into how contextual information influences computations in neuronal circuits.
|
31
|
Da Costa S, Clarke S, Crottaz-Herbette S. Keeping track of sound objects in space: The contribution of early-stage auditory areas. Hear Res 2018; 366:17-31. [PMID: 29643021 DOI: 10.1016/j.heares.2018.03.027] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/15/2017] [Revised: 03/21/2018] [Accepted: 03/28/2018] [Indexed: 12/01/2022]
Abstract
The influential dual-stream model of auditory processing stipulates that information pertaining to the meaning and to the position of a given sound object is processed in parallel along two distinct pathways, the ventral and dorsal auditory streams. Functional independence of the two processing pathways is well documented by the conscious experience of patients with focal hemispheric lesions. On the other hand, there is growing evidence that the meaning and the position of a sound are combined early in the processing pathway, possibly already at the level of early-stage auditory areas. Here, we investigated how early auditory areas integrate sound object meaning and space (simulated by interaural time differences) using a repetition suppression fMRI paradigm at 7 T. Subjects listened passively to environmental sounds presented in blocks of repetitions of the same sound object (same category) or different sound objects (different categories), perceived either in the left or right space (no change within block) or shifted left-to-right or right-to-left halfway through the block (change within block). Environmental sounds activated bilaterally the superior temporal gyrus, middle temporal gyrus, inferior frontal gyrus, and right precentral cortex. Repetition suppression effects were measured within bilateral early-stage auditory areas in the lateral portion of Heschl's gyrus and the posterior superior temporal plane. Left lateral early-stage areas showed significant effects of position and change, as well as Category x Initial Position and Category x Change in Position interactions, while right lateral areas showed a main effect of category and a Category x Change in Position interaction. The combined evidence from our study and from previous studies speaks in favour of a position-linked representation of sound objects, which is independent of semantic encoding within the ventral stream and of spatial encoding within the dorsal stream. We argue for a third auditory stream, which has its origin in lateral belt areas and tracks sound objects across space.
Affiliation(s)
- Sandra Da Costa
- Centre d'Imagerie BioMédicale (CIBM), EPFL et Universités de Lausanne et de Genève, Bâtiment CH, Station 6, CH-1015 Lausanne, Switzerland
- Stephanie Clarke
- Service de Neuropsychologie et de Neuroréhabilitation, CHUV, Université de Lausanne, Avenue Pierre Decker 5, CH-1011 Lausanne, Switzerland
- Sonia Crottaz-Herbette
- Service de Neuropsychologie et de Neuroréhabilitation, CHUV, Université de Lausanne, Avenue Pierre Decker 5, CH-1011 Lausanne, Switzerland
|
32
|
Rinne T, Muers RS, Salo E, Slater H, Petkov CI. Functional Imaging of Audio-Visual Selective Attention in Monkeys and Humans: How do Lapses in Monkey Performance Affect Cross-Species Correspondences? Cereb Cortex 2018; 27:3471-3484. [PMID: 28419201 PMCID: PMC5654311 DOI: 10.1093/cercor/bhx092] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/14/2016] [Indexed: 11/22/2022] Open
Abstract
The cross-species correspondences and differences in how attention modulates brain responses in humans and animal models are poorly understood. We trained 2 monkeys to perform an audio–visual selective attention task during functional magnetic resonance imaging (fMRI), rewarding them to attend to stimuli in one modality while ignoring those in the other. Monkey fMRI identified regions strongly modulated by auditory or visual attention. Surprisingly, auditory attention-related modulations were much more restricted in monkeys than humans performing the same tasks during fMRI. Further analyses ruled out trivial explanations, suggesting that labile selective-attention performance was associated with inhomogeneous modulations in wide cortical regions in the monkeys. The findings provide initial insights into how audio–visual selective attention modulates the primate brain, identify sources for “lost” attention effects in monkeys, and carry implications for modeling the neurobiology of human cognition with nonhuman animals.
Affiliation(s)
- Teemu Rinne
- Department of Psychology and Logopedics, University of Helsinki, Helsinki, Finland; Advanced Magnetic Imaging Centre, Aalto University School of Science, Espoo, Finland
- Ross S Muers
- Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, UK; Centre for Behaviour and Evolution, Newcastle University, Newcastle upon Tyne, UK
- Emma Salo
- Department of Psychology and Logopedics, University of Helsinki, Helsinki, Finland
- Heather Slater
- Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, UK; Centre for Behaviour and Evolution, Newcastle University, Newcastle upon Tyne, UK
- Christopher I Petkov
- Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, UK; Centre for Behaviour and Evolution, Newcastle University, Newcastle upon Tyne, UK
|
33
|
David SV. Incorporating behavioral and sensory context into spectro-temporal models of auditory encoding. Hear Res 2018; 360:107-123. [PMID: 29331232 PMCID: PMC6292525 DOI: 10.1016/j.heares.2017.12.021] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 07/27/2017] [Revised: 12/18/2017] [Accepted: 12/26/2017] [Indexed: 01/11/2023]
Abstract
For several decades, auditory neuroscientists have used spectro-temporal encoding models to understand how neurons in the auditory system represent sound. Derived from early applications of systems identification tools to the auditory periphery, the spectro-temporal receptive field (STRF) and more sophisticated variants have emerged as an efficient means of characterizing representation throughout the auditory system. Most of these encoding models describe neurons as static sensory filters. However, auditory neural coding is not static. Sensory context, reflecting the acoustic environment, and behavioral context, reflecting the internal state of the listener, can both influence sound-evoked activity, particularly in central auditory areas. This review explores recent efforts to integrate context into spectro-temporal encoding models. It begins with a brief tutorial on the basics of estimating and interpreting STRFs. Then it describes three recent studies that have characterized contextual effects on STRFs, emerging over a range of timescales, from many minutes to tens of milliseconds. An important theme of this work is not simply that context influences auditory coding, but also that contextual effects span a large continuum of internal states. The added complexity of these context-dependent models introduces new experimental and theoretical challenges that must be addressed in order to be used effectively. Several new methodological advances promise to address these limitations and allow the development of more comprehensive context-dependent models in the future.
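The STRF estimation that this review takes as its starting point is, at its core, a regularized linear regression of firing rate on the lagged stimulus spectrogram. A minimal sketch with simulated data (not the review's code; the dimensions, regularization value, and ground-truth filter are illustrative assumptions) might look like this:

```python
# Sketch of basic STRF estimation: ridge regression of a simulated firing rate
# onto the recent spectro-temporal history of a simulated stimulus spectrogram.
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view
from sklearn.linear_model import Ridge

rng = np.random.default_rng(2)
n_freq, n_time, n_lags = 18, 2000, 15                 # channels, time bins, history length
spec = rng.standard_normal((n_freq, n_time))          # stimulus spectrogram (freq x time)

true_strf = rng.standard_normal((n_freq, n_lags)) * np.hanning(n_lags)  # ground-truth filter

# design matrix of lagged spectrogram slices: one row per time bin
lagged = sliding_window_view(spec, n_lags, axis=1)    # (freq, time - lags + 1, lags)
X = lagged.transpose(1, 0, 2).reshape(lagged.shape[1], -1)
rate = X @ true_strf.ravel() + rng.standard_normal(X.shape[0])  # simulated firing rate

strf = Ridge(alpha=100.0).fit(X, rate).coef_.reshape(n_freq, n_lags)
print("correlation with ground truth:", np.corrcoef(strf.ravel(), true_strf.ravel())[0, 1])
```

Context-dependent variants of the kind discussed in the review would add terms (e.g., state- or stimulus-history-dependent gains) to this same regression framework.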
Affiliation(s)
- Stephen V David
- Oregon Hearing Research Center, Oregon Health & Science University, 3181 SW Sam Jackson Park Rd, MC L335A, Portland, OR 97239, United States
|
34
|
Ultra-high field MRI: Advancing systems neuroscience towards mesoscopic human brain function. Neuroimage 2018; 168:345-357. [DOI: 10.1016/j.neuroimage.2017.01.028] [Citation(s) in RCA: 106] [Impact Index Per Article: 17.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/04/2016] [Revised: 11/06/2016] [Accepted: 01/12/2017] [Indexed: 01/26/2023] Open
|
35
|
Riecke L, Peters JC, Valente G, Kemper VG, Formisano E, Sorger B. Frequency-Selective Attention in Auditory Scenes Recruits Frequency Representations Throughout Human Superior Temporal Cortex. Cereb Cortex 2018; 27:3002-3014. [PMID: 27230215 DOI: 10.1093/cercor/bhw160] [Citation(s) in RCA: 25] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/22/2023] Open
Abstract
A sound of interest may be tracked amid other salient sounds by focusing attention on its characteristic features including its frequency. Functional magnetic resonance imaging findings have indicated that frequency representations in human primary auditory cortex (AC) contribute to this feat. However, attentional modulations were examined at relatively low spatial and spectral resolutions, and frequency-selective contributions outside the primary AC could not be established. To address these issues, we compared blood oxygenation level-dependent (BOLD) responses in the superior temporal cortex of human listeners while they identified single frequencies versus listened selectively for various frequencies within a multifrequency scene. Using best-frequency mapping, we observed that the detailed spatial layout of attention-induced BOLD response enhancements in primary AC follows the tonotopy of stimulus-driven frequency representations-analogous to the "spotlight" of attention enhancing visuospatial representations in retinotopic visual cortex. Moreover, using an algorithm trained to discriminate stimulus-driven frequency representations, we could successfully decode the focus of frequency-selective attention from listeners' BOLD response patterns in nonprimary AC. Our results indicate that the human brain facilitates selective listening to a frequency of interest in a scene by reinforcing the fine-grained activity pattern throughout the entire superior temporal cortex that would be evoked if that frequency was present alone.
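The decoding logic summarized above (train a classifier on stimulus-driven frequency representations, then apply it to patterns recorded during selective listening) can be sketched as a cross-condition classification. Everything below is simulated; the voxel counts, signal scales, and noise levels are assumptions for illustration only.

```python
# Sketch of cross-condition decoding: train on single-frequency conditions,
# test on attention-to-frequency conditions. All arrays are simulated.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n_vox = 300
templates = {"low": rng.standard_normal(n_vox), "high": rng.standard_normal(n_vox)}

def simulate(freq, n, scale=1.0, noise=1.0):
    """Simulated BOLD patterns carrying a (possibly weakened) frequency signature."""
    return templates[freq] * scale + rng.standard_normal((n, n_vox)) * noise

# training data: single frequencies presented in isolation
X_train = np.vstack([simulate("low", 40), simulate("high", 40)])
y_train = ["low"] * 40 + ["high"] * 40
# test data: multifrequency scene, attention directed to one frequency (weaker signal)
X_test = np.vstack([simulate("low", 20, scale=0.4), simulate("high", 20, scale=0.4)])
y_test = ["low"] * 20 + ["high"] * 20

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("attended-frequency decoding accuracy:", clf.score(X_test, y_test))
```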
Affiliation(s)
- Lars Riecke
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, 6229 EV Maastricht, The Netherlands
- Judith C Peters
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, 6229 EV Maastricht, The Netherlands; Netherlands Institute for Neuroscience, Institute of the Royal Netherlands Academy of Arts and Sciences (KNAW), 1105 BA Amsterdam, The Netherlands
- Giancarlo Valente
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, 6229 EV Maastricht, The Netherlands
- Valentin G Kemper
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, 6229 EV Maastricht, The Netherlands
- Elia Formisano
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, 6229 EV Maastricht, The Netherlands
- Bettina Sorger
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, 6229 EV Maastricht, The Netherlands
|
36
|
Chang KH, Thomas JM, Boynton GM, Fine I. Reconstructing Tone Sequences from Functional Magnetic Resonance Imaging Blood-Oxygen Level Dependent Responses within Human Primary Auditory Cortex. Front Psychol 2017; 8:1983. [PMID: 29184522 PMCID: PMC5694557 DOI: 10.3389/fpsyg.2017.01983] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/25/2017] [Accepted: 10/30/2017] [Indexed: 01/12/2023] Open
Abstract
Here we show that, using functional magnetic resonance imaging (fMRI) blood-oxygen level dependent (BOLD) responses in human primary auditory cortex, it is possible to reconstruct the sequence of tones that a person has been listening to over time. First, we characterized the tonotopic organization of each subject’s auditory cortex by measuring auditory responses to randomized pure tone stimuli and modeling the frequency tuning of each fMRI voxel as a Gaussian in log frequency space. Then, we tested our model by examining its ability to work in reverse. Auditory responses were re-collected in the same subjects, except this time they listened to sequences of frequencies taken from simple songs (e.g., “Somewhere Over the Rainbow”). By finding the frequency that minimized the difference between the model’s prediction of BOLD responses and actual BOLD responses, we were able to reconstruct tone sequences, with mean frequency estimation errors of half an octave or less, and little evidence of systematic biases.
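The two-step procedure described here (fit a Gaussian tuning curve per voxel in log-frequency space, then reconstruct a tone by minimizing the mismatch between predicted and observed responses) can be sketched compactly. The voxel tuning parameters, noise level, and grid below are invented for illustration and are not the study's values.

```python
# Sketch of model-based tone reconstruction from simulated voxel responses:
# (1) each voxel has Gaussian tuning in log frequency, (2) the presented tone
# is estimated as the frequency whose predicted pattern best matches the data.
import numpy as np

rng = np.random.default_rng(4)
n_vox = 200
centers = rng.uniform(np.log2(200), np.log2(8000), n_vox)   # voxel best frequencies (log2 Hz)
widths = rng.uniform(0.5, 1.5, n_vox)                        # tuning widths (octaves)

def predicted_pattern(freq_hz):
    """Predicted response of every voxel to a pure tone at freq_hz."""
    return np.exp(-0.5 * ((np.log2(freq_hz) - centers) / widths) ** 2)

true_freq = 440.0
observed = predicted_pattern(true_freq) + 0.3 * rng.standard_normal(n_vox)

grid = np.logspace(np.log10(200), np.log10(8000), 500)
errors = [np.sum((observed - predicted_pattern(f)) ** 2) for f in grid]
estimate = grid[int(np.argmin(errors))]
print(f"true {true_freq:.0f} Hz, estimated {estimate:.0f} Hz "
      f"({abs(np.log2(estimate / true_freq)):.2f} octaves error)")
```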
Affiliation(s)
- Kelly H Chang
- Department of Psychology, University of Washington, Seattle, WA, United States
- Jessica M Thomas
- Department of Psychology, University of Washington, Seattle, WA, United States
- Geoffrey M Boynton
- Department of Psychology, University of Washington, Seattle, WA, United States
- Ione Fine
- Department of Psychology, University of Washington, Seattle, WA, United States
|
37
|
Fernández-Soto A, Martínez-Rodrigo A, Moncho-Bogani J, Latorre JM, Fernández-Caballero A. Neural Correlates of Phrase Quadrature Perception in Harmonic Rhythm: An EEG Study Using a Brain-Computer Interface. Int J Neural Syst 2017; 28:1750054. [PMID: 29298521 DOI: 10.1142/s012906571750054x] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022]
Abstract
To establish the neural correlates of phrase quadrature perception in harmonic rhythm, a musical experiment was designed to present music-evoked stimuli related to one important aspect of harmonic rhythm, namely phrase quadrature. Brain activity was acquired with electroencephalography (EEG) through a brain-computer interface. The power spectrum of each EEG channel was estimated to characterize how power is distributed as a function of frequency. The results of processing the acquired signals are in line with previous studies that used different musical parameters to induce emotions. Indeed, the experiment shows statistical differences in the theta and alpha bands between the fulfillment and the break of phrase quadrature, an important cue of harmonic rhythm, in two classical sonatas.
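The band-power comparison described above can be illustrated with a short sketch: estimate each channel's power spectrum with Welch's method and summarize power in the theta and alpha bands. The simulated EEG, sampling rate, segment length, and band limits below are illustrative assumptions, not the study's parameters.

```python
# Sketch of Welch-based band-power estimation for one simulated EEG channel.
import numpy as np
from scipy.signal import welch

fs = 256.0
rng = np.random.default_rng(9)
eeg = rng.standard_normal(int(fs * 30))                           # 30 s of simulated EEG
eeg += 0.5 * np.sin(2 * np.pi * 10 * np.arange(eeg.size) / fs)    # add a 10 Hz (alpha) rhythm

freqs, psd = welch(eeg, fs=fs, nperseg=int(fs * 2))               # 2 s segments

def band_power(lo, hi):
    """Integrate the power spectral density between lo and hi Hz."""
    mask = (freqs >= lo) & (freqs < hi)
    return psd[mask].sum() * (freqs[1] - freqs[0])

print(f"theta (4-8 Hz) power: {band_power(4, 8):.3f}")
print(f"alpha (8-13 Hz) power: {band_power(8, 13):.3f}")
```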
Affiliation(s)
- Arturo Martínez-Rodrigo
- Departamento de Sistemas Informáticos, Universidad de Castilla-La Mancha, 13071-Cuenca, Spain
- José Moncho-Bogani
- Departamento de Ciencias Médicas, Universidad de Castilla-La Mancha, 02071-Albacete, Spain
- José Miguel Latorre
- Departamento de Psicología, Universidad de Castilla-La Mancha, 02071-Albacete, Spain
|
38
|
Extensive Tonotopic Mapping across Auditory Cortex Is Recapitulated by Spectrally Directed Attention and Systematically Related to Cortical Myeloarchitecture. J Neurosci 2017; 37:12187-12201. [PMID: 29109238 PMCID: PMC5729191 DOI: 10.1523/jneurosci.1436-17.2017] [Citation(s) in RCA: 19] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/24/2017] [Revised: 10/04/2017] [Accepted: 10/06/2017] [Indexed: 11/21/2022] Open
Abstract
Auditory selective attention is vital in natural soundscapes. But it is unclear how attentional focus on the primary dimension of auditory representation—acoustic frequency—might modulate basic auditory functional topography during active listening. In contrast to visual selective attention, which is supported by motor-mediated optimization of input across saccades and pupil dilation, the primate auditory system has fewer means of differentially sampling the world. This makes spectrally-directed endogenous attention a particularly crucial aspect of auditory attention. Using a novel functional paradigm combined with quantitative MRI, we establish in male and female listeners that human frequency-band-selective attention drives activation in both myeloarchitectonically estimated auditory core, and across the majority of tonotopically mapped nonprimary auditory cortex. The attentionally driven best-frequency maps show strong concordance with sensory-driven maps in the same subjects across much of the temporal plane, with poor concordance in areas outside traditional auditory cortex. There is significantly greater activation across most of auditory cortex when best frequency is attended, versus ignored; the same regions do not show this enhancement when attending to the least-preferred frequency band. Finally, the results demonstrate that there is spatial correspondence between the degree of myelination and the strength of the tonotopic signal across a number of regions in auditory cortex. Strong frequency preferences across tonotopically mapped auditory cortex spatially correlate with R1-estimated myeloarchitecture, indicating shared functional and anatomical organization that may underlie intrinsic auditory regionalization. SIGNIFICANCE STATEMENT Perception is an active process, especially sensitive to attentional state. Listeners direct auditory attention to track a violin's melody within an ensemble performance, or to follow a voice in a crowded cafe. Although diverse pathologies reduce quality of life by impacting such spectrally directed auditory attention, its neurobiological bases are unclear. We demonstrate that human primary and nonprimary auditory cortical activation is modulated by spectrally directed attention in a manner that recapitulates its tonotopic sensory organization. Further, the graded activation profiles evoked by single-frequency bands are correlated with attentionally driven activation when these bands are presented in complex soundscapes. Finally, we observe a strong concordance in the degree of cortical myelination and the strength of tonotopic activation across several auditory cortical regions.
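The concordance analysis summarized above (compare attentionally driven and stimulus-driven best-frequency assignments across voxels) can be sketched with simulated tuning data. The number of bands, noise levels, and choice of concordance metric below are assumptions for illustration, not the study's parameters.

```python
# Sketch of best-frequency map concordance: assign each simulated voxel a best
# frequency from its sensory-driven responses and from noisier attention-driven
# responses, then correlate the two maps.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(5)
n_vox, n_bands = 1000, 6
band_index = np.arange(n_bands)

pref = rng.integers(0, n_bands, n_vox)                       # each voxel's preferred band
tuning = np.exp(-0.5 * (band_index[None, :] - pref[:, None]) ** 2)
sensory = tuning + 0.2 * rng.standard_normal((n_vox, n_bands))     # stimulus-driven responses
attention = tuning + 0.5 * rng.standard_normal((n_vox, n_bands))   # attention-driven responses

bf_sensory = sensory.argmax(axis=1)       # stimulus-driven best-frequency map
bf_attention = attention.argmax(axis=1)   # attentionally driven best-frequency map
rho, _ = spearmanr(bf_sensory, bf_attention)
print(f"map concordance (Spearman rho): {rho:.2f}")
```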
|
39
|
Tonotopic organisation of the auditory cortex in sloping sensorineural hearing loss. Hear Res 2017; 355:81-96. [DOI: 10.1016/j.heares.2017.09.012] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 02/14/2017] [Revised: 07/28/2017] [Accepted: 09/23/2017] [Indexed: 01/09/2023]
|
40
|
Attentional Modulation of Envelope-Following Responses at Lower (93-109 Hz) but Not Higher (217-233 Hz) Modulation Rates. J Assoc Res Otolaryngol 2017; 19:83-97. [PMID: 28971333 PMCID: PMC5783923 DOI: 10.1007/s10162-017-0641-9] [Citation(s) in RCA: 38] [Impact Index Per Article: 5.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/22/2017] [Accepted: 09/04/2017] [Indexed: 11/03/2022] Open
Abstract
Directing attention to sounds of different frequencies allows listeners to perceive a sound of interest, like a talker, in a mixture. Whether cortically generated frequency-specific attention affects responses as low as the auditory brainstem is currently unclear. Participants attended to either a high- or low-frequency tone stream, which was presented simultaneously and tagged with different amplitude modulation (AM) rates. In a replication design, we showed that envelope-following responses (EFRs) were modulated by attention only when the stimulus AM rate was slow enough for the auditory cortex to track—and not for stimuli with faster AM rates, which are thought to reflect ‘purer’ brainstem sources. Thus, we found no evidence of frequency-specific attentional modulation that can be confidently attributed to brainstem generators. The results demonstrate that different neural populations contribute to EFRs at higher and lower rates, compatible with cortical contributions at lower rates. The results further demonstrate that stimulus AM rate can alter conclusions of EFR studies.
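The frequency-tagging readout described above (each tone stream amplitude-modulated at its own rate, with the envelope-following response measured as spectral amplitude at that rate) can be sketched as follows. The simulated EEG, the example rates near 101 and 223 Hz, and the response amplitudes are illustrative assumptions only.

```python
# Sketch of an EFR readout via frequency tagging: compute the FFT of a long
# simulated EEG recording and read out amplitude at each stream's AM rate.
import numpy as np

fs = 1000.0                              # sampling rate (Hz)
t = np.arange(0, 60, 1 / fs)             # 60 s of simulated EEG
rates = {"low_stream": 101.0, "high_stream": 223.0}

rng = np.random.default_rng(6)
# simulated EEG: responses at both tagged rates plus noise; the attended
# (low-rate) stream is given a slightly larger response for illustration
eeg = (0.8 * np.sin(2 * np.pi * rates["low_stream"] * t)
       + 0.5 * np.sin(2 * np.pi * rates["high_stream"] * t)
       + 5.0 * rng.standard_normal(t.size))

spectrum = np.abs(np.fft.rfft(eeg)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)
for name, rate in rates.items():
    amp = spectrum[np.argmin(np.abs(freqs - rate))]
    print(f"EFR amplitude at {rate:.0f} Hz ({name}): {amp:.3f}")
```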
|
41
|
Evidence for cue-independent spatial representation in the human auditory cortex during active listening. Proc Natl Acad Sci U S A 2017; 114:E7602-E7611. [PMID: 28827357 DOI: 10.1073/pnas.1707522114] [Citation(s) in RCA: 25] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
Abstract
Few auditory functions are as important or as universal as the capacity for auditory spatial awareness (e.g., sound localization). That ability relies on sensitivity to acoustical cues, particularly interaural time and level differences (ITD and ILD), that correlate with sound-source locations. Under nonspatial listening conditions, cortical sensitivity to ITD and ILD takes the form of broad contralaterally dominated response functions. It is unknown, however, whether that sensitivity reflects representations of the specific physical cues or a higher-order representation of auditory space (i.e., integrated cue processing), nor is it known whether responses to spatial cues are modulated by active spatial listening. To investigate, sensitivity to parametrically varied ITD or ILD cues was measured using fMRI during spatial and nonspatial listening tasks. Task type varied across blocks where targets were presented in one of three dimensions: auditory location, pitch, or visual brightness. Task effects were localized primarily to lateral posterior superior temporal gyrus (pSTG) and modulated binaural-cue response functions differently in the two hemispheres. Active spatial listening (location tasks) enhanced both contralateral and ipsilateral responses in the right hemisphere but maintained or enhanced contralateral dominance in the left hemisphere. Two observations suggest integrated processing of ITD and ILD. First, overlapping regions in medial pSTG exhibited significant sensitivity to both cues. Second, successful classification of multivoxel patterns was observed for both cue types and, critically, for cross-cue classification. Together, these results suggest a higher-order representation of auditory space in the human auditory cortex that at least partly integrates the specific underlying cues.
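The cross-cue classification test mentioned above (train a decoder on ITD-evoked patterns, test it on ILD-evoked patterns) can be sketched as below. The voxel patterns, effect sizes, and trial counts are simulated assumptions, not the study's data.

```python
# Sketch of cross-cue MVPA: a classifier trained to label sound laterality from
# simulated ITD-evoked patterns is tested on simulated ILD-evoked patterns.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(7)
n_vox = 400
space_code = rng.standard_normal(n_vox)   # shared (cue-independent) left/right component
itd_code = rng.standard_normal(n_vox)     # cue-specific components
ild_code = rng.standard_normal(n_vox)

def trials(cue_code, side, n=30):
    """Simulated voxel patterns for one cue type and one side of space."""
    sign = 1 if side == "right" else -1
    return sign * space_code + 0.5 * cue_code + rng.standard_normal((n, n_vox))

X_itd = np.vstack([trials(itd_code, "left"), trials(itd_code, "right")])
X_ild = np.vstack([trials(ild_code, "left"), trials(ild_code, "right")])
y = ["left"] * 30 + ["right"] * 30

clf = LinearSVC().fit(X_itd, y)                                   # train on ITD trials ...
print("cross-cue accuracy (ITD -> ILD):", clf.score(X_ild, y))    # ... test on ILD trials
```

Above-chance cross-cue accuracy in such an analysis is what motivates the interpretation of an integrated, cue-independent spatial representation.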
|
42
|
De Martino F, Yacoub E, Kemper V, Moerel M, Uludağ K, De Weerd P, Ugurbil K, Goebel R, Formisano E. The impact of ultra-high field MRI on cognitive and computational neuroimaging. Neuroimage 2017; 168:366-382. [PMID: 28396293 DOI: 10.1016/j.neuroimage.2017.03.060] [Citation(s) in RCA: 73] [Impact Index Per Article: 10.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/20/2016] [Revised: 03/20/2017] [Accepted: 03/29/2017] [Indexed: 01/14/2023] Open
Abstract
The ability to measure functional brain responses non-invasively with ultra high field MRI (7 T and above) represents a unique opportunity in advancing our understanding of the human brain. Compared to lower fields (3 T and below), ultra high field MRI has an increased sensitivity, which can be used to acquire functional images with greater spatial resolution, and greater specificity of the blood oxygen level dependent (BOLD) signal to the underlying neuronal responses. Together, increased resolution and specificity enable investigating brain functions at a submillimeter scale, which so far could only be done with invasive techniques. At this mesoscopic spatial scale, perception, cognition and behavior can be probed at the level of fundamental units of neural computations, such as cortical columns, cortical layers, and subcortical nuclei. This represents a unique and distinctive advantage that differentiates ultra high from lower field imaging and that can foster a tighter link between fMRI and computational modeling of neural networks. So far, functional brain mapping at submillimeter scale has focused on the processing of sensory information and on well-known systems for which extensive information is available from invasive recordings in animals. It remains an open challenge to extend this methodology to uniquely human functions and, more generally, to systems for which animal models may be problematic. To succeed, the possibility to acquire high-resolution functional data with large spatial coverage, the availability of computational models of neural processing as well as accurate biophysical modeling of neurovascular coupling at mesoscopic scale all appear necessary.
Affiliation(s)
- Federico De Martino
- Department of Cognitive Neurosciences, Faculty of Psychology and Neuroscience, Maastricht University, Oxfordlaan 55, 6229 ER Maastricht, The Netherlands; Center for Magnetic Resonance Research, Department of Radiology, University of Minnesota, 2021 Sixth Street SE, 55455 Minneapolis, MN, USA
- Essa Yacoub
- Center for Magnetic Resonance Research, Department of Radiology, University of Minnesota, 2021 Sixth Street SE, 55455 Minneapolis, MN, USA
- Valentin Kemper
- Department of Cognitive Neurosciences, Faculty of Psychology and Neuroscience, Maastricht University, Oxfordlaan 55, 6229 ER Maastricht, The Netherlands
- Michelle Moerel
- Department of Cognitive Neurosciences, Faculty of Psychology and Neuroscience, Maastricht University, Oxfordlaan 55, 6229 ER Maastricht, The Netherlands; Maastricht Center for System Biology, Maastricht University, Universiteitssingel 60, 6229 ER Maastricht, The Netherlands
- Kâmil Uludağ
- Department of Cognitive Neurosciences, Faculty of Psychology and Neuroscience, Maastricht University, Oxfordlaan 55, 6229 ER Maastricht, The Netherlands
- Peter De Weerd
- Department of Cognitive Neurosciences, Faculty of Psychology and Neuroscience, Maastricht University, Oxfordlaan 55, 6229 ER Maastricht, The Netherlands
- Kamil Ugurbil
- Center for Magnetic Resonance Research, Department of Radiology, University of Minnesota, 2021 Sixth Street SE, 55455 Minneapolis, MN, USA
- Rainer Goebel
- Department of Cognitive Neurosciences, Faculty of Psychology and Neuroscience, Maastricht University, Oxfordlaan 55, 6229 ER Maastricht, The Netherlands
- Elia Formisano
- Department of Cognitive Neurosciences, Faculty of Psychology and Neuroscience, Maastricht University, Oxfordlaan 55, 6229 ER Maastricht, The Netherlands; Maastricht Center for System Biology, Maastricht University, Universiteitssingel 60, 6229 ER Maastricht, The Netherlands
|
43
|
Engaging in a tone-detection task differentially modulates neural activity in the auditory cortex, amygdala, and striatum. Sci Rep 2017; 7:677. [PMID: 28386101 PMCID: PMC5429729 DOI: 10.1038/s41598-017-00819-z] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/15/2016] [Accepted: 03/14/2017] [Indexed: 11/19/2022] Open
Abstract
The relationship between attention and sensory coding is an area of active investigation. Previous studies have revealed that an animal's behavioral state can play a crucial role in shaping the characteristics of neural responses in the auditory cortex (AC). However, behavioral modulation of auditory responses in brain areas outside the AC is not well studied. In this study, we used the same experimental paradigm to examine the effects of attention on neural activity in multiple brain regions, including the primary auditory cortex (A1), posterior auditory field (PAF), amygdala (AMY), and striatum (STR). Single-unit spike activity was recorded while cats were actively performing a tone-detection task or passively listening to the same tones. We found that tone-evoked neural responses in A1 were not significantly affected by task engagement; however, those in PAF and AMY were enhanced, and those in STR were suppressed. This enhancement was associated with an improvement in the accuracy of tone detection estimated from the spike activity. Additionally, the firing rates of A1 and PAF neurons decreased upon motor response (licking) during the detection task. Our results suggest that attention may have different effects on auditory-responsive brain areas depending on their physiological functions.
|
44
|
High-Resolution fMRI of Auditory Cortical Map Changes in Unilateral Hearing Loss and Tinnitus. Brain Topogr 2017; 30:685-697. [DOI: 10.1007/s10548-017-0547-1] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/30/2016] [Accepted: 01/18/2017] [Indexed: 12/19/2022]
|
45
|
Nourski KV, Steinschneider M, Rhone AE, Howard III MA. Intracranial Electrophysiology of Auditory Selective Attention Associated with Speech Classification Tasks. Front Hum Neurosci 2017; 10:691. [PMID: 28119593 PMCID: PMC5222875 DOI: 10.3389/fnhum.2016.00691] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/29/2016] [Accepted: 12/26/2016] [Indexed: 11/30/2022] Open
Abstract
Auditory selective attention paradigms are powerful tools for elucidating the various stages of speech processing. This study examined electrocorticographic activation during target detection tasks within and beyond auditory cortex. Subjects were nine neurosurgical patients undergoing chronic invasive monitoring for treatment of medically refractory epilepsy. Four subjects had left hemisphere electrode coverage, four had right coverage and one had bilateral coverage. Stimuli were 300 ms complex tones or monosyllabic words, each spoken by a different male or female talker. Subjects were instructed to press a button whenever they heard a target corresponding to a specific stimulus category (e.g., tones, animals, numbers). High gamma (70–150 Hz) activity was simultaneously recorded from Heschl’s gyrus (HG), superior, middle temporal and supramarginal gyri (STG, MTG, SMG), as well as prefrontal cortex (PFC). Data analysis focused on: (1) task effects (non-target words in tone detection vs. semantic categorization task); and (2) target effects (words as target vs. non-target during semantic classification). Responses within posteromedial HG (auditory core cortex) were minimally modulated by task and target. Non-core auditory cortex (anterolateral HG and lateral STG) exhibited sensitivity to task, with a smaller proportion of sites showing target effects. Auditory-related areas (MTG and SMG) and PFC showed both target and, to a lesser extent, task effects, that occurred later than those in the auditory cortex. Significant task and target effects were more prominent in the left hemisphere than in the right. Findings demonstrate a hierarchical organization of speech processing during auditory selective attention.
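A standard way to obtain the high gamma (70-150 Hz) measure referred to above is band-pass filtering followed by a Hilbert-transform amplitude envelope and baseline normalization. The sketch below uses a simulated single-trial signal and illustrative window choices; it is not the study's processing chain.

```python
# Sketch of high-gamma (70-150 Hz) power extraction from one simulated ECoG trial:
# band-pass filter, Hilbert envelope, then compare post-stimulus power to baseline.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 1000.0
t = np.arange(-0.2, 0.8, 1 / fs)                 # one trial, stimulus onset at t = 0
rng = np.random.default_rng(8)
ecog = rng.standard_normal(t.size)
window = (t > 0.05) & (t < 0.35)                 # simulated response window
ecog[window] += np.sin(2 * np.pi * 110 * t[window])   # add a 110 Hz burst

b, a = butter(4, [70, 150], btype="bandpass", fs=fs)
envelope = np.abs(hilbert(filtfilt(b, a, ecog)))  # high-gamma amplitude envelope

baseline = envelope[t < 0].mean()
response = envelope[window].mean()
print(f"high-gamma response / baseline: {response / baseline:.2f}")
```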
Affiliation(s)
- Kirill V Nourski
- Human Brain Research Laboratory, Department of Neurosurgery, The University of Iowa, Iowa City, IA, USA
- Mitchell Steinschneider
- Departments of Neurology and Neuroscience, Albert Einstein College of Medicine, Bronx, NY, USA
- Ariane E Rhone
- Human Brain Research Laboratory, Department of Neurosurgery, The University of Iowa, Iowa City, IA, USA
- Matthew A Howard III
- Human Brain Research Laboratory, Department of Neurosurgery, The University of Iowa, Iowa City, IA, USA; Pappajohn Biomedical Institute, The University of Iowa, Iowa City, IA, USA
|
46
|
Guinchard AC, Ghazaleh N, Saenz M, Fornari E, Prior J, Maeder P, Adib S, Maire R. Study of tonotopic brain changes with functional MRI and FDG-PET in a patient with unilateral objective cochlear tinnitus. Hear Res 2016; 341:232-239. [DOI: 10.1016/j.heares.2016.09.005] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/01/2015] [Revised: 05/11/2016] [Accepted: 09/07/2016] [Indexed: 01/30/2023]
|
47
|
van der Zwaag W, Schäfer A, Marques JP, Turner R, Trampel R. Recent applications of UHF-MRI in the study of human brain function and structure: a review. NMR IN BIOMEDICINE 2016; 29:1274-1288. [PMID: 25762497 DOI: 10.1002/nbm.3275] [Citation(s) in RCA: 63] [Impact Index Per Article: 7.9] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/01/2014] [Revised: 12/19/2014] [Accepted: 01/22/2015] [Indexed: 06/04/2023]
Abstract
The increased availability of ultra-high-field (UHF) MRI has led to its application in a wide range of neuroimaging studies, which are showing promise in transforming fundamental approaches to human neuroscience. This review presents recent work on structural and functional brain imaging, at 7 T and higher field strengths. After a short outline of the effects of high field strength on MR images, the rapidly expanding literature on UHF applications of blood-oxygenation-level-dependent-based functional MRI is reviewed. Structural imaging is then discussed, divided into sections on imaging weighted by relaxation time, including quantitative relaxation time mapping, phase imaging and quantitative susceptibility mapping, angiography, diffusion-weighted imaging, and finally magnetization-transfer imaging. The final section discusses studies using the high spatial resolution available at UHF to identify explicit links between structure and function. Copyright © 2015 John Wiley & Sons, Ltd.
Affiliation(s)
- Wietske van der Zwaag
- Centre d'Imagerie Biomédicale, Ecole Polytechnique Fédérale de Lausanne, Switzerland
- Andreas Schäfer
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- José P Marques
- Centre d'Imagerie Biomédicale, Ecole Polytechnique Fédérale de Lausanne, Switzerland; Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Robert Turner
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; Spinoza Centre, University of Amsterdam, The Netherlands; SPMMRC, School of Physics and Astronomy, University of Nottingham, UK
- Robert Trampel
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
|
48
|
Abstract
Congenital amusia is a lifelong deficit in music perception thought to reflect an underlying impairment in the perception and memory of pitch. The neural basis of amusic impairments is actively debated. Some prior studies have suggested that amusia stems from impaired connectivity between auditory and frontal cortex. However, it remains possible that impairments in pitch coding within auditory cortex also contribute to the disorder, in part because prior studies have not measured responses from the cortical regions most implicated in pitch perception in normal individuals. We addressed this question by measuring fMRI responses in 11 subjects with amusia and 11 age- and education-matched controls to a stimulus contrast that reliably identifies pitch-responsive regions in normal individuals: harmonic tones versus frequency-matched noise. Our findings demonstrate that amusic individuals with a substantial pitch perception deficit exhibit clusters of pitch-responsive voxels that are comparable in extent, selectivity, and anatomical location to those of control participants. We discuss possible explanations for why amusics might be impaired at perceiving pitch relations despite exhibiting normal fMRI responses to pitch in their auditory cortex: (1) individual neurons within the pitch-responsive region might exhibit abnormal tuning or temporal coding not detectable with fMRI, (2) anatomical tracts that link pitch-responsive regions to other brain areas (e.g., frontal cortex) might be altered, and (3) cortical regions outside of pitch-responsive cortex might be abnormal. The ability to identify pitch-responsive regions in individual amusic subjects will make it possible to ask more precise questions about their role in amusia in future work.
|
49
|
Jiang F, Stecker GC, Boynton GM, Fine I. Early Blindness Results in Developmental Plasticity for Auditory Motion Processing within Auditory and Occipital Cortex. Front Hum Neurosci 2016; 10:324. [PMID: 27458357 PMCID: PMC4932114 DOI: 10.3389/fnhum.2016.00324] [Citation(s) in RCA: 40] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/10/2015] [Accepted: 06/13/2016] [Indexed: 12/02/2022] Open
Abstract
Early blind subjects exhibit superior abilities for processing auditory motion, which are accompanied by enhanced BOLD responses to auditory motion within hMT+ and reduced responses within right planum temporale (rPT). Here, by comparing BOLD responses to auditory motion in hMT+ and rPT within sighted controls, early blind, late blind, and sight-recovery individuals, we were able to separately examine the effects of developmental and adult visual deprivation on cortical plasticity within these two areas. We find that both the enhanced auditory motion responses in hMT+ and the reduced functionality in rPT are driven by the absence of visual experience early in life; neither loss nor recovery of vision later in life had a discernible influence on plasticity within these areas. Cortical plasticity as a result of blindness has generally been presumed to be mediated by competition across modalities within a given cortical region. The reduced functionality within rPT as a result of early visual loss implicates an additional mechanism for cross-modal plasticity as a result of early blindness: competition across different cortical areas for a functional role.
Affiliation(s)
- Fang Jiang
- Department of Psychology, University of Nevada, Reno, NV, USA
- G Christopher Stecker
- Department of Hearing and Speech Sciences, Vanderbilt University School of Medicine, Nashville, TN, USA
- Ione Fine
- Department of Psychology, University of Washington, Seattle, WA, USA
|
50
|
Bernal B, Ardila A. From Hearing Sounds to Recognizing Phonemes: Primary Auditory Cortex is A Truly Perceptual Language Area. AIMS Neurosci 2016. [DOI: 10.3934/neuroscience.2016.4.454] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/26/2022] Open
|