1
Wang A, Yan X, Feng G, Cao F. Shared and task-specific brain functional differences across multiple tasks in children with developmental dyslexia. Neuropsychologia 2024; 201:108935. PMID: 38848989. DOI: 10.1016/j.neuropsychologia.2024.108935. Received: 01/16/2024; Revised: 06/04/2024; Accepted: 06/05/2024.
Abstract
Different tasks have been used to examine the neural functional differences associated with developmental dyslexia (DD), and consequently, different findings have been reported. However, very few studies have systematically compared multiple tasks to understand which specific task differences each brain region is associated with. In this study, we employed an auditory rhyming task, a visual rhyming task, and a visual spelling task to investigate shared and task-specific neural differences in Chinese children with DD. First, we found that children with DD had reduced activation in the opercular part of the left inferior frontal gyrus (IFG) only in the two rhyming tasks, suggesting impaired phonological analysis. Children with DD showed functional differences in the right lingual gyrus/inferior occipital gyrus only in the two visual tasks, suggesting deficient visuo-orthographic processing. Moreover, children with DD showed reduced activation in the left dorsal IFG and increased activation in the right precentral gyrus across all three tasks, suggesting neural signatures of DD in Chinese. In summary, our study separated brain regions associated with differences in orthographic processing, phonological processing, and general lexical processing in DD, advancing our understanding of the neural mechanisms of DD.
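The shared/task-specific dissociation described in this abstract amounts to a conjunction analysis over per-task group-difference maps. A minimal sketch of that logic with synthetic boolean significance masks (all names, sizes, and thresholds are illustrative, not the authors' pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)
n_vox = 1000  # toy "brain" of 1000 voxels

# Synthetic significance masks: True where a group difference was detected per task
aud_rhyme = rng.random(n_vox) < 0.10
vis_rhyme = rng.random(n_vox) < 0.10
vis_spell = rng.random(n_vox) < 0.10

# Shared effect: group difference present in all three tasks (conjunction)
shared = aud_rhyme & vis_rhyme & vis_spell

# Task-family-specific effects, exclusive of the other modality
rhyme_specific = aud_rhyme & vis_rhyme & ~vis_spell   # phonological analysis
visual_specific = vis_rhyme & vis_spell & ~aud_rhyme  # visuo-orthographic processing

print(shared.sum(), rhyme_specific.sum(), visual_specific.sum())
```

In practice such masks would come from thresholded second-level statistical maps with correction for multiple comparisons; the boolean logic of the dissociation is the same.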
Affiliation(s)
- Anqi Wang: Department of Psychology, Sun Yat-Sen University, China
- Xiaohui Yan: Department of Psychology, the University of Hong Kong, China; State Key Lab of Brain and Cognitive Sciences, the University of Hong Kong, China
- Guoyan Feng: Department of Psychology, Sun Yat-Sen University, China; School of Management, Guangzhou Xinhua University, China
- Fan Cao: Department of Psychology, the University of Hong Kong, China; State Key Lab of Brain and Cognitive Sciences, the University of Hong Kong, China
2
Hauw F, Béranger B, Cohen L. Subtitled speech: the neural mechanisms of ticker-tape synaesthesia. Brain 2024; 147:2530-2541. PMID: 38620012. PMCID: PMC11224615. DOI: 10.1093/brain/awae114. Received: 07/20/2023; Revised: 02/21/2024; Accepted: 03/21/2024.
Abstract
The acquisition of reading modifies areas of the brain associated with vision and with language, in addition to their connections. These changes enable reciprocal translation between orthography and the sounds and meaning of words. Individual variability in the pre-existing cerebral substrate contributes to the range of eventual reading abilities, extending to atypical developmental patterns, including dyslexia and reading-related synaesthesias. The present study is devoted to the little-studied but highly informative ticker-tape synaesthesia, in which speech perception triggers the vivid and irrepressible perception of words in their written form in the mind's eye. We scanned a group of 17 synaesthetes and 17 matched controls with functional MRI, while they listened to spoken sentences, words, numbers or pseudowords (Experiment 1), viewed images and written words (Experiment 2) or were at rest (Experiment 3). First, we found direct correlates of the ticker-tape synaesthesia phenomenon: during speech perception, as ticker-tape synaesthesia was active, synaesthetes showed over-activation of left perisylvian regions supporting phonology and of the occipitotemporal visual word form area, where orthography is represented. Second, we provided support to the hypothesis that ticker-tape synaesthesia results from atypical relationships between spoken and written language processing: the ticker-tape synaesthesia-related regions overlap closely with cortices activated during reading, and the overlap of speech-related and reading-related areas is larger in synaesthetes than in controls. Furthermore, the regions over-activated in ticker-tape synaesthesia overlap with regions under-activated in dyslexia. Third, during the resting state (i.e. in the absence of current ticker-tape synaesthesia), synaesthetes showed increased functional connectivity between left prefrontal and bilateral occipital regions. This pattern might reflect a lowered threshold for conscious access to visual mental contents and might imply a non-specific predisposition to all synaesthesias with a visual content. These data provide a rich and coherent account of ticker-tape synaesthesia as a non-detrimental developmental condition created by the interaction of reading acquisition with an atypical cerebral substrate.
Affiliation(s)
- Fabien Hauw: Inserm U 1127, CNRS UMR 7225, Sorbonne Universités, Institut du Cerveau, ICM, Paris 75013, France; AP-HP, Hôpital de La Pitié Salpêtrière, Fédération de Neurologie, Paris 75013, France
- Benoît Béranger: Inserm U 1127, CNRS UMR 7225, Sorbonne Universités, Institut du Cerveau, ICM, Paris 75013, France
- Laurent Cohen: Inserm U 1127, CNRS UMR 7225, Sorbonne Universités, Institut du Cerveau, ICM, Paris 75013, France; AP-HP, Hôpital de La Pitié Salpêtrière, Fédération de Neurologie, Paris 75013, France
3
Zou T, Li L, Huang X, Deng C, Wang X, Gao Q, Chen H, Li R. Dynamic causal modeling analysis reveals the modulation of motor cortex and integration in superior temporal gyrus during multisensory speech perception. Cogn Neurodyn 2024; 18:931-946. PMID: 38826672. PMCID: PMC11143173. DOI: 10.1007/s11571-023-09945-z. Received: 06/20/2022; Revised: 02/03/2023; Accepted: 02/10/2023.
Abstract
The processing of speech information from various sensory modalities is crucial for human communication. Both the left posterior superior temporal gyrus (pSTG) and the motor cortex are importantly involved in multisensory speech perception. However, how primary sensory regions dynamically integrate with the pSTG and the motor cortex remains unclear. Here, we conducted a behavioral experiment using the classical McGurk effect paradigm and acquired task functional magnetic resonance imaging (fMRI) data during synchronized audiovisual syllabic perception from 63 normal adults. We conducted dynamic causal modeling (DCM) analysis to explore the cross-modal interactions among the left pSTG, left precentral gyrus (PrG), left middle superior temporal gyrus (mSTG), and left fusiform gyrus (FuG). Bayesian model selection favored a winning model that included modulations of connections to PrG (mSTG → PrG, FuG → PrG), from PrG (PrG → mSTG, PrG → FuG), and to pSTG (mSTG → pSTG, FuG → pSTG). Moreover, the coupling strength of these connections correlated with behavioral McGurk susceptibility, and significant differences in coupling strength were found between strong and weak McGurk perceivers. Strong perceivers modulated less inhibitory visual influence, allowed less excitatory auditory information to flow into PrG, but integrated more audiovisual information in pSTG. Taken together, our findings show that the PrG and pSTG interact dynamically with primary cortices during audiovisual speech and support the view that the motor cortex plays a specific functional role in modulating the gain and salience of the auditory and visual modalities. Supplementary information: the online version contains supplementary material available at 10.1007/s11571-023-09945-z.
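The model comparison step ("Bayesian model selection favored a winning model") can be sketched as fixed-effects selection over log model evidences: exponentiating and normalizing them yields posterior model probabilities under a flat prior. The evidence values below are synthetic, not values from this study:

```python
import numpy as np

# Synthetic log model evidences for three candidate DCMs (values illustrative)
log_evidence = np.array([-1200.0, -1185.0, -1192.0])

# Fixed-effects Bayesian model selection: posterior model probabilities
# (softmax of log evidences; subtracting the max avoids numerical underflow)
post = np.exp(log_evidence - log_evidence.max())
post /= post.sum()

winning_model = int(np.argmax(post))
print(winning_model, float(post[winning_model]))
```

Note how a difference of only a few log-evidence units already concentrates nearly all posterior probability on one model, which is why the winning-model structure can be interpreted with confidence.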
Affiliation(s)
- Ting Zou, Liyuan Li, Xinju Huang, Chijun Deng, Xuyang Wang, Qing Gao, Huafu Chen, Rong Li: The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Laboratory for Neuroinformation, High-Field Magnetic Resonance Brain Imaging Key Laboratory of Sichuan Province, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, 610054, People’s Republic of China
4
Rupp KM, Hect JL, Harford EE, Holt LL, Ghuman AS, Abel TJ. A hierarchy of processing complexity and timescales for natural sounds in human auditory cortex. bioRxiv 2024:2024.05.24.595822. PMID: 38826304. PMCID: PMC11142240. DOI: 10.1101/2024.05.24.595822.
Abstract
Efficient behavior is supported by humans' ability to rapidly recognize acoustically distinct sounds as members of a common category. Within auditory cortex, there are critical unanswered questions regarding the organization and dynamics of sound categorization. Here, we performed intracerebral recordings in the context of epilepsy surgery as 20 patient-participants listened to natural sounds. We built encoding models to predict neural responses using features of these sounds extracted from different layers within a sound-categorization deep neural network (DNN). This approach yielded highly accurate models of neural responses throughout auditory cortex. The complexity of a cortical site's representation (measured by the depth of the DNN layer that produced the best model) was closely related to its anatomical location, with shallow, middle, and deep layers of the DNN associated with core (primary auditory cortex), lateral belt, and parabelt regions, respectively. Smoothly varying gradients of representational complexity also existed within these regions, with complexity increasing along a posteromedial-to-anterolateral direction in core and lateral belt, and along posterior-to-anterior and dorsal-to-ventral dimensions in parabelt. When we estimated the time window over which each recording site integrates information, we found shorter integration windows in core relative to lateral belt and parabelt. Lastly, we found a relationship between the length of the integration window and the complexity of information processing within core (but not lateral belt or parabelt). These findings suggest that hierarchies of timescales and processing complexity, and their interrelationship, represent a functional organizational principle of the auditory stream that underlies our perception of complex, abstract auditory information.
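The core analysis, fitting one encoding model per DNN layer and labeling each recording site with the depth of its best-predicting layer, can be sketched with ridge regression on synthetic features. Dimensions, noise level, and the closed-form in-sample fit are illustrative simplifications; the study's models and validation scheme may differ:

```python
import numpy as np

rng = np.random.default_rng(1)
n_time, n_layers, n_feat = 200, 6, 20

# Synthetic "DNN layer activations" for one sound: one feature matrix per layer
layer_feats = [rng.standard_normal((n_time, n_feat)) for _ in range(n_layers)]

# Synthetic neural response at one site, driven by the features of layer 4
true_layer = 4
w = rng.standard_normal(n_feat)
y = layer_feats[true_layer] @ w + 0.5 * rng.standard_normal(n_time)

def ridge_r2(X, y, lam=1.0):
    """Closed-form ridge fit; in-sample R^2 (a real analysis would cross-validate)."""
    beta = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

scores = [ridge_r2(X, y) for X in layer_feats]
best_layer = int(np.argmax(scores))  # this site's "representational complexity"
print(best_layer, [round(s, 2) for s in scores])
```

Repeating this per electrode and mapping `best_layer` onto anatomy is what yields the shallow-to-deep gradient from core to parabelt described in the abstract.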
Affiliation(s)
- Kyle M. Rupp, Jasmine L. Hect, Emily E. Harford, Avniel Singh Ghuman: Department of Neurological Surgery, University of Pittsburgh, Pittsburgh, Pennsylvania, United States of America
- Lori L. Holt: Department of Psychology, The University of Texas at Austin, Austin, Texas, United States of America
- Taylor J. Abel: Department of Neurological Surgery, University of Pittsburgh, Pittsburgh, Pennsylvania, United States of America; Department of Bioengineering, University of Pittsburgh, Pittsburgh, Pennsylvania, United States of America
5
Guerreiro Fernandes F, Raemaekers M, Freudenburg Z, Ramsey N. Considerations for implanting speech brain computer interfaces based on functional magnetic resonance imaging. J Neural Eng 2024; 21:036005. PMID: 38648782. DOI: 10.1088/1741-2552/ad4178. Received: 06/27/2023; Accepted: 04/22/2024.
Abstract
Objective. Brain-computer interfaces (BCIs) have the potential to reinstate lost communication faculties. Results from speech decoding studies indicate that a usable speech BCI based on activity in the sensorimotor cortex (SMC) can be achieved using subdurally implanted electrodes. However, the optimal characteristics for a successful speech implant are largely unknown. We address this topic in a high-field blood-oxygenation-level-dependent functional magnetic resonance imaging (fMRI) study by assessing the decodability of spoken words as a function of hemisphere, gyrus, sulcal depth, and position along the ventral/dorsal axis. Approach. Twelve subjects took part in a 7T fMRI experiment in which they pronounced 6 different pseudo-words over 6 runs. We divided the SMC by hemisphere, gyrus, sulcal depth, and position along the ventral/dorsal axis. Classification was performed in these SMC areas using a multiclass support vector machine (SVM). Main results. Significant classification was possible from the SMC, but no preference for the left or right hemisphere, nor for the precentral or postcentral gyrus, was detected for optimal word classification. Classification using information from the cortical surface was slightly better than classification using information from deep in the central sulcus, and was highest within the ventral 50% of the SMC. Confusion matrices were highly similar across the entire SMC. An SVM-searchlight analysis revealed significant classification in the superior temporal gyrus and left planum temporale in addition to the SMC. Significance. The current results support a unilateral implant using surface electrodes covering the ventral 50% of the SMC. The added value of depth electrodes is unclear. We did not observe evidence for variations in the qualitative nature of information across the SMC. The current results need to be confirmed in paralyzed patients performing attempted speech.
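The decoding step, a multiclass SVM over voxel patterns within an SMC subdivision, can be sketched with scikit-learn on synthetic data. The class structure, noise level, and dimensions below are illustrative, not the study's data:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_words, n_trials, n_vox = 6, 36, 50  # 6 pseudo-words, 36 trials each, 50 voxels

# Synthetic voxel patterns: one mean pattern per word plus trial-by-trial noise
means = rng.standard_normal((n_words, n_vox))
X = np.vstack([means[k] + 0.8 * rng.standard_normal((n_trials, n_vox))
               for k in range(n_words)])
y = np.repeat(np.arange(n_words), n_trials)

# Linear multiclass SVM (one-vs-one under the hood), cross-validated accuracy
clf = SVC(kernel="linear", C=1.0)
acc = cross_val_score(clf, X, y, cv=6).mean()
print(round(acc, 2))  # chance level is 1/6 ~= 0.17
```

Running this separately per anatomical subdivision (hemisphere, gyrus, depth, ventral/dorsal position) and comparing accuracies is the logic behind the paper's implant-site conclusions.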
Affiliation(s)
- F Guerreiro Fernandes, M Raemaekers, Z Freudenburg, N Ramsey: Department of Neurology and Neurosurgery, University Medical Center Utrecht Brain Center, Utrecht University, Utrecht, The Netherlands
6
Kausel L, Michon M, Soto-Icaza P, Aboitiz F. A multimodal interface for speech perception: the role of the left superior temporal sulcus in social cognition and autism. Cereb Cortex 2024; 34:84-93. PMID: 38696598. DOI: 10.1093/cercor/bhae066. Received: 10/31/2023; Revised: 01/17/2024; Accepted: 02/03/2024.
Abstract
Multimodal integration is crucial for human interaction, in particular for social communication, which relies on integrating information from various sensory modalities. Recently, a third visual pathway specialized in social perception was proposed, in which the right superior temporal sulcus (STS) plays a key role in processing socially relevant cues and high-level social perception. Importantly, it has also recently been proposed that the left STS contributes to audiovisual integration in speech processing. In this article, we propose that brain areas along the right STS that support multimodal integration for social perception and cognition can be considered homologs of those in the left, language-dominant hemisphere, which sustain multimodal integration of speech and semantic concepts fundamental for social communication. Emphasizing the significance of the left STS in multimodal integration and associated processes, such as multimodal attention to socially relevant stimuli, we underscore its potential relevance for understanding neurodevelopmental conditions characterized by challenges in social communication, such as autism spectrum disorder (ASD). Further research into this left lateral processing stream holds promise for enhancing our understanding of social communication in both typical development and ASD, which may lead to more effective interventions that improve the quality of life of individuals with atypical neurodevelopment.
Affiliation(s)
- Leonie Kausel: Centro de Estudios en Neurociencia Humana y Neuropsicología (CENHN), Facultad de Psicología, Universidad Diego Portales, Vergara 275, 8370076 Santiago, Chile
- Maëva Michon: Praxiling Laboratory, Joint Research Unit (UMR 5267), Centre National de la Recherche Scientifique (CNRS), Université Paul Valéry, Route de Mende, 34199 Montpellier cedex 5, France; Centro Interdisciplinario de Neurociencia, Pontificia Universidad Católica de Chile, Marcoleta 391, 2do piso, 8330024 Santiago, Chile; Laboratorio de Neurociencia Cognitiva y Evolutiva, Facultad de Medicina, Pontificia Universidad Católica de Chile, Marcoleta 391, 2do piso, 8330024 Santiago, Chile
- Patricia Soto-Icaza: Centro de Investigación en Complejidad Social (CICS), Facultad de Gobierno, Universidad del Desarrollo, Av. Las Condes 12461, edificio 3, piso 3, 7590943 Las Condes, Santiago, Chile
- Francisco Aboitiz: Centro Interdisciplinario de Neurociencia, Pontificia Universidad Católica de Chile, Marcoleta 391, 2do piso, 8330024 Santiago, Chile; Laboratorio de Neurociencia Cognitiva y Evolutiva, Facultad de Medicina, Pontificia Universidad Católica de Chile, Marcoleta 391, 2do piso, 8330024 Santiago, Chile
7
Chen X, Ouyang F, Liang J, Huang W, Zeng J, Xing S. Cerebral asymmetry in adult Macaca fascicularis as revealed by voxel-based MRI and DTI analysis. Brain Res 2024; 1830:148818. PMID: 38387715. DOI: 10.1016/j.brainres.2024.148818. Received: 10/07/2023; Revised: 01/28/2024; Accepted: 02/19/2024.
Abstract
Investigating cerebral asymmetries in non-human primates can help us understand the evolutionary traits of human brain specialization related to language and other high-level cognition. However, studies of brain asymmetry in monkeys have produced controversial results. Here, we investigated cerebral asymmetries in monkeys using a combination of optimized voxel-based morphometry (VBM) and tract-based spatial statistics (TBSS) protocols. Study-specific MRI- and DTI-based templates were created from 66 adult Macaca fascicularis, and asymmetry indices of grey and white matter were subsequently examined. The VBM analysis detected the well-known frontal and occipital petalias and confirmed the presence of leftward asymmetry in the ventral frontal cortex. A marked leftward asymmetry was found in the anterior, but not posterior, superior temporal gyrus. We also identified grey matter asymmetries in regions not previously reported, including rightward asymmetries in the anterior cingulate, insular cortex, and thalamus, and a leftward asymmetry in the caudate. In contrast, the TBSS analysis revealed, for the first time, robust leftward asymmetries of the corpus callosum (splenium and body), internal/external capsule, and white matter in the middle temporal gyrus, adjacent thalamus, and amygdala, and rightward asymmetries in the uncinate fasciculus, posterior thalamic radiation, and cerebral peduncle. These findings provide robust evidence of grey and white matter asymmetries in the monkey brain, which may extend our understanding of the evolution of cerebral specialization.
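The asymmetry index examined in such studies is conventionally computed as (L - R) / (L + R), positive for leftward and negative for rightward asymmetry. A minimal sketch with toy values (the paper's exact formula and normalization may differ):

```python
import numpy as np

def asymmetry_index(left, right):
    """Conventional asymmetry index, (L - R) / (L + R), computed elementwise.

    Positive values indicate leftward asymmetry, negative values rightward.
    """
    left = np.asarray(left, dtype=float)
    right = np.asarray(right, dtype=float)
    return (left - right) / (left + right)

# Toy grey-matter volumes (arbitrary units) for two homologous region pairs
ai = asymmetry_index([120.0, 80.0], [100.0, 95.0])
print(ai)  # first region leftward (> 0), second rightward (< 0)
```

The (L + R) denominator normalizes the difference by total size, so regions of very different scale can be compared on the same index.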
Affiliation(s)
- Xinran Chen, Fubing Ouyang, Jiahui Liang, Weixian Huang, Jinsheng Zeng, Shihui Xing: Department of Neurology and Stroke Center, The First Affiliated Hospital, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Diagnosis and Treatment of Major Neurological Diseases, National Key Clinical Department and Key Discipline of Neurology, Guangzhou, China
8
Bidelman GM, Bernard F, Skubic K. Hearing in categories aids speech streaming at the "cocktail party". bioRxiv 2024:2024.04.03.587795. PMID: 38617284. PMCID: PMC11014555. DOI: 10.1101/2024.04.03.587795.
Abstract
Our perceptual system bins elements of the speech signal into categories to make speech perception manageable. Here, we tested whether hearing speech in categories (as opposed to a continuous/gradient fashion) affords yet another benefit to speech recognition: parsing noisy speech at the "cocktail party." We measured speech recognition in a simulated 3D cocktail-party environment. We manipulated task difficulty by varying the number of additional maskers presented at other spatial locations in the horizontal soundfield (1-4 talkers) and via forward vs. time-reversed maskers, promoting more and less informational masking (IM), respectively. In separate tasks, we measured isolated phoneme categorization using two-alternative forced choice (2AFC) and visual analog scaling (VAS) tasks designed to promote more/less categorical hearing and thus test putative links between categorization and real-world speech-in-noise skills. We first show that listeners can only monitor up to ~3 talkers despite up to 5 being present in the soundscape, and that streaming is not related to extended high-frequency hearing thresholds (though QuickSIN scores are). We then confirm that speech streaming accuracy and speed decline with additional competing talkers and amidst forward compared to reversed maskers with added IM. Dividing listeners into "discrete" vs. "continuous" categorizers based on their VAS labeling (i.e., whether responses were binary or continuous judgments), we then show that the degree of IM experienced at the cocktail party is predicted by their degree of categoricity in phoneme labeling; more discrete listeners are less susceptible to IM than their gradient-responding peers. Our results establish a link between speech categorization skills and cocktail-party processing, with a categorical (rather than gradient) listening strategy benefiting degraded speech perception. These findings imply that figure-ground deficits common in many disorders might arise through a surprisingly simple mechanism: a failure to properly bin sounds into categories.
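The "discrete" vs. "continuous" split hinges on quantifying how binary a listener's VAS ratings are. One simple categoricity index, illustrative rather than the authors' exact metric, scores the mean distance of 0-1 ratings from the scale midpoint:

```python
import numpy as np

def categoricity(ratings):
    """Mean distance of 0-1 VAS ratings from the midpoint, rescaled to [0, 1].

    1.0 = perfectly binary (all ratings at the endpoints),
    0.0 = maximally gradient (all ratings at the midpoint).
    """
    r = np.asarray(ratings, dtype=float)
    return float(2 * np.abs(r - 0.5).mean())

discrete_listener = [0.0, 0.05, 0.02, 0.97, 1.0, 0.95]   # endpoint responding
gradient_listener = [0.35, 0.45, 0.5, 0.55, 0.6, 0.52]   # midpoint responding

print(categoricity(discrete_listener), categoricity(gradient_listener))
```

Correlating such an index with each listener's informational-masking susceptibility is the kind of brain-behavior (or here, behavior-behavior) link the abstract reports.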
Affiliation(s)
- Gavin M. Bidelman: Department of Speech, Language and Hearing Sciences, Indiana University, Bloomington, IN, USA; Program in Neuroscience, Indiana University, Bloomington, IN, USA; Cognitive Science Program, Indiana University, Bloomington, IN, USA
- Fallon Bernard, Kimberly Skubic: School of Communication Sciences & Disorders, University of Memphis, Memphis, TN, USA
9
Jones SD, Stewart HJ, Westermann G. A maturational frequency discrimination deficit may explain developmental language disorder. Psychol Rev 2024; 131:695-715. PMID: 37498700. PMCID: PMC11115354. DOI: 10.1037/rev0000436. Received: 12/05/2022; Revised: 05/04/2023; Accepted: 05/10/2023.
Abstract
Auditory perceptual deficits are widely observed among children with developmental language disorder (DLD). Yet the nature of these deficits and the extent to which they explain speech and language problems remain controversial. In this study, we hypothesize that disruption to the maturation of the basilar membrane may impede the optimization of the auditory pathway from brainstem to cortex, curtailing high-resolution frequency sensitivity and the efficient spectral decomposition and encoding of natural speech. A series of computational simulations involving deep convolutional neural networks trained to encode, recognize, and retrieve naturalistic speech is presented to demonstrate the strength of this account. These neural networks were built on top of biologically faithful inner-ear models developed to model human cochlear function, which, in the key innovation of the present study, were scheduled to mature at different rates over time. Delaying cochlear maturation qualitatively replicated the linguistic behavior and neurophysiology of individuals with language learning difficulties in a number of ways, resulting in (a) delayed language acquisition profiles, (b) lower spoken word recognition accuracy, (c) word finding and retrieval difficulties, (d) "fuzzy" and intersecting speech encodings and signatures of immature neural optimization, and (e) emergent working memory and attentional deficits. These simulations illustrate the many negative cascading effects that a primary maturational frequency discrimination deficit may have on early language development and generate precise and testable hypotheses for future research into the nature and cost of auditory processing deficits in children with language learning difficulties.
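The "fuzzy, intersecting speech encodings" result can be illustrated with a toy spectral code: coarser frequency resolution (an immature cochlea, in the study's terms) makes nearby tones map onto overlapping representations. This is a deliberately crude stand-in for the paper's inner-ear models; every parameter is illustrative:

```python
import numpy as np

sr, dur = 16000, 0.1
t = np.arange(int(sr * dur)) / sr
tone_a = np.sin(2 * np.pi * 500 * t)  # 500 Hz tone
tone_b = np.sin(2 * np.pi * 560 * t)  # a nearby 560 Hz tone

def spectral_code(x, n_channels):
    """Crude 'cochlear' code: energy in n_channels log-spaced frequency bands."""
    spec = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), 1 / sr)
    edges = np.logspace(np.log10(50), np.log10(8000), n_channels + 1)
    return np.array([spec[(freqs >= lo) & (freqs < hi)].sum()
                     for lo, hi in zip(edges[:-1], edges[1:])])

def overlap(a, b):
    """Cosine similarity of two codes: higher = fuzzier, more intersecting."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

immature = overlap(spectral_code(tone_a, 8), spectral_code(tone_b, 8))
mature = overlap(spectral_code(tone_a, 64), spectral_code(tone_b, 64))
print(immature > mature)  # coarse frequency resolution yields more overlap
```

With 8 channels both tones fall into the same band and their codes are nearly identical; with 64 channels they land in different bands and the overlap collapses, the toy analog of immature vs. mature spectral decomposition.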
10
Nourski KV, Steinschneider M, Rhone AE, Dappen ER, Kawasaki H, Howard MA. Processing of auditory novelty in human cortex during a semantic categorization task. Hear Res 2024; 444:108972. PMID: 38359485. PMCID: PMC10984345. DOI: 10.1016/j.heares.2024.108972. Received: 01/18/2024; Revised: 02/05/2024; Accepted: 02/10/2024.
Abstract
Auditory semantic novelty (a new meaningful sound in the context of a predictable acoustical environment) can probe neural circuits involved in language processing. Aberrant novelty detection is a feature of many neuropsychiatric disorders. This large-scale human intracranial electrophysiology study examined the spatial distribution of gamma and alpha power and auditory evoked potentials (AEP) associated with responses to unexpected words during performance of semantic categorization tasks. Participants were neurosurgical patients undergoing monitoring for medically intractable epilepsy. Each task included repeatedly presented monosyllabic words from different talkers ("common") and ten words presented only once ("novel"). Targets were words belonging to a specific semantic category. Novelty effects were defined as differences between neural responses to novel and common words. Novelty increased task difficulty and was associated with augmented gamma power, suppressed alpha power, and AEP differences broadly distributed across the cortex. The gamma novelty effect was most prevalent in the planum temporale, posterior superior temporal gyrus (STG), and pars triangularis of the inferior frontal gyrus; the alpha effect in anterolateral Heschl's gyrus (HG), anterior STG, and middle anterior cingulate cortex; and the AEP effect in posteromedial HG, the lower bank of the superior temporal sulcus, and planum polare. The gamma novelty effect had a higher prevalence in dorsal than ventral auditory-related areas. Novelty effects were more pronounced in the left hemisphere. Better novel-target detection was associated with a reduced gamma novelty effect within auditory cortex and an enhanced gamma effect within prefrontal and sensorimotor cortex. Alpha and AEP novelty effects were generally more prevalent in better-performing participants. Multiple areas, including auditory cortex on the superior temporal plane, featured an AEP novelty effect within the time frame of the P3a and N400 scalp-recorded novelty-related potentials. This work provides a detailed account of auditory novelty in a paradigm that directly examined brain regions associated with semantic processing. Future studies may aid in the development of objective measures to assess the integrity of semantic-novelty processing in clinical populations.
Affiliation(s)
- Kirill V Nourski
- Department of Neurosurgery, The University of Iowa, Iowa City, IA 52242, United States; Iowa Neuroscience Institute, The University of Iowa, Iowa City, IA 52242, United States
- Mitchell Steinschneider
- Department of Neurosurgery, The University of Iowa, Iowa City, IA 52242, United States; Departments of Neurology, Neuroscience, and Pediatrics, Albert Einstein College of Medicine, Bronx, NY 10461, United States
- Ariane E Rhone
- Department of Neurosurgery, The University of Iowa, Iowa City, IA 52242, United States
- Emily R Dappen
- Department of Neurosurgery, The University of Iowa, Iowa City, IA 52242, United States; Iowa Neuroscience Institute, The University of Iowa, Iowa City, IA 52242, United States
- Hiroto Kawasaki
- Department of Neurosurgery, The University of Iowa, Iowa City, IA 52242, United States
- Matthew A Howard
- Department of Neurosurgery, The University of Iowa, Iowa City, IA 52242, United States; Iowa Neuroscience Institute, The University of Iowa, Iowa City, IA 52242, United States; Pappajohn Biomedical Institute, The University of Iowa, Iowa City, IA 52242, United States

11
Wikman P, Salmela V, Sjöblom E, Leminen M, Laine M, Alho K. Attention to audiovisual speech shapes neural processing through feedback-feedforward loops between different nodes of the speech network. PLoS Biol 2024; 22:e3002534. PMID: 38466713; PMCID: PMC10957087; DOI: 10.1371/journal.pbio.3002534.
Abstract
Selective attention-related top-down modulation plays a significant role in separating relevant speech from irrelevant background speech when vocal attributes separating concurrent speakers are small and continuously evolving. Electrophysiological studies have shown that such top-down modulation enhances neural tracking of attended speech. Yet, the specific cortical regions involved remain unclear due to the limited spatial resolution of most electrophysiological techniques. To overcome such limitations, we collected both electroencephalography (EEG) (high temporal resolution) and functional magnetic resonance imaging (fMRI) (high spatial resolution), while human participants selectively attended to speakers in audiovisual scenes containing overlapping cocktail party speech. To utilise the advantages of the respective techniques, we analysed neural tracking of speech using the EEG data and performed representational dissimilarity-based EEG-fMRI fusion. We observed that attention enhanced neural tracking and modulated EEG correlates throughout the latencies studied. Further, attention-related enhancement of neural tracking fluctuated in predictable temporal profiles. We discuss how such temporal dynamics could arise from a combination of interactions between attention and prediction as well as plastic properties of the auditory cortex. EEG-fMRI fusion revealed attention-related iterative feedforward-feedback loops between hierarchically organised nodes of the ventral auditory object related processing stream. Our findings support models where attention facilitates dynamic neural changes in the auditory cortex, ultimately aiding discrimination of relevant sounds from irrelevant ones while conserving neural resources.
Affiliation(s)
- Patrik Wikman
- Department of Psychology and Logopedics, University of Helsinki, Helsinki, Finland
- Advanced Magnetic Imaging Centre, Aalto NeuroImaging, Aalto University, Espoo, Finland
- Viljami Salmela
- Department of Psychology and Logopedics, University of Helsinki, Helsinki, Finland
- Advanced Magnetic Imaging Centre, Aalto NeuroImaging, Aalto University, Espoo, Finland
- Eetu Sjöblom
- Department of Psychology and Logopedics, University of Helsinki, Helsinki, Finland
- Miika Leminen
- Department of Psychology and Logopedics, University of Helsinki, Helsinki, Finland
- AI and Analytics Unit, Helsinki University Hospital, Helsinki, Finland
- Matti Laine
- Department of Psychology, Åbo Akademi University, Turku, Finland
- Kimmo Alho
- Department of Psychology and Logopedics, University of Helsinki, Helsinki, Finland
- Advanced Magnetic Imaging Centre, Aalto NeuroImaging, Aalto University, Espoo, Finland

12
Regev TI, Kim HS, Chen X, Affourtit J, Schipper AE, Bergen L, Mahowald K, Fedorenko E. High-level language brain regions process sublexical regularities. Cereb Cortex 2024; 34:bhae077. PMID: 38494886; DOI: 10.1093/cercor/bhae077.
Abstract
A network of left frontal and temporal brain regions supports language processing. This "core" language network stores our knowledge of words and constructions as well as constraints on how those combine to form sentences. However, our linguistic knowledge additionally includes information about phonemes and how they combine to form phonemic clusters, syllables, and words. Are phoneme combinatorics also represented in these language regions? Across five functional magnetic resonance imaging experiments, we investigated the sensitivity of high-level language processing brain regions to sublexical linguistic regularities by examining responses to diverse nonwords, i.e., sequences of phonemes that do not constitute real words (e.g. punes, silory, flope). We establish robust responses in the language network to visually (experiment 1a, n = 605) and auditorily (experiments 1b, n = 12, and 1c, n = 13) presented nonwords. In experiment 2 (n = 16), we find stronger responses to nonwords that are more well-formed, i.e., that obey the phoneme-combinatorial constraints of English. Finally, in experiment 3 (n = 14), we provide suggestive evidence that the responses in experiments 1 and 2 are not due to the activation of real words that share some phonology with the nonwords. The results suggest that sublexical regularities are stored and processed within the same fronto-temporal network that supports lexical and syntactic processes.
Affiliation(s)
- Tamar I Regev
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, United States
- McGovern Institute for Brain Research, MIT, Cambridge, MA 02139, United States
- Hee So Kim
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, United States
- McGovern Institute for Brain Research, MIT, Cambridge, MA 02139, United States
- Xuanyi Chen
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, United States
- McGovern Institute for Brain Research, MIT, Cambridge, MA 02139, United States
- Department of Cognitive Sciences, Rice University, Houston, TX 77005, United States
- Josef Affourtit
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, United States
- McGovern Institute for Brain Research, MIT, Cambridge, MA 02139, United States
- Abigail E Schipper
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, United States
- Leon Bergen
- Department of Linguistics, University of California San Diego, San Diego, CA 92093, United States
- Kyle Mahowald
- Department of Linguistics, University of Texas at Austin, Austin, TX 78712, United States
- Evelina Fedorenko
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, United States
- McGovern Institute for Brain Research, MIT, Cambridge, MA 02139, United States
- The Harvard Program in Speech and Hearing Bioscience and Technology, Boston, MA 02115, United States

13
Liuzzi AG, Meersmans K, Peeters R, De Deyne S, Dupont P, Vandenberghe R. Semantic representations in inferior frontal and lateral temporal cortex during picture naming, reading, and repetition. Hum Brain Mapp 2024; 45:e26603. PMID: 38339900; PMCID: PMC10836176; DOI: 10.1002/hbm.26603.
Abstract
Reading, naming, and repetition are classical neuropsychological tasks widely used in the clinic and psycholinguistic research. While reading and repetition can be accomplished by following a direct or an indirect route, pictures can be named only by means of semantic mediation. By means of fMRI multivariate pattern analysis, we evaluated whether this well-established fundamental difference at the cognitive level is associated at the brain level with a difference in the degree to which semantic representations are activated during these tasks. Semantic similarity between words was estimated based on a word association model. Twenty subjects participated in an event-related fMRI study in which the three tasks were presented in pseudo-random order. Linear discriminant analysis of fMRI patterns identified a set of regions that allow discrimination between words at a high level of word-specificity across tasks. Representational similarity analysis was used to determine whether semantic similarity was represented in these regions and whether this depended on the task performed. The similarity between neural patterns of the left Brodmann area 45 (BA45) and of the superior portion of the left supramarginal gyrus correlated with the similarity in meaning between entities during picture naming. In both regions, no significant effects were seen for repetition or reading. The semantic similarity effect during picture naming was significantly larger than the similarity effect during the two other tasks. In contrast, several regions including left anterior superior temporal gyrus and left ventral BA44/frontal operculum, among others, coded for semantic similarity in a task-independent manner. These findings provide new evidence for the dynamic, task-dependent nature of semantic representations in the left BA45 and a more task-independent nature of the representational activation in the lateral temporal cortex and ventral BA44/frontal operculum.
Affiliation(s)
- Antonietta Gabriella Liuzzi
- Laboratory for Cognitive Neurology, Department of Neurosciences, Leuven Brain Institute, KU Leuven, Leuven, Belgium
- Karen Meersmans
- Laboratory for Cognitive Neurology, Department of Neurosciences, Leuven Brain Institute, KU Leuven, Leuven, Belgium
- Ronald Peeters
- Radiology Department, University Hospitals Leuven, Leuven, Belgium
- Simon De Deyne
- School of Psychological Sciences, University of Melbourne, Melbourne, Australia
- Patrick Dupont
- Laboratory for Cognitive Neurology, Department of Neurosciences, Leuven Brain Institute, KU Leuven, Leuven, Belgium
- Rik Vandenberghe
- Laboratory for Cognitive Neurology, Department of Neurosciences, Leuven Brain Institute, KU Leuven, Leuven, Belgium
- Neurology Department, University Hospitals Leuven, Leuven, Belgium

14
Zhang Y, Rennig J, Magnotti JF, Beauchamp MS. Multivariate fMRI responses in superior temporal cortex predict visual contributions to, and individual differences in, the intelligibility of noisy speech. Neuroimage 2023; 278:120271. PMID: 37442310; PMCID: PMC10460966; DOI: 10.1016/j.neuroimage.2023.120271.
Abstract
Humans have the unique ability to decode the rapid stream of language elements that constitute speech, even when it is contaminated by noise. Two reliable observations about noisy speech perception are that seeing the talker's face improves intelligibility and that individuals differ in their ability to perceive noisy speech. We introduce a multivariate BOLD fMRI measure that explains both observations. In two independent fMRI studies, clear and noisy speech was presented in visual, auditory and audiovisual formats to thirty-seven participants who rated intelligibility. An event-related design was used to sort noisy speech trials by their intelligibility. Individual-differences multidimensional scaling was applied to fMRI response patterns in superior temporal cortex and the dissimilarity between responses to clear speech and noisy (but intelligible) speech was measured. Neural dissimilarity was less for audiovisual speech than auditory-only speech, corresponding to the greater intelligibility of noisy audiovisual speech. Dissimilarity was less in participants with better noisy speech perception, corresponding to individual differences. These relationships held for both single word and entire sentence stimuli, suggesting that they were driven by intelligibility rather than the specific stimuli tested. A neural measure of perceptual intelligibility may aid in the development of strategies for helping those with impaired speech perception.
Affiliation(s)
- Yue Zhang
- Department of Neurosurgery, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States; Department of Neurosurgery, Baylor College of Medicine, Houston, TX, United States
- Johannes Rennig
- Division of Neuropsychology, Center of Neurology, Hertie-Institute for Clinical Brain Research, University of Tübingen, Tübingen, Germany
- John F Magnotti
- Department of Neurosurgery, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States
- Michael S Beauchamp
- Department of Neurosurgery, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States

15
Hauw F, El Soudany M, Rosso C, Daunizeau J, Cohen L. A single case neuroimaging study of tickertape synesthesia. Sci Rep 2023; 13:12185. PMID: 37500762; PMCID: PMC10374523; DOI: 10.1038/s41598-023-39276-2.
Abstract
Reading acquisition is enabled by deep changes in the brain's visual system and language areas, and in the links subtending their collaboration. Disruption of those plastic processes commonly results in developmental dyslexia. However, atypical development of reading mechanisms may occasionally result in ticker-tape synesthesia (TTS), a condition described by Francis Galton in 1883 wherein individuals "see mentally in print every word that is uttered (…) as from a long imaginary strip of paper". While reading is the bottom-up translation of letters into speech, TTS may be viewed as its opposite, the top-down translation of speech into internally visualized letters. In a series of functional MRI experiments, we studied MK, a man with TTS. We showed that a set of left-hemispheric areas were more active in MK than in controls during the perception of normal speech as compared with reversed speech, including frontoparietal areas involved in speech processing, and the Visual Word Form Area, an occipitotemporal region subtending orthography. Those areas were identical to those involved in reading, supporting the construal of TTS as upended reading. Using dynamic causal modeling, we further showed that, parallel to reading, TTS induced by spoken words and pseudowords relied on top-down flow of information along distinct lexical and phonological routes, involving the middle temporal and supramarginal gyri, respectively. Future studies of TTS should shed new light on the neurodevelopmental mechanisms of reading acquisition, their variability and their disorders.
Affiliation(s)
- Fabien Hauw
- Inserm U 1127, CNRS UMR 7225, Sorbonne Universités, Institut du Cerveau, ICM, Paris, France
- AP-HP, Hôpital de la Pitié Salpêtrière, Fédération de Neurologie, Paris, France
- Mohamed El Soudany
- Inserm U 1127, CNRS UMR 7225, Sorbonne Universités, Institut du Cerveau, ICM, Paris, France
- Charlotte Rosso
- Inserm U 1127, CNRS UMR 7225, Sorbonne Universités, Institut du Cerveau, ICM, Paris, France
- AP-HP, Urgences Cérébro-Vasculaires, Hôpital Pitié-Salpêtrière, Paris, France
- Jean Daunizeau
- Inserm U 1127, CNRS UMR 7225, Sorbonne Universités, Institut du Cerveau, ICM, Paris, France
- Laurent Cohen
- Inserm U 1127, CNRS UMR 7225, Sorbonne Universités, Institut du Cerveau, ICM, Paris, France
- AP-HP, Hôpital de la Pitié Salpêtrière, Fédération de Neurologie, Paris, France

16
Damera SR, Chang L, Nikolov PP, Mattei JA, Banerjee S, Glezer LS, Cox PH, Jiang X, Rauschecker JP, Riesenhuber M. Evidence for a spoken word lexicon in the auditory ventral stream. Neurobiology of Language 2023; 4:420-434. PMID: 37588129; PMCID: PMC10426387; DOI: 10.1162/nol_a_00108.
Abstract
The existence of a neural representation for whole words (i.e., a lexicon) is a common feature of many models of speech processing. Prior studies have provided evidence for a visual lexicon containing representations of whole written words in an area of the ventral visual stream known as the visual word form area. Similar experimental support for an auditory lexicon containing representations of spoken words has yet to be shown. Using functional magnetic resonance imaging rapid adaptation techniques, we provide evidence for an auditory lexicon in the auditory word form area in the human left anterior superior temporal gyrus that contains representations highly selective for individual spoken words. Furthermore, we show that familiarization with novel auditory words sharpens the selectivity of their representations in the auditory word form area. These findings reveal strong parallels in how the brain represents written and spoken words, showing convergent processing strategies across modalities in the visual and auditory ventral streams.
Affiliation(s)
- Srikanth R. Damera
- Department of Neuroscience, Georgetown University Medical Center, Washington, DC, USA
- Lillian Chang
- Department of Neuroscience, Georgetown University Medical Center, Washington, DC, USA
- Plamen P. Nikolov
- Department of Neuroscience, Georgetown University Medical Center, Washington, DC, USA
- James A. Mattei
- Department of Neuroscience, Georgetown University Medical Center, Washington, DC, USA
- Suneel Banerjee
- Department of Neuroscience, Georgetown University Medical Center, Washington, DC, USA
- Laurie S. Glezer
- Department of Speech, Language, and Hearing Sciences, San Diego State University, San Diego, CA, USA
- Patrick H. Cox
- Department of Neuroscience, Georgetown University Medical Center, Washington, DC, USA
- Xiong Jiang
- Department of Neuroscience, Georgetown University Medical Center, Washington, DC, USA
- Josef P. Rauschecker
- Department of Neuroscience, Georgetown University Medical Center, Washington, DC, USA

17
Gong XL, Huth AG, Deniz F, Johnson K, Gallant JL, Theunissen FE. Phonemic segmentation of narrative speech in human cerebral cortex. Nat Commun 2023; 14:4309. PMID: 37463907; PMCID: PMC10354060; DOI: 10.1038/s41467-023-39872-w.
Abstract
Speech processing requires extracting meaning from acoustic patterns using a set of intermediate representations based on a dynamic segmentation of the speech stream. Using whole-brain mapping obtained with fMRI, we investigate the locus of cortical phonemic processing not only for single phonemes but also for short combinations made of diphones and triphones. We find that phonemic processing areas are much larger than previously described: they include not only the classical areas in the dorsal superior temporal gyrus but also a larger region in the lateral temporal cortex where diphone features are best represented. These identified phonemic regions overlap with the lexical retrieval region, but we show that short word retrieval is not sufficient to explain the observed responses to diphones. Behavioral studies have shown that phonemic processing and lexical retrieval are intertwined. Here, we have also identified candidate regions within the speech cortical network where this joint processing occurs.
Affiliation(s)
- Xue L Gong
- Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, CA 94720, USA
- Alexander G Huth
- Departments of Neuroscience and Computer Science, University of Texas at Austin, Austin, TX 78712, USA
- Fatma Deniz
- Faculty of Electrical Engineering and Computer Science, Technische Universität Berlin, 10587 Berlin, Germany
- Keith Johnson
- Department of Linguistics, University of California, Berkeley, Berkeley, CA 94720, USA
- Jack L Gallant
- Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, CA 94720, USA
- Department of Psychology, University of California, Berkeley, Berkeley, CA 94720, USA
- Frédéric E Theunissen
- Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, CA 94720, USA
- Department of Psychology, University of California, Berkeley, Berkeley, CA 94720, USA
- Department of Integrative Biology, University of California, Berkeley, Berkeley, CA 94720, USA

18
Chu Q, Ma O, Hang Y, Tian X. Dual-stream cortical pathways mediate sensory prediction. Cereb Cortex 2023:7169133. PMID: 37197767; DOI: 10.1093/cercor/bhad168.
Abstract
Predictions are constantly generated from diverse sources to optimize cognitive functions in the ever-changing environment. However, the neural origin and generation process of top-down induced prediction remain elusive. We hypothesized that motor-based and memory-based predictions are mediated by distinct descending networks from motor and memory systems to the sensory cortices. Using functional magnetic resonance imaging (fMRI) and a dual imagery paradigm, we found that motor and memory upstream systems activated the auditory cortex in a content-specific manner. Moreover, the inferior and posterior parts of the parietal lobe differentially relayed predictive signals in motor-to-sensory and memory-to-sensory networks. Dynamic causal modeling of directed connectivity revealed selective enabling and modulation of connections that mediate top-down sensory prediction and ground the distinctive neurocognitive basis of predictive processing.
Affiliation(s)
- Qian Chu
- Shanghai Frontiers Science Center of Artificial Intelligence and Deep Learning, Division of Arts and Sciences, New York University Shanghai, Shanghai 200126, China
- NYU-ECNU Institute of Brain and Cognitive Science at NYU Shanghai, Shanghai 200062, China
- Max Planck-University of Toronto Centre for Neural Science and Technology, Toronto, ON M5S 2E4, Canada
- Ou Ma
- NYU-ECNU Institute of Brain and Cognitive Science at NYU Shanghai, Shanghai 200062, China
- Shanghai Key Laboratory of Brain Functional Genomics (Ministry of Education), School of Psychology and Cognitive Science, East China Normal University, Shanghai 200062, China
- Yuqi Hang
- NYU-ECNU Institute of Brain and Cognitive Science at NYU Shanghai, Shanghai 200062, China
- Department of Administration, Leadership, and Technology, Steinhardt School of Culture, Education, and Human Development, New York University, New York, NY 10003, United States
- Xing Tian
- Shanghai Frontiers Science Center of Artificial Intelligence and Deep Learning, Division of Arts and Sciences, New York University Shanghai, Shanghai 200126, China
- NYU-ECNU Institute of Brain and Cognitive Science at NYU Shanghai, Shanghai 200062, China
- Shanghai Key Laboratory of Brain Functional Genomics (Ministry of Education), School of Psychology and Cognitive Science, East China Normal University, Shanghai 200062, China

19
Rolls ET, Rauschecker JP, Deco G, Huang CC, Feng J. Auditory cortical connectivity in humans. Cereb Cortex 2023; 33:6207-6227. PMID: 36573464; PMCID: PMC10422925; DOI: 10.1093/cercor/bhac496.
Abstract
To understand auditory cortical processing, the effective connectivity between 15 auditory cortical regions and 360 cortical regions was measured in 171 Human Connectome Project participants, and complemented with functional connectivity and diffusion tractography. 1. A hierarchy of auditory cortical processing was identified from Core regions (including A1) to Belt regions LBelt, MBelt, and 52; then to PBelt; and then to HCP A4. 2. A4 has connectivity to anterior temporal lobe TA2, and to HCP A5, which connects to dorsal-bank superior temporal sulcus (STS) regions STGa, STSda, and STSdp. These STS regions also receive visual inputs about moving faces and objects, which are combined with auditory information to help implement multimodal object identification, such as who is speaking, and what is being said. Consistent with this being a "what" ventral auditory stream, these STS regions then have effective connectivity to TPOJ1, STV, PSL, TGv, TGd, and PGi, which are language-related semantic regions connecting to Broca's area, especially BA45. 3. A4 and A5 also have effective connectivity to MT and MST, which connect to superior parietal regions forming a dorsal auditory "where" stream involved in actions in space. Connections of PBelt, A4, and A5 with BA44 may form a language-related dorsal stream.
Affiliation(s)
- Edmund T Rolls
- Oxford Centre for Computational Neuroscience, Oxford, UK
- Department of Computer Science, University of Warwick, Coventry CV4 7AL, UK
- Shanghai Key Laboratory of Brain Functional Genomics (Ministry of Education), School of Psychology and Cognitive Science, East China Normal University, Shanghai 200602, China
- Josef P Rauschecker
- Department of Neuroscience, Georgetown University Medical Center, Washington, DC 20057, USA
- Institute for Advanced Study, Technical University, Munich, Germany
- Gustavo Deco
- Center for Brain and Cognition, Computational Neuroscience Group, Department of Information and Communication Technologies, Universitat Pompeu Fabra, Roc Boronat 138, Barcelona 08018, Spain
- Institució Catalana de la Recerca i Estudis Avançats (ICREA), Universitat Pompeu Fabra, Passeig Lluís Companys 23, Barcelona 08010, Spain
- Chu-Chung Huang
- Shanghai Key Laboratory of Brain Functional Genomics (Ministry of Education), School of Psychology and Cognitive Science, East China Normal University, Shanghai 200602, China
- Jianfeng Feng
- Department of Computer Science, University of Warwick, Coventry CV4 7AL, UK
- Institute of Science and Technology for Brain Inspired Intelligence, Fudan University, Shanghai 200403, China

20
Elsabbagh T, Wright-Wilson L, Brauer S, Morsella E. The habituation of higher-order conscious processes: Evidence from mental arithmetic. Acta Psychol (Amst) 2023; 236:103922. PMID: 37167660; DOI: 10.1016/j.actpsy.2023.103922.
Abstract
A recurring idea in psychology is that one is conscious only of the "outputs" of mental operations, but not of the operations themselves. Often, such "entry into consciousness" occurs involuntarily. To investigate involuntary entry, some experimentalists have used the reflexive imagery task (RIT). The RIT has revealed that, under certain conditions, external stimuli (e.g., line drawings) can elicit involuntary entry of high-level cognitions. In the basic version of the task, participants are presented with visual objects and instructed not to subvocalize (i.e., say in one's head) the names of these objects. Participants cannot suppress these subvocalizations on a majority of the trials. It has been proposed that, if RIT effects resemble a reflex, then perhaps they will habituate as reflexes do. In the "habituation" variant of the RIT, the same stimulus object (e.g., CAT) is presented on ten consecutive trials (ten "instantiations"), in order to induce habituation (i.e., a weakened RIT effect). It remains unknown whether such habituation effects arise for stimulus-elicited processes that depend, not on subvocalization, but on more complex processes, such as mental arithmetic. To illuminate this issue, we conducted a conceptual replication of the "habituation" RIT in which, on each trial, the participant tries not to add two numbers (e.g., 14 and 2). We assessed whether the habituation effects were stimulus-specific or set-specific. Understanding the boundary conditions of the RIT effect and its habituation illuminates the limits of unconscious processes and the role of conscious processing.
Affiliation(s)
- Tala Elsabbagh
- Department of Psychology, San Francisco State University, United States of America
- Latoya Wright-Wilson
- Department of Psychology, San Francisco State University, United States of America
- Sarah Brauer
- Department of Psychology, San Francisco State University, United States of America
- Ezequiel Morsella
- Department of Psychology, San Francisco State University, United States of America; Neuroscape, Department of Neurology, University of California, San Francisco, United States of America

21
Keshishian M, Akkol S, Herrero J, Bickel S, Mehta AD, Mesgarani N. Joint, distributed and hierarchically organized encoding of linguistic features in the human auditory cortex. Nat Hum Behav 2023; 7:740-753. PMID: 36864134; PMCID: PMC10417567; DOI: 10.1038/s41562-023-01520-0.
Abstract
The precise role of the human auditory cortex in representing speech sounds and transforming them to meaning is not yet fully understood. Here we used intracranial recordings from the auditory cortex of neurosurgical patients as they listened to natural speech. We found an explicit, temporally ordered and anatomically distributed neural encoding of multiple linguistic features, including phonetic, prelexical phonotactics, word frequency, and lexical-phonological and lexical-semantic information. Grouping neural sites on the basis of their encoded linguistic features revealed a hierarchical pattern, with distinct representations of prelexical and postlexical features distributed across various auditory areas. While sites with longer response latencies and greater distance from the primary auditory cortex encoded higher-level linguistic features, the encoding of lower-level features was preserved and not discarded. Our study reveals a cumulative mapping of sound to meaning and provides empirical evidence for validating neurolinguistic and psycholinguistic models of spoken word recognition that preserve the acoustic variations in speech.
Affiliation(s)
- Menoua Keshishian
- Department of Electrical Engineering, Columbia University, New York, NY, USA
- Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA
- Serdar Akkol
- Feinstein Institutes for Medical Research, Northwell Health, Manhasset, NY, USA
- Jose Herrero
- Feinstein Institutes for Medical Research, Northwell Health, Manhasset, NY, USA
- Department of Neurosurgery, Hofstra-Northwell School of Medicine, Manhasset, NY, USA
- Stephan Bickel
- Feinstein Institutes for Medical Research, Northwell Health, Manhasset, NY, USA
- Department of Neurosurgery, Hofstra-Northwell School of Medicine, Manhasset, NY, USA
- Ashesh D Mehta
- Feinstein Institutes for Medical Research, Northwell Health, Manhasset, NY, USA
- Department of Neurosurgery, Hofstra-Northwell School of Medicine, Manhasset, NY, USA
- Nima Mesgarani
- Department of Electrical Engineering, Columbia University, New York, NY, USA.
- Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA.
22
Rogenmoser L, Mouthon M, Etter F, Kamber J, Annoni JM, Schwab S. The processing of stress in a foreign language modulates functional antagonism between default mode and attention network regions. Neuropsychologia 2023; 185:108572. PMID: 37119986; DOI: 10.1016/j.neuropsychologia.2023.108572.
Abstract
Lexical stress is an essential element of prosody. Mastering this prosodic feature is challenging, especially in a free-stress foreign language for individuals native to a fixed-stress language, a phenomenon referred to as stress deafness. By using functional magnetic resonance imaging, we elucidated the neuronal underpinnings of stress processing in a free-stress foreign language, and determined the underlying mechanism of stress deafness. Here, we contrasted behavioral and hemodynamic responses revealed by native speakers of a free-stress (German; N = 38) and a fixed-stress (French; N = 47) language while discriminating pairs of words in a free-stress foreign language (Spanish). Consistent with the stress deafness phenomenon, French speakers performed worse than German speakers in discriminating Spanish words based on cues of stress but not of vowel. Whole-brain analyses revealed widespread bilateral networks (cerebral regions including frontal, temporal and parietal areas as well as insular, subcortical and cerebellar structures), overlapping with the ones previously associated with stress processing within native languages. Moreover, our results provide evidence that the structures pertaining to a right-lateralized attention system (i.e., middle frontal gyrus, anterior insula) and the Default Mode Network modulate stress processing as a function of the performance level. In comparison to the German speakers, the French speakers activated the attention system and deactivated the Default Mode Network to a stronger degree, reflecting attentive engagement, likely a compensatory mechanism underlying the "stress-deaf" brain. The mechanism modulating stress processing argues for a rightward lateralization, indeed overlapping with the location covered by the dorsal stream but remaining unspecific to speech.
Affiliation(s)
- Lars Rogenmoser
- Department of French, Université de Fribourg, Beauregard 11-13, 1700, Fribourg, Switzerland.
- Michael Mouthon
- Neurology-Laboratory for Cognitive and Neurological Sciences, University of Fribourg, Chemin Du Musée, 1700, Fribourg, Switzerland.
- Faustine Etter
- Department of French, Université de Fribourg, Beauregard 11-13, 1700, Fribourg, Switzerland.
- Julie Kamber
- Department of French, Université de Fribourg, Beauregard 11-13, 1700, Fribourg, Switzerland.
- Jean-Marie Annoni
- Neurology-Laboratory for Cognitive and Neurological Sciences, University of Fribourg, Chemin Du Musée, 1700, Fribourg, Switzerland.
- Sandra Schwab
- Department of French, Université de Fribourg, Beauregard 11-13, 1700, Fribourg, Switzerland; Institute of French, University of Bern, Längassstrasse 49, 3012, Bern, Switzerland; Computational Linguistics / Phonetics and Speech Sciences, University of Zurich, Andreastrasse 15, 8050, Zurich, Switzerland.
23
Giordano BL, Esposito M, Valente G, Formisano E. Intermediate acoustic-to-semantic representations link behavioral and neural responses to natural sounds. Nat Neurosci 2023; 26:664-672. PMID: 36928634; PMCID: PMC10076214; DOI: 10.1038/s41593-023-01285-9.
Abstract
Recognizing sounds implicates the cerebral transformation of input waveforms into semantic representations. Although past research identified the superior temporal gyrus (STG) as a crucial cortical region, the computational fingerprint of these cerebral transformations remains poorly characterized. Here, we exploit a model comparison framework and contrasted the ability of acoustic, semantic (continuous and categorical) and sound-to-event deep neural network representation models to predict perceived sound dissimilarity and 7 T human auditory cortex functional magnetic resonance imaging responses. We confirm that spectrotemporal modulations predict early auditory cortex (Heschl's gyrus) responses, and that auditory dimensions (for example, loudness, periodicity) predict STG responses and perceived dissimilarity. Sound-to-event deep neural networks predict Heschl's gyrus responses similar to acoustic models but, notably, they outperform all competing models at predicting both STG responses and perceived dissimilarity. Our findings indicate that STG entails intermediate acoustic-to-semantic sound representations that neither acoustic nor semantic models can account for. These representations are compositional in nature and relevant to behavior.
Affiliation(s)
- Bruno L Giordano
- Institut de Neurosciences de La Timone, UMR 7289, CNRS and Université Aix-Marseille, Marseille, France.
- Michele Esposito
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, the Netherlands
- Giancarlo Valente
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, the Netherlands
- Elia Formisano
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, the Netherlands; Maastricht Centre for Systems Biology (MaCSBio), Faculty of Science and Engineering, Maastricht University, Maastricht, the Netherlands; Brightlands Institute for Smart Society (BISS), Maastricht University, Maastricht, the Netherlands.
24
Setti F, Handjaras G, Bottari D, Leo A, Diano M, Bruno V, Tinti C, Cecchetti L, Garbarini F, Pietrini P, Ricciardi E. A modality-independent proto-organization of human multisensory areas. Nat Hum Behav 2023; 7:397-410. PMID: 36646839; PMCID: PMC10038796; DOI: 10.1038/s41562-022-01507-3.
Abstract
The processing of multisensory information is based upon the capacity of brain regions, such as the superior temporal cortex, to combine information across modalities. However, it is still unclear whether the representation of coherent auditory and visual events requires any prior audiovisual experience to develop and function. Here we measured brain synchronization during the presentation of an audiovisual, audio-only or video-only version of the same narrative in distinct groups of sensory-deprived (congenitally blind and deaf) and typically developed individuals. Intersubject correlation analysis revealed that the superior temporal cortex was synchronized across auditory and visual conditions, even in sensory-deprived individuals who lack any audiovisual experience. This synchronization was primarily mediated by low-level perceptual features, and relied on a similar modality-independent topographical organization of slow temporal dynamics. The human superior temporal cortex is naturally endowed with a functional scaffolding to yield a common representation across multisensory events.
Affiliation(s)
- Francesca Setti
- MoMiLab, IMT School for Advanced Studies Lucca, Lucca, Italy
- Davide Bottari
- MoMiLab, IMT School for Advanced Studies Lucca, Lucca, Italy
- Andrea Leo
- Department of Translational Research and Advanced Technologies in Medicine and Surgery, University of Pisa, Pisa, Italy
- Matteo Diano
- Department of Psychology, University of Turin, Turin, Italy
- Valentina Bruno
- Manibus Lab, Department of Psychology, University of Turin, Turin, Italy
- Carla Tinti
- Department of Psychology, University of Turin, Turin, Italy
- Luca Cecchetti
- MoMiLab, IMT School for Advanced Studies Lucca, Lucca, Italy
- Pietro Pietrini
- MoMiLab, IMT School for Advanced Studies Lucca, Lucca, Italy
25
Sugimoto H, Abe MS, Otake-Matsuura M. Word-producing brain: Contribution of the left anterior middle temporal gyrus to word production patterns in spoken language. Brain Lang 2023; 238:105233. PMID: 36842390; DOI: 10.1016/j.bandl.2023.105233.
Abstract
Vocabulary is based on semantic knowledge. The anterior temporal lobe (ATL) has been considered an essential region for processing semantic knowledge; nonetheless, the association between word production patterns and the structural and functional characteristics of the ATL remains unclear. To examine this, we analyzed over one million words from group conversations among community-dwelling older adults and their multimodal magnetic resonance imaging data. A quantitative index for the word production patterns, namely the exponent β of Heaps' law, positively correlated with the left anterior middle temporal gyrus volume. Moreover, β negatively correlated with its resting-state functional connectivity with the precuneus. There was no significant correlation with the diffusion tensor imaging metrics in any fiber. These findings suggest that the vocabulary richness in spoken language depends on the brain status characterized by the semantic knowledge-related brain structure and its activation dissimilarity with the precuneus, a core region of the default mode network.
Affiliation(s)
- Hikaru Sugimoto
- RIKEN Center for Advanced Intelligence Project, Nihonbashi 1-chome Mitsui Building, 15th floor, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan.
- Masato S Abe
- RIKEN Center for Advanced Intelligence Project, Nihonbashi 1-chome Mitsui Building, 15th floor, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan; Faculty of Culture and Information Science, Doshisha University, 1-3 Tatara Miyakodani, Kyotanabe-shi, Kyoto-fu 610-0394, Japan.
- Mihoko Otake-Matsuura
- RIKEN Center for Advanced Intelligence Project, Nihonbashi 1-chome Mitsui Building, 15th floor, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan.
26
Rassili O, Michelas A, Dufour S. Does accentual variation in the pronunciation of French words influence their recognition? It depends on the ear of presentation. JASA Express Lett 2023; 3:035204. PMID: 37003717; DOI: 10.1121/10.0017516.
Abstract
This repetition priming study examined how word accentual variation in French is represented and processed during spoken word recognition. Mismatched primes in the accentual pattern were less effective than matched primes in facilitating target word recognition when the targets were presented in the left ear but not in the right ear. This indicates that in French, the accentual pattern of words influences their recognition when processing is constrained in the right hemisphere. This study pleads in favor of two memory systems, the one retaining words in an abstract format and the other retaining words in their various forms.
Affiliation(s)
- Outhmane Rassili
- Aix-Marseille Université, Centre National de la Recherche Scientifique, Laboratoire Parole et Langage, Unité Mixte de Recherche 7309, 13100 Aix-en-Provence, France.
- Amandine Michelas
- Aix-Marseille Université, Centre National de la Recherche Scientifique, Laboratoire Parole et Langage, Unité Mixte de Recherche 7309, 13100 Aix-en-Provence, France.
- Sophie Dufour
- Aix-Marseille Université, Centre National de la Recherche Scientifique, Laboratoire Parole et Langage, Unité Mixte de Recherche 7309, 13100 Aix-en-Provence, France.
27
Saalasti S, Alho J, Lahnakoski JM, Bacha-Trams M, Glerean E, Jääskeläinen IP, Hasson U, Sams M. Lipreading a naturalistic narrative in a female population: Neural characteristics shared with listening and reading. Brain Behav 2023; 13:e2869. PMID: 36579557; PMCID: PMC9927859; DOI: 10.1002/brb3.2869.
Abstract
INTRODUCTION Few of us are skilled lipreaders, while most struggle with the task. Neural substrates that enable comprehension of connected natural speech via lipreading are not yet well understood. METHODS We used a data-driven approach to identify brain areas underlying the lipreading of an 8-min narrative with participants whose lipreading skills varied extensively (range 6-100%, mean = 50.7%). The participants also listened to and read the same narrative. The similarity between individual participants' brain activity during the whole narrative, within and between conditions, was estimated by a voxel-wise comparison of the Blood Oxygenation Level Dependent (BOLD) signal time courses. RESULTS Inter-subject correlation (ISC) of the time courses revealed that lipreading, listening to, and reading the narrative were largely supported by the same brain areas in the temporal, parietal and frontal cortices, precuneus, and cerebellum. Additionally, listening to and reading connected naturalistic speech particularly activated higher-level linguistic processing in the parietal and frontal cortices more consistently than lipreading, probably paralleling the limited understanding obtained via lipreading. Importantly, higher lipreading test scores and subjective estimates of comprehension of the lipread narrative were associated with activity in the superior and middle temporal cortex. CONCLUSIONS Our new data illustrate that findings from prior studies using well-controlled repetitive speech stimuli and stimulus-driven data analyses are also valid for naturalistic connected speech. Our results might suggest an efficient use of brain areas dealing with phonological processing in skilled lipreaders.
Affiliation(s)
- Satu Saalasti
- Department of Psychology and Logopedics, University of Helsinki, Helsinki, Finland; Brain and Mind Laboratory, Department of Neuroscience and Biomedical Engineering, Aalto University, Espoo, Finland; Advanced Magnetic Imaging (AMI) Centre, Aalto NeuroImaging, School of Science, Aalto University, Espoo, Finland
- Jussi Alho
- Brain and Mind Laboratory, Department of Neuroscience and Biomedical Engineering, Aalto University, Espoo, Finland
- Juha M Lahnakoski
- Brain and Mind Laboratory, Department of Neuroscience and Biomedical Engineering, Aalto University, Espoo, Finland; Independent Max Planck Research Group for Social Neuroscience, Max Planck Institute of Psychiatry, Munich, Germany; Institute of Neuroscience and Medicine, Brain & Behaviour (INM-7), Research Center Jülich, Jülich, Germany; Institute of Systems Neuroscience, Medical Faculty, Heinrich Heine University Düsseldorf, Düsseldorf, Germany
- Mareike Bacha-Trams
- Brain and Mind Laboratory, Department of Neuroscience and Biomedical Engineering, Aalto University, Espoo, Finland
- Enrico Glerean
- Brain and Mind Laboratory, Department of Neuroscience and Biomedical Engineering, Aalto University, Espoo, Finland; Department of Psychology and the Neuroscience Institute, Princeton University, Princeton, USA
- Iiro P Jääskeläinen
- Brain and Mind Laboratory, Department of Neuroscience and Biomedical Engineering, Aalto University, Espoo, Finland
- Uri Hasson
- Department of Psychology and the Neuroscience Institute, Princeton University, Princeton, USA
- Mikko Sams
- Department of Neuroscience and Biomedical Engineering, Aalto University, Espoo, Finland; Aalto Studios - MAGICS, Aalto University, Espoo, Finland
28
Fatić S, Stanojević N, Stokić M, Nenadović V, Jeličić L, Bilibajkić R, Gavrilović A, Maksimović S, Adamović T, Subotić M. Electroencephalography correlates of word and non-word listening in children with specific language impairment: An observational study. Medicine (Baltimore) 2022; 101:e31840. PMID: 36401430; PMCID: PMC9678566; DOI: 10.1097/md.0000000000031840.
Abstract
Auditory processing in children diagnosed with speech and language impairment (SLI) is atypical and characterized by reduced brain activation compared to typically developing (TD) children. In typical speech and language development, frontal, temporal, and posterior regions are engaged during single-word listening, whereas non-words are unlikely to be perceived or produced frequently enough for the neurons involved to form stable network connections. This study aimed to investigate the electrophysiological cortical activity of the alpha rhythm while listening to words and non-words in children with SLI compared to TD children. The participants were 50 children with SLI, aged 4 to 6, and 50 age-matched TD children. Groups were divided into 2 subgroups: first subgroup - children aged 4.0 to 5.0 years old (E = 25, C = 25) and second subgroup - children aged 5.0 to 6.0 years old (E = 25, C = 25). The younger group did not show statistically significant differences in alpha spectral power during word or non-word listening. In contrast, in the older age group, differences for word and non-word listening were present in the prefrontal, temporal, and parieto-occipital regions bilaterally. Children with SLI showed a certain lack of alpha desynchronization during word and non-word listening compared with TD children. Non-word perception engages more brain regions because the stimuli are unfamiliar as words. The lack of adequate alpha desynchronization is consistent with established difficulties in lexical and phonological processing at the behavioral level in children with SLI.
Affiliation(s)
- Saška Fatić
- Department for Cognitive Neuroscience, Research and Development Institute “Life Activities Advancement Center”, Belgrade, Serbia
- Department of Speech, Language, and Hearing Sciences, Institute for Experimental Phonetics and Speech Pathology ˝Đorđe Kostić˝, Belgrade, Serbia
- *Correspondence: Saška Fatić, Department for Cognitive Neuroscience, Research and Development Institute “Life Activities Advancement Center”, Gospodar Jovanova 35, Belgrade 11 000, Serbia
- Nina Stanojević
- Department for Cognitive Neuroscience, Research and Development Institute “Life Activities Advancement Center”, Belgrade, Serbia
- Department of Speech, Language, and Hearing Sciences, Institute for Experimental Phonetics and Speech Pathology ˝Đorđe Kostić˝, Belgrade, Serbia
- Miodrag Stokić
- University of Belgrade, Faculty of Biology, Belgrade, Serbia
- Vanja Nenadović
- Department of Speech, Language, and Hearing Sciences, Institute for Experimental Phonetics and Speech Pathology ˝Đorđe Kostić˝, Belgrade, Serbia
- Ljiljana Jeličić
- Department for Cognitive Neuroscience, Research and Development Institute “Life Activities Advancement Center”, Belgrade, Serbia
- Department of Speech, Language, and Hearing Sciences, Institute for Experimental Phonetics and Speech Pathology ˝Đorđe Kostić˝, Belgrade, Serbia
- Ružica Bilibajkić
- Department for Cognitive Neuroscience, Research and Development Institute “Life Activities Advancement Center”, Belgrade, Serbia
- Aleksandar Gavrilović
- Faculty of Medical Sciences, Department of Neurology, University of Kragujevac, Kragujevac, Serbia
- Clinic of Neurology, Clinical Center Kragujevac, Kragujevac, Serbia
- Slavica Maksimović
- Department for Cognitive Neuroscience, Research and Development Institute “Life Activities Advancement Center”, Belgrade, Serbia
- Department of Speech, Language, and Hearing Sciences, Institute for Experimental Phonetics and Speech Pathology ˝Đorđe Kostić˝, Belgrade, Serbia
- Tatjana Adamović
- Department for Cognitive Neuroscience, Research and Development Institute “Life Activities Advancement Center”, Belgrade, Serbia
- Department of Speech, Language, and Hearing Sciences, Institute for Experimental Phonetics and Speech Pathology ˝Đorđe Kostić˝, Belgrade, Serbia
- Miško Subotić
- Department for Cognitive Neuroscience, Research and Development Institute “Life Activities Advancement Center”, Belgrade, Serbia
29
Brain activity during shadowing of audiovisual cocktail party speech, contributions of auditory-motor integration and selective attention. Sci Rep 2022; 12:18789. PMID: 36335137; PMCID: PMC9637225; DOI: 10.1038/s41598-022-22041-2.
Abstract
Selective listening to cocktail-party speech involves a network of auditory and inferior frontal cortical regions. However, cognitive and motor cortical regions are differentially activated depending on whether the task emphasizes semantic or phonological aspects of speech. Here we tested whether processing of cocktail-party speech differs when participants perform a shadowing (immediate speech repetition) task compared to an attentive listening task in the presence of irrelevant speech. Participants viewed audiovisual dialogues with concurrent distracting speech during functional imaging. Participants either attentively listened to the dialogue, overtly repeated (i.e., shadowed) attended speech, or performed visual or speech motor control tasks where they did not attend to speech and responses were not related to the speech input. Dialogues were presented with good or poor auditory and visual quality. As a novel result, we show that attentive processing of speech activated the same network of sensory and frontal regions during listening and shadowing. However, in the superior temporal gyrus (STG), peak activations during shadowing were posterior to those during listening, suggesting that an anterior-posterior distinction is present for motor vs. perceptual processing of speech already at the level of the auditory cortex. We also found that activations along the dorsal auditory processing stream were specifically associated with the shadowing task. These activations are likely to be due to complex interactions between perceptual, attention-dependent speech processing and motor speech generation that matches the heard speech. Our results suggest that interactions between perceptual and motor processing of speech rely on a distributed network of temporal and motor regions rather than any specific anatomical landmark as suggested by some previous studies.
30
Dole M, Vilain C, Haldin C, Baciu M, Cousin E, Lamalle L, Lœvenbruck H, Vilain A, Schwartz JL. Comparing the selectivity of vowel representations in cortical auditory vs. motor areas: A repetition-suppression study. Neuropsychologia 2022; 176:108392. DOI: 10.1016/j.neuropsychologia.2022.108392.
31
Jones SD, Westermann G. Under-resourced or overloaded? Rethinking working memory deficits in developmental language disorder. Psychol Rev 2022; 129:1358-1372. PMID: 35482644; PMCID: PMC9899422; DOI: 10.1037/rev0000338.
Abstract
Dominant theoretical accounts of developmental language disorder (DLD) commonly invoke working memory capacity limitations. In the current report, we present an alternative view: that working memory in DLD is not under-resourced but overloaded due to operating on speech representations with low discriminability. This account is developed through computational simulations involving deep convolutional neural networks trained on spoken word spectrograms in which information is either retained to mimic typical development or degraded to mimic the auditory processing deficits identified among some children with DLD. We assess not only spoken word recognition accuracy and predictive probability and entropy (i.e., predictive distribution spread), but also use mean-field-theory-based manifold analysis to assess: (a) internal speech representation dimensionality and (b) classification capacity, a measure of the networks' ability to isolate any given internal speech representation that is used as a proxy for attentional control. We show that instantiating a low-level auditory processing deficit results in the formation of internal speech representations with atypically high dimensionality, and that classification capacity is exhausted due to low representation separability. These representation and control deficits underpin not only lower performance accuracy but also greater uncertainty even when making accurate predictions in a simulated spoken word recognition task (i.e., predictive distributions with low maximum probability and high entropy), which replicates the response delays and word finding difficulties often seen in DLD. Overall, these simulations demonstrate a theoretical account of speech representation and processing deficits in DLD in which working memory capacity limitations play no causal role. (PsycInfo Database Record (c) 2023 APA, all rights reserved).
32
Mangardich H, Sabbagh MA. Event-related potential studies of cross-situational word learning in four-year-old children. J Exp Child Psychol 2022; 222:105468. DOI: 10.1016/j.jecp.2022.105468.
33
Tang H, Fan S, Niu X, Li Z, Xiao P, Zeng J, Xing S. Remote cortical atrophy and language outcomes after chronic left subcortical stroke with aphasia. Front Neurosci 2022; 16:853169. PMID: 35992910; PMCID: PMC9381815; DOI: 10.3389/fnins.2022.853169.
Abstract
Objective Subcortical stroke can cause a variety of language deficits. However, the neural mechanisms underlying subcortical aphasia after stroke remain incompletely elucidated. We aimed to determine the effects of distant cortical structures on aphasia outcomes and examine the correlation of cortical thickness measures with connecting tracts integrity after chronic left subcortical stroke. Methods Thirty-two patients and 30 healthy control subjects underwent MRI scanning and language assessment with the Western Aphasia Battery-Revised (WAB-R) subtests. Among patients, the cortical thickness in brain regions that related to language performance were assessed by the FreeSurfer software. Fiber tracts connecting the identified cortical regions to stroke lesions were reconstructed to determine its correlations with the cortical thickness measures across individual patient. Results Cortical thickness in different parts of the left fronto-temporo-parietal (FTP) regions were positively related to auditory-verbal comprehension, spontaneous speech and naming/word finding abilities when controlling for key demographic variables and lesion size. Cortical thickness decline in the identified cortical regions was positively correlated with integrity loss of fiber tracts connected to stroke lesions. Additionally, no significant difference in cortical thickness was found across the left hemisphere between the subgroup of patients with hypoperfusion (HP) and those without HP at stroke onset. Conclusions These findings suggest that remote cortical atrophy independently predicts language outcomes in patients with chronic left subcortical stroke and aphasia and that cortical thinning in these regions might relate to integrity loss of fiber tracts connected to stroke lesions.
Affiliation(s)
- Huijia Tang
- Department of Neurology and Stroke Center, Guangdong Provincial Key Laboratory of Diagnosis and Treatment of Major Neurological Diseases, National Key Clinical Department and Key Discipline of Neurology, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
- Shuhan Fan
- Department of Neurology and Stroke Center, Guangdong Provincial Key Laboratory of Diagnosis and Treatment of Major Neurological Diseases, National Key Clinical Department and Key Discipline of Neurology, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
- Xingyang Niu
- Department of Neurology and Stroke Center, Guangdong Provincial Key Laboratory of Diagnosis and Treatment of Major Neurological Diseases, National Key Clinical Department and Key Discipline of Neurology, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
- Zhuhao Li
- Department of Radiology, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
- Peiyi Xiao
- Department of Neurology and Stroke Center, Guangdong Provincial Key Laboratory of Diagnosis and Treatment of Major Neurological Diseases, National Key Clinical Department and Key Discipline of Neurology, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
- Jinsheng Zeng
- Department of Neurology and Stroke Center, Guangdong Provincial Key Laboratory of Diagnosis and Treatment of Major Neurological Diseases, National Key Clinical Department and Key Discipline of Neurology, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
| | - Shihui Xing
- Department of Neurology and Stroke Center, Guangdong Provincial Key Laboratory of Diagnosis and Treatment of Major Neurological Diseases, National Key Clinical Department and Key Discipline of Neurology, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
- *Correspondence: Shihui Xing,
| |
Collapse
|
34
|
Abstract
Swedish lexical word accents have repeatedly been said to have a low functional load. Even so, the language has retained these tones ever since they emerged, probably more than a thousand years ago. This article proposes that the primary function of word accents is to let listeners predict upcoming morphological structures and narrow down the lexical competition, rather than to be lexically distinctive. Psycho- and neurophysiological evidence for the predictive function of word accents is discussed. A novel analysis shows that word accents have a facilitative role in word processing. Specifically, a correlation is revealed between how much incorrect word accents hinder listeners' processing and how much they reduce response times when correct. Finally, a dual-route model of the predictive use of word accents with distinct neural substrates is put forth.
Affiliation(s)
- Mikael Roll
- Centre for Languages and Literature, Lund University, Lund, Sweden

35
Anurova I, Vetchinnikova S, Dobrego A, Williams N, Mikusova N, Suni A, Mauranen A, Palva S. Event-related responses reflect chunk boundaries in natural speech. Neuroimage 2022; 255:119203. [PMID: 35413442 DOI: 10.1016/j.neuroimage.2022.119203]
Abstract
Chunking language has been proposed to be vital for comprehension, enabling the extraction of meaning from a continuous stream of speech. However, the neurocognitive mechanisms of chunking are poorly understood. The present study used magneto- and electroencephalography (MEEG) to investigate neural correlates of chunk boundaries that listeners intuitively identified in natural speech drawn from linguistic corpora. In a behavioral experiment, subjects marked chunk boundaries in the excerpts intuitively, which revealed highly consistent chunk boundary markings across subjects. We next recorded brain activity to investigate whether chunk boundaries with high and medium agreement rates elicit distinct evoked responses compared to non-boundaries. Pauses placed at chunk boundaries elicited a closure positive shift with sources over bilateral auditory cortices. In contrast, pauses placed within a chunk were perceived as interruptions and elicited a biphasic emitted potential with sources located in the bilateral primary and non-primary auditory areas, with right-hemispheric dominance, and in the right inferior frontal cortex. Furthermore, pauses placed at stronger boundaries elicited earlier and more prominent activation over the left hemisphere, suggesting that brain responses to chunk boundaries in natural speech can be modulated by the relative strength of different linguistic cues, such as syntactic structure and prosody.
Affiliation(s)
- Irina Anurova
- Helsinki Institute of Life Sciences, Neuroscience Center, University of Helsinki, Finland; BioMag Laboratory, HUS Medical Imaging Center, Helsinki, Finland
- Nitin Williams
- Helsinki Institute of Life Sciences, Neuroscience Center, University of Helsinki, Finland; Department of Languages, University of Helsinki, Finland
- Nina Mikusova
- Department of Languages, University of Helsinki, Finland
- Antti Suni
- Department of Languages, University of Helsinki, Finland
- Anna Mauranen
- Department of Languages, University of Helsinki, Finland
- Satu Palva
- Helsinki Institute of Life Sciences, Neuroscience Center, University of Helsinki, Finland; Centre for Cognitive Neuroscience, Institute of Neuroscience and Psychology, University of Glasgow, United Kingdom

36
Abstract
Through long-term training, music experts acquire complex and specialized sensorimotor skills, which are paralleled by continuous neuroanatomical and neurofunctional adaptations. The underlying neuroplasticity mechanisms have been extensively explored in decades of research in music, cognitive, and translational neuroscience. However, the absence of a comprehensive review and quantitative meta-analysis has prevented the plethora of variegated findings from converging into a unified picture of the neuroanatomy of musical expertise. Here, we performed a comprehensive neuroimaging meta-analysis of publications investigating neuroanatomical and neurofunctional differences between musicians (M) and non-musicians (NM). Eighty-four studies were included in the qualitative synthesis. Of these, 58 publications were included in coordinate-based meta-analyses using the anatomic/activation likelihood estimation (ALE) method. This comprehensive approach yields a coherent cortico-subcortical network encompassing sensorimotor and limbic regions bilaterally. In particular, M exhibited higher volume/activity in auditory, sensorimotor, interoceptive, and limbic brain areas and lower volume/activity in parietal areas relative to NM. Notably, we reveal topographical (dis)similarities between the identified functional and anatomical networks and characterize their link to various cognitive functions by means of meta-analytic connectivity modelling. Overall, we synthesize decades of research in the field and provide a consistent, controversy-free picture of the neuroanatomy of musical expertise.
37
Rivera-Urbina GN, Martínez-Castañeda MF, Núñez-Gómez AM, Molero-Chamizo A, Nitsche MA, Alameda-Bailén JR. Effects of tDCS applied over the left IFG and pSTG language areas on verb recognition task performance. Psychophysiology 2022; 59:e14134. [PMID: 35780078 DOI: 10.1111/psyp.14134]
Abstract
Knowledge about the relevance of the left inferior frontal gyrus (lIFG) and the left posterior superior temporal gyrus (lpSTG) in the visual recognition of word categories is limited at present. tDCS is a non-invasive brain stimulation method that alters cortical activity and excitability, and thus might be a useful tool for delineating the specific impact of both areas on word recognition. The objective of this study was to explore whether the visual recognition of verb categories is improved by a single tDCS session. The lIFG and lpSTG were separately modulated by anodal tDCS to evaluate its effects on verb recognition. Compared to sham stimulation, motor reaction times (RTs) were reduced after anodal tDCS over the lpSTG, and this effect was independent of the performing hand (right/left). These findings suggest that this region is involved in visual word recognition independently of the performing hand.
Affiliation(s)
- Michael A Nitsche
- Leibniz Research Centre for Working Environment and Human Factors, Dortmund, Germany; Department of Neurology, University Medical Hospital Bergmannsheil, Bochum, Germany

38
Rogalsky C, Basilakos A, Rorden C, Pillay S, LaCroix AN, Keator L, Mickelsen S, Anderson SW, Love T, Fridriksson J, Binder J, Hickok G. The Neuroanatomy of Speech Processing: A Large-scale Lesion Study. J Cogn Neurosci 2022; 34:1355-1375. [PMID: 35640102 PMCID: PMC9274306 DOI: 10.1162/jocn_a_01876]
Abstract
The neural basis of language has been studied for centuries, yet the networks critically involved in simply identifying or understanding a spoken word remain elusive. Several functional-anatomical models of the critical neural substrates of receptive speech have been proposed, including (1) auditory-related regions in the left mid-posterior superior temporal lobe, (2) motor-related regions in the left frontal lobe (in normal and/or noisy conditions), (3) the left anterior superior temporal lobe, or (4) bilateral mid-posterior superior temporal areas. One difficulty in comparing these models is that they often focus on different aspects of the sound-to-meaning pathway and are supported by different types of stimuli and tasks. Two auditory tasks that are typically used in separate studies, syllable discrimination and word comprehension, often yield different conclusions. We assessed syllable discrimination (words and nonwords) and word comprehension (clear speech and with a noise masker) in 158 individuals with focal brain damage: left (n = 113) or right (n = 19) hemisphere stroke, left (n = 18) or right (n = 8) anterior temporal lobectomy, and 26 neurologically intact controls. Discrimination and comprehension tasks were doubly dissociable both behaviorally and neurologically. In support of a bilateral model, clear speech comprehension was near ceiling in 95% of left stroke cases, and right temporal damage impaired syllable discrimination. Lesion-symptom mapping analyses for the syllable discrimination and noisy word comprehension tasks each implicated most of the left superior temporal gyrus. Comprehension but not discrimination tasks also implicated the left posterior middle temporal gyrus, whereas discrimination but not comprehension tasks also implicated more dorsal sensorimotor regions in posterior perisylvian cortex.
39
Zhang J, Shang D, Ye J, Ling Y, Zhong S, Zhang S, Zhang W, Zhang L, Yu Y, He F, Ye X, Luo B. Altered Coupling Between Cerebral Blood Flow and Voxel-Mirrored Homotopic Connectivity Affects Stroke-Induced Speech Comprehension Deficits. Front Aging Neurosci 2022; 14:922154. [PMID: 35813962 PMCID: PMC9260239 DOI: 10.3389/fnagi.2022.922154]
Abstract
The neurophysiological basis of the association between interhemispheric connectivity and speech comprehension processing remains unclear. This prospective study examined regional cerebral blood flow (CBF), homotopic functional connectivity, and neurovascular coupling, and their effects on comprehension performance in post-stroke aphasia. Multimodal imaging data (including data from functional magnetic resonance imaging and arterial spin labeling imaging) of 19 patients with post-stroke aphasia and 22 healthy volunteers were collected. CBF, voxel-mirrored homotopic connectivity (VMHC), CBF-VMHC correlation, and CBF/VMHC ratio maps were calculated. Between-group comparisons were performed to identify neurovascular changes, and correlation analyses were conducted to examine their relationship with the comprehension domain. The correlation between CBF and VMHC of the global gray matter decreased in patients with post-stroke aphasia. The total speech comprehension score was significantly associated with VMHC in the peri-Wernicke area [posterior superior temporal sulcus (pSTS): r = 0.748, p = 0.001; rostroventral area 39: r = 0.641, p = 0.008]. The decreased CBF/VMHC ratio was also mainly associated with the peri-Wernicke temporoparietal areas. Additionally, a negative relationship between the mean CBF/VMHC ratio of the cingulate gyrus subregion and sentence-level comprehension was observed (r = −0.658, p = 0.006). These findings indicate the contribution of peri-Wernicke homotopic functional connectivity to speech comprehension and reveal that abnormal neurovascular coupling of the cingulate gyrus subregion may underlie comprehension deficits in patients with post-stroke aphasia.
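The coupling measures named above (VMHC, the CBF-VMHC correlation, and the CBF/VMHC ratio) can be sketched numerically. This is a minimal illustration, not the authors' pipeline: it assumes BOLD time series have already been extracted for mirrored voxel pairs, and the array names (`ts_left`, `ts_right`, `cbf`) are hypothetical.

```python
import numpy as np

def vmhc(ts_left, ts_right):
    """Voxel-mirrored homotopic connectivity: Pearson correlation
    between each voxel's time series and its mirrored counterpart.
    ts_left, ts_right: (n_voxels, n_timepoints) arrays of paired voxels."""
    zl = (ts_left - ts_left.mean(1, keepdims=True)) / ts_left.std(1, keepdims=True)
    zr = (ts_right - ts_right.mean(1, keepdims=True)) / ts_right.std(1, keepdims=True)
    return (zl * zr).mean(axis=1)

def coupling_and_ratio(cbf, vmhc_map):
    """Across-voxel CBF-VMHC correlation (a global coupling index)
    and the voxelwise CBF/VMHC ratio map."""
    r = np.corrcoef(cbf, vmhc_map)[0, 1]
    ratio = cbf / vmhc_map  # only meaningful where VMHC is well above zero
    return r, ratio
```

A lower across-voxel `r` would correspond to the reduced global CBF-VMHC coupling reported in the patient group.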
Affiliation(s)
- Jie Zhang
- Center for Rehabilitation Medicine, Rehabilitation & Sports Medicine Research Institute of Zhejiang Province, Department of Rehabilitation Medicine, Zhejiang Provincial People’s Hospital (Affiliated People’s Hospital, Hangzhou Medical College), Hangzhou, China
- Department of Neurology, Brain Medical Center, The First Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, China
- Desheng Shang
- Department of Radiology, The First Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, China
- Jing Ye
- Center for Rehabilitation Medicine, Rehabilitation & Sports Medicine Research Institute of Zhejiang Province, Department of Rehabilitation Medicine, Zhejiang Provincial People’s Hospital (Affiliated People’s Hospital, Hangzhou Medical College), Hangzhou, China
- Yi Ling
- Department of Neurology, Brain Medical Center, The First Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, China
- Shuchang Zhong
- Center for Rehabilitation Medicine, Rehabilitation & Sports Medicine Research Institute of Zhejiang Province, Department of Rehabilitation Medicine, Zhejiang Provincial People’s Hospital (Affiliated People’s Hospital, Hangzhou Medical College), Hangzhou, China
- Shuangshuang Zhang
- Center for Rehabilitation Medicine, Rehabilitation & Sports Medicine Research Institute of Zhejiang Province, Department of Rehabilitation Medicine, Zhejiang Provincial People’s Hospital (Affiliated People’s Hospital, Hangzhou Medical College), Hangzhou, China
- Wei Zhang
- Center for Rehabilitation Medicine, Rehabilitation & Sports Medicine Research Institute of Zhejiang Province, Department of Rehabilitation Medicine, Zhejiang Provincial People’s Hospital (Affiliated People’s Hospital, Hangzhou Medical College), Hangzhou, China
- Li Zhang
- Center for Rehabilitation Medicine, Rehabilitation & Sports Medicine Research Institute of Zhejiang Province, Department of Rehabilitation Medicine, Zhejiang Provincial People’s Hospital (Affiliated People’s Hospital, Hangzhou Medical College), Hangzhou, China
- Yamei Yu
- Department of Neurology, Sir Run Run Shaw Hospital, Zhejiang University School of Medicine, Hangzhou, China
- Fangping He
- Department of Neurology, Brain Medical Center, The First Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, China
- Xiangming Ye
- Center for Rehabilitation Medicine, Rehabilitation & Sports Medicine Research Institute of Zhejiang Province, Department of Rehabilitation Medicine, Zhejiang Provincial People’s Hospital (Affiliated People’s Hospital, Hangzhou Medical College), Hangzhou, China
- *Correspondence: Xiangming Ye
- Benyan Luo
- Department of Neurology, Brain Medical Center, The First Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, China
- Collaborative Innovation Center for Brain Science, Zhejiang University School of Medicine, Hangzhou, China
- *Correspondence: Benyan Luo

40
Matchin W, den Ouden DB, Hickok G, Hillis AE, Bonilha L, Fridriksson J. The Wernicke conundrum revisited: evidence from connectome-based lesion-symptom mapping. Brain 2022; 145:3916-3930. [PMID: 35727949 DOI: 10.1093/brain/awac219]
Abstract
Wernicke's area has been assumed since the 1800s to be the primary region supporting word and sentence comprehension. However, in 2015 and 2019, Mesulam and colleagues raised what they termed the 'Wernicke conundrum', noting widespread variability in the anatomical definition of this area and presenting data from primary progressive aphasia that challenged this classical assumption. To resolve the conundrum, they posited a 'double disconnection' hypothesis: that word and sentence comprehension deficits in stroke-based aphasia result from disconnection of anterior temporal and inferior frontal regions from other parts of the brain due to white matter damage, rather than dysfunction of Wernicke's area itself. To test this hypothesis, we performed lesion-deficit correlations, including connectome-based lesion-symptom mapping, in four large, partially overlapping groups of English-speaking chronic left hemisphere stroke survivors. After removing variance due to object recognition and associative semantic processing, the same middle and posterior temporal lobe regions were implicated in both word comprehension deficits and complex noncanonical sentence comprehension deficits. Connectome lesion-symptom mapping revealed similar temporal-occipital white matter disconnections for impaired word and noncanonical sentence comprehension, including the temporal pole. We found an additional significant temporal-parietal disconnection for noncanonical sentence comprehension deficits, which may indicate a role for phonological working memory in processing complex syntax, but no significant frontal disconnections. Moreover, damage to these middle-posterior temporal lobe regions was associated with both word and noncanonical sentence comprehension deficits even when accounting for variance due to the strongest anterior temporal and inferior frontal white matter disconnections, respectively. 
Our results largely agree with the classical notion that Wernicke's area, defined here as middle superior temporal gyrus and middle-posterior superior temporal sulcus, supports both word and sentence comprehension, suggest a supporting role for temporal pole in both word and sentence comprehension, and speak against the hypothesis that comprehension deficits in Wernicke's aphasia result from double disconnection.
Affiliation(s)
- William Matchin
- Department of Communication Sciences and Disorders, University of South Carolina, Columbia, SC 29208, USA
- Dirk Bart den Ouden
- Department of Communication Sciences and Disorders, University of South Carolina, Columbia, SC 29208, USA
- Gregory Hickok
- Department of Cognitive Sciences, University of California, Irvine, Irvine, CA 92697, USA; Department of Language Science, University of California, Irvine, Irvine, CA 92697, USA
- Argye E Hillis
- Department of Neurology, Johns Hopkins University School of Medicine, Baltimore, MD 21218, USA; Department of Physical Medicine and Rehabilitation, Johns Hopkins University School of Medicine, Baltimore, MD 21218, USA; Department of Cognitive Science, Johns Hopkins University, Baltimore, MD 21218, USA
- Leonardo Bonilha
- Department of Neurology, Emory University School of Medicine, Atlanta, GA 30322, USA
- Julius Fridriksson
- Department of Communication Sciences and Disorders, University of South Carolina, Columbia, SC 29208, USA

41
Hakonen M, Ikäheimonen A, Hultèn A, Kauttonen J, Koskinen M, Lin FH, Lowe A, Sams M, Jääskeläinen IP. Processing of an Audiobook in the Human Brain Is Shaped by Cultural Family Background. Brain Sci 2022; 12:brainsci12050649. [PMID: 35625035 PMCID: PMC9139798 DOI: 10.3390/brainsci12050649]
Abstract
Perception of the same narrative can vary between individuals depending on a listener’s previous experiences. We studied whether and how cultural family background may shape the processing of an audiobook in the human brain. During functional magnetic resonance imaging (fMRI), 48 healthy volunteers from two different cultural family backgrounds listened to an audiobook depicting the intercultural social life of young adults with the respective cultural backgrounds. Shared cultural family background increased inter-subject correlation of hemodynamic activity in the left-hemispheric Heschl’s gyrus, insula, superior temporal gyrus, lingual gyrus and middle temporal gyrus, in the right-hemispheric lateral occipital and posterior cingulate cortices, as well as in the bilateral middle temporal gyrus, middle occipital gyrus and precuneus. Thus, cultural family background is reflected in multiple areas of speech processing in the brain and may also modulate visual imagery. After neuroimaging, the participants listened to the narrative again and, after each passage, produced a list of words that had been on their minds when they heard the audiobook during neuroimaging. Cultural family background was reflected as semantic differences in these word lists, as quantified by a word2vec-generated semantic model. Our findings may reflect enhanced mutual understanding between persons who share similar cultural family backgrounds.
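Inter-subject correlation (ISC) of hemodynamic activity, the central measure in this study, can be sketched as the mean pairwise correlation of a regional time course across subjects. The demo below uses synthetic data, not the authors' pipeline; the array shapes and the idea of comparing a "shared drive" group against pure noise are illustrative assumptions.

```python
import numpy as np
from itertools import combinations

def pairwise_isc(group_ts):
    """Inter-subject correlation for one brain region: the mean Pearson
    correlation of the regional time course over all subject pairs.
    group_ts: array of shape (n_subjects, n_timepoints)."""
    z = (group_ts - group_ts.mean(1, keepdims=True)) / group_ts.std(1, keepdims=True)
    pairs = combinations(range(group_ts.shape[0]), 2)
    return float(np.mean([(z[i] * z[j]).mean() for i, j in pairs]))

# Synthetic demo: subjects sharing a stimulus-driven component show
# higher ISC than subjects whose signals are independent noise.
rng = np.random.default_rng(1)
stimulus = rng.standard_normal(300)
shared = stimulus + 0.5 * rng.standard_normal((8, 300))  # common drive + noise
independent = rng.standard_normal((8, 300))              # noise only
```

In the study, ISC was computed within groups that did or did not share a cultural family background; higher within-group ISC then indicates more similar stimulus-locked processing.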
Affiliation(s)
- Maria Hakonen
- Brain and Mind Laboratory, Department of Neuroscience and Biomedical Engineering, School of Science, Aalto University, 00076 Espoo, Finland
- Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA 02114, USA
- Faculty of Sport and Health Sciences, University of Jyväskylä, 40014 Jyväskylä, Finland
- Advanced Magnetic Imaging Centre, School of Science, Aalto University, 00076 Espoo, Finland
- Correspondence:
- Arsi Ikäheimonen
- Brain and Mind Laboratory, Department of Neuroscience and Biomedical Engineering, School of Science, Aalto University, 00076 Espoo, Finland
- Annika Hultèn
- Imaging Language, Department of Neuroscience and Biomedical Engineering, School of Science, Aalto University, 00076 Espoo, Finland
- Janne Kauttonen
- Digital Business, Haaga-Helia University of Applied Sciences, 00520 Helsinki, Finland
- Miika Koskinen
- Faculty of Medicine, University of Helsinki, 00014 Helsinki, Finland
- Fa-Hsuan Lin
- Sunnybrook Research Institute, Toronto, ON M4N 3M5, Canada
- Department of Medical Biophysics, University of Toronto, Toronto, ON M5G 1L7, Canada
- Anastasia Lowe
- Brain and Mind Laboratory, Department of Neuroscience and Biomedical Engineering, School of Science, Aalto University, 00076 Espoo, Finland
- Mikko Sams
- Brain and Mind Laboratory, Department of Neuroscience and Biomedical Engineering, School of Science, Aalto University, 00076 Espoo, Finland
- MAGICS Infrastructure, Aalto Studios, Aalto University, 02150 Espoo, Finland
- Iiro P. Jääskeläinen
- Brain and Mind Laboratory, Department of Neuroscience and Biomedical Engineering, School of Science, Aalto University, 00076 Espoo, Finland
- International Social Neuroscience Laboratory, Institute of Cognitive Neuroscience, National Research University Higher School of Economics, 101000 Moscow, Russia

42
Na Y, Jung J, Tench CR, Auer DP, Pyun SB. Language systems from lesion-symptom mapping in aphasia: A meta-analysis of voxel-based lesion mapping studies. Neuroimage Clin 2022; 35:103038. [PMID: 35569227 PMCID: PMC9112051 DOI: 10.1016/j.nicl.2022.103038]
Abstract
- Meta-analysis of 2,007 individuals with aphasia from 25 voxel-based lesion mapping studies.
- Distinctive patterns of lesions in aphasia are associated with different language functions.
- The patterns of lesions in aphasia support the dual pathway model of language processing.
Background: Aphasia is one of the most common causes of post-stroke disability. As the symptoms and impact of post-stroke aphasia are heterogeneous, it is important to understand how topographical lesion heterogeneity in patients with aphasia is associated with different domains of language impairment. Here, we aim to provide a comprehensive overview of the neuroanatomical basis of post-stroke aphasia through a coordinate-based meta-analysis of voxel-based lesion-symptom mapping studies.
Methods: We performed a meta-analysis of lesion-symptom mapping studies in post-stroke aphasia. We obtained coordinate-based structural neuroimaging data for 2,007 individuals with aphasia from 25 studies that met predefined inclusion criteria.
Results: Overall, our results revealed that distinctive patterns of lesions in aphasia are associated with different language functions and tasks. Damage to the insular-motor areas impaired speech with preserved comprehension, and a similar pattern was observed when the lesion covered the insular-motor areas and the inferior parietal lobule. Lesions in the frontal area severely impaired speaking with relatively good comprehension. Repetition-selective deficits arose only from lesions involving the posterior superior temporal gyrus. Damage in the anterior-to-posterior temporal cortex was associated with semantic deficits.
Conclusion: The association patterns of lesion topography and specific language deficits provide key insights into the underlying language pathways. Our meta-analysis results strongly support the dual pathway model of language processing, capturing the link between the different symptom complexes of aphasia and the different underlying locations of damage.
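Coordinate-based meta-analysis of the kind used here (and the ALE method mentioned in the musician meta-analysis above) treats each reported peak as a spatial probability distribution and combines studies by probabilistic union. A toy one-dimensional sketch follows; real ALE operates on 3D MNI coordinates with sample-size-dependent kernel widths, so the grid and `sigma` below are illustrative assumptions only.

```python
import numpy as np

def modeled_activation(foci, grid, sigma):
    """Per-study modeled activation map: each reported focus is blurred
    with a Gaussian; a voxel's value is the probability that at least
    one of the study's foci lies there (union over foci)."""
    p_none = np.ones(len(grid))
    for f in foci:
        g = np.exp(-((grid - f) ** 2) / (2.0 * sigma ** 2))
        p_none *= (1.0 - g)
    return 1.0 - p_none

def ale(studies_foci, grid, sigma=10.0):
    """ALE map: probabilistic union across the studies' modeled maps."""
    p_none = np.ones(len(grid))
    for foci in studies_foci:
        p_none *= (1.0 - modeled_activation(foci, grid, sigma))
    return 1.0 - p_none
```

Voxels where many studies report nearby foci accumulate high ALE values; a permutation test (not shown) then thresholds the map for spatial convergence above chance.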
Affiliation(s)
- Yoonhye Na
- Department of Biomedical Sciences, Korea University College of Medicine, Seoul, Republic of Korea; Brain Convergence Research Center, Korea University College of Medicine, Seoul, Republic of Korea
- JeYoung Jung
- School of Psychology, University of Nottingham, Nottingham, UK
- Christopher R Tench
- Division of Clinical Neurosciences, School of Medicine, University of Nottingham, Nottingham, UK; NIHR Nottingham Biomedical Research Centre, University of Nottingham, Nottingham, UK; Division of Clinical Neurosciences, Clinical Neurology, University of Nottingham, Queen's Medical Centre, Nottingham, UK
- Dorothee P Auer
- Sir Peter Mansfield Imaging Centre, University of Nottingham, Nottingham, UK; Division of Clinical Neurosciences, School of Medicine, University of Nottingham, Nottingham, UK; NIHR Nottingham Biomedical Research Centre, University of Nottingham, Nottingham, UK; Neuroradiology, Nottingham University Hospitals Trust, Nottingham, UK
- Sung-Bom Pyun
- Department of Biomedical Sciences, Korea University College of Medicine, Seoul, Republic of Korea; Brain Convergence Research Center, Korea University College of Medicine, Seoul, Republic of Korea; Department of Physical Medicine and Rehabilitation, Korea University Anam Hospital, Seoul, Republic of Korea

43
Enhancement of speech-in-noise comprehension through vibrotactile stimulation at the syllabic rate. Proc Natl Acad Sci U S A 2022; 119:e2117000119. [PMID: 35312362 PMCID: PMC9060510 DOI: 10.1073/pnas.2117000119]
Abstract
Syllables are important building blocks of speech. They occur at a rate between 4 and 8 Hz, corresponding to the theta frequency range of neural activity in the cerebral cortex. When listening to speech, theta activity becomes aligned to the syllabic rhythm, presumably aiding in parsing the speech signal into distinct syllables. However, this neural activity can be influenced not only by sound but also by somatosensory information. Here, we show that the presentation of vibrotactile signals at the syllabic rate can enhance the comprehension of speech in background noise. We further provide evidence that this multisensory enhancement of speech comprehension reflects the multisensory integration of auditory and tactile information in the auditory cortex.
Speech unfolds over distinct temporal scales, in particular those related to the rhythm of phonemes, syllables, and words. When a person listens to continuous speech, the syllabic rhythm is tracked by neural activity in the theta frequency range. This tracking plays a functional role in speech processing: influencing the theta activity through transcranial current stimulation, for instance, can impact speech perception. The theta-band activity in the auditory cortex can also be modulated through the somatosensory system, but the effect on speech processing has remained unclear. Here, we show that vibrotactile feedback presented at the rate of syllables can modulate and, in fact, enhance the comprehension of a speech signal in background noise. The enhancement occurs when vibrotactile pulses occur at the perceptual center of the syllables, whereas a temporal delay between the vibrotactile signals and the speech stream can lead to a lower level of speech comprehension. We further investigate the neural mechanisms underlying the audiotactile integration through electroencephalographic (EEG) recordings. We find that the audiotactile stimulation modulates the neural response to the speech rhythm, as well as the neural response to the vibrotactile pulses. The modulations of these neural activities reflect the behavioral effects on speech comprehension. Moreover, we demonstrate that speech comprehension can be predicted by particular aspects of the neural responses. Our results evidence a role of vibrotactile information in speech processing and may have applications in future auditory prostheses.
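The aligned and delayed stimulation conditions described here can be sketched as a pulse train locked to the syllables' perceptual centers. This is a minimal sketch; the sampling rate, pulse width, and `delay` parameter are illustrative assumptions, not the study's stimulus parameters.

```python
import numpy as np

def vibrotactile_train(syllable_centers, duration, fs=1000, pulse_ms=20, delay=0.0):
    """Binary pulse train aligned to syllable perceptual centers (seconds).
    delay (seconds) shifts the pulses relative to the speech stream;
    delay=0 models the aligned condition."""
    n = int(duration * fs)
    train = np.zeros(n)
    width = int(pulse_ms / 1000 * fs)
    for t in syllable_centers:
        start = int(round((t + delay) * fs))
        if 0 <= start < n:
            train[start:start + width] = 1.0
    return train
```

A delay of 0 models the aligned condition that enhanced comprehension, while a nonzero delay models the shifted condition that reduced it; syllable centers spaced 125-250 ms apart correspond to the 4-8 Hz syllabic rate.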
44
Norman-Haignere SV, Long LK, Devinsky O, Doyle W, Irobunda I, Merricks EM, Feldstein NA, McKhann GM, Schevon CA, Flinker A, Mesgarani N. Multiscale temporal integration organizes hierarchical computation in human auditory cortex. Nat Hum Behav 2022; 6:455-469. [PMID: 35145280 PMCID: PMC8957490 DOI: 10.1038/s41562-021-01261-y]
Abstract
To derive meaning from sound, the brain must integrate information across many timescales. What computations underlie multiscale integration in human auditory cortex? Evidence suggests that auditory cortex analyses sound using both generic acoustic representations (for example, spectrotemporal modulation tuning) and category-specific computations, but the timescales over which these putatively distinct computations integrate remain unclear. To answer this question, we developed a general method to estimate sensory integration windows-the time window when stimuli alter the neural response-and applied our method to intracranial recordings from neurosurgical patients. We show that human auditory cortex integrates hierarchically across diverse timescales spanning from ~50 to 400 ms. Moreover, we find that neural populations with short and long integration windows exhibit distinct functional properties: short-integration electrodes (less than ~200 ms) show prominent spectrotemporal modulation selectivity, while long-integration electrodes (greater than ~200 ms) show prominent category selectivity. These findings reveal how multiscale integration organizes auditory computation in the human brain.
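The core idea of the window estimation described above (a stimulus stops altering the response once it falls outside the integration window) can be illustrated with a toy neuron that averages its input over a fixed window. This is a deliberate simplification of the authors' estimator: the same segment is embedded in different random contexts, and the window is recovered as the time the responses take to become context-invariant after segment onset.

```python
import numpy as np

def boxcar_response(stim, window):
    """Toy neuron: its response is the running mean of the stimulus
    over the past `window` samples (its integration window)."""
    kernel = np.ones(window) / window
    return np.convolve(stim, kernel, mode="full")[:len(stim)]

def estimate_window(window_true, seg_len=200, n_reps=20, ctx_len=200, seed=0):
    """Embed the same segment in pairs of different random contexts and
    count how many samples after segment onset the two responses take
    to converge; that count recovers the integration window."""
    rng = np.random.default_rng(seed)
    seg = rng.standard_normal(seg_len)
    diffs = np.zeros(seg_len)
    for _ in range(n_reps):
        a = np.concatenate([rng.standard_normal(ctx_len), seg])
        b = np.concatenate([rng.standard_normal(ctx_len), seg])
        ra = boxcar_response(a, window_true)[-seg_len:]
        rb = boxcar_response(b, window_true)[-seg_len:]
        diffs += np.abs(ra - rb)
    # responses match exactly once the window contains only shared input
    converged = np.where(diffs / n_reps < 1e-9)[0]
    return int(converged[0]) + 1 if len(converged) else seg_len
```

Applied to real recordings, the same logic distinguishes short-integration electrodes (responses converge quickly after a context change) from long-integration ones.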
Affiliation(s)
- Sam V Norman-Haignere
- Zuckerman Mind, Brain, Behavior Institute, Columbia University; HHMI Postdoctoral Fellow of the Life Sciences Research Foundation
- Laura K. Long
- Zuckerman Mind, Brain, Behavior Institute, Columbia University; Doctoral Program in Neurobiology and Behavior, Columbia University
- Orrin Devinsky
- Department of Neurology, NYU Langone Medical Center; Comprehensive Epilepsy Center, NYU Langone Medical Center
- Werner Doyle
- Comprehensive Epilepsy Center, NYU Langone Medical Center; Department of Neurosurgery, NYU Langone Medical Center
- Ifeoma Irobunda
- Department of Neurology, Columbia University Irving Medical Center
- Neil A. Feldstein
- Department of Neurological Surgery, Columbia University Irving Medical Center
- Guy M. McKhann
- Department of Neurological Surgery, Columbia University Irving Medical Center
- Adeen Flinker
- Department of Neurology, NYU Langone Medical Center; Comprehensive Epilepsy Center, NYU Langone Medical Center; Department of Biomedical Engineering, NYU Tandon School of Engineering
- Nima Mesgarani
- Zuckerman Mind, Brain, Behavior Institute, Columbia University; Doctoral Program in Neurobiology and Behavior, Columbia University; Department of Electrical Engineering, Columbia University

45
Kaas JH, Qi HX, Stepniewska I. Escaping the nocturnal bottleneck, and the evolution of the dorsal and ventral streams of visual processing in primates. Philos Trans R Soc Lond B Biol Sci 2022; 377:20210293. [PMID: 34957843 PMCID: PMC8710890 DOI: 10.1098/rstb.2021.0293] [Citation(s) in RCA: 11] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/20/2021] [Accepted: 09/21/2021] [Indexed: 12/12/2022] Open
Abstract
Early mammals were small and nocturnal. Their visual systems had regressed and they had poor vision. After the extinction of the dinosaurs 66 mya, some but not all mammals escaped the 'nocturnal bottleneck' by recovering high-acuity vision. By contrast, early primates escaped the bottleneck within the age of dinosaurs by having large forward-facing eyes and acute vision while remaining nocturnal. We propose that these primates differed from other mammals by changing the balance between two sources of visual information to cortex. Thus, cortical processing became less dependent on a relay of information from the superior colliculus (SC) to temporal cortex and more dependent on information distributed from primary visual cortex (V1). In addition, the two major classes of visual information from the retina became highly segregated into magnocellular (M cell) projections from V1 to the primate-specific middle temporal visual area (MT), and parvocellular (P cell)-dominated projections to the dorsolateral visual area (DL or V4). The greatly expanded P cell inputs from V1 informed the ventral stream of cortical processing involving temporal and frontal cortex. The M cell pathways from V1 and the SC informed the dorsal stream of cortical processing involving MT, surrounding temporal cortex, and parietal-frontal sensorimotor domains. This article is part of the theme issue 'Systems neuroscience through the lens of evolutionary theory'.
Collapse
Affiliation(s)
- Jon H. Kaas
- Department of Psychology, Vanderbilt University, 301 Wilson Hall, 111 21st Ave. S., Nashville, TN 37240, USA
- Hui-Xin Qi
- Department of Psychology, Vanderbilt University, 301 Wilson Hall, 111 21st Ave. S., Nashville, TN 37240, USA
- Iwona Stepniewska
- Department of Psychology, Vanderbilt University, 301 Wilson Hall, 111 21st Ave. S., Nashville, TN 37240, USA
Collapse
|
46
|
Giampiccolo D, Duffau H. Controversy over the temporal cortical terminations of the left arcuate fasciculus: a reappraisal. Brain 2022; 145:1242-1256. [PMID: 35142842 DOI: 10.1093/brain/awac057] [Citation(s) in RCA: 23] [Impact Index Per Article: 11.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/15/2021] [Revised: 12/19/2021] [Accepted: 01/20/2022] [Indexed: 11/12/2022] Open
Abstract
The arcuate fasciculus has been considered a major dorsal fronto-temporal white matter pathway linking frontal language production regions with auditory perception in the superior temporal gyrus, the so-called Wernicke's area. In line with this tradition, both historical and contemporary models of language function have assigned primacy to superior temporal projections of the arcuate fasciculus. However, classical anatomical descriptions and emerging behavioural data are at odds with this assumption. On one hand, fronto-temporal projections to Wernicke's area may not be unique to the arcuate fasciculus. On the other hand, dorsal stream language deficits have also been reported for damage to the middle, inferior and basal temporal gyri, which may be linked to arcuate disconnection. These findings point to a reappraisal of arcuate projections in the temporal lobe. Here, we review anatomical and functional evidence regarding the temporal cortical terminations of the left arcuate fasciculus, incorporating dissection and tractography findings with stimulation data from cortico-cortical evoked potentials and direct electrical stimulation mapping in awake patients. First, we discuss the fibres of the arcuate fasciculus projecting to the superior temporal gyrus and the functional rostro-caudal gradient in this region, where both phonological encoding and auditory-motor transformation may be performed. Caudal regions within the temporoparietal junction may be involved in articulation and associated with temporoparietal projections of the third branch of the superior longitudinal fasciculus, while more rostral regions may support encoding of acoustic phonetic features via arcuate fibres. We then examine clinical data showing that multimodal phonological encoding is facilitated by projections of the arcuate fasciculus to superior, but also middle, inferior and basal temporal regions. Hence, we discuss how projections of the arcuate fasciculus may enable acoustic (middle-posterior superior and middle temporal gyri), visual (posterior inferior temporal/fusiform gyri comprising the visual word form area) and lexical (anterior-middle inferior temporal/fusiform gyri in the basal temporal language area) information in the temporal lobe to be processed, encoded and translated into a dorsal phonological route to the frontal lobe. Finally, we point out the surgical implications of this model in terms of the prediction and avoidance of neurological deficit.
Collapse
Affiliation(s)
- Davide Giampiccolo
- Section of Neurosurgery, Department of Neurosciences, Biomedicine and Movement Sciences, University Hospital, Verona, Italy; Institute of Neuroscience, Cleveland Clinic London, Grosvenor Place, London, UK; Department of Clinical and Experimental Epilepsy, UCL Queen Square Institute of Neurology, University College London, London, UK; Victor Horsley Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, Queen Square, London, UK
- Hugues Duffau
- Department of Neurosurgery, Gui de Chauliac Hospital, Montpellier University Medical Center, Montpellier, France; Team "Neuroplasticity, Stem Cells and Low-grade Gliomas," INSERM U1191, Institute of Genomics of Montpellier, University of Montpellier, Montpellier, France
Collapse
|
47
|
Taylor C, Hall S, Manivannan S, Mundil N, Border S. The neuroanatomical consequences and pathological implications of bilingualism. J Anat 2022; 240:410-427. [PMID: 34486112 PMCID: PMC8742975 DOI: 10.1111/joa.13542] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/09/2021] [Revised: 07/26/2021] [Accepted: 08/23/2021] [Indexed: 01/17/2023] Open
Abstract
In recent years, there has been a rise in the number of people who are able to speak two or more languages, paralleled by an increase in research related to bilingualism. Despite this, many of the neuroanatomical consequences and pathological implications of bilingualism are still subject to discussion. This review aims to evaluate the neuroanatomical structures related to language and to the acquisition of a second language, and to explore how learning a second language can alter one's susceptibility to, and the progression of, certain cerebral pathologies. A literature search was conducted on the Medline, Embase, and Web of Science databases. A total of 137 articles regarding the neuroanatomical or pathological implications of bilingualism were included for review. Following analysis of the included papers, this review finds that bilingualism induces significant gray and white matter cerebral changes, particularly in the frontal lobes, anterior cingulate cortex, left inferior parietal lobule and subcortical areas, and that the native and acquired languages largely recruit the same neuroanatomical structures, with, however, subtle functional and anatomical differences dependent on proficiency and age of language acquisition. There is adequate evidence to suggest that bilingualism delays the onset of symptoms and the diagnosis of dementia, and that it is protective against both pathological and age-related cognitive decline. While many of the neuroanatomical changes are known, more remains to be elucidated, and the relationship between bilingualism and other neurological pathologies remains unclear.
Collapse
Affiliation(s)
- Charles Taylor
- Centre for Learning Anatomical Sciences, Faculty of Medicine, University of Southampton, Southampton, UK
- Samuel Hall
- Centre for Learning Anatomical Sciences, Faculty of Medicine, University of Southampton, Southampton, UK
- Department of Neurosurgery, University Hospitals Southampton NHS Foundation Trust, Southampton, UK
- Susruta Manivannan
- Department of Neurosurgery, University Hospitals Southampton NHS Foundation Trust, Southampton, UK
- Nilesh Mundil
- Department of Neurosurgery, University Hospitals Southampton NHS Foundation Trust, Southampton, UK
- Scott Border
- Centre for Learning Anatomical Sciences, Faculty of Medicine, University of Southampton, Southampton, UK
Collapse
|
48
|
Teoh ES, Ahmed F, Lalor EC. Attention Differentially Affects Acoustic and Phonetic Feature Encoding in a Multispeaker Environment. J Neurosci 2022; 42:682-691. [PMID: 34893546 PMCID: PMC8805628 DOI: 10.1523/jneurosci.1455-20.2021] [Citation(s) in RCA: 12] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/08/2020] [Revised: 09/28/2021] [Accepted: 09/29/2021] [Indexed: 11/21/2022] Open
Abstract
Humans have the remarkable ability to selectively focus on a single talker in the midst of other competing talkers. The neural mechanisms that underlie this phenomenon remain incompletely understood. In particular, there has been longstanding debate over whether attention operates at an early or late stage in the speech processing hierarchy. One way to better understand this is to examine how attention might differentially affect neurophysiological indices of hierarchical acoustic and linguistic speech representations. In this study, we do this by using encoding models to identify neural correlates of speech processing at various levels of representation. Specifically, we recorded EEG from fourteen human subjects (nine female and five male) during a "cocktail party" attention experiment. Model comparisons based on these data revealed phonetic feature processing for attended, but not unattended speech. Furthermore, we show that attention specifically enhances isolated indices of phonetic feature processing, but that such attention effects are not apparent for isolated measures of acoustic processing. These results provide new insights into the effects of attention on different prelexical representations of speech, insights that complement recent anatomic accounts of the hierarchical encoding of attended speech. Furthermore, our findings support the notion that, for attended speech, phonetic features are processed as a distinct stage, separate from the processing of the speech acoustics. SIGNIFICANCE STATEMENT: Humans are very good at paying attention to one speaker in an environment with multiple speakers. However, the details of how attended and unattended speech are processed differently by the brain are not completely clear. Here, we explore how attention affects the processing of the acoustic sounds of speech as well as the mapping of those sounds onto categorical phonetic features. We find evidence of categorical phonetic feature processing for attended, but not unattended speech. Furthermore, we find evidence that categorical phonetic feature processing is enhanced by attention, but acoustic processing is not. These findings add an important new layer to our understanding of how the human brain solves the cocktail party problem.
Collapse
Affiliation(s)
- Emily S Teoh
- School of Engineering, Trinity Centre for Biomedical Engineering, and Trinity College Institute of Neuroscience, Trinity College, University of Dublin, Dublin 2, Ireland
- Farhin Ahmed
- Department of Neuroscience, Department of Biomedical Engineering, and Del Monte Neuroscience Institute, University of Rochester, Rochester, New York 14627
- Edmund C Lalor
- School of Engineering, Trinity Centre for Biomedical Engineering, and Trinity College Institute of Neuroscience, Trinity College, University of Dublin, Dublin 2, Ireland
- Department of Neuroscience, Department of Biomedical Engineering, and Del Monte Neuroscience Institute, University of Rochester, Rochester, New York 14627
Collapse
|
49
|
McCall JD, Vivian Dickens J, Mandal AS, DeMarco AT, Fama ME, Lacey EH, Kelkar A, Medaglia JD, Turkeltaub PE. Structural disconnection of the posterior medial frontal cortex reduces speech error monitoring. Neuroimage Clin 2022; 33:102934. [PMID: 34995870 PMCID: PMC8739872 DOI: 10.1016/j.nicl.2021.102934] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/15/2021] [Revised: 11/25/2021] [Accepted: 12/31/2021] [Indexed: 11/29/2022]
Abstract
Optimal performance in any task relies on the ability to detect and correct errors. The anterior cingulate cortex and the broader posterior medial frontal cortex (pMFC) are active during error processing. However, it is unclear whether damage to the pMFC impairs error monitoring. We hypothesized that successful error monitoring critically relies on connections between the pMFC and broader cortical networks involved in executive functions and the task being monitored. We tested this hypothesis in the context of speech error monitoring in people with post-stroke aphasia. Diffusion-weighted images were collected in 51 adults with chronic left-hemisphere stroke and 37 age-matched control participants. Whole-brain connectomes were derived using constrained spherical deconvolution and anatomically-constrained probabilistic tractography. Support vector regressions identified white matter connections in which loss of integrity in stroke survivors was related to reduced error detection during confrontation naming. Lesioned connections to the bilateral pMFC were related to reduced error monitoring, including many connections to regions associated with speech production and executive function. We conclude that error monitoring in speech production is supported by structural connectivity between the pMFC and regions involved in speech production, comprehension, and executive function. Interactions between the pMFC and other task-relevant processors may similarly be critical for error monitoring in other task contexts.
Collapse
Affiliation(s)
- Joshua D McCall
- Center for Brain Plasticity and Recovery and Neurology Department, Georgetown University Medical Center, Washington, DC 20007, USA
- J Vivian Dickens
- Center for Brain Plasticity and Recovery and Neurology Department, Georgetown University Medical Center, Washington, DC 20007, USA
- Ayan S Mandal
- Center for Brain Plasticity and Recovery and Neurology Department, Georgetown University Medical Center, Washington, DC 20007, USA; Psychiatry Department, University of Cambridge, Cambridge CB2 1TN, UK
- Andrew T DeMarco
- Center for Brain Plasticity and Recovery and Neurology Department, Georgetown University Medical Center, Washington, DC 20007, USA; Rehabilitation Medicine Department, Georgetown University Medical Center, Washington, DC 20007, USA
- Mackenzie E Fama
- Center for Brain Plasticity and Recovery and Neurology Department, Georgetown University Medical Center, Washington, DC 20007, USA; Department of Speech, Language, and Hearing Sciences, The George Washington University, DC 20052, USA
- Elizabeth H Lacey
- Center for Brain Plasticity and Recovery and Neurology Department, Georgetown University Medical Center, Washington, DC 20007, USA; Research Division, MedStar National Rehabilitation Hospital, Washington, DC 20010, USA
- Apoorva Kelkar
- Psychology Department, Drexel University, Philadelphia, PA 19104, USA
- John D Medaglia
- Psychology Department, Drexel University, Philadelphia, PA 19104, USA; Neurology Department, University of Pennsylvania, Philadelphia, PA 19104, USA
- Peter E Turkeltaub
- Center for Brain Plasticity and Recovery and Neurology Department, Georgetown University Medical Center, Washington, DC 20007, USA; Research Division, MedStar National Rehabilitation Hospital, Washington, DC 20010, USA; Rehabilitation Medicine Department, Georgetown University Medical Center, Washington, DC 20007, USA.
Collapse
|
50
|
Catani M. The connectional anatomy of the temporal lobe. HANDBOOK OF CLINICAL NEUROLOGY 2022; 187:3-16. [PMID: 35964979 DOI: 10.1016/b978-0-12-823493-8.00001-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
The idea of a temporal lobe separated from the rest of the hemisphere by reason of its unique structural and functional properties is a clinically useful artifact. While the temporal lobe can be safely defined as the portion of the cerebrum lodged in the middle cranial fossa, the pattern of its connections is a more revealing description of its functional subdivisions and specific contribution to higher cognitive functions. This chapter provides a historical overview of the anatomy of the temporal lobe and an updated framework of temporal lobe connections based on tractography studies of human and nonhuman primates and patients with brain disorders. Compared to monkeys, the human temporal lobe shows a relatively increased connectivity with perisylvian frontal and parietal regions and a set of unique intrinsic connections, which may have supported the evolution of working memory, semantic representation, and language in our species. Conversely, the decreased volume of the anterior (limbic) interhemispheric temporal connections in humans is related to a reduced reliance on olfaction and a partial transference of functions from the anterior commissure to the posterior corpus callosum. Overall, the novel data from tractography suggest a revision of current dual stream models for visual and auditory processing.
Collapse
Affiliation(s)
- Marco Catani
- Natbrainlab, Department of Forensic and Neurodevelopmental Sciences, Institute of Psychiatry, Psychology and Neuroscience, London, United Kingdom; Department of Neuroimaging Sciences, Institute of Psychiatry, Psychology and Neuroscience, London, United Kingdom.
Collapse
|