1. Alekseeva M, Myachykov A, Bermudez-Margaretto B, Shtyrov Y. Morphosyntactic prediction in automatic neural processing of spoken language: EEG evidence. Brain Res 2024; 1836:148949. PMID: 38641266; DOI: 10.1016/j.brainres.2024.148949
Abstract
Automatic parsing of syntactic information by the human brain is a well-established phenomenon, but its mechanisms remain poorly understood. Its best-known neurophysiological reflection is the so-called early left-anterior negativity (ELAN) component of event-related potentials (ERPs), with two alternative hypotheses for its origin: (1) error detection, or (2) morphosyntactic prediction/priming. To test these alternatives, we conducted two experiments using a non-attend passive design with visual distraction and recorded ERPs to spoken pronoun-verb phrases with/without agreement violations and to the same critical verbs presented in isolation without preceding pronouns. The results revealed an ELAN at ∼130-220 ms for pronoun-verb gender agreement violations, confirming a high degree of automaticity in early morphosyntactic parsing. Critically, the strongest ELAN was elicited by verbs outside phrasal context, which suggests that the typical ELAN pattern is underpinned by a reduction of ERP amplitudes for felicitous combinations, reflecting syntactic priming/predictability between related words/morphemes (potentially mediated by associative links formed during previous linguistic experience) rather than specialised error-detection processes.
Affiliation(s)
- Maria Alekseeva
- Centre for Cognition and Decision Making, Institute for Cognitive Neuroscience, Higher School of Economics, Moscow, Russian Federation.
- Beatriz Bermudez-Margaretto
- Instituto de Integración en la Comunidad (INICO), Facultad de Psicología, Universidad de Salamanca, Salamanca, Spain
- Yury Shtyrov
- Center of Functionally Integrative Neuroscience, Aarhus University, Aarhus, Denmark
2. Lo CW, Meyer L. Chunk boundaries disrupt dependency processing in an AG: Reconciling incremental processing and discrete sampling. PLoS One 2024; 19:e0305333. PMID: 38889141; PMCID: PMC11185458; DOI: 10.1371/journal.pone.0305333
Abstract
Language is rooted in our ability to compose: We link words together, fusing their meanings. Links are not limited to neighboring words but often span intervening words. The ability to process these non-adjacent dependencies (NADs) conflicts with the brain's sampling of speech: We consume speech in chunks that are limited in time, containing only a limited number of words. It is unknown how we link words together that belong to separate chunks. Here, we report that we cannot-at least not so well. In our electroencephalography (EEG) study, 37 human listeners learned chunks and dependencies from an artificial grammar (AG) composed of syllables. Multi-syllable chunks to be learned were equal-sized, allowing us to employ a frequency-tagging approach. On top of chunks, syllable streams contained NADs that were either confined to a single chunk or crossed a chunk boundary. Frequency analyses of the EEG revealed a spectral peak at the chunk rate, showing that participants learned the chunks. NADs that cross boundaries were associated with smaller electrophysiological responses than within-chunk NADs. This shows that NADs are processed readily when they are confined to the same chunk, but not as well when crossing a chunk boundary. Our findings help to reconcile the classical notion that language is processed incrementally with recent evidence for discrete perceptual sampling of speech. This has implications for language acquisition and processing as well as for the general view of syntax in human language.
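The frequency-tagging logic used in this study can be illustrated with a minimal numpy sketch on simulated data. All values below (sampling rate, syllable rate, chunk size, amplitudes) are illustrative assumptions, not the study's actual parameters: learned chunks show up as a spectral peak at the chunk rate, on top of the peak at the syllable rate.

```python
import numpy as np

fs = 250.0                       # sampling rate in Hz (illustrative)
syll_rate = 4.0                  # syllables per second (illustrative)
chunk_len = 4                    # syllables per chunk -> 1 Hz chunk rate
chunk_rate = syll_rate / chunk_len

t = np.arange(0, 60, 1 / fs)     # 60 s of simulated EEG
rng = np.random.default_rng(0)
# Simulated response: syllable-rate activity, plus a weaker chunk-rate
# component that would emerge only once chunks have been learned, plus noise.
eeg = (np.sin(2 * np.pi * syll_rate * t)
       + 0.5 * np.sin(2 * np.pi * chunk_rate * t)
       + 0.5 * rng.standard_normal(t.size))

# Frequency tagging: look for a spectral peak at the chunk rate.
spectrum = np.abs(np.fft.rfft(eeg)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)

def amp_at(f):
    # Amplitude in the frequency bin closest to f.
    return spectrum[np.argmin(np.abs(freqs - f))]

print(amp_at(chunk_rate), amp_at(chunk_rate + 0.25))
```

In this toy example the chunk-rate bin stands far above neighboring bins, which is the signature the authors use as evidence that participants segmented the stream into chunks.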
Affiliation(s)
- Chia-Wen Lo
- Research Group Language Cycles, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Lars Meyer
- Research Group Language Cycles, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- University Clinic Münster, Münster, Germany
3. Dufau S, Yeaton J, Badier JM, Chen S, Holcomb PJ, Grainger J. Sentence superiority in the reading brain. Neuropsychologia 2024; 198:108885. PMID: 38604495; DOI: 10.1016/j.neuropsychologia.2024.108885
Abstract
When a sequence of written words is briefly presented and participants are asked to identify just one word at a post-cued location, then word identification accuracy is higher when the word is presented in a grammatically correct sequence compared with an ungrammatical sequence. This sentence superiority effect has been reported in several behavioral studies and two EEG investigations. Taken together, the results of these studies support the hypothesis that the sentence superiority effect is primarily driven by rapid access to a sentence-level representation via partial word identification processes that operate in parallel over several words. Here we used MEG to examine the neural structures involved in this early stage of written sentence processing, and to further specify the timing of the different processes involved. Source activities over time showed grammatical vs. ungrammatical differences first in the left inferior frontal gyrus (IFG: 321-406 ms), then the left anterior temporal lobe (ATL: 466-531 ms), and finally in both left IFG (549-602 ms) and left posterior superior temporal gyrus (pSTG: 553-622 ms). We interpret the early IFG activity as reflecting the rapid bottom-up activation of sentence-level representations, including syntax, enabled by partly parallel word processing. Subsequent activity in ATL and pSTG is thought to reflect the constraints imposed by such sentence-level representations on on-going word-based semantic activation (ATL), and the subsequent development of a more detailed sentence-level representation (pSTG). These results provide further support for a cascaded interactive-activation account of sentence reading.
Affiliation(s)
- Stéphane Dufau
- Laboratoire de Psychologie Cognitive, Centre National de la Recherche Scientifique, Aix-Marseille University, Marseille, France; Institute for Language, Communication, and the Brain, Aix-Marseille University, Aix-en-Provence, France
- Jeremy Yeaton
- Laboratoire de Psychologie Cognitive, Centre National de la Recherche Scientifique, Aix-Marseille University, Marseille, France; Department of Language Science, University of California, Irvine, CA, USA
- Jean-Michel Badier
- Institute for Language, Communication, and the Brain, Aix-Marseille University, Aix-en-Provence, France; Institut de Neurosciences des Systèmes (INS), INSERM, Aix-Marseille University, Marseille, France
- Sophie Chen
- Institute for Language, Communication, and the Brain, Aix-Marseille University, Aix-en-Provence, France; Institut de Neurosciences des Systèmes (INS), INSERM, Aix-Marseille University, Marseille, France
- Phillip J Holcomb
- Department of Psychology, San Diego State University, San Diego, CA, USA
- Jonathan Grainger
- Laboratoire de Psychologie Cognitive, Centre National de la Recherche Scientifique, Aix-Marseille University, Marseille, France; Institute for Language, Communication, and the Brain, Aix-Marseille University, Aix-en-Provence, France

4. Gosselke Berthelsen S, Horne M, Shtyrov Y, Roll M. Native language experience shapes pre-attentive foreign tone processing and guides rapid memory trace build-up: An ERP study. Psychophysiology 2022; 59:e14042. PMID: 35294788; PMCID: PMC9539634; DOI: 10.1111/psyp.14042
Abstract
Language experience, particularly from our native language (L1), shapes our perception of other languages around us. The present study examined how L1 experience moulds the initial processing of foreign (L2) tone during acquisition. In particular, we investigated whether learners were able to rapidly forge new neural memory traces for novel tonal words, which was tracked by recording learners’ ERP responses during two word acquisition sessions. We manipulated the degree of L1–L2 familiarity by comparing learners with a nontonal L1 (German) and a tonal L1 (Swedish) and by using tones that were similar (fall) or dissimilar (high, low, rise) to those occurring in Swedish. Our results indicate that a rapid, pre‐attentive memory trace build‐up for tone manifests in an early ERP component at ~50 ms but only at particularly high levels of L1–L2 similarity. Specifically, early processing was facilitated for an L2 tone that had a familiar pitch shape (fall) and word‐level function (inflection). This underlines the importance of these L1 properties for the early processing of L2 tone. In comparison, a later anterior negativity related to the processing of the tones’ grammatical content was unaffected by native language experience but was instead influenced by lexicality, pitch prominence, entrenchment, and successful learning. Behaviorally, learning effects emerged for all learners and tone types, regardless of L1–L2 familiarity or pitch prominence. Together, the findings suggest that while L1‐based facilitation effects occur, they mainly affect early processing stages and do not necessarily result in more successful L2 acquisition at the behavioral level. Our findings add important evidence that contributes to answering the open question of how similarity between native and target language influences target language processing and acquisition. We found facilitative effects of similarity only at pre‐attentive levels and only when the degree of similarity was high. Late processing and successful acquisition, on the other hand, were unaffected by the target words’ similarity to native language properties.
Affiliation(s)
- Sabine Gosselke Berthelsen
- Department of Linguistics and Phonetics, Lund University, Lund, Sweden; Department of Nordic Studies and Linguistics, University of Copenhagen, Copenhagen, Denmark
- Merle Horne
- Department of Linguistics and Phonetics, Lund University, Lund, Sweden
- Yury Shtyrov
- Center of Functionally Integrative Neuroscience, Aarhus University, Aarhus, Denmark; Institute for Cognitive Neuroscience, HSE University, Moscow, Russia
- Mikael Roll
- Department of Linguistics and Phonetics, Lund University, Lund, Sweden

5. So W, Smith SB. Comparison of two cortical measures of binaural hearing acuity. Int J Audiol 2021; 60:875-884. PMID: 33345686; PMCID: PMC8244817; DOI: 10.1080/14992027.2020.1860260
Abstract
OBJECTIVE: Multiple studies have demonstrated binaural hearing deficits in aging listeners and in those with hearing loss. Consequently, there is great interest in developing efficient clinical tests of binaural hearing acuity to improve diagnostic assessments and to assist clinicians when fitting binaural hearing aids and/or cochlear implants. DESIGN: Two cortical measures of interaural phase difference sensitivity, the acoustic change complex (ACC) and the interaural phase modulation following response (IPM-FR), were compared on three metrics, scalp topography, time-to-detect, and input-output characteristics, using five stimulus interaural phase differences (IPDs: 0°, ±22.5°, ±45°, ±67.5°, and ±90°). STUDY SAMPLE: Ten young, normal-hearing listeners. RESULTS: Scalp topography qualitatively differed between the ACC and IPM-FR. The IPM-FR demonstrated better time-to-detect performance for smaller (±22.5° and ±45°) but not larger (±67.5° and ±90°) IPDs. Input-output characteristics of the two responses were similar. CONCLUSIONS: The IPM-FR may be a faster and more efficient tool for assessing neural sensitivity to subtle IPD changes. However, the ACC may be useful for research or clinical questions concerned with the topographic representation of binaural cues.
Affiliation(s)
- Won So
- Department of Communication Sciences and Disorders, The University of Texas at Austin, Austin, TX, USA
- Spencer B Smith
- Department of Communication Sciences and Disorders, The University of Texas at Austin, Austin, TX, USA

6. Duecker K, Gutteling TP, Herrmann CS, Jensen O. No Evidence for Entrainment: Endogenous Gamma Oscillations and Rhythmic Flicker Responses Coexist in Visual Cortex. J Neurosci 2021; 41:6684-6698. PMID: 34230106; PMCID: PMC8336697; DOI: 10.1523/jneurosci.3134-20.2021
Abstract
Over the past decades, numerous studies have linked cortical gamma oscillations (∼30-100 Hz) to neurocomputational mechanisms. Their functional relevance, however, is still passionately debated. Here, we asked whether endogenous gamma oscillations in the human brain can be entrained by a rhythmic photic drive >50 Hz. Such a noninvasive modulation of endogenous brain rhythms would allow conclusions about their causal involvement in neurocognition. To this end, we systematically investigated oscillatory responses to a rapid sinusoidal flicker in the absence and presence of endogenous gamma oscillations using magnetoencephalography (MEG) in combination with a high-frequency projector. The photic drive produced a robust response over visual cortex to stimulation frequencies of up to 80 Hz. Strong, endogenous gamma oscillations were induced using moving grating stimuli as repeatedly done in previous research. When superimposing the flicker and the gratings, there was no evidence for phase or frequency entrainment of the endogenous gamma oscillations by the photic drive. Unexpectedly, we did not observe an amplification of the flicker response around participants' individual gamma frequencies (IGFs); rather, the magnitude of the response decreased monotonically with increasing frequency. Source reconstruction suggests that the flicker response and the gamma oscillations were produced by separate, coexistent generators in visual cortex. The presented findings challenge the notion that cortical gamma oscillations can be entrained by rhythmic visual stimulation. Instead, the mechanism generating endogenous gamma oscillations seems to be resilient to external perturbation.

SIGNIFICANCE STATEMENT: We aimed to investigate to what extent ongoing, high-frequency oscillations in the gamma-band (30-100 Hz) in the human brain can be entrained by a visual flicker. Gamma oscillations have long been suggested to coordinate neuronal firing and enable interregional communication. Our results demonstrate that rhythmic visual stimulation cannot hijack the dynamics of ongoing gamma oscillations; rather, the flicker response and the endogenous gamma oscillations coexist in different visual areas. Therefore, while a visual flicker evokes a strong neuronal response even at high frequencies in the gamma-band, it does not entrain endogenous gamma oscillations in visual cortex. This has important implications for interpreting studies investigating the causal and neuroprotective effects of rhythmic sensory stimulation in the gamma-band.
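The phase-entrainment question tested here can be illustrated with a minimal numpy sketch on simulated signals (not the authors' MEG pipeline; frequencies, durations, and noise levels are illustrative assumptions). A standard metric is the phase-locking value (PLV) between the driving flicker and the measured oscillation: near 1 if the drive captures the oscillation's phase, near 0 if the endogenous rhythm keeps its own independent phase.

```python
import numpy as np

fs = 1000.0                            # sampling rate in Hz (illustrative)
t = np.arange(0, 10, 1 / fs)           # 10 s of simulated signal
rng = np.random.default_rng(1)

# Hypothetical signals: a 60 Hz photic drive, a response phase-locked to
# it, and an endogenous ~58 Hz gamma oscillation whose phase drifts
# independently of the drive (the "no entrainment" case reported here).
flicker = np.sin(2 * np.pi * 60 * t)
entrained = np.sin(2 * np.pi * 60 * t + 0.3)
endogenous = np.sin(2 * np.pi * 58 * t
                    + np.cumsum(0.05 * rng.standard_normal(t.size)))

def analytic_phase(x):
    # FFT-based Hilbert transform (numpy-only): instantaneous phase.
    n = x.size                          # n is even here
    h = np.zeros(n)
    h[0] = 1.0
    h[1:n // 2] = 2.0
    h[n // 2] = 1.0
    return np.angle(np.fft.ifft(np.fft.fft(x) * h))

def plv(a, b):
    # Phase-locking value: |mean phasor of the phase difference|.
    dphi = analytic_phase(a) - analytic_phase(b)
    return np.abs(np.mean(np.exp(1j * dphi)))

print(plv(flicker, entrained), plv(flicker, endogenous))
```

The absence of entrainment in the study corresponds to the second case: the endogenous oscillation's PLV with the drive stays near zero even while a strong flicker response coexists at the stimulation frequency.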
Affiliation(s)
- Katharina Duecker
- Centre for Human Brain Health, School of Psychology, University of Birmingham, Birmingham B15 2SA, United Kingdom
- Tjerk P Gutteling
- Centre for Human Brain Health, School of Psychology, University of Birmingham, Birmingham B15 2SA, United Kingdom
- Christoph S Herrmann
- Department of Psychology, Faculty VI-Medicine and Health Sciences, Carl-von-Ossietzky University of Oldenburg, Oldenburg 26129, Germany
- Ole Jensen
- Centre for Human Brain Health, School of Psychology, University of Birmingham, Birmingham B15 2SA, United Kingdom

7. Herrmann B, Butler BE. Hearing loss and brain plasticity: the hyperactivity phenomenon. Brain Struct Funct 2021; 226:2019-2039. PMID: 34100151; DOI: 10.1007/s00429-021-02313-9
Abstract
Many aging adults experience some form of hearing problem that may arise from damage to the auditory periphery. However, it has been increasingly acknowledged that hearing loss is not only a dysfunction of the auditory periphery but also results from changes within the entire auditory system, from periphery to cortex. Damage to the auditory periphery is associated with an increase in neural activity at various stages throughout the auditory pathway. Here, we review neurophysiological evidence of hyperactivity and of the auditory perceptual difficulties that may result from it, and outline open conceptual and methodological questions related to the study of hyperactivity. We suggest that hyperactivity alters all aspects of hearing, including spectral, temporal, and spatial hearing, and in turn impairs speech comprehension when background sound is present. By focusing on the perceptual consequences of hyperactivity and the potential challenges of investigating it in humans, we hope to bring animal and human electrophysiologists closer together to better understand hearing problems in older adulthood.
Affiliation(s)
- Björn Herrmann
- Rotman Research Institute, Baycrest, Toronto, ON, M6A 2E1, Canada; Department of Psychology, University of Toronto, Toronto, ON, Canada
- Blake E Butler
- Department of Psychology & The Brain and Mind Institute, University of Western Ontario, London, ON, Canada; National Centre for Audiology, University of Western Ontario, London, ON, Canada

8. Herrmann B, Araz K, Johnsrude IS. Sustained neural activity correlates with rapid perceptual learning of auditory patterns. Neuroimage 2021; 238:118238. PMID: 34098064; DOI: 10.1016/j.neuroimage.2021.118238
Abstract
Repeating structures forming regular patterns are common in sounds. Learning such patterns may enable accurate perceptual organization. In five experiments, we investigated the behavioral and neural signatures of rapid perceptual learning of regular sound patterns. We show that recurring (compared to novel) patterns are detected more quickly and increase sensitivity to pattern deviations and to the temporal order of pattern onset relative to a visual stimulus. Sustained neural activity reflected perceptual learning in two ways. Firstly, sustained activity increased earlier for recurring than novel patterns when participants attended to sounds, but not when they ignored them; this earlier increase mirrored the rapid perceptual learning we observed behaviorally. Secondly, the magnitude of sustained activity was generally lower for recurring than novel patterns, but only for trials later in the experiment, and independent of whether participants attended to or ignored sounds. The late manifestation of sustained activity reduction suggests that it is not directly related to rapid perceptual learning, but to a mechanism that does not require attention to sound. In sum, we demonstrate that the latency of sustained activity reflects rapid perceptual learning of auditory patterns, while the magnitude may reflect a result of learning, such as better prediction of learned auditory patterns.
Affiliation(s)
- Björn Herrmann
- Rotman Research Institute, Baycrest, M6A 2E1, North York, ON, Canada; Department of Psychology, University of Toronto, M5S 1A1, Toronto, ON, Canada; Department of Psychology, University of Western Ontario, N6A 3K7, London, ON, Canada
- Kurdo Araz
- Department of Psychology, University of Western Ontario, N6A 3K7, London, ON, Canada
- Ingrid S Johnsrude
- Department of Psychology, University of Western Ontario, N6A 3K7, London, ON, Canada; School of Communication Sciences & Disorders, University of Western Ontario, N6A 5B7, London, ON, Canada

9. Yuan D, Tian H, Zhou Y, Wu J, Sun T, Xiao Z, Shang C, Wang J, Chen X, Sun Y, Tang J, Qiu S, Tan LH. Acupoint-brain (acubrain) mapping: Common and distinct cortical language regions activated by focused ultrasound stimulation on two language-relevant acupoints. Brain Lang 2021; 215:104920. PMID: 33561785; DOI: 10.1016/j.bandl.2021.104920
Abstract
Acupuncture, which takes advantage of modality-specific neural pathways, has shown promising results in the treatment of brain disorders that affect distinct modalities such as pain and vision. However, the precise mechanisms underlying within-modality neuromodulation of acupoints on human higher-order cognition remain largely unknown. In the present study, we used a non-invasive and easy-to-operate method, focused ultrasound, to stimulate two language-relevant acupoints, GB39 (Xuanzhong) and SJ8 (Sanyangluo), in thirty healthy adults. The effect of focused ultrasound stimulation (FUS) on brain activation was examined with functional magnetic resonance imaging (fMRI). We found that stimulating GB39 and SJ8 with FUS evoked overlapping but distinct brain activation patterns. Our findings provide a major step toward within-modality (in this case, language) acupoint-brain (acubrain) mapping and shed light on the potential use of FUS as a personalized treatment option for brain disorders that affect high-level cognitive functions.
Affiliation(s)
- Di Yuan
- Guangdong-Hongkong-Macau Institute of CNS Regeneration and Ministry of Education CNS Regeneration Collaborative Joint Laboratory, Jinan University, Guangzhou, China; Center for Language and Brain, Shenzhen Institute of Neuroscience, Shenzhen, China
- Haoyue Tian
- Guangdong-Hongkong-Macau Institute of CNS Regeneration and Ministry of Education CNS Regeneration Collaborative Joint Laboratory, Jinan University, Guangzhou, China; Center for Language and Brain, Shenzhen Institute of Neuroscience, Shenzhen, China
- Yulong Zhou
- Guangdong-Hongkong-Macau Institute of CNS Regeneration and Ministry of Education CNS Regeneration Collaborative Joint Laboratory, Jinan University, Guangzhou, China; Center for Language and Brain, Shenzhen Institute of Neuroscience, Shenzhen, China
- Jinjian Wu
- The First School of Clinical Medicine, Guangzhou University of Chinese Medicine, Guangzhou, China
- Tong Sun
- School of Biomedical Engineering, Shenzhen University Health Science Center, Shenzhen, China
- Zhuoni Xiao
- Guangdong-Hongkong-Macau Institute of CNS Regeneration and Ministry of Education CNS Regeneration Collaborative Joint Laboratory, Jinan University, Guangzhou, China; Center for Language and Brain, Shenzhen Institute of Neuroscience, Shenzhen, China
- Chunfeng Shang
- Center for Language and Brain, Shenzhen Institute of Neuroscience, Shenzhen, China
- Jiaojian Wang
- Center for Language and Brain, Shenzhen Institute of Neuroscience, Shenzhen, China
- Xin Chen
- School of Biomedical Engineering, Shenzhen University Health Science Center, Shenzhen, China
- Yimin Sun
- Department of Biomedical Engineering, Medical Systems Biology Research Center, Tsinghua University School of Medicine, Beijing, China
- Joey Tang
- Center for Language and Brain, Shenzhen Institute of Neuroscience, Shenzhen, China
- Shijun Qiu
- Department of Radiology, First Affiliated Hospital of Guangzhou University of Chinese Medicine, Guangzhou, China
- Li Hai Tan
- Guangdong-Hongkong-Macau Institute of CNS Regeneration and Ministry of Education CNS Regeneration Collaborative Joint Laboratory, Jinan University, Guangzhou, China; Center for Language and Brain, Shenzhen Institute of Neuroscience, Shenzhen, China

10. Zaccarella E, Papitto G, Friederici AD. Language and action in Broca's area: Computational differentiation and cortical segregation. Brain Cogn 2020; 147:105651. PMID: 33254030; DOI: 10.1016/j.bandc.2020.105651
Abstract
Actions have been proposed to follow hierarchical principles similar to those hypothesized for language syntax. These structural similarities are claimed to be reflected in the common involvement of certain neural populations of Broca's area, in the inferior frontal gyrus (IFG). In this position paper, we follow an influential hypothesis in linguistic theory to introduce the syntactic operation Merge and the corresponding motor/conceptual interfaces. We argue that action hierarchies do not follow the same principles that rule language syntax. We propose that hierarchy in the action domain lies in predictive processing mechanisms mapping sensory inputs onto statistical regularities of action-goal relationships. At the cortical level, distinct subregions of Broca's area appear to support different types of computations across the two domains. We argue that anterior BA44 is a major hub for the implementation of the syntactic operation Merge. On the other hand, posterior BA44 is recruited in selecting premotor mental representations based on the information provided by contextual signals. This functional distinction is corroborated by a recent meta-analysis (Papitto, Friederici, & Zaccarella, 2020). We conclude by suggesting that action and language can meet only where the interfaces transfer abstract computations either to the external world or to the internal mental world.
Affiliation(s)
- Emiliano Zaccarella
- Max Planck Institute for Human Cognitive and Brain Sciences, Department of Neuropsychology, Leipzig, Germany
- Giorgio Papitto
- Max Planck Institute for Human Cognitive and Brain Sciences, Department of Neuropsychology, Leipzig, Germany; International Max Planck Research School on Neuroscience of Communication: Function, Structure, and Plasticity, Leipzig, Germany
- Angela D Friederici
- Max Planck Institute for Human Cognitive and Brain Sciences, Department of Neuropsychology, Leipzig, Germany

11. Jiang R, Zuo N, Ford JM, Qi S, Zhi D, Zhuo C, Xu Y, Fu Z, Bustillo J, Turner JA, Calhoun VD, Sui J. Task-induced brain connectivity promotes the detection of individual differences in brain-behavior relationships. Neuroimage 2019; 207:116370. PMID: 31751666; PMCID: PMC7345498; DOI: 10.1016/j.neuroimage.2019.116370
Abstract
Although both resting and task-induced functional connectivity (FC) have been used to characterize the human brain and cognitive abilities, the potential of task-induced FC for individualized prediction of out-of-scanner cognitive traits remains largely unexplored. A recent study (Greene et al., 2018) predicted fluid intelligence scores using FCs derived from rest and multiple task conditions, suggesting that task-induced brain-state manipulation improves the prediction of individual traits. Here, using a large dataset incorporating fMRI data from rest and 7 distinct task conditions, we replicated the original study by employing a different machine learning approach and applying the method to predict two reading comprehension-related cognitive measures. Consistent with their findings, we found that task-based machine learning models often outperformed rest-based models. We also observed that combining multi-task fMRI improved prediction performance, yet integrating more fMRI conditions does not necessarily ensure better predictions. Compared with rest, the predictive FCs derived from language and working memory tasks carried more predictive power, predominantly in default mode and frontoparietal networks. Moreover, the prediction models were highly stable and generalizable across distinct cognitive states. Together, this replication study highlights the benefit of using task-based FCs to reveal brain-behavior relationships, which may confer more predictive power and promote the detection of individual differences in connectivity patterns underlying relevant cognitive traits, providing strong evidence for the validity and robustness of the original findings.
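The prediction scheme described here, fitting a model on FC features from training subjects and evaluating it on held-out subjects, can be sketched in a few lines of numpy. This is a generic cross-validated ridge-regression illustration on simulated data, not the authors' actual model or dataset; subject counts, edge counts, and the planted "predictive edges" are all assumptions.

```python
import numpy as np

# Hypothetical data standing in for subjects' FC features and behavior:
# rows = subjects, columns = functional-connectivity edge strengths.
rng = np.random.default_rng(42)
n_subj, n_edges = 120, 50
fc = rng.standard_normal((n_subj, n_edges))
w_true = np.zeros(n_edges)
w_true[:5] = 1.0                                   # a few predictive edges
score = fc @ w_true + rng.standard_normal(n_subj)  # out-of-scanner measure

def ridge_fit(X, y, lam=1.0):
    # Closed-form ridge regression: w = (X'X + lam*I)^(-1) X'y
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

# 10-fold cross-validation: fit on training subjects, predict held-out
# subjects, then score the model as r(predicted, observed), the metric
# commonly used in connectome-based prediction studies.
pred = np.zeros(n_subj)
for test in np.array_split(np.arange(n_subj), 10):
    train = np.setdiff1d(np.arange(n_subj), test)
    pred[test] = fc[test] @ ridge_fit(fc[train], score[train])

r = np.corrcoef(pred, score)[0, 1]
print(round(r, 2))
```

Comparing such cross-validated r values for rest-derived versus task-derived FC features is, in outline, how "task-based models outperform rest-based models" is quantified.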
Affiliation(s)
- Rongtao Jiang: Brainnetome Center and National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Nianming Zuo: Brainnetome Center and National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- Judith M Ford: Department of Psychiatry, University of California, San Francisco, CA 94143, USA; San Francisco VA Medical Center, San Francisco, CA 94143, USA
- Shile Qi: Tri-institutional Center for Translational Research in Neuroimaging and Data Science (TReNDS), Georgia State University, Georgia Institute of Technology, and Emory University, Atlanta, GA 30303, USA
- Dongmei Zhi: Brainnetome Center and National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Chuanjun Zhuo: Department of Psychiatric-Neuroimaging-Genetics and Morbidity Laboratory (PNGC-Lab), Nankai University Affiliated Anding Hospital, Tianjin Mental Health Center, Tianjin 300222, China
- Yong Xu: Department of Psychiatry, First Hospital of Shanxi Medical University, Taiyuan 030001, China
- Zening Fu: Tri-institutional Center for Translational Research in Neuroimaging and Data Science (TReNDS), Georgia State University, Georgia Institute of Technology, and Emory University, Atlanta, GA 30303, USA
- Juan Bustillo: Department of Psychiatry, University of New Mexico, Albuquerque, NM 87131, USA
- Jessica A Turner: Tri-institutional Center for Translational Research in Neuroimaging and Data Science (TReNDS), Georgia State University, Georgia Institute of Technology, and Emory University, Atlanta, GA 30303, USA; Department of Psychology and Neuroscience, Georgia State University, Atlanta, GA 30302, USA
- Vince D Calhoun: Tri-institutional Center for Translational Research in Neuroimaging and Data Science (TReNDS), Georgia State University, Georgia Institute of Technology, and Emory University, Atlanta, GA 30303, USA
- Jing Sui: Brainnetome Center and National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences, Beijing 100049, China; Tri-institutional Center for Translational Research in Neuroimaging and Data Science (TReNDS), Georgia State University, Georgia Institute of Technology, and Emory University, Atlanta, GA 30303, USA; Chinese Academy of Sciences Center for Excellence in Brain Science, Institute of Automation, Beijing, China
|
12
|
Kubota M, Ono Y, Ishiyama A, Zouridakis G, Papanicolaou AC. Magnetoencephalography Reveals Mismatch Field Enhancement from Unexpected Syntactic Category Errors in English Sentences. Neurosci Lett 2018; 662:195-204. [PMID: 28847487] [DOI: 10.1016/j.neulet.2017.07.051]
Abstract
Which syntactic operations increase neuronal activation when syntactically erroneous, unexpected lexical items occur in heard sentences has remained unclear. In the present study, we used magnetoencephalographic (MEG) recordings to compare bare-infinitive and full-infinitive constructions in English, aiming to identify the type of syntactic deviance that may trigger an early syntax-related mismatch field (MMF) component when unexpected words appear in sentences. Six native speakers of English were presented with auditory stimuli of sentences or words in a passive oddball paradigm while watching a silent movie. The experimental protocol included four sessions, investigating the sentential (structural) versions of the full (with the 'to' infinitival particle) and bare infinitival structures (without the particle) and the lexical (non-structural) versions of the verb with or without the particle, to determine whether structural processing of sentences was more crucial for eliciting the MMF than simple processing of lexical items in the verb-only conditions. Amplitude analysis of the resulting evoked fields showed that the syntactic category error in the bare infinitival structure, violating syntactic predictions, evoked significantly larger MMF activation, with a peak latency of approximately 200 ms in the left anterior superior temporal sulcus, compared with lexical items that had no syntactic status. These results demonstrate that syntactically unexpected, illegal input in the bare infinitival structure is detected more robustly when the brain processes the structural information of the entire sentence than in the corresponding verb-only items.
Affiliation(s)
- Mikio Kubota: Department of English, Seijo University, Tokyo, Japan; Department of Engineering Technology, University of Houston, Houston, TX, USA; Center for Clinical Neurosciences, Children's Learning Institute, The University of Texas Health Science Center at Houston, TX, USA
- Yumie Ono: Department of Physiology and Neuroscience, Kanagawa Dental College, Kanagawa, Japan; Department of Electronics and Bioinformatics, Meiji University, Kanagawa, Japan; Department of Electrical Engineering and Bioscience, Waseda University, Tokyo, Japan
- Atsushi Ishiyama: Department of Electrical Engineering and Bioscience, Waseda University, Tokyo, Japan
- George Zouridakis: Department of Engineering Technology, University of Houston, Houston, TX, USA
- Andrew C Papanicolaou: Center for Clinical Neurosciences, Children's Learning Institute, The University of Texas Health Science Center at Houston, TX, USA; Department of Pediatrics, University of Tennessee Health Science Center, Memphis, TN, USA
|
13
|
Predictive Brain Mechanisms in Sound-to-Meaning Mapping during Speech Processing. J Neurosci 2017; 36:10813-10822. [PMID: 27798136] [DOI: 10.1523/jneurosci.0583-16.2016]
Abstract
Spoken language comprehension relies not only on the identification of individual words, but also on the expectations arising from contextual information. A distributed frontotemporal network is known to facilitate the mapping of speech sounds onto their corresponding meanings. However, how prior expectations influence this efficient mapping at the neuroanatomical level, especially in terms of individual words, remains unclear. Using fMRI, we addressed this question in the framework of the dual-stream model by scanning native speakers of Mandarin Chinese, a language highly dependent on context. We found that, within the ventral pathway, violated expectations elicited stronger activations in the left anterior superior temporal gyrus and the ventral inferior frontal gyrus (IFG) for the phonological-semantic prediction of spoken words. Functional connectivity analysis showed that expectations were mediated by both top-down modulation from the left ventral IFG to the anterior temporal regions and enhanced cross-stream integration through strengthened connections between different subregions of the left IFG. By further investigating the dynamic causality within the dual-stream model, we elucidated how the human brain accomplishes sound-to-meaning mapping for words in a predictive manner. SIGNIFICANCE STATEMENT In daily communication via spoken language, one of the core processes is understanding the words being used. Effortless and efficient information exchange via speech relies not only on the identification of individual spoken words, but also on the contextual information giving rise to expected meanings. Despite the accumulating evidence for the bottom-up perception of auditory input, it is still not fully understood how top-down modulation is achieved in the extensive frontotemporal cortical network. Here, we provide a comprehensive description of the neural substrates underlying sound-to-meaning mapping and demonstrate how the dual-stream model functions in the modulation of expectations, allowing for a better understanding of how the human brain accomplishes sound-to-meaning mapping in a predictive manner.
|
14
|
Leminen A, Kimppa L, Leminen MM, Lehtonen M, Mäkelä JP, Shtyrov Y. Acquisition and consolidation of novel morphology in human neocortex: A neuromagnetic study. Cortex 2016; 83:1-16. [PMID: 27458780] [DOI: 10.1016/j.cortex.2016.06.020]
Abstract
Research into neurobiological mechanisms of morphosyntactic processing of language has suggested specialised systems for decomposition and storage, which are used flexibly during the processing of complex polymorphemic words (such as those formed through affixation, e.g., boy + s = noun + plural marker or boy + ish = noun plus attenuator). However, neural underpinnings of acquisition of novel morphology are still unknown. We implicitly trained our participants with new derivational affixes through a word-picture association task and investigated the neural processes underlying formation of neural memory traces for new affixes. The participants' brain activity was recorded using magnetoencephalography (MEG), as they passively listened to the newly trained and untrained suffixes combined with real word and pseudoword stems. The MEG recording was repeated after a night's sleep using the same stimuli, to test the effects of overnight consolidation. The newly trained suffixes combined with real stems elicited stronger source activity in the left inferior frontal gyrus (LIFG) at ∼50 msec after the suffix onset than untrained suffixes, suggesting memory trace formation for the newly learned suffixes already on the same day. The following day, the suffix learning effect spread to the left superior temporal gyrus (STG) where it was again manifest as a response enhancement, particularly at ∼200-300 msec after the suffix onset, which might reflect an additional effect of overnight consolidation. Overall, the results demonstrate the rapid and dynamic processes of both immediate build-up and longer-term consolidation of neocortical memory traces for novel morphology, taking place after a short period of exposure to novel morphology and involving fronto-temporal perisylvian language circuitry.
Affiliation(s)
- Alina Leminen: Center of Functionally Integrative Neuroscience, Department of Clinical Medicine, Aarhus University, Aarhus, Denmark; Cognitive Brain Research Unit, Cognitive Science, Institute of Behavioural Sciences, University of Helsinki, Helsinki, Finland
- Lilli Kimppa: Cognitive Brain Research Unit, Cognitive Science, Institute of Behavioural Sciences, University of Helsinki, Helsinki, Finland
- Miika M Leminen: Center of Functionally Integrative Neuroscience, Department of Clinical Medicine, Aarhus University, Aarhus, Denmark; Cognitive Brain Research Unit, Cognitive Science, Institute of Behavioural Sciences, University of Helsinki, Helsinki, Finland
- Minna Lehtonen: Cognitive Brain Research Unit, Cognitive Science, Institute of Behavioural Sciences, University of Helsinki, Helsinki, Finland; Department of Psychology, Åbo Akademi University, Turku, Finland
- Jyrki P Mäkelä: BioMag Laboratory, HUS Medical Imaging Center, Hospital District of Helsinki and Uusimaa, Helsinki, Finland
- Yury Shtyrov: Center of Functionally Integrative Neuroscience, Department of Clinical Medicine, Aarhus University, Aarhus, Denmark; Centre for Cognition and Decision Making, National Research University Higher School of Economics, Moscow, Russia
|
15
|
Shtyrov Y, Lenzen M. First-pass neocortical processing of spoken language takes only 30 msec: Electrophysiological evidence. Cogn Neurosci 2016; 8:24-38. [DOI: 10.1080/17588928.2016.1156663]
|
16
|
Abstract
Language-processing functions follow heterogeneous developmental trajectories. The human fetus can already distinguish vowels in utero, but grammatical complexity is usually not fully mastered until at least 7 years of age. Surveying the current literature, we propose that the ontogeny of the cortical language network can be roughly subdivided into two main developmental stages. In the first stage, extending over the first 3 years of life, the infant rapidly acquires bottom-up processing capacities, which are implemented primarily bilaterally in the temporal cortices. In the second stage, continuing into adolescence, top-down processes emerge gradually with the increasing functional selectivity and structural connectivity of the left inferior frontal cortex.
Affiliation(s)
- Michael A Skeide: Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Stephanstrasse 1a, 04103 Leipzig, Germany
- Angela D Friederici: Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Stephanstrasse 1a, 04103 Leipzig, Germany
|
17
|
Shtyrov YY, Stroganova TA. When ultrarapid is ultrarapid: on importance of temporal precision in neuroscience of language. Front Hum Neurosci 2015; 9:576. [PMID: 26539098] [PMCID: PMC4612669] [DOI: 10.3389/fnhum.2015.00576]
Affiliation(s)
- Yury Y Shtyrov: Center of Functionally Integrative Neuroscience (CFIN), Institute for Clinical Medicine, Aarhus University, Aarhus, Denmark; Centre for Cognition and Decision Making, NRU Higher School of Economics, Moscow, Russia
- Tatyana A Stroganova: Moscow MEG Center, Moscow State University for Psychology and Education, Moscow, Russia
|
18
|
Boylan C, Trueswell JC, Thompson-Schill SL. Multi-voxel pattern analysis of noun and verb differences in ventral temporal cortex. Brain Lang 2014; 137:40-49. [PMID: 25156159] [PMCID: PMC4189997] [DOI: 10.1016/j.bandl.2014.07.009]
Abstract
Recent evidence suggests a probabilistic relationship exists between the phonological/orthographic form of a word and its lexical-syntactic category (specifically nouns vs. verbs) such that syntactic prediction may elicit form-based estimates in sensory cortex. We tested this hypothesis by conducting multi-voxel pattern analysis (MVPA) of fMRI data from early visual cortex (EVC), left ventral temporal (VT) cortex, and a subregion of the latter - the left mid fusiform gyrus (mid FG), sometimes called the "visual word form area." Crucially, we examined only those volumes sampled when subjects were predicting, but not viewing, nouns and verbs. This allowed us to investigate prediction effects in visual areas without any bottom-up orthographic input. We found that voxels in VT and mid FG, but not in EVC, were able to classify noun-predictive trials vs. verb-predictive trials in sentence contexts, suggesting that sentence-level predictions are sufficient to generate word form-based estimates in visual areas.
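The MVPA logic described in this abstract (asking whether voxel patterns can classify trial type above chance under cross-validation) can be sketched as follows. The data are synthetic and the linear-SVM classifier is a common but here merely illustrative MVPA choice, not necessarily the one used in the study.

```python
# Minimal MVPA-style sketch: cross-validated linear classification of two
# trial types (e.g., noun-predictive vs verb-predictive) from multi-voxel
# activity patterns. Synthetic data; LinearSVC is an illustrative choice.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

rng = np.random.default_rng(1)
n_trials_per_class, n_voxels = 60, 50
signal = rng.standard_normal(n_voxels) * 0.5    # class-specific pattern

# Each trial is noise plus (or minus) the class-specific pattern.
noun = rng.standard_normal((n_trials_per_class, n_voxels)) + signal
verb = rng.standard_normal((n_trials_per_class, n_voxels)) - signal
X = np.vstack([noun, verb])
y = np.array([0] * n_trials_per_class + [1] * n_trials_per_class)

# Decoding accuracy reliably above chance (0.5) indicates that the voxel
# patterns carry information distinguishing the two trial types.
acc = cross_val_score(LinearSVC(), X, y, cv=5).mean()
print(f"mean decoding accuracy = {acc:.2f}")
```

Running the same classification separately on voxels from different regions (here, EVC vs VT vs mid FG) is what lets such a study localize where predictive information is present.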
|
19
|
Yoshimura Y, Kikuchi M, Ueno S, Shitamichi K, Remijn GB, Hiraishi H, Hasegawa C, Furutani N, Oi M, Munesue T, Tsubokawa T, Higashida H, Minabe Y. A longitudinal study of auditory evoked field and language development in young children. Neuroimage 2014; 101:440-7. [PMID: 25067819] [DOI: 10.1016/j.neuroimage.2014.07.034]
Abstract
The relationship between language development in early childhood and the maturation of brain functions related to the human voice remains unclear. Because the development of the auditory system likely correlates with language development in young children, we investigated the relationship between the auditory evoked field (AEF) and language development using non-invasive child-customized magnetoencephalography (MEG) in a longitudinal design. Twenty typically developing children were recruited (aged 36-75 months at the first measurement) and re-examined 11-25 months after the first measurement. The AEF component P1m was examined to investigate the developmental changes in each participant's neural response to vocal stimuli. In addition, we examined the relationships between brain responses and language performance. The P1m peak amplitude in response to vocal stimuli significantly increased in both hemispheres in the second measurement compared to the first measurement, whereas no differences were observed in P1m latency. Notably, children with greater increases in P1m amplitude in the left hemisphere performed better on linguistic tests. Thus, our results indicate that the P1m evoked by vocal stimuli is a neurophysiological marker of language development in young children, and that MEG can be used to track the maturation of the auditory cortex based on auditory evoked fields in young children. This study is the first to demonstrate a significant relationship between the development of the auditory processing system and the development of language abilities in young children.
Affiliation(s)
- Yuko Yoshimura: Research Center for Child Mental Development, Kanazawa University, Kanazawa, Japan
- Mitsuru Kikuchi: Research Center for Child Mental Development, Kanazawa University, Kanazawa, Japan
- Sanae Ueno: Research Center for Child Mental Development, Kanazawa University, Kanazawa, Japan
- Kiyomi Shitamichi: Department of Psychiatry and Neurobiology, Graduate School of Medical Science, Kanazawa University, Kanazawa, Japan
- Gerard B Remijn: International Education Center, Kyushu University, Fukuoka, Japan
- Hirotoshi Hiraishi: Research Center for Child Mental Development, Kanazawa University, Kanazawa, Japan
- Chiaki Hasegawa: Research Center for Child Mental Development, Kanazawa University, Kanazawa, Japan
- Naoki Furutani: Department of Psychiatry and Neurobiology, Graduate School of Medical Science, Kanazawa University, Kanazawa, Japan
- Manabu Oi: Research Center for Child Mental Development, Kanazawa University, Kanazawa, Japan
- Toshio Munesue: Research Center for Child Mental Development, Kanazawa University, Kanazawa, Japan
- Tsunehisa Tsubokawa: Department of Anesthesiology, Graduate School of Medical Science, Kanazawa University, Kanazawa, Japan
- Haruhiro Higashida: Research Center for Child Mental Development, Kanazawa University, Kanazawa, Japan
- Yoshio Minabe: Department of Psychiatry and Neurobiology, Graduate School of Medical Science, Kanazawa University, Kanazawa, Japan
|
20
|
Ruhnau P, Herrmann B, Maess B, Brauer J, Friederici AD, Schröger E. Processing of complex distracting sounds in school-aged children and adults: evidence from EEG and MEG data. Front Psychol 2013; 4:717. [PMID: 24155730] [PMCID: PMC3800842] [DOI: 10.3389/fpsyg.2013.00717]
Abstract
When a perceiver performs a task, rarely occurring sounds often have a distracting effect on task performance. The neural mismatch responses in event-related potentials to such distracting stimuli depend on age: adults commonly show a negative response, whereas in children both positive and negative mismatch responses have been reported. Using electro- and magnetoencephalography (EEG/MEG), we investigated the developmental changes in distraction processing in school-aged children (9-10 years) and adults. Participants took part in an auditory-visual distraction paradigm comprising a visuo-spatial primary task and task-irrelevant environmental sounds distracting from this task. Behaviorally, distractors delayed reaction times (RTs) in the primary task in both age groups, and this delay was of similar magnitude in both groups. The neurophysiological data revealed both an early and a late mismatch response elicited by distracting stimuli in both age groups. Together with previous research, this indicates that deviance detection is accomplished in a hierarchical manner in the auditory system. Both mismatch responses were localized to auditory cortex areas. All mismatch responses were generally delayed in children, suggesting that not all neurophysiological aspects of deviance processing are mature in school-aged children. Furthermore, the P3a, reflecting involuntary attention capture, was present in both age groups in the EEG with comparable amplitudes and at similar latencies, but with a different topographical distribution. This suggests that involuntary attention shifts toward complex distractors operate comparably in school-aged children and adults, although the underlying generators are still maturing.
Affiliation(s)
- Philipp Ruhnau: Center for Mind/Brain Science, University of Trento, Mattarello, Italy; Institute of Psychology, University of Leipzig, Leipzig, Germany
|
21
|
Generating predictions: lesion evidence on the role of left inferior frontal cortex in rapid syntactic analysis. Cortex 2013; 49:2861-74. [PMID: 23890826] [DOI: 10.1016/j.cortex.2013.05.014]
Abstract
A well-documented phenomenon in event-related electroencephalography (EEG) and magnetoencephalography (MEG) studies on language processing is that syntactic violations of different types elicit negativities as early as 100 msec after the violation point. Recently, these responses have been associated with activations in or very close to sensory cortices, suggesting the involvement of basic sensory mechanisms in the detection of syntactic violations. The present study investigated whether intact auditory cortices and adjacent temporal regions are sufficient to generate early syntactic negativities in the auditory event-related potential (ERP). We tested ten clinically non-aphasic patients with left inferior frontal lesions but intact temporal cortices in a passive auditory ERP paradigm that had reliably elicited early negativities in response to violations of subject-verb agreement and word category in the past. Subject-verb agreement violations failed to elicit early grammaticality effects in these patients, whereas a group of ten age-matched controls showed a reliable early negativity. This finding supports the idea that sensory aspects of syntactic analysis, as reflected in early syntactic negativities, critically depend on top-down predictions generated by the left inferior frontal cortex. In contrast, word category violations elicited a small, marginally significant early negativity in both controls and patients, suggesting an additional involvement of temporal regions in early phrase structure processing. In an additional auditory oddball experiment, patients showed a regular P300 but no N2b component in response to deviant tones, indicating that their deficit in generating sensory predictions extends beyond the language domain.
|
22
|
Sammler D, Koelsch S, Ball T, Brandt A, Grigutsch M, Huppertz HJ, Knösche TR, Wellmer J, Widman G, Elger CE, Friederici AD, Schulze-Bonhage A. Co-localizing linguistic and musical syntax with intracranial EEG. Neuroimage 2012; 64:134-46. [PMID: 23000255] [DOI: 10.1016/j.neuroimage.2012.09.035]
Abstract
Despite general agreement on shared syntactic resources in music and language, the neuroanatomical underpinnings of this overlap remain largely unexplored. While previous studies mainly considered frontal areas as supramodal grammar processors, the domain-general syntactic role of temporal areas has so far been neglected. Here we capitalized on the excellent spatial and temporal resolution of subdural EEG recordings to co-localize low-level syntactic processes in music and language in the temporal lobe in a within-subject design. We used Brain Surface Current Density mapping to localize and compare the neural generators of the early negativities evoked by violations of phrase structure grammar in both music and spoken language. The results show that the processing of syntactic violations relies in both domains on bilateral temporo-fronto-parietal neural networks. We found considerable overlap of these networks in the superior temporal lobe, but also differences in the hemispheric timing and relative weighting of their fronto-temporal constituents. While pointing to dissimilarities in how shared neural resources may be configured depending on the musical or linguistic nature of the perceived stimulus, the combined data lend support to a co-localization of early musical and linguistic syntax processing in the temporal lobe.
Affiliation(s)
- Daniela Sammler: Max Planck Institute for Human Cognitive and Brain Sciences, Stephanstraße 1a, 04103 Leipzig, Germany
|
23
|
Friederici AD. The cortical language circuit: from auditory perception to sentence comprehension. Trends Cogn Sci 2012; 16:262-8. [PMID: 22516238] [DOI: 10.1016/j.tics.2012.04.001]
Abstract
Over the years, a large body of work on the brain basis of language comprehension has accumulated, paving the way for the formulation of a comprehensive model. The model proposed here describes the functional neuroanatomy of the different processing steps from auditory perception to comprehension as located in different gray matter brain regions. It also specifies the information flow between these regions, taking into account white matter fiber tract connections. Bottom-up, input-driven processes proceeding from the auditory cortex to the anterior superior temporal cortex and from there to the prefrontal cortex, as well as top-down, controlled and predictive processes from the prefrontal cortex back to the temporal cortex are proposed to constitute the cortical language circuit.
Affiliation(s)
- Angela D Friederici: Max Planck Institute for Human Cognitive and Brain Sciences, Department of Neuropsychology, 04103 Leipzig, Germany
|
24
|
Herrmann B, Maess B, Kalberlah C, Haynes JD, Friederici AD. Auditory perception and syntactic cognition: brain activity-based decoding within and across subjects. Eur J Neurosci 2012; 35:1488-96. [DOI: 10.1111/j.1460-9568.2012.08053.x]
|
25
|
Abstract
Language processing is a trait of the human species. Knowledge about its neurobiological basis has increased considerably over the past decades. Different brain regions in the left and right hemisphere have been identified that support particular language functions. Networks involving the temporal cortex and the inferior frontal cortex with a clear left lateralization were shown to support syntactic processes, whereas less lateralized temporo-frontal networks subserve semantic processes. These networks have been substantiated by both functional and structural connectivity data. Electrophysiological measures indicate that within these networks syntactic processes of local structure building precede the assignment of grammatical and semantic relations in a sentence. Suprasegmental prosodic information overtly available in the acoustic language input is processed predominantly in a temporo-frontal network in the right hemisphere, associated with a clear electrophysiological marker. Studies with patients suffering from lesions in the corpus callosum reveal that the posterior portion of this structure plays a crucial role in the interaction of syntactic and prosodic information during language processing.
Affiliation(s)
- Angela D Friederici: Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
|