1. Tang M, Chen B, Zhao X, Zhao L. Semantic and syntactic processing of emojis in sentential intermediate positions. Cogn Neurodyn 2024;18:1743-1752. PMID: 39104667; PMCID: PMC11297853; DOI: 10.1007/s11571-023-10037-1.
Abstract
This study investigated the neural mechanisms of processing emojis that serve as the sentence predicate in written contexts. In this hybrid textuality, which is more cognitively engaging, emojis in sentential intermediate positions were designed to be either congruent or incongruent with the context. The results showed that incongruent words elicited a robust N400 effect, whereas incongruent emojis elicited only a P600 effect. This implies that the semantics and syntax of words can be separated, while those of emojis appear to be integrated: when the meaning of an emoji violates the sentential context, its grammatical role cannot be well interpreted, especially when it serves as a key grammatical component of the sentence, such as the predicate. Thus, even though readers can interpret the meaning of emojis, emojis' syntactic and semantic functions cannot be clearly separated. Compared with word processing, the larger amplitude for emojis in the 350-500 ms time window indicates greater cognitive effort in emoji semantic processing, possibly arising from the switch of modalities within the visual channel, that is, from multimodal cognitive load.
Affiliation(s)
- Mengmeng Tang
- School of Foreign Languages, China University of Petroleum, Beijing, China
- Bingfei Chen
- School of Foreign Languages, China University of Petroleum, Beijing, China
- Xiufeng Zhao
- School of Foreign Languages, China University of Petroleum, Beijing, China
- Lun Zhao
- Center of Language & Brain Research, Sichuan International Studies University, Chongqing, China
2. Huizeling E, Alday PM, Peeters D, Hagoort P. Combining EEG and 3D-eye-tracking to study the prediction of upcoming speech in naturalistic virtual environments: A proof of principle. Neuropsychologia 2023;191:108730. PMID: 37939871; DOI: 10.1016/j.neuropsychologia.2023.108730.
Abstract
EEG and eye-tracking provide complementary information when investigating language comprehension. Evidence that speech processing may be facilitated by speech prediction comes from the observation that a listener's eye gaze moves towards a referent before it is mentioned if the remainder of the spoken sentence is predictable. However, changes to the trajectory of anticipatory fixations could result from either a change in prediction or an attention shift. Conversely, N400 amplitudes and concurrent spectral power provide information about the ease of word processing at the moment the word is perceived. In a proof-of-principle investigation, we combined EEG and eye-tracking to study linguistic prediction in naturalistic, virtual environments. We observed increased processing, reflected in theta band power, either during verb processing (when the verb was predictive of the noun) or during noun processing (when the verb was not predictive of the noun). Alpha power was higher in response to the predictive verb and to unpredictable nouns. We replicated typical effects of noun congruence, but not predictability, on the N400 in response to the noun. Thus, the rich visual context that accompanied speech in virtual reality influenced language processing relative to previous reports; the visual context may have facilitated the processing of unpredictable nouns. Finally, anticipatory fixations were predictive of spectral power during noun processing, and the length of time spent fixating the target could be predicted by spectral power at verb onset, conditional on the object having been fixated. Overall, we show that combining EEG and eye-tracking provides a promising new method for answering novel research questions about the prediction of upcoming linguistic input, for example, regarding the role of extralinguistic cues in prediction during language comprehension.
Affiliation(s)
- Eleanor Huizeling
- Max Planck Institute for Psycholinguistics, Nijmegen, the Netherlands.
- David Peeters
- Department of Communication and Cognition, TiCC, Tilburg University, Tilburg, the Netherlands
- Peter Hagoort
- Max Planck Institute for Psycholinguistics, Nijmegen, the Netherlands; Radboud University, Donders Institute for Brain, Cognition and Behaviour, Nijmegen, the Netherlands
3. Eviatar Z, Binur N, Peleg O. Interactions of lexical and conceptual representations: Evidence from EEG. Brain Lang 2023;243:105302. PMID: 37437410; DOI: 10.1016/j.bandl.2023.105302.
Abstract
We examined whether meanings automatically activate linguistic forms, and whether these forms affect semantic decisions. Participants were presented sequentially with pairs of pictures and decided whether the objects in the pictures were related. At no point did they name the pictures. The object names of the experimental stimuli were ambiguous in orthography (homographs), in phonology (homophones), or in both (homonyms), or were unambiguous. We show that the lexical characteristics of the objects' names affect a semantic decision about real-world relations, in an online measure (the N400) as well as in offline behavioral measures. We also show a dissociation between conceptual and lexical recognition: an earlier component (the N230) was affected by relatedness but was not sensitive to lexical characteristics. We interpret this as supporting the hypothesis that semantic recognition occurs before the automatic lexical activation of the object name, but that once linguistic representations are activated, they affect semantic integration.
Affiliation(s)
- Zohar Eviatar
- Institute for Information Processing and Decision Making, University of Haifa, Israel; School of Psychological Science, University of Haifa, Israel; The Edmond J. Safra Brain Research Center for the Study of Learning Disabilities, University of Haifa, Israel.
- Nahal Binur
- The Edmond J. Safra Brain Research Center for the Study of Learning Disabilities, University of Haifa, Israel
- Orna Peleg
- The Program of Cognitive Studies of Language and Its Uses, and Sagol School of Neuroscience, Tel-Aviv University, Israel
4. Murphy E, Woolnough O, Rollo PS, Roccaforte ZJ, Segaert K, Hagoort P, Tandon N. Minimal Phrase Composition Revealed by Intracranial Recordings. J Neurosci 2022;42:3216-3227. PMID: 35232761; PMCID: PMC8994536; DOI: 10.1523/jneurosci.1575-21.2022.
Abstract
The ability to comprehend phrases is an essential integrative property of the brain. Here, we evaluate the neural processes that enable the transition from single-word processing to a minimal compositional scheme. Previous research has reported conflicting timing effects of composition, and disagreement persists with respect to inferior frontal and posterior temporal contributions. To address these issues, 19 patients (10 male, 9 female) implanted with penetrating depth or surface subdural intracranial electrodes heard auditory recordings of adjective-noun, pseudoword-noun, and adjective-pseudoword phrases and judged whether the phrase matched a picture. Stimulus-dependent alterations in broadband gamma activity, low-frequency power, and phase-locking values across the language-dominant left hemisphere were derived. This revealed a mosaic located on the lower bank of the posterior superior temporal sulcus (pSTS), in which closely neighboring cortical sites displayed exclusive sensitivity to either lexicality or phrase structure, but not both. Distinct timings were found for effects of phrase composition (210-300 ms) and pseudoword processing (∼300-700 ms), and these were localized to neighboring electrodes in pSTS. The pars triangularis and temporal pole encoded anticipation of composition in broadband low frequencies, and both regions exhibited greater functional connectivity with pSTS during phrase composition. Our results suggest that the pSTS is a highly specialized region composed of sparsely interwoven heterogeneous constituents that encodes both lower- and higher-level linguistic features. This hub in pSTS for minimal phrase processing may form the neural basis for the human-specific computational capacity for forming hierarchically organized linguistic structures.

Significance Statement
Linguists have claimed that the integration of multiple words into a phrase demands a computational procedure distinct from single-word processing. Here, we provide intracranial recordings from a large patient cohort, with high spatiotemporal resolution, to track the cortical dynamics of phrase composition. Epileptic patients volunteered to participate in a task in which they listened to phrases (red boat) or word-pseudoword and pseudoword-word pairs (e.g., red fulg). At the onset of the second word in phrases, greater broadband high gamma activity was found in the posterior superior temporal sulcus in electrodes that exclusively indexed phrasal meaning and not lexical meaning. These results provide direct, high-resolution signatures of minimal phrase composition in humans, a potentially species-specific computational capacity.
Affiliation(s)
- Elliot Murphy
- Vivian L. Smith Department of Neurosurgery, McGovern Medical School, University of Texas Health Science Center at Houston, Houston, Texas 77030
- Texas Institute for Restorative Neurotechnologies, University of Texas Health Science Center at Houston, Houston, Texas 77030
- Oscar Woolnough
- Vivian L. Smith Department of Neurosurgery, McGovern Medical School, University of Texas Health Science Center at Houston, Houston, Texas 77030
- Texas Institute for Restorative Neurotechnologies, University of Texas Health Science Center at Houston, Houston, Texas 77030
- Patrick S Rollo
- Vivian L. Smith Department of Neurosurgery, McGovern Medical School, University of Texas Health Science Center at Houston, Houston, Texas 77030
- Texas Institute for Restorative Neurotechnologies, University of Texas Health Science Center at Houston, Houston, Texas 77030
- Zachary J Roccaforte
- Vivian L. Smith Department of Neurosurgery, McGovern Medical School, University of Texas Health Science Center at Houston, Houston, Texas 77030
- Katrien Segaert
- School of Psychology and Centre for Human Brain Health, University of Birmingham, Birmingham B15 2TT, United Kingdom
- Max Planck Institute for Psycholinguistics, Nijmegen, 6525 XD Nijmegen, The Netherlands
- Peter Hagoort
- Max Planck Institute for Psycholinguistics, Nijmegen, 6525 XD Nijmegen, The Netherlands
- Donders Institute for Brain, Cognition and Behaviour, Nijmegen, 6525 HR Nijmegen, The Netherlands
- Nitin Tandon
- Vivian L. Smith Department of Neurosurgery, McGovern Medical School, University of Texas Health Science Center at Houston, Houston, Texas 77030
- Texas Institute for Restorative Neurotechnologies, University of Texas Health Science Center at Houston, Houston, Texas 77030
- Memorial Hermann Hospital, Texas Medical Center, Houston, Texas 77030
5. Zhou L, Perfetti C. Consistency and regularity effects in character identification: A greater role for global than local mapping congruence. Brain Lang 2021;221:104997. PMID: 34399241; DOI: 10.1016/j.bandl.2021.104997.
Abstract
Consistency and regularity, concepts that arise, respectively, from the connectionist and classical cognitive modeling work in alphabetic reading, are two ways to characterize the orthography-to-phonology mappings of written languages. These concepts have been applied to Chinese reading research despite important differences across writing systems, with mixed results concerning their relative importance. The present study of covert naming in Chinese is distinctive in testing the ERP effects of regularity and consistency in a fully orthogonal design. We found that consistency, but not regularity, affected the N170, P200 and N400 as well as pronunciation transcription accuracies, demonstrating a more prominent role of consistency than regularity in character naming, consistent with conclusions from English word naming. To capture a generalization across writing systems, we propose mapping congruence as a writing-system-independent way of referring to orthography-to-phonology mappings and illustrate these congruence effects in an interactive framework of character identification.
Affiliation(s)
- Lin Zhou
- Learning Research and Development Center, University of Pittsburgh, Pittsburgh, PA, USA; Department of Psychology, University of Pittsburgh, Pittsburgh, PA, USA; Center for the Neural Basis of Cognition, Pittsburgh, PA, USA.
- Charles Perfetti
- Learning Research and Development Center, University of Pittsburgh, Pittsburgh, PA, USA; Department of Psychology, University of Pittsburgh, Pittsburgh, PA, USA; Center for the Neural Basis of Cognition, Pittsburgh, PA, USA
6. Jouen AL, Cazin N, Hidot S, Madden-Lombardi C, Ventre-Dominey J, Dominey PF. Common ERP responses to narrative incoherence in sentence and picture pair comprehension. Brain Cogn 2021;153:105775. PMID: 34333283; DOI: 10.1016/j.bandc.2021.105775.
Abstract
Understanding the neural processes underlying the comprehension of visual images and sentences remains a major open challenge in cognitive neuroscience. We previously demonstrated with fMRI and DTI that comprehension of visual images and of sentences describing human activities recruits a common extended parietal-temporal-frontal semantic system. The current research tests the hypothesis that this common semantic system displays similar ERP profiles during processing in these two modalities, providing further support for a common comprehension system. We recorded EEG from naïve subjects as they viewed simple narratives made up of a first visual image depicting a human event, followed by a second image that either was or was not a sequentially coherent narrative follow-up to the first. Incoherent second stimuli depicted the same agents but shifted them into a different situation. In separate blocks of trials, the same protocol was presented using narrative sentence stimuli. Part of the novelty here is the direct comparison of sentence and visual narrative responses. ERPs revealed common neural profiles for narrative processing across image and sentence modalities, in the form of early and late central and frontal positivities in response to narrative incoherence. There was an additional posterior positivity for sentences only, in a very late window. These results are discussed in the context of ERP signatures of narrative processing and meaning, and of a current model of narrative comprehension.
Affiliation(s)
- Anne-Lise Jouen
- Department of Neuropsycholinguistics, Université de Genève, Geneva, Switzerland
- Nicolas Cazin
- INSERM UMR1093-CAPS, Université Bourgogne Franche-Comté, UFR des Sciences du Sport, Dijon, France
- Sullivan Hidot
- INSERM UMR1093-CAPS, Université Bourgogne Franche-Comté, UFR des Sciences du Sport, Dijon, France
- Carol Madden-Lombardi
- INSERM UMR1093-CAPS, Université Bourgogne Franche-Comté, UFR des Sciences du Sport, Dijon, France
- Jocelyne Ventre-Dominey
- INSERM UMR1093-CAPS, Université Bourgogne Franche-Comté, UFR des Sciences du Sport, Dijon, France
- Peter Ford Dominey
- INSERM UMR1093-CAPS, Université Bourgogne Franche-Comté, UFR des Sciences du Sport, Dijon, France
7. Hennessy S, Wood A, Wilcox R, Habibi A. Neurophysiological improvements in speech-in-noise task after short-term choir training in older adults. Aging (Albany NY) 2021;13:9468-9495. PMID: 33824226; PMCID: PMC8064162; DOI: 10.18632/aging.202931.
Abstract
Perceiving speech in noise (SIN) is important for health and well-being and declines with age. Musicians show improved speech-in-noise abilities and reduced age-related auditory decline, yet it is unclear whether short-term music engagement has similar effects. In this randomized controlled trial with a pre-post design, we investigated whether a 12-week music intervention improves well-being, speech-in-noise abilities, and auditory encoding and voluntary attention, as indexed by auditory evoked potentials (AEPs) in a syllable-in-noise task and later AEPs in an oddball task, in adults aged 50-65 with subjective hearing loss and no prior music training. Age- and gender-matched adults were randomized to a choir or a control group. Choir participants sang in a 2-h ensemble with 1 h of home vocal training weekly; controls listened to a 3-h playlist weekly, attended concerts, and socialized online with fellow participants. From pre- to post-intervention, no differences between groups were observed on quantitative measures of well-being or behavioral speech-in-noise abilities. In the choir group, but not the control group, changes in the N1 component were observed for the syllable-in-noise task, with increased N1 amplitude in the passive condition and decreased N1 latency in the active condition. During the oddball task, larger N1 amplitudes to the frequent standard stimuli were also observed in the choir group, but not the control group, from pre- to post-intervention. These findings have implications for the potential role of music training in improving sound encoding in individuals who are in a vulnerable age range and at risk of auditory decline.
Affiliation(s)
- Sarah Hennessy
- Brain and Creativity Institute, University of Southern California, Los Angeles, CA 90089, USA
- Alison Wood
- Brain and Creativity Institute, University of Southern California, Los Angeles, CA 90089, USA
- Rand Wilcox
- Department of Psychology, University of Southern California, Los Angeles, CA 90089, USA
- Assal Habibi
- Brain and Creativity Institute, University of Southern California, Los Angeles, CA 90089, USA
8. Hagoort P. The meaning-making mechanism(s) behind the eyes and between the ears. Philos Trans R Soc Lond B Biol Sci 2020;375:20190301. PMID: 31840590; PMCID: PMC6939349; DOI: 10.1098/rstb.2019.0301.
Abstract
In this contribution, the following four questions are discussed: (i) where is meaning?; (ii) what is meaning?; (iii) what is the meaning of mechanism?; (iv) what are the mechanisms of meaning? I will argue that meanings are in the head. Meanings have multiple facets, but minimally one needs to make a distinction between single word meanings (lexical meaning) and the meanings of multi-word utterances. The latter ones cannot be retrieved from memory, but need to be constructed on the fly. A mechanistic account of the meaning-making mind requires an analysis at both a functional and a neural level, the reason being that these levels are causally interdependent. I will show that an analysis exclusively focusing on patterns of brain activation lacks explanatory power. Finally, I shall present an initial sketch of how the dynamic interaction between temporo-parietal areas and inferior frontal cortex might instantiate the interpretation of linguistic utterances in the context of a multimodal setting and ongoing discourse information. This article is part of the theme issue 'Towards mechanistic models of meaning composition'.
Affiliation(s)
- Peter Hagoort
- Max Planck Institute for Psycholinguistics, PO Box 310, 6500 AH Nijmegen, The Netherlands
9. Zinchenko A, Kotz SA, Schröger E, Kanske P. Moving towards dynamics: Emotional modulation of cognitive and emotional control. Int J Psychophysiol 2020;147:193-201. DOI: 10.1016/j.ijpsycho.2019.10.018.
10. Li S, Chen S, Zhang H, Zhao Q, Zhou Z, Huang F, Sui D, Wang F, Hong J. Dynamic cognitive processes of text-picture integration revealed by event-related potentials. Brain Res 2019;1726:146513. PMID: 31669828; DOI: 10.1016/j.brainres.2019.146513.
Abstract
The integration of text and picture is at the core of multimedia information processing. Relevant theories suggest that text and picture are processed through different channels in the early stage of processing and integrated in the late stage. Based on these theories, the current study used event-related potentials to examine the cognitive and neural processes of text-picture integration. The results showed that in the early stage of text-picture integration, picture processing evoked a more negative N1 over the occipital area and an N300 over the prefrontal area, which might reflect the discrimination of visual stimuli and the imagery representation of the picture, respectively. In the late stage, text-picture integration induced an N400 in the central area and an LPC over the central, parietal and temporal areas, which might be associated with semantic activation and with the integration of text and picture, respectively. These results not only support existing theories, but also further elucidate the dynamic neural processing of text-picture integration in terms of its temporal and spatial characteristics.
Affiliation(s)
- Songqing Li
- Key Laboratory of Adolescent Cyberpsychology and Behavior (CCNU), Ministry of Education, Central China Normal University, Wuhan 430079, China; Key Laboratory of Human Development and Mental Health of Hubei Province, School of Psychology, Central China Normal University, Wuhan 430079, China
- Shi Chen
- Key Laboratory of Adolescent Cyberpsychology and Behavior (CCNU), Ministry of Education, Central China Normal University, Wuhan 430079, China; Key Laboratory of Human Development and Mental Health of Hubei Province, School of Psychology, Central China Normal University, Wuhan 430079, China
- Hongpo Zhang
- Key Laboratory of Adolescent Cyberpsychology and Behavior (CCNU), Ministry of Education, Central China Normal University, Wuhan 430079, China; Key Laboratory of Human Development and Mental Health of Hubei Province, School of Psychology, Central China Normal University, Wuhan 430079, China
- Qingbai Zhao
- Key Laboratory of Adolescent Cyberpsychology and Behavior (CCNU), Ministry of Education, Central China Normal University, Wuhan 430079, China; Key Laboratory of Human Development and Mental Health of Hubei Province, School of Psychology, Central China Normal University, Wuhan 430079, China
- Zhijin Zhou
- Key Laboratory of Adolescent Cyberpsychology and Behavior (CCNU), Ministry of Education, Central China Normal University, Wuhan 430079, China; Key Laboratory of Human Development and Mental Health of Hubei Province, School of Psychology, Central China Normal University, Wuhan 430079, China
- Furong Huang
- School of Psychology, Jiangxi Normal University, Nanchang 330022, China
- Danni Sui
- School of Foreign Languages, Shenyang University, Shenyang 110044, China
- Fuxing Wang
- Key Laboratory of Adolescent Cyberpsychology and Behavior (CCNU), Ministry of Education, Central China Normal University, Wuhan 430079, China; Key Laboratory of Human Development and Mental Health of Hubei Province, School of Psychology, Central China Normal University, Wuhan 430079, China
- Jianzhong Hong
- Key Laboratory of Adolescent Cyberpsychology and Behavior (CCNU), Ministry of Education, Central China Normal University, Wuhan 430079, China; Key Laboratory of Human Development and Mental Health of Hubei Province, School of Psychology, Central China Normal University, Wuhan 430079, China
11. No language unification without neural feedback: How awareness affects sentence processing. Neuroimage 2019;202:116063. PMID: 31376519; DOI: 10.1016/j.neuroimage.2019.116063.
Abstract
How does the human brain combine a finite number of words to form an infinite variety of sentences? According to the Memory, Unification and Control (MUC) model, sentence processing requires long-range feedback from the left inferior frontal cortex (LIFC) to left posterior temporal cortex (LPTC). Single word processing however may only require feedforward propagation of semantic information from sensory regions to LPTC. Here we tested the claim that long-range feedback is required for sentence processing by reducing visual awareness of words using a masking technique. Masking disrupts feedback processing while leaving feedforward processing relatively intact. Previous studies have shown that masked single words still elicit an N400 ERP effect, a neural signature of semantic incongruency. However, whether multiple words can be combined to form a sentence under reduced levels of awareness is controversial. To investigate this issue, we performed two experiments in which we measured electroencephalography (EEG) while 40 subjects performed a masked priming task. Words were presented either successively or simultaneously, thereby forming a short sentence that could be congruent or incongruent with a target picture. This sentence condition was compared with a typical single word condition. In the masked condition we only found an N400 effect for single words, whereas in the unmasked condition we observed an N400 effect for both unmasked sentences and single words. Our findings suggest that long-range feedback processing is required for sentence processing, but not for single word processing.
12. Courteau É, Martignetti L, Royle P, Steinhauer K. Eliciting ERP Components for Morphosyntactic Agreement Mismatches in Perfectly Grammatical Sentences. Front Psychol 2019;10:1152. PMID: 31312150; PMCID: PMC6613437; DOI: 10.3389/fpsyg.2019.01152.
Abstract
The present event-related brain potential (ERP) study investigates mechanisms underlying the processing of morphosyntactic information during real-time auditory sentence comprehension in French. Employing an auditory-visual sentence-picture matching paradigm, we investigated two types of anomalies using entirely grammatical auditory stimuli: (i) semantic mismatches between visually presented actions and spoken verbs, and (ii) number mismatches between visually presented agents and corresponding morphosyntactic number markers in the spoken sentences (determiners, pronouns in liaison contexts, and verb-final “inflection”). We varied the type and amount of number cues available in each sentence using two manipulations. First, we manipulated the verb type, by using verbs whose number cue was audible through subject (clitic) pronoun liaison (liaison verbs) as well as verbs whose number cue was audible on the verb ending (consonant-final verbs). Second, we manipulated the pre-verbal context: each sentence was preceded either by a neutral context providing no number cue, or by a subject noun phrase containing a subject number cue on the determiner. Twenty-two French-speaking adults participated in the experiment. While sentence judgment accuracy was high, participants' ERP responses were modulated by the type of mismatch encountered. Lexico-semantic mismatches on the verb elicited the expected N400 and additional negativities. Determiner number mismatches elicited early anterior negativities, N400s and P600s. Verb number mismatches elicited biphasic N400-P600 patterns. However, pronoun + verb liaison mismatches yielded this pattern only in the plural, while consonant-final changes did so in the singular and the plural. Furthermore, an additional sustained frontal negativity was observed in two of the four verb mismatch conditions: plural liaison and singular consonant-final forms. 
This study highlights the different contributions of number cues in oral language processing and is the first to investigate whether auditory-visual mismatches can elicit errors reminiscent of outright grammatical errors. Our results emphasize that neurocognitive mechanisms underlying number agreement in French are modulated by the type of cue that is used to identify auditory-visual mismatches.
Affiliation(s)
- Émilie Courteau
- Faculty of Medicine, School of Speech Language Pathology and Audiology, University of Montreal, Montreal, QC, Canada; Centre for Research on Brain, Language and Music (CRBLM), Montreal, QC, Canada
- Lisa Martignetti
- Centre for Research on Brain, Language and Music (CRBLM), Montreal, QC, Canada; Faculty of Medicine, School of Communication Sciences and Disorders, McGill University, Montreal, QC, Canada
- Phaedra Royle
- Faculty of Medicine, School of Speech Language Pathology and Audiology, University of Montreal, Montreal, QC, Canada; Centre for Research on Brain, Language and Music (CRBLM), Montreal, QC, Canada
- Karsten Steinhauer
- Centre for Research on Brain, Language and Music (CRBLM), Montreal, QC, Canada; Faculty of Medicine, School of Communication Sciences and Disorders, McGill University, Montreal, QC, Canada
13.
Abstract
When we comprehend language, we often do this in rich settings where we can use many cues to understand what someone is saying. However, it has traditionally been difficult to design experiments with rich three-dimensional contexts that resemble our everyday environments, while maintaining control over the linguistic and nonlinguistic information that is available. Here we test the validity of combining electroencephalography (EEG) and virtual reality (VR) to overcome this problem. We recorded electrophysiological brain activity during language processing in a well-controlled three-dimensional virtual audiovisual environment. Participants were immersed in a virtual restaurant while wearing EEG equipment. In the restaurant, participants encountered virtual restaurant guests. Each guest was seated at a separate table with an object on it (e.g., a plate with salmon). The restaurant guest would then produce a sentence (e.g., “I just ordered this salmon.”). The noun in the spoken sentence could either match (“salmon”) or mismatch (“pasta”) the object on the table, creating a situation in which the auditory information was either appropriate or inappropriate in the visual context. We observed a reliable N400 effect as a consequence of the mismatch. This finding validates the combined use of VR and EEG as a tool to study the neurophysiological mechanisms of everyday language comprehension in rich, ecologically valid settings.
14
Schendan HE. Memory influences visual cognition across multiple functional states of interactive cortical dynamics. Psychology of Learning and Motivation 2019. [DOI: 10.1016/bs.plm.2019.07.007]
15
Wang G, Kino M, Yamauchi K, Wang RH. Correlations between features of event-related potentials and Autism Spectrum Quotient scores. J Clin Neurosci 2019; 59:202-208. [DOI: 10.1016/j.jocn.2018.10.070]
16
Draschkow D, Heikel E, Võ MLH, Fiebach CJ, Sassenhagen J. No evidence from MVPA for different processes underlying the N300 and N400 incongruity effects in object-scene processing. Neuropsychologia 2018; 120:9-17. [DOI: 10.1016/j.neuropsychologia.2018.09.016]
17
Hasson U, Egidi G, Marelli M, Willems RM. Grounding the neurobiology of language in first principles: The necessity of non-language-centric explanations for language comprehension. Cognition 2018; 180:135-157. [PMID: 30053570] [PMCID: PMC6145924] [DOI: 10.1016/j.cognition.2018.06.018]
Abstract
Recent decades have ushered in tremendous progress in understanding the neural basis of language. Most of our current knowledge on language and the brain, however, is derived from lab-based experiments that are far removed from everyday language use, and that are inspired by questions originating in linguistic and psycholinguistic contexts. In this paper we argue that in order to make progress, the field needs to shift its focus to understanding the neurobiology of naturalistic language comprehension. We present here a new conceptual framework for understanding the neurobiological organization of language comprehension. This framework is non-language-centered in the computational/neurobiological constructs it identifies, and focuses strongly on context. Our core arguments address three general issues: (i) the difficulty in extending language-centric explanations to discourse; (ii) the necessity of taking context as a serious topic of study, modeling it formally and acknowledging the limitations on external validity when studying language comprehension outside context; and (iii) the tenuous status of the language network as an explanatory construct. We argue that adopting this framework means that neurobiological studies of language will be less focused on identifying correlations between brain activity patterns and mechanisms postulated by psycholinguistic theories. Instead, they will be less self-referential and increasingly more inclined towards integration of language with other cognitive systems, ultimately doing more justice to the neurobiological organization of language and how it supports language as it is used in everyday life.
Affiliation(s)
- Uri Hasson
- Center for Mind/Brain Sciences, The University of Trento, Trento, Italy; Center for Practical Wisdom, The University of Chicago, Chicago, IL, United States.
- Giovanna Egidi
- Center for Mind/Brain Sciences, The University of Trento, Trento, Italy
- Marco Marelli
- Department of Psychology, University of Milano-Bicocca, Milano, Italy; NeuroMI - Milan Center for Neuroscience, Milano, Italy
- Roel M Willems
- Centre for Language Studies & Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands; Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
18
Perniss P. Why We Should Study Multimodal Language. Front Psychol 2018; 9:1109. [PMID: 30002643] [PMCID: PMC6032889] [DOI: 10.3389/fpsyg.2018.01109]
Affiliation(s)
- Pamela Perniss
- School of Humanities, University of Brighton, Brighton, United Kingdom
19
The core and beyond in the language-ready brain. Neurosci Biobehav Rev 2017; 81:194-204. [DOI: 10.1016/j.neubiorev.2017.01.048]
20
Neural correlates of multimodal metaphor comprehension: Evidence from event-related potentials and time-frequency decompositions. Int J Psychophysiol 2016; 109:81-91. [DOI: 10.1016/j.ijpsycho.2016.09.007]
21
Abstract
Hand gestures and speech form a single integrated system of meaning during language comprehension, but is gesture processed with speech in a unique fashion? We had subjects watch multimodal videos that presented auditory (words) and visual (gestures and actions on objects) information. Half of the subjects related the audio information to a written prime presented before the video, and the other half related the visual information to the written prime. For half of the multimodal video stimuli, the audio and visual information contents were congruent, and for the other half, they were incongruent. For all subjects, stimuli in which the gestures and actions were incongruent with the speech produced more errors and longer response times than did stimuli that were congruent, but this effect was less prominent for speech-action stimuli than for speech-gesture stimuli. However, subjects focusing on visual targets were more accurate when processing actions than gestures. These results suggest that although actions may be easier to process than gestures, gestures may be more tightly tied to the processing of accompanying speech.
22
Villena-González M, López V, Rodríguez E. Orienting attention to visual or verbal/auditory imagery differentially impairs the processing of visual stimuli. Neuroimage 2016; 132:71-78. [PMID: 26876471] [DOI: 10.1016/j.neuroimage.2016.02.013]
Abstract
When attention is oriented toward inner thoughts, as spontaneously occurs during mind wandering, the processing of external information is attenuated. However, the potential effects of thought content on sensory attenuation are still unknown. The present study assessed whether the representational format of thoughts, such as visual imagery or inner speech, differentially affects the sensory processing of external stimuli. We recorded the brain activity of 20 participants (12 women) while they were exposed to a probe visual stimulus in three conditions: executing a task on the visual probe (externally oriented attention) and two conditions involving inward-turned attention, namely generating inner speech and performing visual imagery. Event-related potential results showed that the P1 amplitude, related to the sensory response, was significantly attenuated during both tasks involving inward attention compared with the external task. When the two representational formats were compared, the visual imagery condition showed stronger attenuation of sensory processing than the inner speech condition. Alpha power in visual areas was measured as an index of cortical inhibition. Larger alpha amplitude was found when participants engaged in an internal thought compared with the external task, with visual imagery showing even greater alpha power than inner speech. Our results show, for the first time to our knowledge, that visual attentional processing of external stimuli during self-generated thoughts is differentially affected by the representational format of the ongoing train of thoughts.
Affiliation(s)
- Mario Villena-González
- Escuela de Psicología, Pontificia Universidad Católica de Chile, Santiago CP 7820436, Chile
- Vladimir López
- Escuela de Psicología, Pontificia Universidad Católica de Chile, Santiago CP 7820436, Chile
- Eugenio Rodríguez
- Escuela de Psicología, Pontificia Universidad Católica de Chile, Santiago CP 7820436, Chile
23
Özyürek A. Hearing and seeing meaning in speech and gesture: insights from brain and behaviour. Philos Trans R Soc Lond B Biol Sci 2014; 369:20130296. [PMID: 25092664] [DOI: 10.1098/rstb.2013.0296]
Abstract
As we speak, we use not only the arbitrary form-meaning mappings of the speech channel but also motivated form-meaning correspondences, i.e. iconic gestures that accompany speech (e.g. an inverted V-shaped hand wiggling across gesture space to demonstrate walking). This article reviews what we know about the processing of semantic information from speech and iconic gestures in spoken languages during comprehension of such composite utterances. Several studies have shown that comprehension of iconic gestures involves brain activations known to be involved in semantic processing of speech: modulation of the electrophysiological component N400, which is sensitive to the ease of semantic integration of a word into the previous context, and recruitment of the left-lateralized frontal-posterior temporal network (left inferior frontal gyrus (IFG), medial temporal gyrus (MTG) and superior temporal gyrus/sulcus (STG/S)). Furthermore, integrating the information coming from both channels recruits brain areas such as left IFG, posterior superior temporal sulcus (STS)/MTG and even motor cortex. Finally, this integration is flexible: the temporal synchrony between the iconic gesture and the speech segment, as well as the perceived communicative intent of the speaker, modulates the integration process. Whether these findings are specific to gestures or shared with actions, other visual accompaniments to speech (e.g. lips), or other visual symbols such as pictures is discussed, as well as the implications for a multimodal view of language.
Collapse
Affiliation(s)
- Aslı Özyürek
- Department of Linguistics, Radboud University Nijmegen, Erasmusplein 1, 6500 HD Nijmegen, The Netherlands; Max Planck Institute for Psycholinguistics, Wundtlaan 1, 6525 JT Nijmegen, The Netherlands
24
Xue J, Marmolejo-Ramos F, Pei X. The linguistic context effects on the processing of body-object interaction words: An ERP study on second language learners. Brain Res 2015; 1613:37-48. [PMID: 25858488] [DOI: 10.1016/j.brainres.2015.03.050]
Abstract
Embodied theories of cognition argue that the processing of both concrete and abstract concepts requires the activation of sensorimotor systems. The present study examined the time course for embedding a sensorimotor context in order to elicit sensitivity to the sensorimotor consequences of understanding body-object interaction (BOI) words. Event-related potentials (ERPs) were recorded while subjects performed a sentence acceptability task. Target BOI words were preceded by rich or poor sensorimotor sentential contexts. The behavioural results replicated previous findings in that high-BOI words were responded to faster than low-BOI words. In addition, however, there was a context effect in the sensorimotor region as well as a BOI effect in the parietal region (involved in object representation). The results indicate that the sentential sensorimotor context contributes to subsequent BOI processing and that action- and perception-related language activates the same brain areas, consistent with embodiment theory.
Collapse
Affiliation(s)
- Jin Xue
- School of English Language, Literature and Culture and Centre for Language and Cognition, Beijing International Studies University, Beijing, China.
- Xuna Pei
- School of English Language, Literature and Culture and Centre for Language and Cognition, Beijing International Studies University, Beijing, China
25
Kiang M, Christensen BK, Zipursky RB. Event-related brain potential study of semantic priming in unaffected first-degree relatives of schizophrenia patients. Schizophr Res 2014; 153:78-86. [PMID: 24451397] [DOI: 10.1016/j.schres.2014.01.001]
Abstract
Schizophrenia is associated with abnormalities in using meaningful stimuli to activate or prime related concepts in semantic long-term memory. A neurophysiological index of this activation is the N400, an event-related brain potential (ERP) waveform elicited by meaningful stimuli, which is normally reduced (made less negative) by relatedness between the eliciting stimulus and preceding ones (N400 semantic priming). Schizophrenia patients exhibit N400 semantic priming deficits, suggesting impairment in using meaningful context to activate related concepts. To address whether this abnormality is a trait-like marker of liability to schizophrenia or, alternatively, a biomarker of the illness itself, we tested for its presence in schizophrenia patients' unaffected biological relatives. We recorded ERPs from 12 unaffected first-degree relatives of schizophrenia patients, 12 schizophrenia patients, and 12 normal control participants (NCPs) who viewed prime words each followed at 300- or 750-ms stimulus-onset asynchrony (SOA) by an unrelated or related target word, or a nonword, in a lexical-decision task. As expected, across SOAs, NCPs exhibited smaller (less negative) N400 amplitudes for related versus unrelated targets. The same pattern held in relatives, whose N400 amplitudes for related and unrelated targets did not differ from NCPs'. In contrast, consistent with previous results, schizophrenia patients exhibited larger N400 amplitudes than NCPs (and relatives) for related targets, such that patients' N400 amplitudes for related and unrelated targets did not differ. N400 amplitudes for unrelated targets did not differ between the three groups. Thus, N400 semantic priming deficits in a visual word-pair paradigm may be an illness biomarker for schizophrenia.
Affiliation(s)
- Michael Kiang
- Department of Psychiatry and Behavioural Neurosciences, McMaster University, Hamilton, Ontario, Canada; St. Joseph's Healthcare, Hamilton, Ontario, Canada.
- Bruce K Christensen
- Department of Psychiatry and Behavioural Neurosciences, McMaster University, Hamilton, Ontario, Canada; St. Joseph's Healthcare, Hamilton, Ontario, Canada
- Robert B Zipursky
- Department of Psychiatry and Behavioural Neurosciences, McMaster University, Hamilton, Ontario, Canada; St. Joseph's Healthcare, Hamilton, Ontario, Canada
26
Dikker S, Pylkkänen L. Predicting language: MEG evidence for lexical preactivation. Brain Lang 2013; 127:55-64. [PMID: 23040469] [DOI: 10.1016/j.bandl.2012.08.004]
Abstract
It is widely assumed that prediction plays a substantial role in language processing. However, despite numerous studies demonstrating that contextual information facilitates both syntactic and lexical-semantic processing, there exists no direct evidence pertaining to the neural correlates of the prediction process itself. Using magnetoencephalography (MEG), this study found that brain activity was modulated by whether or not a specific noun could be predicted, given a picture prime. Specifically, before the noun was presented, predictive contexts triggered enhanced activation in left mid-temporal cortex (implicated in lexical access), ventro-medial prefrontal cortex (previously associated with top-down processing), and visual cortex (hypothesized to index the preactivation of predicted form features), successively. This finding suggests that predictive language processing recruits a top-down network where predicted words are activated at different levels of representation, from more 'abstract' lexical-semantic representations in temporal cortex, all the way down to visual word form features. The same brain regions that exhibited enhanced activation for predictive contexts before the onset of the noun showed effects of congruence during the target word. To our knowledge, this study is one of the first to directly investigate the anticipatory stage of predictive language processing.
Affiliation(s)
- Suzanne Dikker
- Sackler Institute for Developmental Psychobiology, Weill Cornell Medical College, NY, USA; New York University, Department of Psychology, NY, USA.
27
Guan CQ, Meng W, Yao R, Glenberg AM. The motor system contributes to comprehension of abstract language. PLoS One 2013; 8:e75183. [PMID: 24086463] [PMCID: PMC3784420] [DOI: 10.1371/journal.pone.0075183]
Abstract
If language comprehension requires a sensorimotor simulation, how can abstract language be comprehended? We show that preparation to respond in an upward or downward direction affects comprehension of the abstract quantifiers "more and more" and "less and less" as indexed by an N400-like component. Conversely, the semantic content of the sentence affects the motor potential measured immediately before the upward or downward action is initiated. We propose that this bidirectional link between motor system and language arises because the motor system implements forward models that predict the sensory consequences of actions. Because the same movement (e.g., raising the arm) can have multiple forward models for different contexts, the models can make different predictions depending on whether the arm is raised, for example, to place an object or raised as a threat. Thus, different linguistic contexts invoke different forward models, and the predictions constitute different understandings of the language.
Affiliation(s)
- Connie Qun Guan
- University of Science and Technology, Beijing, China
- Florida State University and Florida Center for Reading Research, Tallahassee, Florida, United States of America
- Wanjin Meng
- Florida State University and Florida Center for Reading Research, Tallahassee, Florida, United States of America
- National Institute of Education Sciences, Beijing, China
- Ru Yao
- National Institute of Education Sciences, Beijing, China
- Arthur M. Glenberg
- Arizona State University, Tempe, Arizona, United States of America
- University of Wisconsin-Madison, Madison, Wisconsin, United States of America
28
An ERP study of motor compatibility effects in action language. Brain Res 2013; 1526:71-83. [DOI: 10.1016/j.brainres.2013.06.020]
29
Abstract
A neurobiological model of language is discussed that overcomes the shortcomings of the classical Wernicke-Lichtheim-Geschwind model. It is based on a subdivision of language processing into three components: Memory, Unification, and Control. The functional components as well as the neurobiological underpinnings of the model are discussed. In addition, the need for extension of the model beyond the classical core regions for language is shown. The attention network and the network for inferential processing are crucial to realize language comprehension beyond single word processing and beyond decoding propositional content. It is shown that this requires the dynamic interaction between multiple brain regions.
Affiliation(s)
- Peter Hagoort
- Donders Institute for Brain, Cognition and Behaviour, Max Planck Institute for Psycholinguistics, Radboud University Nijmegen, Nijmegen, Netherlands
30
Hubbard AL, McNealy K, Scott-Van Zeeland AA, Callan DE, Bookheimer SY, Dapretto M. Altered integration of speech and gesture in children with autism spectrum disorders. Brain Behav 2012; 2:606-19. [PMID: 23139906] [PMCID: PMC3489813] [DOI: 10.1002/brb3.81]
Abstract
The presence of gesture during speech has been shown to impact perception, comprehension, learning, and memory in normal adults and typically developing children. In neurotypical individuals, the impact of viewing co-speech gestures representing an object and/or action (i.e., iconic gesture) or speech rhythm (i.e., beat gesture) has also been observed at the neural level. Yet, despite growing evidence of delayed gesture development in children with autism spectrum disorders (ASD), few studies have examined how the brain processes multimodal communicative cues occurring during everyday communication in individuals with ASD. Here, we used a previously validated functional magnetic resonance imaging (fMRI) paradigm to examine the neural processing of co-speech beat gesture in children with ASD and matched controls. Consistent with prior observations in adults, typically developing children showed increased responses in right superior temporal gyrus and sulcus while listening to speech accompanied by beat gesture. Children with ASD, however, exhibited no significant modulatory effects in secondary auditory cortices for the presence of co-speech beat gesture. Rather, relative to their typically developing counterparts, children with ASD showed significantly greater activity in visual cortex while listening to speech accompanied by beat gesture. Importantly, the severity of their socio-communicative impairments correlated with activity in this region, such that the more impaired children demonstrated the greatest activity in visual areas while viewing co-speech beat gesture. These findings suggest that although the typically developing brain recognizes beat gesture as communicative and successfully integrates it with co-occurring speech, information from multiple sensory modalities is not effectively integrated during social communication in the autistic brain.
Affiliation(s)
- Amy L Hubbard
- Ahmanson-Lovelace Brain Mapping Center, University of California Los Angeles, California; Department of Modern Languages, Carnegie Mellon University, Pittsburgh, Pennsylvania; Department of Computational Brain Imaging, Neural Information Analysis Laboratories, Kyoto, Japan
31
Wang X, Ma Q, Wang C. N400 as an index of uncontrolled categorization processing in brand extension. Neurosci Lett 2012; 525:76-81. [PMID: 22884930] [DOI: 10.1016/j.neulet.2012.07.043]
Abstract
This study examined the ERP (event-related potential) correlates of categorization processing in brand extension using a task in which the categorization was irrelevant. Participants viewed two sequential stimuli in a pair consisting of a soft drink brand name (S1) and a product name (S2) from one of two categories: beverage (a typical product of the brand, e.g. Coke-branded soda water) and clothing (an atypical product of the brand, even though it is sometimes seen in the real market, e.g. Coke-branded sportswear). The N400 was recorded and was larger over frontal, frontal-central and central areas when S2 was clothing compared with beverage. Because the study did not require participants to evaluate whether the brand extension was appropriate, the N400 recorded here was unrelated to task difficulty and to conscious categorization; we speculate that it reflected an integration process tied to the mental category. The brand served as a prime that aroused participants' associations of brand-related typical products and attributes retrieved from long-term memory. The product name then activated an unconscious comparison between the brand and the product: participants treated the brand as a mental category and classified the product as a member of it. A large cognitive reaction, eliciting the N400, occurred when the product's attributes were atypical of the brand's category. These findings may help us understand the N400 component in unconscious mental categorization, and they support the categorization hypotheses of brand extension theory, which are crucial in consumer psychology.
Affiliation(s)
- Xiaoyi Wang
- School of Management, Zhejiang University, PR China
32
Russo N, Mottron L, Burack JA, Jemel B. Parameters of semantic multisensory integration depend on timing and modality order among people on the autism spectrum: evidence from event-related potentials. Neuropsychologia 2012; 50:2131-41. [PMID: 22613013] [DOI: 10.1016/j.neuropsychologia.2012.05.003]
Abstract
Individuals with autism spectrum disorders (ASD) report difficulty integrating simultaneously presented visual and auditory stimuli (Iarocci & McDonald, 2006), albeit showing enhanced perceptual processing of unisensory stimuli, as well as an enhanced role of perception in higher-order cognitive tasks (Enhanced Perceptual Functioning (EPF) model; Mottron, Dawson, Soulières, Hubert, & Burack, 2006). Individuals with an ASD also integrate auditory-visual inputs over longer periods of time than matched typically developing (TD) peers (Kwakye, Foss-Feig, Cascio, Stone & Wallace, 2011). To tease apart this dichotomy between extended multisensory processing and enhanced perceptual processing, we used behavioral and electrophysiological measures of audio-visual integration among persons with ASD. Thirteen TD individuals and 14 autistics matched on IQ completed a forced-choice multisensory semantic congruence task requiring speeded responses regarding the congruence or incongruence of animal sounds and pictures. Stimuli were presented simultaneously or sequentially at various stimulus onset asynchronies in both auditory-first and visual-first presentations. No group differences were noted in reaction time (RT) or accuracy. The latency at which congruent and incongruent waveforms diverged was the component of interest. In simultaneous presentations, congruent and incongruent waveforms diverged earlier (circa 150 ms) among persons with ASD than among TD individuals (around 350 ms). In sequential presentations, asymmetries in the timing of neuronal processing were noted in ASD that depended on stimulus order, but these were consistent with the nature of specific perceptual strengths in this group. These findings extend the Enhanced Perceptual Functioning model to the multisensory domain, and provide a more nuanced context for interpreting ERP findings of impaired semantic processing in ASD.
Affiliation(s)
- N Russo
- Syracuse University, Psychology Department, 403 Huntington Hall, Syracuse, NY 13244, USA.
33
Hirschfeld G, Feldker K, Zwitserlood P. Listening to "flying ducks": individual differences in sentence-picture verification investigated with ERPs. Psychophysiology 2011; 49:312-21. [PMID: 22176030] [DOI: 10.1111/j.1469-8986.2011.01315.x]
Abstract
The present ERP study investigated individual differences in the integration of verbal descriptions and visual object representations. Participants saw pictures of objects (e.g., a swimming duck) after listening to noun phrases describing the same object in the identical state, a shape-mismatching state ("flying duck"), or an incongruent object (e.g., "sliced bread"). Individual differences in the vividness of mental imagery and preference for mental imagery were assessed after the experiment. ERP effects of context arose 170 ms after picture onset, differentiating the incongruent-object context from the other two. The N400 mirrored these context effects. Self-rated vividness of imagery affected responses to pictures already after 100 ms, and modulated the N400 effect. Participants with highly vivid imagery showed larger context effects than participants low in imagery. The context effects at 170 ms were not modulated by individual differences.
Affiliation(s)
- Gerrit Hirschfeld
- Institut für Psychologie, Westfälische Wilhelms-Universität Münster, Münster, Germany.
34
The role of left inferior frontal gyrus in explicit and implicit semantic processing. Brain Res 2011; 1440:56-64. [PMID: 22284615] [DOI: 10.1016/j.brainres.2011.11.060]
Abstract
Using event-related functional MRI, we examined the involvement of the left inferior frontal gyrus (LIFG) in explicit and implicit semantic processing of Chinese sentences. During scanning, Chinese readers read individually presented normal sentences with a contextually expected or unexpected target noun and were asked to perform an explicit or implicit semantic task (semantic or syntactic violation judgment). The conjunction analysis of the two tasks revealed LIFG as the critical brain region for semantic integration. Further, a cross-task comparison showed more extensive activations for the expectancy effect in the explicit task than in the implicit task in regions including bilateral anterior cingulate cortex/dorsolateral prefrontal cortex, left middle temporal gyrus, and right inferior frontal gyrus. These results indicate that LIFG is responsible for the integration process per se and that other brain regions observed in previous studies using explicit semantic tasks may be due to task-induced generic processes (e.g., cognitive control).
35
Wu YC, Coulson S. Are depictive gestures like pictures? Commonalities and differences in semantic processing. Brain Lang 2011; 119:184-195. [PMID: 21864890] [PMCID: PMC3196291] [DOI: 10.1016/j.bandl.2011.07.002]
Abstract
Conversation is multi-modal, involving both talk and gesture. Does understanding depictive gestures engage processes similar to those recruited in the comprehension of drawings or photographs? Event-related brain potentials (ERPs) were recorded from neurotypical adults as they viewed spontaneously produced depictive gestures preceded by congruent and incongruent contexts. Gestures were presented either dynamically in short, soundless video-clips, or statically as freeze frames extracted from gesture videos. In a separate ERP experiment, the same participants viewed related or unrelated pairs of photographs depicting common real-world objects. Both object photos and gesture stimuli elicited less negative ERPs from 400 to 600 ms post-stimulus when preceded by matching versus mismatching contexts (dN450). Object photos and static gesture stills also elicited less negative ERPs between 300 and 400 ms post-stimulus (dN300). Findings demonstrate commonalities between the conceptual integration processes underlying the interpretation of iconic gestures and other types of image-based representations of the visual world.
Collapse
Affiliation(s)
- Ying Choon Wu
- Center for Research in Language, UC San Diego 0526, 9500 Gilman Dr., La Jolla, CA 92093
- Swartz Center for Computational Neuroscience, UC San Diego 0559, 9500 Gilman Dr., La Jolla, CA 92093
| | - Seana Coulson
- Center for Research in Language, UC San Diego 0526, 9500 Gilman Dr., La Jolla, CA 92093
- UC San Diego, Dept. of Cognitive Science 0515, 9500 Gilman Dr., La Jolla, CA 92093
36
Baggio G, Hagoort P. The balance between memory and unification in semantics: A dynamic account of the N400. Lang Cogn Process 2011. [DOI: 10.1080/01690965.2010.542671]
37
Kutas M, Federmeier KD. Thirty years and counting: finding meaning in the N400 component of the event-related brain potential (ERP). Annu Rev Psychol 2011; 62:621-47. [PMID: 20809790] [DOI: 10.1146/annurev.psych.093008.131123]
Abstract
We review the discovery, characterization, and evolving use of the N400, an event-related brain potential response linked to meaning processing. We describe the elicitation of N400s by an impressive range of stimulus types--including written, spoken, and signed words or pseudowords; drawings, photos, and videos of faces, objects, and actions; sounds; and mathematical symbols--and outline the sensitivity of N400 amplitude (as its latency is remarkably constant) to linguistic and nonlinguistic manipulations. We emphasize the effectiveness of the N400 as a dependent variable for examining almost every aspect of language processing and highlight its expanding use to probe semantic memory and to determine how the neurocognitive system dynamically and flexibly uses bottom-up and top-down information to make sense of the world. We conclude with different theories of the N400's functional significance and offer an N400-inspired reconceptualization of how meaning processing might unfold.
Affiliation(s)
- Marta Kutas
- Department of Cognitive Science, Center for Research in Language, Kavli Institute for Brain and Mind, University of California, San Diego, California 92093, USA.
38
Van Balkom H, Verhoeven L. Literacy learning in users of AAC: A neurocognitive perspective. Augment Altern Commun 2010; 26:149-57. [PMID: 20874078] [DOI: 10.3109/07434618.2010.505610]
Abstract
The understanding of written or printed text or discourse - depicted either in orthographical, graphic-visual or tactile symbols - calls upon both bottom-up word recognition processes and top-down comprehension processes. Different architectures have been proposed to account for literacy processes. Research has shown that the first steps in perceiving, processing and deriving conceptual meaning from words, graphic symbols, manual signs, and co-speech gestures or tactile manual signing and tangible symbols can be seen as identical and collectively (sub)activated. Results from recent brain research and neurolinguistics have revealed new insights into the reading process of typical and atypical readers and may provide verifiable evidence for improved literacy assessment and the validation of early intervention programs for AAC users.
Affiliation(s)
- Hans Van Balkom
- Behavioural Science Institute Radboud University Nijmegen, The Netherlands.
39
Shinkareva SV, Malave VL, Mason RA, Mitchell TM, Just MA. Commonality of neural representations of words and pictures. Neuroimage 2011; 54:2418-25. [DOI: 10.1016/j.neuroimage.2010.10.042]
40
Hirschfeld G, Zwitserlood P, Dobel C. Effects of language comprehension on visual processing - MEG dissociates early perceptual and late N400 effects. Brain Lang 2011; 116:91-96. [PMID: 20708788] [DOI: 10.1016/j.bandl.2010.07.002]
Abstract
We investigated whether and when information conveyed by spoken language impacts on the processing of visually presented objects. In contrast to traditional views, grounded cognition posits direct links between language comprehension and perceptual processing. We used a magnetoencephalographic cross-modal priming paradigm to disentangle these views. In a sentence-picture verification task, pictures (e.g. of a flying duck) were paired with three sentence conditions: a feature-matching sentence about a duck in the air, a feature-mismatching sentence about a duck in a lake, and an unrelated sentence. Brain responses to pictures showed enhanced activity in the N400 time-window for the unrelated compared to both related conditions in the left temporal lobe. The M1 time-window revealed more activation for the feature-matching than for the other two conditions in the occipital cortex. These dissociable effects on early visual processing and semantic integration support models in which language comprehension engages two complementary systems, a perceptual and an abstract one.
Affiliation(s)
- Gerrit Hirschfeld
- Department of Psychology, University of Muenster, Fliednerstr. 21, 48149 Münster, Germany.
41
Pinheiro AP, Galdo-Álvarez S, Sampaio A, Niznikiewicz M, Gonçalves OF. Electrophysiological correlates of semantic processing in Williams syndrome. Res Dev Disabil 2010; 31:1412-1425. [PMID: 20674263] [DOI: 10.1016/j.ridd.2010.06.017]
Abstract
Williams syndrome (WS), a genetic neurodevelopmental disorder due to a microdeletion on chromosome 7, has been described as a syndrome with an intriguing socio-cognitive phenotype. Cognitively, the relative preservation of language and face processing abilities coexists with severe deficits in visual-spatial tasks, as well as in tasks involving abstract reasoning. However, in spite of early claims of the independence of language from general cognition in WS, a detailed investigation of language subcomponents has demonstrated several abnormalities in lexical-semantic processing. Nonetheless, the neurobiological processes underlying language processing in Williams syndrome remain to be clarified. The aim of this study was to examine the electrophysiological correlates of semantic processing in WS, taking typical development as a reference. A group of 12 individuals diagnosed with Williams syndrome, with age range between 9 and 31 years, was compared with a group of typically developing participants, individually matched in chronological age, gender and handedness. Participants were presented with sentences that ended with words incongruent (50%) with the previous sentence context or with words judged to be its best completion (50%), and they were asked to decide if the sentence made sense or not. Results suggest atypical sensory ERP components (N100 and P200), preserved N400 amplitude, and abnormal P600 in WS, with the latter being related to late integration and re-analysis processes. These results may represent a physiological signature of underlying impaired on-line language processing in this disorder.
Affiliation(s)
- Ana P Pinheiro
- Neuropsychophysiology Lab, CiPsi, School of Psychology, University of Minho, Braga, Portugal.
42
Willems RM, Varley R. Neural Insights into the Relation between Language and Communication. Front Hum Neurosci 2010; 4:203. [PMID: 21151364] [PMCID: PMC2996040] [DOI: 10.3389/fnhum.2010.00203]
Abstract
The human capacity to communicate has been hypothesized to be causally dependent upon language. Intuitively this seems plausible since most communication relies on language. Moreover, intention recognition abilities (as a necessary prerequisite for communication) and language development seem to co-develop. Here we review evidence from neuroimaging as well as from neuropsychology to evaluate the relationship between communicative and linguistic abilities. Our review indicates that communicative abilities are best considered as neurally distinct from language abilities. This conclusion is based upon evidence showing that humans rely on different cortical systems when designing a communicative message for someone else as compared to when performing core linguistic tasks, as well as upon observations of individuals with severe language loss after extensive lesions to the language system, who are still able to perform tasks involving intention understanding.
Affiliation(s)
- Roel M Willems
- Helen Wills Neuroscience Institute, University of California Berkeley Berkeley, CA, USA
43
van Elk M, van Schie H, Bekkering H. The N400-concreteness effect reflects the retrieval of semantic information during the preparation of meaningful actions. Biol Psychol 2010; 85:134-42. [DOI: 10.1016/j.biopsycho.2010.06.004]
44
Khateb A, Pegna AJ, Landis T, Mouthon MS, Annoni JM. On the origin of the N400 effects: an ERP waveform and source localization analysis in three matching tasks. Brain Topogr 2010; 23:311-20. [PMID: 20549553] [DOI: 10.1007/s10548-010-0149-7]
Abstract
The question of the cognitive nature and the cerebral origins of the event-related potential (ERP) N400 component has frequently been debated. Here, the N400 effects were analyzed in three tasks. In the semantic task, subjects decided whether sequentially presented word pairs were semantically related or unrelated. In the phonologic (rhyme detection) task, they decided if words were phonologically related or not. In the image categorization task, they decided whether images were categorically related or not. Difference waves between ERPs to unrelated and related conditions (defined here as the N400 effect) demonstrated a greater amplitude and an earlier peak latency effect in the image than in semantic and phonologic tasks. In contrast, spatial correlation analysis revealed that the maps computed during the peak of the N400 effects were highly correlated. Source localization computed from these maps showed the involvement in all tasks of the middle/superior temporal gyrus. Our results suggest that these qualitatively similar N400 effects index the same cognitive content despite differences in the representational formats (words vs. images) and the types of mismatch (semantic vs. phonological) across tasks.
Affiliation(s)
- Asaid Khateb
- The Edmond J. Safra Brain Research Center for the Study of Learning Disabilities and the Department of Learning Disabilities, Faculty of Education, University of Haifa, Mount Carmel, Haifa, 31905, Israel.
45
Willems RM, Clevis K, Hagoort P. Add a picture for suspense: neural correlates of the interaction between language and visual information in the perception of fear. Soc Cogn Affect Neurosci 2010; 6:404-16. [PMID: 20530540] [DOI: 10.1093/scan/nsq050]
Abstract
We investigated how visual and linguistic information interact in the perception of emotion. We borrowed a phenomenon from film theory which states that presentation of a visual scene that is in itself neutral intensifies the percept of fear or suspense induced by a different channel of information, such as language. Our main aim was to investigate how neutral visual scenes can enhance responses to fearful language content in parts of the brain involved in the perception of emotion. Healthy participants' brain activity was measured (using functional magnetic resonance imaging) while they read fearful and less fearful sentences presented with or without a neutral visual scene. The main idea is that the visual scenes intensify the fearful content of the language by subtly implying and concretizing what is described in the sentence. Activation levels in the right anterior temporal pole were selectively increased when a neutral visual scene was paired with a fearful sentence, compared to reading the sentence alone, as well as to reading non-fearful sentences presented with the same neutral scene. We conclude that the right anterior temporal pole serves a binding function of emotional information across domains such as visual and linguistic information.
Affiliation(s)
- Roel M Willems
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, PO Box 9101, 6500 HB Nijmegen, The Netherlands.
46
Snijders TM, Petersson KM, Hagoort P. Effective connectivity of cortical and subcortical regions during unification of sentence structure. Neuroimage 2010; 52:1633-44. [PMID: 20493954] [DOI: 10.1016/j.neuroimage.2010.05.035]
Abstract
In a recent fMRI study we showed that left posterior middle temporal gyrus (LpMTG) subserves the retrieval of a word's lexical-syntactic properties from the mental lexicon (long-term memory), while left posterior inferior frontal gyrus (LpIFG) is involved in unifying (on-line integration of) this information into a sentence structure (Snijders et al., 2009). In addition, the right IFG, right MTG, and the right striatum were involved in the unification process. Here we report results from a psychophysiological interaction (PPI) analysis in which we investigated the effective connectivity between LpIFG and LpMTG during unification, and how the right hemisphere areas and the striatum are functionally connected to the unification network. LpIFG and LpMTG both showed enhanced connectivity during the unification process with a region slightly superior to our previously reported LpMTG. Right IFG better predicted right temporal activity when unification processes were more strongly engaged, just as LpIFG better predicted left temporal activity. Furthermore, the striatum showed enhanced coupling to LpIFG and LpMTG during unification. We conclude that bilateral inferior frontal and posterior temporal regions are functionally connected during sentence-level unification. Cortico-subcortical connectivity patterns suggest cooperation between inferior frontal and striatal regions in performing unification operations on lexical-syntactic representations retrieved from LpMTG.
Affiliation(s)
- Tineke M Snijders
- Radboud University Nijmegen, Donders Institute for Brain, Cognition and Behaviour, Centre for Cognitive Neuroimaging, The Netherlands.
47
Kelly SD, Creigh P, Bartolotti J. Integrating Speech and Iconic Gestures in a Stroop-like Task: Evidence for Automatic Processing. J Cogn Neurosci 2010; 22:683-94. [DOI: 10.1162/jocn.2009.21254]
Abstract
Previous research has demonstrated a link between language and action in the brain. The present study investigates the strength of this neural relationship by focusing on a potential interface between the two systems: cospeech iconic gesture. Participants performed a Stroop-like task in which they watched videos of a man and a woman speaking and gesturing about common actions. The videos differed as to whether the gender of the speaker and gesturer was the same or different and whether the content of the speech and gesture was congruent or incongruent. The task was to identify whether a man or a woman produced the spoken portion of the videos while accuracy rates, RTs, and ERPs were recorded to the words. Although not relevant to the task, participants paid attention to the semantic relationship between the speech and the gesture, producing a larger N400 to words accompanied by incongruent versus congruent gestures. In addition, RTs were slower to incongruent versus congruent gesture–speech stimuli, but this effect was greater when the gender of the gesturer and speaker was the same versus different. These results suggest that the integration of gesture and speech during language comprehension is automatic but also under some degree of neurocognitive control.
48
Bobes MA, García YF, Lopera F, Quiroz YT, Galán L, Vega M, Trujillo N, Valdes-Sosa M, Valdes-Sosa P. ERP generator anomalies in presymptomatic carriers of the Alzheimer's disease E280A PS-1 mutation. Hum Brain Mapp 2010; 31:247-65. [PMID: 19650138] [DOI: 10.1002/hbm.20861]
Abstract
Although subtle anatomical anomalies long precede the onset of clinical symptoms in Alzheimer's disease, their impact on the reorganization of brain networks underlying cognitive functions has not been fully explored. A unique window into this reorganization is provided by presymptomatic cases of familial Alzheimer's disease (FAD). Here we studied neural circuitry related to semantic processing in presymptomatic FAD cases by estimating the intracranial sources of the N400 event-related potential (ERP). ERPs were obtained during a semantic-matching task from 24 presymptomatic carriers and 25 symptomatic carriers of the E280A presenilin-1 (PS-1) mutation, as well as 27 noncarriers (from the same families). As expected, the symptomatic-carrier group performed worse in the matching task and had lower N400 amplitudes than both asymptomatic groups, which did not differ from each other on these variables. However, N400 topography differed in mutation carrier groups with respect to the noncarriers. Intracranial source analysis evinced that the presymptomatic-carriers presented a decrease of N400 generator strength in right inferior-temporal and medial cingulate areas and increased generator strength in the left hippocampus and parahippocampus compared to the controls. This represents alterations in neural function without translation into behavioral impairments. Compared to controls, the symptomatic-carriers presented a similar anatomical shift in the distribution of N400 generators to that found in presymptomatic-carriers, albeit with a larger reduction in generator strength. The redistribution of N400 generators in presymptomatic-carriers indicates that early focal degeneration associated with the mutation induces neural reorganization, possibly contributing to a functional compensation that enables normal performance in the semantic task.
Affiliation(s)
- María A Bobes
- Cognitive Neuroscience Department, Cuban Center for Neuroscience, Havana, Cuba.
49
Habets B, Kita S, Shao Z, Ozyurek A, Hagoort P. The role of synchrony and ambiguity in speech-gesture integration during comprehension. J Cogn Neurosci 2010; 23:1845-54. [PMID: 20201632] [DOI: 10.1162/jocn.2010.21462]
Abstract
During face-to-face communication, one not only hears speech but also sees a speaker's communicative hand movements. It has been shown that such hand gestures play an important role in communication, where the two modalities influence each other's interpretation. A gesture typically temporally overlaps with coexpressive speech, but the gesture is often initiated before (but not after) the coexpressive speech. The present ERP study investigated what degree of asynchrony between speech and gesture onsets is optimal for semantic integration of the concurrent gesture and speech. Videos of a person gesturing were combined with speech segments that were either semantically congruent or incongruent with the gesture. Although gesture and speech always overlapped in time, they were presented with three different degrees of asynchrony. In the SOA 0 condition, the gesture onset and the speech onset were simultaneous. In the SOA 160 and 360 conditions, speech was delayed by 160 and 360 msec, respectively. ERPs time locked to speech onset showed a significant difference between semantically congruent versus incongruent gesture-speech combinations on the N400 for the SOA 0 and 160 conditions. No significant difference was found for the SOA 360 condition. These results imply that speech and gesture are integrated most efficiently when the difference in onsets does not exceed a certain time span, presumably because iconic gestures need speech to disambiguate them in a way relevant to the speech context.
50
Lau E, Almeida D, Hines PC, Poeppel D. A lexical basis for N400 context effects: evidence from MEG. Brain Lang 2009; 111:161-72. [PMID: 19815267] [PMCID: PMC2783912] [DOI: 10.1016/j.bandl.2009.08.007]
Abstract
The electrophysiological response to words during the 'N400' time window (approximately 300-500 ms post-onset) is affected by the context in which the word is presented, but whether this effect reflects the impact of context on access of the stored lexical information itself or, alternatively, post-access integration processes is still an open question with substantive theoretical consequences. One challenge for integration accounts is that contexts that seem to require different levels of integration for incoming words (i.e., sentence frames vs. prime words) have similar effects on the N400 component measured in ERP. In this study we compare the effects of these different context types directly, in a within-subject design using MEG, which provides a better opportunity for identifying topographical differences between electrophysiological components, due to the minimal spatial distortion of the MEG signal. We find a qualitatively similar contextual effect for both sentence frame and prime-word contexts, although the effect is smaller in magnitude for shorter word prime contexts. Additionally, we observe no difference in response amplitude between sentence endings that are explicitly incongruent and target words that are simply part of an unrelated pair. These results suggest that the N400 effect does not reflect semantic integration difficulty. Rather, the data are consistent with an account in which N400 reduction reflects facilitated access of lexical information.
Affiliation(s)
- Ellen Lau
- Department of Linguistics, University of Maryland, College Park, MD 20742, USA.