1
Radošević T, Malaia EA, Milković M. Predictive Processing in Sign Languages: A Systematic Review. Front Psychol 2022; 13:805792. PMID: 35496220; PMCID: PMC9047358; DOI: 10.3389/fpsyg.2022.805792.
Abstract
The objective of this article was to review existing research to assess the evidence for predictive processing (PP) in sign language, the conditions under which it occurs, and the effects of language mastery (sign language as a first language, sign language as a second language, bimodal bilingualism) on the neural bases of PP. This review followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) framework. We searched peer-reviewed electronic databases (SCOPUS, Web of Science, PubMed, ScienceDirect, and EBSCOhost) and gray literature (dissertations in ProQuest). We also searched the reference lists of records selected for the review, and forward citations, to identify all relevant publications. We searched for records based on five criteria (original work, peer-reviewed, published in English, research topic related to PP or neural entrainment, and human sign language processing). To reduce the risk of bias, the remaining two authors, who have expertise in sign language processing and in a variety of research methods, reviewed the results. Disagreements were resolved through extensive discussion. In the final review, 7 records were included, of which 5 were published articles and 2 were dissertations. The reviewed records provide evidence for PP in signing populations, although the underlying mechanism in the visual modality is not clear. The reviewed studies addressed motor simulation proposals, the neural basis of PP, and the development of PP. All studies used dynamic sign stimuli. Most of the studies focused on semantic prediction. The question of the mechanism for the interaction between one's sign language competence (L1 vs. L2 vs. bimodal bilingual) and PP in the manual-visual modality remains unclear, primarily due to the scarcity of participants with varying degrees of language dominance. There is a paucity of evidence for PP in sign languages, especially for frequency-based, phonetic (articulatory), and syntactic prediction. However, studies published to date indicate that Deaf native/native-like L1 signers predict linguistic information during sign language processing, suggesting that PP is an amodal property of language processing.
Affiliation(s)
- Tomislav Radošević
- Laboratory for Sign Language and Deaf Culture Research, Faculty of Education and Rehabilitation Sciences, University of Zagreb, Zagreb, Croatia
- Evie A Malaia
- Laboratory for Neuroscience of Dynamic Cognition, Department of Communicative Disorders, College of Arts and Sciences, University of Alabama, Tuscaloosa, AL, United States
- Marina Milković
- Laboratory for Sign Language and Deaf Culture Research, Faculty of Education and Rehabilitation Sciences, University of Zagreb, Zagreb, Croatia
2
Caldwell HB. Sign and Spoken Language Processing Differences in the Brain: A Brief Review of Recent Research. Ann Neurosci 2022; 29:62-70. PMID: 35875424; PMCID: PMC9305909; DOI: 10.1177/09727531211070538.
Abstract
Background: It is currently accepted that sign languages and spoken languages have significant processing commonalities. The evidence supporting this claim, however, typically examines only frontotemporal pathways, perisylvian language areas, hemispheric lateralization, and event-related potentials in typical settings. Recent work has looked beyond these measures and, by accounting for confounds that previously invalidated processing comparisons and by examining the specific conditions in which they arise, has uncovered numerous modality-dependent processing differences between sign languages and spoken languages. Yet these processing differences are often dismissed as nonspecific to language. Summary: This review examined recent neuroscientific evidence for processing differences between sign and spoken language modalities, and the arguments against the importance of these differences. Key distinctions exist in the topography of the left anterior negativity (LAN) and in modulations of event-related potential (ERP) components like the N400. There is also differential activation of typical spoken language processing areas, such as the conditional role of the temporal areas in sign language (SL) processing. Importantly, sign language processing uniquely recruits parietal areas for processing phonology and syntax and requires the mapping of spatial information onto internal representations. Additionally, modality-specific feedback mechanisms distinctively involve proprioceptive post-output monitoring in sign languages, in contrast to the auditory and visual feedback mechanisms of spoken languages. The only study to find ERP differences post-production revealed earlier lexical access in sign than in spoken languages. The review also discusses themes of temporality, the validity of positing analogous anatomical mechanisms across modalities, and the comprehensiveness of current language models, and suggests improvements for future research.
Key message: Current neuroscience evidence suggests various ways in which processing differs between sign and spoken language modalities that extend beyond simple differences between languages. Consideration and further exploration of these differences will be integral to developing a more comprehensive view of language in the brain.
Affiliation(s)
- Hayley Bree Caldwell
- Cognitive and Systems Neuroscience Research Hub (CSN-RH), School of Justice and Society, University of South Australia Magill Campus, Magill, South Australia, Australia
3
Joue G, Boven L, Willmes K, Evola V, Demenescu LR, Hassemer J, Mittelberg I, Mathiak K, Schneider F, Habel U. Metaphor processing is supramodal semantic processing: The role of the bilateral lateral temporal regions in multimodal communication. Brain Lang 2020; 205:104772. PMID: 32126372; DOI: 10.1016/j.bandl.2020.104772.
Abstract
This paper presents an fMRI study of how healthy adults understand metaphors in multimodal communication. We investigated metaphors expressed either only in coverbal gestures ("monomodal metaphors") or in speech with accompanying gestures ("multimodal metaphors"). Monomodal metaphoric gestures convey metaphoric information not expressed in the accompanying speech (e.g. saying the non-metaphoric utterance, "She felt bad" while dropping down the hand with palm facing up; here, the gesture alone indicates metaphoricity), whereas coverbal gestures in multimodal metaphors indicate metaphoricity redundantly with the speech (e.g. saying the metaphoric utterance, "Her spirits fell" while dropping the hand with palm facing up). In other words, in monomodal metaphors, gestures add information not spoken, whereas the gestures in multimodal metaphors can be redundant with the spoken content. Understanding and integrating the information in each modality, here spoken and visual, is important in multimodal communication, but most prior studies have only considered multimodal metaphors in which the gesture is redundant with what is spoken. Our participants watched audiovisual clips of an actor speaking while gesturing. We found that abstract metaphor comprehension recruited the lateral superior/middle temporal cortices, regardless of the modality in which the conceptual metaphor was expressed. These results suggest that abstract metaphors, regardless of modality, involve resources implicated in general semantic processing, consistent with the role of these areas in supramodal semantic processing as well as with the theory of embodied cognition.
Affiliation(s)
- Gina Joue
- Human Technology Center, RWTH Aachen University, Theaterplatz 14, 52056 Aachen, Germany; Department of Psychiatry, Psychotherapy and Psychosomatics, School of Medicine, RWTH Aachen University, Pauwelsstraße 30, 52074 Aachen, Germany; Department of Systems Neuroscience, University Medical Center Hamburg-Eppendorf, 20246 Hamburg, Germany
- Linda Boven
- School of Medicine, RWTH Aachen University, Pauwelsstraße 30, 52074 Aachen, Germany
- Klaus Willmes
- Section Neuropsychology, Department of Neurology, School of Medicine, RWTH Aachen University, Pauwelsstraße 30, 52074 Aachen, Germany
- Vito Evola
- Human Technology Center, RWTH Aachen University, Theaterplatz 14, 52056 Aachen, Germany; Bonn-Aachen International Center for Information Technology, Dahlmannstraße 2, 53113 Bonn, Germany; Faculty of Social Sciences and Humanities, New University of Lisbon, Portugal
- Liliana R Demenescu
- Department of Psychiatry, Psychotherapy and Psychosomatics, School of Medicine, RWTH Aachen University, Pauwelsstraße 30, 52074 Aachen, Germany
- Julius Hassemer
- Human Technology Center, RWTH Aachen University, Theaterplatz 14, 52056 Aachen, Germany
- Irene Mittelberg
- Human Technology Center, RWTH Aachen University, Theaterplatz 14, 52056 Aachen, Germany
- Klaus Mathiak
- Department of Psychiatry, Psychotherapy and Psychosomatics, School of Medicine, RWTH Aachen University, Pauwelsstraße 30, 52074 Aachen, Germany; JARA, Translational Brain Medicine, 52425 Jülich, Germany; Institute of Neuroscience and Medicine (INM-1), Research Center Jülich, 52425 Jülich, Germany
- Frank Schneider
- Department of Psychiatry, Psychotherapy and Psychosomatics, School of Medicine, RWTH Aachen University, Pauwelsstraße 30, 52074 Aachen, Germany
- Ute Habel
- Department of Psychiatry, Psychotherapy and Psychosomatics, School of Medicine, RWTH Aachen University, Pauwelsstraße 30, 52074 Aachen, Germany
4
Emmorey K, Winsler K, Midgley KJ, Grainger J, Holcomb PJ. Neurophysiological Correlates of Frequency, Concreteness, and Iconicity in American Sign Language. Neurobiol Lang 2020; 1:249-267. PMID: 33043298; PMCID: PMC7544239; DOI: 10.1162/nol_a_00012.
Abstract
To investigate possible universal and modality-specific factors that influence the neurophysiological response during lexical processing, we recorded event-related potentials while a large group of deaf adults (n = 40) viewed 404 signs in American Sign Language (ASL) that varied in ASL frequency, concreteness, and iconicity. Participants performed a go/no-go semantic categorization task (does the sign refer to people?) to videoclips of ASL signs (clips began with the signer's hands at rest). Linear mixed-effects regression models were fit with per-participant, per-trial, and per-electrode data, allowing us to identify unique effects of each lexical variable. We observed an early effect of frequency (greater negativity for less frequent signs) beginning at 400 ms postvideo onset at anterior sites, which we interpreted as reflecting form-based lexical processing. This effect was followed by a more widely distributed posterior response that we interpreted as reflecting lexical-semantic processing. Paralleling spoken language, more concrete signs elicited greater negativities, beginning 600 ms postvideo onset with a wide scalp distribution. Finally, there were no effects of iconicity (except for a weak effect in the latest epochs; 1,000-1,200 ms), suggesting that iconicity does not modulate the neural response during sign recognition. Despite the perceptual and sensorimotoric differences between signed and spoken languages, the overall results indicate very similar neurophysiological processes underlie lexical access for both signs and words.
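The single-trial analysis described above, fitting a linear mixed-effects regression to per-participant, per-trial, per-electrode amplitudes so that each lexical variable contributes a separable fixed effect, can be sketched as follows. This is a minimal illustration on simulated data using statsmodels; the variable names, effect sizes, and model specification are assumptions for demonstration, not the study's actual pipeline.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate per-participant, per-trial, per-electrode ERP amplitudes.
# All numbers here are illustrative, not the study's data.
rng = np.random.default_rng(0)
n_subj, n_trial, n_elec = 8, 30, 4
rows = []
for s in range(n_subj):
    subj_offset = rng.normal(0.0, 1.0)  # per-participant random intercept
    for t in range(n_trial):
        freq = rng.normal()  # standardized sign frequency (trial-level predictor)
        conc = rng.normal()  # standardized concreteness (trial-level predictor)
        for e in range(n_elec):
            # Built-in effects: less frequent and more concrete signs are
            # more negative, mirroring the direction reported in the abstract.
            amp = -1.5 + 0.8 * freq - 0.5 * conc + subj_offset + rng.normal(0.0, 1.0)
            rows.append({"subject": s, "electrode": e, "frequency": freq,
                         "concreteness": conc, "amplitude": amp})
df = pd.DataFrame(rows)

# Fixed effects for the lexical variables, random intercept per participant.
model = smf.mixedlm("amplitude ~ frequency + concreteness",
                    data=df, groups=df["subject"])
fit = model.fit()
print(fit.params["frequency"], fit.params["concreteness"])
```

Because frequency and concreteness are entered jointly, the model recovers each variable's unique effect even though both vary across the same trials, which is the point of the regression approach over averaging by condition.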
Affiliation(s)
- Kurt Winsler
- Department of Psychology, University of California, Davis
- Jonathan Grainger
- Laboratoire de Psychologie Cognitive, Aix-Marseille University, Centre National de la Recherche Scientifique
5
Quer J, Steinbach M. Handling Sign Language Data: The Impact of Modality. Front Psychol 2019; 10:483. PMID: 30914998; PMCID: PMC6423168; DOI: 10.3389/fpsyg.2019.00483.
Abstract
Natural languages come in two different modalities. The impact of modality on grammatical structure and linguistic theory has been discussed at great length over the last 20 years. By contrast, the impact of modality on linguistic data elicitation and collection, corpus studies, and experimental (psycholinguistic) studies is still underinvestigated. In this article, we address specific challenges that arise in judgment data elicitation and experimental studies of sign languages. These challenges are related to the sociolinguistic status of the Deaf community and the greater variability across signers within the same community, to the social status of sign languages, to properties of the visual-gestural modality and its interface with gesture, to methodological aspects of handling sign language data, and to specific linguistic features of sign languages. While some of these challenges also pertain to (some varieties of) spoken languages, others are more modality-specific. The particular combination of challenges discussed in this article, however, appears to be specific to empirical research on sign languages. In addition, we discuss the complementarity of theoretical approaches and experimental studies and show how the interaction of both approaches contributes to a better understanding of sign languages in particular and of linguistic structure in general.
Affiliation(s)
- Josep Quer
- ICREA-Pompeu Fabra University, Barcelona, Spain
6
Meade G, Lee B, Midgley KJ, Holcomb PJ, Emmorey K. Phonological and semantic priming in American Sign Language: N300 and N400 effects. Lang Cogn Neurosci 2018; 33:1092-1106. PMID: 30662923; PMCID: PMC6335044; DOI: 10.1080/23273798.2018.1446543.
Abstract
This study investigated the electrophysiological signatures of phonological and semantic priming in American Sign Language (ASL). Deaf signers made semantic relatedness judgments to pairs of ASL signs separated by a 1300 ms prime-target SOA. Phonologically related sign pairs shared two of three phonological parameters (handshape, location, and movement). Target signs preceded by phonologically related and semantically related prime signs elicited smaller negativities within the N300 and N400 windows than those preceded by unrelated primes. N300 effects, typically reported in studies of picture processing, are interpreted to reflect the mapping from the visual features of the signs to more abstract linguistic representations. N400 effects, consistent with rhyme priming effects in the spoken language literature, are taken to index lexico-semantic processes that appear to be largely modality independent. Together, these results highlight both the unique visual-manual nature of sign languages and the linguistic processing characteristics they share with spoken languages.
Affiliation(s)
- Gabriela Meade
- Joint Doctoral Program in Language and Communicative Disorders, San Diego State University and University of California, San Diego, San Diego, CA, USA
- Brittany Lee
- Joint Doctoral Program in Language and Communicative Disorders, San Diego State University and University of California, San Diego, San Diego, CA, USA
- Phillip J. Holcomb
- Department of Psychology, San Diego State University, San Diego, CA, USA
- Karen Emmorey
- School of Speech, Language, and Hearing Sciences, San Diego State University, San Diego, CA, USA
7
Almeida D, Poeppel D, Corina D. The Processing of Biologically Plausible and Implausible Forms in American Sign Language: Evidence for Perceptual Tuning. Lang Cogn Neurosci 2015; 31:361-374. PMID: 27135041; PMCID: PMC4849140; DOI: 10.1080/23273798.2015.1100315.
Abstract
The human auditory system distinguishes speech-like information from general auditory signals in a remarkably fast and efficient way. Combining psychophysics and neurophysiology (MEG), we demonstrate a similar result for the processing of visual information used for language communication in users of sign languages. We demonstrate that the earliest visual cortical responses in deaf signers viewing American Sign Language (ASL) signs show specific modulations to violations of anatomic constraints that would make the sign either possible or impossible to articulate. These neural data are accompanied by a significantly increased perceptual sensitivity to the anatomical incongruity. The differential effects in the early visual evoked potentials arguably reflect an expectation-driven assessment of somatic representational integrity, suggesting that language experience and/or auditory deprivation may shape the neuronal mechanisms underlying the analysis of complex human form. The data demonstrate that the perceptual tuning that underlies the discrimination of language and non-language information is not limited to spoken languages but extends to languages expressed in the visual modality.
Affiliation(s)
- Diogo Almeida
- Division of Sciences, Psychology program, New York University Abu Dhabi, Abu Dhabi, UAE
- David Poeppel
- Department of Psychology, New York University, New York, NY, USA
- Department of Neuroscience, Max-Planck-Institute (MPIEA), Frankfurt, Germany
- David Corina
- Department of Linguistics and the Center for Mind and Brain, University of California, Davis, CA, USA
8
Corina DP, Hafer S, Welch K. Phonological Awareness for American Sign Language. J Deaf Stud Deaf Educ 2014; 19:530-545. PMID: 25149961; DOI: 10.1093/deafed/enu023.
Abstract
This paper examines the concept of phonological awareness (PA) as it relates to the processing of American Sign Language (ASL). We present data from a recently developed test of PA for ASL and examine whether sign language experience impacts the use of metalinguistic routines necessary for completion of our task. Our data show that deaf signers exposed to ASL from infancy perform better than deaf signers exposed to ASL later in life and that this relationship remains even after controlling for the number of years of experience with a signed language. For a subset of participants, we examine the relationship between PA for ASL and performance on a PA test of English and report a positive correlation between ASL PA and English PA in native signers. We discuss the implications of these findings in relation to the development of reading skills in deaf children.
Affiliation(s)
- David P Corina
- Center for Mind and Brain, University of California, Davis
- Sarah Hafer
- Center for Mind and Brain, University of California, Davis
9
Ferjan Ramirez N, Leonard MK, Torres C, Hatrak M, Halgren E, Mayberry RI. Neural language processing in adolescent first-language learners. Cereb Cortex 2014; 24:2772-2783. PMID: 23696277; PMCID: PMC4153811; DOI: 10.1093/cercor/bht137.
Abstract
The relation between the timing of language input and development of neural organization for language processing in adulthood has been difficult to tease apart because language is ubiquitous in the environment of nearly all infants. However, within the congenitally deaf population are individuals who do not experience language until after early childhood. Here, we investigated the neural underpinnings of American Sign Language (ASL) in 2 adolescents who had no sustained language input until they were approximately 14 years old. Using anatomically constrained magnetoencephalography, we found that recently learned signed words mainly activated right superior parietal, anterior occipital, and dorsolateral prefrontal areas in these 2 individuals. This spatiotemporal activity pattern was significantly different from the left fronto-temporal pattern observed in young deaf adults who acquired ASL from birth, and from that of hearing young adults learning ASL as a second language for a similar length of time as the cases. These results provide direct evidence that the timing of language experience over human development affects the organization of neural language processing.
Affiliation(s)
- Eric Halgren
- Multimodal Imaging Laboratory
- Department of Radiology
- Department of Neurosciences
- Kavli Institute for Brain and Mind, University of California, San Diego, USA
10
Zachau S, Korpilahti P, Hämäläinen JA, Ervast L, Heinänen K, Suominen K, Lehtihalmes M, Leppänen PHT. Electrophysiological correlates of cross-linguistic semantic integration in hearing signers: N400 and LPC. Neuropsychologia 2014; 59:57-73. PMID: 24751994; DOI: 10.1016/j.neuropsychologia.2014.04.011.
Abstract
We explored semantic integration mechanisms in native and non-native hearing users of sign language and in non-signing controls. Event-related brain potentials (ERPs) were recorded while participants performed a semantic decision task for priming lexeme pairs. Pairs were presented either within speech or across speech and sign language. Target-related ERP responses were subjected to principal component analyses (PCA), and the neurocognitive basis of semantic integration processes was assessed by analyzing the N400 and the late positive complex (LPC) components in response to spoken (auditory) and signed (visual) antonymic and unrelated targets. Semantic relatedness effects triggered across modalities would indicate a tight interconnection between the signers' two languages, like that described for spoken language bilinguals. We found remarkable structural similarity of the N400 and LPC components, with varying group differences between the spoken and signed targets. The LPC was the dominant response. The controls' LPC differed from the LPC of the two signing groups: it was reduced to the auditory unrelated targets and was less frontal for all the visual targets. The visual LPC was more broadly distributed in native than in non-native signers and was left-lateralized for the unrelated targets in the native hearing signers only. Semantic priming effects were found for the auditory N400 in all groups, but only native hearing signers revealed a clear N400 effect to the visual targets. Surprisingly, the non-native signers revealed no semantic processing effect to the visual targets in either the N400 or the LPC; instead, they appeared to rely more on visual post-lexical analysis stages than native signers did. We conclude that native and non-native signers employed different processing strategies to integrate signed and spoken semantic content. The signers' semantic processing system appears to be affected by group-specific factors such as language background and/or usage.
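The PCA step mentioned above, decomposing target-related ERP responses into component time courses such as an N400-like negativity and an LPC-like positivity, can be illustrated with a short temporal-PCA sketch on simulated single-trial data. The component shapes, trial counts, and scikit-learn implementation here are assumptions for demonstration only, not the study's actual analysis.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
n_trials, n_samples = 100, 200
t = np.linspace(0.0, 1.0, n_samples)  # seconds after target onset

# Two latent time courses loosely resembling an N400 (negative, ~400 ms)
# and a late positive complex (positive, ~700 ms); purely illustrative.
n400 = -np.exp(-((t - 0.4) ** 2) / 0.005)
lpc = np.exp(-((t - 0.7) ** 2) / 0.010)

# Each simulated trial mixes the two components with random weights,
# plus small sensor noise.
weights = rng.normal(size=(n_trials, 2))
erps = weights @ np.vstack([n400, lpc]) + rng.normal(0.0, 0.1, (n_trials, n_samples))

# Temporal PCA: each principal component is a time course, and the
# per-trial scores on a component can then enter group-level analyses
# in place of raw amplitudes at single time points.
pca = PCA(n_components=2)
scores = pca.fit_transform(erps)
print(scores.shape, pca.explained_variance_ratio_.sum())
```

With two strong latent components and low noise, the first two principal components capture most of the variance, which is why a handful of PCA components can summarize the N400/LPC structure of the ERP.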
Affiliation(s)
- Swantje Zachau
- Logopedics, P.O. Box 1000, 90014 University of Oulu, Finland; Neurocognitive Unit, P.O. Box 50, 90029 Oulu University Hospital, Finland
- Pirjo Korpilahti
- Logopedics, Publicum, Assistentinkatu 7, 20014 University of Turku, Finland
- Jarmo A Hämäläinen
- Department of Psychology, P.O. Box 35, 40014 University of Jyväskylä, Finland
- Leena Ervast
- Logopedics, P.O. Box 1000, 90014 University of Oulu, Finland; Neurocognitive Unit, P.O. Box 50, 90029 Oulu University Hospital, Finland
- Kaisu Heinänen
- Logopedics, P.O. Box 1000, 90014 University of Oulu, Finland; Neurocognitive Unit, P.O. Box 50, 90029 Oulu University Hospital, Finland
- Kalervo Suominen
- Neurocognitive Unit, P.O. Box 50, 90029 Oulu University Hospital, Finland
- Paavo H T Leppänen
- Department of Psychology, P.O. Box 35, 40014 University of Jyväskylä, Finland
11
Lexical prediction via forward models: N400 evidence from German Sign Language. Neuropsychologia 2013; 51:2224-2237. PMID: 23896445; DOI: 10.1016/j.neuropsychologia.2013.07.013.
Abstract
Models of language processing in the human brain often emphasize the prediction of upcoming input, for example to explain the rapidity of language understanding. However, the precise mechanisms of prediction are still poorly understood. Forward models, which draw upon the language production system to set up expectations during comprehension, provide a promising approach in this regard. Here, we present an event-related potential (ERP) study on German Sign Language (DGS) which tested the hypotheses of a forward-model perspective on prediction. Sign languages involve relatively long transition phases between one sign and the next, which should be anticipated as part of a forward-model-based prediction even though they are semantically empty. Native signers of DGS watched videos of naturally signed DGS sentences which ended either with an expected or with a (semantically) unexpected sign. Unexpected signs engendered a biphasic N400-late positivity pattern. Crucially, N400 onset preceded critical sign onset and was thus clearly elicited by properties of the transition phase. The comprehension system thereby anticipated modality-specific information about the realization of the predicted semantic item. These results provide strong converging support for the application of forward models in language comprehension.
12
Lexical access in American Sign Language: An ERP investigation of effects of semantics and phonology. Brain Res 2012; 1468:63-83. DOI: 10.1016/j.brainres.2012.04.029.