1. Rubianes M, Drijvers L, Muñoz F, Jiménez-Ortega L, Almeida-Rivera T, Sánchez-García J, Fondevila S, Casado P, Martín-Loeches M. The Self-reference Effect Can Modulate Language Syntactic Processing Even Without Explicit Awareness: An Electroencephalography Study. J Cogn Neurosci 2024; 36:460-474. [PMID: 38165746] [DOI: 10.1162/jocn_a_02104]
Abstract
Although it is well established that self-related information can rapidly capture our attention and bias cognitive functioning, whether this self-bias can affect language processing remains largely unknown. In addition, there is an ongoing debate as to the functional independence of language processes, notably regarding the syntactic domain. Hence, this study investigated the influence of self-related content on syntactic speech processing. Participants listened to sentences that could contain morphosyntactic anomalies while a masked face (self, friend, or unknown) was presented for 16 msec preceding the critical word. The language-related ERP components (left anterior negativity [LAN] and P600) appeared for all identity conditions. However, the largest LAN effect followed by a reduced P600 effect was observed for self-faces, whereas a larger LAN with no reduction of the P600 was found for friend faces compared with unknown faces. These data suggest that both early and late syntactic processes can be modulated by self-related content. In addition, alpha power was more suppressed over the left inferior frontal gyrus only when self-faces appeared before the critical word. This may reflect higher semantic demands concomitant with early syntactic operations (around 150-550 msec). Our data also provide further evidence of a self-specific response, as reflected by the N250 component. Collectively, our results suggest that identity-related information is rapidly decoded from facial stimuli and may impact core linguistic processes, supporting an interactive view of syntactic processing. This study provides evidence that the self-reference effect can be extended to syntactic processing.
Affiliation(s)
- Miguel Rubianes
- Complutense University of Madrid, Spain
- UCM-ISCIII Center for Human Evolution and Behavior, Madrid, Spain
- Linda Drijvers
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Donders Institute for Brain, Cognition and Behaviour, Nijmegen, The Netherlands
- Francisco Muñoz
- Complutense University of Madrid, Spain
- UCM-ISCIII Center for Human Evolution and Behavior, Madrid, Spain
- Laura Jiménez-Ortega
- Complutense University of Madrid, Spain
- UCM-ISCIII Center for Human Evolution and Behavior, Madrid, Spain
- Sabela Fondevila
- Complutense University of Madrid, Spain
- UCM-ISCIII Center for Human Evolution and Behavior, Madrid, Spain
- Pilar Casado
- Complutense University of Madrid, Spain
- UCM-ISCIII Center for Human Evolution and Behavior, Madrid, Spain
- Manuel Martín-Loeches
- Complutense University of Madrid, Spain
- UCM-ISCIII Center for Human Evolution and Behavior, Madrid, Spain
2. Radošević T, Malaia EA, Milković M. Predictive Processing in Sign Languages: A Systematic Review. Front Psychol 2022; 13:805792. [PMID: 35496220] [PMCID: PMC9047358] [DOI: 10.3389/fpsyg.2022.805792]
Abstract
The objective of this article was to review existing research to assess the evidence for predictive processing (PP) in sign language, the conditions under which it occurs, and the effects of language mastery (sign language as a first language, sign language as a second language, bimodal bilingualism) on the neural bases of PP. This review followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) framework. We searched peer-reviewed electronic databases (SCOPUS, Web of Science, PubMed, ScienceDirect, and EBSCOhost) and gray literature (dissertations in ProQuest). We also searched the reference lists of records selected for the review and forward citations to identify all relevant publications. We searched for records based on five criteria (original work, peer-reviewed, published in English, research topic related to PP or neural entrainment, and human sign language processing). To reduce the risk of bias, the remaining two authors, with expertise in sign language processing and a variety of research methods, reviewed the results. Disagreements were resolved through extensive discussion. In the final review, 7 records were included, of which 5 were published articles and 2 were dissertations. The reviewed records provide evidence for PP in signing populations, although the underlying mechanism in the visual modality is not clear. The reviewed studies addressed the motor simulation proposals, the neural basis of PP, and the development of PP. All studies used dynamic sign stimuli. Most of the studies focused on semantic prediction. The question of the mechanism for the interaction between one's sign language competence (L1 vs. L2 vs. bimodal bilingual) and PP in the manual-visual modality remains unclear, primarily due to the scarcity of participants with varying degrees of language dominance. There is a paucity of evidence for PP in sign languages, especially for frequency-based, phonetic (articulatory), and syntactic prediction. However, studies published to date indicate that Deaf native/native-like L1 signers predict linguistic information during sign language processing, suggesting that PP is an amodal property of language processing.
Affiliation(s)
- Tomislav Radošević
- Laboratory for Sign Language and Deaf Culture Research, Faculty of Education and Rehabilitation Sciences, University of Zagreb, Zagreb, Croatia
- Evie A Malaia
- Laboratory for Neuroscience of Dynamic Cognition, Department of Communicative Disorders, College of Arts and Sciences, University of Alabama, Tuscaloosa, AL, United States
- Marina Milković
- Laboratory for Sign Language and Deaf Culture Research, Faculty of Education and Rehabilitation Sciences, University of Zagreb, Zagreb, Croatia
3. Krebs J, Malaia E, Wilbur RB, Roehm D. Psycholinguistic mechanisms of classifier processing in sign language. J Exp Psychol Learn Mem Cogn 2020; 47:998-1011. [PMID: 33211523] [DOI: 10.1037/xlm0000958]
Abstract
Nonsigners viewing sign language are sometimes able to guess the meaning of signs by relying on the overt connection between form and meaning, or iconicity (cf. Ortega, Özyürek, & Peeters, 2020; Strickland et al., 2015). One word class in sign languages that appears to be highly iconic is classifiers: verb-like signs that can refer to location change or handling. Classifier use and meaning are governed by linguistic rules, yet in comparison with lexical verb signs, classifiers are highly variable in their morpho-phonology (variety of potential handshapes and motion direction within the sign). These open-class linguistic items in sign languages prompt a question about the mechanisms of their processing: Are they part of a gestural-semiotic system (processed like the gestures of nonsigners), or are they processed as linguistic verbs? To examine the psychological mechanisms of classifier comprehension, we recorded the electroencephalogram (EEG) activity of signers who watched videos of signed sentences with classifiers. We manipulated the sentence word order of the stimuli (subject-object-verb [SOV] vs. object-subject-verb [OSV]), contrasting the two conditions, which, according to different processing hypotheses, should incur increased processing costs for OSV orders. As previously reported for lexical signs, we observed an N400 effect for OSV compared with SOV, reflecting increased cognitive load for linguistic processing. These findings support the hypothesis that classifiers are a linguistic part of speech in sign language, extending the current understanding of processing mechanisms at the interface of linguistic form and meaning.
Affiliation(s)
- Julia Krebs
- Research Group Neurobiology of Language, Department of Linguistics, University of Salzburg
- Evie Malaia
- Department of Communicative Disorders, University of Alabama
- Dietmar Roehm
- Research Group Neurobiology of Language, Department of Linguistics, University of Salzburg
4. Esaulova Y, Penke M, Dolscheid S. Describing Events: Changes in Eye Movements and Language Production Due to Visual and Conceptual Properties of Scenes. Front Psychol 2019; 10:835. [PMID: 31057462] [PMCID: PMC6478754] [DOI: 10.3389/fpsyg.2019.00835]
Abstract
How can a visual environment shape our utterances? A variety of visual and conceptual factors appear to affect sentence production, such as the visual cueing of patients or agents, their position relative to one another, and their animacy. These factors have previously been studied in isolation, leaving the question about their interplay open. The present study brings them together to examine systematic variations in eye movements, speech initiation, and voice selection in descriptions of visual scenes. A sample of 44 native speakers of German was asked to describe depicted event scenes presented on a computer screen, while both their utterances and eye movements were recorded. Participants were instructed to produce one-sentence descriptions. The pictures depicted scenes with animate agents and either animate or inanimate patients who were situated to the right or to the left of agents. Half of the patients were preceded by a visual cue - a small circle appearing for 60 ms on a blank screen in the place of patients. The results show that scenes with left- rather than right-positioned patients lead to longer speech onset times, a higher probability of passive sentences, and more looks toward the patient. In addition, scenes with animate patients received more looks and elicited more passive utterances than scenes with inanimate patients. Visual cueing did not produce significant changes in speech, even though there were more looks to cued vs. non-cued referents, demonstrating that cueing only impacted initial scene scanning patterns but not speech. Our findings demonstrate that when examined together rather than separately, visual and conceptual factors of event scenes influence different aspects of behavior. In comparison to cueing, which only affected eye movements, patient animacy also acted on the syntactic realization of utterances, whereas patient position additionally altered speech onset. In terms of time course, visual influences are rather short-lived, while conceptual factors have long-lasting effects.
Affiliation(s)
- Yulia Esaulova
- Department of Special Education and Rehabilitation, University of Cologne, Cologne, Germany
5. Ursino M, Cuppini C, Cappa SF, Catricalà E. A feature-based neurocomputational model of semantic memory. Cogn Neurodyn 2018; 12:525-547. [PMID: 30483362] [PMCID: PMC6233327] [DOI: 10.1007/s11571-018-9494-0]
Abstract
In accordance with a featural organization of semantic memory, this work investigates, through an attractor network, the role of different kinds of features in the representation of concepts, both in normal and neurodegenerative conditions. We implemented new synaptic learning rules in order to take into account the role of partially shared features and of distinctive features with different saliency. The model includes semantic and lexical layers, coding, respectively, for object features and word-forms. Connections among nodes are strongly asymmetrical. To account for feature saliency, asymmetrical synapses were created using Hebbian rules of potentiation and depotentiation, setting different pre-synaptic and post-synaptic thresholds. A variable post-synaptic threshold, which automatically changed to reflect the feature frequency in different concepts (i.e., how many concepts share a feature), was used to account for partially shared features. The trained network solved naming tasks and word recognition tasks very well, exploiting the different roles of salient versus marginal features in concept identification. In the case of damage, superordinate concepts were preserved better than subordinate ones. Interestingly, the degradation of salient features, but not of marginal ones, prevented object identification. The model suggests that Hebbian rules, with adjustable post-synaptic thresholds, can provide a reliable semantic representation of objects by exploiting the statistics of input features.
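The core learning mechanism described in this abstract (Hebbian potentiation/depotentiation gated by separate pre- and post-synaptic thresholds) can be sketched in a few lines. This is an illustrative toy, not the authors' implementation: the function name, learning rate, and threshold values are assumptions, and the paper's variable post-synaptic threshold is reduced here to a fixed scalar.

```python
import numpy as np

def hebbian_update(w, pre, post, eta=0.1, theta_pre=0.5, theta_post=0.5):
    """One Hebbian step with separate pre-/post-synaptic thresholds.

    Weights potentiate when both pre- and post-synaptic activities exceed
    their thresholds, and depotentiate when the post-synaptic node is active
    but the pre-synaptic one is below threshold. (Illustrative sketch.)
    """
    pre_f = pre - theta_pre                        # signed pre-synaptic term
    post_f = np.maximum(post - theta_post, 0.0)    # only active post nodes learn
    dw = eta * np.outer(post_f, pre_f)             # rows: post, cols: pre
    return np.clip(w + dw, 0.0, 1.0)

# Toy example: concept node 0 is active together with features 0 and 1,
# while feature 2 stays silent and is depressed toward zero.
w = np.zeros((2, 3))                  # 2 concept nodes x 3 feature nodes
pre = np.array([1.0, 1.0, 0.0])      # feature-layer activity
post = np.array([1.0, 0.0])          # concept-layer activity
w = hebbian_update(w, pre, post)
```

Because the pre-synaptic term is signed while the post-synaptic term is rectified, the resulting weight matrix is naturally asymmetric, in the spirit of the strongly asymmetrical connections the model posits.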
Affiliation(s)
- Mauro Ursino
- Department of Electrical, Electronic and Information Engineering, University of Bologna, Viale Risorgimento 2, 40136 Bologna, Italy
- Cristiano Cuppini
- Department of Electrical, Electronic and Information Engineering, University of Bologna, Viale Risorgimento 2, 40136 Bologna, Italy
- Stefano F. Cappa
- NEtS Center, Scuola Universitaria Superiore IUSS, Pavia, Italy
- IRCCS S. Giovanni di Dio, Brescia, Italy
6. Blumenthal-Dramé A, Malaia E. Shared neural and cognitive mechanisms in action and language: The multiscale information transfer framework. Wiley Interdiscip Rev Cogn Sci 2018; 10:e1484. [PMID: 30417551] [DOI: 10.1002/wcs.1484]
Abstract
This review compares how humans process action and language sequences produced by other humans. On the one hand, we identify commonalities between action and language processing in terms of cognitive mechanisms (e.g., perceptual segmentation, predictive processing, integration across multiple temporal scales), neural resources (e.g., the left inferior frontal cortex), and processing algorithms (e.g., comprehension based on changes in signal entropy). On the other hand, drawing on sign language with its particularly strong motor component, we also highlight what differentiates (both oral and signed) linguistic communication from nonlinguistic action sequences. We propose the multiscale information transfer framework (MSIT) as a way of integrating these insights and highlight directions into which future empirical research inspired by the MSIT framework might fruitfully evolve. This article is categorized under: Psychology > Language; Linguistics > Language in Mind and Brain; Psychology > Motor Skill and Performance; Psychology > Prediction.
Affiliation(s)
- Alice Blumenthal-Dramé
- Department of English, Albert-Ludwigs-Universität Freiburg, Freiburg, Germany
- Freiburg Institute for Advanced Studies, Freiburg, Germany
- Evie Malaia
- Department of Communicative Disorders, University of Alabama, Tuscaloosa, Alabama
- Freiburg Institute for Advanced Studies, Freiburg, Germany
7. Subject preference emerges as cross-modal strategy for linguistic processing. Brain Res 2018; 1691:105-117. [PMID: 29627484] [DOI: 10.1016/j.brainres.2018.03.029]
Abstract
Research on spoken languages has identified a "subject preference" processing strategy for tackling input that is syntactically ambiguous as to whether a sentence-initial NP is a subject or object. The present study documents that the "subject preference" strategy is also seen in the processing of a sign language, supporting the hypothesis that the "subject"-first strategy is universal and not dependent on the language modality (spoken vs. signed). Deaf signers of Austrian Sign Language (ÖGS) were shown videos of locally ambiguous signed sentences in SOV and OSV word orders. Electroencephalogram (EEG) data indicated higher cognitive load in response to OSV stimuli (i.e. a negativity for OSV compared to SOV), indicative of syntactic reanalysis cost. A finding that is specific to the visual modality is that the ERP (event-related potential) effect reflecting linguistic reanalysis occurred earlier than might have been expected, that is, before the time point when the path movement of the disambiguating sign was visible. We suggest that in the visual modality, transitional movement of the articulators prior to the disambiguating verb position or co-occurring non-manual (face/body) markings were used in resolving the local ambiguity in ÖGS. Thus, whereas the processing strategy of "subject preference" is cross-modal at the linguistic level, the cues that enable the processor to apply that strategy differ in signing as compared to speech.
8. Effects of spectral smearing of stimuli on the performance of auditory steady-state response-based brain-computer interface. Cogn Neurodyn 2017; 11:515-527. [PMID: 29147144] [DOI: 10.1007/s11571-017-9448-y]
Abstract
There have been few reports that investigated the effects of the degree and pattern of a spectral smearing of stimuli due to deteriorated hearing ability on the performance of auditory brain-computer interface (BCI) systems. In this study, we assumed that such spectral smearing of stimuli may affect the performance of an auditory steady-state response (ASSR)-based BCI system and performed subjective experiments using 10 normal-hearing subjects to verify this assumption. We constructed smearing-reflected stimuli using an 8-channel vocoder with moderate and severe hearing loss setups and, using these stimuli, performed subjective concentration tests with three symmetric and six asymmetric smearing patterns while recording electroencephalogram signals. Then, 56 ratio features were calculated from the recorded signals, and the accuracies of the BCI selections were calculated and compared. Experimental results demonstrated that (1) applying smearing-reflected stimuli decreases the performance of an ASSR-based auditory BCI system, and (2) such negative effects can be reduced by adjusting the feature settings of the BCI algorithm on the basis of results acquired a posteriori. These results imply that by fine-tuning the feature settings of the BCI algorithm according to the degree and pattern of hearing ability deterioration of the recipient, the clinical benefits of a BCI system can be improved.
9. Philipp M, Graf T, Kretzschmar F, Primus B. Beyond Verb Meaning: Experimental Evidence for Incremental Processing of Semantic Roles and Event Structure. Front Psychol 2017; 8:1806. [PMID: 29163250] [PMCID: PMC5670351] [DOI: 10.3389/fpsyg.2017.01806]
Abstract
We present an event-related potentials (ERP) study that addresses the question of how pieces of information pertaining to semantic roles and event structure interact with each other and with the verb's meaning. Specifically, our study investigates German verb-final clauses with verbs of motion such as fliegen 'fly' and schweben 'float, hover,' which are indeterminate with respect to agentivity and event structure. Agentivity was tested by manipulating the animacy of the subject noun phrase and event structure by selecting a goal adverbial, which makes the event telic, or a locative adverbial, which leads to an atelic reading. On the clause-initial subject, inanimates evoked an N400 effect vis-à-vis animates. On the adverbial phrase in the atelic (locative) condition, inanimates showed an N400 in comparison to animates. The telic (goal) condition exhibited an amplitude similar to the inanimate-atelic condition. Finally, at the verbal lexeme, the inanimate condition elicited an N400 effect against the animate condition in the telic (goal) contexts. In the atelic (locative) condition, items with animates evoked an N400 effect compared to inanimates. The combined set of findings suggests that clause-initial animacy is not sufficient for agent identification in German, which seems to be completed only at the verbal lexeme in our experiment. Here, non-agents (inanimates) changing their location in a goal-directed way and agents (animates) lacking this property are dispreferred, which challenges the assumption that change of (locational) state is generally a defining characteristic of the patient role. Besides this main finding, which sheds new light on role prototypicality, our data seem to indicate effects that, in our view, are related to complexity, i.e., minimality. Inanimate subjects or goal arguments increase processing costs since they have role or event-structure restrictions that animate subjects or locative modifiers lack.
Affiliation(s)
- Markus Philipp
- Institute of German Language and Literature I, University of Cologne, Cologne, Germany
10. Malaia E, Cockerham D, Rublein K. Visual integration of fear and anger emotional cues by children on the autism spectrum and neurotypical peers: An EEG study. Neuropsychologia 2017. [PMID: 28633887] [DOI: 10.1016/j.neuropsychologia.2017.06.014]
Abstract
Communication deficits in children with autism spectrum disorders (ASD) are often related to inefficient interpretation of emotional cues, which are conveyed visually through both facial expressions and body language. The present study examined ASD behavioral and ERP responses to emotional expressions of anger and fear, as conveyed by the face and body. Behavioral results showed significantly faster response times for the ASD than for the typically developing (TD) group when processing fear, but not anger, in isolated face expressions, isolated body expressions, and in the integration of the two. In addition, EEG data for the N170 and P1 indicated processing differences between fear and anger stimuli only in the TD group, suggesting that individuals with ASD may not be distinguishing between emotional expressions. These results suggest that children with ASD may employ a different neural mechanism for visual emotion recognition than their TD peers, possibly relying on inferential processing.
11. Ai G, Sato N, Singh B, Wagatsuma H. Direction and viewing area-sensitive influence of EOG artifacts revealed in the EEG topographic pattern analysis. Cogn Neurodyn 2016; 10:301-314. [PMID: 27468318] [DOI: 10.1007/s11571-016-9382-4]
Abstract
The influence of eye movement-related artifacts on the electroencephalography (EEG) signals of human subjects, who were asked to perform a direction- or viewing area-dependent saccade task, was investigated using simultaneous recording of ocular potentials (electro-oculography; EOG). Past studies of EOG artifact removal have used tasks with a single fixation point in the screen center, with less attention to the sensitivity of the EEG head map to cornea-retinal dipole orientations. In the present study, we hypothesized the existence of a systematic EOG influence that differs according to how eye-movement directions are coupled with viewing areas containing different fixation points. The effect was validated in a linear regression analysis using 12 task conditions combining horizontal/vertical eye-movement direction with three segregated gaze zones on the screen. First, event-related potential topographic patterns were analyzed to compare the 12 conditions, and propagation coefficients of the linear regression analysis were then calculated for each condition. As a result, the EOG influences differed significantly in a large number of EEG channels, especially for horizontal eye movements. In cross validation, linear regression using the dataset appropriate to the target direction/viewing-area combination outperformed the traditional methods that use a single central fixation. This result may open a way to improve artifact-correction methods by considering the systematic EOG influence, which can be predicted from the viewing angle (e.g., using eye-tracker systems).
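The propagation-coefficient approach this abstract describes amounts to ordinary least-squares regression of each EEG channel on the EOG channels, with the fitted EOG contribution subtracted out. The sketch below is a generic version under synthetic data, not the authors' exact pipeline; in the study, a separate coefficient set would be fit per direction/viewing-area condition.

```python
import numpy as np

def eog_regression_correct(eeg, eog):
    """Remove ocular artifacts from EEG via least-squares regression on EOG.

    eeg: array of shape (n_channels, n_samples)
    eog: array of shape (n_eog, n_samples)
    Per-channel propagation coefficients b are estimated by OLS, and the
    corrected signal is eeg - b @ eog. (Generic sketch.)
    """
    b, *_ = np.linalg.lstsq(eog.T, eeg.T, rcond=None)  # shape (n_eog, n_channels)
    return eeg - b.T @ eog

# Synthetic check: EEG contaminated by a known EOG mixture is cleaned so
# that the residual no longer correlates with the ocular channels.
rng = np.random.default_rng(0)
eog = rng.standard_normal((2, 1000))            # two ocular channels
brain = rng.standard_normal((4, 1000))          # "true" neural signal
eeg = brain + np.array([[0.5, 0.2]] * 4) @ eog  # contaminated recording
cleaned = eog_regression_correct(eeg, eog)
```

A known limitation of this plain-regression approach, and part of the motivation for condition-specific coefficients, is that a single coefficient set assumes the cornea-retinal dipole projection is constant across gaze directions.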
Affiliation(s)
- Guangyi Ai
- Graduate School of Life Science and Systems Engineering, Kyushu Institute of Technology, 2-4 Hibikino, Wakamatsu-ku, Kitakyushu, Fukuoka 808-0196, Japan
- Naoyuki Sato
- School of Systems Information Science, Future University Hakodate, 116-2 Kamedanakano-cho, Hakodate, Hokkaido 041-8655, Japan
- Balbir Singh
- Graduate School of Life Science and Systems Engineering, Kyushu Institute of Technology, 2-4 Hibikino, Wakamatsu-ku, Kitakyushu, Fukuoka 808-0196, Japan
- Hiroaki Wagatsuma
- Graduate School of Life Science and Systems Engineering, Kyushu Institute of Technology, 2-4 Hibikino, Wakamatsu-ku, Kitakyushu, Fukuoka 808-0196, Japan; RIKEN BSI, 2-1 Hirosawa, Wako, Saitama 351-0198, Japan