1
Malaia EA, Borneman SC, Borneman JD, Krebs J, Wilbur RB. Prediction underlying comprehension of human motion: an analysis of Deaf signer and non-signer EEG in response to visual stimuli. Front Neurosci 2023; 17:1218510. PMID: 37901437; PMCID: PMC10602904; DOI: 10.3389/fnins.2023.1218510.
Abstract
Introduction: Sensory inference and top-down predictive processing, reflected in human neural activity, play a critical role in higher-order cognitive processes such as language comprehension. However, the neurobiological bases of predictive processing in higher-order cognition are not well understood. Methods: This study used electroencephalography (EEG) to track participants' cortical dynamics in response to Austrian Sign Language and reversed sign language videos, measuring neural coherence to optical flow in the visual signal. We then used machine learning to assess the entropy-based relevance of specific frequencies and regions of interest to brain-state classification accuracy. Results: EEG features highly relevant for classification were distributed across language-processing regions in Deaf signers (frontal cortex and left hemisphere), while in non-signers such features were concentrated in visual and spatial processing regions. Discussion: The results highlight the functional significance of predictive-processing time windows for sign language comprehension and biological motion processing, and the role of long-term experience (learning) in minimizing prediction error.
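The feature-relevance step described in this abstract (ranking EEG features by how informative they are for brain-state classification) can be sketched with a mutual-information relevance measure. This is an illustrative toy on synthetic band-power features, not the authors' data or pipeline; the injected signal in feature 0 is an assumption of the example.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(0)

# Synthetic "band-power" features for two brain states (e.g., forward vs.
# reversed video); only feature 0 carries class information.
n_trials = 200
X = rng.normal(size=(n_trials, 4))
y = rng.integers(0, 2, size=n_trials)
X[:, 0] += 2.0 * y  # class-dependent shift in one feature

# Entropy-based relevance: mutual information between each feature and the label
relevance = mutual_info_classif(X, y, random_state=0)
best = int(np.argmax(relevance))

# A classifier trained on these features separates the two states
clf = RandomForestClassifier(random_state=0).fit(X, y)
acc = clf.score(X, y)
print(best, acc)
```

Here the relevance ranking singles out the one informative feature; in the study the analogous ranking is what localizes classification-relevant features to specific regions and frequencies.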
Affiliation(s)
- Evie A. Malaia
- Department of Communicative Disorders, University of Alabama, Tuscaloosa, AL, United States
- Sean C. Borneman
- Department of Communicative Disorders, University of Alabama, Tuscaloosa, AL, United States
- Joshua D. Borneman
- Department of Linguistics, Purdue University, West Lafayette, IN, United States
- Julia Krebs
- Linguistics Department, University of Salzburg, Salzburg, Austria
- Centre for Cognitive Neuroscience, University of Salzburg, Salzburg, Austria
- Ronnie B. Wilbur
- Department of Linguistics, Purdue University, West Lafayette, IN, United States
- Department of Speech, Language, and Hearing Sciences, Purdue University, West Lafayette, IN, United States
2
Tomeny TS, Hudac CM, Malaia EA, Morett LM, Tomeny KR, Watkins L, Kana RK. Serving Individuals With Autism Spectrum Disorder in the Age of COVID-19: Special Considerations for Rural Families. Rural Spec Educ Q 2023; 42:105-118. PMID: 38602929; PMCID: PMC10155053; DOI: 10.1177/87568705231167440.
Abstract
This position paper explores the needs of rural families of children, adolescents, and adults with autism spectrum disorder (ASD) during the COVID-19 pandemic. Prior to COVID-19, the literature documented elevated stress in families of individuals with ASD, as well as health and socioeconomic disparities for rural and underserved populations. These disparities were exacerbated by COVID-19 and the subsequent lockdowns and economic turmoil. Academic and adaptive skills training were particularly affected by school closures, with parents assuming some responsibility for teaching these skills. Our goals for this article focus on special considerations for rural families regarding (a) neurobiological and developmental impacts of stressful experiences such as COVID-19, (b) delineation of the impacts on individuals with ASD and other comorbid and related conditions, and (c) education and intervention needs during these times. Finally, we offer suggestions for future care during pandemic events, including recommendations for improving service delivery under such conditions.
3
Abstract
The objective of this article was to review existing research to assess the evidence for predictive processing (PP) in sign language, the conditions under which it occurs, and the effects of language mastery (sign language as a first language, sign language as a second language, bimodal bilingualism) on the neural bases of PP. This review followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) framework. We searched peer-reviewed electronic databases (SCOPUS, Web of Science, PubMed, ScienceDirect, and EBSCOhost) and gray literature (dissertations in ProQuest). We also searched the reference lists of records selected for the review and forward citations to identify all relevant publications. We searched for records based on five criteria (original work, peer-reviewed, published in English, research topic related to PP or neural entrainment, and human sign language processing). To reduce the risk of bias, the remaining two authors, with expertise in sign language processing and a variety of research methods, reviewed the results. Disagreements were resolved through extensive discussion. In the final review, 7 records were included, of which 5 were published articles and 2 were dissertations. The reviewed records provide evidence for PP in signing populations, although the underlying mechanism in the visual modality is not clear. The reviewed studies addressed motor simulation proposals, the neural basis of PP, and the development of PP. All studies used dynamic sign stimuli. Most of the studies focused on semantic prediction. The question of the mechanism for the interaction between one's sign language competence (L1 vs. L2 vs. bimodal bilingual) and PP in the manual-visual modality remains unclear, primarily due to the scarcity of participants with varying degrees of language dominance. There is a paucity of evidence for PP in sign languages, especially for frequency-based, phonetic (articulatory), and syntactic prediction. However, studies published to date indicate that Deaf native/native-like L1 signers predict linguistic information during sign language processing, suggesting that PP is an amodal property of language processing.
Affiliation(s)
- Tomislav Radošević
- Laboratory for Sign Language and Deaf Culture Research, Faculty of Education and Rehabilitation Sciences, University of Zagreb, Zagreb, Croatia
- Evie A Malaia
- Laboratory for Neuroscience of Dynamic Cognition, Department of Communicative Disorders, College of Arts and Sciences, University of Alabama, Tuscaloosa, AL, United States
- Marina Milković
- Laboratory for Sign Language and Deaf Culture Research, Faculty of Education and Rehabilitation Sciences, University of Zagreb, Zagreb, Croatia
4
Bradley C, Malaia EA, Siskind JM, Wilbur RB. Visual form of ASL verb signs predicts non-signer judgment of transitivity. PLoS One 2022; 17:e0262098. PMID: 35213558; PMCID: PMC8880903; DOI: 10.1371/journal.pone.0262098.
Abstract
Longstanding cross-linguistic work on event representations in spoken languages has argued for a robust mapping between an event's underlying representation and its syntactic encoding, such that, for example, the agent of an event is most frequently mapped to subject position. In the same vein, sign languages have long been claimed to construct signs that visually represent their meaning, i.e., signs that are iconic. Experimental research on linguistic parameters such as plurality and aspect has recently shown some of them to be visually universal in sign, i.e., recognized by non-signers as well as signers, and has identified specific visual cues that achieve this mapping. However, little is known about what makes action representations in sign language iconic, or whether and how the mapping of underlying event representations to syntactic encoding is visually apparent in the form of a verb sign. To this end, we asked what visual cues non-signers may use in evaluating transitivity (i.e., the number of entities involved in an action). To do this, we correlated non-signer judgments about the transitivity of verb signs from American Sign Language (ASL) with phonological characteristics of these signs. We found that non-signers did not accurately guess the transitivity of the signs, but that non-signer transitivity judgments can nevertheless be predicted from the signs' visual characteristics. Further, non-signers home in on just those features that code event representations across sign languages, despite interpreting them differently. This suggests the existence of visual biases that underlie detection of linguistic categories, such as transitivity, which may uncouple from underlying conceptual representations over time in mature sign languages due to lexicalization processes.
Affiliation(s)
- Chuck Bradley
- Department of Linguistics, Purdue University, West Lafayette, Indiana, United States of America
- Evie A. Malaia
- Department of Communicative Disorders, University of Alabama, Tuscaloosa, Alabama, United States of America
- Jeffrey Mark Siskind
- Department of Linguistics, Purdue University, West Lafayette, Indiana, United States of America
- Elmore Family School of Electrical and Computer Engineering, Purdue University, West Lafayette, Indiana, United States of America
- Department of Speech, Language, and Hearing Sciences, Purdue University, West Lafayette, Indiana, United States of America
- Ronnie B. Wilbur
- Department of Linguistics, Purdue University, West Lafayette, Indiana, United States of America
- Department of Speech, Language, and Hearing Sciences, Purdue University, West Lafayette, Indiana, United States of America
5
Abstract
When people listen to speech, neural activity tracks the entropy fluctuation in the acoustic envelope of the signal. This signal-based entrainment has been shown to be the basis of speech parsing and comprehension. In this electroencephalography (EEG) study, we compute sign language users' cortical tracking of changes in the visual dynamics of the communicative signal in time-forward videos of sign language and their time-reversed counterparts, and assess the relative contribution of response frequencies between 0.2 and 12.4 Hz to comprehension using a machine learning approach to brain-state classification. Lower frequencies of the EEG response (0.2-4 Hz) yield 100% classification accuracy, while information about cortical tracking of the visual envelope at higher frequencies is less informative. This suggests that signers rely on lower-frequency visual data, such as the envelope of the visual signal, for sign language comprehension. In the context of real-time language processing, given the speed of comprehension responses, this suggests that fluent signers employ a predictive processing heuristic based on sign language knowledge.
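The band-limited analysis this abstract describes rests on isolating the low-frequency component of a response before asking how informative it is. A minimal sketch, assuming a standard Butterworth band-pass and a toy two-component signal in place of real EEG (sampling rate and amplitudes are arbitrary choices of the example):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 250.0  # assumed sampling rate, Hz
t = np.arange(0, 10, 1 / fs)

# Toy "EEG": a slow 2 Hz component (tracking the visual envelope)
# plus a faster 20 Hz component standing in for higher-band activity.
slow = np.sin(2 * np.pi * 2 * t)
eeg = slow + 0.5 * np.sin(2 * np.pi * 20 * t)

# Isolate the 0.2-4 Hz band the abstract identifies as most informative
sos = butter(4, [0.2, 4.0], btype="band", fs=fs, output="sos")
low_band = sosfiltfilt(sos, eeg)

# The filtered trace recovers the slow component almost exactly
r = np.corrcoef(low_band, slow)[0, 1]
print(round(r, 2))
```

Features computed from such band-limited traces (power, envelope coherence) are what a classifier would then compare across forward and reversed stimuli.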
6
Krebs J, Roehm D, Wilbur RB, Malaia EA. Age of sign language acquisition has lifelong effect on syntactic preferences in sign language users. Int J Behav Dev 2021; 45:397-408. PMID: 34690387; DOI: 10.1177/0165025420958193.
Abstract
Acquisition of natural language has been shown to fundamentally impact both one's ability to use the first language and the ability to learn subsequent languages later in life. Sign languages offer a unique perspective on this issue, because Deaf signers receive access to signed input at varying ages. The majority acquire sign language in (early) childhood, but some learn sign language later, a situation that is drastically different from that of spoken language acquisition. To investigate the effect of age of sign language acquisition and its potential interplay with age in signers, we examined grammatical acceptability ratings and reaction time measures in a group of Deaf signers (age range: 28-58 years) with early (0-3 years) or later (4-7 years) acquisition of sign language in childhood. Behavioral responses to grammatical word order variations (subject-object-verb vs. object-subject-verb) were examined in sentences that included: (1) simple sentences, (2) topicalized sentences, and (3) sentences involving manual classifier constructions, uniquely characteristic of sign languages. Overall, older participants responded more slowly. Age of acquisition had subtle effects on acceptability ratings, whereby the direction of the effect depended on the specific linguistic structure.
Affiliation(s)
- Julia Krebs
- Research Group Neurobiology of Language, Department of Linguistics, University of Salzburg, Erzabt-Klotz-Straße 1, 5020 Salzburg, Austria
- Centre for Cognitive Neuroscience (CCNS), University of Salzburg, Salzburg, Austria
- Dietmar Roehm
- Research Group Neurobiology of Language, Department of Linguistics, University of Salzburg, Erzabt-Klotz-Straße 1, 5020 Salzburg, Austria
- Centre for Cognitive Neuroscience (CCNS), University of Salzburg, Salzburg, Austria
- Ronnie B Wilbur
- Linguistics Program and Department of Speech, Language, and Hearing Sciences, Purdue University, Lyles-Porter Hall, West Lafayette, IN 47907, USA
- Evie A Malaia
- Department of Communicative Disorders, University of Alabama, Tuscaloosa, AL 35404, USA
7
Malaia EA, Krebs J, Roehm D, Wilbur RB. Age of acquisition effects differ across linguistic domains in sign language: EEG evidence. Brain Lang 2020; 200:104708. PMID: 31698097; PMCID: PMC6934356; DOI: 10.1016/j.bandl.2019.104708.
Abstract
One of the key questions in the study of human language acquisition is the extent to which the development of neural processing networks for different components of language is modulated by exposure to linguistic stimuli. Sign languages offer a unique perspective on this issue, because prelingually Deaf children who receive access to complex linguistic input later in life provide a window into brain maturation in the absence of language, and subsequent neuroplasticity of neurolinguistic networks during late language learning. While the duration of sensitive periods for acquisition of linguistic subsystems (sound, vocabulary, and syntactic structure) is well established on the basis of L2 acquisition in spoken language, for sign languages the relative timelines for development of neural processing networks for linguistic sub-domains are unknown. We examined neural responses of a group of Deaf signers who received access to signed input at varying ages to three linguistic phenomena at the levels of classifier signs, syntactic structure, and information structure. The amplitude of the N400 response to the marked word order condition negatively correlated with the age of acquisition for syntax and information structure, indicating increased cognitive load in these conditions. Additionally, the combination of behavioral and neural data suggested that late learners preferentially relied on classifiers over word order for meaning extraction. This suggests that late acquisition of sign language significantly increases cognitive load during analysis of syntax and information structure, but not word-level meaning.
Affiliation(s)
- Evie A Malaia
- Department of Communicative Disorders, University of Alabama, Speech and Hearing Clinic, 700 Johnny Stallings Drive, Tuscaloosa, AL 35401, USA
- Julia Krebs
- Research Group Neurobiology of Language, Department of Linguistics, University of Salzburg, Erzabt-Klotz-Straße 1, 5020 Salzburg, Austria
- Centre for Cognitive Neuroscience (CCNS), University of Salzburg, Erzabt-Klotz-Straße 1, 5020 Salzburg, Austria
- Dietmar Roehm
- Research Group Neurobiology of Language, Department of Linguistics, University of Salzburg, Erzabt-Klotz-Straße 1, 5020 Salzburg, Austria
- Centre for Cognitive Neuroscience (CCNS), University of Salzburg, Erzabt-Klotz-Straße 1, 5020 Salzburg, Austria
- Ronnie B Wilbur
- Department of Linguistics, Purdue University, Lyles-Porter Hall, West Lafayette, IN 47907-2122, USA
- Department of Speech, Language, and Hearing Sciences, Purdue University, Lyles-Porter Hall, West Lafayette, IN 47907-2122, USA
8
Malaia EA, Ahn S, Rubchinsky LL. Dysregulation of temporal dynamics of synchronous neural activity in adolescents on autism spectrum. Autism Res 2019; 13:24-31. PMID: 31702116; DOI: 10.1002/aur.2219.
Abstract
Autism spectrum disorder is increasingly understood to be based on atypical signal transfer among multiple interconnected networks in the brain. Relative temporal patterns of neural activity have been shown to underlie both the altered neurophysiology and the altered behaviors in a variety of neurogenic disorders. We assessed brain network dynamics variability in autism spectrum disorder (ASD) using measures of synchronization (phase-locking) strength, and of the timing of synchronization and desynchronization of neural activity (desynchronization ratio), across frequency bands of resting-state electroencephalography (EEG). Our analysis indicated that frontoparietal synchronization is higher in ASD but with more short periods of desynchronization. It also indicated that the relationship between the properties of neural synchronization and behavior differs between ASD and typically developing populations. Recent theoretical studies suggest that neural networks with a high desynchronization ratio have increased sensitivity to inputs. Our results point to the potential significance of this phenomenon in the autistic brain. This sensitivity may disrupt the production of appropriate neural and behavioral responses to external stimuli. Cognitive processes dependent on the integration of activity from multiple networks may, as a result, be particularly vulnerable to disruption. Autism Res 2020, 13: 24-31. © 2019 International Society for Autism Research, Wiley Periodicals, Inc. LAY SUMMARY: Parts of the brain can work together by synchronizing the activity of the neurons. We recorded the electrical activity of the brain in adolescents with autism spectrum disorder and compared the recording to that of their peers without the diagnosis. We found that in participants with autism there were many very short periods of non-synchronized activity between the frontal and parietal parts of the brain. Mathematical models show that a brain system with this kind of activity is very sensitive to external events.
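The two quantities this study contrasts, overall phase-locking strength and brief desynchronization episodes, can be illustrated on synthetic oscillations. This is a toy sketch of the general Hilbert-phase approach; the frequencies, slip duration, and locking threshold below are arbitrary choices of the example, not the authors' parameters:

```python
import numpy as np
from scipy.signal import hilbert

fs = 500.0
t = np.arange(0, 4, 1 / fs)

# Two 10 Hz oscillations; the second briefly drifts out of phase mid-recording,
# mimicking a short desynchronization episode between two regions.
phase_slip = np.where((t > 1.8) & (t < 2.2), np.pi, 0.0)
x = np.sin(2 * np.pi * 10 * t)
y = np.sin(2 * np.pi * 10 * t + phase_slip)

# Instantaneous phase difference via the Hilbert transform
dphi = np.angle(hilbert(x)) - np.angle(hilbert(y))
wrapped = np.angle(np.exp(1j * dphi))        # wrap to (-pi, pi]
locked = np.abs(wrapped) < np.pi / 4         # "synchronized" samples

# Phase-locking value (mean resultant vector length) and desynchronized fraction
plv = float(np.abs(np.mean(np.exp(1j * dphi))))
desync_fraction = float(1.0 - locked.mean())
print(round(plv, 2), round(desync_fraction, 2))
```

A summary statistic like PLV hides *when* the signals decouple; counting short unlocked stretches, as in the desynchronization-ratio idea, recovers that timing information.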
Affiliation(s)
- Evie A Malaia
- Department of Communicative Disorders, University of Alabama, Tuscaloosa, Alabama
- Sungwoo Ahn
- Department of Mathematics, East Carolina University, Greenville, North Carolina
- Leonid L Rubchinsky
- Department of Mathematical Sciences, Indiana University - Purdue University Indianapolis, Indianapolis, Indiana
- Stark Neurosciences Research Institute, Indiana University School of Medicine, Indianapolis, Indiana
9
Malaia EA, Wilbur RB. Syllable as a unit of information transfer in linguistic communication: The entropy syllable parsing model. Wiley Interdiscip Rev Cogn Sci 2019; 11:e1518. PMID: 31505710; DOI: 10.1002/wcs.1518.
Abstract
To understand human language, both spoken and signed, the listener or viewer has to parse the continuous external signal into components. The question of what those components are (e.g., phrases, words, sounds, phonemes?) has been a subject of long-standing debate. We re-frame this question to ask: what properties of the incoming visual or auditory signal are indispensable to eliciting language comprehension? In this review, we assess the phenomenon of language parsing from a modality-independent viewpoint. We show that the interplay between dynamic changes in the entropy of the signal and neural entrainment to the signal at the syllable level (4-5 Hz range) is causally related to language comprehension in both speech and sign language. This modality-independent Entropy Syllable Parsing model of the linguistic signal offers insight into the mechanisms of language processing, suggesting common neurocomputational bases for syllables in speech and sign language. This article is categorized under: Linguistics > Linguistic Theory; Linguistics > Language in Mind and Brain; Linguistics > Computational Models of Language; Psychology > Language.
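The model's core quantity, dynamic change in the entropy of the signal, can be illustrated with a windowed Shannon-entropy computation over a toy amplitude envelope. This is an illustrative sketch only; the window length and bin count are arbitrary, and real analyses would operate on acoustic or optical-flow envelopes rather than the synthetic signals here:

```python
import numpy as np

def window_entropy(signal, win, bins=16):
    """Shannon entropy (bits) of amplitude values in consecutive windows."""
    ent = []
    for start in range(0, len(signal) - win + 1, win):
        hist, _ = np.histogram(signal[start:start + win], bins=bins)
        p = hist / hist.sum()
        p = p[p > 0]  # drop empty bins before taking the log
        ent.append(float(-(p * np.log2(p)).sum()))
    return np.array(ent)

rng = np.random.default_rng(1)
steady = np.ones(1000)                  # static envelope: no information flow
dynamic = rng.uniform(0, 1, size=1000)  # rapidly varying envelope

steady_H = window_entropy(steady, win=200)
dynamic_H = window_entropy(dynamic, win=200)
print(steady_H.max(), dynamic_H.min())
```

A static envelope yields zero entropy in every window, while a varying one approaches the log of the bin count; the model's claim is that fluctuations in this quantity, at syllable rate, are what neural entrainment locks onto.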
Affiliation(s)
- Evie A Malaia
- Department of Communicative Disorders, University of Alabama, Tuscaloosa, Alabama
- Ronnie B Wilbur
- Department of Speech, Language, and Hearing Sciences, College of Health and Human Sciences, Purdue University, West Lafayette, Indiana
- Linguistics, School of Interdisciplinary Studies, College of Liberal Arts, Purdue University, West Lafayette, Indiana