1. Karabüklü S, Wood S, Bradley C, Wilbur RB, Malaia EA. Effect of sign language learning on temporal resolution of visual attention. J Vis 2025;25:3. PMID: 39752178; PMCID: PMC11706239; DOI: 10.1167/jov.25.1.3.
Abstract
The visual environment of sign language users is markedly distinct in its spatiotemporal parameters compared to that of non-signers. Although the importance of temporal and spectral resolution in the auditory modality for language development is well established, the spectrotemporal parameters of visual attention necessary for sign language comprehension remain less understood. This study investigates visual temporal resolution in learners of American Sign Language (ASL) at various stages of acquisition to determine how experience with sign language affects perceptual sampling. Using a flicker paradigm, we assessed the accuracy of identifying out-of-phase visual flicker objects at frequencies up to 60 Hz. Our findings reveal that third-semester ASL learners show increased accuracy in detecting high-frequency flicker, indicating enhanced temporal resolution. Interestingly, as learners achieve higher proficiency in ASL, their perceptual sampling reverts to typical levels, likely because of a shift toward predictive processing mechanisms in sign language comprehension. These results suggest that the temporal resolution of visual attention is malleable and can be influenced by the process of learning a visual language.
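A minimal sketch of the counterphase flicker logic described above, assuming a 240 Hz display and square-wave luminance modulation (the study's actual stimulus parameters and code are not given here). Note that presenting a 60 Hz flicker target requires a refresh rate well above 60 Hz:

```python
import numpy as np

REFRESH_HZ = 240   # assumed high-refresh display; a 60 Hz flicker needs > 60 Hz refresh
DURATION_S = 1.0

def square_flicker(freq_hz, phase_cycles=0.0, refresh=REFRESH_HZ, duration=DURATION_S):
    """Frame-by-frame on/off luminance for a square-wave flicker stimulus."""
    t = np.arange(int(refresh * duration)) / refresh
    return ((freq_hz * t + phase_cycles) % 1.0 < 0.5).astype(int)

# Reference objects flicker in phase; the target is shifted by half a cycle (180 deg).
for freq in (15, 30, 60):
    ref = square_flicker(freq)
    target = square_flicker(freq, phase_cycles=0.5)
    assert np.all(ref + target == 1)   # target is "on" exactly when the reference is "off"
    print(f"{freq:2d} Hz: {REFRESH_HZ / freq:.0f} frames per cycle")
```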
Affiliation(s)
- Serpil Karabüklü, Department of Linguistics, University of Chicago, Chicago, IL, USA (https://orcid.org/0000-0002-5221-4575)
- Sandra Wood, Department of Linguistics, University of Southern Maine, Portland, ME, USA
- Chuck Bradley, Department of Linguistics, Purdue University, West Lafayette, IN, USA (https://orcid.org/0000-0002-9695-1024)
- Ronnie B Wilbur, Department of Linguistics, Purdue University, West Lafayette, IN, USA (https://orcid.org/0000-0001-7081-9351)
- Evie A Malaia, Department of Communicative Disorders, University of Alabama, Tuscaloosa, AL, USA (https://orcid.org/0000-0002-4700-0257)
2. Stringer C, Cooley F, Saunders E, Emmorey K, Schotter ER. Deaf readers use leftward information to read more efficiently: Evidence from eye tracking. Q J Exp Psychol (Hove) 2024;77:2098-2110. PMID: 38326329; DOI: 10.1177/17470218241232407.
Abstract
Little is known about how information to the left of fixation impacts reading and how it may help to integrate what has been read into the context of the sentence. To better understand the role of this leftward information and how it may be beneficial during reading, we compared the sizes of the leftward span for reading-matched deaf signers (n = 32) and hearing adults (n = 40) using a gaze-contingent moving window paradigm with windows of 1, 4, 7, 10, and 13 characters to the left, as well as a no-window condition. All deaf participants were prelingually and profoundly deaf, used American Sign Language (ASL) as a primary means of communication, and were exposed to ASL before age eight. Analysis of reading rates indicated that deaf readers had a leftward span of 10 characters, compared to four characters for hearing readers, and the size of the span was positively related to reading comprehension ability for deaf but not hearing readers. These findings suggest that deaf readers may engage in continued word processing of information obtained to the left of fixation, making reading more efficient, and showing a qualitatively different reading process than hearing readers.
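The leftward moving-window manipulation lends itself to a compact illustration. Below is a sketch of how a gaze-contingent display might mask characters beyond the window; the mask character, space preservation, and function name are illustrative assumptions, not the study's implementation:

```python
def apply_left_window(text: str, fixation_idx: int, window: int, mask: str = "x") -> str:
    """Mask every character more than `window` characters to the left of fixation.

    Characters at and to the right of fixation stay visible. Spaces are kept so
    word boundaries remain visible, a common choice in moving-window displays.
    """
    return "".join(
        mask if (i < fixation_idx - window and ch != " ") else ch
        for i, ch in enumerate(text)
    )

sentence = "The quick brown fox jumps over the lazy dog"
for w in (1, 4, 7, 10, 13):
    print(f"left window {w:2d}: {apply_left_window(sentence, fixation_idx=20, window=w)}")
```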
Affiliation(s)
- Casey Stringer, Department of Psychology, University of South Florida, Tampa, FL, USA
- Frances Cooley, Department of Psychology, University of South Florida, Tampa, FL, USA
- Emily Saunders, Department of Psychology, San Diego State University, San Diego, CA, USA
- Karen Emmorey, Department of Psychology, San Diego State University, San Diego, CA, USA
3. Emmorey K. Ten things you should know about sign languages. Curr Dir Psychol Sci 2023;32:387-394. PMID: 37829330; PMCID: PMC10568932; DOI: 10.1177/09637214231173071.
Abstract
The ten things you should know about sign languages are the following:
1) Sign languages have phonology and poetry.
2) Sign languages vary in their linguistic structure and family history, but share some typological features due to their shared biology (manual production).
3) Although there are many similarities between perceiving and producing speech and sign, the biology of language can impact aspects of processing.
4) Iconicity is pervasive in sign language lexicons and can play a role in language acquisition and processing.
5) Deaf and hard-of-hearing children are at risk for language deprivation.
6) Signers gesture when signing.
7) Sign language experience enhances some visual-spatial skills.
8) The same left-hemisphere brain regions support both spoken and sign languages, but some neural regions are specific to sign language.
9) Bimodal bilinguals can code-blend, rather than code-switch, which alters the nature of language control.
10) The emergence of new sign languages reveals patterns of language creation and evolution.
These discoveries reveal how language modality does and does not affect language structure, acquisition, processing, use, and representation in the brain. Sign languages provide unique insights into human language that cannot be obtained by studying spoken languages alone.
Affiliation(s)
- Karen Emmorey, School of Speech, Language and Hearing Sciences, San Diego State University
4. Lammert JM, Levine AT, Koshkebaghi D, Butler BE. Sign language experience has little effect on face and biomotion perception in bimodal bilinguals. Sci Rep 2023;13:15328. PMID: 37714887; PMCID: PMC10504335; DOI: 10.1038/s41598-023-41636-x.
Abstract
Sensory and language experience can affect brain organization and domain-general abilities. For example, D/deaf individuals show superior visual perception compared to hearing controls in several domains, including the perception of faces and peripheral motion. While these enhancements may result from sensory loss and subsequent neural plasticity, they may also reflect experience using a visual-manual language, like American Sign Language (ASL), where signers must process moving hand signs and facial cues simultaneously. In an effort to disentangle these concurrent sensory experiences, we examined how learning sign language influences visual abilities by comparing bimodal bilinguals (i.e., sign language users with typical hearing) and hearing non-signers. Bimodal bilinguals and hearing non-signers completed online psychophysical measures of face matching and biological motion discrimination. No significant group differences were observed across these two tasks, suggesting that sign language experience is insufficient to induce perceptual advantages in typical-hearing adults. However, ASL proficiency (but not years of experience or age of acquisition) was found to predict performance on the motion perception task among bimodal bilinguals. Overall, the results presented here highlight a need for more nuanced study of how linguistic environments, sensory experience, and cognitive functions impact broad perceptual processes and underlying neural correlates.
Affiliation(s)
- Jessica M Lammert, Department of Psychology, University of Western Ontario, Western Interdisciplinary Research Building Room 6126, London, ON N6A 5C2, Canada; Western Institute for Neuroscience, University of Western Ontario, London, Canada
- Alexandra T Levine, Department of Psychology, University of Western Ontario, Western Interdisciplinary Research Building Room 6126, London, ON N6A 5C2, Canada; Western Institute for Neuroscience, University of Western Ontario, London, Canada
- Dursa Koshkebaghi, Undergraduate Neuroscience Program, University of Western Ontario, London, Canada
- Blake E Butler, Department of Psychology, University of Western Ontario, Western Interdisciplinary Research Building Room 6126, London, ON N6A 5C2, Canada; Western Institute for Neuroscience, University of Western Ontario, London, Canada; National Centre for Audiology, University of Western Ontario, London, Canada; Children's Health Research Institute, Lawson Health Research, London, Canada
5. Bosworth RG, Hwang SO, Corina DP. Visual attention for linguistic and non-linguistic body actions in non-signing and native signing children. Front Psychol 2022;13:951057. PMID: 36160576; PMCID: PMC9505519; DOI: 10.3389/fpsyg.2022.951057.
Abstract
Evidence from adult studies of deaf signers supports a dissociation between the neural systems involved in processing visual linguistic and non-linguistic body actions. How and when this specialization arises is poorly understood. Visual attention to these forms is likely to change with age and to be affected by prior language experience. The present study used eye-tracking methodology with infants and children as they freely viewed alternating video sequences of lexical American Sign Language (ASL) signs and non-linguistic body actions (self-directed grooming actions and object-directed pantomime). In Experiment 1, we quantified fixation patterns using an area of interest (AOI) approach and calculated face preference index (FPI) values to assess developmental differences between 6- and 11-month-old hearing infants. Both groups were from monolingual English-speaking homes with no prior exposure to sign language. Six-month-olds attended to the signer's face for grooming, but for mimes and signs they were drawn to the "articulatory space" where the hands and arms primarily fall. Eleven-month-olds, by contrast, showed similar attention to the face for all body action types. We interpret this as an early visual language sensitivity that diminishes with age, just before the child's first birthday. In Experiment 2, we contrasted 18 hearing monolingual English-speaking children (mean age 4.8 years) with 13 hearing children of deaf adults (CODAs; mean age 5.7 years) whose primary language at home was ASL. Native signing children showed a significantly greater face attentional bias than non-signing children for ASL signs, but not for grooming or mimes. These visual attention patterns, contingent on age (in infants) and language experience (in children), may be related both to linguistic specialization over time and to the emerging awareness of communicative gestural acts.
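The face preference index can be illustrated as a simple contrast of AOI dwell times. This is one plausible formulation, bounded in [-1, 1]; the study's exact formula may differ:

```python
def face_preference_index(face_ms: float, other_ms: float) -> float:
    """Contrast in [-1, 1]: positive values mean more dwell time on the face AOI,
    negative values mean more on the competing AOI (e.g., articulatory space)."""
    total = face_ms + other_ms
    if total == 0:
        raise ValueError("no fixation time recorded in either AOI")
    return (face_ms - other_ms) / total

# Invented example: an infant spends more time on articulatory space than the face.
print(face_preference_index(face_ms=1200, other_ms=2800))   # -0.4
```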
Affiliation(s)
- Rain G Bosworth, NTID PLAY Lab, National Technical Institute for the Deaf, Rochester Institute of Technology, Rochester, NY, USA
- So One Hwang, Center for Research in Language, University of California, San Diego, San Diego, CA, USA
- David P Corina, Center for Mind and Brain, University of California, Davis, Davis, CA, USA
6. Radošević T, Malaia EA, Milković M. Predictive processing in sign languages: A systematic review. Front Psychol 2022;13:805792. PMID: 35496220; PMCID: PMC9047358; DOI: 10.3389/fpsyg.2022.805792.
Abstract
The objective of this article was to review existing research to assess the evidence for predictive processing (PP) in sign language, the conditions under which it occurs, and the effects of language mastery (sign language as a first language, sign language as a second language, bimodal bilingualism) on the neural bases of PP. This review followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) framework. We searched peer-reviewed electronic databases (SCOPUS, Web of Science, PubMed, ScienceDirect, and EBSCOhost) and gray literature (dissertations in ProQuest). We also searched the reference lists of records selected for the review, and their forward citations, to identify all relevant publications. We screened records against five criteria (original work, peer-reviewed, published in English, research topic related to PP or neural entrainment, and human sign language processing). To reduce the risk of bias, the remaining two authors, who have expertise in sign language processing and a variety of research methods, reviewed the results; disagreements were resolved through extensive discussion. The final review included 7 records, of which 5 were published articles and 2 were dissertations. The reviewed records provide evidence for PP in signing populations, although the underlying mechanism in the visual modality is not clear. The reviewed studies addressed motor simulation proposals, the neural basis of PP, and the development of PP. All studies used dynamic sign stimuli, and most focused on semantic prediction. How one's sign language competence (L1 vs. L2 vs. bimodal bilingual) interacts with PP in the manual-visual modality remains unclear, primarily because of the scarcity of participants with varying degrees of language dominance. Evidence for PP in sign languages remains sparse, especially for frequency-based, phonetic (articulatory), and syntactic prediction. However, studies published to date indicate that Deaf native/native-like L1 signers predict linguistic information during sign language processing, suggesting that PP is an amodal property of language processing.
Affiliation(s)
- Tomislav Radošević, Laboratory for Sign Language and Deaf Culture Research, Faculty of Education and Rehabilitation Sciences, University of Zagreb, Zagreb, Croatia
- Evie A Malaia, Laboratory for Neuroscience of Dynamic Cognition, Department of Communicative Disorders, College of Arts and Sciences, University of Alabama, Tuscaloosa, AL, USA
- Marina Milković, Laboratory for Sign Language and Deaf Culture Research, Faculty of Education and Rehabilitation Sciences, University of Zagreb, Zagreb, Croatia
7. Quandt LC, Kubicek E, Willis A, Lamberton J. Enhanced biological motion perception in deaf native signers. Neuropsychologia 2021;161:107996. PMID: 34425145; DOI: 10.1016/j.neuropsychologia.2021.107996.
Abstract
We conducted two studies to test how deaf signed language users perceive biological motion. We created 18 Biological Motion point-light displays (PLDs) depicting everyday human actions and 18 Scrambled control PLDs. First, we conducted an online behavioral rating survey in which deaf and hearing raters identified the Biological Motion PLDs and rated how easy it was for them to identify the actions. Then, we conducted an EEG study in which Deaf Signers and Hearing Non-Signers watched both the Biological Motion and Scrambled PLDs, and we computed the time-frequency responses within the theta, alpha, and beta EEG rhythms. In the behavioral rating task, the deaf raters reported significantly less effort for identifying the Biological Motion PLDs across all stimuli. The EEG results showed that Deaf Signers exhibited theta, mu, and beta differentiation between Scrambled and Biological PLDs earlier and more consistently than Hearing Non-Signers. We conclude that native ASL users exhibit experience-dependent neuroplasticity in the domain of biological human motion perception.
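Time-frequency responses of this kind are commonly computed by convolving the EEG with complex Morlet wavelets. A self-contained numpy sketch follows; the cycle count, frequency range, and sampling rate are generic choices, not the study's parameters:

```python
import numpy as np

def morlet_power(signal, sfreq, freqs, n_cycles=7):
    """Time-frequency power via complex Morlet wavelet convolution.

    signal: 1-D EEG trace (one channel, one trial); sfreq: sampling rate in Hz.
    Returns an array of shape (len(freqs), len(signal)).
    """
    power = np.empty((len(freqs), len(signal)))
    for i, f in enumerate(freqs):
        sigma_t = n_cycles / (2 * np.pi * f)              # wavelet width in seconds
        t = np.arange(-4 * sigma_t, 4 * sigma_t, 1 / sfreq)
        wavelet = np.exp(2j * np.pi * f * t) * np.exp(-t**2 / (2 * sigma_t**2))
        wavelet /= np.sqrt(np.sum(np.abs(wavelet) ** 2))  # normalize to unit energy
        power[i] = np.abs(np.convolve(signal, wavelet, mode="same")) ** 2
    return power

# Example: 4-30 Hz covers the theta, alpha/mu, and beta rhythms at 250 Hz sampling.
sfreq = 250.0
sig = np.random.randn(int(2 * sfreq))                     # stand-in for one 2 s epoch
tfr = morlet_power(sig, sfreq, freqs=np.arange(4, 31))
```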
Affiliation(s)
- Lorna C Quandt, PhD in Educational Neuroscience Program, Gallaudet University, 800 Florida Ave NE, Washington, DC 20002, USA
- Emily Kubicek, PhD in Educational Neuroscience Program, Gallaudet University, 800 Florida Ave NE, Washington, DC 20002, USA
- Athena Willis, PhD in Educational Neuroscience Program, Gallaudet University, 800 Florida Ave NE, Washington, DC 20002, USA
- Jason Lamberton, VL2 Center, Gallaudet University, 800 Florida Ave NE, Washington, DC 20002, USA
8. Andin J, Holmer E, Schönström K, Rudner M. Working memory for signs with poor visual resolution: fMRI evidence of reorganization of auditory cortex in deaf signers. Cereb Cortex 2021;31:3165-3176. PMID: 33625498; PMCID: PMC8196262; DOI: 10.1093/cercor/bhaa400.
Abstract
Stimulus degradation adds to working memory load during speech processing. We investigated whether this applies to sign processing and, if so, whether the mechanism implicates secondary auditory cortex. We conducted an fMRI experiment where 16 deaf early signers (DES) and 22 hearing non-signers performed a sign-based n-back task with three load levels and stimuli presented at high and low resolution. We found decreased behavioral performance with increasing load and decreasing visual resolution, but the neurobiological mechanisms involved differed between the two manipulations and did so for both groups. Importantly, while the load manipulation was, as predicted, accompanied by activation in the frontoparietal working memory network, the resolution manipulation resulted in temporal and occipital activation. Furthermore, we found evidence of cross-modal reorganization in the secondary auditory cortex: DES had stronger activation and stronger connectivity between this and several other regions. We conclude that load and stimulus resolution have different neural underpinnings in the visual–verbal domain, which has consequences for current working memory models, and that for DES the secondary auditory cortex is involved in the binding of representations when task demands are low.
Affiliation(s)
- Josefine Andin, Department of Behavioural Science and Learning, Linköping University, Linköping, Sweden; Swedish Institute for Disability Research, Linnaeus Centre HEAD, Sweden
- Emil Holmer, Department of Behavioural Science and Learning, Linköping University, Linköping, Sweden; Swedish Institute for Disability Research, Linnaeus Centre HEAD, Sweden; Center for Medical Image Science and Visualization, Linköping, Sweden
- Mary Rudner, Department of Behavioural Science and Learning, Linköping University, Linköping, Sweden; Swedish Institute for Disability Research, Linnaeus Centre HEAD, Sweden; Center for Medical Image Science and Visualization, Linköping, Sweden
9. Bosworth RG, Stone A. Rapid development of perceptual gaze control in hearing native signing infants and children. Dev Sci 2021;24:e13086. PMID: 33484575; DOI: 10.1111/desc.13086.
Abstract
Children's gaze behavior reflects emergent linguistic knowledge and real-time language processing of speech, but little is known about naturalistic gaze behavior while watching signed narratives. Measuring gaze patterns in signing children could uncover how they master perceptual gaze control during a time of active language learning. Gaze patterns were recorded using a Tobii X120 eye tracker in 31 non-signing and 30 signing hearing infants (5-14 months) and children (2-8 years) as they watched signed narratives on video. Intelligibility of the signed narratives was manipulated by presenting them naturally (high intelligibility) and video-reversed (low intelligibility). This manipulation was used because it distorts semantic content while preserving most surface phonological features. We examined where participants looked using linear mixed models with Language Group (non-signing vs. signing) and Video Condition (Forward vs. Reversed) as factors, controlling for trial order. Non-signing infants and children showed a preference to look at the face as well as areas below the face, possibly because their gaze was drawn to the moving articulators in signing space. Native signing infants and children demonstrated resilient, face-focused gaze behavior. Moreover, their gaze behavior was unchanged for video-reversed signed narratives, similar to what was seen for adult native signers, possibly because they already have efficient, highly focused gaze behavior. The present study demonstrates that human perceptual gaze control is sensitive to visual language experience over the first year of life and emerges early, by 6 months of age. The results have implications for the critical importance of early visual language exposure for deaf infants. A video abstract of this article can be viewed at https://www.youtube.com/watch?v=2ahWUluFAAg.
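A model of the form described (Language Group x Video Condition, controlling for trial order, with a per-participant random intercept) can be written directly in statsmodels. The data frame and column names below are hypothetical:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Assumed layout: one row per trial, with columns
#   face_looking - proportion of trial time spent fixating the face AOI
#   group        - "signing" vs. "non-signing"
#   condition    - "forward" vs. "reversed"
#   trial_order  - trial position, entered as a covariate
#   subject      - participant ID (random-effects grouping factor)
df = pd.read_csv("gaze_trials.csv")   # hypothetical file name

model = smf.mixedlm(
    "face_looking ~ group * condition + trial_order",
    data=df,
    groups=df["subject"],             # random intercept per participant
)
print(model.fit().summary())
```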
Affiliation(s)
- Rain G Bosworth, National Technical Institute for the Deaf, Rochester Institute of Technology, Rochester, NY, USA
- Adam Stone, Department of Psychology, University of California, San Diego, CA, USA
10. Ford LKW, Borneman J, Krebs J, Malaia E, Ames B. Classification of visual comprehension based on EEG data using sparse optimal scoring. J Neural Eng 2021;18. PMID: 33440368; DOI: 10.1088/1741-2552/abdb3b.
Abstract
Objective: Understanding and differentiating brain states is an important task in cognitive neuroscience, with applications in health diagnostics (such as detecting neurotypical development vs. autism spectrum, or coma/vegetative state vs. locked-in state). Electroencephalography (EEG) analysis is a particularly useful tool for this task, as EEG data can detect millisecond-level changes in brain activity across a range of frequencies in a non-invasive and relatively inexpensive fashion. The goal of this study is to apply machine learning methods to EEG data in order to classify visual language comprehension across multiple participants.
Approach: 26-channel EEG was recorded from 24 Deaf participants while they watched videos of sign language sentences played in time-direct and time-reversed formats, to simulate interpretable vs. uninterpretable sign language, respectively. Sparse Optimal Scoring (SOS) was applied to the EEG data to classify which type of video a participant was watching, time-direct or time-reversed. The use of SOS also reduced the dimensionality of the features, improving model interpretability.
Main results: The analysis of frequency-domain EEG data yielded an average out-of-sample classification accuracy of 98.89%, far superior to the time-domain analysis. This high classification accuracy suggests the model can accurately identify common neural responses to visual linguistic stimuli.
Significance: This work determines necessary and sufficient neural features for classifying the high-level neural process of visual language comprehension across multiple participants.
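Sparse Optimal Scoring itself is not available in scikit-learn, but the flavor of the approach (a sparse linear classifier over frequency-domain EEG features, evaluated out of sample) can be sketched with an L1-penalized logistic regression as a stand-in; the data shapes and penalty strength are illustrative, and random stand-in data will score near chance:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Stand-in data: 200 epochs x (26 channels * 40 frequency bins) power features.
X = rng.standard_normal((200, 26 * 40))
y = rng.integers(0, 2, 200)            # 0 = time-reversed, 1 = time-direct

clf = make_pipeline(
    StandardScaler(),
    LogisticRegression(penalty="l1", solver="liblinear", C=0.1),
)
scores = cross_val_score(clf, X, y, cv=5)
print(f"out-of-sample accuracy: {scores.mean():.3f}")

# Inspect which features survive the sparsity penalty.
clf.fit(X, y)
coef = clf.named_steps["logisticregression"].coef_.ravel()
print(f"nonzero features: {np.count_nonzero(coef)} of {coef.size}")
```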
Affiliation(s)
- Joshua Borneman, Speech, Language, and Hearing Sciences, Purdue University, 715 Clinic Drive, West Lafayette, IN 47907-2122, USA
- Julia Krebs, Center for Cognitive Neuroscience, University of Salzburg, Hellbrunnerstraße 34, 5020 Salzburg, Austria
- Evguenia Malaia, Communicative Disorders, The University of Alabama, Box 870242, Tuscaloosa, AL 35487, USA
- Brendan Ames, Mathematics, The University of Alabama, Box 870350, Tuscaloosa, AL 35487-0350, USA
11. Schotter ER, Johnson E, Lieberman AM. The sign superiority effect: Lexical status facilitates peripheral handshape identification for deaf signers. J Exp Psychol Hum Percept Perform 2020;46:1397-1410. PMID: 32940493; PMCID: PMC7887614; DOI: 10.1037/xhp0000862.
Abstract
Deaf signers exhibit an enhanced ability to process information in their peripheral visual field, particularly the motion of dots or orientation of lines. Does their experience processing sign language, which involves identifying meaningful visual forms across the visual field, contribute to this enhancement? We tested whether deaf signers recruit language knowledge to facilitate peripheral identification through a sign superiority effect (i.e., better handshape discrimination in a sign than a pseudosign) and whether such a superiority effect might be responsible for perceptual enhancements relative to hearing individuals (i.e., a decrease in the effect of eccentricity on perceptual identification). Deaf signers and hearing signers or nonsigners identified the handshape presented within a static ASL fingerspelling letter (Experiment 1), fingerspelled sequence (Experiment 2), or sign or pseudosign (Experiment 3) presented in the near or far periphery. Accuracy on all tasks was higher for deaf signers than hearing nonsigning participants and was higher in the near than the far periphery. Across experiments, there were different patterns of interactions between hearing status and eccentricity depending on the type of stimulus; deaf signers showed an effect of eccentricity for static fingerspelled letters, fingerspelled sequences, and pseudosigns but not for ASL signs. In contrast, hearing nonsigners showed an effect of eccentricity for all stimuli. Thus, deaf signers recruit lexical knowledge to facilitate peripheral perceptual identification, and this perceptual enhancement may derive from their extensive experience processing visual linguistic information in the periphery during sign comprehension.
Affiliation(s)
- Emily Johnson, Department of Psychology, University of South Florida
- Amy M Lieberman, Wheelock College of Education and Human Development, Boston University
12. Bosworth R, Stone A, Hwang SO. Effects of video reversal on gaze patterns during signed narrative comprehension. J Deaf Stud Deaf Educ 2020;25:283-297. PMID: 32427289; PMCID: PMC7260695; DOI: 10.1093/deafed/enaa007.
Abstract
Language knowledge, age of acquisition (AoA), and stimulus intelligibility all affect gaze behavior for reading print, but it is unknown how these factors affect "sign-watching" among signers. This study investigated how these factors affect gaze behavior during sign language comprehension in 52 adult signers who acquired American Sign Language (ASL) at different ages. We examined gaze patterns and story comprehension in four subject groups who differ in hearing status and when they learned ASL (i.e. Deaf Early, Deaf Late, Hearing Late, and Hearing Novice). Participants watched signed stories in normal (high intelligibility) and video-reversed (low intelligibility) conditions. This video manipulation was used because it distorts word order and thus disrupts the syntax and semantic content of narratives, while preserving most surface phonological features of individual signs. Video reversal decreased story comprehension accuracy, and this effect was greater for those who learned ASL later in life. Reversal also was associated with more dispersed gaze behavior. Although each subject group had unique gaze patterns, the effect of video reversal on gaze measures was similar across all groups. Among fluent signers, gaze behavior was not correlated with AoA, suggesting that "efficient" sign watching can be quickly learnt even among signers exposed to signed language later in life.
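"More dispersed gaze" can be quantified in several ways; one simple summary, the mean distance of fixation points from their centroid, is sketched below (the study's exact dispersion measure may differ, and the coordinates are invented):

```python
import numpy as np

def gaze_dispersion(x, y):
    """Mean Euclidean distance of fixation points from their centroid (pixels)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return np.hypot(x - x.mean(), y - y.mean()).mean()

# Tightly face-focused gaze vs. more widely spread gaze, in screen coordinates.
focused = gaze_dispersion([640, 642, 638, 641], [300, 302, 299, 301])
spread = gaze_dispersion([640, 700, 580, 660], [300, 420, 360, 250])
print(f"focused: {focused:.1f} px, spread: {spread:.1f} px")
```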
Affiliation(s)
- Rain Bosworth, National Technical Institute for the Deaf, Rochester Institute of Technology, Rochester, NY, USA
- Adam Stone, Department of Psychology, University of California, San Diego
- So-One Hwang, Center for Research in Language, University of California, San Diego
13. Malaia EA, Krebs J, Roehm D, Wilbur RB. Age of acquisition effects differ across linguistic domains in sign language: EEG evidence. Brain Lang 2020;200:104708. PMID: 31698097; PMCID: PMC6934356; DOI: 10.1016/j.bandl.2019.104708.
Abstract
One of the key questions in the study of human language acquisition is the extent to which the development of neural processing networks for different components of language are modulated by exposure to linguistic stimuli. Sign languages offer a unique perspective on this issue, because prelingually Deaf children who receive access to complex linguistic input later in life provide a window into brain maturation in the absence of language, and subsequent neuroplasticity of neurolinguistic networks during late language learning. While the duration of sensitive periods of acquisition of linguistic subsystems (sound, vocabulary, and syntactic structure) is well established on the basis of L2 acquisition in spoken language, for sign languages, the relative timelines for development of neural processing networks for linguistic sub-domains are unknown. We examined neural responses of a group of Deaf signers who received access to signed input at varying ages to three linguistic phenomena at the levels of classifier signs, syntactic structure, and information structure. The amplitude of the N400 response to the marked word order condition negatively correlated with the age of acquisition for syntax and information structure, indicating increased cognitive load in these conditions. Additionally, the combination of behavioral and neural data suggested that late learners preferentially relied on classifiers over word order for meaning extraction. This suggests that late acquisition of sign language significantly increases cognitive load during analysis of syntax and information structure, but not word-level meaning.
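The reported analysis pairs per-participant N400 amplitudes with age of acquisition. A sketch of that pipeline with simulated data follows; the window bounds, sampling rate, and the simulated negative AoA effect are assumptions made for illustration:

```python
import numpy as np
from scipy.stats import pearsonr

def n400_mean_amplitude(erp, times, window=(0.3, 0.5)):
    """Mean amplitude in an assumed 300-500 ms N400 window.

    erp: averaged waveform for one participant/condition (microvolts);
    times: matching vector of sample times in seconds.
    """
    mask = (times >= window[0]) & (times <= window[1])
    return erp[mask].mean()

# Simulated data: later AoA shifts the waveform more negative, mirroring the
# reported negative correlation (all values here are invented).
times = np.arange(-0.2, 0.8, 1 / 500.0)
rng = np.random.default_rng(1)
aoa = rng.uniform(1, 16, 20)              # age of sign language acquisition, years
amps = np.array([
    n400_mean_amplitude(rng.standard_normal(times.size) - 0.3 * a, times)
    for a in aoa
])
r, p = pearsonr(aoa, amps)
print(f"r = {r:.2f}, p = {p:.3f}")
```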
Affiliation(s)
- Evie A Malaia, Department of Communicative Disorders, University of Alabama, Speech and Hearing Clinic, 700 Johnny Stallings Drive, Tuscaloosa, AL 35401, USA
- Julia Krebs, Research Group Neurobiology of Language, Department of Linguistics, University of Salzburg, Erzabt-Klotz-Straße 1, 5020 Salzburg, Austria; Centre for Cognitive Neuroscience (CCNS), University of Salzburg, Erzabt-Klotz-Straße 1, 5020 Salzburg, Austria
- Dietmar Roehm, Research Group Neurobiology of Language, Department of Linguistics, University of Salzburg, Erzabt-Klotz-Straße 1, 5020 Salzburg, Austria; Centre for Cognitive Neuroscience (CCNS), University of Salzburg, Erzabt-Klotz-Straße 1, 5020 Salzburg, Austria
- Ronnie B Wilbur, Department of Linguistics, Purdue University, Lyles-Porter Hall, West Lafayette, IN 47907-2122, USA; Department of Speech, Language, and Hearing Sciences, Purdue University, Lyles-Porter Hall, West Lafayette, IN 47907-2122, USA