1. Nematova S, Zinszer B, Morlet T, Morini G, Petitto LA, Jasińska KK. Impact of ASL Exposure on Spoken Phonemic Discrimination in Adult CI Users: A Functional Near-Infrared Spectroscopy Study. Neurobiology of Language 2024; 5:553-588. [PMID: 38939730] [PMCID: PMC11210937] [DOI: 10.1162/nol_a_00143]
Abstract
We examined the impact of exposure to a signed language (American Sign Language, or ASL) at different ages on the neural systems that support spoken language phonemic discrimination in deaf individuals with cochlear implants (CIs). Deaf CI users (N = 18, age = 18-24 yrs) who were exposed to a signed language at different ages and hearing individuals (N = 18, age = 18-21 yrs) completed a phonemic discrimination task in a spoken native (English) and non-native (Hindi) language while undergoing functional near-infrared spectroscopy neuroimaging. Behaviorally, deaf CI users who received a CI early versus later in life showed better English phonemic discrimination, although phonemic discrimination remained poor relative to that of hearing individuals. Importantly, the age of exposure to ASL was not related to phonemic discrimination. Neurally, early-life language exposure, irrespective of modality, was associated with greater activation of left-hemisphere language areas critically involved in phonological processing during the phonemic discrimination task in deaf CI users. In particular, early exposure to ASL was associated with increased activation in the left hemisphere's classic language regions for native versus non-native phonemic contrasts in deaf CI users who received a CI later in life. For deaf CI users who received a CI early in life, the age of exposure to ASL was not related to neural activation during phonemic discrimination. Together, the findings suggest that early signed language exposure does not negatively impact spoken language processing in deaf CI users, but may instead offset the negative effects of the language deprivation that deaf children without any signed language exposure experience prior to implantation. This empirical evidence aligns with and lends support to recent perspectives on the impact of ASL exposure in the context of CI use.
Affiliation(s)
- Shakhlo Nematova
- Department of Linguistics and Cognitive Science, University of Delaware, Newark, DE, USA
- Benjamin Zinszer
- Department of Psychology, Swarthmore College, Swarthmore, PA, USA
- Thierry Morlet
- Nemours Children’s Hospital, Delaware, Wilmington, DE, USA
- Giovanna Morini
- Department of Communication Sciences and Disorders, University of Delaware, Newark, DE, USA
- Laura-Ann Petitto
- Brain and Language Center for Neuroimaging, Gallaudet University, Washington, DC, USA
- Kaja K. Jasińska
- Department of Applied Psychology and Human Development, University of Toronto, Toronto, Ontario, Canada
2. Emmorey K. Ten things you should know about sign languages. Current Directions in Psychological Science 2023; 32:387-394. [PMID: 37829330] [PMCID: PMC10568932] [DOI: 10.1177/09637214231173071]
Abstract
The ten things you should know about sign languages are the following. 1) Sign languages have phonology and poetry. 2) Sign languages vary in their linguistic structure and family history, but share some typological features due to their shared biology (manual production). 3) Although there are many similarities between perceiving and producing speech and sign, the biology of language can impact aspects of processing. 4) Iconicity is pervasive in sign language lexicons and can play a role in language acquisition and processing. 5) Deaf and hard-of-hearing children are at risk for language deprivation. 6) Signers gesture when signing. 7) Sign language experience enhances some visual-spatial skills. 8) The same left-hemisphere brain regions support both spoken and sign languages, but some neural regions are specific to sign language. 9) Bimodal bilinguals can code-blend rather than code-switch, which alters the nature of language control. 10) The emergence of new sign languages reveals patterns of language creation and evolution. These discoveries reveal how language modality does and does not affect language structure, acquisition, processing, use, and representation in the brain. Sign languages provide unique insights into human language that cannot be obtained by studying spoken languages alone.
Affiliation(s)
- Karen Emmorey
- School of Speech, Language and Hearing Sciences, San Diego State University
3. Özdemir O, Baytaş İM, Akarun L. Multi-cue temporal modeling for skeleton-based sign language recognition. Front Neurosci 2023; 17:1148191. [PMID: 37090797] [PMCID: PMC10113557] [DOI: 10.3389/fnins.2023.1148191]
Abstract
Sign languages are visual languages used as the primary communication medium of the Deaf community. Signs comprise manual and non-manual articulators such as hand shapes, upper-body movement, and facial expressions. Sign Language Recognition (SLR) aims to learn spatial and temporal representations from videos of signs. Most SLR studies focus on manual features, often extracted from the shape of the dominant hand or the entire frame. However, facial expressions combined with hand and body gestures may also play a significant role in discriminating the context represented in sign videos. In this study, we propose an isolated SLR framework based on Spatial-Temporal Graph Convolutional Networks (ST-GCNs) and Multi-Cue Long Short-Term Memory networks (MC-LSTMs) to exploit multi-articulatory (e.g., body, hands, and face) information for recognizing sign glosses. We train an ST-GCN model to learn representations from the upper body and hands. Meanwhile, spatial embeddings of hand-shape and facial-expression cues are extracted from Convolutional Neural Networks (CNNs) pre-trained on large-scale hand and facial-expression datasets. Thus, the proposed framework coupling ST-GCNs with MC-LSTMs for multi-articulatory temporal modeling can provide insights into the contribution of each visual Sign Language (SL) cue to recognition performance. To evaluate the proposed framework, we conducted extensive analyses on two Turkish SL benchmark datasets with different linguistic properties, BosphorusSign22k and AUTSL. While we obtained recognition performance comparable to the skeleton-based state of the art, we observed that incorporating multiple visual SL cues improves recognition performance, especially for sign classes where multi-cue information is vital. The code is available at: https://github.com/ogulcanozdemir/multicue-slr.
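To make the multi-cue design concrete, below is a minimal PyTorch sketch of the core idea described in the abstract: one temporal encoder per visual cue (skeleton, hand shape, face), fused before gloss classification. This is not the authors' released implementation (see the linked repository for that); the feature dimensions, cue names, late fusion by concatenation, and the 744-gloss output size (the BosphorusSign22k class count) are illustrative assumptions.

```python
import torch
import torch.nn as nn

class MultiCueSLR(nn.Module):
    """Per-cue LSTMs over pre-extracted frame features, fused for gloss classification."""

    def __init__(self, skel_dim=256, hand_dim=512, face_dim=512,
                 hidden=512, num_glosses=744):
        super().__init__()
        # One temporal encoder per cue: skeleton features (e.g., from a
        # pre-trained ST-GCN) and CNN embeddings of hand and face crops.
        # All sizes here are illustrative, not the paper's settings.
        self.skel_lstm = nn.LSTM(skel_dim, hidden, batch_first=True)
        self.hand_lstm = nn.LSTM(hand_dim, hidden, batch_first=True)
        self.face_lstm = nn.LSTM(face_dim, hidden, batch_first=True)
        # Late fusion: concatenate each cue's final hidden state.
        self.classifier = nn.Linear(3 * hidden, num_glosses)

    def forward(self, skel, hand, face):
        # Each input has shape (batch, time, feature_dim).
        _, (h_skel, _) = self.skel_lstm(skel)
        _, (h_hand, _) = self.hand_lstm(hand)
        _, (h_face, _) = self.face_lstm(face)
        fused = torch.cat([h_skel[-1], h_hand[-1], h_face[-1]], dim=-1)
        return self.classifier(fused)  # (batch, num_glosses) gloss logits

# Toy forward pass: 2 clips of 50 frames with the assumed feature sizes.
model = MultiCueSLR()
logits = model(torch.randn(2, 50, 256),   # skeleton cue
               torch.randn(2, 50, 512),   # hand-shape cue
               torch.randn(2, 50, 512))   # facial-expression cue
print(logits.shape)  # torch.Size([2, 744])
```

Dropping one cue's encoder (and shrinking the classifier input accordingly) yields the kind of per-cue ablation the abstract describes for gauging each cue's contribution to recognition performance.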
Affiliation(s)
- Oğulcan Özdemir
- Perceptual Intelligence Laboratory, Computer Engineering Department, Boğaziçi University, Istanbul, Türkiye
4. Giovannelli F, Borgheresi A, Lucidi G, Squitieri M, Gavazzi G, Suppa A, Berardelli A, Viggiano MP, Cincotta M. Language-related motor facilitation in Italian Sign Language signers. Cereb Cortex 2023. [PMID: 36646456] [DOI: 10.1093/cercor/bhac536]
Abstract
Linguistic tasks facilitate corticospinal excitability, as revealed by increased motor evoked potentials (MEPs) induced by transcranial magnetic stimulation (TMS) in the dominant hand. This modulation of primary motor cortex (M1) excitability may reflect the relationship between speech and gestures. It is conceivable that this modulation of cortical excitability is rearranged in hearing individuals who use a sign language. The aim of this study was to evaluate the effect of spoken language tasks on M1 excitability in a group of hearing signers. Ten hearing Italian Sign Language (LIS) signers and 16 non-signing healthy controls participated. Single-pulse TMS was applied over the hand area of either M1 at baseline and during different tasks: (i) reading aloud, (ii) silent reading, (iii) oral movements, (iv) syllabic phonation, and (v) looking at meaningless non-letter strings. Overall, M1 excitability during the linguistic and non-linguistic tasks was higher in the LIS group than in the control group. In the LIS group, MEPs were significantly larger during reading aloud, silent reading, and non-verbal oral movements, regardless of the hemisphere. These results suggest that hearing signers show a different modulation of the functional connectivity between the speech-related brain network and the motor system.
Affiliation(s)
- Fabio Giovannelli
- Department of Neuroscience, Psychology, Drug Research and Child's Health (NEUROFARBA), Section of Psychology, University of Florence, Florence 50135, Italy
- Alessandra Borgheresi
- Unit of Neurology of Florence, Central Tuscany Local Health Authority, Florence 50143, Italy
- Giulia Lucidi
- Unit of Neurology of Florence, Central Tuscany Local Health Authority, Florence 50143, Italy
- Martina Squitieri
- Unit of Neurology of Florence, Central Tuscany Local Health Authority, Florence 50143, Italy
- Gioele Gavazzi
- Department of Neuroscience, Psychology, Drug Research and Child's Health (NEUROFARBA), Section of Psychology, University of Florence, Florence 50135, Italy
- Antonio Suppa
- Department of Human Neurosciences, Sapienza University of Rome, Rome 00185, Italy; IRCCS Neuromed, Pozzilli (IS) 86077, Italy
- Alfredo Berardelli
- Department of Human Neurosciences, Sapienza University of Rome, Rome 00185, Italy; IRCCS Neuromed, Pozzilli (IS) 86077, Italy
- Maria Pia Viggiano
- Department of Neuroscience, Psychology, Drug Research and Child's Health (NEUROFARBA), Section of Psychology, University of Florence, Florence 50135, Italy
- Massimo Cincotta
- Unit of Neurology of Florence, Central Tuscany Local Health Authority, Florence 50143, Italy
5. Lee B, Secora K. Fingerspelling and Its Role in Translanguaging. Languages (Basel) 2022; 7:278. [PMID: 37920277] [PMCID: PMC10622114] [DOI: 10.3390/languages7040278]
Abstract
Fingerspelling is a critical component of many sign languages. This manual representation of orthographic code is one key way in which signers engage in translanguaging, drawing on all of their linguistic and semiotic resources to support communication. Translanguaging in bimodal bilinguals is unique because it involves drawing from languages in different modalities, namely a signed language like American Sign Language and a spoken language like English (or its written form). Fingerspelling can be seen as a unique product of the unified linguistic system that translanguaging theories posit, as it blends features of both sign and print. The goals of this paper are twofold: to integrate existing research on fingerspelling in order to characterize it as a cognitive-linguistic phenomenon, and to discuss the role of fingerspelling in translanguaging and communication. We first review and synthesize research from linguistics and cognitive neuroscience to summarize our current understanding of fingerspelling: its production, comprehension, and acquisition. We then discuss how fingerspelling relates to translanguaging theories and how it can be incorporated into translanguaging practices to support literacy and other communication goals.
Affiliation(s)
- Brittany Lee
- Psychological Sciences, University of Connecticut, Storrs, CT 06269, USA
- Kristen Secora
- Theory and Practice in Teacher Education, University of Tennessee Knoxville, Knoxville, TN 37996, USA