1
Nematova S, Zinszer B, Morlet T, Morini G, Petitto LA, Jasińska KK. Impact of ASL Exposure on Spoken Phonemic Discrimination in Adult CI Users: A Functional Near-Infrared Spectroscopy Study. Neurobiology of Language (Cambridge, Mass.) 2024; 5:553-588. [PMID: 38939730] [PMCID: PMC11210937] [DOI: 10.1162/nol_a_00143]
Abstract
We examined the impact of exposure to a signed language (American Sign Language, or ASL) at different ages on the neural systems that support spoken language phonemic discrimination in deaf individuals with cochlear implants (CIs). Deaf CI users (N = 18, age = 18-24 yrs) who were exposed to a signed language at different ages and hearing individuals (N = 18, age = 18-21 yrs) completed a phonemic discrimination task in a spoken native (English) and non-native (Hindi) language while undergoing functional near-infrared spectroscopy neuroimaging. Behaviorally, deaf CI users who received a CI early versus later in life showed better English phonemic discrimination, although phonemic discrimination remained poor relative to that of hearing individuals. Importantly, the age of exposure to ASL was not related to phonemic discrimination. Neurally, early-life language exposure, irrespective of modality, was associated with greater neural activation of left-hemisphere language areas critically involved in phonological processing during the phonemic discrimination task in deaf CI users. In particular, early exposure to ASL was associated with increased activation in the left hemisphere's classic language regions for native versus non-native language phonemic contrasts for deaf CI users who received a CI later in life. For deaf CI users who received a CI early in life, the age of exposure to ASL was not related to neural activation during phonemic discrimination. Together, the findings suggest that early signed language exposure does not negatively impact spoken language processing in deaf CI users, but may instead potentially offset the negative effects of language deprivation that deaf children without any signed language exposure experience prior to implantation. This empirical evidence aligns with and lends support to recent perspectives regarding the impact of ASL exposure in the context of CI usage.
Affiliation(s)
- Shakhlo Nematova
- Department of Linguistics and Cognitive Science, University of Delaware, Newark, DE, USA
- Benjamin Zinszer
- Department of Psychology, Swarthmore College, Swarthmore, PA, USA
- Thierry Morlet
- Nemours Children’s Hospital, Delaware, Wilmington, DE, USA
- Giovanna Morini
- Department of Communication Sciences and Disorders, University of Delaware, Newark, DE, USA
- Laura-Ann Petitto
- Brain and Language Center for Neuroimaging, Gallaudet University, Washington, DC, USA
- Kaja K. Jasińska
- Department of Applied Psychology and Human Development, University of Toronto, Toronto, Ontario, Canada
2
Wainscott SD, Spurgin K. Differentiating Language for Students Who Are Deaf or Hard of Hearing: A Practice-Informed Framework for Auditory and Visual Supports. Lang Speech Hear Serv Sch 2024; 55:473-494. [PMID: 38324382] [DOI: 10.1044/2023_lshss-22-00088]
Abstract
PURPOSE Speech-language pathologists (SLPs) serving students who are d/Deaf or hard of hearing (Deaf/hh) and their deaf education counterparts must navigate complexities in language that include modalities that are spoken or signed and proficiency, which is often compromised. This tutorial describes a practice-informed framework that conceptualizes and organizes a continuum of auditory and visual language supports with the aim of informing the practice of the SLP whose training is more inherently focused on spoken language alone, as well as the practice of the teacher of the Deaf/hh (TDHH) who may focus more on visual language supports. METHOD This product resulted from a need within interdisciplinary graduate programs for SLPs and TDHHs. Both cohorts required preparation to address the needs of diverse language learners who are Deaf/hh. This tutorial includes a brief review of the challenges in developing language proficiency and describes the complexities of effective service delivery. The process of developing a practice-informed framework for language supports is summarized, referencing established practices in auditory-based and visually based methodologies, identifying parallel practices, and summarizing the practices within a multitiered framework called the Framework of Differentiated Practices for Language Support. Recommendations for use of the framework include guidance on the identification of a student's language modality/ies and proficiency to effectively match students' needs and target supports. CONCLUSIONS An examination of established practices in language supports across auditory and visual modalities reveals clear parallels that can be organized into a tiered framework. The result is a reference for differentiating language for the interdisciplinary school team. The parallel supports also provide evidence of similarities in practice across philosophical boundaries as professionals work collaboratively.
Affiliation(s)
- Sarah D Wainscott
- Department of Communication Sciences and Oral Health, Texas Woman's University, Denton
- Kelsey Spurgin
- Department of Special Education, Ball State University, Muncie, IN
3
Humphries T, Mathur G, Napoli DJ, Padden C, Rathmann C. Deaf Children Need Rich Language Input from the Start: Support in Advising Parents. Children (Basel, Switzerland) 2022; 9:1609. [PMID: 36360337] [PMCID: PMC9688581] [DOI: 10.3390/children9111609]
Abstract
Bilingual bimodalism is a great benefit to deaf children at home and in schooling. Deaf signing children perform better overall than non-signing deaf children, regardless of whether they use a cochlear implant. Raising a deaf child in a speech-only environment can carry cognitive and psycho-social risks that may have lifelong adverse effects. For children born deaf, or who become deaf in early childhood, we recommend comprehensible multimodal language exposure and engagement in joint activity with parents and friends to assure age-appropriate first-language acquisition. Accessible visual language input should begin as close to birth as possible. Hearing parents will need timely and extensive support; thus, we propose that, upon the birth of a deaf child and through the preschool years, among other things, the family needs an adult deaf presence in the home for several hours every day to be a linguistic model, to guide the family in taking sign language lessons, to show the family how to make spoken language accessible to their deaf child, and to be an encouraging liaison to deaf communities. While such a support program will be complicated and challenging to implement, it is far less costly than the harm of linguistic deprivation.
Affiliation(s)
- Tom Humphries
- Department of Communication, University of California at San Diego, La Jolla, CA 92093, USA
- Gaurav Mathur
- Department of Linguistics, Gallaudet University, Washington, DC 20002, USA
- Donna Jo Napoli
- Department of Linguistics, Swarthmore College, Swarthmore, PA 19081, USA
- Carol Padden
- Division of Social Sciences, Department of Communication and Dean, University of California at San Diego, La Jolla, CA 92093, USA
- Christian Rathmann
- Department of Deaf Studies and Sign Language Interpreting, Humboldt-Universität zu Berlin, 10019 Berlin, Germany
4
Morin O. The puzzle of ideography. Behav Brain Sci 2022; 46:e233. [PMID: 36254782] [DOI: 10.1017/s0140525x22002801]
Abstract
An ideography is a general-purpose code made of pictures that do not encode language, which can be used autonomously - not just as a mnemonic prop - to encode information on a broad range of topics. Why are viable ideographies so hard to find? I contend that self-sufficient graphic codes need to be narrowly specialized. Writing systems are only an apparent exception: At their core, they are notations of a spoken language. Even if they also encode nonlinguistic information, they are useless to someone who lacks linguistic competence in the encoded language or a related one. The versatility of writing is thus vicarious: Writing borrows it from spoken language. Why is it so difficult to build a fully generalist graphic code? The most widespread answer points to a learnability problem. We possess specialized cognitive resources for learning spoken language, but lack them for graphic codes. I argue in favor of a different account: What is difficult about graphic codes is not so much learning or teaching them as getting every user to learn and teach the same code. This standardization problem does not affect spoken or signed languages as much. Those are based on cheap and transient signals, allowing for easy online repairing of miscommunication, and require face-to-face interactions where the advantages of common ground are maximized. Graphic codes lack these advantages, which makes them smaller in size and more specialized.
Affiliation(s)
- Olivier Morin
- Max Planck Institute for Geoanthropology, Minds and Traditions Research Group, Jena, Germany; https://www.shh.mpg.de/94549/themintgroup
- Institut Jean Nicod, CNRS, ENS, PSL University, Paris, France
5
Predictors of Word and Text Reading Fluency of Deaf Children in Bilingual Deaf Education Programmes. Languages 2022. [DOI: 10.3390/languages7010051]
Abstract
Reading continues to be a challenging task for most deaf children. Bimodal bilingual education creates a supportive environment that stimulates deaf children’s learning through the use of sign language. However, it is still unclear how exposure to sign language might contribute to improving reading ability. Here, we investigate the relative contribution of several cognitive and linguistic variables to the development of word and text reading fluency in deaf children in bimodal bilingual education programmes. The participants of this study were 62 school-aged (8 to 10 years old at the start of the 3-year study) deaf children who took part in bilingual education (using Dutch and Sign Language of the Netherlands) and 40 age-matched hearing children. We assessed vocabulary knowledge in speech and sign, phonological awareness in speech and sign, receptive fingerspelling ability, and short-term memory (STM) at time 1 (T1). At times 2 (T2) and 3 (T3), we assessed word and text reading fluency. We found that (1) speech-based vocabulary strongly predicted word and text reading at T2 and T3, (2) fingerspelling ability was a strong predictor of word and text reading fluency at T2 and T3, (3) speech-based phonological awareness predicted word reading accuracy at T2 and T3 but did not predict text reading fluency, and (4) fingerspelling and STM predicted word reading latency at T2 while sign-based phonological awareness predicted this outcome measure at T3. These results suggest that fingerspelling may have an important function in facilitating the construction of orthographical/phonological representations of printed words for deaf children and strengthening word decoding and recognition abilities.
6
Berteletti I, Kimbley SE, Sullivan SJ, Quandt LC, Miyakoshi M. Different Language Modalities Yet Similar Cognitive Processes in Arithmetic Fact Retrieval. Brain Sci 2022; 12:145. [PMID: 35203909] [PMCID: PMC8870392] [DOI: 10.3390/brainsci12020145]
Abstract
Does experience with signed language impact the neurocognitive processes recruited by adults solving arithmetic problems? We used event-related potentials (ERPs) to identify the components that are modulated by operation type and problem size in Deaf American Sign Language (ASL) native signers and in hearing English-speaking participants. Participants were presented with single-digit subtraction and multiplication problems in a delayed verification task. Problem size was manipulated in small and large problems with an additional extra-large subtraction condition to equate the overall magnitude of large multiplication problems. Results show comparable behavioral results and similar ERP dissociations across groups. First, an early operation type effect is observed around 200 ms post-problem onset, suggesting that both groups have a similar attentional differentiation for processing subtraction and multiplication problems. Second, for the posterior-occipital component between 240 ms and 300 ms, subtraction problems show a similar modulation with problem size in both groups, suggesting that only subtraction problems recruit quantity-related processes. Control analyses exclude possible perceptual and cross-operation magnitude-related effects. These results are the first evidence that the two operation types rely on distinct cognitive processes within the ASL native signing population and that they are equivalent to those observed in the English-speaking population.
Affiliation(s)
- Ilaria Berteletti
- Ph.D. in Educational Neuroscience Program, Gallaudet University, Washington, DC 20002, USA
- Sarah E. Kimbley
- Ph.D. in Educational Neuroscience Program, Gallaudet University, Washington, DC 20002, USA
- SaraBeth J. Sullivan
- Ph.D. in Educational Neuroscience Program, Gallaudet University, Washington, DC 20002, USA
- Lorna C. Quandt
- Ph.D. in Educational Neuroscience Program, Gallaudet University, Washington, DC 20002, USA
- Makoto Miyakoshi
- Swartz Center for Computational Neuroscience, Institute for Neural Computation, University of California, San Diego, CA 92093, USA
7
Berent I, de la Cruz-Pavía I, Brentari D, Gervain J. Infants differentially extract rules from language. Sci Rep 2021; 11:20001. [PMID: 34625613] [PMCID: PMC8501030] [DOI: 10.1038/s41598-021-99539-8]
Abstract
Infants readily extract linguistic rules from speech. Here, we ask whether this advantage extends to linguistic stimuli that do not rely on the spoken modality. To address this question, we first examine whether infants can differentially learn rules from linguistic signs. We show that, despite having no previous experience with a sign language, six-month-old infants can extract the reduplicative rule (AA) from dynamic linguistic signs, and the neural response to reduplicative linguistic signs differs from reduplicative visual controls, matched for the dynamic spatiotemporal properties of signs. We next demonstrate that the brain response for reduplicative signs is similar to the response to reduplicative speech stimuli. Rule learning, then, apparently depends on the linguistic status of the stimulus, not its sensory modality. These results suggest that infants are language-ready. They possess a powerful rule system that is differentially engaged by all linguistic stimuli, speech or sign.
Affiliation(s)
- Irene de la Cruz-Pavía
- Integrative Neuroscience and Cognition Center, Université de Paris & CNRS, Paris, France
- University of the Basque Country UPV/EHU, Vitoria-Gasteiz, Spain
- Basque Foundation for Science Ikerbasque, Bilbao, Spain
- Judit Gervain
- Integrative Neuroscience and Cognition Center, Université de Paris & CNRS, Paris, France
- University of Padua, Padua, Italy
8
Deng Q, Tong SX. Linguistic but Not Cognitive Weaknesses in Deaf or Hard-of-Hearing Poor Comprehenders. Journal of Deaf Studies and Deaf Education 2021; 26:351-362. [PMID: 33824969] [DOI: 10.1093/deafed/enab006]
Abstract
This study examined the reading comprehension profiles, and the related linguistic and cognitive skills, of 146 Chinese students in Grades 3-9 who are deaf or hard of hearing (d/Dhh). Employing a rigorous regression approach, the current study identified 19 unexpected poor comprehenders, 24 expected average comprehenders, and 16 unexpected good comprehenders. Compared to the expected average and unexpected good comprehenders, the unexpected poor comprehenders performed worse in broad linguistic skills (i.e., Chinese sign language comprehension, vocabulary, and segmental and suprasegmental phonological awareness), but their weaknesses in cognitive skills (i.e., working memory and executive function) were less severe. These findings suggest that weak linguistic skills are possible indicators of reading comprehension difficulties for students who are d/Dhh.
Affiliation(s)
- Qinli Deng
- Human Communication, Development, and Information Sciences, Faculty of Education, The University of Hong Kong, Pokfulam, Hong Kong SAR, China
- Shelley Xiuli Tong
- Human Communication, Development, and Information Sciences, Faculty of Education, The University of Hong Kong, Pokfulam, Hong Kong SAR, China
9
Napoli DJ, Ferrara C. Correlations Between Handshape and Movement in Sign Languages. Cogn Sci 2021; 45:e12944. [PMID: 34018242] [PMCID: PMC8243953] [DOI: 10.1111/cogs.12944]
Abstract
Sign language phonological parameters are somewhat analogous to phonemes in spoken language. Unlike phonemes, however, there is little linguistic literature arguing that these parameters interact at the sublexical level. This situation raises the question of whether such interaction in spoken language phonology is an artifact of the modality or whether sign language phonology has not been approached in a way that allows one to recognize sublexical parameter interaction. We present three studies in favor of the latter alternative: a shape-drawing study with deaf signers from six countries, an online dictionary study of American Sign Language, and a study of selected lexical items across 34 sign languages. These studies show that, once iconicity is considered, handshape and movement parameters interact at the sublexical level. Thus, consideration of iconicity makes transparent similarities in grammar across both modalities, allowing us to maintain certain key findings of phonological theory as evidence of cognitive architecture.
10
Costello B, Caffarra S, Fariña N, Duñabeitia JA, Carreiras M. Reading without phonology: ERP evidence from skilled deaf readers of Spanish. Sci Rep 2021; 11:5202. [PMID: 33664324] [PMCID: PMC7933439] [DOI: 10.1038/s41598-021-84490-5]
Abstract
Reading typically involves phonological mediation, especially for transparent orthographies with a regular letter to sound correspondence. In this study we ask whether phonological coding is a necessary part of the reading process by examining prelingually deaf individuals who are skilled readers of Spanish. We conducted two EEG experiments exploiting the pseudohomophone effect, in which nonwords that sound like words elicit phonological encoding during reading. The first, a semantic categorization task with masked priming, resulted in modulation of the N250 by pseudohomophone primes in hearing but not in deaf readers. The second, a lexical decision task, confirmed the pattern: hearing readers had increased errors and an attenuated N400 response for pseudohomophones compared to control pseudowords, whereas deaf readers did not treat pseudohomophones any differently from pseudowords, either behaviourally or in the ERP response. These results offer converging evidence that skilled deaf readers do not rely on phonological coding during visual word recognition. Furthermore, the finding demonstrates that reading can take place in the absence of phonological activation, and we speculate about the alternative mechanisms that allow these deaf individuals to read competently.
Affiliation(s)
- Brendan Costello
- Basque Center on Cognition, Brain and Language, Paseo Mikeletegi, 69, 20009, Donostia-San Sebastián, Spain
- Sendy Caffarra
- Basque Center on Cognition, Brain and Language, Paseo Mikeletegi, 69, 20009, Donostia-San Sebastián, Spain
- Division of Developmental-Behavioral Pediatrics, Stanford University School of Medicine, Stanford University, Stanford, CA, USA
- Stanford University Graduate School of Education, Stanford, CA, USA
- Noemi Fariña
- Basque Center on Cognition, Brain and Language, Paseo Mikeletegi, 69, 20009, Donostia-San Sebastián, Spain
- Departamento de Psicología de la Educación y Psicobiología, Facultad de Educación, Universidad Internacional de La Rioja, Logroño, Spain
- Jon Andoni Duñabeitia
- Centro de Ciencia Cognitiva - C3, Universidad Nebrija, Madrid, Spain
- Department of Language and Culture, The Arctic University of Norway, Tromsö, Norway
- Manuel Carreiras
- Basque Center on Cognition, Brain and Language, Paseo Mikeletegi, 69, 20009, Donostia-San Sebastián, Spain
- Departamento de Lengua Vasca y Comunicación, UPV/EHU, Bilbao, Spain
- Basque Foundation for Science, Bilbao, Spain
11
Crume PK, Lederberg A, Schick B. Language and Reading Comprehension Abilities of Elementary School-Aged Deaf Children. Journal of Deaf Studies and Deaf Education 2021; 26:159-169. [PMID: 33207367] [DOI: 10.1093/deafed/enaa033]
Abstract
Bilingual education programs for deaf children have long asserted that American Sign Language (ASL) is a better language of instruction than English-like signing because ASL is a natural language. However, English-like signing may be a useful bridge to reading English. In the present study, we tested 32 deaf children between third and sixth grade to assess their capacity to use ASL or English-like signing in nine different language and reading tasks. Our results found that there was no significant difference in the deaf children's ability to comprehend narratives in ASL compared to when they were told in English-like signing. Additionally, language abilities in ASL and English-like signing were strongly related to each other and to reading. Reading was also strongly related to fingerspelling. Our results suggest that there may be a role in literacy instruction for English-like signing as a supplement to ASL in deaf bilingual schools.
Affiliation(s)
- Peter K Crume
- Georgia State University, Department of Learning Sciences
- Amy Lederberg
- Georgia State University, Department of Learning Sciences
- Brenda Schick
- Department of Speech, Language, and Hearing Sciences, University of Colorado-Boulder
12
Holcomb L, Wolbers K. Effects of ASL Rhyme and Rhythm on Deaf Children's Engagement Behavior and Accuracy in Recitation: Evidence from a Single Case Design. Children (Basel) 2020; 7:256. [PMID: 33255943] [PMCID: PMC7761000] [DOI: 10.3390/children7120256]
Abstract
Early language acquisition is critical for lifelong success in language, literacy, and academic studies. There is much to explore about the specific techniques used to foster deaf children’s language development. The use of rhyme and rhythm in American Sign Language (ASL) remains understudied. This single-subject study compared the effects of rhyming and non-rhyming ASL stories on the engagement behavior and accuracy in recitation of five deaf children between three and six years old in an ASL/English bilingual early childhood classroom. With the application of an alternating treatment design with an initial baseline, this is the first experimental research of its kind on ASL rhyme and rhythm. Baseline data revealed the lack of rhyme awareness in children and informed the decision to provide intervention as a condition to examine the effects of explicit handshape rhyme awareness instruction on increasing engagement behavior and accuracy in recitation. There were four phases in this study: baseline, handshape rhyme awareness intervention, alternating treatments, and preference. Visual analysis and total mean and mean difference procedures were employed to analyze results. The findings indicate that recitation skills in young deaf children can be supported through interventions utilizing ASL rhyme and rhythm supplemented with ASL phonological awareness activities. A potential case of sign language impairment was identified in a native signer, creating a new line of inquiry in using ASL rhyme, rhythm, and phonological awareness to detect atypical language patterns.
13
Greene-Woods A, Delgado N. Addressing the big picture: Deaf children and reading assessments. Psychology in the Schools 2020. [DOI: 10.1002/pits.22285]
Affiliation(s)
- Ashley Greene‐Woods
- Department of Deaf Studies and Deaf Education, Lamar University, Beaumont, Texas
- Natalie Delgado
- Department of Deaf Studies and Deaf Education, Lamar University, Beaumont, Texas
14
Neild R, Clark MD. Assessment in deaf education: Perspectives and experiences linking it all together. Psychology in the Schools 2020. [DOI: 10.1002/pits.22337]
Affiliation(s)
- Raschelle Neild
- Department of Special Education, Ball State University, Muncie, Indiana
- M. Diane Clark
- Department of Deaf Studies and Deaf Education, Lamar University, Beaumont, Texas
15
Allen TE, Morere DA. Early visual language skills affect the trajectory of literacy gains over a three-year period of time for preschool aged deaf children who experience signing in the home. PLoS One 2020; 15:e0229591. [PMID: 32106252] [PMCID: PMC7046216] [DOI: 10.1371/journal.pone.0229591]
Abstract
Previous research has established a correlation between literacy skills and sign language skills among deaf children raised in signing families, but little research has examined the impact of early signing skills on the rate of growth of emergent literacy in early childhood. A subset of data was extracted from a larger dataset containing national longitudinal data from a three-year investigation of early literacy development of deaf children who were between the ages of three and six at the outset of the study. Selection criteria for inclusion in this limited sample included: 1) being rated as having little or no access to spoken language and 2) being raised in homes in which signs were regularly used as a means of communication (N = 56). Our purpose was twofold: 1) to examine and describe the trajectories of growth in letter and word identification skill for this sample in relation to the participants’ initial ages; and 2) to assess the degree to which the presence of deaf parents in the home (DoD) and the receptive American Sign Language (ASL) skills of the participants impacted both the level of emerging print literacy and its rate of growth over the three year period. We hypothesized that both the presence of a deaf parent in the home and the acquisition of ASL skills, a strong native language, would contribute to both the overall letter and word identification skills and to the rates of growth of this skill over time. Results indicated that having a deaf parent did, indeed, impact emergent literacy attainment, but its effect was rendered nonsignificant when ASL skill was taken into consideration. Possession of stronger ASL skills, whether or not the children had deaf parents, contributed significantly to both the levels and rate of growth. The findings contribute to the body of work that emphasizes the importance of early language skills (spoken or signed) to later academic success and dispel the myth that deaf children with deaf parents have exclusive access to the acquisition of these skills.
Affiliation(s)
- Thomas E. Allen
- Science of Learning Center on Visual Language and Visual Learning, Gallaudet University, Washington, DC, United States of America
- PhD in Educational Neuroscience Program, Gallaudet University, Washington, DC, United States of America
- Donna A. Morere
- Science of Learning Center on Visual Language and Visual Learning, Gallaudet University, Washington, DC, United States of America
- Department of Psychology, Gallaudet University, Washington, DC, United States of America
16
Brooks R, Singleton JL, Meltzoff AN. Enhanced gaze-following behavior in Deaf infants of Deaf parents. Dev Sci 2019; 23:e12900. [PMID: 31486168] [DOI: 10.1111/desc.12900]
Abstract
Gaze following plays a role in parent-infant communication and is a key mechanism by which infants acquire information about the world from social input. Gaze following in Deaf infants has been understudied. Twelve Deaf infants of Deaf parents (DoD) who had native exposure to American Sign Language (ASL) were gender-matched and age-matched (±7 days) to 60 spoken-language hearing control infants. Results showed that the DoD infants had significantly higher gaze-following scores than the hearing infants. We hypothesize that in the absence of auditory input, and with support from ASL-fluent Deaf parents, infants become attuned to visual-communicative signals from other people, which engenders increased gaze following. These findings underscore the need to revise the 'deficit model' of deafness. Deaf infants immersed in natural sign language from birth are better at understanding the signals and identifying the referential meaning of adults' gaze behavior compared to hearing infants not exposed to sign language. Broader implications for theories of social-cognitive development are discussed. A video abstract of this article can be viewed at https://youtu.be/QXCDK_CUmAI.
Affiliation(s)
- Rechele Brooks
- Institute for Learning & Brain Sciences, University of Washington, Seattle, Washington
- Jenny L Singleton
- Department of Linguistics, University of Texas at Austin, Austin, Texas
- Andrew N Meltzoff
- Institute for Learning & Brain Sciences, University of Washington, Seattle, Washington
17
Lederberg AR, Branum-Martin L, Webb MY, Schick B, Antia S, Easterbrooks SR, Connor CM. Modality and Interrelations Among Language, Reading, Spoken Phonological Awareness, and Fingerspelling. Journal of Deaf Studies and Deaf Education 2019; 24:408-423. [PMID: 31089729] [DOI: 10.1093/deafed/enz011]
Abstract
Better understanding of the mechanisms underlying early reading skills can lead to improved interventions. Hence, the purpose of this study was to examine multivariate associations among reading, language, spoken phonological awareness, and fingerspelling abilities for three groups of deaf and hard-of-hearing (DHH) beginning readers: those who were acquiring only spoken English (n = 101), those who were visual learners and acquiring sign (n = 131), and those who were acquiring both (n = 104). Children were enrolled in kindergarten, first, or second grade. Within-group and between-group confirmatory factor analysis showed that there were both similarities and differences in the abilities that underlie reading in these three groups. For all groups, reading abilities related to both language and the ability to manipulate the sublexical features of words. However, the groups differed on whether these constructs were based on visual or spoken language. Our results suggest that there are alternative means to learning to read. Whereas all DHH children learning to read rely on the same fundamental abilities of language and phonological processing, the modality, levels, and relations among these abilities differ.
18
Deaf Children as ‘English Learners’: The Psycholinguistic Turn in Deaf Education. Education Sciences 2019. [DOI: 10.3390/educsci9020133]
Abstract
The purpose of this literature review is to present the arguments in support of conceptualizing deaf children as ‘English Learners’, to explore the educational implications of such conceptualizations, and to suggest directions for future inquiry. Three ways of interpreting the label ‘English Learner’ in relationship to deaf children are explored: (1) as applied to deaf children whose native language is American Sign Language; (2) as applied to deaf children whose parents speak a language other than English; and (3) as applied to deaf children who have limited access to the spoken English used by their parents. Recent research from the fields of linguistics and neuroscience on the effects of language deprivation is presented and conceptualized within a framework that we refer to as the psycholinguistic turn in deaf education. The implications for developing the literacy skills of signing deaf children are explored, particularly around the theoretical construct of a ‘bridge’ between sign language proficiency and print-based literacy. Finally, promising directions for future inquiry are presented.
19
Stone A, Petitto LA, Bosworth R. Visual Sonority Modulates Infants' Attraction to Sign Language. Language Learning and Development 2017; 14:130-148. [PMID: 32952461] [PMCID: PMC7500480] [DOI: 10.1080/15475441.2017.1404468]
Abstract
The infant brain may be predisposed to identify perceptually salient cues that are common to both signed and spoken languages. Recent theory based on spoken languages has advanced sonority as one of these potential language acquisition cues. Using a preferential looking paradigm with an infrared eye tracker, we explored visual attention of hearing 6- and 12-month-olds with no sign language experience as they watched fingerspelling stimuli that either conformed to high sonority (well-formed) or low sonority (ill-formed) values, which are relevant to syllabic structure in signed language. Younger babies showed highly significant looking preferences for well-formed, high sonority fingerspelling, while older babies showed no preference for either fingerspelling variant, despite showing a strong preference in a control condition. The present findings suggest babies possess a sensitivity to specific sonority-based contrastive cues at the core of human language structure that is subject to perceptual narrowing, irrespective of language modality (visual or auditory), shedding new light on universals of early language learning.
Affiliation(s)
- Adam Stone
- Department of Psychology, University of California, San Diego, La Jolla, CA
- Laura-Ann Petitto
- PhD in Educational Neuroscience Program, Gallaudet University, Washington, DC
- NSF Science of Learning Center, Visual Language and Visual Learning (VL2), Gallaudet University, Washington, DC
- Department of Psychology, Gallaudet University, Washington, DC
- Rain Bosworth
- Department of Psychology, University of California, San Diego, La Jolla, CA
20
Williams JT, Stone A, Newman SD. Operationalization of Sign Language Phonological Similarity and its Effects on Lexical Access. Journal of Deaf Studies and Deaf Education 2017; 22:303-315. [PMID: 28575411] [PMCID: PMC6364953] [DOI: 10.1093/deafed/enx014]
Abstract
The cognitive mechanisms underlying sign language lexical access remain largely unknown. This study investigated whether phonological similarity facilitates lexical retrieval in sign languages using measures from a new lexical database for American Sign Language. Additionally, it aimed to determine which similarity metric best fits the present data in order to inform theories of how phonological similarity is constructed within the lexicon and to aid in the operationalization of phonological similarity in sign language. Sign repetition latencies and accuracy were obtained when native signers were asked to reproduce a sign displayed on a computer screen. Results indicated that, as predicted, phonological similarity facilitated repetition latencies and accuracy as long as there were no strict constraints on the type of sublexical features that overlapped. The data converged to suggest that one similarity measure, MaxD, defined as the overlap of any 4 sublexical features, likely best represents mechanisms of phonological similarity in the mental lexicon. Together, these data suggest that lexical access in sign language is facilitated by phonologically similar lexical representations in memory, and that the optimal operationalization places liberal constraints on overlap of 4 out of 5 sublexical features, similar to the majority of extant definitions in the literature.