1
Thierfelder P, Cai ZG, Huang S, Lin H. The Chinese lexicon of deaf readers: A database of character decisions and a comparison between deaf and hearing readers. Behav Res Methods 2024;56:5732-5753. [PMID: 38114882] [DOI: 10.3758/s13428-023-02305-z]
Abstract
We present a psycholinguistic study investigating lexical effects on simplified Chinese character recognition by deaf readers. Prior research suggests that deaf readers exhibit efficient orthographic processing and reduced reliance on speech-based phonology in word recognition compared to hearing readers. In this large-scale character decision study (25 participants, each evaluating 2500 real characters and 2500 pseudo-characters), we analyzed various factors influencing character recognition accuracy and speed in deaf readers. Deaf participants demonstrated greater accuracy and faster recognition when characters were more frequent, were acquired earlier, had more strokes, displayed higher orthographic complexity, were more imageable in reference, or were less concrete in reference. Comparison with a previous study of hearing readers revealed that the facilitative effect of frequency on character decision accuracy was stronger for deaf readers than for hearing readers. The effect of orthographic-phonological regularity differed significantly between the two groups, indicating that deaf readers rely more on orthographic structure and less on phonological information during character recognition. Notably, increased stroke counts (i.e., higher orthographic complexity) hindered hearing readers but facilitated recognition in deaf readers, suggesting that deaf readers excel at recognizing characters based on orthographic structure. The database generated from this large-scale character decision study offers a valuable resource for further research and for practical applications in deaf education and literacy.
Affiliation(s)
- Philip Thierfelder
- Department of Linguistics and Modern Languages, The Chinese University of Hong Kong, Sha Tin, N.T., Hong Kong SAR
- Zhenguang G Cai
- Department of Linguistics and Modern Languages, The Chinese University of Hong Kong, Sha Tin, N.T., Hong Kong SAR
- Shuting Huang
- Department of Linguistics and Modern Languages, The Chinese University of Hong Kong, Sha Tin, N.T., Hong Kong SAR
- Hao Lin
- Shanghai International Studies University, 550 Dalian Road (W), Shanghai, People's Republic of China
2
Holcomb PJ, Akers EM, Midgley KJ, Emmorey K. Orthographic and phonological code activation in deaf and hearing readers. J Cogn 2024;7:19. [PMID: 38312942] [PMCID: PMC10836169] [DOI: 10.5334/joc.326]
Abstract
Grainger et al. (2006) were the first to use ERP masked priming to explore the differing contributions of phonological and orthographic representations to visual word processing. Here we adapted their paradigm to examine word processing in deaf readers. We investigated whether reading-matched deaf and hearing readers (n = 36) exhibit different ERP effects associated with the activation of orthographic and phonological codes during word processing. In a visual masked priming paradigm, participants performed a go/no-go categorization task (detect an occasional animal word). Critical target words preceded by orthographically related (transposed-letter, TL) or phonologically related (pseudohomophone, PH) masked non-word primes were contrasted with the same target words preceded by letter-substitution (control) non-word primes. Hearing readers exhibited typical N250 and N400 priming effects (greater negativity for control than for TL- or PH-primed targets), and the TL and PH priming effects did not differ. For deaf readers, the N250 PH priming effect emerged later (250-350 ms), and they showed a reversed N250 priming effect for TL primes in this time window. The N400 TL and PH priming effects did not differ between groups. For hearing readers, those with better phonological and spelling skills showed larger early N250 PH and TL priming effects (150-250 ms). For deaf readers, those with better phonological skills showed a larger reversed TL priming effect in the late N250 window. We speculate that phonological knowledge modulates how strongly deaf readers rely on whole-word orthographic representations and/or on the mapping from sublexical to lexical representations.
Affiliation(s)
- Emily M. Akers
- Department of Psychology, San Diego State University, CA, USA
- Karen Emmorey
- School of Speech, Language and Hearing Sciences, San Diego State University, CA, USA
3
Lee B, Martinez PM, Midgley KJ, Holcomb PJ, Emmorey K. Sensitivity to orthographic vs. phonological constraints on word recognition: An ERP study with deaf and hearing readers. Neuropsychologia 2022;177:108420. [PMID: 36396091] [PMCID: PMC10152474] [DOI: 10.1016/j.neuropsychologia.2022.108420]
Abstract
The role of phonology in word recognition has previously been investigated using a masked lexical decision task and transposed letter (TL) nonwords that were either pronounceable (barve) or unpronounceable (brvae). We used event-related potentials (ERPs) to investigate these effects in skilled deaf readers, who may be more sensitive to orthotactic than phonotactic constraints, which are conflated in English. Twenty deaf and twenty hearing adults completed a masked lexical decision task while ERPs were recorded. The groups were matched in reading skill and IQ, but deaf readers had poorer phonological ability. Deaf readers were faster and more accurate at rejecting TL nonwords than hearing readers. Neither group exhibited an effect of nonword pronounceability in RTs or accuracy. For both groups, the N250 and N400 components were modulated by lexicality (more negative for nonwords). The N250 was not modulated by nonword pronounceability, but pronounceable nonwords elicited a larger amplitude N400 than unpronounceable nonwords. Because pronounceable nonwords are more word-like, they may incite activation that is unresolved when no lexical entry is found, leading to a larger N400 amplitude. Similar N400 pronounceability effects for deaf and hearing readers, despite differences in phonological sensitivity, suggest these TL effects arise from sensitivity to lexical-level orthotactic constraints. Deaf readers may have an advantage in processing TL nonwords because of enhanced early visual attention and/or tight orthographic-to-semantic connections, bypassing the phonologically mediated route to word recognition.
Affiliation(s)
- Brittany Lee
- Joint Doctoral Program in Language and Communicative Disorders, San Diego State University & University of California, San Diego, United States
4
Winsler K, Holcomb PJ, Emmorey K. Electrophysiological patterns of visual word recognition in deaf and hearing readers: An ERP mega-study. Lang Cogn Neurosci 2022;38:636-650. [PMID: 37304206] [PMCID: PMC10249718] [DOI: 10.1080/23273798.2022.2135746]
Abstract
Deaf and hearing readers have different access to spoken phonology, which may affect the representation and recognition of written words. We used ERPs to investigate how a matched sample of deaf and hearing adults (total n = 90) responded to lexical characteristics of 480 English words in a go/no-go lexical decision task. Results from mixed-effects regression models showed that (a) visual complexity produced small effects in opposing directions for deaf and hearing readers, (b) frequency effects were similar but shifted earlier for deaf readers, (c) effects of orthographic neighborhood density were more pronounced for hearing readers, and (d) effects of concreteness were more pronounced for deaf readers. We suggest hearing readers have visual word representations that are more integrated with phonological representations, leading to larger lexically mediated effects of neighborhood density. Conversely, deaf readers weight other sources of information more heavily, leading to larger semantically mediated effects and altered responses to low-level visual variables.
Affiliation(s)
- Kurt Winsler
- Department of Psychology, University of California, Davis, Davis, CA, United States
- Phillip J Holcomb
- Department of Psychology, San Diego State University, San Diego, CA, United States
- Karen Emmorey
- School of Speech, Language and Hearing Science, San Diego State University, San Diego, CA, United States
5
Predictors of word and text reading fluency of deaf children in bilingual deaf education programmes. Languages 2022;7:51. [DOI: 10.3390/languages7010051]
Abstract
Reading continues to be a challenging task for most deaf children. Bimodal bilingual education creates a supportive environment that stimulates deaf children's learning through the use of sign language. However, it is still unclear how exposure to sign language might contribute to improving reading ability. Here, we investigate the relative contribution of several cognitive and linguistic variables to the development of word and text reading fluency in deaf children in bimodal bilingual education programmes. The participants were 62 school-aged deaf children (8 to 10 years old at the start of the 3-year study) who took part in bilingual education (using Dutch and Sign Language of the Netherlands) and 40 age-matched hearing children. We assessed vocabulary knowledge in speech and sign, phonological awareness in speech and sign, receptive fingerspelling ability, and short-term memory (STM) at time 1 (T1). At times 2 (T2) and 3 (T3), we assessed word and text reading fluency. We found that (1) speech-based vocabulary strongly predicted word and text reading at T2 and T3, (2) fingerspelling ability was a strong predictor of word and text reading fluency at T2 and T3, (3) speech-based phonological awareness predicted word reading accuracy at T2 and T3 but did not predict text reading fluency, and (4) fingerspelling and STM predicted word reading latency at T2, while sign-based phonological awareness predicted this outcome measure at T3. These results suggest that fingerspelling may have an important function in facilitating the construction of orthographic/phonological representations of printed words for deaf children and in strengthening word decoding and recognition abilities.
6
Bosworth RG, Binder EM, Tyler SC, Morford JP. Automaticity of lexical access in deaf and hearing bilinguals: Cross-linguistic evidence from the color Stroop task across five languages. Cognition 2021;212:104659. [PMID: 33798950] [DOI: 10.1016/j.cognition.2021.104659]
Abstract
The well-known Stroop interference effect has been instrumental in revealing the highly automated nature of lexical processing as well as providing new insights into the underlying lexical organization of first and second languages within proficient bilinguals. The present cross-linguistic study had two goals: (1) to examine Stroop interference for dynamic signs and printed words in deaf ASL-English bilinguals who report no reliance on speech or audiological aids; (2) to compare Stroop interference effects in several groups of bilinguals whose two languages range from very distinct to very similar in their shared orthographic patterns: ASL-English bilinguals (very distinct), Chinese-English bilinguals (low similarity), Korean-English bilinguals (moderate similarity), and Spanish-English bilinguals (high similarity). Reaction time and accuracy were measured for the Stroop color naming and word reading tasks, for congruent and incongruent color font conditions. Results confirmed strong Stroop interference for both dynamic ASL stimuli and English printed words in deaf bilinguals, with stronger Stroop interference effects in ASL for deaf bilinguals who scored higher in a direct assessment of ASL proficiency. Comparison of the four groups of bilinguals revealed that the same-script bilinguals (Spanish-English bilinguals) exhibited significantly greater Stroop interference effects for color naming than the other three bilingual groups. The results support three conclusions. First, Stroop interference effects are found for both signed and spoken languages. Second, contrary to some claims in the literature that deaf signers who do not use speech are poor readers, deaf bilinguals' lexical processing of both signs and written words is highly automated. Third, cross-language similarity is a critical factor shaping bilinguals' experience of Stroop interference in their two languages. This study represents the first comparison of both deaf and hearing bilinguals on the Stroop task, offering a critical test of theories about bilingual lexical access and cognitive control.
Affiliation(s)
- Rain G Bosworth
- National Technical Institute for the Deaf, Rochester Institute of Technology, USA
- Sarah C Tyler
- Department of Psychology, University of California, San Diego, USA
- Jill P Morford
- Department of Linguistics, University of New Mexico, USA
7
Language development in deaf bilinguals: Deaf middle school students co-activate written English and American Sign Language during lexical processing. Cognition 2021;211:104642. [PMID: 33752155] [DOI: 10.1016/j.cognition.2021.104642]
Abstract
Bilinguals, both hearing and deaf, activate multiple languages simultaneously even in contexts that require only one language. To date, the point in development at which bilingual signers experience cross-language activation of a signed and a spoken language remains unknown. We investigated the processing of written words by ASL-English bilingual deaf middle school students. Deaf bilinguals were faster to respond to English word pairs with phonologically related translations in ASL than to English word pairs with unrelated translations, but no difference was found for hearing controls with no knowledge of ASL. The results indicate that co-activation of signs and written words is not the outcome of years of bilingual experience, but instead characterizes bilingual language development.
8
Costello B, Caffarra S, Fariña N, Duñabeitia JA, Carreiras M. Reading without phonology: ERP evidence from skilled deaf readers of Spanish. Sci Rep 2021;11:5202. [PMID: 33664324] [PMCID: PMC7933439] [DOI: 10.1038/s41598-021-84490-5]
Abstract
Reading typically involves phonological mediation, especially for transparent orthographies with a regular letter-to-sound correspondence. In this study we ask whether phonological coding is a necessary part of the reading process by examining prelingually deaf individuals who are skilled readers of Spanish. We conducted two EEG experiments exploiting the pseudohomophone effect, in which nonwords that sound like words elicit phonological encoding during reading. The first, a semantic categorization task with masked priming, resulted in modulation of the N250 by pseudohomophone primes in hearing but not in deaf readers. The second, a lexical decision task, confirmed the pattern: hearing readers had increased errors and an attenuated N400 response for pseudohomophones compared to control pseudowords, whereas deaf readers did not treat pseudohomophones any differently from pseudowords, either behaviourally or in the ERP response. These results offer converging evidence that skilled deaf readers do not rely on phonological coding during visual word recognition. Furthermore, these findings demonstrate that reading can take place in the absence of phonological activation, and we speculate about the alternative mechanisms that allow these deaf individuals to read competently.
Affiliation(s)
- Brendan Costello
- Basque Center on Cognition, Brain and Language, Paseo Mikeletegi 69, 20009 Donostia-San Sebastián, Spain
- Sendy Caffarra
- Basque Center on Cognition, Brain and Language, Paseo Mikeletegi 69, 20009 Donostia-San Sebastián, Spain; Division of Developmental-Behavioral Pediatrics, Stanford University School of Medicine, Stanford University, Stanford, CA, USA; Stanford University Graduate School of Education, Stanford, CA, USA
- Noemi Fariña
- Basque Center on Cognition, Brain and Language, Paseo Mikeletegi 69, 20009 Donostia-San Sebastián, Spain; Departamento de Psicología de la Educación y Psicobiología, Facultad de Educación, Universidad Internacional de La Rioja, Logroño, Spain
- Jon Andoni Duñabeitia
- Centro de Ciencia Cognitiva (C3), Universidad Nebrija, Madrid, Spain; Department of Language and Culture, The Arctic University of Norway, Tromsø, Norway
- Manuel Carreiras
- Basque Center on Cognition, Brain and Language, Paseo Mikeletegi 69, 20009 Donostia-San Sebastián, Spain; Departamento de Lengua Vasca y Comunicación, UPV/EHU, Bilbao, Spain; Basque Foundation for Science, Bilbao, Spain
9
Schotter ER, Johnson E, Lieberman AM. The sign superiority effect: Lexical status facilitates peripheral handshape identification for deaf signers. J Exp Psychol Hum Percept Perform 2020;46:1397-1410. [PMID: 32940493] [PMCID: PMC7887614] [DOI: 10.1037/xhp0000862]
Abstract
Deaf signers exhibit an enhanced ability to process information in their peripheral visual field, particularly the motion of dots or orientation of lines. Does their experience processing sign language, which involves identifying meaningful visual forms across the visual field, contribute to this enhancement? We tested whether deaf signers recruit language knowledge to facilitate peripheral identification through a sign superiority effect (i.e., better handshape discrimination in a sign than a pseudosign) and whether such a superiority effect might be responsible for perceptual enhancements relative to hearing individuals (i.e., a decrease in the effect of eccentricity on perceptual identification). Deaf signers and hearing signers or nonsigners identified the handshape presented within a static ASL fingerspelling letter (Experiment 1), fingerspelled sequence (Experiment 2), or sign or pseudosign (Experiment 3) presented in the near or far periphery. Accuracy on all tasks was higher for deaf signers than hearing nonsigning participants and was higher in the near than the far periphery. Across experiments, there were different patterns of interactions between hearing status and eccentricity depending on the type of stimulus; deaf signers showed an effect of eccentricity for static fingerspelled letters, fingerspelled sequences, and pseudosigns but not for ASL signs. In contrast, hearing nonsigners showed an effect of eccentricity for all stimuli. Thus, deaf signers recruit lexical knowledge to facilitate peripheral perceptual identification, and this perceptual enhancement may derive from their extensive experience processing visual linguistic information in the periphery during sign comprehension.
Affiliation(s)
- Emily Johnson
- Department of Psychology, University of South Florida
- Amy M Lieberman
- Wheelock College of Education and Human Development, Boston University
10
Meade G, Grainger J, Midgley KJ, Holcomb PJ, Emmorey K. An ERP investigation of orthographic precision in deaf and hearing readers. Neuropsychologia 2020;146:107542. [PMID: 32590018] [PMCID: PMC7502516] [DOI: 10.1016/j.neuropsychologia.2020.107542]
Abstract
Phonology is often assumed to play a role in the tuning of orthographic representations, but it is unknown whether deaf readers' reduced access to spoken phonology reduces orthographic precision. To index how precisely deaf and hearing readers encode orthographic information, we used a masked transposed-letter (TL) priming paradigm. Word targets were preceded either by TL primes, formed by reversing two letters in the word, or by substitution primes, in which the same two letters were replaced. The two letters that were manipulated were either in adjacent or non-adjacent positions, yielding four prime conditions: adjacent TL (e.g., chikcen-CHICKEN), adjacent substitution (e.g., chidven-CHICKEN), non-adjacent TL (e.g., ckichen-CHICKEN), and non-adjacent substitution (e.g., cticfen-CHICKEN). Replicating the standard TL priming effects, targets preceded by TL primes elicited smaller-amplitude negativities and faster responses than those preceded by substitution primes overall. This indicates some degree of flexibility in the associations between letters and their positions within words. More flexible (i.e., less precise) representations are thought to be more susceptible to activation by TL primes, resulting in larger TL priming effects. However, the size of the TL priming effects was virtually identical between groups. Moreover, the ERP effects were shifted in time such that the adjacent TL priming effect arose earlier than the non-adjacent TL priming effect in both groups. These results suggest that phonological tuning is not required to represent orthographic information in a precise manner.
Affiliation(s)
- Gabriela Meade
- Joint Doctoral Program in Language and Communicative Disorders, San Diego State University & University of California, San Diego, USA
- Jonathan Grainger
- Laboratoire de Psychologie Cognitive, CNRS & Aix-Marseille Université, France
- Karen Emmorey
- School of Speech, Language, and Hearing Sciences, San Diego State University, USA
11
Morford JP, Occhino C, Zirnstein M, Kroll JF, Wilkinson E, Piñar P. What is the source of bilingual cross-language activation in deaf bilinguals? J Deaf Stud Deaf Educ 2019;24:356-365. [PMID: 31398721] [DOI: 10.1093/deafed/enz024]
Abstract
When deaf bilinguals are asked to make semantic similarity judgments of two written words, their responses are influenced by the sublexical relationship of the signed language translations of the target words. This study investigated whether the observed effects of American Sign Language (ASL) activation on English print depend on (a) an overlap in the syllabic structure of the signed translations or (b) initialization, an effect of contact between ASL and English that has resulted in a direct representation of English orthographic features in ASL sublexical form. Results demonstrate that neither of these conditions is required for, or enhances, effects of cross-language activation. The experimental outcomes indicate that deaf bilinguals discover the optimal mapping between their two languages in a manner that is not constrained by privileged sublexical associations.
12
Meade G, Grainger J, Midgley KJ, Holcomb PJ, Emmorey K. ERP effects of masked orthographic neighbour priming in deaf readers. Lang Cogn Neurosci 2019;34:1016-1026. [PMID: 31595216] [PMCID: PMC6781870] [DOI: 10.1080/23273798.2019.1614201]
Abstract
In masked priming studies with hearing readers, neighbouring words (e.g., wine, vine) compete through lateral inhibition. Here, we asked whether lateral inhibition also characterizes visual word recognition in deaf readers and whether the neural signature of this competition is the same as for hearing readers. Only real words have lexical representations that engage in lateral inhibition. Therefore, we compared processing of target words following neighbouring prime words (e.g., wine-VINE) and pseudowords (e.g., bine-VINE). Targets following words elicited larger amplitude N400s and slower lexical decision responses than those following pseudowords, indicating more effortful processing due to lateral inhibition. Although these effects went in the same direction for hearing and deaf readers, the distribution of the N400 effect differed. We associate the more anterior effect in hearing readers with stronger co-activation of, and competition among, phonological representations. Thus, deaf readers use lexical competition to recognize visual words, but it is primarily restricted to orthographic representations.
Affiliation(s)
- Gabriela Meade
- Joint Doctoral Program in Language and Communicative Disorders, San Diego State University and University of California, San Diego, San Diego, CA, USA
- Jonathan Grainger
- Laboratoire de Psychologie Cognitive, CNRS & Aix-Marseille Université, France
- Phillip J. Holcomb
- Department of Psychology, San Diego State University, San Diego, CA, USA
- Karen Emmorey
- School of Speech, Language, and Hearing Sciences, San Diego State University, San Diego, CA, USA
13
Petitto LA, Langdon C, Stone A, Andriola D, Kartheiser G, Cochran C. Visual sign phonology: Insights into human reading and language from a natural soundless phonology. Wiley Interdiscip Rev Cogn Sci 2016;7:366-381. [PMID: 27425650] [DOI: 10.1002/wcs.1404]
Abstract
Among the most prevailing assumptions in science and society about the human reading process is that sound and sound-based phonology are critical to young readers. The child's sound-to-letter decoding is viewed as universal and vital to deriving meaning from print. We offer a different view. The crucial link for early reading success is not between segmental sounds and print. Instead, the human brain's capacity to segment, categorize, and discern linguistic patterning makes possible the segmentation of all languages, including the segmentation of languages on the hands in signed languages. Exposure to natural sign language in early life equally affords the child's discovery of silent segmental units in visual sign phonology (VSP), which can also facilitate segmental decoding of print. We consider powerful biological evidence about the brain, how it builds sound and sign phonology, and why sound and sign phonology are equally important in language learning and reading. We offer a testable theoretical account, reading model, and predictions about how VSP can facilitate segmentation and mapping between print and meaning. We explain how VSP can be a powerful facilitator of reading success for all children (deaf and hearing), an account with profound transformative impact on learning to read in deaf children with different language backgrounds. The existence of VSP has important implications for understanding core properties of all human language and reading, challenges assumptions about language and reading as being tied to sound, and provides novel insight into a remarkable biological equivalence between signed and spoken languages.
Affiliation(s)
- L A Petitto
- NSF Science of Learning Center, Visual Language and Visual Learning (VL2), Gallaudet University, Washington, DC, USA; Brain and Language Laboratory for fNIRS Neuroimaging (BL2), Gallaudet University, Washington, DC, USA; Ph.D. in Educational Neuroscience (PEN) Program, Gallaudet University, Washington, DC, USA; Department of Psychology, Gallaudet University, Washington, DC, USA
- C Langdon
- NSF Science of Learning Center, Visual Language and Visual Learning (VL2), Gallaudet University, Washington, DC, USA; Brain and Language Laboratory for fNIRS Neuroimaging (BL2), Gallaudet University, Washington, DC, USA; Ph.D. in Educational Neuroscience (PEN) Program, Gallaudet University, Washington, DC, USA
- A Stone
- NSF Science of Learning Center, Visual Language and Visual Learning (VL2), Gallaudet University, Washington, DC, USA; Brain and Language Laboratory for fNIRS Neuroimaging (BL2), Gallaudet University, Washington, DC, USA; Ph.D. in Educational Neuroscience (PEN) Program, Gallaudet University, Washington, DC, USA
- D Andriola
- NSF Science of Learning Center, Visual Language and Visual Learning (VL2), Gallaudet University, Washington, DC, USA; Brain and Language Laboratory for fNIRS Neuroimaging (BL2), Gallaudet University, Washington, DC, USA; Ph.D. in Educational Neuroscience (PEN) Program, Gallaudet University, Washington, DC, USA
- G Kartheiser
- NSF Science of Learning Center, Visual Language and Visual Learning (VL2), Gallaudet University, Washington, DC, USA; Brain and Language Laboratory for fNIRS Neuroimaging (BL2), Gallaudet University, Washington, DC, USA; Ph.D. in Educational Neuroscience (PEN) Program, Gallaudet University, Washington, DC, USA
- C Cochran
- NSF Science of Learning Center, Visual Language and Visual Learning (VL2), Gallaudet University, Washington, DC, USA; Brain and Language Laboratory for fNIRS Neuroimaging (BL2), Gallaudet University, Washington, DC, USA; Department of Linguistics, Gallaudet University, Washington, DC, USA