1
Larionova E, Garakh Z. Spelling principles matter: An ERP study investigating the processing of different types of pseudohomophones. Brain Res 2024;1839:149012. PMID: 38772521. DOI: 10.1016/j.brainres.2024.149012.
Abstract
Spelling in any writing system is governed by fundamental principles. To examine whether the occurrence of orthography-phonology conflict depends on these principles, we studied the processing of two types of pseudohomophones constructed from words whose spellings follow different principles: the traditional principle, which requires memorization of a word's spelling, and the morphological principle, which allows the spelling to be derived from another word sharing the same morpheme (root). Event-related potentials were recorded from 22 volunteers during silent reading. Pseudohomophones based on the morphological principle increased the N400 amplitude, underscoring the importance of semantic and morphological processing. The P600 component showed significant effects in differentiating words from pseudohomophones based on the traditional principle, indicating the involvement of memory and reanalysis processes. Source reconstruction demonstrated that both types of pseudohomophones activated the left inferior frontal gyrus; however, pseudohomophones based on the traditional principle additionally activated the left and right postcentral gyri, indicating that additional areas are recruited for their differentiation. The earlier differences for stimuli based on the morphological principle point to access to smaller units (morphemes), whereas stimuli based on the traditional principle require whole-word processing. Our findings underscore the significant role of spelling principles in orthographic processing.
Affiliation(s)
- Ekaterina Larionova
- Institute of Higher Nervous Activity and Neurophysiology, Russian Academy of Sciences, Moscow, Russian Federation
- Zhanna Garakh
- Institute of Higher Nervous Activity and Neurophysiology, Russian Academy of Sciences, Moscow, Russian Federation
2
Majorano M, Santangelo M, Redondi I, Barachetti C, Florit E, Guerzoni L, Cuda D, Ferrari R, Bertelli B. The use of a computer-based program focused on the syllabic method to support early literacy in children with cochlear implants. Int J Pediatr Otorhinolaryngol 2024;183:112048. PMID: 39068706. DOI: 10.1016/j.ijporl.2024.112048.
Abstract
BACKGROUND Children with cochlear implants (CIs) often lag behind children with normal hearing (NH) in early literacy skills. Furthermore, the development of language skills associated with their emergent literacy skills seems to depend on good auditory access. Supporting language acquisition and early literacy in children with CIs may prevent difficulties in primary school. The use of technology may facilitate auditory and speech recovery in children with CIs, but evidence on computer-based early literacy programs is limited. OBJECTIVE This study investigates (a) the effects of a computer-based program focusing on the syllabic method on the literacy skills of children with CIs (CIs group), comparing them with the literacy skills of a group of age-matched NH peers (NHs group); (b) the associations between language and early literacy skills in the NHs group and between language, auditory, and early literacy skills in the CIs group. METHOD Nine prelingually deaf children with CIs (M = 61.11 months, SD = 6.90) with severe to profound sensorineural hearing loss and nine age-matched NH children participated in the program. Categories of Auditory Performance (CAP) scores were collected as measures of children's auditory skills. All participants were tested on phonological, morphosyntax (grammatical comprehension and repetition), and early literacy skills (syllable blending and segmentation, syllable and word reading) (T1). Next, all children participated in the computer-based program for 12 weeks. After the program was completed (T2), only the early literacy tests were administered to the children. RESULTS Although, on average, both groups obtained higher scores in all literacy tasks at T2, the CIs group scored lower than the NHs group. In the CIs group, at T2 we found significant improvements in syllable segmentation (p = 0.042) and word reading (p = 0.035). In the NHs group, at T2 we found significant improvements in syllable segmentation (p = 0.034), syllable blending (p = 0.022), syllable reading (p = 0.008), and word reading (p = 0.009). We also found significant associations in both groups between measures of morphosyntax at T1 and measures of early literacy at T2. In addition, for the CIs group, we found significant associations between children's auditory performance at T1 and measures of morphosyntax at T1 and early literacy at T2. CONCLUSION A computer-based program focused on the syllabic method could support children with CIs in acquiring emergent literacy abilities. The auditory performance of children with CIs seems to influence their morphosyntax and later early literacy skills.
Affiliation(s)
- Irene Redondi
- Department of Human Sciences, University of Verona, Italy
- Elena Florit
- Department of Human Sciences, University of Verona, Italy
3
Larionova E, Rebreikina A, Martynova O. Electrophysiological signatures of spelling sensitivity development from primary school age to adulthood. Sci Rep 2024;14:7585. PMID: 38555413. PMCID: PMC10981698. DOI: 10.1038/s41598-024-58219-z.
Abstract
Recognizing spelling errors is important for correct writing and reading, and this ability develops over an extended period. The neural bases of the development of orthographic sensitivity remain poorly understood. We investigated event-related potentials (ERPs) associated with spelling error recognition during an orthographic decision task with correctly spelled and misspelled words in children aged 8-10 years, early adolescents aged 11-14 years, and adults. Spelling processing in adults included an early stage associated with the initial recognition of conflict between orthography and phonology (reflected in the N400 time window) and a later stage (reflected in the P600 time window) related to re-checking the spelling. In children aged 8-10 years, there were no differences in ERPs to correct and misspelled words; in addition, their behavioral scores were worse than those of early adolescents, implying that the ability to quickly recognize correct spelling is only beginning to develop at this age. In early adolescents, spelling recognition was reflected only at the later stage, corresponding to the P600 component. At the behavioral level, they were worse than adults at recognizing misspelled words. Our data suggest that orthographic sensitivity continues to develop beyond the age of 14.
Affiliation(s)
- Ekaterina Larionova
- Institute of Higher Nervous Activity and Neurophysiology, Russian Academy of Sciences, Moscow, Russian Federation
- Anna Rebreikina
- Institute of Higher Nervous Activity and Neurophysiology, Russian Academy of Sciences, Moscow, Russian Federation
- Olga Martynova
- Institute of Higher Nervous Activity and Neurophysiology, Russian Academy of Sciences, Moscow, Russian Federation
- Centre for Cognition and Decision Making, Institute for Cognitive Neuroscience, Higher School of Economics, Moscow, Russian Federation
4
Sabatier E, Leybaert J, Chetail F. Orthographic Learning in French-Speaking Deaf and Hard of Hearing Children. J Speech Lang Hear Res 2024;67:870-885. PMID: 38394239. DOI: 10.1044/2023_jslhr-23-00324.
Abstract
PURPOSE Children are assumed to acquire orthographic representations during autonomous reading by decoding new written words. The present study investigates how deaf and hard of hearing (DHH) children build new orthographic representations compared to typically hearing (TH) children. METHOD Twenty-nine DHH children, from 7.8 to 13.5 years old, with moderate-to-profound hearing loss, matched for reading level and chronological age to TH controls, were exposed to 10 pseudowords (novel words) in written stories. They then performed a spelling task and an orthographic recognition task on these new words. RESULTS In the spelling task, we found no difference in accuracy, but a difference in errors emerged between the two groups: phonologically plausible errors were less common in DHH children than in TH children. In the recognition task, DHH children were better than TH children at recognizing target pseudowords. Phonological strategies seemed to be used less by DHH children than by TH children, who very often chose phonological distractors. CONCLUSIONS Both groups created sufficiently detailed orthographic representations to complete the tasks, which supports the self-teaching hypothesis. DHH children used phonological information in both tasks but may rely on orthographic cues more than TH children to build up orthographic representations. Combining a spelling task with a recognition task, together with analyzing the nature of errors, provides a methodological approach for further probing the underlying cognitive processes.
Affiliation(s)
- Elodie Sabatier
- Laboratoire Cognition Langage et Développement, Center for Research in Cognition & Neurosciences, Université libre de Bruxelles, Brussels, Belgium
- Jacqueline Leybaert
- Laboratoire Cognition Langage et Développement, Center for Research in Cognition & Neurosciences, Université libre de Bruxelles, Brussels, Belgium
- Fabienne Chetail
- Laboratoire Cognition Langage et Développement, Center for Research in Cognition & Neurosciences, Université libre de Bruxelles, Brussels, Belgium
5
Larionova E, Garakh Z, Martynova O. Top-down modulation of brain responses in spelling error recognition. Acta Psychol (Amst) 2023;235:103891. PMID: 36933384. DOI: 10.1016/j.actpsy.2023.103891.
Abstract
The task being undertaken can influence orthographic, phonological, and semantic processes. In linguistic research, two tasks are most often used: a task requiring a decision about the presented word, and a passive reading task that requires no such decision. The results of studies using these different tasks are not always consistent. This study aimed to explore brain responses associated with the recognition of spelling errors, as well as the influence of the task on this process. Event-related potentials (ERPs) were recorded in 40 adults during an orthographic decision task, in which they distinguished correctly spelled words from words containing errors that did not change the phonology, and during passive reading. During spelling recognition, the early stages (up to 100 ms after the stimulus) were automatic and did not depend on task demands. The amplitude of the N1 component (90-160 ms) was greater in the orthographic decision task but did not depend on the correctness of the word's spelling. Late word recognition after 350-500 ms was task dependent, but spelling effects were similar across the two tasks: misspelled words evoked an increase in the amplitude of the N400 component, related to lexical and semantic processing, regardless of the task. In addition, the orthographic decision task modulated spelling effects, which was reflected in an increase in the amplitude of the P2 component (180-260 ms) for correctly spelled compared with misspelled words. Thus, our results show that spelling recognition involves general lexico-semantic processes independent of the task. Simultaneously, the orthographic decision task modulates the spelling-specific processes necessary to quickly detect conflicts between orthographic and phonological representations of words in memory.
Affiliation(s)
- Ekaterina Larionova
- Institute of Higher Nervous Activity and Neurophysiology, Russian Academy of Sciences, Butlerova 5a, Moscow 117485, Russia
- Zhanna Garakh
- Institute of Higher Nervous Activity and Neurophysiology, Russian Academy of Sciences, Butlerova 5a, Moscow 117485, Russia
- Olga Martynova
- Institute of Higher Nervous Activity and Neurophysiology, Russian Academy of Sciences, Butlerova 5a, Moscow 117485, Russia
- Centre for Cognition and Decision Making, National Research University Higher School of Economics, Krivokolenny per. 3, Moscow 101000, Russia
6
Lee B, Martinez PM, Midgley KJ, Holcomb PJ, Emmorey K. Sensitivity to orthographic vs. phonological constraints on word recognition: An ERP study with deaf and hearing readers. Neuropsychologia 2022;177:108420. PMID: 36396091. PMCID: PMC10152474. DOI: 10.1016/j.neuropsychologia.2022.108420.
Abstract
The role of phonology in word recognition has previously been investigated using a masked lexical decision task and transposed letter (TL) nonwords that were either pronounceable (barve) or unpronounceable (brvae). We used event-related potentials (ERPs) to investigate these effects in skilled deaf readers, who may be more sensitive to orthotactic than phonotactic constraints, which are conflated in English. Twenty deaf and twenty hearing adults completed a masked lexical decision task while ERPs were recorded. The groups were matched in reading skill and IQ, but deaf readers had poorer phonological ability. Deaf readers were faster and more accurate at rejecting TL nonwords than hearing readers. Neither group exhibited an effect of nonword pronounceability in RTs or accuracy. For both groups, the N250 and N400 components were modulated by lexicality (more negative for nonwords). The N250 was not modulated by nonword pronounceability, but pronounceable nonwords elicited a larger amplitude N400 than unpronounceable nonwords. Because pronounceable nonwords are more word-like, they may elicit lexical activation that remains unresolved when no lexical entry is found, leading to a larger N400 amplitude. Similar N400 pronounceability effects for deaf and hearing readers, despite differences in phonological sensitivity, suggest these TL effects arise from sensitivity to lexical-level orthotactic constraints. Deaf readers may have an advantage in processing TL nonwords because of enhanced early visual attention and/or tight orthographic-to-semantic connections, bypassing the phonologically mediated route to word recognition.
Affiliation(s)
- Brittany Lee
- Joint Doctoral Program in Language and Communicative Disorders, San Diego State University & University of California, San Diego, United States
7
Winsler K, Holcomb PJ, Emmorey K. Electrophysiological patterns of visual word recognition in deaf and hearing readers: An ERP mega-study. Lang Cogn Neurosci 2022;38:636-650. PMID: 37304206. PMCID: PMC10249718. DOI: 10.1080/23273798.2022.2135746.
Abstract
Deaf and hearing readers have different access to spoken phonology, which may affect the representation and recognition of written words. We used ERPs to investigate how a matched sample of deaf and hearing adults (total n = 90) responded to lexical characteristics of 480 English words in a go/no-go lexical decision task. Results from mixed effects regression models showed that (a) visual complexity produced small effects in opposing directions for deaf and hearing readers, (b) frequency effects were similar but shifted earlier for deaf readers, (c) effects of orthographic neighborhood density were more pronounced for hearing readers, and (d) effects of concreteness were more pronounced for deaf readers. We suggest that hearing readers have visual word representations that are more integrated with phonological representations, leading to larger lexically mediated effects of neighborhood density. Conversely, deaf readers weight other sources of information more heavily, leading to larger semantically mediated effects and altered responses to low-level visual variables.
Affiliation(s)
- Kurt Winsler
- Department of Psychology, University of California, Davis, Davis, CA, United States
- Phillip J Holcomb
- Department of Psychology, San Diego State University, San Diego, CA, United States
- Karen Emmorey
- School of Speech, Language and Hearing Science, San Diego State University, San Diego, CA, United States
8
Sehyr ZS, Emmorey K. Contribution of Lexical Quality and Sign Language Variables to Reading Comprehension. J Deaf Stud Deaf Educ 2022;27:355-372. PMID: 35775152. DOI: 10.1093/deafed/enac018.
Abstract
The lexical quality hypothesis proposes that the quality of phonological, orthographic, and semantic representations impacts reading comprehension. In Study 1, we evaluated the contributions of lexical quality to reading comprehension in 97 deaf and 98 hearing adults matched for reading ability. While phonological awareness was a strong predictor for hearing readers, for deaf readers, orthographic precision and semantic knowledge, not phonology, predicted reading comprehension (assessed by two different tests). For deaf readers, the architecture of the reading system adapts by shifting reliance from (coarse-grained) phonological representations to high-quality orthographic and semantic representations. In Study 2, we examined the contribution of American Sign Language (ASL) variables to reading comprehension in 83 deaf adults. Fingerspelling (FS) and ASL comprehension skills predicted reading comprehension. We suggest that FS might reinforce orthographic-to-semantic mappings and that sign language comprehension may serve as a linguistic basis for the development of skilled reading in deaf signers.
Affiliation(s)
- Zed Sevcikova Sehyr
- Laboratory for Language and Cognitive Neuroscience, San Diego State University, CA, USA
- Karen Emmorey
- Laboratory for Language and Cognitive Neuroscience, San Diego State University, CA, USA
9
Larionova EV, Martynova OV. Frequency Effects on Spelling Error Recognition: An ERP Study. Front Psychol 2022;13:834852. PMID: 35496180. PMCID: PMC9046601. DOI: 10.3389/fpsyg.2022.834852.
Abstract
Spelling errors are ubiquitous in all writing systems. Most studies exploring spelling errors have focused on the phonological plausibility of errors. However, unlike typical pseudohomophones, spelling errors occur in naturally produced written language. We investigated the time course of recognition of the most frequent orthographic errors in Russian (an error in an unstressed vowel in the root) and the effect of word frequency on this process. During event-related potential (ERP) recording, 26 native Russian speakers silently read high-frequency correctly spelled words, low-frequency correctly spelled words, high-frequency words with errors, and low-frequency words with errors. The amplitude of P200 was more positive for correctly spelled words than for misspelled words and did not depend on word frequency. In addition, in the 350–500-ms time window, we found a more negative response for misspelled words than for correctly spelled words in parietal–temporal–occipital regions regardless of word frequency. Considering our results in the context of a dual-route model, we concluded that recognizing misspelled high-frequency and low-frequency words involves common orthographic and phonological processes associated with the P200 and N400 components, namely whole-word orthographic processing and activation of phonological representations, respectively. However, at the 500–700 ms stage (associated with lexical-semantic access in our study), error recognition depends on word frequency. One possible explanation for these differences could be that at the 500–700 ms stage, recognition of high-frequency misspelled and correctly spelled words shifts from phonological to orthographic processes, while low-frequency misspelled words are accompanied by more prolonged phonological activation. We believe these processes may be associated with the distinct ERP components P300 and N400, reflecting a temporal overlap between categorization processes based on orthographic properties for high-frequency words and phonological processes for low-frequency words. Therefore, our results complement existing reading models and demonstrate that the neuronal underpinnings of spelling error recognition during reading may depend on word frequency.
Affiliation(s)
- Ekaterina V. Larionova
- Institute of Higher Nervous Activity and Neurophysiology, Russian Academy of Sciences, Moscow, Russia
- Olga V. Martynova
- Institute of Higher Nervous Activity and Neurophysiology, Russian Academy of Sciences, Moscow, Russia
- Centre for Cognition and Decision Making, National Research University Higher School of Economics, Moscow, Russia
10
Predictors of Word and Text Reading Fluency of Deaf Children in Bilingual Deaf Education Programmes. Languages 2022. DOI: 10.3390/languages7010051.
Abstract
Reading continues to be a challenging task for most deaf children. Bimodal bilingual education creates a supportive environment that stimulates deaf children’s learning through the use of sign language. However, it is still unclear how exposure to sign language might contribute to improving reading ability. Here, we investigate the relative contribution of several cognitive and linguistic variables to the development of word and text reading fluency in deaf children in bimodal bilingual education programmes. The participants of this study were 62 school-aged (8 to 10 years old at the start of the 3-year study) deaf children who took part in bilingual education (using Dutch and Sign Language of The Netherlands) and 40 age-matched hearing children. We assessed vocabulary knowledge in speech and sign, phonological awareness in speech and sign, receptive fingerspelling ability, and short-term memory (STM) at time 1 (T1). At times 2 (T2) and 3 (T3), we assessed word and text reading fluency. We found that (1) speech-based vocabulary strongly predicted word and text reading at T2 and T3, (2) fingerspelling ability was a strong predictor of word and text reading fluency at T2 and T3, (3) speech-based phonological awareness predicted word reading accuracy at T2 and T3 but did not predict text reading fluency, and (4) fingerspelling and STM predicted word reading latency at T2, while sign-based phonological awareness predicted this outcome measure at T3. These results suggest that fingerspelling may have an important function in facilitating the construction of orthographic/phonological representations of printed words for deaf children and strengthening word decoding and recognition abilities.
11
Holcomb L, Golos D, Moses A, Broadrick A. Enriching Deaf Children's American Sign Language Phonological Awareness: A Quasi-Experimental Study. J Deaf Stud Deaf Educ 2021;27:26-36. PMID: 34392343. DOI: 10.1093/deafed/enab028.
Abstract
With the knowledge that deaf children benefit from early exposure to signed language, questions are raised about the role of specific types of language input that are beneficial in early childhood classrooms. This quasi-experimental study explores the effects of ASL rhyme, rhythm, and handshape awareness activities on 4- to 6-year-old deaf children's ASL phonological awareness. Deaf children received three weeks of structured activities and four weeks of teacher-choice activities that targeted handshape awareness. Results yielded evidence that interventions as brief as 12 minutes daily for up to 2 months can produce positive effects on deaf children's phonological awareness. Furthermore, although the intervention focused only on handshape awareness, children's positive gains on the ASL Phonological Awareness Test suggest that one targeted phonological awareness skill (e.g., handshape) may generalize to other phonological awareness skills (e.g., location and movement). Further investigation is needed on the relationship between ASL phonological awareness and overall language and literacy skills in both ASL and English.
Affiliation(s)
- Debbie Golos
- University of Minnesota, Minneapolis, Minnesota, USA
- Annie Moses
- National Association for the Education of Young Children (NAEYC), Washington D.C., USA