1. Orthographic Learning in French-Speaking Deaf and Hard of Hearing Children. J Speech Lang Hear Res 2024;67:870-885. PMID: 38394239. DOI: 10.1044/2023_jslhr-23-00324.
Abstract
PURPOSE: Children are assumed to acquire orthographic representations during autonomous reading by decoding new written words. The present study investigates how deaf and hard of hearing (DHH) children build new orthographic representations compared to typically hearing (TH) children.
METHOD: Twenty-nine DHH children, aged 7.8 to 13.5 years, with moderate-to-profound hearing loss and matched to TH controls for reading level and chronological age, were exposed to 10 pseudowords (novel words) in written stories. They then performed a spelling task and an orthographic recognition task on these new words.
RESULTS: In the spelling task, we found no difference in accuracy, but the two groups differed in their errors: phonologically plausible errors were less common in DHH children than in TH children. In the recognition task, DHH children were better than TH children at recognizing target pseudowords. DHH children seemed to rely less on phonological strategies than TH children, who very often chose phonological distractors.
CONCLUSIONS: Both groups created orthographic representations detailed enough to complete the tasks, which supports the self-teaching hypothesis. DHH children used phonological information in both tasks but could rely more on orthographic cues than TH children to build up orthographic representations. Combining a spelling task with a recognition task, and analyzing the nature of errors, offers a methodological contribution to further understanding of the underlying cognitive processes.

2. The Effect of Cued-Speech (CS) Perception on Auditory Processing in Typically Hearing (TH) Individuals Who Are Either Naïve or Experienced CS Producers. Brain Sci 2023;13:1036. PMID: 37508968. PMCID: PMC10377728. DOI: 10.3390/brainsci13071036.
Abstract
Cued Speech (CS) is a communication system that uses manual gestures to facilitate lipreading. In this study, we investigated how CS information interacts with natural speech using Event-Related Potential (ERP) analyses in French-speaking, typically hearing adults (TH) who were either naïve or experienced CS producers. The audiovisual (AV) presentation of lipreading information elicited an amplitude attenuation of the entire N1 and P2 complex in both groups, accompanied by N1 latency facilitation in the group of CS producers. Adding CS gestures to lipread information increased the magnitude of effects observed at the N1 time window, but did not enhance P2 amplitude attenuation. Interestingly, presenting CS gestures without lipreading information yielded distinct response patterns depending on participants' experience with the system. In the group of CS producers, AV perception of CS gestures facilitated the early stage of speech processing, while in the group of naïve participants, it elicited a latency delay at the P2 time window. These results suggest that, for experienced CS users, the perception of gestures facilitates early stages of speech processing, but when people are not familiar with the system, the perception of gestures impacts the efficiency of phonological decoding.

3. Cortical tracking of speech in noise accounts for reading strategies in children. PLoS Biol 2020;18:e3000840. PMID: 32845876. PMCID: PMC7478533. DOI: 10.1371/journal.pbio.3000840.
Abstract
Humans' propensity to acquire literacy relates to several factors, including the ability to understand speech in noise (SiN). Still, the nature of the relation between reading and SiN perception abilities remains poorly understood. Here, we dissect the interplay between (1) reading abilities, (2) classical behavioral predictors of reading (phonological awareness, phonological memory, and rapid automatized naming), and (3) electrophysiological markers of SiN perception in 99 elementary school children (26 with dyslexia). We demonstrate that, in typical readers, cortical representation of the phrasal content of SiN relates to the degree of development of the lexical (but not sublexical) reading strategy. In contrast, classical behavioral predictors of reading abilities and the ability to benefit from visual speech to represent the syllabic content of SiN account for global reading performance (i.e., speed and accuracy of lexical and sublexical reading). In individuals with dyslexia, we found preserved integration of visual speech information to optimize processing of syntactic information but not to sustain acoustic/phonemic processing. Finally, within children with dyslexia, measures of cortical representation of the phrasal content of SiN were negatively related to reading speed and positively related to the compromise between reading precision and reading speed, potentially owing to compensatory attentional mechanisms. These results clarify the nature of the relation between SiN perception and reading abilities in typical child readers and children with dyslexia and identify novel electrophysiological markers of emergent literacy.

4. The Neural Basis of Speech Perception through Lipreading and Manual Cues: Evidence from Deaf Native Users of Cued Speech. Front Psychol 2017;8:426. PMID: 28424636. PMCID: PMC5371603. DOI: 10.3389/fpsyg.2017.00426.
Abstract
We present here the first neuroimaging data for perception of Cued Speech (CS) by deaf adults who are native users of CS. CS is a visual mode of communicating a spoken language through a set of manual cues which accompany lipreading and disambiguate it. With CS, sublexical units of the oral language are conveyed clearly and completely through the visual modality without requiring hearing. The comparison of neural processing of CS in deaf individuals with processing of audiovisual (AV) speech in normally hearing individuals represents a unique opportunity to explore the similarities and differences in neural processing of an oral language delivered in a visuo-manual vs. an AV modality. The study included deaf adult participants who were early CS users and native hearing users of French who process speech audiovisually. Words were presented in an event-related fMRI design. Three conditions were presented to each group of participants. The deaf participants saw CS words (manual + lipread), words presented as manual cues alone, and words presented to be lipread without manual cues. The hearing group saw AV spoken words, audio-alone and lipread-alone. Three findings are highlighted. First, the middle and superior temporal gyrus (excluding Heschl's gyrus) and left inferior frontal gyrus pars triangularis constituted a common, amodal neural basis for AV and CS perception. Second, integration was inferred in posterior parts of superior temporal sulcus for audio and lipread information in AV speech, but in the occipito-temporal junction, including MT/V5, for the manual cues and lipreading in CS. Third, the perception of manual cues showed a much greater overlap with the regions activated by CS (manual + lipreading) than lipreading alone did. This supports the notion that manual cues play a larger role than lipreading for CS processing. 
The present study contributes to a better understanding of the role of manual cues in supporting visual speech perception, within the framework of the multimodal nature of human communication.

5. Children with Autism Understand Indirect Speech Acts: Evidence from a Semi-Structured Act-Out Task. PLoS One 2015;10:e0142191. PMID: 26551648. PMCID: PMC4638355. DOI: 10.1371/journal.pone.0142191.
Abstract
Children with Autism Spectrum Disorder are often said to present a global pragmatic impairment. However, there is some observational evidence that context-based comprehension of indirect requests may be preserved in autism. To provide experimental confirmation of this hypothesis, indirect speech act comprehension was tested in a group of 15 children with autism aged 7 to 12 years and a group of 20 typically developing (TD) children aged 2;7 to 3;6 (years;months). The aim of the study was to determine whether children with autism can display genuinely contextual understanding of indirect requests. The experiment consisted of a three-phase semi-structured task involving Mr Potato Head. In the first phase, a declarative sentence was uttered by one adult as an instruction to put a garment on a Mr Potato Head toy; in the second, the same sentence was uttered as a comment on a picture by another speaker; in the third, the same sentence was uttered as a comment on a picture by the first speaker. Children with autism complied with the indirect request in the first phase and demonstrated the capacity to inhibit the directive interpretation in phases 2 and 3. TD children had some difficulty understanding the indirect instruction in phase 1. These results call for a more nuanced view of pragmatic dysfunction in autism.

6. Effects of aging on audio-visual speech integration. J Acoust Soc Am 2014;136:1918-1931. PMID: 25324091. DOI: 10.1121/1.4894685.
Abstract
This study investigated the impact of aging on audio-visual speech integration. A syllable identification task was presented in auditory-only, visual-only, and audio-visual congruent and incongruent conditions. Visual cues were either degraded or unmodified. Stimuli were embedded in stationary noise alternating with modulated noise. Fifteen young adults and 15 older adults participated in this study. Results showed that older adults had preserved lipreading abilities when the visual input was clear but not when it was degraded. The impact of aging on audio-visual integration also depended on the quality of the visual cues. In the visual clear condition, the audio-visual gain was similar in both groups and analyses in the framework of the fuzzy-logical model of perception confirmed that older adults did not differ from younger adults in their audio-visual integration abilities. In the visual reduction condition, the audio-visual gain was reduced in the older group, but only when the noise was stationary, suggesting that older participants could compensate for the loss of lipreading abilities by using the auditory information available in the valleys of the noise. The fuzzy-logical model of perception confirmed the significant impact of aging on audio-visual integration by showing an increased weight of audition in the older group.

7. Auditory training: from behavioral to neurophysiological changes [Les entraînements auditifs : des modifications comportementales aux modifications neurophysiologiques]. Année Psychologique 2014. DOI: 10.3917/anpsy.142.0389.

8. Atypical audio-visual speech perception and McGurk effects in children with specific language impairment. Front Psychol 2014;5:422. PMID: 24904454. PMCID: PMC4033223. DOI: 10.3389/fpsyg.2014.00422.
Abstract
Audiovisual speech perception was compared between children with specific language impairment (SLI) and children with typical language development (TLD) in two experiments using /aCa/ syllables presented in a masking release paradigm. Children had to repeat syllables presented in auditory-only, visual-only (speechreading), and audiovisual congruent and incongruent (McGurk) conditions. Stimuli were masked by either stationary (ST) or amplitude-modulated (AM) noise. Although children with SLI were less accurate in auditory and audiovisual speech perception, they showed an auditory masking release effect similar to that of children with TLD. Children with SLI also gave fewer correct responses in speechreading than children with TLD, indicating impaired phonemic processing of visual speech information. In response to McGurk stimuli, children with TLD showed more fusions in AM noise than in ST noise, a consequence of the auditory masking release effect and of the influence of visual information. Children with SLI did not show this effect systematically, suggesting they were less influenced by visual speech. However, when the visual cues were easily identified, the profile of responses to McGurk stimuli was similar in both groups, suggesting that children with SLI do not suffer from an impairment of audiovisual integration. An analysis of the percentage of information transmitted revealed a deficit in the children with SLI, particularly for the place-of-articulation feature. Taken together, the data support the hypothesis of intact peripheral processing of auditory speech information, coupled with a supramodal deficit of phonemic categorization in children with SLI. Clinical implications are discussed.

9. How is the McGurk effect modulated by Cued Speech in deaf and hearing adults? Front Psychol 2014;5:416. PMID: 24904451. PMCID: PMC4032946. DOI: 10.3389/fpsyg.2014.00416.
Abstract
Speech perception for both hearing and deaf people involves an integrative process between auditory and lip-reading information. To disambiguate information from the lips, manual cues from Cued Speech may be added. Cued Speech (CS) is a system of manual aids developed to help deaf people clearly and completely understand speech visually (Cornett, 1967). Within this system, both labial and manual information, as lone input sources, remain ambiguous; perceivers therefore have to combine both types of information to form one coherent percept. In this study, we examined how audio-visual (AV) integration is affected by the presence of manual cues and on which form of information (auditory, labial, or manual) CS receivers primarily rely. To address this issue, we designed an experiment using AV McGurk stimuli (audio /pa/ and lip-read /ka/) produced with or without manual cues. The manual cue was congruent with either the auditory information, the lip information, or the expected fusion. Participants were asked to repeat the perceived syllable aloud. Their responses were classified into four categories: audio (when the response was /pa/), lip-reading (when the response was /ka/), fusion (when the response was /ta/), and other (when the response was something other than /pa/, /ka/, or /ta/). Data were collected from hearing-impaired individuals who were expert CS users (all of whom had either cochlear implants or binaural hearing aids; N = 8), hearing individuals who were expert CS users (N = 14), and hearing individuals completely naïve to CS (N = 15). Results confirmed that, like hearing people, deaf people can merge auditory and lip-reading information into a single unified percept. Without manual cues, McGurk stimuli induced the same percentage of fusion responses in both groups. Results also suggest that manual cues can modify AV integration and that their impact differs between hearing and deaf people.

10. Symbolic number abilities predict later approximate number system acuity in preschool children. PLoS One 2014;9:e91839. PMID: 24637785. PMCID: PMC3956743. DOI: 10.1371/journal.pone.0091839.
Abstract
An ongoing debate in research on numerical cognition concerns the extent to which the approximate number system and symbolic number knowledge influence each other during development. The current study aims at establishing the direction of the developmental association between these two kinds of abilities at an early age. Fifty-seven 3- to 4-year-old children completed two assessments 7 months apart. In each assessment, we measured children's precision in discriminating numerosities as well as their capacity to manipulate number words and Arabic digits. By comparing relationships between pairs of measures across the two time points, we were able to assess the predictive direction of the link. Our data indicate that both cardinality proficiency and symbolic number knowledge predict later accuracy in numerosity comparison, whereas the reverse links are not significant. The present findings are the first to provide longitudinal evidence that the early acquisition of symbolic numbers is an important precursor in the developmental refinement of the approximate number representation system.

11. Phonological processing of rhyme in spoken language and location in sign language by deaf and hearing participants: A neurophysiological study. Neurophysiol Clin 2013;43:151-160. DOI: 10.1016/j.neucli.2013.03.001.

12. Impact of language abilities on exact and approximate number skills development: evidence from children with specific language impairment. J Speech Lang Hear Res 2013;56:956-970. PMID: 23275399. DOI: 10.1044/1092-4388(2012/10-0229).
Abstract
PURPOSE: Counting and exact arithmetic rely on language-based representations, whereas number comparison and approximate arithmetic involve approximate quantity-based representations that are available early in life, before the first stages of language acquisition. The objective of this study was to examine the impact of language abilities on the later development of exact and approximate number skills.
METHOD: Twenty-eight 7- to 14-year-old children with specific language impairment (SLI) completed exact and approximate number tasks involving quantities presented symbolically and nonsymbolically. They were compared with age-matched (AM) and vocabulary-matched (VM) children.
RESULTS: In the exact arithmetic task, the accuracy of children with SLI was lower than that of AM and VM controls and related to phonological measures. In the symbolic approximate tasks, children with SLI were less accurate than AM controls, but the difference vanished when their cognitive skills were considered or when they were compared with younger VM controls. In the nonsymbolic approximate tasks, children with SLI did not differ significantly from controls. Further, accuracy in the approximate number tasks was unrelated to language measures.
CONCLUSIONS: Language impairment is related to reduced exact arithmetic skills, whereas it does not intrinsically affect the development of approximate number skills in children with SLI.

13. The development of word recognition, sentence comprehension, word spelling, and vocabulary in children with deafness: a longitudinal study. Res Dev Disabil 2013;34:1781-1793. PMID: 23500170. DOI: 10.1016/j.ridd.2013.02.001.
Abstract
BACKGROUND: Only a small number of longitudinal studies have assessed the literacy skills of children with hearing impairment, and their results are inconsistent with regard to the importance of phonology in reading acquisition, as is also the case in studies with hearing children. Colin, Magnan, Ecalle, and Leybaert (2007) revealed the important role of early phonological skills, and of the age of exposure to Cued Speech (CS: a manual system intended to resolve the ambiguities inherent to speechreading), in subsequent reading acquisition (from kindergarten to first grade) in children with deafness. The aim of the present paper is twofold: (1) to confirm the role of early exposure to CS in the development of the linguistic skills necessary for learning to read and write in second grade; and (2) to reveal the possible existence of common factors other than CS that may influence literacy performance and explain the inter-individual differences within groups of children with hearing impairment.
METHOD: Eighteen 6-year-old hearing-impaired children and 18 hearing children of the same chronological age were tested from kindergarten to second grade. The children with deafness had either been exposed to CS at an early age, at home and before kindergarten (early-CS group), or had first been exposed to it on entering kindergarten (late-CS group) or first grade (beginner-CS group). Children were given implicit and explicit phonological tasks, silent reading tasks (word recognition and sentence comprehension), word spelling, and vocabulary tasks.
RESULTS: Children in the early-CS group outperformed those in the late-CS and beginner-CS groups in phonological tasks from first grade to second grade, and they became better readers and spellers than children in those two groups. Their performance did not differ from that of hearing children in any task except the receptive vocabulary test. Early exposure to CS thus seems to permit the development of the linguistic skills necessary for learning to read and write. The possible contribution of other factors to the acquisition of literacy skills by children with hearing impairment is discussed.

14. Does math education modify the approximate number system? A comparison of schooled and unschooled adults. Trends Neurosci Educ 2013. DOI: 10.1016/j.tine.2013.01.001.

15. Effect of phonological training in French children with SLI: perspectives on voicing identification, discrimination and categorical perception. Res Dev Disabil 2012;33:1805-1818. PMID: 22699254. DOI: 10.1016/j.ridd.2012.05.003.
Abstract
The aim of the present study was to investigate the effect of auditory training on voicing perception in French children with specific language impairment (SLI). We used adaptive discrimination training centred on the French phonological boundary (0 ms voice onset time, VOT). One group of nine children with SLI attended eighteen 20-minute training sessions with feedback; a control group of nine children with SLI received no training. Identification, discrimination, and categorical perception were evaluated before, during, and after training, as well as one month after the final session. Phonological awareness and vocabulary were also assessed in both groups. The results showed that, prior to training, children with SLI experienced strong difficulties in the identification, discrimination, and categorical perception of the voicing continuum. However, as early as the first nine training sessions, their performance in the identification and discrimination tasks increased significantly. Moreover, phonological awareness scores improved during training, whereas vocabulary scores remained stable across sessions.

16.
Abstract
Sleep is known to participate in memory consolidation processes; however, results obtained in the auditory domain are inconsistent. Here we investigated the role of post-training sleep in auditory training and in learning new phonological categories, a fundamental process in speech processing. Adult French speakers were trained to identify two synthetic speech variants of the syllable /də/ during two 1-h training sessions. The 12-h interval between the two sessions either did (8 p.m. to 8 a.m. ± 1 h) or did not (8 a.m. to 8 p.m. ± 1 h) include a sleep period. In both groups, identification performance dramatically improved over the first training session and slightly decreased over the 12-h offline interval, although it remained above chance level. Still, reaction times (RTs) were slower after sleep, suggesting greater attention devoted to the newly learned phonological contrast. Notwithstanding, our results essentially suggest that post-training sleep does not benefit the consolidation or stabilization of new phonological categories any more than wakefulness does.

17.
Abstract
It has traditionally been assumed that cochlear implant users de facto perform atypically in audiovisual tasks. However, a recent study that combined an auditory task with visual distractors suggests that only cochlear implant users who are not proficient at recognizing speech sounds may show abnormal audiovisual interactions. The present study aims to reinforce this notion by investigating the audiovisual segregation abilities of cochlear implant users in a visual task with auditory distractors. Speechreading was assessed in two groups of cochlear implant users (proficient and non-proficient at sound recognition), as well as in normal controls. A visual speech recognition task (i.e., speechreading) was administered either in silence or in combination with three types of auditory distractors: (i) noise, (ii) reversed speech, and (iii) unaltered speech. Cochlear implant users proficient at speech recognition performed like normal controls in all conditions, whereas non-proficient users showed significantly different audiovisual segregation patterns in both speech conditions. These results confirm that normal-like audiovisual segregation is possible in highly skilled cochlear implant users and, consequently, that proficient and non-proficient CI users cannot be lumped into a single group. This important feature must be taken into account in further studies of audiovisual interactions in cochlear implant users.

18. Compliance with requests by children with autism: the impact of sentence type. Autism 2012;16:523-531. DOI: 10.1177/1362361311406296.

19.
Abstract
It is known that deaf individuals usually outperform normally hearing individuals in speechreading; however, the underlying reasons remain unclear. In the present study, speechreading performance was assessed in normally hearing participants (NH), deaf participants who had been exposed to the Cued Speech (CS) system early and intensively, and deaf participants exposed to oral language without Cued Speech (NCS). Results show a gradation in performance, with the highest performance in the CS group, then the NCS group, and finally the NH participants. Moreover, error analysis suggests that speechreading processing is more accurate in the CS group than in the other groups. Given that early and intensive CS exposure has been shown to promote the development of accurate phonological processing, we propose that the higher speechreading performance of Cued Speech users is linked to a better capacity for phonological decoding of the visual articulators.

20.

21. Cued speech for enhancing speech perception and first language development of children with cochlear implants. Trends Amplif 2010;14:96-112. PMID: 20724357. PMCID: PMC4111351. DOI: 10.1177/1084713810375567.
Abstract
Nearly 300 million people worldwide have moderate to profound hearing loss. Hearing impairment, if not adequately managed, has a strong socioeconomic and affective impact on individuals. Cochlear implants have become the most effective vehicle for helping profoundly deaf children and adults to understand spoken language, to be sensitive to environmental sounds, and, to some extent, to listen to music. The auditory information delivered by the cochlear implant remains non-optimal for speech perception because it delivers a spectrally degraded signal and lacks some of the fine temporal acoustic structure. In this article, we discuss research revealing the multimodal nature of speech perception in normally hearing individuals, with important inter-subject variability in the weighting of auditory and visual information. We also discuss how audio-visual training, via Cued Speech, can improve speech perception in cochlear implantees, particularly in noisy contexts. Cued Speech is a system that combines visual information from speechreading with hand shapes positioned in different places around the face in order to deliver completely unambiguous information about the syllables and phonemes of spoken language. We support the view that exposure to Cued Speech before or after implantation could be important in the aural rehabilitation process of cochlear implantees, and we describe five lines of research that converge to support the view that Cued Speech can enhance speech perception in individuals with cochlear implants.

22. Reading Disabilities in SLI and Dyslexia Result From Distinct Phonological Impairments. Dev Neuropsychol 2009;34:296-311. DOI: 10.1080/87565640902801841.
|
23
|
Relation between deaf children's phonological skills in kindergarten and word recognition performance in first grade. J Child Psychol Psychiatry 2007; 48:139-46. [PMID: 17300552 DOI: 10.1111/j.1469-7610.2006.01700.x] [Citation(s) in RCA: 82] [Impact Index Per Article: 4.8]
Abstract
BACKGROUND The aim of the present study was twofold: 1) to determine whether phonological skills measured in deaf prereaders predict their later phonological and reading skills after one year of reading instruction as is the case for hearing children; 2) to examine whether the age of exposure to a fully specified phonological input such as Cued Speech may explain the inter-individual differences observed in deaf children's phonological and word recognition levels. METHOD Twenty-one 6-year-old deaf prereaders and 21 hearing children of the same chronological age performed two phonological tasks (rhyme decision and generation tasks); they were re-assessed 12 months later and presented with other phonological tasks (rhyme decision and common unit identification tasks) and a written word choice test. RESULTS Phonological skills measured before learning to read predicted the written word recognition score the following year, both for hearing and for deaf participants. Age of onset of exposure to Cued Speech was also a strong predictor of phonological and written word recognition scores in beginning deaf readers. CONCLUSIONS The evidence broadly supports the idea of a capacity for acquiring phonological skills in deaf children. Deaf children who are able to develop an implicitly structured phonological knowledge before learning to read will be better readers when this knowledge becomes explicit under the pressure of reading instruction.
|
24
|
Le rôle des informations visuelles dans le développement du langage de l'enfant sourd muni d'un implant cochléaire [The role of visual information in the language development of deaf children with a cochlear implant]. Enfance 2007. [DOI: 10.3917/enf.593.0245] [Citation(s) in RCA: 11] [Impact Index Per Article: 0.6]
|
25
|
|
26
|
Lateralization effects during semantic and rhyme judgement tasks in deaf and hearing subjects. Brain and Language 2003; 87:227-240. [PMID: 14585292 DOI: 10.1016/s0093-934x(03)00104-4] [Citation(s) in RCA: 11] [Impact Index Per Article: 0.5]
Abstract
A visual hemifield experiment investigated hemispheric specialization among hearing children and adults and prelingually, profoundly deaf youngsters who were exposed intensively to Cued Speech (CS). Of interest was whether deaf CS users, who undergo a development of phonology and grammar of the spoken language similar to that of hearing youngsters, would display similar laterality patterns in the processing of written language. Semantic, rhyme, and visual judgement tasks were used. In the visual task, no VF advantage was observed. An RVF (left hemisphere) advantage was obtained for both the deaf and the hearing subjects for the semantic task, supporting Neville's claim that the acquisition of competence in the grammar of language is critical in establishing the specialization of the left hemisphere for language. For the rhyme task, however, an RVF advantage was obtained for the hearing subjects, but not for the deaf ones, suggesting that different neural resources are recruited by deaf and hearing subjects. Hearing the sounds of language may be necessary to develop left lateralised processing of rhymes.
|
27
|
Abstract
Recent investigations have indicated a relationship between the development of cerebral lateralization for processing language and the level of development of linguistic skills in hearing children. The research on cerebral lateralization for language processing in deaf persons is compatible with this view. We have argued that the absence of appropriate input during a critical time window creates a risk for deaf children that the initial bias for left-hemisphere specialization will be distorted or disappear. Two experiments were conducted to test this hypothesis. The results of these investigations showed that children educated early and intensively with cued speech or with sign language display more evidence of left-hemisphere specialization for the processing of their native language than do those who have been exposed later and less intensively to those languages.
|
28
|
Rhyme generation in deaf students: the effect of exposure to cued speech. Journal of Deaf Studies and Deaf Education 2003; 8:250-270. [PMID: 15448052 DOI: 10.1093/deafed/eng014] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.4]
Abstract
This study compares the rhyme-generation ability of deaf participants with severe to profound hearing losses from cued speech (CS) and non-cued speech (NCS) backgrounds with that of a hearing comparison group, for consistent orthography-to-phonology (O-P) rhyming elements, or rimes (e.g., -ail in sail is always pronounced the same), and inconsistent orthography-to-phonology (I-O-P) rhyming elements, where the orthographic rime (e.g., -ear) has different pronunciations in words such as bear and rear. Rhyming accuracy was better for O-P target words than for I-O-P target words. The performance of the deaf participants from CS backgrounds, although falling between that of the hearing and the NCS groups, did not differ significantly from that of the hearing group. By contrast, the performance of the NCS group was lower than that of the hearing group. Hearing and CS participants produced more orthographically different responses (e.g., blue-few), whereas participants from the NCS group produced more orthographically similar responses (e.g., blue-true), indicating that the hearing and CS groups rely more on phonology and the NCS group more on spelling to generate rhymes. The results support the use of cued speech for developing phonological abilities of deaf students to promote their reading abilities.
|
29
|
Abstract
Do the visuomanual modality and the structure of the sequence of numbers in sign language have an impact on the development of counting and its use by deaf children? The sequence of number signs in Belgian French Sign Language follows a base-5 rule, while the number sequence in oral French follows a base-10 rule. The accuracy and use of the number sequence were investigated in hearing children varying in age from 3 years 4 months to 5 years 8 months and in deaf children varying in age from 4 years to 6 years 2 months. Three tasks were used: abstract counting, object counting, and creation of sets of a given cardinality. Deaf children exhibited age-related lags in their knowledge of the number sequence; they made different errors from those of hearing children, reflecting the rule-bound nature of sign language. Remarkably, their performance in object counting and creating sets of given cardinality was similar to that of hearing children who knew a longer number sequence, indicating a better use of counting than predicted by their knowledge of the linguistic sequence of numbers.
|
30
|
Phonological similarity effects in memory for serial order of cued speech. Journal of Speech, Language, and Hearing Research 2001; 44:949-963. [PMID: 11708535 DOI: 10.1044/1092-4388(2001/074)] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.1]
Abstract
Experiment I investigated memory for serial order by congenitally, profoundly deaf individuals, 6-22 years old, for words presented via Cued Speech (CS) without sound. CS is a system that resolves the ambiguity inherent in speechreading through the addition of manual cues. The phonological components of CS are mouth shape, hand shape, and hand placement. Of interest was whether the recall of serial order was lower for lists of words similar in both mouth shape and hand placement, in mouth shape only, or in hand placement only, than for control lists designed to minimize these similarities. Deaf participants showed lower performance on the three similar lists than on the control lists, suggesting that deaf individuals use the phonology of CS to support their recall. In Experiment II, the same lists were administered to two groups of hearing participants. One group, experienced producers of CS, received the CS stimuli without sound; the other group, unfamiliar with CS, received the CS stimuli audiovisually. Participants experienced with CS showed no effect of hand placement similarity, suggesting that this effect may be related to the linguistic experience of deaf participants. The recency effect was greater in the hearing group provided with sound, indicating that the traces left by auditory stimuli are perceptually more salient than those left by the visual stimuli encountered in CS.
|
31
|
Variability in deaf children's spelling: The effect of language experience. Journal of Educational Psychology 2001. [DOI: 10.1037/0022-0663.93.3.554] [Citation(s) in RCA: 19] [Impact Index Per Article: 0.8]
|
32
|
The rhyming skills of deaf children educated with phonetically augmented speechreading. The Quarterly Journal of Experimental Psychology A: Human Experimental Psychology 2000; 53:349-75. [PMID: 10881610 DOI: 10.1080/713755898] [Citation(s) in RCA: 53] [Impact Index Per Article: 2.2]
Abstract
Two experiments investigated whether profoundly deaf children's rhyming ability was determined by the linguistic input that they were exposed to in their early childhood. Children educated with Cued Speech (CS) were compared to other deaf children, educated orally or with sign language. In CS, speechreading is combined with manual cues that disambiguate it. The central hypothesis is that CS allows deaf children to develop accurate phonological representations, which, in turn, assist in the emergence of accurate rhyming abilities. Experiment 1 showed that the deaf children educated early with CS performed better at rhyme judgement than did other deaf children. The performance of early CS-users was not influenced by word spelling. Experiment 2 confirmed this result in a rhyme generation task. Taken together, results support the hypothesis that rhyming ability depends on early exposure to a linguistic input specifying all phonological contrasts, independently of the modality (visual or auditory) in which this input is perceived.
|
33
|
Abstract
Hearing and deaf children, ranging in age from 6 years 8 months to 14 years 4 months, and matched for general spelling level, were required to spell high-frequency and low-frequency words. Of interest was performance in relation to degree of exposure to Cued Speech (CS), which is a system delivering phonetically augmented speechreading through the visual modality. Groups were (a) hearing children, (b) deaf children exposed early and intensively to CS at home (CS-Home), and (c) deaf children exposed to CS later and at school only (CS-School). Most of the spelling productions of hearing children as well as of CS-Home children were phonologically accurate for high-frequency as well as for low-frequency words. CS-School children, who had less specified phonological representations, made a lower proportion of phonologically accurate spellings. These findings indicate that the accuracy of phonological representations, independent of the modality (acoustic versus visual) through which spoken language is perceived, determines the acquisition of phonology-to-orthography mappings. Analyses of the spelling productions indicate that the acquisition of orthographic representations of high precision depends on fully specified phonological representations.
|
34
|
Do deaf children use phonological syllables as reading units? Journal of Deaf Studies and Deaf Education 1999; 4:124-143. [PMID: 15579882 DOI: 10.1093/deafed/4.2.124] [Citation(s) in RCA: 17] [Impact Index Per Article: 0.7]
Abstract
This study aimed at examining whether deaf children process written words on the basis of phonological units. In French, the syllable is a phonologically and orthographically well-defined unit. French deaf children and hearing children matched on word recognition level were asked to copy written words and pseudo-words. The number of glances at the item, copying duration, and the locus of the first segmentation (i.e., after the first glance) within the item were measured. The main question was whether the segments copied by the deaf children corresponded to syllables as defined by phonological and orthographic rules. The results showed that deaf children, like hearing children, used syllables as copying units when the syllable boundaries were marked both by orthographic and phonological criteria. However, in a condition in which orthographic and phonological criteria were differentiated, the deaf children did not perform phonological segmentations while the hearing children did. We discuss two explanatory hypotheses. First, items in this condition were difficult to decode for deaf children; second, orthographic units were probably easier to process for deaf children than phonological units because of a lack of automaticity in their phonological conversion processes for pseudo-words. Finally, incidental observations during the experimental task raised the question of the use of fingerspelled units.
|
35
|
Phonological representations in deaf children: the importance of early linguistic experience. Scand J Psychol 1998; 39:169-73. [PMID: 9800532 DOI: 10.1111/1467-9450.393074] [Citation(s) in RCA: 11] [Impact Index Per Article: 0.4]
Abstract
It is argued that the development of phonological representations in deaf children does not necessarily depend on auditory speech experience, either at the perception or at the production level. Instead, this development depends upon early experience of an input in which all phonological contrasts are well specified, independently of input modality. This is argued on the basis of studies investigating the phonological and morpho-phonological abilities of profoundly deaf children exposed early to Cued Speech (CS). The paper concludes with some speculations about the effect of early exposure to CS on the development of language-specific processes housed in the left hemisphere.
|
36
|
Visual speech in the head: the effect of cued-speech on rhyming, remembering, and spelling. Journal of Deaf Studies and Deaf Education 1996; 1:234-248. [PMID: 15579827 DOI: 10.1093/oxfordjournals.deafed.a014299] [Citation(s) in RCA: 13] [Impact Index Per Article: 0.5]
|