1
Hidalgo C, Zielinski C, Chen S, Roman S, Truy E, Schön D. Similar gaze behaviour during dialogue perception in congenitally deaf children with cochlear implants and normal hearing children. Int J Lang Commun Disord 2024; 59:2441-2453. [PMID: 39073184 DOI: 10.1111/1460-6984.13094]
Abstract
BACKGROUND The perceptual and speech production abilities of children with cochlear implants (CIs) are usually tested with word and sentence repetition or naming tests. However, these tests are quite far removed from the linguistic contexts of daily life. AIM Here, we describe a way of investigating the link between language comprehension and anticipatory verbal behaviour that promotes the use of more complex listening situations. METHODS AND PROCEDURE The setup consists of watching an audio-visual dialogue between two actors. Children's gaze switches from one speaker to the other serve as a proxy for their prediction abilities. Moreover, to better understand the basis and the impact of anticipatory behaviour, we also measured children's ability to understand the dialogue content, their speech perception and memory skills, and their rhythmic skills, which also require temporal predictions. Importantly, we compared the performance of children with CIs with that of an age-matched group of children with normal hearing (NH). OUTCOMES AND RESULTS While children with CIs showed poorer speech perception and verbal working memory abilities than NH children, there was no difference in anticipatory gaze behaviour. Interestingly, in children with CIs only, we found a significant correlation between dialogue comprehension, perceptual skills and anticipatory gaze behaviour. CONCLUSION Our results extend to a dialogue context previous findings showing an absence of predictive deficits in children with CIs. The current design seems an interesting avenue for providing an accurate and objective estimate of anticipatory language behaviour in a more ecological linguistic context, including with young children. WHAT THIS PAPER ADDS What is already known on the subject Children with cochlear implants seem to have difficulty extracting structure from, and learning, sequential input patterns, possibly due to signal degradation and auditory deprivation in the first years of life. Reduced use of contextual information and slow language processing have also been reported among children with hearing loss. What this paper adds to existing knowledge Here we show that, in a rather complex linguistic context such as watching a dialogue between two individuals, children with cochlear implants are able to use speech and language structure to anticipate gaze switches to the upcoming speaker. What are the clinical implications of this work? The present design seems an interesting avenue for providing an accurate and objective estimate of anticipatory behaviour in a more ecological and dynamic linguistic context. Importantly, this measure is implicit and has previously been used with very young (normal-hearing) children, showing that they spontaneously make anticipatory gaze switches by age two. Thus, this approach may be of interest for refining speech comprehension assessment at a rather early age after cochlear implantation, when explicit behavioural tests are not always reliable and sensitive.
Affiliation(s)
- Céline Hidalgo
- Aix Marseille Univ, Inserm, INS, Inst Neurosci Syst, Marseille, France
- Christelle Zielinski
- Aix-Marseille Univ, Institute of Language, Communication and the Brain, Marseille, France
- Sophie Chen
- Aix Marseille Univ, Inserm, INS, Inst Neurosci Syst, Marseille, France
- Stéphane Roman
- Aix Marseille Univ, Inserm, INS, Inst Neurosci Syst, Marseille, France
- Pediatric Otolaryngology Department, La Timone Children's Hospital (APHM), Marseille, France
- Eric Truy
- Service d'ORL et de Chirurgie cervico-faciale, Hôpital Edouard Herriot, CHU, Lyon, France
- Inserm U1028, Lyon Neuroscience Research Center, Equipe IMPACT, Lyon, France
- CNRS UMR5292, Lyon Neuroscience Research Center, Equipe IMPACT, Lyon, France
- University Lyon 1, Lyon, France
- Daniele Schön
- Aix Marseille Univ, Inserm, INS, Inst Neurosci Syst, Marseille, France
- Aix-Marseille Univ, Institute of Language, Communication and the Brain, Marseille, France
2
Walker EA. The Importance of High-Frequency Bandwidth on Speech and Language Development in Children: A Review of Patricia Stelmachowicz's Contributions to Pediatric Audiology. Semin Hear 2023; 44:S3-S16. [PMID: 36970651 PMCID: PMC10033203 DOI: 10.1055/s-0043-1764138]
Abstract
We review the literature related to Patricia Stelmachowicz's research in pediatric audiology, specifically focusing on the influence of audibility in language development and acquisition of linguistic rules. Pat Stelmachowicz spent her career increasing our awareness and understanding of children with mild to severe hearing loss who use hearing aids. Using a variety of novel experiments and stimuli, Pat and her colleagues produced a robust body of evidence to support the hypothesis that development moderates the role of frequency bandwidth on speech perception, particularly for fricative sounds. The prolific research that came out of Pat's lab had several important implications for clinical practice. First, her work highlighted that children require access to more high-frequency speech information than adults in the detection and identification of fricatives such as /s/ and /z/. These high-frequency speech sounds are important for morphological and phonological development. Consequently, the limited bandwidth of conventional hearing aids may delay the formation of linguistic rules in these two domains for children with hearing loss. Second, it emphasized the importance of not merely applying adult findings to the clinical decision-making process in pediatric amplification. Clinicians should use evidence-based practices to verify and provide maximum audibility for children who use hearing aids to acquire spoken language.
Affiliation(s)
- Elizabeth A. Walker
- Department of Communication Sciences and Disorders, University of Iowa, Iowa City, Iowa
3
Davies B, Holt R, Demuth K. Children with hearing loss can use subject-verb agreement to predict during spoken language processing. J Exp Child Psychol 2023; 226:105545. [PMID: 36126586 DOI: 10.1016/j.jecp.2022.105545]
Abstract
Rapid processing of spoken language is aided by the ability to predict upcoming words using both semantic and syntactic cues. However, although children with hearing loss (HL) can predict upcoming words using semantic associations, little is known about their ability to predict using syntactic dependencies such as subject-verb (SV) agreement. This study examined whether school-aged children with hearing aids and/or cochlear implants can use SV agreement to predict upcoming nouns when processing spoken language. Although they did demonstrate prediction with plural SV agreement, they did so more slowly than their normal hearing (NH) peers. This may be due to weaker grammatical representations given that function words and grammatical inflections typically have lower perceptual salience. Thus, a better understanding of morphosyntactic representations in children with HL, and their ability to use these for prediction, sheds much-needed light on the online language processing challenges and abilities of this population.
Affiliation(s)
- Benjamin Davies
- Department of Linguistics, Level 3 Australian Hearing Hub, Macquarie University, Sydney, New South Wales 2109, Australia.
- Rebecca Holt
- Department of Linguistics, Level 3 Australian Hearing Hub, Macquarie University, Sydney, New South Wales 2109, Australia
- Katherine Demuth
- Department of Linguistics, Level 3 Australian Hearing Hub, Macquarie University, Sydney, New South Wales 2109, Australia
4
Simeon KM, Grieco-Calub TM. The Impact of Hearing Experience on Children's Use of Phonological and Semantic Information During Lexical Access. J Speech Lang Hear Res 2021; 64:2825-2844. [PMID: 34106737 PMCID: PMC8632499 DOI: 10.1044/2021_jslhr-20-00547]
Abstract
Purpose The purpose of this study was to examine the extent to which phonological competition and semantic priming influence lexical access in school-aged children with cochlear implants (CIs) and children with normal acoustic hearing. Method Participants included children who were 5-10 years of age with either normal hearing (n = 41) or bilateral severe to profound sensorineural hearing loss and used CIs (n = 13). All participants completed a two-alternative forced-choice task while eye gaze to visual images was recorded and quantified during a word recognition task. In this task, the target image was juxtaposed with a competitor image that was either a phonological onset competitor (i.e., shared the same initial consonant-vowel-consonant syllable as the target) or an unrelated distractor. Half of the trials were preceded by an image prime that was semantically related to the target image. Results Children with CIs showed evidence of phonological competition during real-time processing of speech. This effect, however, was smaller and occurred later in the time course of speech processing than what was observed in children with normal hearing. The presence of a semantically related visual prime reduced the effects of phonological competition in both groups of children, but to a greater degree in children with CIs. Conclusions Children with CIs were able to process single words similarly to their counterparts with normal hearing. However, children with CIs appeared to rely more on surrounding semantic information than their normal-hearing counterparts.
Affiliation(s)
- Katherine M. Simeon
- Roxelyn and Richard Pepper Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL
- Tina M. Grieco-Calub
- Roxelyn and Richard Pepper Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL
- Hugh Knowles Hearing Center, Northwestern University, Evanston, IL
5
Deaf Children of Hearing Parents Have Age-Level Vocabulary Growth When Exposed to American Sign Language by 6 Months of Age. J Pediatr 2021; 232:229-236. [PMID: 33482219 PMCID: PMC8085057 DOI: 10.1016/j.jpeds.2021.01.029]
Abstract
OBJECTIVE To examine whether children who are deaf or hard of hearing who have hearing parents can develop age-level vocabulary skills when they have early exposure to a sign language. STUDY DESIGN This cross-sectional study of vocabulary size included 78 children who are deaf or hard of hearing between 8 and 68 months of age who were learning American Sign Language (ASL) and had hearing parents. Children who were exposed to ASL before 6 months of age or between 6 and 36 months of age were compared with a reference sample of 104 deaf and hard of hearing children who have parents who are deaf and sign. RESULTS Deaf and hard of hearing children with hearing parents who were exposed to ASL in the first 6 months of life had age-expected receptive and expressive vocabulary growth. Children who had a short delay in ASL exposure had relatively smaller expressive but not receptive vocabulary sizes, and made rapid gains. CONCLUSIONS Although hearing parents generally learn ASL alongside their children who are deaf, their children can develop age-expected vocabulary skills when exposed to ASL during infancy. Children who are deaf with hearing parents can predictably and consistently develop age-level vocabularies at rates similar to native signers; early vocabulary skills are robust predictors of development across domains.
6
Holt R, Bruggeman L, Demuth K. Children with hearing loss can predict during sentence processing. Cognition 2021; 212:104684. [PMID: 33901882 DOI: 10.1016/j.cognition.2021.104684]
Abstract
Listeners readily anticipate upcoming sentence constituents; however, little is known about prediction when the input is suboptimal, as it is for children with hearing loss (HL). Here we examined whether children with hearing aids and/or cochlear implants use semantic context to predict upcoming spoken sentence completions. We expected reduced prediction among children with HL, but found they were able to predict similarly to children with normal hearing. This suggests prediction is robust even when input quality is chronically suboptimal, and is compatible with the idea that recent advances in the management of pre-lingual HL may have minimised some of the language processing differences between children with and without HL.
Affiliation(s)
- Rebecca Holt
- Department of Linguistics, Macquarie University, Level 3 Australian Hearing Hub, 16 University Ave, NSW 2109, Australia.
- Laurence Bruggeman
- Department of Linguistics, Macquarie University, Level 3 Australian Hearing Hub, 16 University Ave, NSW 2109, Australia; The MARCS Institute for Brain, Behaviour & Development, ARC Centre of Excellence for the Dynamics of Language, Western Sydney University; Bullecourt Ave, Milperra, NSW 2214, Australia.
- Katherine Demuth
- Department of Linguistics, Macquarie University, Level 3 Australian Hearing Hub, 16 University Ave, NSW 2109, Australia.
7
Auditory processing in children: Role of working memory and lexical ability in auditory closure. PLoS One 2020; 15:e0240534. [PMID: 33147602 PMCID: PMC7641369 DOI: 10.1371/journal.pone.0240534]
Abstract
We examined the relationship between cognitive-linguistic mechanisms and auditory closure ability in children. Sixty-seven school-age children recognized isolated words and keywords in sentences that were interrupted at a rate of 2.5 Hz and 5 Hz. In essence, children were given only 50% of the speech information and asked to repeat the complete word or sentence. Children's working memory capacity (WMC), attention, lexical knowledge, and retrieval from long-term memory (LTM) abilities were also measured to model their role in auditory closure ability. Overall, recognition of monosyllabic words and lexically easy multisyllabic words was significantly better at the 2.5 Hz interruption rate than at 5 Hz. Recognition of lexically hard multisyllabic words and keywords in sentences was better at 5 Hz relative to 2.5 Hz. Based on the best-fit generalized "logistic" linear mixed effects models, there was a significant interaction between WMC and lexical difficulty of words. WMC was positively related only to recognition of lexically easy words. Lexical knowledge was found to be crucial for recognition of words and sentences, regardless of interruption rate. In addition, LTM retrieval ability was significantly associated with sentence recognition. These results suggest that lexical knowledge and the ability to retrieve information from LTM are crucial for children's speech recognition in adverse listening situations. Study findings make a compelling case for assessing lexical knowledge and retrieval abilities, and targeting them in intervention, in children with listening difficulties.
8
Lee Y, Sim H. Bilateral cochlear implantation versus unilateral cochlear implantation in deaf children: Effects of sentence context and listening conditions on recognition of spoken words in sentences. Int J Pediatr Otorhinolaryngol 2020; 137:110237. [PMID: 32658807 DOI: 10.1016/j.ijporl.2020.110237]
Abstract
OBJECTIVES Previous studies have investigated the efficacy of bilateral cochlear implants (CIs) in deaf children. The current study focused on the use of sentence-context information in different listening conditions to better explain the benefits of bilateral cochlear implantation. We compared the word recognition abilities of children with bilateral CIs and children with unilateral CIs in relation to sentence context and listening conditions. Additionally, we investigated whether sentence context- and listening condition-dependent word recognition scores can differentiate children with bilateral CIs from children with unilateral CIs. METHODS Twenty children with bilateral CIs and 20 children with unilateral CIs participated in this study. All children were presented with semantically controlled sentences (high vs. low predictability) in quiet and noisy conditions and were asked to repeat the final words of each sentence. RESULTS Children with bilateral CIs had significantly higher word recognition scores than children with unilateral CIs on words embedded in both high- and low-predictability sentences in noisy conditions. The two groups recognized more words in high-predictability sentences than in low-predictability sentences in noisy conditions. The scores on the high-predictability sentences in noisy conditions significantly differentiated children with bilateral CIs from children with unilateral CIs. CONCLUSION Bilateral cochlear implantation is more advantageous than unilateral cochlear implantation at the auditory-linguistic processing level in complex listening conditions.
Affiliation(s)
- Youngmee Lee
- Department of Communication Disorders, Ewha Womans University, Seoul, Republic of Korea
- Hyunsub Sim
- Department of Communication Disorders, Ewha Womans University, Seoul, Republic of Korea.
9
Hall ML, Dills S. The Limits of "Communication Mode" as a Construct. J Deaf Stud Deaf Educ 2020; 25:383-397. [PMID: 32432678 DOI: 10.1093/deafed/enaa009]
Abstract
Questions about communication mode (a.k.a. "communication options" or "communication opportunities") remain among the most controversial issues in the many fields concerned with the development and well-being of children (and adults) who are d/Deaf or hard of hearing. In this manuscript, we argue that this debate persists in large part because of limitations of the construct itself. We focus on what we term "the crucial question": namely, what kind of experience with linguistic input during infancy and toddlerhood is most likely to result in mastery of at least one language (spoken or signed) by school entry. We argue that the construct of communication mode, as currently construed, actively prevents the discovery of compelling answers to that question. To substantiate our argument, we present a review of a relevant subset of the recent empirical literature and document the prevalence of our concerns. We conclude by articulating the desiderata of an alternative construct that, if appropriately measured, would have the potential to yield answers to what we identify as "the crucial question."
10
Hall ML. The Input Matters: Assessing Cumulative Language Access in Deaf and Hard of Hearing Individuals and Populations. Front Psychol 2020; 11:1407. [PMID: 32636790 PMCID: PMC7319016 DOI: 10.3389/fpsyg.2020.01407]
Abstract
Deaf and hard-of-hearing (DHH) children present several challenges to traditional methods of language assessment, and yet language assessment for this population is absolutely essential for optimizing their developmental potential. Whereas assessment often focuses on language outcomes, this Conceptual Analysis argues that assessing cumulative language input is critically important both in clinical work with DHH individuals and in research/public health contexts concerned with DHH populations. At the individual level, paying attention to the input (and the person's access to it) is vital for discriminating disorder from delay, and for setting goals and strategies for reaching them. At the population level, understanding relationships between cumulative language input and resulting language outcomes is essential to the broader public health efforts aimed at identifying strategies to improve outcomes in DHH populations and to theoretical efforts to understand the role that language plays in child development. Unfortunately, several factors jointly result in DHH children's input being under-described at both individual and population levels: for example, overly simplistic ways of classifying input, and the lack of tools for assessing input more thoroughly. To address these limitations, this Conceptual Analysis proposes a new way of characterizing a DHH child's cumulative experience with input, and outlines the features that a tool would need to have in order to measure this alternative construct.
Affiliation(s)
- Matthew L Hall
- Department of Communication Sciences and Disorders, Temple University, Philadelphia, PA, United States
11
Masked Sentence Recognition in Children, Young Adults, and Older Adults: Age-Dependent Effects of Semantic Context and Masker Type. Ear Hear 2020; 40:1117-1126. [PMID: 30601213 DOI: 10.1097/aud.0000000000000692]
Abstract
OBJECTIVES Masked speech recognition in normal-hearing listeners depends in part on masker type and semantic context of the target. Children and older adults are more susceptible to masking than young adults, particularly when the masker is speech. Semantic context has been shown to facilitate noise-masked sentence recognition in all age groups, but it is not known whether age affects a listener's ability to use context with a speech masker. The purpose of the present study was to evaluate the effect of masker type and semantic context of the target as a function of listener age. DESIGN Listeners were children (5 to 16 years), young adults (19 to 30 years), and older adults (67 to 81 years), all with normal or near-normal hearing. Maskers were either speech-shaped noise or two-talker speech, and targets were either semantically correct (high context) sentences or semantically anomalous (low context) sentences. RESULTS As predicted, speech reception thresholds were lower for young adults than either children or older adults. Age effects were larger for the two-talker masker than the speech-shaped noise masker, and the effect of masker type was larger in children than older adults. Performance tended to be better for targets with high than low semantic context, but this benefit depended on age group and masker type. In contrast to adults, children benefitted less from context in the two-talker speech masker than the speech-shaped noise masker. Context effects were small compared with differences across age and masker type. CONCLUSIONS Different effects of masker type and target context are observed at different points across the lifespan. While the two-talker masker is particularly challenging for children and older adults, the speech masker may limit the use of semantic context in children but not adults.
12
Walker EA, Kessler D, Klein K, Spratford M, Oleson JJ, Welhaven A, McCreery RW. Time-Gated Word Recognition in Children: Effects of Auditory Access, Age, and Semantic Context. J Speech Lang Hear Res 2019; 62:2519-2534. [PMID: 31194921 PMCID: PMC6808355 DOI: 10.1044/2019_jslhr-h-18-0407]
Abstract
Purpose We employed a time-gated word recognition task to investigate how children who are hard of hearing (CHH) and children with normal hearing (CNH) combine cognitive-linguistic abilities and acoustic-phonetic cues to recognize words in sentence-final position. Method The current study included 40 CHH and 30 CNH in 1st or 3rd grade. Participants completed vocabulary and working memory tests and a time-gated word recognition task consisting of 14 high- and 14 low-predictability sentences. A time-to-event model was used to evaluate the effect of the independent variables (age, hearing status, predictability) on word recognition. Mediation models were used to examine the associations between the independent variables (vocabulary size and working memory), aided audibility, and word recognition. Results Gated words were identified significantly earlier for high-predictability than low-predictability sentences. First-grade CHH and CNH showed no significant difference in performance. Third-grade CHH needed more information than CNH to identify final words. Aided audibility was associated with word recognition. This association was fully mediated by vocabulary size but not working memory. Conclusions Both CHH and CNH benefited from the addition of semantic context. Interventions that focus on consistent aided audibility and vocabulary may enhance children's ability to fill in gaps in incoming messages.
Affiliation(s)
- Elizabeth A. Walker
- Department of Communication Sciences and Disorders, University of Iowa, Iowa City
- David Kessler
- Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN
- Kelsey Klein
- Department of Communication Sciences and Disorders, University of Iowa, Iowa City
- Anne Welhaven
- Department of Biostatistics, University of Iowa, Iowa City
13
Ben-Itzhak D, Adi-Bensaid L. Auditory recognition in toddlers with typical hearing and toddlers with hearing loss using the Hebrew version of the Mr. Potato Head Task. Int J Audiol 2018; 57:592-599. [PMID: 29741119 DOI: 10.1080/14992027.2018.1458162]
Abstract
OBJECTIVE This study describes the adaptation of the Mr. Potato Head Task into Hebrew and explores the development of word and sentence recognition in toddlers with typical hearing (TH) and toddlers with hearing loss (HL). DESIGN Toddlers manipulated Mr. Potato Head according to auditory instructions. STUDY SAMPLE One hundred and seventeen toddlers with TH and 28 toddlers with HL, age 23-48 months. RESULTS Internal consistency scores in TH toddlers: words, α = 0.85; sentences, α = 0.87. In toddlers with HL: words, α = 0.88; sentences, α = 0.84. The findings showed a clear upward trajectory in the TH toddlers, plateauing at age four. Toddlers with HL showed poorer performance in general, but exhibited a similar trajectory, albeit with greater individual variability. Toddlers with HL performed less well than age-matched toddlers with TH, but performed at the same level as toddlers with TH matched for hearing experience. Severity of HL was associated with performance level. CONCLUSIONS The Hebrew-adapted version can provide a developmental assessment of word and sentence recognition in both groups of toddlers. These findings have important implications for toddlers with HL, for whom assessment tools at the sentence level are rare.
Affiliation(s)
- Drorit Ben-Itzhak
- Department of Communication Sciences and Disorders, Ono Academic College, Kiryat Ono, Israel
- Limor Adi-Bensaid
- Department of Communication Sciences and Disorders, Ono Academic College, Kiryat Ono, Israel
14
Patro C, Mendel LL. Gated Word Recognition by Postlingually Deafened Adults With Cochlear Implants: Influence of Semantic Context. J Speech Lang Hear Res 2018; 61:145-158. [PMID: 29242894 DOI: 10.1044/2017_jslhr-h-17-0141]
Abstract
PURPOSE The main goal of this study was to investigate the minimum amount of sensory information required to recognize spoken words (isolation points [IPs]) in listeners with cochlear implants (CIs) and to investigate the facilitative effects of semantic context on IPs. METHOD Listeners with CIs as well as those with normal hearing (NH) participated in the study. In Experiment 1, the CI users listened to unprocessed (full-spectrum) stimuli and individuals with NH listened to full-spectrum or vocoder-processed speech. IPs were determined for both groups, who listened to gated consonant-nucleus-consonant words that were selected based on lexical properties. In Experiment 2, the role of semantic context on IPs was evaluated. Target stimuli were chosen from the Revised Speech Perception in Noise corpus based on the lexical properties of the final words. RESULTS The results indicated that spectrotemporal degradations adversely impacted IPs for gated words, and CI users as well as participants with NH listening to vocoded speech had longer IPs than participants with NH who listened to full-spectrum speech. In addition, there was a clear disadvantage due to lack of semantic context in all groups, regardless of the spectral composition of the target speech (full-spectrum or vocoded). Finally, we showed that CI users (and listeners with NH presented with vocoded speech) can overcome such word processing difficulties with the help of semantic context and perform as well as listeners with NH. CONCLUSION Word recognition occurs even before the entire word is heard because listeners with NH associate an acoustic input with its mental representation to understand speech. The results of this study provide insight into the role of spectral degradation in the processing of spoken words in isolation and the potential benefits of semantic context. These results may also explain why CI users rely substantially on semantic context.
| | - Lisa Lucks Mendel
- School of Communication Sciences & Disorders, University of Memphis, TN