1. Levari T, Snedeker J. Understanding words in context: A naturalistic EEG study of children's lexical processing. Journal of Memory and Language 2024; 137:104512. [PMID: 38855737; PMCID: PMC11160963; DOI: 10.1016/j.jml.2024.104512]
Abstract
When listening to speech, adults rely on context to anticipate upcoming words. Evidence for this comes from studies demonstrating that the N400, an event-related potential (ERP) that indexes ease of lexical-semantic processing, is influenced by the predictability of a word in context. We know far less about the role of context in children's speech comprehension. The present study explored lexical processing in adults and 5-10-year-old children as they listened to a story. ERPs time-locked to the onset of every word were recorded. Each content word was coded for frequency, semantic association, and predictability. In both children and adults, N400s reflect word predictability, even when controlling for frequency and semantic association. These findings suggest that both adults and children use top-down constraints from context to anticipate upcoming words when listening to stories.
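As a rough illustration of the kind of analysis the abstract describes (not the authors' actual pipeline), the sketch below fits a single-trial mixed-effects regression of N400 amplitude on predictability while controlling for frequency and semantic association. The file name and column names are hypothetical assumptions.

```python
# Minimal sketch, assuming a table with one row per content word per listener.
# Hypothetical columns: n400 (mean amplitude in the N400 window), predictability,
# log_frequency, semantic_association, age_group, subject. Not the authors' code.
import pandas as pd
import statsmodels.formula.api as smf

trials = pd.read_csv("n400_single_trials.csv")  # hypothetical file

model = smf.mixedlm(
    "n400 ~ predictability + log_frequency + semantic_association + age_group",
    data=trials,
    groups=trials["subject"],  # random intercept per listener
)
print(model.fit().summary())
```

If predictability carries a reliable coefficient after frequency and association are in the model, that mirrors the paper's claim that context effects are not reducible to lexical-level variables.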
Affiliation(s)
- Tatyana Levari, Department of Psychology, Harvard University, United States
- Jesse Snedeker, Department of Psychology, Harvard University, United States
2. Li Y, Xing H, Zhang L, Shu H, Zhang Y. How Visual Word Decoding and Context-Driven Auditory Semantic Integration Contribute to Reading Comprehension: A Test of Additive vs. Multiplicative Models. Brain Sci 2021; 11(7):830. [PMID: 34201695; PMCID: PMC8301993; DOI: 10.3390/brainsci11070830]
Abstract
Theories of reading comprehension emphasize decoding and listening comprehension as two essential components. The current study aimed to investigate how Chinese character decoding and context-driven auditory semantic integration contribute to reading comprehension in Chinese middle school students. Seventy-five middle school students were tested. Context-driven auditory semantic integration was assessed with speech-in-noise tests in which the fundamental frequency (F0) contours of spoken sentences were either kept natural or acoustically flattened, with the latter requiring a higher degree of contextual information. Statistical modeling with hierarchical regression was conducted to examine the contributions of Chinese character decoding and context-driven auditory semantic integration to reading comprehension. Performance in Chinese character decoding and auditory semantic integration scores with the flattened (but not natural) F0 sentences significantly predicted reading comprehension. Furthermore, the contributions of these two factors to reading comprehension were better fitted with an additive model than with a multiplicative model. These findings indicate that reading comprehension in middle schoolers is associated not only with character decoding but also with the listening ability to make better use of sentential context for semantic integration in a severely degraded speech-in-noise condition. The results add to our understanding of the multi-faceted nature of reading comprehension in children. Future research could further address the age-dependent development and maturation of reading skills by examining and controlling other important cognitive variables, and could apply neuroimaging techniques such as functional magnetic resonance imaging and electrophysiology to reveal the neural substrates and neural oscillatory patterns underlying the contribution of auditory semantic integration and the observed additive model to reading comprehension.
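The additive vs. multiplicative contrast can be made concrete with an ordinary hierarchical regression in which a product term is added to the additive model; the sketch below follows that logic under assumed variable names (decoding, integration, comprehension), not the authors' actual code or data.

```python
# Minimal sketch of the additive vs. multiplicative comparison, with hypothetical
# column names and data file. Not the study's actual analysis script.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("reading_scores.csv")  # hypothetical: one row per student

additive = smf.ols("comprehension ~ decoding + integration", data=df).fit()
multiplicative = smf.ols("comprehension ~ decoding * integration", data=df).fit()  # adds decoding:integration

print("Additive adj. R^2:      ", round(additive.rsquared_adj, 3))
print("Multiplicative adj. R^2:", round(multiplicative.rsquared_adj, 3))
# F-test: does the product term explain additional variance beyond the additive model?
print(multiplicative.compare_f_test(additive))
```

A non-significant product term with essentially unchanged adjusted R-squared is the pattern the abstract reports as favoring the additive model.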
Affiliation(s)
- Yu Li, Division of Science and Technology, BNU-HKBU United International College, Zhuhai 519087, China
- Hongbing Xing, Institute on Education Policy and Evaluation of International Students, Beijing Language and Culture University, Beijing 100083, China
- Linjun Zhang, Beijing Advanced Innovation Center for Language Resources and College of Advanced Chinese Training, Beijing Language and Culture University, Beijing 100083, China (corresponding author)
- Hua Shu, State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing 100875, China
- Yang Zhang, Department of Speech-Language-Hearing Sciences and Center for Neurobehavioral Development, University of Minnesota, Minneapolis, MN 55455, USA (corresponding author)
3. Nagels L, Gaudrain E, Vickers D, Hendriks P, Başkent D. School-age children benefit from voice gender cue differences for the perception of speech in competing speech. The Journal of the Acoustical Society of America 2021; 149:3328. [PMID: 34241121; DOI: 10.1121/10.0004791]
Abstract
Differences in speakers' voice characteristics, such as mean fundamental frequency (F0) and vocal-tract length (VTL), which primarily define speakers' so-called perceived voice gender, facilitate the perception of speech in competing speech. Perceiving speech in competing speech is particularly challenging for children, which may relate to their lower sensitivity to differences in voice characteristics compared with adults. This study investigated the development of the benefit from F0 and VTL differences in school-age children (4-12 years) for separating two competing speakers, one of whom they were tasked with comprehending, and the relationship between this benefit and the children's corresponding voice discrimination thresholds. Children benefited from differences in F0, VTL, or both cues at all ages tested. This benefit remained proportionally the same across age, although overall accuracy continued to differ from that of adults. Additionally, children's benefit from F0 and VTL differences and their overall accuracy were not related to their discrimination thresholds. Hence, although children's voice discrimination thresholds and their perception of speech in competing speech develop throughout the school-age years, children already show a benefit from voice gender cue differences early on. Factors other than children's discrimination thresholds seem to relate more closely to their developing ability to perceive speech in competing speech.
Affiliation(s)
- Leanne Nagels, Center for Language and Cognition Groningen (CLCG), University of Groningen, Groningen 9712EK, Netherlands
- Etienne Gaudrain, CNRS UMR 5292, Lyon Neuroscience Research Center, Auditory Cognition and Psychoacoustics, Inserm UMRS 1028, Université Claude Bernard Lyon 1, Université de Lyon, Lyon, France
- Deborah Vickers, Sound Lab, Cambridge Hearing Group, Clinical Neurosciences Department, University of Cambridge, Cambridge CB2 0SZ, United Kingdom
- Petra Hendriks, Center for Language and Cognition Groningen (CLCG), University of Groningen, Groningen 9712EK, Netherlands
- Deniz Başkent, Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen 9713GZ, Netherlands
4.
Abstract
OBJECTIVES The present study investigated presentation modality differences in lexical encoding and working memory representations of spoken words of older, hearing-impaired adults. Two experiments were undertaken: a memory-scanning experiment and a stimulus gating experiment. The primary objective of experiment 1 was to determine whether memory encoding and retrieval and scanning speeds are different for easily identifiable words presented in auditory-visual (AV), auditory-only (AO), and visual-only (VO) modalities. The primary objective of experiment 2 was to determine if memory encoding and retrieval speed differences observed in experiment 1 could be attributed to the early availability of AV speech information compared with AO or VO conditions. DESIGN Twenty-six adults over age 60 years with bilateral mild to moderate sensorineural hearing loss participated in experiment 1, and 24 adults who took part in experiment 1 participated in experiment 2. An item recognition reaction-time paradigm (memory-scanning) was used in experiment 1 to measure (1) lexical encoding speed, that is, the speed at which an easily identifiable word was recognized and placed into working memory, and (2) retrieval speed, that is, the speed at which words were retrieved from memory and compared with similarly encoded words (memory scanning) presented in AV, AO, and VO modalities. Experiment 2 used a time-gated word identification task to test whether the time course of stimulus information available to participants predicted the modality-related memory encoding and retrieval speed results from experiment 1. RESULTS The results of experiment 1 revealed significant differences among the modalities with respect to both memory encoding and retrieval speed, with AV fastest and VO slowest. These differences motivated an examination of the time course of stimulus information available as a function of modality. Results from experiment 2 indicated the encoding and retrieval speed advantages for AV and AO words compared with VO words were mostly driven by the time course of stimulus information. The AV advantage seen in encoding and retrieval speeds is likely due to a combination of robust stimulus information available to the listener earlier in time and lower attentional demands compared with AO or VO encoding and retrieval. CONCLUSIONS Significant modality differences in lexical encoding and memory retrieval speeds were observed across modalities. The memory scanning speed advantage observed for AV compared with AO or VO modalities was strongly related to the time course of stimulus information. In contrast, lexical encoding and retrieval speeds for VO words could not be explained by the time-course of stimulus information alone. Working memory processes for the VO modality may be impacted by greater attentional demands and less information availability compared with the AV and AO modalities. Overall, these results support the hypothesis that the presentation modality for speech inputs (AV, AO, or VO) affects how older adult listeners with hearing loss encode, remember, and retrieve what they hear.
5. Walker EA, Kessler D, Klein K, Spratford M, Oleson JJ, Welhaven A, McCreery RW. Time-Gated Word Recognition in Children: Effects of Auditory Access, Age, and Semantic Context. Journal of Speech, Language, and Hearing Research 2019; 62:2519-2534. [PMID: 31194921; PMCID: PMC6808355; DOI: 10.1044/2019_jslhr-h-18-0407]
Abstract
Purpose We employed a time-gated word recognition task to investigate how children who are hard of hearing (CHH) and children with normal hearing (CNH) combine cognitive-linguistic abilities and acoustic-phonetic cues to recognize words in sentence-final position. Method The current study included 40 CHH and 30 CNH in 1st or 3rd grade. Participants completed vocabulary and working memory tests and a time-gated word recognition task consisting of 14 high- and 14 low-predictability sentences. A time-to-event model was used to evaluate the effect of the independent variables (age, hearing status, predictability) on word recognition. Mediation models were used to examine whether the association between aided audibility and word recognition was mediated by vocabulary size or working memory. Results Gated words were identified significantly earlier for high-predictability than low-predictability sentences. First-grade CHH and CNH showed no significant difference in performance. Third-grade CHH needed more information than CNH to identify final words. Aided audibility was associated with word recognition. This association was fully mediated by vocabulary size but not working memory. Conclusions Both CHH and CNH benefited from the addition of semantic context. Interventions that focus on consistent aided audibility and vocabulary may enhance children's ability to fill in gaps in incoming messages.
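One way to picture the time-to-event approach described above is to treat the gate at which a word is first identified as a survival time; the sketch below fits a Cox proportional hazards model under that framing. It is an illustrative stand-in with hypothetical column names, not the authors' statistical model.

```python
# Minimal sketch of a time-to-event analysis of gated word recognition.
# Hypothetical data: one row per word per child, with numerically coded predictors.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("gated_words.csv")  # columns: gate, recognized, age_group, hearing_status, predictability

cph = CoxPHFitter()
cph.fit(
    df[["gate", "recognized", "age_group", "hearing_status", "predictability"]],
    duration_col="gate",      # amount of the word heard before identification
    event_col="recognized",   # 1 = identified, 0 = never identified (censored)
)
cph.print_summary()  # hazard ratio > 1 means words are identified at earlier gates
```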
Affiliation(s)
- Elizabeth A. Walker, Department of Communication Sciences and Disorders, University of Iowa, Iowa City
- David Kessler, Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN
- Kelsey Klein, Department of Communication Sciences and Disorders, University of Iowa, Iowa City
- Anne Welhaven, Department of Biostatistics, University of Iowa, Iowa City
6. The Influence of Hearing Aid Gain on Gap-Detection Thresholds for Children and Adults With Hearing Loss. Ear Hear 2019; 39:969-979. [PMID: 29489468; DOI: 10.1097/aud.0000000000000558]
Abstract
OBJECTIVES The objective of this experiment was to examine the contributions of audibility to the ability to perceive a gap in noise for children and adults. Sensorineural hearing loss (SNHL) in adulthood is associated with a deficit in gap detection. It is well known that reduced audibility in adult listeners with SNHL contributes to this deficit; however, it is unclear to what extent hearing aid amplification can restore gap-detection thresholds, and the effects of childhood SNHL on gap-detection thresholds have not been described. For adults, it was hypothesized that restoring the dynamic range of hearing for listeners with SNHL would lead to approximately normal gap-detection thresholds. Children with normal hearing (NH) exhibit poorer gap-detection thresholds than adults. Because of their hearing loss, children with SNHL have less auditory experience than their peers with NH, yet it is unknown to what extent auditory experience impacts their ability to perceive gaps in noise. Even with the provision of amplification, it was hypothesized that children with SNHL would show a deficit in gap detection relative to their peers with normal hearing because of reduced auditory experience. DESIGN The ability to detect a silent interval in noise was tested by adapting the stimulus level required for detection of gap durations between 3 and 20 ms for adults and children with and without SNHL. Stimulus-level thresholds were measured for participants with SNHL without amplification and with two prescriptive procedures (the adult and child versions of the desired sensation level i/o program) using a hearing aid simulator. The child version better restored the normal dynamic range than the adult version. Adults and children with NH were tested without amplification. RESULTS When fitted using the procedure that best restored the dynamic range, adults with SNHL had stimulus-level thresholds similar to those of adults with normal hearing. Compared with the children with NH, the children with SNHL required a higher stimulus level to detect a 5-ms gap, despite having used the procedure that better restored the normal dynamic range of hearing. Otherwise, the two groups of children had similar stimulus-level thresholds. CONCLUSION These findings suggest that apparent deficits in temporal resolution, as measured using stimulus-level thresholds for the detection of gaps, are dependent on age and audibility. These novel results indicate that childhood SNHL may impair temporal resolution as measured by stimulus-level thresholds for the detection of a gap in noise. This work has implications for understanding the effects of amplification on the ability to perceive temporal cues in speech.
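The design above measures thresholds by adaptively adjusting stimulus level. As a generic illustration of how such adaptive tracking works (assumed details; this is not the study's actual procedure, step sizes, or stopping rule), the sketch below runs a 2-down/1-up staircase against a simulated listener.

```python
# Minimal sketch of a 2-down/1-up adaptive staircase for a stimulus-level threshold.
# The simulated listener and all parameters are hypothetical.
import random

def simulated_listener(level_db, true_threshold_db=55.0):
    """Toy psychometric function: detection becomes more likely as level rises."""
    return random.random() < 1 / (1 + 10 ** ((true_threshold_db - level_db) / 5))

def two_down_one_up(start_db=70.0, step_db=4.0, max_reversals=8):
    level, direction, reversals, correct_in_row = start_db, 0, [], 0
    while len(reversals) < max_reversals:
        if simulated_listener(level):
            correct_in_row += 1
            if correct_in_row == 2:            # two correct in a row -> lower the level
                correct_in_row = 0
                if direction == +1:
                    reversals.append(level)    # record a direction change (reversal)
                direction = -1
                level -= step_db
        else:                                  # any miss -> raise the level
            correct_in_row = 0
            if direction == -1:
                reversals.append(level)
            direction = +1
            level += step_db
    last = reversals[-6:]
    return sum(last) / len(last)               # threshold estimate: mean of final reversals

print(f"Estimated stimulus-level threshold: {two_down_one_up():.1f} dB")
```

A 2-down/1-up rule converges on roughly the 70.7%-correct point of the psychometric function, which is why it is a common choice for threshold tracking.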
7. Amichetti NM, Atagi E, Kong YY, Wingfield A. Linguistic Context Versus Semantic Competition in Word Recognition by Younger and Older Adults With Cochlear Implants. Ear Hear 2019; 39:101-109. [PMID: 28700448; PMCID: PMC5741484; DOI: 10.1097/aud.0000000000000469]
Abstract
OBJECTIVES The increasing number of older adults now receiving cochlear implants raises the question of how the novel signal produced by a cochlear implant may interact with cognitive aging in the recognition of words heard within a linguistic context. The objective of this study was to pit the facilitative effects of a constraining linguistic context against a potential age-sensitive negative effect of response competition on the effectiveness of word recognition. DESIGN Younger (n = 8; mean age = 22.5 years) and older (n = 8; mean age = 67.5 years) adult implant recipients heard 20 target words as the final words in sentences that manipulated the target word's probability of occurrence within the sentence context. Data from published norms were also used to measure response entropy, calculated from the total number of different responses and the probability distribution of the responses suggested by the sentence context. Sentence-final words were presented to participants using a word-onset gating paradigm, in which a target word was presented with increasing amounts of its onset duration in 50 msec increments until the word was correctly identified. RESULTS Results showed that for both younger and older adult implant users, the amount of word-onset information needed for correct recognition of sentence-final words was inversely proportional to their likelihood of occurrence within the sentence context, with older adults gaining a differential advantage from the contextual constraints offered by a sentence context. On the negative side, older adults' word recognition was differentially hampered by high response entropy, with this effect being driven primarily by the number of competing responses that might also fit the sentence context. CONCLUSIONS Consistent with previous research with normal-hearing younger and older adults, the present results showed older adult implant users' recognition of spoken words to be highly sensitive to linguistic context. This sensitivity, however, also resulted in a greater degree of interference from other words that might also be activated by the context, with negative effects on ease of word recognition. These results are consistent with an age-related inhibition deficit extending to the domain of semantic constraints on word recognition.
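Response entropy, as described above, summarizes how many different completions a sentence frame invites and how their probabilities are distributed. The sketch below computes that quantity from a set of norming responses; the completions and sentence frame are invented for illustration.

```python
# Minimal sketch: Shannon entropy of cloze-norm completions for one sentence frame.
# The norming responses below are made up, not data from the study.
from collections import Counter
from math import log2

def response_entropy(responses):
    """Return (entropy in bits, number of distinct responses)."""
    counts = Counter(responses)
    total = sum(counts.values())
    probs = [n / total for n in counts.values()]
    return -sum(p * log2(p) for p in probs), len(counts)

completions = ["broom"] * 18 + ["mop"] * 4 + ["brush"] * 2 + ["rag"]  # hypothetical norms
entropy_bits, n_types = response_entropy(completions)
print(f"{n_types} distinct responses, entropy = {entropy_bits:.2f} bits")
```

A frame dominated by a single completion yields low entropy (strong constraint, little competition), while many equally likely completions yield high entropy, which is the condition the abstract links to poorer recognition in older listeners.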
Affiliation(s)
- Nicole M. Amichetti, Volen National Center for Complex Systems, Brandeis University, Waltham, MA, USA
- Eriko Atagi, Volen National Center for Complex Systems, Brandeis University, Waltham, MA, USA; Department of Communication Sciences and Disorders, Northeastern University, Boston, MA, USA
- Ying-Yee Kong, Department of Communication Sciences and Disorders, Northeastern University, Boston, MA, USA
- Arthur Wingfield, Volen National Center for Complex Systems, Brandeis University, Waltham, MA, USA
8. Payne BR, Silcox JW. Aging, context processing, and comprehension. Psychology of Learning and Motivation 2019. [DOI: 10.1016/bs.plm.2019.07.001]
9. Boudelaa S. Non-Selective Lexical Access in Late Arabic-English Bilinguals: Evidence from Gating. Journal of Psycholinguistic Research 2018; 47:913-930. [PMID: 29417453; DOI: 10.1007/s10936-018-9564-9]
Abstract
Previous research suggests that late bilinguals who speak typologically distant languages are the least likely to show evidence of non-selective lexical access processes. This study puts this claim to test by using the gating task to determine whether words beginning with speech sounds that are phonetically similar in Arabic and English (e.g., [b,d,m,n]) give rise to selective or non-selective lexical access processes in late Arabic-English bilinguals. The results show that an acoustic-phonetic input (e.g., [bæ]) that is consistent with words in Arabic (e.g., [bædrun] "moon") and English (e.g., [bæd] "bad") activates lexical representations in both languages of the bilingual. This non-selective activation holds equally well for mixed lists with words from both Arabic and English and blocked lists consisting only of Arabic or English words. These results suggest that non-selective lexical access processes are the default mechanism even in late bilinguals of typologically distant languages.
Affiliation(s)
- Sami Boudelaa, Department of Linguistics, United Arab Emirates University, Al Ain, 15551, UAE; Department of Psychology, University of Cambridge, Cambridge, UK
10.
Abstract
OBJECTIVES The purpose of this study was to examine word recognition in children who are hard of hearing (CHH) and children with normal hearing (CNH) in response to time-gated words presented in high- versus low-predictability sentences (HP, LP), where semantic cues were manipulated. Findings inform our understanding of how CHH combine cognitive-linguistic and acoustic-phonetic cues to support spoken word recognition. It was hypothesized that both groups of children would be able to make use of linguistic cues provided by HP sentences to support word recognition. CHH were expected to require greater acoustic information (more gates) than CNH to correctly identify words in the LP condition. In addition, it was hypothesized that error patterns would differ across groups. DESIGN Sixteen CHH with mild to moderate hearing loss and 16 age-matched CNH participated (5 to 12 years). Test stimuli included 15 LP and 15 HP age-appropriate sentences. The final word of each sentence was divided into segments and recombined with the sentence frame to create a series of sentences in which the final word was progressively longer by the gated increments. Stimuli were presented monaurally through headphones, and children were asked to identify the target word at each successive gate. They were also asked to rate their confidence in their word choice using a five- or three-point scale. For CHH, the signals were processed through a hearing aid simulator. Standardized language measures were used to assess the contribution of linguistic skills. RESULTS Analysis of language measures revealed that the CNH and CHH performed within the average range on language abilities. Both groups correctly recognized a significantly higher percentage of words in the HP condition than in the LP condition. Although CHH performed comparably with CNH in terms of successfully recognizing the majority of words, differences were observed in the amount of acoustic-phonetic information needed to achieve accurate word recognition: CHH needed more gates than CNH to identify words in the LP condition. CNH rated their confidence significantly lower in the LP condition than in the HP condition; CHH, however, showed no significant difference in confidence between the conditions. Error patterns for incorrect word responses across gates and predictability varied depending on hearing status. CONCLUSIONS The results of this study suggest that CHH with age-appropriate language abilities took advantage of context cues in the HP sentences to guide word recognition in a manner similar to CNH. However, in the LP condition, they required more acoustic information (more gates) than CNH for word recognition. Differences in the structure of incorrect word responses and their nomination patterns across gates for CHH compared with their peers with NH suggest variations in how these groups use limited acoustic information to select word candidates.
11. Zhou H, Li Y, Liang M, Guan CQ, Zhang L, Shu H, Zhang Y. Mandarin-Speaking Children's Speech Recognition: Developmental Changes in the Influences of Semantic Context and F0 Contours. Front Psychol 2017; 8:1090. [PMID: 28701990; PMCID: PMC5487482; DOI: 10.3389/fpsyg.2017.01090]
Abstract
The goal of this developmental speech perception study was to assess whether and how age group modulated the influences of high-level semantic context and low-level fundamental frequency (F0) contours on the recognition of Mandarin speech by elementary and middle-school-aged children in quiet and interference backgrounds. The results revealed different patterns for semantic and F0 information. On the one hand, age group significantly modulated the use of F0 contours, indicating that elementary school children relied more on natural F0 contours than middle school children during Mandarin speech recognition. On the other hand, there was no significant modulation effect of age group on semantic context, indicating that children of both age groups used semantic context to assist speech recognition to a similar extent. Furthermore, the significant modulation effect of age group on the interaction between F0 contours and semantic context revealed that younger children could not make better use of semantic context in recognizing speech with flat F0 contours compared with natural F0 contours, whereas older children could benefit from semantic context even when natural F0 contours were altered, thus confirming the important role of F0 contours in Mandarin speech recognition by elementary school children. The developmental changes in the effects of high-level semantic and low-level F0 information on speech recognition might reflect differences in the auditory and cognitive resources associated with processing the two types of information in speech perception.
Affiliation(s)
- Hong Zhou, International Cultural Exchange School, Shanghai University of Finance and Economics, Shanghai, China
- Yu Li, Department of Cognitive Science and ARC Centre of Excellence in Cognition and Its Disorders, Macquarie University, Sydney, NSW, Australia
- Meng Liang, College of Allied Health Sciences, Beijing Language and Culture University, Beijing, China
- Connie Qun Guan, School of Foreign Studies, University of Science and Technology Beijing, Beijing, China
- Linjun Zhang, College of Allied Health Sciences, Beijing Language and Culture University, Beijing, China
- Hua Shu, National Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China
- Yang Zhang, Department of Speech-Language-Hearing Sciences and Center for Neurobehavioral Development, University of Minnesota, Minneapolis, MN, United States
12. Molis MR, Kampel SD, McMillan GP, Gallun FJ, Dann SM, Konrad-Martin D. Effects of hearing and aging on sentence-level time-gated word recognition. Journal of Speech, Language, and Hearing Research 2015; 58:481-496. [PMID: 25815688; PMCID: PMC4635971; DOI: 10.1044/2015_jslhr-h-14-0098]
Abstract
PURPOSE Aging is known to influence temporal processing, but its relationship to speech perception has not been clearly defined. To examine listeners' use of contextual and phonetic information, the Revised Speech Perception in Noise test (R-SPIN) was used to develop a time-gated word (TGW) task. METHOD In Experiment 1, R-SPIN sentence lists were matched on context, target-word length, and median word segment length necessary for target recognition. In Experiment 2, TGW recognition was assessed in quiet and in noise among adults of various ages with normal hearing to moderate hearing loss. Linear regression models of the minimum word duration necessary for correct identification and identification failure rates were developed. Age and hearing thresholds were modeled as continuous predictors with corrections for correlations among multiple measurements of the same participants. RESULTS While aging and hearing loss both had significant impacts on task performance in the most adverse listening condition (low context, in noise), for most conditions, performance was limited primarily by hearing loss. CONCLUSION Whereas hearing loss was strongly related to target-word recognition, the effect of aging was only weakly related to task performance. These results have implications for the design and evaluation of studies of hearing and aging.
13. Lash A, Rogers CS, Zoller A, Wingfield A. Expectation and entropy in spoken word recognition: effects of age and hearing acuity. Exp Aging Res 2013; 39:235-253. [PMID: 23607396; DOI: 10.1080/0361073x.2013.779175]
Abstract
BACKGROUND/STUDY CONTEXT: Older adults, especially those with reduced hearing acuity, can make good use of linguistic context in word recognition. Less is known about the effects of the weighted distribution of probable target and nontarget words that fit the sentence context (response entropy). The present study examined the effects of age, hearing acuity, linguistic context, and response entropy on spoken word recognition. METHODS Participants were 18 older adults with good hearing acuity (M age = 74.3 years), 18 older adults with mild-to-moderate hearing loss (M age = 76.1 years), and 18 young adults with age-normal hearing (M age = 19.6 years). Participants heard sentence-final words using a word-onset gating paradigm, in which words were heard with increasing amounts of onset information until they could be correctly identified. Degrees of context varied from a neutral context to a high context condition. RESULTS Older adults with poor hearing acuity required a greater amount of word onset information for recognition of words when heard in a neutral context compared with older adults with good hearing acuity and young adults. This difference progressively decreased with an increase in words' contextual probability. Unlike the young adults, both older adult groups' word recognition thresholds were sensitive to response entropy. Response entropy was not affected by hearing acuity. CONCLUSION Increasing linguistic context mitigates the negative effect of age and hearing loss on word recognition. The effect of response entropy on older adults' word recognition is discussed in terms of an age-related inhibition deficit.
Affiliation(s)
- Amanda Lash, Department of Psychology and Volen National Center for Complex Systems, Brandeis University, Waltham, Massachusetts 02454-9110, USA
14. Ben-David BM, Chambers CG, Daneman M, Pichora-Fuller MK, Reingold EM, Schneider BA. Effects of aging and noise on real-time spoken word recognition: evidence from eye movements. Journal of Speech, Language, and Hearing Research 2011; 54:243-262. [PMID: 20689026; DOI: 10.1044/1092-4388(2010/09-0233)]
Abstract
PURPOSE To use eye tracking to investigate age differences in real-time lexical processing in quiet and in noise, in light of the fact that older adults find it more difficult than younger adults to understand conversations in noisy situations. METHOD Twenty-four younger and 24 older adults followed spoken instructions referring to depicted objects, for example, "Look at the candle." Eye movements captured listeners' ability to differentiate the target noun (candle) from a similar-sounding phonological competitor (e.g., candy or sandal). Manipulations included the presence/absence of noise, the type of phonological overlap in target-competitor pairs, and the number of syllables. RESULTS After controlling for age-related differences in word recognition accuracy (by tailoring noise levels), younger and older adults showed similar online processing profiles when targets were discriminated from competitors that shared onset sounds. Age-related differences were found when target words were differentiated from rhyming competitors and were more extensive in noise. CONCLUSIONS Real-time spoken word recognition processes appear similar for younger and older adults in most conditions; however, age-related differences may be found in the discrimination of rhyming words (especially in noise), even when there are no age differences in word recognition accuracy. These results highlight the utility of eye movement methodologies for studying speech processing across the life span.
Affiliation(s)
- Boaz M Ben-David, Oral Dynamics Laboratory, Department of Speech-Language Pathology, 160-500 University Avenue, Toronto, Ontario M5G 1V7, Canada
15. Vousden JI. Units of English spelling-to-sound mapping: a rational approach to reading instruction. Applied Cognitive Psychology 2008. [DOI: 10.1002/acp.1371]
16. Stuart A. Development of Auditory Temporal Resolution in School-Age Children Revealed by Word Recognition in Continuous and Interrupted Noise. Ear Hear 2005; 26:78-88. [PMID: 15692306; DOI: 10.1097/00003446-200502000-00007]
Abstract
OBJECTIVE The purpose of the study was to investigate the development of one aspect of auditory temporal resolution in normal-hearing school-age children with word recognition in quiet and in spectrally identical continuous and interrupted noise. Typically, listeners experience a perceptual advantage (i.e., a "release from masking") in the interrupted noise relative to the continuous noise at equivalent signal-to-noise ratios (S/Ns). Any release from masking observed with children in the interrupted noise compared with the continuous noise at equivalent S/Ns could be interpreted as evidence for acquired temporal resolution ability. Differences in the amount of release from masking in the interrupted noise between children and adults could be interpreted as development of temporal resolution ability, or lack thereof, as revealed by word recognition in noise. It was hypothesized that word recognition performance would be poorer in children than adults; that performance differences would be more pronounced with competition; that word recognition performance would reach an asymptote at adult levels sooner in quiet than with competing stimuli; that children would demonstrate better performance in the interrupted noise relative to the continuous noise (i.e., display a release from masking); and that younger children would experience less release from masking compared with older children and adults (i.e., have less developed temporal resolution). DESIGN Eighty normal-hearing children aged 6 to 15 yr and 16 normal-hearing young adults participated. Word recognition performance with Northwestern University-Children's Perception of Speech (NU-CHIPS) stimuli was evaluated with an open-set response mode in quiet and in backgrounds of competing continuous steady-state and interrupted noise at S/Ns of 10, 0, -10, and -20 dB. Both noises were essentially identical in their spectral content and differed only in their temporal continuity. RESULTS Performance was better in the interrupted noise at poorer S/Ns, increased with increasing S/N, and improved with increasing age. Younger listeners were more susceptible to noise: they did not experience an equivalent perceptual advantage (i.e., a release from masking) in the interrupted noise at poorer S/Ns (i.e., < 10 dB) and generally required more favorable S/Ns to perform the same as the adult participants. These trends were less pronounced with increasing age. By 8 yr of age, children's performance in quiet matched adult levels, but performance in noise did not do so until after 11 yr of age. CONCLUSIONS As revealed by their NU-CHIPS word recognition performance in continuous and interrupted noises, children's temporal resolving abilities improve in their early school years and reach adult performance levels after 11 yr of age. It is speculated that these changes reflect maturation of the central auditory system.
Affiliation(s)
- Andrew Stuart, East Carolina University, Greenville, North Carolina 27858-4353, USA
17. Fallon M, Trehub SE, Schneider BA. Children's use of semantic cues in degraded listening environments. The Journal of the Acoustical Society of America 2002; 111:2242-2249. [PMID: 12051444; DOI: 10.1121/1.1466873]
Abstract
Children 5 and 9 years of age and adults were required to identify the final words of low- and high-context sentences in background noise. Age-related differences in the audibility of speech signals were minimized by selecting signal-to-noise ratios (SNRs) that yielded 78% correct performance for low-context sentences. As expected, children required more favorable SNRs than adults to achieve comparable levels of performance. A more difficult listening condition was generated by adding 2 dB of noise. In general, 5-year-olds performed more poorly than did 9-year-olds and adults. Listeners of all ages, however, showed comparable gains from context in both levels of noise, indicating that noise does not impede children's use of contextual cues.
Affiliation(s)
- Marianne Fallon, Department of Psychology, University of Toronto at Mississauga, Ontario, Canada
18. Picard M, Bradley JS. Revisiting Speech Interference in Classrooms. Int J Audiol 2001. [DOI: 10.3109/00206090109073117]
19. Wingfield A, Lindfield KC, Goodglass H. Effects of age and hearing sensitivity on the use of prosodic information in spoken word recognition. Journal of Speech, Language, and Hearing Research 2000; 43:915-925. [PMID: 11386478; DOI: 10.1044/jslhr.4304.915]
Abstract
It is well known that spoken words can often be recognized from just their onsets and that older adults require a greater word onset duration for recognition than young adults. In this study, young and older adults heard either just word onsets, word onsets followed by white noise indicating the full duration of the target word, or word onsets followed by a low-pass-filtered signal that indicated the number of syllables and syllabic stress (word prosody) in the absence of segmental information. Older adults required longer stimulus durations for word recognition under all conditions, with age differences in hearing sensitivity contributing significantly to this age difference. Within this difference, however, word recognition was facilitated by knowledge of word prosody to the same degree for young and older adults. These findings suggest, first, that listeners can detect and utilize word stress in making perceptual judgments and, second, that this ability remains spared in normal aging.
Affiliation(s)
- A Wingfield, Volen National Center for Complex Systems, Brandeis University, Waltham, MA 02454-9110, USA
20.
21. Marshall NB, Duke LW, Walley AC. Effects of age and Alzheimer's disease on recognition of gated spoken words. Journal of Speech and Hearing Research 1996; 39:724-733. [PMID: 8844553; DOI: 10.1044/jshr.3904.724]
Abstract
This study investigated the effects of normal aging and Alzheimer's disease on listeners' ability to recognize gated spoken words. Groups of healthy young adults, healthy older adults, and adults with Alzheimer's disease were presented with isolated gated spoken words. Theoretical predictions of the Cohort model of spoken word recognition (Marslen-Wilson, 1984) were tested, employing both between-group and within-group comparisons. The findings for the young adults supported the Cohort model's predictions. The findings for the older adult groups revealed different effects for age and disease. These results are interpreted in relation to the theoretical predictions, the findings of previous gating studies, and the differentiation of age-related from disease-related changes in spoken word recognition.
Affiliation(s)
- N B Marshall, Department of Neurology, Alzheimer's Disease Center, University of Alabama at Birmingham, USA
22. Elliott LL. Verbal auditory closure and the Speech Perception in Noise (SPIN) Test. Journal of Speech and Hearing Research 1995; 38:1363-1376. [PMID: 8747828; DOI: 10.1044/jshr.3806.1363]
Abstract
The ability to utilize auditory contextual information to facilitate speech recognition, termed verbal auditory closure, is postulated to be a specific factor or primary mental ability, separable from general intelligence or other mental functions. This paper proposes that measurement of verbal auditory closure provides useful clinical information. Because the Speech Perception in Noise (SPIN) Test allows separate scores for understanding of sentences that contain contextual information and of those that do not, the SPIN Test provides a good measure of verbal auditory closure. Now that an authorized version of the revised SPIN Test is commercially available, it is appropriate to review published information about the reported performance of different listener groups on this instrument and to propose additional research questions that deserve investigation.
23. Walley AC, Michela VL, Wood DR. The gating paradigm: effects of presentation format on spoken word recognition by children and adults. Perception & Psychophysics 1995; 57:343-351. [PMID: 7770325; DOI: 10.3758/bf03213059]
Abstract
This study focused on the impact of stimulus presentation format in the gating paradigm as a function of age. Two presentation formats were employed: the standard successive format and a duration-blocked format, in which gates from word onset were blocked by duration (i.e., gates for the same word were not temporally adjacent). In Experiment 1, the effect of presentation format on adults' recognition was assessed as a function of response format (written vs. oral). In Experiment 2, the effect of presentation format on kindergarteners', first graders', and adults' recognition was assessed with an oral response format only. Performance was typically poorer for the successive format than for the duration-blocked one. The role of response perseveration and negative feedback in producing this effect is considered, as is the effect of word frequency and cohort size on recognition. Although the successive format yields a conservative picture of recognition, presentation format did not have a markedly different effect across the three age levels studied. Thus, the gating paradigm would seem to be an appropriate one for making developmental comparisons of spoken word recognition.
Affiliation(s)
- A C Walley, Department of Psychology, University of Alabama, Birmingham 35294, USA