1
Porto L, Wouters J, van Wieringen A. Speech perception in noise, working memory, and attention in children: A scoping review. Hear Res 2023;439:108883. [PMID: 37722287] [DOI: 10.1016/j.heares.2023.108883]
Abstract
PURPOSE Speech perception in noise is an everyday occurrence for adults and children alike. The factors that influence how well individuals cope with noise during spoken communication are not well understood, particularly in the case of children. This article reviews the available evidence on how working memory and attention play a role in children's speech perception in noise, how characteristics of measures affect results, and how this relationship differs in non-typical populations. METHOD This article is a scoping review of the literature available on PubMed. Forty articles met the inclusion criteria: children as participants, a measure of speech perception in noise, a measure of attention and/or working memory, and an attempt to establish relationships between the measures. Findings were charted and presented in relation to the research questions. RESULTS The majority of studies report that attention and especially working memory are involved in children's speech perception in noise. We provide an overview of the impact of certain task characteristics on findings across the literature, as well as how these affect non-typical populations. CONCLUSION While most of the work reviewed here suggests that working memory and attention are important abilities employed by children in overcoming the difficulties imposed by noise during spoken communication, methodological variability still prevents a clearer picture from emerging.
Affiliation(s)
- Lyan Porto
- Department of Neurosciences, University of Leuven, Research group Experimental Oto-Rino-Laryngologie. O&N II, Herestraat 49, Leuven 3000, Belgium.
- Jan Wouters
- Department of Neurosciences, University of Leuven, Research group Experimental Oto-Rino-Laryngologie. O&N II, Herestraat 49, Leuven 3000, Belgium
- Astrid van Wieringen
- Department of Neurosciences, University of Leuven, Research group Experimental Oto-Rino-Laryngologie. O&N II, Herestraat 49, Leuven 3000, Belgium; Department of Special Needs Education, University of Oslo, Norway
2
Cameron S, Boyle C, Dillon H. The development of the Language-Independent Speech in Noise and Reverberation test (LISiNaR) and evaluation in listeners with English as a second language. Int J Audiol 2023;62:756-766. [PMID: 35654088] [DOI: 10.1080/14992027.2022.2078432]
Abstract
OBJECTIVE To create a language-independent, ecologically valid auditory processing assessment and evaluate relative stimulus intelligibility in native and non-native English speakers. DESIGN The Language-Independent Speech in Noise and Reverberation Test (LISiNaR) targets comprised consonant-vowel (CVCV) pseudo-words. Distractors comprised CVCVCVCV pseudo-words. Stimuli were presented over headphones using an iPad either face-to-face or remotely. Scoring occurred adaptively to establish a participant's speech reception threshold in noise (SRT). The listening environment was simulated using reverberant and anechoic head-related transfer functions. In four test conditions, targets originated from 0°. Distractors originated from either ±90°, ±67.5° and ±45° (spatially separated) or 0° azimuth (co-located). Reverberation impact (RI) was calculated as the difference in SRTs between the anechoic and reverberant conditions, and spatial advantage (SA) as the difference between the spatially separated and co-located conditions. STUDY SAMPLE Young adult native speakers of Australian English (n = 24) and Canadian English (n = 25), and non-native English speakers (n = 34). RESULTS No significant effects of language occurred for the test conditions, RI or SA. A small but significant effect of delivery mode occurred for RI. Reverberation impacted SRT by 5 dB relative to anechoic conditions. CONCLUSION Performance on LISiNaR is not affected by the native language or accent of the groups tested in this study.
Affiliation(s)
- Sharon Cameron
- Department of Linguistics, Macquarie University, Sydney, Australia
- Christian Boyle
- Department of Linguistics, Macquarie University, Sydney, Australia
- Harvey Dillon
- Department of Linguistics, Macquarie University, Sydney, Australia
- Division of Human Communication, Development and Hearing, University of Manchester, Manchester, United Kingdom
3
Lewis DE. Speech Understanding in Complex Environments by School-Age Children with Mild Bilateral or Unilateral Hearing Loss. Semin Hear 2023;44:S36-S48. [PMID: 36970648] [PMCID: PMC10033204] [DOI: 10.1055/s-0043-1764134]
Abstract
Numerous studies have shown that children with mild bilateral (MBHL) or unilateral hearing loss (UHL) experience speech perception difficulties in poor acoustics. Much of the research in this area has been conducted via laboratory studies using speech-recognition tasks with a single talker and presentation via earphones and/or from a loudspeaker located directly in front of the listener. Real-world speech understanding is more complex, however, and these children may need to exert greater effort than their peers with normal hearing to understand speech, potentially impacting progress in a number of developmental areas. This article discusses issues and research relative to speech understanding in complex environments for children with MBHL or UHL and implications for real-world listening and understanding.
Affiliation(s)
- Dawna E. Lewis
- Center for Hearing Research, Boys Town National Research Hospital, Omaha, Nebraska
4
Vettori G, Di Leonardo L, Secchi S, Astolfi A, Bigozzi L. Primary school children’s verbal working memory performances in classrooms with different acoustic conditions. Cogn Dev 2022. [DOI: 10.1016/j.cogdev.2022.101256]
5
Abstract
OBJECTIVES The purpose of the present study was to determine whether age and hearing ability influence selective attention during childhood. Specifically, we hypothesized that immaturity and disrupted auditory experience impede selective attention during childhood. DESIGN Seventy-seven school-age children (5 to 12 years of age) participated in this study: 61 children with normal hearing and 16 children with bilateral hearing loss who use hearing aids and/or cochlear implants. Children performed selective attention-based behavioral change detection tasks composed of target and distractor streams in the auditory and visual modalities. In the auditory modality, children were presented with two streams of single-syllable words spoken by a male and female talker. In the visual modality, children were presented with two streams of grayscale images. In each task, children were instructed to selectively attend to the target stream, inhibit attention to the distractor stream, and press a key as quickly as possible when they detected a frequency (auditory modality) or color (visual modality) deviant stimulus in the target, but not distractor, stream. Performance on the auditory and visual change detection tasks was quantified by response sensitivity, which reflects children's ability to selectively attend to deviants in the target stream and inhibit attention to those in the distractor stream. Children also completed a standardized measure of attention and inhibitory control. RESULTS Younger children and children with hearing loss demonstrated lower response sensitivity, and therefore poorer selective attention, than older children and children with normal hearing, respectively. The effect of hearing ability on selective attention was observed across the auditory and visual modalities, although the extent of this group difference was greater in the auditory modality than the visual modality due to differences in children's response patterns. Additionally, children's performance on a standardized measure of attention and inhibitory control related to their performance during the auditory and visual change detection tasks. CONCLUSIONS Overall, the findings from the present study suggest that age and hearing ability influence children's ability to selectively attend to a target stream in both the auditory and visual modalities. The observed differences in response patterns across modalities, however, reveal a complex interplay between hearing ability, task modality, and selective attention during childhood. While the effect of age on selective attention is expected to reflect the immaturity of cognitive and linguistic processes, the effect of hearing ability may reflect altered development of selective attention due to disrupted auditory experience early in life and/or a differential allocation of attentional resources to meet task demands.
6
A Slight Increase in Reverberation Time in the Classroom Affects Performance and Behavioral Listening Effort. Ear Hear 2021;43:460-476. [PMID: 34369418] [DOI: 10.1097/aud.0000000000001110]
Abstract
OBJECTIVES The purpose of this study was to investigate the effect of a small change in reverberation time (from 0.57 to 0.69 s) in a classroom on children's performance and listening effort. Aiming for ecological listening conditions, the change in reverberation time was combined with the presence or absence of classroom noise. In three academic tasks, the study examined whether the effect of reverberation was modulated by the presence of noise and depended on the children's age. DESIGN A total of 302 children (aged 11-13 years, grades 6-8) with normal hearing participated in the study. Three typical tasks of daily classroom activities (speech perception, sentence comprehension, and mental calculation) were administered to groups of children in two listening conditions (quiet and classroom noise). The experiment was conducted inside real classrooms, where reverberation time was controlled. The outcomes considered were task accuracy and response times (RTs), the latter taken as a behavioral proxy for listening effort. Participants were also assessed on reading comprehension and math fluency. To investigate the impact of noise and/or reverberation, these two scores were entered in the statistical model to control for individual child's general academic abilities. RESULTS While the longer reverberation time did not significantly affect accuracy or RTs under the quiet condition, it had several effects when in combination with classroom noise, depending on the task measured. A significant drop in accuracy with a longer reverberation time emerged for the speech perception task, but only for the grade 6 children. The effect on accuracy of a longer reverberation time was nonsignificant for sentence comprehension (always at ceiling), and depended on the children's age in the mental calculation task. RTs were longer for moderate than for short reverberation times in the speech perception and sentence comprehension tasks, while there was no significant effect of the different reverberation times on RTs in the mental calculation task. CONCLUSIONS The results indicate small, but statistically significant, effects of a small change in reverberation time on listening effort as well as accuracy for children aged 11 to 13 performing typical tasks of daily classroom activities. Thus, the results extend previous findings in adults to children as well. The findings also contribute to a better understanding of the practical implications and importance of optimal ranges of reverberation time in classrooms. A comparison with previous studies underscored the importance of early reflections as well as reverberation times in classrooms.
7
Iglehart F. Speech Perception in Classroom Acoustics by Children With Hearing Loss and Wearing Hearing Aids. Am J Audiol 2020;29:6-17. [PMID: 31835909] [PMCID: PMC7229780] [DOI: 10.1044/2019_aja-19-0010]
Abstract
Purpose The classroom acoustic standard ANSI/ASA S12.60-2010/Part 1 requires a reverberation time (RT) of 0.3 s for children with hearing impairment, shorter than its requirement of 0.6 s for children with typical hearing. While preliminary data from conference proceedings support this new RT requirement of 0.3 s, peer-reviewed data supporting a 0.3-s RT are not available for children wearing hearing aids. To help address this, this article compares speech perception performance by children with hearing aids across RTs, including those specified in the ANSI/ASA-2010 standard. A related clinical issue is whether assessments of speech perception conducted in near-anechoic sound booths, which may overestimate performance in reverberant classrooms, provide a more reliable estimate when the child is in a classroom with a short RT of 0.3 s. To address this, the study also compared speech perception by children with hearing aids in a sound booth to listening in 0.3-s RT. Method Participants listened in classroom RTs of 0.3, 0.6, and 0.9 s and in a near-anechoic sound booth. All conditions also included a 21-dB range of speech-to-noise ratios (SNRs) to further represent classroom listening environments. The performance measure, using the Bamford–Kowal–Bench Speech-in-Noise (BKB-SIN) test, was 50% correct word recognition across these acoustic conditions, with supplementary analyses of percent correct. Results Each reduction in RT from 0.9 to 0.6 to 0.3 s significantly benefited the children's perception of speech. Scores obtained in a sound booth were significantly better than those measured in 0.3-s RT. Conclusion These results support the acoustic standard of 0.3-s RT for children with hearing impairment in learning spaces ≤ 283 m3, as specified in ANSI/ASA S12.60-2010/Part 1. Additionally, speech perception testing in a sound booth did not accurately predict listening ability in a classroom with 0.3-s RT. Supplemental Material https://doi.org/10.23641/asha.11356487
Affiliation(s)
- Frank Iglehart
- Formerly of the Audiology Department, Clarke Schools for Hearing and Speech, Northampton, MA
8
Looking Behavior and Audiovisual Speech Understanding in Children With Normal Hearing and Children With Mild Bilateral or Unilateral Hearing Loss. Ear Hear 2017;39:783-794. [PMID: 29252979] [DOI: 10.1097/aud.0000000000000534]
Abstract
OBJECTIVES Visual information from talkers facilitates speech intelligibility for listeners when audibility is challenged by environmental noise and hearing loss. Less is known about how listeners actively process and attend to visual information from different talkers in complex multi-talker environments. This study tracked looking behavior in children with normal hearing (NH), mild bilateral hearing loss (MBHL), and unilateral hearing loss (UHL) in a complex multi-talker environment to examine the extent to which children look at talkers and whether looking patterns relate to performance on a speech-understanding task. It was hypothesized that performance would decrease as perceptual complexity increased and that children with hearing loss would perform more poorly than their peers with NH. Children with MBHL or UHL were expected to demonstrate greater attention to individual talkers during multi-talker exchanges, indicating that they were more likely to attempt to use visual information from talkers to assist in speech understanding in adverse acoustics. It also was of interest to examine whether MBHL, versus UHL, would differentially affect performance and looking behavior. DESIGN Eighteen children with NH, eight children with MBHL, and 10 children with UHL participated (8-12 years). They followed audiovisual instructions for placing objects on a mat under three conditions: a single talker providing instructions via a video monitor, four possible talkers alternately providing instructions on separate monitors in front of the listener, and the same four talkers providing both target and nontarget information. Multi-talker background noise was presented at a 5 dB signal-to-noise ratio during testing. An eye tracker monitored looking behavior while children performed the experimental task. RESULTS Behavioral task performance was higher for children with NH than for either group of children with hearing loss. There were no differences in performance between children with UHL and children with MBHL. Eye-tracker analysis revealed that children with NH looked more at the screens overall than did children with MBHL or UHL, though individual differences were greater in the groups with hearing loss. Listeners in all groups spent a small proportion of time looking at relevant screens as talkers spoke. Although looking was distributed across all screens, there was a bias toward the right side of the display. There was no relationship between overall looking behavior and performance on the task. CONCLUSIONS The present study examined the processing of audiovisual speech in the context of a naturalistic task. Results demonstrated that children distributed their looking to a variety of sources during the task, but that children with NH were more likely to look at screens than were those with MBHL/UHL. However, all groups looked at the relevant talkers as they were speaking only a small proportion of the time. Despite variability in looking behavior, listeners were able to follow the audiovisual instructions, and children with NH demonstrated better performance than children with MBHL/UHL. These results suggest that performance on some challenging multi-talker audiovisual tasks is not dependent on visual fixation to relevant talkers for children with NH or with MBHL/UHL.
9
Brännström KJ, Johansson E, Vigertsson D, Morris DJ, Sahlén B, Lyberg-Åhlander V. How Children Perceive the Acoustic Environment of Their School. Noise Health 2017;19:84-94. [PMID: 29192618] [PMCID: PMC5437757] [DOI: 10.4103/nah.nah_33_16]
Abstract
Objective: Children’s own ratings and opinions on their schools’ sound environments add important information on noise sources. They can also provide information on how to further improve and optimize children’s learning situation in their classrooms. This study reports on the Swedish translation and application of an evidence-based questionnaire that measures how children perceive the acoustic environment of their school. Study Design: The Swedish version was made using a back-to-back translation. Responses to the questionnaire, along with demographic data, were collected for 149 children aged 9–13 years. Results: The Swedish translation of the questionnaire can be reduced from 93 to 27 items. The 27 items were distributed over five separate factors measuring different underlying constructs with high internal consistency and high inter-item correlations. The responses demonstrated that the dining hall/canteen and the corridors are the school spaces with the poorest listening conditions. The highest annoyance was reported for tests and reading; student-generated sounds occurred more frequently within the classroom than sudden unexpected sounds; and road traffic noise and teachers in adjoining classrooms were the most frequently occurring sounds from outside the classroom. Several demographic characteristics could be used to predict the outcome on these factors. Conclusion: The findings suggest that crowded spaces are most challenging; the children themselves generate most of the noise inside the classroom, but it is also common to hear road traffic noise and teachers in adjoining classrooms. The extent of annoyance that noise causes depends on the task but seems greatest in tasks with higher verbal processing demands. Finally, children receiving special support seem to report that they are more susceptible to noise than the typical child.
Affiliation(s)
- Karl Jonas Brännström
- Department of Logopedics, Phoniatrics and Audiology, Clinical Sciences in Lund, Lund University, Lund, Sweden
- Erika Johansson
- Department of Logopedics, Phoniatrics and Audiology, Clinical Sciences in Lund, Lund University, Lund, Sweden
- Daniel Vigertsson
- Department of Logopedics, Phoniatrics and Audiology, Clinical Sciences in Lund, Lund University, Lund, Sweden
- David J Morris
- Speech Pathology and Audiology, Department of Nordic Studies and Linguistics, University of Copenhagen, Copenhagen, Denmark
- Birgitta Sahlén
- Department of Logopedics, Phoniatrics and Audiology, Clinical Sciences in Lund, Lund University, Lund, Sweden
- Viveka Lyberg-Åhlander
- Department of Logopedics, Phoniatrics and Audiology, Clinical Sciences in Lund, Lund University, Lund; Linneaus' Environment Cognition, Communication and Learning, Lund University, Lund, Sweden