1
Lie S, Zekveld AA, Smits C, Kramer SE, Versfeld NJ. Learning effects in speech-in-noise tasks: Effect of masker modulation and masking release. J Acoust Soc Am 2024; 156:341-349. PMID: 38990038; DOI: 10.1121/10.0026519. Received 12/08/2023; accepted 06/19/2024.
Abstract
Previous research has shown that learning effects are present for speech intelligibility in temporally modulated (TM) noise, but not in stationary noise. The present study aimed to gain more insight into the factors that might affect the time course (the number of trials required to reach stable performance) and size [the improvement in the speech reception threshold (SRT)] of the learning effect. Two hypotheses were addressed: (1) learning effects are present in both TM and spectrally modulated (SM) noise and (2) the time course and size of the learning effect depend on the amount of masking release caused by either TM or SM noise. Eighteen normal-hearing adults (23-62 years) participated in SRT measurements, in which they listened to sentences in six masker conditions, including stationary, TM, and SM noise conditions. The results showed learning effects in all TM and SM noise conditions, but not for the stationary noise condition. The learning effect was related to the size of masking release: a larger masking release was accompanied by a longer time course and a larger learning effect. The results also indicate that speech is processed differently in SM noise than in TM noise.
Affiliation(s)
- Sisi Lie
- Amsterdam UMC, Vrije Universiteit Amsterdam, Otolaryngology-Head and Neck Surgery, Ear and Hearing, De Boelelaan, Amsterdam Public Health research institute, Amsterdam, The Netherlands
- Adriana A Zekveld
- Amsterdam UMC, Vrije Universiteit Amsterdam, Otolaryngology-Head and Neck Surgery, Ear and Hearing, De Boelelaan, Amsterdam Public Health research institute, Amsterdam, The Netherlands
- Cas Smits
- Amsterdam UMC, University of Amsterdam, Otolaryngology-Head and Neck Surgery, Ear and Hearing, Meibergdreef, Amsterdam Public Health research institute, Amsterdam, The Netherlands
- Sophia E Kramer
- Amsterdam UMC, Vrije Universiteit Amsterdam, Otolaryngology-Head and Neck Surgery, Ear and Hearing, De Boelelaan, Amsterdam Public Health research institute, Amsterdam, The Netherlands
- Niek J Versfeld
- Amsterdam UMC, Vrije Universiteit Amsterdam, Otolaryngology-Head and Neck Surgery, Ear and Hearing, De Boelelaan, Amsterdam Public Health research institute, Amsterdam, The Netherlands
2
Viswanathan V, Heinz MG, Shinn-Cunningham BG. Impact of Reduced Spectral Resolution on Temporal-Coherence-Based Source Segregation. bioRxiv 2024:2024.03.11.584489. PMID: 38586037; PMCID: PMC10998286; DOI: 10.1101/2024.03.11.584489.
Abstract
Hearing-impaired listeners struggle to understand speech in noise, even when using cochlear implants (CIs) or hearing aids. Successful listening in noisy environments depends on the brain's ability to organize a mixture of sound sources into distinct perceptual streams (i.e., source segregation). In normal-hearing listeners, temporal coherence of sound fluctuations across frequency channels supports this process by promoting grouping of elements belonging to a single acoustic source. We hypothesized that reduced spectral resolution, a hallmark of both electric/CI hearing (from current spread) and acoustic hearing with sensorineural hearing loss (from broadened tuning), degrades segregation based on temporal coherence. This is because reduced frequency resolution decreases the likelihood that a single sound source dominates the activity driving any specific channel; concomitantly, it increases the correlation in activity across channels. Consistent with our hypothesis, predictions from a physiologically plausible model of temporal-coherence-based segregation suggest that CI current spread reduces comodulation masking release (CMR; a correlate of temporal-coherence processing) and speech intelligibility in noise. These predictions are consistent with our behavioral data with simulated CI listening. Our model also predicts smaller CMR with increasing levels of outer-hair-cell damage. These results suggest that reduced spectral resolution relative to normal hearing impairs temporal-coherence-based segregation and speech-in-noise outcomes.
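The core claim in this abstract, that broader effective filters let each source leak into more channels and so raise the correlation in activity across channels, can be illustrated with a toy two-channel model. Everything below is a hypothetical sketch, not the authors' model: `spread` stands in for current spread or broadened tuning, and two independent sinusoidal modulators stand in for two sources.

```python
import math

def channel_correlation(spread):
    """Correlation between two channel envelopes when each channel leaks a
    fraction `spread` (0..0.5) of the *other* source, a stand-in for
    broadened tuning or CI current spread. Hypothetical toy model."""
    n = 2000
    # Two independent sources with different (orthogonal) modulation rates.
    src_a = [1 + math.sin(2 * math.pi * 4 * t / n) for t in range(n)]
    src_b = [1 + math.sin(2 * math.pi * 7 * t / n) for t in range(n)]
    # Each channel is dominated by one source, contaminated by the other.
    ch1 = [(1 - spread) * a + spread * b for a, b in zip(src_a, src_b)]
    ch2 = [spread * a + (1 - spread) * b for a, b in zip(src_a, src_b)]
    m1 = sum(ch1) / n
    m2 = sum(ch2) / n
    cov = sum((x - m1) * (y - m2) for x, y in zip(ch1, ch2))
    v1 = sum((x - m1) ** 2 for x in ch1)
    v2 = sum((y - m2) ** 2 for y in ch2)
    return cov / math.sqrt(v1 * v2)
```

With perfectly sharp tuning (`spread = 0`) the two channel envelopes are uncorrelated; as `spread` grows, the cross-channel correlation rises, which is the degradation of the temporal-coherence grouping cue the abstract describes.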
Affiliation(s)
- Vibha Viswanathan
- Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA 15213
- Michael G. Heinz
- Department of Speech, Language, and Hearing Sciences, Purdue University, West Lafayette, IN 47907
3
Cueille R, Lavandier M, Grimault N. Effects of reverberation on speech intelligibility in noise for hearing-impaired listeners. R Soc Open Sci 2022; 9:210342. PMID: 36061524; PMCID: PMC9428532; DOI: 10.1098/rsos.210342. Received 03/01/2021; accepted 08/09/2022.
Abstract
Reverberation can have a strong detrimental effect on speech intelligibility in noise. Two main monaural effects were studied here: the temporal smearing of the target speech, which makes the speech less understandable, and the temporal smearing of the noise, which reduces the opportunity for listening in the masker dips. These phenomena have been shown to affect normal-hearing (NH) listeners. The aim of this study was to understand whether hearing-impaired (HI) listeners are more affected by reverberation, and if so to identify which of these two effects is responsible. They were investigated separately and in combination, by applying reverberation either on the target speech, on the noise masker, or on both sources. Binaural effects were not investigated here. Intelligibility scores in the presence of stationary and modulated noise were systematically compared for both NH and HI listeners in these situations. At the optimal signal-to-noise ratios (SNRs) (that is to say, the SNRs with the least amount of floor and ceiling effects), the temporal smearing of both the speech and the noise had a similar effect for the HI and NH listeners, so that reverberation was not more detrimental for the HI listeners. There was only a very limited dip listening benefit at this SNR for either group. Some differences across groups appeared at the SNR maximizing dip listening, but they could not be directly related to an effect of reverberation, and were rather due to floor effects or to the reduced ability of the HI listeners to benefit from dip listening, even in the absence of reverberation.
Affiliation(s)
- Raphael Cueille
- Univ. Lyon, ENTPE, Ecole Centrale de Lyon, CNRS, LTDS, UMR5513, Vaulx-en-Velin 69518, France
- CRNL, UMR CNRS 5292, Univ. Lyon 1, 50 av T Garnier, Lyon Cedex 07 69366, France
- Mathieu Lavandier
- Univ. Lyon, ENTPE, Ecole Centrale de Lyon, CNRS, LTDS, UMR5513, Vaulx-en-Velin 69518, France
- Nicolas Grimault
- CRNL, UMR CNRS 5292, Univ. Lyon 1, 50 av T Garnier, Lyon Cedex 07 69366, France
5
Brennan MA, McCreery RW, Massey J. Influence of Audibility and Distortion on Recognition of Reverberant Speech for Children and Adults with Hearing Aid Amplification. J Am Acad Audiol 2022; 33:170-180. PMID: 34695870; PMCID: PMC9112843; DOI: 10.1055/a-1678-3381.
Abstract
BACKGROUND Adults and children with sensorineural hearing loss (SNHL) have trouble understanding speech in rooms with reverberation when using hearing aid amplification. While the use of amplitude compression signal processing in hearing aids may contribute to this difficulty, there is conflicting evidence on the effects of amplitude compression settings on speech recognition. Less clear is the effect of a fast release time for adults and children with SNHL when using compression ratios derived from a prescriptive procedure. PURPOSE The aim of the study is to determine whether release time impacts speech recognition in reverberation for children and adults with SNHL, and whether these effects of release time and reverberation can be predicted using indices of audibility or temporal and spectral distortion. RESEARCH DESIGN This is a quasi-experimental cohort study. Participants used a hearing aid simulator set to the Desired Sensation Level algorithm m[i/o] for three different amplitude compression release times. Reverberation was simulated using three different reverberation times. PARTICIPANTS Participants were 20 children and 16 adults with SNHL. DATA COLLECTION AND ANALYSES Participants were seated in a sound-attenuating booth and nonsense syllable recognition was measured. Predictions of speech recognition were made using indices of audibility, temporal distortion, and spectral distortion, and the effects of release time and reverberation were analyzed using linear mixed models. RESULTS While nonsense syllable recognition decreased in reverberation, release time did not significantly affect it. Participants with lower audibility were more susceptible to the negative effect of reverberation on nonsense syllable recognition. CONCLUSION We have extended previous work on the effects of reverberation on aided speech recognition to children with SNHL. Variations in release time did not impact the understanding of speech. An index of audibility best predicted nonsense syllable recognition in reverberation; clinically, these results suggest that patients with less audibility are more susceptible to the negative effects of reverberation on nonsense syllable recognition.
6
Rosa BC, Souza COE, Paccola ECM, Bucuvic ÉC, Jacob RTDS. Phrases in Noise Test (PINT) Brazil: influence of the inter-stimulus interval on the performance of children with hearing impairment. CoDAS 2021; 33:e20200054. PMID: 34431856; DOI: 10.1590/2317-1782/20202020054. Received 04/17/2020; accepted 11/13/2020.
Abstract
PURPOSE This study aimed to investigate, using the PINT Brasil, the influence of the interstimulus interval on the performance of children with moderate and severe hearing loss fitted with hearing aids. METHODS Ten children with normal hearing (CG) and 20 children with hearing loss (SG) participated in the study. Both groups were assessed using the speech perception test called PINT Brasil in PAUSE and NO PAUSE situations. RESULTS When comparing the PAUSE and NO PAUSE situations, only the SG presented a statistically significant difference, with better performance in the NO PAUSE situation. In this situation, the noise oscillations were smaller, and the noise reduction algorithm, which may cause the loss of message information, was not repeatedly activated. CONCLUSION The interstimulus interval in the PINT Brasil influenced the performance of children with moderate and severe hearing loss fitted with hearing aids. The NO PAUSE situation yielded the better results.
Affiliation(s)
- Bruna Camilo Rosa
- Divisão de Saúde Auditiva, Hospital de Reabilitação de Anomalias Craniofaciais - HRAC, Universidade de São Paulo - USP - Bauru (SP), Brasil
- Camila Oliveira E Souza
- Departamento de Fonoaudiologia, Faculdade de Odontologia de Bauru - FOB, Universidade de São Paulo - USP - Bauru (SP), Brasil
- Elaine Cristina Moreto Paccola
- Divisão de Saúde Auditiva, Hospital de Reabilitação de Anomalias Craniofaciais - HRAC, Universidade de São Paulo - USP - Bauru (SP), Brasil
- Érika Cristina Bucuvic
- Divisão de Saúde Auditiva, Hospital de Reabilitação de Anomalias Craniofaciais - HRAC, Universidade de São Paulo - USP - Bauru (SP), Brasil
- Regina Tangerino de Souza Jacob
- Departamento de Fonoaudiologia, Faculdade de Odontologia de Bauru - FOB, Universidade de São Paulo - USP - Bauru (SP), Brasil
7
Minimal and Mild Hearing Loss in Children: Association with Auditory Perception, Cognition, and Communication Problems. Ear Hear 2021; 41:720-732. PMID: 31633598; DOI: 10.1097/aud.0000000000000802.
Abstract
OBJECTIVES "Minimal" and "mild" hearing loss are the most common but least understood forms of hearing loss in children. Children with better ear hearing level as low as 30 dB HL have a global language impairment and, according to the World Health Organization, a "disabling level of hearing loss." We examined in a population of 6- to 11-year-olds how hearing level ≤ 40.0 dB HL (1 and 4 kHz pure-tone average, PTA, threshold) is related to auditory perception, cognition, and communication. DESIGN School children (n = 1638) were recruited in 4 centers across the United Kingdom. They completed a battery of hearing (audiometry, filter width, temporal envelope, speech-in-noise) and cognitive (IQ, attention, verbal memory, receptive language, reading) tests. Caregivers assessed their children's communication and listening skills. Children included in this study (702 male; 752 female) had 4 reliable tone thresholds (1, 4 kHz each ear), and no caregiver reported medical or intellectual disorder. Normal-hearing children (n = 1124, 77.1%) had all 4 thresholds and PTA < 15 dB HL. Children with ≥ 15 dB HL for at least 1 threshold, and PTA < 20 dB (n = 245, 16.8%) had minimal hearing loss. Children with 20 ≤ PTA < 40 dB HL (n = 88, 6.0%) had mild hearing loss. Interaural asymmetric hearing loss (left PTA - right PTA ≥ 10 dB) was found in 28.9% of those with minimal and 39.8% of those with mild hearing loss. RESULTS Speech perception in noise, indexed by vowel-consonant-vowel pseudoword repetition in speech-modulated noise, was impaired in children with minimal and mild hearing loss, relative to normal-hearing children. Effect size was largest (d = 0.63) in asymmetric mild hearing loss and smallest (d = 0.21) in symmetric minimal hearing loss. Spectral (filter width) and temporal (backward masking) perceptions were impaired in children with both forms of hearing loss, but suprathreshold perception generally related only weakly to PTA.
Speech-in-noise (nonsense syllables) and language (pseudoword repetition) were also impaired in both forms of hearing loss and correlated more strongly with PTA. Children with mild hearing loss were additionally impaired in working memory (digit span) and reading, and generally performed more poorly than those with minimal loss. Asymmetric hearing loss produced as much impairment overall on both auditory and cognitive tasks as symmetric hearing loss. Nonverbal IQ, attention, and caregiver-rated listening and communication were not significantly impaired in children with hearing loss. Modeling suggested that 15 dB HL is objectively an appropriate lower audibility limit for diagnosis of hearing loss. CONCLUSIONS Hearing loss between 15 and 30 dB PTA is, at ~20%, much more prevalent in 6- to 11-year-old children than most current estimates. Key aspects of auditory and cognitive skills are impaired in both symmetric and asymmetric minimal and mild hearing loss. Hearing loss <30 dB HL is most closely related to speech perception in noise, and to cognitive abilities underpinning language and reading. The results suggest wider use of speech-in-noise measures to diagnose and assess management of hearing loss and reduction of the clinical hearing loss threshold for children to 15 dB HL.
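The diagnostic criteria quoted in this abstract are explicit enough to express directly in code. A minimal sketch, assuming (as the abstract implies) that PTA is the mean of the four 1- and 4-kHz thresholds and that asymmetry compares the two per-ear averages; `classify_hearing` is an illustrative name, not from the paper:

```python
def classify_hearing(thresholds_db_hl):
    """Classify hearing from four thresholds, per the study's criteria.

    `thresholds_db_hl` maps ('left'|'right', 1000|4000) -> threshold (dB HL).
    Normal: all four thresholds and PTA < 15 dB HL.
    Minimal: >= 15 dB HL on at least one threshold, PTA < 20 dB HL.
    Mild: 20 <= PTA < 40 dB HL.
    Asymmetric: per-ear PTAs differ by >= 10 dB.
    """
    values = list(thresholds_db_hl.values())
    pta = sum(values) / len(values)
    left = [v for (ear, _), v in thresholds_db_hl.items() if ear == 'left']
    right = [v for (ear, _), v in thresholds_db_hl.items() if ear == 'right']
    asymmetric = abs(sum(left) / len(left) - sum(right) / len(right)) >= 10
    if all(v < 15 for v in values) and pta < 15:
        category = 'normal'
    elif pta < 20:
        category = 'minimal'
    elif pta < 40:
        category = 'mild'
    else:
        category = 'greater than mild'  # beyond the range the study defines
    return category, asymmetric
```

For example, thresholds of 16 and 18 dB HL in each ear give a PTA of 17 dB HL, which the criteria above classify as minimal, symmetric hearing loss.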
8
Speech audiometry in noise: SNR Loss per age-group in normal hearing subjects. Eur Ann Otorhinolaryngol Head Neck Dis 2021; 139:61-64. PMID: 34175252; DOI: 10.1016/j.anorl.2021.05.001.
Abstract
OBJECTIVES The present study aimed to determine normal SNR values per age group for the 50% speech reception threshold in noise (SNR Loss) on the VRB (Vocale Rapide dans le Bruit: rapid speech in noise) test. MATERIAL AND METHODS Two hundred patients underwent pure-tone threshold and VRB speech-in-noise audiometry. Six age groups were distinguished: 20-30, 30-40, 40-50, 50-60, 60-70, and >70 years. All subjects had normal hearing for age according to ISO 7029. SNR Loss was measured according to age group. RESULTS Mean SNR Loss ranged from -0.37 dB in the youngest age group (20-30 years) to +6.84 dB in the oldest (>70 years). Range and interquartile range increased with age: 3.66 and 1.49 dB, respectively, for 20-30 year-olds; 6 and 3.5 dB for >70 year-olds. Linear regression between SNR Loss and age showed a coefficient of determination R² of 0.83. CONCLUSION The present study reports SNR Loss values per age group in normal-hearing subjects (ISO 7029), confirming that SNR Loss increases with age. Scatter also increased with age, suggesting that other age-related factors combine with inner-ear aging to impair hearing in noise.
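The regression reported in this abstract (SNR Loss against age, R² = 0.83) is an ordinary least-squares fit. A self-contained sketch of that computation; the input values in the usage example are purely hypothetical, not the study's data:

```python
def fit_line_r2(ages, snr_losses):
    """Ordinary least-squares line fit of SNR Loss (dB) on age (years),
    returning (slope, intercept, R^2). Illustrative sketch only."""
    n = len(ages)
    mx = sum(ages) / n
    my = sum(snr_losses) / n
    sxx = sum((x - mx) ** 2 for x in ages)
    sxy = sum((x - mx) * (y - my) for x, y in zip(ages, snr_losses))
    slope = sxy / sxx
    intercept = my - slope * mx
    # R^2 = 1 - residual sum of squares / total sum of squares
    ss_res = sum((y - (slope * x + intercept)) ** 2
                 for x, y in zip(ages, snr_losses))
    ss_tot = sum((y - my) ** 2 for y in snr_losses)
    r2 = 1 - ss_res / ss_tot
    return slope, intercept, r2
```

On hypothetical pairs such as `fit_line_r2([20, 30, 40, 50], [0.0, 1.0, 2.0, 3.0])`, the fit returns a slope of 0.1 dB/year and R² = 1.0, since those points lie exactly on a line.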
9
Silva RFD, Advíncula KP, Gonçalves PA, Leite GA, Pereira LD, Griz SMS, Menezes DC. Modulation rate and age effect on intermittent speech recognition. Rev CEFAC 2021. DOI: 10.1590/1982-0216/20212324120.
Abstract
ABSTRACT Purpose: to investigate the auditory recognition of intermittent speech in relation to different modulation rates and ages. Methods: the sample comprised 20 young people, 20 middle-aged adults, and 16 older adults, all with hearing thresholds equal to or lower than 25 dB HL up to 4000 Hz. The participants were submitted to intermittent speech recognition tests presented in three modulation conditions: 4 Hz, 10 Hz, and 64 Hz. The percentages of correct answers were compared between age groups and modulation rates. ANOVA with post hoc tests and a mixed linear regression model were used to investigate the modulation rate effect (p < 0.001). Results: regarding the age effect, the data showed a significant difference between young people and older adults, and between middle-aged and older adults. As for the modulation rate effect, the indexes of correct answers were significantly lower at the slower rate (4 Hz) in the three age groups. Conclusion: an age effect was verified on intermittent speech recognition: older adults have greater difficulty. A modulation rate effect was also noticed in the three age groups: the higher the rate, the better the performance.
10
McCreery RW, Miller MK, Buss E, Leibold LJ. Cognitive and Linguistic Contributions to Masked Speech Recognition in Children. J Speech Lang Hear Res 2020; 63:3525-3538. PMID: 32881629; PMCID: PMC8060059; DOI: 10.1044/2020_jslhr-20-00030. Received 01/24/2020; accepted 06/28/2020.
Abstract
Purpose The goal of this study was to examine the effects of cognitive and linguistic skills on masked speech recognition for children with normal hearing in three different masking conditions: (a) speech-shaped noise (SSN), (b) amplitude-modulated SSN (AMSSN), and (c) two-talker speech (TTS). We hypothesized that children with better working memory and language skills would have better masked speech recognition than peers with poorer skills in these areas. Selective attention was predicted to affect performance in the TTS masker due to increased cognitive demands from informational masking. Method A group of 60 children in two age groups (5- to 6-year-olds and 9- to 10-year-olds) with normal hearing completed sentence recognition in SSN, AMSSN, and TTS masker conditions. Speech recognition thresholds for 50% correct were measured. Children also completed standardized measures of language, memory, and executive function. Results Children's speech recognition was poorer in the TTS relative to the SSN and AMSSN maskers. Older children had lower speech recognition thresholds than younger children for all masker conditions. Greater language abilities were associated with better sentence recognition for the younger children in all masker conditions, but there was no effect of language for older children. Better working memory and selective attention skills were associated with better masked sentence recognition for both age groups, but only in the TTS masker condition. Conclusions The decreasing influence of vocabulary on masked speech recognition for older children supports the idea that this relationship depends on an interaction between the language level of the stimuli and the listener's vocabulary. Increased cognitive demands associated with perceptually isolating the target talker and two competing masker talkers with a TTS masker may result in the recruitment of working memory and selective attention skills, effects that were not observed in SSN or AMSSN maskers. Future research should evaluate these effects across a broader range of stimuli or with children who have hearing loss.
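The 50%-correct speech recognition thresholds described in this abstract are conventionally measured with an adaptive track. The paper's exact tracking rule is not stated in the abstract, so the following is a generic 1-up/1-down staircase sketch, which converges on the 50% point; the start level, step size, trial count, and the scoring convention (averaging the last half of the track) are all assumptions for illustration:

```python
def run_staircase(respond, start_snr_db=4.0, step_db=2.0, n_trials=20):
    """Generic 1-up/1-down adaptive track converging on ~50% correct.

    `respond(snr_db)` returns True when the trial is scored correct.
    The SRT estimate is the mean SNR over the last half of the track
    (one common convention; others average reversal points).
    """
    snr = start_snr_db
    history = []
    for _ in range(n_trials):
        history.append(snr)
        if respond(snr):
            snr -= step_db   # correct: make the next trial harder
        else:
            snr += step_db   # incorrect: make the next trial easier
    tail = history[len(history) // 2:]
    return sum(tail) / len(tail)
```

With a deterministic "listener" that is correct whenever the SNR is at or above 0 dB, the track descends to and then oscillates around that boundary, so the estimate lands just below 0 dB SNR.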
Affiliation(s)
- Ryan W. McCreery
- Audibility, Perception and Cognition Laboratory, Boys Town National Research Hospital, Omaha, NE
- Margaret K. Miller
- Human Auditory Development Laboratory, Boys Town National Research Hospital, Omaha, NE
- Emily Buss
- Department of Otolaryngology/Head and Neck Surgery, University of North Carolina at Chapel Hill
- Lori J. Leibold
- Human Auditory Development Laboratory, Boys Town National Research Hospital, Omaha, NE
11
Simulations with FADE of the effect of impaired hearing on speech recognition performance cast doubt on the role of spectral resolution. Hear Res 2020; 395:107995. DOI: 10.1016/j.heares.2020.107995. Received 01/28/2020; accepted 05/12/2020.
12
Mushtaq F, Wiggins IM, Kitterick PT, Anderson CA, Hartley DEH. The Benefit of Cross-Modal Reorganization on Speech Perception in Pediatric Cochlear Implant Recipients Revealed Using Functional Near-Infrared Spectroscopy. Front Hum Neurosci 2020; 14:308. PMID: 32922273; PMCID: PMC7457128; DOI: 10.3389/fnhum.2020.00308. Received 05/15/2020; accepted 07/13/2020.
Abstract
Cochlear implants (CIs) are the most successful treatment for severe-to-profound deafness in children. However, speech outcomes with a CI often lag behind those of normally-hearing children. Some authors have attributed these deficits to the takeover of the auditory temporal cortex by vision following deafness, which has prompted some clinicians to discourage the rehabilitation of pediatric CI recipients using visual speech. We studied this cross-modal activity in the temporal cortex, along with responses to auditory speech and non-speech stimuli, in experienced CI users and normally-hearing controls of school-age, using functional near-infrared spectroscopy. Strikingly, CI users displayed significantly greater cortical responses to visual speech, compared with controls. Importantly, in the same regions, the processing of auditory speech, compared with non-speech stimuli, did not significantly differ between the groups. This suggests that visual and auditory speech are processed synergistically in the temporal cortex of children with CIs, and they should be encouraged, rather than discouraged, to use visual speech.
Affiliation(s)
- Faizah Mushtaq
- National Institute for Health Research Nottingham Biomedical Research Centre, Nottingham, United Kingdom
- Hearing Sciences, Division of Clinical Neuroscience, School of Medicine, University of Nottingham, Nottingham, United Kingdom
- Ian M. Wiggins
- National Institute for Health Research Nottingham Biomedical Research Centre, Nottingham, United Kingdom
- Hearing Sciences, Division of Clinical Neuroscience, School of Medicine, University of Nottingham, Nottingham, United Kingdom
- Pádraig T. Kitterick
- National Institute for Health Research Nottingham Biomedical Research Centre, Nottingham, United Kingdom
- Hearing Sciences, Division of Clinical Neuroscience, School of Medicine, University of Nottingham, Nottingham, United Kingdom
- Carly A. Anderson
- Hearing Sciences, Division of Clinical Neuroscience, School of Medicine, University of Nottingham, Nottingham, United Kingdom
- Douglas E. H. Hartley
- National Institute for Health Research Nottingham Biomedical Research Centre, Nottingham, United Kingdom
- Hearing Sciences, Division of Clinical Neuroscience, School of Medicine, University of Nottingham, Nottingham, United Kingdom
- Nottingham University Hospitals NHS Trust, Nottingham, United Kingdom
13
Oster MM, Werner LA. Infants' use of isolated and combined temporal cues in speech sound segregation. J Acoust Soc Am 2020; 148:401. PMID: 32752747; PMCID: PMC7386947; DOI: 10.1121/10.0001582. Received 12/18/2019; accepted 06/28/2020.
Abstract
This paper investigates infants' and adults' use of envelope cues and combined onset asynchrony and envelope cues in the segregation of concurrent vowels. Listeners heard superimposed vowel pairs consisting of two different vowels spoken by a male and a female talker and were trained to respond to one specific target vowel, either the male /u:/ or male /i:/. Vowel detection was measured in three conditions. In the baseline condition the two superimposed vowels had similar amplitude envelopes and synchronous onset. In the envelope cue condition, the amplitude envelopes of the two vowels differed. In the combined cue condition, both the onset time and amplitude envelopes of the two vowels differed. Seven-month-old infants' concurrent vowel segregation improved both with envelope and with combined onset asynchrony and envelope cues to the same extent as adults'. A preliminary investigation with 3-month-old infants suggested that neither envelope cues nor combined asynchrony and envelope cues improved their ability to detect the target vowel. Taken together, these results suggest that envelope and combined onset-asynchrony cues are available to infants as they attempt to process competing speech sounds, at least after 7 months of age.
Affiliation(s)
- Monika-Maria Oster
- Listen and Talk, 8610 8th Avenue Northeast, Seattle, Washington 98115, USA
- Lynne A Werner
- Department of Speech and Hearing Sciences, University of Washington, 1417 Northeast 42nd Street, Seattle, Washington 98105, USA
14
Factors Affecting Bimodal Benefit in Pediatric Mandarin-Speaking Chinese Cochlear Implant Users. Ear Hear 2020; 40:1316-1327. PMID: 30882534; DOI: 10.1097/aud.0000000000000712.
Abstract
OBJECTIVES While fundamental frequency (F0) cues are important to both lexical tone perception and multitalker segregation, F0 cues are poorly perceived by cochlear implant (CI) users. Adding low-frequency acoustic hearing via a hearing aid in the contralateral ear may improve CI users' F0 perception. For English-speaking CI users, contralateral acoustic hearing has been shown to improve perception of target speech in noise and in competing talkers. For tonal languages such as Mandarin Chinese, F0 information is lexically meaningful. Given competing F0 information from multiple talkers and lexical tones, contralateral acoustic hearing may be especially beneficial for Mandarin-speaking CI users' perception of competing speech. DESIGN Bimodal benefit (CI+hearing aid - CI-only) was evaluated in 11 pediatric Mandarin-speaking Chinese CI users. In experiment 1, speech recognition thresholds (SRTs) were adaptively measured using a modified coordinated response measure test; subjects were required to correctly identify 2 keywords from among 10 choices in each category. SRTs were measured with CI-only or bimodal listening in the presence of steady state noise (SSN) or competing speech with the same (M+M) or different voice gender (M+F). Unaided thresholds in the non-CI ear and demographic factors were compared with speech performance. In experiment 2, SRTs were adaptively measured in SSN for recognition of 5 keywords, a more difficult listening task than the 2-keyword recognition task in experiment 1. RESULTS In experiment 1, SRTs were significantly lower for SSN than for competing speech in both the CI-only and bimodal listening conditions. There was no significant difference between CI-only and bimodal listening for SSN and M+F (p > 0.05); SRTs were significantly lower for CI-only than for bimodal listening for M+M (p < 0.05), suggesting bimodal interference. Subjects were able to make use of voice gender differences for bimodal listening (p < 0.05) but not for CI-only listening (p > 0.05). Unaided thresholds in the non-CI ear were positively correlated with bimodal SRTs for M+M (p < 0.006) but not for SSN or M+F. No significant correlations were observed between any demographic variables and SRTs (p > 0.05 in all cases). In experiment 2, SRTs were significantly lower with two than with five keywords (p < 0.05). A significant bimodal benefit was observed only for the 5-keyword condition (p < 0.05). CONCLUSIONS With the CI alone, subjects experienced greater interference with competing speech than with SSN and were unable to use voice gender difference to segregate talkers. For the coordinated response measure task, subjects experienced no bimodal benefit and even bimodal interference when competing talkers were the same voice gender. A bimodal benefit in SSN was observed for the five-keyword condition but not for the two-keyword condition, suggesting that bimodal listening may be more beneficial as the difficulty of the listening task increased. The present data suggest that bimodal benefit may depend on the type of masker and/or the difficulty of the listening task.
15
McCreery RW, Walker EA, Spratford M, Lewis D, Brennan M. Auditory, Cognitive, and Linguistic Factors Predict Speech Recognition in Adverse Listening Conditions for Children With Hearing Loss. Front Neurosci 2019; 13:1093. [PMID: 31680828] [PMCID: PMC6803493] [DOI: 10.3389/fnins.2019.01093]
Abstract
Objectives: Children with hearing loss listen and learn in environments with noise and reverberation, but perform more poorly in noise and reverberation than children with normal hearing. Even with amplification, individual differences in speech recognition are observed among children with hearing loss. Few studies have examined the factors that support speech understanding in noise and reverberation for this population. This study applied the theoretical framework of the Ease of Language Understanding (ELU) model to examine the influence of auditory, cognitive, and linguistic factors on speech recognition in noise and reverberation for children with hearing loss. Design: Fifty-six children with hearing loss and 50 age-matched children with normal hearing who were 7–10 years old participated in this study. Aided sentence recognition was measured using an adaptive procedure to determine the signal-to-noise ratio for 50% correct (SNR50) recognition in steady-state speech-shaped noise. SNR50 was also measured with noise plus a simulation of 600 ms reverberation time. Receptive vocabulary, auditory attention, and visuospatial working memory were measured. Aided speech audibility indexed by the Speech Intelligibility Index was measured through the hearing aids of children with hearing loss. Results: Children with hearing loss had poorer aided speech recognition in noise and reverberation than children with typical hearing. Children with higher receptive vocabulary and working memory skills had better speech recognition in noise and noise plus reverberation than peers with poorer skills in these domains. Children with hearing loss with higher aided audibility had better speech recognition in noise and reverberation than peers with poorer audibility. Better audibility was also associated with stronger language skills. Conclusions: Children with hearing loss are at considerable risk for poor speech understanding in noise and in conditions with noise and reverberation. 
Consistent with the predictions of the ELU model, children with stronger vocabulary and working memory abilities performed better than peers with poorer skills in these domains. Better aided speech audibility was associated with better recognition in noise and noise plus reverberation conditions for children with hearing loss. Speech audibility had direct effects on speech recognition in noise and reverberation and cumulative effects on speech recognition in noise through a positive association with language development over time.
Affiliation(s)
- Ryan W McCreery: The Audibility Perception and Cognition Laboratory, Boys Town National Research Hospital, Omaha, NE, United States
- Elizabeth A Walker: Pediatric Audiology Laboratory, Department of Communication Sciences and Disorders, University of Iowa, Iowa City, IA, United States
- Meredith Spratford: The Audibility Perception and Cognition Laboratory, Boys Town National Research Hospital, Omaha, NE, United States
- Dawna Lewis: The Audibility Perception and Cognition Laboratory, Boys Town National Research Hospital, Omaha, NE, United States
- Marc Brennan: Amplification and Perception Laboratory, Department of Special Education and Communication Disorders, University of Nebraska, Lincoln, NE, United States
16
Reducing Simulated Channel Interaction Reveals Differences in Phoneme Identification Between Children and Adults With Normal Hearing. Ear Hear 2019; 40:295-311. [PMID: 29927780] [DOI: 10.1097/aud.0000000000000615]
Abstract
OBJECTIVES Channel interaction, the stimulation of overlapping populations of auditory neurons by distinct cochlear implant (CI) channels, likely limits the speech perception performance of CI users. This study examined the role of vocoder-simulated channel interaction in the ability of children with normal hearing (cNH) and adults with normal hearing (aNH) to recognize spectrally degraded speech. The primary aim was to determine the interaction between number of processing channels and degree of simulated channel interaction on phoneme identification performance as a function of age for cNH and to relate those findings to aNH and to CI users. DESIGN Medial vowel and consonant identification of cNH (age 8-17 years) and young aNH were assessed under six (for children) or nine (for adults) different conditions of spectral degradation. Stimuli were processed using a noise-band vocoder with 8, 12, and 15 channels and synthesis filter slopes of 15 (aNH only), 30, and 60 dB/octave (all NH subjects). Steeper filter slopes (larger numbers) simulated less electrical current spread and, therefore, less channel interaction. Spectrally degraded performance of the NH listeners was also compared with the unprocessed phoneme identification of school-aged children and adults with CIs. RESULTS Spectrally degraded phoneme identification improved as a function of age for cNH. For vowel recognition, cNH exhibited an interaction between the number of processing channels and vocoder filter slope, whereas aNH did not. Specifically, for cNH, increasing the number of processing channels only improved vowel identification in the steepest filter slope condition. Additionally, cNH were more sensitive to changes in filter slope. As the filter slopes increased, cNH continued to receive vowel identification benefit beyond where aNH performance plateaued or reached ceiling. 
For all NH participants, consonant identification improved with increasing filter slopes but was unaffected by the number of processing channels. Although cNH made more phoneme identification errors overall, their phoneme error patterns were similar to aNH. Furthermore, consonant identification of adults with CI was comparable to aNH listening to simulations with shallow filter slopes (15 dB/octave). Vowel identification of earlier-implanted pediatric ears was better than that of later-implanted ears and more comparable to cNH listening in conditions with steep filter slopes (60 dB/octave). CONCLUSIONS Recognition of spectrally degraded phonemes improved when simulated channel interaction was reduced, particularly for children. cNH showed an interaction between number of processing channels and filter slope for vowel identification. The differences observed between cNH and aNH suggest that identification of spectrally degraded phonemes continues to improve through adolescence and that children may benefit from reduced channel interaction beyond where adult performance has plateaued. Comparison to CI users suggests that early implantation may facilitate development of better phoneme discrimination.
17
Speech Recognition Abilities in Normal-Hearing Children 4 to 12 Years of Age in Stationary and Interrupted Noise. Ear Hear 2019; 39:1091-1103. [PMID: 29554035] [PMCID: PMC7664447] [DOI: 10.1097/aud.0000000000000569]
Abstract
Objectives: The main purpose of this study was to examine developmental effects for speech recognition in noise abilities for normal-hearing children in several listening conditions relevant for daily life. Our aim was to study the auditory component in these listening abilities by using a test that was designed to minimize the dependency on nonauditory factors, the digits-in-noise (DIN) test. Secondary aims were to examine the feasibility of the DIN test for children, and to establish age-dependent normative data for diotic and dichotic listening conditions in both stationary and interrupted noise. Design: In experiment 1, a newly designed pediatric DIN (pDIN) test was compared with the standard DIN test. Major differences from the DIN test are that the pDIN test uses 79% correct instead of 50% correct as a target point, single digits (except 0) instead of triplets, and animations in the test procedure. In this experiment, 43 normal-hearing subjects between 4 and 12 years of age and 10 adult subjects participated. The authors measured the monaural speech reception threshold for both the DIN test and the pDIN test using headphones. Experiment 2 used the standard DIN test to measure speech reception thresholds in noise in 112 normal-hearing children between 4 and 12 years of age and 33 adults. The DIN test was applied using headphones in stationary and interrupted noise, and in diotic and dichotic conditions, to also study binaural unmasking and the benefit of listening in the gaps. Results: Most children could reliably complete both the pDIN and DIN tests, and measurement errors for the pDIN test were comparable between children and adults. There was no significant difference between the score for the pDIN test and that of the DIN test. Speech recognition scores increase with age for all conditions tested, and performance is adult-like by 10 to 12 years of age in stationary noise but not interrupted noise. 
The youngest, 4-year-old children have speech reception thresholds 3 to 7 dB less favorable than adults, depending on test conditions. The authors found significant age effects on binaural unmasking and fluctuating masker benefit, even after correction for the lower baseline speech reception threshold of adults in stationary noise. Conclusions: Speech recognition in noise abilities develop well into adolescence, and young children need a more favorable signal-to-noise ratio than adults for all listening conditions. Speech recognition abilities in children in stationary and interrupted noise can accurately and reliably be tested using the DIN test. A pediatric version of the test was shown to be unnecessary. Normative data were established for the DIN test in stationary and fluctuating maskers, and in diotic and dichotic conditions. The DIN test can thus be used to test speech recognition abilities for normal-hearing children from the age of 4 years and older.
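The adaptive speech-reception-threshold procedure used by tests like the DIN can be made concrete with a small sketch. This is a generic one-up/one-down staircase that converges on the ~50%-correct SNR, not the authors' implementation; the function name `din_srt`, the 2-dB step, the trial count, and the warm-up discard are illustrative assumptions.

```python
def din_srt(respond_correct, start_snr=0.0, step_db=2.0, n_trials=24):
    """Generic one-up/one-down adaptive track for a ~50%-correct SRT.

    respond_correct(snr) -> bool runs (or simulates) one trial at that SNR.
    After each correct response the SNR is lowered by step_db (harder);
    after each error it is raised (easier), so the presented SNRs
    oscillate around the 50%-correct point.
    """
    snr = start_snr
    history = []
    for _ in range(n_trials):
        history.append(snr)
        snr += -step_db if respond_correct(snr) else step_db
    # Estimate the SRT as the mean presented SNR after a few warm-up trials.
    tail = history[4:]
    return sum(tail) / len(tail)
```

With a deterministic simulated listener whose true threshold is -8 dB SNR, the track descends to threshold and then oscillates around it, so the averaged tail lands near -9 dB.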
18
Jensen KK, Bernstein JGW. The fluctuating masker benefit for normal-hearing and hearing-impaired listeners with equal audibility at a fixed signal-to-noise ratio. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2019; 145:2113. [PMID: 31046298] [PMCID: PMC6472958] [DOI: 10.1121/1.5096641]
Abstract
Normal-hearing (NH) listeners can extract and integrate speech fragments from momentary dips in the level of a fluctuating masker, yielding a fluctuating-masker benefit (FMB) for speech understanding relative to a stationary-noise masker. Hearing-impaired (HI) listeners generally show less FMB, suggesting a dip-listening deficit attributable to suprathreshold spectral or temporal distortion. However, reduced FMB might instead result from different test signal-to-noise ratios (SNRs), reduced absolute audibility of otherwise unmasked speech segments, or age differences. This study examined the FMB for nine age-matched NH-HI listener pairs, while simultaneously equalizing audibility, SNR, and percentage-correct performance in stationary noise. Nonsense syllables were masked by stationary noise, 4- or 32-Hz sinusoidally amplitude-modulated noise (SAMN), or an opposite-gender interfering talker. Stationary-noise performance was equalized by adjusting the response-set size. Audibility was equalized by removing stimulus components falling below the HI absolute threshold. HI listeners showed a clear 4.5-dB reduction in FMB for 32-Hz SAMN, a similar FMB to NH listeners for 4-Hz SAMN, and a non-significant trend toward a 2-dB reduction in FMB for an interfering talker. These results suggest that HI listeners do not exhibit a general dip-listening deficit for all fluctuating maskers, but rather a specific temporal-resolution deficit affecting performance for high-rate modulated maskers.
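A sinusoidally amplitude-modulated noise (SAMN) masker of the kind used here (4- or 32-Hz modulation) is simple to synthesize. The sketch below is illustrative only, not the study's stimulus-generation code; the function name and parameters are assumptions, and it uses plain Gaussian noise rather than calibrated speech-shaped noise.

```python
import math
import random

def sam_noise(n_samples, rate_hz, mod_hz, mod_depth=1.0, seed=0):
    """Gaussian noise with a sinusoidal amplitude envelope.

    mod_depth=1.0 makes the envelope dip fully to zero, creating the
    momentary masker "dips" that allow glimpsing of speech fragments.
    """
    rng = random.Random(seed)
    samples = []
    for i in range(n_samples):
        t = i / rate_hz
        envelope = 1.0 + mod_depth * math.sin(2.0 * math.pi * mod_hz * t)
        samples.append(envelope * rng.gauss(0.0, 1.0))
    return samples
```

At a 4-Hz rate the dips are long enough to glimpse whole speech segments; at 32 Hz the dips are much shorter, which is the condition where the cited data show a clear hearing-impaired deficit.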
Affiliation(s)
- Kenneth Kragh Jensen: National Military Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, 4954 North Palmer Road, Bethesda, Maryland 20889, USA
- Joshua G W Bernstein: National Military Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, 4954 North Palmer Road, Bethesda, Maryland 20889, USA
19
Sobon KA, Taleb NM, Buss E, Grose JH, Calandruccio L. Psychometric function slope for speech-in-noise and speech-in-speech: Effects of development and aging. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2019; 145:EL284. [PMID: 31046371] [PMCID: PMC6910021] [DOI: 10.1121/1.5097377]
Abstract
Masked sentence recognition was evaluated in normal-hearing children (8.8-10.5 years), young adults (18-28 years), and older adults (60-71 years). Consistent with published data, speech recognition thresholds were poorer for young children and older adults than for young adults, particularly when the masker was composed of speech. Psychometric function slopes were steeper for young children and older adults than for young adults when the masker was two-talker speech, but not when it was speech-shaped noise. Multiple factors are implicated in the age effects observed for speech-in-speech recognition at low signal-to-noise ratios.
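The slope of a psychometric function describes how steeply percent correct rises with SNR around threshold. As a concrete parameterization (an assumption for illustration, not the authors' fitting procedure), a logistic form can be written so that `slope` equals the derivative of proportion correct at the midpoint:

```python
import math

def logistic_pc(snr_db, srt_db, slope):
    """Proportion correct as a logistic function of SNR.

    srt_db: SNR at 50% correct (the speech recognition threshold).
    slope: steepness, in proportion correct per dB, at the midpoint;
    the factor 4 makes the derivative at srt_db equal exactly `slope`.
    """
    return 1.0 / (1.0 + math.exp(-4.0 * slope * (snr_db - srt_db)))
```

A steeper slope means small SNR changes produce large changes in intelligibility, which is one reason speech-in-speech performance at low signal-to-noise ratios can deteriorate precipitously for young children and older adults.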
Affiliation(s)
- Kathryn A Sobon: Department of Otolaryngology/Head and Neck Surgery, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina 27599, USA
- Nardine M Taleb: Department of Psychological Sciences, Case Western Reserve University, Cleveland, Ohio 44106
- Emily Buss: Department of Otolaryngology/Head and Neck Surgery, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina 27599, USA
- John H Grose: Department of Otolaryngology/Head and Neck Surgery, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina 27599, USA
- Lauren Calandruccio: Department of Psychological Sciences, Case Western Reserve University, Cleveland, Ohio 44106
20
Goldsworthy RL, Markle KL. Pediatric Hearing Loss and Speech Recognition in Quiet and in Different Types of Background Noise. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH: JSLHR 2019; 62:758-767. [PMID: 30950727] [PMCID: PMC9907566] [DOI: 10.1044/2018_jslhr-h-17-0389]
Abstract
Purpose Speech recognition deteriorates with hearing loss, particularly in fluctuating background noise. This study examined how hearing loss affects speech recognition in different types of noise to clarify how characteristics of the noise interact with the benefits listeners receive when listening in fluctuating compared to steady-state noise. Method Speech reception thresholds were measured for a closed set of spondee words in children (ages 5-17 years) in quiet, speech-spectrum noise, 2-talker babble, and instrumental music. Twenty children with normal hearing and 43 children with hearing loss participated; children with hearing loss were subdivided into cochlear implant (18 children) and hearing aid (25 children) groups. A cohort of adults with normal hearing was included for comparison. Results Hearing loss had a large effect on speech recognition for each condition, but the effect of hearing loss was largest in 2-talker babble and smallest in speech-spectrum noise. Children with normal hearing had better speech recognition in 2-talker babble than in speech-spectrum noise, whereas children with hearing loss had worse recognition in 2-talker babble than in speech-spectrum noise. Almost all subjects had better speech recognition in instrumental music compared to speech-spectrum noise, but with less of a difference observed for children with hearing loss. Conclusions Speech recognition is more sensitive to the effects of hearing loss when measured in fluctuating compared to steady-state noise. Speech recognition measured in fluctuating noise depends on an interaction of hearing loss with characteristics of the background noise; specifically, children with hearing loss were able to derive a substantial benefit for listening in fluctuating noise when measured in instrumental music compared to 2-talker babble.
21
Flanagan S, Zorilă TC, Stylianou Y, Moore BCJ. Speech Processing to Improve the Perception of Speech in Background Noise for Children With Auditory Processing Disorder and Typically Developing Peers. Trends Hear 2018; 22:2331216518756533. [PMID: 29441834] [PMCID: PMC5815419] [DOI: 10.1177/2331216518756533]
Abstract
Auditory processing disorder (APD) may be diagnosed when a child has listening difficulties but has normal audiometric thresholds. For adults with normal hearing and with mild-to-moderate hearing impairment, an algorithm called spectral shaping with dynamic range compression (SSDRC) has been shown to increase the intelligibility of speech when background noise is added after the processing. Here, we assessed the effect of such processing using 8 children with APD and 10 age-matched control children. The loudness of the processed and unprocessed sentences was matched using a loudness model. The task was to repeat back sentences produced by a female speaker when presented with either speech-shaped noise (SSN) or a male competing speaker (CS) at two signal-to-background ratios (SBRs). Speech identification was significantly better with SSDRC processing than without, for both groups. The benefit of SSDRC processing was greater for the SSN than for the CS background. For the SSN, scores were similar for the two groups at both SBRs. For the CS, the APD group performed significantly more poorly than the control group. The overall improvement produced by SSDRC processing could be useful for enhancing communication in a classroom where the teacher's voice is broadcast using a wireless system.
Affiliation(s)
- Sheila Flanagan: Department of Experimental Psychology, University of Cambridge, UK
- Yannis Stylianou: Toshiba Research Europe Ltd., Cambridge Research Laboratory, UK; Department of Computer Science, University of Crete, Heraklion, Greece
- Brian C J Moore: Department of Experimental Psychology, University of Cambridge, UK
22
Rasetshwane DM, Raybine DA, Kopun JG, Gorga MP, Neely ST. Influence of Instantaneous Compression on Recognition of Speech in Noise with Temporal Dips. J Am Acad Audiol 2018; 30:16-30. [PMID: 30461387] [DOI: 10.3766/jaaa.16165]
Abstract
BACKGROUND In listening environments with background noise that fluctuates in level, listeners with normal hearing can "glimpse" speech during dips in the noise, resulting in better speech recognition in fluctuating noise than in steady noise at the same overall level (referred to as masking release). Listeners with sensorineural hearing loss show less masking release. Amplification can improve masking release but not to the same extent that it does for listeners with normal hearing. PURPOSE The purpose of this study was to compare masking release for listeners with sensorineural hearing loss obtained with an experimental hearing-aid signal-processing algorithm with instantaneous compression (referred to as a suppression hearing aid, SHA) to masking release obtained with fast compression. The suppression hearing aid mimics effects of normal cochlear suppression, i.e., the reduction in the response to one sound by the simultaneous presentation of another sound. RESEARCH DESIGN A within-participant design with repeated measures across test conditions was used. STUDY SAMPLE Participants included 29 adults with mild-to-moderate sensorineural hearing loss and 21 adults with normal hearing. INTERVENTION Participants with sensorineural hearing loss were fitted with simulators for SHA and a generic hearing aid (GHA) with fast (but not instantaneous) compression (5 ms attack and 50 ms release times) and no suppression. Gain was prescribed using either an experimental method based on categorical loudness scaling (CLS) or the Desired Sensation Level (DSL) algorithm version 5a, resulting in a total of four processing conditions: CLS-GHA, CLS-SHA, DSL-GHA, and DSL-SHA. DATA COLLECTION All participants listened to consonant-vowel-consonant nonwords in the presence of temporally-modulated and steady noise. An adaptive-tracking procedure was used to determine the signal-to-noise ratio required to obtain 29% and 71% correct. 
Measurements were made with amplification for participants with sensorineural hearing loss and without amplification for participants with normal hearing. ANALYSIS Repeated-measures analysis of variance was used to determine the influence of within-participant factors of noise type and, for participants with sensorineural hearing loss, processing condition on masking release. Pearson correlational analysis was used to assess the effect of age on masking release for participants with sensorineural hearing loss. RESULTS Statistically significant masking release was observed for listeners with sensorineural hearing loss for 29% correct, but not for 71% correct. However, the amount of masking release was less than masking release for participants with normal hearing. There were no significant differences among the amplification conditions for participants with sensorineural hearing loss. CONCLUSIONS The results suggest that amplification with either instantaneous or fast compression resulted in similar masking release for listeners with sensorineural hearing loss. However, the masking release was less for participants with hearing loss than it was for those with normal hearing.
Affiliation(s)
- David A Raybine: Center for Hearing Research, Boys Town National Research Hospital, Omaha, NE
- Judy G Kopun: Center for Hearing Research, Boys Town National Research Hospital, Omaha, NE
- Michael P Gorga: Center for Hearing Research, Boys Town National Research Hospital, Omaha, NE
- Stephen T Neely: Center for Hearing Research, Boys Town National Research Hospital, Omaha, NE
23
Porter HL, Spitzer ER, Buss E, Leibold LJ, Grose JH. Forward and Backward Masking of Consonants in School-Age Children and Adults. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH: JSLHR 2018; 61:1807-1814. [PMID: 29971342] [PMCID: PMC6195056] [DOI: 10.1044/2018_jslhr-h-17-0403]
Abstract
PURPOSE This experiment sought to determine whether children's increased susceptibility to nonsimultaneous masking, particularly backward masking, is evident for speech stimuli. METHOD Five- to 9-year-olds and adults with normal hearing heard nonsense consonant-vowel-consonant targets. In Experiments 1 and 2, those targets were presented between two 250-ms segments of 70-dB-SPL speech-shaped noise, at either -30 dB signal-to-noise ratio (Experiment 1) or at the listener's word recognition threshold (Experiment 2). In Experiment 3, the target was presented in steady speech-shaped noise at listener threshold. For all experiments, percent correct was estimated for initial and final consonants. RESULTS In the nonsimultaneous noise conditions, child-adult differences were larger for the final consonant than the initial consonant whether listeners were tested at -30 dB signal-to-noise ratio (Experiment 1) or at their individual word recognition threshold (Experiment 2). Children were not particularly susceptible to backward masking relative to adults when tested in a steady masker (Experiment 3). CONCLUSIONS Child-adult differences were greater for backward than forward masking for speech in a nonsimultaneous noise masker, as observed in previous psychophysical studies using tonal stimuli. Children's greater susceptibility to nonsimultaneous masking, and backward masking in particular, could play a role in their limited ability to benefit from masker envelope modulation when recognizing masked speech.
Affiliation(s)
- Heather L. Porter: Center for Hearing Research, Boys Town National Research Hospital, Omaha, NE
- Emily R. Spitzer: Department of Allied Health Sciences, University of North Carolina at Chapel Hill
- Emily Buss: Otolaryngology/Head and Neck Surgery, University of North Carolina at Chapel Hill
- Lori J. Leibold: Center for Hearing Research, Boys Town National Research Hospital, Omaha, NE
- John H. Grose: Otolaryngology/Head and Neck Surgery, University of North Carolina at Chapel Hill
24
Miller G, Lewis B, Benchek P, Buss E, Calandruccio L. Masked Speech Recognition and Reading Ability in School-Age Children: Is There a Relationship? JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH: JSLHR 2018; 61:776-788. [PMID: 29507949] [DOI: 10.1044/2017_jslhr-h-17-0279]
Abstract
PURPOSE The relationship between reading (decoding) skills, phonological processing abilities, and masked speech recognition in typically developing children was explored. This experiment was designed to evaluate the relationship between phonological processing and decoding abilities and 2 aspects of masked speech recognition in typically developing children: (a) the ability to benefit from temporal and spectral modulations within a noise masker and (b) the masking exerted by a speech masker. METHOD Forty-two typically developing 3rd- and 4th-grade children with normal hearing, ranging in age from 8;10 to 10;6 years (mean age = 9;2 years, SD = 0.5 months), completed sentence recognition testing in 4 different maskers: steady-state noise, temporally modulated noise, spectrally modulated noise, and two-talker speech. Children also underwent assessment of phonological processing abilities and assessments of single-word decoding. As a comparison group, 15 adults with normal hearing also completed speech-in-noise testing. RESULTS Speech recognition thresholds varied between approximately 3 and 7 dB across children, depending on the masker condition. Compared to adults, performance in the 2-talker masker was relatively consistent across children. Furthermore, decreasing the signal-to-noise ratio had a more precipitously deleterious effect on children's speech recognition in the 2-talker masker than was observed for adults. For children, individual differences in speech recognition threshold were not predicted by phonological awareness or decoding ability in any masker condition. CONCLUSIONS No relationship was found between phonological awareness and/or decoding ability and a child's ability to benefit from spectral or temporal modulations. In addition, phonological awareness and/or decoding ability was not related to speech recognition in a 2-talker masker. 
Last, these data suggest that the between-listeners variability often observed in 2-talker maskers for adults may be smaller for children. The reasons for this child-adult difference need to be further explored. SUPPLEMENTAL MATERIAL https://doi.org/10.23641/asha.5913547.
Affiliation(s)
- Gabrielle Miller: Department of Psychological Sciences, Case Western Reserve University, Cleveland, OH
- Barbara Lewis: Department of Psychological Sciences, Case Western Reserve University, Cleveland, OH
- Penelope Benchek: Department of Epidemiology and Biostatistics, Case Western Reserve University, Cleveland, OH
- Emily Buss: Department of Otolaryngology/Head and Neck Surgery, University of North Carolina School of Medicine, Chapel Hill
- Lauren Calandruccio: Department of Psychological Sciences, Case Western Reserve University, Cleveland, OH
25
Buss E, Leibold LJ, Lorenzi C. Speech recognition for school-age children and adults tested in multi-tone vs multi-noise-band maskers. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2018; 143:1458. [PMID: 29604693] [PMCID: PMC5854493] [DOI: 10.1121/1.5026795]
Abstract
The present study set out to test whether greater susceptibility to modulation masking could be responsible for immature recognition of speech in noise for school-age children. Listeners were normal-hearing four- to ten-year-olds and adults. Target sentences were filtered into 28 adjacent narrow bands (100-7800 Hz), and the masker was either spectrally matched noise bands or tones centered on each of the speech bands. In experiment 1, odd- and even-numbered bands of target-plus-masker were presented to opposite ears. Performance improved with child age in all conditions, but this improvement was larger for the multi-tone than the multi-noise-band masker. This outcome is contrary to the expectation that children are more susceptible than adults to masking produced by inherent modulation of the noise masker. In experiment 2, odd-numbered bands were presented to both ears, with the masker diotic and the target either diotic or binaurally out of phase. The binaural difference cue was particularly beneficial for young children tested in the multi-tone masker, suggesting that development of auditory stream segregation may play a role in the child-adult difference for this condition. Overall, results provide no evidence of greater susceptibility to modulation masking in children than adults.
Affiliation(s)
- Emily Buss
- Department of Otolaryngology/Head and Neck Surgery, 170 Manning Drive, University of North Carolina, Chapel Hill, North Carolina 27599, USA
- Lori J Leibold
- Center for Hearing Research, Boys Town National Research Hospital, 555 North 30th Street, Omaha, Nebraska 68131, USA
- Christian Lorenzi
- Département d'études cognitives, Ecole normale supérieure, Paris Sciences et Lettres Research University, Centre National de la Recherche Scientifique, 29 rue d'Ulm, Paris, 75005, France
26
Corbin NE, Buss E, Leibold LJ. Spatial Release From Masking in Children: Effects of Simulated Unilateral Hearing Loss. Ear Hear 2018; 38:223-235. [PMID: 27787392] [PMCID: PMC5321780] [DOI: 10.1097/aud.0000000000000376]
Abstract
OBJECTIVES The purpose of this study was twofold: (1) to determine the effect of an acute simulated unilateral hearing loss on children's spatial release from masking in two-talker speech and speech-shaped noise, and (2) to develop a procedure to be used in future studies that will assess spatial release from masking in children who have permanent unilateral hearing loss. There were three main predictions. First, spatial release from masking was expected to be larger in two-talker speech than in speech-shaped noise. Second, simulated unilateral hearing loss was expected to worsen performance in all listening conditions, but particularly in the spatially separated two-talker speech masker. Third, spatial release from masking was expected to be smaller for children than for adults in the two-talker masker. DESIGN Participants were 12 children (8.7 to 10.9 years) and 11 adults (18.5 to 30.4 years) with normal bilateral hearing. Thresholds for 50%-correct recognition of Bamford-Kowal-Bench sentences were measured adaptively in continuous two-talker speech or speech-shaped noise. Target sentences were always presented from a loudspeaker at 0° azimuth. The masker stimulus was either co-located with the target or spatially separated to +90° or -90° azimuth. Spatial release from masking was quantified as the difference between thresholds obtained when the target and masker were co-located and thresholds obtained when the masker was presented from +90° or -90° azimuth. Testing was completed both with and without a moderate simulated unilateral hearing loss, created with a foam earplug and supra-aural earmuff. A repeated-measures design was used to compare performance between children and adults, and performance in the no-plug and simulated-unilateral-hearing-loss conditions. 
RESULTS All listeners benefited from spatial separation of target and masker stimuli on the azimuth plane in the no-plug listening conditions; this benefit was larger in two-talker speech than in speech-shaped noise. In the simulated-unilateral-hearing-loss conditions, a positive spatial release from masking was observed only when the masker was presented ipsilateral to the simulated unilateral hearing loss. In the speech-shaped noise masker, spatial release from masking in the no-plug condition was similar to that obtained when the masker was presented ipsilateral to the simulated unilateral hearing loss. In contrast, in the two-talker speech masker, spatial release from masking in the no-plug condition was much larger than that obtained when the masker was presented ipsilateral to the simulated unilateral hearing loss. When either masker was presented contralateral to the simulated unilateral hearing loss, spatial release from masking was negative. This pattern of results was observed for both children and adults, although children performed more poorly overall. CONCLUSIONS Children and adults with normal bilateral hearing experience greater spatial release from masking for a two-talker speech than a speech-shaped noise masker. Testing in a two-talker speech masker revealed listening difficulties in the presence of disrupted binaural input that were not observed in a speech-shaped noise masker. This procedure offers promise for the assessment of spatial release from masking in children with permanent unilateral hearing loss.
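The quantification used in this study reduces to a simple difference of thresholds. A minimal sketch of that computation (the threshold values below are hypothetical, for illustration only, not data from the study):

```python
def spatial_release_from_masking(srt_colocated_db, srt_separated_db):
    """Spatial release from masking (SRM), in dB: the improvement in the
    speech reception threshold when target and masker are spatially
    separated. Positive SRM means separation helped; negative means it hurt."""
    return srt_colocated_db - srt_separated_db

# Hypothetical thresholds (dB SNR), for illustration only:
srm_two_talker = spatial_release_from_masking(-2.0, -8.0)  # 6.0 dB release
srm_noise = spatial_release_from_masking(-4.0, -6.0)       # 2.0 dB release
```

The sign convention makes the "negative spatial release" reported for the contralateral-masker conditions fall out naturally: a separated threshold worse (higher) than the co-located one yields a negative value.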
Affiliation(s)
- Nicole E. Corbin
- Department of Allied Health Sciences, Division of Speech and Hearing Sciences, University of North Carolina at Chapel Hill, School of Medicine, Chapel Hill, NC, USA
- Emily Buss
- Department of Otolaryngology/Head and Neck Surgery, University of North Carolina at Chapel Hill, School of Medicine, Chapel Hill, NC, USA
27
Frisina RD, Ding B, Zhu X, Walton JP. Age-related hearing loss: prevention of threshold declines, cell loss and apoptosis in spiral ganglion neurons. Aging (Albany NY) 2017; 8:2081-2099. [PMID: 27667674] [PMCID: PMC5076453] [DOI: 10.18632/aging.101045]
Abstract
Age-related hearing loss (ARHL), or presbycusis, is the most prevalent neurodegenerative disease and the number one communication disorder of the aged population, affecting hundreds of millions of people worldwide. Its prevalence is close to that of cardiovascular disease and arthritis, and it can be a precursor to dementia. The auditory perceptual dysfunction is well understood, but knowledge of the biological bases of ARHL is still somewhat lacking. Surprisingly, there are no FDA-approved drugs for its treatment. Based on our previous studies of human subjects, in which we discovered relationships between serum aldosterone levels and the severity of ARHL, we treated middle-aged mice with aldosterone, a hormone that normally declines with age in all mammals. We found that hearing thresholds and suprathreshold responses significantly improved in the aldosterone-treated mice compared to the non-treatment group. In terms of the cellular and molecular mechanisms underlying this therapeutic effect, additional experiments revealed that spiral ganglion cell survival was significantly improved, mineralocorticoid receptors were upregulated via post-translational protein modifications, and age-related intrinsic and extrinsic apoptotic pathways were blocked by the aldosterone therapy. Taken together, these novel findings pave the way for translational drug development toward the first medication to prevent the progression of ARHL.
Affiliation(s)
- Robert D Frisina
- Department Communication Sciences and Disorders, Global Center for Hearing and Speech Research, University of South Florida, Tampa FL, 33612, USA.,Department Chemical and Biomedical Engineering, Global Center for Hearing and Speech Research, University of South Florida, Tampa FL, 33612, USA
- Bo Ding
- Department Communication Sciences and Disorders, Global Center for Hearing and Speech Research, University of South Florida, Tampa FL, 33612, USA
- Xiaoxia Zhu
- Department Chemical and Biomedical Engineering, Global Center for Hearing and Speech Research, University of South Florida, Tampa FL, 33612, USA
- Joseph P Walton
- Department Communication Sciences and Disorders, Global Center for Hearing and Speech Research, University of South Florida, Tampa FL, 33612, USA.,Department Chemical and Biomedical Engineering, Global Center for Hearing and Speech Research, University of South Florida, Tampa FL, 33612, USA
28
Shinn-Cunningham B. Cortical and Sensory Causes of Individual Differences in Selective Attention Ability Among Listeners With Normal Hearing Thresholds. J Speech Lang Hear Res 2017; 60:2976-2988. [PMID: 29049598] [PMCID: PMC5945067] [DOI: 10.1044/2017_jslhr-h-17-0080]
Abstract
PURPOSE This review provides clinicians with an overview of recent findings relevant to understanding why listeners with normal hearing thresholds (NHTs) sometimes suffer from communication difficulties in noisy settings. METHOD The results from neuroscience and psychoacoustics are reviewed. RESULTS In noisy settings, listeners focus their attention by engaging cortical brain networks to suppress unimportant sounds; they then can analyze and understand an important sound, such as speech, amidst competing sounds. Differences in the efficacy of top-down control of attention can affect communication abilities. In addition, subclinical deficits in sensory fidelity can disrupt the ability to perceptually segregate sound sources, interfering with selective attention, even in listeners with NHTs. Studies of variability in control of attention and in sensory coding fidelity may help to isolate and identify some of the causes of communication disorders in individuals presenting at the clinic with "normal hearing." CONCLUSIONS How well an individual with NHTs can understand speech amidst competing sounds depends not only on the sound being audible but also on the integrity of cortical control networks and the fidelity of the representation of suprathreshold sound. Understanding the root cause of difficulties experienced by listeners with NHTs ultimately can lead to new, targeted interventions that address specific deficits affecting communication in noise. PRESENTATION VIDEO http://cred.pubs.asha.org/article.aspx?articleid=2601617.
Affiliation(s)
- Barbara Shinn-Cunningham
- Center for Research in Sensory Communication and Emerging Neural Technology, Boston University, MA
29
Leibold LJ. Speech Perception in Complex Acoustic Environments: Developmental Effects. J Speech Lang Hear Res 2017; 60:3001-3008. [PMID: 29049600] [PMCID: PMC5945069] [DOI: 10.1044/2017_jslhr-h-17-0070]
Abstract
PURPOSE The ability to hear and understand speech in complex acoustic environments follows a prolonged time course of development. The purpose of this article is to provide a general overview of the literature describing age effects in susceptibility to auditory masking in the context of speech recognition, including a summary of findings related to the maturation of processes thought to facilitate segregation of target from competing speech. METHOD Data from published and ongoing studies are discussed, with a focus on synthesizing results from studies that address age-related changes in the ability to perceive speech in the presence of a small number of competing talkers. CONCLUSIONS This review provides a summary of the current state of knowledge that is valuable for researchers and clinicians. It highlights the importance of considering listener factors, such as age and hearing status, as well as stimulus factors, such as masker type, when interpreting masked speech recognition data. PRESENTATION VIDEO http://cred.pubs.asha.org/article.aspx?articleid=2601620.
Affiliation(s)
- Lori J. Leibold
- Center for Hearing Research, Boys Town National Research Hospital, Omaha, NE
30
Buss E, Leibold LJ, Porter HL, Grose JH. Speech recognition in one- and two-talker maskers in school-age children and adults: Development of perceptual masking and glimpsing. J Acoust Soc Am 2017; 141:2650. [PMID: 28464682] [PMCID: PMC5391283] [DOI: 10.1121/1.4979936]
Abstract
Children perform more poorly than adults on a wide range of masked speech perception paradigms, but this effect is particularly pronounced when the masker itself is also composed of speech. The present study evaluated two factors that might contribute to this effect: the ability to perceptually isolate the target from masker speech, and the ability to recognize target speech based on sparse cues (glimpsing). Speech reception thresholds (SRTs) were estimated for closed-set, disyllabic word recognition in children (5-16 years) and adults in a one- or two-talker masker. Speech maskers were 60 dB sound pressure level (SPL), and they were either presented alone or in combination with a 50-dB-SPL speech-shaped noise masker. There was an age effect overall, but performance was adult-like at a younger age for the one-talker than the two-talker masker. Noise tended to elevate SRTs, particularly for older children and adults, and when summed with the one-talker masker. Removing time-frequency epochs associated with a poor target-to-masker ratio markedly improved SRTs, with larger effects for younger listeners; the age effect was not eliminated, however. Results were interpreted as indicating that development of speech-in-speech recognition is likely impacted by development of both perceptual masking and the ability to recognize speech based on sparse cues.
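The "glimpsing" manipulation described in this abstract, removing time-frequency epochs with a poor target-to-masker ratio, is commonly implemented as an ideal binary mask. A minimal numpy sketch; the toy spectrogram values are purely illustrative, not stimuli from the study:

```python
import numpy as np

def glimpse_mask(target_tf, masker_tf, criterion_db=0.0):
    """Return a boolean mask over time-frequency cells, True where the
    target-to-masker ratio (TMR) meets the criterion. Applying the mask
    keeps only the 'glimpses' in which the target dominates."""
    eps = np.finfo(float).eps  # avoid division by zero / log(0)
    tmr_db = 20.0 * np.log10((target_tf + eps) / (masker_tf + eps))
    return tmr_db >= criterion_db

# Toy magnitude spectrograms (rows = frequency bands, columns = time frames):
target = np.array([[1.0, 0.1, 0.5],
                   [0.2, 1.0, 0.1]])
masker = np.array([[0.5, 1.0, 0.5],
                   [1.0, 0.1, 1.0]])
mask = glimpse_mask(target, masker)     # True where TMR >= 0 dB
glimpsed = np.where(mask, target, 0.0)  # epochs with poor TMR removed
```

Lowering `criterion_db` retains more of the mixture; raising it leaves only the sparsest, most favorable glimpses, which is the regime probed here.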
Affiliation(s)
- Emily Buss
- Department of Otolaryngology/Head and Neck Surgery, University of North Carolina, Chapel Hill, North Carolina 27599, USA
- Lori J Leibold
- Center for Hearing Research, Boys Town National Research Hospital, Omaha, Nebraska 68131, USA
- Heather L Porter
- Hearing and Speech Department, Children's Hospital Los Angeles, Los Angeles, California 90027, USA
- John H Grose
- Department of Otolaryngology/Head and Neck Surgery, University of North Carolina, Chapel Hill, North Carolina 27599, USA
31
Dai L, Shinn-Cunningham BG. Contributions of Sensory Coding and Attentional Control to Individual Differences in Performance in Spatial Auditory Selective Attention Tasks. Front Hum Neurosci 2016; 10:530. [PMID: 27812330] [PMCID: PMC5071360] [DOI: 10.3389/fnhum.2016.00530]
Abstract
Listeners with normal hearing thresholds (NHTs) differ in their ability to steer attention to whatever sound source is important. This ability depends on top-down executive control, which modulates the sensory representation of sound in the cortex. Yet, this sensory representation also depends on the coding fidelity of the peripheral auditory system. Both of these factors may thus contribute to the individual differences in performance. We designed a selective auditory attention paradigm in which we could simultaneously measure envelope following responses (EFRs, reflecting peripheral coding), onset event-related potentials (ERPs) from the scalp (reflecting cortical responses to sound) and behavioral scores. We performed two experiments that varied stimulus conditions to alter the degree to which performance might be limited due to fine stimulus details vs. due to control of attentional focus. Consistent with past work, in both experiments we find that attention strongly modulates cortical ERPs. Importantly, in Experiment I, where coding fidelity limits the task, individual behavioral performance correlates with subcortical coding strength (derived by computing how the EFR is degraded for fully masked tones compared to partially masked tones); however, in this experiment, the effects of attention on cortical ERPs were unrelated to individual subject performance. In contrast, in Experiment II, where sensory cues for segregation are robust (and thus less of a limiting factor on task performance), inter-subject behavioral differences correlate with subcortical coding strength. In addition, after factoring out the influence of subcortical coding strength, behavioral differences are also correlated with the strength of attentional modulation of ERPs. 
These results support the hypothesis that behavioral abilities amongst listeners with NHTs can arise due to both subcortical coding differences and differences in attentional control, depending on stimulus characteristics and task demands.
Affiliation(s)
- Lengshi Dai
- Department of Biomedical Engineering, Boston University, Boston, MA, USA
33
Buss E, Leibold LJ, Hall JW. Effect of response context and masker type on word recognition in school-age children and adults. J Acoust Soc Am 2016; 140:968. [PMID: 27586729] [PMCID: PMC5392093] [DOI: 10.1121/1.4960587]
Abstract
In adults, masked speech recognition improves with the provision of a closed set of response alternatives. The present study evaluated whether school-age children (5-13 years) benefit to the same extent as adults from a forced-choice context, and whether this effect depends on masker type. Experiment 1 compared masked speech reception thresholds for disyllabic words in either an open-set or a four-alternative forced-choice (4AFC) task. Maskers were speech-shaped noise or two-talker speech. Experiment 2 compared masked speech reception thresholds for monosyllabic words in two 4AFC tasks, one in which the target and foils were phonetically similar and one in which they were dissimilar. Maskers were speech-shaped noise, amplitude-modulated noise, or two-talker speech. For both experiments, it was predicted that children would not benefit from the information provided by the 4AFC context to the same degree as adults, particularly when the masker was complex (two-talker) or when audible speech cues were temporally sparse (modulated-noise). Results indicate that young children do benefit from a 4AFC context to the same extent as adults in speech-shaped noise and amplitude-modulated noise, but the benefit of context increases with listener age for the two-talker speech masker.
Affiliation(s)
- Emily Buss
- Department of Otolaryngology/Head and Neck Surgery, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina 27599, USA
- Lori J Leibold
- Boys Town National Research Hospital, Omaha, Nebraska 68131, USA
- Joseph W Hall
- Department of Otolaryngology/Head and Neck Surgery, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina 27599, USA
34
Hall JW, Buss E, Grose JH. Factors affecting the development of speech recognition in steady and modulated noise. J Acoust Soc Am 2016; 139:2964. [PMID: 27250187] [PMCID: PMC5392062] [DOI: 10.1121/1.4950810]
Abstract
This study used a checkerboard-masking paradigm to investigate the development of the speech reception threshold (SRT) for monosyllabic words in synchronously and asynchronously modulated noise. In asynchronous modulation, masker frequencies below 1300 Hz were gated off when frequencies above 1300 Hz were gated on, and vice versa. The goals of the study were to examine development of the ability to use asynchronous spectro-temporal cues for speech recognition and to assess factors related to speech frequency region and audible speech bandwidth. A speech-shaped noise masker was steady or was modulated synchronously or asynchronously across frequency. Target words were presented to 5- to 7-year-old children or to adults. Overall, children showed higher SRTs and smaller masking release than adults. Consideration of the present results along with previous findings supports the idea that children can have particularly poor masked SRTs when the speech and masker spectra differ substantially, and that this may arise because children require a wider speech bandwidth than adults for speech recognition. The results were also consistent with the idea that children are relatively poor at integrating speech cues when the frequency regions with the best signal-to-noise ratios change over time.
Affiliation(s)
- Joseph W Hall
- Department of Otolaryngology-Head & Neck Surgery, University of North Carolina at Chapel Hill, 170 Manning Drive, Chapel Hill, North Carolina 27599-7070, USA
- Emily Buss
- Department of Otolaryngology-Head & Neck Surgery, University of North Carolina at Chapel Hill, 170 Manning Drive, Chapel Hill, North Carolina 27599-7070, USA
- John H Grose
- Department of Otolaryngology-Head & Neck Surgery, University of North Carolina at Chapel Hill, 170 Manning Drive, Chapel Hill, North Carolina 27599-7070, USA
35
Calandruccio L, Leibold LJ, Buss E. Linguistic Masking Release in School-Age Children and Adults. Am J Audiol 2016; 25:34-40. [PMID: 26974870] [DOI: 10.1044/2015_aja-15-0053]
Abstract
PURPOSE This study assessed whether 6- to 8-year-old children benefit from a language mismatch between target and masker speech for sentence recognition in a 2-talker masker. METHOD English sentence recognition was evaluated for monolingual English-speaking children (ages 6-8 years, n = 15) and adults (n = 15) in an English 2-talker and a Spanish 2-talker masker. A regression analysis with subject as a random variable was used to test the fixed effects of listener group and masker language and their interaction. RESULTS Thresholds were approximately 5 dB higher for children than for adults in both maskers. However, children and adults benefited to the same degree from a mismatch between the target and masker language, with approximately 3 dB lower thresholds in the Spanish than in the English masker. CONCLUSIONS Results suggest that children are able to take advantage of linguistic differences between English and Spanish speech maskers to the same degree as adults. Yet overall poorer performance for children may indicate general cognitive immaturity compared with adults, perhaps causing children to be less efficient when combining glimpses of degraded speech information into a meaningful sentence.
Affiliation(s)
- Lori J. Leibold
- Center for Hearing Research, Boys Town National Research Hospital, Omaha, NE
- Emily Buss
- The University of North Carolina at Chapel Hill
36
Brennan M, McCreery R, Kopun J, Lewis D, Alexander J, Stelmachowicz P. Masking Release in Children and Adults With Hearing Loss When Using Amplification. J Speech Lang Hear Res 2016; 59:110-121. [PMID: 26540194] [PMCID: PMC4867924] [DOI: 10.1044/2015_jslhr-h-14-0105]
Abstract
PURPOSE This study compared masking release for adults and children with normal hearing and hearing loss. For the participants with hearing loss, masking release using simulated hearing aid amplification with 2 different compression speeds (slow, fast) was compared. METHOD Sentence recognition in unmodulated noise was compared with recognition in modulated noise (masking release). Recognition was measured for participants with hearing loss using individualized amplification via the hearing-aid simulator. RESULTS Adults with hearing loss showed greater masking release than the children with hearing loss. Average masking release was small (1 dB) and did not depend on hearing status. Masking release was comparable for slow and fast compression. CONCLUSIONS The use of amplification in this study contrasts with previous studies that did not use amplification. The results suggest that when differences in audibility are reduced, participants with hearing loss may be able to take advantage of dips in the noise levels, similar to participants with normal hearing. Although children required a more favorable signal-to-noise ratio than adults for both unmodulated and modulated noise, masking release was not statistically different. However, the ability to detect a difference may have been limited by the small amount of masking release observed.
37
Newman RS, Morini G, Ahsan F, Kidd G. Linguistically-based informational masking in preschool children. J Acoust Soc Am 2015; 138:EL93-8. [PMID: 26233069] [PMCID: PMC4506292] [DOI: 10.1121/1.4921677]
Abstract
Previous work has shown that young children exhibit more difficulty understanding speech in the presence of speech-like distractors than do adults, and are more susceptible to at least some forms of informational masking (IM). Yet little is known about how and when susceptibility to linguistically-based IM develops. The authors tested adults, school-age children (aged 8 years), and preschool-age children (aged 4 years) on sentence recognition in the presence of normal speech, "jumbled" speech, and reversed speech distractors. As has been found previously with adults [e.g., Summers and Molis (2004). J. Speech, Lang. Hear. Res. 47, 245-256], children in both age groups showed a release from masking when the distractor was uninterpretable (reversed speech). This suggests that children already demonstrate linguistically-based IM by the age of 4 years.
Affiliation(s)
- Rochelle S Newman
- Department of Hearing and Speech Sciences, University of Maryland, 0100 Lefrak Hall, College Park, Maryland 20742, USA
- Giovanna Morini
- Department of Hearing and Speech Sciences, University of Maryland, 0100 Lefrak Hall, College Park, Maryland 20742, USA
- Faraz Ahsan
- Department of Hearing and Speech Sciences, University of Maryland, 0100 Lefrak Hall, College Park, Maryland 20742, USA
- Gerald Kidd
- Department of Speech, Language and Hearing Sciences, Boston University, 635 Commonwealth Avenue, Boston, Massachusetts 02215, USA
38
An YH, Jin SY, Yoon SW, Shim HJ. The effects of unilateral tinnitus on auditory temporal resolution: gaps-in-noise performance. Korean J Audiol 2014; 18:119-125. [PMID: 25558405] [PMCID: PMC4280753] [DOI: 10.7874/kja.2014.18.3.119]
Abstract
BACKGROUND AND OBJECTIVES The Gaps-In-Noise (GIN) test measures auditory temporal resolution, the ability to follow rapid changes in the envelope of a sound stimulus over time. We investigated whether unilateral tinnitus affects temporal resolution as measured by GIN performance. SUBJECTS AND METHODS Hearing tests including the GIN test were performed in 120 ears of 60 patients with unilateral tinnitus who showed symmetric hearing (within a 20 dB HL difference up to 8 kHz; tinnitus-affected ears, 14.6±11.2 dB HL; non-tinnitus ears, 15.1±11.5 dB HL) and in 60 ears of 30 subjects with normal hearing. Comparisons were made between the tinnitus and non-tinnitus sides of the patients and the normal ears of the controls. RESULTS There was no significant difference in mean GIN threshold among tinnitus-affected ears (5.18±0.6 ms), non-tinnitus ears (4.98±0.6 ms), and normal ears (4.97±0.8 ms). The mean percentage of correct answers on the tinnitus side (67.3±5.5%) was slightly lower than on the non-tinnitus side (70.0±5.5%), but not significantly different from that in normal ears (69.4±7.5%). Neither the GIN threshold nor the GIN perception level in tinnitus ears was related to sex, tinnitus frequency or loudness, or audiometric data; only age showed a significant correlation with GIN performance. CONCLUSIONS We found no evidence that unilateral tinnitus influences auditory temporal resolution. These results imply that tinnitus may not simply fill in the silent gaps in the background noise.
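GIN scoring can be sketched in a few lines. Under the commonly used rule that the threshold is the shortest gap identified on at least 4 of 6 presentations (an assumption here; the abstract does not state the paper's exact scoring rule):

```python
def gin_threshold_ms(results, criterion=4 / 6):
    """Approximate GIN threshold: the shortest gap duration (ms) whose
    proportion of correct identifications meets the criterion.
    `results` maps gap duration (ms) -> (n_correct, n_presented)."""
    qualifying = [gap for gap, (n_correct, n_presented) in results.items()
                  if n_presented > 0 and n_correct / n_presented >= criterion]
    return min(qualifying) if qualifying else None

# Hypothetical response counts for one ear (illustration only):
scores = {2: (1, 6), 3: (3, 6), 4: (5, 6), 5: (6, 6), 6: (6, 6)}
threshold = gin_threshold_ms(scores)  # shortest qualifying gap: 4 ms
```

The "percentage of correct answers" reported alongside the threshold is simply total correct over total presented, pooled across gap durations.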
Affiliation(s)
- Yong-Hwi An
- Department of Otorhinolaryngology-Head and Neck Surgery, Eulji Medical Center, Eulji University School of Medicine, Seoul, Korea
| | - So Young Jin
- Department of Otorhinolaryngology-Head and Neck Surgery, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Korea
| | - Sang Won Yoon
- Department of Otorhinolaryngology-Head and Neck Surgery, Eulji Medical Center, Eulji University School of Medicine, Seoul, Korea
| | - Hyun Joon Shim
- Department of Otorhinolaryngology-Head and Neck Surgery, Eulji Medical Center, Eulji University School of Medicine, Seoul, Korea
| |
Collapse
|
39
|
Hall JW, Buss E, Grose JH. Development of speech glimpsing in synchronously and asynchronously modulated noise. The Journal of the Acoustical Society of America 2014; 135:3594-3600. [PMID: 24907822] [PMCID: PMC4048449] [DOI: 10.1121/1.4873518]
Abstract
This study investigated the development of the ability to integrate glimpses of speech in modulated noise. Noise was modulated synchronously across frequency, or asynchronously such that when noise below 1300 Hz was "off," noise above 1300 Hz was "on," and vice versa. Asynchronous masking was used to examine listeners' ability to integrate speech glimpses separated across time and frequency. The study used the Word Intelligibility by Picture Identification (WIPI) test and included adults, older children (8-10 yr), and younger children (5-7 yr). Results showed poorer masking release for children than for adults under synchronous modulation but not under asynchronous modulation. It is possible that children integrate cues relatively well when all intervals provide at least partial speech information (asynchronous modulation) but less well when some intervals provide little or no information (synchronous modulation). Control conditions indicated that children appeared to derive less benefit than adults from speech cues below 1300 Hz. This frequency effect was supported by supplementary conditions in which the noise was unmodulated and the speech was low- or high-pass filtered. Possible sources of the developmental frequency effect include differences in frequency weighting, effective speech bandwidth, and the signal-to-noise ratio in the unmodulated noise condition.
Affiliation(s)
- Joseph W Hall, Department of Otolaryngology-Head and Neck Surgery, University of North Carolina at Chapel Hill, 170 Manning Drive, Chapel Hill, North Carolina 27599-7070
- Emily Buss, Department of Otolaryngology-Head and Neck Surgery, University of North Carolina at Chapel Hill, 170 Manning Drive, Chapel Hill, North Carolina 27599-7070
- John H Grose, Department of Otolaryngology-Head and Neck Surgery, University of North Carolina at Chapel Hill, 170 Manning Drive, Chapel Hill, North Carolina 27599-7070

40
Calandruccio L, Gomez B, Buss E, Leibold LJ. Development and preliminary evaluation of a pediatric Spanish-English speech perception task. Am J Audiol 2014; 23:158-72. [PMID: 24686915] [DOI: 10.1044/2014_aja-13-0055]
Abstract
PURPOSE The purpose of this study was to develop a task to evaluate children's English and Spanish speech perception abilities in either noise or competing speech maskers. METHOD Eight bilingual Spanish-English children and eight age-matched monolingual English children (ages 4.9-16.4 years) were tested. A forced-choice, picture-pointing paradigm was selected for adaptively estimating masked speech reception thresholds. Speech stimuli were spoken by simultaneous bilingual Spanish-English talkers. The target stimuli were 30 disyllabic English and Spanish words, familiar to 5-year-olds and easily illustrated. Competing stimuli included either 2-talker English or 2-talker Spanish speech (corresponding to the target language) and spectrally matched noise. RESULTS For both groups of children, regardless of test language, performance was significantly worse for the 2-talker masker than for the noise masker. No difference in performance was found between bilingual and monolingual children. Bilingual children performed significantly better in English than in Spanish in competing speech. For all listening conditions, performance improved with increasing age. CONCLUSIONS Results indicated that the stimuli and task were appropriate for speech recognition testing in both languages, providing a more conventional measure of speech-in-noise perception as well as a measure of complex listening. Further research is needed to determine performance for Spanish-dominant listeners and to evaluate the feasibility of implementation into routine clinical use.
Affiliation(s)
- Emily Buss, University of North Carolina at Chapel Hill

41
Influence of hearing loss on children's identification of spondee words in a speech-shaped noise or a two-talker masker. Ear Hear 2014; 34:575-84. [PMID: 23492919] [DOI: 10.1097/aud.0b013e3182857742]
Abstract
OBJECTIVE This study evaluated the influence of hearing loss on children's speech-perception abilities in a speech-shaped noise or a two-talker masker. For both masker conditions, it was predicted that children with hearing loss would require a more advantageous signal-to-noise ratio (SNR) than children with normal hearing to achieve the same criterion level of performance. However, it was hypothesized that the performance gap between children with hearing loss and children with normal hearing would be larger in the two-talker masker than in the speech-shaped noise. DESIGN A repeated-measures design compared the spondee identification performance of two age groups of children with hearing loss (9-11 and 13-17 years of age) and a group of children with normal hearing (9-11 years of age) in continuous speech-shaped noise or a two-talker masker. Estimates of the SNR required for 70.7% correct spondee identification were obtained using an adaptive, four-alternative, forced-choice procedure. Children were tested in the sound field; children with hearing loss wore their personal hearing aids at their regular settings during testing. RESULTS Both groups of children with hearing loss performed more poorly than children with normal hearing in the speech-shaped noise masker: younger children required an additional 2.7 dB SNR and older children an additional 4.7 dB SNR to achieve the same level of performance as children with normal hearing. This disadvantage increased to 8.1 dB for both age groups in the two-talker masker. For children with hearing loss, degree of hearing loss was significantly correlated with performance in the speech-shaped noise masker, but not in the two-talker masker. CONCLUSIONS A larger performance gap was observed between children with hearing loss and children with normal hearing in competing speech than in steady-state noise. These results are consistent with the hypothesis that hearing loss influenced children's perceptual processing abilities.
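The 70.7%-correct target mentioned in this abstract is the convergence point of a 2-down/1-up adaptive staircase (Levitt, 1971). The abstract does not spell out the tracking rule, so the sketch below is a generic illustration, not the study's exact procedure; the function name, step size, and `respond` callback (one simulated trial at a given SNR) are assumptions:

```python
def two_down_one_up(respond, start_snr=10.0, step_db=2.0, n_reversals=8):
    """Minimal 2-down/1-up adaptive track, converging near 70.7% correct.

    `respond(snr)` returns True/False for one trial at the given SNR.
    The threshold estimate is the mean SNR at the reversal points.
    """
    snr, correct_run, direction = start_snr, 0, 0
    reversals = []
    while len(reversals) < n_reversals:
        if respond(snr):
            correct_run += 1
            if correct_run == 2:          # two correct in a row -> harder
                correct_run = 0
                if direction == +1:       # track just turned downward
                    reversals.append(snr)
                direction = -1
                snr -= step_db
        else:                             # one error -> easier
            correct_run = 0
            if direction == -1:           # track just turned upward
                reversals.append(snr)
            direction = +1
            snr += step_db
    return sum(reversals) / len(reversals)
```

With a deterministic simulated listener who is correct whenever SNR ≥ 0 dB, the track oscillates around 0 dB SNR. Real implementations typically also shrink the step size after early reversals.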
42
Calandruccio L, Buss E, Hall JW. Effects of linguistic experience on the ability to benefit from temporal and spectral masker modulation. The Journal of the Acoustical Society of America 2014; 135:1335-1343. [PMID: 24606272] [PMCID: PMC4042472] [DOI: 10.1121/1.4864785]
Abstract
Masked speech perception can often be improved by modulating the masker temporally and/or spectrally. These effects tend to be larger in normal-hearing listeners than hearing-impaired listeners, and effects of temporal modulation are larger in adults than young children [Hall et al. (2012). Ear Hear. 33, 340-348]. Initial reports indicate non-native adult speakers of the target language also have a reduced ability to benefit from temporal masker modulation [Stuart et al. (2010). J. Am. Acad. Aud. 21, 239-248]. The present study further investigated the effect of masker modulation on English speech recognition in normal-hearing adults who are non-native speakers of English. Sentence recognition was assessed in a steady-state baseline masker condition and in three modulated masker conditions, characterized by spectral, temporal, or spectro-temporal modulation. Thresholds for non-natives were poorer than those of native English speakers in all conditions, particularly in the presence of a modulated masker. The group differences were consistent across maskers when assessed in percent correct, suggesting that a single factor may limit the performance of non-native listeners similarly in all conditions.
Affiliation(s)
- Lauren Calandruccio, Division of Speech and Hearing Sciences, Department of Allied Health Sciences, University of North Carolina School of Medicine, Chapel Hill, North Carolina 27599
- Emily Buss, Department of Otolaryngology/Head and Neck Surgery, University of North Carolina School of Medicine, Chapel Hill, North Carolina 27599
- Joseph W Hall, Department of Otolaryngology/Head and Neck Surgery, University of North Carolina School of Medicine, Chapel Hill, North Carolina 27599

43
Advíncula KP, Menezes DC, Pacífico FA, Griz SMS. Percepção da fala em presença de ruído competitivo: o efeito da taxa de modulação do ruído mascarante [Speech perception in the presence of competing noise: the effect of masker modulation rate]. Audiology - Communication Research 2013. [DOI: 10.1590/s2317-64312013000400003]
Abstract
OBJECTIVE: This study investigated the effect of different masker modulation rates on the magnitude of masking release. METHODS: Fifteen young individuals with normal hearing underwent sentence recognition testing in noise, using the HINT-Brazil sentence lists. Speech recognition thresholds were obtained in the presence of steady noise and of noise modulated at different rates (4, 8, 16, 32, and 64 Hz). The magnitude of masking release was obtained for each modulation rate and the results were compared. RESULTS: The findings showed better sentence recognition thresholds when the masking noise was modulated at 4, 8, 16, and 32 Hz, and poorer thresholds when the masking noise was steady or modulated at 64 Hz. Regarding signal-to-noise ratio, the highest values were observed for sentence recognition in steady noise, followed by recognition in noise modulated at 64 Hz, with lower values for noise modulated at 32, 16, 8, and 4 Hz, respectively. CONCLUSION: The magnitude of masking release for sentences does not differ across amplitude modulation rates between 4 and 32 Hz. However, when the modulation rate is raised to 64 Hz, the magnitude of masking release decreases.
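Masking release, as used throughout these studies, is simple dB arithmetic: the speech reception threshold (SRT) in steady noise minus the SRT in the modulated masker, so that a positive value means a benefit from listening in the masker dips. The SRT values below are made up for illustration, chosen only to echo the pattern this abstract reports (clear benefit at 4-32 Hz, reduced benefit at 64 Hz):

```python
def masking_release(srt_steady_db, srt_modulated_db):
    """Masking release in dB: how much lower (better) the SRT is in a
    modulated masker than in steady noise. Positive = benefit."""
    return srt_steady_db - srt_modulated_db

# Illustrative (not measured) SRTs in dB SNR, keyed by modulation rate.
srts = {"steady": -3.0, 4: -10.0, 8: -10.5, 16: -10.2, 32: -9.8, 64: -4.5}
release = {rate: masking_release(srts["steady"], srt)
           for rate, srt in srts.items() if rate != "steady"}
```

Here `release[4]` is 7.0 dB while `release[64]` is only 1.5 dB, mirroring the reported drop in masking release at the fastest modulation rate.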
44
Leibold LJ, Buss E. Children's identification of consonants in a speech-shaped noise or a two-talker masker. Journal of Speech, Language, and Hearing Research 2013; 56:1144-55. [PMID: 23785181] [PMCID: PMC3981452] [DOI: 10.1044/1092-4388(2012/12-0011)]
Abstract
PURPOSE To evaluate child-adult differences for consonant identification in a noise or a 2-talker masker. Error patterns were compared across age and masker type to test the hypothesis that errors with the noise masker reflect limitations in the peripheral encoding of speech, whereas errors with the 2-talker masker reflect target-masker confusions within the central auditory system. METHOD A repeated-measures design compared the performance of children (5-13 years) and adults in continuous speech-shaped noise or a 2-talker masker. Consonants were identified from a closed set of 12 using a picture-pointing response. RESULTS In speech-shaped noise, children under age 10 years performed more poorly than adults, but performance was adultlike for 11- to 13-year-olds. In the 2-talker masker, significant child-adult differences were observed in even the oldest group of children. Systematic clusters of consonant errors were observed for children in the noise masker and for adults in both maskers, but not for children in the 2-talker masker. CONCLUSIONS These results suggest a more prolonged time course of development for consonant identification in a 2-talker masker than in a noise masker. Differences in error patterns between the maskers support the hypothesis that errors with the 2-talker masker reflect failures of sound segregation.
45
Werner LA. Infants' detection and discrimination of sounds in modulated maskers. The Journal of the Acoustical Society of America 2013; 133:4156-4167. [PMID: 23742367] [PMCID: PMC3689834] [DOI: 10.1121/1.4803903]
Abstract
Adults and 7-month-old infants were compared in detection and discrimination of sounds in modulated maskers. In two experiments, the level of a target sound was varied to equate listeners' performance in unmodulated noise, and performance was assessed at that level in a noise modulated with the envelope of single-talker speech. While adults' vowel discrimination and tone detection were better in the modulated than in the unmodulated masker, infants' vowel discrimination was poorer in the modulated than in the unmodulated masker. Infants' tone detection was the same in the two maskers. In two additional experiments, each age group was tested at one level with the order of testing in modulated and unmodulated maskers counterbalanced across subjects. Both infants and adults discriminated between vowels better in single-talker modulated and sinusoidally amplitude-modulated (SAM) maskers than in an unmodulated masker, but infants' modulated-unmodulated difference was smaller than that of adults. Increasing the modulation depth of the SAM masker did not affect the size of infants' modulated-unmodulated difference; however, infants' asymptotic performance in a modulated masker limits the extent to which their performance could improve. Infants can make use of information in masker dips, but masker modulation may also interfere with their ability to process the target.
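A SAM masker of the kind varied in this study is Gaussian noise whose amplitude envelope follows a sinusoid at a chosen rate and depth. A minimal sketch, with illustrative function name and defaults (the study's actual stimulus parameters are not reproduced here):

```python
import numpy as np

def sam_noise(duration_s=1.0, rate_hz=8.0, depth=1.0, fs=44100, rng=None):
    """Sinusoidally amplitude-modulated (SAM) Gaussian noise.

    depth=1.0 gives 100% modulation (the envelope dips to zero, leaving
    maximal "glimpsing" opportunities); depth=0.0 reduces to unmodulated
    noise.
    """
    rng = np.random.default_rng(rng)
    t = np.arange(int(duration_s * fs)) / fs
    carrier = rng.standard_normal(t.size)                 # noise carrier
    envelope = 1.0 + depth * np.sin(2 * np.pi * rate_hz * t)
    return carrier * envelope
```

At `depth=0.0` the envelope is constant, so the output equals the unmodulated carrier, which makes the modulated and unmodulated conditions directly comparable.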
Affiliation(s)
- Lynne A Werner, Department of Speech and Hearing Sciences, University of Washington, Seattle, Washington 98105-6246, USA

46
An energetic limit on spatial release from masking. J Assoc Res Otolaryngol 2013; 14:603-10. [PMID: 23649712] [DOI: 10.1007/s10162-013-0392-1]
Abstract
This study tested the hypothesis that energetic masking limits the benefits obtained from spatial separation in multiple-talker listening situations, particularly for listeners with sensorineural hearing loss. A speech target was presented simultaneously with two or four speech maskers. The target was always presented diotically, and the maskers were presented either diotically or dichotically. In dichotic configurations, the maskers were symmetrically placed by introducing interaural time differences (ITDs) or infinitely large interaural level differences (ILDs; monaural presentation). Target-to-masker ratios for 50% correct performance were estimated. Thresholds in all separated conditions were poorer in listeners with hearing loss than in listeners with normal hearing. Moreover, for a given listener, thresholds were similar for conditions with the same number of talkers per ear (e.g., ILD with four talkers was equivalent to ITD with two talkers) and hence the same energetic masking. The results are consistent with the idea that increased energetic masking, rather than a specific spatial deficit, may limit performance for hearing-impaired listeners in spatialized speech mixtures.
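Introducing an ITD, as in the dichotic configurations above, amounts to delaying the signal at one ear relative to the other. A minimal sketch restricted to whole-sample delays (real studies typically use fractional-sample or filter-based delays; the function name and defaults are illustrative):

```python
import numpy as np

def apply_itd(mono, itd_s, fs=44100):
    """Return a 2-channel (left, right) signal in which the right ear
    is delayed by itd_s seconds, lateralizing the source toward the
    left ear. Delay is rounded to whole samples for simplicity."""
    delay = int(round(itd_s * fs))
    left = np.concatenate([mono, np.zeros(delay)])   # pad to equal length
    right = np.concatenate([np.zeros(delay), mono])  # delayed copy
    return np.stack([left, right], axis=0)
```

The "infinitely large ILD" condition in the abstract needs no such processing: it is simply monaural presentation, i.e., the masker delivered to one ear and silence to the other.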