1. Fernandez LB, Pickering MJ, Naylor G, Hadley LV. Uses of Linguistic Context in Speech Listening: Does Acquired Hearing Loss Lead to Reduced Engagement of Prediction? Ear Hear 2024:00003446-990000000-00296. PMID: 38880953. DOI: 10.1097/aud.0000000000001515.
Abstract
Research investigating the complex interplay of cognitive mechanisms involved in speech listening for people with hearing loss has been gaining prominence. In particular, linguistic context allows the use of several cognitive mechanisms that are not well distinguished in hearing science, namely those relating to "postdiction", "integration", and "prediction". We offer the perspective that an unacknowledged impact of hearing loss is the differential use of predictive mechanisms relative to age-matched individuals with normal hearing. As evidence, we first review how degraded auditory input leads to reduced prediction in people with normal hearing, then consider the literature exploring context use in people with acquired postlingual hearing loss. We argue that no research on hearing loss has directly assessed prediction. Because current interventions for hearing loss do not fully alleviate difficulty in conversation, and avoidance of spoken social interaction may mediate the link between hearing loss and cognitive decline, this perspective could lead to greater understanding of the cognitive effects of hearing loss and provide insight regarding new targets for intervention.
Affiliation(s)
- Leigh B Fernandez
- Department of Social Sciences, Psycholinguistics Group, University of Kaiserslautern-Landau, Kaiserslautern, Germany
- Martin J Pickering
- Department of Psychology, University of Edinburgh, Edinburgh, United Kingdom
- Graham Naylor
- Hearing Sciences-Scottish Section, School of Medicine, University of Nottingham, Glasgow, United Kingdom
- Lauren V Hadley
- Hearing Sciences-Scottish Section, School of Medicine, University of Nottingham, Glasgow, United Kingdom
2. Nicholson N, Rhoades EA, Glade RE. Analysis of Health Disparities in the Screening and Diagnosis of Hearing Loss: Early Hearing Detection and Intervention Hearing Screening Follow-Up Survey. Am J Audiol 2022;31:764-788. PMID: 35613624. DOI: 10.1044/2022_aja-21-00014.
Abstract
PURPOSE: The purpose of this study was to (a) provide introductory literature regarding cultural constructs, health disparities, and social determinants of health (SDoH); (b) summarize the literature regarding the Centers for Disease Control and Prevention (CDC) Early Hearing Detection and Intervention (EHDI) Hearing Screening Follow-Up Survey (HSFS) data; (c) explore the CDC EHDI HSFS data regarding the contribution of maternal demographics to loss-to-follow-up/loss-to-documentation (LTF/D) between hearing screening and audiologic diagnosis for 2016, 2017, and 2018; and (d) examine these health disparities within the context of potential ethnoracial biases.
METHOD: This is a comprehensive narrative literature review of cultural constructs, hearing health disparities, and SDoH as they relate to the CDC EHDI HSFS data. We explore the maternal demographic data reported on the CDC EHDI website and report disparities in maternal age, education, ethnicity, and race for 2016, 2017, and 2018, focusing on LTF/D for screening and diagnosis within the context of racial and cultural bias.
RESULTS: The literature review demonstrates the increase in quality of the CDC EHDI HSFS data over the past two decades. LTF/D rates for hearing screening and audiologic diagnostic testing have improved from more than 60% to current rates below 30%. Comparisons of diagnostic completion rates reported on the CDC website for the 2016, 2017, and 2018 EHDI HSFS data show trends for maternal age, education, and race, but not for ethnicity. Trends were defined as changes of more than 10% for variables averaged over the 3-year period (2016-2018).
CONCLUSIONS: Although LTF/D has improved significantly over the past two decades, there remain opportunities for further improvement. Beyond neonatal screening, delays continue to be reported in the diagnosis of young children with hearing loss. Given the extraordinarily diverse families within the United States, the imperative is to minimize such delays so that all children with hearing loss can, at the very least, have auditory access to spoken language by 3 months of age. Conscious awareness of these disparities is an essential first step toward an effective plan of action to remediate the problem.
Affiliation(s)
- Rachel E. Glade
- Communication Science and Disorders, University of Arkansas, Fayetteville
3. Shahin AJ, Shen S, Kerlin JR. Tolerance for audiovisual asynchrony is enhanced by the spectrotemporal fidelity of the speaker's mouth movements and speech. Lang Cogn Neurosci 2017;32:1102-1118. PMID: 28966930. PMCID: PMC5617130. DOI: 10.1080/23273798.2017.1283428.
Abstract
We examined the relationship between tolerance for audiovisual onset asynchrony (AVOA) and the spectrotemporal fidelity of spoken words and the speaker's mouth movements. In two experiments that varied only in the temporal order of the sensory modalities, with visual speech leading (Experiment 1) or lagging (Experiment 2) the acoustic speech, participants watched intact and blurred videos of a speaker uttering trisyllabic words and nonwords that were noise vocoded with 4, 8, 16, or 32 channels. They judged whether the speaker's mouth movements and the speech sounds were in sync or out of sync. Individuals perceived synchrony (tolerated AVOA) on more trials when the acoustic speech was more speech-like (8 channels and higher vs. 4 channels) and when the visual speech was intact rather than blurred (Experiment 1 only). These findings suggest that enhanced spectrotemporal fidelity of the audiovisual (AV) signal prompts the brain to widen the window of integration, promoting the fusion of temporally distant AV percepts.
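For readers unfamiliar with the vocoding manipulation, below is a minimal sketch of an n-channel noise vocoder of the kind the abstract describes. The logarithmic band spacing, filter orders, and envelope cutoff are illustrative assumptions, not the parameters used by Shahin et al.

```python
# Minimal n-channel noise vocoder sketch (illustrative parameters only).
# Assumes fs is comfortably above 2 * f_hi.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(signal, fs, n_channels=8, f_lo=80.0, f_hi=8000.0):
    """Replace each band's fine structure with band-limited noise,
    keeping only the band's amplitude envelope."""
    # Logarithmically spaced band edges between f_lo and f_hi (an assumption).
    edges = np.logspace(np.log10(f_lo), np.log10(f_hi), n_channels + 1)
    noise = np.random.randn(len(signal))
    out = np.zeros(len(signal))
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, signal)
        env = np.abs(hilbert(band))                       # amplitude envelope
        env_sos = butter(2, 160.0, btype="low", fs=fs, output="sos")
        env = sosfiltfilt(env_sos, env)                   # smooth the envelope
        out += env * sosfiltfilt(sos, noise)              # noise carrier in band
    return out / np.max(np.abs(out))                      # normalize level
```

Fewer channels preserve coarser spectral detail, which is the sense in which 4-channel speech is less speech-like than speech vocoded with 8 or more channels.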
Affiliation(s)
- Antoine J Shahin
- Center for Mind and Brain, University of California, Davis, CA, 95618
- Stanley Shen
- Center for Mind and Brain, University of California, Davis, CA, 95618
- Jess R Kerlin
- Center for Mind and Brain, University of California, Davis, CA, 95618
4. Stilp C, Donaldson G, Oh S, Kong YY. Influences of noise-interruption and information-bearing acoustic changes on understanding simulated electric-acoustic speech. J Acoust Soc Am 2016;140:3971. PMID: 27908030. PMCID: PMC6909990. DOI: 10.1121/1.4967445.
Abstract
In simulations of electric-acoustic stimulation (EAS), vocoded speech intelligibility is aided by preservation of low-frequency acoustic cues. However, the speech signal is often interrupted in everyday listening conditions, and the effects of interruption on hybrid speech intelligibility are poorly understood. Additionally, listeners rely on information-bearing acoustic changes to understand full-spectrum speech (as measured by cochlea-scaled entropy [CSE]) and vocoded speech (CSECI, the CSE measure adapted for cochlear implant processing), but how listeners use these informational changes to understand EAS speech is unclear. Here, normal-hearing participants heard noise-vocoded sentences with three to six spectral channels in two conditions: vocoder-only (80-8000 Hz) and simulated hybrid EAS (vocoded above 500 Hz; original acoustic signal below 500 Hz). In each sentence, four 80-ms intervals containing high-CSECI or low-CSECI acoustic changes were replaced with speech-shaped noise. As expected, performance improved with the preservation of low-frequency fine-structure cues (EAS). This improvement decreased for continuous EAS sentences as more spectral channels were added, but increased as more channels were added to noise-interrupted EAS sentences. Performance was impaired more when high-CSECI intervals were replaced by noise than when low-CSECI intervals were replaced, but this pattern did not differ across listening modes. Use of information-bearing acoustic changes to understand speech is predicted to generalize to cochlear implant users who receive EAS inputs.
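The hybrid EAS condition lends itself to a similar sketch: keep the acoustic signal below the crossover frequency and vocode only the band above it. The sketch below reuses the hypothetical noise_vocode helper from the sketch under entry 3; the 500 Hz crossover comes from the abstract, while the filter orders are assumptions.

```python
# Hybrid EAS simulation sketch: intact acoustic low band + vocoded high band.
# Requires noise_vocode() from the sketch above; filter orders are assumptions.
from scipy.signal import butter, sosfiltfilt

def simulate_eas(signal, fs, crossover=500.0, n_channels=6):
    lp = butter(6, crossover, btype="low", fs=fs, output="sos")
    hp = butter(6, crossover, btype="high", fs=fs, output="sos")
    acoustic = sosfiltfilt(lp, signal)                # preserved low-frequency cues
    vocoded = noise_vocode(sosfiltfilt(hp, signal),   # degraded high frequencies
                           fs, n_channels=n_channels,
                           f_lo=crossover, f_hi=8000.0)
    return acoustic + vocoded
```

The vocoder-only comparison condition would instead pass the full 80-8000 Hz signal through the vocoder, with no preserved acoustic band.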
Affiliation(s)
- Christian Stilp
- Department of Psychological and Brain Sciences, University of Louisville, Louisville, Kentucky 40292, USA
- Gail Donaldson
- Department of Communication Sciences and Disorders, University of South Florida, PCD 1017, 4202 East Fowler Avenue, Tampa, Florida 33620, USA
- Soohee Oh
- Department of Communication Sciences and Disorders, University of South Florida, PCD 1017, 4202 East Fowler Avenue, Tampa, Florida 33620, USA
- Ying-Yee Kong
- Department of Communication Sciences and Disorders, Northeastern University, 226 Forsyth Building, 360 Huntington Avenue, Boston, Massachusetts 02115, USA
5. Başkent D, Clarke J, Pals C, Benard MR, Bhargava P, Saija J, Sarampalis A, Wagner A, Gaudrain E. Cognitive Compensation of Speech Perception With Hearing Impairment, Cochlear Implants, and Aging. Trends Hear 2016. PMCID: PMC5056620. DOI: 10.1177/2331216516670279.
Abstract
External degradations in incoming speech reduce understanding, and hearing impairment further compounds the problem. While cognitive mechanisms alleviate some of the difficulties, their effectiveness may change with age. In our research, reviewed here, we investigated cognitive compensation with hearing impairment, cochlear implants, and aging, via (a) phonemic restoration as a measure of top-down filling of missing speech, (b) listening effort and response times as a measure of increased cognitive processing, and (c) the visual world paradigm and eye gaze as a measure of the use of context and its time course. Our results indicate that between speech degradations and their cognitive compensation there is a fine balance that seems to vary greatly across individuals. Hearing impairment or inadequate hearing device settings may limit compensation benefits. Cochlear implants seem to allow effective use of sentential context, but likely at the cost of delayed processing. Linguistic and lexical knowledge, which play an important role in compensation, may be successfully employed in advanced age, as some compensatory mechanisms seem to be preserved. These findings indicate that cognitive compensation in hearing impairment can be highly complicated: not always absent, but also not easily predicted by speech intelligibility tests alone.
Affiliation(s)
- Deniz Başkent
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Netherlands
- Graduate School of Medical Sciences, University of Groningen, Netherlands
- Research School of Behavioural and Cognitive Neurosciences, University of Groningen, Netherlands
- Jeanne Clarke
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Netherlands
- Graduate School of Medical Sciences, University of Groningen, Netherlands
- Research School of Behavioural and Cognitive Neurosciences, University of Groningen, Netherlands
- Carina Pals
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Netherlands
- Graduate School of Medical Sciences, University of Groningen, Netherlands
- Research School of Behavioural and Cognitive Neurosciences, University of Groningen, Netherlands
- Michel R. Benard
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Netherlands
- Pento Speech and Hearing Center Zwolle, Zwolle, Netherlands
- Pranesh Bhargava
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Netherlands
- Graduate School of Medical Sciences, University of Groningen, Netherlands
- Research School of Behavioural and Cognitive Neurosciences, University of Groningen, Netherlands
- Jefta Saija
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Netherlands
- Graduate School of Medical Sciences, University of Groningen, Netherlands
- Research School of Behavioural and Cognitive Neurosciences, University of Groningen, Netherlands
- Anastasios Sarampalis
- Research School of Behavioural and Cognitive Neurosciences, University of Groningen, Netherlands
- Department of Psychology, University of Groningen, Netherlands
- Anita Wagner
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Netherlands
- Graduate School of Medical Sciences, University of Groningen, Netherlands
- Research School of Behavioural and Cognitive Neurosciences, University of Groningen, Netherlands
- Etienne Gaudrain
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Netherlands
- Graduate School of Medical Sciences, University of Groningen, Netherlands
- Research School of Behavioural and Cognitive Neurosciences, University of Groningen, Netherlands
- Auditory Cognition and Psychoacoustics, CNRS, Lyon Neuroscience Research Center, Lyon, France
6. Fogerty D, Ahlstrom JB, Bologna WJ, Dubno JR. Glimpsing Speech in the Presence of Nonsimultaneous Amplitude Modulations From a Competing Talker: Effect of Modulation Rate, Age, and Hearing Loss. J Speech Lang Hear Res 2016;59:1198-1207. PMID: 27603264. PMCID: PMC5345559. DOI: 10.1044/2016_jslhr-h-15-0259.
Abstract
PURPOSE: This study investigated how listeners process acoustic cues preserved during sentences interrupted by nonsimultaneous noise that was amplitude modulated by a competing talker.
METHOD: Younger adults with normal hearing and older adults with normal or impaired hearing listened to sentences in which consonants or vowels were replaced with noise amplitude modulated by a competing talker. Sentences were spectrally shaped according to each listener's audiogram; for a younger spectrally shaped control group, shaping followed the mean audiogram of the listeners with hearing impairment. The modulation spectrum of the noise was low-pass filtered at different modulation cutoff frequencies, and the effect of noise level was also examined.
RESULTS: Performance declined when the nonsimultaneous masker modulation included faster rates and was maximized when the masker modulation matched the preserved primary speech modulation. Vowels yielded better performance than consonants at slower modulation cutoff rates, likely due to suprasegmental features. Overall performance was poorer with increased age or hearing loss, and for listeners who received spectrally shaped speech.
CONCLUSIONS: Nonsimultaneous amplitude modulations from a competing talker significantly interacted with the preserved speech segments, and additional effects of age and hearing loss were observed. Importantly, listeners may benefit from nonsimultaneous competing modulations when these match the preserved modulations of the sentence.
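The masker construction described in the method can be sketched as follows: extract a competing talker's broadband envelope, low-pass filter its modulation spectrum at the desired cutoff, and impose the result on noise. The Hilbert-envelope method, the filter orders, and the use of white rather than speech-shaped noise are simplifying assumptions in this sketch.

```python
# Sketch: noise amplitude modulated by a competing talker's envelope,
# with the modulation spectrum low-pass filtered at a chosen cutoff.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def talker_modulated_noise(competing, fs, mod_cutoff_hz=8.0):
    env = np.abs(hilbert(competing))                  # broadband amplitude envelope
    sos = butter(2, mod_cutoff_hz, btype="low", fs=fs, output="sos")
    env = sosfiltfilt(sos, env)                       # restrict modulation rates
    env = np.clip(env, 0.0, None)                     # keep the envelope nonnegative
    noise = np.random.randn(len(competing))           # speech-shaped in the study;
    return env * noise                                # white noise here for brevity
```

Raising mod_cutoff_hz admits faster modulation rates into the masker, the manipulation that the abstract reports degrading performance when the masker rates exceed those preserved in the target speech.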
Affiliation(s)
- William J. Bologna
- Medical University of South Carolina, Charleston
- University of Maryland, College Park