1
Zhou M, Soleimanpour R, Mahajan A, Anderson S. Hearing Aid Delay Effects on Neural Phase Locking. Ear Hear 2024; 45:142-150. PMID: 37434283; PMCID: PMC10718218; DOI: 10.1097/aud.0000000000001408.
Abstract
OBJECTIVES: This study was designed to examine the effects of hearing aid delay on the neural representation of the temporal envelope. It was hypothesized that the comb-filter effect would disrupt neural phase locking, and that shorter hearing aid delays would minimize this effect.
DESIGN: Twenty-one participants, ages 50 years and older, with bilateral mild-to-moderate sensorineural hearing loss were recruited through print advertisements in local senior newspapers. They were fitted with three different sets of hearing aids with average processing delays that ranged from 0.5 to 7 msec. Envelope-following responses (EFRs) were recorded to a 50-msec /da/ syllable presented through a speaker placed 1 meter in front of the participants while they wore the three sets of hearing aids with open tips. Phase-locking factor (PLF) and stimulus-to-response (STR) correlations were calculated from these recordings.
RESULTS: Recordings obtained while wearing hearing aids with a 0.5-msec processing delay showed higher PLF and STR correlations compared with those with either 5-msec or 7-msec delays. No differences were noted between recordings of hearing aids with 5-msec and 7-msec delays. The degree of difference between hearing aids was greater for individuals who had milder degrees of hearing loss.
CONCLUSIONS: Hearing aid processing delays disrupt phase locking due to mixing of processed and unprocessed sounds in the ear canal when using open domes. Given previous work showing that better phase locking correlates with better speech-in-noise performance, consideration should be given to reducing hearing aid processing delay in the design of hearing aid algorithms.
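The phase-locking factor reported here is conventionally computed as the magnitude of the trial-averaged unit phase vector at a target frequency: 1 when every trial carries the same phase, near 0 when phases are random. A minimal sketch of that computation (illustrative only; the function name and toy data are mine, not the study's pipeline):

```python
import numpy as np

def phase_locking_factor(trials, fs, freq):
    """Phase-locking factor (PLF) at a target frequency.

    trials: array of shape (n_trials, n_samples), one EFR epoch per row.
    Returns |mean across trials of the unit phase vector|, in [0, 1].
    """
    n = trials.shape[1]
    t = np.arange(n) / fs
    # Complex demodulation at the target frequency (e.g., the F0 of /da/)
    phasors = trials @ np.exp(-2j * np.pi * freq * t)
    phasors /= np.abs(phasors)          # keep phase only, discard amplitude
    return np.abs(phasors.mean())

# Toy check: phase-locked trials vs. random-phase trials at 100 Hz
fs, f0 = 2000, 100.0
t = np.arange(200) / fs
rng = np.random.default_rng(0)
locked = np.array([np.sin(2 * np.pi * f0 * t + 0.1) for _ in range(50)])
random = np.array([np.sin(2 * np.pi * f0 * t + rng.uniform(0, 2 * np.pi))
                   for _ in range(50)])
print(phase_locking_factor(locked, fs, f0))   # ~1.0 (identical phase)
print(phase_locking_factor(random, fs, f0))   # small (phases cancel)
```

A comb-filter mix of delayed and undelayed sound smears the phase of the envelope across frequency, which is why longer delays would be expected to reduce a phase-consistency metric like this one.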
Affiliation(s)
- Mary Zhou
- Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland, USA
- Roksana Soleimanpour
- Department of Biological Sciences, University of Maryland, College Park, Maryland, USA
- Aakriti Mahajan
- Department of Biological Sciences, University of Maryland, College Park, Maryland, USA
- Samira Anderson
- Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland, USA
- Neuroscience and Cognitive Science Program, University of Maryland, College Park, Maryland, USA
2
Deshpande P, Brandt C, Debener S, Neher T. Does experience with hearing aid amplification influence electrophysiological measures of speech comprehension? Int J Audiol 2023:1-10. PMID: 38010629; DOI: 10.1080/14992027.2023.2284675.
Abstract
OBJECTIVE: To explore if experience with hearing aid (HA) amplification affects speech-evoked cortical potentials reflecting comprehension abilities.
DESIGN: N400 and late positive complex (LPC) responses as well as behavioural response times to congruent and incongruent digit triplets were measured. The digits were presented against stationary speech-shaped noise 10 dB above individually measured speech recognition thresholds. Stimulus presentation was either acoustic (digits 1-3) or first visual (digits 1-2) and then acoustic (digit 3).
STUDY SAMPLE: Three groups of older participants (N = 3 × 15) with (1) pure-tone average hearing thresholds <25 dB HL from 500-4000 Hz, (2) mild-to-moderate sensorineural hearing loss (SNHL) but no prior HA experience, and (3) mild-to-moderate SNHL and >2 years of HA experience. Groups 2-3 were fitted with test devices in accordance with clinical gain targets.
RESULTS: No group differences were found in the electrophysiological data. N400 amplitudes were larger and LPC latencies shorter with acoustic presentation. For group 1, behavioural response times were shorter with visual-then-acoustic presentation.
CONCLUSION: When speech audibility is ensured, comprehension-related electrophysiological responses appear intact in individuals with mild-to-moderate SNHL, regardless of prior experience with amplified sound. Further research into the effects of audibility versus acclimatisation-related neurophysiological changes is warranted.
Affiliation(s)
- Pushkar Deshpande
- Institute of Clinical Research, University of Southern Denmark, Odense, Denmark
- Research Unit for ORL - Head & Neck Surgery and Audiology, Odense University Hospital & University of Southern Denmark, Odense, Denmark
- Christian Brandt
- Institute of Clinical Research, University of Southern Denmark, Odense, Denmark
- Research Unit for ORL - Head & Neck Surgery and Audiology, Odense University Hospital & University of Southern Denmark, Odense, Denmark
- Stefan Debener
- Neuropsychology Lab, Department of Psychology, University of Oldenburg, Germany
- Cluster of Excellence Hearing4all, University of Oldenburg, Oldenburg, Germany
- Branch for Hearing, Speech and Audio Technology HSA, Fraunhofer Institute for Digital Media Technology IDMT, Oldenburg, Germany
- Tobias Neher
- Institute of Clinical Research, University of Southern Denmark, Odense, Denmark
- Research Unit for ORL - Head & Neck Surgery and Audiology, Odense University Hospital & University of Southern Denmark, Odense, Denmark
3
Slugocki C, Kuk F, Korhonen P. Left Lateralization of the Cortical Auditory-Evoked Potential Reflects Aided Processing and Speech-in-Noise Performance of Older Listeners With a Hearing Loss. Ear Hear 2023; 44:399-410. PMID: 36331191; DOI: 10.1097/aud.0000000000001293.
Abstract
OBJECTIVES: We analyzed the lateralization of the cortical auditory-evoked potential recorded previously from aided hearing-impaired listeners as part of a study on noise-mitigating hearing aid technologies. Specifically, we asked whether the degree of leftward lateralization in the magnitudes and latencies of these components was reduced by noise and, conversely, enhanced/restored by hearing aid technology. We further explored if individual differences in lateralization could predict speech-in-noise abilities in listeners when tested in the aided mode.
DESIGN: The study followed a double-blind within-subjects design. Nineteen older adults (8 females; mean age = 73.6 years, range = 56 to 86 years) with moderate to severe hearing loss participated. The cortical auditory-evoked potential was measured over 400 presentations of a synthetic /da/ stimulus which was delivered binaurally in a simulated aided mode using shielded ear-insert transducers. Sequences of the /da/ syllable were presented from the front at 75 dB SPL-C with continuous speech-shaped noise presented from the back at signal-to-noise ratios of 0, 5, and 10 dB. Four hearing aid conditions were tested: (1) omnidirectional microphone (OM) with noise reduction (NR) disabled, (2) OM with NR enabled, (3) directional microphone (DM) with NR disabled, and (4) DM with NR enabled. Lateralization of the P1 component and N1P2 complex was quantified across electrodes spanning the mid-coronal plane. Subsequently, listener speech-in-noise performance was assessed using the Repeat-Recall Test at the same signal-to-noise ratios and hearing aid conditions used to measure cortical activity.
RESULTS: As expected, both the P1 component and the N1P2 complex were of greater magnitude in electrodes over the left compared to the right hemisphere. In addition, N1 and P2 peaks tended to occur earlier over the left hemisphere, although the effect was mediated by an interaction of signal-to-noise ratio and hearing aid technology. At a group level, degrees of lateralization for the P1 component and the N1P2 complex were enhanced in the DM relative to the OM mode. Moreover, linear mixed-effects models suggested that the degree of leftward lateralization in the N1P2 complex, but not the P1 component, accounted for a significant portion of variability in speech-in-noise performance that was not related to age, hearing loss, hearing aid processing, or signal-to-noise ratio.
CONCLUSIONS: A robust leftward lateralization of cortical potentials was observed in older listeners when tested in the aided mode. Moreover, the degree of lateralization was enhanced by hearing aid technologies that improve the signal-to-noise ratio for speech. Accounting for the effects of signal-to-noise ratio, hearing aid technology, semantic context, and audiometric thresholds, individual differences in left-lateralized speech-evoked cortical activity were found to predict listeners' speech-in-noise abilities. Quantifying cortical auditory-evoked potential component lateralization may then be useful for profiling listeners' likelihood of communication success following clinical amplification.
Affiliation(s)
- Christopher Slugocki
- Office of Research in Clinical Amplification (ORCA-USA), WS Audiology, Lisle, Illinois, USA
4
Easwar V, Purcell D, Wright T. Predicting Hearing aid Benefit Using Speech-Evoked Envelope Following Responses in Children With Hearing Loss. Trends Hear 2023; 27:23312165231151468. PMID: 36946195; PMCID: PMC10034298; DOI: 10.1177/23312165231151468.
Abstract
Electroencephalography could serve as an objective tool to evaluate hearing aid benefit in infants who are developmentally unable to participate in hearing tests. We investigated whether speech-evoked envelope following responses (EFRs), a type of electroencephalography-based measure, could predict improved audibility with the use of a hearing aid in children with mild-to-severe permanent, mainly sensorineural, hearing loss. In 18 children, EFRs were elicited by six male-spoken band-limited phonemic stimuli (the first formants of /u/ and /i/, the second and higher formants of /u/ and /i/, and the fricatives /s/ and /∫/) presented together as /su∫i/. EFRs were recorded between the vertex and nape while /su∫i/ was presented at 55, 65, and 75 dB SPL using insert earphones in unaided conditions and individually fit hearing aids in aided conditions. EFR amplitude and detectability improved with the use of a hearing aid, and the degree of improvement in EFR amplitude depended on the extent of change in behavioral thresholds between unaided and aided conditions. EFR detectability was primarily influenced by audibility; higher sensation level stimuli had an increased probability of detection. Overall EFR sensitivity in predicting audibility was significantly higher in aided (82.1%) than unaided (66.5%) conditions and did not vary as a function of stimulus or frequency. EFR specificity in ascertaining inaudibility was 90.8%. Aided improvement in EFR detectability was a significant predictor of hearing aid-facilitated change in speech discrimination accuracy. Results suggest that speech-evoked EFRs could be a useful objective tool for predicting hearing aid benefit in children with hearing loss.
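The sensitivity and specificity figures above come from cross-tabulating EFR detection outcomes against behavioural audibility. A sketch of that bookkeeping (the outcome data below are invented purely for illustration, not taken from the study):

```python
def sensitivity_specificity(detected, audible):
    """detected, audible: parallel lists of booleans per stimulus/condition.
    Sensitivity = detections among audible stimuli (true positive rate);
    specificity = non-detections among inaudible stimuli (true negative rate)."""
    tp = sum(d and a for d, a in zip(detected, audible))
    fn = sum((not d) and a for d, a in zip(detected, audible))
    tn = sum((not d) and (not a) for d, a in zip(detected, audible))
    fp = sum(d and (not a) for d, a in zip(detected, audible))
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical outcomes for 10 stimulus/level combinations
detected = [True, True, True, False, True, False, False, True, False, False]
audible  = [True, True, True, True,  True, False, False, False, False, False]
sens, spec = sensitivity_specificity(detected, audible)
print(sens, spec)  # 0.8 0.8
```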
Affiliation(s)
- Vijayalakshmi Easwar
- Department of Communication Sciences and Disorders & Waisman Center, University of Wisconsin–Madison, Madison, USA
- National Acoustic Laboratories, Macquarie University, Sydney, New South Wales, Australia
- David Purcell
- School of Communication Sciences and Disorders, Western University, London, Canada
- National Centre for Audiology, Western University, London, Canada
- Trevor Wright
- Department of Communication Sciences and Disorders & Waisman Center, University of Wisconsin–Madison, Madison, USA
5
Anderson S, DeVries L, Smith E, Goupell MJ, Gordon-Salant S. Rate Discrimination Training May Partially Restore Temporal Processing Abilities from Age-Related Deficits. J Assoc Res Otolaryngol 2022; 23:771-786. PMID: 35948694; PMCID: PMC9365219; DOI: 10.1007/s10162-022-00859-x.
Abstract
The ability to understand speech in complex environments depends on the brain's ability to preserve the precise timing characteristics of the speech signal. Age-related declines in temporal processing may contribute to the older adult's experience of communication difficulty in challenging listening conditions. This study's purpose was to evaluate the effects of rate discrimination training on auditory temporal processing. A double-blind, randomized control design assigned 77 young normal-hearing, older normal-hearing, and older hearing-impaired listeners to one of two treatment groups: experimental (rate discrimination for 100- and 300-Hz pulse trains) and active control (tone detection in noise). All listeners were evaluated during pre- and post-training sessions using perceptual rate discrimination of 100-, 200-, 300-, and 400-Hz band-limited pulse trains and auditory steady-state responses (ASSRs) to the same stimuli. Training generalization was evaluated using several temporal processing measures and sentence recognition tests that included time-compressed and reverberant speech stimuli. Results demonstrated a session × training group interaction for perceptual and ASSR testing to the trained frequencies (100 and 300 Hz), driven by greater improvements in the training group than in the active control group. Further, post-test rate discrimination of the older listeners reached levels that were equivalent to those of the younger listeners at pre-test. Generalization was observed in significant improvement in rate discrimination of untrained frequencies (200 and 400 Hz) and in correlations between performance changes in rate discrimination and sentence recognition of reverberant speech. Further, non-auditory inhibition/attention performance predicted training-related improvement in rate discrimination. Overall, the results demonstrate the potential for auditory training to partially restore temporal processing in older listeners and highlight the role of cognitive function in these gains.
Affiliation(s)
- Samira Anderson
- Department of Hearing and Speech Sciences, University of Maryland, College Park, 20742 USA
- Lindsay DeVries
- Department of Hearing and Speech Sciences, University of Maryland, College Park, 20742 USA
- Edward Smith
- Department of Hearing and Speech Sciences, University of Maryland, College Park, 20742 USA
- Matthew J. Goupell
- Department of Hearing and Speech Sciences, University of Maryland, College Park, 20742 USA
- Sandra Gordon-Salant
- Department of Hearing and Speech Sciences, University of Maryland, College Park, 20742 USA
6
Performance of Statistical Indicators in the Objective Detection of Speech-Evoked Envelope Following Responses. Ear Hear 2022; 43:1669-1677. PMID: 35499293; DOI: 10.1097/aud.0000000000001232.
Abstract
OBJECTIVES: To assess the sensitivity of statistical indicators used for the objective detection of speech-evoked envelope following responses (EFRs) in infants and adults.
DESIGN: Twenty-three adults and 21 infants with normal hearing participated in this study. A modified /susa∫i/ speech token was presented at 65 dB SPL monaurally. Presentation level in infants was corrected using in-ear measurements. EFRs were recorded between the high forehead and ipsilateral mastoid. Statistical post-processing was completed using the F-test, magnitude-squared coherence, Rayleigh test, Rayleigh-Moore test, and Hotelling's T² test. Logistic regression models assessed the sensitivity of each statistical indicator in both infants and adults as a function of testing duration.
RESULTS: The Rayleigh-Moore and Rayleigh tests were the most sensitive statistical indicators for speech-evoked EFR detection in infants. Magnitude-squared coherence and Hotelling's T² also provided clinical benefit for infants in all conditions after ~30 minutes of testing, whereas the F-test failed to detect vowel-elicited EFRs with accuracy greater than chance. In contrast, the F-test was the most sensitive for vowel-elicited response detection in adults in short tests (<10 minutes) and performed comparably to the Rayleigh-Moore and Rayleigh tests at longer test durations. Decreased sensitivity was observed in infants relative to adults across all testing durations and statistical indicators, but the effects were largest for low-frequency stimuli and seemed to be mostly, but not wholly, caused by differences in response amplitude.
CONCLUSIONS: The choice of statistical indicator significantly impacts the sensitivity of speech-evoked EFR detection. In both groups and for all stimuli, the Rayleigh and Rayleigh-Moore tests have high sensitivity. Differences in EFR detection between infants and adults are present regardless of statistical indicator; however, these effects are largest for low-frequency EFR stimuli and for amplitude-based statistical indicators.
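Of the indicators compared above, the Rayleigh test is the simplest to state: it asks whether response phases across sweeps cluster rather than scatter uniformly around the circle. A sketch using the standard large-sample approximation to the p-value (illustrative; this is not the authors' implementation, and the toy phase data are mine):

```python
import numpy as np

def rayleigh_test(phases):
    """Rayleigh test for non-uniformity of phase angles (radians).
    Returns (R, p): R is the mean resultant length and p an approximate
    p-value; a small p means phases cluster, i.e. a response is present."""
    n = len(phases)
    R = np.abs(np.mean(np.exp(1j * np.asarray(phases))))
    z = n * R**2
    # Large-sample approximation (as given in Zar, Biostatistical Analysis)
    p = np.exp(-z) * (1 + (2*z - z**2) / (4*n)
                      - (24*z - 132*z**2 + 76*z**3 - 9*z**4) / (288*n**2))
    return R, float(min(max(p, 0.0), 1.0))

rng = np.random.default_rng(1)
clustered = rng.normal(1.0, 0.3, size=100)    # phases cluster near 1 rad
uniform = rng.uniform(0, 2 * np.pi, size=100) # no preferred phase
R_c, p_c = rayleigh_test(clustered)
R_u, p_u = rayleigh_test(uniform)
# Clustered phases give R near 1 and a tiny p; uniform phases give small R.
print(R_c, p_c)
print(R_u, p_u)
```

Amplitude-based indicators such as the F-test compare spectral power at the response frequency against neighbouring noise bins instead, which is one reason the two families of tests can behave differently at low response amplitudes.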
7
DeVries L, Anderson S, Goupell MJ, Smith E, Gordon-Salant S. Effects of aging and hearing loss on perceptual and electrophysiological measures of pulse-rate discrimination. J Acoust Soc Am 2022; 151:1639. PMID: 35364956; PMCID: PMC8916844; DOI: 10.1121/10.0009399.
Abstract
Auditory temporal processing declines with age, leading to potential deleterious effects on communication. In young normal-hearing listeners, perceptual rate discrimination is rate limited around 300 Hz. It is not known whether this rate limitation is similar in older listeners with hearing loss. The purpose of this study was to investigate age- and hearing-loss-related rate limitations on perceptual rate discrimination, and age- and hearing-loss-related effects on neural representation of these stimuli. Younger normal-hearing, older normal-hearing, and older hearing-impaired listeners performed a pulse-rate discrimination task at rates of 100, 200, 300, and 400 Hz. Neural phase locking was assessed using the auditory steady-state response. Finally, a battery of non-auditory cognitive tests was administered. Younger listeners had better rate discrimination, higher phase locking, and higher cognitive scores compared to both groups of older listeners. Aging, but not hearing loss, diminished neural-rate encoding and perceptual performance; however, there was no relationship between the perceptual and neural measures. Higher cognitive scores were correlated with improved perceptual performance, but not with neural phase locking. This study shows that aging, rather than hearing loss, may be a stronger contributor to poorer temporal processing, and cognitive factors such as processing speed and inhibitory control may be related to these declines.
Affiliation(s)
- Lindsay DeVries
- Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland 20742, USA
- Samira Anderson
- Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland 20742, USA
- Matthew J Goupell
- Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland 20742, USA
- Ed Smith
- Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland 20742, USA
- Sandra Gordon-Salant
- Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland 20742, USA
8
Karawani H, Jenkins K, Anderson S. Neural Plasticity Induced by Hearing Aid Use. Front Aging Neurosci 2022; 14:884917. PMID: 35663566; PMCID: PMC9160992; DOI: 10.3389/fnagi.2022.884917.
Abstract
Age-related hearing loss is one of the most prevalent health conditions in older adults. Although hearing aid technology has advanced dramatically, a large percentage of older adults do not use hearing aids. This untreated hearing loss may accelerate declines in cognitive and neural function and dramatically affect quality of life. Our previous findings have shown that the use of hearing aids improves cortical and cognitive function and offsets subcortical physiological decline. The current study tested the time course of neural adaptation to hearing aids over 6 months and aimed to determine whether early measures of cortical processing predict the capacity for neural plasticity. Seventeen older adults (9 females; mean age = 75 years) with age-related hearing loss and no history of hearing aid use were fit with bilateral hearing aids and tested in six sessions. Neural changes emerged as early as 2 weeks after the initial fitting: increases in N1 amplitudes appeared within 2 weeks of hearing aid use, whereas changes in P2 amplitudes were not observed until 12 weeks. The findings suggest that increased audibility through hearing aids may facilitate rapid increases in cortical detection, but a longer period of exposure to amplified sound may be required to integrate features of the signal and form auditory object representations. The results also showed a relationship between neural responses in earlier sessions and the change observed after 6 months of hearing aid use. This study demonstrates rapid cortical adaptation to increased auditory input. Knowledge of the time course of neural adaptation may aid audiologists in counseling their patients, especially those who struggle to adjust to amplification. A future comparison with a control group that does not use hearing aids but completes the same testing sessions would validate these findings.
Affiliation(s)
- Hanin Karawani
- Department of Communication Sciences and Disorders, Faculty of Social Welfare and Health Sciences, University of Haifa, Haifa, Israel
- Kimberly Jenkins
- Walter Reed National Military Medical Center, Bethesda, MD, United States
- Samira Anderson
- Department of Hearing and Speech Sciences, University of Maryland, College Park, MD, United States
9
Shukla B, Bidelman GM. Enhanced brainstem phase-locking in low-level noise reveals stochastic resonance in the frequency-following response (FFR). Brain Res 2021; 1771:147643. PMID: 34473999; PMCID: PMC8490316; DOI: 10.1016/j.brainres.2021.147643.
Abstract
In nonlinear systems, the inclusion of low-level noise can paradoxically improve signal detection, a phenomenon known as stochastic resonance (SR). SR has been observed in human hearing, whereby sensory thresholds (e.g., signal detection and discrimination) are enhanced in the presence of noise. Here, we asked whether subcortical auditory processing (neural phase locking) shows evidence of SR. We recorded brainstem frequency-following responses (FFRs) in young, normal-hearing listeners to near-electrophysiological-threshold (40 dB SPL) complex tones composed of 10 iso-amplitude harmonics of a 150-Hz fundamental frequency (F0) presented concurrent with low-level noise (+20 to -20 dB SNRs). Though variable and weak across ears, some listeners showed improvement in auditory detection thresholds with subthreshold noise, confirming SR psychophysically. At the neural level, low-level FFRs were initially eradicated by noise (the expected masking effect) but were surprisingly reinvigorated at select masker levels (local maximum near ~35 dB SPL). These data suggest brainstem phase locking to near-threshold periodic stimuli is enhanced at optimal levels of noise, the hallmark of SR. Our findings provide novel evidence for stochastic resonance in the human auditory brainstem and suggest that, under some circumstances, noise can actually benefit both the behavioral and neural encoding of complex sounds.
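The stochastic-resonance effect described above can be reproduced in a toy model: a hard-threshold detector driven by a subthreshold tone passes no signal at all without noise, encodes the tone at moderate noise levels, and loses it again as noise grows. A sketch (all parameters are arbitrary choices for illustration, not values from the study):

```python
import numpy as np

def detector_output_power(signal_amp, noise_sd, threshold=1.0, n=20000, seed=0):
    """Power at the signal frequency in the output of a hard-threshold
    'neuron' that fires whenever its input exceeds the threshold.
    With a subthreshold signal (amp < threshold), the signal alone never
    crosses; added noise lets threshold crossings track the signal."""
    rng = np.random.default_rng(seed)
    t = np.arange(n)
    f = 0.01  # signal frequency, cycles per sample (integer cycles over n)
    x = signal_amp * np.sin(2 * np.pi * f * t) + rng.normal(0, noise_sd, n)
    spikes = (x > threshold).astype(float)
    # Power of the spike train at the signal frequency (coherent demodulation)
    return np.abs(spikes @ np.exp(-2j * np.pi * f * t))**2 / n

powers = {sd: detector_output_power(0.5, sd) for sd in (0.0, 0.3, 0.6, 1.5, 4.0)}
# Zero noise: no output at all. Very low noise: almost nothing gets through.
# Moderate noise: the signal is encoded. Heavy noise: encoding degrades again.
for sd, p in powers.items():
    print(sd, p)
```

The non-monotonic rise and fall of output power with noise level is the same signature the abstract reports for the FFR: masking at most levels, but a local maximum at an intermediate masker level.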
- Bhanu Shukla
- School of Communication Sciences & Disorders, University of Memphis, Memphis, TN, USA; Institute for Intelligent Systems, University of Memphis, Memphis, TN, USA
- Gavin M Bidelman
- School of Communication Sciences & Disorders, University of Memphis, Memphis, TN, USA; Institute for Intelligent Systems, University of Memphis, Memphis, TN, USA; University of Tennessee Health Sciences Center, Department of Anatomy and Neurobiology, Memphis, TN, USA.
10
Multiple Cases of Auditory Neuropathy Illuminate the Importance of Subcortical Neural Synchrony for Speech-in-noise Recognition and the Frequency-following Response. Ear Hear 2021; 43:605-619. PMID: 34619687; DOI: 10.1097/aud.0000000000001122.
Abstract
OBJECTIVES: The role of subcortical synchrony in speech-in-noise (SIN) recognition and the frequency-following response (FFR) was examined in multiple listeners with auditory neuropathy. Although an absent FFR has been documented in one listener with idiopathic neuropathy who has severe difficulty recognizing SIN, several etiologies cause the neuropathy phenotype. Consequently, it is necessary to replicate absent FFRs and concomitant SIN difficulties in patients with multiple sources and clinical presentations of neuropathy to elucidate fully the importance of subcortical neural synchrony for the FFR and SIN recognition.
DESIGN: Case series. Three children with auditory neuropathy (two males with neuropathy attributed to hyperbilirubinemia, one female with a rare missense mutation in the OPA1 gene) were compared to age-matched controls with normal hearing (52 for electrophysiology and 48 for speech recognition testing). Tests included standard audiological evaluations, FFRs, and sentence recognition in noise. The three children with neuropathy had a range of clinical presentations, including moderate sensorineural hearing loss, use of a cochlear implant, and a rapid progressive hearing loss.
RESULTS: Children with neuropathy generally had good speech recognition in quiet but substantial difficulties in noise. These SIN difficulties were somewhat mitigated by a clear speaking style and presenting words in a high semantic context. In the children with neuropathy, FFRs were absent from all tested stimuli. In contrast, age-matched controls had reliable FFRs.
CONCLUSION: Subcortical synchrony is subject to multiple forms of disruption but results in a consistent phenotype of an absent FFR and substantial difficulties recognizing SIN. These results support the hypothesis that subcortical synchrony is necessary for the FFR. Thus, in healthy listeners, the FFR may reflect subcortical neural processes important for SIN recognition.
11
Test-Retest Variability in the Characteristics of Envelope Following Responses Evoked by Speech Stimuli. Ear Hear 2021; 41:150-164. PMID: 31136317; DOI: 10.1097/aud.0000000000000739.
Abstract
OBJECTIVES: The objective of the present study was to evaluate the between-session test-retest variability in the characteristics of envelope following responses (EFRs) evoked by modified natural speech stimuli in young normal-hearing adults.
DESIGN: EFRs from 22 adults were recorded in two sessions, 1 to 12 days apart. EFRs were evoked by the token /susa∫i/ (2.05 sec) presented at 65 dB SPL and recorded from the vertex referenced to the neck. The token /susa∫i/, spoken by a male with an average fundamental frequency (f0) of 98.53 Hz, was of interest because of its potential utility as an objective hearing aid outcome measure. Each vowel was modified to elicit two EFRs simultaneously by lowering the f0 in the first formant while maintaining the original f0 in the higher formants. Fricatives were amplitude-modulated at 93.02 Hz and elicited one EFR each. EFRs evoked by vowels and fricatives were estimated using a Fourier analyzer and discrete Fourier transform, respectively. Detection of EFRs was determined by an F-test. Test-retest variability in EFR amplitude and phase coherence was quantified using correlation, repeated-measures analysis of variance, and the repeatability coefficient. The repeatability coefficient, computed as twice the standard deviation (SD) of test-retest differences, represents the ±95% limits of test-retest variation around the mean difference. Test-retest variability of EFR amplitude and phase coherence was compared using the coefficient of variation, a normalized metric representing the ratio of the SD of repeat measurements to its mean. Consistency in EFR detection outcomes was assessed using the test of proportions.
RESULTS: EFR amplitude and phase coherence did not vary significantly between sessions and were significantly correlated across repeat measurements. The repeatability coefficient for EFR amplitude ranged from 38.5 nV to 45.6 nV for all stimuli except /∫/ (71.6 nV). For any given stimulus, the test-retest differences in EFR amplitude of individual participants were not correlated with their test-retest differences in noise amplitude. However, across stimuli, higher repeatability coefficients of EFR amplitude tended to occur when the group mean noise amplitude and the repeatability coefficient of noise amplitude were higher. The test-retest variability of phase coherence was comparable to that of EFR amplitude in terms of the coefficient of variation, and its repeatability coefficient varied from 0.1 to 0.2, with the highest value (0.2) for /∫/. Mismatches in EFR detection outcomes occurred in 11 of 176 measurements. For each stimulus, the tests of proportions revealed a significantly higher proportion of matched detection outcomes than mismatches.
CONCLUSIONS: Speech-evoked EFRs demonstrated reasonable repeatability across sessions. Of the eight stimuli, the shortest stimulus, /∫/, demonstrated the largest variability in EFR amplitude and phase coherence. The test-retest variability in EFR amplitude could not be explained by test-retest differences in noise amplitude for any of the stimuli, which argues for other sources of variability, one possibility being the modulation of cortical contributions imposed on brainstem-generated EFRs.
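The repeatability coefficient and coefficient of variation defined above are straightforward to compute. A sketch with hypothetical test-retest amplitudes (the values below are invented; the nV scale is chosen only to echo the abstract):

```python
import statistics

def repeatability_coefficient(test, retest):
    """Repeatability coefficient as defined in the abstract:
    twice the SD of test-retest differences, i.e. the +/-95% limits
    of test-retest variation around the mean difference."""
    diffs = [a - b for a, b in zip(test, retest)]
    return 2 * statistics.stdev(diffs)

def coefficient_of_variation(values):
    """SD of repeat measurements divided by their mean (normalized spread),
    which allows comparing variability of metrics on different scales."""
    return statistics.stdev(values) / statistics.fmean(values)

# Hypothetical EFR amplitudes (nV) for 6 listeners, two sessions
session1 = [120.0, 95.0, 140.0, 110.0, 130.0, 105.0]
session2 = [128.0, 90.0, 150.0, 104.0, 122.0, 111.0]
rc = repeatability_coefficient(session1, session2)
cv = coefficient_of_variation(session1)
print(rc, cv)
```

Normalizing by the mean is what lets the abstract compare amplitude (tens of nV) and phase coherence (a unitless 0-1 quantity) on equal footing.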
|
12
|
Xie Z, Stakhovskaya O, Goupell MJ, Anderson S. Aging Effects on Cortical Responses to Tones and Speech in Adult Cochlear-Implant Users. J Assoc Res Otolaryngol 2021; 22:719-740. [PMID: 34231111 DOI: 10.1007/s10162-021-00804-4] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/12/2020] [Accepted: 05/19/2021] [Indexed: 11/29/2022] Open
Abstract
Age-related declines in auditory temporal processing contribute to the speech understanding difficulties of older adults. These temporal processing deficits have been established primarily among acoustic-hearing listeners, in whom peripheral and central contributions are difficult to separate. This study recorded cortical auditory evoked potentials from younger to middle-aged (< 65 years) and older (≥ 65 years) cochlear-implant (CI) listeners, in whom cochlear processing is bypassed, to assess age-related changes in temporal processing. Aging effects were compared to those in age-matched normal-hearing (NH) listeners. Advancing age was associated with prolonged P2 latencies in both CI and NH listeners in response to a 1000-Hz tone or the syllable /da/, and with prolonged N1 latencies in CI listeners in response to the syllable. Advancing age was associated with larger N1 amplitudes in NH listeners. These age-related changes in latency and amplitude were independent of stimulus presentation rate. Further, CI listeners exhibited prolonged N1 and P2 latencies and smaller P2 amplitudes than NH listeners. Thus, aging appears to degrade some aspects of auditory temporal processing even when peripheral-cochlear contributions are largely removed, suggesting that changes beyond the cochlea may contribute to age-related temporal processing deficits.
Affiliation(s)
- Zilong Xie
- Department of Hearing and Speech, University of Kansas Medical Center, Kansas City, KS, 66160, USA.
- Olga Stakhovskaya
- Department of Hearing and Speech Sciences, University of Maryland, College Park, MD, 20742, USA
- Matthew J Goupell
- Department of Hearing and Speech Sciences, University of Maryland, College Park, MD, 20742, USA
- Samira Anderson
- Department of Hearing and Speech Sciences, University of Maryland, College Park, MD, 20742, USA
|
13
|
Montage-related Variability in the Characteristics of Envelope Following Responses. Ear Hear 2021; 42:1436-1440. [PMID: 33900208 DOI: 10.1097/aud.0000000000001018] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
OBJECTIVES The study aimed to compare two electrode montages commonly used for recording speech-evoked envelope following responses (EFRs). DESIGN Twenty-three normal-hearing adults participated in this study. EFRs were elicited by a naturally spoken, modified /susa∫i/ stimulus presented monaurally at 65 dB SPL. EFRs were recorded using two single-channel electrode montages, Cz-nape and Fz-ipsilateral mastoid, where the noninverting and inverting sites were the vertex and nape, and the high forehead and ipsilateral mastoid, respectively. Montage order was counterbalanced across participants. RESULTS EFR amplitude and phase coherence were significantly higher overall in the Cz-nape montage, with no significant differences in noise amplitude. Post hoc testing on montage effects in response amplitude and phase coherence was not significant for individual stimuli. The Cz-nape montage also resulted in a greater number of detections, as determined using Hotelling's T2. CONCLUSIONS Electrode montage influences the estimated characteristics of speech-evoked EFRs.
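The Hotelling's T2 detection approach mentioned above treats the per-epoch sine and cosine components of the response at the stimulus frequency as a bivariate sample and tests whether its mean differs from zero. A minimal numpy sketch with synthetic epochs (the epoch count, sampling rate, and frequency are illustrative assumptions, not the study's recording parameters):

```python
import numpy as np

def hotelling_t2(epochs, fs, freq):
    """One-sample Hotelling's T2 on the per-epoch sine/cosine (real/imag)
    DFT components at `freq`; a large T2 indicates a response is present."""
    n_epochs, n_samples = epochs.shape
    k = int(round(freq * n_samples / fs))        # DFT bin index for `freq`
    coeffs = np.fft.rfft(epochs, axis=1)[:, k]   # one complex value per epoch
    x = np.column_stack([coeffs.real, coeffs.imag])
    mean = x.mean(axis=0)
    cov = np.cov(x, rowvar=False)
    t2 = n_epochs * mean @ np.linalg.solve(cov, mean)
    # Equivalent F statistic with (2, n_epochs - 2) degrees of freedom
    f_stat = (n_epochs - 2) / (2 * (n_epochs - 1)) * t2
    return t2, f_stat

rng = np.random.default_rng(1)
fs, freq, n_samples = 1000, 100, 1000
t = np.arange(n_samples) / fs
signal = 0.5 * np.sin(2 * np.pi * freq * t)
epochs = signal + rng.standard_normal((50, n_samples))   # response present
noise_only = rng.standard_normal((50, n_samples))        # no response
print(hotelling_t2(epochs, fs, freq))
print(hotelling_t2(noise_only, fs, freq))
```

In the response-present case the T2 statistic is orders of magnitude larger than in the noise-only case, which is what a detection criterion exploits.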
|
14
|
Anderson S, Bieber R, Schloss A. Peripheral deficits and phase-locking declines in aging adults. Hear Res 2021; 403:108188. [PMID: 33581668 DOI: 10.1016/j.heares.2021.108188] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 08/29/2020] [Revised: 01/16/2021] [Accepted: 01/20/2021] [Indexed: 10/22/2022]
Abstract
Age-related difficulties in speech understanding may arise from a decrease in the neural representation of speech sounds. A loss of outer hair cells or decrease in auditory nerve fibers may lead to a loss of temporal precision that can affect speech clarity. This study's purpose was to evaluate the peripheral contributors to phase-locking strength, a measure of temporal precision, in recordings to a sustained vowel in 30 younger and 30 older listeners with normal to near normal audiometric thresholds. Thresholds were obtained for pure tones and distortion-product otoacoustic emissions (DPOAEs). Auditory brainstem responses (ABRs) were recorded in quiet and in three levels of continuous white noise (+30, +20, and +10 dB SNR). Absolute amplitudes and latencies of Wave I in quiet and of Wave V across presentation conditions, in addition to the slope of Wave V amplitude and latency changes in noise, were calculated from these recordings. Frequency-following responses (FFRs) were recorded to synthesized /ba/ syllables of two durations, 170 and 260 ms, to determine whether age-related phase-locking deficits are more pronounced for stimuli that are sustained for longer durations. Phase locking was calculated for the early and late regions of the steady-state vowel for both syllables. Group differences were found for nearly every measure except for the slopes of Wave V latency and amplitude changes in noise. We found that outer hair cell function (DPOAEs) contributed to the variance in phase locking. However, the ABR and FFR differences were present after covarying for DPOAEs, suggesting the existence of temporal processing deficits in older listeners that are somewhat independent of outer hair cell function.
Affiliation(s)
- Samira Anderson
- Department of Hearing and Speech Sciences, University of Maryland, College Park, MD 20742, United States.
- Rebecca Bieber
- Department of Hearing and Speech Sciences, University of Maryland, College Park, MD 20742, United States.
- Alanna Schloss
- Department of Hearing and Speech Sciences, University of Maryland, College Park, MD 20742, United States.
|
15
|
Abstract
OBJECTIVES There is increasing interest in using the frequency following response (FFR) to describe the effects of varying different aspects of hearing aid signal processing on brainstem neural representation of speech. To this end, recent studies have examined the effects of filtering on brainstem neural representation of the speech fundamental frequency (f0) in listeners with normal hearing sensitivity by measuring FFRs to low- and high-pass filtered signals. However, the stimuli used in these studies do not reflect the entire range of typical cutoff frequencies used in frequency-specific gain adjustments during hearing aid fitting. Further, there has been limited discussion of the effect of filtering on brainstem neural representation of formant-related harmonics. Here, the effects of filtering on brainstem neural representation of the speech fundamental frequency (f0) and harmonics related to the first formant frequency (F1) were assessed by recording envelope and spectral FFRs to a vowel low-, high-, and band-pass filtered at cutoff frequencies ranging from 0.125 to 8 kHz. DESIGN FFRs were measured to a synthetically generated vowel stimulus /u/ presented in full-bandwidth and filtered conditions: low-pass (experiment 1), high-pass (experiment 2), and band-pass (experiment 3). In experiment 1, FFRs were measured to the vowel /u/ presented in a full-bandwidth condition as well as 11 low-pass filtered conditions (low-pass cutoff frequencies: 0.125, 0.25, 0.5, 0.75, 1, 1.5, 2, 3, 4, 6, and 8 kHz) in 19 adult listeners with normal hearing sensitivity. In experiment 2, FFRs were measured to the same vowel presented in a full-bandwidth condition as well as 10 high-pass filtered conditions (high-pass cutoff frequencies: 0.125, 0.25, 0.5, 0.75, 1, 1.5, 2, 3, 4, and 6 kHz) in 7 adult listeners with normal hearing sensitivity.
In experiment 3, in addition to the full-bandwidth condition, FFRs were measured to the vowel /u/ low-pass filtered at 2 kHz and band-pass filtered between 2 and 4 kHz and between 4 and 6 kHz in 10 adult listeners with normal hearing sensitivity. A fast Fourier transform analysis was conducted to measure the strength of f0 and the F1-related harmonic relative to the noise floor in the brainstem neural responses obtained in the full-bandwidth and filtered stimulus conditions. RESULTS Brainstem neural representation of f0 was reduced when the low-pass filter cutoff frequency was between 0.25 and 0.5 kHz; no differences in f0 strength were noted between conditions when the low-pass filter cutoff frequency was at or above 0.75 kHz. While envelope FFR f0 strength was reduced when the stimulus was high-pass filtered at 6 kHz, there was no effect of high-pass filtering on brainstem neural representation of f0 when the high-pass filter cutoff frequency ranged from 0.125 to 4 kHz. There was a weakly significant global effect of band-pass filtering on brainstem neural phase-locking to f0. A trends analysis indicated that mean f0 magnitude in the brainstem neural response was greater when the stimulus was band-pass filtered between 2 and 4 kHz than when it was band-pass filtered between 4 and 6 kHz, low-pass filtered at 2 kHz, or presented in the full-bandwidth condition. Lastly, neural phase-locking to f0 was reduced or absent in envelope FFRs measured to filtered stimuli that lacked spectral energy above 0.125 kHz or below 6 kHz. Similarly, little to no energy was seen at F1 in spectral FFRs obtained to low-, high-, or band-pass filtered stimuli that did not contain energy in the F1 region. For stimulus conditions that contained energy at F1, the strength of the peak at F1 in the spectral FFR varied little with low-, high-, or band-pass filtering.
CONCLUSIONS Energy at f0 in envelope FFRs may arise due to neural phase-locking to low-, mid-, or high-frequency stimulus components, provided the stimulus envelope is modulated by at least two interacting harmonics. Stronger neural responses at f0 are measured when filtering results in stimulus bandwidths that preserve stimulus energy at F1 and F2. In addition, results suggest that unresolved harmonics may favorably influence f0 strength in the neural response. Lastly, brainstem neural representation of the F1-related harmonic measured in spectral FFRs obtained to filtered stimuli is related to the presence or absence of stimulus energy at F1. These findings add to the existing literature exploring the viability of the FFR as an objective technique to evaluate hearing aid fitting where stimulus bandwidth is altered by design due to frequency-specific gain applied by amplification algorithms.
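The FFT-based measurement of response strength relative to the noise floor can be sketched as follows. The synthetic "response," the bin spacing, and the noise-floor convention (mean of flanking bins) are assumptions for illustration, not the paper's exact analysis parameters:

```python
import numpy as np

def f0_strength_db(response, fs, f0, n_noise_bins=10):
    """Magnitude at the f0 bin relative to the mean magnitude of
    flanking (non-signal) bins, in dB -- a common FFR spectral metric."""
    spectrum = np.abs(np.fft.rfft(response)) / len(response)
    freqs = np.fft.rfftfreq(len(response), d=1.0 / fs)
    f0_bin = int(np.argmin(np.abs(freqs - f0)))
    # Noise floor: average of bins on either side of f0, excluding f0 itself
    lo = spectrum[f0_bin - n_noise_bins : f0_bin]
    hi = spectrum[f0_bin + 1 : f0_bin + 1 + n_noise_bins]
    noise = np.mean(np.concatenate([lo, hi]))
    return 20.0 * np.log10(spectrum[f0_bin] / noise)

# Synthetic 1-second "FFR": a 100-Hz component buried in noise
fs, f0 = 2000, 100
t = np.arange(fs) / fs
rng = np.random.default_rng(0)
resp = np.sin(2 * np.pi * f0 * t) + 0.5 * rng.standard_normal(fs)
print(f0_strength_db(resp, fs, f0))  # positive => f0 above the noise floor
```

The same measurement applied at the F1-related harmonic bin would quantify F1 strength; when the filtered stimulus contains no energy at that frequency, the ratio approaches 0 dB.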
|
16
|
Seol HY, Park S, Ji YS, Hong SH, Moon IJ. Impact of hearing aid noise reduction algorithms on the speech-evoked auditory brainstem response. Sci Rep 2020; 10:10773. [PMID: 32612140 PMCID: PMC7330026 DOI: 10.1038/s41598-020-66970-2] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/13/2018] [Accepted: 05/27/2020] [Indexed: 11/26/2022] Open
Abstract
The purposes of this study were to investigate the neural representation of a speech stimulus in the auditory system of individuals with normal hearing (NH) and hearing aid (HA) users, and to explore the impact of noise reduction algorithms (NR) on the auditory brainstem response to complex sounds (cABR). Twenty NH individuals and 28 HA users completed pure-tone audiometry, the Korean version of the Hearing in Noise Test (K-HINT), and cABR testing. At 0 and +5 dB signal-to-noise ratios (SNRs), the NH group was tested in /da/ only (quiet) and /da/ with white noise (WN) conditions, while the HA group was tested in /da/ only, /da/ WN, /da/ WN NR ON, and /da/ WN NR OFF conditions. Significant differences were observed between the /da/ only and /da/ WN conditions for F0 in both groups, but no SNR effect was observed in either group. Findings of this study are consistent with previous literature in that diminished cABR amplitudes indicate reduced representation of sounds in the auditory system. This is the first study to examine the effect of a specific HA feature on cABR responses.
Affiliation(s)
- Hye Yoon Seol
- Samsung Advanced Institute for Health Sciences & Technology, Sungkyunkwan University, Seoul, Korea
- Hearing Research Laboratory, Samsung Medical Center, Seoul, Korea
- Suyeon Park
- Hearing Research Laboratory, Samsung Medical Center, Seoul, Korea
- Yoon Sang Ji
- Hearing Research Laboratory, Samsung Medical Center, Seoul, Korea
- Sung Hwa Hong
- Hearing Research Laboratory, Samsung Medical Center, Seoul, Korea
- Department of Otolaryngology-Head & Neck Surgery, Samsung Changwon Hospital, Sungkyunkwan University School of Medicine, Changwon, Korea
- Il Joon Moon
- Samsung Advanced Institute for Health Sciences & Technology, Sungkyunkwan University, Seoul, Korea.
- Hearing Research Laboratory, Samsung Medical Center, Seoul, Korea.
- Department of Otolaryngology-Head & Neck Surgery, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Korea.
|
17
|
Van Canneyt J, Wouters J, Francart T. From modulated noise to natural speech: The effect of stimulus parameters on the envelope following response. Hear Res 2020; 393:107993. [PMID: 32535277 DOI: 10.1016/j.heares.2020.107993] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 11/25/2019] [Revised: 04/28/2020] [Accepted: 05/04/2020] [Indexed: 11/28/2022]
Abstract
Envelope following responses (EFRs) can be evoked by a wide range of auditory stimuli, but for many stimulus parameters the effect on EFR strength is not fully understood. This complicates the comparison of earlier studies and the design of new studies. Furthermore, the optimal stimulus parameters are unknown. To help resolve this issue, we investigated the effects of four important stimulus parameters and their interactions on the EFR. Responses were measured in 16 normal-hearing subjects, evoked by stimuli with four levels of stimulus complexity (amplitude-modulated noise, artificial vowels, natural vowels, and vowel-consonant-vowel combinations), three fundamental frequencies (105 Hz, 185 Hz, and 245 Hz), three fundamental frequency contours (upward sweeping, downward sweeping, and flat), and three vowel identities (Flemish /a:/, /u:/, and /i:/). We found that EFRs evoked by artificial vowels were on average 4-6 dB SNR larger than responses evoked by the other stimulus complexities, probably because of (unnaturally) strong higher harmonics. Moreover, response amplitude decreased with fundamental frequency, but response SNR remained largely unaffected. Thirdly, fundamental frequency variation within the stimulus did not impact EFR strength, provided the rate of change remained low (which was not the case for sweeping natural vowels). Finally, the vowel /i:/ appeared to evoke larger response amplitudes than /a:/ and /u:/, but statistical power was too low to confirm this. Vowel-dependent differences in response strength have been suggested to stem from destructive interference between response components. We show how a model of the auditory periphery can simulate these interference patterns and predict response strength. Altogether, the results of this study can guide stimulus choice for future EFR research and practical applications.
Affiliation(s)
- Jana Van Canneyt
- ExpORL, Dept. of Neurosciences, KU Leuven, Herestraat 49 Bus 721, 3000, Leuven, Belgium.
- Jan Wouters
- ExpORL, Dept. of Neurosciences, KU Leuven, Herestraat 49 Bus 721, 3000, Leuven, Belgium.
- Tom Francart
- ExpORL, Dept. of Neurosciences, KU Leuven, Herestraat 49 Bus 721, 3000, Leuven, Belgium.
|
18
|
Effects of Directional Microphone and Noise Reduction on Subcortical and Cortical Auditory-Evoked Potentials in Older Listeners With Hearing Loss. Ear Hear 2020; 41:1282-1293. [PMID: 32058351 DOI: 10.1097/aud.0000000000000847] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
OBJECTIVES Understanding how signal processing influences neural activity in the brain with hearing loss is relevant to the design and evaluation of features intended to alleviate speech-in-noise deficits faced by many hearing aid wearers. Here, we examine whether hearing aid processing schemes that are designed to improve speech-in-noise intelligibility (i.e., directional microphone and noise reduction) also improve electrophysiological indices of speech processing in older listeners with hearing loss. DESIGN The study followed a double-blind within-subjects design. A sample of 19 older adults (8 females; mean age = 73.6 years, range = 56-86 years; 17 experienced hearing aid users) with a moderate to severe sensorineural hearing impairment participated in the experiment. Auditory-evoked potentials associated with processing in cortex (P1-N1-P2) and subcortex (frequency-following response) were measured over the course of two 2-hour visits. Listeners were presented with sequences of the consonant-vowel syllable /da/ in continuous speech-shaped noise at signal to noise ratios (SNRs) of 0, +5, and +10 dB. Speech and noise stimuli were pre-recorded using a Knowles Electronics Manikin for Acoustic Research (KEMAR) head and torso simulator outfitted with hearing aids programmed for each listener's loss. The study aid programs were set according to 4 conditions: (1) omnidirectional microphone, (2) omnidirectional microphone with noise reduction, (3) directional microphone, and (4) directional microphone with noise reduction. For each hearing aid condition, speech was presented from a loudspeaker located at 1 m directly in front of KEMAR (i.e., 0° in the azimuth) at 75 dB SPL and noise was presented from a matching loudspeaker located at 1 m directly behind KEMAR (i.e., 180° in the azimuth). Recorded stimulus sequences were normalized for speech level across conditions and presented to listeners over electromagnetically shielded ER-2 ear-insert transducers. 
Presentation levels were calibrated to match the output of listeners' study aids. RESULTS Cortical components from listeners with hearing loss were enhanced with improving SNR and with use of a directional microphone and noise reduction. On the other hand, subcortical components did not show sensitivity to SNR or microphone mode but did show enhanced encoding of temporal fine structure of speech for conditions where noise reduction was enabled. CONCLUSIONS These results suggest that auditory-evoked potentials may be useful in evaluating the benefit of different noise-mitigating hearing aid features.
|
19
|
BinKhamis G, Elia Forte A, Reichenbach T, O'Driscoll M, Kluk K. Speech Auditory Brainstem Responses in Adult Hearing Aid Users: Effects of Aiding and Background Noise, and Prediction of Behavioral Measures. Trends Hear 2019; 23:2331216519848297. [PMID: 31264513 PMCID: PMC6607564 DOI: 10.1177/2331216519848297] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022] Open
Abstract
Evaluation of patients who are unable to provide behavioral responses on standard clinical measures is challenging due to the lack of standard objective (non-behavioral) clinical audiological measures that assess the outcome of an intervention (e.g., hearing aids). Brainstem responses to short consonant-vowel stimuli (speech-auditory brainstem responses [speech-ABRs]) have been proposed as a measure of subcortical encoding of speech, speech detection, and speech-in-noise performance in individuals with normal hearing. Here, we investigated the potential application of speech-ABRs as an objective clinical outcome measure of speech detection, speech-in-noise detection and recognition, and self-reported speech understanding in 98 adults with sensorineural hearing loss. We compared aided and unaided speech-ABRs, and speech-ABRs in quiet and in noise. In addition, we evaluated whether speech-ABR F0 encoding (obtained from the complex cross-correlation with the 40 ms [da] fundamental waveform) predicted aided behavioral speech recognition in noise or aided self-reported speech understanding. Results showed that (a) aided speech-ABRs had earlier peak latencies, larger peak amplitudes, and larger F0 encoding amplitudes compared to unaided speech-ABRs; (b) the addition of background noise resulted in later F0 encoding latencies but did not have an effect on peak latencies and amplitudes or on F0 encoding amplitudes; and (c) speech-ABRs were not a significant predictor of any of the behavioral or self-report measures. These results show that speech-ABR F0 encoding is not a good predictor of speech-in-noise recognition or self-reported speech understanding with hearing aids. However, our results suggest that speech-ABRs may have potential for clinical application as an objective measure of speech detection with hearing aids.
Affiliation(s)
- Ghada BinKhamis
- Manchester Centre for Audiology and Deafness, School of Health Sciences, Faculty of Biology, Medicine and Health, University of Manchester, Manchester Academic Health Science Centre, Manchester, UK
- Department of Communication and Swallowing Disorders, King Fahad Medical City, Riyadh, Saudi Arabia
- Antonio Elia Forte
- John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA, USA
- Tobias Reichenbach
- Department of Bioengineering, Centre for Neurotechnology, Imperial College London, London, UK
- Martin O'Driscoll
- Manchester Centre for Audiology and Deafness, School of Health Sciences, Faculty of Biology, Medicine and Health, University of Manchester, Manchester Academic Health Science Centre, Manchester, UK
- Manchester Auditory Implant Centre, Manchester University Hospitals NHS Foundation Trust, Manchester, UK
- Karolina Kluk
- Manchester Centre for Audiology and Deafness, School of Health Sciences, Faculty of Biology, Medicine and Health, University of Manchester, Manchester Academic Health Science Centre, Manchester, UK
|
20
|
Roque L, Karawani H, Gordon-Salant S, Anderson S. Effects of Age, Cognition, and Neural Encoding on the Perception of Temporal Speech Cues. Front Neurosci 2019; 13:749. [PMID: 31379494 PMCID: PMC6659127 DOI: 10.3389/fnins.2019.00749] [Citation(s) in RCA: 32] [Impact Index Per Article: 6.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/17/2018] [Accepted: 07/05/2019] [Indexed: 12/11/2022] Open
Abstract
Older adults commonly report difficulty understanding speech, particularly in adverse listening environments. These communication difficulties may exist in the absence of peripheral hearing loss. Older adults, both with normal hearing and with hearing loss, demonstrate temporal processing deficits that affect speech perception. The purpose of the present study is to investigate aging, cognition, and neural processing factors that may lead to deficits on perceptual tasks that rely on phoneme identification based on a temporal cue - vowel duration. A better understanding of the neural and cognitive impairments underlying temporal processing deficits could lead to more focused aural rehabilitation for improved speech understanding for older adults. This investigation was conducted in younger (YNH) and older normal-hearing (ONH) participants who completed three measures of cognitive functioning known to decline with age: working memory, processing speed, and inhibitory control. To evaluate perceptual and neural processing of auditory temporal contrasts, identification functions for the contrasting word-pair WHEAT and WEED were obtained on a nine-step continuum of vowel duration, and frequency-following responses (FFRs) and cortical auditory-evoked potentials (CAEPs) were recorded to the two endpoints of the continuum. Multiple linear regression analyses were conducted to determine the cognitive, peripheral, and/or central mechanisms that may contribute to perceptual performance. YNH participants demonstrated higher cognitive functioning on all three measures compared to ONH participants. The slope of the identification function was steeper in YNH than in ONH participants, suggesting a clearer distinction between the contrasting words in the YNH participants. FFRs revealed better response waveform morphology and more robust phase-locking in YNH compared to ONH participants. 
ONH participants also exhibited earlier latencies for CAEP components compared to the YNH participants. Linear regression analyses revealed that cortical processing significantly contributed to the variance in perceptual performance in the WHEAT/WEED identification functions. These results suggest that reduced neural precision contributes to age-related speech perception difficulties that arise from temporal processing deficits.
Affiliation(s)
- Lindsey Roque
- Department of Hearing and Speech Sciences, University of Maryland, College Park, College Park, MD, United States
- Hanin Karawani
- Department of Hearing and Speech Sciences, University of Maryland, College Park, College Park, MD, United States
- Department of Communication Sciences and Disorders, University of Haifa, Haifa, Israel
- Sandra Gordon-Salant
- Department of Hearing and Speech Sciences, University of Maryland, College Park, College Park, MD, United States
- Samira Anderson
- Department of Hearing and Speech Sciences, University of Maryland, College Park, College Park, MD, United States
|
21
|
Predicting and Weighting the Factors Affecting Workers' Hearing Loss Based on Audiometric Data Using C5 Algorithm. Ann Glob Health 2019; 85. [PMID: 31225964 PMCID: PMC6634330 DOI: 10.5334/aogh.2522] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/29/2022] Open
Abstract
Introduction: With the extensive spread of industrialization worldwide, noise exposure is becoming more prevalent in industrial settings. The most important and well-established harmful effect of noise is hearing loss, both permanent and temporary. Objective: This study aimed to use the C5 algorithm to determine the weights of the factors affecting workers' hearing loss based on audiometric data. Methods: This cross-sectional, descriptive-analytical study was conducted in 2018 in a mining industry in southeastern Iran. Workers were divided into three groups exposed to different sound pressure levels (one control group and two case groups). Audiometry was conducted for each group of 50 persons, for a total of 150 subjects. The stages of this study were: 1) selecting (predictive) factors to examine and weight; 2) conducting audiometry for both ears; 3) calculating the permanent hearing loss in each ear and in both ears; 4) classifying the types of hearing loss; and 5) determining the weights of the factors affecting hearing loss and their classification based on the C5 algorithm, and determining the error and accuracy rate of each model. The C5 algorithm and IBM SPSS Modeler 18.0 were used to assess the factors affecting workers' hearing loss; SPSS v18 was used for the linear regression and paired t-test analyses. Results: In the first model (SPL <70 dBA), the 8 kHz frequency had the greatest effect, with a weight of 31%; work experience and the 250 Hz frequency had the least effect, with a weight of 3% each; and the accuracy of the model was 100%.
In the second model (SPL 70–80 dBA), the 8 kHz frequency had the greatest effect, with a weight of 21%; the 250 Hz frequency and work experience had the least effect, with a weight of 7% each; and the accuracy of the model was 100%. In the third model (SPL >85 dBA), the 4 kHz frequency had the greatest effect, with a weight of 31%; work experience had the least effect, with a weight of 1%; and the accuracy of the model was 94%. In the fourth model, the 4 kHz frequency had the greatest effect, with a weight of 22%; the 250 Hz frequency and age had the least effect, with a weight of 8% each; and the accuracy of this model was 99.05%. Conclusions: In weighting the factors affecting hearing loss with the C5 algorithm, the 4 kHz frequency was predicted to have a high weight and effect on hearing-loss changes. Given the high accuracy obtained in this modeling, the C5 algorithm is a suitable and powerful tool for predicting and modeling hearing loss.
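C5.0 grows decision trees by repeatedly splitting on the attribute with the greatest information gain (entropy reduction), and the attribute "weights" reported above reflect each predictor's contribution to the resulting model. A minimal information-gain computation in Python (toy data invented for illustration, not the study's; C5.0 itself is a separate commercial/R implementation):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label sequence, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(feature, labels):
    """Entropy reduction from splitting `labels` by `feature` values --
    the criterion C5.0 uses to rank candidate attributes."""
    n = len(labels)
    subsets = {}
    for f, y in zip(feature, labels):
        subsets.setdefault(f, []).append(y)
    remainder = sum(len(sub) / n * entropy(sub) for sub in subsets.values())
    return entropy(labels) - remainder

# Toy worker data: hearing-loss class vs. two binned predictors
loss = ["yes", "yes", "no", "no", "yes", "no", "yes", "no"]
exposure = ["high", "high", "low", "low", "high", "low", "high", "high"]
age_band = ["old", "young", "old", "young", "old", "young", "old", "young"]
print(information_gain(exposure, loss))
print(information_gain(age_band, loss))
```

In this toy example the exposure attribute yields a larger information gain than the age band, so a C5.0-style tree would split on it first; the study's per-frequency weights are the analogous ranking over its audiometric predictors.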
|
22
|
Roque L, Gaskins C, Gordon-Salant S, Goupell MJ, Anderson S. Age Effects on Neural Representation and Perception of Silence Duration Cues in Speech. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2019; 62:1099-1116. [PMID: 31026197 PMCID: PMC6802877 DOI: 10.1044/2018_jslhr-h-ascc7-18-0076] [Citation(s) in RCA: 25] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/05/2018] [Revised: 06/26/2018] [Accepted: 08/12/2018] [Indexed: 06/09/2023]
Abstract
Purpose Degraded temporal processing associated with aging may be a contributing factor to older adults' hearing difficulties, especially in adverse listening environments. This degraded processing may affect the ability to distinguish between words based on temporal duration cues. The current study investigates the effects of aging and hearing loss on cortical and subcortical representation of temporal speech components and on the perception of silent interval duration cues in speech. Method Identification functions for the words DISH and DITCH were obtained on a 7-step continuum of silence duration (0-60 ms) prior to the final fricative in participants who are younger with normal hearing (YNH), older with normal hearing (ONH), and older with hearing impairment (OHI). Frequency-following responses and cortical auditory-evoked potentials were recorded to the 2 end points of the continuum. Auditory brainstem responses to clicks were obtained to verify neural integrity and to compare group differences in auditory nerve function. A multiple linear regression analysis was conducted to determine the peripheral or central factors that contributed to perceptual performance. Results ONH and OHI participants required longer silence durations to identify DITCH than did YNH participants. Frequency-following responses showed reduced phase locking and poorer morphology, and cortical auditory-evoked potentials showed prolonged latencies in ONH and OHI participants compared with YNH participants. No group differences were noted for auditory brainstem response Wave I amplitude or Wave V/I ratio. After accounting for the possible effects of hearing loss, linear regression analysis revealed that both midbrain and cortical processing contributed to the variance in the DISH-DITCH perceptual identification functions. Conclusions These results suggest that age-related deficits in the ability to encode silence duration cues may be a contributing factor in degraded speech perception. 
In particular, degraded response morphology relates to performance on perceptual tasks based on silence duration contrasts between words.
Affiliation(s)
- Lindsey Roque, Department of Hearing and Speech Sciences, University of Maryland, College Park
- Casey Gaskins, Department of Hearing and Speech Sciences, University of Maryland, College Park
- Sandra Gordon-Salant, Department of Hearing and Speech Sciences and Neuroscience and Cognitive Science Program, University of Maryland, College Park
- Matthew J. Goupell, Department of Hearing and Speech Sciences and Neuroscience and Cognitive Science Program, University of Maryland, College Park
- Samira Anderson, Department of Hearing and Speech Sciences and Neuroscience and Cognitive Science Program, University of Maryland, College Park
23
Rudner M, Seeto M, Keidser G, Johnson B, Rönnberg J. Poorer Speech Reception Threshold in Noise Is Associated With Lower Brain Volume in Auditory and Cognitive Processing Regions. J Speech Lang Hear Res 2019; 62:1117-1130. [PMID: 31026199] [DOI: 10.1044/2018_jslhr-h-ascc7-18-0142]
Abstract
Purpose Hearing loss is associated with changes in brain volume in regions supporting auditory and cognitive processing. The purpose of this study was to determine whether there is a systematic association between hearing ability and brain volume in cross-sectional data from a large nonclinical cohort of middle-aged adults available from the UK Biobank Resource (http://www.ukbiobank.ac.uk). Method We performed a set of regression analyses to determine the association between speech reception threshold in noise (SRTn) and global brain volume as well as predefined regions of interest (ROIs) based on T1-weighted structural images, controlling for hearing-related comorbidities and cognition as well as demographic factors. In a second set of analyses, we additionally controlled for hearing aid (HA) use. We predicted statistically significant associations globally and in ROIs including auditory and cognitive processing regions, possibly modulated by HA use. Results Whole-brain gray matter volume was significantly lower for individuals with poorer SRTn. Furthermore, the volume of nine predicted ROIs including both auditory and cognitive processing regions was lower for individuals with poorer SRTn. The greatest percentage difference (-0.57%) in ROI volume relating to a 1 SD worsening of SRTn was found in the left superior temporal gyrus. HA use did not substantially modulate the pattern of association between brain volume and SRTn. Conclusions In a large middle-aged nonclinical population, poorer hearing ability is associated with lower brain volume globally as well as in cortical and subcortical regions involved in auditory and cognitive processing, but there was no conclusive evidence that this effect is moderated by HA use. This pattern of results supports the notion that poor hearing leads to reduced volume in brain regions recruited during speech understanding under challenging conditions. These findings should be tested in future longitudinal, experimental studies.
Supplemental Material https://doi.org/10.23641/asha.7949357.
Affiliation(s)
- Mary Rudner, Linnaeus Centre HEAD, Department of Behavioural Sciences and Learning, Linköping University, Sweden
- Mark Seeto, National Acoustic Laboratories and the HEARing CRC, Sydney, New South Wales, Australia
- Gitte Keidser, National Acoustic Laboratories and the HEARing CRC, Sydney, New South Wales, Australia
- Blake Johnson, Department of Cognitive Science, Macquarie University, Sydney, New South Wales, Australia
- Jerker Rönnberg, Linnaeus Centre HEAD, Department of Behavioural Sciences and Learning, Linköping University, Sweden
24
Karawani H, Jenkins K, Anderson S. Restoration of sensory input may improve cognitive and neural function. Neuropsychologia 2018; 114:203-213. [PMID: 29729278] [PMCID: PMC5988995] [DOI: 10.1016/j.neuropsychologia.2018.04.041]
Abstract
Age-related hearing loss is one of the most prevalent health conditions among the elderly. Hearing loss may lead to social isolation, depression, and cognitive decline in older adults. The mechanistic basis for the association between hearing loss and decreased cognitive function remains unknown as does the potential for improving cognition through hearing rehabilitation. To that end, we asked whether the restoration of sensory input through the use of hearing aids would improve cognitive and auditory neural function. We compared a group of first-time hearing aid users with a hearing-matched control group after a period of six months. The use of hearing aids enhanced working memory performance and increased cortical response amplitudes. Neurophysiologic changes correlated with working memory changes, suggesting a mechanism for decreased cognitive function with hearing loss. These results suggest a neural mechanism for the sensory-cognitive connection and underscore the importance of providing auditory rehabilitation for individuals with age-related hearing loss to improve cognitive and neural function. Our findings of improved cognitive function with hearing aid use may lead to increased adoption of hearing loss remedies.
Affiliation(s)
- Hanin Karawani, Department of Hearing and Speech Sciences, University of Maryland, College Park, MD 20742, USA
- Kimberly Jenkins, Walter Reed National Military Medical Center, 4494 North Palmer Road, Bethesda, MD 20889, USA
- Samira Anderson, Department of Hearing and Speech Sciences and Neuroscience and Cognitive Science Program, University of Maryland, College Park, MD 20742, USA
25
Karawani H, Jenkins KA, Anderson S. Neural and behavioral changes after the use of hearing aids. Clin Neurophysiol 2018; 129:1254-1267. [PMID: 29677689] [DOI: 10.1016/j.clinph.2018.03.024]
Abstract
OBJECTIVE Individuals with age-related hearing loss (ARHL) can restore some of the lost auditory function with the use of hearing aids (HAs). However, the physiological mechanisms that underlie how the brain changes with exposure to amplified sound through the use of HAs remain unknown. We aimed to examine behavioral and physiological changes induced by HAs. METHODS Thirty-five older adults with moderate ARHL and no history of HA use were fitted with HAs, tested in aided and unaided conditions, and divided into experimental and control groups. The experimental group used HAs for a period of six months. The control group did not use HAs during this period but was given the opportunity to use them after the completion of the study. Both groups underwent testing protocols six months apart. Outcome measures included behavioral measures (speech-in-noise tests, self-assessment questionnaires) and electrophysiological brainstem recordings (frequency-following responses) to the speech syllable /ga/ in two quiet conditions and in six-talker babble noise. RESULTS The experimental group reported subjective benefits on self-assessment questionnaires. Significant physiological changes were observed in the experimental group, specifically a reduction in fundamental frequency magnitude, while no change was observed in controls, yielding a significant time × group interaction. Furthermore, peak latencies remained stable in the experimental group but were significantly delayed in the control group after six months. Significant correlations between behavioral and physiological changes were also observed. CONCLUSIONS The findings suggest that HAs may alter subcortical processing and offset neural timing delays; however, further investigation is needed to understand cortical changes and HA effects on cognitive processing.
SIGNIFICANCE The findings of the current study provide evidence for clinicians that the use of HAs may prevent further loss of auditory function resulting from sensory deprivation.
Affiliation(s)
- Hanin Karawani, Department of Hearing and Speech Sciences, University of Maryland, College Park, MD, USA
- Kimberly A Jenkins, Department of Hearing and Speech Sciences, University of Maryland, College Park, MD, USA; Walter Reed National Military Medical Center, Bethesda, MD, USA
- Samira Anderson, Department of Hearing and Speech Sciences and Neuroscience and Cognitive Science Program, University of Maryland, College Park, MD, USA