1
Ellis GM, Crukley J, Souza PE. The Effects of Signal to Noise Ratio, T60, Wide-Dynamic Range Compression Speed, and Digital Noise Reduction in a Virtual Restaurant Setting. Ear Hear 2024;45:760-774. PMID: 38254265. PMCID: PMC11141238. DOI: 10.1097/aud.0000000000001469.
Abstract
OBJECTIVES Hearing aid processing in realistic listening environments is difficult to study effectively. Often the environment is unpredictable or unknown, such as in wearable aid trials with subjective report by the wearer. Some laboratory experiments create listening environments to exert tight experimental control, but those environments are often limited by physical space, a small number of sound sources, or room absorptive properties. Simulation techniques bridge this gap by providing greater experimental control over listening environments, effectively bringing aspects of the real world into the laboratory. This project used simulation to study the effects of wide-dynamic range compression (WDRC) and digital noise reduction (DNR) on speech intelligibility in a reverberant environment with six spatialized competing talkers. The primary objective of this study was to determine the efficacy of WDRC and DNR in a complex listening environment using virtual auditory space techniques. DESIGN Participants of greatest interest were listeners with hearing impairment. A group of listeners with clinically normal hearing was included to assess the effects of the simulation absent the complex effects of hearing loss. Virtual auditory space techniques were used to simulate a small restaurant listening environment with two different reverberation times (0.8 and 1.8 sec) across a range of signal-to-noise ratios (SNRs; -8.5 to 11.5 dB). Six spatialized competing talkers were included to further enhance realism. A hearing aid simulation was used to examine the degree to which speech intelligibility was affected by slow and fast WDRC in conjunction with the presence or absence of DNR. The WDRC and DNR settings were chosen to be reasonable estimates of hearing aids currently available to consumers.
RESULTS A WDRC × DNR × Hearing Status interaction was observed, such that DNR was beneficial for speech intelligibility when combined with fast WDRC speeds, but DNR was detrimental to speech intelligibility when WDRC speeds were slow. The pattern of the WDRC × DNR interaction was observed for both listener groups. Significant main effects of reverberation time and SNR were observed, indicating better performance with lower reverberation times and more positive SNRs. CONCLUSIONS DNR reduced low-amplitude noise before WDRC amplified the low-intensity portions of the signal, negating one potential downside of fast WDRC and leading to an improvement in speech intelligibility in this simulation. These data suggest that, in some real-world environments that include both reverberation and noise, older listeners with hearing impairment may find speech to be more intelligible if DNR is activated when the hearing aid has fast compression time constants. Additional research is needed to determine the appropriate DNR strength and to confirm results in wearable hearing aids and a wider range of listening environments.
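The compression-speed mechanism invoked in the conclusions can be illustrated with a minimal single-band WDRC sketch. All parameters below (threshold, ratio, time constants) are hypothetical illustrations, not the settings of the study's hearing aid simulation; the point is only that the attack/release time constants determine how quickly gain reacts to level changes.

```python
import numpy as np

def wdrc_gain_db(signal, fs, threshold_db=50.0, ratio=3.0,
                 attack_ms=5.0, release_ms=50.0):
    """Single-band WDRC sketch: envelope-follower level estimate -> compressive gain.

    Short attack/release time constants ("fast" WDRC) track the envelope
    closely; long constants ("slow" WDRC) smooth over short-term level dips.
    Illustrative only; real hearing aids are multiband and more elaborate.
    """
    a_att = np.exp(-1.0 / (fs * attack_ms / 1000.0))
    a_rel = np.exp(-1.0 / (fs * release_ms / 1000.0))
    env = np.zeros_like(signal, dtype=float)
    level = 1e-6
    for i, x in enumerate(np.abs(signal)):
        a = a_att if x > level else a_rel  # attack when rising, release when falling
        level = a * level + (1.0 - a) * x
        env[i] = level
    level_db = 20.0 * np.log10(np.maximum(env, 1e-6)) + 94.0  # arbitrary calibration
    # Above the kneepoint, output level grows at 1/ratio the input rate.
    return np.where(level_db > threshold_db,
                    (threshold_db - level_db) * (1.0 - 1.0 / ratio),
                    0.0)
```

With short time constants the gain rebounds in brief dips between speech segments, which is where noise attenuated by DNR would otherwise be re-amplified; with long time constants the gain changes slowly and largely preserves the signal's short-term level contrasts.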
Affiliation(s)
- Gregory M Ellis
  - Department of Communication Sciences and Disorders, Northwestern University, Evanston, Illinois, USA
- Jeff Crukley
  - Data Science and Statistics, Toronto, Ontario, Canada
  - Department of Speech-Language Pathology, University of Toronto, Toronto, Ontario, Canada
  - Department of Psychology, Neuroscience, and Behavior, McMaster University, Hamilton, Ontario, Canada
- Pamela E Souza
  - Department of Communication Sciences and Disorders, Northwestern University, Evanston, Illinois, USA
  - Knowles Hearing Center, Evanston, Illinois, USA
2
Roman AM, Pratt SR, Zhen LQ. Threshold Estimation and Speech Perception Under Hearing Loss Simulation: Examination of the Immersive Hearing Loss and Prosthesis Simulator. Am J Audiol 2023:1-8. PMID: 37956704. DOI: 10.1044/2023_aja-23-00155.
Abstract
PURPOSE Hearing loss simulation (HLS) has been recommended for clinical teaching and counseling of patients and their families, so that they can experience hearing impairment. However, few validated procedures for simulating hearing loss are available to instructors and practicing clinicians. The aim of this study was to assess the accuracy of the Immersive Hearing Loss and Prosthesis Simulator (I-HeLPS) in reducing hearing sensitivity and word recognition, to determine its adequacy for educational and clinical use. METHOD Thirty-seven young adults with normal hearing completed hearing threshold and word recognition testing under normal and simulated hearing losses. The accuracy of the nominal hearing threshold settings within the I-HeLPS software was assessed with behavioral detection of frequency-modulated pure tones presented in a calibrated sound field while listeners wore I-HeLPS headphones. The impact of the HLSs on speech perception was measured using the California Consonant Test. Hearing thresholds, word identification accuracy, and sound confusions were compared across listening conditions. RESULTS Hearing thresholds increased systematically with worse simulated hearing loss. Performance on the California Consonant Test worsened, and the number of phoneme confusions increased, with simulated hearing loss severity. Most of the confusions were place confusions with near neighbors, and manner confusions increased as a function of increasing severity of simulated hearing loss. CONCLUSIONS The I-HeLPS accurately elevated hearing thresholds with increasing HLS severity and impacted word recognition in a manner consistent with sensorineural hearing loss. The simulations were considered reasonable for educational and clinical purposes. SUPPLEMENTAL MATERIAL https://doi.org/10.23641/asha.24520966
Affiliation(s)
- Aaron M Roman
  - Department of Audiology, Osborne College of Audiology, Salus University, Elkins Park, PA
  - Department of Communication Science and Disorders, University of Pittsburgh, PA
- Sheila R Pratt
  - Department of Communication Science and Disorders, University of Pittsburgh, PA
  - VA Pittsburgh Healthcare System, PA
- Leslie Q Zhen
  - Department of Communication Science and Disorders, University of Pittsburgh, PA
  - VA Pittsburgh Healthcare System, PA
3
Anderson S, DeVries L, Smith E, Goupell MJ, Gordon-Salant S. Rate Discrimination Training May Partially Restore Temporal Processing Abilities from Age-Related Deficits. J Assoc Res Otolaryngol 2022;23:771-786. PMID: 35948694. PMCID: PMC9365219. DOI: 10.1007/s10162-022-00859-x.
Abstract
The ability to understand speech in complex environments depends on the brain's ability to preserve the precise timing characteristics of the speech signal. Age-related declines in temporal processing may contribute to the older adult's experience of communication difficulty in challenging listening conditions. This study's purpose was to evaluate the effects of rate discrimination training on auditory temporal processing. A double-blind, randomized controlled design assigned 77 young normal-hearing, older normal-hearing, and older hearing-impaired listeners to one of two treatment groups: experimental (rate discrimination for 100- and 300-Hz pulse trains) and active control (tone detection in noise). All listeners were evaluated during pre- and post-training sessions using perceptual rate discrimination of 100-, 200-, 300-, and 400-Hz band-limited pulse trains and auditory steady-state responses (ASSRs) to the same stimuli. Training generalization was evaluated using several temporal processing measures and sentence recognition tests that included time-compressed and reverberant speech stimuli. Results demonstrated a session × training group interaction for perceptual and ASSR testing at the trained frequencies (100 and 300 Hz), driven by greater improvements in the training group than in the active control group. Further, post-test rate discrimination of the older listeners reached levels that were equivalent to those of the younger listeners at pre-test. Generalization was observed in significant improvement in rate discrimination of untrained frequencies (200 and 400 Hz) and in correlations between performance changes in rate discrimination and sentence recognition of reverberant speech. Further, non-auditory inhibition/attention performance predicted training-related improvement in rate discrimination.
Overall, the results demonstrate the potential for auditory training to partially restore temporal processing in older listeners and highlight the role of cognitive function in these gains.
Affiliation(s)
- Samira Anderson
  - Department of Hearing and Speech Sciences, University of Maryland, College Park, MD 20742, USA
- Lindsay DeVries
  - Department of Hearing and Speech Sciences, University of Maryland, College Park, MD 20742, USA
- Edward Smith
  - Department of Hearing and Speech Sciences, University of Maryland, College Park, MD 20742, USA
- Matthew J. Goupell
  - Department of Hearing and Speech Sciences, University of Maryland, College Park, MD 20742, USA
- Sandra Gordon-Salant
  - Department of Hearing and Speech Sciences, University of Maryland, College Park, MD 20742, USA
4
Humes LE. Factors Underlying Individual Differences in Speech-Recognition Threshold (SRT) in Noise Among Older Adults. Front Aging Neurosci 2021;13:702739. PMID: 34290600. PMCID: PMC8287901. DOI: 10.3389/fnagi.2021.702739.
Abstract
Many older adults have difficulty understanding speech in noisy backgrounds. In this study, we examined peripheral auditory, higher-level auditory, and cognitive factors that may contribute to such difficulties. A convenience sample of 137 volunteer older adults (90 women, 47 men), ranging in age from 47 to 94 years (M = 69.2, SD = 10.1 years), completed a large battery of tests. Auditory tests included both clinical and psychophysical measures of pure-tone threshold, as well as two measures of gap-detection threshold and four measures of temporal-order identification. The latter included two monaural and two dichotic listening conditions. In addition, cognition was assessed using the complete Wechsler Adult Intelligence Scale-3rd Edition (WAIS-III). Two monaural measures of speech-recognition threshold (SRT) in noise, the QuickSIN and the WIN, were obtained from each ear at relatively high presentation levels of 93 or 103 dB SPL to minimize audibility concerns. Group data, both aggregate and by age decade, were evaluated initially to allow comparison to data in the literature. Next, following the application of principal-components factor analysis for data reduction, individual differences in speech-recognition-in-noise performance were examined using multiple-linear-regression analyses. Excellent fits were obtained, accounting for 60-77% of the total variance, with most of that variance accounted for by the audibility of the speech and noise stimuli and the severity of hearing loss, and the balance primarily associated with cognitive function.
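The analysis strategy summarized above, data reduction followed by multiple linear regression with R² as variance accounted for, can be sketched on synthetic data. The predictor names and coefficients below are invented placeholders, not the study's factor scores:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 137  # sample size matching the study; the data themselves are synthetic

# Hypothetical standardized predictors (stand-ins for factor scores)
audibility = rng.normal(size=n)
severity = rng.normal(size=n)
cognition = rng.normal(size=n)

# Synthetic SRT-in-noise outcome dominated by audibility/severity, as in the abstract
srt = 0.7 * audibility + 0.5 * severity + 0.2 * cognition + 0.3 * rng.normal(size=n)

# Ordinary least squares with an intercept column
X = np.column_stack([np.ones(n), audibility, severity, cognition])
beta, *_ = np.linalg.lstsq(X, srt, rcond=None)
resid = srt - X @ beta
r2 = 1.0 - resid.var() / srt.var()  # proportion of variance accounted for
print(f"R^2 = {r2:.2f}")
```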
Affiliation(s)
- Larry E. Humes
  - Department of Speech, Language, and Hearing Sciences, Indiana University, Bloomington, IN, United States
5
Burton JA, Mackey CA, MacDonald KS, Hackett TA, Ramachandran R. Changes in audiometric threshold and frequency selectivity correlate with cochlear histopathology in macaque monkeys with permanent noise-induced hearing loss. Hear Res 2020;398:108082. PMID: 33045479. PMCID: PMC7769151. DOI: 10.1016/j.heares.2020.108082.
Abstract
Exposure to loud noise causes damage to the inner ear, including but not limited to outer and inner hair cells (OHCs and IHCs) and IHC ribbon synapses. This cochlear damage impairs auditory processing and increases audiometric thresholds (noise-induced hearing loss, NIHL). However, the exact relationship between the perceptual consequences of NIHL and its underlying cochlear pathology is poorly understood. This study used a nonhuman primate model of NIHL to relate changes in frequency selectivity and audiometric thresholds to indices of cochlear histopathology. Three macaques (one Macaca mulatta and two Macaca radiata) were trained to detect tones in quiet and in noises that were spectrally notched around the tone frequency. Audiograms were derived from tone thresholds in quiet; perceptual auditory filters were derived from tone thresholds in notched-noise maskers using the rounded-exponential fit. Data were obtained before and after a four-hour exposure to a 50-Hz-wide band of noise centered at 2 kHz at 141 or 146 dB SPL. Noise exposure caused permanent audiometric threshold shifts and broadening of auditory filters at and above 2 kHz, with greater changes observed for the 146-dB-exposed monkeys. The normalized bandwidth of the perceptual auditory filters was strongly correlated with audiometric threshold at each tone frequency. While changes in audiometric threshold and perceptual auditory filter widths were primarily determined by the extent of OHC survival, additional variability was explained by including interactions among OHC, IHC, and ribbon synapse survival. This is the first study to provide within-subject comparisons of auditory filter bandwidths in an animal model of NIHL and to correlate these NIHL-related perceptual changes with cochlear histopathology. These results expand the foundations for ongoing investigations of the neural correlates of NIHL-related perceptual changes.
Affiliation(s)
- Jane A Burton
  - Neuroscience Graduate Program, Vanderbilt University, Nashville, TN 37235, United States
- Chase A Mackey
  - Neuroscience Graduate Program, Vanderbilt University, Nashville, TN 37235, United States
- Kaitlyn S MacDonald
  - Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN 37232, United States
- Troy A Hackett
  - Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN 37232, United States
- Ramnarayan Ramachandran
  - Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN 37232, United States
6
Burton JA, Dylla ME, Ramachandran R. Frequency selectivity in macaque monkeys measured using a notched-noise method. Hear Res 2017;357:73-80. PMID: 29223930. DOI: 10.1016/j.heares.2017.11.012.
Abstract
The auditory system is thought to process complex sounds through overlapping bandpass filters. Frequency selectivity as estimated by auditory filters has been well quantified in humans and other mammalian species using behavioral and physiological methodologies, but little work has been done to examine frequency selectivity in nonhuman primates. In particular, knowledge of macaque frequency selectivity would help address the recent controversy over the sharpness of cochlear tuning in humans relative to other animal species. The purpose of our study was to investigate the frequency selectivity of macaque monkeys using a notched-noise paradigm. Four macaques were trained to detect tones in noises that were spectrally notched symmetrically and asymmetrically around the tone frequency. Masked tone thresholds decreased with increasing notch width. Auditory filter shapes were estimated using a rounded-exponential function. Macaque auditory filters were symmetric at low noise levels and broader and more asymmetric at higher noise levels, with broader low-frequency and steeper high-frequency tails. Macaque filter bandwidths (BW_3dB) increased with increasing center frequency, similar to humans and other species. Estimates of equivalent rectangular bandwidth (ERB) and filter quality factor (Q_ERB) suggest macaque filters are broader than human filters. These data shed further light on frequency selectivity across species and serve as a baseline for studies of neuronal frequency selectivity and of frequency selectivity in subjects with hearing loss.
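The rounded-exponential (roex) filter referenced here has a standard closed form. As a sketch, with a hypothetical center frequency and slope rather than any value fitted in the study, the symmetric roex(p) weighting and its equivalent rectangular bandwidth, ERB = 4·fc/p, can be checked numerically:

```python
import numpy as np

def roex_weight(f, fc, p):
    """Symmetric rounded-exponential roex(p) filter weight at frequency f."""
    g = np.abs(f - fc) / fc  # normalized deviation from the center frequency
    return (1.0 + p * g) * np.exp(-p * g)

def erb_roex(fc, p):
    """Equivalent rectangular bandwidth of a symmetric roex(p) filter: 4*fc/p."""
    return 4.0 * fc / p

fc, p = 2000.0, 25.0  # hypothetical center frequency (Hz) and slope parameter
f = np.linspace(0.0, 2.0 * fc, 200001)
df = f[1] - f[0]
# Numerical ERB: area under the weight curve divided by its peak (peak = 1 at fc)
numeric_erb = roex_weight(f, fc, p).sum() * df
print(numeric_erb, erb_roex(fc, p))  # both close to 320 Hz
```

The filter quality factor follows directly as Q_ERB = fc / ERB (6.25 for these hypothetical values); larger p gives sharper tuning.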
Affiliation(s)
- Jane A Burton
  - Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN 37212, United States
- Margit E Dylla
  - Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN 37212, United States
- Ramnarayan Ramachandran
  - Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN 37212, United States
7
Nishi K, Trevino AC, Rosado Rogers L, García P, Neely ST. Effects of Simulated Hearing Loss on Bilingual Children's Consonant Recognition in Noise. Ear Hear 2017;38:e292-e304. PMID: 28353522. DOI: 10.1097/aud.0000000000000428.
Abstract
OBJECTIVE This study investigated the possible impact of simulated hearing loss on speech perception in Spanish-English bilingual children. To avoid confound between individual differences in hearing-loss configuration and linguistic experience, threshold-elevating noise simulating a mild-to-moderate sloping hearing loss was used with normal-hearing listeners. The hypotheses were that: (1) bilingual children can perform similarly to English-speaking monolingual peers in quiet; (2) for both bilingual and monolingual children, noise and simulated hearing loss would have detrimental impacts consistent with their acoustic characteristics (i.e., consonants with high-frequency cues remain highly intelligible in speech-shaped noise, but suffer from simulated hearing loss more than other consonants); (3) differences in phonology and acquisition order between Spanish and English would have additional negative influence on bilingual children's recognition of some English consonants. DESIGN Listeners were 11 English-dominant, Spanish-English bilingual children (6 to 12 years old) and 12 English-speaking, monolingual age peers. All had normal hearing and age-appropriate nonverbal intelligence and expressive English vocabulary. Listeners performed a listen-and-repeat speech perception task. Targets were 13 American English consonants embedded in vowel-consonant-vowel (VCV) syllables. VCVs were presented in quiet and in speech-shaped noise at signal-to-noise ratios (SNRs) of -5, 0, 5 dB (normal-hearing condition). For the simulated hearing-loss condition, threshold-elevating noise modeling a mild-to-moderate sloping sensorineural hearing loss profile was added to the normal-hearing stimuli for 0, 5 dB SNR, and quiet. Responses were scored for consonant correct. Individual listeners' performance was summarized for average across 13 consonants (overall) and for individual consonants. RESULTS Groups were compared for the effects of background noise and simulated hearing loss. 
As predicted, the groups performed similarly in quiet. The simulated hearing loss had a considerable detrimental impact on both groups, even in the absence of speech-shaped noise. Contrary to our prediction, no group difference was observed at any SNR in either condition. However, although nonsignificant, the greater within-group variance for the bilingual children in the normal-hearing condition indicated a wider "normal" range than for the monolingual children. Interestingly, although it did not contribute to a group difference, bilingual children's overall consonant recognition in both conditions improved with age, whereas such a developmental trend for monolingual children was observed only in the simulated hearing-loss condition, suggesting possible effects of experience. As for the recognition of individual consonants, the influence of background noise or simulated hearing loss was similar between groups and was consistent with the prediction based on their acoustic characteristics. CONCLUSIONS The results demonstrated that school-age, English-dominant, Spanish-English bilingual children can recognize English consonants in a background of speech-shaped noise with similar average accuracy as English-speaking monolingual age peers. The general impact of simulated hearing loss was also similar between bilingual and monolingual children. Thus, our hypothesis that bilingual children's English consonant recognition would suffer from background noise or simulated hearing loss more than that of their monolingual peers was rejected. However, the present results raise several issues that warrant further investigation, including the possible difference in the "normal" range for bilingual and monolingual children, the influence of experience, the impact of actual hearing loss on bilingual children, and stimulus quality.
Affiliation(s)
- Kanae Nishi
  - Boys Town National Research Hospital, Omaha, Nebraska, USA
  - Communication Sciences and Disorders, University of Utah, Salt Lake City, Utah, USA
8
Lewis JD, Kopun J, Neely ST, Schmid KK, Gorga MP. Tone-burst auditory brainstem response wave V latencies in normal-hearing and hearing-impaired ears. J Acoust Soc Am 2015;138:3210-3219. PMID: 26627795. PMCID: PMC4662677. DOI: 10.1121/1.4935516.
Abstract
The metric used to equate stimulus level [sound pressure level (SPL) or sensation level (SL)] between ears with normal hearing (NH) and ears with hearing loss (HL) in comparisons of auditory function can influence interpretation of results. When stimulus level is equated in dB SL, higher SPLs are presented to ears with HL due to their reduced sensitivity. As a result, it may be difficult to determine if differences between ears with NH and ears with HL are due to cochlear pathology or level-dependent changes in cochlear mechanics. To the extent that level-dependent changes in cochlear mechanics contribute to auditory brainstem response latencies, comparisons between normal and pathologic ears may depend on the stimulus levels at which comparisons are made. To test this hypothesis, wave V latencies were measured in 16 NH ears and 15 ears with mild-to-moderate HL. When stimulus levels were equated in SL, latencies were shorter in HL ears. However, latencies were similar for NH and HL ears when stimulus levels were equated in SPL. These observations demonstrate that the effect of stimulus level on wave V latency is large relative to the effect of HL, at least in cases of mild-to-moderate HL.
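The SL-versus-SPL bookkeeping at issue is simply SPL = SL + threshold. A toy example, with hypothetical thresholds rather than those of the study's participants, shows why equating stimuli in dB SL presents higher SPLs to impaired ears:

```python
def spl_from_sl(sl_db, threshold_db_spl):
    """Sensation level is level above an ear's threshold: SPL = SL + threshold."""
    return sl_db + threshold_db_spl

# Hypothetical thresholds at one frequency: NH ear 10 dB SPL, HL ear 45 dB SPL
nh_threshold, hl_threshold = 10.0, 45.0

# Equating the stimulus at 30 dB SL sends very different SPLs to the two ears
nh_spl = spl_from_sl(30.0, nh_threshold)  # 40 dB SPL
hl_spl = spl_from_sl(30.0, hl_threshold)  # 75 dB SPL
print(nh_spl, hl_spl)
```

Because cochlear mechanics are level dependent, that 35-dB SPL difference can itself shorten wave V latency in the impaired ear, independent of any pathology, which is the confound the study isolates by also equating in dB SPL.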
Affiliation(s)
- James D Lewis
  - Boys Town National Research Hospital, 555 North 30th Street, Omaha, Nebraska 68131, USA
- Judy Kopun
  - Boys Town National Research Hospital, 555 North 30th Street, Omaha, Nebraska 68131, USA
- Stephen T Neely
  - Boys Town National Research Hospital, 555 North 30th Street, Omaha, Nebraska 68131, USA
- Kendra K Schmid
  - Boys Town National Research Hospital, 555 North 30th Street, Omaha, Nebraska 68131, USA
- Michael P Gorga
  - Boys Town National Research Hospital, 555 North 30th Street, Omaha, Nebraska 68131, USA
9
Swaminathan J, Reed CM, Desloge JG, Braida LD, Delhorne LA. Consonant identification using temporal fine structure and recovered envelope cues. J Acoust Soc Am 2014;135:2078-2090. PMID: 25235005. PMCID: PMC4167752. DOI: 10.1121/1.4865920.
Abstract
The contribution of recovered envelopes (RENVs) to the utilization of temporal-fine-structure (TFS) speech cues was examined in normal-hearing listeners. Consonant identification experiments used speech stimuli processed to present TFS or RENV cues. Experiment 1 examined the effects of exposure and presentation order using 16-band TFS speech and 40-band RENV speech recovered from 16-band TFS speech. Prior exposure to TFS speech aided in the reception of RENV speech. Performance on the two conditions was similar (∼50%-correct) for experienced listeners, as was the pattern of consonant confusions. Experiment 2 examined the effect of varying the number of RENV bands recovered from 16-band TFS speech. Mean identification scores decreased as the number of RENV bands decreased from 40 to 8 and were only slightly above chance levels for 16 and 8 bands. Experiment 3 examined the effect of varying the number of bands in the TFS speech from which 40-band RENV speech was constructed. Performance fell from 85%- to 31%-correct as the number of TFS bands increased from 1 to 32. Overall, these results suggest that the interpretation of previous studies that have used TFS speech may have been confounded by the presence of RENVs.
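The envelope/TFS split referenced throughout is conventionally computed from the Hilbert analytic signal within each band. The sketch below uses a toy amplitude-modulated tone, not the study's multiband vocoder processing, to show the decomposition and the fact that envelope × TFS reconstructs the band signal:

```python
import numpy as np
from scipy.signal import hilbert

def envelope_and_tfs(x):
    """Split a band-limited signal into its Hilbert envelope and temporal fine
    structure; TFS-only speech keeps cos(phase) and discards the envelope."""
    analytic = hilbert(x)
    envelope = np.abs(analytic)
    tfs = np.cos(np.angle(analytic))  # unit-envelope carrier
    return envelope, tfs

fs = 16000
t = np.arange(fs) / fs
# Toy band signal: 1-kHz carrier with a 4-Hz amplitude modulation
x = (1.0 + 0.5 * np.sin(2 * np.pi * 4 * t)) * np.sin(2 * np.pi * 1000 * t)
env, tfs = envelope_and_tfs(x)
# Envelope times TFS reconstructs the original band signal (to numerical precision)
err = np.max(np.abs(env * tfs - x))
```

Envelope "recovery" in the study refers to narrowband filtering of TFS-only speech reintroducing envelope fluctuations at the filter outputs, which is exactly why TFS-speech results can be confounded.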
Affiliation(s)
- Jayaganesh Swaminathan
  - Research Laboratory of Electronics, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139
- Charlotte M Reed
  - Research Laboratory of Electronics, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139
- Joseph G Desloge
  - Research Laboratory of Electronics, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139
- Louis D Braida
  - Research Laboratory of Electronics, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139
- Lorraine A Delhorne
  - Research Laboratory of Electronics, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139
10
Desloge JG, Reed CM, Braida LD, Perez ZD, Delhorne LA, Villabona TJ. Auditory and tactile gap discrimination by observers with normal and impaired hearing. J Acoust Soc Am 2014;135:838-850. PMID: 25234892. PMCID: PMC3985970. DOI: 10.1121/1.4861246.
Abstract
Temporal processing ability for the senses of hearing and touch was examined through the measurement of gap-duration discrimination thresholds (GDDTs) employing the same low-frequency sinusoidal stimuli in both modalities. GDDTs were measured in three groups of observers (normal-hearing, hearing-impaired, and normal-hearing with simulated hearing loss) covering an age range of 21-69 yr. GDDTs for a baseline gap of 6 ms were measured for four different combinations of 100-ms leading and trailing markers (250-250, 250-400, 400-250, and 400-400 Hz). Auditory measurements were obtained for monaural presentation over headphones, and tactile measurements were obtained using sinusoidal vibrations presented to the left middle finger. The auditory GDDTs of the hearing-impaired listeners, which were larger than those of the normal-hearing observers, were well reproduced in the listeners with simulated loss. The magnitude of the GDDT was generally independent of modality and showed effects of age in both modalities. The use of different-frequency compared to same-frequency markers led to a greater deterioration in auditory GDDTs than in tactile GDDTs and may reflect differences in bandwidth properties between the two sensory systems.
Affiliation(s)
- Joseph G Desloge
  - Research Laboratory of Electronics, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139
- Charlotte M Reed
  - Research Laboratory of Electronics, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139
- Louis D Braida
  - Research Laboratory of Electronics, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139
- Zachary D Perez
  - Research Laboratory of Electronics, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139
- Lorraine A Delhorne
  - Research Laboratory of Electronics, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139
- Timothy J Villabona
  - Research Laboratory of Electronics, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139