1. Best V, Conroy C. Relating monaural and binaural measures of modulation sensitivity in listeners with and without hearing loss. J Acoust Soc Am 2024;156:1543-1551. [PMID: 39235271] [PMCID: PMC11379497] [DOI: 10.1121/10.0028517]
Abstract
Listeners are sensitive to interaural time differences carried in the envelope of high-frequency sounds (ITDENV), but the salience of this cue depends on certain properties of the envelope and, in particular, on the presence/depth of amplitude modulation (AM) in the envelope. This study tested the hypothesis that individuals with sensorineural hearing loss, who show enhanced sensitivity to AM under certain conditions, would also show superior ITDENV sensitivity under those conditions. The second hypothesis was that variations in ITDENV sensitivity across individuals can be related to variations in sensitivity to AM. To enable a direct comparison, a standard adaptive AM detection task was used along with a modified version of it designed to measure ITDENV sensitivity. The stimulus was a 4-kHz tone modulated at rates of 32, 64, or 128 Hz and presented at a 30 dB sensation level. Both tasks were attempted by 16 listeners with normal hearing and 16 listeners with hearing loss. Consistent with the hypotheses, AM and ITDENV thresholds were correlated and tended to be better in listeners with hearing loss. A control experiment emphasized that absolute level may be a consideration when interpreting the group effects.
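The stimulus manipulation described above can be sketched in a few lines of Python. This is a minimal illustration, not the authors' code: the sample rate, duration, modulation depth, and the 200-µs envelope ITD are assumed values chosen for the example; only the 4-kHz carrier and the 32/64/128-Hz modulation rates come from the abstract.

```python
import numpy as np

FS = 48000          # sample rate (Hz); assumed for the example
DUR = 0.3           # stimulus duration (s); assumed
FC = 4000.0         # carrier frequency from the abstract (4-kHz tone)
FM = 64.0           # one of the modulation rates tested (32, 64, or 128 Hz)
ITD_ENV = 200e-6    # illustrative envelope ITD (s); not a value from the paper

t = np.arange(int(FS * DUR)) / FS

def am_tone(mod_delay, depth=1.0):
    """4-kHz carrier with a sinusoidal envelope; the envelope can be delayed."""
    env = 1.0 + depth * np.sin(2 * np.pi * FM * (t - mod_delay))
    return env * np.sin(2 * np.pi * FC * t)

# Envelope ITD: delay the modulator (not the carrier) in one ear only,
# so the interaural cue is carried entirely by the envelope.
left = am_tone(0.0)
right = am_tone(ITD_ENV)
stereo = np.stack([left, right], axis=1)
```

Setting `mod_delay` to zero in both channels recovers the diotic AM-detection stimulus, which is what makes the two tasks directly comparable.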
Affiliation(s)
- Virginia Best
- Department of Speech, Language and Hearing Sciences, Boston University, Boston, Massachusetts 02215, USA
- Christopher Conroy
- Department of Speech, Language and Hearing Sciences, Boston University, Boston, Massachusetts 02215, USA
- Department of Biological and Vision Sciences, State University of New York, College of Optometry, New York, New York 10036, USA
2. Kumar S, Nayak S, Kanagokar V, Pitchai Muthu AN. Does bilateral hearing aid fitting improve spatial hearing ability: a systematic review and meta-analysis. Disabil Rehabil Assist Technol 2024:1-13. [PMID: 38385777] [DOI: 10.1080/17483107.2024.2316293]
Abstract
Objectives: The ability to localize sound sources is crucial for everyday listening, as it contributes to spatial awareness and the detection of warning signs. Individuals with hearing impairment have poorer localization abilities, which deteriorate further when they are fitted with a hearing aid. Although numerous studies have addressed this phenomenon, there is a lack of systematic evidence. The aim of the current systematic review is to address the following research question: "Do behavioural measures of spatial hearing ability improve with bilateral hearing aid fitting compared to the unaided hearing condition?" Design: A comprehensive search of electronic databases covering the period 1965 to 2022 was conducted by two independent authors. The inclusion and exclusion criteria were formulated using the Population, Intervention, Comparison, Outcome, and Study design (PICOS) format, and the certainty of evidence was determined using the Grading of Recommendations Assessment, Development, and Evaluation (GRADE) guidelines. Results: The comprehensive search yielded 2199 studies, of which 17 were included in the qualitative synthesis and 15 in the quantitative synthesis. The collected data were divided into two groups, namely vertical and horizontal localization. The results of the quantitative analysis indicate that localization performance was significantly better in the unaided condition in both the vertical and horizontal planes. The certainty of the evidence was judged to be moderate, meaning that "we are moderately confident in the effect estimate: the true effect is likely to be close to the estimate of the effect, but there is a possibility that it is substantially different". Conclusion: The review findings demonstrate that bilateral fitting of hearing aids did not effectively preserve spatial cues, resulting in poorer localization performance irrespective of the plane of assessment. Review Registration: Prospective Register of Systematic Reviews (PROSPERO); CRD42022358164.
Affiliation(s)
- Sathish Kumar
- Department of Audiology and Speech-Language Pathology, Kasturba Medical College Mangalore, Manipal Academy of Higher Education, Manipal, India
- Srikanth Nayak
- Department of Audiology and Speech-Language Pathology, Yenepoya Medical College, Yenepoya University (Deemed to be University), Mangalore, India
- Vibha Kanagokar
- Department of Audiology and Speech-Language Pathology, Kasturba Medical College Mangalore, Manipal Academy of Higher Education, Manipal, India
- Arivudai Nambi Pitchai Muthu
- Department of Audiology and Speech-Language Pathology, Kasturba Medical College Mangalore, Manipal Academy of Higher Education, Manipal, India
3. Xiong YZ, Addleman DA, Nguyen NA, Nelson P, Legge GE. Dual Sensory Impairment: Impact of Central Vision Loss and Hearing Loss on Visual and Auditory Localization. Invest Ophthalmol Vis Sci 2023;64:23. [PMID: 37703039] [PMCID: PMC10503591] [DOI: 10.1167/iovs.64.12.23]
Abstract
Purpose: In the United States, AMD is a leading cause of low vision that leads to central vision loss and has a high co-occurrence with hearing loss. The impact of central vision loss on the daily functioning of older individuals cannot be fully addressed without considering their hearing status. We investigated the impact of combined central vision loss and hearing loss on spatial localization, an ability critical for social interactions and navigation. Methods: Sixteen older adults with central vision loss primarily due to AMD, with or without co-occurring hearing loss, completed a spatial perimetry task in which they verbally reported the directions of visual or auditory targets. Auditory testing was done with eyes open in a dimly lit room or with a blindfold. Twenty-three normally sighted, age-matched, and hearing-matched control subjects also completed the task. Results: Subjects with central vision loss missed visual targets more often, and their visual localization biases deviated increasingly from those of control subjects as scotoma size increased. However, these deficits did not generalize to sound localization. As hearing loss became more severe, sound localization variability increased, and this relationship was not altered by coexisting central vision loss. For both control and central vision loss subjects, sound localization was less reliable when subjects wore blindfolds, possibly due to the absence of visual contextual cues. Conclusions: Although central vision loss impairs visual localization, it does not impair sound localization and does not prevent vision from providing useful contextual cues for sound localization.
Affiliation(s)
- Ying-Zi Xiong
- Department of Psychology, University of Minnesota, Minneapolis, Minnesota, United States
- Center for Applied and Translational Sensory Science, University of Minnesota, Minneapolis, Minnesota, United States
- Lions Vision Research and Rehabilitation Center, Wilmer Eye Institute, Johns Hopkins University, Baltimore, Maryland, United States
- Douglas A. Addleman
- Department of Psychology, University of Minnesota, Minneapolis, Minnesota, United States
- Center for Applied and Translational Sensory Science, University of Minnesota, Minneapolis, Minnesota, United States
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, New Hampshire, United States
- Nam Anh Nguyen
- Department of Psychology, University of Minnesota, Minneapolis, Minnesota, United States
- Peggy Nelson
- Center for Applied and Translational Sensory Science, University of Minnesota, Minneapolis, Minnesota, United States
- Department of Speech-Language-Hearing Sciences, University of Minnesota, Minneapolis, Minnesota, United States
- Gordon E. Legge
- Department of Psychology, University of Minnesota, Minneapolis, Minnesota, United States
- Center for Applied and Translational Sensory Science, University of Minnesota, Minneapolis, Minnesota, United States
4. Ozmeral EJ, Menon KN. Selective auditory attention modulates cortical responses to sound location change for speech in quiet and in babble. PLoS One 2023;18:e0268932. [PMID: 36638116] [PMCID: PMC9838839] [DOI: 10.1371/journal.pone.0268932]
Abstract
Listeners use the spatial location or change in spatial location of coherent acoustic cues to aid in auditory object formation. From stimulus-evoked onset responses in normal-hearing listeners using electroencephalography (EEG), we have previously shown measurable tuning to stimuli changing location in quiet, revealing a potential window into the cortical representations of auditory scene analysis. These earlier studies used non-fluctuating, spectrally narrow stimuli, so it was still unknown whether previous observations would translate to speech stimuli, and whether responses would be preserved for stimuli in the presence of background maskers. To examine the effects that selective auditory attention and interferers have on object formation, we measured cortical responses to speech changing location in the free field with and without background babble (+6 dB SNR) during both passive and active conditions. Active conditions required listeners to respond to the onset of the speech stream when it occurred at a new location, explicitly indicating 'yes' or 'no' to whether the stimulus occurred at a block-specific location either 30 degrees to the left or right of midline. In the aggregate, results show similar evoked responses to speech stimuli changing location in quiet compared to babble background. However, the effect of the two background environments diverges somewhat when considering the magnitude and direction of the location change and where the subject was attending. In quiet, attention to the right hemifield appeared to evoke a stronger response than attention to the left hemifield when speech shifted in the rightward direction. No such difference was found in babble conditions. Therefore, consistent with challenges associated with cocktail party listening, directed spatial attention could be compromised in the presence of stimulus noise and likely leads to poorer use of spatial cues in auditory streaming.
Affiliation(s)
- Erol J Ozmeral
- Department of Communication Sciences and Disorders, University of South Florida, Tampa, FL, United States of America
- Katherine N Menon
- Department of Hearing and Speech Sciences, University of Maryland, College Park, MD, United States of America
5. Sheffield SW, Wheeler HJ, Brungart DS, Bernstein JGW. The Effect of Sound Localization on Auditory-Only and Audiovisual Speech Recognition in a Simulated Multitalker Environment. Trends Hear 2023;27:23312165231186040. [PMID: 37415497] [PMCID: PMC10331332] [DOI: 10.1177/23312165231186040]
Abstract
Information regarding sound-source spatial location provides several speech-perception benefits, including auditory spatial cues for perceptual talker separation and localization cues to face the talker to obtain visual speech information. These benefits have typically been examined separately. A real-time processing algorithm for sound-localization degradation (LocDeg) was used to investigate how spatial-hearing benefits interact in a multitalker environment. Normal-hearing adults performed auditory-only and auditory-visual sentence recognition with target speech and maskers presented from loudspeakers at -90°, -36°, 36°, or 90° azimuths. For auditory-visual conditions, one target and three masking talker videos (always spatially separated) were rendered virtually in rectangular windows at these locations on a head-mounted display. Auditory-only conditions presented blank windows at these locations. Auditory target speech (always spatially aligned with the target video) was presented in co-located speech-shaped noise (experiment 1) or with three co-located or spatially separated auditory interfering talkers corresponding to the masker videos (experiment 2). In the co-located conditions, the LocDeg algorithm did not affect auditory-only performance but reduced target orientation accuracy, reducing auditory-visual benefit. In the multitalker environment, two spatial-hearing benefits were observed: perceptually separating competing speech based on auditory spatial differences and orienting to the target talker to obtain visual speech cues. These two benefits were additive, and both were diminished by the LocDeg algorithm. Although visual cues always improved performance when the target was accurately localized, there was no strong evidence that they provided additional assistance in perceptually separating co-located competing speech. These results highlight the importance of sound localization in everyday communication.
Affiliation(s)
- Sterling W. Sheffield
- Department of Speech, Language, and Hearing Sciences, University of Florida, Gainesville, FL, USA
- Harley J. Wheeler
- Department of Speech-Language-Hearing Sciences, University of Minnesota, Minneapolis, MN, USA
- Douglas S. Brungart
- National Military Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, Bethesda, MD, USA
- Joshua G. W. Bernstein
- National Military Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, Bethesda, MD, USA
6. Single-Sided Deafness Cochlear Implant Sound-Localization Behavior With Multiple Concurrent Sources. Ear Hear 2021;43:206-219. [PMID: 34320529] [DOI: 10.1097/aud.0000000000001089]
Abstract
OBJECTIVES For listeners with one deaf ear and the other ear with normal/near-normal hearing (single-sided deafness [SSD]) or moderate hearing loss (asymmetric hearing loss), cochlear implants (CIs) can improve speech understanding in noise and sound-source localization. Previous SSD-CI localization studies have used a single source with artificial sounds such as clicks or random noise. While this approach provides insights regarding the auditory cues that facilitate localization, it does not capture the complex nature of localization behavior in real-world environments. This study examined SSD-CI sound localization in a complex scenario where a target sound was added to or removed from a mixture of other environmental sounds, while tracking head movements to assess behavioral strategy. DESIGN Eleven CI users with normal hearing or moderate hearing loss in the contralateral ear completed a sound-localization task in monaural (CI-OFF) and bilateral (CI-ON) configurations. Ten of the listeners were also tested before CI activation to examine longitudinal effects. Two-second environmental sound samples, looped to create 4- or 10-sec trials, were presented in a spherical array of 26 loudspeakers encompassing ±144° azimuth and ±30° elevation at a 1-m radius. The target sound was presented alone (localize task) or concurrently with one or three additional sources presented to different loudspeakers, with the target cued by being added to (Add) or removed from (Rem) the mixture after 6 sec. A head-mounted tracker recorded movements in six dimensions (three for location, three for orientation). Mixed-model regression was used to examine target sound-identification accuracy, localization accuracy, and head movement. Angular and translational head movements were analyzed both before and after the target was switched on or off. 
RESULTS Listeners showed improved localization accuracy in the CI-ON configuration, but there was no interaction with test condition and no effect of the CI on sound-identification performance. Although high-frequency hearing loss in the unimplanted ear reduced localization accuracy and sound-identification performance, the magnitude of the CI localization benefit was independent of hearing loss. The CI reduced the magnitude of gross head movements used during the task in the azimuthal rotation and translational dimensions, both while the target sound was present (in all conditions) and during the anticipatory period before the target was switched on (in the Add condition). There was no change in pre- versus post-activation CI-OFF performance. CONCLUSIONS These results extend previous findings, demonstrating a CI localization benefit in a complex listening scenario that includes environmental and behavioral elements encountered in everyday listening conditions. The CI also reduced the magnitude of gross head movements used to perform the task. This was the case even before the target sound was added to the mixture. This suggests that a CI can reduce the need for physical movement both in anticipation of an upcoming sound event and while actively localizing the target sound. Overall, these results show that for SSD listeners, a CI can improve localization in a complex sound environment and reduce the amount of physical movement used.
7. Gallun FJ. Impaired Binaural Hearing in Adults: A Selected Review of the Literature. Front Neurosci 2021;15:610957. [PMID: 33815037] [PMCID: PMC8017161] [DOI: 10.3389/fnins.2021.610957]
Abstract
Despite over 100 years of study, there are still many fundamental questions about binaural hearing that remain unanswered, including how impairments of binaural function are related to the mechanisms of binaural hearing. This review focuses on a number of studies that are fundamental to understanding what is known about the effects of peripheral hearing loss, aging, traumatic brain injury, strokes, brain tumors, and multiple sclerosis (MS) on binaural function. The literature reviewed makes clear that while each of these conditions has the potential to impair the binaural system, the specific abilities of a given patient cannot be known without performing multiple behavioral and/or neurophysiological measurements of binaural sensitivity. Future work in this area has the potential to bring awareness of binaural dysfunction to patients and clinicians as well as a deeper understanding of the mechanisms of binaural hearing, but it will require the integration of clinical research with animal and computational modeling approaches.
Affiliation(s)
- Frederick J. Gallun
- Oregon Hearing Research Center, Oregon Health and Science University, Portland, OR, United States
8. Cañete OM, Marfull D, Torrente MC, Purdy SC. The Spanish 12-item version of the Speech, Spatial and Qualities of Hearing scale (Sp-SSQ12): adaptation, reliability, and discriminant validity for people with and without hearing loss. Disabil Rehabil 2020;44:1419-1426. [PMID: 32721200] [DOI: 10.1080/09638288.2020.1795279]
Abstract
PURPOSE Because of the limited number of validated Spanish questionnaires available to assess auditory function in daily life situations in adults, the purpose of this study was to investigate the validity and reliability of the Spanish version of the Speech, Spatial and Qualities of Hearing 12-item scale (sp-SSQ12), adapted from the published Spanish SSQ49, and to provide reference data for normal-hearing and hearing-impaired populations. METHODS The SSQ12 is a self-report questionnaire consisting of 12 items assessing a range of daily life listening situations. One hundred fifty adults (101 female) with a mean age of 53.9 years (SD 20.3; range 20-88 years) took part in the study. Internal consistency, test-retest reliability, validity, and floor and ceiling effects were investigated. RESULTS The sp-SSQ12 questionnaire had high internal consistency (Cronbach's alpha = 0.95) and test-retest scores were highly correlated (ICC = 0.79). There was minimal evidence of floor and ceiling effects in our sample. Significant differences were observed overall and for the three subscales between normal-hearing and hearing-impaired groups. Although there were some significant differences in SSQ12 scores between groups of participants from different countries, these differences were minimal. CONCLUSIONS The sp-SSQ12 questionnaire is a valid and reliable tool that is easy to administer and requires a short time to answer. We recommend the use of this tool for the assessment of functional hearing in the Spanish-speaking population.
Implications for rehabilitation:
- Hearing loss impacts people's lives in a number of ways that are captured in the SSQ.
- The sp-SSQ12 is a valid and reliable tool for assessing everyday listening abilities and limitations experienced by Spanish-speaking adults with hearing loss.
- The sp-SSQ12 can be incorporated in the hearing rehabilitation process as a tool for evaluating and improving hearing assessment and rehabilitation programs.
- The sp-SSQ12 can help to identify adults who require a comprehensive hearing assessment.
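For readers unfamiliar with the reliability statistic reported above, Cronbach's alpha for a k-item scale can be computed directly from an item-score matrix. This is a generic sketch of the standard formula, not tied to the sp-SSQ12 data:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)          # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)      # variance of total scores
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Sanity check: twelve perfectly redundant items give alpha = 1.
perfect = np.tile(np.arange(1, 11).reshape(-1, 1), (1, 12))
alpha = cronbach_alpha(perfect)
```

Values near 0.95, as reported for the sp-SSQ12, indicate that the items are highly internally consistent.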
Affiliation(s)
- Oscar M Cañete
- Speech Science, School of Psychology, The University of Auckland, Auckland, New Zealand
- Daphne Marfull
- Escuela de Fonoaudiología, Universidad de Valparaíso, Valparaíso, Chile
- Mariela C Torrente
- Servicio de Otorrinolaringología, Hospital Padre Hurtado, Santiago, Chile
- Departamento de Otorrinolaringología, Hospital Clínico Universidad de Chile, Santiago, Chile
- Suzanne C Purdy
- Speech Science, School of Psychology, The University of Auckland, Auckland, New Zealand
- Eisdell Moore Centre for Research in Hearing and Balance, Auckland, New Zealand
9. Baltzell LS, Cho AY, Swaminathan J, Best V. Spectro-temporal weighting of interaural time differences in speech. J Acoust Soc Am 2020;147:3883. [PMID: 32611137] [PMCID: PMC7297545] [DOI: 10.1121/10.0001418]
Abstract
Numerous studies have demonstrated that the perceptual weighting of interaural time differences (ITDs) is non-uniform in time and frequency, leading to reports of spectral and temporal "dominance" regions. It is unclear, however, how these dominance regions apply to spectro-temporally complex stimuli such as speech. The authors report spectro-temporal weighting functions for ITDs in a pair of naturally spoken speech tokens ("two" and "eight"). Each speech token was composed of two phonemes and was partitioned into eight frequency regions over two time bins (one time bin for each phoneme). To derive lateralization weights, ITDs for each time-frequency bin were drawn independently from a normal distribution with a mean of 0 and a standard deviation of 200 μs, and listeners were asked to indicate whether the speech token was presented from the left or the right. ITD thresholds were also obtained for each of the 16 time-frequency bins in isolation. The results suggest that spectral dominance regions apply to speech, and that ITDs carried by phonemes in the first position of the syllable contribute more strongly to lateralization judgments than ITDs carried by phonemes in the second position. The results also show that lateralization judgments are partially accounted for by ITD sensitivity across time-frequency bins.
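The reverse-correlation logic behind such weighting functions can be sketched as follows. The 8 × 2 bin layout and the 200-µs ITD standard deviation come from the abstract; the sign-of-weighted-sum observer and the least-squares weight recovery are illustrative assumptions for the example, not the paper's analysis:

```python
import numpy as np

rng = np.random.default_rng(0)
N_FREQ, N_TIME = 8, 2        # eight frequency regions x two time bins (per the abstract)
SIGMA_ITD = 200e-6           # SD of the per-bin ITD distribution (200 us)
N_TRIALS = 5000

# Each trial: an independent ITD per time-frequency bin, mean 0, SD 200 us.
itds = rng.normal(0.0, SIGMA_ITD, size=(N_TRIALS, N_TIME, N_FREQ))

# Hypothetical observer: responds "right" (+1) or "left" (-1) according to
# the sign of a weighted sum of the bin ITDs.
weights = rng.random((N_TIME, N_FREQ))
responses = np.sign(np.tensordot(itds, weights, axes=([1, 2], [0, 1])))

# The weighting function is then recovered by regressing responses on the
# per-bin ITDs, which is the core of the reverse-correlation method.
X = itds.reshape(N_TRIALS, -1)
est = np.linalg.lstsq(X, responses, rcond=None)[0].reshape(N_TIME, N_FREQ)
```

With enough trials the estimated weights are proportional to the observer's true weights, which is why the per-bin regression coefficients can be read as perceptual weights.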
Affiliation(s)
- Lucas S Baltzell
- Department of Speech, Language, and Hearing Sciences, Boston University, 635 Commonwealth Avenue, Boston, Massachusetts 02215, USA
- Adrian Y Cho
- Department of Speech, Language, and Hearing Sciences, Boston University, 635 Commonwealth Avenue, Boston, Massachusetts 02215, USA
- Jayaganesh Swaminathan
- Department of Speech, Language, and Hearing Sciences, Boston University, 635 Commonwealth Avenue, Boston, Massachusetts 02215, USA
- Virginia Best
- Department of Speech, Language, and Hearing Sciences, Boston University, 635 Commonwealth Avenue, Boston, Massachusetts 02215, USA
10. Moore BCJ. Effects of hearing loss and age on the binaural processing of temporal envelope and temporal fine structure information. Hear Res 2020;402:107991. [PMID: 32418682] [DOI: 10.1016/j.heares.2020.107991]
Abstract
Within the cochlea, broadband sounds like speech and music are filtered into a series of narrowband signals, each with a relatively slowly varying envelope (ENV) imposed on a rapidly oscillating carrier (the temporal fine structure, TFS). Information about ENV is conveyed by the timing and short-term rate of action potentials in the auditory nerve while information about TFS is conveyed by synchronization of action potentials to a specific phase of the waveform in the cochlea (phase locking). This paper describes the effects of age and hearing loss on the binaural processing of ENV and TFS information, i.e. on the processing of differences in ENV and TFS at the two ears. The binaural processing of TFS information is adversely affected by both hearing loss and increasing age. The binaural processing of ENV information deteriorates somewhat with increasing age but is only slightly affected by hearing loss. The reduced TFS processing abilities found for older/hearing-impaired subjects may partially account for the difficulties that such subjects experience in complex listening situations when the target speech and interfering sounds come from different directions in space.
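The ENV/TFS decomposition described above is commonly implemented with the Hilbert transform. A minimal sketch for a single synthetic narrowband channel, assuming SciPy is available; the filter band and signal parameters are arbitrary choices for the example:

```python
import numpy as np
from scipy.signal import hilbert, butter, sosfiltfilt

FS = 16000
rng = np.random.default_rng(1)

# A narrowband signal standing in for one cochlear-filter output.
noise = rng.standard_normal(FS)  # 1 s of white noise
sos = butter(4, [900, 1100], btype="bandpass", fs=FS, output="sos")
narrowband = sosfiltfilt(sos, noise)

# Hilbert decomposition: analytic signal -> envelope (ENV) and fine structure (TFS).
analytic = hilbert(narrowband)
env = np.abs(analytic)               # slowly varying envelope
tfs = np.cos(np.angle(analytic))     # unit-amplitude carrier (temporal fine structure)

# The original signal is the product of the two components.
reconstructed = env * tfs
```

This factorization is what makes it possible to manipulate ENV and TFS cues independently, as in vocoder studies of binaural hearing.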
Affiliation(s)
- Brian C J Moore
- Department of Experimental Psychology, University of Cambridge, Downing Street, Cambridge, CB2 3EB, UK.
11. Baltzell LS, Swaminathan J, Cho AY, Lavandier M, Best V. Binaural sensitivity and release from speech-on-speech masking in listeners with and without hearing loss. J Acoust Soc Am 2020;147:1546. [PMID: 32237845] [PMCID: PMC7060089] [DOI: 10.1121/10.0000812]
Abstract
Listeners with sensorineural hearing loss routinely experience less spatial release from masking (SRM) in speech mixtures than listeners with normal hearing. Hearing-impaired listeners have also been shown to have degraded temporal fine structure (TFS) sensitivity, a consequence of which is degraded access to interaural time differences (ITDs) contained in the TFS. Since these "binaural TFS" cues are critical for spatial hearing, it has been hypothesized that degraded binaural TFS sensitivity accounts for the limited SRM experienced by hearing-impaired listeners. In this study, speech stimuli were noise-vocoded using carriers that were systematically decorrelated across the left and right ears, thus simulating degraded binaural TFS sensitivity. Both (1) ITD sensitivity in quiet and (2) SRM in speech mixtures spatialized using ITDs (or binaural release from masking; BRM) were measured as a function of TFS interaural decorrelation in young normal-hearing and hearing-impaired listeners. This allowed for the examination of the relationship between ITD sensitivity and BRM over a wide range of ITD thresholds. This paper found that, for a given ITD sensitivity, hearing-impaired listeners experienced less BRM than normal-hearing listeners, suggesting that binaural TFS sensitivity can account for only a modest portion of the BRM deficit in hearing-impaired listeners. However, substantial individual variability was observed.
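The interaural-decorrelation manipulation can be illustrated with the standard mixing construction for a noise pair with a target correlation ρ. This is a generic sketch, not the study's vocoder implementation:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 48000  # one second at 48 kHz; an assumed length for the example

def decorrelated_pair(rho):
    """Two noise carriers with interaural correlation rho (mixing method).

    The right carrier is a weighted sum of the left (common) noise and an
    independent noise, with weights chosen so corr(left, right) = rho.
    """
    common = rng.standard_normal(N)
    indep = rng.standard_normal(N)
    left = common
    right = rho * common + np.sqrt(1.0 - rho**2) * indep
    return left, right

left, right = decorrelated_pair(0.8)
measured = np.corrcoef(left, right)[0, 1]
```

Sweeping ρ from 1 down to 0 moves the carriers from fully correlated (intact binaural TFS) to fully independent, which is how degraded binaural TFS sensitivity is simulated parametrically.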
Affiliation(s)
- Lucas S Baltzell
- Department of Speech, Language, and Hearing Sciences, Boston University, 635 Commonwealth Avenue, Boston, Massachusetts 02215, USA
- Jayaganesh Swaminathan
- Department of Speech, Language, and Hearing Sciences, Boston University, 635 Commonwealth Avenue, Boston, Massachusetts 02215, USA
- Adrian Y Cho
- Department of Speech, Language, and Hearing Sciences, Boston University, 635 Commonwealth Avenue, Boston, Massachusetts 02215, USA
- Mathieu Lavandier
- University of Lyon, ENTPE, Laboratoire Génie Civil et Bâtiment, Rue Maurice Audin, F-69518 Vaulx-en-Velin Cedex, France
- Virginia Best
- Department of Speech, Language, and Hearing Sciences, Boston University, 635 Commonwealth Avenue, Boston, Massachusetts 02215, USA
12. Buchholz JM, Best V. Speech detection and localization in a reverberant multitalker environment by normal-hearing and hearing-impaired listeners. J Acoust Soc Am 2020;147:1469. [PMID: 32237797] [PMCID: PMC7058429] [DOI: 10.1121/10.0000844]
Abstract
Spatial perception is an important part of a listener's experience and ability to function in everyday environments. However, the current understanding of how well listeners can locate sounds is based on measurements made using relatively simple stimuli and tasks. Here the authors investigated sound localization in a complex and realistic environment for listeners with normal and impaired hearing. A reverberant room containing a background of multiple talkers was simulated and presented to listeners in a loudspeaker-based virtual sound environment. The target was a short speech stimulus presented at various azimuths and distances relative to the listener. To ensure that the target stimulus was detectable to the listeners with hearing loss, masked thresholds were first measured on an individual basis and used to set the target level. Despite this compensation, listeners with hearing loss were less accurate at locating the target, showing increased front-back confusion rates and higher root-mean-square errors. Poorer localization was associated with poorer masked thresholds and with more severe low-frequency hearing loss. Localization accuracy in the multitalker background was lower than in quiet and also declined for more distant targets. However, individual accuracy in noise and quiet was strongly correlated.
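The two error measures reported above, root-mean-square azimuth error and front-back confusion rate, can be computed as in this sketch. The wrap-around handling is standard; the 15° margin used to flag a confusion is an assumed criterion for the example, not the paper's:

```python
import numpy as np

def rms_error_deg(target_az, response_az):
    """Root-mean-square azimuth error in degrees, wrapped to (-180, 180]."""
    err = (np.asarray(response_az) - np.asarray(target_az) + 180) % 360 - 180
    return float(np.sqrt(np.mean(err**2)))

def front_back_confusion_rate(target_az, response_az, margin=15.0):
    """Fraction of responses closer to the target's front-back mirror image.

    With azimuth measured from straight ahead, the mirror of az about the
    interaural axis is 180 - az. A response counts as a confusion when it is
    closer (by more than `margin` degrees) to the mirror than to the target.
    """
    t = np.asarray(target_az, dtype=float)
    r = np.asarray(response_az, dtype=float)
    mirror = 180.0 - t
    d_target = np.abs((r - t + 180) % 360 - 180)
    d_mirror = np.abs((r - mirror + 180) % 360 - 180)
    return float(np.mean(d_mirror + margin < d_target))
```

Separating the two measures matters because front-back confusions inflate raw RMS error even when left-right localization is accurate.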
Affiliation(s)
- Jörg M Buchholz
- Department of Linguistics, Australian Hearing Hub, 16 University Avenue, Macquarie University, Sydney, New South Wales, 2109, Australia
- Virginia Best
- Department of Speech, Language and Hearing Sciences, Boston University, Boston, Massachusetts 02215, USA
13
Best V, Swaminathan J. Revisiting the detection of interaural time differences in listeners with hearing loss. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2019; 145:EL508. [PMID: 31255153 PMCID: PMC6561774 DOI: 10.1121/1.5111065] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/15/2019] [Revised: 05/19/2019] [Accepted: 05/21/2019] [Indexed: 05/29/2023]
Abstract
Sensitivity to interaural time differences (ITDs) was measured in two groups of listeners, one with normal hearing and one with sensorineural hearing loss. ITD detection thresholds were measured for pure tones and for speech (a single word), in quiet and in the presence of noise. It was predicted that effects of hearing loss would be reduced for speech as compared to tones due to the redundancy of information across frequency. Thresholds were better overall, and the effects of hearing loss less pronounced, for speech than for tones. There was no evidence that effects of hearing loss were exacerbated in noise.
Affiliation(s)
- Virginia Best
- Department of Speech, Language and Hearing Sciences, Boston University, 635 Commonwealth Avenue, Boston, Massachusetts 02215, USA
- Jayaganesh Swaminathan
- Department of Speech, Language and Hearing Sciences, Boston University, 635 Commonwealth Avenue, Boston, Massachusetts 02215, USA
14
Jeong E, Ryu H, Shin JH, Kwon GH, Jo G, Lee JY. High Oxygen Exchange to Music Indicates Auditory Distractibility in Acquired Brain Injury: An fNIRS Study with a Vector-Based Phase Analysis. Sci Rep 2018; 8:16737. [PMID: 30425287 PMCID: PMC6233191 DOI: 10.1038/s41598-018-35172-2] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/03/2018] [Accepted: 10/31/2018] [Indexed: 01/30/2023] Open
Abstract
Attention deficits due to auditory distractibility are pervasive among patients with acquired brain injury (ABI). It remains unclear, however, whether attention deficits specific to the auditory modality following ABI are associated with altered haemodynamic responses. Here, we examined cerebral haemodynamic changes using functional near-infrared spectroscopy combined with a topological vector-based analysis method. A total of thirty-seven participants (22 healthy adults, 15 patients with ABI) performed a melodic contour identification task (CIT) that simulates auditory distractibility. Findings demonstrated that the melodic CIT was able to detect auditory distractibility in patients with ABI. The rate-corrected score showed that the ABI group performed significantly worse than the non-ABI group in both CIT1 (target contour identification against environmental sounds) and CIT2 (target contour identification against target-like distraction). Phase-associated response intensity during the CITs was greater in the ABI group than in the non-ABI group. Moreover, there was a significant interaction effect in the left dorsolateral prefrontal cortex (DLPFC) during CIT1 and CIT2. These findings indicate that stronger haemodynamic responses involving oxygen exchange in the left DLPFC can serve as a biomarker for evaluating and monitoring auditory distractibility, which could help uncover the mechanism underlying auditory attention deficits in patients with ABI.
Affiliation(s)
- Eunju Jeong
- Department of Arts and Technology, Hanyang University, Seoul, 04763, Republic of Korea.
- Division of Industrial Information Studies, Hanyang University, Seoul, 04763, Republic of Korea.
- Hokyoung Ryu
- Department of Arts and Technology, Hanyang University, Seoul, 04763, Republic of Korea
- Graduate School of Technology and Innovation Management, Hanyang University, Seoul, 04763, Republic of Korea
- Joon-Ho Shin
- Department of Neurorehabilitation, National Rehabilitation Center, Ministry of Health and Welfare, Seoul, 01022, Republic of Korea
- Gyu Hyun Kwon
- Department of Arts and Technology, Hanyang University, Seoul, 04763, Republic of Korea
- Graduate School of Technology and Innovation Management, Hanyang University, Seoul, 04763, Republic of Korea
- Geonsang Jo
- Department of Arts and Technology, Hanyang University, Seoul, 04763, Republic of Korea
- Ji-Yeong Lee
- Department of Neurorehabilitation, National Rehabilitation Center, Ministry of Health and Welfare, Seoul, 01022, Republic of Korea
15
Cubick J, Buchholz JM, Best V, Lavandier M, Dau T. Listening through hearing aids affects spatial perception and speech intelligibility in normal-hearing listeners. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2018; 144:2896. [PMID: 30522291 PMCID: PMC6246072 DOI: 10.1121/1.5078582] [Citation(s) in RCA: 23] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/11/2023]
Abstract
Cubick and Dau [(2016). Acta Acust. Acust. 102, 547-557] showed that speech reception thresholds (SRTs) in noise, obtained with normal-hearing listeners, were significantly higher with hearing aids (HAs) than without. Some listeners reported a change in their spatial perception of the stimuli due to the HA processing, with auditory images often being broader and closer to the head or even internalized. The current study investigated whether worse speech intelligibility with HAs might be explained by distorted spatial perception and the resulting reduced ability to spatially segregate the target speech from the interferers. SRTs were measured in normal-hearing listeners with or without HAs in the presence of three interfering talkers or speech-shaped noises. Furthermore, listeners were asked to sketch their spatial perception of the acoustic scene. Consistent with the previous study, SRTs increased with HAs. Spatial release from masking was lower with HAs than without. The effects were similar for noise and speech maskers and appeared to be accounted for by changes to energetic masking. This interpretation was supported by results from a binaural speech intelligibility model. Even though the sketches indicated a change of spatial perception with HAs, no direct link between spatial perception and segregation of talkers could be shown.
Affiliation(s)
- Jens Cubick
- Hearing Systems Group, Department of Electrical Engineering, Technical University of Denmark, Ørsteds Plads, Building 352, 2800 Kongens Lyngby, Denmark
- Jörg M Buchholz
- Department of Linguistics, Australian Hearing Hub, 16 University Avenue, Macquarie University, New South Wales 2109, Australia
- Virginia Best
- Department of Speech, Language and Hearing Sciences, Boston University, Boston, Massachusetts 02215, USA
- Mathieu Lavandier
- Univ Lyon, ENTPE, Laboratoire Génie Civil et Bâtiment, Rue M. Audin, F-69518 Vaulx-en-Velin, France
- Torsten Dau
- Hearing Systems Group, Department of Electrical Engineering, Technical University of Denmark, Ørsteds Plads, Building 352, 2800 Kongens Lyngby, Denmark
16
Meuret S, Ludwig A, Predel D, Staske B, Fuchs M. Localization and Spatial Discrimination in Children and Adolescents with Moderate Sensorineural Hearing Loss Tested without Their Hearing Aids. Audiol Neurootol 2018; 22:326-342. [DOI: 10.1159/000485826] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/04/2017] [Accepted: 11/27/2017] [Indexed: 11/19/2022] Open
Abstract
The present study investigated two measures of spatial acoustic perception in children and adolescents with sensorineural hearing loss (SNHL) tested without their hearing aids and compared them to age-matched controls. Auditory localization was quantified by means of a sound source identification task and auditory spatial discrimination acuity by measuring minimum audible angles (MAA). Both low- and high-frequency noise bursts were employed in the tests to separately address spatial auditory processing based on interaural time and intensity differences. In SNHL children, localization (hit accuracy) was significantly reduced compared to normal-hearing children and intraindividual variability (dispersion) considerably increased. Given the respective impairments, performance based on interaural time differences (low frequencies) was still better than that based on intensity differences (high frequencies). For MAA, age-matched comparisons yielded not only increased MAA values in SNHL children, but also no decrease with increasing age compared to normal-hearing children. Deficits in MAA were most apparent in the frontal azimuth. Thus, children with SNHL do not seem to benefit from frontal positions of the sound sources as normal-hearing children do. The results indicate that the processing of spatial cues in SNHL children is restricted, which could also imply problems with speech understanding in challenging hearing situations.
17
Archer-Boyd AW, Holman JA, Brimijoin WO. The minimum monitoring signal-to-noise ratio for off-axis signals and its implications for directional hearing aids. Hear Res 2017; 357:64-72. [PMID: 29223929 PMCID: PMC5759949 DOI: 10.1016/j.heares.2017.11.011] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/15/2017] [Revised: 11/20/2017] [Accepted: 11/26/2017] [Indexed: 11/28/2022]
Abstract
The signal-to-noise ratio (SNR) benefit of hearing aid directional microphones is dependent on the angle of the listener relative to the target, something that can change drastically and dynamically in a typical group conversation. When a new target signal is significantly off-axis, directional microphones lead to slower target orientation, more complex movements, and more reversals. This raises the question of whether there is an optimal design for directional microphones. In principle an ideal microphone would provide the user with sufficient directionality to help with speech understanding, but not attenuate off-axis signals so strongly that orienting to new signals was difficult or impossible. We investigated the latter part of this question. In order to measure the minimal monitoring SNR for reliable orientation to off-axis signals, we measured head-orienting behaviour towards targets of varying SNRs and locations for listeners with mild to moderate bilateral symmetrical hearing loss. Listeners were required to turn and face a female talker in background noise and movements were tracked using a head-mounted crown and infrared system that recorded yaw in a ring of loudspeakers. The target appeared randomly at ± 45, 90 or 135° from the start point. The results showed that as the target SNR decreased from 0 dB to −18 dB, first movement duration and initial misorientation count increased, then fixation error, and finally reversals increased. Increasing the target angle increased movement duration at all SNRs, decreased reversals (above −12 dB target SNR), and had little to no effect on initial misorientations. These results suggest that listeners experience some difficulty orienting towards sources as the target SNR drops below −6 dB, and that if one intends to make a directional microphone that is usable in a moving conversation, then off-axis attenuation should be no more than 12 dB. 
- Investigated the minimum signal-to-noise ratio (SNR) required to localize a target.
- Head movement to targets at varying SNRs and locations was measured.
- Orienting towards a new off-axis target became difficult below −6 dB SNR.
- An ideal directional microphone should not attenuate off-axis sources by > 12 dB.
Affiliation(s)
- Alan W Archer-Boyd
- MRC/CSO Institute of Hearing Research (Scottish Section), Glasgow Royal Infirmary, 10-16 Alexandra Parade, Glasgow, G31 2ER, UK; MRC Cognition & Brain Sciences Unit, 15 Chaucer Road, Cambridge, CB2 7EF, UK.
- Jack A Holman
- MRC/CSO Institute of Hearing Research (Scottish Section), Glasgow Royal Infirmary, 10-16 Alexandra Parade, Glasgow, G31 2ER, UK
- W Owen Brimijoin
- MRC/CSO Institute of Hearing Research (Scottish Section), Glasgow Royal Infirmary, 10-16 Alexandra Parade, Glasgow, G31 2ER, UK
18
Lundbeck M, Grimm G, Hohmann V, Laugesen S, Neher T. Sensitivity to Angular and Radial Source Movements as a Function of Acoustic Complexity in Normal and Impaired Hearing. Trends Hear 2017; 21:2331216517717152. [PMID: 28675088 PMCID: PMC5548306 DOI: 10.1177/2331216517717152] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/13/2016] [Revised: 05/16/2017] [Accepted: 05/23/2017] [Indexed: 11/15/2022] Open
Abstract
In contrast to static sounds, spatially dynamic sounds have received little attention in psychoacoustic research so far. This holds true especially for acoustically complex (reverberant, multisource) conditions and impaired hearing. The current study therefore investigated the influence of reverberation and the number of concurrent sound sources on source movement detection in young normal-hearing (YNH) and elderly hearing-impaired (EHI) listeners. A listening environment based on natural environmental sounds was simulated using virtual acoustics and rendered over headphones. Both near-far ('radial') and left-right ('angular') movements of a frontal target source were considered. The acoustic complexity was varied by adding static lateral distractor sound sources as well as reverberation. Acoustic analyses confirmed the expected changes in stimulus features that are thought to underlie radial and angular source movements under anechoic conditions and suggested a special role of monaural spectral changes under reverberant conditions. Analyses of the detection thresholds showed that, with the exception of the single-source scenarios, the EHI group was less sensitive to source movements than the YNH group, despite adequate stimulus audibility. Adding static sound sources clearly impaired the detectability of angular source movements for the EHI (but not the YNH) group. Reverberation, on the other hand, clearly impaired radial source movement detection for the EHI (but not the YNH) listeners. These results illustrate the feasibility of studying factors related to auditory movement perception with the help of the developed test setup.
Affiliation(s)
- Micha Lundbeck
- Medizinische Physik and Cluster of Excellence ‘Hearing4all,’ Department of Medical Physics and Acoustics, Oldenburg University, Germany
- HörTech gGmbH, Oldenburg, Germany
- Giso Grimm
- Medizinische Physik and Cluster of Excellence ‘Hearing4all,’ Department of Medical Physics and Acoustics, Oldenburg University, Germany
- HörTech gGmbH, Oldenburg, Germany
- Volker Hohmann
- Medizinische Physik and Cluster of Excellence ‘Hearing4all,’ Department of Medical Physics and Acoustics, Oldenburg University, Germany
- HörTech gGmbH, Oldenburg, Germany
- Tobias Neher
- Medizinische Physik and Cluster of Excellence ‘Hearing4all,’ Department of Medical Physics and Acoustics, Oldenburg University, Germany
19
Lőcsei G, Pedersen JH, Laugesen S, Santurette S, Dau T, MacDonald EN. Temporal Fine-Structure Coding and Lateralized Speech Perception in Normal-Hearing and Hearing-Impaired Listeners. Trends Hear 2016; 20:2331216516660962. [PMID: 27601071 PMCID: PMC5014088 DOI: 10.1177/2331216516660962] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/15/2015] [Accepted: 07/01/2016] [Indexed: 11/16/2022] Open
Abstract
This study investigated the relationship between speech perception performance in spatially complex, lateralized listening scenarios and temporal fine-structure (TFS) coding at low frequencies. Young normal-hearing (NH) and two groups of elderly hearing-impaired (HI) listeners with mild or moderate hearing loss above 1.5 kHz participated in the study. Speech reception thresholds (SRTs) were estimated in the presence of either speech-shaped noise, two-, four-, or eight-talker babble played reversed, or a nonreversed two-talker masker. Target audibility was ensured by applying individualized linear gains to the stimuli, which were presented over headphones. The target and masker streams were lateralized to the same or to opposite sides of the head by introducing 0.7-ms interaural time differences between the ears. TFS coding was assessed by measuring frequency discrimination thresholds and interaural phase difference thresholds at 250 Hz. NH listeners had clearly better SRTs than the HI listeners. However, when maskers were spatially separated from the target, the amount of SRT benefit due to binaural unmasking differed only slightly between the groups. Neither the frequency discrimination threshold nor the interaural phase difference threshold tasks showed a correlation with the SRTs or with the amount of masking release due to binaural unmasking, respectively. The results suggest that, although HI listeners with normal hearing thresholds below 1.5 kHz experienced difficulties with speech understanding in spatially complex environments, these limitations were unrelated to TFS coding abilities and were only weakly associated with a reduction in binaural-unmasking benefit for spatially separated competing sources.
Affiliation(s)
- Gusztáv Lőcsei
- Department of Electrical Engineering, Technical University of Denmark, Kongens Lyngby, Denmark
- Søren Laugesen
- Eriksholm Research Centre, Oticon A/S, Snekkersten, Denmark
- Sébastien Santurette
- Department of Electrical Engineering, Technical University of Denmark, Kongens Lyngby, Denmark
- Torsten Dau
- Department of Electrical Engineering, Technical University of Denmark, Kongens Lyngby, Denmark
- Ewen N MacDonald
- Department of Electrical Engineering, Technical University of Denmark, Kongens Lyngby, Denmark
20
Jeong E, Ryu H. Melodic Contour Identification Reflects the Cognitive Threshold of Aging. Front Aging Neurosci 2016; 8:134. [PMID: 27378907 PMCID: PMC4904015 DOI: 10.3389/fnagi.2016.00134] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/01/2016] [Accepted: 05/27/2016] [Indexed: 01/16/2023] Open
Abstract
Cognitive decline is a natural phenomenon of aging. Although there exists a consensus that sensitivity to acoustic features of music is associated with such decline, no solid evidence has yet shown that structural elements and contexts of music explain this loss of cognitive performance. This study examined the extent and the type of cognitive decline that is related to the contour identification task (CIT) using tones with different pitches (i.e., melodic contours). Both younger and older adult groups participated in the CIT given in three listening conditions (i.e., focused, selective, and alternating). Behavioral data (accuracy and response times) and hemodynamic reactions were measured using functional near-infrared spectroscopy (fNIRS). Our findings showed cognitive declines in the older adult group but with a subtle difference from the younger adult group. The accuracy of the melodic CITs given in the target-like distraction task (CIT2) was significantly lower than that in the environmental noise (CIT1) condition in the older adult group, indicating that CIT2 may be a benchmark test for age-specific cognitive decline. The fNIRS findings also agreed with this interpretation, revealing significant increases in oxygenated hemoglobin (oxyHb) concentration in the younger (p < 0.05 for Δpre - on task; p < 0.01 for Δon – post task) rather than the older adult group (n.s for Δpre - on task; n.s for Δon – post task). We further concluded that the oxyHb difference was present in the brain regions near the right dorsolateral prefrontal cortex. Taken together, these findings suggest that CIT2 (i.e., the melodic contour task in the target-like distraction) is an optimized task that could indicate the degree and type of age-related cognitive decline.
Affiliation(s)
- Eunju Jeong
- Department of Arts and Technology, Hanyang University, Seoul, South Korea
- Hokyoung Ryu
- Department of Arts and Technology, Hanyang University, Seoul, South Korea
21
Weller T, Buchholz JM, Best V. Auditory masking of speech in reverberant multi-talker environments. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2016; 139:1303-1313. [PMID: 27036267 DOI: 10.1121/1.4944568] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/05/2023]
Abstract
Auditory localization research needs to be performed in more realistic testing environments to better capture the real-world abilities of listeners and their hearing devices. However, there are significant challenges involved in controlling the audibility of relevant target signals in realistic environments. To understand the important aspects influencing target detection in more complex environments, a reverberant room with a multi-talker background was simulated and presented to the listener in a loudspeaker-based virtual sound environment. Masked thresholds of a short speech stimulus were measured adaptively for multiple target source locations in this scenario. It was found that both distance and azimuth of the target source have a strong influence on the masked threshold. Subsequently, a functional model was applied to analyze the factors influencing target detectability. The model is comprised of an auditory front-end that generates an internal representation of the stimuli in both ears, followed by a decision device combining d' information across time, frequency and both ears. The model predictions of the masked thresholds were overall in very good agreement with the experimental results. An analysis of the model processes showed that head shadow effects, signal spectrum, and reverberation have a strong impact on target audibility in the given scenario.
Affiliation(s)
- Tobias Weller
- Department of Linguistics, Macquarie University, New South Wales 2109, Australia
- Jörg M Buchholz
- Department of Linguistics, Macquarie University, New South Wales 2109, Australia
- Virginia Best
- Boston University Hearing Research Center, Boston, Massachusetts 02215, USA
22
Abstract
Sensorineural hearing loss is the most common type of hearing impairment worldwide. It arises as a consequence of damage to the cochlea or auditory nerve, and several structures are often affected simultaneously. There are many causes, including genetic mutations affecting the structures of the inner ear, and environmental insults such as noise, ototoxic substances, and hypoxia. The prevalence increases dramatically with age. Clinical diagnosis is most commonly accomplished by measuring detection thresholds and comparing these to normative values to determine the degree of hearing loss. In addition to causing insensitivity to weak sounds, sensorineural hearing loss has a number of adverse perceptual consequences, including loudness recruitment, poor perception of pitch and auditory space, and difficulty understanding speech, particularly in the presence of background noise. The condition is usually incurable; treatment focuses on restoring the audibility of sounds made inaudible by hearing loss using either hearing aids or cochlear implants.
Affiliation(s)
- Kathryn Hopkins
- School of Psychological Sciences, University of Manchester, Manchester, UK.
23
Akeroyd MA. An overview of the major phenomena of the localization of sound sources by normal-hearing, hearing-impaired, and aided listeners. Trends Hear 2014; 18:18/0/2331216514560442. [PMID: 25492094 PMCID: PMC4271773 DOI: 10.1177/2331216514560442] [Citation(s) in RCA: 37] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022] Open
Abstract
Localizing a sound source requires the auditory system to determine its direction and its distance. In general, hearing-impaired listeners do less well in experiments measuring localization performance than normal-hearing listeners, and hearing aids often exacerbate matters. This article summarizes the major experimental effects in direction (and its underlying cues of interaural time differences and interaural level differences) and distance for normal-hearing, hearing-impaired, and aided listeners. Front/back errors and the importance of self-motion are noted. The influence of vision on the localization of real-world sounds is emphasized, such as through the ventriloquist effect or the intriguing link between spatial hearing and visual attention.
Affiliation(s)
- Michael A Akeroyd
- MRC/CSO Institute of Hearing Research-Scottish Section, Glasgow Royal Infirmary, Glasgow, UK
24
Sharma M, Dhamani I, Leung J, Carlile S. Attention, memory, and auditory processing in 10- to 15-year-old children with listening difficulties. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2014; 57:2308-2321. [PMID: 25198800 DOI: 10.1044/2014_jslhr-h-13-0226] [Citation(s) in RCA: 27] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/03/2013] [Accepted: 08/20/2014] [Indexed: 06/03/2023]
Abstract
PURPOSE The aim of this study was to examine attention, memory, and auditory processing in children with reported listening difficulty in noise (LDN) despite having clinically normal hearing. METHOD Twenty-one children with LDN and 15 children with no listening concerns (controls) participated. The clinically normed auditory processing tests included the Frequency/Pitch Pattern Test (FPT; Musiek, 2002), the Dichotic Digits Test (Musiek, 1983), the Listening in Spatialized Noise-Sentences (LiSN-S) test (Dillon, Cameron, Glyde, Wilson, & Tomlin, 2012), gap detection in noise (Baker, Jayewardene, Sayle, & Saeed, 2008), and masking level difference (MLD; Wilson, Moncrieff, Townsend, & Pillion, 2003). Also included were research-based psychoacoustic tasks, such as auditory stream segregation, localization, sinusoidal amplitude modulation (SAM), and fine structure perception. All were also evaluated on attention and memory test batteries. RESULTS The LDN group was significantly slower switching their auditory attention and had poorer inhibitory control. Additionally, the group mean results showed significantly poorer performance on FPT, MLD, 4-Hz SAM, and memory tests. Close inspection of the individual data revealed that only 5 participants (out of 21) in the LDN group showed significantly poor performance on FPT compared with clinical norms. Further testing revealed the frequency discrimination of these 5 children to be significantly impaired. CONCLUSION Thus, the LDN group showed deficits in attention switching and inhibitory control, whereas only a subset of these participants demonstrated an additional frequency resolution deficit.
25
Brungart DS, Cohen J, Cord M, Zion D, Kalluri S. Assessment of auditory spatial awareness in complex listening environments. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2014; 136:1808-1820. [PMID: 25324082 DOI: 10.1121/1.4893932] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/04/2023]
Abstract
In the real world, listeners often need to track multiple simultaneous sources in order to maintain awareness of the relevant sounds in their environments. Thus, there is reason to believe that simple single-source sound localization tasks may not accurately capture the impact that a listening device such as a hearing aid might have on a listener's level of auditory awareness. In this experiment, 10 normal-hearing listeners and 20 hearing-impaired listeners were tested in three different listening tasks of increasing complexity: a single-source localization task, where listeners identified and localized a single sound source presented in isolation; an added-source task, where listeners identified and localized a source that was added to an existing auditory scene; and a removed-source task, where listeners identified and localized a source that was removed from an existing auditory scene. Hearing-impaired listeners completed these tasks with and without the use of their previously fit hearing aids. As expected, the results show that performance decreased both with increasing task complexity and with the number of competing sound sources in the acoustic scene. The results also show that the added-source task was as sensitive to differences in performance across listening conditions as the standard localization task, but that it correlated with a different pattern of subjective and objective performance measures across listeners. This result suggests that a measure of complex auditory situation awareness such as the one tested here may be a useful tool for evaluating differences in performance across different types of listening devices, such as hearing aids or hearing protection devices.
Affiliation(s)
- Douglas S Brungart
- Walter Reed National Military Medical Center, 4954 North Palmer Road, Bethesda, Maryland 20889
- Julie Cohen
- Walter Reed National Military Medical Center, 4954 North Palmer Road, Bethesda, Maryland 20889
- Mary Cord
- Walter Reed National Military Medical Center, 4954 North Palmer Road, Bethesda, Maryland 20889
- Danielle Zion
- Walter Reed National Military Medical Center, 4954 North Palmer Road, Bethesda, Maryland 20889
- Sridhar Kalluri
- Starkey Hearing Research Center, 2150 Shattuck Avenue, Berkeley, California 94704
26
Dhamani I, Leung J, Carlile S, Sharma M. Switch attention to listen. Sci Rep 2013; 3:1297. [PMID: 23416613 PMCID: PMC3575018 DOI: 10.1038/srep01297] [Citation(s) in RCA: 18] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/14/2012] [Accepted: 02/01/2013] [Indexed: 11/09/2022] Open
Abstract
The aim of this research was to evaluate the ability to switch attention and selectively attend to relevant information in children (10-15 years) with persistent listening difficulties in noisy environments. A wide battery of clinical tests indicated that children with complaints of listening difficulties had otherwise normal hearing sensitivity and auditory processing skills. Here we show that these children are markedly slower to switch their attention compared to their age-matched peers. The results suggest poor attention switching, lack of response inhibition and/or poor listening effort consistent with a predominantly top-down (central) information processing deficit. A deficit in the ability to switch attention across talkers would provide the basis for this otherwise hidden listening disability, especially in noisy environments involving multiple talkers such as classrooms.
Affiliation(s)
- Imran Dhamani
- Audiology Section, Macquarie University and The Hearing CRC.