1. Anshu K, Kristensen K, Godar SP, Zhou X, Hartley SL, Litovsky RY. Speech Recognition and Spatial Hearing in Young Adults With Down Syndrome: Relationships With Hearing Thresholds and Auditory Working Memory. Ear Hear 2024; 45:1568-1584. PMID: 39090791; PMCID: PMC11493531; DOI: 10.1097/aud.0000000000001549.
Abstract
OBJECTIVES Individuals with Down syndrome (DS) have a higher incidence of hearing loss (HL) compared with their peers without developmental disabilities. Little is known about the associations between HL and functional hearing for individuals with DS. This study investigated two aspects of auditory functions, "what" (understanding the content of sound) and "where" (localizing the source of sound), in young adults with DS. Speech reception thresholds in quiet and in the presence of interferers provided insight into speech recognition, that is, the "what" aspect of auditory maturation. Insights into the "where" aspect of auditory maturation were gained from evaluating speech reception thresholds in colocated versus separated conditions (quantifying spatial release from masking), as well as right versus left discrimination and sound location identification. Auditory functions in the "where" domain develop during earlier stages of cognitive development, in contrast with the later-developing "what" functions. We hypothesized that young adults with DS would exhibit stronger "where" than "what" auditory functioning, albeit with the potential impact of HL. Considering the importance of auditory working memory and receptive vocabulary for speech recognition, we hypothesized that better speech recognition in young adults with DS, in quiet and with speech interferers, would be associated with better auditory working memory ability and receptive vocabulary. DESIGN Nineteen young adults with DS (aged 19 to 24 years) participated in the study and completed assessments of pure-tone audiometry, right versus left discrimination, sound location identification, and speech recognition in quiet and with speech interferers that were colocated or spatially separated. Results were compared with published data from children and adults without DS and HL, tested using similar protocols and stimuli. Digit Span tests assessed auditory working memory. Receptive vocabulary was examined using the Peabody Picture Vocabulary Test, Fifth Edition. RESULTS Seven participants (37%) had HL in at least 1 ear; 4 individuals had mild HL, and 3 had moderate HL or worse. Participants with mild or no HL had ≥75% correct at 5° separation on the discrimination task and sound localization root mean square errors (mean ± SD: 8.73° ± 2.63°) within the range of adults in the comparison group. Speech reception thresholds in young adults with DS were higher than those of all comparison groups. However, spatial release from masking did not differ between young adults with DS and comparison groups. Better (lower) speech reception thresholds were associated with better hearing and better auditory working memory ability. Receptive vocabulary did not predict speech recognition. CONCLUSIONS In the absence of HL, young adults with DS exhibited higher accuracy during spatial hearing tasks than during speech recognition tasks. Thus, auditory processes associated with the "where" pathways appear to be a relative strength compared with those associated with the "what" pathways in young adults with DS. Further, both HL and auditory working memory impairments contributed to difficulties in speech recognition in the presence of speech interferers. Future studies with larger samples are needed to replicate and extend our findings.
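The two summary measures in this abstract have simple arithmetic definitions: spatial release from masking is conventionally the colocated speech reception threshold minus the spatially separated one, and localization accuracy is often reported as a root mean square error. The Python sketch below illustrates both computations on made-up numbers; it is not the authors' analysis code, and the example values are hypothetical.

```python
import numpy as np

def spatial_release_from_masking(srt_colocated_db, srt_separated_db):
    """SRM (dB) is conventionally the colocated SRT minus the spatially
    separated SRT; positive values mean the listener benefits from
    spatial separation of target and interferers."""
    return srt_colocated_db - srt_separated_db

def rms_localization_error(target_az_deg, response_az_deg):
    """Root mean square error (degrees) between target and response azimuths."""
    target = np.asarray(target_az_deg, dtype=float)
    response = np.asarray(response_az_deg, dtype=float)
    return np.sqrt(np.mean((response - target) ** 2))

# Hypothetical listener: SRTs of -2 dB (colocated) and -8 dB (separated)
print(spatial_release_from_masking(-2.0, -8.0))            # 6.0 dB of SRM
print(rms_localization_error([-30, 0, 30], [-25, 5, 38]))   # about 6.2 degrees
```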
Affiliation(s)
- Kumari Anshu: Waisman Center, University of Wisconsin–Madison, Madison, WI, USA
- Kayla Kristensen: Waisman Center, University of Wisconsin–Madison, Madison, WI, USA
- Shelly P. Godar: Waisman Center, University of Wisconsin–Madison, Madison, WI, USA
- Xin Zhou: Waisman Center, University of Wisconsin–Madison, Madison, WI, USA (currently at The Chinese University of Hong Kong, Hong Kong)
- Sigan L. Hartley: Waisman Center and School of Human Ecology, University of Wisconsin–Madison, Madison, WI, USA
- Ruth Y. Litovsky: Waisman Center and Department of Communication Sciences and Disorders, University of Wisconsin–Madison, Madison, WI, USA
2. Folkerts ML, Picou EM, Stecker GC. Spectral weighting functions for localization of complex sound. III. The effect of sensorineural hearing loss. J Acoust Soc Am 2024; 156:2434-2447. PMID: 39400266; PMCID: PMC11479636; DOI: 10.1121/10.0030471.
Abstract
Spectral weighting functions for sound localization were measured in participants with bilateral mild sloping to moderately severe, high-frequency sensorineural hearing loss (SNHL) and compared to normal-hearing (NH) participants with and without simulated SNHL. Each participant group localized three types of complex tones, each composed of seven frequency components that were spatially jittered and presented from the horizontal frontal field. A threshold-elevating noise masker was presented in the free field to simulate SNHL for participants with NH. On average, participants with SNHL and NH (in quiet and with simulated SNHL) placed the greatest perceptual weight on components within the interaural time difference "dominance region," previously found to peak around 800 Hz [Folkerts and Stecker, J. Acoust. Soc. Am. 151, 3409-3425 (2022)]. In addition to the peak at 800 Hz, both participant groups (including NH participants in quiet) placed nearly equal weight on 400 Hz, resulting in a broadened "peak" in the dominance region, most likely due to reduced audibility of the higher-frequency components. However, individual weighting strategies were more variable across participants with SNHL than across participants with NH. Localization performance was reduced for participants with SNHL, but not for NH participants with simulated hearing loss, when compared to NH participants in quiet.
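Spectral weighting functions of this kind are typically estimated by independently jittering the azimuth of each frequency component on every trial and regressing the listener's response azimuth onto the per-component azimuths, with the normalized regression coefficients taken as the perceptual weights. The sketch below illustrates that regression approach on synthetic data; it is an assumed, generic formulation rather than the authors' exact procedure.

```python
import numpy as np

def spectral_weights(component_az_deg, response_az_deg):
    """Estimate per-component perceptual weights by multiple linear
    regression of the response azimuth onto the independently jittered
    component azimuths (trials x components). Coefficients are
    normalized to sum to 1 in absolute value."""
    X = np.asarray(component_az_deg, dtype=float)     # shape (n_trials, n_components)
    y = np.asarray(response_az_deg, dtype=float)      # shape (n_trials,)
    X = np.column_stack([X, np.ones(len(y))])         # add intercept term
    coefs, *_ = np.linalg.lstsq(X, y, rcond=None)
    w = coefs[:-1]
    return w / np.sum(np.abs(w))

# Hypothetical data: 200 trials, 7 components jittered around 0 degrees
rng = np.random.default_rng(0)
true_w = np.array([0.05, 0.25, 0.35, 0.15, 0.10, 0.05, 0.05])
jitter = rng.uniform(-10, 10, size=(200, 7))
responses = jitter @ true_w + rng.normal(0, 2, size=200)
print(np.round(spectral_weights(jitter, responses), 2))
```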
Affiliation(s)
- Monica L Folkerts: School of Communication Sciences and Disorders, University of Central Florida, 4364 Scorpius Street, HSII, Suite 101, Orlando, Florida 32816-2215, USA
- Erin M Picou: Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, 1215 21st Avenue South, Nashville, Tennessee 37232, USA
- G Christopher Stecker: Center for Hearing Research, Boys Town National Research Hospital, 555 N. 30th Street, Omaha, Nebraska 68131, USA
3. Best V, Roverud E. Externalization of Speech When Listening With Hearing Aids. Trends Hear 2024; 28:23312165241229572. PMID: 38347733; PMCID: PMC10865954; DOI: 10.1177/23312165241229572.
Abstract
Subjective reports indicate that hearing aids can disrupt sound externalization and/or reduce the perceived distance of sounds. Here we conducted an experiment to explore this phenomenon and to quantify how frequently it occurs for different hearing-aid styles. Of particular interest were the effects of microphone position (behind the ear vs. in the ear) and dome type (closed vs. open). Participants were young adults with normal hearing or with bilateral hearing loss, who were fitted with hearing aids that allowed variations in the microphone position and the dome type. They were seated in a large sound-treated booth and presented with monosyllabic words from loudspeakers at a distance of 1.5 m. Their task was to rate the perceived externalization of each word using a rating scale that ranged from 10 (at the loudspeaker in front) to 0 (in the head) to -10 (behind the listener). On average, compared to unaided listening, hearing aids tended to reduce perceived distance and lead to more in-the-head responses. This was especially true for closed domes in combination with behind-the-ear microphones. The behavioral data along with acoustical recordings made in the ear canals of a manikin suggest that increased low-frequency ear-canal levels (with closed domes) and ambiguous spatial cues (with behind-the-ear microphones) may both contribute to breakdowns of externalization.
Affiliation(s)
- Virginia Best: Department of Speech, Language and Hearing Sciences, Boston University, Boston, MA 02215, USA
- Elin Roverud: Department of Speech, Language and Hearing Sciences, Boston University, Boston, MA 02215, USA
4. Ramírez M, Arend JM, von Gablenz P, Liesefeld HR, Pörschmann C. Toward Sound Localization Testing in Virtual Reality to Aid in the Screening of Auditory Processing Disorders. Trends Hear 2024; 28:23312165241235463. PMID: 38425297; PMCID: PMC10908240; DOI: 10.1177/23312165241235463.
Abstract
Sound localization testing is key for comprehensive hearing evaluations, particularly in cases of suspected auditory processing disorders. However, sound localization is not commonly assessed in clinical practice, likely due to the complexity and size of conventional measurement systems, which require semicircular loudspeaker arrays in large and acoustically treated rooms. To address this issue, we investigated the feasibility of testing sound localization in virtual reality (VR). Previous research has shown that virtualization can lead to an increase in localization blur. To measure these effects, we conducted a study with a group of normal-hearing adults, comparing sound localization performance in different augmented reality and VR scenarios. We started with a conventional loudspeaker-based measurement setup and gradually moved to a virtual audiovisual environment, testing sound localization in each scenario using a within-participant design. The loudspeaker-based experiment yielded results comparable to those reported in the literature, and the results of the virtual localization test provided new insights into localization performance in state-of-the-art VR environments. By comparing localization performance between the loudspeaker-based and virtual conditions, we were able to estimate the increase in localization blur induced by virtualization relative to a conventional test setup. Notably, our study provides the first proxy normative cutoff values for sound localization testing in VR. As an outlook, we discuss the potential of a VR-based sound localization test as a suitable, accessible, and portable alternative to conventional setups and how it could serve as a time- and resource-saving prescreening tool to avoid unnecessarily extensive and complex laboratory testing.
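Two quantities mentioned here lend themselves to simple illustration: the increase in localization blur induced by virtualization (a per-participant difference in localization error between the virtual and loudspeaker-based conditions) and a normative screening cutoff. The abstract does not state how the cutoffs were derived, so the percentile rule in the sketch below is purely an assumption for illustration, as are the numbers.

```python
import numpy as np

def virtualization_blur_increase(errors_loudspeaker_deg, errors_vr_deg):
    """Per-participant increase in localization blur induced by
    virtualization: VR error minus loudspeaker-based error (degrees)."""
    return np.asarray(errors_vr_deg, float) - np.asarray(errors_loudspeaker_deg, float)

def normative_cutoff(normal_hearing_errors_deg, percentile=95):
    """One plausible (assumed) screening rule: flag listeners whose error
    exceeds the 95th percentile of a normal-hearing reference sample."""
    return np.percentile(np.asarray(normal_hearing_errors_deg, float), percentile)

# Hypothetical per-participant localization errors (degrees)
loudspeaker_errors = np.array([4.1, 5.0, 3.8, 6.2, 4.6])
vr_errors          = np.array([6.5, 7.2, 5.9, 8.8, 6.1])
print(virtualization_blur_increase(loudspeaker_errors, vr_errors))
print(normative_cutoff(vr_errors))   # assumed percentile-based cutoff
```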
Affiliation(s)
- Melissa Ramírez: Institute of Computer and Communication Technology, TH Köln University of Applied Sciences, Cologne, Germany; Audio Communication Group, Technische Universität Berlin, Berlin, Germany
- Johannes M. Arend: Audio Communication Group, Technische Universität Berlin, Berlin, Germany
- Petra von Gablenz: Institute of Hearing Technology and Audiology, Jade University of Applied Sciences and Cluster of Excellence ‘Hearing4all’, Oldenburg, Germany
- Christoph Pörschmann: Institute of Computer and Communication Technology, TH Köln University of Applied Sciences, Cologne, Germany
5. Higgins NC, Pupo DA, Ozmeral EJ, Eddins DA. Head movement and its relation to hearing. Front Psychol 2023; 14:1183303. PMID: 37448716; PMCID: PMC10338176; DOI: 10.3389/fpsyg.2023.1183303.
Abstract
Head position at any point in time plays a fundamental role in shaping the auditory information that reaches a listener, information that continuously changes as the head moves and reorients to different listening situations. The connection between hearing science and the kinesthetics of head movement has gained interest due to technological advances that have increased the feasibility of providing behavioral and biological feedback to assistive listening devices that can interpret movement patterns that reflect listening intent. Increasing evidence also shows that the negative impact of hearing deficits on mobility, gait, and balance may be mitigated by prosthetic hearing device intervention. Better understanding of the relationships between head movement, full-body kinetics, and hearing health should lead to improved signal processing strategies across a range of assistive and augmented hearing devices. The purpose of this review is to introduce the wider hearing community to the kinesiology of head movement and to place it in the context of hearing and communication, with the goal of expanding the field of ecologically specific listener behavior.
Affiliation(s)
- Nathan C. Higgins: Department of Communication Sciences and Disorders, University of South Florida, Tampa, FL, United States
- Daniel A. Pupo: Department of Communication Sciences and Disorders, University of South Florida, Tampa, FL, United States; School of Aging Studies, University of South Florida, Tampa, FL, United States
- Erol J. Ozmeral: Department of Communication Sciences and Disorders, University of South Florida, Tampa, FL, United States
- David A. Eddins: Department of Communication Sciences and Disorders, University of South Florida, Tampa, FL, United States
6. Nisha KV, Uppunda AK, Kumar RT. Spatial rehabilitation using virtual auditory space training paradigm in individuals with sensorineural hearing impairment. Front Neurosci 2023; 16:1080398. PMID: 36733923; PMCID: PMC9887142; DOI: 10.3389/fnins.2022.1080398.
Abstract
Purpose The present study aimed to quantify the effects of spatial training using virtual sources on a battery of spatial acuity measures in listeners with sensorineural hearing impairment (SNHI). Methods An intervention-based time-series comparison design involving 82 participants divided into three groups was adopted. Group I (n = 27, SNHI-spatially trained) and group II (n = 25, SNHI-untrained) consisted of SNHI listeners, while group III (n = 30) had listeners with normal hearing (NH). The study was conducted in three phases. In the pre-training phase, all the participants underwent a comprehensive assessment of their spatial processing abilities using a battery of tests including spatial acuity in free-field and closed-field scenarios, tests of binaural processing abilities (interaural time difference threshold [ITD] and interaural level difference threshold [ILD]), and subjective ratings. While spatial acuity in the free field was assessed using a loudspeaker-based localization test, the closed-field source identification test was performed using virtual stimuli delivered through headphones. The ITD and ILD thresholds were obtained using a MATLAB psychoacoustic toolbox, while ratings on the spatial subsection of the Speech, Spatial and Qualities of Hearing questionnaire in Kannada provided the subjective measure. Group I listeners underwent virtual auditory spatial training (VAST) following the pre-evaluation assessments. All tests were re-administered on the group I listeners halfway through training (mid-training evaluation phase) and after training completion (post-training evaluation phase), whereas group II underwent these tests without any training at the same time intervals. Results and discussion Statistical analysis showed a main effect of group in all tests at the pre-training evaluation phase, with post hoc comparisons revealing group equivalency in the spatial performance of both SNHI groups (groups I and II). The effect of VAST in group I was evident in all the tests, with the localization test showing the highest predictive power for capturing VAST-related changes in Fisher discriminant analysis (FDA). In contrast, group II demonstrated no changes in spatial acuity across the measurement timelines. FDA revealed increased errors in the categorization of NH listeners as SNHI-trained at the post-training evaluation compared to the pre-training evaluation, as the spatial performance of the latter improved with VAST in the post-training phase. Conclusion The study demonstrated positive outcomes of spatial training using VAST in listeners with SNHI. The utility of this training program can be extended to other clinical populations with spatial auditory processing deficits, such as auditory neuropathy spectrum disorder, cochlear implant use, and central auditory processing disorders.
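Fisher discriminant analysis, as used here to ask which tests best separate the listener groups, can be illustrated with scikit-learn's LinearDiscriminantAnalysis. The feature names and scores below are hypothetical stand-ins for the study's test battery, not the actual data.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Hypothetical feature matrix: one row per listener, columns are scores from
# a spatial test battery (localization error, virtual source identification,
# ITD threshold, ILD threshold, SSQ spatial rating).
rng = np.random.default_rng(1)
X_nh   = rng.normal([5, 80, 60, 1.0, 8], [1, 5, 15, 0.3, 0.5], size=(30, 5))
X_snhi = rng.normal([12, 55, 140, 2.2, 5], [3, 8, 30, 0.6, 1.0], size=(27, 5))
X = np.vstack([X_nh, X_snhi])
y = np.array(["NH"] * 30 + ["SNHI-trained"] * 27)

lda = LinearDiscriminantAnalysis()
lda.fit(X, y)
# Discriminant coefficients indicate which tests separate the groups best;
# increased confusion of NH with SNHI-trained listeners after training would
# appear as a drop in accuracy when the model is re-fit on post-training scores.
print(lda.score(X, y))
print(dict(zip(["loc", "vasi", "itd", "ild", "ssq"], np.round(lda.coef_[0], 3))))
```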
7. Nisha KV, Durai R, Konadath S. Musical Training and Its Association With Age-Related Changes in Binaural, Temporal, and Spatial Processing. Am J Audiol 2022; 31:669-683. PMID: 35772171; DOI: 10.1044/2022_aja-21-00227.
Abstract
OBJECTIVE This article aimed to assess the relationship between musical training and age-related changes in binaural, temporal, and spatial processing abilities. DESIGN A standard group comparison study was conducted involving both musicians and nonmusicians. The effect of musical training was assessed using a battery of psychoacoustical tests (interaural time and level difference thresholds: ITD & ILD, binaural gap detection threshold, and virtual auditory space identification test) and subjective ratings (Spatial-Hearing subsection of Speech, Spatial, and Quality of Hearing scale in Kannada). STUDY SAMPLE A total of 60 participants, between 41 and 70 years, were divided into three groups of 20 each, based on their age (41-50, 51-60, and 61-70 years). Each of these three groups was subdivided into two, one comprising 10 musicians (vocalists practicing South-Indian classical music) and the other comprising 10 nonmusicians. RESULTS Multivariate analyses of variance revealed that musicians performed significantly better (p < .001) than nonmusicians in all the tests. Analyses of variance showed that whereas age had no effect (p > .05) on performance in any of the tests in musicians, age affected the performance of nonmusicians significantly in terms of ITD (p = .02) and ILD (p = .01) thresholds. CONCLUSION Musical training appears to have the potential to slow down age-related decline in binaural, temporal, and spatial processing.
Affiliation(s)
- Ranjini Durai: Department of Audiology, All India Institute of Speech and Hearing, Mysuru
- Sreeraj Konadath: Department of Audiology, All India Institute of Speech and Hearing, Mysuru
8. Russell MK. Age and Auditory Spatial Perception in Humans: Review of Behavioral Findings and Suggestions for Future Research. Front Psychol 2022; 13:831670. PMID: 35250777; PMCID: PMC8888835; DOI: 10.3389/fpsyg.2022.831670.
Abstract
It has been well documented, and is fairly well known, that an increase in chronological age is accompanied by a corresponding increase in sensory impairment. As most people realize, our hearing suffers as we get older; hence the increased need for hearing aids. The first portion of the present paper addresses how chronological age affects auditory judgments of sound source position. A summary of the literature evaluating changes in the perception of sound source location and the perception of sound source motion as a function of chronological age is presented. The review is limited to empirical studies with behavioral findings involving humans. It is the view of the author that we have an immensely limited understanding of how chronological age affects the perception of space when based on sound. The latter part of the paper discusses how auditory spatial perception is traditionally studied in the laboratory. There are sound theoretical reasons for conducting research in the manner it has been conducted. Nonetheless, from an ecological perspective, the vast majority of previous research can be considered unnatural and greatly lacking in ecological validity. Suggestions for an alternative and more ecologically valid approach to the investigation of auditory spatial perception are proposed. It is believed that an ecological approach to auditory spatial perception will enhance our understanding of the extent to which individuals perceive sound source location and how those perceptual judgments change with increasing chronological age.
9. Evaluation of Extended-Wear Hearing Aids as a Solution for Intermittently Noise-Exposed Listeners With Hearing Loss. Ear Hear 2021; 42:1544-1559. PMID: 33974779; DOI: 10.1097/aud.0000000000001044.
Abstract
OBJECTIVES Many individuals with noise-related hearing loss continue working in environments where they are periodically exposed to high levels of noise, which increases their risk for further hearing loss. These individuals often must remove their hearing aids in operational environments because of incompatibility with the mandated personal protective equipment, thus reducing situational awareness. Extended-wear hearing aids might provide a solution for these individuals because they can be worn for weeks or months at a time, protect users from high-level noise exposures, and are compatible with communication headsets, earmuffs, and other types of personal protective equipment. The purpose of this study was to evaluate localization ability and speech understanding, feasibility of fitting and use, and acceptability in terms of comfort in a population of noise-exposed, active duty Service members. DESIGN Participants in the study were active duty Service members who were experienced hearing aid users and were currently using standard hearing aids bilaterally. Participants were fitted with extended-wear hearing aids for up to 14 weeks. Laboratory measures included functional gain, sound localization, and speech recognition (in quiet and in noise). Performance was compared between unaided listening, standard hearing aids, extended-wear hearing aids, and extended-wear hearing aids combined with a tactical communication device (3M Peltor ComTac). In addition, self-perceived benefit was compared between extended-wear hearing aids and standard hearing aids. RESULTS The extended-wear hearing aids provided more attenuation of external sound when turned off compared to standard hearing aids. Speech understanding in quiet and in noise was comparable between extended-wear hearing aids and standard hearing aids and was better when a tactical communication device was worn in addition to extended-wear hearing aids. Localization was poorest with extended-wear hearing aids, intermediate with standard hearing aids, and best when the ears were unaided. The extended-wear hearing aids and standard hearing aids provided similar self-perceived communication benefits relative to unaided ears. Device failure and issues with extended-wear hearing aid fit and comfort contributed to a high participant withdrawal rate. CONCLUSIONS Overall, the hearing benefits of extended-wear hearing aids for Service members with hearing loss were comparable to those obtained with standard hearing aids, except for sound localization, which was poorer with extended-wear hearing aids. Extended-wear hearing aids provide the additional benefits of protecting the ears from high-level impulsive noise and being compatible with tactical communication and protection systems and other existing personal protective equipment and communication gear. The withdrawal rate in this study, however, suggests that extended-wear hearing aids may not be suitable for active duty Service members in locations where properly trained hearing professionals are not available to replace or re-insert extended-wear hearing aids when needed due to discomfort or device failure.
10.
Abstract
OBJECTIVES Current hearing aids have a limited bandwidth, which limits the intelligibility and quality of their output and inhibits their uptake. Recent advances in signal processing, as well as novel methods of transduction, allow for a greater usable frequency range. Previous studies have shown a benefit of this extended bandwidth for consonant recognition, talker-sex identification, and separating sound sources. To explore whether there would be any direct spatial benefits to extending bandwidth, we used a dynamic localization method in a realistic situation. DESIGN Twenty-eight adult participants with minimal hearing loss reoriented themselves as quickly and accurately as comfortable toward a new, off-axis, near-field talker continuing a story in a background of far-field talkers of the same overall level, in a simulated large room with common building materials. All stimuli were low-pass filtered at either 5 or 10 kHz on each trial. To further simulate current hearing aids, participants wore microphones above the pinnae and insert earphones adjusted to provide a linear, zero-gain response. RESULTS Each individual trajectory was recorded with infrared motion tracking and analyzed for accuracy, duration, start time, peak velocity, peak velocity time, complexity, reversals, and misorientations. Results across listeners showed a significant increase in peak velocity and significant decreases in start time and peak velocity time with the greater (10 kHz) bandwidth. CONCLUSIONS These earlier, swifter orientations demonstrate spatial benefits beyond static localization accuracy in plausible conditions; extended bandwidth without pinna cues provided more salient cues in a realistic mixture of talkers.
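The trajectory metrics listed here (start time, peak velocity, peak velocity time) can be derived from a motion-tracked yaw trace by numerical differentiation. The sketch below uses an assumed velocity-threshold criterion for the start time and synthetic data; it illustrates the general idea rather than the authors' analysis.

```python
import numpy as np

def orientation_metrics(t_s, yaw_deg, start_threshold_deg_s=10.0):
    """Summarize one head-reorientation trajectory from motion tracking.

    t_s      : sample times in seconds
    yaw_deg  : head yaw angle in degrees at each sample
    Returns peak velocity (deg/s), time of peak velocity (s), and start
    time (s), defined here (as an assumption) as the first sample at which
    angular speed exceeds start_threshold_deg_s.
    """
    t = np.asarray(t_s, float)
    yaw = np.asarray(yaw_deg, float)
    speed = np.abs(np.gradient(yaw, t))          # angular speed, deg/s
    peak_idx = int(np.argmax(speed))
    above = np.nonzero(speed > start_threshold_deg_s)[0]
    start_time = t[above[0]] if above.size else np.nan
    return speed[peak_idx], t[peak_idx], start_time

# Hypothetical 2-s trajectory sampled at 100 Hz: a smooth 60-degree turn
t = np.linspace(0, 2, 201)
yaw = 60 / (1 + np.exp(-8 * (t - 1.0)))          # sigmoidal reorientation
print(orientation_metrics(t, yaw))
```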
11. Cui D, Cai Y, Yu G. A Graphical-User-Interface-Based Azimuth-Collection Method in Autonomous Auditory Localization of Real and Virtual Sound Sources. IEEE J Biomed Health Inform 2021; 25:988-996. PMID: 32750969; DOI: 10.1109/jbhi.2020.3011377.
Abstract
Auditory localization of spatial sound sources is an important life skill for human beings. For practical, application-oriented measurement of auditory localization ability, the preferred method is a compromise among (i) data accuracy, (ii) the ease of collecting reported directions, and (iii) the cost of hardware and software. The graphical user interface (GUI)-based sound-localization experimental platform proposed here (i) is inexpensive, (ii) can be operated autonomously by the listener, (iii) can store results online, and (iv) supports real or virtual sound sources. To evaluate the accuracy of this method, three groups of azimuthal localization experiments were conducted with normal-hearing subjects, using 12 loudspeakers arranged at equal azimuthal intervals of 30° in the horizontal plane. In these experiments, the azimuths were reported using (i) an assistant, (ii) a motion tracker, or (iii) the newly designed GUI-based method. All three groups of results show that the localization errors are mostly within 5-12°, which is consistent with previous results from different localization experiments. Finally, virtual sound source stimuli were integrated into the GUI-based experimental platform. The results with the virtual sources suggest that using individualized head-related transfer functions achieves better performance in spatial sound source localization, which is consistent with previous conclusions and further validates the reliability of this experimental platform.
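Because the loudspeakers span the full horizontal circle, localization errors should be computed on a circular scale so that, for example, a response of 355° to a 330° target counts as 25° and not 335°. A minimal sketch of that wrapped-error computation, on hypothetical responses, follows.

```python
import numpy as np

def wrapped_azimuth_error(target_deg, response_deg):
    """Signed azimuth error wrapped to (-180, 180] degrees, appropriate
    when sources span the full horizontal plane."""
    diff = (np.asarray(response_deg, float) - np.asarray(target_deg, float)) % 360.0
    return np.where(diff > 180.0, diff - 360.0, diff)

# Hypothetical responses on a 12-loudspeaker ring (30-degree spacing)
targets   = np.array([0, 30, 150, 330])
responses = np.array([8, 22, 160, 355])
errors = wrapped_azimuth_error(targets, responses)
print(errors)                    # [  8.  -8.  10.  25.]
print(np.mean(np.abs(errors)))   # mean absolute error, degrees
```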
12. Gallun FJ. Impaired Binaural Hearing in Adults: A Selected Review of the Literature. Front Neurosci 2021; 15:610957. PMID: 33815037; PMCID: PMC8017161; DOI: 10.3389/fnins.2021.610957.
Abstract
Despite over 100 years of study, there are still many fundamental questions about binaural hearing that remain unanswered, including how impairments of binaural function are related to the mechanisms of binaural hearing. This review focuses on a number of studies that are fundamental to understanding what is known about the effects of peripheral hearing loss, aging, traumatic brain injury, strokes, brain tumors, and multiple sclerosis (MS) on binaural function. The literature reviewed makes clear that while each of these conditions has the potential to impair the binaural system, the specific abilities of a given patient cannot be known without performing multiple behavioral and/or neurophysiological measurements of binaural sensitivity. Future work in this area has the potential to bring awareness of binaural dysfunction to patients and clinicians as well as a deeper understanding of the mechanisms of binaural hearing, but it will require the integration of clinical research with animal and computational modeling approaches.
Affiliation(s)
- Frederick J. Gallun: Oregon Hearing Research Center, Oregon Health and Science University, Portland, OR, United States
13. Buchholz JM, Best V. Speech detection and localization in a reverberant multitalker environment by normal-hearing and hearing-impaired listeners. J Acoust Soc Am 2020; 147:1469. PMID: 32237797; PMCID: PMC7058429; DOI: 10.1121/10.0000844.
Abstract
Spatial perception is an important part of a listener's experience and ability to function in everyday environments. However, the current understanding of how well listeners can locate sounds is based on measurements made using relatively simple stimuli and tasks. Here the authors investigated sound localization in a complex and realistic environment for listeners with normal and impaired hearing. A reverberant room containing a background of multiple talkers was simulated and presented to listeners in a loudspeaker-based virtual sound environment. The target was a short speech stimulus presented at various azimuths and distances relative to the listener. To ensure that the target stimulus was detectable to the listeners with hearing loss, masked thresholds were first measured on an individual basis and used to set the target level. Despite this compensation, listeners with hearing loss were less accurate at locating the target, showing increased front-back confusion rates and higher root-mean-square errors. Poorer localization was associated with poorer masked thresholds and with more severe low-frequency hearing loss. Localization accuracy in the multitalker background was lower than in quiet and also declined for more distant targets. However, individual accuracy in noise and quiet was strongly correlated.
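Front-back confusion rate, one of the error measures reported here, is usually operationalized as the proportion of trials in which the response falls on the opposite side of the interaural axis from the target, often excluding targets close to that axis. The sketch below uses that assumed convention with made-up trials; it is not the authors' scoring code.

```python
import numpy as np

def front_back_confusion_rate(target_deg, response_deg, exclusion_deg=10.0):
    """Fraction of trials in which the response lies on the opposite side of
    the interaural (left-right) axis from the target. Azimuths in degrees:
    0 = front, 90 = right, 180 = back. Targets within exclusion_deg of the
    interaural axis are excluded (assumed convention)."""
    t = np.mod(np.asarray(target_deg, float), 360.0)
    r = np.mod(np.asarray(response_deg, float), 360.0)
    t_front = (t < 90.0) | (t > 270.0)         # frontal hemifield
    r_front = (r < 90.0) | (r > 270.0)
    near_axis = (np.abs(t - 90.0) < exclusion_deg) | (np.abs(t - 270.0) < exclusion_deg)
    keep = ~near_axis
    return np.mean(t_front[keep] != r_front[keep])

# Hypothetical data: one clear front-back reversal out of four scored trials
targets   = [0, 45, 180, 315]
responses = [5, 150, 175, 300]
print(front_back_confusion_rate(targets, responses))   # 0.25
```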
Affiliation(s)
- Jörg M Buchholz: Department of Linguistics, Australian Hearing Hub, 16 University Avenue, Macquarie University, Sydney, New South Wales 2109, Australia
- Virginia Best: Department of Speech, Language and Hearing Sciences, Boston University, Boston, Massachusetts 02215, USA
14. Nisha KV, Kumar UA. Pre-Attentive Neural Signatures of Auditory Spatial Processing in Listeners With Normal Hearing and Sensorineural Hearing Impairment: A Comparative Study. Am J Audiol 2019; 28:437-449. PMID: 31461328; DOI: 10.1044/2018_aja-ind50-18-0099.
Abstract
Purpose This study was carried out to understand the neural intricacies of auditory spatial processing in listeners with sensorineural hearing impairment (SNHI) and to compare them with normal hearing (NH) listeners using both local and global measures of waveform analysis. Method A standard group comparison research design was adopted. Participants were assigned to 2 groups. Group I consisted of 13 participants with mild-moderate flat or sloping SNHI, while Group II consisted of 13 participants with NH sensitivity. Electroencephalographic data elicited by virtual acoustic stimuli (spatially loaded stimuli played in the center, right, and left hemifields) were recorded from 64 electrode sites in a passive oddball paradigm. Both local (electrode-wise waveform analysis) and global (dissimilarity index, electric field strength, and topographic pattern analyses) measures were applied to the electroencephalographic data. Results The local waveform analyses showed that mismatch negativity appeared in an earlier time window than conventionally reported, in both groups. The global measures of electric field strength and topographic modulation (dissimilarity index) revealed differences between the 2 groups in different time periods, indicating multiple phases (integration and consolidation) of spatial processing. Further, the topographic pattern analysis showed the emergence of different scalp maps for SNHI and NH in the time window corresponding to mismatch negativity (78-150 ms), suggestive of differential spatial processing between the groups at the cortical level. Conclusions The findings of this study highlight the differential allocation of neural generators, denoting variations in spatial processing between SNHI and NH individuals.
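The global measures named here, electric field strength and the dissimilarity index, have standard formulations in EEG topographic analysis: global field power is the standard deviation across electrodes of the average-referenced potentials, and global map dissimilarity is the global field power of the difference between GFP-normalized maps. The sketch below illustrates those standard definitions on synthetic data; it is not the authors' processing pipeline.

```python
import numpy as np

def global_field_power(v):
    """Global field power: the standard deviation across electrodes of the
    average-referenced potentials, one value per time point.
    v has shape (n_electrodes, n_times)."""
    v = v - v.mean(axis=0, keepdims=True)        # average reference
    return np.sqrt(np.mean(v ** 2, axis=0))

def dissimilarity_index(u, v):
    """Global map dissimilarity (DISS) between two conditions, per time
    point: the GFP of the difference of the GFP-normalized maps.
    Values range from 0 (identical topographies) to 2 (inverted)."""
    u = u - u.mean(axis=0, keepdims=True)
    v = v - v.mean(axis=0, keepdims=True)
    u_norm = u / global_field_power(u)
    v_norm = v / global_field_power(v)
    return np.sqrt(np.mean((u_norm - v_norm) ** 2, axis=0))

# Hypothetical 64-channel ERPs for two groups, 100 time points each
rng = np.random.default_rng(2)
erp_nh   = rng.normal(size=(64, 100))
erp_snhi = erp_nh * 0.8 + rng.normal(scale=0.3, size=(64, 100))
print(global_field_power(erp_nh)[:3])
print(dissimilarity_index(erp_nh, erp_snhi)[:3])
```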
Affiliation(s)
- K. V. Nisha: Department of Audiology, All India Institute of Speech and Hearing (AIISH), Naimisham Campus, Manasagangothri, Mysore-570006, Karnataka State, India
- U. Ajith Kumar: Department of Audiology, All India Institute of Speech and Hearing (AIISH), Naimisham Campus, Manasagangothri, Mysore-570006, Karnataka State, India
15. Courtois G, Lissek H, Estoppey P, Oesch Y, Gigandet X. Effects of Binaural Spatialization in Wireless Microphone Systems for Hearing Aids on Normal-Hearing and Hearing-Impaired Listeners. Trends Hear 2018; 22:2331216517753548. PMID: 29457537; PMCID: PMC5821302; DOI: 10.1177/2331216517753548.
Abstract
Little is known about the perception of artificial spatial hearing by hearing-impaired subjects. The purpose of this study was to investigate how listeners with hearing disorders perceived the effect of a spatialization feature designed for wireless microphone systems. Forty listeners took part in the experiments. They were divided into four groups: normal hearing, and moderate, severe, and profound hearing loss. Their performance in terms of speech understanding and speaker localization was assessed with diotic and binaural stimuli. The results of the speech intelligibility experiment revealed that subjects with moderate or severe hearing impairment understood speech better with the spatialization feature. Thus, it was demonstrated that the conventional diotic binaural summation operated by current wireless systems can be transformed to reproduce the spatial cues required to localize the speaker, without any loss of intelligibility. The speaker localization experiment showed that a majority of the hearing-impaired listeners had similar performance with natural and artificial spatial hearing, contrary to the normal-hearing listeners. This suggests that certain subjects with hearing impairment preserve their localization abilities with approximated generic head-related transfer functions in the frontal horizontal plane.
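Binaural spatialization of a wireless-microphone signal of the kind described here amounts to filtering the mono signal with a pair of head-related impulse responses (HRIRs) for the desired direction. The sketch below shows that rendering step with toy HRIRs; the delays and gains are invented for illustration, and real systems would use measured generic HRIRs, so this is an assumed illustration rather than the authors' implementation.

```python
import numpy as np
from scipy.signal import fftconvolve

def spatialize(mono_signal, hrir_left, hrir_right):
    """Render a mono wireless-microphone signal at a virtual direction by
    convolving it with a pair of head-related impulse responses.
    Returns a (n_samples, 2) stereo array for the left and right ears."""
    left = fftconvolve(mono_signal, hrir_left, mode="full")
    right = fftconvolve(mono_signal, hrir_right, mode="full")
    n = max(len(left), len(right))
    out = np.zeros((n, 2))
    out[:len(left), 0] = left
    out[:len(right), 1] = right
    return out

# Hypothetical example: a 0.5-s noise burst and toy HRIRs in which the right
# ear receives the sound earlier and slightly louder (a source to the right).
fs = 16000
x = np.random.default_rng(3).normal(size=fs // 2)
hrir_l = np.zeros(64); hrir_l[20] = 0.7     # later, quieter at the left ear
hrir_r = np.zeros(64); hrir_r[10] = 1.0     # earlier, louder at the right ear
binaural = spatialize(x, hrir_l, hrir_r)
print(binaural.shape)
```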
Affiliation(s)
- Gilles Courtois: Swiss Federal Institute of Technology, Signal Processing Laboratory, Lausanne, Switzerland
- Hervé Lissek: Swiss Federal Institute of Technology, Signal Processing Laboratory, Lausanne, Switzerland
- Yves Oesch: Phonak Communications AG, Murten, Switzerland