1
Lertpoompunya A, Ozmeral EJ, Higgins NC, Eddins DA. Head-orienting behaviors during simultaneous speech detection and localization. Front Psychol 2024; 15:1425972. PMID: 39355293; PMCID: PMC11442202; DOI: 10.3389/fpsyg.2024.1425972.
Abstract
Head movement plays a vital role in auditory processing by contributing to spatial awareness and the ability to identify and locate sound sources. Here we investigate head-orienting behaviors using a dual-task experimental paradigm to measure: (a) localization of a speech source; and (b) detection of meaningful speech (numbers), within a complex acoustic background. Ten younger adults with normal hearing and 20 older adults with mild-to-severe sensorineural hearing loss were evaluated in the free field under two head-movement conditions: (1) head fixed to the front and (2) head moving to the source location; and two context conditions: (1) audio only and (2) audio plus visual cues. Head-tracking analyses quantified the target location relative to head location, as well as the peak velocity during head movements. Evaluation of head-orienting behaviors revealed that both groups tended to undershoot the auditory target for targets beyond 60° in azimuth. Listeners with hearing loss had higher head-turn errors than the normal-hearing listeners, even when a visual location cue was provided. Digit detection accuracy was better for the normal-hearing group than for the hearing-loss group, with a main effect of signal-to-noise ratio (SNR). When performing the dual-task paradigm in the most difficult listening environments, participants consistently demonstrated a wait-and-listen head-movement strategy, characterized by a short pause during which they maintained their head orientation and gathered information before orienting to the target location.
Affiliation(s)
- Angkana Lertpoompunya
- Department of Communication Sciences and Disorders, University of South Florida, Tampa, FL, United States
- Department of Communication Sciences and Disorders, Faculty of Medicine, Ramathibodi Hospital, Mahidol University, Bangkok, Thailand
- Erol J. Ozmeral
- Department of Communication Sciences and Disorders, University of South Florida, Tampa, FL, United States
- Nathan C. Higgins
- Department of Communication Sciences and Disorders, University of South Florida, Tampa, FL, United States
- David A. Eddins
- Department of Communication Sciences and Disorders, University of South Florida, Tampa, FL, United States
- Department of Communication Sciences and Disorders, Faculty of Medicine, Ramathibodi Hospital, Mahidol University, Bangkok, Thailand
- Department of Communication Sciences and Disorders, University of Central Florida, Orlando, FL, United States
2
Carlini A, Bordeau C, Ambard M. Auditory localization: a comprehensive practical review. Front Psychol 2024; 15:1408073. PMID: 39049946; PMCID: PMC11267622; DOI: 10.3389/fpsyg.2024.1408073.
Abstract
Auditory localization is a fundamental ability that allows a listener to perceive the spatial location of a sound source in the environment. The present work aims to provide a comprehensive overview of the mechanisms and acoustic cues used by the human perceptual system to achieve accurate auditory localization. Acoustic cues are derived from the physical properties of sound waves, and many factors influence auditory localization abilities. This review presents the monaural and binaural perceptual mechanisms involved in auditory localization in three dimensions. Besides the main mechanisms of interaural time difference, interaural level difference, and the head-related transfer function, important secondary elements such as reverberation and motion are also analyzed. For each mechanism, the perceptual limits of localization abilities are presented. A section is specifically devoted to reference systems in space and to the pointing methods used in experimental research. Finally, some cases of misperception and auditory illusion are described. More than a simple description of the perceptual mechanisms underlying localization, this paper is also intended to provide practical information useful for experiments and work in the auditory field.
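To make the first of these binaural cues concrete, the short sketch below computes the interaural time difference predicted by Woodworth's spherical-head approximation. The head radius and speed of sound are illustrative assumptions, not values taken from the review.

```python
import numpy as np

SPEED_OF_SOUND = 343.0   # m/s, dry air at ~20 degrees C (assumption)
HEAD_RADIUS = 0.0875     # m, average adult head (assumption)

def woodworth_itd(azimuth_deg: float) -> float:
    """Interaural time difference (s) for a far-field source,
    using Woodworth's spherical-head approximation: ITD = (a/c)(theta + sin theta)."""
    theta = np.deg2rad(azimuth_deg)
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (theta + np.sin(theta))

for az in (0, 30, 60, 90):
    print(f"{az:3d} deg -> ITD = {woodworth_itd(az) * 1e6:6.1f} us")
# 90 deg yields roughly 655 us, the classic upper bound for an adult head
```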
3
Zhang C, Burger RM. Cholinergic modulation in the vertebrate auditory pathway. Front Cell Neurosci 2024; 18:1414484. PMID: 38962512; PMCID: PMC11220170; DOI: 10.3389/fncel.2024.1414484.
Abstract
Acetylcholine (ACh) is a prevalent neurotransmitter throughout the nervous system. In the brain, ACh is widely regarded as a potent neuromodulator. In neurons, ACh signals are conferred through a variety of receptors that influence a broad range of neurophysiological phenomena such as transmitter release or membrane excitability. In sensory circuitry, ACh modifies neural responses to stimuli and coordinates the activity of neurons across multiple levels of processing. These factors enable individual neurons or entire circuits to rapidly adapt to the dynamics of complex sensory stimuli, underscoring an essential role for ACh in sensory processing. In the auditory system, histological evidence shows that acetylcholine receptors (AChRs) are expressed at virtually every level of the ascending auditory pathway. Despite its apparent ubiquity in auditory circuitry, investigation of the roles of this cholinergic network has been mainly focused on the inner ear or forebrain structures, while less attention has been directed at regions between the cochlear nuclei and midbrain. In this review, we highlight what is known about cholinergic function throughout the auditory system from the ear to the cortex, but with a particular emphasis on brainstem and midbrain auditory centers. We will focus on receptor expression, mechanisms of modulation, and the functional implications of ACh for sound processing, with the broad goal of providing an overview of a newly emerging view of impactful cholinergic modulation throughout the auditory pathway.
Affiliation(s)
- Chao Zhang
- Cold Spring Harbor Laboratory, Cold Spring Harbor, NY, United States
- R. Michael Burger
- Department of Biological Sciences, Lehigh University, Bethlehem, PA, United States
4
van der Heijden K, Patel P, Bickel S, Herrero JL, Mehta AD, Mesgarani N. Joint population coding and temporal coherence link an attended talker's voice and location features in naturalistic multi-talker scenes. bioRxiv [Preprint] 2024:2024.05.13.593814. PMID: 38798551; PMCID: PMC11118436; DOI: 10.1101/2024.05.13.593814.
Abstract
Listeners readily extract multi-dimensional auditory objects such as a 'localized talker' from complex acoustic scenes with multiple talkers. Yet, the neural mechanisms underlying simultaneous encoding and linking of different sound features - for example, a talker's voice and location - are poorly understood. We analyzed invasive intracranial recordings in neurosurgical patients attending to a localized talker in real-life cocktail party scenarios. We found that sensitivity to an individual talker's voice and location features was distributed throughout auditory cortex and that neural sites exhibited a gradient from sensitivity to a single feature to joint sensitivity to both features. On a population level, cortical response patterns of not only dual-feature sensitive sites but also single-feature sensitive sites revealed simultaneous encoding of an attended talker's voice and location features. However, for single-feature sensitive sites, the representation of the primary feature was more precise. Further, sites that selectively tracked an attended speech stream concurrently encoded an attended talker's voice and location features, indicating that such sites combine selective tracking of an attended auditory object with encoding of the object's features. Finally, we found that attending to a localized talker selectively enhanced temporal coherence between single-feature voice-sensitive sites and single-feature location-sensitive sites, providing an additional mechanism for linking voice and location in multi-talker scenes. These results demonstrate that a talker's voice and location features are linked during multi-dimensional object formation in naturalistic multi-talker scenes by joint population coding as well as by temporal coherence between neural sites. SIGNIFICANCE STATEMENT Listeners effortlessly extract auditory objects from complex, naturalistic spatial acoustic scenes consisting of multiple sound sources. Yet, how the brain links different sound features to form a multi-dimensional auditory object is poorly understood. We investigated how neural responses encode and integrate an attended talker's voice and location features in spatial multi-talker sound scenes to elucidate which neural mechanisms underlie simultaneous encoding and linking of different auditory features. Our results show that joint population coding as well as temporal coherence mechanisms contribute to distributed multi-dimensional auditory object encoding. These findings shed new light on cortical functional specialization and multi-dimensional auditory object formation in complex, naturalistic listening scenes.
HIGHLIGHTS
- Cortical responses to a single talker exhibit a distributed gradient, ranging from sites that are sensitive to both a talker's voice and location (dual-feature sensitive sites) to sites that are sensitive to either voice or location (single-feature sensitive sites).
- Population response patterns of dual-feature sensitive sites encode voice and location features of the attended talker in multi-talker scenes jointly and with equal precision.
- Despite their sensitivity to a single feature at the level of individual cortical sites, population response patterns of single-feature sensitive sites also encode location and voice features of a talker jointly, but with higher precision for the feature they are primarily sensitive to.
- Neural sites that selectively track an attended speech stream concurrently encode the attended talker's voice and location features.
- Attention selectively enhances temporal coherence between voice and location selective sites over time.
- Joint population coding as well as temporal coherence mechanisms underlie distributed multi-dimensional auditory object encoding in auditory cortex.
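The temporal-coherence mechanism invoked above is, operationally, a relationship between two neural response time series. Purely as a hedged illustration (the authors' actual analysis pipeline is not described here), magnitude-squared coherence between two hypothetical recording sites can be estimated as follows; the sampling rate and synthetic signals are placeholders.

```python
import numpy as np
from scipy.signal import coherence

fs = 100.0                       # Hz, placeholder sampling rate for neural response envelopes
t = np.arange(0, 60, 1 / fs)     # 60 s of data
shared = np.sin(2 * np.pi * 4 * t)   # shared 4 Hz modulation, e.g. a speech envelope
rng = np.random.default_rng(0)
voice_site = shared + 0.5 * rng.standard_normal(t.size)      # synthetic "voice" site
location_site = shared + 0.5 * rng.standard_normal(t.size)   # synthetic "location" site

# Magnitude-squared coherence approaches 1 where the two sites track a common signal
f, Cxy = coherence(voice_site, location_site, fs=fs, nperseg=512)
print(f"coherence near 4 Hz: {Cxy[np.argmin(np.abs(f - 4))]:.2f}")
```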
5
Lahemer ESF, Rad A. An Audio-Based SLAM for Indoor Environments: A Robotic Mixed Reality Presentation. Sensors (Basel) 2024; 24:2796. PMID: 38732904; PMCID: PMC11086165; DOI: 10.3390/s24092796.
Abstract
In this paper, we present a novel approach referred to as audio-based virtual-landmark HoloSLAM. This method leverages a single sound source and microphone arrays to estimate the voice-printed speaker's direction. The system allows an autonomous robot equipped with a single microphone array to navigate within indoor environments, interact with specific sound sources, and simultaneously determine its own location while mapping the environment. The proposed method requires neither multiple audio sources in the environment nor sensor fusion to extract pertinent information and make accurate sound source estimations. Furthermore, the approach incorporates robotic mixed reality using Microsoft HoloLens to superimpose landmarks, effectively mitigating the audio-landmark-related issues of conventional audio-based landmark SLAM, particularly in situations where audio landmarks cannot be discerned, are limited in number, or are completely missing. The paper also evaluates an active speaker detection method, demonstrating its ability to achieve high accuracy in scenarios where audio data are the sole input. Real-time experiments validate the effectiveness of this method, emphasizing its precision and comprehensive mapping capabilities. The results of these experiments showcase the accuracy and efficiency of the proposed system, surpassing the constraints associated with traditional audio-based SLAM techniques and ultimately leading to a more detailed and precise mapping of the robot's surroundings.
Affiliation(s)
- Elfituri S. F. Lahemer
- Autonomous and Intelligent Systems Laboratory, School of Mechatronic Systems Engineering, Simon Fraser University, Surrey, BC V3T 0A3, Canada;
6
Barot P, Mombaur K, MacDonald EN. Estimating speaker direction on a humanoid robot with binaural acoustic signals. PLoS One 2024; 19:e0296452. PMID: 38165991; PMCID: PMC10760655; DOI: 10.1371/journal.pone.0296452.
Abstract
To achieve human-like behaviour during speech interactions, a humanoid robot must estimate the location of a human talker. Here, we present a method to optimize the parameters used for direction of arrival (DOA) estimation while also considering real-time applications for human-robot interaction scenarios. The method is applied to a binaural sound source localization framework on a humanoid robotic head. Real data are collected and annotated for this work. Optimizations are performed via a brute-force method and a Bayesian model-based method, the results are validated and discussed, and effects on latency for real-time use are also explored.
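The abstract does not spell out the DOA estimator being tuned, so the sketch below shows a generic building block for binaural DOA rather than the paper's method: a GCC-PHAT time-delay estimate converted to azimuth. The microphone spacing and sampling rate are assumptions for illustration, not the robot's actual parameters.

```python
import numpy as np

def gcc_phat(sig_l, sig_r, fs, max_tau):
    """GCC-PHAT time-delay estimate between left/right channels (seconds)."""
    n = sig_l.size + sig_r.size
    SL = np.fft.rfft(sig_l, n=n)
    SR = np.fft.rfft(sig_r, n=n)
    R = SL * np.conj(SR)
    R /= np.abs(R) + 1e-12            # PHAT weighting: keep phase, discard magnitude
    cc = np.fft.irfft(R, n=n)
    max_shift = int(fs * max_tau)
    cc = np.concatenate((cc[-max_shift:], cc[: max_shift + 1]))
    return (np.argmax(np.abs(cc)) - max_shift) / fs

# Toy example: broadband noise delayed by 3 samples between the two "ears"
fs, d = 16000, 0.18                   # sample rate; assumed inter-microphone distance (m)
x = np.random.default_rng(1).standard_normal(4096)
tau = gcc_phat(np.roll(x, 3), x, fs, max_tau=d / 343.0)
azimuth = np.degrees(np.arcsin(np.clip(343.0 * tau / d, -1, 1)))
print(f"tau = {tau * 1e6:.0f} us, azimuth = {azimuth:.1f} deg")
```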
Affiliation(s)
- Pranav Barot
- Department of Systems Design Engineering, University of Waterloo, Waterloo, Ontario, Canada
- Katja Mombaur
- Department of Systems Design Engineering, University of Waterloo, Waterloo, Ontario, Canada
- Karlsruhe Institute of Technology (KIT), Institute of Anthropomatics and Robotics (IAR), Optimization and Biomechanics for Human-Centred Robotics, Karlsruhe, Germany
- Ewen N. MacDonald
- Department of Systems Design Engineering, University of Waterloo, Waterloo, Ontario, Canada
7
Alwashmi K, Meyer G, Rowe F, Ward R. Enhancing learning outcomes through multisensory integration: A fMRI study of audio-visual training in virtual reality. Neuroimage 2024; 285:120483. PMID: 38048921; DOI: 10.1016/j.neuroimage.2023.120483.
Abstract
The integration of information from different sensory modalities is a fundamental process that enhances perception and performance in real and virtual environments (VR). Understanding these mechanisms, especially during learning tasks that exploit novel multisensory cue combinations, provides opportunities for the development of new rehabilitative interventions. This study aimed to investigate how functional brain changes support behavioural performance improvements during an audio-visual (AV) learning task. Twenty healthy participants underwent 30 min of daily VR training for four weeks. The task was an AV adaptation of a 'scanning training' paradigm that is commonly used in hemianopia rehabilitation. Functional magnetic resonance imaging (fMRI) and performance data were collected at baseline, after two and four weeks of training, and four weeks post-training. We show that behavioural performance, operationalised as mean reaction time reduction in VR, significantly improves. In separate tests in a controlled laboratory environment, we showed that the behavioural performance gains in the VR training environment transferred to a significant mean RT reduction for the trained AV voluntary task on a computer screen. Enhancements were observed in both the visual-only and AV conditions, with the latter demonstrating a faster response time supported by the presence of audio cues. The behavioural learning effect also transferred to two additional tasks that were tested: a visual search task and an involuntary visual task. Our fMRI results reveal an increase in functional activation (BOLD signal) in multisensory brain regions involved in early-stage AV processing: the thalamus, the caudal inferior parietal lobe, and the cerebellum. These functional changes were observed only for the trained, multisensory task and not for unimodal visual stimulation. Functional activation changes in the thalamus were significantly correlated with behavioural performance improvements. This study demonstrates that incorporating spatial auditory cues into voluntary visual training in VR leads to augmented brain activation changes in multisensory integration, resulting in measurable performance gains across tasks. The findings highlight the potential of VR-based multisensory training as an effective method for enhancing cognitive function and as a potentially valuable tool in rehabilitative programmes.
Affiliation(s)
- Kholoud Alwashmi
- Faculty of Health and Life Sciences, University of Liverpool, United Kingdom; Department of Radiology, Princess Nourah bint Abdulrahman University, Saudi Arabia.
- Georg Meyer
- Digital Innovation Facility, University of Liverpool, United Kingdom
- Fiona Rowe
- Institute of Population Health, University of Liverpool, United Kingdom
- Ryan Ward
- Digital Innovation Facility, University of Liverpool, United Kingdom; School of Computer Science and Mathematics, Liverpool John Moores University, United Kingdom
8
Jekateryńczuk G, Piotrowski Z. A Survey of Sound Source Localization and Detection Methods and Their Applications. Sensors (Basel) 2023; 24:68. PMID: 38202930; PMCID: PMC10781166; DOI: 10.3390/s24010068.
Abstract
This study is a survey of sound source localization and detection methods. It provides a detailed classification of the methods used in these fields and classifies sound source localization systems based on criteria found in the literature. Moreover, an analysis of classic methods based on propagation models, and of methods based on machine learning and deep learning techniques, has been carried out. Attention has been paid to providing detailed information on the use of physical phenomena, mathematical relationships, and artificial intelligence to determine sound source location. Additionally, the article underscores the significance of these methods within both military and civil contexts. The study culminates with a discussion of forthcoming trends in acoustic detection and localization. The primary objective of this research is to serve as a valuable resource for selecting the most suitable approach within this domain.
9
Liu Y, Wang Y, Yang L, Zhu J, Wang D, Zhao S. Bilateral adhesive bone conduction devices in patients with congenital bilateral conductive hearing loss. Am J Otolaryngol 2023; 44:103923. PMID: 37167858; DOI: 10.1016/j.amjoto.2023.103923.
Abstract
PURPOSE This study aims to characterize the hearing benefits and sound localization accuracy of bilateral adhesive bone conduction devices (aBCDs) compared to unilateral devices in patients with congenital bilateral conductive hearing loss (BCHL). METHODS Sixteen children and adolescents with congenital BCHL were enrolled and tested under four listening conditions: (1) unaided, (2) R aided: aided with a right-side aBCD, (3) L aided: aided with a left-side aBCD, and (4) B aided: aided with aBCDs on both sides. The sound field hearing threshold (SFHT, in dB hearing level [HL]) and the word recognition score (WRS) were measured. The mean absolute error (MAE) of sound source localization was calculated to assess the sound localization accuracy. RESULTS The performance in SFHT and WRS was significantly higher in the B aided condition than that in the unaided, R and L aided conditions; moreover, no significant difference was observed between the R and L aided conditions. Concerning sound source localization, the accuracy of localization exhibited a sharp decline when using a single aBCD, while the application of bilateral aBCDs (B aided condition) resulted in a significantly improved localization accuracy as compared to the unilaterally aided conditions (both R and L); however, no significant difference was found between the unaided and B aided condition. CONCLUSION Patients with congenital BCHL experienced suboptimal hearing benefits and manifested significant challenges in sound source localization when utilizing a single aBCD, as compared to the utilization of bilateral aBCDs.
Affiliation(s)
- Yujie Liu
- Department of Otolaryngology Head and Neck Surgery, Beijing Tongren Hospital, Ministry of Education Key Laboratory of Otolaryngology Head and Neck Surgery, Capital Medical University, Beijing 100730, China
- Yuan Wang
- Department of Otolaryngology Head and Neck Surgery, Beijing Tongren Hospital, Ministry of Education Key Laboratory of Otolaryngology Head and Neck Surgery, Capital Medical University, Beijing 100730, China
- Lin Yang
- Department of Otolaryngology Head and Neck Surgery, Beijing Tongren Hospital, Ministry of Education Key Laboratory of Otolaryngology Head and Neck Surgery, Capital Medical University, Beijing 100730, China
- Jikai Zhu
- Department of Otolaryngology Head and Neck Surgery, Beijing Tongren Hospital, Ministry of Education Key Laboratory of Otolaryngology Head and Neck Surgery, Capital Medical University, Beijing 100730, China
- Danni Wang
- Department of Otolaryngology Head and Neck Surgery, Beijing Tongren Hospital, Ministry of Education Key Laboratory of Otolaryngology Head and Neck Surgery, Capital Medical University, Beijing 100730, China
- Shouqin Zhao
- Department of Otolaryngology Head and Neck Surgery, Beijing Tongren Hospital, Ministry of Education Key Laboratory of Otolaryngology Head and Neck Surgery, Capital Medical University, Beijing 100730, China.
10
Fivel L, Mondino M, Brunelin J, Haesebaert F. Basic auditory processing and its relationship with symptoms in patients with schizophrenia: A systematic review. Psychiatry Res 2023; 323:115144. PMID: 36940586; DOI: 10.1016/j.psychres.2023.115144.
Abstract
Processing of basic auditory features, one of the earliest stages of auditory perception, has been the focus of considerable investigation in schizophrenia. Although numerous studies have shown abnormalities in pitch perception in schizophrenia, other basic auditory features such as intensity, duration, and sound localization have been less explored. Additionally, studies of the relationship between basic auditory features and symptom severity show inconsistent results, preventing concrete conclusions. Our aim was to present a comprehensive overview of basic auditory processing in schizophrenia and its relationship with symptoms. We conducted a systematic review according to the PRISMA guidelines. The PubMed, Embase, and PsycINFO databases were searched for studies exploring auditory perception in schizophrenia compared to controls, with at least one behavioral task investigating basic auditory processing using pure tones. Forty-one studies were included. The majority investigated pitch processing, while the others investigated intensity, duration, and sound localization. The results revealed that patients have a significant deficit in the processing of all basic auditory features. Although the search for a relationship with symptoms was limited, the experience of auditory hallucinations appears to have an impact on basic auditory processing. Further research may examine correlations with clinical symptoms to explore the performance of patient subgroups and possibly implement remediation strategies.
Affiliation(s)
- Laure Fivel
- Université Claude Bernard Lyon 1, CNRS, INSERM, Centre de Recherche en Neurosciences de Lyon CRNL U1028 UMR5292, PSYR2, Bron F-69500, France
- Marine Mondino
- Université Claude Bernard Lyon 1, CNRS, INSERM, Centre de Recherche en Neurosciences de Lyon CRNL U1028 UMR5292, PSYR2, Bron F-69500, France; Centre Hospitalier Le Vinatier, 95 Boulevard Pinel, Bron F-69500, France.
- Jerome Brunelin
- Université Claude Bernard Lyon 1, CNRS, INSERM, Centre de Recherche en Neurosciences de Lyon CRNL U1028 UMR5292, PSYR2, Bron F-69500, France; Centre Hospitalier Le Vinatier, 95 Boulevard Pinel, Bron F-69500, France
- Frédéric Haesebaert
- Université Claude Bernard Lyon 1, CNRS, INSERM, Centre de Recherche en Neurosciences de Lyon CRNL U1028 UMR5292, PSYR2, Bron F-69500, France; Centre Hospitalier Le Vinatier, 95 Boulevard Pinel, Bron F-69500, France
11
Long Y, Wang W, Liu J, Liu K, Gong S. The interference of tinnitus on sound localization was related to the type of stimulus. Front Neurosci 2023; 17:1077455. PMID: 36824213; PMCID: PMC9941629; DOI: 10.3389/fnins.2023.1077455.
Abstract
Spatial processing is a major cognitive function of hearing, and sound source localization is an intuitive evaluation of spatial hearing. Current evidence of the effect of tinnitus on sound source localization remains limited. The present study aimed to investigate whether tinnitus affects the ability to localize sound in participants with normal hearing and whether the effect is related to the type of stimulus. Overall, 40 participants with tinnitus and another 40 control participants without tinnitus were evaluated. The sound source discrimination tasks were performed in the horizontal plane. Pure tones (PT, with a single frequency) and monosyllables (MS, with spectral information) were used as stimuli. The root-mean-square error (RMSE) score was calculated from the differences between target and response locations. When the stimuli were PTs, the RMSE scores of the control and tinnitus groups were 11.77 ± 2.57° and 13.97 ± 4.18°, respectively; the control group performed significantly better than the tinnitus group (t = 2.841, p = 0.006). When the stimuli were MS, the RMSE scores of the control and tinnitus groups were 7.12 ± 2.29° and 7.90 ± 2.33°, respectively, with no significant difference between the two groups (t = 1.501, p = 0.137). Neither an effect of unilateral versus bilateral tinnitus (PT: t = 0.763, p = 0.450; MS: t = 1.760, p = 0.086) nor an effect of tinnitus side (left/right; PT: t = 0.389, p = 0.703; MS: t = 1.407, p = 0.179) on sound localization ability was found. Sound source localization ability gradually deteriorated with increasing age (PT: r2 = 0.153, p < 0.001; MS: r2 = 0.516, p = 0.043). In conclusion, tinnitus interfered with the ability to localize PTs, but the ability to localize MS was not affected. The interference of tinnitus with sound source localization is therefore related to the type of stimulus.
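As a concrete reading of the RMSE score used here (the exact formula is inferred from the name and should be treated as an assumption), a minimal computation looks like this:

```python
import numpy as np

def rmse_deg(target_az, response_az):
    """Root-mean-square localization error in degrees."""
    err = np.asarray(response_az, float) - np.asarray(target_az, float)
    return float(np.sqrt(np.mean(err ** 2)))

# Toy data: seven horizontal-plane loudspeaker positions and one listener's responses
targets   = [-90, -60, -30, 0, 30, 60, 90]
responses = [-80, -55, -35, 5, 25, 70, 78]
print(f"RMSE = {rmse_deg(targets, responses):.1f} deg")
```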
Affiliation(s)
- Yue Long
- Department of Otolaryngology-Head and Neck Surgery, Beijing Friendship Hospital, Capital Medical University, Beijing, China; Clinical Center for Hearing Loss, Capital Medical University, Beijing, China
- Wei Wang
- Department of Otolaryngology-Head and Neck Surgery, Beijing Friendship Hospital, Capital Medical University, Beijing, China
- Jiao Liu
- Department of Otolaryngology-Head and Neck Surgery, Beijing Friendship Hospital, Capital Medical University, Beijing, China
- Ke Liu
- Department of Otolaryngology-Head and Neck Surgery, Beijing Friendship Hospital, Capital Medical University, Beijing, China
- Shusheng Gong
- Department of Otolaryngology-Head and Neck Surgery, Beijing Friendship Hospital, Capital Medical University, Beijing, China; Clinical Center for Hearing Loss, Capital Medical University, Beijing, China
12
Dynamic speaker localization based on a novel lightweight R-CNN model. Neural Comput Appl 2023. DOI: 10.1007/s00521-023-08251-3.
13
Nisha KV, Uppunda AK, Kumar RT. Spatial rehabilitation using virtual auditory space training paradigm in individuals with sensorineural hearing impairment. Front Neurosci 2023; 16:1080398. PMID: 36733923; PMCID: PMC9887142; DOI: 10.3389/fnins.2022.1080398.
Abstract
Purpose The present study aimed to quantify the effects of spatial training using virtual sources on a battery of spatial acuity measures in listeners with sensorineural hearing impairment (SNHI). Methods An intervention-based time-series comparison design involving 82 participants divided into three groups was adopted. Group I (n = 27, SNHI, spatially trained) and group II (n = 25, SNHI, untrained) consisted of SNHI listeners, while group III (n = 30) had listeners with normal hearing (NH). The study was conducted in three phases. In the pre-training phase, all the participants underwent a comprehensive assessment of their spatial processing abilities using a battery of tests including spatial acuity in free-field and closed-field scenarios, tests of binaural processing abilities (interaural time difference [ITD] and interaural level difference [ILD] thresholds), and subjective ratings. While spatial acuity in the free field was assessed using a loudspeaker-based localization test, the closed-field source identification test was performed using virtual stimuli delivered through headphones. The ITD and ILD thresholds were obtained using a MATLAB psychoacoustic toolbox, while participant ratings on the spatial subsection of the speech, spatial, and qualities of hearing questionnaire in Kannada were used for the subjective ratings. Group I listeners underwent virtual auditory spatial training (VAST) following the pre-evaluation assessments. All tests were re-administered on the group I listeners halfway through training (mid-training evaluation phase) and after training completion (post-training evaluation phase), whereas group II underwent these tests without any training at the same time intervals. Results and discussion Statistical analysis showed a main effect of group in all tests at the pre-training evaluation phase, with post hoc comparisons revealing group equivalency in the spatial performance of the two SNHI groups (groups I and II). The effect of VAST in group I was evident on all the tests, with the localization test showing the highest predictive power for capturing VAST-related changes on Fischer discriminant analysis (FDA). In contrast, group II demonstrated no changes in spatial acuity across the measurement timeline. FDA revealed increased errors in the categorization of NH as SNHI-trained at the post-training evaluation compared to the pre-training evaluation, as the spatial performance of the latter improved with VAST in the post-training phase. Conclusion The study demonstrated positive outcomes of spatial training using VAST in listeners with SNHI. The utility of this training program can be extended to other clinical populations with spatial auditory processing deficits, such as those with auditory neuropathy spectrum disorder, cochlear implant users, and those with central auditory processing disorders.
14
Kaufmann BC, Cazzoli D, Bartolomeo P, Geiser N, Nef T, Nyffeler T. Response to the Letter by Schenke et al. on "Auditory spatial cueing reduces neglect after right-hemispheric stroke: A proof of concept study" by Kaufmann et al., 2022. Cortex 2022; 157:336-337. PMID: 36307350; DOI: 10.1016/j.cortex.2022.09.009.
Affiliation(s)
- B C Kaufmann
- Sorbonne Université, Institut du Cerveau - Paris Brain Institute - ICM, Inserm, CNRS, Paris, France; Neurocenter, Luzerner Kantonsspital, Lucerne, Switzerland
- D Cazzoli
- Neurocenter, Luzerner Kantonsspital, Lucerne, Switzerland; ARTORG Center for Biomedical Engineering Research, University of Bern, Bern, Switzerland; Department of Psychology, University of Bern, Bern, Switzerland
- P Bartolomeo
- Sorbonne Université, Institut du Cerveau - Paris Brain Institute - ICM, Inserm, CNRS, Paris, France
- N Geiser
- Neurocenter, Luzerner Kantonsspital, Lucerne, Switzerland; ARTORG Center for Biomedical Engineering Research, University of Bern, Bern, Switzerland
- T Nef
- ARTORG Center for Biomedical Engineering Research, University of Bern, Bern, Switzerland
- T Nyffeler
- Neurocenter, Luzerner Kantonsspital, Lucerne, Switzerland; Perception and Eye Movement Laboratory, Department of Neurology, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland; ARTORG Center for Biomedical Engineering Research, University of Bern, Bern, Switzerland.
15
Towards a Consensus on an ICF-Based Classification System for Horizontal Sound-Source Localization. J Pers Med 2022; 12:1971. PMID: 36556192; PMCID: PMC9786639; DOI: 10.3390/jpm12121971.
Abstract
The study aimed to develop a consensus classification system for the reporting of sound localization testing results, especially in the field of cochlear implantation. Against the background of an overview of the wide variations present in localization testing procedures and reporting metrics, a novel classification system was proposed to report localization errors according to the widely accepted International Classification of Functioning, Disability and Health (ICF) framework. The obtained HEARRING_LOC_ICF scale includes the ICF graded scale: 0 (no impairment), 1 (mild impairment), 2 (moderate impairment), 3 (severe impairment), and 4 (complete impairment). Improvement of comparability of localization results across institutes, localization testing setups, and listeners was demonstrated by applying the classification system retrospectively to data obtained from cohorts of normal-hearing and cochlear implant listeners at our institutes. The application of our classification system will help to facilitate multi-center studies, as well as allowing better meta-analyses of data, resulting in improved evidence-based practice in the field.
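A minimal sketch of how such a graded scale could be applied in practice is shown below. The error cut-offs are invented placeholders, since the abstract reports only the five ICF grades, not the published boundaries.

```python
def hearring_loc_icf_grade(error_deg: float, cutoffs=(10, 25, 45, 70)) -> int:
    """Map a localization error to the ICF 0-4 scale.
    The cut-off angles are illustrative placeholders, not the published ones."""
    for grade, cut in enumerate(cutoffs):
        if error_deg <= cut:
            return grade          # 0 = no impairment ... 3 = severe impairment
    return 4                      # 4 = complete impairment

print(hearring_loc_icf_grade(8.0))   # -> 0 (no impairment under these placeholder cut-offs)
print(hearring_loc_icf_grade(52.0))  # -> 3 (severe impairment)
```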
16
Tang D, Taseska M, van Waterschoot T. Toward learning robust contrastive embeddings for binaural sound source localization. Front Neuroinform 2022; 16:942978. DOI: 10.3389/fninf.2022.942978.
Abstract
Recent deep neural network based methods provide accurate binaural source localization performance. These data-driven models map measured binaural cues directly to source locations; hence, their performance depends strongly on the training data distribution. In this paper, we propose a parametric embedding that maps the binaural cues to a low-dimensional space where localization can be done with nearest-neighbor regression. We implement the embedding using a neural network, optimized so that points which are close to each other in the latent space (the space of source azimuths or elevations) map to nearby points in the embedding space; thus the Euclidean distances between the embeddings reflect their source proximities, and the structure of the embeddings forms a manifold, which provides interpretability. We show that the proposed embedding generalizes well to various acoustic conditions (with reverberation) different from those encountered during training, and provides better performance than unsupervised embeddings previously used for binaural localization. In addition, the proposed method performs as well as or better than a feed-forward neural network based model that directly estimates the source locations from the binaural cues, and it outperforms the feed-forward model when a small amount of training data is used. Moreover, we compare the proposed embedding under both supervised and weakly supervised learning, and show that in both conditions the resulting embeddings perform similarly well, while the weakly supervised embedding allows source azimuth and elevation to be estimated simultaneously.
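A hedged sketch of the nearest-neighbor regression stage described above, operating on precomputed embeddings: the toy embeddings here merely stand in for the paper's learned network outputs, and the circular averaging handles azimuths that wrap at ±180°.

```python
import numpy as np

def knn_azimuth(query_emb, train_embs, train_az_deg, k=5):
    """Nearest-neighbour azimuth regression over learned embeddings.
    Averages neighbours on the unit circle so that -179 and +179 deg agree."""
    d = np.linalg.norm(train_embs - query_emb, axis=1)   # Euclidean distance in embedding space
    idx = np.argsort(d)[:k]
    az = np.deg2rad(np.asarray(train_az_deg)[idx])
    return np.degrees(np.arctan2(np.sin(az).mean(), np.cos(az).mean()))

# Placeholder data standing in for learned binaural-cue embeddings on a ring manifold
rng = np.random.default_rng(0)
train_az = np.linspace(-180, 179, 72)
train_embs = np.c_[np.cos(np.deg2rad(train_az)), np.sin(np.deg2rad(train_az))]
train_embs += 0.05 * rng.standard_normal(train_embs.shape)
query = np.array([np.cos(np.deg2rad(40.0)), np.sin(np.deg2rad(40.0))])
print(f"estimated azimuth = {knn_azimuth(query, train_embs, train_az):.1f} deg")
```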
17
Guérineau C, Lõoke M, Broseghini A, Dehesh G, Mongillo P, Marinelli L. Sound Localization Ability in Dogs. Vet Sci 2022; 9:619. PMID: 36356096; PMCID: PMC9694642; DOI: 10.3390/vetsci9110619.
Abstract
The minimum audible angle (MAA), defined as the smallest detectable difference between the azimuths of two identical sources of sound, is a standard measure of spatial auditory acuity in animals. Few studies have explored the MAA of dogs, and those that did used methods that do not allow potential improvement throughout the assessment and tested only a very small number of dogs. To overcome these limits, we adopted a staircase method with 10 dogs, using a two-alternative forced-choice procedure with two sound sources, testing angles of separation from 60° to 1°. The staircase method permits the level of difficulty for each dog to be continuously adapted and allows for the observation of improvement over time. The dogs' average MAA was 7.6°, although with large interindividual variability, ranging from 1.3° to 13.2°. A global improvement was observed across the procedure, substantiated by a gradual lowering of the MAA and of choice latency across sessions. The results indicate that the staircase method is feasible and reliable in the assessment of auditory spatial localization in dogs, highlighting the importance of using an appropriate method in a sensory discrimination task, so as to allow improvement over time. The results also reveal that the MAA of dogs is more variable than previously reported, potentially reaching values lower than 2°. Although no clear patterns of association emerged between MAA and dogs' characteristics such as ear shape, head shape, or age, the results suggest the value of conducting larger-scale studies to determine whether these or other factors influence sound localization abilities in dogs.
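A minimal simulation of the adaptive logic described above is sketched below; the two-down/one-up rule, step factor, and psychometric model are illustrative assumptions, as the abstract does not give the study's exact settings.

```python
import numpy as np

def simulate_staircase(true_maa=7.6, start=60.0, floor=1.0, n_trials=60, seed=0):
    """Two-down/one-up adaptive staircase converging near the minimum audible angle (MAA).
    Rule, step factor, and psychometric function are illustrative, not the paper's settings."""
    rng = np.random.default_rng(seed)
    angle, streak, reversals, direction = start, 0, [], -1
    for _ in range(n_trials):
        # Correct response more likely when the separation exceeds the true MAA (2AFC floor = 0.5)
        p_correct = 0.5 + 0.5 / (1 + np.exp(-(angle - true_maa) / 2))
        if rng.random() < p_correct:
            streak += 1
            if streak == 2:                       # two correct in a row -> make it harder
                streak = 0
                if direction == +1:
                    reversals.append(angle)       # direction change = reversal
                direction, angle = -1, max(floor, angle / 1.25)
        else:                                     # one error -> make it easier
            streak = 0
            if direction == -1:
                reversals.append(angle)
            direction, angle = +1, angle * 1.25
    return np.mean(reversals[-6:])                # threshold = mean of the last reversals

print(f"estimated MAA = {simulate_staircase():.1f} deg")
```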
Affiliation(s)
- Cécile Guérineau
- Laboratory of Applied Ethology, Dipartimento di Biomedicina Comparata e Alimentazione, University of Padova, Viale dell’Università 16, 35020 Legnaro, PD, Italy
- Miina Lõoke
- Laboratory of Applied Ethology, Dipartimento di Biomedicina Comparata e Alimentazione, University of Padova, Viale dell’Università 16, 35020 Legnaro, PD, Italy
- Anna Broseghini
- Laboratory of Applied Ethology, Dipartimento di Biomedicina Comparata e Alimentazione, University of Padova, Viale dell’Università 16, 35020 Legnaro, PD, Italy
- Giulio Dehesh
- Independent Researcher, Via Chiesanuova 139, 35136 Padova, PD, Italy
- Paolo Mongillo
- Laboratory of Applied Ethology, Dipartimento di Biomedicina Comparata e Alimentazione, University of Padova, Viale dell’Università 16, 35020 Legnaro, PD, Italy
- Lieta Marinelli
- Laboratory of Applied Ethology, Dipartimento di Biomedicina Comparata e Alimentazione, University of Padova, Viale dell’Università 16, 35020 Legnaro, PD, Italy
18
Jiang C, Luo B, Liu X, Chen GD, Salvi R. Ipsilateral auditory cortex responses to the intact ear after unilateral noise trauma in juvenile rats. Hear Res 2022; 422:108567. PMID: 35816891; DOI: 10.1016/j.heares.2022.108567.
Abstract
BACKGROUND While ear stimulation produces a robust response in the contralateral auditory cortex (AC), it produces only a weak response in the ipsilateral AC, a phenomenon known as interhemispheric asymmetry. Unilateral deafness can lead to plastic changes in the AC, resulting in reduced interhemispheric asymmetry and auditory perceptual consequences. However, the plastic changes associated with unilateral hearing loss are far from fully understood. The purpose of this study was to investigate AC responses to the ipsilateral unimpaired ear after noise injury to the contralateral ear in juvenile rats. METHODS Rats (50 days old) were monaurally exposed to an intense noise (10.0-12.5 kHz, 126 dB SPL) for 2 hours. The ipsilateral AC responses evoked by the unexposed ear were recorded 2 days and 4 months after exposure and compared between groups. RESULTS The noise exposure resulted in complete hearing loss in the exposed ear but normal function in the other. Two days after exposure, the ipsilateral AC response induced by the intact ear was significantly enhanced and its threshold decreased (the early-onset effect). Four months after noise exposure, in addition to the increased response amplitude, the "slow-increasing" firing pattern of the neurons in the ipsilateral AC turned into a "sharp-increasing" pattern resembling the contralateral AC response (the late-onset effect), with shortened response latency. DISCUSSION The early-onset effect can result from release of inhibition due to decreased contralateral input, while the late-onset effect may imply the formation of direct connections in the ipsilateral auditory pathway. The enhanced AC response may help maintain loudness perception and monaural sound localization after unilateral deafness.
Affiliation(s)
- Chen Jiang
- Affiliated Hospital of University of Science and Technology of China, Hefei, Anhui, China
- Bin Luo
- Affiliated Hospital of University of Science and Technology of China, Hefei, Anhui, China
- Xiaopeng Liu
- Center for Hearing and Deafness, University at Buffalo, Buffalo, NY, United States
- Guang-Di Chen
- Center for Hearing and Deafness, University at Buffalo, Buffalo, NY, United States.
- Richard Salvi
- Center for Hearing and Deafness, University at Buffalo, Buffalo, NY, United States
19
Liu Y, Zhao C, Yang L, Chen P, Yang J, Wang D, Ren R, Li Y, Zhao S, Gong S. Characteristics of sound localization in children with unilateral microtia and atresia and predictors of localization improvement when using a bone conduction device. Front Neurosci 2022; 16:973735. PMID: 36090257; PMCID: PMC9461951; DOI: 10.3389/fnins.2022.973735.
Abstract
This study aimed to determine the characteristics of sound localization in children with unilateral microtia and atresia (UMA) and the influence of a non-surgical bone conduction device (BCD). Hearing benefits were evaluated by the word recognition score (WRS), speech reception threshold, the international outcome inventory for hearing aids (IOI-HA), and the Speech, Spatial, and Qualities of Hearing Test for Parent (SSQ-P). Sound localization was measured using broadband noise stimuli randomly played from seven loudspeakers at different stimulus levels [65, 70, and 75 dB sound pressure levels (SPLs)]. The average unaided WRS and speech-to-noise ratio (SNR) for UMA patients was 18.27 ± 14.63 % and -5 ± 1.18 dB SPL, and the average aided WRS and SNR conspicuously changed to 85.45 ± 7.38 % and -7.73 ± 1.42 dB SPL, respectively. The mean IOI-HA score was 4.57 ± 0.73. Compared to the unaided condition, the mean SSQ-P score in each domain improved from 7.08 ± 2.5, 4.86 ± 2.27, and 6.59 ± 1.4 to 8.72 ± 0.95, 7.61 ± 1.52, and 8.55 ± 1.09, respectively. In the sound localization test, some children with UMA were able to detect sound sources quite well and the sound localization abilities did not deteriorate with the non-surgical BCD. Our study concludes that for children with UMA, the non-surgical BCD provided a definite benefit on speech recognition and high satisfaction without deteriorating their sound localization abilities. It is an efficient and safe solution for the early hearing intervention of these patients.
Affiliation(s)
- Yujie Liu
- Ministry of Education Key Laboratory of Otolaryngology Head and Neck Surgery, Department of Otolaryngology Head and Neck Surgery, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Chunli Zhao
- Department of Otolaryngology Head and Neck Surgery, Beijing Friendship Hospital, Capital Medical University, Beijing, China
- Lin Yang
- Ministry of Education Key Laboratory of Otolaryngology Head and Neck Surgery, Department of Otolaryngology Head and Neck Surgery, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Peiwei Chen
- Ministry of Education Key Laboratory of Otolaryngology Head and Neck Surgery, Department of Otolaryngology Head and Neck Surgery, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Jinsong Yang
- Ministry of Education Key Laboratory of Otolaryngology Head and Neck Surgery, Department of Otolaryngology Head and Neck Surgery, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Danni Wang
- Ministry of Education Key Laboratory of Otolaryngology Head and Neck Surgery, Department of Otolaryngology Head and Neck Surgery, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Ran Ren
- Ministry of Education Key Laboratory of Otolaryngology Head and Neck Surgery, Department of Otolaryngology Head and Neck Surgery, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Ying Li
- Ministry of Education Key Laboratory of Otolaryngology Head and Neck Surgery, Department of Otolaryngology Head and Neck Surgery, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Shouqin Zhao
- Ministry of Education Key Laboratory of Otolaryngology Head and Neck Surgery, Department of Otolaryngology Head and Neck Surgery, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Shusheng Gong
- Department of Otolaryngology Head and Neck Surgery, Beijing Friendship Hospital, Capital Medical University, Beijing, China
20
Zheng Y, Swanson J, Koehnke J, Guan J. Sound Localization of Listeners With Normal Hearing, Impaired Hearing, Hearing Aids, Bone-Anchored Hearing Instruments, and Cochlear Implants: A Review. Am J Audiol 2022; 31:819-834. PMID: 35917460; DOI: 10.1044/2022_aja-22-00006.
Abstract
PURPOSE This article reviews contemporary studies of localization ability for different populations in different listening environments and suggests possible future research directions. CONCLUSIONS The ability to accurately localize a sound source relying on three cues (interaural time difference, interaural level difference, and spectral cues) is important for communication, learning, and safety. Confounding effects including noise and reverberation, which exist in common listening environments, mask or alter localization cues and negatively affect localization performance. Hearing loss, a common public health issue, also affects localization accuracy. Although hearing devices have been developed to provide excellent audibility of speech signals, less attention has been paid to preserving and replicating crucial localization cues. Unique challenges are faced by users of various hearing devices, including hearing aids, bone-anchored hearing instruments, and cochlear implants. Hearing aids have failed to consistently improve localization performance and, in some cases, significantly impair sound localization. Bone-conduction hearing instruments show little to no benefit for sound localization performance in most cases, although some improvement is seen in binaural users. Although cochlear implants provide great hearing benefit to individuals with severe-to-profound sensorineural hearing loss, cochlear implant users have significant difficulty localizing sound, even with two implants. However, technologies in each of these areas are advancing to reduce interference with desired sound signals and preserve localization cues, helping users achieve better hearing and sound localization in real-life environments.
Affiliation(s)
- Yunfang Zheng
- Department of Communication Sciences and Disorders, Central Michigan University, Mount Pleasant, MI
- Jacob Swanson
- Department of Communication Sciences and Disorders, Central Michigan University, Mount Pleasant, MI
- Janet Koehnke
- Department of Communication Sciences and Disorders, Montclair State University, Bloomfield, NJ
- Jianwei Guan
- Department of Communication Sciences and Disorders, Central Michigan University, Mount Pleasant, MI
21
Audiovisual Integration for Saccade and Vergence Eye Movements Increases with Presbycusis and Loss of Selective Attention on the Stroop Test. Brain Sci 2022; 12:591. PMID: 35624979; PMCID: PMC9139407; DOI: 10.3390/brainsci12050591.
Abstract
Multisensory integration is a capacity that allows us to merge information from different sensory modalities in order to improve the salience of the signal. Audiovisual integration is one of the most common forms of multisensory integration, as vision and hearing are two senses used very frequently by humans. However, the literature on the effects of age-related hearing loss (presbycusis) on audiovisual integration abilities is almost nonexistent, despite the growing prevalence of presbycusis in the population. In that context, this study aims to assess the relationship between presbycusis and audiovisual integration using tests of saccade and vergence eye movements to visual vs. audiovisual targets, with a pure tone as the auditory signal. Tests were run with the REMOBI and AIDEAL technologies coupled with the Pupil Core eye tracker. Hearing abilities, eye movement characteristics (latency, peak velocity, average velocity, amplitude) for saccade and vergence eye movements, and Stroop Victoria test scores were measured in 69 elderly and 30 young participants. The results indicated (i) a dual pattern of aging effects on audiovisual integration for convergence (a decrease in the aged group relative to the young one, but an increase with age within the elderly group) and (ii) an improvement of audiovisual integration for saccades in people with presbycusis associated with lower selective attention scores on the Stroop test, regardless of age. These results bring new insight into a little-explored topic, that of audio-visuomotor integration in normal aging and in presbycusis. They highlight the potential interest of using eye movement targets in 3D space and pure tone sounds to objectively evaluate audio-visuomotor integration capacities.
22
Valzolgher C, Todeschini M, Verdelet G, Gatel J, Salemme R, Gaveau V, Truy E, Farnè A, Pavani F. Adapting to altered auditory cues: Generalization from manual reaching to head pointing. PLoS One 2022; 17:e0263509. PMID: 35421095; PMCID: PMC9009652; DOI: 10.1371/journal.pone.0263509.
Abstract
Localising sounds means having the ability to process auditory cues deriving from the interplay among sound waves, the head, and the ears. When auditory cues change because of temporary or permanent hearing loss, sound localization becomes difficult and uncertain. The brain can adapt to altered auditory cues throughout life, and multisensory training can promote the relearning of spatial hearing skills. Here, we study the training potential of sound-oriented motor behaviour to test whether training based on manual actions toward sounds can produce learning effects that generalize to different auditory spatial tasks. We assessed spatial hearing relearning in normal-hearing adults with a plugged ear by using visual virtual reality and body motion tracking. Participants performed two auditory tasks that entail explicit and implicit processing of sound position (head-pointing sound localization and audio-visual attention cueing, respectively), before and after receiving a spatial training session in which they identified sound position by reaching to auditory sources nearby. Using a crossover design, the effects of this spatial training were compared to a control condition involving the same physical stimuli but different task demands (i.e., a non-spatial discrimination of amplitude modulations in the sound). According to our findings, spatial hearing in one-ear-plugged participants improved more after the reaching-to-sounds training than in the control condition. Training by reaching also modified head-movement behaviour during listening. Crucially, the improvements observed during training also generalized to a different sound localization task, possibly as a consequence of newly acquired head-movement strategies.
Affiliation(s)
- Chiara Valzolgher
- Integrative, Multisensory, Perception, Action and Cognition Team (IMPACT), Lyon Neuroscience Research Center, Lyon, France
- Center for Mind/Brain Sciences—CIMeC, University of Trento, Trento, Italy
| | - Michela Todeschini
- Department of Psychology and Cognitive Sciences (DiPSCo), University of Trento, Trento, Italy
| | - Gregoire Verdelet
- Integrative, Multisensory, Perception, Action and Cognition Team (IMPACT), Lyon Neuroscience Research Center, Lyon, France
- Neuroimmersion, Lyon Neuroscience Research Center, Lyon, France
| | | | - Romeo Salemme
- Integrative, Multisensory, Perception, Action and Cognition Team (IMPACT), Lyon Neuroscience Research Center, Lyon, France
- Neuroimmersion, Lyon Neuroscience Research Center, Lyon, France
| | - Valerie Gaveau
- Integrative, Multisensory, Perception, Action and Cognition Team (IMPACT), Lyon Neuroscience Research Center, Lyon, France
- University of Lyon 1, Villeurbanne, France
| | - Eric Truy
- Hospices Civils de Lyon, Lyon, France
| | - Alessandro Farnè
- Integrative, Multisensory, Perception, Action and Cognition Team (IMPACT), Lyon Neuroscience Research Center, Lyon, France
- Center for Mind/Brain Sciences—CIMeC, University of Trento, Trento, Italy
- Neuroimmersion, Lyon Neuroscience Research Center, Lyon, France
| | - Francesco Pavani
- Integrative, Multisensory, Perception, Action and Cognition Team (IMPACT), Lyon Neuroscience Research Center, Lyon, France
- Center for Mind/Brain Sciences—CIMeC, University of Trento, Trento, Italy
| |
Collapse
|
23
|
Kim JH, Shim L, Bahng J, Lee HJ. Proficiency in Using Level Cue for Sound Localization Is Related to the Auditory Cortical Structure in Patients With Single-Sided Deafness. Front Neurosci 2021; 15:749824. [PMID: 34707477 PMCID: PMC8542703 DOI: 10.3389/fnins.2021.749824] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/30/2021] [Accepted: 09/20/2021] [Indexed: 11/13/2022] Open
Abstract
Spatial hearing, which largely relies on binaural time/level cues, is a challenge for patients with asymmetric hearing. The degree of the deficit is highly variable, and better-than-expected sound localization performance has frequently been reported. Studies on the compensatory mechanism revealed that monaural level cues and monaural spectral cues contribute to this variable behavior in patients who lack binaural spatial cues. However, changes in the monaural level cues have not yet been separately investigated. In this study, the use of the level cue in sound localization was measured with 1 kHz stimuli presented at a fixed level in patients with single-sided deafness (SSD), the most severe form of asymmetric hearing. The mean absolute error (MAE) was calculated and related to the duration of SSD and the age at its onset. To elucidate the biological correlate of this variable behavior, sound localization ability was compared with the cortical volume of the parcellated auditory cortex. In both SSD patients (n = 26) and normal controls with one ear acutely plugged (n = 23), localization performance was best on the intact ear side; otherwise, there was wide interindividual variability. In the SSD group, the MAE on the intact ear side was worse than that of the acutely plugged controls, and it deteriorated with longer duration/younger age at SSD onset. On the impaired ear side, the MAE improved with longer duration/younger age at SSD onset. Performance asymmetry across lateral hemifields decreased in the SSD group, and the maximum decrease was observed with the longest duration/youngest age at SSD onset. The decreased functional asymmetry in patients with right SSD was related to greater cortical volumes in the right posterior superior temporal gyrus and the left planum temporale, which are typically involved in auditory spatial processing. The study results suggest that structural plasticity in the auditory cortex is related to behavioral changes in sound localization when utilizing monaural level cues in patients with SSD.
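The MAE is the headline measure here. For readers wanting to reproduce it, a minimal sketch follows, under the assumption that angular differences are wrapped into [-180°, 180°] (the abstract does not specify how wrap-around was handled):

```python
import numpy as np

def localization_mae(target_az, response_az):
    """Mean absolute azimuth error in degrees.

    Differences are wrapped into [-180, 180] so that errors across the
    rear midline are not over-counted -- an assumption; the abstract does
    not state how wrap-around was handled.
    """
    t = np.asarray(target_az, dtype=float)
    r = np.asarray(response_az, dtype=float)
    diff = (r - t + 180.0) % 360.0 - 180.0
    return float(np.mean(np.abs(diff)))

# Three hypothetical trials with targets at 30, 60 and 90 deg azimuth:
print(localization_mae([30, 60, 90], [25, 70, 60]))  # -> 15.0
```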
Collapse
Affiliation(s)
- Ja Hee Kim
- Department of Otorhinolaryngology-Head and Neck Surgery, Hallym University College of Medicine, Chuncheon, South Korea
- Laboratory of Brain & Cognitive Sciences for Convergence Medicine, Hallym University College of Medicine, Anyang, South Korea
| | - Leeseul Shim
- Laboratory of Brain & Cognitive Sciences for Convergence Medicine, Hallym University College of Medicine, Anyang, South Korea
| | - Junghwa Bahng
- Department of Audiology and Speech-Language Pathology, Hallym University of Graduate Studies, Seoul, South Korea
| | - Hyo-Jeong Lee
- Department of Otorhinolaryngology-Head and Neck Surgery, Hallym University College of Medicine, Chuncheon, South Korea
- Laboratory of Brain & Cognitive Sciences for Convergence Medicine, Hallym University College of Medicine, Anyang, South Korea
| |
Collapse
|
24
|
Abstract
Sound localization is a vast field of research and development, used in many applications to facilitate communication, radar, medical aid, and speech enhancement, to name but a few. Many different methods have been presented in this field in recent years. Various types of microphone arrays serve the purpose of sensing the incoming sound. This paper presents an overview of the importance of sound localization in different applications, along with the uses and limitations of ad-hoc microphone arrays relative to other array types. Approaches for overcoming these limitations are also presented, together with a detailed explanation of some of the existing methods for sound localization using microphone arrays in the recent literature. Existing methods are studied in a comparative fashion, along with the factors that influence the choice of one method over the others. This review is done in order to form a basis for choosing the best-fit method for our use.
Collapse
|
25
|
Abstract
Sound localization is a field of signal processing that deals with identifying the origin of a detected sound signal, i.e., determining the direction and distance of its source. Useful applications of this phenomenon exist in speech enhancement, communication, radar, and the medical field. The experimental arrangement requires microphone arrays to record the sound signal; some methods use ad-hoc arrays of microphones because of their demonstrated advantages over other arrays. In this research project, the existing sound localization methods were explored to analyze the advantages and disadvantages of each. A novel sound localization routine was formulated that uses both the direction of arrival (DOA) of the sound signal and location estimation in three-dimensional space to precisely locate a sound source. The experimental arrangement consists of four microphones and a single sound source. Previously, sound sources have been localized using six or more microphones, and localization precision has been demonstrated to increase with the number of microphones. In this research, however, we minimized the number of microphones to reduce both the complexity of the algorithm and the computation time. The method is novel in the field of sound source localization in that it uses fewer resources while providing results on par with more complex methods that require more microphones and additional tools. The average accuracy of the system was found to be 96.77%, with an error factor of 3.8%.
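The abstract describes a TDOA/DOA pipeline without disclosing its internals. A standard building block for such pipelines is generalized cross-correlation with phase transform (GCC-PHAT), sketched below for a single microphone pair; the function names and parameters are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def gcc_phat_delay(sig, ref, fs, max_tau=None):
    """Estimate the time delay (s) of `sig` relative to `ref` via GCC-PHAT."""
    n = len(sig) + len(ref)                 # zero-pad to avoid circular wrap
    cross = np.fft.rfft(sig, n) * np.conj(np.fft.rfft(ref, n))
    cross /= np.abs(cross) + 1e-12          # PHAT weighting: keep phase only
    cc = np.fft.irfft(cross, n)
    max_shift = n // 2 if max_tau is None else min(n // 2, int(max_tau * fs))
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    return (int(np.argmax(np.abs(cc))) - max_shift) / fs

def pair_doa_deg(tau, mic_dist_m, c=343.0):
    """Far-field direction of arrival (deg from broadside) for one mic pair."""
    return float(np.degrees(np.arcsin(np.clip(c * tau / mic_dist_m, -1.0, 1.0))))
```

With four microphones, six such pairwise delays are available, which over-determines a source position in three-dimensional space; how the paper fuses them into its reported accuracy is not stated in the abstract.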
Collapse
|
26
|
Nazaré CJ, Oliveira AM. Effects of Audiovisual Presentations on Visual Localization Errors: One or Several Multisensory Mechanisms? Multisens Res 2021; 34:1-35. [PMID: 33882452 DOI: 10.1163/22134808-bja10048] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/20/2020] [Accepted: 03/30/2021] [Indexed: 11/19/2022]
Abstract
The present study examines the extent to which temporal and spatial properties of sound modulate visual motion processing in spatial localization tasks. Participants were asked to locate the place at which a moving visual target unexpectedly vanished. Across different tasks, accompanying sounds were factorially varied within subjects as to their onset and offset times and/or positions relative to the visual motion. Sound onset had no effect on the localization error. Sound offset was shown to modulate the perceived visual offset location, for both temporal and spatial disparities. This modulation did not conform to attraction toward the timing or location of the sounds but, demonstrably in the case of temporal disparities, to bimodal enhancement instead. Favorable indications of a contextual effect of audiovisual presentations on interspersed visual-only trials were also found. A short sound-leading offset asynchrony had benefits equivalent to audiovisual offset synchrony, suggesting the involvement of early-level mechanisms, constrained by a temporal window, under these conditions. Yet we tentatively hypothesize that the results as a whole, and how they compare with previous studies, require the contribution of additional mechanisms, including the learned detection of auditory-visual associations and the cross-sensory spread of endogenous attention.
Collapse
Affiliation(s)
- Cristina Jordão Nazaré
- Instituto Politécnico de Coimbra, ESTESC - Coimbra Health School, Audiologia, Coimbra, Portugal
| | | |
Collapse
|
27
|
Ali RH, Abdullah MN, Abed BF. The identification and localization of speaker using fusion techniques and machine learning techniques. Evol Intell 2021. [DOI: 10.1007/s12065-020-00560-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/22/2022]
|
28
|
Zweifel NO, Hartmann MJZ. Defining "active sensing" through an analysis of sensing energetics: homeoactive and alloactive sensing. J Neurophysiol 2020; 124:40-48. [PMID: 32432502 DOI: 10.1152/jn.00608.2019] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
The term "active sensing" has been defined in multiple ways. Most strictly, the term refers to sensing that uses self-generated energy to sample the environment (e.g., echolocation). More broadly, the definition includes all sensing that occurs when the sensor is moving (e.g., tactile stimuli obtained by an immobile versus moving fingertip) and, broader still, includes all sensing guided by attention or intent (e.g., purposeful eye movements). The present work offers a framework to help disambiguate aspects of the "active sensing" terminology and reveals properties of tactile sensing unique among all modalities. The framework begins with the well-described "sensorimotor loop," which expresses the perceptual process as a cycle involving four subsystems: environment, sensor, nervous system, and actuator. Using system dynamics, we examine how information flows through the loop. This "sensory-energetic loop" reveals two distinct sensing mechanisms that subdivide active sensing into homeoactive and alloactive sensing. In homeoactive sensing, the animal can change the state of the environment, while in alloactive sensing the animal can alter only the sensor's configurational parameters and thus the mapping between input and output. Given these new definitions, examination of the sensory-energetic loop helps identify two unique characteristics of tactile sensing: 1) in tactile systems, alloactive and homeoactive sensing merge to a mutually controlled sensing mechanism, and 2) tactile sensing may require fundamentally different predictions to anticipate reafferent input. We expect this framework may help resolve ambiguities in the active sensing community and form a basis for future theoretical and experimental work regarding alloactive and homeoactive sensing.
Collapse
Affiliation(s)
- Nadina O Zweifel
- Department of Biomedical Engineering, Northwestern University, Evanston, Illinois
| | - Mitra J Z Hartmann
- Department of Biomedical Engineering, Northwestern University, Evanston, Illinois
- Department of Mechanical Engineering, Northwestern University, Evanston, Illinois
| |
Collapse
|
29
|
Sound-localisation performance in patients with congenital unilateral microtia and atresia fitted with an active middle ear implant. Eur Arch Otorhinolaryngol 2020; 278:31-39. [PMID: 32449028 DOI: 10.1007/s00405-020-06049-w] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/18/2020] [Accepted: 05/11/2020] [Indexed: 01/05/2023]
Abstract
OBJECTIVE This study assessed the safety and sound-localisation ability of the Vibrant Soundbridge (VSB) (Med-EL, Innsbruck, Austria) in patients with unilateral microtia and atresia (MA). METHODS This was a single-centre retrospective study. Twelve subjects with unilateral conductive hearing loss (UCHL) caused by ipsilateral MA were recruited, each of whom underwent VSB implantation and auricular reconstruction. The bone-conduction (BC) threshold was measured postoperatively, and the accuracy of sound localisation was evaluated at least 6 months after surgery. Horizontal sound-localisation performance was investigated with the VSB activated and inactivated, at varying stimulus levels (65, 70 and 75 dB SPL). Localisation benefit was analysed via the mean absolute error (MAE). RESULTS There was no statistical difference between the mean BC thresholds of impaired ears measured preoperatively and postoperatively. Compared with the VSB-inactivated condition, the MAE increased significantly in unilateral MA patients in the VSB-activated condition. Moreover, sound-localisation performance worsened markedly when sound was presented at 70 and 75 dB SPL. Regarding the side of the signal source, the average MAE with the VSB device was much higher than that without it when sound came from the normal-hearing side, whereas no significant difference was observed when sound came from the impaired side. CONCLUSION This study demonstrates that in patients with unilateral MA, the VSB device does not affect inner-ear function. Sound-localisation ability was not improved but instead deteriorated at follow-up. Our results suggest that VSB-aided localisation abilities may be related to the threshold difference between the ears, the plasticity of the auditory system, and the duration of VSB use.
Collapse
|
30
|
Bonne N, Hanson J, Gauvrit F, Risoud M, Vincent C. Long‐term evaluation of sound localisation in single‐sided deaf adults fitted with a BAHA device. Clin Otolaryngol 2019; 44:898-904. [DOI: 10.1111/coa.13381] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/18/2017] [Revised: 05/02/2019] [Accepted: 05/20/2019] [Indexed: 11/28/2022]
Affiliation(s)
| | | | - Fanny Gauvrit
- Service d'Otologie et d'Otoneurologie, CHU de Lille, Lille, France
| | - Michaël Risoud
- Service d'Otologie et d'Otoneurologie, CHU de Lille, Lille, France
| | | |
Collapse
|
31
|
Risoud M, Hanson JN, Gauvrit F, Renard C, Bonne NX, Vincent C. Azimuthal sound source localization of various sound stimuli under different conditions. Eur Ann Otorhinolaryngol Head Neck Dis 2019; 137:21-29. [PMID: 31582332 DOI: 10.1016/j.anorl.2019.09.007] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
AIM To evaluate azimuthal sound-source localization performance under different conditions, with a view to optimizing a routine sound-localization protocol. MATERIAL AND METHOD Two groups of healthy, normal-hearing subjects were tested identically, except that one had to keep the head still while the other was allowed to turn it. Sound localization was tested without and then with a right ear plug (acute auditory asymmetry) for each of the following sound stimuli: pulsed narrow-band noise centered on 250 Hz; continuous narrow-band noise centered on 2000 Hz, 4000 Hz and 8000 Hz; a continuous 4000 Hz warble; pulsed white noise; and a word ("lac" (lake)). Root-mean-square error was used to quantify sound-source localization accuracy. RESULTS With the head fixed, localization was significantly disturbed by the earplug for all stimuli (P<0.05). The most discriminating stimulus was the continuous narrow-band noise centered on 4000 Hz: area under the ROC curve (AUC), 0.99 [95% CI, 0.95-1.01] for screening and 0.85 [0.82-0.89] for diagnosis. With the head mobile, localization was significantly better than with the head fixed for the 4000 and 8000 Hz stimuli (P<0.05). The most discriminating stimulus was the continuous narrow-band noise centered on 2000 Hz: AUC, 0.90 [0.83-0.97] for screening and 0.75 [0.71-0.79] for diagnosis. In both conditions, pulsed stimuli (250 Hz narrow-band noise, white noise or the word) were less difficult to localize than continuous ones. CONCLUSION The test was more sensitive with the head immobile. Continuous narrow-band stimulation centered on 4000 Hz most effectively probed the interaural level difference, and pulsed narrow-band stimulation centered on 250 Hz the interaural time difference. Testing with the head mobile, closer to real-life conditions, was most effective with continuous narrow-band stimulation centered on 2000 Hz.
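Here RMS error plays the role MAE plays in other studies (it can be computed like the MAE sketch shown earlier in this list, with squared rather than absolute wrapped differences), and each stimulus's discriminating power is summarized by the ROC AUC. Below is a minimal sketch of the Mann-Whitney form of the AUC, using made-up error scores rather than the study's data:

```python
import numpy as np

def roc_auc(errors_plugged, errors_unplugged):
    """AUC for separating plugged from unplugged ears by localization error.

    Mann-Whitney formulation: the probability that a randomly chosen
    plugged-ear error exceeds a randomly chosen unplugged-ear error,
    with ties counted as one half.  Scores below are hypothetical.
    """
    p = np.asarray(errors_plugged, dtype=float)
    u = np.asarray(errors_unplugged, dtype=float)
    greater = (p[:, None] > u[None, :]).sum()
    ties = (p[:, None] == u[None, :]).sum()
    return (greater + 0.5 * ties) / (p.size * u.size)

# Perfect separation of hypothetical RMS errors (deg) gives AUC = 1.0:
print(roc_auc([40.2, 35.7, 50.1, 28.9], [12.3, 18.4, 9.8, 22.0]))
```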
Collapse
Affiliation(s)
- M Risoud
- Department of Otology and Neurotology, CHU de Lille, 59000 Lille, France
- Inserm U1008 - Controlled Drug Delivery Systems and Biomaterials, University of Lille, CHU de Lille, 59000 Lille, France
| | - J-N Hanson
- Department of Otology and Neurotology, CHU de Lille, 59000 Lille, France
| | - F Gauvrit
- Department of Otology and Neurotology, CHU de Lille, 59000 Lille, France
| | - C Renard
- Department of Otology and Neurotology, CHU de Lille, 59000 Lille, France
| | - N-X Bonne
- Department of Otology and Neurotology, CHU de Lille, 59000 Lille, France
- Inserm U1192 - Proteomics Inflammatory Response Mass Spectrometry (PRISM), University of Lille, CHU de Lille, 59000 Lille, France
| | - C Vincent
- Department of Otology and Neurotology, CHU de Lille, 59000 Lille, France
- Inserm U1008 - Controlled Drug Delivery Systems and Biomaterials, University of Lille, CHU de Lille, 59000 Lille, France
| |
Collapse
|