1
Watanabe M. Sound source localization in blind soccer: differences between sighted and visually impaired players. J Phys Ther Sci 2024; 36:161-166. PMID: 38562539; PMCID: PMC10981959; DOI: 10.1589/jpts.36.161.
Abstract
[Purpose] Vision is unavailable in blind soccer, so sound source localization to track the position of the ball is extremely important. The purpose of this study was to clarify whether sound source localization ability for an approaching sound differs between visually impaired and sighted people, using the ball actually used in blind soccer competitions as the sound source. [Participants and Methods] Eighteen participants were divided into two groups: 10 sighted people and eight visually impaired people. Participants were asked to press a switch when they sensed a rolling blind soccer ball approaching from any one of four directions. We recorded the time error as the difference between the time when the ball passed an optical sensor set under the participant's feet and the time when the participant pressed the switch. [Results] The time error increased with ball speed in both groups; however, its dependence on ball speed differed significantly between the groups. [Conclusion] The visually impaired participants made smaller time errors in localizing the ball than the sighted participants, even as ball speed increased. The results indicate that visually impaired people have better sound source localization ability than sighted people.
Affiliation(s)
- Masahiro Watanabe
- Faculty of Medical and Health Science, Tsukuba International University: 6-8-33 Manabe, Tsuchiura-shi, Ibaraki 300-0051, Japan
2
Chen P, Liu Y, Yang J, Wang D, Ren R, Li Y, Yang L, Fu X, Dong R, Zhao S. A new active bone-conduction implant: surgical experiences and audiological outcomes in patients with bilateral congenital microtia. Eur Arch Otorhinolaryngol 2024:10.1007/s00405-024-08523-1. PMID: 38365989; DOI: 10.1007/s00405-024-08523-1.
Abstract
PURPOSE First-generation bone bridges (BBs) have demonstrated favorable safety and audiological benefits in patients with conductive hearing loss. However, studies on the effects of second-generation BBs are limited, especially in children. In this study, we aimed to explore the surgical and audiological outcomes of second-generation BBs in patients with bilateral congenital microtia. METHODS This single-center prospective study included nine Mandarin-speaking patients with bilateral microtia. All patients underwent BCI Generation 602 (BCI602; MED-EL, Innsbruck, Austria) implant surgery between September 2021 and June 2023. Audiological and sound localization tests were performed under unaided and BB-aided conditions. RESULTS The transmastoid and retrosigmoid sinus approaches were used in three and six patients, respectively. No patient underwent preoperative planning, lifts were unnecessary, and no sigmoid sinus or dural compression occurred. The mean functional gain at 0.5-4.0 kHz was 28.06 ± 4.55 dB HL. Word recognition scores in quiet improved significantly under the BB-aided condition, and the speech reception threshold in noise improved, with a signal-to-noise ratio reduction of 10.56 ± 2.30 dB. Patients fitted with a unilateral BB showed inferior sound source localization after initial activation. CONCLUSIONS Second-generation BBs are safe and effective for patients with bilateral congenital microtia and may be suitable for children with mastoid hypoplasia without preoperative three-dimensional reconstruction.
Affiliation(s)
- Peiwei Chen, Yujie Liu, Jinsong Yang, Danni Wang, Ran Ren, Ying Li, Lin Yang, Xinxing Fu, Ruijuan Dong, Shouqin Zhao
- Department of Otolaryngology Head and Neck Surgery, Beijing Tongren Hospital, Capital Medical University, No. 1 Dongjiaomin Lane, Dongcheng District, Beijing, 100730, China
- Key Laboratory of Otolaryngology, Head and Neck Surgery, Beijing Institute of Otolaryngology, Capital Medical University, Ministry of Education, Beijing, China
3
Zhang Z, Wang Y. Enhanced approach to fusing automatic characteristic frequency extraction and adaptive array signals weighting for abnormal machine sound localization. ISA Trans 2024; 145:443-467. PMID: 38052708; DOI: 10.1016/j.isatra.2023.11.041.
Abstract
In this paper, an enhanced approach to sound localization is proposed that fuses automatic extraction of array-signal characteristic frequencies with adaptive weighting. The method refines the autoregressive power spectral estimation algorithm and improves the density-based spatial clustering of applications with noise (DBSCAN) algorithm for characteristic frequency extraction. An adaptive weighting technique is introduced to alleviate frequency mismatch in the localization process. The initial weight of each narrowband signal is calculated and normalized from the frequency-domain amplitude integration of that signal, followed by adaptive threshold correction to eliminate invalid narrowband signal weights. The adaptive weight vector improves the localization method's accuracy and interference suppression. The effectiveness and generality of the proposed method are demonstrated with test data from dry transformers and pumps, and its applicability is shown to extend to various spatial spectrum estimation algorithms and deep learning-based sound source localization techniques.
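The weighting step described in the abstract can be sketched numerically. The following is an illustrative reconstruction, not the authors' implementation; the threshold rule and all parameter values are assumptions:

```python
import numpy as np

def adaptive_weights(band_spectra, rel_threshold=0.5):
    """Weight narrowband components by integrated frequency-domain amplitude.

    band_spectra : list of 1-D magnitude spectra, one per narrowband signal.
    rel_threshold: fraction of the mean weight below which a band is treated
                   as invalid and zeroed (an illustrative threshold rule).
    """
    # Initial weight: amplitude integration of each narrowband spectrum.
    w = np.array([float(np.abs(s).sum()) for s in band_spectra])
    w /= w.sum()                              # normalize initial weights
    w[w < rel_threshold * w.mean()] = 0.0     # adaptive threshold correction
    return w / w.sum()                        # renormalize the valid weights

# Three narrowband components; the third is too weak to be reliable.
bands = [np.ones(64), 0.9 * np.ones(64), 0.05 * np.ones(64)]
w = adaptive_weights(bands)
```

The weak third band is zeroed by the adaptive threshold, and the remaining weights are renormalized before being applied to the array signals.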
Affiliation(s)
- Zhanxi Zhang
- State Key Laboratory of Power Transmission Equipment & System Security and New Technology, Chongqing University, Chongqing 400044, People's Republic of China.
- Youyuan Wang
- State Key Laboratory of Power Transmission Equipment & System Security and New Technology, Chongqing University, Chongqing 400044, People's Republic of China
4
Liu H, Bai Y, Xu Z, Liu J, Ni G, Ming D. The scalp time-varying network of auditory spatial attention in "cocktail-party" situations. Hear Res 2024; 442:108946. PMID: 38150794; DOI: 10.1016/j.heares.2023.108946.
Abstract
Sound source localization in "cocktail-party" situations is a remarkable ability of the human auditory system. However, the neural mechanisms underlying auditory spatial attention are still largely unknown. In this study, "cocktail-party" situations were simulated with multiple sound sources presented through head-related transfer functions and headphones. The scalp time-varying network of auditory spatial attention was constructed from high-temporal-resolution electroencephalography, and its network properties were measured quantitatively using graph theory. The results show that the time-varying network of auditory spatial attention in "cocktail-party" situations is more complex than, and partially different from, that in simple acoustic situations, especially in the early- and middle-latency periods. The network coupling strength increases continuously over time, and the network hub shifts from the posterior temporal lobe to the parietal lobe and then to the frontal lobe. In addition, the right hemisphere shows stronger network strength for processing auditory spatial information in "cocktail-party" situations, i.e., higher clustering levels, higher transmission efficiency, and higher node degrees during the early- and middle-latency periods, whereas this asymmetry disappears and the network becomes symmetric during the late-latency period. These findings reveal distinct network patterns and properties of auditory spatial attention in "cocktail-party" situations across different periods and demonstrate the dominance of the right hemisphere in the dynamic processing of auditory spatial information.
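The graph-theoretic properties mentioned above (node degree, clustering level) are computed from an adjacency matrix of the network. A small self-contained sketch of two of these measures, not the authors' analysis pipeline:

```python
import numpy as np

def degrees(adj):
    """Node degree of an undirected binary graph (row sums)."""
    return adj.sum(axis=1)

def clustering_coeffs(adj):
    """Local clustering coefficient per node: the fraction of a node's
    neighbour pairs that are themselves connected."""
    n = adj.shape[0]
    coeffs = np.zeros(n)
    for i in range(n):
        nbrs = np.flatnonzero(adj[i])
        k = len(nbrs)
        if k < 2:
            continue  # clustering undefined for < 2 neighbours; leave 0
        links = adj[np.ix_(nbrs, nbrs)].sum() / 2  # edges among neighbours
        coeffs[i] = 2.0 * links / (k * (k - 1))
    return coeffs

# Toy network: a triangle (nodes 0-1-2) plus a pendant node 3 attached to 0.
adj = np.array([[0, 1, 1, 1],
                [1, 0, 1, 0],
                [1, 1, 0, 0],
                [1, 0, 0, 0]])
deg = degrees(adj)
cc = clustering_coeffs(adj)
```

Hub identification in such analyses typically selects the nodes with the highest degree; here that is node 0.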
Affiliation(s)
- Hongxing Liu
- Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin 300072, China
- Yanru Bai
- Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin 300072, China; Tianjin Key Laboratory of Brain Science and Neuroengineering, Tianjin 300072, China; Haihe Laboratory of Brain-Computer Interaction and Human-Machine Integration, Tianjin 300392, China
- Zihao Xu
- Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin 300072, China
- Jihan Liu
- Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin 300072, China
- Guangjian Ni
- Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin 300072, China; Tianjin Key Laboratory of Brain Science and Neuroengineering, Tianjin 300072, China; Haihe Laboratory of Brain-Computer Interaction and Human-Machine Integration, Tianjin 300392, China
- Dong Ming
- Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin 300072, China; Tianjin Key Laboratory of Brain Science and Neuroengineering, Tianjin 300072, China; Haihe Laboratory of Brain-Computer Interaction and Human-Machine Integration, Tianjin 300392, China
5
Wang J, Chen Y, Stenfelt S, Sang J, Li X, Zheng C. Analysis of cross-talk cancellation of bilateral bone conduction stimulation. Hear Res 2023; 434:108781. PMID: 37156121; DOI: 10.1016/j.heares.2023.108781.
Abstract
When presenting stereo sound through bilateral stimulation by two bone conduction transducers (BTs), part of the sound at the left side leaks to the right side, and vice versa. The sound transmitted to the contralateral cochlea becomes cross-talk, which can affect spatial perception. The negative effects of the cross-talk can be mitigated by a cross-talk cancellation system (CCS). Here, a CCS is designed from individual bone conduction (BC) transfer functions using a fast deconvolution algorithm. The BC response functions (BCRFs) from the stimulation positions to the cochleae were obtained by measuring BC evoked otoacoustic emissions (OAEs) in 10 participants. The BCRFs of the 10 participants showed that interaural isolation was low. In five of the participants, a cross-talk cancellation experiment was carried out based on the individualized BCRFs. Simulations showed that the CCS gave a channel separation (CS) of more than 50 dB in the 1-3 kHz range with appropriately chosen parameter values. Moreover, a localization test showed that BC localization accuracy improved with the CCS, and a 2-4.5 kHz narrowband noise gave better localization performance than a broadband 0.4-10 kHz noise. The results indicate that using a CCS with bilateral BC stimulation can improve interaural separation and thereby improve spatial hearing with bilateral BC.
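The core idea of a CCS can be illustrated at a single frequency bin: invert the 2x2 matrix of transfer functions from the two transducers to the two cochleae so that each signal reaches only its intended side. The sketch below uses generic Tikhonov-regularized inversion with toy numbers; it is not the paper's fast deconvolution algorithm, and the transfer-function values and regularization constant are illustrative assumptions:

```python
import numpy as np

def ccs_filters(H, beta=1e-4):
    """Cross-talk canceller at one frequency bin.

    H    : 2x2 complex matrix; H[i, j] = path from transducer j to cochlea i.
    beta : Tikhonov regularization constant (illustrative choice).
    Returns C such that H @ C approximates the identity matrix.
    """
    Hh = H.conj().T
    return np.linalg.solve(Hh @ H + beta * np.eye(2), Hh)

def channel_separation_db(M):
    """Ipsilateral-to-contralateral level ratio of a 2x2 system matrix, in dB."""
    ipsi = min(abs(M[0, 0]), abs(M[1, 1]))
    contra = max(abs(M[0, 1]), abs(M[1, 0]))
    return 20.0 * np.log10(ipsi / contra)

# Toy BC paths with strong cross-talk (only ~5 dB interaural isolation).
H = np.array([[1.0 + 0.0j, 0.5 + 0.2j],
              [0.5 - 0.2j, 1.0 + 0.0j]])
C = ccs_filters(H)
cs_after = channel_separation_db(H @ C)   # separation with the canceller
```

In a full system, one such matrix inversion is performed per frequency bin (or equivalently a deconvolution in the time domain), which is where regularization choices dominate the achievable channel separation.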
Affiliation(s)
- Jie Wang
- School of Electronics and Communication Engineering, Guangzhou University, Guangzhou 510006, P.R. China
- Yunda Chen
- School of Electronics and Communication Engineering, Guangzhou University, Guangzhou 510006, P.R. China
- Stefan Stenfelt
- Department of Biomedical and Clinical Sciences, Linköping University, Linköping, Sweden
- Jinqiu Sang
- Shanghai Institute of AI for Education, East China Normal University, Shanghai 200062, P.R. China; Institute of Acoustics, Chinese Academy of Sciences, Beijing 100190, P.R. China
- Xiaodong Li
- Institute of Acoustics, Chinese Academy of Sciences, Beijing 100190, P.R. China; University of Chinese Academy of Sciences, Beijing 100049, P.R. China
- Chengshi Zheng
- Institute of Acoustics, Chinese Academy of Sciences, Beijing 100190, P.R. China; University of Chinese Academy of Sciences, Beijing 100049, P.R. China
6
Brown AD, Hayward T, Portfors CV, Coffin AB. On the value of diverse organisms in auditory research: From fish to flies to humans. Hear Res 2023; 432:108754. PMID: 37054531; PMCID: PMC10424633; DOI: 10.1016/j.heares.2023.108754.
Abstract
Historically, diverse organisms have contributed to our understanding of auditory function. In recent years, the laboratory mouse has become the prevailing non-human model in auditory research, particularly for biomedical studies. There are many questions in auditory research for which the mouse is the most appropriate (or the only) model system available. But mice cannot provide answers for all auditory problems of basic and applied importance, nor can any single model system provide a synthetic understanding of the diverse solutions that have evolved to facilitate effective detection and use of acoustic information. In this review, spurred by trends in funding and publishing and inspired by parallel observations in other domains of neuroscience, we highlight a few examples of the profound impact and lasting benefits of comparative and basic organismal research in the auditory system. We begin with the serendipitous discovery of hair cell regeneration in non-mammalian vertebrates, a finding that has fueled an ongoing search for pathways to hearing restoration in humans. We then turn to the problem of sound source localization - a fundamental task that most auditory systems have been compelled to solve despite large variation in the magnitudes and kinds of spatial acoustic cues available, begetting varied direction-detecting mechanisms. Finally, we consider the power of work in highly specialized organisms to reveal exceptional solutions to sensory problems - and the diverse returns of deep neuroethological inquiry - via the example of echolocating bats. Throughout, we consider how discoveries made possible by comparative and curiosity-driven organismal research have driven fundamental scientific, biomedical, and technological advances in the auditory field.
Affiliation(s)
- Andrew D Brown
- Department of Speech and Hearing Sciences, University of Washington, 1417 NE 42nd St, Seattle, WA 98105, USA; Virginia-Merrill Bloedel Hearing Research Center, University of Washington, 1701 NE Columbia Rd, Seattle, WA 98195, USA
- Tamasen Hayward
- College of Arts and Sciences, Washington State University, 14204 NE Salmon Creek Ave, Vancouver, WA 98686, USA
- Christine V Portfors
- School of Biological Sciences, Washington State University, 14204 NE Salmon Creek Ave, Vancouver, WA 98686, USA
- Allison B Coffin
- College of Arts and Sciences, Washington State University, 14204 NE Salmon Creek Ave, Vancouver, WA 98686, USA; School of Biological Sciences, Washington State University, 14204 NE Salmon Creek Ave, Vancouver, WA 98686, USA; Department of Integrative Physiology and Neuroscience, Washington State University, 14204 NE Salmon Creek Ave, Vancouver, WA 98686, USA
7
Long Y, Wang W, Liu J, Liu K, Gong S. Effect of tinnitus on sound localization ability in patients with normal hearing. Braz J Otorhinolaryngol 2023; 89:462-468. PMID: 36841711; PMCID: PMC10164763; DOI: 10.1016/j.bjorl.2023.01.003.
Abstract
OBJECTIVES To determine whether tinnitus negatively impacts the accuracy of sound source localization in participants with normal hearing. METHODS Seventy-five participants with tinnitus and 74 without tinnitus were enrolled in this study. The accuracy of sound source discrimination on the horizontal plane was compared between the two groups. The test equipment consisted of 37 loudspeakers arranged in a forward-facing 180° arc at 5° intervals. The stimuli were pure tones of 0.25, 0.5, 1, 2, 4, and 8 kHz at 50 dB SPL, divided into three groups: low frequency (LF: 0.25, 0.5, and 1 kHz), 2 kHz, and high frequency (HF: 4 and 8 kHz). RESULTS The Root Mean Square Error (RMSE) score across all stimuli was significantly higher in the tinnitus group than in the control group (13.45 ± 3.34 vs. 11.44 ± 2.56, t=4.115, p<0.001). The RMSE scores at LF, 2 kHz, and HF were also significantly higher in the tinnitus group (LF: 11.66 ± 3.62 vs. 10.04 ± 3.13, t=2.918, p=0.004; 2 kHz: 16.63 ± 5.45 vs. 14.43 ± 4.52, t=2.690, p=0.008; HF: 13.42 ± 4.74 vs. 11.14 ± 3.68, t=3.292, p=0.001). Thus, sound source discrimination was significantly worse in participants with tinnitus than in those without, regardless of stimulus frequency. There was no difference between localization at the tinnitus-matched frequency and at other frequencies (12.86 ± 6.29 vs. 13.87 ± 3.14, t=1.204, p=0.236). Additionally, there was no correlation between tinnitus loudness and RMSE scores (r=0.096, p=0.434) or between Tinnitus Handicap Inventory (THI) scores and RMSE scores (r=-0.056, p=0.648). CONCLUSIONS Our data suggest that tinnitus negatively impacted sound source localization accuracy even in participants with normal hearing. The matched pitch and loudness of the tinnitus and its impact on patients' daily lives were not related to sound source localization ability. LEVEL OF EVIDENCE: 4
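The RMSE score used in this study is the root of the mean squared angular difference between presented and reported azimuths. A minimal sketch of the computation (the trial data below are hypothetical, not taken from the study):

```python
import numpy as np

def rmse_degrees(presented, reported):
    """Root-mean-square error between presented and reported azimuths (deg)."""
    presented = np.asarray(presented, dtype=float)
    reported = np.asarray(reported, dtype=float)
    return float(np.sqrt(np.mean((presented - reported) ** 2)))

# Hypothetical trials on a 37-loudspeaker arc with 5-degree spacing.
presented = [0, 15, -30, 45, 90]
reported  = [5, 15, -40, 40, 80]
score = rmse_degrees(presented, reported)
```

Larger RMSE values indicate less accurate localization, which is the sense in which the tinnitus group's scores were "higher" above.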
Affiliation(s)
- Yue Long
- Department of Otolaryngology-Head and Neck Surgery, Beijing Friendship Hospital, Capital Medical University, Beijing, China; Clinical Center for Hearing Loss, Capital Medical University, Beijing, China
- Wei Wang
- Department of Otolaryngology-Head and Neck Surgery, Beijing Friendship Hospital, Capital Medical University, Beijing, China
- Jiao Liu
- Department of Otolaryngology-Head and Neck Surgery, Beijing Friendship Hospital, Capital Medical University, Beijing, China
- Ke Liu
- Department of Otolaryngology-Head and Neck Surgery, Beijing Friendship Hospital, Capital Medical University, Beijing, China
- Shusheng Gong
- Department of Otolaryngology-Head and Neck Surgery, Beijing Friendship Hospital, Capital Medical University, Beijing, China; Clinical Center for Hearing Loss, Capital Medical University, Beijing, China
8
Warren MR, Spurrier MS, Sangiamo DT, Clein RS, Neunuebel JP. Mouse vocal emission and acoustic complexity do not scale linearly with the size of a social group. J Exp Biol 2021; 224:jeb239814. PMID: 34096599; PMCID: PMC8214829; DOI: 10.1242/jeb.239814.
Abstract
Adult mice emit ultrasonic vocalizations (USVs), sounds above the range of human hearing, during social encounters. While mice alter their vocal emissions between isolated and social contexts, technological impediments have hampered our ability to assess how individual mice vocalize in group social settings. We overcame this challenge by implementing an 8-channel microphone array system, allowing us to determine which mouse emitted individual vocalizations across multiple social contexts. This technology, in conjunction with a new approach for extracting and categorizing a complex, full repertoire of vocalizations, facilitated our ability to directly compare how mice modulate their vocal emissions between isolated, dyadic and group social environments. When comparing vocal emission during isolated and social settings, we found that socializing male mice increase the proportion of vocalizations with turning points in frequency modulation and instantaneous jumps in frequency. Moreover, males change the types of vocalizations emitted between social and isolated contexts. In contrast, there was no difference in male vocal emission between dyadic and group social contexts. Female vocal emission, while predominantly absent in isolation, was also similar during dyadic and group interactions. In particular, there were no differences in the proportion of vocalizations with frequency jumps or turning points. Taken together, the findings lay the groundwork necessary for elucidating the stimuli underlying specific features of vocal emission in mice.
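Microphone-array systems like the one described assign a vocalization to an individual by localizing its source from time differences of arrival between microphone pairs. A common way to estimate such a delay is generalized cross-correlation with phase transform (GCC-PHAT); the sketch below is a generic illustration of that estimator, not the authors' pipeline:

```python
import numpy as np

def gcc_phat_delay(sig, ref, fs):
    """Estimate the delay of `sig` relative to `ref` (in seconds) using
    generalized cross-correlation with phase transform (GCC-PHAT)."""
    n = len(sig) + len(ref)              # zero-pad to avoid circular wrap-around
    SIG = np.fft.rfft(sig, n)
    REF = np.fft.rfft(ref, n)
    R = SIG * np.conj(REF)
    R /= np.maximum(np.abs(R), 1e-12)    # PHAT weighting: keep phase only
    cc = np.fft.irfft(R, n)
    max_lag = n // 2
    cc = np.concatenate((cc[-max_lag:], cc[:max_lag + 1]))  # lags -n/2..n/2
    return (np.argmax(np.abs(cc)) - max_lag) / fs

# Synthetic check: a noise burst delayed by 25 samples at 100 kHz.
rng = np.random.default_rng(0)
fs = 100_000
x = rng.standard_normal(2048)
delay_samples = 25
y = np.concatenate((np.zeros(delay_samples), x))[:2048]
tau = gcc_phat_delay(y, x, fs)
```

With delays estimated for several microphone pairs, the source position follows from standard multilateration, and the vocalization is attributed to the animal tracked closest to that position.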
Affiliation(s)
- Megan R. Warren
- Department of Psychological and Brain Sciences, University of Delaware, Newark, DE 19716, USA; Department of Biology, Emory University, Atlanta, GA 30322, USA
- Morgan S. Spurrier
- Department of Psychological and Brain Sciences, University of Delaware, Newark, DE 19716, USA
- Daniel T. Sangiamo
- Department of Psychological and Brain Sciences, University of Delaware, Newark, DE 19716, USA
- Rachel S. Clein
- Department of Psychological and Brain Sciences, University of Delaware, Newark, DE 19716, USA
- Joshua P. Neunuebel
- Department of Psychological and Brain Sciences, University of Delaware, Newark, DE 19716, USA
9
Awano H, Shirasaka M, Mizumoto T, Okuno HG, Aihara I. Visualization of a chorus structure in multiple frog species by a sound discrimination device. J Comp Physiol A Neuroethol Sens Neural Behav Physiol 2021; 207:87-98. PMID: 33481121; DOI: 10.1007/s00359-021-01463-9.
Abstract
We developed a sound discrimination device to identify and localize the species of nocturnal animals in their natural habitat. The device is equipped with a microphone, a light-emitting diode, and a band-pass filter. By tuning the center frequency of the filter to the dominant frequency of the calls of a focal species, the device illuminates only when it detects the calls of that species. In laboratory experiments, we tuned the devices to detect the calls of Hyla japonica or Rhacophorus schlegelii and broadcast the frog calls from loudspeakers. By analyzing the illumination patterns of the devices, we successfully identified and localized the two kinds of sound sources. Next, we placed the devices in a field site where actual male frogs (H. japonica and R. schlegelii) were calling. The analysis of the illumination patterns demonstrates the efficacy of the devices in a natural environment and also enabled us to extract pairs of male frogs that significantly overlapped or alternated their calls.
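The device's logic (illuminate when energy in a band around the focal species' dominant call frequency exceeds a threshold) can be sketched with an FFT-domain stand-in for the analog band-pass filter. The center frequency, bandwidth, and threshold below are illustrative assumptions, not the device's specifications:

```python
import numpy as np

def band_energy(signal, fs, f_center, bandwidth):
    """Mean energy of `signal` within a band around f_center (a crude
    FFT-domain stand-in for the device's analog band-pass filter)."""
    spec = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
    mask = np.abs(freqs - f_center) <= bandwidth / 2
    return spec[mask].sum() / len(signal)

def led_on(signal, fs, f_center, bandwidth=500.0, threshold=1.0):
    """True when band-limited energy crosses the detection threshold."""
    return band_energy(signal, fs, f_center, bandwidth) > threshold

# One second of audio: a tone at an assumed 3 kHz "dominant call frequency"
# versus a 1 kHz tone standing in for a different species' call.
fs = 16_000
t = np.arange(fs) / fs
call = np.sin(2 * np.pi * 3000 * t)
other = np.sin(2 * np.pi * 1000 * t)
```

A device tuned to 3 kHz would light for `call` but stay dark for `other`, which is how the illumination pattern separates species in the chorus.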
Affiliation(s)
- Hiromitsu Awano
- Graduate School of Information Science and Technology, Osaka University, Suita, Japan; Graduate School of Informatics, Kyoto University, Kyoto, Japan
- Masahiro Shirasaka
- Graduate School of Systems and Information Engineering, University of Tsukuba, Tsukuba, Japan
- Hiroshi G Okuno
- Institute for Human-Robot Co-Creation, Waseda University, Tokyo, Japan
- Ikkyu Aihara
- Graduate School of Informatics, Kyoto University, Kyoto, Japan
10
Abstract
This article reviews data published or presented by the authors on sound source localization when listeners move, from two populations: normal-hearing listeners and patients fit with cochlear implants (CIs). The overall theme of the review is that sound source localization requires integration of auditory-spatial and head-position cues and is therefore a multisystem process. Research with normal-hearing listeners includes work on the Wallach azimuth illusion and other aspects of sound source localization when listeners and sound sources rotate. Research with CI patients covers sound source localization by patients fit with a single CI, bilateral CIs, a CI and a hearing aid (bimodal patients), and single-sided-deaf patients with one normally functioning ear and a CI in the other ear. Past research with stationary CI patients and more recent data on CI patients' use of head rotation to localize sound sources are summarized.
Affiliation(s)
- William A. Yost
- Spatial Hearing Laboratory, Speech and Hearing Science, Arizona State University, PO Box 870102, Tempe, Arizona 85287, USA
- M. Torben Pastore
- Spatial Hearing Laboratory, Speech and Hearing Science, Arizona State University, PO Box 870102, Tempe, Arizona 85287, USA
- Michael F. Dorman
- Cochlear Implant Laboratory, Speech and Hearing Science, Arizona State University, PO Box 870102, Tempe, Arizona 85287, USA
11
Ruedl G, Pocecco E, Kopp M, Burtscher M, Zorowka P, Seebacher J. Impact of listening to music while wearing a ski helmet on sound source localization. J Sci Med Sport 2019; 22 Suppl 1:S7-S11. PMID: 30341036; DOI: 10.1016/j.jsams.2018.09.234.
Abstract
OBJECTIVES In recreational skiing and snowboarding, listening to music may be associated with an increased injury risk due to impaired sound localization. We therefore evaluated the effects of listening to music at different sound levels on sound source localization while wearing a ski helmet. DESIGN Within-subjects design. METHOD Sound source localization of 20 participants (50% female; age: 23.8 ± 2.4 years) was assessed in an anechoic chamber under six conditions: (1) head bare, (2) wearing a ski helmet, (3) wearing a ski helmet and insert earphones, and (4-6) the latter while listening to music at three sound levels of 45, 55, and 65 dB sound pressure level (SPL), respectively. RESULTS One-way repeated-measures ANOVA showed that the percentage of correct sound localizations was significantly affected by condition: F(5, 95)=138.2, p<.001 (ƞ2=0.88). Compared with the bare-head condition (88% correct), music at 45, 55 and 65 dB SPL significantly decreased correct localization of the sound source to 54%, 45% and 37%, respectively. Angular errors [F(5, 95)=31.0, p<.001, ƞ2=0.62] and front-rear confusions [F(2.8, 53.4)=57.9, p<.001, ƞ2=0.75] were also significantly affected by wearing a ski helmet while listening to music. CONCLUSIONS Listening to music while wearing a ski helmet negatively impacts sound source localization, and the extent of the impairment depends strongly on the sound level.
12
Abstract
Sound source localization is essential in everyday life: it determines the position of a sound source in three dimensions (azimuth, elevation, and distance). It is based on three types of cue: two binaural cues (interaural time difference and interaural level difference) and one monaural spectral cue (the head-related transfer function). These cues are complementary and vary according to the acoustic characteristics of the incident sound. The objective of this report is to update the current state of knowledge on the physical basis of spatial sound localization.
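For the interaural time difference, the classical Woodworth spherical-head approximation (a standard textbook formula, not derived in this report) gives ITD = (a/c)(sin θ + θ) for a distant source at azimuth θ, head radius a, and speed of sound c:

```python
import math

def woodworth_itd(azimuth_deg, head_radius=0.0875, c=343.0):
    """Woodworth spherical-head approximation of the interaural time
    difference (seconds) for a distant source; azimuth 0 = straight ahead.
    The default head radius (8.75 cm) is a conventional average value."""
    theta = math.radians(azimuth_deg)
    return (head_radius / c) * (math.sin(theta) + theta)

itd_front = woodworth_itd(0)    # no interaural delay for a frontal source
itd_side = woodworth_itd(90)    # maximal delay, roughly 650-660 microseconds
```

The monotonic growth of ITD with azimuth is what makes it a usable lateral-position cue, while its small absolute magnitude explains why low-frequency phase information is needed to exploit it.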
Collapse
Affiliation(s)
- M Risoud
- Department of otology and neurotology, CHU de Lille, 59000 Lille, France; Inserm U1008 - controlled drug delivery systems and biomaterials, université de Lille 2, CHU de Lille, 59000 Lille, France.
- J-N Hanson
- Department of otology and neurotology, CHU de Lille, 59000 Lille, France
- F Gauvrit
- Department of otology and neurotology, CHU de Lille, 59000 Lille, France
- C Renard
- Department of otology and neurotology, CHU de Lille, 59000 Lille, France
- P-E Lemesre
- Department of otology and neurotology, CHU de Lille, 59000 Lille, France
- N-X Bonne
- Department of otology and neurotology, CHU de Lille, 59000 Lille, France; Inserm U1008 - controlled drug delivery systems and biomaterials, université de Lille 2, CHU de Lille, 59000 Lille, France
- C Vincent
- Department of otology and neurotology, CHU de Lille, 59000 Lille, France; Inserm U1008 - controlled drug delivery systems and biomaterials, université de Lille 2, CHU de Lille, 59000 Lille, France
Collapse
|
13
|
Salminen NH, Jones SJ, Christianson GB, Marquardt T, McAlpine D. A common periodic representation of interaural time differences in mammalian cortex. Neuroimage 2018; 167:95-103. [PMID: 29122721 PMCID: PMC5854251 DOI: 10.1016/j.neuroimage.2017.11.012] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/27/2017] [Revised: 10/01/2017] [Accepted: 11/04/2017] [Indexed: 11/16/2022] Open
Abstract
Binaural hearing, the ability to detect small differences in the timing and level of sounds at the two ears, underpins the ability to localize sound sources along the horizontal plane, and is important for decoding complex spatial listening environments into separate objects – a critical factor in ‘cocktail-party listening’. For human listeners, the most important spatial cue is the interaural time difference (ITD). Despite many decades of neurophysiological investigations of ITD sensitivity in small mammals, and computational models aimed at accounting for human perception, a lack of concordance between these studies has hampered our understanding of how the human brain represents and processes ITDs. Further, neural coding of spatial cues might depend on factors such as head size or hearing range, which differ considerably between humans and commonly used experimental animals. Here, using magnetoencephalography (MEG) in human listeners, and electrocorticography (ECoG) recordings in guinea pig—a small mammal representative of a range of animals in which ITD coding has been assessed at the level of single-neuron recordings—we tested whether processing of ITDs in human auditory cortex accords with a frequency-dependent periodic code of ITD reported in small mammals, or whether alternative or additional processing stages implemented in psychoacoustic models of human binaural hearing must be assumed. Our data were well accounted for by a model consisting of periodically tuned ITD-detectors, and were highly consistent across the two species. The results suggest that the representation of ITD in human auditory cortex is similar to that found in other mammalian species, a representation in which neural responses to ITD are determined by phase differences relative to sound frequency rather than, for instance, the range of ITDs permitted by head size or the absolute magnitude or direction of ITD.
ITD tuning is studied in human MEG and guinea pig ECoG with identical stimuli. Auditory cortical tuning to ITD is highly consistent across species. Results are consistent with a periodic, frequency-dependent code.
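A conventional way to estimate the ITD of a two-channel stimulus (a textbook cross-correlation estimator, not the cortical model tested in the paper) is to take the lag that maximizes the interaural cross-correlation:

```python
import numpy as np

def estimate_itd(left, right, fs):
    """Estimate the interaural time difference (seconds) as the lag that
    maximizes the cross-correlation of the two ear signals.
    Positive values mean the left ear leads (source on the left side)."""
    corr = np.correlate(right, left, mode="full")
    lag = int(np.argmax(corr)) - (len(left) - 1)
    return lag / fs

# synthetic example: a 500 Hz tone reaching the right ear 20 samples late
fs = 44100
t = np.arange(int(0.05 * fs)) / fs
sig = np.sin(2 * np.pi * 500 * t)
delay = 20  # samples, roughly 0.45 ms
left = np.pad(sig, (0, delay))
right = np.pad(sig, (delay, 0))
print(estimate_itd(left, right, fs))
```

For narrowband signals this estimator is ambiguous modulo the period of the carrier, which is precisely the frequency dependence that periodic ITD codes reflect; here the finite signal length resolves the ambiguity.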
Collapse
Affiliation(s)
- Nelli H Salminen
- Brain and Mind Laboratory, Dept. of Neuroscience and Biomedical Engineering, MEG Core, Aalto NeuroImaging, Aalto University School of Science, Espoo, Finland.
- Simon J Jones
- UCL Ear Institute, 332 Gray's Inn Road, London, WC1X 8EE, UK
- David McAlpine
- UCL Ear Institute, 332 Gray's Inn Road, London, WC1X 8EE, UK; Dept of Linguistics, Australian Hearing Hub, Macquarie University, Sydney, NSW 2109, Australia
Collapse
|
14
|
Warren MR, Sangiamo DT, Neunuebel JP. High channel count microphone array accurately and precisely localizes ultrasonic signals from freely-moving mice. J Neurosci Methods 2018; 297:44-60. [PMID: 29309793 DOI: 10.1016/j.jneumeth.2017.12.013] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/14/2017] [Revised: 11/19/2017] [Accepted: 12/20/2017] [Indexed: 11/23/2022]
Abstract
BACKGROUND An integral component in the assessment of vocal behavior in groups of freely interacting animals is the ability to determine which animal is producing each vocal signal. This process is facilitated by using microphone arrays with multiple channels. NEW METHOD AND COMPARISON WITH EXISTING METHODS Here, we made important refinements to a state-of-the-art microphone-array-based system used to localize vocal signals produced by freely interacting laboratory mice. Key changes to the system included increasing the number of microphones as well as refining the methodology for localizing and assigning vocal signals to individual mice. RESULTS We systematically demonstrate that the improvements in the methodology for localizing mouse vocal signals led to an increase in the number of signals detected as well as the number of signals accurately assigned to an animal. CONCLUSIONS These changes facilitated the acquisition of larger and more comprehensive data sets that better represent the vocal activity within an experiment. Furthermore, this system will allow more thorough analyses of the role that vocal signals play in social communication. We expect that such advances will broaden our understanding of social communication deficits in mouse models of neurological disorders.
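The paper's localization pipeline is not reproduced here, but the underlying idea, assigning a source position that best explains the arrival-time differences across microphones, can be sketched as a brute-force grid search; the array geometry, grid and source position below are all illustrative:

```python
import numpy as np

def localize(mics, tdoas, c=343.0, lo=0.0, hi=1.0, step=0.05):
    """Grid search in 2D: return the candidate point whose predicted
    pairwise TDOAs (relative to microphone 0) best match the measured ones."""
    best, best_err = None, np.inf
    for x in np.arange(lo, hi, step):
        for y in np.arange(lo, hi, step):
            dist = np.hypot(mics[:, 0] - x, mics[:, 1] - y)
            pred = (dist[1:] - dist[0]) / c  # TDOAs vs. reference mic
            err = float(np.sum((pred - tdoas) ** 2))
            if err < best_err:
                best, best_err = (x, y), err
    return best

# four microphones at the corners of a 1 m square, source at (0.3, 0.7)
mics = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
src = np.array([0.3, 0.7])
d = np.hypot(mics[:, 0] - src[0], mics[:, 1] - src[1])
tdoas = (d[1:] - d[0]) / 343.0  # noiseless synthetic measurements
print(localize(mics, tdoas))
```

Practical systems replace the exhaustive search with closed-form or iterative least-squares solvers and must first extract TDOAs from noisy recordings, which is where most of the engineering effort lies.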
Collapse
|
15
|
Abstract
This study presents the method and preliminary results of localizing multiple sound sources in the free field using an acoustic vector sensor. The direction of arrival (DOA) of each source was determined with a sound intensity method supported by Fourier analysis: the spectral components obtained for the signal allowed the DOA to be determined independently for each frequency. The accuracy of the developed and practically implemented algorithm was evaluated in laboratory tests. Both synthetic acoustic signals (pure tones and noise) and real sounds were used during the measurements; the real signals had either the same or different energy distributions in the time and frequency domains. The experimental setup and the results obtained are described in detail in the text. Taking the obtained results into consideration, it is important to emphasize that localization of multiple sound sources using a single acoustic vector sensor is possible, and that localization accuracy was best for signals whose spectral energy distributions differed.
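The per-frequency, intensity-based DOA idea can be sketched as follows: the active intensity components are Re{P(f)* Vx(f)} and Re{P(f)* Vy(f)}, and each FFT bin yields its own DOA via atan2. The plane-wave test signal and all parameter values are illustrative, not the authors' implementation:

```python
import numpy as np

def doa_per_bin(p, vx, vy, fs):
    """Per-frequency DOA (degrees) from an acoustic vector sensor:
    pressure p and particle-velocity components vx, vy. Each bin's
    active intensity vector (Ix, Iy) points toward the source direction."""
    P, Vx, Vy = np.fft.rfft(p), np.fft.rfft(vx), np.fft.rfft(vy)
    Ix = np.real(np.conj(P) * Vx)
    Iy = np.real(np.conj(P) * Vy)
    freqs = np.fft.rfftfreq(len(p), 1 / fs)
    return freqs, np.degrees(np.arctan2(Iy, Ix))

# plane wave from 30 degrees: particle velocity in phase with pressure
fs = 8000
t = np.arange(800) / fs  # 0.1 s, so 440 Hz falls exactly on a bin
p = np.sin(2 * np.pi * 440 * t)
theta = np.radians(30.0)
vx, vy = p * np.cos(theta), p * np.sin(theta)
freqs, doa = doa_per_bin(p, vx, vy, fs)
print(doa[np.argmin(np.abs(freqs - 440))])
```

Because each bin is processed independently, several simultaneous sources can be separated in angle as long as their spectral energy lands in different bins, which matches the paper's observation that accuracy was best for spectrally distinct signals.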
Collapse
Affiliation(s)
- Józef Kotus
- Multimedia Systems Department, Gdansk University of Technology, Narutowicza 11-12, 80-233 Gdansk, Poland
Collapse
|